/sci/ is boring tonight, so I thought I'd bring back some fun math problems that are easy to understand and that anyone can attempt.
Let [math]A \in M_n(\mathbb{R}) [/math] (a square matrix of size n with real coefficients)
Show that if
[math]AA^TA[/math] is symmetric, then [math]A[/math] is symmetric as well.
Another one that I found, if you fancy arithmetic:
find [math]\det\left(\gcd(i,j)\right)_{1 \leq i,j \leq n}[/math]
Note to people attempting them: don't get discouraged, they're not particularly easy.
>>8081455
How is gcd defined for real numbers?
>>8081466
Not for real numbers.
The entry in the i-th row and the j-th column of the matrix is [math]\gcd(i,j)[/math].
One bump before I go to bed. Maybe some US fags would like to try.
>>8081706
Wasn't this in /sqt/?
>>8081455
>trying to disguise your homework as a "fun problem" thread
>>8081455
I suspect the function
[math] g(n)=\det(\gcd(i,j))_{1\leq i,j\leq n}[/math]
is [math] g(n)=\prod_{1\leq k\leq n}\varphi(k)[/math] where [math]\varphi [/math] is Euler's totient function. This is based on the cases [math] n\leq 6[/math] (not exactly a wealth of evidence...) and the easy recursion formula [math] g(p)=(p-1)g(p-1)[/math] for [math] p[/math] prime (eliminate the last row with the first). In fact the matrices look equivalent by row transformations (without permutations) to a diagonal matrix with the Euler totient function on the diagonal, but I don't see how to prove that.
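If anyone wants to check it themselves, here's a quick stdlib-only Python sketch (exact arithmetic via Fraction; the naive `phi` and `det` helpers are written just for this post, not taken from any library) confirming [math] g(n)=\prod_{1\leq k\leq n}\varphi(k)[/math] up to n=8:

```python
from fractions import Fraction
from math import gcd, prod

def phi(n):
    # naive Euler totient: count 1 <= k <= n coprime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def det(m):
    # exact determinant by Gaussian elimination over the rationals
    m = [[Fraction(x) for x in row] for row in m]
    n, sign = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return sign * prod(m[i][i] for i in range(n))

for n in range(1, 9):
    g = det([[gcd(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)])
    assert g == prod(phi(k) for k in range(1, n + 1)), n
```

If I remember right this is known as Smith's determinant, so the conjecture is in fact a theorem.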
>>8081762
Further updates.
When you put the matrix in upper triangular form the sum of elements in row n seems to be n. In fact the element [math] a_{i-k+1,i}[/math] appears to be the number of elements less than or equal to i which have gcd k with i. I think I'm quite close to setting up a funky-looking induction. This is a pretty cool problem OP, lots of numerical coincidences.
>>8081790
Shit, I meant sum of elements in column n.
>>8081762
K. Here is the next installment. Extend the matrix to infinitely many rows and columns. The entries in row n should be cyclic with period n. The conjecture is that row operations lead to the pattern [math]0,0,\dots,\varphi(n),0,0,\dots[/math].
It seems that you get this pattern by subtracting from row n the rows whose index divides n. After looking at the picture and frowning a bit, this is equivalent to the identities:
[math] \sum_{d|n}\varphi(d)=n[/math] and, for r<n, [math]\sum_{d|n,\,d|r}\varphi(d)=\gcd(r,n)[/math] (which I now see are the same identity...); this is a known identity. Yay!
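Both identities are easy to brute-force; a quick stdlib-only Python check (naive totient helper written for this post, nothing fancy assumed):

```python
from math import gcd

def phi(n):
    # naive Euler totient
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in range(1, 60):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    # sum of phi over the divisors of n recovers n
    assert sum(phi(d) for d in divisors) == n
    for r in range(1, n):
        # restricting to common divisors of r and n recovers gcd(r, n)
        assert sum(phi(d) for d in divisors if r % d == 0) == gcd(r, n)
```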
I'll leave the other problem to other anons.
>>8081739
I believe you are mistaken, for /sci/ is not inhabited by scum that are still taking babymaths like intro to linear algebra
>>8081739
>>8082033
niggas. pic related is my first page of /sci/, and it has been similar to that for a while now.
So instead of doing the old putnam thing, I just went here
http://forum.prepas.org/viewtopic.php?p=223754#p223754
and
here
http://forum.prepas.org/viewtopic.php?p=223997#p223997
to grab fun exercises that are very easy to understand but not necessarily easy to solve.
As you can see, I feel
>>8081762
>>8081836
had some fun.
My last homework was 4 or 5 years ago, so if you prefer to debate qualia and black IQ go right ahead.
>>8081762
>>8081836
I think that's the right answer, it's pretty neat anon. Are you interested if I try to find others like that? Do you care if it's analysis or algebra (abstract or linear)?
A purely algebraic solution to the other problem (symmetry of [math]AA^TA[/math] implies symmetry of [math]A[/math]), sadly not as elegant.
First a lemma: for any real matrix X, if [math]X^TX=0[/math] then [math]X=0[/math] (by considering the diagonal elements of [math]X^TX[/math]). Hence for any two real matrices X and Y,
if [math]Y^TYX = 0[/math] then [math]0=X^TY^TYX= (YX)^T(YX)[/math] so [math]YX=0[/math]. -- (*)
Now write [math]Q=A^TA[/math], [math]P=AA^T[/math] and [math]M=A-A^T[/math]. Symmetry of [math]AA^TA[/math] says exactly that [math]AQ=QA^T[/math]. Using this relation twice, [math]AQA=QA^TA=Q^2[/math] and [math]A^TQA^T=A^TAQ=Q^2[/math], while [math]AQA^T=P^2[/math] directly. Expanding [math]MQM^T=(A-A^T)Q(A^T-A)=AQA^T-AQA-A^TQA^T+A^TQA[/math] and taking traces (using cyclicity for [math]\mathrm{tr}(P^2)=\mathrm{tr}(Q^2)[/math] and [math]\mathrm{tr}(A^TQA)=\mathrm{tr}(PQ)[/math]) gives [math]\mathrm{tr}(MQM^T)=\mathrm{tr}(PQ)-\mathrm{tr}(Q^2)=-\tfrac{1}{2}\mathrm{tr}((P-Q)^2)\leq 0[/math], since [math](P-Q)^2=(P-Q)^T(P-Q)[/math] has nonnegative trace. On the other hand [math]MQM^T=(AM^T)^T(AM^T)[/math], so its trace is the sum of squares of the entries of [math]AM^T[/math], hence nonnegative. Therefore [math]\mathrm{tr}(MQM^T)=0[/math], which forces [math]AM^T=0[/math] as in the lemma, i.e. [math]A(A^T-A)=0[/math] i.e. [math]AA^T=A^2[/math] --(#).
Transpose both sides of (#) to get [math]AA^T = (A^T)^2[/math]. Postmultiply (#) by A to get [math]AA^TA = A^3 = A(A^2) = AA^TA^T[/math] i.e. [math]AA^T(A-A^T)=0[/math] i.e. [math]A^T(A-A^T)=0[/math] i.e. [math]A^TA = (A^T)^2[/math] ([math] = AA^T = A^2[/math]).
Having finally proved all four terms are equal, we can show that [math]X = A-A^T[/math] is 0. Simply note that [math]X^TX = (A^T-A)(A-A^T)[/math] which expands to the four terms and cancels to 0.
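Not a substitute for the proof, but here's a cheap stdlib-only sanity check of the statement itself on random integer matrices (the `matmul`/`transpose`/`is_symmetric` helpers are naive ones written for this post):

```python
import random

def matmul(X, Y):
    # naive product of square matrices given as lists of rows
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def is_symmetric(X):
    return X == transpose(X)

# easy direction: a symmetric A gives a symmetric A A^T A (= A^3)
A0 = [[1, 2, 0], [2, -1, 3], [0, 3, 5]]
assert is_symmetric(matmul(matmul(A0, transpose(A0)), A0))

# hard direction: random sampling should never find a non-symmetric A
# whose A A^T A is symmetric
random.seed(0)
for _ in range(500):
    A = [[random.randint(-5, 5) for _ in range(4)] for _ in range(4)]
    S = matmul(matmul(A, transpose(A)), A)
    if is_symmetric(S):
        assert is_symmetric(A)
```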
>>8082352
>I think that's the right answer, it's pretty neat anon. Are you interested if I try to find others like that? Do you care if it's analysis or algebra (abstract or linear)?
I went to sleep so didn't see this for a while. I don't think I'll solve any more problems, I'll be busy with work. I don't know if this is the best website for these kinds of problems either, the thread could be archived and I don't know if there are some more active forums. Maybe you know of some?
On the other hand some other anon >>8082895 found a solution to the other problem, so maybe I'm just underestimating this site...
>>8082352
The Putnam thing was fun. Why did that stop being posted?
>>8083104
Putnam is nice because they have new questions every day. I don't really care where the questions come from, though.