
The utility of the Matrix Inversion Lemma has been well exploited in several questions on MO. Thus, with some hope, I'd like to field a question of my own.

Suppose we pick $n$ values $x_1,\ldots,x_n$, sampled independently from $N(0,1)$ (the Gaussian with mean 0 and unit variance). From these we form the positive semidefinite matrix (of rank at most 3): $$A = \alpha ee^T + [\cos(x_i-x_j)],$$ where $[\cos(x_i-x_j)]$ denotes the $n \times n$ matrix with $(i,j)$ entry $\cos(x_i-x_j)$, $e$ denotes the vector of all ones, and $\alpha > 0$ is a fixed scalar.

For $n \ge 3$, simple experiments lead one to conjecture that: $$e^TA^\dagger e = \alpha^{-1},$$ where $A^\dagger$ is the Moore-Penrose pseudoinverse of $A$ (obtained in Matlab using the 'pinv' function).
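For concreteness, here is a minimal numerical sketch of the conjecture in Matlab/Octave; the values of $n$ and $\alpha$ below are arbitrary test choices, and the expression cos(x - x') relies on implicit expansion (Matlab R2016b or later; use bsxfun on older versions):

    n = 7; alpha = 2.5;               % arbitrary test values
    x = randn(n,1);                   % x_i sampled from N(0,1)
    e = ones(n,1);
    A = alpha*(e*e') + cos(x - x');   % A_ij = alpha + cos(x_i - x_j)
    e' * pinv(A) * e                  % approx 1/alpha = 0.4, to machine precision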

This should be fairly easy to prove with the right tools, such as a matrix inversion lemma that accommodates rank-deficient matrices or pseudoinverses. So my question is:

How can one prove the above conjecture (without too much labor, if possible)?

  • If you just let $x_1,\dots,x_n$ be distinct scalars instead of specifying a particular probability distribution, shouldn't you get the same result? Commented Aug 4, 2011 at 4:13
  • Actually, I suspected it to be true as long as the $x$'s are such that the cosine part's contribution remains linearly independent of $ee^T$ (Mikael makes this explicit in the answer below). Commented Aug 4, 2011 at 16:08
  • Why can the rank never exceed 3? Commented Aug 5, 2011 at 21:22
  • @Michael: expand $\cos(x-y)=\cos x\cos y + \sin x \sin y$ to see that $A$ is a sum of three rank-1 matrices (checked numerically in the sketch below). Commented Aug 5, 2011 at 23:13
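The decomposition in the last comment is easy to confirm numerically. A minimal Matlab/Octave sketch (the values of $n$ and $\alpha$ are arbitrary test choices):

    n = 8; alpha = 1.7;                        % arbitrary test values
    x = randn(n,1); e = ones(n,1);
    c = cos(x); s = sin(x);                    % entries cos(x_i), sin(x_i)
    A = alpha*(e*e') + cos(x - x');
    norm(A - (alpha*(e*e') + c*c' + s*s'))     % ~ 0: A = alpha*ee' + cc' + ss'
    rank(A)                                    % 3 almost surely for n >= 3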

1 Answer


In fact, more generally, for any positive semidefinite matrix $A = \sum_{i=1}^k e_i e_i^T$ with the $e_i$'s linearly independent, we have $e_i^T B e_i = 1$, where $B$ is the Moore-Penrose pseudoinverse of $A$. This applies here since, almost surely, your matrix $A$ is of this form with $k=3$, $e_1 = \sqrt{\alpha}\, e$, and $e_2, e_3$ the vectors with entries $\cos x_i$ and $\sin x_i$.

Proof: Let $E$ be the linear span of the $e_i$'s. If I understand the notion of Moore-Penrose pseudoinverse correctly, $B$ is described in the following way: as a linear map, $B$ is zero on the orthogonal complement of $E$, and on $E$ it is the inverse of the restriction of $A$ to $E$ (this restriction is invertible because $A$ is positive definite on $E$). Define $\beta_{i,j}$ by $B e_i = \sum_j \beta_{i,j} e_j$, so that $e_i^T B e_i = \sum_j \beta_{i,j} e_i^T e_j$. Expanding $A B e_i = e_i$ gives $\sum_m e_m \left(\sum_j \beta_{i,j} e_m^T e_j\right) = e_i$; since the $e_m$ are linearly independent, comparing the coefficients of $e_i$ on both sides yields $\sum_j \beta_{i,j} e_i^T e_j = 1$, QED.
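A quick Matlab/Octave sketch of this general statement, using random (hence almost surely linearly independent) vectors $e_i$; the sizes $n$ and $k$ below are arbitrary:

    n = 6; k = 3;                 % arbitrary sizes with k <= n
    E = randn(n,k);               % columns play the role of e_1,...,e_k
    A = E*E';                     % A = sum_i e_i e_i^T
    B = pinv(A);                  % Moore-Penrose pseudoinverse
    diag(E'*B*E)'                 % each entry ~ 1, i.e., e_i^T B e_i = 1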

  • Nice clean answer Mikael. It seems so easy once somebody proves it :-) thanks! Commented Aug 4, 2011 at 16:09
