I am working with the following optimization problem:
$$\min_{\Pi} \left[\frac{1}{4\lambda}\left((\Pi\vec{1}-s)^T K(\Pi\vec{1}-s) + \left(\Pi^T \vec{1}-t\right)^T K \left(\Pi^T \vec{1}-t\right)\right)-\text{Tr}[\Pi K]\right]$$
Here, $\lambda$ is a scalar, $\Pi$ and $K$ are $n\times n$ matrices, and $s,t$ are $n\times 1$ probability vectors (i.e. their elements sum to $1$). Also, $K$ is PSD and the entries of $\Pi$ sum to $1$.
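For concreteness, here is a minimal NumPy sketch of the objective under these definitions. All data below ($K$, $s$, $t$, the uniform $\Pi$, and the values of $n$ and $\lambda$) is randomly generated or chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 3, 0.5  # illustrative sizes; the claim should hold for any n, lambda

# Random PSD kernel K and probability vectors s, t.
A = rng.standard_normal((n, n))
K = A @ A.T
s = rng.random(n); s /= s.sum()
t = rng.random(n); t /= t.sum()
one = np.ones(n)

def objective(Pi):
    r = Pi @ one - s        # deviation of row sums from s
    c = Pi.T @ one - t      # deviation of column sums from t
    return (r @ K @ r + c @ K @ c) / (4 * lam) - np.trace(Pi @ K)

Pi = np.full((n, n), 1.0 / n**2)  # feasible: entries sum to 1
print(objective(Pi))
```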
I was able to show empirically that the diagonal elements of $\Pi^{*}$, the optimal $\Pi$, equal $\frac{s+t}{2}$ regardless of the choice of $\lambda$, but I am having difficulty proving this theoretically. Does anyone have any advice? I tried the case $n=2$ using Lagrange multipliers, and even then it gets very messy.
I set up a Lagrangian with a multiplier $\mu$ for the constraint $\vec{1}^T \Pi \vec{1}=1$, and the first-order conditions I end up with are
$$\frac{1}{2\lambda}\left(K(\Pi\vec{1}-s)\vec{1}^T + \vec{1}\left(\Pi^T\vec{1}-t\right)^T K\right) - K - \mu\,\vec{1}\vec{1}^T = 0$$
and
$$1-\vec{1}^T \Pi \vec{1}=0.$$
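As a sanity check on the differentiation, here is a minimal NumPy sketch (with a random PSD $K$ and random probability vectors, purely illustrative) that compares the analytic gradient of the objective, $\frac{1}{2\lambda}\left(K(\Pi\vec{1}-s)\vec{1}^T + \vec{1}(\Pi^T\vec{1}-t)^T K\right) - K$, against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 4, 0.7  # illustrative sizes

# Random PSD kernel K and probability vectors s, t.
A = rng.standard_normal((n, n))
K = A @ A.T
s = rng.random(n); s /= s.sum()
t = rng.random(n); t /= t.sum()
one = np.ones(n)

def f(Pi):
    """Objective: marginal penalties minus Tr[Pi K]."""
    r = Pi @ one - s       # row-sum residual
    c = Pi.T @ one - t     # column-sum residual
    return (r @ K @ r + c @ K @ c) / (4 * lam) - np.trace(Pi @ K)

def grad(Pi):
    """Analytic gradient: (1/2λ)(K r 1^T + 1 (K c)^T) - K, using K = K^T."""
    r = Pi @ one - s
    c = Pi.T @ one - t
    return (np.outer(K @ r, one) + np.outer(one, K @ c)) / (2 * lam) - K

# Central finite differences at a random point; exact for a quadratic
# objective up to floating-point error.
Pi0 = rng.standard_normal((n, n))
eps = 1e-6
G_fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        G_fd[i, j] = (f(Pi0 + E) - f(Pi0 - E)) / (2 * eps)

print(np.max(np.abs(G_fd - grad(Pi0))))  # maximal discrepancy, near zero
```

(The multiplier term $\mu\,\vec{1}\vec{1}^T$ is not included here, since the check is of the unconstrained gradient only.)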
I just do not see where to go from here. I would like to use these conditions to show that $\mathrm{diag}(\Pi^*) = \frac{s+t}{2}$. Does anyone have any ideas?