What I did today:
1. Watched final project presentations for 229T
2. Attended NLP Reading Group
3. Tried to work out how to choose an optimal set of random variables if we only care about the KL divergence to some subset of the variables, but realized there were some issues with this approach
4. Worked through conditioning with an abstract evaluation function in the special case of a linear-Gaussian system with an output non-linearity. In this case we basically get assumed density filtering (a special case of EP for Gaussian HMMs), and there are all sorts of ugly non-convexities. Tomorrow I plan to try to work out a different way of conditioning based on caring more about regions with high mass under the prior. Another possibility is to condition in a different order that allows more pooling of information.
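As a side note for item 4, the basic ADF step in the 1-D case can be sketched as follows: condition a Gaussian prior on an observation passed through an output non-linearity, then project the resulting non-Gaussian posterior back onto a Gaussian by moment matching. This is my own minimal illustration (the function names and the sampling-based moment computation are assumptions, not the actual setup in these notes):

```python
import numpy as np

def adf_update(mu, var, y, f, obs_var, n_samples=100_000, seed=0):
    """One assumed-density-filtering step in 1-D: condition the prior
    N(mu, var) on an observation y = f(x) + N(0, obs_var), then project
    the (generally non-Gaussian) posterior back onto a Gaussian by
    matching its first two moments, here estimated by importance
    sampling with the prior as the proposal."""
    rng = np.random.default_rng(seed)
    xs = rng.normal(mu, np.sqrt(var), n_samples)  # samples from the prior
    # Unnormalized posterior weights: Gaussian likelihood of y given f(x)
    w = np.exp(-0.5 * (y - f(xs)) ** 2 / obs_var)
    w /= w.sum()
    post_mu = np.sum(w * xs)                      # matched mean
    post_var = np.sum(w * (xs - post_mu) ** 2)    # matched variance
    return post_mu, post_var
```

Sanity check: with an identity output (f = lambda x: x) this reduces to the exact conjugate Gaussian update, while a non-linearity like np.tanh gives the moment-matched Gaussian approximation that ADF/EP would use.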
I felt like I wasn’t super-productive today, and I’m not sure exactly why; it was a bit hard to motivate myself to think about this stuff, maybe because I’m back in the creative stage vs. the follow-through stage but haven’t quite internalized that yet.