What I did today:
1. Read about algorithms for finding junction trees
2. Met with Noah / Andreas. One take-away from the meeting is that Noah sees our project as an attempt to solve the frame problem, which our current approach doesn’t really address.
3. Thought about some specific instances of the frame problem in the context of object recognition. I now have at least some basic thoughts about the right way to form approximate models in this context, but they haven’t yet coalesced. Basically, it seems like you want some way of identifying which nodes in your model are responsible for a given inaccuracy in your output, and then building better models for those nodes. The issue is that it’s not entirely clear how to combine all these different models in a principled way (one that avoids double-counting evidence).
4. Thought about possible simpler problems than hierarchical structure learning that could be used as a test-bed for the probabilistic abstractions work with Percy. I eventually settled on factorial HMMs as a good starting point, and spent a few hours implementing a factorial HMM together with a particle filter for doing inference in it.
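As a record of the general shape of that implementation, here is a minimal sketch of a factorial HMM with a bootstrap particle filter. This is not the actual code from today; it assumes a simple setup I chose for illustration: a few independent binary chains sharing one transition matrix, with a Gaussian observation whose mean is a weighted sum of the chain states (the weights and noise level are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: M independent binary chains, shared transition matrix.
M = 3                                          # number of chains
T_mat = np.array([[0.9, 0.1], [0.1, 0.9]])     # P(next state | current state)
w = np.array([1.0, 2.0, 4.0])                  # each chain's additive contribution in state 1
sigma = 0.5                                    # observation noise std


def sample_sequence(T):
    """Sample hidden chain states and noisy observations for T steps."""
    states = np.zeros((T, M), dtype=int)
    obs = np.zeros(T)
    for t in range(T):
        for m in range(M):
            if t == 0:
                states[t, m] = rng.integers(2)
            else:
                states[t, m] = rng.choice(2, p=T_mat[states[t - 1, m]])
        obs[t] = states[t] @ w + rng.normal(0.0, sigma)
    return states, obs


def particle_filter(obs, n_particles=500):
    """Bootstrap filter over the joint chain states; returns filtered mean signal."""
    particles = rng.integers(2, size=(n_particles, M))
    means = np.zeros(len(obs))
    for t, y in enumerate(obs):
        if t > 0:
            # Propagate each chain independently (binary case only).
            for m in range(M):
                p0 = T_mat[particles[:, m], 0]          # P(next = 0) per particle
                particles[:, m] = (rng.random(n_particles) > p0).astype(int)
        # Weight by Gaussian observation likelihood, normalized in log space.
        pred = particles @ w
        logw = -0.5 * ((y - pred) / sigma) ** 2
        wts = np.exp(logw - logw.max())
        wts /= wts.sum()
        means[t] = wts @ pred
        # Multinomial resampling.
        idx = rng.choice(n_particles, size=n_particles, p=wts)
        particles = particles[idx]
    return means
```

The factorial structure shows up only in the transition step, where each chain is propagated with its own (here shared) transition matrix; the chains are coupled solely through the observation weights, which is what makes exact inference expensive and sampling attractive.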
5. Started to read up on logic; in particular, these notes.