The main goal of today was to get a write-up of the approach we’re going to take for the project with Noah. This failed due to various issues with my approach.
I started by trying to work out my idea more concretely on the “asia” example, which seemed to work reasonably well. Then I worked out the criterion for when to further expand a node. I then took a break to eat lunch / meet with Dario about Vannevar. When I came back, I started to write up the approach, but after about 90 minutes of writing (plus a 1-hour nap, I think) I realized that we would run into issues: trying to approximate the posterior marginal well yields a very flat cost landscape, so greedily expanding nodes works poorly.
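For reference, here’s a minimal sketch of the kind of posterior-marginal computation involved, by direct enumeration on a tiny chain inspired by the “asia” chest-clinic network. This is not the full eight-node network, and the CPT numbers are made-up illustrations, not the real ones:

```python
# Posterior marginal by enumeration on a toy 3-node chain:
# smoker -> bronchitis -> dyspnea. All probabilities are illustrative.

# P(smoker), P(bronchitis | smoker), P(dyspnea | bronchitis)
p_smoker = {True: 0.5, False: 0.5}
p_bronch = {True: {True: 0.6, False: 0.4}, False: {True: 0.3, False: 0.7}}
p_dysp = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}

def joint(s, b, d):
    """Joint probability factorized along the chain."""
    return p_smoker[s] * p_bronch[s][b] * p_dysp[b][d]

def posterior_bronchitis(dyspnea=True):
    """P(bronchitis | dyspnea), summing out the smoker variable."""
    unnorm = {}
    for b in (True, False):
        unnorm[b] = sum(joint(s, b, dyspnea) for s in (True, False))
    z = sum(unnorm.values())
    return {b: v / z for b, v in unnorm.items()}

post = posterior_bronchitis(dyspnea=True)
```

Exact enumeration like this is exponential in the number of summed-out variables, which is exactly why the real question is which nodes are worth expanding when approximating the marginal.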
So, at this point, I stopped working on the write-up and tried to find a way to fix this. The idea I came up with was to optimize the posterior marginal given several different sets of evidence in an offline fashion, instead of optimizing it for one set of evidence in an online fashion. Since we’re doing it offline, we can afford non-myopic reasoning, and hence have fewer issues with the flatness of the cost landscape. However, there’s still a problem: any fixed offline structure will probably not respond well to new data. It seemed that boosting would probably help with this, and googling quickly led me to this very nice paper by Vincent Tan, whose PhD thesis also looks really cool and is on similar topics.
After talking with Andreas, it seems that the upfront offline learning stage is not viable in the probabilistic programming setting, so we spent a while trying to get rid of it. However, I’m becoming increasingly convinced that something like it is necessary, although I have not yet been able to give any formal reasons why. We agreed that Andreas would write some code to do some preliminary exploration on the data set we created.
In the meantime, I started reading about sequential Monte Carlo and particle filters, which are relevant to my project with Percy.
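As a note to self on the particle-filter reading: the bootstrap filter is the simplest variant (propagate from the transition prior, weight by the observation likelihood, resample). A minimal sketch for a 1-D random-walk model with Gaussian observation noise; all parameter values here are illustrative assumptions:

```python
# Bootstrap particle filter for x_t = x_{t-1} + N(0, proc_std^2),
# y_t = x_t + N(0, obs_std^2). Illustrative parameters throughout.
import math
import random

def bootstrap_particle_filter(observations, n_particles=500,
                              proc_std=1.0, obs_std=1.0, seed=0):
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # Propagate: sample each particle from the transition prior.
        particles = [x + rng.gauss(0.0, proc_std) for x in particles]
        # Weight: Gaussian observation likelihood (normalizer cancels).
        weights = [math.exp(-0.5 * ((y - x) / obs_std) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Filtered posterior mean estimate at this step.
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # Resample (multinomial) to fight weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

est = bootstrap_particle_filter([0.9, 1.1, 2.0, 2.2, 3.1])
```

Resampling at every step is the textbook bootstrap choice; fancier variants resample only when the effective sample size drops, which is one of the things I want to read up on.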