Log 12-12-2012

Today I made my caching much faster (by storing compressed representations of logical forms as strings, rather than the logical forms themselves). Now I can do a full beam search with a beam size of 30 in 1 second per utterance. This means that the full 800 training examples take about 13 minutes to get through, which is way, way more manageable than before, when an utterance took more like 10 seconds (meaning that just 5 examples took 1 minute per iteration, and we want to be able to do many iterations).
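To make the idea concrete, here's a minimal sketch of what caching on string representations might look like; the names (LogicalForm, serialize, executor) are illustrative assumptions, not the actual codebase's API:

```python
# Hypothetical sketch: instead of caching LogicalForm objects (heavy to hash
# and compare), key the cache on a compact canonical string serialization.

class LogicalForm:
    def __init__(self, predicate, args=()):
        self.predicate = predicate
        self.args = tuple(args)

    def serialize(self) -> str:
        """Compact canonical string, e.g. 'capital(argmax(state,population))'."""
        if not self.args:
            return self.predicate
        return f"{self.predicate}({','.join(a.serialize() for a in self.args)})"


# Cache keyed on the string form: cheap to hash, cheap to compare,
# and far smaller in memory than the object graph itself.
_denotation_cache: dict[str, object] = {}

def cached_denotation(lf: LogicalForm, executor):
    key = lf.serialize()
    if key not in _denotation_cache:
        _denotation_cache[key] = executor(lf)
    return _denotation_cache[key]
```

The win comes from the cache key being a flat string rather than a structured object, so lookups during beam search avoid repeated deep comparisons.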

I had to re-run a bunch of pre-computation, but it's probably worth it. I can only do the pre-computation in batches of about 40 training examples before I exceed RAM; I probably should have done all of it overnight, but I only got through 80 examples because I was too foolish to write a script.
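For what it's worth, the script in question would only need to be a small batching loop; here's a rough sketch under assumed names (precompute, the pickle file layout), not the real pipeline:

```python
# Run the pre-computation in batches of ~40 examples so each batch stays
# within RAM, dumping results to disk between batches.
import pickle

BATCH_SIZE = 40

def precompute_all(examples, precompute, out_prefix="precomputed"):
    for start in range(0, len(examples), BATCH_SIZE):
        batch = examples[start:start + BATCH_SIZE]
        results = [precompute(ex) for ex in batch]
        # Write each batch immediately and drop it, so memory use is bounded
        # by one batch rather than the whole training set.
        with open(f"{out_prefix}_{start:04d}.pkl", "wb") as f:
            pickle.dump(results, f)
        del results
```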

I also compiled a list of things that the parser currently gets wrong on the training data (in the sense of finding the wrong logical form), which I plan to go through tomorrow. The original plan was to go through it today, but I got distracted talking to / bouldering with Simon Lacoste-Julien. Getting distracted was probably optimal, though, since Simon is leaving for Paris pretty soon and I only get to see him fairly rarely.
