Stuff I did today:
1. Added an option to use only the best logical form (instead of a weighted combination of logical forms) during optimization. This is useful because it makes the optimization problem convex (probably at some loss of accuracy, but it helps for debugging L-BFGS).
2. Fixed a permissions error that was making it hard to track programs as they ran.
3. Figured out how to set up a local scratch directory so that I can save large output files.
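A toy sketch of why item 1 convexifies things (this is not the real model — the feature matrix `F`, the set of "good" forms, and the best-form index are all made up for illustration): with a single fixed target form, the negative log-likelihood of a log-linear model is affine-minus-logsumexp, hence convex; marginalizing over a set of correct forms adds a negative logsumexp term, which breaks convexity in general.

```python
import numpy as np

def logsumexp(s):
    # numerically stable log(sum(exp(s)))
    m = s.max()
    return m + np.log(np.sum(np.exp(s - m)))

# hypothetical toy problem: 4 candidate logical forms, 3 features each
F = np.array([[ 1.0,  0.5, -0.2],
              [ 0.8, -0.3,  0.4],
              [-0.5,  1.2,  0.1],
              [ 0.3,  0.3, -0.9]])
good = [0, 1]   # forms that yield the correct answer (made up)
best = 0        # the single "best" logical form (made up)

def nll_marginal(theta):
    # marginalize over all correct forms: difference of two logsumexps,
    # not convex in theta in general
    s = F @ theta
    return -(logsumexp(s[good]) - logsumexp(s))

def nll_best(theta):
    # fix the single best form: affine minus logsumexp, convex in theta
    s = F @ theta
    return -(s[best] - logsumexp(s))

# numerical midpoint-convexity check along random parameter pairs
rng = np.random.default_rng(0)
ok = all(
    nll_best((a + b) / 2) <= (nll_best(a) + nll_best(b)) / 2 + 1e-9
    for a, b in (rng.standard_normal((2, 3)) for _ in range(1000))
)
print(ok)
```

The midpoint test is only a numerical sanity check; the convexity of `nll_best` also follows analytically, since logsumexp of affine functions of theta is convex and `-s[best]` is affine.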
Code is still running…will look at preliminary results tomorrow, e.g.:
1. Does convexity make L-BFGS converge more quickly?
2. Does smoothness make L-BFGS converge more quickly?
3. Does early stopping prevent overfitting?
4. Do L1 penalties prevent overfitting? Does increasing regularization help?
5. How many iterations are necessary for things to effectively converge?
And a question I was hoping to answer but mostly couldn't, because I didn't log enough info:
6. How important is top-down information at later stages of the search? (This is mainly useful for speeding things up because if it’s not useful we can stop computing it.)
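For question 1, a minimal harness along these lines could compare L-BFGS iteration counts on a convex versus a non-convex objective (the quadratic and Rosenbrock below are toy stand-ins, not the actual parser objective):

```python
import numpy as np
from scipy.optimize import minimize

def convex(theta):
    # convex quadratic: unique minimum, returns (value, gradient)
    return float(theta @ theta), 2.0 * theta

def rosenbrock(theta):
    # classic non-convex test function with a curved valley
    x, y = theta
    f = (1 - x) ** 2 + 100.0 * (y - x ** 2) ** 2
    g = np.array([-2 * (1 - x) - 400.0 * x * (y - x ** 2),
                  200.0 * (y - x ** 2)])
    return f, g

x0 = np.array([-1.2, 1.0])
results = {}
for name, fun in [("convex", convex), ("non-convex", rosenbrock)]:
    results[name] = minimize(fun, x0, jac=True, method="L-BFGS-B",
                             options={"maxiter": 500})
    print(name, results[name].nit, results[name].fun)
```

One caveat for question 4: plain L-BFGS assumes a smooth objective, so a non-smooth L1 penalty would need a variant such as OWL-QN (or a smoothed surrogate) rather than this setup as-is.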