Log 12-28-2013

Didn’t time things today, but spent most of the day working out a first-order algorithm for $\ell^2$-constrained optimization (in the spirit of stochastic gradient descent) whose guarantees look like those of the multiplicative weights algorithm.
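
For reference, the standard first-order baseline in this setting is projected online gradient descent over the $\ell^2$ ball, whose $O(\sqrt{T})$ regret bound plays the role on the ball that the multiplicative weights guarantee plays on the simplex. Below is a minimal sketch of that baseline (not the new algorithm itself); the function names and the quadratic-loss example at the end are purely illustrative.

```python
import numpy as np

def project_l2_ball(w, radius=1.0):
    """Project w onto the l2 ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def online_projected_gd(grad_fns, dim, radius=1.0, lipschitz=1.0):
    """Projected online gradient descent over the l2 ball.

    grad_fns: one gradient oracle per round, each mapping w -> grad f_t(w).
    With step size radius / (lipschitz * sqrt(t)), the regret against the
    best fixed point in the ball is O(radius * lipschitz * sqrt(T)).
    """
    w = np.zeros(dim)
    iterates = []
    for t, grad in enumerate(grad_fns, start=1):
        iterates.append(w.copy())
        eta = radius / (lipschitz * np.sqrt(t))  # standard 1/sqrt(t) schedule
        w = project_l2_ball(w - eta * grad(w), radius)
    return iterates

# Illustrative use: quadratic losses f_t(w) = ||w - z_t||^2 / 2, gradient w - z_t.
rng = np.random.default_rng(0)
zs = rng.normal(size=(100, 5))
ws = online_projected_gd([lambda w, z=z: w - z for z in zs], dim=5)
```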

Log 12-16-2013

1. Meet with Percy [1:30]
2. Robust utility write-up [0:45]
3. Try to directly optimize regret bound (for AHK conditional gradient project) [1:20]
4. Try to prove minimax regularization result [0:35]
5. Set up evaluation suite for loop invariants [1:30]
6. Group meeting [1:20]
7. Write up minimax regularization result [0:45]