Log 12-28-2013

Didn’t time things today, but spent most of the day working out an algorithm for first-order $\ell_2$-constrained optimization (similar to stochastic gradient descent), but with guarantees resembling those of the multiplicative weights algorithm.
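For reference (my gloss of the target; the log doesn’t spell out the bound), the two standard guarantees such an algorithm would bridge are online gradient descent over an $\ell_2$ ball, whose regret scales with $\ell_2$ gradient norms, and multiplicative weights over the simplex, which pays only $\ell_\infty$ gradient norms plus a $\log n$ factor:

$\text{OGD over } \{\|x\|_2 \le R\}: \quad \mathrm{Regret}_T = O(R\, G_2 \sqrt{T}), \quad G_2 = \max_t \|g_t\|_2$

$\text{MW over the simplex } \Delta_n: \quad \mathrm{Regret}_T = O(G_\infty \sqrt{T \log n}), \quad G_\infty = \max_t \|g_t\|_\infty$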

Log 12-27-2013

1. Work out condition for $\|x\|_{\infty}$ strong convexity (definition recalled in the note after this list) [0:10]
2. Install MATLAB, YALMIP, SeDuMi [0:30]
3. Implement, test, run numerical regularizer code (see the sketch after this list) [0:45]
4. Experiment with numerical regularizer code [2:30]
5. Try approach based on looking at $\frac{d}{dx}\log f(x)$ and $h^*(w_t)$ (the conjugate $h^*$ is recalled in the note after this list). [1:45]
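For items 1 and 5, the standard definitions in play (recalled here for reference; the actual condition worked out isn’t recorded in the log): a differentiable function $h$ is $\sigma$-strongly convex with respect to $\|\cdot\|_{\infty}$ if

$h(y) \ge h(x) + \langle \nabla h(x), y - x \rangle + \frac{\sigma}{2} \|y - x\|_{\infty}^2 \quad \text{for all } x, y,$

and the conjugate in item 5 is the Fenchel conjugate

$h^*(w) = \sup_x \left\{ \langle w, x \rangle - h(x) \right\},$

whose gradient $\nabla h^*(w_t)$ maps a weight vector $w_t$ back to a primal point in mirror-descent-style updates.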
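The log doesn’t record what the numerical regularizer code computes, so the following is only a minimal sketch, under my own assumptions, of the kind of check it might run: for a quadratic regularizer $h(x) = \frac{1}{2} x^\top A x$, $\sigma$-strong convexity with respect to $\|\cdot\|_{\infty}$ means $v^\top A v \ge \sigma \max_i v_i^2$ for all $v$, which is equivalent to the linear matrix inequalities $A \succeq \sigma e_i e_i^\top$ for every coordinate $i$. Python with cvxpy stands in here for the MATLAB + YALMIP + SeDuMi stack from item 2, and `linf_strong_convexity` is a hypothetical helper name, not code from the log.

```python
# Sketch (my reconstruction, not the log's code): compute the largest sigma
# such that h(x) = 0.5 * x'Ax is sigma-strongly convex w.r.t. the sup norm,
# via the SDP with constraints A - sigma * e_i e_i' >= 0 for each i.
import cvxpy as cp
import numpy as np

def linf_strong_convexity(A):
    """Largest sigma with v'Av >= sigma * max_i v_i^2 for all v."""
    n = A.shape[0]
    sigma = cp.Variable()
    constraints = []
    for i in range(n):
        E = np.zeros((n, n))
        E[i, i] = 1.0
        constraints.append(A - sigma * E >> 0)  # A - sigma * e_i e_i' is PSD
    cp.Problem(cp.Maximize(sigma), constraints).solve()
    return sigma.value

# Sanity check: the Euclidean regularizer A = I is exactly 1-strongly convex
# w.r.t. the sup norm (v'v >= max_i v_i^2, with equality at v = e_i).
print(linf_strong_convexity(np.eye(3)))  # approximately 1.0
```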

Log 12-16-2013

1. Meet with Percy [1:30]
2. Robust utility write-up [0:45]
3. Try to directly optimize the regret bound (for the AHK conditional gradient project; the standard method is sketched after this list) [1:20]
4. Try to prove minimax regularization result [0:35]
5. Set up evaluation suite for loop invariants [1:30]
6. Group meeting [1:20]
7. Write up minimax regularization result [0:45]
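For context on item 3: the log doesn’t describe the AHK project’s method, so this is only a minimal sketch of the standard conditional gradient (Frank-Wolfe) step it builds on, minimizing a smooth function over an $\ell_1$ ball, where the linear oracle reduces to a single coordinate lookup.

```python
# Sketch of the standard Frank-Wolfe (conditional gradient) method over the
# l1 ball; the AHK project's actual variant is not described in the log.
import numpy as np

def frank_wolfe(grad_f, x0, radius=1.0, T=100):
    """Minimize a smooth f over {||x||_1 <= radius} given its gradient."""
    x = x0.copy()
    for t in range(1, T + 1):
        g = grad_f(x)
        # Linear oracle for the l1 ball: the best vertex puts +/- radius on
        # the coordinate with the largest-magnitude gradient entry.
        i = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (t + 2)  # classic step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

# Example: minimize 0.5 * ||x - b||^2; the solution is the Euclidean
# projection of b onto the l1 ball.
b = np.array([0.5, -2.0, 1.0])
print(frank_wolfe(lambda x: x - b, np.zeros(3), radius=1.0, T=500))
```

The appeal of the method is that each iteration needs only this linear minimization over the constraint set, never a projection.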