Didn’t time things today, but spent most of the day working out an algorithm for first-order $\ell_2$-constrained optimization (in the spirit of stochastic gradient descent), but with guarantees resembling those of the multiplicative weights algorithm.
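
The specific algorithm and guarantee aren't written down here; for context, below is a minimal sketch of the standard baseline in this family: online (projected) gradient descent over the $\ell_2$ ball, which achieves $O(GR\sqrt{T})$ regret for convex $G$-Lipschitz losses (Zinkevich 2003). The function names, step size, and oracle interface are illustrative assumptions, not the algorithm worked out today.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    # Euclidean projection onto the ball {x : ||x||_2 <= radius}.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def projected_ogd(gradient_oracle, dim, T, radius=1.0, G=1.0):
    # Online gradient descent with l2-ball projection.
    # gradient_oracle(t, x) returns the gradient of the round-t loss at x.
    # The step size eta_t = radius / (G * sqrt(t)) gives regret
    # O(G * radius * sqrt(T)) against any comparator in the ball.
    x = np.zeros(dim)
    iterates = []
    for t in range(1, T + 1):
        iterates.append(x.copy())
        g = gradient_oracle(t, x)
        eta = radius / (G * np.sqrt(t))
        x = project_l2_ball(x - eta * g, radius)
    return iterates

# Example: fixed quadratic losses f_t(x) = ||x - c||^2 with c inside the ball;
# the iterates drift toward c.
c = np.array([0.6, 0.0, 0.0])
xs = projected_ogd(lambda t, x: 2.0 * (x - c), dim=3, T=1000, G=4.0)
```

By contrast, multiplicative weights gets regret scaling as $\sqrt{T \log n}$ over the simplex, which is the flavor of guarantee referred to above.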


# Log 12-27-2013

1. Work out condition for strong convexity (see the note after this list) [0:10]

2. Install MATLAB, YALMIP, SeDuMi [0:30]

3. Implement, test, run numerical regularizer code [0:45]

4. Experiment with numerical regularizer code [2:30]

5. Try approach based on looking at and . [1:45]
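
On item 1: the specific condition worked out isn't recorded, but for reference the standard definition is that a differentiable $f$ is $\mu$-strongly convex (with respect to $\|\cdot\|_2$) if for all $x, y$

$$f(y) \ge f(x) + \langle \nabla f(x),\, y - x \rangle + \frac{\mu}{2}\|y - x\|_2^2,$$

or, for twice-differentiable $f$, equivalently $\nabla^2 f(x) \succeq \mu I$ everywhere.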

# Log 12-18-2013

1. Meet with Percy [1:00]

2. Prove convexity of optimization problem (see the note after this list) [0:20]

3. Read robust optimization literature [1:50]

4. Set up evaluation suite for loop invariants [1:00]

5. Minimax write-up [1:35]
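
On items 2 and 3: the optimization problem itself isn't recorded in the log, but the generic robust optimization template is

$$\min_{x} \; \max_{u \in \mathcal{U}} f(x, u),$$

and convexity of such problems typically follows from the fact that a pointwise supremum of convex functions is convex: if $f(\cdot, u)$ is convex for every $u \in \mathcal{U}$, then so is $g(x) = \sup_{u \in \mathcal{U}} f(x, u)$.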

# Log 12-16-2013

1. Meet with Percy [1:30]

2. Robust utility write-up [0:45]

3. Try to directly optimize regret bound (for AHK conditional gradient project; see the sketch after this list) [1:20]

4. Try to prove minimax regularization result [0:35]

5. Set up evaluation suite for loop invariants [1:30]

6. Group meeting [1:20]

7. Write up minimax regularization result [0:45]
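
On item 3: the AHK conditional gradient project isn't detailed here, but for context, below is a minimal sketch of the classic conditional gradient (Frank-Wolfe) step over an $\ell_2$ ball, whose $O(1/T)$ convergence bound is the kind of guarantee one might try to optimize directly. The constraint set, step size, and function names are illustrative assumptions.

```python
import numpy as np

def frank_wolfe_l2(grad_f, x0, radius=1.0, T=100):
    # Conditional gradient (Frank-Wolfe) over the ball {x : ||x||_2 <= radius}.
    # grad_f(x) is the gradient of a smooth convex objective f at x.
    x = x0.copy()
    for t in range(T):
        g = grad_f(x)
        norm = np.linalg.norm(g)
        if norm == 0:  # stationary point reached
            break
        # Linear minimization oracle: argmin over {||s||_2 <= radius} of <g, s>
        # has the closed form s = -radius * g / ||g||.
        s = -radius * g / norm
        gamma = 2.0 / (t + 2)  # standard step size; gives f(x_T) - f* = O(1/T)
        x = (1 - gamma) * x + gamma * s
    return x
```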