Fall 2019 — 40391 Topics in computer science and optimization
This course is an introduction to algorithms for combinatorial optimization problems and for convex optimization problems. Topics will include greedy and dynamic programming approaches to network optimization problems; cut, flow, and matching algorithms; linear programming; gradient descent; mirror descent; and the Follow-the-Regularized-Leader algorithm for online convex optimization.
General information
Instructor: Luca Trevisan
Lectures:
- Tuesdays and Wednesdays 16:15-17:45
Office hours: Thursdays 11-noon in 3-E1-14 Roentgen (from Sept 12 to Dec 5)
Exam: the take-home final exam was handed out on October 21 and is due on November 1
References:
- A good reference for optimization techniques in discrete algorithms (greedy, dynamic programming, iterated improvements) and a general introduction to graphs and graph algorithms is
- S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani. Algorithms
- Dasgupta et al. is also a good reference for max flow, min cut, and linear programming. Additional material on these topics, including online optimization, is in my CS261 lecture notes.
- I will write additional notes on online convex optimization and FTRL algorithms.
Past lectures
- 9/11. Overview of the course. Greedy methodology. Minimum spanning tree. (Reference: Dasgupta section 5.1)
- 9/17. Dynamic programming methodology. O(nm) time algorithm to compute edit distance; a short Python sketch of this dynamic program appears after this list. (Reference: Dasgupta 6.3 - it is a good idea to read sections 6.1 and 6.2 as well)
- 9/18. Dynamic programming algorithms for shortest paths in the presence of negative weights: single source in time O(|V|*|E|), all pairs in time O(|V|^3), in DAGs in time O(|V|+|E|). (References: Dasgupta sections 4.6, 4.7 and 6.6)
- 9/24. Dijkstra's algorithm; see the heap-based sketch after this list. (Reference: Dasgupta section 4.4)
- 9/25. Ford-Fulkerson algorithm for maximum flow. (Reference: lecture 9 of my CS261 notes and Section 7.2 of Dasgupta)
- 10/1. Analysis of the Ford-Fulkerson algorithm, max flow - min cut theorem, analysis of the Edmonds-Karp algorithm; a sketch of Edmonds-Karp appears after this list. (Reference: the references of the previous lecture and section 11.2 of my CS261 notes)
- 10/2. Reduction of matching in bipartite graphs to max flow. (Reference: Dasgupta section 7.3 and sections 14.1 and 14.2 of my CS261 notes)
- 10/8. Proof of Hall's theorem using reduction of matching to flow. Beginning of discussion of linear programming. (Reference: Dasgupta 7.1, lecture 5 of my CS261 notes)
- 10/9. Linear programming: geometric interpretation, simplex algorithm, phase 1 and phase 2 of simplex algorithm. (Reference: Dasgupta 7.6)
- 10/15. Linear programming: informal overview of ellipsoid and interior-point algorithms. Duality. Introduction to 2-player 0-sum games. (Reference: Dasgupta 7.4 and 7.5, Section 5.2.4 and Lecture 6 of my CS261 notes)
- 10/16. Online optimization, multiplicative weight updates; a sketch of multiplicative weights applied to a 0-sum game appears after this list. (Reference: Notes for lecture 11)
- 10/23. The "Follow-the-Regularized-Leader" algorithm. Reference: Notes for lecture 12
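For concreteness, here is a minimal Python sketch of the O(nm) edit-distance dynamic program from the 9/17 lecture (Dasgupta 6.3); the table layout, variable names, and test strings are my own illustrative choices, not taken from the book.

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    n, m = len(a), len(b)
    # dist[i][j] = edit distance between the prefixes a[:i] and b[:j]
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i                                 # delete the first i characters of a
    for j in range(m + 1):
        dist[0][j] = j                                 # insert the first j characters of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,       # deletion
                             dist[i][j - 1] + 1,       # insertion
                             dist[i - 1][j - 1] + diff)  # match or substitution
    return dist[n][m]

print(edit_distance("snowy", "sunny"))  # 3
```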
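A similarly minimal sketch of Dijkstra's algorithm from the 9/24 lecture (Dasgupta 4.4), using Python's heapq module as the priority queue; the adjacency-list encoding below is an assumption made for the example, not a prescribed format.

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping each vertex to a list of (neighbor, nonnegative weight) pairs.
    Returns a dict of shortest-path distances from source to every reachable vertex."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                                   # stale heap entry, already improved
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

g = {"a": [("b", 4), ("c", 1)], "c": [("b", 2), ("d", 5)], "b": [("d", 1)], "d": []}
print(dijkstra(g, "a"))  # {'a': 0, 'c': 1, 'b': 3, 'd': 4}
```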
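The next sketch is one possible implementation of the Edmonds-Karp variant of Ford-Fulkerson from the 9/25 and 10/1 lectures: augmenting paths are chosen as shortest paths found by BFS, and the algorithm stops when no augmenting path exists, which by the max flow - min cut theorem certifies that the flow is maximum. The dictionary-based residual-graph representation is my own choice.

```python
from collections import deque

def max_flow(capacity, s, t):
    """capacity: dict u -> {v: capacity of edge (u, v)}. Returns the value of a maximum s-t flow."""
    # Residual capacities; reverse edges start at capacity 0.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS finds a shortest augmenting path in the residual graph (the Edmonds-Karp rule).
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path: the current flow is maximum
        # Find the bottleneck capacity along the path and push that much flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(cap, "s", "t"))  # 5
```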
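Finally, a sketch of multiplicative weight updates (10/16 lecture) applied to approximately solving a 2-player 0-sum game, in the spirit of the 10/15 discussion; the step size eta, the horizon T, and the choice of a best-responding opponent are illustrative assumptions, not the exact setup of the lecture notes. As noted in the course plan, this update rule is also a special case of FTRL with entropic regularization.

```python
import math

def solve_zero_sum(M, T=5000, eta=0.05):
    """M[i][j]: loss of the row player if she plays i and the column player plays j
    (losses assumed to lie in [0, 1]). The row player runs multiplicative weight
    updates against a best-responding column player; returns her average strategy
    and the average loss, which approximates the value of the game."""
    n, m = len(M), len(M[0])
    weights = [1.0] * n
    avg_strategy = [0.0] * n
    avg_loss = 0.0
    for _ in range(T):
        total = sum(weights)
        p = [w / total for w in weights]               # current row strategy
        # Column player best-responds: pick the column maximizing the row player's expected loss.
        j = max(range(m), key=lambda col: sum(p[i] * M[i][col] for i in range(n)))
        avg_loss += sum(p[i] * M[i][j] for i in range(n)) / T
        for i in range(n):
            avg_strategy[i] += p[i] / T
            weights[i] *= math.exp(-eta * M[i][j])     # penalize rows that lost against column j
    return avg_strategy, avg_loss

# Matching pennies: the value is 1/2 and the optimal row strategy is (1/2, 1/2).
M = [[1.0, 0.0], [0.0, 1.0]]
print(solve_zero_sum(M))
```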
Plan for the course
- Lectures 1-2: introduction, greedy algorithms, dynamic programming
- Lectures 3-5: iterative-improvement algorithms, max flow, min cut, matching
- Lectures 6-9: linear programming, simplex and duality, application to compressed sensing, application to 2-player 0-sum games
- Lectures 10-12: online convex optimization, multiplicative weights, solving 2-player 0-sum games, FTRL, recovering multiplicative weights and gradient descent as special cases of FTRL, Bregman projection