Systems | Information | Learning | Optimization

Quantum Compressed Sensing

Quantum computation and quantum information are of great current interest in computer science, mathematics, physical sciences and engineering. They will likely lead to a new wave of technological innovations in communication, computation and cryptography. As the theory of quantum physics is fundamentally stochastic, randomness and uncertainty are deeply rooted in …

Modeling and diagnosing the exercise of market power in the wholesale electricity industry | Active Learning on Graphs

Gautam’s Talk: Active Learning on Graphs. Label prediction on graphs, i.e., the prediction of the labels of the vertices of a given graph using the labels of a subset of vertices, is a problem that commonly occurs in many areas of machine learning and data analysis. In this talk …
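
A minimal sketch of the label-prediction setting the abstract describes (this specific algorithm, iterative label propagation with clamping, is an illustrative assumption, not necessarily the method from the talk): unlabeled vertices repeatedly take the average label of their neighbors, while labeled vertices keep their known labels.

```python
import numpy as np

# Toy graph: a 6-vertex path 0-1-2-3-4-5; vertices 0 and 5 are labeled.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0

labels = {0: -1.0, 5: +1.0}      # known labels at the two endpoints
f = np.zeros(6)
for v, y in labels.items():
    f[v] = y

deg = A.sum(axis=1)
for _ in range(200):             # propagate until (approximate) convergence
    f = A @ f / deg              # each vertex averages its neighbors
    for v, y in labels.items():  # clamp the labeled vertices
        f[v] = y

pred = np.sign(f)                # predicted labels for all vertices
```

On this path graph the propagated values converge to a linear interpolation between the two labeled endpoints, so the left half is predicted -1 and the right half +1.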

Packing Ellipsoids and Chromosomes

Problems of packing shapes with maximal density, possibly into a container of restricted size, are classical in discrete mathematics. We describe here the problem of packing ellipsoids of given (but varying) dimensions into a finite container, in a way that minimizes the maximum overlap between adjacent ellipsoids. A bilevel optimization …
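
A toy sketch of the overlap-minimization idea (with simplifying assumptions not from the abstract: circles instead of ellipsoids, and a smooth sum-of-squared-overlaps surrogate in place of the max-overlap objective): pack n unit circles in a square container and reduce pairwise overlap o_ij = max(0, 2r - ||c_i - c_j||) by projected gradient descent on the centers.

```python
import numpy as np

n, L, r = 4, 3.0, 1.0                    # 4 unit circles in a 3x3 box
rng = np.random.default_rng(0)
c = rng.uniform(r, L - r, size=(n, 2))   # initial centers, inside the box

def overlaps(c):
    o = []
    for i in range(n):
        for j in range(i + 1, n):
            o.append(max(0.0, 2 * r - np.linalg.norm(c[i] - c[j])))
    return o

def step(c, lr=0.05):
    g = np.zeros_like(c)
    for i in range(n):
        for j in range(i + 1, n):
            d = c[i] - c[j]
            dist = np.linalg.norm(d)
            o = 2 * r - dist
            if o > 0 and dist > 1e-12:
                u = d / dist
                g[i] -= 2 * o * u        # pushing i away from j lowers o_ij
                g[j] += 2 * o * u
    c = c - lr * g
    return np.clip(c, r, L - r)          # project centers back into the box

before = sum(x * x for x in overlaps(c))
for _ in range(500):
    c = step(c)
after = sum(x * x for x in overlaps(c))
```

Here the container is deliberately too small for an overlap-free packing, so the descent drives the centers toward the corners and settles at a configuration with small but nonzero overlap, mirroring the minimize-the-maximum-overlap setting of the abstract.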

Universal Laws and Architectures

This talk will focus on progress towards a more “unified” theory for complex networks motivated by biology and technology, and involving several elements: hard limits on achievable robust performance (“laws”), the organizing principles that succeed or fail in achieving them (architectures and protocols), the resulting high variability data and …

Kevin: Query Complexity of Derivative-Free Optimization || Pari: Covariance Sketching

Kevin: This work provides lower bounds on the convergence rate of Derivative Free Optimization (DFO) with noisy function evaluations, exposing a fundamental and unavoidable gap between the performance of algorithms with access to gradients and those with access to only function evaluations. However, there are situations in which DFO is …
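
A hedged sketch of the DFO setting the abstract contrasts with gradient methods (the specific estimator below, two-point finite differences, is an illustrative assumption): the optimizer sees only noisy function evaluations, so its gradient estimates carry noise of order sigma/h, which is the mechanism behind the evaluation-versus-gradient gap.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.01                        # noise level of each function evaluation

def f_noisy(x):
    # noisy zeroth-order oracle for f(x) = ||x||^2
    return float(x @ x + sigma * rng.standard_normal())

def fd_gradient(x, h=0.1):
    # two-point finite-difference estimate of the gradient
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f_noisy(x + e) - f_noisy(x - e)) / (2 * h)
    return g

x = np.ones(3)
for _ in range(200):
    x = x - 0.1 * fd_gradient(x)
# x approaches the minimizer 0, but only up to a noise-limited accuracy
```

With exact gradients the iterates would contract geometrically to 0; with noisy evaluations they stall at a floor set by sigma and the step size h, which is the gap the lower bounds in the talk quantify.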

Fast global convergence of gradient methods for high-dimensional statistical recovery

Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension d to grow with (and …
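
A minimal sketch of the kind of problem the abstract analyzes (assumptions for concreteness: least-squares loss, an ell_1-ball constraint at an oracle radius, and a sparse ground truth with d > n): projected gradient descent on the constrained M-estimation problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 200                                  # high-dimensional: d > n
w_star = np.zeros(d)
w_star[:5] = 1.0                                 # 5-sparse ground truth
X = rng.standard_normal((n, d))
y = X @ w_star + 0.1 * rng.standard_normal(n)

def project_l1(w, R):
    # Euclidean projection onto the ell_1 ball of radius R
    if np.abs(w).sum() <= R:
        return w
    u = np.sort(np.abs(w))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(w) + 1) > css - R)[0][-1]
    tau = (css[k] - R) / (k + 1)
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

R = np.abs(w_star).sum()                         # oracle radius, for the sketch
lr = n / np.linalg.norm(X, 2) ** 2               # step ~ 1/L, L = smoothness
w = np.zeros(d)
for _ in range(300):
    grad = X.T @ (X @ w - y) / n                 # gradient of (1/2n)||y - Xw||^2
    w = project_l1(w - lr * grad, R)
```

Even though d > n makes the loss non-strongly-convex globally, the iterates converge quickly toward w_star, which is the fast-global-convergence phenomenon the abstract's analysis explains via restricted notions of curvature.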