This work provides lower bounds on the convergence rate of Derivative Free Optimization (DFO) with noisy function evaluations, exposing a fundamental and unavoidable gap between the performance of algorithms with access to gradients and those with access only to function evaluations. Nevertheless, there are situations in which DFO is the only option, and for such situations we propose a new DFO algorithm that is proved to be near-optimal for the class of strongly convex objective functions. A distinctive feature of the algorithm is that it uses only Boolean-valued function comparisons, rather than function evaluations. This makes the algorithm useful in an even wider range of applications, such as optimization based on paired comparisons from human subjects. We also show that regardless of whether DFO is based on noisy function evaluations or Boolean-valued function comparisons, the convergence rate is the same.
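To make the comparison-oracle idea concrete, here is a toy sketch (not the paper's algorithm) of derivative-free descent that touches the objective only through Boolean-valued pairwise comparisons. The random-coordinate proposal, step-shrinking schedule, and noiseless oracle are all illustrative assumptions:

```python
import random

def comparison_oracle(f, x, y):
    # Boolean-valued oracle: answers "is f(x) < f(y)?" and nothing more.
    # In the paper's setting this answer may be noisy; here it is exact.
    return f(x) < f(y)

def comparison_descent(f, x0, n_iters=2000, step0=1.0, shrink=0.99, seed=0):
    """Toy DFO loop: propose a random coordinate perturbation and keep it
    only if the comparison oracle prefers it; shrink the step on rejection.
    (Illustrative only -- not the near-optimal algorithm from the talk.)"""
    rng = random.Random(seed)
    x = list(x0)
    step = step0
    for _ in range(n_iters):
        i = rng.randrange(len(x))
        y = list(x)
        y[i] += step * rng.choice([-1.0, 1.0])
        if comparison_oracle(f, y, x):
            x = y               # oracle says the proposal is better
        else:
            step *= shrink      # rejected: refine the search scale
    return x

# Strongly convex test objective with minimizer at (1, -2).
f = lambda v: (v[0] - 1.0) ** 2 + 2.0 * (v[1] + 2.0) ** 2
x_hat = comparison_descent(f, [5.0, 5.0])
```

Note that the loop never reads a function value directly; every decision is a single yes/no comparison, which is exactly the kind of feedback a human subject could supply in a paired-comparison experiment.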
Learning covariance matrices from high-dimensional data is an important problem that has received a lot of attention recently. For example, network links can be detected by identifying strongly correlated variables. We are particularly interested in the high-dimensional setting, where the number of samples one has access to is much smaller than the number of variates. Fortunately, in many applications of interest, the underlying covariance matrix is sparse and hence has limited degrees of freedom. In most existing work, however, it is assumed that one can obtain samples of all the variates simultaneously. This could be very expensive or physically infeasible in some applications. As a means of overcoming this limitation, we propose a new framework whereby: (a) one pools information about the covariates by forming sketches of the samples, and (b) reconstructs the original covariance matrix from just these sample sketches. We show theoretically that this is indeed possible and we discuss efficient algorithms to solve this problem.
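A minimal numerical sketch of the sketching idea, under strong simplifying assumptions: a random Gaussian sketching matrix, a noiseless sketched covariance, and a sparsity support that is known in advance (the talk's framework recovers it from the sketches alone; here we only show that the compressed observations pin down the sparse matrix). All dimensions and the support-restricted least-squares recovery are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

p, m = 8, 5                          # ambient dimension and sketch dimension, m < p
A = rng.standard_normal((m, p))      # sketching matrix (illustrative Gaussian choice)

# Sparse ground-truth covariance: identity plus a few off-diagonal "links".
Sigma = np.eye(p)
Sigma[0, 3] = Sigma[3, 0] = 0.8
Sigma[2, 6] = Sigma[6, 2] = -0.5

# Only the m x m sketched covariance A Sigma A^T is observed, not Sigma.
S = A @ Sigma @ A.T

# Each entry Sigma[i, j] contributes outer(A[:, i], A[:, j]) to S, so
# restricting to the (here, known) sparsity support gives an overdetermined
# linear system: m*m equations in |support| unknowns.
support = [(i, j) for i in range(p) for j in range(p) if Sigma[i, j] != 0]
M = np.column_stack([np.outer(A[:, i], A[:, j]).ravel() for (i, j) in support])
coef, *_ = np.linalg.lstsq(M, S.ravel(), rcond=None)

Sigma_hat = np.zeros((p, p))
for (i, j), c in zip(support, coef):
    Sigma_hat[i, j] = c
```

Here m*m = 25 sketched observations suffice to recover all 12 nonzero entries of the p x p matrix, even though m < p; this is the degrees-of-freedom intuition behind reconstructing a sparse covariance matrix from low-dimensional sketches.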
Discovery Building, Orchard View Room
Kevin Jamieson, Parikshit Shah