We consider a probabilistic version of the classical 0-1 packing problem, in which the item weights and the knapsack capacity are random. The objective is to choose a set of items that maximizes profit while ensuring that the probability of satisfying the knapsack constraint exceeds a given risk tolerance. We introduce a simple decomposition algorithm, based on a probabilistic extension of cover inequalities, to solve a sample average approximation (SAA) of this problem. These inequalities yield an exact formulation by cutting off all infeasible solutions but, like their deterministic counterparts, they are weak. We propose a probabilistic sequential lifting procedure to strengthen them, leveraging computational strategies that have proven successful for the deterministic knapsack problem. Exact lifting is hard, but we obtain an effective upper bound for the lifting problem using a scenario decomposition approach. Additional valid inequalities are proposed to further strengthen the bounds. A key advantage of our algorithm is that the number of branch-and-bound nodes searched is nearly independent of the number of scenarios used in the SAA, in stark contrast to formulations that introduce binary variables for choosing which scenarios must satisfy the chance constraint. Computational results will be presented for instances of chance-constrained single- and multi-dimensional knapsack problems and set packing problems.
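The probabilistic cover inequalities mentioned above can be illustrated in miniature. Below is a minimal sketch, assuming an SAA in which at most floor(eps * N) of the N sampled scenarios may violate the capacity; the function names, instance data, and tolerance are illustrative, not taken from the talk.

```python
import math

def chance_feasible(x, weights, capacities, eps):
    """SAA chance constraint: the selected items' total weight may exceed
    the capacity in at most floor(eps * N) of the N scenarios."""
    n_viol = sum(
        1 for w, b in zip(weights, capacities)
        if sum(wi for wi, xi in zip(w, x) if xi) > b
    )
    return n_viol <= math.floor(eps * len(weights))

def is_probabilistic_cover(cover, weights, capacities, eps):
    """A set C is a probabilistic cover if packing exactly the items in C
    already violates the chance constraint; the cover inequality
    sum_{i in C} x_i <= |C| - 1 then cuts off every solution containing C."""
    n_items = len(weights[0])
    x = [i in cover for i in range(n_items)]
    return not chance_feasible(x, weights, capacities, eps)

# Toy data: N = 4 scenarios of (per-item weights, capacity), eps = 0.25,
# so at most floor(0.25 * 4) = 1 scenario may be violated.
W = [[4, 5, 3], [4, 6, 3], [5, 5, 4], [4, 5, 3]]
B = [10, 10, 10, 10]
```

Here packing items {0, 1, 2} exceeds the capacity in all four scenarios, so it is a probabilistic cover and x_0 + x_1 + x_2 <= 2 is a valid cut, while {0, 1} satisfies the chance constraint and is not a cover.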
Linear programming (LP) decoding for low-density parity-check (LDPC) codes, proposed by Feldman et al., has attracted considerable attention in recent years. Despite having theoretical guarantees in some regimes, at low SNRs LP decoding is observed to have worse error performance than belief propagation (BP) decoding. In this work, we present a class of decoding algorithms obtained by attempting to solve a non-convex optimization problem using the alternating direction method of multipliers (ADMM). This non-convex problem is constructed by adding a penalty term to the LP decoding objective. The goal of the penalty term is to make "pseudocodewords", the non-integer vertices of the LP relaxation at which the LP decoder fails, more costly.
Based on this non-convex problem, we develop a reweighted LP decoding algorithm. We show that this reweighted LP has an improved theoretical recovery threshold compared to the original LP. In addition, simulations show that, in comparison to LP decoding, both the ADMM penalized decoder and the reweighted LP decoder achieve significantly lower error rates, and neither is observed to have an "error floor".
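The effect of penalizing pseudocodewords can be sketched in a few lines. This is a minimal illustration, assuming an l1-style penalty alpha * sum |x_i - 0.5| subtracted from the LP cost; the specific penalty form, the value of alpha, and the toy data are illustrative choices, not the speakers' exact formulation.

```python
def penalized_objective(x, gamma, alpha):
    """LP decoding cost gamma . x minus a penalty that makes fractional
    coordinates (pseudocodeword components) relatively more expensive.
    The l1-style term alpha * |x_i - 0.5| is one illustrative choice:
    it is largest for integral coordinates, so subtracting it rewards them."""
    cost = sum(g * xi for g, xi in zip(gamma, x))
    penalty = alpha * sum(abs(xi - 0.5) for xi in x)
    return cost - penalty

# Toy log-likelihood ratios favoring the all-ones word, an integral
# codeword, and a fractional LP vertex (a "pseudocodeword").
gamma = [-1.0, -1.0, -1.0]
codeword = [1.0, 1.0, 1.0]
pseudocodeword = [0.5, 0.5, 0.5]
```

With alpha = 1, the integral point scores -4.5 while the fractional vertex scores only -1.5, so the penalized objective tilts the minimization toward integral solutions even though the plain LP cost gap is smaller.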
Discovery Building, Orchard View Room
Xishuo Liu, Yongjia Song