Systems | Information | Learning | Optimization

Adversarial Networks with Structured Inputs and Outputs

Machine learning holds significant potential to change how the Air Force operates, from advanced reasoning over massive heterogeneous data to learning adaptive models for multi-agent planning, strategy development, and decision making. However, key challenges must first be addressed before this potential can be fully realized. This talk will first highlight key Air Force challenges spanning data-efficient, robust, and interactive learning techniques that support the dynamic and complex missions of the Air Force. It will then present two efforts that use adversarial training techniques.

In the first, we learn a model that, given an image, generates a corresponding scene graph. Scene graphs are data structures that represent the contents of an image and support tasks ranging from visual question answering to content-based image search with complex queries. Our method first generates individual facts about a scene and then “stitches” these facts together to form a scene graph. We show that our method generates more accurate and expressive scene graphs than the prior state of the art without needing ground-truth bounding boxes.

The second work focuses on the task of learning an interactive image generator. Our model constrains its outputs based on input in the form of a variable-size set of relative constraints. A user can therefore interact with the model iteratively: providing a constraint, viewing the model’s output, and then adding more constraints to the constraint set as she sees fit. We show that our method learns a generator that satisfies a large percentage of input constraints without sacrificing image quality relative to state-of-the-art Generative Adversarial Networks.
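To make the scene-graph idea concrete, below is a minimal sketch of representing individual facts as (subject, predicate, object) triples and stitching them into a single graph. The function name, the merge-by-label rule, and the example facts are illustrative assumptions, not the method presented in the talk.

```python
from collections import defaultdict

def stitch_facts(facts):
    """Merge individual (subject, predicate, object) facts into one scene graph.

    Nodes are object labels; edges are predicates. Facts mentioning the same
    label are assumed to refer to the same node (the simplest stitching rule).
    """
    graph = defaultdict(list)          # node -> list of (predicate, node)
    for subj, pred, obj in facts:
        graph[subj].append((pred, obj))
        graph.setdefault(obj, [])      # ensure objects with no outgoing edges appear
    return dict(graph)

# Facts a model might emit one at a time for a single image.
facts = [
    ("man", "riding", "horse"),
    ("man", "wearing", "hat"),
    ("horse", "on", "grass"),
]

scene_graph = stitch_facts(facts)
# {'man': [('riding', 'horse'), ('wearing', 'hat')],
#  'horse': [('on', 'grass')], 'hat': [], 'grass': []}
```

Once stitched, such a graph can answer structured queries (e.g., "what is the man riding?") by following edges, which is what enables visual question answering and complex content-based search.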
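The interactive-generation loop can likewise be sketched as a simple protocol: the user adds one relative constraint at a time and the generator re-conditions on the growing constraint set. `ConstrainedGenerator` and the constraint format below are hypothetical placeholders, not the speakers' actual model or API.

```python
class ConstrainedGenerator:
    """Stand-in for a GAN generator conditioned on a set of relative constraints."""
    def generate(self, constraints):
        # A real model would map (noise, constraint set) to an image;
        # here we return a description purely for illustration.
        return f"image satisfying {len(constraints)} constraint(s)"

def interactive_session(generator, constraint_stream):
    """Grow the constraint set one constraint at a time and regenerate."""
    constraints = []
    for constraint in constraint_stream:          # e.g. ("dog", "left_of", "tree")
        constraints.append(constraint)            # variable-size constraint set
        image = generator.generate(constraints)   # condition on all constraints so far
        yield constraint, image                   # user inspects, then adds more

gen = ConstrainedGenerator()
user_constraints = [("dog", "left_of", "tree"), ("sun", "above", "tree")]
for constraint, image in interactive_session(gen, user_constraints):
    print(constraint, "->", image)
```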
September 26 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Eric Heim, Lee Seversky