Systems | Information | Learning | Optimization
 

An Active Learning System with Applications to Psychology Research

Today, machine learning is responsible for most of what we perceive as the personalization of the web: automatic recommendations for movies (Netflix) or music (Spotify, Last.fm), personalized search results based on your recent searches or email (Google), automatic credit card fraud detection (Chase), social network friend identification (Facebook, LinkedIn), and, of course, personalized ad serving. Machine learning in its standard (passive) form is essentially described by the following problem: given N paired examples (X_i, Y_i) for i = 1, …, N drawn independently and identically distributed from some joint distribution, pick a rule h from some set H such that E[ loss( h(X), Y ) ] is small, where the expectation is over the random variables (X, Y) and loss(u, v) is just a measure of discrepancy (for instance, loss(u, v) = (u-v)^2). Active learning slightly changes the rules of the game by asking: if I can sample X_i from the marginal distribution P_X = \int dP_{X,Y} essentially for free (or it is known), but I am charged $1 for asking to observe the corresponding Y_i given X_i, can I learn the same rule h as in the problem above using far fewer than N observations of the Y_i variables? For many real-world problems the answer is yes, and in some cases it can be shown that only O(log N) observations are necessary and sufficient. In light of this, it may be surprising that almost no one is using active learning in practice. This talk will be a high-level discussion of active learning, where the challenges lie, and why it is rarely implemented. I will then discuss the efforts that I and others in Robert Nowak’s research lab have made toward building an active learning system for the real world. The goal of the system, named NEXT.Discovery, is to implement state-of-the-art active learning algorithms in a web framework that is as easy to set up and use as a Doodle poll. Our first success story for the system has been an application for UW psychologists that Chris Cox will describe in detail in the second portion of the talk.
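To make the O(log N) claim concrete, here is a minimal sketch (not part of the talk) of the classic one-dimensional threshold example, where actively choosing which labels to request amounts to a binary search and uses roughly log2(N) label queries instead of N; the noiseless oracle and the uniform unlabeled pool are illustrative assumptions.

```python
import numpy as np

# Toy illustration: learning a 1-D threshold classifier.
# Passive learning needs labels for essentially all N points; active learning
# can locate the threshold with ~log2(N) label queries via binary search.
# (Assumes noiseless labels and a sorted pool of unlabeled X's.)

rng = np.random.default_rng(0)
true_threshold = 0.37                          # unknown to the learner
X = np.sort(rng.uniform(0, 1, size=1000))      # unlabeled pool, free to sample

def query_label(x):
    """Oracle: costs '$1' per call; returns Y given X."""
    return int(x >= true_threshold)

# Active learning: binary search for the leftmost positively labeled point.
lo, hi, n_queries = 0, len(X) - 1, 0
while lo < hi:
    mid = (lo + hi) // 2
    n_queries += 1
    if query_label(X[mid]) == 1:
        hi = mid          # threshold is at or below X[mid]
    else:
        lo = mid + 1      # threshold is above X[mid]

print(f"Estimated threshold ~ {X[lo]:.3f} using {n_queries} labels instead of {len(X)}")
```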

Knowing the conceptual similarity structure that a population of people perceives among a set of objects or ideas can be extremely useful for a number of tasks in psychology research. Such structure can be approximated by a Euclidean embedding of the objects in which relative Euclidean distance reflects relative similarity. While there are many ways to design experiments to discover such a Euclidean embedding, a natural candidate is an approach called Non-Metric Multidimensional Scaling, which shows three objects labelled A, B, and C to the participant and asks “Is object A closer to object B or C?” For N objects there are O(N^3) such possible queries, and many can be redundant. NEXT.Discovery makes it possible to obtain such estimates for targeted groups of people, including young children, the elderly, and people from around the world, with relative ease for participants and researchers alike. Critically, the estimates obtained using these new techniques tend to be psychologically valid, in that the predictive accuracy for a particular triplet is often comparable to that obtained by assuming no Euclidean structure and taking the majority vote of the target population for that triplet. The ability to obtain such estimates facilitates research into normal learning and aging and into abnormal learning and dementia, and stands to further our understanding of how knowledge is represented in the human brain.
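As a rough illustration of the estimation step (a sketch under simplified assumptions, not the NEXT.Discovery implementation), the following fits a two-dimensional embedding to simulated triplet answers by gradient descent on a hinge loss over pairwise squared distances; the object count, hyperparameters, and random-triplet design are all illustrative.

```python
import numpy as np

# Minimal sketch: fit a 2-D embedding from answered triplet queries of the form
# "object a is closer to object b than to object c" by gradient descent on a
# hinge loss over pairwise squared distances. The ground-truth positions below
# exist only to simulate participant answers.

rng = np.random.default_rng(1)
n_objects, dim = 20, 2
truth = rng.normal(size=(n_objects, dim))

def closer(a, b, c, pts):
    """True if object a is closer to object b than to object c under pts."""
    return np.sum((pts[a] - pts[b]) ** 2) < np.sum((pts[a] - pts[c]) ** 2)

# Simulate answered triplets, reordered so (a, b, c) always means "a closer to b".
triplets = []
for _ in range(2000):
    a, b, c = rng.choice(n_objects, size=3, replace=False)
    if not closer(a, b, c, truth):
        b, c = c, b
    triplets.append((a, b, c))

# Gradient descent: for each violated triplet, push d(a,b)^2 + margin below d(a,c)^2.
emb = rng.normal(scale=0.1, size=(n_objects, dim))
lr, margin = 0.3, 0.1
for _ in range(300):
    grad = np.zeros_like(emb)
    for a, b, c in triplets:
        d_ab = np.sum((emb[a] - emb[b]) ** 2)
        d_ac = np.sum((emb[a] - emb[c]) ** 2)
        if d_ab + margin > d_ac:                       # hinge is active
            grad[a] += 2 * ((emb[a] - emb[b]) - (emb[a] - emb[c]))
            grad[b] += -2 * (emb[a] - emb[b])
            grad[c] += 2 * (emb[a] - emb[c])
    emb -= lr * grad / len(triplets)

satisfied = np.mean([closer(a, b, c, emb) for a, b, c in triplets])
print(f"Fraction of answered triplets respected by the learned embedding: {satisfied:.2f}")
```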

September 10 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Chris Cox, Kevin Jamieson