Mind reading involves a subtle interplay of statistics and physical modeling, and the latter is often not considered as critically as it should be. For example, parapsychological experiments have demonstrated mind-reading effects that are statistically significant but physically implausible. We need not look to fringe science to find such problems, however. Cognitive neuroscience, the effort to understand how the brain gives rise to the mind, has long adopted untested assumptions that might best be described as modular: (1) different cognitive functions are supported by discrete regions; (2) neighboring regions process the same information in essentially the same way; (3) a given region always subserves the same function; (4) regions communicate with each other through fixed functional/anatomical networks; and (5) functional regions and networks are located in the same place across healthy individuals. These assumptions are problematic for reasons that have been repeatedly laid bare by major figures over the course of modern neuroscience, from Flourens in the 1850s, to Lashley and Hebb in the mid-20th century, to Rumelhart and McClelland in the 1980s. In recent years, neuro-computational approaches that eschew modularity have again emerged as a major theoretical force. Yet in functional brain imaging, the most important and ubiquitous method in cognitive neuroscience, modular assumptions remain firmly entrenched: they lie at the heart of the statistical methods that are now standard in the discipline. If these assumptions are incorrect, then existing methods have led to an extremely misleading textbook view of human brain function. I will briefly describe our ongoing project to redesign the statistical analysis of brain imaging data from the bottom up, beginning with assumptions that are more consistent with contemporary views of how neural systems store and process information.
Specifically, we assume that neural representation and processing are radically distributed and dynamically configured.
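The contrast between a modular and a distributed code can be made concrete with a toy simulation (a minimal sketch, not the project's actual method; all sizes and signal levels below are invented for illustration): a stimulus variable is spread weakly across many units so that each unit alone is nearly uninformative, while the pattern across units decodes the stimulus reliably.

```python
import numpy as np

# Toy illustration (hypothetical parameters throughout): a binary "stimulus"
# is encoded in the *pattern* across many units, while every single unit is
# only weakly informative on its own.
rng = np.random.default_rng(0)
n_trials, n_units = 400, 50

# Random sign pattern: the code is distributed across all units.
pattern = rng.choice([-1.0, 1.0], size=n_units)
labels = rng.integers(0, 2, size=n_trials)                # stimulus A/B per trial
signal = 0.3 * np.outer(2 * labels - 1, pattern)          # weak, distributed signal
data = signal + rng.standard_normal((n_trials, n_units))  # noisy "recordings"

# Single-unit ("modular") readout: threshold each unit independently,
# taking the better of the two threshold directions per unit.
per_unit = []
for j in range(n_units):
    acc = ((data[:, j] > 0) == (labels == 1)).mean()
    per_unit.append(max(acc, 1 - acc))
single_acc = float(np.mean(per_unit))

# Distributed readout: project onto the full pattern (a matched filter).
dist_acc = float(((data @ pattern > 0) == (labels == 1)).mean())

print(f"average single-unit accuracy: {single_acc:.2f}")
print(f"distributed-pattern accuracy: {dist_acc:.2f}")
```

With these settings the per-unit signal-to-noise ratio is small, so single-unit accuracy hovers a little above chance, while pooling all fifty units yields near-perfect decoding.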
Multiple-input multiple-output (MIMO) systems that exploit multi-antenna arrays and millimeter-wave systems operating in the 30-300 GHz band offer synergistic opportunities for meeting the exploding data requirements of wireless networks. While combining these technologies offers the advantages of high-dimensional MIMO, conventional MIMO approaches suffer a dramatic increase in transceiver complexity due to the large number of antennas required for optimal performance. In practice, however, the dimension of the communication subspace is typically much smaller than the system dimension. By multiplexing data onto highly directional orthogonal beams, beamspace MIMO (B-MIMO) provides direct and near-optimal access to this low-dimensional communication subspace, and when realized with analog beamforming it dramatically lowers transceiver complexity. In this paper we present and analyze the capacity of several low-complexity B-MIMO transceivers for realizing multi-Gigabit/s speeds.
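The beamspace idea can be sketched numerically (an illustrative toy, not the paper's transceiver design; the array size, number of paths, and 95% energy threshold below are arbitrary choices): a unitary DFT across a uniform linear array maps the antenna-domain channel into orthogonal beams, and a sparse multipath channel concentrates its energy in a few of them, so a low-complexity transceiver need only process that small communication subspace.

```python
import numpy as np

# Hypothetical setup: n-antenna uniform linear array, a few multipath
# components at random angles (typical of sparse mmWave channels).
rng = np.random.default_rng(1)
n = 64                                   # antennas (system dimension)
paths = 3                                # sparse multipath components
angles = rng.uniform(-0.5, 0.5, paths)   # normalized spatial frequencies
gains = (rng.standard_normal(paths) + 1j * rng.standard_normal(paths)) / np.sqrt(2)

# Antenna-domain channel: weighted sum of array steering vectors.
k = np.arange(n)
h = sum(g * np.exp(2j * np.pi * a * k) for g, a in zip(gains, angles)) / np.sqrt(n)

# Beamspace transform: unitary DFT, i.e. a bank of orthogonal beams.
U = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)
hb = U @ h

# Beam selection: keep only the beams carrying most of the channel energy.
power = np.abs(hb) ** 2
order = np.argsort(power)[::-1]
cum = np.cumsum(power[order]) / power.sum()
d = int(np.searchsorted(cum, 0.95)) + 1  # beams needed for 95% of the energy

print(f"system dimension: {n}, beams capturing 95% of channel energy: {d}")
```

Because the DFT is unitary, the transform preserves channel energy exactly; sparsity in angle is what makes the retained beam count far smaller than the number of antennas.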
Discovery Building, Orchard View Room
John Brady, Rob Nowak