2021
What can typing tell us about language production and monitoring?
From the perspective of cognitive science, typing is fascinating in that it sits right at the intersection of the language production, motor control, and visual processing systems. So far, the majority of studies on typing have focused on the motor-control aspect. The common assumption is that the lexical and sublexical processes involved in typing are encapsulated modules, and that typing errors are mostly motoric errors. In this talk, I will first show that typing errors are heavily influenced by linguistic factors, and that their pattern suggests a system in which sublexical processing influences lexical processing, contrary to the claims of modular views. Next, I will present a series of experiments examining the role of visual information in the accuracy and timing of typing, as well as in the conscious detection and correction of typing errors. Time permitting, I will discuss some EEG data on how information from the internal (linguistic and motor) channels is combined with the external (visual processing) channel to monitor and regulate typing performance.
I argue that the evolution of human life history, with its distinctively long, protected human childhood and high level of investment in the young, allows an early period of broad hypothesis search and exploration, before the demands of goal-directed exploitation set in. I relate this developmental pattern to computational ideas about explore-exploit trade-offs, search and sampling, and to neuroscience findings. I also present several lines of empirical evidence suggesting that young human learners are highly exploratory, both in terms of their search for external information and their search through hypothesis spaces. In fact, they are sometimes more exploratory than older learners and adults. This exploration, in turn, depends on the equally distinctive human period of elderhood, which provides both care and teaching.
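One common way to make the explore-exploit trade-off mentioned above concrete is a softmax ("temperature") bandit whose temperature anneals over the lifespan: high early (broad, child-like sampling of options), low late (focused, adult-like exploitation). The sketch below illustrates that general idea only; it is not a model from the talk, and the payoffs and annealing schedule are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])  # hypothetical payoffs of three options

def softmax(q, temperature):
    z = q / temperature
    z -= z.max()                         # numerical stability
    p = np.exp(z)
    return p / p.sum()

q = np.zeros(3)                          # estimated value of each option
counts = np.zeros(3)
for t in range(1, 501):
    # High temperature early ("childhood") -> broad hypothesis search;
    # low temperature late ("adulthood") -> goal-directed exploitation.
    temperature = max(0.05, 2.0 * (1 - t / 500))
    arm = rng.choice(3, p=softmax(q, temperature))
    reward = rng.normal(true_means[arm], 0.1)
    counts[arm] += 1
    q[arm] += (reward - q[arm]) / counts[arm]  # incremental mean update

print(q.round(2))  # estimates converge near the true payoffs
```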
Gašper Beguš, Department of Linguistics, UC Berkeley
Friday, March 5, 2021
Modeling Language with Generative Adversarial Networks
Can we build models of language acquisition from raw acoustic data in an unsupervised manner? Can deep convolutional neural networks learn to generate speech using linguistically meaningful representations? In this talk, I will argue that language acquisition can be modeled with Generative Adversarial Networks (GANs) and that such modeling has implications both for the understanding of language acquisition and for the understanding of how neural networks learn internal representations. I propose a technique that allows us to wug-test neural networks trained on raw speech. I further propose an extension of the GAN architecture in which learning of meaningful linguistic units emerges from a requirement that the networks output informative data. With this model, we can test what the networks can and cannot learn, how their biases match human learning biases (by comparing behavioral data with networks’ outputs), how they represent linguistic structure internally, and what GANs’ innovative outputs can teach us about productivity in human language. This talk also makes a more general case for probing deep neural networks with raw speech data, because dependencies in speech are often better understood than those in the visual domain and because behavioral data on speech acquisition are relatively easy to access.
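For readers unfamiliar with the setup, the sketch below shows the skeleton of a GAN over raw waveforms in the style of WaveGAN (a 1-D convolutional generator and discriminator). It is an illustrative simplification, not the architecture from the talk; all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector to a short raw-audio snippet in [-1, 1]."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose1d(256, 128, 25, stride=4, padding=11, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(128, 64, 25, stride=4, padding=11, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(64, 1, 25, stride=4, padding=11, output_padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 16)
        return self.net(x)               # shape: (batch, 1, 1024)

class Discriminator(nn.Module):
    """Scores a waveform as real speech vs. generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, 25, stride=4, padding=11),
            nn.LeakyReLU(0.2),
            nn.Conv1d(64, 128, 25, stride=4, padding=11),
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 64, 1),
        )

    def forward(self, x):
        return self.net(x)

z = torch.randn(8, 100)                  # latent "inputs" one can wug-test by manipulating
fake_audio = Generator()(z)
score = Discriminator()(fake_audio)
```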
2020
John Campbell, Department of Philosophy, UC Berkeley
Friday, November 20, 2020
Temporal Reasoning and Free Will
This talk is about the connection between two puzzles: one about human temporal reasoning and one about free will. Practically all animals have some kind of sensitivity to time, whether a mere sensitivity to the time of year or day or an ability to perform complex calculations on data from interval timing. Humans, though, seem universally to think of time as linear: to think in terms of an ongoing time in which one’s birth and death can be plotted, for example. This seems to be unique to humans. And it’s not just a capacity we have: human languages tend to make it mandatory, through the demands of tense, for example, to indicate the place of reported events in this linear time. So why do humans do this, when other animals seem to get along fine without linear temporal reasoning?
The other puzzle is the characterization of free will. Freedom is usually taken to be a matter of a capacity for self-control, or self-regulation, of one kind or another. But there is a similar puzzle here: why do humans need and use this capacity for self-regulation, when other animals seem to manage without it? The answer I propose is that this has to do with the unique causal structure of human psychology, in which we find singular causation that is not grounded in patterns of general causation. This means that humans have unique problems with social coordination and with the organization of the individual’s own time. And our ability to think in terms of linear time lets us solve them, more or less.
Emily Cooper, School of Optometry, UC Berkeley
Friday, October 30, 2020
Perceptual Science for Augmented Reality
Recent years have seen impressive advances in near-eye display systems for augmented reality. In these systems, digital content is merged with the user’s view of the physical world. There are, however, unique perceptual challenges associated with designing a display system that can seamlessly blend the real and the virtual. By understanding the relevant principles that underlie our visual perception, I will show how we can address some of these challenges.
Keith Holyoak, Department of Psychology, UCLA
Friday, October 23, 2020
Abstract Semantic Relations in Mind, Brain, and Machines
Abstract semantic relations (e.g., category membership, part-whole, antonymy, cause-effect) are central to human intelligence, underlying the distinctively human ability to reason by analogy. I will describe a computational project, Bayesian Analogy with Relational Transformations (BART), that aims to extract explicit representations of abstract semantic relations from non-relational inputs automatically generated by machine learning. BART’s representations predict patterns of typicality and similarity for semantic relations, as well as the similarity of neural signals triggered by semantic relations during analogical reasoning. In this approach, analogy emerges from the ability to learn and compare relations; mapping emerges later from the ability to compare patterns of relations.
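As a toy illustration of the general idea of extracting an explicit relation representation from non-relational word vectors, the sketch below trains a logistic classifier on embedding pairs and treats its learned weights as the relation’s representation. This is a drastic simplification, not BART itself, and the "embeddings" are random stand-ins for vectors produced by machine learning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Random stand-ins for non-relational word embeddings (e.g., word2vec-style).
emb = {w: rng.normal(size=50) for w in
       ["hot", "cold", "big", "small", "up", "down", "dog", "bone", "cat", "tree"]}

def pair_features(a, b):
    return np.concatenate([emb[a], emb[b]])

antonym_pairs = [("hot", "cold"), ("big", "small"), ("up", "down")]
other_pairs = [("dog", "bone"), ("cat", "tree"), ("hot", "big")]

X = np.array([pair_features(a, b) for a, b in antonym_pairs + other_pairs])
y = np.array([1] * len(antonym_pairs) + [0] * len(other_pairs))

clf = LogisticRegression().fit(X, y)
# The weight vector is now an explicit representation of the relation:
# new pairs can be scored for relational typicality, and two relations'
# weight vectors can themselves be compared to support analogy.
print(clf.predict_proba([pair_features("cold", "hot")])[0, 1])
```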
Richard Futrell, Department of Language Science, UC Irvine
Friday, September 18, 2020
Efficiency-based models of natural language: Predicting word order universals using information theory
Why is human language the way it is? I claim that human languages can be modeled as systems for efficient communication: that is, codes that maximize information transfer subject to constraints on the cognitive resources used during language production and comprehension. I use this efficiency-based framework to formulate quantitative theories of word order, aiming to explain the cross-linguistic universals of word order documented by linguists as well as the statistical distribution of word orders in massively cross-linguistic corpus studies.
I present three results. First, I show that word orders in 54 languages are shaped by dependency locality: a pressure for words linked in syntactic dependencies to be close to each other, which minimizes working memory usage during language processing. Second, I introduce a new model of information-processing difficulty in online language processing, which simultaneously captures effects of probabilistic expectations and working memory constraints, recovering dependency locality as a special case and making new predictions, in particular about adjective order. Third, I present a computational framework in which grammars can be directly optimized for efficiency. When grammars are optimized to maximize information transfer while minimizing processing difficulty, they end up reproducing 8 typological universals of word order.
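Dependency locality is easy to state concretely: summing the head-to-dependent distances over a sentence’s parse gives a memory-cost score on which alternative word orders of the same sentence can be compared. The sketch below illustrates this with a hypothetical parse; it is not the implementation used in the studies above.

```python
# Each arc is (head_position, dependent_position); word positions are 0-indexed.
def total_dependency_length(arcs):
    return sum(abs(h - d) for h, d in arcs)

# "John(0) threw(1) out(2) the(3) old(4) trash(5)"
order_a = [(1, 0),   # threw <- John
           (1, 2),   # threw -> out
           (1, 5),   # threw -> trash
           (5, 3),   # trash -> the
           (5, 4)]   # trash -> old

# "John(0) threw(1) the(2) old(3) trash(4) out(5)": particle far from its head
order_b = [(1, 0), (1, 5), (1, 4), (4, 2), (4, 3)]

print(total_dependency_length(order_a))  # 9
print(total_dependency_length(order_b))  # 11 -> longer dependencies, higher memory cost
```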
2019
Stuart Russell, Computer Science, UC Berkeley
Friday, October 18, 2019
What if we succeed?
It is reasonable to expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? While some in the mainstream AI community dismiss the issue, I will argue instead that a fundamental reorientation of the field is required. The “standard model” in AI and related disciplines aims to create systems that optimize arbitrary objectives. As we shift to a regime in which systems are more capable than human beings, this model fails catastrophically. Instead, we need to learn how to build systems that will, in fact, be beneficial for us. I will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help. This uncertainty causes machine and human behavior to be inextricably (and game-theoretically) linked, while opening up many new avenues for research.
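The value of objective uncertainty can be illustrated with a toy calculation in the spirit of the “off-switch game” (this is not the talk’s formal model, and the utility distribution is made up): a machine that defers to a human who only permits beneficial actions does better in expectation than one that acts unilaterally.

```python
import numpy as np

rng = np.random.default_rng(1)

# The machine is uncertain about the human's utility u for a proposed action.
u_samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Acting unilaterally earns u, whatever it turns out to be.
value_act = u_samples.mean()

# Deferring: a fully informed human allows the action only when u > 0,
# otherwise switches the machine off (utility 0).
value_defer = np.maximum(u_samples, 0.0).mean()

print(f"act unilaterally: {value_act:+.3f}")    # ~ +0.000
print(f"defer to human:   {value_defer:+.3f}")  # ~ +0.399 -> deferring wins
```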
David Freedman, Department of Neurobiology, The University of Chicago
Friday, November 8, 2019
Neural circuits of cognition in artificial and biological neural networks
Humans and other advanced animals have a remarkable ability to interpret incoming sensory stimuli and plan task-appropriate behavioral responses. This talk will present parallel experimental and computational approaches aimed at understanding how visual feature encoding in upstream sensory cortical areas is transformed across the cortical hierarchy into more flexible task-related encoding in the parietal and prefrontal cortices. The experimental studies utilize multielectrode recording approaches to monitor the activity of neuronal populations, as well as reversible cortical inactivation approaches, during performance of visual decision-making tasks. In parallel, our computational work employs machine learning approaches to train recurrent artificial neural networks to perform the same tasks as in the experimental studies, allowing a deep investigation of putative neural circuit mechanisms used by both artificial and biological networks to solve cognitively demanding behavioral tasks.
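A minimal version of the computational side of this program is sketched below: a small recurrent network trained on a delayed match-to-sample task, whose hidden-state dynamics can then be compared with recorded neural populations. Task parameters and network sizes are invented for brevity; this is not the lab’s actual training setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_batch(batch=64, n_stim=4, delay=5):
    """Sample stimulus, blank delay, test stimulus -> match/non-match label."""
    sample = torch.randint(n_stim, (batch,))
    test = torch.randint(n_stim, (batch,))
    onehots = torch.eye(n_stim)
    seq = torch.zeros(batch, delay + 2, n_stim)
    seq[:, 0] = onehots[sample]          # sample period
    seq[:, -1] = onehots[test]           # test period after the delay
    label = (sample == test).long()      # 1 = match, 0 = non-match
    return seq, label

rnn = nn.RNN(input_size=4, hidden_size=64, batch_first=True)
readout = nn.Linear(64, 2)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    seq, label = make_batch()
    out, _ = rnn(seq)                    # hidden states across the whole trial
    logits = readout(out[:, -1])         # decision read out at the test step
    loss = loss_fn(logits, label)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained network's hidden-state trajectories can now be analyzed
# alongside neural population recordings from the same task.
```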