Papers by Stephen J Hanson
Back to the future: The return of cognitive functionalism
Behavioral and Brain Sciences, 2017
The claims that learning systems must build causal models and provide explanations of their inferences are not new, and advocate a cognitive functionalism for artificial intelligence. This view conflates the relationships between implicit and explicit knowledge representation. We present recent evidence that neural networks do engage in model building, which is implicit, and cannot be dissociated from the learning process.
An Exchange about Localism
The MIT Press eBooks, Apr 30, 2010
Connectionist modeling and brain function: the developing interface
MIT Press eBooks, 1990

Selecting good models
This is the third in a series of edited volumes exploring the evolving landscape of learning systems research, which spans theory and experiment, symbols and signals. It continues the exploration of the synthesis of the machine learning subdisciplines begun in volumes I and II. The nineteen contributions cover learning theory, empirical comparisons of learning algorithms, the use of prior knowledge, probabilistic concepts, and the effect of variations over time in the concepts and feedback from the environment. The goal of this series is to explore the intersection of three historically distinct areas of learning research: computational learning theory, neural networks, and AI machine learning. Although each field has its own conferences, journals, language, research results, and directions, there is a growing intersection and effort to bring these fields into closer coordination. Can the various communities learn anything from one another? These volumes present research that should b...
Independent Multiple-Sample Greedy Equivalence Search Implementation [R package IMaGES version 0.1.1]

Constraints and prospects
Part 1, Foundations: logic and learning, Daniel N. Osherson et al; learning theoretical terms, Ranan B. Banerji; how loading complexity is affected by node function sets, Stephen Judd; defining the limits of analogical planning, Diane J. Cook. Part 2, Representation and bias: learning hard concepts through constructive induction - framework and rationale, Larry Rendell and Raj Seshu; learning disjunctive concepts using domain knowledge, Harish Ragavan and Larry Rendell; learning in an abstraction space, George Drastal; binary decision trees and an "average-case" model for concept learning - implications for feature construction and the study of bias, Raj Seshu; refining algorithms with knowledge-based neural networks - improving the Chou-Fasman algorithm for protein folding, Richard Maclin and Jude W. Shavlik. Part 3, Sampling problems: efficient distribution-free learning of probabilistic concepts, Michael J. Kearns and Robert E. Schapire; VC dimension and sampling complexity of lea...
On Learning the Neural Network Architecture: A Case Study
This chapter contains sections titled: Introduction, Definitions, The Learning Algorithm, Average Case Analysis of the Learning Algorithm, Conclusion

Spherical Units as Dynamic Consequential Regions: Implications for Attention, Competition and Categorization
Neural Information Processing Systems, Oct 1, 1990
Spherical units can be used to construct dynamically reconfigurable consequential regions, the geometric bases for Shepard's (1987) theory of stimulus generalization in animals and humans. We derive from Shepard's (1987) generalization theory a particular multi-layer network with dynamic (centers and radii) spherical regions which possesses a specific mass function (Cauchy). This learning model generalizes the configural-cue network model: (1) configural cues can be learned and do not require pre-wiring the power-set of cues, (2) consequential regions are continuous rather than discrete, and (3) competition amongst receptive fields is shown to be increased by the global extent of a particular mass function (Cauchy). We compare other common mass functions (Gaussian; used in models of Kruschke, 1990) and standard backpropagation networks with hyperplane/logistic hidden units, showing that neither fares as well as models of human generalization and learning.
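For intuition about the mass-function comparison in this abstract, here is a minimal numpy sketch of a spherical unit under Cauchy versus Gaussian mass functions; the function names and the single-radius parameterization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cauchy_unit(x, center, radius):
    # Spherical "consequential region" with a Cauchy mass function:
    # heavy tails give the unit global extent, so distant stimuli
    # still elicit some response.
    d2 = np.sum((x - center) ** 2, axis=-1)
    return 1.0 / (1.0 + d2 / radius ** 2)

def gaussian_unit(x, center, radius):
    # Gaussian counterpart: response falls off exponentially,
    # so competition between receptive fields stays local.
    d2 = np.sum((x - center) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * radius ** 2))

# Tail comparison: the Cauchy unit keeps responding far from its center,
# which is what increases competition amongst receptive fields.
x = np.linspace(0.0, 5.0, 6).reshape(-1, 1)
print(cauchy_unit(x, center=np.zeros(1), radius=1.0))
print(gaussian_unit(x, center=np.zeros(1), radius=1.0))
```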
Multimodal Integration
Signal processing and communications, Sep 13, 2005
Categorization in Neuroscience: Brain Response to Objects and Events
Elsevier eBooks, 2005
Neuroscientific methods have become an increasingly important influence on the study of cognitive processing. In this chapter, we look at how the study of patient populations, in addition to neuroimaging techniques, has been used to address basic questions about category knowledge. How does the brain represent category knowledge? What information is acquired during category learning? Why do people parse action streams into discrete events? We examine how neuroscience has shaped the way we ask and answer questions about category learning and representation. There may not be agreement about the answers, but neuroscientific methods have helped to make investigating the questions more interesting.

We introduce a new method for decoding neural data from fMRI. It is based on two assumptions: first, that neural representation is distributed over networks of neurons embedded in voxel noise, and second, that the stimuli can be decoded as learned relations from sets of categorical stimuli. We illustrate these principles with two types of stimuli, color (wavelength) and letters (visual shape), both of which have early visual system responses but at the same time must be learned within a given function or category (color contrast, alphabet). Key to the decoding method is reducing the stimulus cross-correlation by a matched noise voxel sample, normalizing the stimulus voxel matrix and thus unmasking a highly discriminative neural profile per stimulus. Projecting this new voxel space (ROI) onto a smaller set of dimensions (with, e.g., non-metric multidimensional scaling), the relational information takes a unique geometric form revealing functional relationships between sets of stimuli, defined by R. Shepard as second-order isomorphisms (SOIs). In the case of colors, the SOI appears as a nearly equally spaced set of wavelengths arranged in a color wheel, with a gap between the "purples" and "reds" (consistent with the gap in Ekman's original color set). In the case of letters, a cluster space resulted from the decorrelated voxel neural profiles, which matched the phrase structure of the mnemonic used for more than 100 years to teach children the alphabet (across multiple languages): The Alphabet Song.
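A minimal sketch of the normalize-then-project pipeline described above, assuming z-scoring against the matched noise sample and 1 - correlation as the stimulus dissimilarity; the function name and parameters are illustrative, and scikit-learn's non-metric MDS stands in for whatever embedding the authors used.

```python
import numpy as np
from sklearn.manifold import MDS

def decode_stimulus_geometry(stim_voxels, noise_voxels, n_dims=2):
    # stim_voxels:  (n_stimuli, n_voxels) mean ROI response per stimulus
    # noise_voxels: (n_samples, n_voxels) matched noise voxel sample
    # Normalize each voxel against the noise sample to reduce stimulus
    # cross-correlation and unmask per-stimulus neural profiles.
    mu = noise_voxels.mean(axis=0)
    sd = noise_voxels.std(axis=0) + 1e-12
    z = (stim_voxels - mu) / sd

    # Dissimilarity between stimulus profiles: 1 - Pearson correlation.
    d = 1.0 - np.corrcoef(z)

    # Non-metric MDS projection, where second-order structure
    # (e.g., a color wheel over wavelengths) can emerge.
    mds = MDS(n_components=n_dims, metric=False,
              dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(d)
```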
Measurement and modeling of behavior under fixed-interval schedules of reinforcement
Journal of Experimental Psychology, Apr 1, 1981
Three Carneaux pigeons were exposed to various fixed-interval (FI) schedules of reinforcement to provide data for a general treatment of timing in such schedules. It was found that postreinforcement pause varied as a power function of interval length, whereas breakpoint varied proportionately with interval length. A simple count-register model of timing accommodated the data and provided a flexible mechanism whose parameters could be fixed to bring it into conformity with each of J. Gibbon's (1979) types of timing systems. ...
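In symbols, the two empirical relations the abstract reports, with T the interval length and k, a, c illustrative free parameters (the parameter names are not from the paper):

```latex
\mathrm{pause}(T) \;=\; k\,T^{a}, \qquad \mathrm{breakpoint}(T) \;=\; c\,T
```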
Arousal: Its genesis and manifestation as response rate
Psychological Review, 1978

bioRxiv (Cold Spring Harbor Laboratory), Jan 8, 2023
The default mode network (DMN) is a collection of brain regions including midline frontal and parietal structures, medial and lateral temporal lobes, and lateral parietal cortex. Although there is evidence that the network can be subdivided into at least two subcomponents, the network reliably exhibits highly correlated activity both at rest and during task performance. Current understanding regarding the function of the DMN rests on a large body of research indicating that activity in the network decreases during task epochs of experimental paradigms relative to inter-trial intervals. A seeming contradiction arises when the experimental paradigm includes tasks involving autobiographical memory, thinking about one's self, planning for the future, or social cognition. In such cases, the DMN's activity increases and is correlated with attentional networks. Some have therefore concluded that the DMN supports advanced human cognitive abilities such as interoceptive processing and theory of mind. This conclusion may be called into question by evidence of correlated activity in homologous brain regions in other, even non-primate, species. Thus, there are contradictory findings related to the function of the DMN that have been difficult to integrate into a coherent theory of its function. Using data from the Human Connectome Project, we explore the temporal dynamics of activity in different regions of the DMN in relation to stimulus presentation. We show that the dorsal portion of the network generally exhibits only a transient initial decrease in activity at the start of trials, which then increases over the trial duration. The ventral component often has a time course more similar to that of task-activated areas. We propose that such task-associated ramping dynamics are incompatible with a task-negative view of the DMN, and that the dorsal and ventral subcomponents of the network may rather work together to support bottom-up salience detection and subsequent top-down voluntary action. In this context, we re-interpret the body of anatomical and neurophysiological experimental evidence, arguing that this interpretation can accommodate the seeming contradictions regarding DMN function in the extant literature.

arXiv (Cornell University), May 30, 2016
Scale-free networks (SFNs) arise from simple growth processes, which can encourage efficient, centralized, and fault-tolerant communication (1). Recently it has been shown that stable network hub structure is governed by a phase transition at exponents greater than 2.0, causing a dramatic change in network structure, including a loss of global connectivity, an increasing minimum dominating node set, and a shift towards increasing connectivity growth compared to node growth. Is this SFN shift identifiable in atypical brain activity? The Pareto distribution, P(D) ~ D^-β, on the hub degree D is a signature of scale-free networks. During resting state, we assess degree exponents across a large range of neurotypical and atypical subjects. We use graph complexity theory to provide a predictive theory of the brain network structure. Results: we show that neurotypical resting-state fMRI brain activity possesses scale-free Pareto exponents (1.8, s.e. .01) in a single individual scanned over 66 days, as well as in 60 different individuals (1.8, s.e. .02). We also show that 60 individuals with autism spectrum disorder and 60 individuals with schizophrenia have significantly higher (>2.0) scale-free exponents (2.4, s.e. .03; 2.3, s.e. .04), indicating more fractionated and less controllable dynamics in the brain networks revealed at resting state. Finally, we show that the exponent values vary with phenotypic measures of disease severity, indicating that the global topology of the network itself can provide specific diagnostic biomarkers for atypical brain activity.
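As a sketch of how such exponents can be estimated from a network's degree sequence: this uses the continuous-data maximum-likelihood (Hill) estimator, an assumption on my part, since the abstract does not state the fitting procedure.

```python
import numpy as np

def pareto_exponent(degrees, d_min=1.0):
    # Continuous-approximation maximum-likelihood (Hill) estimate of
    # beta in P(D) ~ D^-beta, fit to degrees D >= d_min
    # (Clauset, Shalizi & Newman, 2009).
    d = np.asarray(degrees, dtype=float)
    d = d[d >= d_min]
    return 1.0 + d.size / np.sum(np.log(d / d_min))

# Illustrative check on a synthetic Pareto sample with beta = 2.5
# (real degree sequences are discrete, so this is only approximate).
rng = np.random.default_rng(0)
degrees = (1.0 - rng.random(100_000)) ** (-1.0 / 1.5)
print(pareto_exponent(degrees))  # close to 2.5
```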