ICNN'97 FINAL ABSTRACTS
BI: BIOLOGICAL NEURAL NETWORKS
ICNN97 Biological Neural Networks Session: BI1A Paper Number: 231 Oral
Hippocampal lesions may impair, spare, or facilitate latent inhibition: A neural network explanation
Catalin V. Buhusi and Nestor A. Schmajuk
Keywords: neural network latent inhibition hippocampus attention
Abstract:
Hippocampal Lesions May Impair, Spare, or Facilitate Latent Inhibition: A Neural Network Explanation
Catalin V. Buhusi and Nestor A. Schmajuk
Department of Psychology: Experimental, Duke University, Durham, NC 27706, U.S.A.
Email: vcb1@acpub.duke.edu
Although until recently experimental data suggested that hippocampal lesions impair latent inhibition (LI), the use of more selective lesion techniques and different behavioral protocols indicates that the lesions might impair, spare, or even facilitate the phenomenon. We apply a neural network model to explain these apparently contradictory experimental results.
The contradictory pattern of results is explained in terms of the interactions between the information encoding mechanisms and the attentional feedback mechanisms under the assumption that hippocampal lesions hinder information encoding. Under conditions corresponding to specific experimental procedures, the model counterintuitively predicts a facilitation of LI after hippocampal lesions.
A parametric study suggests that the main factors in the above interaction are preexposure time and experimental procedure. The study suggests that both selective and nonselective hippocampal lesions impair LI under experimental parameters that do not favor LI in normal animals, preserve LI with experimental parameters that do favor LI in normal animals, and facilitate LI under experimental procedures that impair LI during conditioning in normal animals.
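As a toy illustration only (not the Buhusi-Schmajuk model itself), the sketch below shows the basic attentional account of latent inhibition that the abstract builds on: an associability variable z decays during nonreinforced preexposure and gates the conditioning learning rate, so preexposed stimuli condition more slowly. The single encoding_rate parameter is a crude, hypothetical stand-in for how strongly attention is updated.

    # Toy latent-inhibition sketch (illustration only, not the Buhusi-Schmajuk model).
    # Attention z to the CS decays during nonreinforced preexposure and scales the
    # learning rate during subsequent CS-US conditioning.
    def run(preexposure_trials, conditioning_trials, encoding_rate=0.2):
        z, v = 1.0, 0.0                          # attention to the CS, CS-US association
        for _ in range(preexposure_trials):      # CS alone: no US, nothing to predict
            novelty = abs(0.0 - v)               # novelty is zero, so attention decays
            z += encoding_rate * (novelty - z)
        acquisition = []
        for _ in range(conditioning_trials):     # CS followed by US (asymptote = 1)
            v += 0.3 * z * (1.0 - v)             # attention-gated delta rule
            z += encoding_rate * (abs(1.0 - v) - z)
            acquisition.append(v)
        return acquisition

    # Latent inhibition: the preexposed group acquires more slowly.
    print(run(0, 10)[4], run(40, 10)[4])
    # Changing encoding_rate (a crude stand-in for a lesion that hinders encoding)
    # alters how much preexposure slows acquisition; depending on the amount of
    # preexposure the effect can go either way, echoing the impair/spare/facilitate
    # pattern described in the abstract.
    print(run(40, 10, encoding_rate=0.05)[4])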
_____
ICNN97 Biological Neural Networks Session: BI1B Paper Number: 614 Oral
Stable and unstable chaos states of receptor cell syncytium and stochastic resonance without noise
Hirofumi Funakubo, Yoshiki Kashimori and Takeshi Kambara
Keywords: chaos states receptor cell syncytium stochastic resonance
Abstract:
Stable and Unstable Chaos States of Receptor Cell Syncytium and Stochastic Resonance without Noise
Hirofumi Funakubo,Yoshiki Kashimori, and Takeshi Kambara
Department of Applied Physics and Chemistry, The University of Electro-Communications, Chofu, Tokyo 182, Japan
E-mail: kambara@nerve.pc.uec.ac.jp
Abstract
We studied how the response properties of a syncytium of receptor cells depend on the dynamics of collective ion-channel gating in the cells, which can be periodic, stably chaotic, or unstably chaotic. In the unstable chaos state, the dynamics synchronize the stimulus-induced potential variations of all receptor cells in the syncytium. In both the stable and unstable chaos states, the syncytium can detect a weak periodic input signal without the assistance of any noise.
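As a generic, hedged illustration of stochastic resonance without external noise (deterministic chaos supplying the fluctuations), the toy below, which is not the authors' receptor-syncytium model, uses a chaotic logistic map as the internal fluctuation that lets a subthreshold periodic signal reach a firing threshold; the signal frequency then appears in the threshold-crossing train.

    import numpy as np

    # Toy "stochastic resonance without noise": a chaotic logistic map, rather than
    # external noise, supplies the fluctuations that lift a subthreshold periodic
    # signal over threshold.  (Illustration only; not the authors' syncytium model.)
    T = 4096
    f_sig = 1.0 / 64.0
    signal = 0.05 * np.sin(2 * np.pi * f_sig * np.arange(T))   # weak input, subthreshold alone

    x, chaos = 0.3, np.empty(T)
    for i in range(T):
        x = 4.0 * x * (1.0 - x)                 # logistic map in its chaotic regime
        chaos[i] = x

    membrane = signal + 0.2 * (chaos - 0.5)     # chaotic fluctuation, also subthreshold alone
    spikes = (membrane > 0.12).astype(float)    # threshold crossings

    # The crossing train carries the input frequency: compare spectral power at
    # f_sig with the median background power.
    power = np.abs(np.fft.rfft(spikes - spikes.mean())) ** 2
    freqs = np.fft.rfftfreq(T)
    print(power[np.argmin(np.abs(freqs - f_sig))], np.median(power))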
_____
ICNN97 Biological Neural Networks Session: BI1C Paper Number: 314 Oral
Canonical models for mathematical neuroscience
Frank Hoppensteadt and Eugene Izhikevich
Keywords: multiple bifurcations weakly connected neural network
Abstract:
Canonical Models for Mathematical Neuroscience
Frank C. Hoppensteadt and Eugene M. Izhikevich
Abstract.
A major drawback of most mathematical models in neuroscience is that they are either implausible or their results depend on the particular model. An alternative approach takes advantage of the fact that many complicated systems behave similarly near critical regimes, such as bifurcations. It is possible to prove that all systems near certain critical regimes are governed by the same, canonical, model. Briefly, a model is canonical if there is a change of variables that transforms any other system near the same critical regime into it. Thus, the question of the plausibility of a mathematical model is replaced by the question of the plausibility of the critical regime. Another advantage of this approach is that canonical models can be derived even when only partial information about the physiology of the brain is known. Studying canonical models then reveals general laws and restrictions; in particular, one can determine what certain brain structures cannot accomplish regardless of their mathematical model. Since the existence of such canonical models sounds too good to be true, we present a list of some of them for weakly connected neural networks (WCNNs). Studying these models provides information about all WCNNs, even those that have not been invented yet.
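One canonical model of this kind, for a WCNN whose units are all near Andronov-Hopf bifurcations, can be written as follows (this particular form is a sketch drawn from the weakly connected theory and may differ in detail from the paper's own list). Up to a change of variables, every such network is governed by

    \dot z_i \;=\; (\rho_i + \mathrm{i}\,\omega_i)\, z_i \;+\; (\sigma_i + \mathrm{i}\,\gamma_i)\, z_i\,|z_i|^2 \;+\; \sum_{j \neq i} c_{ij}\, z_j , \qquad z_i \in \mathbb{C},

where z_i is the complex oscillation amplitude of the i-th unit, \rho_i and \omega_i give the distance to the bifurcation and the natural frequency, the cubic term saturates the amplitude, and the complex coefficients c_{ij} are the effective synaptic connections. Conclusions drawn from this single form apply to every WCNN near that regime, whatever its detailed physiology.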
_____
ICNN97 Biological Neural Networks Session: BI1D Paper Number: 580 Oral
A neural network model of hippocampally mediated trace conditioning
William B. Levy and Per B. Sederberg
Keywords: hippocampus mediated trace conditioning classical conditioning
Abstract:
Title: A Neural Network Model of Hippocampally Mediated Trace Conditioning
Authors: William B. Levy and Per B. Sederberg
Department of Neurological Surgery
University of Virginia Health Sciences Center
Charlottesville, Virginia 22908, USA
E-mail: wbl@virginia.edu, pbs5u@virginia.edu
Abstract:
In this paper a simple biological model of hippocampal region CA3 simulates the learning of hippocampally dependent trace classical conditioning. In this biologically based model, the time span of the associative modification rule is 5-fold less than the trace interval, implying that recurrent cell firing must play a significant role in encoding the trace interval. The results show that this simple network, with its moderate time-spanning synaptic modification rule and sparse connectivity, can learn to span trace intervals comparable to those in rabbit eyeblink experiments (Moyer et al. 1990; Solomon et al.). That is, the model learns to produce a cell firing pattern equivalent to an anticipatory unconditioned stimulus. This anticipatory pattern contains the information needed to intercept the unconditioned stimulus with a conditioned response because it is delivered at an appropriate time before the actual unconditioned stimulus.
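The sketch below is a much-simplified stand-in for the network described here (assumptions: binary units, k-winners-take-all firing, random sparse connectivity, and a one-time-step associative rule, none of which are claimed to match the authors' CA3 model exactly). It illustrates only the central idea that recurrent firing initiated by the CS can carry information across a trace interval several times longer than the synaptic rule's own time span.

    import numpy as np

    # Sparse recurrent network with a one-step associative rule and competitive
    # (k-winners) firing; recurrent activity bridges the CS-US trace interval.
    rng = np.random.default_rng(1)
    N, k, trace = 300, 30, 8                           # cells, winners per step, trace steps
    C = (rng.random((N, N)) < 0.1).astype(float)       # fixed sparse connectivity
    np.fill_diagonal(C, 0.0)
    W = C * rng.random((N, N)) * 0.1                   # small random initial weights
    CS, US = np.arange(0, 30), np.arange(270, 300)     # cells driven by CS and US

    def advance(z, ext):
        drive = W @ z + ext
        winners = np.argsort(drive)[-k:]               # competitive firing (k-WTA)
        z_new = np.zeros(N); z_new[winners] = 1.0
        return z_new

    for trial in range(30):                            # training: CS, silent trace, then US
        z = np.zeros(N); z[CS] = 1.0
        for t in range(1, trace + 2):
            ext = np.zeros(N)
            if t == trace + 1:
                ext[US] = 10.0                         # strong US input at the end
            z_new = advance(z, ext)
            W += 0.05 * C * np.outer(z_new, z) * (1.0 - W)   # one-step Hebbian rule
            z = z_new

    z = np.zeros(N); z[CS] = 1.0                       # test: CS alone
    for t in range(1, trace + 2):
        z = advance(z, np.zeros(N))
        print(t, int(z[US].sum()), "US-coding cells active")   # anticipatory firing?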
_____
ICNN97 Biological Neural Networks Session: BI1E Paper Number: 550 Oral
A neural mechanism for hyperacuity in jamming avoidance response of electric fish Eigenmannia
Yoshiki Kashimori and Takeshi Kambara
Keywords: hyperacuity jamming avoidance response electric fish Eigenmannia
Abstract:
A neural mechanism for hyperacuity in jamming avoidance response of electric fish Eigenmannia
Yoshiki Kashimori, and Takeshi Kambara
Department of Applied Physics and Chemistry, The University of Electro-Communications, Chofu, Tokyo 182, Japan
E-mail: kashi@nerve.pc.uec.ac.jp
Eigenmannia can detect a phase difference (Delta t x omega) of the order of Delta t = 1 microsecond between a jamming signal and its own signal. We propose that this hyperacuity is accomplished through a combination of two kinds of functional contributions. One comes from the functional structure of a neural unit for coincidence detection, and the other results from the functional structure of a network consisting of these units. The network model is based on the Jeffress model, but each unit is connected with its nearest neighboring units through excitatory synapses and with the other neighbors through inhibitory synapses. The network can detect a difference one tenth of the magnitude that a single unit can. Each coincidence-detector unit consists of a neuron with active dendrites that receives inputs through two excitatory connections and one inhibitory synapse. A single unit can detect a difference of the order of 10 microseconds.
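As an illustration of the general population mechanism (not the paper's specific circuit), the sketch below shows how an array of broadly tuned, delay-selective coincidence units with nearest-neighbor excitation and broader inhibition supports a population readout much finer than any single unit's tuning; all parameter values are placeholders.

    import numpy as np

    # Array of delay-tuned coincidence units (Jeffress-style) with broad tuning and
    # a center-excitatory / surround-inhibitory lateral interaction; the population
    # readout resolves delays far smaller than a single unit's tuning width.
    best_delays = np.linspace(-50, 50, 101)            # preferred delays (microseconds)
    sigma_unit = 10.0                                  # single-unit tuning width (us)

    def population_response(true_delay_us):
        r = np.exp(-0.5 * ((best_delays - true_delay_us) / sigma_unit) ** 2)
        kernel = -0.02 * np.ones(9)                    # inhibit farther neighbors ...
        kernel[3:6] = 0.3                              # ... excite the nearest ones
        return np.clip(r + np.convolve(r, kernel, mode="same"), 0.0, None)

    def estimate_delay(true_delay_us):
        r = population_response(true_delay_us)
        return float(np.sum(best_delays * r) / np.sum(r))   # population-vector readout

    # Delays 1 us apart are separable at the population level even though each
    # unit's tuning is about 10 us wide.
    print(estimate_delay(0.0), estimate_delay(1.0))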
_____
ICNN97 Biological Neural Networks Session: BI1F Paper Number: 382 Oral
Self-organizing map develops V1 organization given biologically realistic input
Jeff McKinstry and Clark Guest
Keywords: Self-organizing map V1 organization retinal wave input
Abstract:
Self-organizing map develops V1 organization given biologically realistic input
Abstract
In this paper we show that, given biologically realistic retinal wave input, our Self-Organizing Map Extended (SOME) model can develop all of the features of the maps found in the primary visual cortex (V1) of infant monkeys with no prior visual experience, and all but one of the features found in adult V1. This supports the idea that retinal waves cannot fully account for the adult feature map structure. We therefore predict that if retinal waves alone are responsible for the organization of V1 in infant monkeys with no visual experience, then that organization differs from the adult organization in a non-trivial way.
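For reference, the standard Kohonen self-organizing map that the authors' SOME model extends can be sketched as follows; the toy two-dimensional inputs here merely stand in for retinal-wave activity patterns and are not the paper's input model.

    import numpy as np

    # Minimal standard Kohonen SOM (the baseline the SOME model builds on).
    rng = np.random.default_rng(0)
    grid = 16                                           # 16 x 16 map of cortical units
    W = rng.random((grid, grid, 2))                     # each unit's weight vector
    ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")

    for step in range(5000):
        x = rng.random(2)                               # toy input sample
        d = np.sum((W - x) ** 2, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)    # best-matching unit
        sigma = 4.0 * np.exp(-step / 2000.0)            # shrinking neighbourhood
        lr = 0.5 * np.exp(-step / 2000.0)               # decaying learning rate
        h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, :, None] * (x - W)               # move the neighbourhood toward the input

    print(W[0, 0], W[grid // 2, grid // 2])             # nearby units acquire similar tuning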
_____
ICNN97 Biological Neural Networks Session: BI2A Paper Number: 617 Oral
Feature extraction from mixed odor stimuli based on spatio-temporal representation of odors in olfactory bulb
Osamu Hoshino, Yoshiki Kashimori and Takeshi Kambara
Keywords: Feature extraction mixed odor stimuli spatio-temporal representation
Abstract:
Paper title:
Feature Extraction from Mixed Odor Stimuli Based on Spatio-temporal Representation of Odors in Olfactory Bulb
Authors:
Osamu Hoshino, Yoshiki Kashimori, and Takeshi Kambara
Department of Information Network Science, Graduate School of Information Systems, The University of Electro-Communications, Chofu, Tokyo 182, Japan
ABSTRACT
In order to clarify the neural mechanism by which olfactory features are extracted from mixed odor stimuli, we present a model in which each constituent odor molecule is encoded as a spatial firing pattern in the olfactory bulb, and the rank order of the molecules' mixing ratios is encoded in the temporal sequence of those spatial patterns. That is, the quality and intensity of each odor are encoded as a limit cycle attractor in the dynamical network of the olfactory bulb. Which limit cycle attractors are formed under mixed odor stimulation depends on the type of temporal fluctuation of the constituent odor intensities. When the fluctuation of each odor in the mixture is independent of those of the other odors, each odor is encoded separately in its own attractor. However, odors with coherent fluctuations are encoded into a single limit cycle attractor, that is, they are recognized as a single feature.
_____
ICNN97 Biological Neural Networks Session: BI2B Paper Number: 527 Oral
Computing with dynamic synapses: a case study of speech recognition
Jim-Shih Liaw and Theodore W. Berger
Keywords: dynamic synapses speech recognition presynaptic feedback inhibition
Abstract:
Computing with Dynamic Synapses: A Case Study of Speech Recognition
Jim-Shih Liaw and Theodore W. Berger
Department of Biomedical Engineering
and Program in Neuroscience
University of Southern California, Los Angeles, CA 90089-1451
E-mail: liaw@bmsrs.usc.edu, berger@bmsrs.usc.edu
Abstract
A novel concept of a dynamic synapse is presented which incorporates fundamental features of biological neurons, including presynaptic mechanisms influencing the probability of neurotransmitter release from an axon terminal. The consequence of these presynaptic mechanisms is that the probability of release becomes a function of the temporal pattern of action potential occurrence, and hence the strength of a given synapse varies upon the arrival of each action potential invading the terminal region. From the perspective of neural information processing, the capability of dynamically tuning the synaptic strength as a function of the level of neuronal activation gives rise to significant representational and processing power at the synaptic level. Furthermore, there is an exponential growth in such computational power when the specific dynamics of the presynaptic mechanisms vary quantitatively across the axon terminals of a single neuron. A dynamic learning algorithm is developed in which alterations of the presynaptic mechanisms lead to different pattern transformation functions, while changes in the postsynaptic mechanisms determine how the synaptic signals are to be combined. We demonstrate the computational capability of dynamic synapses by performing speech recognition on unprocessed, noisy raw waveforms of words spoken by multiple speakers, using a simple neural network consisting of a small number of neurons connected with synapses incorporating dynamically determined probability of release.
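A hedged sketch of a dynamic synapse in the general sense described here, assuming a conventional facilitation/depression formulation in the style of Tsodyks and Markram (an assumption, not necessarily the authors' specific presynaptic mechanisms): the release probability, and hence the momentary synaptic strength, depends on the temporal pattern of presynaptic spikes.

    import numpy as np

    # Sketch of a dynamic synapse: release depends on recent spike history through a
    # facilitation variable u and a resource (depression) variable R.
    # Tsodyks-Markram-style assumption, not the authors' specific mechanisms.
    def synaptic_responses(spike_times_ms, U=0.2, tau_fac=500.0, tau_rec=300.0):
        u, R, last_t = 0.0, 1.0, None
        out = []
        for t in spike_times_ms:
            if last_t is not None:
                dt = t - last_t
                u *= np.exp(-dt / tau_fac)                   # facilitation decays between spikes
                R = 1.0 + (R - 1.0) * np.exp(-dt / tau_rec)  # resources recover toward 1
            u = u + U * (1.0 - u)                            # spike-triggered facilitation
            release = u * R                                  # effective release probability
            R = R * (1.0 - u)                                # depletion of released resources
            out.append(release)
            last_t = t
        return out

    # The same synapse transmits very differently for regular vs. bursty input,
    # which is the temporal-pattern sensitivity the abstract refers to.
    print(synaptic_responses([0, 100, 200, 300, 400]))
    print(synaptic_responses([0, 10, 20, 30, 400]))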
_____
ICNN97 Biological Neural Networks Session: BI2C Paper Number: 221 Oral
The selective integration neural network model of lightness perception
William D. Ross and Luiz Pessoa
Keywords: selective integration lightness perception context-sensitive
Abstract:
The Selective Integration Neural Network Model of Lightness Perception
A new neural network model of 3-D lightness perception is presented which builds upon previous models of contrast detection and filling-in. The consideration of a wealth of data suggests that the visual system performs the luminance-to-lightness transformation in a highly context-sensitive manner. In particular we propose that a key component of this transformation is the selective integration of early luminance ratios encoded at the retina. Simulations of the model address recent stimuli by Adelson (1993), White's illusion and the classic Benary cross, among others.
_____
ICNN97 Biological Neural Networks Session: BI2D Paper Number: 432 Oral
A neural network model of motion detection for moving plaid stimuli
Lars Liden
Keywords: motion detection moving plaid stimuli recurrent neural network
Abstract:
A Neural Network Model of Motion Detection for Moving Plaid Patterns
Lars Liden
The perception of moving plaid stimuli is examined using a recurrent neural network trained on the intersection-of-constraints solution.
It is found that the nodes in the network develop representations similar to neurons described in the animal literature. Although motion signals from 2-D intersections are available, early layers in the network develop component directional selectivity similar to neurons in V1 of the macaque monkey. Nodes higher up in the network show both object directional selectivity and component directional selectivity, similar to neurons found in MT of the macaque. These results support the notion of a two-stage motion system that relies on component motions rather than feature tracking.
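For reference, the intersection-of-constraints solution that the network was trained on can be computed directly: each grating component constrains the velocity only along its own normal, and the pattern velocity is the unique point satisfying both constraints (variable names below are ours).

    import numpy as np

    # Intersection-of-constraints: each grating constrains the 2-D velocity v only
    # along its own normal n_i (v . n_i = s_i, the measured normal speed).  Two
    # non-parallel components pin down v uniquely.
    def intersection_of_constraints(n1, s1, n2, s2):
        N = np.array([n1, n2], dtype=float)          # unit normals of the two gratings
        s = np.array([s1, s2], dtype=float)          # speeds along those normals
        return np.linalg.solve(N, s)                 # v with v.n1 = s1 and v.n2 = s2

    # Example: components drifting at 1 deg/s along normals 45 degrees apart.
    n1 = np.array([1.0, 0.0])
    n2 = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
    print(intersection_of_constraints(n1, 1.0, n2, 1.0))   # the plaid's pattern velocity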
_____
ICNN97 Biological Neural Networks Session: BI2E Paper Number: 579 Oral
A simple, biologically motivated neural network solves the transitive inference problem
William B. Levy and Xiangbao Wu
Keywords: transitive inference problem configural learning problems syllogistic reasoning
Abstract:
A Simple, Biologically Motivated Neural Network
Solves the Transitive Inference Problem
William B. Levy and Xiangbao Wu
Department of Neurological Surgery
University of Virginia Health Sciences Center
Charlottesville, Virginia 22908, USA
E-mail: wbl@virginia.edu, xw3f@virginia.edu
Abstract
Configural learning problems can be solved by both rats and humans if they are not too difficult (e.g., Alvarado and Rudy 1992, 1995). The configural learning problem we explore here is transitive inference. Transitive inference (learn the four pairs A>B, B>C, C>D, D>E, then test with the novel pair B?D) was once viewed as a logical problem.
However, it is now acknowledged that, when the stimuli are appropriate, even three-year-old humans can solve this problem, and so can pigeons and rats. Thus, even though the problem is a simple exercise in logic, there is reason to suspect that mammals, or for that matter neural networks, can solve such a problem without recourse to any explicit syllogistic reasoning. In fact, by casting the input stimuli in a form appropriate for a sequence-learning neural network, a hippocampal-like network can solve the transitive inference problem. Furthermore, performance is appropriately disrupted by turning the linear sequence of relationships into a nonlinear (circular) relationship.
_____
ICNN97 Biological Neural Networks Session: BI2F Paper Number: 315 Oral
Thalamo-cortical interactions modeled by forced weakly connected oscillatory networks
Frank Hoppensteadt and Eugene Izhikevich
Keywords: multiple Andronov-Hopf bifurcations weakly connected neural network weakly forced oscillators FM radio
Abstract:
Thalamo-Cortical Interactions
Modeled by Forced Weakly Connected Oscillatory Networks
Frank C. Hoppensteadt and Eugene M. Izhikevich
Abstract.
In this paper we do not discuss what a thalamo-cortical system modeled by weakly connected oscillators can do; rather, we discuss what it cannot do.
Interactions between any two cortical columns having oscillatory dynamics crucially depend on their frequencies. When the frequencies are different, the interactions are functionally insignificant (i.e., they average to zero) even when there are synaptic connections between the cortical columns. We say that there is a frequency gap that prevents interactions. When the frequencies are equal (or close) the oscillators interact via phase deviations. By adjusting the frequency of oscillations, each cortical column can turn on or off its connections with other columns. This mechanism resembles that of selective tuning in Frequency Modulated (FM) radios.
A weak non-constant thalamic input can remove the frequency gap and link any two oscillators provided the input is chosen appropriately. In the case of many cortical columns with incommensurable frequency gaps the thalamic forcing will be chaotic. By adjusting its temporal activity, the thalamus has complete control over information processing taking place in important parts of the cortex.
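A sketch of the calculation behind the frequency gap (standard weakly-coupled-oscillator averaging; the notation is ours, not necessarily the paper's): writing each cortical column near an Andronov-Hopf bifurcation as a complex amplitude z_i(t) = r_i e^{\mathrm{i}(\Omega_i t + \phi_i(t))} with natural frequency \Omega_i and slow phase deviation \phi_i, weak coupling of strength \varepsilon contributes phase terms of the form

    \dot\phi_i \;\propto\; \varepsilon \sum_j |c_{ij}|\, \frac{r_j}{r_i}\, \sin\!\big((\Omega_j - \Omega_i)\,t + \phi_j - \phi_i + \psi_{ij}\big),

which average to zero over the fast time scale whenever \Omega_j \neq \Omega_i, so anatomically present connections are functionally silent. A non-constant thalamic forcing whose spectrum contains power near \Omega_j - \Omega_i reintroduces a resonant, non-averaging term and thereby links the two columns, which is the FM-radio-like mechanism described above.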
_____
ICNN97 Biological Neural Networks Session: BIP2 Paper Number: 525 Poster
A model of the neuro-ommatidium of the fly's eye
J. A. Moya, G. W. Donohoe and M. J. Wilcox
Keywords: neuro-ommatidium fly's eye computer vision
Abstract:
The eyes of the fly have been extensively studied at both the structural and the cellular level. Up to the present, however, these data have not been utilized to develop a comprehensive model of the fly's eye. By presenting a model of the sensory portion of the fly's eye, this paper describes some initial steps toward the development of such a comprehensive model. The effectiveness of the model is addressed through a comparison of its performance with that found in experimental data.
_____
ICNN97 Biological Neural Networks Session: BIP2 Paper Number: 445 Poster
A possible intralaminar mechanism for interhemispheric communication
Juan Fernando Gomez Molina and Francisco Lopera
Keywords: interhemispheric communication thalamic oscillator multiplexing
Abstract:
A THALAMIC ELECTRIC OSCILLATOR FOR INTERHEMISPHERIC COMMUNICATION
Juan Fernando Gomez M. (1) and Francisco Javier Lopera R. (2)
(1) ABC Micros Ltda., Cra. 64c #48-94 (603), Medellin, Colombia; (2) Faculty of Medicine, Neuroscience Program, University of Antioquia, Medellin, Colombia
ABSTRACT. We propose a model of interhemispheric (IH) communication through the callosum, based on previous computer analysis of electric (EEG) and magnetic (MEG) recordings. Intralaminar oscillators, producing a synchronous fronto-occipital sweep of 12.5 ms in each hemisphere, improve the use of the callosal fibers by time multiplexing, coincident-redundant multiplexing, and "tuning" between homologous regions. This permits IH sampling from integration zones, phase modulation (PM) with respect to periodic (or referenced) spikes, coding by the shape of the action potential, and sensitivity to tiny electrical variations. It is postulated that the intralaminar complex is part of a set of parallel chains producing, for instance, a motor-sensory wave. These chains could be closed at different levels, as in REM sleep and the awake state. Phase lags in the chain permit "periodic sensitivity" and decoding of collective signals for specific neurons.
Reduction of resonance in disease has been reported during MEG. Hence, synchronization pathologies caused by unilateral lesions result in "cross talk", lack of access to working memory, and hallucinosis.
_____
ICNN97 Biological Neural Networks Session: BIP2 Paper Number: 116 Poster
A model of the basal ganglia for the procedure of drive
Kenta Hori
Keywords: Basal ganglia prediction
Abstract:
A model of the basal ganglia (BG) is derived from a computational viewpoint. We explain how the anatomy and physiology of the BG can be understood in terms of a procedure for drive. Conceptual principles for the procedure are proposed, including how to compute the desirability of a command. Model predictions about the electroencephalogram (EEG) show good agreement with clinical data on the so-called Fm-theta (frontal midline theta) rhythm and the readiness potential, although those data were not taken into consideration when the model was derived. Results from computer simulations of the EEG also support the consistency of the model with EEG data. A model prediction about the period of reverberation in the BG-thalamo-cortical loop coincides strikingly with an estimate from animal experiments. Taken together, the model appears to have gained strong support from these unexpected coincidences.
_____
ICNN97 Biological Neural Networks Session: BIP2 Paper Number: 14 Poster
Anti-Hebbian synapses as a linear equation solver
Kechen Zhang, Giorgio Ganis, and Martin I. Sereno
Keywords: Hebbian synapses weight normalization solving system of equations
Abstract:
Anti-Hebbian synapses as a linear equation solver
Kechen Zhang
Giorgio Ganis
Martin I. Sereno
Department of Cognitive Science
University of California, San Diego
La Jolla, California 92093-0515
Abstract
It is well-known that Hebbian synapses, with appropriate weight normalization, extract the first principal component of the input patterns (Oja 1982).
Anti-Hebb rules have been used in combination with Hebb rules to extract additional principal components or generate sparse codes (e.g., Rubner and Schulten 1990; Foldiak 1990). Here we show that the simple anti-Hebbian synapses alone can support an important computational function: solving simultaneous linear equations. During repetitive learning with a simple anti-Hebb rule, the weights onto an output unit always converge to the exact solution of the linear equations whose coefficients correspond to the input patterns and whose constant terms correspond to the biases, provided that the solution exists. If there are more equations than unknowns and no solution exists, the weights approach the values obtained by using the Moore-Penrose generalized inverse (pseudoinverse). No explicit matrix inversion is involved and there is no need to normalize weights. Mathematically, the anti-Hebb rule may be regarded as an iterative algorithm for learning a special case of the linear associative mapping (Kohonen 1989; Oja 1979). Since solving systems of linear equations is a very basic computational problem to which many other problems are often reduced, our interpretation suggests a potentially general computational role for the anti-Hebbian synapses and a certain type of long-term depression (LTD).
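A minimal sketch of the rule as we read the abstract (our notation, a toy reconstruction rather than the authors' exact formulation): take the output of the unit to be y = w.x - b for input pattern x and bias b, apply the anti-Hebbian change dw = -eta * y * x while cycling through the equations, and the weights drift toward a solution of the system; with more equations than unknowns they approach the least-squares (pseudoinverse) values.

    import numpy as np

    # Anti-Hebbian update  dw = -eta * y * x  with output  y = w.x - b,  applied
    # while cycling through the equations  a_k . w = b_k  (coefficients a_k as
    # input patterns, constants b_k as biases).  Toy reconstruction of the idea.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))          # coefficients: one equation per row
    b = rng.standard_normal(3)               # constant terms ("biases")

    w = np.zeros(3)
    for _ in range(4000):
        for x, bias in zip(A, b):
            y = w @ x - bias                 # output = mismatch for this equation
            w -= 0.05 * y * x                # anti-Hebbian weight change

    print(w)
    print(np.linalg.solve(A, b))             # agrees with the exact solution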
_____
ICNN97 Biological Neural Networks Session: BIP2 Paper Number: 321 Poster
A point-process coincidence network for representing interaural delay
Timothy A. Wilson and Rachod Thongprasirt
Keywords: point-process coincidence interaural delay acoustic information
Abstract:
Title: A Point-Process Coincidence Network for Representing Interaural Delay
Keywords: Coincidence detector, Point-process, Interaural delay, Auditory
Authors: Timothy A. Wilson and Rachod Thongprasirt
Affiliation: Department of Electrical Engineering, The University of Memphis
Address (Wilson): Room 204-A, Department of Electrical Engineering, Campus Box 6574, The University of Memphis, Memphis, Tennessee 38152-6574
Electronic Mail (Wilson): t-wilson@memphis.edu
Abstract:
A Jeffress-Licklider coincidence network for detecting interaural delays was implemented using a tick-by-tick simulation of point-process neurons with firing-event outputs. Both primary and coincidence neurons were simulated by renewal processes having absolute and relative refractory effects. Coincidence neurons were implemented by a piecewise nonlinearity operating on lowpass-filtered versions of the sum of the spike-train outputs of neurons representing the left and right ears. A network of coincidence neurons was implemented by taking the left- and right-ear spike trains for various values of delay. Coincidence detector outputs were determined, and the computational results were examined at several points corresponding to particular relative delays.
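A toy sketch of the Jeffress-Licklider scheme on spike trains (Bernoulli spiking here is a simplified stand-in for the paper's renewal-process neurons with refractoriness, and all parameters are placeholders): two ears receive the same periodic drive with an interaural delay, internal delay lines compensate by different amounts, and the coincidence count peaks at the compensating delay.

    import numpy as np

    # Jeffress-Licklider sketch on spike trains: delay-line coincidence counting
    # recovers the interaural delay.  (Toy Bernoulli spiking, not renewal processes.)
    rng = np.random.default_rng(0)
    T, itd, period = 50000, 3, 10                      # ticks, interaural delay, drive period
    drive = 0.5 + 0.5 * np.sin(2 * np.pi * np.arange(T + itd) / period)
    p = 0.3 * drive                                    # spike probability per tick
    left = rng.random(T) < p[itd:itd + T]              # left ear hears the source first
    right = rng.random(T) < p[:T]                      # right ear hears it itd ticks later

    counts = []
    for d in range(period):                            # one coincidence unit per internal delay
        counts.append(int(np.logical_and(left[:T - d], right[d:]).sum()))

    print(counts)
    print("best-matching internal delay:", int(np.argmax(counts)))   # peaks near itd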