{"title": "Connectionist Modeling and Parallel Architectures", "book": "Advances in Neural Information Processing Systems", "page_first": 1178, "page_last": 1179, "abstract": null, "full_text": "Connectionist  Modeling  and \n\nParallel  Architectures \n\nJoachim  Diederich \n\nAh  Chung Tsoi \n\nNeurocomputing Research Centre \n\nSchool of Computing Science \n\nQueensland University of Technology \n\nBrisbane Q 400 1 Australia \n\nDepartment of Electrical and \n\nComputer Engineering \nUniversity of Queensland \n\nSt Lucia, Queensland 4072, Australia \n\nThe  introduction  of  specialized  hardware  platforms  for  connectionist  modeling \n(\"connectionist  supercomputer\")  has  created a number  of research  topics.  Some of \nthese issues are controversial, e.g.  the efficient implementation of incremental learn(cid:173)\ning techniques, the need for the dynamic reconfiguration of networks and possible \nprogramming environments for these machines. \n\nJoachim Diederich, Queensland University of Technology (Brisbane), started with \na brief introduction to connectionist modeling and parallel machines. Neural network \nmodeling can be done on various levels of abstraction. On a low level of abstraction, \na  simulator can support the definition  and simulation of  \"compartmental  models,\" \nchemical  synapses, dendritic trees  etc., i.e.  explicit computational  models  of single \nneurons. These models have been built by use of SPICE (DC Berkeley) and Genesis \n(Caltech). On a higher level of abstraction,  the  Rochester  Connectionist Simulator \n(RCS~ University of Rochester) and ICSIM (lCSI Berkeley) allow  the definition of \nunit types  and  complex  connectivity patterns.  On a very high level  of abstraction, \nsimulators like tleam (UCSD) allow the easy realization of pre-defined network archi(cid:173)\ntectures (feedforward networks) and leaming algorithms such as backpropagation. \n\nBen  Gomes,  International  Computer Science Institute  (Berkeley)  introduced  the \nConnectionist Supercomputer  1.  The CNS-l is a multiprocessor system designed for \nmoderate precision fixed point operations used extensively in connectionist network \ncalculations.  Custom VLSI digital processors employ an on-chip vector coprocessor \nunit tailored for neural network calculations and controlled by RISC scalar CPU. One \nprocessor and associated commercial DRAM comprise a node, which is connected in a \nmesh topology with other nodes to establish a MIMD array. One edge of the commu(cid:173)\nnications mesh is reserved for attaching various 110 devices, which connect via a cus(cid:173)\ntom network adaptor chip. The CNS-l operates as  a compute server and one 110 port \nis used for connecting to a host workstation. \n\nUsers with mainstream connectionist applications can use CNSim, an object-oriented, \ngraphical  high-level interface to the CNS-l environment.  Those with more compli(cid:173)\ncated applications can use one of several high-level programming languages (C. C++. \n\n1178 \n\n\fConnectionist Modeling and Parallel Architectures \n\n1179 \n\nSather}, and access  a complete set of hand-coded assembler subroutine libraries  for \nconnectionist applications.  Simulation, debugging and profiling  tools  will  be avail-\nable to aid both types of users.  Additional  tools are available for  the  systems pro-\ngrammer to code at a low  level  for  maximum perfonnance. 
Urs Müller, Swiss Federal Institute of Technology (Zurich), introduced MUSIC: a high-performance neural network simulation tool on a multiprocessor. MUSIC (Multiprocessor System with Intelligent Communication), a 64-processor system, runs backpropagation at a speed of 247 million connection updates per second using 32-bit floating-point precision. Thus, while the system reaches supercomputer speed (3.8 Gflops peak), it can still be used as a personal desktop computer at a researcher's own disposal: the complete system consumes less than 800 watts and fits into a 19-inch rack. \n\nFin Martin, Intel Corporation, introduced the \"Ni1000,\" an RBF processor which accepts 40,000 patterns per second. Input patterns of 256 dimensions by 5 bits are transferred from the host to the Ni1000 and compared with the chip's \"memory\" of 1024 stored reference patterns, in parallel. A custom 16-bit on-chip microcontroller runs at 20 MIPS and controls all the programming and algorithm functions. RBFs are considered an advancement over traditional template matching algorithms and backpropagation. \n\nPaul Murtagh and Ah Chung Tsoi, University of Queensland (St Lucia), described a reconfigurable VLSI systolic array for artificial neural networks. After a brief review of some of the most common neural network architectures, e.g., the multilayer perceptron, the Hopfield net and the Boltzmann machine, Ah Chung Tsoi showed that the training algorithms of these networks can be written in a unified manner. This unified training algorithm was then shown to be implementable in a systolic array fashion, and the individual processor can be designed accordingly. Each processor incorporates functionality reconfiguration to allow a number of neural network models to be implemented. The architecture also incorporates reconfiguration for fault tolerance and processor arrangement. Each processor occupies very little silicon area, with 16 processors being able to fit onto a 10 x 10 mm^2 die. \n\nGünther Palm and Franz Kurfess introduced \"Neural Associative Memories.\" Despite having processing elements which are thousands of times faster than the neurons in the brain, modern computers still cannot match quite a few processing capabilities of the brain, many of which we even consider trivial (such as recognizing faces or voices, or following a conversation). A common principle behind those capabilities lies in the use of correlations between patterns in order to identify patterns which are similar. Looking at the brain as an information processing mechanism with -- probably among others -- associative processing capabilities, together with the converse view of associative memories as certain types of artificial neural networks, has initiated a number of interesting results. These range from theoretical considerations to insights into the functioning of neurons, as well as parallel hardware implementations of neural associative memories. The talk discussed some implementation aspects and presented a few applications. \n\n
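To illustrate the correlation principle just described, here is a minimal C sketch of a binary correlation-matrix associative memory of the Willshaw/Palm type: storing an association clips the Hebbian outer product to {0,1}, and retrieval thresholds each output sum against the number of active cue bits. The pattern size and the exact threshold rule are illustrative choices, not details taken from the talk. \n\n
#include <stdio.h>

#define N 8                     /* pattern length (illustrative) */

static unsigned char W[N][N];   /* binary weight (correlation) matrix */

/* Store the association x -> y by OR-ing in the outer product. */
static void store(const unsigned char *x, const unsigned char *y)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            W[i][j] |= x[i] & y[j];
}

/* Retrieve y from a cue x: output bit j fires only if every active
   cue bit is connected to it (threshold = cue activity k). */
static void retrieve(const unsigned char *x, unsigned char *y)
{
    int k = 0;
    for (int i = 0; i < N; i++) k += x[i];
    for (int j = 0; j < N; j++) {
        int s = 0;
        for (int i = 0; i < N; i++) s += x[i] & W[i][j];
        y[j] = (s >= k);
    }
}

int main(void)
{
    unsigned char x[N] = {1,0,1,0,0,1,0,0};
    unsigned char y[N] = {0,1,0,0,1,0,0,1};
    unsigned char out[N];
    store(x, y);
    retrieve(x, out);           /* a clean cue recovers y exactly */
    for (int j = 0; j < N; j++) printf(\"%d\", out[j]);
    printf(\"\\n\");
    return 0;
}
Because the weights are binary and the update is a clipped outer product, such memories map naturally onto the kind of parallel hardware implementations mentioned in the talk. \n\n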
Finally, Ernst Niebur, California Institute of Technology (Pasadena), presented his work on biologically realistic modeling on SIMD machines (no abstract available). \n", "award": [], "sourceid": 828, "authors": [{"given_name": "Joachim", "family_name": "Diederich", "institution": null}, {"given_name": "Ah", "family_name": "Tsoi", "institution": null}]}