{"title": "Functional Models of Selective Attention and Context Dependency", "book": "Advances in Neural Information Processing Systems", "page_first": 1180, "page_last": 1181, "abstract": null, "full_text": "Functional Models of Selective Attention \n\nand  Context Dependency \n\nThomas H.  Hildebrandt \n\nDepartment of Electrical  Engineering and Computer Science \nRoom 304  Packard Laboratory \n19  Memorial Drive West \nLehigh  University \nBethlehem  PA  18015-3084 \n\nthildebr@aragorn.eecs.lehigh.edu \n\nScope \n\nThis workshop reviewed and classified the various models which have emerged from \nthe  general  concept  of selective  attention  and  context  dependency,  and  sought  to \nidentify  their  commonalities.  It  was  concluded  that  the  motivation  and  mecha(cid:173)\nnism of these functional  models are  \"efficiency\"  and  ''factoring'', respectively.  The \nworkshop  focused  on  computational models of selective  attention  and  context  de(cid:173)\npendency  within the  realm of neural networks.  We treated only  ''functional''  mod(cid:173)\nels;  computational models of biological neural systems,  and symbolic or rule-based \nsystems were omitted from  the  discussion. \n\nPresentations \n\nThomas  H.  Hildebrandt  presented  the  results  of his  recent  survey  of the  lit(cid:173)\nerature  on  functional  models  of selective  attention  and  context  dependency.  He \nset  forth  the  notions  that selective  attention and  context  dependency  are  equiva(cid:173)\nlent,  that the goal of these  methods is  to reduce  computational requirements,  and \nthat  this  goal  is  achieved  by  what  amounts  to factoring  or  a  divide-and-conquer \ntechnique which takes advantage of nonlinearities in  the problem. \nDaniel S. Levine (University of Texas at Arlington) showed how the gated dipole \nstructure often used  in  the ART models can be used  to account for  time-dependent \nphenomena such as  habituation and overcompensation.  
His adjusted model appropriately modelled the public's adverse reaction to \"New Coke\".\n\nLev Goldfarb (University of New Brunswick) presented a formal model for inductive learning based on symbolic transformation systems and parametric distance functions as an alternative to the commonly used algebraic transformation system and Euclidean distance function. The drawbacks of the latter system were briefly discussed, and it was shown how this new formal system can give rise to learning models which overcome these problems.\n\nChalapathy Neti (IBM, Boca Raton) presented a model which he has used to increase the signal-to-noise ratio (SNR) of noisy speech signals. The model is based on adaptive filtering of frequency bands with a constant frequency-to-bandwidth ratio. This thresholding in the wavelet domain gives results which are superior to similar methods operating in the adaptive Fourier domain. Several types of signal could be detected with SNRs close to 0 dB.\n\nPaul N. Refenes (University of London Business School) demonstrated the need to take advantage of contextual information in attempting to model the capital markets. There exist some fundamental economic formulae, but they hold only in the long term. The desire to model events on a finer time scale requires reference to significant factors within a smaller window. To do this effectively requires the identification of appropriate short-term indicators, as mere overparameterization has been shown to lead to negative results.\n\nJonathan A. 
Marshall (University of North Carolina) reviewed the EXIN model, which correctly encodes partially overlapping patterns as distinct activations in the output layer, while allowing the simultaneous appearance of nonoverlapping patterns to give rise to multiple activations in the output layer. The model thus produces a factored representation of complex scenes.\n\nAlbert Nigrin (American University) presented a model similar in concept to the EXIN model. It correctly handles synonymous inputs by means of cross-inhibition of the links connecting the synonyms to the target node.\n\nThomas H. Hildebrandt also presented a model for adaptive classification based on decision feedback equalization. The model shifts the decision boundaries of the underlying classifier to compensate for shifts in the statistics of the input. On handwritten character classification, it outperformed an identical classifier which used only static decision boundaries.\n\nSummary\n\nAccording to Hildebrandt's first talk, the concepts underlying selective attention are quite broad and generally applicable. Large nonlinearities in the problem permit the use of problem subdivision or factoring (by analogy with the factoring of a Boolean equation). Factoring is a good method for reducing the complexity of nonlinear systems.\n\nThe talks by Levine and Refenes showed that context enters naturally into the description, formulation, and solution of real-world modelling problems. Those by Neti and Hildebrandt showed that specific reference to temporal context can result in immediate performance gains. The presentations by Marshall and Nigrin provided models for appropriately encoding contexts involving overlapping and synonymous patterns, respectively. 
The talk by Goldfarb indicated that abandoning assumptions regarding linearity ab initio may lead to more powerful learning systems. Refer to [1] for further information.\n\nReferences\n\n[1] Hildebrandt, Thomas H. Neural Network Models for Selective Attention and Context Dependency. Submitted to Neural Networks, December 1993.\n", "award": [], "sourceid": 813, "authors": [{"given_name": "Thomas", "family_name": "Hildebrandt", "institution": null}]}