{"title": "Performance of Synthetic Neural Network Classification of Noisy Radar Signals", "book": "Advances in Neural Information Processing Systems", "page_first": 281, "page_last": 288, "abstract": null, "full_text": "281 \n\nPERFORMANCE OF SYNTHETIC NEURAL NETWORK CLASSIFICATION OF NOISY RADAR SIGNALS \n\nS. C. Ahalt, F. D. Garber, I. Jouny, A. K. Krishnamurthy \n\nDepartment of Electrical Engineering \nThe Ohio State University, Columbus, Ohio 43210 \n\nABSTRACT \n\nThis study evaluates the performance of the multilayer perceptron and the frequency-sensitive competitive learning network in identifying five commercial aircraft from radar backscatter measurements. The performance of the neural network classifiers is compared with that of the nearest-neighbor and maximum-likelihood classifiers. Our results indicate that, for this problem, the neural network classifiers are relatively insensitive to changes in the network topology and to the noise level in the training data. While the traditional algorithms outperform these simple neural classifiers on this problem, we feel that neural networks show the potential for improved performance. \n\nINTRODUCTION \n\nThe design of systems that identify objects based on measurements of their radar backscatter signals has traditionally been predicated upon decision-theoretic methods of pattern recognition [1]. While it is true that these methods are characterized by a well-defined sense of optimality, they depend on the availability of accurate models for the statistical properties of the radar measurements. \n\nSynthetic neural networks are an attractive alternative for this problem, since they can learn to perform the classification from labeled training data and do not require knowledge of statistical models [2]. 
The primary objectives of this investigation are: to establish the feasibility of using synthetic neural networks for the identification of radar objects, and to characterize the trade-offs between neural network and decision-theoretic methodologies for the design of radar object identification systems. \n\nThe present study is focused on the performance evaluation of systems operating on the received radar backscatter signals of five commercial aircraft: the Boeing 707, 727, 747, the DC-10, and the Concorde. In particular, we present results for the multi-layer perceptron and the frequency-sensitive competitive learning (FSCL) synthetic network models [2,3] and compare these with results for the nearest-neighbor and maximum-likelihood classification algorithms. \n\n\f282 \n\nAhalt, Garber, Jouny and Krishnamurthy \n\nIn this paper, the performance of the classification algorithms is evaluated by means of computer simulation studies; the results are compared for a number of conditions concerning the radar environment and receiver models. The sensitivity of the neural network classifiers, with respect to the number of layers and the number of hidden units, is investigated. In each case, the results obtained using the synthetic neural network classifiers are compared with those obtained using an (optimal) maximum-likelihood classifier and a (minimum-distance) nearest-neighbor classifier. \n\nPROBLEM DESCRIPTION \n\nThe radar system is modeled as a stepped-frequency system measuring radar backscatter at 8, 11, 17, and 28 MHz. The 8-28 MHz band of frequencies was chosen to correspond to the \"resonant region\" of the aircraft, i.e., frequencies with wavelengths approximately equal to the length of the object. 
The four specific frequencies employed for this study were pre-selected from the database maintained at The Ohio State University ElectroScience Laboratory compact radar range as the optimal features among the available measurements in this band [4]. \n\nPerformance results are presented below for systems modeled as having in-phase and quadrature measurement capability (coherent systems) and for systems modeled as having only signal magnitude measurement capability (noncoherent systems). For coherent systems, the observation vector X = [(x_1^I, x_1^Q), (x_2^I, x_2^Q), (x_3^I, x_3^Q), (x_4^I, x_4^Q)]^T represents the in-phase and quadrature components of the noisy backscatter measurements of an unknown target. The elements of X correspond to the complex scattering coefficient whose magnitude is the square root of the measured cross section (in units of square meters, m^2), and whose complex phase is that of the measured signal at that frequency. For noncoherent systems, the observation vector X = [a_1, a_2, a_3, a_4]^T consists of components which are the magnitudes of the noisy backscatter measurements corresponding to the square root of the measured cross section. \n\nFor the simulation experiments, it is assumed that the received signal is the result of a superposition of the backscatter signal vector S and a noise vector W, which is modeled as samples from an additive white Gaussian process. \n\nCOHERENT MEASUREMENTS \n\nIn the case of a coherent radar system, the kth frequency component of the observation vector is given by: \n\nx_k^I = s_k^I + w_k^I,   x_k^Q = s_k^Q + w_k^Q,   (1) \n\nwhere s_k^I and s_k^Q are the in-phase and quadrature components of the backscatter signal, and w_k^I and w_k^Q are the in-phase and quadrature components of the sample of the additive white Gaussian noise process at that frequency. 
Each of these components is modeled as a zero-mean Gaussian random variable with variance σ^2/2, so that the total additive noise contribution at each frequency is complex-valued Gaussian with zero mean and variance σ^2. \n\nDuring operation, the neural network classifier is presented with the observation vector, of dimension eight, consisting of the in-phase and quadrature components of each of the four frequency measurements: \n\nX = [x_1^I, x_1^Q, x_2^I, x_2^Q, x_3^I, x_3^Q, x_4^I, x_4^Q]^T.   (2) \n\nTypically, the neural net is trained using 200 samples of the observation vector X for each of the five commercial aircraft discussed above. The desired output vectors are of the form \n\nd_i = [d_{i,1}, d_{i,2}, d_{i,3}, d_{i,4}, d_{i,5}]^T,   (3) \n\nwhere d_{i,j} = 1 for the desired aircraft and is 0 otherwise. Thus, for example, the output vector d_i for the second aircraft is [0, 1, 0, 0, 0]^T, with a 1 appearing in the second position. \n\nThe structure of the neural nets used can be represented by [8, n_1, ..., n_h, 5], where there are 8 input neurons, n_i hidden-layer neurons in the h hidden layers, and 5 output neurons. \n\nThe first experiment tested perceptron nets of varying architectures, as shown in Figures 1 and 2. As can be seen, there was little change in performance between the various nets. \n\nThe effect of the signal-to-noise ratio of the data observed during the training phase on the performance of the perceptron was also investigated. The results are presented in Figure 3. The network showed little change in performance until a training data SNR of 20 dB was reached. \n\nWe repeated this basic experiment using a winner-take-all network, the FSCL net [3]. Figure 4 shows that the performance of this network is also affected minimally by changes in network architecture. 
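The coherent observation model and the one-hot training targets described above can be sketched in a few lines of Python. This is a minimal sketch: the complex scattering coefficients below are hypothetical stand-ins for the measured backscatter data, and the SNR convention (mean signal power over total complex noise variance σ^2) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complex scattering coefficients for one aircraft at the four
# radar frequencies (8, 11, 17, 28 MHz); the paper's values come from
# compact-range measurements, not from this toy array.
s = np.array([0.9 + 0.3j, -0.4 + 1.1j, 1.2 - 0.2j, 0.5 + 0.7j])

def coherent_observation(s, snr_db):
    """Return the 8-dimensional observation vector
    [x1_I, x1_Q, x2_I, x2_Q, x3_I, x3_Q, x4_I, x4_Q].

    Each noise component is zero-mean Gaussian with variance sigma^2/2,
    so the complex noise at each frequency has total variance sigma^2."""
    signal_power = np.mean(np.abs(s) ** 2)
    sigma2 = signal_power / 10 ** (snr_db / 10)      # assumed SNR convention
    w = rng.normal(0.0, np.sqrt(sigma2 / 2), size=(len(s), 2))
    noisy = s + w[:, 0] + 1j * w[:, 1]
    # Interleave in-phase and quadrature parts per frequency.
    return np.column_stack([noisy.real, noisy.imag]).ravel()

def one_hot(aircraft_index, n_classes=5):
    """Desired output vector d_i with d_{i,j} = 1 for the target aircraft."""
    d = np.zeros(n_classes)
    d[aircraft_index] = 1.0
    return d

X = coherent_observation(s, snr_db=10)  # 8-dimensional network input
d = one_hot(1)                          # second aircraft: [0, 1, 0, 0, 0]
```

A training set in the style of the experiments above would draw 200 such observation vectors per aircraft at the chosen training SNR.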
\n\nWhen the FSCL net is trained with noisy data, as shown in Figure 5, the performance decreases as the SNR of the training data increases; however, the overall performance is still very close to that of the multi-layer perceptron. \n\nOur final coherent-data experiment compared the performance of the multi-layer perceptron, the FSCL net, a maximum-likelihood classifier, and the nearest-neighbor classifier. The results are shown in Figure 6. For this experiment, the training data had no superimposed noise. These results show that the maximum-likelihood classifier is superior, but requires full knowledge of the noise distribution. On average, the FSCL net performs better than the perceptron, but the nearest-neighbor classifier performs better than either of the neural network models. \n\n(Figures 1-6 plot classification performance against SNR [dB].) \n\nFigure 1: Performance of the perceptron with different numbers of hidden units (8x5x5, 8x10x5, 8x20x5, 8x30x5, 8x40x5). \n\nFigure 2: Performance of the perceptron with 1, 2 and 3 hidden layers (8x10x5, 8x10x10x5, 8x10x10x10x5). \n\nFigure 3: Performance of the perceptron for different SNRs of the training data (noise-free, -5 dB, 0 dB, 6 dB, 12 dB, 20 dB). \n\nFigure 4: Performance of FSCL with varying numbers of hidden units (8x10x5 through 8x50x5). \n\nFigure 5: Performance of the FSCL network for different SNRs of the training data (noise-free, -5 dB, 0 dB, 6 dB, 12 dB). \n\nFigure 6: Comparison of all four classifiers for the coherent data case. 
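The two decision-theoretic baselines compared in Figure 6 can be illustrated with a small sketch: under additive white Gaussian noise of equal variance in every component, and with the noise-free class signatures known, the maximum-likelihood rule reduces to choosing the class whose signature is nearest in Euclidean distance, while the nearest-neighbor rule measures distance to labeled training samples instead. The signatures and noise level below are random stand-ins, not measured aircraft data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical noise-free 8-dimensional signatures for five aircraft classes.
signatures = rng.normal(size=(5, 8))

def max_likelihood(x, signatures):
    """ML decision for equal-variance AWGN and known signatures:
    minimum Euclidean distance to a noise-free class signature."""
    return int(np.argmin(np.linalg.norm(signatures - x, axis=1)))

def nearest_neighbor(x, train_X, train_y):
    """1-NN decision: label of the closest labeled training sample."""
    return int(train_y[np.argmin(np.linalg.norm(train_X - x, axis=1))])

# 200 noisy training samples per class, as in the experiments above.
sigma = 0.3
train_X = np.vstack([signatures[c] + sigma * rng.normal(size=(200, 8))
                     for c in range(5)])
train_y = np.repeat(np.arange(5), 200)

x = signatures[3] + sigma * rng.normal(size=8)  # one noisy test observation
ml_label = max_likelihood(x, signatures)
nn_label = nearest_neighbor(x, train_X, train_y)
```

The key asymmetry the paper observes follows from this setup: the ML rule needs the noise distribution and the true signatures, while the 1-NN rule and the neural classifiers learn only from the labeled noisy samples.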
\n\nNONCOHERENT MEASUREMENTS \n\nFor the case of a noncoherent radar system model, the kth frequency component of the observation vector is given by: \n\na_k = sqrt((s_k^I + w_k^I)^2 + (s_k^Q + w_k^Q)^2),   (4) \n\nwhere, as before, s_k^I and s_k^Q are the in-phase and quadrature components of the backscatter signal, and w_k^I and w_k^Q are the in-phase and quadrature components of the additive white Gaussian noise. Hence, while the underlying noise process is additive Gaussian, the resultant distribution of the observation components is Rician for the noncoherent system model. \n\nFor the case of noncoherent measurements, the neural network classifier is presented with a four-dimensional observation vector whose components are the magnitudes of the noisy measurements at each of the four frequencies: \n\nX = [a_1, a_2, a_3, a_4]^T.   (5) \n\nAs in the coherent case, the neural net is typically trained with 200 samples for each of the five aircraft, using exemplars of the form discussed above. \n\nThe structure of the neural nets in this experiment was [4, n_1, ..., n_h, 5], and the same training and testing procedure as in the coherent case was followed. Figure 7 shows a comparison of the performance of the neural net classifiers with both the maximum-likelihood and nearest-neighbor classifiers. \n\nAs before, the maximum-likelihood classifier outperforms the other classifiers, with the nearest-neighbor classifier second in performance, and the neural network classifiers performing roughly the same. \n\nCONCLUSIONS \n\nThese experiments lead us to conclude that neural networks are good candidates for radar classification applications. 
Both of the neural network learning methods we tested have similar performance, and both are relatively insensitive to changes in network architecture and topology, and to the noise level of the training data. \n\nBecause the methods used to implement the neural network classifiers were relatively simple, we feel that the level of performance of the neural classifiers is quite impressive. Our ongoing research is concentrating on improving neural classifier performance by introducing more sophisticated learning algorithms such as the LVQ algorithm proposed by Kohonen [5]. We are also investigating methods of improving the performance of the perceptron, for example, by increasing training time. \n\nFigure 7: Comparison of all four classifiers for the noncoherent data case (FSCL 4x20x5, perceptron 4x20x5, maximum-likelihood, nearest-neighbor at 0 dB). \n\nReferences \n\n[1] B. Bhanu, \"Automatic target recognition: State of the art survey,\" IEEE Transactions on Aerospace and Electronic Systems, vol. AES-22, no. 4, pp. 364-379, July 1986. \n\n[2] R. P. Lippmann, \"An Introduction to Computing with Neural Nets,\" IEEE ASSP Magazine, vol. 4, no. 2, pp. 4-22, April 1987. \n\n[3] S. C. Ahalt, A. K. Krishnamurthy, P. Chen, and D. E. Melton, \"A new competitive learning algorithm for vector quantization using neural networks,\" Neural Networks, 1989 (submitted). \n\n[4] F. D. Garber, N. F. Chamberlain, and O. 
Snorrason, \"Time-domain and frequency-domain feature selection for reliable radar target identification,\" in Proceedings of the IEEE 1988 National Radar Conference, pp. 79-84, Ann Arbor, MI, April 20-21, 1988. \n\n[5] T. Kohonen, Self-Organization and Associative Memory, 2nd Ed. Berlin: Springer-Verlag, 1988. \n", "award": [], "sourceid": 146, "authors": [{"given_name": "Stanley", "family_name": "Ahalt", "institution": null}, {"given_name": "F.", "family_name": "Garber", "institution": null}, {"given_name": "I.", "family_name": "Jouny", "institution": null}, {"given_name": "Ashok", "family_name": "Krishnamurthy", "institution": null}]}