{"title": "CMOL CrossNets: Possible Neuromorphic Nanoelectronic Circuits", "book": "Advances in Neural Information Processing Systems", "page_first": 755, "page_last": 762, "abstract": "", "full_text": "CMOL CrossNets: Possible Neuromorphic Nanoelectronic Circuits\n\nJung Hoon Lee, Xiaolong Ma, Konstantin K. Likharev\nStony Brook University\nStony Brook, NY 11794-3800\nklikharev@notes.cc.sunysb.edu\n\nAbstract\n\nHybrid \u201cCMOL\u201d integrated circuits, combining a CMOS subsystem with nanowire crossbars and simple two-terminal nanodevices, promise to extend the exponential Moore-Law development of microelectronics into the sub-10-nm range. We are developing neuromorphic network (\u201cCrossNet\u201d) architectures for this future technology, in which neural cell bodies are implemented in CMOS, nanowires are used as axons and dendrites, while nanodevices (bistable latching switches) are used as elementary synapses. We have shown how CrossNets may be trained to perform pattern recovery and classification despite the limitations imposed by the CMOL hardware. Preliminary estimates have shown that CMOL CrossNets may be extremely dense (~10^7 cells per cm^2) and operate approximately a million times faster than biological neural networks, at manageable power consumption. In conclusion, we discuss in brief possible short-term and long-term applications of the emerging technology.\n\n1  Introduction: CMOL Circuits\n\nRecent results [1, 2] indicate that the current VLSI paradigm based on CMOS technology can hardly be extended beyond the 10-nm frontier: in this range the sensitivity of parameters (most importantly, the gate voltage threshold) of silicon field-effect transistors to inevitable fabrication spreads grows exponentially. 
This sensitivity will probably send fabrication facility costs skyrocketing, and may lead to the end of Moore\u2019s Law some time during the next decade.\n\nThere is a growing consensus that the impending Moore\u2019s Law crisis may be preempted by a radical paradigm shift from the purely CMOS technology to hybrid CMOS/nanodevice circuits, e.g., those of the \u201cCMOL\u201d variety (Fig. 1). Such circuits (see, e.g., Ref. 3 for their recent review) would combine a level of advanced CMOS devices fabricated by lithographic patterning, and a two-layer nanowire crossbar formed, e.g., by nanoimprint, with nanowires connected by simple, similar, two-terminal nanodevices at each crosspoint. For such devices, molecular single-electron latching switches [4] are presently the leading candidates, in particular because they may be fabricated using the self-assembled monolayer (SAM) technique, which has already given reproducible results for simpler molecular devices [5].\n\nFig. 1. CMOL circuit: (a) schematic side view, and (b) top-view zoom-in on several adjacent interface pins. (For clarity, only two adjacent nanodevices are shown.)\n\nIn order to overcome the CMOS/nanodevice interface problems pertinent to earlier proposals of hybrid circuits [6], in CMOL the interface is provided by pins that are distributed all over the circuit area, on the top of the CMOS stack. This allows the use of advanced techniques of nanowire patterning (like nanoimprint) which do not provide nanoscale accuracy of layer alignment [3]. The vital feature of this interface is the tilt, by angle \u03b1 = arcsin(Fnano/\u03b2FCMOS), of the nanowire crossbar relative to the square arrays of interface pins (Fig. 
1b). Here Fnano is the nanowiring half-pitch, FCMOS is the half-pitch of the CMOS subsystem, and \u03b2 is a dimensionless factor larger than 1 that depends on the CMOS cell complexity. Figure 1b shows that this tilt allows the CMOS subsystem to address each nanodevice even if Fnano << \u03b2FCMOS.\n\nBy now, it has been shown that CMOL circuits can combine high performance with high defect tolerance (which is necessary for any circuit using nanodevices) for several digital applications. In particular, CMOL circuits with defect rates below a few percent would enable terabit-scale memories [7], while the performance of FPGA-like CMOL circuits may be several hundred times above that of purely CMOS FPGAs (implemented with the same FCMOS), at acceptable power dissipation and defect tolerance above 20% [8].\n\nIn addition, the very structure of CMOL circuits makes them uniquely suitable for the implementation of more complex, mixed-signal information processing systems, including ultradense and ultrafast neuromorphic networks. The objective of this paper is to describe in brief the current status of our work on the development of so-called Distributed Crossbar Networks (\u201cCrossNets\u201d) that could provide high performance despite the limitations imposed by CMOL hardware. A more detailed description of our earlier results may be found in Ref. 9.\n\n2  Synapses\n\nThe central device of CrossNet is a two-terminal latching switch [3, 4] (Fig. 2a), which is a combination of two single-electron devices, a transistor and a trap [3]. The device may be naturally implemented as a single organic molecule (Fig. 2b). Qualitatively, the device operates as follows: if the voltage V = Vj \u2013 Vk applied between the external electrodes (in CMOL, nanowires) is low, the trap island has no net electric charge, and the single-electron transistor is closed. 
If voltage V approaches a certain threshold value V+ > 0, an additional electron is inserted into the trap island, and its field lifts the Coulomb blockade of the single-electron transistor, thus connecting the nanowires. The switch state may be reset (i.e., the wires disconnected) by applying a lower voltage V < V- < V+.\n\nDue to the random character of single-electron tunneling [2], the quantitative description of the switch is by necessity probabilistic: actually, V determines only the rates \u0393\u2191\u2193 of device switching between its ON and OFF states. The rates, in turn, determine the dynamics of the probability p to have the transistor opened (i.e., the wires connected):\n\ndp/dt = \u0393\u2191(1 - p) - \u0393\u2193p.    (1)\n\nThe theory of single-electron tunneling [2] shows that, in a good approximation, the rates may be presented as\n\n\u0393\u2191\u2193 = \u03930 exp{\u00b1e(V - S)/kBT},    (2)\n\nwhere \u03930 and S are constants depending on the physical parameters of the latching switches. Note that despite the random character of switching, the strong nonlinearity of Eq. (2) allows one to limit the degree of the device \u201cfuzziness\u201d.\n\nFig. 2. (a) Schematics and (b) possible molecular implementation of the two-terminal single-electron latching switch.\n\n3  CrossNets\n\nFigure 3a shows the generic structure of a CrossNet. CMOS-implemented somatic cells (within the Fire Rate model, just nonlinear differential amplifiers, see Fig. 3b,c) apply their output voltages to \u201caxonic\u201d nanowires. 
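The probabilistic switching described by Eqs. (1) and (2) is easy to explore numerically. The sketch below (a minimal illustration in Python; the values of \u03930 and of the dimensionless drive e(V - S)/kBT are arbitrary assumptions, not fitted device data) integrates Eq. (1) by forward Euler and checks the result against the analytic steady state p = \u0393\u2191/(\u0393\u2191 + \u0393\u2193):

```python
import math

def rates(x, gamma0=1.0):
    """Switching rates of Eq. (2); x = e(V - S)/kBT is dimensionless."""
    g_up = gamma0 * math.exp(+x)   # ON-switching rate
    g_dn = gamma0 * math.exp(-x)   # OFF-switching rate
    return g_up, g_dn

def relax_probability(x, p0=0.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of Eq. (1): dp/dt = G_up*(1 - p) - G_dn*p."""
    g_up, g_dn = rates(x)
    p = p0
    for _ in range(steps):
        p += dt * (g_up * (1.0 - p) - g_dn * p)
    return p

x = 1.5                       # assumed drive e(V - S)/kBT
g_up, g_dn = rates(x)
p_inf = g_up / (g_up + g_dn)  # analytic steady state of Eq. (1)
p_num = relax_probability(x)
print(p_inf, p_num)           # the two values should agree closely
```

The steady state is a sigmoid of the drive, which illustrates how the strong nonlinearity of Eq. (2) keeps the device \u201cfuzziness\u201d under control.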
If the latching switch, working as an elementary synapse, on the crosspoint of an axonic wire with the perpendicular \u201cdendritic\u201d wire is open, some current flows into the latter wire, charging it. Since such currents are injected into each dendritic wire through several (many) open synapses, their addition provides a natural passive analog summation of signals from the corresponding somas, typical for all neural networks. Examining Fig. 3a, please note the open-circuit terminations of axonic and dendritic lines at the borders of the somatic cells; due to these terminations the somas do not communicate directly (but only via synapses).\n\nThe network shown in Fig. 3 is evidently feedforward; recurrent networks are achieved in the evident way by doubling the number of synapses and nanowires per somatic cell (Fig. 3c). Moreover, using a dual-rail (bipolar) representation of the signal, and hence doubling the number of nanowires and elementary synapses once again, one gets a CrossNet with somas coupled by compact 4-switch groups [9]. Using Eqs. (1) and (2), it is straightforward to show that the average synaptic weight wjk of the group obeys the \u201cquasi-Hebbian\u201d rule:\n\ndwjk/dt = - 4\u03930 sinh(\u03b3S) sinh(\u03b3Vj) sinh(\u03b3Vk).    (3)\n\nFig. 3. (a) Generic structure of the simplest (feedforward, non-Hebbian) CrossNet. Red lines show \u201caxonic\u201d, and blue lines \u201cdendritic\u201d nanowires. Gray squares are interfaces between nanowires and CMOS-based somas (b, c). Signs show the dendrite input polarities. 
Green circles denote molecular latching switches forming elementary synapses. Bold red and blue points are open-circuit terminations of the nanowires that do not allow somas to interact in bypass of the synapses.\n\nIn the simplest cases (e.g., quasi-Hopfield networks with finite connectivity), the tri-level synaptic weights of the generic CrossNets are quite satisfactory, leading to just a very modest (~30%) network capacity loss. However, some applications (in particular, pattern classification) may require a larger number of weight quantization levels L (e.g., L \u2248 30 for a 1% fidelity [9]). This may be achieved by using compact square arrays (e.g., 4\u00d74) of latching switches (Fig. 4).\n\nVarious species of CrossNets [9] differ also by the way the somatic cells are distributed around the synaptic field. Figure 5 shows feedforward versions of the two CrossNet types most explored so far: the so-called FlossBar and InBar. The former network is more natural for the implementation of multilayered perceptrons (MLP), while the latter system is preferable for recurrent network implementations and also allows a simpler CMOS design of somatic cells.\n\nThe most important advantage of CrossNets over the hardware neural networks suggested earlier is that these networks allow one to achieve enormous density combined with large cell connectivity M >> 1 in quasi-2D electronic circuits.\n\n4  CrossNet training\n\nCrossNet training faces several hardware-imposed challenges:\n\n(i) The synaptic weight contribution provided by the elementary latching switch is binary, so that for most applications the multi-switch synapses (Fig. 4) are necessary. 
\n\n(ii) The only way to adjust any particular synaptic weight is to turn ON or OFF the corresponding latching switch(es). This is only possible to do by applying a certain voltage V = Vj \u2013 Vk between the two corresponding nanowires. During this procedure, other nanodevices attached to the same wires should not be disturbed.\n\n(iii) As stated above, synapse state switching is a statistical process, so that the degree of its \u201cfuzziness\u201d should be carefully controlled.\n\nFig. 4. Composite synapse providing L = 2n^2+1 discrete levels of the weight in (a) operation and (b) weight adjustment modes. The dark-gray rectangles are resistive metallic strips at soma/nanowire interfaces.\n\nFig. 5. Two main CrossNet species: (a) FlossBar and (b) InBar, in the generic (feedforward, non-Hebbian, ternary-weight) case for the connectivity parameter M = 9. Only the nanowires and nanodevices coupling one cell (indicated with red dashed lines) to M post-synaptic cells (blue dashed lines) are shown; actually all the cells are similarly coupled.\n\nWe have shown that these challenges may be met using (at least) the following training methods [9]:\n\n(i) Synaptic weight import. This procedure is started with training of a homomorphic \u201cprecursor\u201d artificial neural network with continuous synaptic weights wjk, implemented in software, using one of the established methods (e.g., error backpropagation). Then the synaptic weights wjk are transferred to the CrossNet, with some \u201cclipping\u201d (rounding) due to the binary nature of the elementary synaptic weights. To accomplish the transfer, pairs of somatic cells are sequentially selected via CMOS-level wiring. 
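The \u201cclipping\u201d step can be sketched in a few lines (a minimal illustration in Python; the choice n = 4, giving L = 2n^2 + 1 = 33 levels as in the composite synapse of Fig. 4, and the normalization of the precursor weights to the interval [-1, 1] are assumptions made for this example):

```python
def clip_weight(w, n=4):
    """Round a continuous weight w in [-1, 1] to one of L = 2*n*n + 1
    discrete levels, i.e. to an integer number of ON switches in the
    range -n**2 ... +n**2 (cf. the composite synapse of Fig. 4)."""
    levels = n * n                       # n*n switches of each polarity
    k = round(max(-1.0, min(1.0, w)) * levels)
    return k / levels                    # quantized weight, back on [-1, 1]

# Example: importing software-trained weights; the rounding error of each
# weight is at most 1/(2*n*n), and out-of-range values are saturated.
precursor = [0.37, -0.92, 0.04, 1.3]
imported = [clip_weight(w) for w in precursor]
print(imported)
```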
Using the flexibility of the CMOS circuitry, these cells are reconfigured to apply external voltages \u00b1VW to the axonic and dendritic nanowires leading to a particular synapse, while all other nanowires are grounded. The voltage level VW is selected so that it does not switch the synapses attached to only one of the selected nanowires, while the voltage 2VW applied to the synapse at the crosspoint of the selected wires is sufficient for its reliable switching. (In the composite synapses with quasi-continuous weights (Fig. 4), only a part of the corresponding switches is turned ON or OFF.)\n\n(ii) Error backpropagation. The synaptic weight import procedure is straightforward when wjk may be simply calculated, e.g., for the Hopfield-type networks. However, for very large CrossNets used, e.g., as pattern classifiers, the precursor network training may take an impracticably long time. In this case the direct training of a CrossNet may become necessary. We have developed two methods of such training, both based on \u201cHebbian\u201d synapses consisting of 4 elementary synapses (latching switches) whose average weight dynamics obeys Eq. (3). This quasi-Hebbian rule may be used to implement the backpropagation algorithm either using periodic time-multiplexing [9] or in a continuous fashion, using the simultaneous propagation of signals and errors along the same dual-rail channels.\n\nAs a result, presently we may state that CrossNets may be taught to perform virtually all major functions demonstrated earlier with the usual neural networks, including corrupted pattern restoration in the recurrent quasi-Hopfield mode and pattern classification in the feedforward MLP mode [11].\n\n5  CrossNet performance estimates\n\nThe significance of this result may only be appreciated in the context of the unparalleled physical parameters of CMOL CrossNets. 
The only fundamental limitation on the half-pitch Fnano (Fig. 1) comes from quantum-mechanical tunneling between nanowires. If the wires are separated by vacuum, the corresponding specific leakage conductance becomes uncomfortably large (~10^-12 \u03a9^-1 m^-1) only at Fnano = 1.5 nm; however, since realistic insulation materials (SiO2, etc.) provide somewhat lower tunnel barriers, let us use a more conservative value Fnano = 3 nm. Note that this value corresponds to 10^12 elementary synapses per cm^2, so that for 4M = 10^4 and n = 4 the areal density of neural cells is close to 2\u00d710^7 cm^-2. Both numbers are higher than those for the human cerebral cortex, despite the fact that the quasi-2D CMOL circuits have to compete with the quasi-3D cerebral cortex.\n\nWith the typical specific capacitance of 3\u00d710^-10 F/m = 0.3 aF/nm, this gives a nanowire capacitance C0 \u2248 1 aF per working elementary synapse, because the corresponding segment has length 4Fnano. The CrossNet operation speed is determined mostly by the time constant \u03c40 of dendrite nanowire capacitance recharging through the resistances of open nanodevices. Since both the relevant conductance and capacitance increase similarly with M and n, \u03c40 \u2248 R0C0.\n\nThe possibilities of reducing R0, and hence \u03c40, are limited mostly by the acceptable power dissipation per unit area, which is close to Vs^2/(2Fnano)^2R0. For room-temperature operation, the voltage scale V0 \u2248 Vt should be of the order of at least 30 kBT/e \u2248 1 V to avoid thermally-induced errors [9]. With our number for Fnano, and a relatively high but acceptable power consumption of 100 W/cm^2, we get R0 \u2248 10^10 \u03a9 (which is a very realistic value for single-molecule single-electron devices like the one shown in Fig. 2). With this number, \u03c40 is as small as ~10 ns. 
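These estimates are simple enough to be recomputed directly. The sketch below (Python; it merely re-derives the quoted numbers from Vs = 1 V, Fnano = 3 nm, the 100 W/cm^2 power budget, and C0 \u2248 1 aF) recovers R0 from the power-dissipation relation and then \u03c40 \u2248 R0C0:

```python
F_nano = 3e-9        # nanowire half-pitch, m
V_s = 1.0            # signal voltage scale, V (about 30 kBT/e at 300 K)
P = 100.0 * 1e4      # power budget: 100 W/cm^2 expressed in W/m^2
C0 = 1e-18           # nanowire capacitance per working synapse, F (~1 aF)

# Power per unit area ~ Vs^2 / ((2*Fnano)^2 * R0)  =>  solve for R0
R0 = V_s**2 / ((2 * F_nano)**2 * P)
tau0 = R0 * C0       # dendrite recharging time constant, s

print(R0)    # of the order of the quoted 10^10 Ohm
print(tau0)  # of the order of the quoted ~10 ns
```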
This means that the CrossNet speed may be approximately six orders of magnitude (!) higher than that of biological neural networks. Even scaling R0 up by a factor of 100, to bring the power consumption to a more comfortable level of 1 W/cm^2, would still leave us with at least a four-orders-of-magnitude speed advantage.\n\n6  Discussion: Possible applications\n\nThese estimates make us believe that CMOL CrossNet chips may revolutionize neuromorphic network applications. Let us start with the example of relatively small (1-cm^2-scale) chips used for recognition of a face in a crowd [11]. The most difficult feature of such recognition is the search for the face location, i.e., the optimal placement of a face on the image relative to the panel providing input for the processing network. The enormous density and speed of CMOL hardware give a possibility to time-and-space multiplex this task (Fig. 6). In this approach, the full image (say, formed by CMOS photodetectors on the same chip) is divided into P rectangular panels of h\u00d7w pixels, corresponding to the expected size and approximate shape of a single face. A CMOS-implemented communication channel passes input data from each panel to the corresponding CMOL neural network, providing its shift in time, say using the TV scanning pattern (red line in Fig. 6). The standard methods of image classification require the network to have just a few hidden layers, so that the time interval \u0394t necessary for each mapping position may be so short that the total pattern recognition time T = hw\u0394t may be acceptable even for online face recognition.\n\nFig. 6. Scan mapping of the input image on CMOL CrossNet inputs. 
Red lines show the possible time sequence of image pixels sent to a certain input of the network processing the image from the upper-left panel of the pattern.\n\nIndeed, let us consider a 4-Megapixel image partitioned into 4K 32\u00d732-pixel panels (h = w = 32). Each panel will require an MLP net with several (say, four) layers of 1K cells each in order to compare the panel image with ~10^3 stored faces. With the feasible 4-nm nanowire half-pitch, and 65-level synapses (sufficient for better than 99% fidelity [9]), each interlayer crossbar would require a chip area of about (1K\u00d764 nm)^2 = 64\u00d764 \u03bcm^2, fitting 4\u00d74K of them on a ~0.6 cm^2 chip. (The CMOS somatic-layer and communication-system overheads are negligible.) With an acceptable power consumption of the order of 10 W/cm^2, the input-to-output signal propagation in such a network will take only about 50 ns, so that \u0394t may be of the order of 100 ns, and the total time T = hw\u0394t of processing one frame of the order of 100 microseconds, much shorter than the typical TV frame time of ~10 milliseconds. The remaining two-orders-of-magnitude time gap may be used, for example, for double-checking the results via stopping the scan mapping (Fig. 6) at the most promising position. (For this, a simple feedback from the recognition output to the mapping communication system is necessary.)\n\nIt is instructive to compare the estimated CMOL chip speed with that of the implementation of a similar parallel network ensemble on a CMOS signal processor (say, also combined on the same chip with an array of CMOS photodetectors). 
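The timing budget of this scan-mapping scheme can be checked with a few lines (Python; h = w = 32, \u0394t = 100 ns, and the ~10 ms TV frame time are the values assumed in the text):

```python
h = w = 32           # panel size, pixels
dt = 100e-9          # time per mapping position, s (assumed in the text)
T_frame_tv = 10e-3   # typical TV frame time, s

T = h * w * dt       # total recognition time per frame, T = h*w*dt
margin = T_frame_tv / T

print(T)       # ~1e-4 s, i.e. ~100 microseconds per frame
print(margin)  # ~1e2: the two-orders-of-magnitude gap mentioned above
```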
Even assuming an extremely high performance of 30 billion additions/multiplications per second, we would need ~4\u00d74K\u00d71K\u00d7(4K)^2/(30\u00d710^9) \u2248 10^4 seconds ~ 3 hours per frame, evidently incompatible with online image stream processing.\n\nLet us finish with a brief (and much more speculative) discussion of possible long-term prospects of CMOL CrossNets. Eventually, large-scale (~30\u00d730 cm^2) CMOL circuits may become available. According to the estimates given in the previous section, the integration scale of such a system (in terms of both neural cells and synapses) would be comparable with that of the human cerebral cortex. Equipped with a set of broadband sensor/actuator interfaces, such a (necessarily hierarchical) system may be capable, after a period of initial supervised training, of further self-training in the process of interaction with its environment, with a speed several orders of magnitude higher than that of its biological prototypes. Needless to say, the successful development of such self-developing systems would have a major impact not only on all information technologies, but also on society as a whole.\n\nAcknowledgments\n\nThis work has been supported in part by the AFOSR, MARCO (via the FENA Center), and the NSF. Valuable contributions made by Simon F\u00f6lling, \u00d6zg\u00fcr T\u00fcrel and Ibrahim Muckra, as well as useful discussions with P. Adams, J. Barhen, D. Hammerstrom, V. Protopopescu, T. Sejnowski, and D. Strukov, are gratefully acknowledged.\n\nReferences\n\n[1] Frank, D. J. et al. (2001) Device scaling limits of Si MOSFETs and their application dependencies. Proc. IEEE 89(3): 259-288.\n\n[2] Likharev, K. K. (2003) Electronics below 10 nm, in J. Greer et al. (eds.), Nano and Giga Challenges in Microelectronics, pp. 27-68. Amsterdam: Elsevier.\n\n[3] Likharev, K. K. and Strukov, D. B. 
(2005) CMOL: Devices, circuits, and architectures, in G. Cuniberti et al. (eds.), Introducing Molecular Electronics, Ch. 16. Berlin: Springer.\n\n[4] F\u00f6lling, S., T\u00fcrel, \u00d6. & Likharev, K. K. (2001) Single-electron latching switches as nanoscale synapses, in Proc. of the 2001 Int. Joint Conf. on Neural Networks, pp. 216-221. Mount Royal, NJ: Int. Neural Network Society.\n\n[5] Wang, W. et al. (2003) Mechanism of electron conduction in self-assembled alkanethiol monolayer devices. Phys. Rev. B 68(3): 035416 1-8.\n\n[6] Stan, M. et al. (2003) Molecular electronics: From devices and interconnect to circuits and architecture. Proc. IEEE 91(11): 1940-1957.\n\n[7] Strukov, D. B. & Likharev, K. K. (2005) Prospects for terabit-scale nanoelectronic memories. Nanotechnology 16(1): 137-148.\n\n[8] Strukov, D. B. & Likharev, K. K. (2005) CMOL FPGA: A reconfigurable architecture for hybrid digital circuits with two-terminal nanodevices. Nanotechnology 16(6): 888-900.\n\n[9] T\u00fcrel, \u00d6. et al. (2004) Neuromorphic architectures for nanoelectronic circuits. Int. J. of Circuit Theory and Appl. 32(5): 277-302.\n\n[10] See, e.g., Hertz, J. et al. (1991) Introduction to the Theory of Neural Computation. Cambridge, MA: Perseus.\n\n[11] Lee, J. H. & Likharev, K. K. (2005) CrossNets as pattern classifiers. Lecture Notes in Computer Science 3575: 434-441.", "award": [], "sourceid": 2955, "authors": [{"given_name": "Jung", "family_name": "Lee", "institution": null}, {"given_name": "Xiaolong", "family_name": "Ma", "institution": null}, {"given_name": "Konstantin", "family_name": "Likharev", "institution": null}]}