{"title": "A Bio-inspired Redundant Sensing Architecture", "book": "Advances in Neural Information Processing Systems", "page_first": 2379, "page_last": 2387, "abstract": "Sensing is the process of deriving signals from the environment that allows artificial systems to interact with the physical world. The Shannon theorem specifies the maximum rate at which information can be acquired. However, this upper bound is hard to achieve in many man-made systems. The biological visual systems, on the other hand, have highly efficient signal representation and processing mechanisms that allow precise sensing. In this work, we argue that redundancy is one of the critical characteristics for such superior performance. We show architectural advantages by utilizing redundant sensing, including correction of mismatch error and significant precision enhancement. For a proof-of-concept demonstration, we have designed a heuristic-based analog-to-digital converter - a zero-dimensional quantizer. Through Monte Carlo simulation with the error probabilistic distribution as a priori, the performance approaching the Shannon limit is feasible. In actual measurements without knowing the error distribution, we observe at least 2-bit extra precision. The results may also help explain biological processes including the dominance of binocular vision, the functional roles of the fixational eye movements, and the structural mechanisms allowing hyperacuity.", "full_text": "A Bio-inspired Redundant Sensing Architecture\n\nAnh Tuan Nguyen, Jian Xu and Zhi Yang\u2217\n\nDepartment of Biomedical Engineering\n\nUniversity of Minnesota\nMinneapolis, MN 55455\n\u2217yang5029@umn.edu\n\nAbstract\n\nSensing is the process of deriving signals from the environment that allows arti\ufb01-\ncial systems to interact with the physical world. The Shannon theorem speci\ufb01es\nthe maximum rate at which information can be acquired [1]. However, this up-\nper bound is hard to achieve in many man-made systems. 
The biological visual systems, on the other hand, have highly efficient signal representation and processing mechanisms that allow precise sensing. In this work, we argue that redundancy is one of the critical characteristics for such superior performance. We show architectural advantages by utilizing redundant sensing, including correction of mismatch error and significant precision enhancement. For a proof-of-concept demonstration, we have designed a heuristic-based analog-to-digital converter - a zero-dimensional quantizer. Through Monte Carlo simulation with the error probability distribution known a priori, performance approaching the Shannon limit is feasible. In actual measurements without knowing the error distribution, we observe at least 2-bit extra precision. The results may also help explain biological processes including the dominance of binocular vision, the functional roles of the fixational eye movements, and the structural mechanisms allowing hyperacuity.\n\n1 Introduction\n\nVisual systems have perfected the art of sensing through billions of years of evolution. As an example, with roughly 100 million photoreceptors absorbing light and 1.5 million retinal ganglion cells transmitting information [2, 3, 4], a human can see images in three-dimensional space with great detail and unparalleled resolution. Anatomical studies determine the spatial density of the photoreceptors on the retina, which limits the peak foveal angular resolution to 20-30 arcseconds according to Shannon theory [1, 2]. There are also other imperfections due to the nonuniform distribution of the cells' shape, size, location, and sensitivity that further constrain the precision. However, experimental data have shown that humans can achieve an angular separation close to 1 arcminute in a two-point acuity test [5]. 
In certain conditions, it is even possible to detect an angular misalignment of only 2-5 arcseconds [6], seemingly surpassing this physical barrier. This ability, known as hyperacuity, has baffled scientists for decades: what kind of mechanism allows humans to perceive an undistorted image with such a blunt instrument?\nAmong the approaches to explain this astonishing feat of human vision, redundant sensing is a promising candidate. It is well known that redundancy is an important characteristic of many biological systems, from DNA coding to neural networks [7]. Previous studies [8, 9] suggest there is a connection between hyperacuity and binocular vision - the ability to see images using two eyes with overlapping fields of vision. Also known as stereopsis, it presents a passive form of redundant sensing. In addition to the obvious advantage of seeing objects in three-dimensional space, binocular vision has been proven to increase visual dynamic range, contrast, and signal-to-noise ratio [10]. It is evident that seeing with two eyes enables us to sense a higher level of information\n\n30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.\n\nFigure 1: Illustration of n-dimensional quantizers without (ideal) and with mismatch error. (a) Two-dimensional quantizers for image sensing. (b) Zero-dimensional quantizers for analog-to-digital data conversion.\n\nas well as to correct many intrinsic errors and imperfections. Furthermore, the eyes continuously and involuntarily engage in a complex micro-fixational movement known as microsaccade, which suggests an active form of redundant sensing [11]. During microsaccade, the image projected on the retina is shifted across a few photoreceptors in a pseudo-random manner. 
Empirical studies [12] and computational models [13] suggest that the redundancy created by these micro-movements allows efficient sampling of spatial information that can surpass the static diffraction limitation.\nBoth biological and artificial systems encounter similar challenges in achieving precise sensing in the presence of physical imperfections. One of these is mismatch error. At a high resolution, even a small degree of mismatch error can degrade the performance of many man-made sensors [14, 15]. For example, it is not uncommon for a 24-bit analog-to-digital converter (ADC) to have only 18-20 bits of effective resolution [16]. Inspired by the human visual system, we explore a new computational framework to remedy mismatch error based on the principle of redundant sensing. The proposed mechanism resembles the visual systems' binocular architecture and is designed to increase the precision of a zero-dimensional data quantization process. By assuming the error probability distribution is known a priori, we show that precise data conversion approaching the Shannon limit can be accomplished.\nAs a proof-of-concept demonstration, we have designed and validated a high-resolution ADC integrated circuit. The device utilizes a heuristic approach that allows unsupervised estimation and calibration of mismatch error. Simulation and measurement results have demonstrated the efficacy of the proposed technique, which can increase the effective resolution by 2-5 bits and the linearity by 4-6 times without penalties in chip area and power consumption.\n\n2 Mismatch Error\n\n2.1 Quantization & Shannon Limit\n\nData quantization is the partition of a continuous n-dimensional vector space into M subspaces, Δ_0, ..., Δ_{M-1}, called quantization regions, as illustrated in Figure 1. 
For example, an eye is a two-dimensional biological quantizer while an ADC is a zero-dimensional artificial quantizer, where the partition occurs in the spatial, temporal, and scalar domains. Each quantization region is assigned a representative value, d_0, ..., d_{M-1}, which uniquely encodes the quantized information. While the representative values are well-defined in the abstract domain, the actual partition often depends on the physical properties of the quantization device and has a limited degree of freedom for adjustment.\nAn optimal data conversion is achieved with a set of uniformly distributed quantization regions. In practice, this is difficult to achieve due to physical constraints in the partition process. For example, individual pixel cells can deviate from the ideal morphology, location, and sensitivity. These relative differences, referred to as mismatch error, contribute to the data conversion error.\nIn this paper, we consider a zero-dimensional (scalar) quantizer, which is the mathematical equivalent of an ADC device. An N-bit quantizer divides the continuous conversion full-range (FR = [0, 2^N]) into 2^N quantization regions, Δ_0, ..., Δ_{2^N-1}, with nominal unity length E(|Δ_i|) = Δ = 1 least-significant-bit (LSB).\n\nFigure 2: (a) Degeneration of entropy, i.e. maximum effective resolution, due to mismatch error versus the quantizer's intrinsic resolution. (b) The proportion of data conversion error measured by the mismatch-to-quantization ratio (MQR). With a conventional architecture, mismatch error is the dominant source, especially in the high-resolution domain. The proposed method allows suppressing mismatch error below quantization noise and approaching the Shannon limit.\n\nThe quantization regions are defined by a set of discrete references1, SR = {θ_0, ..., θ_{2^N}}, where 0 = θ_0 < θ_1 < ... < θ_{2^N} = 2^N. 
An input signal x is assigned the digital code d(x) = i ∈ SD = {0, 1, 2, ..., 2^N - 1} if it falls into the region Δ_i, defined by\n\nx ← d(x) = i ⇔ x ∈ Δ_i ⇔ θ_i ≤ x < θ_{i+1}.    (1)\n\nThe Shannon entropy of an N-bit quantizer [17, 18] quantifies the maximum amount of information that can be acquired by the data conversion process\n\nH = -log2 √(12 · M),    (2)\n\nwhere M is the normalized total mean square error integrated over each digital code\n\nM = (1/2^{3N}) ∫_0^{2^N} [x - d(x) - 1/2]^2 dx = (1/2^{3N}) Σ_{i=0}^{2^N-1} ∫_{θ_i}^{θ_{i+1}} (x - i - 1/2)^2 dx.    (3)\n\nIn this work, we consider both quantization noise and mismatch error. The Shannon limit is generally referred to as the maximum rate at which information can be acquired without any mismatch error, where θ_i = i, ∀i, or SR\\{2^N} = SD; M is equal to the total quantization noise Q = 2^{-2N}/12, and the entropy is equal to the quantizer's intrinsic resolution, H = N. The differences between SR\\{2^N} and SD are caused by mismatch error and result in the degeneration of entropy. Figure 2(a) shows the entropy, i.e. maximum effective resolution, versus the quantizer's intrinsic resolution with fixed mismatch ratios σ0 = 1% and σ0 = 10%. Figure 2(b) describes the proportion of error contributed by each source, as measured by the mismatch-to-quantization ratio (MQR)\n\nMQR = (M - Q) / Q.    (4)\n\nIt is evident that at a high resolution, mismatch error is the dominant source of data conversion error. The Shannon theory implies that mismatch error is a fundamental problem relating to the physical distribution of the reference set. 
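As a concrete illustration of Equations (1)-(4), the entropy and MQR of a given reference set can be evaluated numerically. The following is our own minimal numpy sketch (the names `quantize` and `entropy_and_mqr` are illustrative, not from the paper); the per-region integral in Equation (3) is evaluated in closed form.

```python
import numpy as np

def quantize(x, theta):
    """Map x to code i when theta[i] <= x < theta[i+1] (Equation 1)."""
    return np.clip(np.searchsorted(theta, x, side="right") - 1, 0, len(theta) - 2)

def entropy_and_mqr(theta, N):
    """Shannon entropy (Eqs. 2-3) and MQR (Eq. 4) for reference set theta.
    theta has 2^N + 1 entries, theta[0] = 0 and theta[2^N] = 2^N."""
    i = np.arange(len(theta) - 1)
    # Antiderivative of (x - i - 1/2)^2 is (x - i - 1/2)^3 / 3.
    F = lambda x: (x - i - 0.5) ** 3 / 3.0
    M = np.sum(F(theta[1:]) - F(theta[:-1])) / 2 ** (3 * N)
    H = -np.log2(np.sqrt(12.0 * M))
    Q = 2.0 ** (-2 * N) / 12.0          # total quantization noise
    return H, (M - Q) / Q

# Ideal 4-bit quantizer: theta_i = i, so H equals the intrinsic resolution
# and the MQR vanishes.
theta = np.arange(2 ** 4 + 1, dtype=float)
H, mqr = entropy_and_mqr(theta, 4)
```

For the ideal reference set each region contributes 1/12 to the un-normalized error, recovering M = Q and H = N exactly.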
Post-conversion calibration methods have been proposed [19, 20], but they are ineffective at removing mismatch error because they do not alter the reference set itself. A standard workaround is to use larger components, which have better matching characteristics; however, this incurs penalties in cost and power consumption. As a rule of thumb, a 1-bit increase in resolution requires a four-fold increase in resources [14]. To further advance system performance, a design solution that is robust to mismatch error must be realized.\n\n1 θ_{2^N} = 2^N is a dummy reference to define the conversion full-range.\n\nFigure 3: Simulated distribution of mismatch error in terms of (a) expected absolute error |PE(i)| and (b) expected differential error PD(i) in a 16-bit quantizer with 10% mismatch ratio. (c, d) Optimal mismatch error distribution in the proposed strategy. At the maximum redundancy 16 · (15, 1), mismatch error becomes negligible.\n\n2.2 Mismatch Error Model\n\nFor artificial systems, binary coding is popularly used to encode the reference set. It involves partitioning the array of unit cells into a set of binary-weighted components SC, and assembling different components in SC to form the needed references. The precision of the data conversion is related to the precise matching of these unit cells, which can take the form of comparators, capacitors, resistors, transistors, etc. Due to fabrication variations, undesirable parasitics, and environmental interference, each unit cell follows a probabilistic distribution, which is the basis of mismatch error.\nWe consider the situation where the distribution of mismatch error is known a priori. Each unit cell, cu, is assumed to be normally distributed with mismatch ratio σ0: cu ∼ N(1, σ0^2). 
SC is then a collection of binary-weighted components c_i, each consisting of 2^i independent and identically distributed unit cells\n\nSC = {c_i | c_i ∼ N(2^i, 2^i σ0^2)},  ∀i ∈ [0, N - 1].    (5)\n\nEach reference θ_i is associated with a unique assembly X_i of the components2\n\nSR\\{2^N} = {θ_i = ((2^N - 1) / Σ_{j=0}^{N-1} c_j) · Σ_{c_k ∈ X_i} c_k | X_i ∈ P(SC)},  ∀i ∈ [0, 2^N - 1],    (6)\n\nwhere P(SC) is the power set of SC. Binary coding allows the shortest data length to encode the references: N control signals are required to generate the 2^N elements of SR. However, because each reference is bijectively associated with an assembly of components, it is not possible to rectify the mismatch error arising from the random distribution of the components' weights without physically altering the components themselves.\nThe error density function, defined as PE(i) = θ_i - i, quantifies the mismatch error at each digital code. Figure 3(a) shows the distribution of |PE(i)| at 10% mismatch ratio through Monte Carlo simulations, where noticeably larger error is associated with middle-range codes.\n\n2 The dummy reference θ_{2^N} = 2^N is exempted. Other references are normalized over the total weight to define the conversion full-range of FR = [0, 2^N].\n\nFigure 4: Associating and exchanging the information between individual pixels in the same field of vision generates an exponential number of combinations and allows efficient spatial data acquisition beyond physical constraints. Inspired by this process, we propose a redundant sensing strategy that involves blending components between two imperfect sets to gain extra precision.\n\n
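Equations (5)-(6) can be rendered directly in numpy. The sketch below is our own illustration (function names are made up): it draws one mismatched binary-weighted component set and builds the normalized reference set from the binary assemblies.

```python
import numpy as np

def draw_component_set(N, sigma0, rng):
    """Eq. (5): binary-weighted components c_i ~ N(2^i, 2^i * sigma0^2)."""
    nominal = 2.0 ** np.arange(N)
    return rng.normal(nominal, sigma0 * np.sqrt(nominal))

def reference_set(c, N):
    """Eq. (6): theta_i from the binary assembly X_i of code i, normalized
    over the total weight so the full range stays [0, 2^N]."""
    codes = np.arange(2 ** N)
    # bit j of code i selects component c_j into the assembly X_i
    bits = (codes[:, None] >> np.arange(N)[None, :]) & 1
    return (2 ** N - 1) * (bits @ c) / c.sum()

rng = np.random.default_rng(0)
c = draw_component_set(8, 0.1, rng)   # 8-bit set, 10% mismatch ratio
theta = reference_set(c, 8)           # theta[i] approximates i
```

With `sigma0 = 0` the references are exactly `0, 1, ..., 2^N - 1`; with mismatch, the deviations `theta[i] - i` realize the error density PE(i).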
In fact, it can be shown that if the unit cells are independent and identically distributed, PE(i) approximately follows a normal distribution:\n\nPE(i) = θ_i - i ∼ N(0, Σ_{j=0}^{N-1} 2^{j-1} |D_j - i/(2^N - 1)| σ0^2),  i ∈ [0, 2^N - 1],    (7)\n\nwhere i = D_{N-1}...D_1D_0 (D_j ∈ {0, 1}, ∀j) is the binary representation of i.\nAnother drawback of binary coding is that it can create differential "gaps" between the references. Figure 3(b) presents the estimated distribution of the differential gap PD(i) = θ_{i+1} - θ_i at 10% mismatch ratio. When a gap exceeds two unit-lengths, signals that should be mapped to two or more codes collapse into a single code, resulting in a loss of information. This phenomenon is commonly known as a wide code, a situation unrecoverable by any post-conversion calibration method. Also, wide gaps tend to appear at two adjacent codes that have a large Hamming distance, e.g. 01111 and 10000. Consequently, the amount of information loss can be signal-dependent and amplified in certain parts of the data conversion range.\n\n3 Proposed Strategy\n\nThe proposed general strategy is to incorporate redundancy into the quantization process such that one reference θ_i can be generated by a large number of distinct component assemblies X_i, each of which yields a different amount of mismatch. Among the numerous options that lead to the same goal, the optimal reference set is the collection of assemblies with the least mismatch error over every digital code.\nFurthermore, we propose that such a redundant characteristic can be achieved by resembling the visual systems' binocular structure. It involves a secondary component set that has overlapping weights with the primary component set. 
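Both the mid-code concentration of error and the wide-code phenomenon can be reproduced with a small Monte Carlo experiment. This is our own sketch (`mc_error_stats` is an illustrative name); the reference construction follows Equations (5)-(6).

```python
import numpy as np

def mc_error_stats(N=10, sigma0=0.1, trials=50, seed=1):
    """Monte Carlo estimate of the expected absolute error |PE(i)| and of
    the rate of wide codes (differential gaps PD(i) wider than 2 LSB)."""
    rng = np.random.default_rng(seed)
    codes = np.arange(2 ** N)
    bits = (codes[:, None] >> np.arange(N)[None, :]) & 1
    nominal = 2.0 ** np.arange(N)
    abs_err = np.zeros(2 ** N)
    wide = 0
    for _ in range(trials):
        c = rng.normal(nominal, sigma0 * np.sqrt(nominal))   # Eq. (5)
        theta = (2 ** N - 1) * (bits @ c) / c.sum()          # Eq. (6)
        abs_err += np.abs(theta - codes)
        wide += np.sum(np.diff(theta) > 2)   # gaps exceeding two unit-lengths
    return abs_err / trials, wide / trials

err, wide_rate = mc_error_stats()
```

Plotting `err` against the codes reproduces the shape of Figure 3(a): codes near mid-range, whose binary representations flip the MSB, carry the largest expected error.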
By exchanging components with similar weights between the two sets, a large number of redundant component assemblies can be realized. We hypothesize that a similar mechanism may be employed in the brain, allowing information to be associated between individual pixels in the same field of vision of each eye, as illustrated in Figure 4. Because such association creates an exponential number of combinations, even a small percentage of the 100 million photoreceptors and 1.5 million retinal ganglion cells being "interchangeable" could result in a significant degree of redundancy.\nThe design of the primary and secondary component sets, SC,0 and SC,1, specifies the level and distribution of redundancy. Specifically, SC,1 is derived by subtracting from the conventional binary-weighted set SC, while the remainders form the primary component set SC,0. The total nominal weight remains unchanged, as Σ_{c_{i,j} ∈ (SC,0 ∪ SC,1)} c_{i,j} = 2^{N0} - 1, where N0 is the resolution of the quantizer as well as of the primary component set. It is worth mentioning that mismatch error is mostly contributed by the most-significant-bit (MSB) components rather than the least-significant-bit (LSB) components, as implied by Equation (5).\n\nFigure 5: The distribution of the number of assemblies NA(i) with different geometrical identities in (a) the 2-component-set design and (b) the 3-component-set design. A higher assembly count, i.e., a larger level of redundancy, is allocated to digital codes with larger mismatch error.\n\n
Subsequently, to optimize the level and distribution of redundancy, the secondary set should advantageously consist of binary-weighted components that are derived from the MSB. SC,0 and SC,1 can be described as follows\n\nPrimary: SC,0 = {c_{0,i} | c_{0,i} = 2^i if i < N0 - N1, c_{0,i} = 2^i - c_{1,i-N0+N1} otherwise, ∀i ∈ [0, N0 - 1]},\nSecondary: SC,1 = {c_{1,i} | c_{1,i} = 2^{N0-N1+i-s1}, ∀i ∈ [0, N1 - 1]},    (8)\n\nwhere N1 is the resolution of SC,1 and s1 is a scaling factor satisfying 1 ≤ N1 ≤ N0 - 1 and 1 ≤ s1 ≤ N0 - N1. Different values of N1 and s1 result in different degrees and distributions of redundancy. Any design within this framework can be represented by its unique geometrical identity: N0 · (N1, s1). The total number of component assemblies is |P(SC,0 ∪ SC,1)| = 2^{N0+N1}, which is much greater than the cardinality of the reference set, |SR| = 2^{N0}, thus implying a high level of intrinsic redundancy.\nNA(i) is defined as the number of assemblies that represent the same reference θ_i and is an essential indicator that specifies the redundancy distribution\n\nNA(i) = |{X | X ∈ P(SC,0 ∪ SC,1) ∧ Σ_{c_{j,k} ∈ X} c_{j,k} = i}|,  i ∈ [0, 2^{N0} - 1].    (9)\n\nFigure 5(a) shows NA(i) versus digital codes with N0 = 8 and multiple combinations of (N1, s1). The design of SC,1 should generate more options for middle-range codes, which suffer from larger mismatch error. Simulations suggest that N1 decides the total number of assemblies, Σ_{i=0}^{2^{N0}-1} NA(i) = |P(SC,0 ∪ SC,1)| = 2^{N0+N1}, while s1 defines the morphology of the redundancy distribution: a larger value of s1 gives a more spread-out distribution.\nRemoving mismatch error is equivalent to searching for the optimal component assembly X_{op,i} that generates the reference θ_i with the least amount of mismatch\n\nX_{op,i} = argmin_{X ∈ P(SC,0 ∪ SC,1)} |i - Σ_{c_{j,k} ∈ X} c_{j,k}|,  i ∈ [0, 2^{N0} - 1].    (10)\n\nThe optimal reference set SR,op is then the collection of all references generated by X_{op,i}. In this work, we do not attempt to find X_{op,i}, as it is an NP optimization problem with complexity O(2^{N0+N1}) that may not have a solution in polynomial space. Instead, this section focuses on showing the achievable precision with the proposed architecture, while Section 4 describes a heuristic approach. The simulation results in Figure 2(b) demonstrate that our technique can suppress mismatch error below quantization noise, thus approaching the Shannon limit even at high resolution and a large mismatch ratio. In this simulation, the secondary set is chosen with N1 = N0 - 1 for maximum redundancy. Figure 3(c, d) shows the distribution of mismatch error after correction. Even at the minimum redundancy (N1 = 1), a significant degree of mismatch is rectified. At the maximum redundancy (N1 = N0 - 1), the mismatch error becomes negligible compared with quantization noise.\nBased on the same principles, an n-component-set design (n = 3, 4, ...) can be realized, which gives an increased level of redundancy and a more complex distribution, as shown in Figure 5(b), where n = 3 and the geometrical identity is N0 · (N1, s1) · (N2, s2). With different combinations of Nk and sk (k = 1, 2, ...), NA(i) can be catered to a known mismatch error distribution and yield better performance. However, adding more component sets increases the computational burden, as the complexity grows rapidly with every additional set: O(2^{N0+N1+N2+...}). 
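For toy sizes, Equations (8) and (10) can be checked by brute force. The sketch below is our own (function names are illustrative); for simplicity it works with raw assembly weights and skips the full-range normalization of Equation (6), which is an assumption of this sketch rather than the paper's procedure.

```python
import numpy as np

def split_sets(N0, N1, s1):
    """Eq. (8): nominal weights for the geometrical identity N0 · (N1, s1)."""
    c1 = 2.0 ** (N0 - N1 + np.arange(N1) - s1)   # secondary set
    c0 = 2.0 ** np.arange(N0)                    # start from the binary set...
    c0[N0 - N1:] -= c1                           # ...and subtract the overlap
    return c0, c1

def optimal_references(c_all, N0):
    """Eq. (10) by exhaustive search over P(SC,0 ∪ SC,1): O(2^(N0+N1)),
    tractable only for small designs."""
    n = len(c_all)
    subsets = (np.arange(2 ** n)[:, None] >> np.arange(n)[None, :]) & 1
    weights = subsets @ c_all
    codes = np.arange(2 ** N0)
    # for each code i, pick the assembly whose weight is closest to i
    best = np.abs(weights[None, :] - codes[:, None]).argmin(axis=1)
    return weights[best]

c0, c1 = split_sets(6, 5, 1)   # 6 · (5, 1), maximum redundancy
theta_op = optimal_references(np.concatenate([c0, c1]), 6)
```

With nominal (mismatch-free) weights every code is hit exactly; perturbing `c0` and `c1` with the unit-cell noise of Equation (5) shows the residual error of the optimal assemblies staying far below that of the binary-only encoding.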
Given that mismatch error can be well rectified with a two-set implementation over a wide range of resolutions, n > 2 might be unnecessary.\nSimilarly, three or more eyes might give better vision. However, the brain circuits and control network would have to become much more complicated to integrate the signals and information. In fact, stereopsis is an advanced feature of humans and animals with well-developed neural capacity [7]. Despite possessing two eyes, many reptiles, fishes, and mammals have their eyes located on opposite sides of the head, which limits the overlapping region and thus stereopsis, in exchange for a wider field of vision. Certain arachnid species, such as spiders, possess from six to eight eyes; however, studies have pointed out that their eyes do not function in synchrony to resolve fine-resolution details [21]. It is not a coincidence that at least 30% of the human brain cortex is directly or indirectly involved in processing visual data [7]. We conjecture that this computational limitation is a major reason why many higher-order animals have evolved to have two eyes, thus keeping the cyclops and the triclops in the realm of mythology: no fewer eyes, as that would sacrifice visual processing precision, yet no more, as that would overload the brain's circuit complexity.\n\n4 Practical Implementation & Results\n\nA mixed-signal ADC integrated circuit has been designed and fabricated to demonstrate the feasibility of the proposed architecture. The nature of the hardware implementation limits the deployment of sophisticated learning algorithms. Instead, the circuit relies on a heuristic approach to efficiently estimate the mismatch error and adaptively reconfigure its components in an unsupervised manner. The detailed hardware algorithm and circuit implementation are presented separately. 
In this paper, we only briefly summarize the techniques and results.\nThe ADC design is based on the successive-approximation register (SAR) architecture and features redundant sensing with the geometrical identity 14 · (13, 1). The component set SC is a binary-weighted capacitor array. We have chosen the smallest capacitance available in the CMOS process to implement the unit cell in order to reduce circuit power and area. However, this introduces large capacitor mismatch ratios of up to 5%, which limits the effective resolution to 10 bits or below in previous works reported in the literature [14, 19, 20].\nThe resolution of the secondary array is chosen as N1 = N0 - 1 to maximize the exchange capacity between the two component sets\n\nc_{0,i} = c_{1,i-1} = (1/2) c_{0,i+1},  i ∈ [1, N0 - 2].    (11)\n\nIn the auto-calibration mode, the mismatch error of each component is estimated by comparing the capacitors with similar nominal values implied by Equation (11). The procedure is unsupervised and fully automatic. The result is a reduced-dimensional set of parameters that characterizes the distribution of mismatch error. In the data conversion mode, a heuristic algorithm is employed that utilizes the estimated parameters to generate the component assembly with near-minimal mismatch error for each reference. A key technique is to shift the capacitor utilization towards the MSB by exchanging components with similar weights, then to compensate the leftover error using the LSB. Although the algorithm has a complexity of O(N0 + N1), parallel implementation allows the computation to finish within a single clock cycle.\nBy assuming the LSB components contribute an insignificant level of mismatch error, as implied by Equation (5), this heuristic approach trades accuracy for speed. However, the excessive amount of redundancy guarantees the convergence to an adequate near-optimal solution.\n\nFigure 6: High-resolution ADC implementation. (a) Monte Carlo simulations of the unsupervised error estimation and calibration technique. (b) The chip micrograph. (c) Differential nonlinearity (DNL) and (d) integral nonlinearity (INL) measurement results.\n\nFigure 6(a) shows simulated plots of the effective number of bits (ENOB) versus the unit-capacitor mismatch ratio, σ0(Cu). With the proposed method, the effective resolution is shown to approach the Shannon limit even with large mismatch ratios. It is worth mentioning that we also take the mismatch error associated with the bridge capacitor, σ0(Cb), into consideration. Figure 6(b) shows the chip micrograph. Figure 6(c, d) gives the measurement results for standard ADC performance metrics in terms of differential nonlinearity (DNL) and integral nonlinearity (INL). The results demonstrate that a 4-6-fold increase in linearity is feasible.\n\n5 Conclusion\n\nThis work presents a redundant sensing architecture inspired by the binocular structure of the human visual system. We show the architectural advantages of using redundant sensing in removing mismatch error and enhancing sensing precision. A high-resolution, zero-dimensional data quantizer is presented as a proof-of-concept demonstration. Through Monte Carlo simulation with the error probability distribution known a priori, we find that the precision can approach the Shannon limit. In actual measurements without knowing the error probability distribution, a gain of an extra 2 bits of precision and a 4-6-fold improvement in linearity is observed. We envision that the framework can be generalized to handle higher-dimensional data and applied to a variety of applications such as digital imaging, functional magnetic resonance imaging (fMRI), 3D data acquisition, etc. 
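The paper's hardware algorithm is presented separately; as a loose software analogy of the MSB-first exchange-and-compensate idea, a greedy assembly search can be sketched as follows. This is our own illustrative code, not the authors' algorithm, and `greedy_assembly` is a made-up name.

```python
import numpy as np

def greedy_assembly(target, components):
    """Greedy MSB-first sketch: walk the (estimated) component weights from
    largest to smallest, keeping a component whenever the running total stays
    within half an LSB of the target, so the LSBs absorb the leftover error.
    Complexity is linear in the number of components, i.e. O(N0 + N1)."""
    order = np.argsort(components)[::-1]       # largest weight first
    total, chosen = 0.0, []
    for k in order:
        if total + components[k] <= target + 0.5:
            total += components[k]
            chosen.append(int(k))
    return total, chosen

# Nominal weights of the toy two-set design 6 · (5, 1): SC,0 ∪ SC,1.
weights = np.array([1.0, 1, 2, 4, 8, 16, 1, 2, 4, 8, 16])
approx, picked = greedy_assembly(21, weights)
```

With estimated (mismatched) weights in place of the nominal ones, the same loop picks a near-optimal assembly per reference, trading the exhaustive search of Equation (10) for linear-time selection.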
Moreover, engineering such bio-inspired artificial systems may help us better understand biological processes including stereopsis, microsaccade, and hyperacuity.\n\nAcknowledgment\n\nThe authors would like to thank Phan Minh Nguyen for his valuable comments.\n\nReferences\n\n[1] Shannon, C.E. (1948) A Mathematical Theory of Communication. Bell System Technical Journal, vol. 27(3), pp. 379-423.\n\n[2] Curcio, C.A., Sloan, K.R., Kalina, R.E., Hendrickson, A.E. (1990) Human photoreceptor topography. Journal of Comparative Neurology, vol. 292(4), pp. 497-523.\n\n[3] Curcio, C.A., Allen, K.A. (1990) Topography of ganglion cells in human retina. Journal of Comparative Neurology, vol. 300(1), pp. 5-25.\n\n[4] Read, J.C. (2015) What is stereoscopic vision good for? Proc. SPIE 9391, Stereoscopic Displays and Applications XXVI, pp. 93910N.\n\n[5] Westheimer, G. (1977) Spatial frequency and light-spread descriptions of visual acuity and hyperacuity. Journal of the Optical Society of America, vol. 67(2), pp. 207-212.\n\n[6] Beck, J., Schwartz, T. (1979) Vernier acuity with dot test objects. Vision Research, vol. 19(3), pp. 313-319.\n\n[7] Reece, J.B., Urry, L.A., Cain, M.L., Wasserman, S.A., Minorsky, P.V., Jackson, R.B., Campbell, N.A. (2010) Campbell Biology, 9th Ed. Boston: Benjamin Cummings/Pearson.\n\n[8] Westheimer, G., McKee, S.P. (1978) Stereoscopic acuity for moving retinal images. Journal of the Optical Society of America, vol. 68(4), pp. 450-455.\n\n[9] Crick, F.H., Marr, D.C., Poggio, T. (1980) An information processing approach to understanding the visual cortex. The Organization of the Cerebral Cortex, MIT Press, pp. 505-533.\n\n[10] Cagenello, R., Arditi, A., Halpern, D.L. (1993) Binocular enhancement of visual acuity. Journal of the Optical Society of America A, vol. 10(8), pp. 1841-1848.\n\n[11] Martinez-Conde, S., Otero-Millan, J., Macknik, S.L. 
(2013) The impact of microsaccades on vision: towards a unified theory of saccadic function. Nature Reviews Neuroscience, vol. 14(2), pp. 83-96.\n\n[12] Hicheur, H., Zozor, S., Campagne, A., Chauvin, A. (2013) Microsaccades are modulated by both attentional demands of a visual discrimination task and background noise. Journal of Vision, vol. 13(13), pp. 18-18.\n\n[13] Hennig, M.H., Wörgötter, F. (2004) Eye micro-movements improve stimulus detection beyond the Nyquist limit in the peripheral retina. Advances in Neural Information Processing Systems.\n\n[14] Murmann, B. (2008) A/D converter trends: Power dissipation, scaling and digitally assisted architectures. Custom Integrated Circuits Conference, 2008. CICC 2008. IEEE, pp. 105-112.\n\n[15] Nguyen, A.T., Xu, J., Yang, Z. (2015) A 14-bit 0.17 mm2 SAR ADC in 0.13µm CMOS for high precision nerve recording. Custom Integrated Circuits Conference (CICC), 2015 IEEE, pp. 1-4.\n\n[16] Analog Devices (2016) 24-Bit Delta-Sigma ADC with Low Noise PGA. AD1555/1556 datasheet.\n\n[17] Frey, M., Loeliger, H.A. (2007) On the static resolution of digitally corrected analog-to-digital and digital-to-analog converters with low-precision components. Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 54(1), pp. 229-237.\n\n[18] Biveroni, J., Loeliger, H.A. (2008) On sequential analog-to-digital conversion with low-precision components. Information Theory and Applications Workshop, 2008. IEEE, pp. 185-187.\n\n[19] Um, J.Y., Kim, Y.J., Song, E.W., Sim, J.Y., Park, H.J. (2013) A digital-domain calibration of split-capacitor DAC for a differential SAR ADC without additional analog circuits. Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. 60(11), pp. 2845-2856.\n\n[20] Xu, R., Liu, B., Yuan, J. (2012) Digitally calibrated 768-kS/s 10-b minimum-size SAR ADC array with dithering. Solid-State Circuits, IEEE Journal of, vol. 47(9), pp. 
2129-2140.\n\n[21] Land, M.F. (1985) The morphology and optics of spider eyes. Neurobiology of Arachnids, pp. 53-78, Springer Berlin Heidelberg.\n", "award": [], "sourceid": 1240, "authors": [{"given_name": "Anh Tuan", "family_name": "Nguyen", "institution": "University of Minnesota"}, {"given_name": "Jian", "family_name": "Xu", "institution": "University of Minnesota"}, {"given_name": "Zhi", "family_name": "Yang", "institution": "University of Minnesota"}]}