There is growing interest in using biological inspiration to improve the design of computational systems, particularly in sensory processing and pattern recognition, where biological systems far outperform the best man-made systems. In our analog circuit research, we consider the role of biologically inspired spike representations in a wide range of applications, including CMOS imager design, novel analog-to-digital converters, and neural recording circuitry for brain-machine interfaces. We also consider the implications of this work for our understanding of neurobiological systems.
Analog VLSI hardware for amplifying and compressing extracellular neural signals
An implantable extracellular neural recording system, such as that required for brain-machine interfaces (BMIs), must satisfy strict power, size, and wireless transmission bandwidth constraints. Current systems record hundreds of channels, with the desire to increase to thousands in the near future. Transmitting the raw signal for this many channels at the currently desired sampling rate of at least 25 kHz and resolution of at least 8 bits is impossible within these power, size, and bandwidth constraints. While compact, low-power subthreshold CMOS circuitry can reduce the size and power of a neural implant, the transmission bandwidth remains a problem. The Computational NeuroEngineering Laboratory at the University of Florida is exploring three approaches for compressing the neural signals. The first uses a novel encoding scheme with asynchronous biphasic pulses to transmit the raw voltages. The second extracts and transmits features from the spikes. The third and most drastic data-reduction method performs multi-scale spike detection and transmits only the timing of the spikes.
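To make the bandwidth problem concrete, and to illustrate the flavor of the third approach, the sketch below works through the raw data rate for 1,000 channels and shows a simple threshold spike detector that keeps only spike timestamps. The parameter values and the detector itself are illustrative assumptions, not the laboratory's actual implementation (which uses multi-scale detection):

```python
import numpy as np

# Raw-transmission data rate for the figures quoted above:
# 1000 channels x 25,000 samples/s x 8 bits/sample = 200 Mbit/s.
channels, fs, bits = 1000, 25_000, 8
raw_rate_mbps = channels * fs * bits / 1e6  # 200.0 Mbit/s

def detect_spike_times(signal, fs, threshold, refractory_s=1e-3):
    """Return spike times (s) where the signal crosses the threshold,
    enforcing a refractory period so each spike is reported once."""
    times = []
    last = -np.inf
    for i, v in enumerate(signal):
        t = i / fs
        if v > threshold and t - last >= refractory_s:
            times.append(t)
            last = t
    return times

# Synthetic 1 s trace: background noise plus two large spikes.
rng = np.random.default_rng(0)
sig = rng.normal(0, 0.05, fs)
sig[5000] = 1.0    # spike at t = 0.2 s
sig[12500] = 1.0   # spike at t = 0.5 s

spikes = detect_spike_times(sig, fs, threshold=0.5)  # [0.2, 0.5]
```

Transmitting two timestamps instead of 25,000 samples per second per channel is what makes the third approach the most drastic reduction of the three.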
Time-based computer architectures and algorithms for
We are studying methods for scaling analog VLSI computation to deep submicron CMOS and nanoelectronic technology.
A Spike-Based Computer Architecture for Sensory Processing
We and many other researchers have found important technological advantages in coding signal amplitude into the asynchronous timing of events (spike trains) instead of periodic sampling. These advantages center on ultra-low-power, noise-resilient analog VLSI implementations that produce the sparse time signals required for remote sensing and sensor networks, biomedical applications (brain-machine interfaces), biometric applications, and, very likely, the coming generation of molecular computers. However, there is an almost complete void in the theory and properties of timing representations of signals, as well as in how to perform computation directly in the spike-train domain. The long-term goal of our work is to develop a general-purpose computational substrate, driven by both biological inspiration and the constraints of CMOS implementations. Spike trains are ubiquitous in neural tissue, so evolution has found this solution better than others for processing information in time under some (still unclear) constraints. The design of artificial computational substrates in silicon or molecular electronics, however, requires the identification of principles, the design of signal transformations, and the fabrication of substrates with huge numbers of sufficiently small components. While it is clear that the brain uses spike timing in its basic building blocks, it is much less clear how it organizes these primitives into powerful larger-scale systems.
The proposed architecture for our computational substrate is based on a compelling model of brain computation called the Liquid State Machine (LSM), recently proposed by Wolfgang Maass. This model provides a conceptual framework for working with biologically realistic pulsed neuron models (integrate-and-fire neurons) as the basic computational element within a recurrent nonlinear architecture, where the connections can be set randomly and the weights are fixed. The dynamics of this network are non-convergent, i.e. the state of the system evolves in time without point attractors.
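A minimal sketch of the LSM idea follows: a pool of leaky integrate-and-fire neurons with fixed, randomly set recurrent weights is driven by an input spike train, and its evolving state (the spike raster) is what a separate trainable readout would observe. All parameter values here are hypothetical choices for illustration, not the model actually used in this work:

```python
import numpy as np

rng = np.random.default_rng(42)

# Reservoir ("liquid") of N leaky integrate-and-fire neurons.
N, dt = 100, 1e-3                    # neurons, time step (s)
tau, v_thresh, v_reset = 0.02, 1.0, 0.0
W = rng.normal(0, 0.3, (N, N))       # fixed random recurrent weights
W_in = rng.normal(0, 1.0, N)         # fixed random input weights

def run_reservoir(input_spikes, steps):
    """Simulate the reservoir; return the spike raster (steps x N)."""
    v = np.zeros(N)
    spikes = np.zeros(N)
    raster = np.zeros((steps, N))
    for t in range(steps):
        v = v * (1 - dt / tau)                      # membrane leak
        v += W @ spikes + W_in * input_spikes[t]    # synaptic kicks
        spikes = (v >= v_thresh).astype(float)      # threshold crossing
        v[spikes > 0] = v_reset                     # reset fired neurons
        raster[t] = spikes
    return raster

# Drive the liquid with a random (Poisson-like) input spike train.
steps = 500
inp = (rng.random(steps) < 0.2).astype(float)
raster = run_reservoir(inp, steps)
```

Note that nothing inside the reservoir is trained: the weights stay fixed and the state never settles into a point attractor, which is exactly the non-convergent behavior described above. In the full LSM framework, only a simple readout layer mapping the raster to the desired output is learned.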