Janet M. Roveda and Linda S. Powers*
Department of Electrical and Computer Engineering, University of Arizona, USA
Received date: July 27, 2015; Accepted date: July 29, 2015; Published date: July 31, 2015
Citation: Roveda JM, Powers LS (2015) Compressive Sensing: Real-time Data Acquisition and Analysis for Biosensors and Biomedical Instrumentation. Biosens J 4:e105. doi:10.4172/2090-4967.1000e105
Copyright: © 2015 Roveda JM, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
The rapid growth of sensors has required that CPUs and processors be incorporated into data acquisition systems across a wide range of applications, including communication, position tracking, health care monitoring, environmental sensing, and transportation. In addition, advances in nanometer electronic systems, compressive sensing (CS) based information processing [1-11], and stream computing technologies offer great potential for creating novel hardware/software platforms with fast data acquisition capability. Driven by these technology developments, it is possible to develop a high-speed “Adaptive Design for Information” (ADI) system that leverages the advantages of feature-based data compression, low-power nanometer CMOS technology, and stream computing for biomedical instrumentation.
Figure 1 is a diagram of a typical data acquisition system for biomedical instrumentation. It consists of a sensor with an analog mixed-signal front end and a stream processor. The performance of these two components differs greatly. Most analog front ends consume two-thirds of the total chip area. While their power consumption is in the μW to mW range, the ability of the analog front end to sample and process data is far slower than that of digital processors. For example, a typical 24-bit Texas Instruments A/D converter [12] is capable of 125 ksps (samples per second), which corresponds to a 3 Mbps (megabits per second) data rate. Compared with the 197 Gb/s throughput of a Cell processor SPU, the analog front end is several orders of magnitude slower. This means that, with current stream processor capability, we can consider real-time control of the analog front end to obtain USEFUL samples. The term “useful samples” refers to the most important information embedded in the samples. It is well known that most of today’s data acquisition systems discard a large amount of data right before it is transmitted; for example, JPEG compression is applied immediately before transmission to avoid lengthy communication times. In this article, we present a data/information-oriented architecture that allows us to reduce the amount of data from the very beginning.
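As a quick back-of-the-envelope check of this gap, the short Python sketch below multiplies the quoted ADC figures (125 ksps, 24 bits) and compares the result with the quoted 197 Gb/s SPU throughput; the snippet itself is purely illustrative.

# Back-of-the-envelope throughput comparison using the figures quoted above.
adc_sample_rate = 125_000                      # samples per second (24-bit TI ADC)
adc_bits = 24                                  # bits per sample
adc_rate_bps = adc_sample_rate * adc_bits      # = 3,000,000 bits/s = 3 Mbps

spu_rate_bps = 197e9                           # Cell processor SPU, 197 Gb/s

print(f"ADC output rate: {adc_rate_bps / 1e6:.0f} Mbps")
print(f"SPU/ADC ratio:   {spu_rate_bps / adc_rate_bps:,.0f}x")   # roughly 65,000x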
Figure 2 illustrates such a new architecture. Note that the data are reduced at the analog front end; thus, the stream processor receives only a fraction of the original data. With a reduced amount of data, less energy is consumed in data movement, driving long wires, accessing arrays, and control overhead within the stream processor. The key technology used here is compressive sensing.
To illustrate how compressive sensing can contribute to fast data acquisition, let us begin with a brief review of compressive sensing theory [1-11]. An example image can be represented as an N×M matrix X. By applying an m×N random selection matrix Φ, we can generate a new matrix Y:

Y = ΦX
Because m << N, we reduce the total amount of data. Unlike JPEG and other nonlinear compression algorithms, compressive sensing reduces data linearly and preserves key features without much distortion. This is the key reason why compressive sensing can be applied at the front end of a data acquisition system rather than right before data transmission. One such example is the “single pixel camera” [13]. The camera performs the random selection Φ on the sampled object, so less data is generated by the camera and subsequently enters the following data acquisition system. Depending on the sparsity of the sampled data, the average data reduction resulting from compressive sensing is about 50%. If we use joules per bit as the energy estimate, this indicates that this compression approach may lead to a total energy/power reduction for the data processing architecture. While compressive sensing offers great potential for biomedical instrumentation, it requires careful implementation. For example, it is still debated whether a single pixel camera can provide well-randomized samples.
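As a minimal illustration of the measurement step Y = ΦX (not the single pixel camera hardware itself), the Python/NumPy sketch below applies a random ±1 selection matrix to a sparse signal; the signal length, sparsity, and number of measurements are arbitrary values chosen for the example.

import numpy as np

rng = np.random.default_rng(0)
N, m, k = 1024, 256, 10            # signal length, measurements, sparsity (illustrative)

# A k-sparse signal x in R^N (only k nonzero entries).
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

# Random +/-1 selection matrix Phi of size m x N (m << N).
Phi = rng.choice([-1.0, 1.0], size=(m, N)) / np.sqrt(m)

# Compressive measurement: y has only m entries, yet (with high probability)
# it preserves enough information to recover the sparse x.
y = Phi @ x
print(f"original samples: {N}, compressed measurements: {m} ({100 * m / N:.0f}%)")

Recovering x from y is then a separate reconstruction problem, typically solved with l1 minimization or greedy methods; its cost is the computational price discussed later in this article.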
Over the past several years, much effort has been expended on analog-to-information conversion (AIC) [14-19], i.e., acquiring raw data at a low rate while accurately reconstructing the compressed signals. The key components under investigation have been analog-to-digital converters, random filtering, and demodulation. Hasler et al. [20-24] were the first to apply compressive sensing to pixel-array data acquisition systems. In the traditional data flow, the A/D converter is placed right after the pixel array; that is, the pixel data are digitized directly at the Nyquist sample rate. When a compressive sensing algorithm is applied, the A/D converter is placed after the random selection/demodulation stage, and the sample rate is significantly lower. Even though compressive sensing algorithms help reduce the sample rate of the A/D converter, this comes at a price: an analog front end is required to perform the randomized measurements, which in turn leads to large analog computing units at the front end. These components are cumbersome and slow. For example, an analog multiplier operates at 10 MHz with over 200 ns of setup time. While most elements in the front end use the 0.25 μm technology node, some rely on the 0.5 μm technology node (e.g., floating-gate technology to store the random selection coefficients).
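For readers unfamiliar with random demodulation, the following simplified Python sketch shows the basic idea behind such AIC front ends: the input is mixed with a pseudorandom ±1 chipping sequence, integrated over short windows, and sampled well below the Nyquist rate. The rates and block length are arbitrary illustrative choices, not parameters of any published design.

import numpy as np

rng = np.random.default_rng(1)
fs = 100_000                       # "fast" chipping rate (illustrative)
block = 50                         # integrate-and-dump window -> A/D rate of fs / block
t = np.arange(fs) / fs             # one second of input

# Sparse-in-frequency input: two tones.
x = np.sin(2 * np.pi * 3_000 * t) + 0.5 * np.sin(2 * np.pi * 11_000 * t)

# Random demodulation: mix with a pseudorandom +/-1 sequence at the fast rate,
# then integrate over each block and take one sample per block (sub-Nyquist A/D).
chips = rng.choice([-1.0, 1.0], size=x.size)
y = (x * chips).reshape(-1, block).sum(axis=1)

print(f"front-end rate: {fs} Hz, A/D sample rate: {fs // block} Hz")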
By using compressive sensing to reduce the sample rate of the A/D converter, it appears that we are moving away from the current technology trend (i.e., smaller-feature-size transistors to achieve higher speed and lower power). Instead, we rely heavily on analog designs and computations, which are difficult to scale. Little is known about how to build circuits that can create “good” measurement matrices. Here, “good” refers not only to effective selection matrices but also to circuit implementation costs such as power and space requirements. In addition, the high complexity of reconstruction algorithms demands high-performance computing capabilities. Our recent implementation [25] showed that it is possible to use a level-crossing sampling approach to replace Nyquist sampling. With a new in-memory design, the new compressive sensing based biomedical instrumentation performs digitization only when there is enough variation in the input and when the random selection matrix chooses that input. This implementation can also be applied to a much wider range of applications, including real-time applications such as telemedicine and remote monitoring. Additional work, such as that of Yoo et al. [26], also indicated that it is possible to integrate compressive sensing into the A/D converter.
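As a simplified illustration of level-crossing sampling (not the in-memory design of [25] itself), the Python sketch below emits a digitized sample only when the input has changed by more than a fixed threshold since the last emitted sample; the threshold and test signal are arbitrary.

import numpy as np

def level_crossing_sample(signal, delta):
    # Emit a sample only when the input moves by more than `delta`
    # since the last emitted sample.
    idx, vals = [0], [signal[0]]
    for i in range(1, len(signal)):
        if abs(signal[i] - vals[-1]) >= delta:
            idx.append(i)
            vals.append(signal[i])
    return np.array(idx), np.array(vals)

t = np.linspace(0.0, 1.0, 2_000)
x = np.sin(2 * np.pi * 3 * t)          # slowly varying test input (illustrative)

idx, vals = level_crossing_sample(x, delta=0.1)
print(f"uniform samples: {x.size}, level-crossing samples: {idx.size}")

Because slowly varying inputs cross few levels, such a scheme digitizes far fewer samples than uniform Nyquist sampling while still capturing the significant variations in the signal.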