Abstract
Event-driven silicon retina imagers are useful for structural dynamics applications as they are low-power, high-bandwidth, and high-dynamic-range devices. Currently, event-driven imagers can detect light intensity changes associated with an LED blinking at 18 kHz, which indicates event-driven imagers are capable of capturing dynamic motion across the commonly accepted frequency range of human hearing, 20 Hz–20 kHz. As the development and utilization of event-driven imagers advances, it is reasonable to expect the upper bound of measurable frequency to increase. Therefore, event-driven imagers have the potential to be a viable replacement for high-speed imagers in the field of structural dynamics. The majority of statistical techniques currently developed for structural dynamics assume data is captured with a uniform sampling rate. This convention is problematic when using event-based imagers because event-based sensors generate send-on-delta data. The widespread use of these imagers will require the development of an acceptable technique for converting event data to time-series data, which can then be sampled uniformly. Another challenge with utilizing event-driven imagers is the inherent and significant variation exhibited between pixels on the same imager during the generation of an event. These variations need to be better understood and characterized. Investigative research has revealed that certain aspects of the two problems, data conversion and pixel variation, have been addressed by the neuroscience community. The spike-based data analysis techniques developed in the neuroscience community may be applicable to structural dynamics in the context of event-driven imagery. These practices are used as inspiration for developing event-based data-processing techniques that tie event-based data to existing structural dynamics analysis frameworks.
4.1 Introduction
Event-based imaging, utilizing silicon retinas, has been a developing field of study for approximately 35 years. The pioneers of this emergent imaging technology were Misha Mahowald and Carver Mead [1, 2]. For many years, the use of event-driven retina cameras was limited to experimental study and development in a relatively small number of research laboratories. However, event-based vision sensors are now commercially available for industrial use [3].
The use of silicon retina cameras/sensors in the field of structural dynamics is both promising and attractive because of the way data is captured and generated. These types of sensors generate data asynchronously with low latency, high dynamic range, high temporal resolution, high pixel bandwidth, and low power consumption [1]. Additionally, work completed at Los Alamos National Laboratory (LANL) comparing these devices with traditional high-speed video cameras has indicated the potential for silicon retina cameras in the field of structural dynamics [4]. Further, high-speed video cameras suffer from relatively slow synchronous frame rates and shutter speeds and are computationally inefficient compared to silicon retinas. When compared to other sensor technologies, such as strain gauges and accelerometers, event-based cameras are also attractive because they can provide noncontact vibrational analysis of a feature or structure [3, 5,6,7,8,9,10].
When paired with a neuromorphic processor, silicon retinas become the silicon analogue of the human eye, neural pathway, and the occipital lobe of the brain. Just as human vision operates with spiking neurons, event-based silicon retinas utilize a spiking neural network (SNN) and neuromorphic algorithms and computing to generate visual data. As is the case with the human eye and brain, event-driven imagers and spiking neural networks do not process and generate data synchronously based on time series. This presents a problem since time-series data is typically used for structural analysis [11,12,13].
Our overall project goal is to convert silicon retina data into time-series data and find the frequencies and directions of traveling waves using the Legendre memory unit and beamforming through spike-based computing. The Legendre memory unit is used to find the frequencies of the wave pool, and beamforming is used to find the direction.
We start with raw data of the motion of a wave pool, recorded through a silicon retina. Using neuromorphic spike-based processing and the Python library Nengo, we can use the Legendre memory unit to efficiently represent data and then implement beamforming to determine the frequencies and angles of traveling waves. The Legendre memory unit is used in performing two important integral transformations, definite integration and the short-time Fourier transform, on event-based imagery. We demonstrate a phased array running on neuromorphic hardware analyzing acoustic data, such as video of a water tank.
4.2 Background
The Legendre memory unit (LMU) is composed of a linear layer and a nonlinear layer. The linear layer is a linear time-invariant (LTI) system of the form:

\( \theta \dot{m}(t) = Am(t) + Bu(t) \)  (4.1)

and can optimally reconstruct a delayed input signal with an output matrix:

\( u\left(t-{\theta}^{\prime}\right) \approx Cm(t), \qquad C_i = {P}_i\left(\frac{\theta^{\prime }}{\theta}\right) \)

where:

\( A_{ij} = \left(2i+1\right)\begin{cases} -1 & i < j \\ {\left(-1\right)}^{i-j+1} & i \ge j \end{cases}, \qquad B_i = \left(2i+1\right){\left(-1\right)}^{i} \)

where i, j ∈ [0, d − 1] and \( {P}_i\left(\frac{\theta^{\prime }}{\theta}\right) \) are the shifted Legendre polynomials.
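As a concrete sketch (not the authors' code), the LMU state-space matrices above and the delay-reconstruction decoder can be built in a few lines of NumPy; the function names and the dimensionality d are our own illustrative choices:

```python
import numpy as np
from scipy.special import eval_sh_legendre  # shifted Legendre polynomials on [0, 1]

def lmu_matrices(d):
    # Linear layer of the LMU (Eq. 4.1): theta * m'(t) = A m(t) + B u(t)
    A = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    B = np.array([(2 * i + 1) * (-1.0) ** i for i in range(d)])
    return A, B

def delay_decoder(d, frac):
    # Output matrix C: u(t - frac * theta) ~= C @ m(t), with
    # C_i = P_i(theta'/theta) evaluated at theta'/theta = frac
    return np.array([eval_sh_legendre(i, frac) for i in range(d)])
```

Setting frac = 1 reads out the input delayed by the full window length θ, while frac = 0 reconstructs the current input.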
4.3 LMU Definite Integration and Fourier Transform
Voelker briefly mentions that integral transformations over the rolling window history are possible [14]. We provide examples of such transformations on the input signal by replacing the output matrix C with a transformed version of C. Two examples of this are definite integration and the Fourier transform. Definite integration can be thought of as a scaled partial summation of the original output:

\( \int_{t-\theta}^{t} u\left(\sigma \right)\, d\sigma = \theta \int_{0}^{1} u\left(t-\theta {\theta}^{\prime}\right)\, d{\theta}^{\prime} \approx \theta \sum_{i=0}^{d-1} m_i(t) \int_{0}^{1} {P}_i\left({\theta}^{\prime}\right)\, d{\theta}^{\prime} \)

where t is time, θ is the length of the rolling window, the substitution σ = t − θθ′ is used to change the variable of integration, and m(t) is the state of the LMU as detailed in Eq. (4.1).
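Following this idea, a decoder for the running integral can be sketched by integrating the shifted Legendre polynomials numerically (the helper name and the optional partial-window fraction a are our own illustrative choices):

```python
import numpy as np
from scipy.special import eval_sh_legendre
from scipy.integrate import quad

def integration_decoder(d, theta, a=1.0):
    # w_i = theta * int_0^a P_i(x) dx; replacing the output matrix C with w
    # makes the LMU read out the integral of u over the trailing window
    # [t - a*theta, t] instead of a delayed sample
    return np.array([theta * quad(lambda x: eval_sh_legendre(i, x), 0.0, a)[0]
                     for i in range(d)])
```

Over the full window (a = 1), every weight except the first vanishes by orthogonality of the Legendre polynomials, so the integral reduces to θ·m₀(t).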
The short-time Fourier transform of the window history is obtained similarly: the FFT (DFT) matrix is multiplied by the measurement matrix C, so the spectrum is read out directly from the LMU state.
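One way to realize this (an illustrative sketch, not the authors' implementation) is to stack delay decoders at evenly spaced lags across the window and left-multiply by the DFT matrix:

```python
import numpy as np
from scipy.special import eval_sh_legendre

def stft_decoder(d, n_lags):
    # Each row of C_delay reconstructs u at one of n_lags evenly spaced delays
    # across the window; multiplying by the DFT matrix F then yields the
    # spectrum of the window history directly from the state: X = F @ C_delay @ m(t)
    lags = np.linspace(0.0, 1.0, n_lags, endpoint=False)
    C_delay = np.array([[eval_sh_legendre(i, lag) for i in range(d)]
                        for lag in lags])          # shape (n_lags, d)
    F = np.fft.fft(np.eye(n_lags))                 # DFT matrix
    return F @ C_delay                             # complex, shape (n_lags, d)
```

Applying this fixed matrix to the evolving state m(t) gives a sliding-window spectrogram without ever rebuilding the time series explicitly.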
4.4 LMU Beamforming
After constructing the two integral transformations, we provide an application of these two data-processing techniques: an LMU beamformer. We used the "delay and sum" beamforming technique, since the LMU is particularly effective at delaying signals. We selected 12 vertically aligned, equidistantly spaced pixels to serve as virtual receivers. The signals were each delayed according to Eq. (4.8):

\( {\tau}_i = \frac{y_i \sin \theta}{c} \)  (4.8)
where \( y_i \) is the ith pixel's position, θ is the desired orientation of the beam, c is the speed of sound in pixels per second, and \( {\tau}_i \) is the delay imposed on the ith pixel's signal. The delay of each signal was implemented using 12 LMUs (Fig. 4.1).
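A plain NumPy sketch of the delay-and-sum arithmetic (using integer-sample shifts in place of LMUs, with hypothetical parameter names) looks like:

```python
import numpy as np

def delay_and_sum(signals, y, angle, c, fs):
    # signals: (n_pixels, n_samples) array of virtual-receiver time series
    # tau_i = y_i * sin(angle) / c is the per-pixel delay (Eq. 4.8); each
    # channel is advanced by tau_i and the channels are averaged, steering
    # the array toward `angle`
    taus = y * np.sin(angle) / c
    shifts = np.round(taus * fs).astype(int)
    out = np.zeros(signals.shape[1])
    for sig, s in zip(signals, shifts):
        out += np.roll(sig, -s)  # integer-sample approximation of the delay
    return out / len(signals)
```

In the paper the delays are realized with 12 LMUs rather than sample shifts; the shift-based version is only meant to convey the arithmetic.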
We want to perform spectral analysis on the traveling waves, so the outputs corresponding to different orientations of the beam are each passed through an LMU spectrogram. Since we are interested in the dominant traveling wave, we compute the power (squared magnitude) of the signal received for each orientation. The received power is plotted in Fig. 4.2 (left). The spectrogram of the maximal traveling wave is plotted in Fig. 4.2 (right).
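The dominant direction in Fig. 4.2 (left) can be found by sweeping the steering angle and recording the received power at each orientation; a sketch under the same shift-based assumptions as above:

```python
import numpy as np

def scan_angles(signals, y, c, fs, angles):
    # Steer a delay-and-sum beam over candidate angles and return the mean
    # squared beamformer output (received power) at each angle; the argmax
    # identifies the dominant traveling wave's direction
    powers = []
    for a in angles:
        shifts = np.round(y * np.sin(a) / c * fs).astype(int)
        out = sum(np.roll(sig, -s) for sig, s in zip(signals, shifts)) / len(y)
        powers.append(np.mean(out ** 2))
    return np.array(powers)
```

The selected beam's output would then feed the LMU spectrogram to produce Fig. 4.2 (right).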
4.5 Conclusion
We demonstrated the Legendre memory unit's ability to perform two important integral transformations, definite integration and the short-time Fourier transform, on event-based imagery, and we demonstrated a phased array running on neuromorphic hardware analyzing acoustic data, such as video of a water tank. For future work, a projected application is that silicon retina data and the Legendre memory unit can enable the development of a prototype neuromorphic computing framework suitable for online (real-time) data processing.
References
Gallego, G., et al.: Event-based vision: a survey. IEEE Trans. Pattern Anal. Mach. Intell. (2020). https://doi.org/10.1109/TPAMI.2020.3008413
Mahowald, M.: VLSI Analogs of Neuronal Visual Processing: A Synthesis of Form and Function. California Institute of Technology (1992)
Schmuker, M., Tayarani-Najaran, M.-H.: Event-based sensing and signal processing in the visual, auditory, and olfactory domain: a review. Front. Neural Circuits. 15, 610446 (2021, May). https://doi.org/10.3389/fncir.2021.610446
Silva, M., Figueiredo, E., Costa, J.C.W.A., Mascarenas, D.: Event-based phased array signal processing. Struct. Health Monit. (2019). https://doi.org/10.12783/shm2019/32250
Liao, F., Zhou, F., Chai, Y.: Neuromorphic vision sensors: principle, progress and perspectives. J. Semicond. 42(1), 013105 (2021, January). https://doi.org/10.1088/1674-4926/42/1/013105
Liu, S.-C., Rueckauer, B., Ceolini, E., Huber, A., Delbruck, T.: Event-driven sensing for efficient perception: vision and audition algorithms. IEEE Signal Process. Mag. 36(6), 29–37 (2019, November). https://doi.org/10.1109/MSP.2019.2928127
Dong, C.Z., Ye, X.W., Jin, T.: Identification of structural dynamic characteristics based on machine vision technology. Measurement. 126, 405–416 (2018, October). https://doi.org/10.1016/j.measurement.2017.09.043
Feng, D., Feng, M.Q.: Computer vision for SHM of civil infrastructure: from dynamic response measurement to damage detection – a review. Eng. Struct. 156, 105–117 (2018, February). https://doi.org/10.1016/j.engstruct.2017.11.018
Lai, Z., Alzugaray, I., Chli, M., Chatzi, E.: Full-field structural monitoring using event cameras and physics-informed sparse identification. Mech. Syst. Signal Process. 145, 106905 (2020, November). https://doi.org/10.1016/j.ymssp.2020.106905
Dorn, C., et al.: Efficient full-field vibration measurements and operational modal analysis using neuromorphic event-based imaging. J. Eng. Mech. 144(7), 04018054 (2018, July). https://doi.org/10.1061/(ASCE)EM.1943-7889.0001449
Roy, K., Jaiswal, A., Panda, P.: Towards spike-based machine intelligence with neuromorphic computing. Nature. 575(7784), 607–617 (2019, November). https://doi.org/10.1038/s41586-019-1677-2
Delbruck, T., Liu, S.-C.: Event-based silicon retinas and cochleas. In: Frontiers in Sensing, pp. 87–100. Springer (2012). https://doi.org/10.1007/978-3-211-99749-9_6
Indiveri, G., Horiuchi, T.K.: Frontiers in neuromorphic engineering. Front. Neurosci. 5 (2011). https://doi.org/10.3389/fnins.2011.00118
Voelker, A. R.: Dynamical systems in spiking neuromorphic hardware. UWSpace. (2019). http://hdl.handle.net/10012/14625
Acknowledgments
This research was funded by Los Alamos National Laboratory (LANL) through the Engineering Institute’s Los Alamos Dynamics Summer School. The Engineering Institute is a research and education collaboration between LANL and the University of California San Diego’s Jacobs School of Engineering. This collaboration seeks to promote multidisciplinary engineering research that develops and integrates advanced predictive modeling, novel sensing systems, and new developments in information technology to address LANL mission-relevant problems.
© 2023 The Society for Experimental Mechanics, Inc.
Zheng, K. et al. (2023). Neuromorphic Data Processing for Event-Driven Imagery for Acoustic Measurements. In: Di Maio, D., Baqersad, J. (eds) Rotating Machinery, Optical Methods & Scanning LDV Methods, Volume 6. Conference Proceedings of the Society for Experimental Mechanics Series. Springer, Cham. https://doi.org/10.1007/978-3-031-04098-6_4
Print ISBN: 978-3-031-04097-9
Online ISBN: 978-3-031-04098-6