TinyVers: A Tiny Versatile System-on-chip with State-Retentive eMRAM for ML Inference at the Extreme Edge

Vikram Jain, Sebastian Giraldo, Jaro De Roose, Linyan Mei, Bert Boons, and Marian Verhelst

Abstract—Extreme edge devices or Internet-of-Things nodes require both ultra-low power always-on processing as well as the ability to do on-demand sampling and processing. Moreover, support for IoT applications like voice recognition, machine monitoring, etc., requires the ability to execute a wide range of ML workloads. This brings challenges in hardware design to build flexible processors operating in the ultra-low power regime. This paper presents TinyVers, a tiny versatile ultra-low power ML system-on-chip to enable enhanced intelligence at the Extreme Edge. TinyVers exploits dataflow reconfiguration to enable multi-modal support and aggressive on-chip power management for duty-cycling to enable smart sensing applications. The SoC combines a RISC-V host processor, a 17 TOPS/W dataflow reconfigurable ML accelerator, a 1.7 µW deep sleep wake-up controller, and an eMRAM for boot code and ML parameter retention. The SoC can perform up to 17.6 GOPS while achieving a power consumption range from 1.7 µW to 20 mW. Multiple ML workloads aimed at diverse applications are mapped on the SoC to showcase its flexibility and efficiency. All the models achieve 1-2 TOPS/W of energy efficiency with power consumption below 230 µW in continuous operation. In a duty-cycling use case for machine monitoring, this power is reduced to below 10 µW.

Index Terms—Extreme edge, tinyML, machine learning accelerators, ultra-low power, system-on-chip.
I. INTRODUCTION

Extreme edge devices [1] or Internet-of-Things (IoT) nodes mostly perform non-vision tasks and can achieve good accuracy, even with small and lightweight neural network (NN) models [2]. This is in contrast to more traditional tasks designed for processing image data, which contain millions to billions of parameters and operations with high hardware resource demands. Consider the Google voice assistant as an example, which needs only 14 kilobytes (kB) of NN parameters to run a keyword-spotting application on edge devices [3]. The insight that not all applications require maximum accuracy or large and complex NN models has resulted in a new paradigm of ML application development, called tinyML or ML at the extreme edge [4]. This trend, at its core, has been driven by the requirements imposed by battery-operated, performance- and power-constrained IoT nodes.

Most IoT sensor nodes consist of a microcontroller unit (MCU) with a subset of sensors, a memory for storing acquired data, a CPU and a wireless data transceiver. The presence of these MCUs for data collection provides opportunities to process data very close to the sensor when the NN model is small, and avoids the high penalty of raw data transmission to more powerful edge or cloud units. Yet, this local ML processing brings several new challenges: 1) as these nodes are battery-operated, the system is typically severely power- or energy-constrained, requiring ultra-low power operation with the ability to idle; 2) the MCU, moreover, has limited compute power and memory space, resulting in a critical trade-off between model size, execution performance and hardware complexity; 3) despite the need for efficiency, the system should also be flexible enough to support different classes of NN models across different applications; and 4) it should have a small footprint. Several hardware designs for ML have been proposed in the recent literature and can be divided into three main categories: 1) extremely specialized edgeML accelerators designed for ultra-low power operation with little to no flexibility at low performance [5]–[8]; 2) multi-modal edgeML accelerators providing a medium level of flexibility with high performance at medium to high power consumption [9]–[13]; and 3) commercial-off-the-shelf (COTS) MCUs delivering higher flexibility but at low performance and medium power consumption [14]–[16].

V. Jain, L. Mei, and M. Verhelst are with the Department of Electrical Engineering - MICAS, KU Leuven, Belgium. S. Giraldo was with the Department of Electrical Engineering - MICAS, KU Leuven, Belgium. He is now with B12 Consulting, Belgium. J. De Roose and B. Boons were with the Department of Electrical Engineering - MICAS, KU Leuven, Belgium. They are now with Magics Technologies, Belgium. © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Most of these hardware designs do not meet all the requirements of an extreme edge device. An exception is Vega [17], which presents a complete SoC; however, the specialized accelerator of Vega does not have the flexibility to handle all DNN workloads. Thus, a new class of flexible ultra-low power (ULP) platforms towards extreme edge deployment is needed.

In this context, this work presents TinyVers [18], a highly adaptive SoC platform which significantly enhances the trade-off between energy efficiency and flexibility needed in extreme edge devices, through the use of: A) a RISC-V processor extended with a flexible ML accelerator (FlexML) with dataflow reconfiguration supporting diverse ML workloads and support for efficient zero-skipping in block-structured sparsity and deconvolution; B) an embedded magnetoresistive random access memory (eMRAM) for non-volatile storage enabling standalone operation with efficient power-down (or idling); and C) a programmable wake-up controller (WuC) supporting different power-on and idle modes to enable both always-on inference as well as on-demand and duty-cycled smart sensing and computation used in typical tinyML IoT applications. The SoC provides users flexibility not only in mapping diverse ML workloads for diverse tinyML applications, but also in supporting various use cases such as duty-cycling and smart sensing. We demonstrate TinyVers' capabilities and improvements over the state-of-the-art (SotA) on diverse applications in machine monitoring, anomaly detection, audio signal analysis, and image classification through the use of both deep learning as well as traditional ML workloads.

arXiv:2301.03537v1 [cs.AR] 9 Jan 2023
The rest of the paper is organized as follows. The basics of ML compute kernels are introduced in Section II. Section III discusses the architecture overview of TinyVers, followed by Section IV providing further details of the FlexML accelerator. Section V details the software stack for ML deployment on TinyVers. Subsequently, Section VI presents the experimental results of mapping different workloads and application use cases. Finally, Section VII compares TinyVers' performance with related works and Section VIII concludes the paper.

II. ALGORITHMIC BACKGROUND

ML applications heavily exploit deep neural networks (DNN) with traditional convolutional (CNN) and fully connected (FC) layers. However, a plethora of new NN layer topologies are emerging.
Some examples of these are the use of temporal convolutional networks (TCN) in audio tasks like keyword spotting [19]–[21], or auto-encoders (AE) using convolution and deconvolution pairs in machine monitoring and anomaly detection tasks [22]–[24]. Moreover, machine learning models not relying on neural network layers are also still used in extreme edge IoT nodes, such as support vector machines (SVM) [25] used in novelty and anomaly detection applications. The execution efficiency of all these workloads can be improved by orders of magnitude when deployed on specialized accelerators. Yet, the wide variety in the compute kernels of interest complicates their efficient mapping on a single hardware platform. The following subsections deal with the different ML operation characteristics, their categorization into mathematical operations, and their hardware implications.

A. Convolution and Dense Operation

Convolutional and dense layers are the most common compute kernels used in DNNs, and they can be decomposed into matrix-matrix multiplication (MMM) and matrix-vector multiplication (MVM), respectively. These two matrix operations can be represented mathematically as nested for loops, as shown in Fig. 1. Most ML compute kernels can be categorized into one of these two mathematical operations, with some special layers requiring extra hardware changes. One such kernel is the TCN layer, which can be represented as a 1D CNN and requires extra support for programmable dilation, which is similar to strides in a convolution. Recurrent neural networks (RNN) like long short-term memory (LSTM) and gated recurrent unit (GRU) can be decomposed to MVM with the need for extra hardware for activation functions. These hardware changes are discussed further in Section IV.
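To make the TCN point concrete, the sketch below shows how a dilated 1D convolution only changes the address step between filter taps, much like a stride. This is an illustrative reconstruction, not the paper's implementation; the function name and layout are assumptions.

```python
def dilated_conv1d(x, w, dilation=1):
    """1D convolution with dilation: tap j reads x[t + j*dilation].

    A TCN layer reduces to a 1D convolution whose taps are spaced
    `dilation` samples apart -- the address step plays the same role
    as a stride, so the same MMM datapath can serve both.
    """
    taps = len(w)
    span = (taps - 1) * dilation + 1           # receptive field of one output
    out_len = len(x) - span + 1
    return [sum(w[j] * x[t + j * dilation] for j in range(taps))
            for t in range(out_len)]

x = [1, 2, 3, 4, 5, 6]
w = [1, 1]
print(dilated_conv1d(x, w, dilation=1))  # adjacent pairs: [3, 5, 7, 9, 11]
print(dilated_conv1d(x, w, dilation=2))  # taps two apart: [4, 6, 8, 10]
```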
[Fig. 1 graphic: the convolution operation (input feature map C x IY x IX, weights K x C x FY x FX, output feature map K x OY x OX) reduced to MMM, the dense operation reduced to MVM, the models mapping to each (TCN, CNN, GAN, AE to MMM; LSTM, FC, SVM to MVM), and a PE array annotated with Spatial Unrolling X, Spatial Unrolling Y, and Temporal Unrolling. The loop nests shown are:]

for(y=0 to Y-1);    // for each output row
 for(x=0 to X/N-1);   // for each output column
  for(k=0 to K/N-1);  // for each output channel
   for(c=0 to C-1);   // for each input channel
    for(fy=0 to Fy-1); // for each filter row
     for(fx=0 to Fx-1); // for each filter column
      o[k][x][y] += i[c][x+fx][y+fy]*w[k][c][fx][fy]

for(k=0 to K/N-1);   // for each output channel
 for(c=0 to C/N-1);   // for each input channel
  o[k] += i[c]*w[k][c]

Fig. 1. Different ML models and their mathematical representation in terms of MMM and MVM. The nested for loop representation can be mapped onto specialized accelerators through spatial and temporal unrolling.

When mapping MMMs and MVMs on specialized hardware accelerators, the nested for loops can be unrolled spatially and temporally, which is called a dataflow in the literature [26]. On a 2D processing element (PE) array, two for loops can be spatially unrolled, i.e., the loops can be parallelized along the X and Y dimensions, as shown in Fig. 1. In the rest of the paper, this spatial unrolling is represented as (Spatial Unrolling X)|(Spatial Unrolling Y).
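The loop nest of Fig. 1 can be sketched directly in code; as a minimal illustration (function names and tensor layout are assumptions, not the paper's), the version below also reorders the k and c loops innermost, the two loops a 2D PE array would execute spatially, to show that the unrolling choice changes only the schedule, never the result.

```python
def conv_mmm(i, w, K, C, Y, X, Fy, Fx):
    """The convolution loop nest of Fig. 1 (unit stride, no padding)."""
    o = [[[0] * X for _ in range(Y)] for _ in range(K)]
    for y in range(Y):                        # output row
        for x in range(X):                    # output column
            for k in range(K):                # output channel
                for c in range(C):            # input channel
                    for fy in range(Fy):      # filter row
                        for fx in range(Fx):  # filter column
                            o[k][y][x] += i[c][y + fy][x + fx] * w[k][c][fy][fx]
    return o

def conv_unrolled(i, w, K, C, Y, X, Fy, Fx):
    """Same arithmetic with k and c innermost -- the two loops a PE array
    would unroll spatially (a K|C dataflow); only the schedule differs."""
    o = [[[0] * X for _ in range(Y)] for _ in range(K)]
    for y in range(Y):
        for x in range(X):
            for fy in range(Fy):
                for fx in range(Fx):
                    for k in range(K):        # spatial dimension X of the array
                        for c in range(C):    # spatial dimension Y of the array
                            o[k][y][x] += i[c][y + fy][x + fx] * w[k][c][fy][fx]
    return o

# 2 input channels of 3x3, 2 output channels with 2x2 filters -> 2x2 outputs
i = [[[c + r + s for s in range(3)] for r in range(3)] for c in range(2)]
w = [[[[1, 0], [0, 1]] for _ in range(2)] for _ in range(2)]
assert conv_mmm(i, w, 2, 2, 2, 2, 2, 2) == conv_unrolled(i, w, 2, 2, 2, 2, 2, 2)
```

Because addition is commutative, any legal reordering of the loop nest is a valid dataflow; the hardware question is only which two loops are worth parallelizing for a given layer shape.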
The remaining for loops are temporally unrolled, i.e., executed sequentially. Depending on the available parallelism and available re-usability, the spatial unrolling (X and Y) needs to be configurable to efficiently map all workloads, as detailed in Section IV-B.

B. Deconvolution

Autoencoders used in many machine monitoring applications consist of an encoder and a decoder pair, which tries to reconstruct the input data. After training on normal data, a reconstruction error signals an anomaly in the test data. Deconvolutions, or transposed convolutions, are used in these autoencoders and are built by combining the convolution and upsampling into a single operation.
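A minimal 1D sketch of this combination, assuming the common zero-insertion formulation (padding conventions vary between frameworks, and the function names here are illustrative): upsampling inserts zeros between input samples, and a plain convolution then runs over the result. The inserted zeros are exactly the multiplications a zero-skipping datapath can drop.

```python
def upsample_zeros(x, stride):
    """Insert (stride-1) zeros between samples: the upsampling half of a
    transposed convolution."""
    out = []
    for v in x:
        out.append(v)
        out.extend([0] * (stride - 1))
    return out[: len(out) - (stride - 1)]  # drop trailing zeros

def conv1d(x, w):
    taps = len(w)
    return [sum(w[j] * x[t + j] for j in range(taps))
            for t in range(len(x) - taps + 1)]

def deconv1d(x, w, stride):
    """Transposed convolution = zero-insertion followed by a convolution.
    With stride 2, roughly half of all multiply inputs are the inserted
    zeros -- the structured sparsity zero-skipping hardware exploits."""
    return conv1d(upsample_zeros(x, stride), w)

print(deconv1d([1, 2, 3], [1, 1], stride=2))  # [1, 2, 2, 3]
```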
Deconvolution can be mapped as a convolution (MMM) but needs extra hardware to support zero-skipping of the input for efficient mapping. Hardware modification can improve the mapping efficiency of this operation and better exploit its inherent sparsity, as will be discussed in Section IV-C.

C. Support Vector Machines (SVMs)

SVMs are ML algorithms used for classification and regression tasks. When classification of input data between normal behavior and an anomaly is required, a binary classifier called a one-class support vector machine (OC-SVM) can be used [27], [28]. The decision function of an OC-SVM using the radial basis function (RBF) kernel is given by equation (1). For the Laplacian kernel, the L2 norm is replaced by the L1 norm.
f(x) = \sum_{i=0}^{N} \alpha_i \cdot \exp\left(-\frac{\lVert x - sv_i \rVert^2}{2\sigma^2}\right) - b \quad (1)

where x is the input vector with length D, sv_i are the support vectors with length D, N is the number of support vectors, σ the standard deviation, α_i the Lagrange multipliers, and b the bias. The number of support vectors N, in combination with the vector length D, can become large in these workloads, making the L1 and L2 norm calculations complex; their deployment can gain orders of magnitude in performance on specialized accelerators. The D and N dimensions of the norm operations can be treated similarly to the C and K dimensions of a dense layer (MVM) and can be spatially unrolled on the PE array. In addition to unrolling the norms, extra hardware to support squaring, subtraction, rounding and absolute-value operations needs to be added to each PE. The result of the norm calculation can then be used by a CPU core to compute the overall kernel.
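As a reference point for equation (1), the sketch below evaluates the OC-SVM decision function in plain Python (an illustrative implementation, not the paper's firmware). The inner squared-norm loop over D for each of the N support vectors is the MVM-like part that would be unrolled on the PE array; the exponential and final accumulation are the part left to the host CPU.

```python
import math

def ocsvm_decision(x, svs, alphas, sigma, b):
    """Eq. (1): f(x) = sum_i alpha_i * exp(-||x - sv_i||^2 / (2*sigma^2)) - b.

    The squared L2 norms (loop over D per support vector) map onto the
    accelerator like a dense layer's C/K loops; exp() and the sum over N
    stay on the CPU core.
    """
    f = 0.0
    for sv, a in zip(svs, alphas):
        sq = sum((xi - si) ** 2 for xi, si in zip(x, sv))  # ||x - sv_i||^2
        f += a * math.exp(-sq / (2.0 * sigma ** 2))
    return f - b

# A point sitting exactly on its only support vector scores alpha - b.
print(ocsvm_decision([1.0, 2.0], [[1.0, 2.0]], [0.5], sigma=1.0, b=0.1))
```

For the Laplacian-kernel variant mentioned above, the squared norm would simply be replaced by `sum(abs(xi - si) ...)`, which is why the PEs need absolute-value support.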
D. Structured Sparsity

Exploiting sparsity in DNNs can help to reduce the computational complexity and memory requirements by skipping zeros and compressing the NN parameters. However, random pruning, or unstructured sparsity, tends to be hard to map efficiently on hardware and requires special logic for zero-skipping and load balancing [29]–[31]. The structure of the sparsity (the granularity of pruning) has a high impact on hardware efficiency and prediction accuracy. Some works have found that unstructured sparsity achieves better prediction accuracy than structured sparsity, but structured sparsity tends to be more hardware amenable and improves computational efficiency [30]. Thus, a structured sparse model could be trained with more iterations to come back closer to the same prediction accuracy, achieving similar overall efficiency/cost. Moreover, more coarse-grained sparsity can reduce the additional memory requirements imposed for storing the indices of non-sparse data.
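The index-overhead argument can be made concrete with a small sketch (illustrative only; the block size and storage scheme are assumptions, not TinyVers' actual format): with block pruning, one coordinate is stored per kept block rather than one per kept element, so coarser granularity shrinks the metadata.

```python
def block_compress(w, bs):
    """Keep only the nonzero bs x bs blocks of a weight matrix, storing one
    (block-row, block-col) index per kept block -- versus one index per
    nonzero element in an unstructured scheme."""
    rows, cols = len(w), len(w[0])
    blocks, idx = [], []
    for r in range(0, rows, bs):
        for c in range(0, cols, bs):
            blk = [row[c:c + bs] for row in w[r:r + bs]]
            if any(any(v != 0 for v in row) for row in blk):
                blocks.append(blk)
                idx.append((r // bs, c // bs))
    return blocks, idx

w = [
    [1, 2, 0, 0],
    [3, 4, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
blocks, idx = block_compress(w, 2)
print(len(blocks), idx)  # one kept 2x2 block at block-coordinate (0, 0)
```

Here the four nonzero weights cost a single block index; an element-wise scheme would have needed four indices for the same data.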
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' With all of these diverse ML workloads and their charac- teristics in mind, a platform which can efficiently map all of the above, needs to be designed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' TINYVERS HARDWARE ARCHITECTURE TinyVers, as shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 2, is a heterogeneous SoC consisting of a single core RISC-V processor, a flexible ML accelerator called FlexML, a 512 kB shared level-2 (L2) SRAM memory, a micro-DMA (uDMA) for data movement between peripherals/memory, a 512 kB eMRAM for non- volatile storage, and a WuC for power management.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' The SoC development is rooted in the PULPissimo platform [32].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' It embeds a 2 kB read-only memory (ROM), which acts as the first stage boot loader (FSBL) and also controls boot from JTAG, external SPI flash or the eMRAM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Two communication busses are used: 1.' 
) a logarithmic interconnect, which enables a tightly-coupled data memory (TCDM) providing single-cycle access to the shared L2, and 2.) the APB standard bus, which is used for controlling the different memory-mapped modules.

Fig. 2. Overview of the complete TinyVers SoC showing the different power domains (PD) with their constituting modules and the supported power modes. (Figure: block diagram with the eMRAM (512 kB), ROM, shared L2 memory (512 kB), LP data acquisition L2 memory (64 kB), TCDM interconnect, uDMA, RISC-V core, FlexML accelerator with its 8×8 2D SIMD array and L1 memories, peripherals (GPIO, UART, SPI, I2C, I2S, CPI, JTAG), and the WuC (RTC and power FSM); a table lists which power domains are ON/OFF in the Boot, Active, Data Acq., LP Data Acq., and Deep Sleep modes.)

The interface between the SoC and the FlexML accelerator is based on the HWPE framework presented in [33]. Using the streamers from [33], data is moved to and from the shared L2 memory with the help of FlexML's DMA engine, an FSM that controls the data (un)loading of the accelerator's private memories and its double-buffering operation. Several peripheral interface protocols are supported by the SoC, including UART, SPI, I2C, I2S, and CPI, in addition to 32 general-purpose IOs (GPIO). Separate clocks are used for the main core logic, the peripheral interfaces, and the always-on domain, which includes the WuC and the IO pads.
A. Smart Sensing Modes for TinyML

IoT tinyML applications typically operate by collecting data across a specified time window through an array of sensors, after which the collected data is processed to make decisions. In many applications, the time window across which data must be collected before processing can start varies from a few milliseconds to seconds. Moreover, during sensor data collection, many modules of the MCU are not used, since no heavy processing is done yet. This creates opportunities for power saving in many tinyML applications: during data collection, only the modules necessary for moving the windowed data from the sensor peripheral interfaces to the memory need to remain active, while, e.g., the CPU can be put to sleep. Furthermore, in applications that work on time-series data like audio, the memory requirement for the windowed data is small (< 64 kB), such that a large part of the MCU's main memory can also be powered down to avoid the leakage power of the unused memory sections.
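As a back-of-the-envelope check on the < 64 kB figure, the sketch below sizes the acquisition buffer for a hypothetical 16 kHz, 16-bit mono audio stream and a hypothetical 3-axis IMU, each windowed over 2 s; all sensor parameters are illustrative assumptions, not values from the paper.

```python
# Buffer sizing for windowed time-series capture. Sample rates,
# sample widths and window length are illustrative assumptions.

def window_bytes(sample_rate_hz, bits_per_sample, window_s, channels=1):
    """Bytes needed to hold one acquisition window in memory."""
    return int(sample_rate_hz * window_s * channels * bits_per_sample // 8)

audio_b = window_bytes(16_000, 16, 2.0)             # mono audio, 2 s
imu_b   = window_bytes(1_000, 16, 2.0, channels=3)  # 3-axis IMU, 2 s

# Both windows fit in a 64 kB retained L2 partition.
fits = max(audio_b, imu_b) <= 64 * 1024
```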
To this end, TinyVers introduces two tinyML-optimized data acquisition power modes: 1.) 'Data acq.' and 2.) 'LP data acq.', as shown in Fig. 2. The data acq. mode, targeted towards applications with large sample data like vision, keeps the uDMA module and the complete shared L2 memory (512 kB) powered up. In contrast, the LP data acq. mode keeps only part of the shared L2 memory (64 kB) powered up, in addition to the uDMA. This mode specifically targets applications that need time-series and audio data, such as keyword spotting, machine monitoring, and biosignal analysis.

Fig. 3. Power simulation of the post-synthesis netlist in the Cadence Genus tool for the three power modes. In all three modes, I2S data is collected at a sampling frequency of 44.1 kHz for a window of 2 seconds. The reported full active power includes the configuration of the uDMA by the RISC-V core and the interrupt handling procedure, in addition to data collection.

  Mode          Dynamic (µW)   Leakage (µW)   Total (µW)
  Full active   325            31             356
  Data acq.     77             20             97
  LP data acq.  10             8              18

Fig. 4. Flow diagram showing the hierarchical FSM used in the WuC: a top-level FSM sequences the power-up/down of the power domains (logic & L1, MRAM, L2 & uDMA L2, uDMA), while bottom-level FSMs step through the power switches, reset, isolation, and clock-enable signals.

Fig. 3 shows an estimation of the power saving achieved when moving from the full active mode to the two tinyML sensing modes, with almost a 3.5× improvement between the full active and data acq. modes and 5.5× between the data acq. and LP data acq. modes.

B. Power Management

Aggressive power management is pursued in TinyVers on top of standard low-power design. The SoC is divided into 6 switchable power domains and 1 always-on domain (AON), as shown in Fig. 2.
Each switchable power domain contains multiple power-gating switches, which isolate the domain's VDD from the global VDD supply. These power-gating switches are controlled by signals driven from the WuC in the AON domain. All interconnect crossings between the power domains are equipped with bidirectional level shifters and isolation cells, so that the individual supply voltages of the domains can be controlled independently. The smart WuC is in charge of this power management control, relying on a real-time counter (RTC). The counter can be programmed by the RISC-V core with millisecond granularity. The RISC-V core can instruct the WuC to bring the SoC into one of the five supported power modes shown in Fig. 2. To this end, the WuC encompasses hierarchical finite-state machines (FSM) driven by the RTC, as shown in Fig.
4, controlling the power-up and power-down of the complete SoC and of the different power domains. The top-level FSM controls the power-up/down sequence across the different power domains, and the bottom-level FSMs control the fine-grained sequence that (de)activates the isolation cells and the power-gating switches of each individual domain.

Emerging memories like ReRAM, MRAM, FeRAM, PCM, etc. [34], [35] have shown promise for building cost-effective embedded non-volatile memories (NVM) targeting edge-computing applications in automotive or Industry 4.0 settings. NVM can serve as the storage space for the boot code and other parameters that need to persist. This enables two things: 1.
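The two-level control can be sketched in software as a top-level iterator over domains and a bottom-level step sequence per domain. The exact step ordering below is a plausible reconstruction for illustration, not the verified sequence of the WuC's RTL; the domain names are likewise hypothetical.

```python
# Hierarchical power-sequencing sketch: the top-level FSM walks the
# power domains in order; the bottom-level FSM steps each domain
# through its switch/reset/isolation/clock signals. Step ordering is
# an illustrative reconstruction, not the verified RTL sequence.

POWER_UP = ["switch_power_1", "switch_power_2", "release_reset",
            "deassert_isolate", "clk_enable"]
POWER_DOWN = ["clk_disable", "assert_isolate", "assert_reset",
              "switch_power_off"]

def sequence(domains, power_up=True):
    """Return the flat (domain, step) trace a WuC-style FSM would emit."""
    steps = POWER_UP if power_up else POWER_DOWN
    return [(d, s) for d in domains for s in steps]  # top x bottom level

# Hypothetical wake-up into LP data acq.: only uDMA and its L2 slice.
trace = sequence(["PD_UDMA", "PD_L2_UDMA"])
```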
) Duty-cycling can be used to reduce power consumption in applications that do not require always-on operation; and 2.) the SoC does not need to contact a central cloud server to fetch its boot code and NN parameters when it is power-cycled. Moreover, the availability of the NVM on-chip avoids the high energy cost of fetching data from off-chip. MRAM, promoted as a universal memory, uses magnetic polarity to store data in its bitcells [36]. Being non-volatile and almost as dense as traditional SRAM, it is a good fit for tinyML applications on extreme edge SoCs. With this in mind, TinyVers integrates a 512 kB embedded MRAM on-chip, enabling extreme power management strategies for smart sensing and on-demand computation.
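The benefit of duty-cycling can be quantified with a simple time-weighted power model. The deep-sleep figure (1.7 µW) is taken from the abstract above; the active power and duty factor below are illustrative assumptions.

```python
# Average power under duty cycling: sleep at p_sleep, wake for a
# fraction `duty` of the period at p_active. The 1.7 uW deep-sleep
# number is from the paper; 230 uW active and 1% duty are assumptions.

def avg_power_uw(p_active_uw, p_sleep_uw, duty):
    """Time-weighted average power for a duty factor in [0, 1]."""
    return duty * p_active_uw + (1.0 - duty) * p_sleep_uw

p_avg = avg_power_uw(230.0, 1.7, 0.01)   # ~4 uW average
```

Even a modest 1% duty factor brings the average well below the 10 µW machine-monitoring figure quoted in the abstract.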
In the SoC, the eMRAM acts as non-volatile storage for the boot code that the RISC-V needs to wake up and start processing, and it can also store the NN parameters of the mapped ML workloads. Finally, the eMRAM can also be used as a non-volatile scratchpad for storing windowed data in smart sensing applications. The interface between the eMRAM and the shared L2 memory uses the uDMA unit, and its design is based on the work of [17].

IV. FLEXML ACCELERATOR

This section first describes the architecture of the FlexML accelerator, followed by the dataflow reconfiguration used for flexible mapping, the efficient zero-skipping used for deconvolution and structured sparsity, and finally the hardware support for SVM, as briefly discussed in Section II.

A. FlexML Architecture Overview

The FlexML accelerator is TinyVers' specialized, versatile hardware accelerator.
FlexML is designed to efficiently support the large diversity of ML workloads in tinyML applications, while exploiting the data reuse present in the characteristics of individual layers. This is achieved through zero-latency runtime dataflow reconfiguration, discussed in Section IV-B. As shown in Fig. 5, FlexML encompasses an 8×8 single-instruction-multiple-data (SIMD) array of processing elements (PE), wherein each processing element consists of a precision-scalable multiply-accumulate (MAC) unit with support for INT8/4/2 [37], shown in Fig. 6. As a result of the precision scalability, the SIMD array can be reconfigured into an 8×8/16/32 array of INT8/4/2 MAC units, respectively. Each PE performs 1/2/4 MAC operations per cycle based on the selected precision (INT8/4/2), and the results are accumulated in a 32-bit register with full/partial output stationarity, reducing the movement cost of the large bit-width partial sums.
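Functionally, the precision scalability can be modeled as subword parallelism: one 8-bit operand word carries 1/2/4 signed operands depending on the configured precision. The sketch below models this behavior only; it is not the gate-level subword circuit of [37].

```python
# Behavioral model of a precision-scalable PE: an 8-bit word holds
# 8//bits signed subwords, so one cycle performs 1/2/4 MACs for
# INT8/4/2. Functional sketch only, not the circuit of [37].

def split_signed(word, bits):
    """Split an 8-bit word into 8//bits sign-extended subwords."""
    out = []
    for i in range(8 // bits):
        v = (word >> (i * bits)) & ((1 << bits) - 1)
        if v >= 1 << (bits - 1):
            v -= 1 << bits                 # sign-extend the subword
        out.append(v)
    return out

def pe_mac(acc, act_word, wgt_word, bits):
    """One PE cycle: 8//bits parallel MACs into a 32-bit accumulator."""
    for a, w in zip(split_signed(act_word, bits),
                    split_signed(wgt_word, bits)):
        acc += a * w
    return acc & 0xFFFFFFFF                # 32-bit accumulator register

acc8 = pe_mac(0, 0x05, 0x03, 8)   # 1 MAC:  5*3
acc4 = pe_mac(0, 0x21, 0x12, 4)   # 2 MACs: 1*2 + 2*1
acc2 = pe_mac(0, 0x55, 0x55, 2)   # 4 MACs: 4 * (1*1)
```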
The final output is passed through a ReLU function (if enabled), followed by re-quantization to the selected precision, and written back to the activation L1.

Fig. 5. FlexML accelerator architecture overview with the ucode instruction. (Figure: the 8×8 SIMD PE array with input FIFO (L0), weight L1 memory (2×32 kB), activation L1 memory (2×32 kB), sparsity memory (2×2 kB), instruction memory, adder trees, NLFG & max pooling unit, DMA engine, and control FSM; the ucode instruction fields include the layer type, K, C, Fx, Fy, IX, IY, and the input and weight pointers.)

Fig. 6. Block diagram of the processing elements used in the FlexML accelerator, showing the precision-scalable MAC unit and the additional hardware to support SVM.

Mixed-precision quantization can help improve the performance of DNN models when moving below 8-bit precision. However, the hardware overhead of mixed precision can reduce the overall efficiency of the PEs due to varying bandwidth and serialized dataflows [38]. Thus, FlexML only supports symmetric precision for its weights and activations. In addition, a simple shift and ReLU is used for output normalization, which also keeps the hardware overhead low.
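The output path can be sketched as ReLU, an arithmetic right shift, and saturation back to the selected precision; the round-to-nearest convention used below is an assumption, not a detail given in the text.

```python
# Output re-quantization sketch: optional ReLU on the 32-bit
# accumulator, then shift-with-rounding and saturation to INT<bits>.
# The rounding convention is an illustrative assumption.

def requantize(acc, shift, bits=8, relu=True):
    if relu and acc < 0:
        acc = 0
    acc = (acc + (1 << (shift - 1))) >> shift        # round, then shift
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, acc))                     # saturate to INTbits

y0 = requantize(1000, shift=4)      # 1000/16 rounds to 63
y1 = requantize(-5, shift=4)        # clipped to 0 by ReLU
y2 = requantize(50_000, shift=4)    # saturates at INT8 max, 127
```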
To maintain the accuracy of the models, a hardware-aware training framework, mentioned in Section V, is used. Supporting the SIMD PE array are private level-1 (L1) SRAM memories for storing weights (64 kB) and activations (64 kB). Both the weight L1 and the activation L1 are composed of two 32 kB banks operating in a ping-pong manner to overlap data writing and reading, improving overall performance. An intermediate memory level, L0, is provided between the activation L1 and the PE array. This L0 memory is a FIFO buffer of size 16×8 bits, used to improve data locality during the shifting-window operation of convolution. Furthermore, a separate non-linear function generator (NLFG) and a max pooling unit are provided.
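The benefit of the L0 FIFO can be quantified by counting activation-L1 reads for one output row: a full window of words on the first step, then one new word per window shift. The row length below is an illustrative assumption.

```python
# Activation-L1 read count for one output row, with and without the
# L0 FIFO reuse. Row length (32 outputs) is an illustrative choice;
# the 8-word window matches the 8-wide fetch described in the text.

def l1_reads(out_cols, window=8, with_fifo=True):
    """L1 reads needed to produce out_cols outputs in one row."""
    if with_fifo:
        return window + (out_cols - 1)   # 8 words, then 1 per shift
    return window * out_cols             # refetch the window each time

naive = l1_reads(32, with_fifo=False)    # 256 reads
fifo  = l1_reads(32)                     # 39 reads
```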
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' The for(y=0 to Y-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' for each output row for(x=0 to X/8-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' for each output column for(k=0 to K/8-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' for each output channel for(c=0 to C-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' for each input channel for(fy=0 to Fy-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' for each filter row for(fx=0 to Fx-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' for each filter column parfor(k=0 to 8-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' spatial unrolled output channel parfor(x=0 to 8-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' spatial unrolled output column o[k][x][y] += i[c][x+fx][y+fy]*w[k][c][fx][fy] for(k=0 to K/8-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' for each output channel for(c=0 to C/8-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' for each input channel parfor(k=0 to 8-1);' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' spatial unrolled output channel parfor(c=0 to 8-1);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' spatial unrolled input channel o[k] += i[c]*w[k][c] FIFO PE PE PE PE PE PE PE PE PE PE PE PE PE PE PE PE C Weight Memory FIFO MMM MVM Weight Memory OX K PE PE PE PE PE PE PE PE PE PE PE PE PE PE PE PE K Bank 0 Bank 1 Bank 3 Bank 0 Bank 1 Bank 3 Bank 7 Bank 7 Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Diagram showing the dataflow reconfiguration used to switch from OX|K dataflow (left) for MMM to C|K dataflow for MVM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' The nested for loops below show the addition of parfor loops for the spatial unrolling used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' NLFG uses LUT-based linear approximation to generate the various activation functions (other than ReLU) used in NN models such as tanh, sigmoid, etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' To control the dataflow and control flow inside the accelerator, a control unit with FSMs fetches ucode instructions from the instruction memory, decodes the instruction and deploys the relevant layer on the PE array by updating the control signals and counters that track the workload.' 
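The OX|K loop nest above can be run directly in software; the sketch below scales the array to 4×4 so it stays short, turns the two parfor loops into the spatial dimensions, and can be checked against a direct convolution sum. Layer sizes and weights are illustrative.

```python
# Executable version of the OX|K output-stationary loop nest, with a
# 4x4 "array" instead of 8x8 for brevity; the parfor loops over k and
# x become the spatial dimensions. Layer sizes are illustrative.

AR = 4  # spatial array width (8 in FlexML)

def conv_oxk(i, w, K, C, X, Y, Fx, Fy):
    o = [[[0] * Y for _ in range(X)] for _ in range(K)]
    for y in range(Y):
        for xb in range(X // AR):
            for kb in range(K // AR):
                for c in range(C):
                    for fy in range(Fy):
                        for fx in range(Fx):
                            for kk in range(AR):      # parfor k
                                for xx in range(AR):  # parfor x
                                    k = kb * AR + kk
                                    x = xb * AR + xx
                                    o[k][x][y] += (i[c][x + fx][y + fy]
                                                   * w[k][c][fx][fy])
    return o

# Tiny test layer: 4 in/out channels, 4x4 outputs, 3x3 filter.
K, C, X, Y, Fx, Fy = 4, 4, 4, 4, 3, 3
i = [[[(c + x + y) % 5 for y in range(Y + Fy - 1)]
      for x in range(X + Fx - 1)] for c in range(C)]
w = [[[[(k + c + fx + fy) % 3 for fy in range(Fy)]
       for fx in range(Fx)] for c in range(C)] for k in range(K)]
o = conv_oxk(i, w, K, C, X, Y, Fx, Fy)
```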
The ucode instructions are generated by a pseudo-compiler built in Python (Section V) and consist of CISC-like layer-wise long instructions carrying the layer hyperparameters, as shown in Fig. 5. The control unit is also extended to support efficient zero-skipping of activations in the case of deconvolution, and zero-skipping of pruned weights in conjunction with the sparsity index memories (Section IV-C).

B. Dataflow Reconfiguration

To efficiently map the diverse set of ML workloads, runtime dataflow reconfiguration is supported in the FlexML accelerator at no latency overhead. The configurability enables efficient mapping of both: 1.) MMMs, used for CNN, deconvolution, and TCN, exploiting both input and weight spatial data reuse under an OX|K dataflow with output stationarity; and 2.
) MVMs, used for FC layers, RNNs, and the norm calculation of SVMs with batch size 1, exploiting the available input spatial data reuse under a C|K dataflow with partial output stationarity. Multiple previous works have proposed dataflow reconfiguration in hardware to optimally map different workloads [39]–[41]. However, these works suffer from large hardware overhead and latency for diverse dataflow support and are not suitable for extreme edge devices. This work limits the dataflows to two optimal mapping schemes, thereby keeping the hardware and power overhead low. Moreover, none of the prior works have looked into mapping TCN, AE, and SVM on the same hardware accelerator. Fig. 7 shows the OX|K dataflow (left) and the C|K dataflow (right) and their respective hardware implementations.
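The C|K mapping can likewise be sketched executably: the parfor loops now unroll output channels across columns and input channels across rows, with the accumulation over c standing in for the adder-tree reduction along a PE row. The 4×4 array and vector/matrix sizes are illustrative.

```python
# Executable version of the C|K loop nest for a batch-1 MVM, scaled to
# a 4x4 array; the accumulation over c models the adder-tree reduction
# along a PE row. Vector/matrix sizes are illustrative.

AR = 4  # spatial array width (8 in FlexML)

def mvm_ck(i, w, K, C):
    o = [0] * K
    for kb in range(K // AR):
        for cb in range(C // AR):
            for kk in range(AR):          # parfor k (unrolled columns)
                for cc in range(AR):      # parfor c (unrolled rows)
                    k = kb * AR + kk
                    c = cb * AR + cc
                    o[k] += i[c] * w[k][c]
    return o

K, C = 8, 8
i = [c % 3 for c in range(C)]
w = [[(k + c) % 4 for c in range(C)] for k in range(K)]
o = mvm_ck(i, w, K, C)
```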
In the OX|K dataflow, spatial unrolling is applied to the OX and K dimensions of the nested for loop, allocating the unrolled OX dimension along the columns and the unrolled K dimension along the rows of the 8×8 SIMD PE array.

Fig. 8. Representation of a deconvolution layer in software (top left), the control unit running the zero-skip operation (bottom left), the architectural change required on the L0 FIFO to support deconvolution (top right), and the cycle-by-cycle operation of the FIFO and PEs (bottom right).

The remaining for loops are temporally unrolled, as shown in the nested for loops in Fig. 7, resulting in an output stationary dataflow. Under this dataflow regime, the activation L1 memory multicasts input activation data in the vertical dimension to the L0 FIFO memory, which fetches 8 words in the first cycle followed by a single word per shift of the sliding window, thereby reducing the memory bandwidth and the number of memory fetches by exploiting the available reuse. The weight L1 memory provides data in the horizontal dimension, supplying 8 words from 2 internal banks, where each word is multicast along its row. Due to the output stationarity, accumulation continues until the final outputs are generated, which are then systolically shifted out vertically to the activation L1, requiring 8 cycles to complete the output write-back.
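The bandwidth saving of the sliding-window L0 FIFO can be made concrete with a small counting sketch. The functions and the single-row/unit-stride simplification below are illustrative assumptions, not the exact on-chip address generation.

```python
def l1_fetches_with_fifo(ox_tile=8, fx=3, stride=1):
    """Activation L1 reads for one row of an OX tile with the L0 FIFO:
    a full fetch of `ox_tile` words in the first cycle, then one word per
    shift of the window (simplified: one row, dilation 1)."""
    shifts = fx - 1                  # remaining filter taps after the first
    return ox_tile + shifts * stride

def l1_fetches_naive(ox_tile=8, fx=3):
    """Without the FIFO reuse, every output column re-reads each tap."""
    return ox_tile * fx

print(l1_fetches_with_fifo())  # 10 reads instead of the naive 24 for FX = 3
```

For an 8-wide output tile and a 3-tap window this is 10 reads instead of 24, which matches the "8 words, then one word per shift" behavior described above.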
The input data shifting inside the L0 FIFO is made programmable to support the variable dilation used in TCNs, and variable strides in general. The alternative C|K dataflow is used for MVMs, as these workloads cannot use the OX|K dataflow efficiently due to the lack of weight reuse. Under this dataflow, the C dimension is spatially unrolled along the vertical column dimension and the K dimension along the horizontal row dimension. The activation L1 memory multicasts 8 words of input activation along the vertical dimension, bypassing the L0 FIFO memory. With a batch size of 1, no weight reuse is available and, thus, each PE needs a new weight every cycle. To meet this requirement, the weight memory uses all of its 8 banks to unicast 64 different weight words to the PEs. PE rows operate on different input channels (C) of the same output channel (K).
Hence, once the required MAC operations per PE are done, the outputs of the PEs of the same row are accumulated using an adder tree, and one final output per row is shifted out to the activation memory.

C. Efficient Zero-skipping for Deconvolution and Blockwise Structured Sparsity

The FlexML accelerator supports efficient zero-skipping of deconvolution workloads.

Fig. 9. Blockwise structured sparsity applied to CNN and dense layers (top), and the control unit operating in tandem with the sparsity index memory to support zero-skipping (bottom).

As shown in Fig. 8, the input FIFO is designed such that, when in deconvolution mode, it only fetches one set of words and shuffles it with zero padding. The control unit skips the rows and columns of zeros that would result in redundant computation, yielding a performance gain of up to 2× compared to running deconvolution in convolution mode with upsampling. TinyVers also supports structured sparsity, more specifically blockwise kernel-level (2D) sparsity for both convolutional and dense layers [29], [31]. In this scheme, shown in Fig. 9, complete input channels of the filter kernels are pruned, with the constraint that a block of 8 filter kernels (K = 8) shares the same pruning pattern. The block size is determined by the dimension of the PE array and the spatial unrolling of K along the horizontal dimension of the 2D PE array. In our case, the selected block size makes controlling the dataflow and control flow easier. Applying the same channel pruning to all 8 filter kernels mapped in parallel on the PE array improves the mapping efficiency, as all rows can still operate with a common control logic, and enables not only energy savings but also throughput benefits. For this, the FlexML accelerator contains specialized sparsity index memories which store the bit-encoded indices of the pruned channel groups. Fig. 9 shows the sparsity index memory and the control flow logic used in the control unit. Before every filter kernel block increment, the control unit fetches an index memory word and checks it bit by bit for the sparsity state as the input channels increment. If a sparse channel is detected, the complete computation of that channel is skipped, thus avoiding any zero computation.
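The index-driven skipping can be sketched as follows. This is a minimal software model assuming a "1 = pruned channel" bit encoding per block of 8 output channels; the actual on-chip index format may differ.

```python
import numpy as np

BLOCK_K = 8  # pruning decision is shared by a block of 8 filter kernels

def encode_sparsity(wgt):
    """Bit-encode, per block of 8 output channels, which input channels of
    wgt (K, C) are entirely pruned (illustrative encoding: 1 = pruned)."""
    K, C = wgt.shape
    idx = []
    for k0 in range(0, K, BLOCK_K):
        block = wgt[k0:k0 + BLOCK_K]
        idx.append([int(np.all(block[:, c] == 0)) for c in range(C)])
    return idx

def sparse_mvm(inp, wgt, idx):
    """Skip every input channel whose index bit is set, as the control unit
    does before each filter-kernel block increment; returns (out, skipped)."""
    K, C = wgt.shape
    out = np.zeros(K, dtype=np.int64)
    skipped = 0
    for b, k0 in enumerate(range(0, K, BLOCK_K)):
        for c in range(C):
            if idx[b][c]:            # pruned channel: issue no MACs at all
                skipped += 1
                continue
            out[k0:k0 + BLOCK_K] += wgt[k0:k0 + BLOCK_K, c] * inp[c]
    return out, skipped
```

Because whole channels are skipped for all 8 rows at once, the rows keep a common control flow, which is why the scheme gives throughput gains and not just energy savings.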
D. Support Vector Machine

The L1 and L2 norm computations of the OC-SVM require modification of the PEs in order to map this workload on the same hardware. As shown in Fig. 6, each PE is extended with a subtraction block, an absolute-value unit, and a rounding unit, and the multiplier is modified to also enable squaring, so that the norm calculation can be performed within the PE array.

Fig. 10. Measurement setup and chip microphotograph (2.5 mm × 2.5 mm die with the L1 memory, L2 memory, RISC-V core and accelerator, uDMA, eMRAM, and WuC).

The input data vector x and the support vectors svi are of dimension D, and the number of support vectors is N. When used in the C|K dataflow, the D dimension of the input data vector x is unrolled and multicast vertically (C) along the PE array, while the N dimension of the support vectors svi is unrolled and unicast horizontally (K). The results of the N norm calculations, computed in the PEs, are then sent to the shared L2 memory, where they are post-processed by the RISC-V core with the GNU C built-in exponential function, multiplication with α, and summation over N to generate the final output shown in equation (1).

V. DEPLOYMENT OF NEURAL NETWORKS ON TINYVERS

Hardware used for ML applications also requires a user-programmable full stack that can translate ML algorithms directly from existing ML training and inference frameworks like TensorFlow, Keras, PyTorch, etc. This makes quick and easy deployment of various ML workloads onto existing hardware possible. To this end, a Python-based pseudo-compiler framework was created for TinyVers, taking its heterogeneity into account.
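The PE-array/RISC-V split used for the OC-SVM in Section IV-D can be sketched in a few lines. The RBF-style kernel, gamma, and rho below follow the usual one-class SVM decision function and are assumptions for illustration; the exact form is the paper's equation (1), which is defined earlier in the paper.

```python
import math

def oc_svm_score(x, svs, alphas, gamma=0.1, rho=0.0):
    """Two-stage OC-SVM evaluation sketch: the PE array computes one squared
    L2 norm per support vector (C|K dataflow, D unrolled along C, N along K);
    the RISC-V core applies exp(), scales by alpha, and sums over N.
    gamma and rho are assumed hyperparameters, not values from the paper."""
    # Stage 1 (PE array): N norm computations
    norms = [sum((xi - si) ** 2 for xi, si in zip(x, sv)) for sv in svs]
    # Stage 2 (RISC-V post-processing): exp, alpha-weighting, summation
    return sum(a * math.exp(-gamma * n) for a, n in zip(alphas, norms)) - rho
```

Splitting the work this way keeps the expensive O(N·D) distance computations on the array while the cheap O(N) transcendental post-processing runs on the host core.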
An ML algorithm is first quantized to the selected precision using the QKeras framework [42] for quantization-aware training. The quantization-aware training setup takes the hardware constraints into consideration, such as the symmetric quantization and the shift-based scaling of outputs in the PEs of the accelerator. The quantized model is then passed to a Python-based NN compilation step, which takes in the hardware description and produces a set of C-based header files for the RISC-V core, consisting of the ucode instructions for the accelerator, the NN parameters, and a golden model for verification of the mapped workload.

VI. CHIP IMPLEMENTATION AND MEASUREMENT

The TinyVers chip, whose microphotograph is shown in Fig. 10, was implemented and fabricated in GlobalFoundries 22FDX™. The figure shows the different sub-modules of the SoC detailed in the previous sections.
Fig. 10 also shows the lab setup used for measurements and benchmarking. The following subsections detail the measurements and benchmarking done on the SoC for power, energy efficiency, and performance.

A. Peak Performance Analysis

First, a peak performance analysis is undertaken using a single CNN layer with 32 input channels, 32 output channels, and a 3×3 filter kernel. Selection of the used layer for peak
[Fig. 11 plot data omitted: clock frequency (5 MHz to 150 MHz) versus peak energy efficiency (from 2.47 down to roughly 0.83 TOPS/W) and throughput (from 0.586 up to 17.6 GOPS), across supply points from 0.5 V memory/0.4 V logic to 0.8 V memory/0.8 V logic.]

Fig. 11. Peak performance analysis of CNN3×3 layer.
Fig. 12. Power breakdown of the peak performance analysis with CNN3×3. The MRAM power consumption is negligible as the MRAM is OFF in active mode. MRAM(A) and MRAM(P) represent the MRAM array and the MRAM periphery, respectively.

performance is driven by the fact that convolutional layers with a 3×3 filter kernel are the most commonly used layers in modern DNN models. The hyperparameters of the CNN layer are chosen under the constraints of maximum utilization of the PE array and the size of the accelerator's private L1 memories. The 8-bit quantized activations and non-sparse (structured) weights of the CNN are generated with the compiler framework using the Google speech dataset for keyword spotting [43], and verified against the golden model for functional correctness. Fig. 11 plots the peak energy efficiency and the throughput versus the clock frequency while sweeping the supply voltages of the logic and memories for the benchmarked CNN layer. For a fair comparison with other SotA chips, no body biasing is applied. Fig. 12 shows the power breakdown over the individual modules when running the benchmark layer. The SoC offers a large flexibility in delivered performance, ranging from a high-energy-efficiency/low-throughput point of 2.5 TOPS/W at 586 MOPS when operating at a clock frequency of 5 MHz with 0.4 V logic and 0.5 V memories, to a low-energy-efficiency/high-throughput point of 0.8 TOPS/W at 17.6 GOPS when operating at 150 MHz with 0.8 V logic and memories. This provides a large operating range for extreme edge tinyML applications, trading off speed against energy efficiency.

B. Workload Benchmarks

Using the peak energy efficiency operating point (5 MHz, 0.4 V logic and 0.5 V memory) from Section VI-A, a further performance analysis of different synthetic and actual real-time benchmarks is carried out. Table I shows the SoC's flexibility through the mapping of different ML layers and full

TABLE I
WORKLOAD BENCHMARKS

Workload | Acc. | Power (µW) | Peak perf. (GOPS) | Peak (effective NZ) energy eff. (TOPS/W)
Synthetic:
CNN@8b | - | 237 | 0.586 | 2.47 (2.47)
CNN@4b | - | 197 | 1.17 | 5.94 (5.94)
CNN@2b | - | 197 | 2.35 | 11.9 (11.9)
CNN@8b, 50% sparse | - | 239 | 1.03 | 4.31 (2.46)
CNN@8b, 87.5% sparse | - | 212 | 3.64 | 17.1 (2.76)
FC/RNN/SVM, batch=16 | - | 140 | 0.116 | 0.829 (0.829)
Deconv@8b | - | 235 | 1.36 | 5.78 (2.49)
Real-time:
TCN (KWS) | 93.3%∗ | 193 | 0.204 | 1.05 (1.05)
CAE | - | 209 | 0.442 | 2.11 (1.27)
ResNet-8 | 82%+ | 228 | 0.267 | 1.17 (1.17)
OC-SVM | - | 129 | 0.126 | 0.972 (0.972)
∗ 12-class task, baseline = 93.46%. + Baseline = 85%.

[Figure: per-module power breakdown (WuC, L2, L2uDMA, L1, Logic, DMA, MRAM) for the ResNet-8, OC-SVM, CAE, and TCN workloads.] Power: 129 µW, Latency: 4.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='3ms Power: 228 μW, Latency: 76ms Power: 193 μW, Latency: 11ms Power: 209 μW, Latency: 30ms Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Energy breakdown showing the distribution of measured energy of the chip modules for running a single inference of the four real-time workloads on FlexML and RISC-V with input data already available in L2 memory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' The power and latency measurements starts from setting up of accelerator parameters by RISC-V, data movement from L2 to L1, inference computations, and ends with post processing by RISC-V core.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' MRAM power consumption is negligible as it is OFF in active mode.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' workloads.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' The CNN layer from Section VI-A is extended and measured with different precision and blockwise structured sparsity (BSS) levels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' When moving to lower precision of INT- 4 and INT-2, the peak throughput improves by 2× and 4× while the peak energy efficiency improves by 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='4× and 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='8× resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=', achieving a maximum of 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='9 TOPS/W at INT-2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' As shown in Table I, at 8 bit precision with 50% BSS (16/32 input channels pruned) the performance improves by around 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='7× while at 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='5% BSS (28/32 input channels pruned) the performance increases by approximately 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='9×.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Further performance improvement can be gained when moving to lower precision, however, low precision combined with high BSS levels can cause a large drop in accuracy and, thus, is not explored in this benchmarking.' 
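The BSS speedups above can be related to an ideal bound: pruning whole input-channel blocks lets the accelerator skip them entirely, so the ideal speedup is the ratio of total to kept channels (2× at 50% BSS, 8× at 87.5% BSS); the measured 1.7× and 6.9× fall somewhat short due to overheads. A minimal sketch of this bound, not taken from the paper's tooling:

```python
def bss_ideal_speedup(total_ch: int, pruned_ch: int) -> float:
    """Ideal speedup from blockwise structured sparsity:
    only the kept input-channel blocks are computed."""
    kept = total_ch - pruned_ch
    return total_ch / kept

# 50% BSS: 16 of 32 input channels pruned -> ideal 2x (measured ~1.7x)
print(bss_ideal_speedup(32, 16))  # 2.0
# 87.5% BSS: 28 of 32 input channels pruned -> ideal 8x (measured ~6.9x)
print(bss_ideal_speedup(32, 28))  # 8.0
```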
Other synthetic benchmarks, such as FC, RNN, SVM, and a deconvolutional layer with hyperparameters similar to the CNN layer, are also explored; the results are shown in the table. For the dense layers, a batch size of 16 is used.

Finally, 4 real-time application benchmarks are used to show the capabilities of the SoC: 1) keyword spotting (KWS) using a TCN model [21], [44] on the Google speech commands dataset; 2) continuous machine monitoring with a convolutional auto-encoder (CAE) [24] on the MIMII dataset [45]; 3) ResNet-8 image classification on CIFAR-10, used in the MLPerf Tiny benchmark [46]; and 4) novelty detection with an OC-SVM [47]. Table I shows the peak performance characteristics of these benchmarks on the SoC, more specifically on the RISC-V core and FlexML, with 8-bit precision, a single inference, and assuming all input data is available in the shared L2 memory. For TCN and ResNet-8, hardware-aware quantization was used and the energy and performance metrics were measured, while for the CAE and OC-SVM workloads, random inputs and weights were used. All 4 workloads can be deployed with less than 230 µW of continuous real-time power at a peak energy efficiency between 1-2 TOPS/W.

TABLE II: MEASUREMENT RESULTS OF DIFFERENT LOW POWER MODES

Power Mode     AON Freq. (kHz)  Core Freq. (MHz)  Power (µW)  Wakeup Latency (µs)
Deep Sleep     33               -                 1.7         788
LP Data acq.*  33               5                 23.6        788
Data acq.*     33               5                 67          788
* @Fs=44.1 kHz

Fig. 14. Deep sleep power-latency-frequency tradeoff: sweeping the AON clock from 33 kHz to 40 MHz trades the wake-up latency down from 788 µs to 650 ns against a deep sleep power rising from 1.7 µW to 22.8 µW.
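From the per-workload power and latency reported in Fig. 13, the energy cost of one inference follows directly as E = P · t. A quick back-of-the-envelope check using those measured numbers:

```python
def energy_per_inference_uj(power_uw: float, latency_ms: float) -> float:
    """Energy per inference E = P * t, returned in microjoules."""
    return power_uw * latency_ms / 1000.0

# Measured (power in uW, latency in ms) per workload, from Fig. 13
workloads = {
    "OC-SVM":   (129, 4.3),   # -> ~0.55 uJ
    "ResNet-8": (228, 76),    # -> ~17.3 uJ
    "TCN":      (193, 11),    # -> ~2.1 uJ
    "CAE":      (209, 30),    # -> ~6.3 uJ
}
for name, (p, t) in workloads.items():
    print(f"{name}: {energy_per_inference_uj(p, t):.2f} uJ per inference")
```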
This means that the SoC provides a high level of flexibility in workload mapping at sub-mW power, enabling truly power-efficient tinyML applications on extreme edge devices. Fig. 13 shows the power breakdown of the 4 real-time workloads. For the OC-SVM (a dense operation), the power consumption of the memory dominates, due to the lack of weight reuse leading to more data fetches. On the other hand, the power breakdown of the CNN-based workloads (TCN, ResNet-8, and CAE) shows an equal distribution between memory and logic, as the dataflow exploits maximum reuse.

C. Power Management

Table II shows the measured real-time power of the different low power modes of the SoC detailed in Section III-B. In deep sleep mode, the SoC operates with an AON clock frequency of 33 kHz. In this mode, only the AON domain, consisting of the WuC and the logic controlling the IO pads, stays powered ON.

Fig. 15. Instantaneous power trace showing the KWS application scenario with one full period of smart sensing and TCN processing followed by idling: boot from MRAM, I2S LP data acquisition (2 s window), TCN processing (16 batches), erase followed by write of the output to MRAM, and deep sleep (Vdd SCL/Mem: 0.55 V, Vdd AON: 0.7 V, core frequency: 5 MHz).

The resulting deep sleep power measured is 1.7 µW when operating at a 0.7 V supply. Compared to the peak power measured for the CNN layer, the deep sleep power is 12,000× lower. The measured latency of waking up the SoC from deep sleep mode to active mode is 788 µs. This wake-up latency can be traded off against deep sleep power by sweeping the AON clock frequency.
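The latency points in Fig. 14 are consistent with a fixed wake-up sequence of about 26 AON clock cycles (788 µs × 33 kHz ≈ 26; 650 ns × 40 MHz = 26), so the trade-off can be modeled simply as t_wake = N_cycles / f_AON. This cycle count is inferred from the measured data, not stated in the text:

```python
N_CYCLES = 26  # inferred from 788 us @ 33 kHz and 650 ns @ 40 MHz

def wakeup_latency_us(f_aon_hz: float) -> float:
    """Wake-up latency assuming a fixed cycle count on the AON clock."""
    return N_CYCLES / f_aon_hz * 1e6

print(wakeup_latency_us(33e3))   # ~788 us
print(wakeup_latency_us(40e6))   # ~0.65 us
```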
Fig. 14 plots this trade-off between measured power and wake-up latency when sweeping the AON clock frequency. Applications that need low latency can operate the AON clock at 40 MHz to attain a wake-up latency of 650 ns at a real-time power of 22.8 µW.

Table II also shows the measured power for the two tinyML-optimized power modes, data acq. and LP data acq. These power modes are measured with an I2S-protocol-based windowed test vector collection, with the AON clock frequency at 33 kHz and the core and peripheral clock frequencies at 5 MHz. The SoC is programmed to collect I2S audio data through its uDMA at a sampling frequency of 44.1 kHz over a sampling window of 2 seconds. The sampling clock is generated by the SoC from the 5 MHz clock and runs for the duration of the sampling window. The data acq. or LP data acq. mode is then initiated and the power is measured. The measured power for LP data acq. and data acq. is 23.6 µW and 67 µW, respectively, a 850× and 300× reduction compared to the peak power, enabled by dynamically lowering the core and peripheral frequencies to 5 MHz.

D. Instantaneous Power Trace

To demonstrate complete end-to-end applications deployable on the SoC, and to show the SoC's full ML functionality, duty cycling, and power management features, two applications are mapped onto the heterogeneous SoC with windowed data collection done in the LP data acq. mode: keyword spotting with a TCN model operating in continuous mode [21], and a machine monitoring use case with Mel Frequency Energy Coefficient (MFEC) based feature extraction and a CAE in duty-cycled mode [24].

Fig. 16. Instantaneous power trace showing the machine monitoring application scenario with one period of smart sensing, FE, and CAE processing followed by idling: boot from MRAM, I2S LP data acquisition (1 s window), MFEC processing on the RISC-V, autoencoder processing on FlexML, and deep sleep (Vdd SCL/Mem: 0.55 V, Vdd AON: 0.7 V, core frequency: 5 MHz).

1) Keyword-spotting Application: The first application scenario is keyword spotting with the TCN model. In this scenario, audio data from a microphone with a window size of 2 seconds (16 batches) at a sampling frequency of 44.1 kHz is collected using the I2S peripheral interface protocol; the collected data is simultaneously stored in the special L2 uDMA memory using the SoC's uDMA, with the SoC in the LP data acq. mode.
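The L2 uDMA buffer needed for one acquisition window follows from the sampling parameters. A sketch of the sizing, where the 16-bit sample width is an assumption not stated in the text:

```python
def window_bytes(fs_hz: int, window_s: float, bytes_per_sample: int = 2) -> int:
    """Buffer size for one windowed I2S acquisition."""
    return int(fs_hz * window_s) * bytes_per_sample

# KWS: 44.1 kHz, 2 s window, assumed 16-bit mono samples
print(window_bytes(44_100, 2.0))   # 176400 bytes (~172 kB)
# Machine monitoring: 16 kHz, 1 s window
print(window_bytes(16_000, 1.0))   # 32000 bytes
```

Under these assumptions, even the 2 s KWS window fits comfortably in the SoC's L2 memory.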
After 2 seconds, the SoC wakes up into active mode and the collected data is processed using the TCN model from Section VI-B. The output of the TCN processing is then stored into the MRAM for future processing or transmission, while the SoC can either go into deep sleep mode or collect a new windowed sample. Fig. 15 shows the complete instantaneous power trace of the KWS application scenario. When operating in this duty-cycled mode, the average power of the complete application is 173 µW. The power can be further reduced to 10-20 µW by using the deep sleep power mode of the SoC during periods of no sensing or computation.

2) Machine Monitoring Application: Machine monitoring for predictive maintenance is the second application scenario. In this scenario, the I2S peripheral interface protocol is used to collect audio data from a microphone with a window size of 1 second at a sampling frequency of 16 kHz. The collection of the I2S audio data is operated in the LP data acq. mode of the SoC. Once the complete windowed data is collected, the SoC switches to active mode, in which the RISC-V core performs the MFEC-based feature extraction, followed by running the CAE on the accelerator. Fig. 16 plots the instantaneous power trace of the machine monitoring application. Unlike the previous application, which works on raw audio data, the CAE model needs pre-processed MFEC data. As the MFEC algorithm is not supported on the accelerator, it is executed on the RISC-V core with INT16 precision instead of INT32 or FP32 to reduce power consumption [52].
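The duty-cycled figure quoted below is roughly reproduced by weighting the active and deep sleep powers by the duty cycle, P_avg ≈ D·P_active + (1-D)·P_sleep. A back-of-the-envelope check using the ~164 µW active and 1.7 µW deep sleep numbers from the text:

```python
def avg_power_uw(duty: float, p_active_uw: float, p_sleep_uw: float) -> float:
    """Duty-cycled average power: active fraction plus sleep fraction."""
    return duty * p_active_uw + (1.0 - duty) * p_sleep_uw

# 5% duty cycle, ~164 uW active, 1.7 uW deep sleep -> ~9.8 uW,
# in line with the ~9.5 uW reported for the machine monitoring use case
print(avg_power_uw(0.05, 164.0, 1.7))
```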
The power trace shows that running a large feature extraction on the RISC-V is not energy efficient, as it takes a long time to complete owing to the single-core operation. The average power for continuous operation remains below 164 µW, but for this use case only 9.5 µW is consumed at a duty cycle of 0.05. The MFEC execution on the RISC-V

TABLE III: PERFORMANCE COMPARISON WITH STATE-OF-THE-ART

                         |------------ Extreme Edge SoCs -------------|  |- edgeML Accelerators -|
                         [48]          [17]            [49]          TinyVers          [50]            [51]
Technology               28nm FDSOI    22FDX           55nm          22FDX             28nm            65nm
Die Area (mm2)           4.5           12              10            6.25              0.55            16
Applications             IoT GP, DNN,  IoT GP, DNN,    IoT GP, DNN,  IoT GP, DNN+,     Always-on KWS   DNN
                         NSA           NSA             NSA           Trad. ML, NSA
Supported ML layers      CNN, FC/RNN   CNN, FC/RNN     CNN, FC/RNN   CNN, FC/RNN, GAN, DSCNN           CNN, FC/RNN
                                                                     AE, TCN, SVM
Architecture             1xRI5CY +     10xRI5CY +      9xRI5CY       1xRI5CY +         DSCNN accel.    DNN accel.
                         ML accel.     ML accel.                     FlexML accel.
SRAM (state retentive)   464 kB        128 kB (L1),    64 kB (L1),   132 kB (L1),      2 kB            256 kB
                         (40 kB)       16-1600 kB (L2) 512 kB (L2)   64/512 kB (L2)
eNVM                     -             4 MB MRAM       -             512 kB MRAM       -               -
Deep sleep power (uW)    -             1.7             3.6           1.7               -               -
SRAM ret. sleep
power (uW)               6.4           2.8-123.7       30            23.6-67           -               -
Int precision (bits)     8, 16, 32     8, 16, 32       8, 16, 32     2, 4, 8           8               1-16
Supply voltage (V)       0.45-0.9      0.5-0.8         1-1.2         0.4-0.9           0.41            0.63-1.1
Max frequency (MHz)      350           450             250           150               0.04            200
Power range              6.4µW-96mW    1.7µW-49.4mW    3.6µW-75mW    1.7µW-20mW        0.51µW          3.2-297mW
Best ML perf.            36 GOPS @8b*  32.2 GOPS @8b*  12 GOPS @8b*  17.6 GOPS @8b**   2.3 MOPS+ @8b** 691.2 GOPS @8b**
Best ML eff.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='3 TOPS/W@ 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='3 TOPS/W@ 200 GOPS/W@ 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='47 TOPS/W@ 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='5 TOPS/W@ 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='57 TOPS/W, @Perf 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='8 GOPS, 8b∗ 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='6 GOPS, 8b∗ 7 GOPS, 8b∗ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='58 GOPS, 8b∗∗ 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='3 MOPS, 8b∗∗ 8b∗∗ 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='9 TOPS/W@ 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='6 TOPS/W, 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='4 GOPS, 2b∗∗ 4b∗∗ + estimated at 90% utilization of MACs, ∗ Matmul, ∗∗ CNN, 1 MAC = 2 Ops can be optimized using special DSP extensions available with the PULP libraries, which is left for future work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' VII.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' COMPARISON WITH SOTA Table III shows the comparison of our SoC with SoTA on two fronts: on one hand, comparing with existing extreme edge SoCs (left), and on the other hand, with edge ML accelera- tors (right).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Our SoC has similar or increased flexibility in application mapping compared to the extreme edge SoCs on the left, with much improved energy efficiency and power.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' TinyVers supports not only the IoT general processing (GP), DNNs and near-sensor analytics (NSA) like [17], [48], [49], but also DNN+ such as TCN and AE and traditional ML like SVM, all at better energy efficiency because of efficient mapping.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' This is evident from the best energy efficiency of 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='47 TOPS/W for running a CNN layer on TinyVers.' 
The energy efficiency is further enhanced to 11.9 TOPS/W when the CNN workload is quantized to INT2. In contrast to the other extreme edge SoCs, TinyVers provides precision scalability and can thus exploit quantization for improved performance. Furthermore, by utilizing its support for block-structured sparsity, TinyVers can reach a peak efficiency of 17 TOPS/W for an 8-bit CNN layer. This is much higher than the efficiencies reported in [17], [48], [49]. Compared to the edge ML accelerators on the right, TinyVers shows much more flexibility at comparable energy efficiency and power consumption. The edgeML accelerators support only a single model or a few models, albeit extremely efficiently, which is a drawback when deploying extreme edge devices. For example, [50] can only perform KWS with a depthwise-separable CNN, and its performance is much lower than that of TinyVers at comparable energy efficiency. UNPU [51] supports only CNN and FC/RNN layers and is not a complete standalone SoC, which affects efficiency at the system level. Moreover, these edgeML accelerators cannot support any kind of duty cycling, as they lack power management and retention-memory support. TinyVers supports the multi-modal requirements of extreme edge devices at comparable energy efficiencies on the order of TOPS/W. Moreover, it adds extreme-low-power idle states for duty-cycling use cases, enabling < 10 µW operation, as shown empirically in Section VI-D. To summarize, TinyVers brings together the best of both worlds of extreme edge processors and edgeML accelerators.
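The sub-10 µW figure quoted for duty-cycled operation follows from a weighted average of active and idle power. As a first-order sketch (illustrative only: it ignores wake-up and state-transition overheads, and the 164 µW active and 1.7 µW deep-sleep figures are taken from the measurements reported above):

```python
# First-order duty-cycling power model: the system is active for a fraction
# `duty_cycle` of the time and in deep sleep for the remainder.
def average_power_uw(p_active_uw: float, p_sleep_uw: float, duty_cycle: float) -> float:
    """Time-weighted average power in microwatts."""
    return duty_cycle * p_active_uw + (1.0 - duty_cycle) * p_sleep_uw

# TinyVers machine-monitoring use case: <164 uW continuous inference,
# 1.7 uW deep sleep, duty cycle of 0.05.
p_avg = average_power_uw(p_active_uw=164.0, p_sleep_uw=1.7, duty_cycle=0.05)
print(f"{p_avg:.1f} uW")
```

This simple model lands in the same sub-10 µW range as the measured 9.5 µW; the small gap is expected, since the measurement reflects the actual active/sleep power profile rather than two constant levels.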
VIII. CONCLUSION

TinyML applications at the extreme edge need not only heterogeneous SoCs with flexible accelerators to support diverse workloads, but also adaptive power management for different duty-cycling operations. Moreover, enabling such adaptive power management requires embedded non-volatile memories. TinyVers extends a RISC-V core with a flexible ML accelerator supporting a diverse set of ML workload mappings across compute kernels, precisions, and structured-sparsity conditions. Furthermore, the inclusion of a WuC and an eMRAM enables the adaptive power management required in many duty-cycling use cases. Measurement results show that the chip achieves an energy efficiency range of 0.8-17 TOPS/W at throughputs from 0.58 GOPS to 17.6 GOPS. The different low-power modes enable a power range from 1.7 µW to 20 mW. The machine-monitoring application takes advantage of the deep sleep mode to consume only 9.5 µW at a duty cycle of 0.05. Thus, TinyVers takes a step towards creating a new class of ultra-low-power extreme edge SoCs.

ACKNOWLEDGMENTS

The authors would like to thank ETHZ for their support on the PULP platform and GlobalFoundries for 22FDX tapeout support. The work has been supported under the ISAAC project (FOD Economie Belgium Energietransitiefonds (oproep II)) in collaboration with Magics Technologies, and received funding from the Flemish Government (AI Research Program).

REFERENCES
[1] J. Portilla, G. Mujica, J.-S. Lee, and T. Riesgo, "The Extreme Edge at the Bottom of the Internet of Things: A Review," IEEE Sensors Journal, vol. 19, no. 9, pp. 3179–3190, 2019.
[2] Y. Zhang, N. Suda, L. Lai, and V. Chandra, "Hello Edge: Keyword Spotting on Microcontrollers," CoRR, vol. abs/1711.07128, 2017. [Online]. Available: http://arxiv.org/abs/1711.07128
[3] P. Warden and D. Situnayake, TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers. O'Reilly Media, 2019.
[4] TinyML. [Online]. Available: https://www.tinyml.org/
[5] P. Jokic, E. Azarkhish, R. Cattenoz, E. Türetken, L. Benini, and S. Emery, "A Sub-mW Dual-Engine ML Inference System-on-Chip for Complete End-to-End Face-Analysis at the Edge," in 2021 Symposium on VLSI Circuits, 2021, pp. 1–2.
[6] P. P. Bernardo, C. Gerum, A. Frischknecht, K. Lübeck, and O. Bringmann, "UltraTrail: A Configurable Ultralow-Power TC-ResNet AI Accelerator for Efficient Keyword Spotting," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 39, no. 11, pp. 4240–4251, 2020.
[7] J. S. P. Giraldo, S. Lauwereins, K. Badami, and M. Verhelst, "Vocell: A 65-nm Speech-Triggered Wake-Up SoC for 10-µW Keyword Spotting and Speaker Verification," IEEE Journal of Solid-State Circuits, vol. 55, no. 4, pp. 868–878, 2020.
[8] J. Giraldo and M. Verhelst, "Laika: A 5uW Programmable LSTM Accelerator for Always-on Keyword Spotting in 65nm CMOS," in ESSCIRC 2018 – IEEE 44th European Solid State Circuits Conference (ESSCIRC), 2018, pp. 166–169.
[9] D. Im, G. Park, Z. Li, J. Ryu, S. Kang, D. Han, J. Lee, and H.-J. Yoo, "DSPU: A 281.6mW Real-Time Depth Signal Processing Unit for Deep Learning-Based Dense RGB-D Data Acquisition with Depth Fusion and 3D Bounding Box Extraction in Mobile Platforms," in 2022 IEEE International Solid-State Circuits Conference (ISSCC), vol. 65, 2022, pp. 510–512.
[10] Y. Ju and J. Gu, "A 65nm Systolic Neural CPU Processor for Combined Deep Learning and General-Purpose Computing with 95% PE Utilization, High Data Locality and Enhanced End-to-End Performance," in 2022 IEEE International Solid-State Circuits Conference (ISSCC), vol. 65, 2022, pp. 1–3.
[11] H. Mo, W. Zhu, W. Hu, G. Wang, Q. Li, A. Li, S. Yin, S. Wei, and L. Liu, "9.2 A 28nm 12.1TOPS/W Dual-Mode CNN Processor Using Effective-Weight-Based Convolution and Error-Compensation-Based Prediction," in 2021 IEEE International Solid-State Circuits Conference (ISSCC), vol. 64, 2021, pp. 146–148.
[12] B. Moons, R. Uytterhoeven, W. Dehaene, and M. Verhelst, "14.5 Envision: A 0.26-to-10TOPS/W Subword-Parallel Dynamic-Voltage-Accuracy-Frequency-Scalable Convolutional Neural Network Processor in 28nm FDSOI," in 2017 IEEE International Solid-State Circuits Conference (ISSCC). IEEE, 2017, pp. 246–247.
[13] Y.-H. Chen, T. Krishna, J. S. Emer, and V. Sze, "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks," IEEE Journal of Solid-State Circuits, vol. 52, no. 1, pp. 127–138, 2016.
[14] GAPuino. [Online]. Available: https://greenwaves-technologies.com/product/gapuino/
[15] Arduino Nano BLE 33. [Online]. Available: https://docs.arduino.cc/hardware/nano-33-ble/
[16] Silicon Labs Thunderboard. [Online]. Available: https://www.silabs.com/development-tools/thunderboard/thunderboard-sense-two-kit
[17] D. Rossi, F. Conti, M. Eggiman, A. D. Mauro, G. Tagliavini, S. Mach, M. Guermandi, A. Pullini, I. Loi, J.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Chen, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Flamand, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Benini, “Vega: A Ten-Core SoC for IoT Endnodes With DNN Acceleration and Cognitive Wake-Up From MRAM-Based State-Retentive Sleep Mode,” IEEE Journal of Solid-State Circuits, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 57, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 127–139, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [18] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Jain, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Giraldo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Roose, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Boons, L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Mei, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Verhelst, “TinyVers: A 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='8-17 TOPS/W, 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='7 µW-20 mW, Tiny Versatile System- on-chip with State-Retentive eMRAM for Machine Learning Inference at the Extreme Edge,” in 2022 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 20–21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [19] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Bai, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Kolter, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Koltun, “An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling,” CoRR, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' abs/1803.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='01271, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Available: http://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='org/abs/1803.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='01271 [20] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Choi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Seo, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Shin, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Byun, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Kersner, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Kim, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Kim, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Ha, “Temporal Convolution for Real-time Keyword Spotting on Mobile Devices,” CoRR, vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' abs/1904.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='03814, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Available: http://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='org/abs/1904.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='03814 [21] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Ibrahim, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Huisken, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Fatemi, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Pineda de Gyvez, “Keyword Spotting using Time-Domain Features in a Temporal Convolutional Network,” in 2019 22nd Euromicro Conference on Digital System Design (DSD), 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 313–319.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [22] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Chow, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Su, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Wu, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Tan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Mao, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Wang, “Anomaly detection of defects on concrete structures with the convolutional autoen- coder,” Advanced Engineering Informatics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 45, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 101105, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [23] T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Duman, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Bayram, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' ˙Ince, “Acoustic anomaly detection us- ing convolutional autoencoders in industrial processes,” in International Workshop on Soft Computing Models in Industrial and Environmental Applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Springer, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 432–442.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [24] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Coelho, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Pereira, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Matos, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Ribeiro, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Nunes, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Ferreira, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Cortez, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Pilastri, “Deep Dense and Convolutional Autoencoders for Machine Acoustic Anomaly Detection,” in Artificial Intelligence Applications and Innovations, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Maglogiannis, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Macintyre, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Il- iadis, Eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Springer International Publishing, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 337–348.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [25] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Cortes and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Vapnik, “Support-vector networks,” Machine learning, vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 20, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 273–297, 1995.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [26] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Mei, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Houshmand, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Jain, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Giraldo, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Verhelst, “ZigZag: Enlarging Joint Architecture-Mapping Design Space Exploration for DNN Accelerators,” IEEE Transactions on Computers, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 70, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 1160–1174, 2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [27] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Noumir, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Honeine, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Richard, “On simple one-class classifica- tion methods,” in 2012 IEEE International Symposium on Information Theory Proceedings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' IEEE, 2012, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 2022–2026.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [28] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Perera, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Oza, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Patel, “One-Class Classification: A Survey,” CoRR, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' abs/2101.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='03064, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Available: https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='org/abs/2101.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='03064 [29] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Mao, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Han, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Pool, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Li, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Liu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Wang, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Dally, “Exploring the Regularity of Sparse Structure in Convolutional Neural Networks,” CoRR, vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' abs/1705.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='08922, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Available: http://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='org/abs/1705.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='08922 [30] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Hoefler, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Alistarh, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Ben-Nun, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Dryden, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Peste, “Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training in Neural Networks,” J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Mach.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 22, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 1, jan 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [31] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Kadetotad, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Yin, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Berisha, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Chakrabarti, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='-s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Seo, “An 8.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content='93 TOPS/W LSTM Recurrent Neural Network Accelerator Featuring Hierarchical Coarse-Grain Sparsity for On-Device Speech Recognition,” IEEE Journal of Solid-State Circuits, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 55, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' 1877–1887, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' [32] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Schiavone, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Rossi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Pullini, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Di Mauro, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tE1T4oBgHgl3EQf7wU5/content/2301.03537v1.pdf'} +page_content=' Conti, and L.' 