Abstract
The NNEF (Neural Network Exchange Format) is a de facto standard file format for neural network description and exchange. In this work, we present a simple implementation of an NNEF execution system, similar in spirit to a programming-language interpreter. While the original NNEF specification focuses on the data exchange format itself, our prototype implementation shows that the contents of an NNEF file can be executed directly on the underlying computing system.
1 Introduction
Machine learning and big-data analysis applications have recently become some of the most important issues in computation and data handling [2]. A distinguishing characteristic of their target data is that it is too large and/or too complex to handle with traditional data analysis tools. Thus, we need brand-new tools for such large-scale data processing.
Neural networks and their related tools are currently among the most suitable tools for machine learning applications and large-scale complex data analysis. Combining neural network tools with large-scale data processing, we can achieve remarkable results. As a result, we now have many neural network frameworks, including TensorFlow [1], Caffe [4], Keras [5], and others.
In the field of neural networks, the NNEF (Neural Network Exchange Format) [3] is one of the de facto standard file formats for exchanging trained data and the computation network itself, as shown in Fig. 1. NNEF file loaders and exporters are now widely used in a variety of areas, including neural network applications, big-data handling, vision processing, and others.
In this work, we present an approach to directly execute the operations described in NNEF files, similar to a programming-language interpreter. Our goal is to build a framework supporting both the execution and the translation of NNEF files. As a first step, we show that the computation results of NNEF files can be calculated directly by our prototype implementation of the NNEF interpreting system.
2 Design and Implementation
To ease NNEF file handling, the Khronos Group provides tools to generate and consume NNEF documents in its GitHub repository [6]. Among these tools, the NNEF parser provides a standard way of parsing standard NNEF files.
When successfully parsed, the Khronos NNEF parser generates a computational graph corresponding to the input NNEF file. An NNEF computational graph is actually represented with the following three major data structures:
- nnef::graph: contains the computational graph itself, which consists of lists of tensors and operations;
- nnef::tensor: contains a specific tensor, which can be an activation tensor or a variable tensor in the computational graph;
- nnef::operation: represents a single pre-defined operation in the computational graph.
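These three structures can be modeled with a minimal Python sketch. The class names mirror the C++ nnef:: types from the Khronos parser, but the fields shown here are our own illustrative simplification, not the exact Khronos API:

```python
from dataclasses import dataclass, field

@dataclass
class Tensor:
    # corresponds to nnef::tensor: a named tensor of arbitrary rank
    name: str
    shape: list          # arbitrary dimensions, per the NNEF specification
    data: list = None    # filled for variables, computed for activations

@dataclass
class Operation:
    # corresponds to nnef::operation: one pre-defined NNEF operation
    name: str                                    # e.g. "matmul", "relu"
    inputs: list = field(default_factory=list)   # names of input tensors
    outputs: list = field(default_factory=list)  # names of output tensors

@dataclass
class Graph:
    # corresponds to nnef::graph: lists of tensors and operations
    tensors: dict = field(default_factory=dict)  # name -> Tensor
    operations: list = field(default_factory=list)

# a one-operation graph: y = matmul(x, w)
g = Graph()
g.tensors["x"] = Tensor("x", [1, 784])
g.operations.append(Operation("matmul", ["x", "w"], ["y"]))
```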
The NNEF standard specification [3] allows tensors to have arbitrary dimensions. Additionally, the standard already specifies more than 100 operations, and extensions can add further operations.
Starting from these computational graphs, our system can perform the required actions, such as directly executing the NNEF computational graph or generating translated program code. Our overall design for this workflow is summarized in Fig. 2.
As the first step of our NNEF handling system, we targeted an NNEF interpreter: a single program executing the NNEF computations step by step. Our NNEF interpreter takes the computational graph from the Khronos NNEF parser as its most important input.
Next, it traverses all the NNEF operations (nnef::operation nodes) in the order given in the NNEF file. For each operation, it fetches all the corresponding input tensors (nnef::tensor nodes) and performs the computation. The results are stored back into tensors in the computational graph, so the tensor values are updated as the graph executes.
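The traversal described above amounts to a single dispatch loop: look up each operation's inputs, apply the operation, and write the results back. A minimal sketch, using plain dictionaries for the graph and an operation table of Python callables (all names here are our own illustration, not the prototype's actual code):

```python
def run_graph(graph, op_table):
    """Execute each operation in the order given in the NNEF file.

    graph: {"tensors": {name: value},
            "operations": [(op_name, input_names, output_names)]}
    op_table: {op_name: callable implementing that NNEF operation}
    """
    for op_name, inputs, outputs in graph["operations"]:
        # fetch the corresponding input tensors
        args = [graph["tensors"][n] for n in inputs]
        # do the computation
        result = op_table[op_name](*args)
        results = result if isinstance(result, tuple) else (result,)
        # store the results back into the computational graph
        for name, value in zip(outputs, results):
            graph["tensors"][name] = value

# usage: a two-operation graph computing y = relu(x + b)
ops = {"add": lambda a, b: [u + v for u, v in zip(a, b)],
       "relu": lambda a: [max(v, 0.0) for v in a]}
g = {"tensors": {"x": [1.0, -2.0], "b": [0.5, 0.5]},
     "operations": [("add", ["x", "b"], ["t"]), ("relu", ["t"], ["y"])]}
run_graph(g, ops)
# g["tensors"]["y"] is now [1.5, 0.0]
```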
3 Results
As an example case, we selected the well-known MNIST handwriting recognition example [8] and used the very simple Python implementation available at [7].
The input of the example NNEF file is a 1D array of 784 floating-point numbers, representing a 28-by-28 pixel image of a handwritten decimal digit from 0 to 9. Using the trained weights of the internal neural network, loaded from the disk files “mnist-wih” and “mnist-who,” we can calculate the results given in Fig. 3.
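The computation behind this example is a plain two-layer feedforward pass: multiply the pixels by the input-to-hidden weights (“mnist-wih”), apply the sigmoid, multiply by the hidden-to-output weights (“mnist-who”), apply the sigmoid again, and take the largest of the ten outputs as the recognized digit. A minimal sketch with tiny stand-in matrices (the real weights come from the trained files; the real layer sizes are 784 inputs and 10 outputs):

```python
import math

def matvec(m, v):
    # matrix-vector product over plain Python lists
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def query(wih, who, pixels):
    """Forward pass of the MNIST network: inputs -> hidden -> outputs."""
    hidden = sigmoid(matvec(wih, pixels))
    return sigmoid(matvec(who, hidden))

# tiny stand-in weights: 3 inputs, 2 hidden nodes, 2 outputs
wih = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
who = [[0.7, 0.8], [0.9, 1.0]]
out = query(wih, who, [0.5, 0.5, 0.5])
digit = out.index(max(out))   # index of the largest output = recognized class
```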
After executing the example NNEF file in Fig. 3 with selected samples as input, our NNEF interpreter produces the results shown in Fig. 4. Since these results are identical to those of the original Python program, we conclude that our NNEF interpreter works correctly, at least for the MNIST examples.
4 Conclusion
For machine learning applications, big-data handling, and related work, the most intuitive computational tools may be those based on neural networks. To support neural network computations, the Khronos Group provides NNEF as a de facto standard.
At this time, most NNEF handling tools focus on data loading and file format conversion. In contrast, we directly execute the contents of NNEF files by interpreting them. Our prototype implementation shows that this approach works for real examples. Other kinds of NNEF handling tools, including code generators and NNEF training tools, are also possible in the near future.
References
1. M. Abadi, et al., TensorFlow: Large-scale machine learning on heterogeneous systems (2015). White paper available from tensorflow.org
2. S.L. Brunton, J.N. Kutz, Data-Driven Science and Engineering (Cambridge University Press, Cambridge, 2019)
3. The Khronos NNEF Working Group, Neural Network Exchange Format, version 1.0.1 (Khronos Group, Beaverton, 2019)
4. Y. Jia, et al., Caffe: Convolutional architecture for fast feature embedding, in Proceedings of the 22nd ACM International Conference on Multimedia (MM ’14) (2014)
5. Keras homepage. http://www.keras.io. Retrieved Jun 2020
6. KhronosGroup/NNEF-Tools. https://github.com/khronosgroup/nnef-tools. Retrieved Jun 2020
7. Make Your Own Neural Network code repository. https://github.com/makeyourownneuralnetwork/makeyourownneuralnetwork. Retrieved Jun 2020
8. T. Rashid, Make Your Own Neural Network (CreateSpace Independent Publishing Platform, Scotts Valley, 2016)
Acknowledgements
This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No. NRF-2019R1I1A3A01061310).
© 2021 Springer Nature Switzerland AG

Baek, N. (2021). A Prototype Implementation of the NNEF Interpreter. In: Arabnia, H.R., Ferens, K., de la Fuente, D., Kozerenko, E.B., Olivas Varela, J.A., Tinetti, F.G. (eds) Advances in Artificial Intelligence and Applied Cognitive Computing. Transactions on Computational Science and Computational Intelligence. Springer, Cham. https://doi.org/10.1007/978-3-030-70296-0_51

Print ISBN: 978-3-030-70295-3. Online ISBN: 978-3-030-70296-0.