Abstract
Tensor Product Variable Binding is an important step in bridging the connectionist approach and the symbolic paradigm. It represents recursive symbolic structures as tensors, a form directly amenable to neural networks; because such networks are highly distributed in nature, this representation promises computational benefits. However, the practical aspects of implementing tensor binding with modern neural frameworks are not covered in published research. In this work, we build a network topology that performs the binding operation in the well-known Keras framework. We also analyse the proposed solution in terms of its applicability to other important connectionist aspects of Tensor Product Variable Binding. The proposed design of the binding network is a first step towards expressing arbitrary symbolic structures and operations in neural form, which would make it possible to replace traditional decision-making algorithms with neural networks that bring scalability, robustness and guaranteed performance.
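The binding operation the abstract refers to, following Smolensky's Tensor Product Variable Binding, pairs a filler vector with a role vector via their outer product. The paper's own Keras topology is not reproduced here; the snippet below is only a minimal sketch of how such a binding layer could look in Keras, with the dimensions `FILLER_DIM` and `ROLE_DIM` chosen arbitrarily for illustration.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Illustrative dimensions (not taken from the paper):
# fillers live in R^3, roles in R^2.
FILLER_DIM, ROLE_DIM = 3, 2

filler = keras.Input(shape=(FILLER_DIM,), name="filler")
role = keras.Input(shape=(ROLE_DIM,), name="role")

# Tensor product binding: the outer product filler (x) role,
# computed per batch element, yielding a FILLER_DIM x ROLE_DIM matrix.
binding = keras.layers.Lambda(
    lambda t: tf.einsum("bi,bj->bij", t[0], t[1]), name="bind"
)([filler, role])

model = keras.Model(inputs=[filler, role], outputs=binding)

# Example: bind the filler [1, 2, 3] to the role [1, 0].
f = np.array([[1.0, 2.0, 3.0]])
r = np.array([[1.0, 0.0]])
out = model.predict([f, r], verbose=0)
```

Summing several such bindings then encodes a whole filler/role structure in a single tensor, which is the sense in which recursive symbolic structures become representable in neural form.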
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Demidovskij, A. (2020). Implementation Aspects of Tensor Product Variable Binding in Connectionist Systems. In: Bi, Y., Bhatia, R., Kapoor, S. (eds) Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing, vol 1037. Springer, Cham. https://doi.org/10.1007/978-3-030-29516-5_9
DOI: https://doi.org/10.1007/978-3-030-29516-5_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-29515-8
Online ISBN: 978-3-030-29516-5
eBook Packages: Intelligent Technologies and Robotics (R0)