
Long-Short-Term Memory Based on Adaptive Convolutional Network for Time Series Classification

Neural Processing Letters

Abstract

Deep learning is effective for time series classification tasks, but existing deep learning algorithms with fixed-stride convolutions cannot effectively extract and focus on multi-scale features. Motivated by the complexity and long-term dependencies of time series data, an end-to-end model, called Adaptive Convolutional Network Long-Short-Term Memory (ACN-LSTM), is proposed in this paper. The network is composed of two branches: a long-short-term memory (LSTM) branch and an adaptive convolutional neural network (ACN) branch. The LSTM branch uses memory cells and a gate mechanism to control the transmission of sequence information, fully extract the correlation information of the time series, and enhance the discriminative power of the network. The ACN branch obtains local features of the time series by stacking one-dimensional convolutional blocks (Conv1D) and then uses a multi-scale convolutional block to capture information at different scales. In addition, an inter-layer adaptive channel feature adjustment mechanism (ACFM) is proposed to adaptively adjust the feature information between layers. ACN-LSTM not only fully extracts long-term temporal correlations but also allows neurons to adaptively adjust their receptive field sizes, yielding more accurate classification results. Experiments on 65 UCR standard datasets show that ACN-LSTM achieves the best arithmetic mean rank (2.492), the best geometric mean rank (2.108), and the lowest mean error (0.127) among the compared models, indicating that it is effective for univariate time series classification.
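The two-branch layout described above can be made concrete with a short sketch. The following Keras code is a minimal illustration, not the authors' implementation: the function name build_acn_lstm, the unit counts, layer depths, and kernel sizes are all assumptions, and a squeeze-and-excitation-style channel gate is used as a stand-in for the paper's ACFM.

from tensorflow.keras import layers, Model

def build_acn_lstm(seq_len: int, n_classes: int) -> Model:
    inp = layers.Input(shape=(seq_len, 1))

    # LSTM branch: memory cells and gates retain long-term dependencies.
    lstm_out = layers.LSTM(128)(inp)           # unit count is an assumption
    lstm_out = layers.Dropout(0.5)(lstm_out)

    # ACN branch: stacked Conv1D blocks extract local features.
    x = inp
    for filters in (64, 128):                  # depth/widths are assumptions
        x = layers.Conv1D(filters, 8, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)

    # Multi-scale block: parallel kernels capture different scales.
    branches = [layers.Conv1D(64, k, padding="same", activation="relu")(x)
                for k in (3, 5, 8)]            # kernel sizes are assumptions
    x = layers.Concatenate()(branches)

    # Stand-in for ACFM: squeeze-and-excitation-style gating that
    # adaptively re-weights feature channels between layers.
    c = x.shape[-1]
    w = layers.GlobalAveragePooling1D()(x)
    w = layers.Dense(c // 4, activation="relu")(w)
    w = layers.Dense(c, activation="sigmoid")(w)
    x = layers.Multiply()([x, layers.Reshape((1, c))(w)])
    x = layers.GlobalAveragePooling1D()(x)

    # Fuse the two branches and classify.
    merged = layers.Concatenate()([lstm_out, x])
    out = layers.Dense(n_classes, activation="softmax")(merged)
    return Model(inp, out)

# Example usage with an arbitrary input length and class count:
model = build_acn_lstm(seq_len=152, n_classes=5)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

The key design point the sketch reflects is that the LSTM and ACN branches process the raw series in parallel and are fused only at the classifier, so temporal-gating and multi-scale convolutional features complement rather than overwrite each other.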




Acknowledgements

This work was partially supported by the Natural Science Foundation of Hubei Province (No. 2020CFB546), National Natural Science Foundation of China under Grants 12001411, 12201479, and the Fundamental Research Funds for the Central Universities (WUT: 2021IVB024, 2020-IB-003).

Author information

Contributions

Yujuan Li contributed to the conception of the study and performed the experiments; Yonghong Wu helped perform the analysis with constructive discussions.

Corresponding author

Correspondence to Yonghong Wu.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, Y., Wu, Y. Long-Short-Term Memory Based on Adaptive Convolutional Network for Time Series Classification. Neural Process Lett 55, 6547–6569 (2023). https://doi.org/10.1007/s11063-023-11148-w
