Abstract
Using neural networks in hyperspectral imaging helps overcome obstacles in solving data analysis, classification, and segmentation problems. There are problems, such as vegetation analysis in agriculture, that cannot be solved using classic RGB images due to a lack of information. Applying neural networks to hyperspectral images is a sophisticated problem. The aim of this study is to examine concerns about using convolutional neural networks for the semantic segmentation of hyperspectral data. The following problems were considered: large spatial resolution and the influence of the neural network's input size on accuracy and performance; hyperspectral data preprocessing and the influence of dimensionality reduction and brightness equalization; and the influence of neural network architecture on analyzing hyperspectral imaging. In addition, the accuracy of neural networks was compared to classic approaches: multinomial logistic regression, the random forest algorithm, and discriminant analysis. As a result of the study, the importance of choosing the neural network's architecture and hyperspectral data preprocessing methods is discussed.
1 INTRODUCTION
A hyperspectral image is an image comprising a wide spectrum of light instead of the classic red, green, and blue colors depicted in common RGB images. A visualization of a hyperspectral image is shown in Fig. 1. In hyperspecters, each band (layer or channel) depicts the intensity of light at a certain wavelength. Thus, hyperspectral images carry rich spatial and spectral information. Considering information from the infrared spectrum opens new opportunities in data analysis and helps to solve issues that were unsolvable before. For example, using hyperspectral data may help in agriculture, since near-infrared layers contain significant information required for determining the normalized difference vegetation index (NDVI) [1]. NDVI can be computed using the following formula:

NDVI = (NIR − RED) / (NIR + RED),

where RED is the hyperspecter's layer representing the red wavelength, and NIR is the hyperspecter's layer representing the near-infrared wavelength.
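As an illustration (not part of the original paper), the NDVI formula can be computed per pixel with NumPy; the toy band arrays and the epsilon guard against division by zero are assumptions of this sketch:

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Compute NDVI = (NIR - RED) / (NIR + RED) per pixel.

    `red` and `nir` are 2D arrays holding the red and near-infrared
    band intensities; a small epsilon guards against division by zero
    on dark pixels.
    """
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + 1e-12)

# Dense vegetation reflects strongly in NIR, so NDVI approaches 1;
# bare soil reflects red and NIR similarly, so NDVI stays near 0.
red = np.array([[0.05, 0.30]])
nir = np.array([[0.50, 0.35]])
print(ndvi(red, nir))
```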
A visualization of NDVI is shown in Fig. 2, and Fig. 3 explains the meaning of NDVI values. This index is widely used to measure crop health [2]. Hyperspecters are also useful in the medical field: they can help to determine perfusion parameters of tissue and wounds [3], to detect brain tumors [4], and to detect head and neck cancer [5]. In the food industry, for example, hyperspecters can help to assess the chemical composition and texture of meat [6]. Another field where hyperspecters can improve quality of life is geometallurgy: here, hyperspecters can substitute whole labs, since they can be used for investigating orebodies [7]. In this paper, the agricultural field is considered mainly.
Usually, classic approaches are used to perform hyperspectral data analysis. Algorithms such as logistic regression [8–10], random forest [11], clustering [12], dimensionality reduction [13], and discriminant analysis [14] can perform well in different tasks of hyperspectral data classification and segmentation. Recently, neural networks have become very popular in many fields of computer vision, but their application to hyperspectral data analysis has not been sufficiently studied. There are several reasons for this. The first is the lack of labeled data: special expertise in hyperspectral data understanding is required for the labeling process. The most popular HSI (hyperspectral imaging) benchmark consists of just four hyperspecters [15], whereas RGB datasets usually comprise millions of labeled images [16]. Another issue is the specificity of hyperspectral data. Hyperspecters have large spatial resolution and hundreds of bands, so there is a need for powerful algorithms that can generalize such an amount of data and for special preprocessing algorithms that take the hyperspecter's nature into account. The last issue we mention is the process of hyperspectral data capturing. This process can take a significant amount of time, during which the weather conditions may change; the weather can significantly affect the result of capturing. So, the preprocessing and analysis algorithms should take this issue into account.
2 RELATED WORKS
The most popular approaches to analyzing hyperspectral data include classic algorithms such as logistic regression, random forest, and discriminant analysis.
Logistic regression can be applied in a naive way, i.e., to each pixel of a hyperspectral image, and it will work. In most cases, considering multi-label tasks, multinomial logistic regression (MLR) is used. Most recent studies combine MLR models with additional algorithms or use MLR for preprocessing steps. For instance, in [17] the authors considered a subspace-based MLR algorithm that enhances class separability by using class-dependent subspace feature vectors. These feature vectors help to manage nonlinearities and better characterize noise and mixed pixels.
Random forest (RF) can also be used naively to classify each pixel of a hyperspecter. This approach gives results similar to those of MLR. In recent studies, RF is used for data preprocessing. For example, in [13] the RF algorithm is applied to hyperspectral images in order to estimate feature importance; the features with the highest importance can then be used for image segmentation. Here, the researchers used the RF algorithm for dimensionality reduction. In [18] the researchers propose several algorithms based on RF ensembles.
In [19] different variations of linear discriminant analysis (LDA) were studied. Through exhaustive tests, the authors show the effectiveness of LDA, especially their modification named regularized LDA, for ill-posed hyperspectral image classification tasks. The tasks are ill-posed because of the small number of training samples relative to the number of spectral features (bands). In [20] the authors suggest a combination of local linear feature extraction methods and LDA. Their framework is designed to incorporate information inferred from unlabeled samples while simultaneously maximizing the class discrimination inferred from the labeled samples. In [14] the authors suggest using Independent Component Discriminant Analysis (ICDA) to find a transform matrix that makes the components as independent as possible. The transformed components can then be used to find the Bayes rule for classification.
Another specific type of algorithm for hyperspectral data analysis is indices, such as vegetation indices. Indices use hyperspectral bands to enhance the target properties of the image. The process of index creation requires solid knowledge of physics and extensive experience in the hyperspectral field; hence, index creation is a complicated task. Nevertheless, there are methods that allow building custom indices for specific cases. One such method builds the Informative Normalized Difference Index (INDI) [21].
Recently, neural networks have been studied as well. The most common application of neural networks is the pixel-wise (1D) strategy. In [22–24] the authors applied convolutional neural networks to classify single pixels of a hyperspectral image. In these papers, the authors used convolutional neural networks with architectures such as AlexNet [25], VGG [26], and GoogLeNet [27], which use dense layers at the end of the net. These architectures are rather old nowadays. Since the authors use a pixel-wise strategy, the neural networks miss information from neighboring pixels.
In the most recent papers, the authors use fully convolutional architectures such as Unet [28], which combines down-sampling and up-sampling paths. In [29] the authors suggest using a special multi-feature fusion block as input instead of a raw hyperspecter. According to the results of their study, this block improves the overall accuracy of fully convolutional networks in hyperspectral analysis.
In [30] the authors compared 1D-CNN (pixel-wise strategy), 2D-CNN (a fully convolutional NN like Unet), and 3D-CNN (the same as the previous one, but using 3D convolutions). According to their study, there is no big difference between the architectures, but the 2D-CNN and 3D-CNN produce slightly better results in most cases.
In [31] the authors also compare the 1D-CNN and 2D-CNN approaches. In addition, they study the influence of a special feature selection layer that discards non-informative and noisy bands of a hyperspecter, as well as the influence of separating the visual and near-infrared spectra into different streams. According to the results of their study, the 2D-CNN has better accuracy in all the experiments, and there is no big difference when using a dual-stream neural network.
In this work, different architectures were compared with classic approaches, as well as the influence of the CNN's input size and of data preprocessing methods.
3 EXPERIMENTS SETUP
3.1 Dataset
A dataset of labeled hyperspectral images [32] was used for the experiments. The dataset comprises 385 hyperspecters with 236 bands covering wavelengths from 420 to 979 nm. Each hyperspecter is labeled with a segmentation mask using 16 classes, excluding the background class: apple cucumber (I), beet (II), cabbage (III), carrot (IV), corn (V), cucumber (VI), eggplant (VII), grass (VIII), milkweed (IX), oats (X), pepper (XI), potato (XII), amaranth (XIII), strawberry (XIV), soy (XV), and tomato (XVI). The dataset was manually divided into train and test sets. Figure 4 shows the class distribution of the train set; the distribution is unbalanced. Hyperspectral images were normalized using standardization.
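The standardization step can be sketched as band-wise zero-mean, unit-variance scaling. The paper does not specify the normalization beyond "standardization", so the per-band variant below is an assumption:

```python
import numpy as np

def standardize(cube: np.ndarray) -> np.ndarray:
    """Standardize a hyperspectral cube of shape (H, W, B) band by band:
    subtract each band's mean and divide by its standard deviation."""
    mean = cube.mean(axis=(0, 1), keepdims=True)
    std = cube.std(axis=(0, 1), keepdims=True)
    return (cube - mean) / (std + 1e-12)

cube = np.random.rand(64, 64, 236)   # toy cube with 236 bands
z = standardize(cube)
print(z.mean(axis=(0, 1)).round(6))  # ~0 for every band
print(z.std(axis=(0, 1)).round(6))   # ~1 for every band
```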
3.2 Classic Algorithms Setup
For the experiments, we used the following algorithms: logistic regression, discriminant analysis, and random forest. All the algorithms were trained using 1% of the pixels from every hyperspecter, and all results were validated using k-fold cross-validation with five folds.
In the experiment, a multinomial logistic regression (MLR) was used. MLR was trained using the cross-entropy loss and L2 penalty.
To perform discriminant analysis, a quadratic discriminant analysis was used.
The random forest classifier was used with the Gini criterion; the number of trees in the forest was 100, and the maximum depth of the trees was selected automatically, expanding nodes until all leaves are pure or contain few samples.
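A minimal scikit-learn sketch of this setup, using synthetic data in place of the real hyperspecters. The array shapes, class count, and random 1% sampling below are assumptions for illustration; hyperparameters beyond those stated in the text are library defaults:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for one hyperspecter (H, W, B) and its segmentation mask.
H, W, B, n_classes = 128, 128, 16, 3
cube = rng.random((H, W, B))
mask = rng.integers(0, n_classes, size=(H, W))

# Flatten to per-pixel samples and draw 1% of them for training.
X = cube.reshape(-1, B)
y = mask.reshape(-1)
idx = rng.choice(len(X), size=len(X) // 100, replace=False)

models = {
    "MLR": LogisticRegression(penalty="l2", max_iter=1000),
    "RFC": RandomForestClassifier(n_estimators=100, criterion="gini"),
    "QDA": QuadraticDiscriminantAnalysis(),
}
for name, model in models.items():
    model.fit(X[idx], y[idx])
    print(name, model.predict(X).shape)  # a predicted label for every pixel
```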
3.3 Neural Networks Setup
All the neural networks were trained using Focal Loss [33] with a gamma parameter of 5.5. This loss function was chosen because it was developed for cases with class imbalance; the gamma value was chosen based on a separate set of experiments. Neural networks were trained with different numbers of epochs and batch sizes, depending on the neural network's input size. The learning rate had an initial value of 0.001 and was adjusted using cosine annealing with warm restarts [34] with parameters t_0 = 2, t_mult = 1. Hyperspecters were dynamically augmented using rotation by a random angle and horizontal/vertical flips. Neural networks were trained using the PyTorch framework [35].
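The focal loss with gamma = 5.5 can be sketched in NumPy as follows. The paper uses a PyTorch implementation during training; this standalone version only illustrates the formula FL(p_t) = −(1 − p_t)^γ · log(p_t):

```python
import numpy as np

def focal_loss(probs: np.ndarray, targets: np.ndarray, gamma: float = 5.5) -> float:
    """Multi-class focal loss, FL(p_t) = -(1 - p_t)^gamma * log(p_t),
    averaged over samples. `probs` has shape (N, C) with softmax outputs;
    `targets` holds integer class labels of shape (N,)."""
    p_t = probs[np.arange(len(targets)), targets]  # probability of the true class
    p_t = np.clip(p_t, 1e-12, 1.0)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

# A confident correct prediction (0.9) is down-weighted by (1 - p_t)^gamma,
# so the harder example (0.4) dominates the loss.
probs = np.array([[0.9, 0.1], [0.6, 0.4]])
targets = np.array([0, 1])
print(focal_loss(probs, targets))
```

With gamma = 0 the modulating factor disappears and the expression reduces to ordinary cross-entropy, which is a convenient sanity check.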
4 ARCHITECTURE OF NEURAL NETWORK
4.1 Used Architectures
In this experiment, two distinct architectures were used. Unet is a classic neural network for semantic segmentation. Its distinguishing feature is its fully convolutional nature: Unet comprises down-sampling and up-sampling paths, where the down-sampling path extracts features from an image and the up-sampling path performs class localization. Many experiments show the predominance of Unet-like architectures in segmentation tasks on RGB images. The second architecture is inspired by L2Net [36]. We developed our architecture, shown in Fig. 5, to achieve two goals. First, we wanted a small neural network with few weights, which allows faster training and makes the network more robust and better at generalizing. Second, we wanted to try a completely different architecture: ours has no bottleneck and no up-sampling or down-sampling paths, which is quite unusual nowadays.
4.2 Experiment Description
In this experiment, a comparison between different neural networks and classic approaches is made. The classic approaches include multinomial logistic regression (MLR), a random forest classifier (RFC), and quadratic discriminant analysis (QDA).
As the input of the neural networks, hyperspecters processed with PCA [37] were used. We chose 17 principal components because there are 17 classes, including the background class. PCA is a commonly used dimensionality reduction algorithm, and hyperspectral images are usually preprocessed with it to significantly decrease the number of bands. The classic algorithms were trained on the original hyperspecters using 1% of the total pixels from each hyperspecter.
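The PCA preprocessing can be sketched with scikit-learn. Flattening the cube so that each pixel's spectrum is one sample is an assumption about how the projection is applied:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(cube: np.ndarray, n_components: int = 17) -> np.ndarray:
    """Project a hyperspectral cube (H, W, B) onto its first
    `n_components` principal components, returning (H, W, n_components)."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)  # one sample per pixel, one feature per band
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)

cube = np.random.rand(32, 32, 236)  # toy cube with 236 bands
print(reduce_bands(cube).shape)     # (32, 32, 17)
```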
4.3 Result of the Experiment
Table 1 shows the results of the experiment in terms of the F1 metric. The neural network with our architecture showed the best result. Because of the small number of trainable parameters, it learns more general features and does not tend to overfit. Unet showed a worse result, even worse than the classic approaches, because it has many trainable parameters and tends to overfit. The RFC and MLR models showed similar performance, worse than the neural network with the proposed architecture. For some classes, the classic approaches perform better than the neural network: for classes IV, V, VI, VII, IX, and XVI the neural network has a worse F1 value. However, for other classes the neural network performs much better: on classes I, VIII, and X the neural network with the proposed architecture performs significantly better than the other algorithms.
From this study, we can conclude that neural networks perform better than classical algorithms. We can also say that the choice of architecture significantly affects the result: the Unet architecture may not be suitable for our task, whereas the proposed architecture showed better results and may be more suitable for our case.
5 DIMENSIONALITY REDUCTION
5.1 Experiment Description
In this experiment, different approaches to hyperspecter preprocessing in terms of dimensionality reduction were compared. We study different neural network inputs: hyperspecters preprocessed with the PCA algorithm, the RGB components of hyperspecters, and raw hyperspecters. The PCA algorithm was used with 17 principal components, as in the previous experiment. This experiment was performed for both Unet and the proposed architecture.
5.2 Results of the Experiment
The results of the experiment are shown in Table 2. The main conclusion is that PCA impacts neural networks significantly; hence, dimensionality reduction is extremely important for hyperspectral images. In contrast, neural networks that take RGB components as input show substantially worse results, so we can conclude that the current task is unsolvable with RGB images alone. Likewise, the neural networks trained on hyperspecters with all bands show the worst results in the experiment. The main reason is the large amount of information stored in the 236 original bands: in such a case, the neural network does not generalize the knowledge.
6 HYPERSPECTER EQUALIZATION
6.1 Experiment Description
In this experiment, the influence of equalization of hyperspecters was studied. The results of training neural networks on hyperspecters preprocessed with CLAHE and with histogram equalization were compared to the results of a neural network trained on non-equalized hyperspecters. The prerequisites for using equalization are non-uniform brightness and the hyperspecter capturing process, which is susceptible to weather influence. In the experiment, only two architectures were used: Unet and the proposed one. Before equalization, the PCA algorithm was applied.
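Global histogram equalization of a single band can be sketched in NumPy as below; the band range and bin count are assumptions of this sketch. CLAHE itself (contrast-limited, tile-based equalization) is usually applied through a library implementation such as OpenCV's `createCLAHE` and is not reproduced here:

```python
import numpy as np

def hist_equalize(band: np.ndarray, n_bins: int = 256) -> np.ndarray:
    """Global histogram equalization of a single band scaled to [0, 1]:
    map each intensity through the normalized cumulative histogram so
    that output intensities are spread more uniformly."""
    hist, _ = np.histogram(band, bins=n_bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize the CDF to [0, 1]
    bins = np.clip((band * (n_bins - 1)).astype(int), 0, n_bins - 1)
    return cdf[bins]

band = np.random.beta(2, 8, size=(64, 64))  # skewed toy brightness values
eq = hist_equalize(band)
print(eq.min(), eq.max())  # output stays within [0, 1]
```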
6.2 Results of the Experiment
The results of the experiment are shown in Table 3. According to the results, histogram equalization makes the neural network perform worse than without equalization. On the other hand, CLAHE allows training neural networks with the same performance as networks trained on the original hyperspecters, or slightly better.
7 SPATIAL SHAPE OF THE NEURAL NETWORK’S INPUT
7.1 Experiment Description
In this experiment, the input shape of the neural network was studied. This is an important decision for which researchers should consider both computing power and neural network performance. Different shapes were tested: 1 × 1 (pixel-wise), 2 × 2, 4 × 4, 8 × 8, 16 × 16, 32 × 32, 64 × 64, 128 × 128, 256 × 256, and 512 × 512 (the original shape). To satisfy the input shape of the neural network, the original hyperspecters were cropped using the sliding-window method. This experiment sheds light on the difference between the pixel-wise strategy and common semantic segmentation. As in the previous experiments, the hyperspecters were preprocessed using the PCA algorithm, and two architectures were used: Unet and the proposed one. The main idea is that images of the original size may provide substantial semantic information, leading to better accuracy.
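The sliding-window cropping can be sketched as follows; the non-overlapping stride and the handling of borders (windows that do not fit are dropped) are assumptions, since the paper does not state whether the windows overlap:

```python
import numpy as np

def sliding_crops(cube: np.ndarray, size: int, stride: int = None) -> np.ndarray:
    """Cut an (H, W, B) hyperspecter into size x size patches with a
    sliding window; the stride defaults to the window size (no overlap)."""
    stride = stride or size
    h, w, _ = cube.shape
    crops = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            crops.append(cube[y:y + size, x:x + size])
    return np.stack(crops)

cube = np.random.rand(512, 512, 17)  # PCA-reduced toy hyperspecter
patches = sliding_crops(cube, 128)
print(patches.shape)                 # (16, 128, 128, 17)
```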
7.2 Results of the Experiment
The results of the experiments are shown in Table 4. According to this study, the best results were acquired using a 128 × 128 input shape. Notably, the difference between the results of the pixel-wise strategy and of full-size images is not as big as expected. Shapes from 2 × 2 up to 128 × 128 allow achieving the best results; nonetheless, the training processes differ strikingly. Training the neural network on big images, even at 128 × 128 resolution, requires much more time than at smaller resolutions. On the other hand, dealing with cropped images requires training pipeline modifications, i.e., additional time may be spent on data preparation.
8 CONCLUSIONS
This paper presents a set of experiments studying the use of neural networks for hyperspectral image segmentation. The experiments show that neural networks can achieve good results, but the specificity of hyperspecters should be considered. The architecture and dimensionality reduction play the most important roles in hyperspecter segmentation. As the experiments show, the classic Unet architecture could not achieve better results than algorithms such as logistic regression and discriminant analysis, whereas the architecture proposed in the paper achieves the best results among all the models. According to the study, dimensionality reduction with PCA is a significant step in hyperspecter preprocessing, while band equalization has no significant impact on the models' accuracy. The spatial resolution of the neural networks' input is also worth mentioning: different input shapes significantly affect both the training process and the accuracy, and the experiments show a tradeoff between training time and data preparation time. The experiments also show that neither the pixel-wise strategy nor semantic segmentation of the full hyperspecter achieves the best result: the pixel-wise strategy lacks information about neighboring pixels, while for full hyperspecters the neural networks should have a proper receptive field. Based on the experiments, probably the best starting point for hyperspecter semantic segmentation is a small neural network with a simple architecture, a small input shape from 32 × 32 up to 128 × 128, and preprocessing of the hyperspecters with the PCA algorithm.
REFERENCES
Asrar, G., Fuchs, M., Kanemasu, E.T., and Hatfield, J.L., Estimating absorbed photosynthetic radiation and leaf area index from spectral reflectance in wheat, Agron. J., 1984, vol. 76, no. 2, pp. 300–306.
Yang, W., Yang, C., Hao, Z., Xie, C., and Li, M., Diagnosis of plant cold damage based on hyperspectral imaging and convolutional neural network, IEEE Access, 2019, vol. 7, pp. 118239–118248.
Kulcke, A., Holmer, A., Wahl, P., Siemers, F., Wild, T., and Daeschlein, G., A compact hyperspectral camera for measurement of perfusion parameters in medicine, Biomed. Eng./Biomed. Technik, 2018, vol. 63, no. 5, pp. 519–527.
Fabelo, H. et al., Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations, PLoS One, 2018, vol. 13, no. 3, p. e0193721.
Halicek, M. et al., Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging, J. Biomed. Opt., 2017, vol. 22, no. 6, p. 060503.
Reis, M.M. et al., Chemometrics and hyperspectral imaging applied to assessment of chemical, textural and structural characteristics of meat, Meat Sci., 2018, vol. 144, pp. 100–109.
Barton, I.F., Gabriel, M.J., Lyons-Baral, J., Barton, M.D., Duplessis, L., and Roberts, C., Extending geometallurgy to the mine scale with hyperspectral imaging: A pilot study using drone- and ground-based scanning, Min., Metall. Explor., 2021, vol. 38, no. 2, pp. 799–818.
Li, J., Bioucas-Dias, J.M., and Plaza, A., Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning, IEEE Trans. Geosci. Remote Sens., 2010, vol. 48, no. 11, pp. 4085–4098.
Li, J., Bioucas-Dias, J.M., and Plaza, A., Spectral–spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields, IEEE Trans. Geosci. Remote Sens., 2011, vol. 50, no. 3, pp. 809–823.
Wu, Z., Wang, Q., Plaza, A., Li, J., Sun, L., and Wei, Z., Real-time implementation of the sparse multinomial logistic regression for hyperspectral image classification on GPUs, IEEE Geosci. Remote Sens. Lett., 2015, vol. 12, no. 7, pp. 1456–1460.
Amini, S., Homayouni, S., Safari, A., and Darvishsefat, A.A., Object-based classification of hyperspectral data using Random Forest algorithm, Geo-spatial Inf. Sci., 2018, vol. 21, no. 2, pp. 127–138.
Zimichev, E.A., Kazanskiy, N.L., and Serafimovich, P.G., Spectral-spatial classification with k-means++ partitional clustering, Comput. Opt., 2014, vol. 38, no. 2, pp. 281–286.
Myasnikov, E.V., Hyperspectral image segmentation using dimensionality reduction and classical segmentation approaches, Comput. Opt., 2017, vol. 41, no. 4, pp. 564–572.
Villa, A., Benediktsson, J.A., Chanussot, J., and Jutten, C., Hyperspectral image classification with independent component discriminant analysis, IEEE Trans. Geosci. Remote Sens., 2011, vol. 49, no. 12, pp. 4865–4876.
Graña, M., Veganzones, M.A., and Ayerdi, B., Hyperspectral Remote Sensing Scenes. Accessed March 1, 2022. [Online]. Available: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L., ImageNet: A large-scale hierarchical image database, in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
Khodadadzadeh, M., Li, J., Plaza, A., and Bioucas-Dias, J.M., A subspace-based multinomial logistic regression for hyperspectral image classification, IEEE Geosci. Remote Sens. Lett., 2014, vol. 11, no. 12, pp. 2105–2109.
Xia, J., Ghamisi, P., Yokoya, N., and Iwasaki, A., Random forest ensembles and extended multiextinction profiles for hyperspectral image classification, IEEE Trans. Geosci. Remote Sens., 2017, vol. 56, no. 1, pp. 202–216.
Bandos, T.V., Bruzzone, L., and Camps-Valls, G., Classification of hyperspectral images with regularized linear discriminant analysis, IEEE Trans. Geosci. Remote Sens., 2009, vol. 47, no. 3, pp. 862–873.
Liao, W., Pizurica, A., Scheunders, P., Philips, W., and Pi, Y., Semisupervised local discriminant analysis for feature extraction in hyperspectral images, IEEE Trans. Geosci. Remote Sens., 2012, vol. 51, no. 1, pp. 184–198.
Paringer, R.A., Mukhin, A.V., and Kupriyanov, A.V., Formation of an informative index for recognizing specified objects in hyperspectral data, Comput. Opt., 2021, vol. 45, no. 6, pp. 873–878.
Feng, L. et al., Detection of subtle bruises on winter jujube using hyperspectral imaging with pixel-wise deep learning method, IEEE Access, 2019, vol. 7, pp. 64494–64505.
Wang, R. et al., Classification and segmentation of hyperspectral data of hepatocellular carcinoma samples using 1D convolutional neural network, Cytometry, Part A, 2020, vol. 97, no. 1, pp. 31–38.
Sarker, Y., Fahim, S.R., Sarker, S.K., Badal, F.R., Das, S.K., and Mondal, M.N.I., A multidimensional pixel-wise convolutional neural network for hyperspectral image classification, in 2019 IEEE International Conference on Robotics, Automation, Artificial-Intelligence and Internet-of-Things (RAAICON), 2019, pp. 104–107.
Krizhevsky, A., Sutskever, I., and Hinton, G.E., ImageNet classification with deep convolutional neural networks, Adv. Neural Inf. Process. Syst., 2012, vol. 25.
Simonyan, K. and Zisserman, A., Very deep convolutional networks for large-scale image recognition, 2014. arXiv preprint arXiv:1409.1556.
Szegedy, C. et al., Going deeper with convolutions, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
Ronneberger, O., Fischer, P., and Brox, T., U-Net: Convolutional networks for biomedical image segmentation, in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015, pp. 234–241.
Liu, Z., Jiang, J., Qiao, X., Qi, X., Pan, Y., and Pan, X., Using convolution neural network and hyperspectral image to identify moldy peanut kernels, LWT, 2020, vol. 132, p. 109815.
Chen, S.-Y., Cheng, Y.-C., Yang, W.-L., and Wang, M.-Y., Surface defect detection of wet-blue leather using hyperspectral imaging, IEEE Access, 2021, vol. 9, pp. 127685–127702.
Trajanovski, S., Shan, C., Weijtmans, P.J.C., de Koning, S.G.B., and Ruers, T.J.M., Tongue tumor detection in hyperspectral images using deep learning semantic segmentation, IEEE Trans. Biomed. Eng., 2020, vol. 68, no. 4, pp. 1330–1340.
HSI-Dataset-API. Accessed March 1, 2022. [Online]. Available: https://pypi.org/project/HSI-Dataset-API.
Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Dollár, P., Focal loss for dense object detection, in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2980–2988.
Loshchilov, I. and Hutter, F., SGDR: Stochastic gradient descent with warm restarts, 2016. arXiv preprint arXiv:1608.03983.
Paszke, A. et al., PyTorch: An imperative style, high-performance deep learning library, in Advances in Neural Information Processing Systems 32, Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R., Eds., Curran Associates, Inc., 2019, pp. 8024–8035.
Tian, Y., Fan, B., and Wu, F., L2-Net: Deep learning of discriminative patch descriptor in Euclidean space, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 661–669.
Wold, S., Esbensen, K., and Geladi, P., Principal component analysis, Chemom. Intell. Lab. Syst., 1987, vol. 2, no. 1–3, pp. 37–52.
Funding
The work was supported by the Ministry of Science and Higher Education of the Russian Federation, project no. FSSS-2021-0016 in the framework of the research performed by the laboratory “Photonics for a smart home and smart city” (state contract with the Samara University (theoretical research and software development) and as part of the “Priority 2030” federal strategic academic leadership program under “2021–2030 Samara University Development Program” by the Government of the Samara Region (experiments).
Ethics declarations
The authors declare that they have no conflicts of interest.
Cite this article
Mukhin, A., Danil, G. & Paringer, R. Semantic Segmentation of Hyperspectral Imaging Using Convolutional Neural Networks. Opt. Mem. Neural Networks 31 (Suppl 1), 38–47 (2022). https://doi.org/10.3103/S1060992X22050071