Abstract
The detection and localization of bone joint regions in medical X-ray images are essential to contemporary medical diagnostics. Traditional workflows rely on subjective interpretation by physicians, which introduces inter-reader variability and potential errors. Advances in general-purpose object detection have made automated bone joint detection feasible; however, applying these algorithms directly to X-ray images is hindered by the domain gap between natural and radiographic images. To overcome these challenges, a novel framework called the effective and efficient network (EAE-Net) is proposed. It incorporates a context augment module (CAM) to exploit global structural information and a ghost bottleneck module (GBM) to suppress redundant features. EAE-Net achieves strong detection performance while striking a balance between accuracy and speed. This advancement improves efficiency, enabling clinicians to focus on critical aspects of diagnosis and treatment.
Ethics declarations
Conflicts of interest
The authors declare no conflict of interest.
Additional information
This work has been supported by the National Key R&D Program of China (No. 2018YFB1307802) and the Tianjin Science and Technology Plan Project (No. 18PTLCSY00070).
MA Xinlong is a professor at the Institute of Medical Engineering and Translational Medicine, Tianjin University. He received his master's degree from Tianjin University in 2004. His research interests are mainly in orthopedics, digital orthopedics, orthopedic biomechanics, and sports medicine.
Cite this article
Wu, Z., Wan, M., Bai, H. et al. EAE-Net: effective and efficient X-ray joint detection. Optoelectron. Lett. 20, 629–635 (2024). https://doi.org/10.1007/s11801-024-3129-y