Abstract
Purpose
Data quality is essential for all types of research, including health registers. However, data quality is rarely reported. We aimed to assess the accuracy of data in a national spine register (NORspine) and its agreement with corresponding data in electronic patient records (EPR).
Methods
We compared data in the NORspine registry against corresponding data in electronic patient records (EPR) for 474 patients operated for spinal stenosis in 2015 and 2016 at four public hospitals, using EPR as the gold standard. We assessed accuracy using the proportion correctly classified (PCC) and sensitivity. Agreement was quantified using kappa statistics or the intraclass correlation coefficient (ICC).
Results
The mean age (SD) was 66 (11) years, and 54% were females. Compared to EPR, surgeon-reported perioperative complications displayed weak agreement (kappa (95% CI) = 0.51 (0.33–0.69)), a PCC of 96%, and a sensitivity (95% CI) of 40% (23–58%). ASA classification had moderate agreement (kappa (95% CI) = 0.73 (0.66–0.80)). Comorbidities were underreported in NORspine. Perioperative details had strong to excellent agreement (kappa (95% CI) ranging from 0.76 (0.68–0.84) to 0.98 (0.95–1.00)), PCCs between 93% and 99%, and sensitivities (95% CI) between 92% (84–100%) and 99% (98–100%). Patient-reported variables (height, weight, smoking) had excellent agreement (kappa (95% CI) between 0.93 (0.89–0.97) and 0.99 (0.98–0.99)).
Conclusion
Compared to electronic patient records, NORspine displayed weak agreement for perioperative complications, moderate agreement for ASA classification, strong agreement for perioperative details, and excellent agreement for height, weight, and smoking. NORspine underreported perioperative complications and comorbidities when compared to EPRs. Patient-recorded data were more accurate and should be preferred when available.
Introduction
In clinical research, it is crucial to question how true and accurate data are; however, data validity and accuracy assessments are rarely published explicitly. National medical registries collect large-scale data during the dynamic workflow of daily clinical practice and have become essential sources for evidence-based medicine and healthcare policies. Register-based studies reflect everyday practice, have high external validity, and complement randomized controlled trials (RCTs), which assess smaller populations with lower external validity. Register data are collected and recorded by healthcare personnel and patients, not by dedicated research assistants. Therefore, it is essential to periodically assess the quality of register data by validating it against other sources of data [1,2,3]. Because systematic errors can lead to bias, register validations may affect the robustness of medical and political conclusions based on register data. The literature on the validity of medical register data is sparse. Some studies report good validity of medical and cancer registries [4,5,6]. However, a recent validation study of a German spine registry (DWG) showed high inaccuracy [7], and the authors recommended against using these register data.
Our study aimed to assess the accuracy and agreement of NORspine data by comparing it to electronic patient records (EPRs). Such information can aid in identifying pitfalls and conceptual problems related to data collection, relevant not only to spine registers but also to other registries that routinely record clinical data.
Patients and methods
In this cross-sectional study, we reviewed electronic patient records (EPRs) of patients operated for lumbar spinal stenosis (LSS) who consented and responded to NORspine between January 1, 2015, and December 31, 2016. The authors were authorized to access data from four public hospitals within one health region (South-Eastern Norway Regional Health Authority) in Norway. To assess the representativity of our sample, we compared the study population to those treated at the remaining hospitals.
In Norway, all 39 hospitals (coverage = 100%) that offer surgery for degenerative spinal disorders are obliged to report data to NORspine. Seventy percent of all patients that undergo elective spine surgery in Norway are included in NORspine, and the proportion that responds one year after surgery is seventy-four percent [8].
A NORspine data set consists of a preoperative form completed by the patient at admission for surgery. This form covers items related to sociodemographic and lifestyle variables (e.g., smoking, height, and weight) and a standard battery of questionnaires assessing pain and disability (Table 5). Immediately after completing surgery, and optimally while still in the operating theater, the surgeon completes a standardized form and reports clinical and radiological diagnosis, relevant comorbidities, ASA classification—usually as graded by the anesthetist, and details about the surgery, e.g., previous surgery, surgical access, surgical methods, and level(s) operated. The surgeon also reports perioperative complications by a predefined list (Table 6).
Patients report the clinical outcome at 3 and 12 months after surgery as assessed by standard Patient-Reported Outcome Measures (PROMs).
Electronic patient records (EPRs) consist of non-structured text documents (free text) recorded in DIPS® software under predetermined headings. We reviewed the EPRs using a standard empty NORspine form, and the investigators (OKA and SK) had no access to the corresponding data previously recorded in NORspine. The study group selected a set of NORspine variables that could be recaptured from EPRs. Furthermore, we reviewed EPR documents (e.g., admission and surgeon’s notes) from the same time point as the time of surgery recorded in NORspine. We did not assess variables that were not registered routinely or consistently in EPRs, such as PROMs, symptom duration, marital status, education level, mother tongue, and working capability. The clinical follow-up at the treating centers was not standardized, and it was performed at different time points at the hospitals without structured recording in the EPRs. Hence, follow-up data (including reoperations) in NORspine were not evaluated against EPRs in this study.
The EPRs of 22 patients were independently reviewed by two raters (OKA and SK) to estimate interobserver reliability.
We calculated concordance in terms of agreement when comparing the structured NORspine data with EPR data; we also calculated accuracy for dichotomous variables, using EPR as the gold standard. We chose to report both accuracy and agreement because the use of certain EPR variables as a reference could be questioned (e.g., smoking and comorbidity).
The NORspine form requires the surgeon to report relevant comorbidities from a list, such as cardiovascular disease, diabetes, and osteoarthritis. In the EPR, comorbidity is recorded irrespective of its relevance to the planned spinal surgery. Consequently, agreement and accuracy were not evaluated for comorbidities. We only compared frequencies of relevant comorbidities recorded in NORspine vs. the corresponding comorbidities recorded in EPRs. Furthermore, we assessed the agreement for ASA classification between the two data sources.
Statistical analyses
Baseline data were described using means (95% CI) for continuous data and proportions for categorical data. Accuracy was assessed by the proportion correctly classified (PCC) and sensitivity. Perioperative complications were categorized into eight categories (Table 6), and the accuracy of complication recording was assessed by class average accuracy (CAA) using the micro-averaged method. Agreement between NORspine and EPRs was assessed by Cohen's kappa (ƙ) or Fleiss' weighted kappa (ƙ) for categorical variables (dichotomous and ordinal). (ASA classification was analyzed as an ordinal variable, ranging from 1 to 5, in the agreement analysis.) For continuous variables, we calculated the intraclass correlation coefficient (ICC) using a two-way mixed model to assess absolute agreement [9]. We classified agreement (ƙ-value) as minimal (0.21–0.39), weak (0.40–0.59), moderate (0.60–0.79), strong (0.80–0.90), and almost perfect (> 0.90) [10]. Agreement according to ICC values was classified as poor (< 0.50), moderate (0.50–0.75), strong (0.75–0.90), and excellent (> 0.90) [11]. Finally, we calculated the prevalence of missing values for each variable. The results are presented as point estimates with 95% confidence intervals (CI).
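For a dichotomous variable, the accuracy and agreement measures above reduce to simple arithmetic on a 2×2 table of register entries vs. the EPR gold standard. The sketch below shows how PCC, sensitivity, and Cohen's kappa are computed; the cell counts are illustrative only, not study data.

```python
# Hedged sketch of the accuracy and agreement measures for a dichotomous
# variable; the 2x2 cell counts (tp, fp, fn, tn) are illustrative only.

def pcc(tp, fp, fn, tn):
    """Proportion correctly classified: the diagonal of the 2x2 table."""
    return (tp + tn) / (tp + fp + fn + tn)

def sensitivity(tp, fn):
    """Of the events present in the EPR gold standard, the fraction the register caught."""
    return tp / (tp + fn)

def cohens_kappa(tp, fp, fn, tn):
    """Chance-corrected agreement between the two data sources."""
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n
    # Expected agreement if the two sources recorded independently,
    # given their marginal "yes"/"no" frequencies
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

print(pcc(40, 10, 10, 40))                    # 0.8
print(sensitivity(40, 10))                    # 0.8
print(round(cohens_kappa(40, 10, 10, 40), 3))  # 0.6 (chance agreement is 0.5 here)
```

Note how kappa is lower than the raw PCC: it discounts the agreement expected by chance alone, which is why it is the preferred agreement measure when one category dominates.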
We used SPSS version 26 (IBM Corp., Armonk, NY, USA) and Stata version 16 (StataCorp LLC, College Station, TX, USA) for the statistical analyses.
Ethical considerations
The Norwegian Regional Committee for medical and health research ethics approved this study (reference no. 2017(2157)), as did the data protection officers at the four hospitals. All patients had provided informed consent, and the study was conducted in compliance with the Helsinki declaration.
Results
NORspine recorded 3,843 patients operated for LSS during 2015 and 2016. The investigators were authorized to access EPRs at four hospitals and reviewed the EPRs of 474 consecutive operated patients (12.3% of the NORspine population). Mean age (95% CI) was 66 (65.3–67.2) years, and 254 (54%) were females. The total proportion of missing data was 0.9% in NORspine (completeness 99.1%) and 2.8% in EPRs (completeness 97.2%) (Table 7).
Patient characteristics, including data on the rest of the NORspine patients operated for lumbar spinal stenosis, are shown in Table 1. Our sample differed somewhat from the rest of the NORspine population at baseline. The included patients had more comorbidity, higher BMI, and higher disability (ODI) and pain scores (NRS = numeric rating scales) for leg and back pain. In addition, the study population included more smokers and had fewer perioperative complications than the total spinal stenosis population registered in NORspine (Table 1). For a sample of 22 patients, the interrater reliability for the two authors who reviewed the EPR variables was almost perfect.
Perioperative complications were recorded for 15 (3.2%) patients in NORspine and 30 (6.5%) patients in the EPRs. The agreement between NORspine and EPR was weak (ƙ (95%CI) = 0.51 (0.33–0.69)). The class average accuracy for all perioperative complications was 99.4% (eight categories combined), and for dural tears alone, 97.0% were classified correctly (PCC). The sensitivity (95%CI) for recording a complication was 40% (23–58%) (Table 2).
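These figures are mutually consistent, as a hypothetical reconstruction of the underlying 2×2 table shows. Assuming 15 register positives, 30 EPR positives, and a sensitivity of 40% (so 12 complications recorded in both sources), the reported PCC and kappa follow directly; the cell counts below are inferred for illustration, not taken from the study tables.

```python
# Hypothetical 2x2 table implied by the reported marginals; illustrative only.
n = 474
tp = 12          # complications recorded in both sources (40% of 30)
fp = 15 - tp     # recorded only in NORspine
fn = 30 - tp     # recorded only in the EPRs
tn = n - tp - fp - fn

pcc = (tp + tn) / n            # proportion correctly classified, ~0.96
sens = tp / (tp + fn)          # sensitivity against the EPR gold standard, 0.40

# Cohen's kappa: observed agreement corrected for chance agreement
p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (pcc - p_exp) / (1 - p_exp)   # ~0.51, the weak agreement reported
```

The contrast between a PCC near 96% and a kappa near 0.51 stems from the rarity of complications: with 444 true negatives, chance agreement alone is already about 91%.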
As shown in Table 3, ASA classification (1–5) showed moderate agreement (ƙ (95%CI) = 0.73 (0.66–0.80)). Table 4 shows the differences in the prevalence of comorbidities. NORspine underreported comorbidities compared to EPRs.
As shown in Table 2, previous surgery (yes or no) had an almost perfect agreement (ƙ (95%CI) = 0.93 (0.89–0.97)), a proportion classified correctly of 97.2%, and a sensitivity of 95.8%. The number of previous surgeries showed moderate agreement (ƙ (95%CI) = 0.62 (0.48–0.75)), as shown in Table 3.
Perioperative details (method of decompression, fusion, surgical access, spinal level operated) recorded by the surgeon showed moderate to excellent agreement between NORspine and EPR (ƙ = 0.76 to 0.98), and high proportions were classified correctly (93–99%). The sensitivity for the recording of perioperative details was high (92–99%).
Smoking status had an almost perfect agreement (ƙ (95%CI) = 0.93 (0.89–0.97)), a proportion correctly classified of 97.2%, and a sensitivity of 92.0%. Furthermore, as shown in Table 3, the patients' height, weight, and BMI showed excellent agreement between NORspine and the EPRs (ICC = 0.99).
Discussion
This cross-sectional study compared Norwegian spine registry (NORspine) data to corresponding EPR data. We found a weak agreement for perioperative complications, a moderate agreement for ASA classification, a moderate to strong agreement for perioperative details, and almost perfect agreement for demographics. NORspine underreported perioperative complications and comorbidity.
Perioperative complications had a weak agreement and were underreported (sensitivity of only 40%) in NORspine. For example, dural tears were recorded in 13 patients (2.7%) in NORspine and 25 patients (5.3%) in the EPRs. Physicians' underreporting of surgical complications has been reported previously [12,13,14,15,16,17]. In line with our findings, a Swedish study of medical registers by Öhrn et al. from 2011 showed that only 74 of 210 (35%) complications registered in a patient claim database had been recorded in the Swedish spine register [18]. Furthermore, a study validating German spine register data found wrong entries ranging from 10 to 50% for variables describing complications and reoperations [7]. Still, a sensitivity of 40% for surgeon-reported perioperative complications in the present study was unexpectedly low. We found a class average accuracy (CAA) for all perioperative complications of 99.4%; however, some of the listed complications are extremely rare, and CAA may therefore overestimate the accuracy of complication reporting. Previously published prevalences of perioperative complications range between 3 and 16% [19,20,21,22]. The corresponding number in NORspine was 3.2%, also indicating underreporting. The EPRs documented perioperative complications in 6.5% of patients, a number more concordant with previous studies. Perioperative complications are recorded in NORspine and the EPR at the same time point, so these data sources should match. A possible explanation for the discrepancy between the frequencies of complications recorded in NORspine and EPRs is differing definitions; for example, a minor, repaired dural tear may not be graded as a complication by some surgeons.
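The inflation of class average accuracy by rare events can be shown with a small calculation. Assuming the micro-averaged CAA scores each of the eight complication categories as a separate yes/no entry per patient (our assumption about the computation, not a detail stated in the methods), a handful of mismatches drowns in thousands of correct "no complication" entries.

```python
# Hedged illustration of why micro-averaged accuracy looks near-perfect for
# rare events; the per-category scoring scheme is an assumption.
patients = 474
categories = 8
decisions = patients * categories   # 3792 binary entries in total
mismatches = 21                     # e.g. 18 missed plus 3 extra recordings

micro_accuracy = (decisions - mismatches) / decisions
# ~0.994: near-perfect accuracy despite a sensitivity of only 40%,
# because almost every entry is a correctly classified "no complication".
```

This is why sensitivity, which conditions on the events actually present in the gold standard, is the more informative accuracy measure for rare complications.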
ASA classification showed a moderate agreement, and the means between the two data sources were similar (2.17 vs. 2.14), illustrating no tendency to either under- or over-classification. The German spine register validation study reported wrong entries for ASA classification in 25% of cases and showed that a relatively simple classification system might be reported inaccurately [7]. However, all classification systems are subject to interpretation and inherent disagreement. We considered the ASA classification recorded in EPRs by anesthetists as the gold standard. However, the surgeon completing the NORspine form could either miss or disagree with the ASA classification provided by the anesthetist or use an ASA score recorded elsewhere in the EPR.
Each comorbidity was underreported in NORspine. This may be because surgeons had different definitions of which comorbidities they considered relevant, which illustrates a problem with the concept validity of this item in the NORspine questionnaire. Carreon et al. studied comorbidity in patients with spinal stenosis in 2003 [21] and found prevalences at the same level as we found in the EPRs, which supports our conclusion that comorbidity was underreported in NORspine. Moreover, previous studies have found low accuracy for orthopedic surgeons coding diagnoses and indications for surgery, assessing cognitive function, and registering antibiotic use [23,24,25]. The discrepancy in the recorded prevalence of depression and anxiety in NORspine vs. the EPRs may indicate that spine surgeons are not sufficiently aware of patients’ mental health and how it may influence the clinical results (PROMs) after spinal surgery.
One should consider alternative ways of assessing comorbidity. However, other comorbidity scoring systems, such as frailty scores and comorbidity indices (the Charlson comorbidity index (CCI) and the Elixhauser comorbidity index) [26, 27], are more complex, possibly affecting response rates and accuracy. We found the ASA classification to be the most feasible comorbidity measure, and it displayed moderate agreement in our study. Mannion et al. found that ASA was a strong predictor of complications after hip surgery and that adding a more complex score (CCI) was not superior in predicting postoperative complications [28]. Hence, we recommend using the ASA classification over more complex measures despite its limitations.
There was a discrepancy in accuracy between the variables concerning previous surgery. Previous spinal surgery (yes/no) had an agreement of 0.93, while the number of previous surgeries had an agreement of 0.62; this indicates that NORspine records whether a patient had any previous surgery more precisely than the exact number of previous surgeries.
Perioperative details were accurately registered, with the proportion correctly classified above 93%. There was a strong to excellent agreement between NORspine data and the EPR data, with kappa values above 0.90. This is in line with the literature: orthopedic surgeons coded surgical procedures and classified x-rays accurately in previous studies [23, 24]. However, surgical access reported by the surgeon showed minimal agreement between NORspine and the EPRs. Defining surgical accesses in NORspine may have been subject to interpretation, as surgeons may have misinterpreted the “lateral/Wiltse’s” choice as the direct lateral approach. Therefore, the NORspine board plans to clarify and amend the options for surgical access in the next version of the surgeon-reported questionnaire.
Smoking status is recorded in the EPR as a direct question to the patient and in NORspine as a simple yes-or-no question. The source of these two variables is the same: the patient. However, there was an error rate of 2.5% (PCC 97.5%) and an agreement of 0.93. This variable can indicate the rate of random error in NORspine. Patients’ height, weight, and BMI displayed excellent agreement. The patients themselves report these variables to NORspine, and their accuracy and agreement could serve as a benchmark for surgeon-recorded variables. It is questionable to define the EPR as a gold standard because some variables could be reported more correctly by patients than by healthcare personnel. A further step to improve data quality could be to use a combined construct of patient- and physician-recorded variables [4].
About 1% of NORspine data values were missing, compared to 3% in the EPRs. This is in line with a literature review of data quality from 2002 [29], which found 2% missing data in automatically collected and 5% in manually collected register data.
Our study has several limitations. We used EPRs as an external data source, although they may lack relevant information. EPR data might not be an appropriate reference for some variables, so we chose to report both accuracy and agreement; agreement is the more appropriate measure when no clear reference standard exists. The EPRs at the four hospitals were not standardized (free-text format) and could miss or misinterpret relevant information. On the other hand, every patient has an EPR, it has been defined as a gold standard in other validation studies [4,5,6,7], and it has high medical and legal status. Ideally, to be defined as a complete gold standard, the EPR should also record PROMs.
Another limitation was potential selection bias due to the non-randomized selection of hospitals. The accuracy of NORspine and EPR data registration could differ between hospitals, limiting the generalizability of our findings. However, most differences in patient characteristics between the four selected hospitals and the remaining hospitals reporting to NORspine were small, and some of them might be incidental findings. Therefore, we consider the patient sample representative of the broader NORspine population. Patients analyzed in the present study were operated on and included during 2015–2016, and no relevant changes have been made to NORspine since 2015. Therefore, we believe our findings are still relevant.
The selection of variables had to be limited to those available and suitable for comparison in both data sources. Therefore, the concordance of some relevant variables could not be assessed (e.g., patient-reported disability and pain).
We only assessed patients who underwent decompression due to spinal stenosis, who were treated with a limited number of simple procedures and surgical accesses. Our results may, therefore, represent a “best-case scenario” regarding the quality of NORspine data.
The strength of this study was a comprehensive and systematic review of a large number of EPRs at four hospitals. We assessed both the accuracy (PCC and sensitivity) and agreement (kappa or ICC) of patient- and surgeon-reported data to validate different NORspine variables.
Future perspectives and implications
A long-term goal could be the inclusion of clinical registry data in a structured EPR. Structured EPRs have been implemented in Norway for hip fracture patients, and data from a structured EPR are sent directly to the national hip fracture audit. Structured EPRs can improve the quality of the EPR and the quality and completeness of registry data. Furthermore, structured EPRs could make valuable data more accessible to clinical research. A future perspective would be to integrate spine registers into a structured EPR.
Conclusions
This cross-sectional validation study showed that the Norwegian Registry for Spine Surgery (NORspine) tended to underreport perioperative complications of spine surgery compared to corresponding EPRs. This finding may represent a systematic error (information bias), and future register studies on complications after spinal surgery could cross-reference perioperative complications with other data sources to reduce the risk of underreporting. Comorbidities were also underreported in NORspine; the ASA classification seems the simplest and most reliable way to assess comorbidity. Perioperative details and patient-reported data had moderate to excellent agreement.
Availability of data and material
Data available on request.
References
Brooke EM (1974) The current and future use of registers in health information systems. World Health Organization, Geneva
Kodra Y, de la Paz MP, Coi A, Santoro M, Bianchi F, Ahmed F, Rubinstein YR, Weinbach J, Taruscio D (2017) Data quality in rare diseases registries. Adv Exp Med Biol 1031:149–164. https://doi.org/10.1007/978-3-319-67144-4_8
Derakhshan P, Azad Z, Naghdi K, Safdarian M, Zarei M, Jazayeri SB, Sharif-Alhoseini M, Zendehdel K, Amirjamshidi A, Ghodsi Z, Faghih M, Mohammadzadeh M, Khazaie Z, Zadegan S, Abedi A, Sadeghian F, Rahimi-Movaghar V (2018) P206 - The impact of data quality assurance and control solutions on the completeness, accuracy and consistency of data in a national spine registry (NSCIR). Global Spine Conference 2018
Varmdal T, Bakken IJ, Janszky I, Wethal T, Ellekjær H, Rohweder G, Fjærtoft H, Ebbing M, Bønaa KH (2016) Comparison of the validity of stroke diagnoses in a medical quality register and an administrative health register. Scand J Public Health 44(2):143–149. https://doi.org/10.1177/1403494815621641
Löfgren L, Eloranta S, Krawiec K, Asterkvist A, Lönnqvist C, Sandelin K, Steering group of the National Register for Breast Cancer (2019) Validation of data quality in the Swedish national register for breast cancer. BMC Public Health 19(1):495. https://doi.org/10.1186/s12889-019-6846-6
Landberg A, Bruce D, Lindblad P, Ljungberg B, Lundstam S, Thorstenson A, Sundqvist P (2021) Validation of data quality in the National Swedish kidney cancer register. Scand J Urol 55(2):142–148. https://doi.org/10.1080/21681805.2021.1885485
Meyer B, Shiban E, Albers LE, Krieg SM (2020) Completeness and accuracy of data in spine registries: an independent audit-based study. Eur Spine J 29:1453–1461. https://doi.org/10.1007/s00586-020-06342-6
Solberg T, Olsen RR, Berglund ML (2018) Annual report for NORspine 2018. Accessed 11 November 2021
Altman DG (1991) Practical statistics for medical research, Chap 14. Chapman & Hall, London, pp 396–439
McHugh ML (2012) Interrater reliability: the kappa statistic. Biochem Med (Zagreb) 22(3):276–282 (PMID: 23092060; PMCID: PMC3900052)
Koo TK, Li MY (2016) A guideline of selecting and reporting intraclass correlation coefficients for reliability research [published correction appears in J Chiropr Med. 2017 Dec;16(4):346]. J Chiropr Med 15(2):155–163. https://doi.org/10.1016/j.jcm.2016.02
Healey MA, Shackford SR, Osler TM, Rogers FB, Burns E (2002) Complications in surgical patients. Arch Surg 137(5):611–618. https://doi.org/10.1001/archsurg.137.5.611 (PMID: 11982478)
Cooper GS, Kou TD, Rex DK (2013) Complications following colonoscopy with anesthesia assistance: a population-based analysis. JAMA Intern Med 173(7):551–556. https://doi.org/10.1001/jamainternmed.2013.2908
Litwin MS, Lubeck DP, Henning JM, Carrol PR (1998) Differences in urologist and patient assessment of health related quality of life in men with prostate cancer: results of the capsure database. J Urol 159:1988–1992. https://doi.org/10.1016/S0022-5347(01)63222-1
Hutter MM, Rowell KS, Devaney LA, Sokal SM, Warshaw AL, Abbott WM, Hodin RA (2006) Identification of surgical complications and deaths: an assessment of the traditional surgical morbidity and mortality conference compared with the american college of surgeons-national surgical quality improvement program. J Am Coll Surg 203(5):618–624. https://doi.org/10.1016/j.jamcollsurg.2006.07.010 (ISSN 1072-7515)
Fromme EK, Eilers KM, Mori M, Hsieh Y-C, Beer TM (2004) How Accurate Is clinician reporting of chemotherapy adverse effects? A comparison with patient-reported symptoms from the quality-of-life questionnaire C30. J Clin Oncol 22(17):3485–3490
Grossman SA, Sheidler VR, Swedeen K, Mucenski J, Piantadosi S (1991) Correlation of patient and caregiver ratings of cancer pain. J Pain Symptom Manag 6(2):53–57. https://doi.org/10.1016/0885-3924(91)90518-9 (PMID: 2007792)
Öhrn A, Elfström J, Liedgren C, Rutberg H (2011) Reporting of sentinel events in Swedish hospitals: a comparison of severe adverse events reported by patients and providers. Jt Comm J Qual Patient Saf 37(11):495–501. https://doi.org/10.1016/S1553-7250(11)37063 (ISSN 1553-7250)
Deyo RA, Hickam D, Duckart JP, Piedra M (2013) Complications after surgery for lumbar stenosis in a veteran population. Spine (Phila Pa 1976) 38(19):1695–1702. https://doi.org/10.1097/BRS.0b013e31829f65c1
Cassinelli EH, Eubanks J, Vogt M, Furey C, Yoo J, Bohlman HH (2007) Risk Factors for the development of perioperative complications in elderly patients undergoing lumbar decompression and arthrodesis for spinal stenosis. Spine 32(2):230–235. https://doi.org/10.1097/01.brs.0000251918.19508.b3
Carreon LY, Puno RM, Dimar JR, Glassman SD, Johnson JR (2003) Perioperative complications of posterior lumbar decompression and arthrodesis in older adults. J Bone Jt Surg 85(11):2089–2092
Papavero L, Engler N, Kothe R (2015) Incidental durotomy in spine surgery: first aid in ten steps. Eur Spine J 24:2077–2084. https://doi.org/10.1007/s00586-015-3837-x
Kazberouk ABS, Martin BI, Stevens JP, McGuire KJ (2015) Validation of an administrative coding algorithm for classifying surgical indication and operative features of spine surgery. Spine 40(2):114–120. https://doi.org/10.1097/BRS.0000000000000682
Kristoffersen MH, Dybvik E, Steihaug OM et al (2019) Validation of orthopaedic surgeons’ assessment of cognitive function in patients with acute hip fracture. BMC Musculoskelet Disord. https://doi.org/10.1186/s12891-019-2633-x
Horner NS, Grønhaug-Larsen KM, Svanteson E, Samuelsson K, Aveni OR et al (2020) Timing of hip hemiarthroplasty and the influence on prosthetic joint infection. PLOS ONE 15(3):e0229947. https://doi.org/10.1371/journal.pone.0229947
Charlson ME, Pompei P, Ales KL, MacKenzie CR (1987) A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis 40(5):373–383. https://doi.org/10.1016/0021-9681(87)90171-8 (PMID: 3558716)
Elixhauser A, Steiner C, Harris DR, Coffey RM (1998) Comorbidity measures for use with administrative data. Med Care 36:8–27
Mannion AF, Nauer S, Arsoy D, Impellizzeri FM, Leunig M (2020) The association between comorbidity and the risks and early benefits of total hip arthroplasty for hip osteoarthritis. J Arthroplasty 35(9):2480–2487. https://doi.org/10.1016/j.arth.2020.04.090
Arts DGT, de Keizer N, Scheffer GJ (2002) Defining and improving data quality in medical registries: a literature review, case study, and generic framework. J Am Med Inform Assoc 9(6):600–611. https://doi.org/10.1197/jamia.M1087
Funding
Open access funding provided by NTNU Norwegian University of Science and Technology (incl St. Olavs Hospital - Trondheim University Hospital). No funding was received for conducting this study.
Author information
Contributions
All authors have contributed to some or all significant parts of the study.
Ethics declarations
Conflicts of interest
Financial interests: The authors have no relevant financial interests to disclose. Non-financial interests: Author Tore Solberg is scientific leader of the Norwegian Registry of Spine Surgery, and author Greger Lønne is member of the board of the Norwegian Registry of Spine Surgery.
Ethics approval
The study was approved by the Norwegian Regional Committee for Medical and Health Research Ethics (reference no. 2017(2157)), as well as by the data protection officers at the four hospitals. The study was conducted in compliance with the Helsinki Declaration.
Consent to participate
All patients provided informed consent, and the study was conducted in compliance with the Helsinki Declaration.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Alhaug, O.K., Kaur, S., Dolatowski, F. et al. Accuracy and agreement of national spine register data for 474 patients compared to corresponding electronic patient records. Eur Spine J 31, 801–811 (2022). https://doi.org/10.1007/s00586-021-07093-8