FormalPara Key Points for Decision Makers

Standardisation of quality appraisal processes is imperative to ensure the high quality of Health Technology Assessment (HTA) evidence used for policy making in the Indian context. The proposed HTA quality appraisal checklist (HTA-QAC) comprises a comprehensive set of items that should form part of reporting and reviewing practices.

The HTA-QAC includes a section on model review, which is the novelty of this checklist. This ‘model review’ section ensures assessment of model validity, correctness of the computation process, and transparency.

The HTA-QAC has been pilot tested to assess its ease of use and the quality of information it provides. Given the findings of the pilot, the checklist was modified several times to ensure relevance, validity, smooth usability, and practicability.

1 Background

Health Technology Assessment (HTA) is increasingly becoming an integral part of healthcare decision making [1, 2]. Growing advancements in healthcare demand a careful assessment of the effectiveness, quality, and the monetary impact of any new intervention. Health economic evaluations have been widely used to compare two or more interventions such as technologies, drugs, or public health programmes in terms of their costs as well as outcomes [3]. The findings of these evaluations play a pivotal role in resource allocation decisions in health. Thus, HTA is key to informing policy decisions regarding the investment to be made in different areas of health.

In a resource-constrained setting like India, the allocation of additional resources for healthcare needs to be informed by evidence on the additional value they create at the population level [4]. To inform such decisions, HTA in India (HTAIn) was institutionalised by the Indian Government with the primary objective, among others, of generating evidence regarding the cost effectiveness of healthcare interventions [5, 6]. The secretariat of HTAIn is housed within the Department of Health Research (DHR) [7]. The HTAIn secretariat coordinates between the user departments, technical appraisal committee, regional resource centres and technical partners, and the HTAIn Board (Fig. 1). The HTAIn Board is the highest decision-making body and comprises policy makers, clinicians, and bureaucrats, as well as experts from different government departments. The user departments, which include Central and State health ministries or any government healthcare provider/agency, submit their topics for HTA to the secretariat, which prioritises and allocates the topics to the appropriate regional resource centres or technical partners. These regional resource centres have been established within government research institutions based on their capacity to conduct HTA and have been entrusted with generating quality evidence. The secretariat conducts regular technical appraisal committee meetings and stakeholder consultations to review the proposal and methods, monitor interim progress, and review the study findings and implications for policy making. These regular technical appraisal committee meetings ensure transparency at all stages of HTA [7]. The evidence thus generated is then provided to the user department and is also published on the DHR website to be used for making policy decisions.

Fig. 1
figure 1

Framework of Health Technology Assessment India (HTAIn). HTAIn Health Technology Assessment in India, DHR Department of Health Research. HTAIn Board: The highest decision-making body. It comprises policy-makers, clinicians, bureaucrats and experts from different government bodies. Technical Appraisal Committee: A multidisciplinary body with experts from different areas, viz. economists, clinicians, researchers, social scientists, and health policy experts. User Departments: Individuals, organizations or communities that have a direct interest in the process and/or outcomes of the study under consideration by the HTAIn. Regional Resource Centres: The academic and research organisations which conduct Health Technology Assessment studies commissioned by HTAIn

To use the evidence generated by HTA, it is of utmost importance that such research is methodologically robust and transparent [8]. Currently, there is a lack of standardisation in the conduct and reporting of HTA studies. A systematic review of the literature on healthcare economic evaluations in India reported an average quality score of 65.1 % [9]. Of 104 studies included in the analysis, only 16 % had performed a probabilistic sensitivity analysis (PSA), 36 % considered the fiscal implications of the intervention, and 40 % of the studies considered the generalisability of their findings [9]. Given this, the HTAIn has undertaken certain fundamental steps to ensure standardisation of process, quality of conduct, as well as transparent reporting practices for HTA studies. A process manual published by the HTAIn outlines the steps to conduct HTA [10]. This manual also includes a “reference case”, which delineates the key aspects that are vital to the conduct of an economic evaluation [10]. Recently, methodological guidelines were published for the conduct of Budget Impact Analysis (BIA), tailored to the Indian healthcare financing system [11].

Apart from standardisation of methods, it is of paramount importance to assess the quality of the study. Many checklists have been developed to ensure uniformity of reporting practices in economic evaluations, including the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement, which has been in use more recently [12,13,14,15,16,17,18,19,20,21,22,23,24]. While these checklists focus on the adequacy of reporting, they do not adequately assess the suitability or appropriateness of the methods employed or the assumptions undertaken. Besides a cost-effectiveness analysis, an HTA study includes other analyses such as BIA, equity impact, and assessment of feasibility. It is important to assess whether these have been performed appropriately, as each will further strengthen the evidence base for any decision being made. None of the checklists evaluates the appropriateness of the choice of methods comprehensively. This is important as the choice of methods may vary depending on the type of economic evaluation, policy relevance, availability of data, and availability of resources including technical expertise. Unlike a clinical study, which provides evidence on the clinical effectiveness of a health intervention, the conduct of an economic evaluation is a data-intensive exercise that uses input parameters for epidemiology, care-seeking, the effectiveness of the intervention, quality of life, as well as the cost of delivering the intervention. Thus, it is important to highlight the relevance and appropriateness of the data sources as well as the methods used to perform the analysis. Finally, no existing checklist validates the HTA analysis through a model review, which forms an important criterion when commenting on the validity of findings.

The current paper presents the Indian HTA quality appraisal checklist (HTA-QAC), the primary objective of which is to capture all vital aspects of an HTA study in terms of conduct, reporting, and quality. Section 2 of the present paper delineates the HTA-QAC development process. In Sect. 3, we provide a detailed description of the checklist. We discuss the role of HTA-QAC and compare it with existing checklists in Sect. 4, followed by concluding remarks in Sect. 5.

2 Checklist Development Process

The development of a quality appraisal checklist was commissioned by HTAIn. The development process was undertaken in three phases.

2.1 Phase I

The first phase included a review of the literature to identify existing HTA checklists. A targeted review of the literature was conducted by developing a search strategy in PubMed. In addition, websites of national and international HTA agencies were searched to gather information on HTA checklists. A total of 18 checklists were identified during the review process. The details of the search strategy and the PRISMA flow chart are attached as Supplementary File: S1. The search yielded a total of 182 results. No time-related filters were used; thus, all studies published until 1st March 2022 were retrieved. After the removal of duplicates [16], followed by title and abstract screening, 27 studies were extracted for full-text review. A total of 10 studies were found relevant for inclusion in the review, and an additional 8 studies were retrieved through a grey literature search. We segregated the checklists based on the purpose they serve (Fig. 2).

Fig. 2
figure 2

Health Technology Assessment checklists identified during the targeted literature review. EVIDEM Evidence and Value-Impact on Decision Making, CHEERS Consolidated Health Economic Evaluation Reporting Standards, QHES Quality of Health Economic Studies, SIGN Checklist Scottish Intercollegiate Guidelines Network, SMC Scottish Medicine Consortium, CASP Critical Appraisal Skill Program, JBI Joanna Briggs Institute, NSW New South Wales, NHS R & D National Health Services Research & Development, HTA Health Technology Assessment, NICE National Institute for Health and Care Excellence, ISPOR The Professional Society for Health Economics and Outcomes Research, RCT-CEA Randomized Clinical Trials-Cost-Effectiveness Analysis

Of the 18 checklists, 6 were designed to assess the reporting of health economic evaluations [12,13,14, 20,21,22], 9 aimed to review the process of economic evaluations [14,15,16,17,18,19, 21, 23, 24], 2 checklists were specifically framed to evaluate uncertainty assessment [25, 26], while the remaining 3 were broad guidelines/recommendations for the conduct of economic evaluations [27,28,29]. The review of existing checklists revealed that the majority are author-reporting formats and do not comment on the appropriateness of the methods used or the reporting of results. None of the reviewed checklists includes a section to review the model. Further, most of the checklists do not support a comprehensive review of costing and outcome valuation methodology. After reviewing these checklists, an initial draft of the HTA-QAC was prepared through discussion amongst the authors to include the aspects that are missing or incomplete in the existing checklists. A guidance manual was also prepared, which delineated the process of applying the HTA-QAC to assist researchers as well as reviewers while filling it out.

2.2 Phase II

In the second phase, the HTA-QAC and its guidance manual were circulated amongst the members of the technical appraisal committee of HTAIn via e-mail for their review and feedback. A meeting was held to discuss the recommendations of the technical appraisal committee members. This virtual meeting included 8 members of the technical appraisal committee, comprising officials from the Ministry of Health, public health specialists, clinicians, clinical trial specialists, and medical ethics specialists, as well as economists. Further, 13 members of the HTAIn secretariat, along with 7 members from the regional resource centres, i.e., the academic and research organisations which conduct HTA studies commissioned by HTAIn, also participated and provided their feedback in the virtual meeting. The technical appraisal committee provided recommendations on the overall structure of the checklist, on the inclusion of important methodological aspects in the report and model review sections, and on ensuring that the HTA processes followed are in concurrence with the Indian reference case. Further, the committee also suggested piloting the checklist on already completed studies before its final use. This process was undertaken in the period from January to July 2020. The checklist was further revised in light of the suggestions and inputs of the members of this review committee.

2.3 Phase III

The third phase comprised piloting the HTA-QAC to assess the quality of the approved HTA study reports. Thirteen reports were available on the HTAIn website at the time of the development of HTA-QAC. All 13 reports were included in the pilot phase. We approached the authors from the regional resource centres in a sequential manner to fill out the author section of their respective outcome report in order to incorporate their feedback on the clarity of the content in the checklist, difficulties faced while filling the checklist, and the overall structure of the checklist. After feedback from 3 researchers, there were no further significant amendments to be made. The authors (of the present manuscript) agreed with the content for more than 95 % of the fields in the checklist. Therefore, we did not contact other researchers for inputs on the author reporting section. However, we completed the reviewer section for all 13 reports.

Five rounds of virtual discussions were held with these researchers from regional resource centres to resolve discordant opinions and develop solutions for the problems identified in the use of the checklist. Different researchers interpreted the checklist questions in different ways. Therefore, we revised the language of the questions so that all researchers would interpret them in the same way. It was also observed that there was subjectivity in what was expected in response to a specific question. Given this, we created a “description” label and outlined the expected answer to each question. Next, the researchers faced difficulty in scoring each question. Initially, all ratings were sought on a 1- to 10-point scale. Thereafter, we introduced a Likert scale and assigned different scoring criteria to different questions, depending on what is best suited to each. Currently, the checklist can be rated using a Likert scale, a 1- to 10-point scale, or a binary response. We also prepared a guidance manual for the checklist so that any author or reviewer filling out the checklist will interpret it in the intended way. Over a period of one and a half years, between January 2020 and June 2021, the checklist was revised 15 times to incorporate the suggestions and inputs received during each phase. Since it was not possible to hold face-to-face meetings due to the coronavirus disease 2019 (COVID-19) pandemic, virtual meetings were held for this entire process.

3 Description of the Quality Appraisal Checklist

The HTA-QAC has been divided into two parts: a self-reporting section to be filled by the author and another section to be filled by the reviewer (Fig. 3). The author section of the checklist aims to obtain a comprehensive description of the study from the author. The reviewer checklist has been divided into two sections: one to review the report/manuscript and the other to review the model. The first part of the reviewer section, which is the report/manuscript review, aims to assess the appropriateness of the study methods, the assumptions made, the data sources used, and the reporting of results. The second part of the reviewer section constitutes the technical aspect of reviewing the model. Further, we have categorised the items of the checklist according to how essential they are when the study is reviewed. The key elements necessary to be fulfilled for acceptance/passing the quality check have been labelled as essential (***); the conduct of other important technical analyses and aspects apart from the main economic evaluation analysis has been labelled as major (**); and the optional or good-to-have elements have been labelled as desirable (*). Each section is described in detail below. The complete checklist has been attached as Supplementary File: S2.

Fig. 3
figure 3

Health Technology Assessment—Quality Appraisal Checklist (HTA-QAC)

3.1 Section-wise Description

3.1.1 HTA-QAC 1: Author Section

The author section is a self-reporting format comprising 7 domains. For this section of the checklist, the authors are required to specify the page numbers and line numbers of the outcome report where the information sought is mentioned against each section in the HTA-QAC.

3.1.1.1 Basic Information

This section includes general information about the study including the title and year of conduct of the study. The title of the study is required to specify the study design, study setting/geography, intervention/control, and disease/programme of concern.

3.1.1.2 Need for the Study

This section requires the author to summarise the existing knowledge regarding the topic under study and justify the need for the present study. In addition, the author is required to highlight the additional gains from the present study related to the generation of evidence and its policy implications.

3.1.1.3 Policy Relevance

It is important to understand how the evidence generated from the present study will help the process of evidence-based policymaking or what implications the study output would have on the current health policy. The author should provide this information in 3 bullet points.

3.1.1.4 Study Description

In this section, the author is required to clearly state the aims and objectives of the present analysis. Second, the author is required to comprehensively describe the intervention and comparator, its relevance to the target population, and existing evidence available regarding the efficacy and costs. Third, the author should describe the target population in terms of its relevance, composition, and characteristics. Finally, the author should specify the choice of time horizon considered appropriate to account for all benefits and costs of the intervention, the discount rate chosen to assess the present values of future outcomes and costs, and the perspective of the study.

3.1.1.5 Study Methods

This section requires the authors to provide information on study methods. First, the author is required to provide information related to the choice of model. Apart from the model choice, the author should indicate if the study required modelling of any infectious disease and whether or not interactions were considered. If interactions were considered, information regarding the use of difference or differential equations is to be provided. Second, the author is required to specify the choice of study perspective (societal/provider/patient) and time horizon for the study. Third, the author is required to specify the costs of various items to be considered. The type of costs to be included can be classified broadly as direct and indirect (Fig. 4). Under direct costs, the author should clearly describe items to be considered under “health system costs” and “out-of-pocket (OOP) expenditure”:

  I. Direct medical costs:

     a. Health system costs: All costs associated with out-patient consultation, in-patient hospitalisation treatment, intensive care admission, surgery, and diagnostics incurred at the level of the health facility are included in this category.

     b. Patient-level costs/OOP expenditure: These include expenses incurred by the patient on drugs, diagnostics, procedures, admission fees, etc. during hospitalisation.

  II. Direct non-medical costs: These include expenses paid by the patient on food, transportation, board and lodging during the course of treatment.

Fig. 4
figure 4

Illustration of types of costs to be included in an economic evaluation

Similarly, the author should report if indirect costs were included in the analysis. Such costs include loss of wages and employment, productivity losses due to morbidity, as well as premature mortality.

The author is required to provide information related to health outcomes in terms of whether primary or secondary data, or both types of data sources, were used. The secondary sources are to be categorised as single or multiple studies, clinical trials, systematic reviews, and meta-analyses. In addition, the author is also required to provide information on immediate/short-term outcomes, for example, a decrease in blood pressure or an increase in physical activity associated with an intervention for the prevention of hypertension. Likewise, the author needs to report whether long-term outcomes, for example, life years, quality-adjusted life years (QALY), disability-adjusted life years (DALY), or both, were reported for the study. Further, the sources for utility values, whether primary, secondary, or expert opinion, are also to be reported by the author.
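For reference, and without prescribing any particular notation, the discounted QALY total mentioned above is conventionally computed as follows, where the symbols (utility weight u_t, life years L_t lived in year t, and annual discount rate r) are illustrative and not part of the checklist itself:

\[
\text{QALYs} \;=\; \sum_{t=0}^{T} \frac{u_t \, L_t}{(1+r)^{t}}, \qquad 0 \le u_t \le 1.
\]

DALYs are constructed analogously as the sum of years of life lost and years lived with disability, the latter weighted by disability weights.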

Finally, the author is required to specify if sensitivity analysis was performed to address uncertainty. Additionally, the author needs to specify whether or not other analyses, such as equity, budget impact, and stakeholder analyses, have been performed.

3.1.1.6 Reporting of Model Parameters and Their Sources

The HTA-QAC requires the author to report whether all parameters have been specified/tabulated with their base values, uncertainty ranges, and sources. This reporting of parameters is to include all demographic and epidemiological parameters, coverage and utilisation parameters, risk rates/ratios and transition probabilities, effectiveness, and cost parameters that have been fed into the model.

3.1.1.7 Study Results

Under this section, the author has to provide a binary response on whether or not the following are included in the study outcome report: total costs, separately for both intervention and comparator, as well as the incremental costs, along with their uncertainty ranges. In addition to costs, it should be assessed whether all outcomes (life years/QALY/DALY) for both intervention and comparator, the incremental outcomes, and the incremental cost-effectiveness ratios (ICERs) for all possible scenarios (if there are multiple interventions), along with uncertainty ranges, are reported.
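For clarity, the ICER referred to here takes its standard form; the subscripts (int for intervention, comp for comparator) are illustrative and not part of the checklist wording:

\[
\text{ICER} \;=\; \frac{C_{\text{int}} - C_{\text{comp}}}{E_{\text{int}} - E_{\text{comp}}},
\]

where C denotes total (discounted) costs and E denotes total (discounted) outcomes, such as QALYs gained or DALYs averted, with uncertainty ranges expected around the incremental quantities.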

In addition to reporting the above results, the author reporting part of the HTA-QAC also seeks information on whether the different forms of analyses, such as equity analysis, budget impact analysis, and stakeholder analysis were conducted. The author is to report as a binary response (yes or no) if these analyses were conducted and the results were presented in the expected format.

3.1.2 HTA-QAC Reviewer Section: Report Review

The domains under this section correspond to those in the author reporting section. However, while the author reports on the adequacy of reporting, the reviewer assesses the appropriateness of the choices made for the analysis. For each section in the HTA-QAC, it is specified how the reviewer should rate a particular aspect, along with the criteria on which the assessment is to be based. This section comprises 4 domains.

3.1.2.1 Basic Information

The reviewer is expected to rate the appropriateness of the title of the study depending on whether the title appropriately justifies the objectives of the study, and specifies the study design, study setting/geography, intervention/control, and disease/programme of concern. The reviewer has to rate the appropriateness on a scale of 1 to 10, with equal points for each domain. Second, the reviewer has to rate the quality of the abstract depending on whether it gives a comprehensive description of the study. Third, the objectives of the study should give complete justification of what is expected from the study and its relation to existing evidence. The reviewer is expected to rate the clarity of objectives on a scale of 1 to 10.

Finally, the reviewer has to rate the appropriateness of the description of the intervention, comparator, and study population on a scale of 1 to 10. The rating would depend on whether a detailed description of intervention and comparator is given (2 points) in terms of the type of health benefits provided, the effectiveness, safety profile as well as evidence on cost. Also, the appropriateness of justification for the choice of intervention and comparator should be evaluated (2 points) where one may discuss aspects related to clinical features of both comparator and intervention and the need for intervention over the existing pattern of care or the comparator. Further, the description of the characteristics of the study population and justification of why this population is relevant for the study (3 points) should be appraised by the reviewer. Last, the author’s consideration of any subgroups and justification for its choice is also to be reviewed, which should indicate that health effects or costs for a particular subgroup are expected to differ from the general population (3 points). In addition, the reviewer should also comment on whether there was a relevant subgroup that was not considered for the analysis.

3.1.2.2 Study Methods

Under this section, the reviewer is required to assess the appropriateness of the methodology based on the following:

  I. Choice of model and its appropriateness for the given analysis, along with a clear description of the underlying assumptions.

  II. Inclusion of costs according to the study perspective, appropriateness of cost data in the case of secondary sources, appropriateness of the methodology adopted for primary data collection, and appropriateness of conversion rates used for costs.

  III. Appropriateness of the source of effectiveness data given existing evidence in the case of secondary data, or appropriateness of the methods used to collect primary data on effectiveness. Similarly, the appropriateness of the choice of short-term and long-term outcomes, and of the choice of utility measure and its sources, should be appraised by the reviewer.

  IV. Use of an appropriate discount rate as per the Indian reference case guidelines, and appropriate annualisation of costs (illustrative formulas are sketched after this list).

  V. Adequacy of the types of uncertainty identified (methodological, structural, heterogeneity, parameter) and appropriateness of the methods (reference case, subgroup, scenario, univariate, PSA) used to address them.

  VI. Appropriate conduct of other analyses, if performed, such as the expected value of perfect information (EVPI), equity analysis, stakeholder analysis, and budget impact analysis. For assessment of the conduct of budget impact analysis, the Indian recommendations for the conduct of BIA may be referred to [11].
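As a worked reference for item IV above, the standard forms of discounting and of annualising a capital cost are sketched below; the symbols (cost or outcome X_t accruing in year t, capital outlay K, useful life n, and discount rate r) are illustrative, and the applicable discount rate should follow the Indian reference case:

\[
PV \;=\; \sum_{t=0}^{T} \frac{X_t}{(1+r)^{t}},
\qquad
\text{Annualised cost} \;=\; \frac{K \, r}{1-(1+r)^{-n}}.
\]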

Based on the assessment of the methods section, the reviewer is expected to rate on a scale of 1 to 10 depending on whether the study methods were in concurrence with the HTA process manual of India [10].

3.1.2.3 Study Results

In this section, the reviewer is expected to rate the appropriateness of the presentation of results. For example, an appropriate listing of the absolute and incremental costs and outcomes for each scenario separately along with ICER(s). Also, uncertainty ranges should be mentioned around each result parameter. Further, adherence to the standard reporting practices for uncertainty analysis is also to be evaluated. For example, the presentation of tornado plots for one-way sensitivity analysis (OWSA), subgroup-specific ICER(s), cost-effectiveness acceptability curves (CEAC), cost-effectiveness (CE) plane as well as confidence intervals for ICERs for PSA.
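To illustrate the kind of output being checked here, the following is a minimal sketch, in Python, of how a probabilistic sensitivity analysis generates the data behind a CE plane and a CEAC; the parameter distributions, number of draws, and willingness-to-pay grid are hypothetical and are not specified by the HTA-QAC.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_draws = 5000  # number of PSA iterations (hypothetical)

# Hypothetical parameter distributions: incremental cost ~ gamma, incremental QALYs ~ scaled beta
inc_cost = rng.gamma(shape=4.0, scale=500.0, size=n_draws)      # incremental cost per patient
inc_qaly = rng.beta(a=8.0, b=2.0, size=n_draws) * 0.05          # incremental QALYs per patient

# CE plane: each draw is one (incremental QALYs, incremental cost) point
ce_plane = np.column_stack([inc_qaly, inc_cost])

# CEAC: probability that the intervention is cost effective at each willingness-to-pay threshold
wtp_grid = np.linspace(0, 200_000, 81)                          # willingness to pay per QALY
nmb = inc_qaly[:, None] * wtp_grid[None, :] - inc_cost[:, None] # net monetary benefit per draw
ceac = (nmb > 0).mean(axis=0)

# Point-estimate ICER (ratio of means) and a 95% interval for the incremental cost
icer = inc_cost.mean() / inc_qaly.mean()
cost_ci = np.percentile(inc_cost, [2.5, 97.5])
print(f"ICER ~ {icer:,.0f} per QALY; incremental cost 95% interval: {cost_ci.round(0)}")
```

The reviewer would then check that the CE plane, CEAC, and interval estimates presented in the report are consistent with outputs of this kind.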

3.1.2.4 Discussion and Conclusion

The reviewer is expected to rate whether the authors have appropriately justified the results of the analyses in terms of the direction of the results as well as the evidence supporting that direction. Second, the reviewer is expected to rate how the authors present the comparison of the study findings with the existing evidence in the same domain. In case the findings of the study differ from those of the existing literature, the reviewer is required to determine whether the author was able to identify and justify the reasons for this difference in findings. This information also helps to determine the validity of the model, wherein the reviewer is expected to assess whether the outputs of the model are in concurrence with the existing scientific evidence available. For example: is the average life expectancy of the cohort in concurrence with the existing evidence on the same? Is the average predicted survival from the model in concurrence with clinical evidence? Is the reduction in disease-free episodes/increase in average disease-free survival/increase in average progression-free survival/decrease in mortality from the model in concurrence with the evidence from the clinical effectiveness literature? Third, the reviewer is required to comment on the external validity of the study findings and how the authors describe the generalisability of the study findings to other settings, relevant populations, and age sub-groups.

Fourth, the reviewer is to assess whether the authors have mentioned all the key limitations of the study, and to comment on whether any potential limitations could bias the study findings and affect their internal validity.

In addition to the above, the reviewer is expected to comment on whether the conclusion of the study is in concurrence with the objectives specified at the outset, or whether there are any deviations from the expected outcomes of the study. The reviewer ratings should consider how the recommendations of the study will help provide evidence for the formulation of related health policies. All the parts of this domain are to be rated on a scale of 1 to 10. The reviewer is expected to give possible explanations for the rating they provide.

3.1.3 HTA-QAC Reviewer Section: Model Review

This section consists of 7 domains that guide the reviewer to assess the model.

3.1.3.1 Basic Information

This section comprises the information necessary to assist the review of the model. It covers key points such as the type of platform used for building the model; the availability of a model dictionary containing a brief description of the analysis, an index of the different sheets, abbreviations, labels for the different variables, tables and figures, and references; proper labelling of sheets, tables, and figures; a consistent naming convention throughout; and a user-friendly layout that allows the model to be reviewed efficiently.

3.1.3.2 Model Assumptions

This section seeks information on whether the model figures (Markov models, decision tree) and underlying model assumptions have been listed clearly.

3.1.3.3 Functionality

This section checks the model functionality, which includes checks for macros, ranges, and lookup values, checks for general error messages, and checks for any links to external sources.

3.1.3.4 Model Inputs

Under this section, the reviewer is expected to check that all inputs are listed on a single sheet, that any conversion of parameters (for example, risks/ratios to probabilities) is correct, and that proportion sums and mutually exclusive parameters are consistent.
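As an illustration of the conversion check mentioned above, the standard relationships between a constant rate and a transition probability, and between probabilities defined over different cycle lengths, are given below; the cycle lengths t, t_1, and t_2 are illustrative:

\[
p \;=\; 1 - e^{-rt},
\qquad
p_{t_2} \;=\; 1 - (1 - p_{t_1})^{t_2/t_1},
\]

where r is the event rate and p_{t_1}, p_{t_2} are the probabilities over cycle lengths t_1 and t_2. A commonly flagged error is dividing an annual probability by 12 to obtain a monthly probability instead of applying these relationships.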

3.1.3.5 Calculations

In this section, the linking and the processes running in the model are to be checked. It includes correct linking of different cells within as well as between sheets and the correct formulation of processes like discounting, annualisation, QALY calculations, and others.
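As a simplified illustration of the computations being verified here, the sketch below runs a two-state cohort Markov trace and applies discounting and QALY weighting; the state names, transition probability, utility weight, cost, and discount rate are all hypothetical and are not drawn from any HTAIn model.

```python
import numpy as np

# Hypothetical two-state cohort model: "Alive" and "Dead"
cycles = 20                      # number of annual cycles
p_die = 0.05                     # annual probability of death (hypothetical)
utility_alive = 0.80             # utility weight while alive (hypothetical)
cost_alive = 10_000.0            # annual cost while alive (hypothetical)
discount_rate = 0.03             # annual discount rate (hypothetical)

# Transition probability matrix: rows = from-state, columns = to-state; each row sums to 1
tpm = np.array([[1 - p_die, p_die],
                [0.0,       1.0]])

trace = np.zeros((cycles + 1, 2))
trace[0] = [1.0, 0.0]            # the whole cohort starts in the "Alive" state
for t in range(cycles):
    trace[t + 1] = trace[t] @ tpm    # cohort distribution after each cycle

# Discounted QALYs and costs accrued while in the "Alive" state
years = np.arange(1, cycles + 1)
disc = 1.0 / (1.0 + discount_rate) ** years
alive = trace[1:, 0]
total_qalys = float(np.sum(alive * utility_alive * disc))
total_costs = float(np.sum(alive * cost_alive * disc))
print(f"Discounted QALYs: {total_qalys:.2f}; discounted costs: {total_costs:,.0f}")
```

A reviewer checking an Excel model would verify the same elements: that the rows of the transition matrix sum to one, that the trace is propagated cycle by cycle, and that discounting and utility weighting are applied consistently.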

3.1.3.6 Uncertainty Analysis

The reviewer is expected to perform checks on the conduct of uncertainty analysis. It includes checking the appropriateness of ranges listed for all parameters, proper linking of tables and graphs generated, functioning of macros, appropriateness of distributions assigned to parameters and their sources, and appropriate presentation of results of various analyses.

3.1.3.7 Model Summary

The model summary/output section is to be reviewed to ensure the comprehensive description of results, which include appropriate linking of summary estimates, and appropriate linking and labelling of tables, figures, and graphs.

For ease of understanding, a filled out checklist for an HTA outcome report [28] has been appended as Supplementary File: S3.

4 Discussion

The findings generated from HTA studies are a valuable evidence base for decision making. Resource scarcity, technological advancements in health, and growing advocacy for higher spending on health care have underscored the need for sustainable decision making in health. Since the institutionalisation of HTAIn, 19 HTAs have been completed in India [29]. In addition, many researchers and academicians have been conducting HTAs to provide evidence on the clinical and cost effectiveness of health interventions. However, it is of utmost importance to assure the highest level of quality of such evidence if it is to be used for making policy decisions. Further, HTAIn is committed to standardising practices related to the conduct of HTA research. The development of a quality appraisal checklist will not only ensure that good quality evidence is generated but will also improve transparency, thus promoting the use of the evidence generated.

Past evidence shows that several checklists have been developed for reporting as well as reviewing economic evaluations. Among the published checklists, the Drummond checklist [13] and the CHEERS statement [12] have been widely used for reporting economic evaluations. However, these checklists are self-reporting formats and do not incorporate a reviewer component. Further, there is limited scope to judge the quality and appropriateness of various methodological aspects. Recently, a revised version of the CHEERS checklist was published with improvements to include stakeholder engagement, analysis of distributional effects, model sharing statements, and language changes [22]. However, it is a reporting platform and does not allow quality assessment. Even in the reporting structure, detailed information about costs, valuation of outcomes, and uncertainty assessment is missing. A few checklists have also been published that focus on reviewing the quality or appropriateness of the methods used. One of these is the Scottish Intercollegiate Guidelines Network (SIGN) checklist published in 2012 [16]. This checklist is divided into two broad sections, internal validity and overall assessment of the study. A limitation of this checklist is that it is very brief and does not enable assessment of some key aspects such as the input parameters used, methodological choices, sources of data and assumptions, as well as the reporting format of the analysis. The NICE quality appraisal checklist was also published in 2012 and allows both reporting and quality evaluation [21]. Although the checklist covers most of the methodological aspects, it does not comment on the types of cost, their estimation methods, assessment of effectiveness data, estimation of utility values, or information related to model structure assumptions. Similarly, the Critical Appraisal Skill Program (CASP) checklist, published in 2018, is generic and does not aim to assess the appropriateness of the methodological components [18]. Both these checklists provide an overview of the analysis but do not provide an opportunity for the reviewer to critically review the analysis. Similarly, while the Scottish Medicine Consortium (SMC) Economics Checklist was published in 2017, its main focus is the review of pharmaceutical products only [17]. Moreover, it is a subjective checklist, and filling it out may be a lengthy and time-consuming process for the reviewer.

Amongst the reviewed checklists, the Evidence and Value: Impact on Decision Making (EVIDEM) framework, published in 2008, provides both reporting and reviewing formats, unlike the others discussed above, which focus either on reporting or on reviewing [14]. The EVIDEM checklist is divided into two components: the first part assesses the completeness and consistency of reporting, and the second part focuses on the relevance and validity of the economic evaluation. However, this checklist is also very brief and does not capture detailed information on several key methodological aspects, which is pivotal when making inferences on the quality of research. In addition, this checklist is also subjective and seeks only comments and not objective scoring. Another recent checklist, published in 2021 by the government of New South Wales, also allows assessment of the quality of economic evaluations [23]. This checklist seeks information on most of the key methodological aspects of an HTA; however, it lacks information on model structure and underlying assumptions. The criteria to assess primary data collection, as well as the criteria to rate uncertainty assessment, are also very limited. Supplementary File: S4 (Table) compares the different available checklists based on the key parameters essential to the conduct of economic evaluations.

In this paper, we present a checklist that comprises a comprehensive set of items that should be part of both reporting and reviewing practices. In addition, the HTA-QAC includes a section on model review, which is the novelty of this checklist. None of the checklists reviewed includes a model review section (Supplementary File: S4). This “model review” section ensures assessment of model validity and correctness of the computation process, as well as transparency. Second, our checklist gives the option to provide both objective and subjective assessments. Multiple and exhaustive options for each section have been given to account for variation in methodological choices. In comparison, the other checklists (Supplementary File: S4) all have a single scoring criterion, which might not be suitable for the different types of information sought. Our checklist allows multiple rating options (binary, Likert scale, guided 1- to 10-point rating, subjective assessment), which will allow a thorough assessment of the study. Third, the reporting and reviewing sections correspond to each other so that the reporting checklist can be used to facilitate the reviewing process. The checklist has been framed keeping in view the reference case for conducting HTA in India to promote the standardisation of processes. Among the other published checklists reviewed, only 3 are based on an underlying country-specific or internationally accepted reference case (Supplementary File: S4). In addition, we provide a guidance manual (Supplementary File: S5) along with the checklist to avoid misinterpretation and to standardise its use. Fourth, our checklist allows a detailed assessment of uncertainty analysis, which has not been fully addressed in previously published checklists (Supplementary File: S4). More importantly, we piloted the use of our checklist to assess its ease of use and the quality of information that it provides. Given the findings of the pilot, the checklist was modified several times to ensure relevance, validity, smooth usability, and practicability.

There are a few limitations of the present study. First, we conducted a targeted review of the literature and not a systematic literature review, which may have led to some studies being missed. However, we supplemented our review by searching the websites of national and international HTA agencies to gather information on available HTA checklists. Second, although the model checklist allows review of models built on other platforms in addition to MS Excel, the focus of the checklist is more towards Excel-based modelling and economic evaluation. This is because HTA assessors in India are trained by HTAIn using MS Excel. Third, our checklist uses a rating on a 1- to 10-point scale to summarise the scores obtained from the reviewer’s assessment of the study, which may introduce some degree of subjectivity on the part of the reviewer while rating. Finally, we did not include the assessment of the conduct of EVPI in the model review section. This is because EVPI has not been recommended as an essential analysis in the reference case for HTA in India. Also, we do not yet have methodological guidelines for the conduct of EVPI in India. However, we will revise and expand our checklist as we progress further and develop EVPI-specific guidelines for India.

5 Conclusion

The use of HTA evidence has significant implications for the allocation of scarce resources, for which quality benchmarking of research methods has gained importance. Weak or low-quality evidence may not provide the confidence needed to use it for policy decisions. This may lead to errors in decision making and can be associated with significant opportunity costs. Therefore, it is imperative to have guidance on reporting and reviewing practices for HTA research to ensure the quality and transparency of the research. We recommend that such processes be standardised, through quality appraisal checklists, to ensure consistency of reporting and reviewing practices for HTA studies, thus enhancing the quality of evidence.