Impact on practice

  • A new, self-administered, performance-based tool for measuring functional medication literacy was developed.

  • Low-income and elderly patients display lower levels of functional medication literacy.

  • Understanding long text instructions and applying numeracy skills present problems for patients.

Introduction

The European Literacy Policy Network defined literacy as “the ability to read and write at a level whereby individuals can effectively understand and use written communication in all media” [1]. Literacy can be further divided into three distinct types: document literacy (the ability to understand non-continuous information, such as prescription labels), prose literacy (the ability to understand continuous text, such as patient information leaflets), and numeracy (the ability to perform quantitative computations, such as properly following dosage directions) [2]. Higher literacy skills enable people to develop their potential and knowledge, allowing fuller participation in society and the economy. Literacy is both content and context specific. Individuals with high general literacy skills may encounter difficulties when applying those skills to unfamiliar contexts or to situations that require specific knowledge, such as the healthcare environment. Consequently, health literacy has emerged as a content-specific literacy in a health context [3].

Health literacy can be defined as “personal, cognitive and social skills which determine the ability of individuals to gain access, understand and use information to promote and maintain good health” [4]. Low health literacy is linked to decreased knowledge of medical conditions and treatment, low use of preventive services, poor adherence to treatment plans, high rates of hospitalization, poor health, and high healthcare costs [5,6,7]. In Nutbeam’s outcome model for health promotion, health literacy is an outcome of health promotion actions, such as patient education and counselling. In this model, health literacy includes measures such as health-related knowledge, attitudes, behavioural intentions, personal skills and self-efficacy [4]. Nutbeam also proposed a health literacy model with 3 sequential levels through which patients progress. The basic level is functional literacy, which describes the basic-level skills patients need to obtain relevant health information (e.g. on health risks and using the healthcare system) and to apply this knowledge to a range of healthcare situations. It is also a prerequisite for the two higher literacy levels. The second level is interactive/communicative literacy. It focuses on advanced cognitive skills that enable patients to actively engage in health-related interactions, extract health information, derive meaning from different forms of communication, and apply it to changing circumstances. The third and highest level is critical literacy. This level builds on functional and interactive literacy and refers to patients’ ability to critically analyse health-related information from a wide range of sources and to use this information to make informed decisions and exert greater control over health-related events and situations [8].

Patients’ poor literacy skills create unique problems in different healthcare settings [2]. From a pharmacy perspective, patients must understand an increasingly complex healthcare system, particularly as it relates to medications [7]. Building on definitions of health literacy, Sauceda et al. proposed a definition of medication literacy as the “ability of individuals to safely and appropriately access, understand and act on basic medication information” [9]. Measuring such a specific literacy allows better identification and understanding of literacy issues and their impact on patient outcomes [10].

Measures of health literacy can be constructed as perception-based or performance-based instruments. Perception-based instruments are subjective measures in which respondents rate their perceived abilities to collect, understand, communicate and evaluate health information. They have the advantage of being shorter and potentially less embarrassing for patients, as their abilities are scored by themselves rather than by the researchers. They are usually used as screening tools. However, they might assess self-efficacy or behaviour rather than health literacy. Cultural norms could also affect patients’ replies, limiting the validity of such measures for assessing health literacy. Performance-based instruments are considered objective in their assessments. Respondents show higher or lower performance on tasks such as filling in gaps in texts, reading health-related terms out loud, or completing knowledge quizzes. These quizzes are usually a set of true/false or multiple-choice questions which address patients’ knowledge of disease symptoms, causes, management, etc. The health literacy level is inferred from the measured performance. They are mostly used in research contexts. While they may provide a more comprehensive health literacy assessment, they are more time consuming, impose performance pressure on patients, and disregard contextual or situational factors that may affect performance [11,12,13].

There are a few validated instruments which measure medication literacy. Sauceda et al. developed and validated the Medication Literacy Assessment in Spanish and English (MedLitRxSE). It is a performance-based measure targeting a distinct geographical setting, the border between the United States and Mexico. It contains items about specific cases (e.g. diabetes), which may not be equally applicable to all patients. Specific patient groups (e.g. patients with diabetes) might be more familiar with the topics covered and therefore show better results [9]. Stilley et al. developed and validated the Medication Health Literacy Measure, a performance-based measure which focuses on understanding prescription labels. The questionnaire uses two labels, one for an immunosuppressant medication and one for a diabetes medication, and was validated with adult liver transplant recipients and patients with diabetes [14]. Again, the chosen cases and validation method might make the questionnaire too specific. Yeh et al. developed and tested a Chinese medication literacy measure consisting of 17 items divided into 4 sections: vocabulary, non-prescription drugs, prescription drugs, and drug advertisements. Respondents had to read and interpret medication-related phrases and written information on prescription and non-prescription medications and drug advertisements. It was administered in face-to-face interviews and may be specific to the Chinese cultural environment [15]. A more recent instrument to measure medication literacy was developed by Vervloet et al. in the Netherlands. The RALPH instrument is an interview guide for practicing pharmacists to recognize patients with limited medication literacy skills. It tests functional, communicative and critical literacy [16]. The instrument is practice oriented and therefore not intended for patients’ self-administration, which would allow anonymity. It is also not a solely performance-based instrument, as it contains perception-based questions as well. No other published, validated performance-based instruments to measure medication literacy have been found.

Aim of the study

The aim of this study was to develop and validate a new, self-administered, performance-based questionnaire measuring functional medication literacy and to evaluate functional medication literacy among the Slovenian general population.

Ethics approval

Ethical approval for the study was obtained from the National Medical Ethics Committee (Reference Number 55/02/16).

Methods

The content of the new questionnaire was generated from various pharmacy sources addressing patient counselling, health literacy and medication literacy, and was then validated by a panel of patients and healthcare professionals. The resulting questionnaire was validated and administered to a large sample of Slovenian residents. The entire development and validation process was conducted in Slovenian.

Content generation

In line with Nutbeam’s model, in which health literacy is viewed as an outcome of patient education and counselling, the initial content of the questionnaire was generated from pharmacy counselling literature, patient information booklets, and articles from a PubMed search for pharmacy counselling, patient education, medication literacy and health literacy [9, 14, 17,18,19,20,21,22,23,24]. The retrieved literature was screened for generic types of key information on medication therapy (e.g. medication purpose, dose frequency, adverse effects) which patients should be able to access, understand and act upon. A list of such medication literacy elements was compiled from this literature.

The resulting list of literacy elements was examined by 3 academic pharmacy experts with knowledge of health and medication literacy who were willing to participate in the study. They selected the elements according to 4 criteria. The first criterion was in line with the chosen definition of medication literacy, which highlights safe and proper use of medications. Thus, the experts jointly selected elements considered essential to ensure such use, excluding elements that did not address it, e.g. knowing the terms adverse and side effects and the difference between them. Secondly, according to the chosen definition, the elements had to deal with basic medication information. Elements requiring advanced knowledge, e.g. the meaning of the precautionary symbol § on the outer packaging, were excluded. Thirdly, since the aim was to create a general questionnaire, the chosen elements had to be applicable to the general patient population, which excluded specific issues such as the use of inhalers. Finally, the elements had to represent topics that patients should know about. The questionnaire was designed for patient self-administration, and therefore all issues that should not be the patient’s concern were excluded, e.g. interactions with prescribed medications.

Questionnaire items were generated from the selected elements and presented mostly as close-ended test questions (e.g. “Which statement is true for adverse effects?”). Guidelines for formulating the items were followed, with special consideration given to plain language and brevity [25, 26]. The questionnaire items were not designed to refer to a specific medication a respondent might be taking or to a specific patient population. Instead, they were kept general to be relevant to all respondents.

Content validation was conducted by a panel of 27 participants (9 community pharmacists, 3 general practitioners, 2 nurses, and 13 patients) who gave feedback on questionnaire content and format. The members of the panel were asked to self-administer the questionnaire and return it to the research team with feedback on issues such as comprehensibility, missing aspects of safe and proper use of medications, redundant items, and items that might exceed the expected level of patient medication literacy. The panellists were recruited through snowball sampling until no new suggestions were proposed. Their feedback and suggestions for improvement were pooled, analysed and discussed by the research team. The questionnaire was optimized accordingly.

Validation and medication literacy evaluation

The optimised questionnaire was used in the validation study and medication literacy evaluation phase. This questionnaire comprised 30 items divided into 5 categories: dosage, adverse effects, interactions, precautions, and other information. All but 1 of the 20 close-ended items had 5 response options. The remaining 10 items were open-ended. All items had a “No answer” option. Sociodemographic data were collected at the end of the questionnaire. The questionnaire was piloted on a small opportunity sample (N = 10) of patients who were asked for additional feedback on the wording, clarity, format, and accompanying instructions. These patients were not included in the subsequent validation study. The pilot study resulted in minor modifications, mostly related to the wording of the questionnaire.

To validate the questionnaire and measure medication literacy, the questionnaire was sent by postal mail to a random sample of 1500 adult Slovenian residents. The random sample was obtained from the Statistical Office of the Republic of Slovenia. The sample size was calculated based on the number of adult Slovenian inhabitants in 2016 (1,701,642 inhabitants), a 95% confidence interval and a 5% margin of error [27, 28]. The calculated required sample size was 385. Anticipating a 25% response rate, 1500 questionnaires were sent. Participants were informed of the importance of medication literacy and the study aims, given instructions for completing the questionnaire, told the estimated time needed to complete it, and assured of the anonymity of their answers. Respondents returned completed questionnaires to the research team in a pre-paid envelope. All returned questionnaires were analysed, and total medication literacy scores were calculated. Each correct answer was worth 1 point, with a maximum of 30 points. The criteria for evaluating the open-ended questions were defined in advance. One of the researchers used these criteria to assign the points; in cases of doubt, a second researcher was consulted and a consensus was reached.
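For illustration, the reported sample size of 385 is consistent with the standard formula for estimating a proportion at a 95% confidence level with a 5% margin of error and a finite population correction. The Python sketch below is a hedged reconstruction under the common assumption of a maximally conservative proportion (p = 0.5); the exact procedure of the cited calculator is not stated in the text.

```python
import math

# Assumed inputs, consistent with the figures reported above
population = 1_701_642    # adult Slovenian inhabitants in 2016
z = 1.96                  # z-value for a 95% confidence interval
margin_of_error = 0.05    # 5% margin of error
p = 0.5                   # most conservative assumed proportion

# Sample size for an (effectively) infinite population
n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2

# Finite population correction (negligible for a population this large)
n = n0 / (1 + (n0 - 1) / population)

print(math.ceil(n))  # 385
```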

Statistical analysis

Descriptive statistics were used to summarize sociodemographic data and item-level statistics (% of correct responses, total score, mean/median total score, standard deviation/interquartile range of the total score). The overall validity of the questionnaire was assessed via reliability, criterion validity, discriminant validity, and content validity. Reliability was assessed using the Kuder–Richardson Formula 20, a measure of internal consistency for questionnaires with dichotomous choices (in our case, correct or incorrect responses) [29]. Coefficients above 0.7 were considered satisfactory [30]. Criterion validity was addressed by assessing the association between literacy and age, as cited in the literature [31,32,33]. For that purpose, a multiple linear regression analysis was run to determine factors associated with medication literacy. The medication literacy score was set as the dependent variable. The participant’s sex, age, education, income, current self-perceived health status, and any chronic illnesses were used as factors, as these are frequently reported as significant predictors of health literacy [34]. Dummy variables were used for categorical variables with more than 2 categories. The forced entry method of regression (SPSS: Enter method) was used. Multicollinearity was examined using variance inflation factors. To test the discriminant validity of individual items, a Mann–Whitney test was performed comparing the total scores of respondents who answered a particular item correctly with those of respondents who answered it incorrectly. In contrast to other validity measures, content validity cannot be assessed statistically; thus, the procedures in the content generation and validation phases (literature search, expert review, patient and healthcare professional panel, pilot study) ensured that relevant content was incorporated into the questionnaire.
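All analyses were run in SPSS; purely for illustration, the following Python sketch shows how the KR-20 coefficient and the item-level Mann–Whitney comparison described above can be computed from a dichotomously scored respondents × items matrix. The function names and the simulated data are illustrative assumptions, not the study’s analysis script.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def kuder_richardson_20(scores: np.ndarray) -> float:
    """KR-20 internal consistency for a respondents x items matrix of 0/1 scores."""
    k = scores.shape[1]                              # number of items
    p = scores.mean(axis=0)                          # proportion of correct answers per item
    q = 1 - p
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_variance)

def item_discrimination(scores: np.ndarray, item: int):
    """Mann-Whitney U test comparing total scores of respondents who answered
    the given item correctly with those who answered it incorrectly."""
    totals = scores.sum(axis=1)
    correct = totals[scores[:, item] == 1]
    incorrect = totals[scores[:, item] == 0]
    return mannwhitneyu(correct, incorrect, alternative="two-sided")

# Illustrative use with simulated data (402 respondents x 30 items)
rng = np.random.default_rng(0)
simulated = (rng.random((402, 30)) < 0.8).astype(int)
print(kuder_richardson_20(simulated))
print(item_discrimination(simulated, item=0))
```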

The validation study and medication literacy evaluation were conducted in December 2016 and January 2017. All statistical analyses were performed with the statistical package SPSS v22.0. P values below 0.05 were considered statistically significant in all analyses.

Results

Content generation

A list of 92 medication literacy elements was compiled from the literature search. After screening against the inclusion criteria, the list was reduced to 30 medication literacy elements, from which 30 questionnaire items were generated. As a result of the panellists’ feedback on questionnaire content and format, two questionnaire items were deleted, four were modified and two were added. Thus, the optimized questionnaire comprised 30 items divided into 5 categories. Table 1 shows the medication literacy elements with the corresponding categories.

Table 1 Medication literacy elements and accompanying categories, together with percentages of correct answers for all generated elements

Validation and medication literacy evaluation

Participants in the pilot study took about 15 min to complete the questionnaire. Of the 1500 questionnaires sent, 425 were returned (28.3% response rate). After excluding 23 questionnaires with more than 20% missing responses, the final number of questionnaires available for analysis was 402. Table 2 shows the sociodemographic data of the respondents. Their mean age was 52 years (range 18–87 years).

Table 2 Sociodemographic data of participants who completed a medication literacy questionnaire

The reliability coefficient for the questionnaire, as assessed by the Kuder–Richardson Formula 20, was 0.823, which exceeds the 0.7 criterion.

Table 3 shows the factors used in the final regression model. The model explained 23.0% of the variance in participants’ medication literacy scores (R = 0.479, N = 336, P < .001). All variance inflation factors were below 5 and all tolerance statistics were above 0.2, indicating no problematic multicollinearity among the factors. Three factors were statistically significantly associated with medication literacy: age (standardized β = − 0.303), income (standardized β = 0.243), and current self-perceived health status (standardized β = 0.187). Younger participants with higher income and better self-perceived health status demonstrated higher medication literacy scores than other respondents. The remaining factors were not significantly associated with medication literacy.

Table 3 Multivariate regression of medication literacy scores on participant sociodemographic data
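For readers wishing to reproduce this type of model outside SPSS, a minimal sketch of a forced-entry linear regression with dummy-coded categorical predictors and variance inflation factors is shown below, using statsmodels. The column names and category codings are illustrative assumptions, not the study’s actual variable coding.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

def fit_literacy_model(df: pd.DataFrame):
    """Forced-entry OLS regression of the medication literacy score on
    sociodemographic factors; C(...) expands categorical predictors into
    dummy variables, mirroring the Enter method described in the Methods."""
    model = smf.ols(
        "literacy_score ~ age + C(sex) + C(education) + C(income_category)"
        " + C(health_status) + C(chronic_illness)",
        data=df,
    ).fit()

    # Variance inflation factors for each predictor column (excluding the intercept)
    exog = pd.DataFrame(model.model.exog, columns=model.model.exog_names)
    vifs = {
        name: variance_inflation_factor(exog.values, i)
        for i, name in enumerate(exog.columns)
        if name != "Intercept"
    }
    return model, vifs
```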

One item, concerning the correct interpretation of the precaution label “Keep out of reach of children”, did not show discriminant validity: participants answered it correctly regardless of their medication literacy score. The item was therefore deleted from the questionnaire.

When evaluating medication literacy, the median total score was 24 points (interquartile range 4 points) out of 29 points. The minimum achieved total score was 6 points and the maximum was 29. The scores were not normally distributed; they were negatively skewed (skewness coefficient of − 1.615). Table 1 presents the percentages of correct answers for each generated element, together with the accompanying category. For the full item wordings, see the questionnaire in the Electronic supplementary material.

The highest-scoring items, each answered correctly by more than 95% of participants, were all close-ended. Item 14, which addressed storing or consuming medications in relation to their expiration date and how to dispose of expired medications, received the most correct answers. Item 18.4, which focused on interpreting the “Shake before use” label, received the next highest number of correct responses, followed by item 9, which covered general medication storage conditions and retaining the original packaging and patient information leaflet, and item 13, which assessed awareness of the risks of consuming alcohol with medications.

Items related to dosing, which required numeracy skills and understanding information from long texts, received the lowest scores. Specifically, 6 items were answered correctly by fewer than 70% of respondents, 4 of which were open-ended questions. Items 10.1, 10.2, and 10.3 were open-ended questions related to interpreting the instructions and dosage of a commonly used OTC analgesic, and items 10.1 and 10.2 required some calculation. For item 8, a close-ended question about the precaution symbols △ and ▲ on the outer packaging, 40.3% of respondents chose the “No answer” option. For item 20, which tested whether participants knew that multi-dose preparations (e.g. eye drops) have expiration dates irrespective of whether they have been opened, 17% selected the “No answer” option and 16.9% incorrectly answered that such medications have an unlimited shelf life once opened. Item 18.3 was an open-ended question testing interpretation of the storage label “Store in a cold place at 2–8 °C”; it asked participants to give an example of a place where such a medication could be stored. Most correctly listed the refrigerator, but some incorrectly answered “in a cold place,” “on a high shelf,” or “in the basement.” On average, participants correctly answered 84.2% of the close-ended questions and 72.7% of the open-ended questions (Wilcoxon signed-rank test, P < .001).
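The comparison between close-ended and open-ended items reported above is a paired test; a minimal sketch of one plausible implementation, assuming each respondent’s proportion of correct answers per item type is the paired quantity, is given below (the exact pairing used in SPSS is not detailed in the text).

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_item_types(scores: np.ndarray, open_ended_items: list):
    """Paired Wilcoxon signed-rank test on each respondent's proportion of
    correct answers for close-ended vs. open-ended items (0/1 score matrix)."""
    open_mask = np.zeros(scores.shape[1], dtype=bool)
    open_mask[open_ended_items] = True
    prop_open = scores[:, open_mask].mean(axis=1)
    prop_closed = scores[:, ~open_mask].mean(axis=1)
    return wilcoxon(prop_closed, prop_open)
```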

Discussion

A new performance-based questionnaire measuring functional medication literacy among the Slovenian general population was developed and validated. The median medication literacy score was 24 out of 29 points.

International comparison

Published international literature on measuring medication literacy is scarce, with only a few studies available for comparison. During the testing phase of MedLitRxSE, Sauceda et al. reported an average score of 10.7 out of 14 points (76.4%) among English-speaking participants [9]. Yeh et al. reported an average overall score of 13 out of 17 points (76.5%) using their Chinese medication literacy measure [15]. Using the RALPH instrument in the Netherlands, Koster et al. reported more than 90% correct responses to most questions related to functional medication literacy skills [35].

The content and results of these previous studies are similar to those of the current study: all test participants’ ability to read and correctly interpret information on prescription or non-prescription medications. They also all address at least one of the three types of literacy: document literacy, prose literacy and numeracy [2].

Prose literacy and numeracy

Items related to dosing that required understanding long texts and numerical skills received the most incorrect answers. A study by Davis et al. confirms that patients may read dosing instructions yet fail to correctly demonstrate the use of this information, because doing so also involves numeracy skills [36]. Numeracy-related tasks are common in healthcare situations, including the handling of medications. The correct use of medications and adherence to therapy involve many numeracy-related issues, including administration frequency, treatment duration, dose per weight, and refill scheduling [37]. Therefore, interventions to improve numeracy skills linked to prose literacy should be a priority when addressing patients’ medication literacy needs.

Age and education as medication literacy predictors

In this study, age was significantly associated with medication literacy, a phenomenon frequently cited in the literature [31,32,33, 38, 39]. Medication literacy among participants decreased as age increased. As patients age, their cognition, learning, and memory deteriorate, which negatively affects their medication literacy [40]. This problem is compounded by frequent use of medications. The elderly are prescribed three times as many prescriptions as younger adults, and comorbidities often require taking multiple medications several times a day [14]. The elderly are thus a target population for medication literacy interventions.

In the regression analysis, education was not significantly associated with medication literacy, although people with higher levels of education achieved higher average scores than those with lower levels of education (Primary school or less 20.0 points, Secondary school 22.6 points, College 24.1 points, University or more 25.1 points). Collinearity diagnostics indicated that education might be moderately negatively correlated with age, although all variance inflation factors were below 5. Although consistent with the study of Maniaci et al., the current study is one of only a few in which education has not been found to be associated with literacy [41]. Most other studies reported increasing health literacy with an increasing number of years of education [31, 32, 38, 42, 43]. This inconsistency might result from the focus on functional literacy: at this basic level of literacy, differences among patients with different levels of education may not be evident. Furthermore, Berkman et al. suggested that health literacy might not be related to years of education [44].

Questionnaire application

The initial questionnaire content was developed to be as general as possible (neither region- nor patient-specific). The later stages (expert panel, pilot and validation study), however, were not conducted internationally. Thus, applicability to international settings cannot be automatically assumed and should be tested separately.

The questionnaire is primarily research-oriented and thus not intended for regular clinical use, as it takes about 15 min to complete. Nevertheless, individual pharmacists can use it as part of their initial patient assessments when providing medication (use) reviews and to identify potential patient misunderstandings and concerns. Further validation could also reduce the questionnaire’s length, making it appropriate for routine use in outpatient settings. At the population level, the questionnaire can provide a scientific basis for medication literacy interventions, such as visual aids, customized patient information, and improvements in information accessibility [45].

Health literacy and health knowledge

Health literacy and health knowledge have frequently been linked, but their relationship remains theoretically inconclusive. Health literacy theories view knowledge either as an antecedent, a dimension, or a consequence of health literacy [46, 47]. Sørensen et al. systematically reviewed models of health literacy and developed an integrated conceptual model summarizing the most comprehensive evidence-based dimensions of health literacy. The model lists knowledge as one of the main dimensions of health literacy [48]. In line with Sørensen’s model, the developed instrument features questions addressing patients’ knowledge about medications.

Limitations

Although a self-administered questionnaire sent by postal mail has several advantages, it is also a source of potential bias. Those who responded might feel more competent in medication literacy, leading to an overestimation in the results. Family and friends might also have assisted participants in completing the questionnaire. Although most close-ended questionnaire items had 4 or 5 response options, the possibility of guessing cannot be eliminated. Furthermore, the current questionnaire does not define cut-off values or classes for medication literacy; this is planned for future work. Lastly, self-administration of the questionnaire prevented observation of participants demonstrating medication use.

Conclusion

A performance-based questionnaire measuring functional medication literacy among the Slovenian general population was developed, with supporting evidence of validity. The median medication literacy score among Slovenian respondents was 24 out of 29 points. Items that required comprehension of long texts (prose literacy) and numerical skills (dosing instructions) received the most incorrect answers. Special attention should be paid to low-income and elderly people with poor self-perceived health, as they demonstrated significantly lower medication literacy scores.