
1 Introduction

New methods of providing farmers and other rural stakeholders with the vital information they need to increase harvests and improve livelihoods and incomes are frequently created. However, making this information available and accessible at the right time remains challenging in developing countries, including India. The rapid increase in mobile phone usage and the proliferation of smartphones, with their vast libraries of applications, offer app developers and researchers an opportunity to develop and disseminate science-based information to farmers when they need it most.

An agricultural app titled NaPanta [1] was selected for this study because of its relevance and popularity. App developers often give more importance to the usability of an app than to its functionality, even though research has shown that one of the most important motivating factors for using an agricultural app is the accurate and timely information it provides to users [2, 3]. Hence, we conducted an expert review of the selected app, following the cognitive walkthrough and heuristic evaluation methods, to analyze its usability and performance.

In the following sections, we will discuss the methods and results.

2 Methods

2.1 Usability Inspection Methods

Cognitive Walkthrough for the Web (CWW) [4] is an extension of the Cognitive Walkthrough (CW) that aims to detect errors that occur when browsing and searching for information on a web site. CWW was the most relevant CW extension to our study and was therefore followed for the CW phase; here it was used to detect issues when browsing and searching a mobile application for information rather than a web site. Experts performed the browsing and searching for each task given by the researcher. The tasks were developed based on the information needs of farmers identified in previous literature on agriculture and rural development.

The heuristic evaluation (HE) method [5] is a widely used and informal usability inspection method in which usability specialists judge whether the user interface of a product/system conforms to established usability principles, also known as heuristics. In this method, usability experts evaluate the interface to identify general usability issues based on the set of heuristics selected for that product, rather than user-specific issues.

There are many well-established heuristics proposed by usability scholars; the specific heuristics selected typically depend on the type of product being evaluated. A list of suitable heuristics for evaluating a mobile application was compiled from various proposed sets of principles [6,7,8,9].

2.2 Research Setting and Data Collection

Cognitive walkthrough and heuristic evaluation were used by a group of selected experts to evaluate the application. People with experience in evaluating and using mobile applications, designing interfaces, and with relevant educational backgrounds were recruited for the study. The determining factor was that they self-identified as having some expertise in usability. All the recruitment material, including survey announcements and questions, was approved by the University of Illinois Institutional Review Board (IRB). Experts were recruited through email and verbal announcements and through snowball sampling (participants recommending other potentially interested participants).

All in-person usability studies were conducted on the University of Illinois Urbana-Champaign campus. Because Urbana-Champaign is a university town and the study was conducted on campus, a majority of the participants were from the university community, with the others from relevant industries in the area. It turned out that some participants had never conducted usability studies but were familiar with the concepts from courses in their degree programs. Hence, we classified participants into two groups: expert participants and intermediate participants. Expert participants were people who had conducted usability studies, and intermediate participants were those exposed to usability concepts through classes and projects but without practical experience of conducting studies. With these classification criteria, we identified 9 of the 18 participants as usability experts and 9 as intermediates.

A two-phase survey was used. The survey started with the researcher briefly explaining the survey process and obtaining participants' consent to take part in the study. The study procedure began with basic demographic questions such as age and profession or major, followed by the first phase: the cognitive walkthrough.

In the cognitive walkthrough method, experts act as end-users to perform the tasks given by the researcher. Hence this phase started with the researcher explaining the app's end-user population and the tasks to be performed, as given below:

  • End-User Population: Rural Indian farmers, who usually have little or no experience using smartphones or mobile applications.

  • Tasks: The app under study, NaPanta, is an agricultural app whose main functionality is to disseminate required information to farmers. Hence the tasks were to find information addressing the general/identified information needs of farmers.

  • Guidance for performing the tasks: The app was already installed and activated with a local (Indian) mobile number. Hence participants opened the installed app directly and looked for the information asked for in the given task.

The information needs were grouped into four stages of the agricultural life cycle: seed and cultivation varieties, disease and weed management, market and post-harvest management, and Agriforum (an interactive chat forum).

In the heuristic evaluation phase, the selected heuristic principles were used to build questions testing the app's interface for usability and performance. Participants were asked to think aloud throughout, and the sessions were audio-recorded. As the researcher posed the information-finding tasks, participants articulated their thoughts, the ways they tried to find the information, and their comments on the app's interface. The researcher took notes while they were talking. Both the study notes and the audio recordings were used in the analysis.

3 Results and Analysis

3.1 Results

The researcher's study notes and the audio recordings were used to develop the transcripts. Both quantitative and qualitative analyses were carried out on the integrated data. First, qualitative analysis was performed to identify themes, codes, and usability problems from the transcripts. Then descriptive statistics and inferential statistics (a t-test) were computed on the average number of problems identified by the two groups of participants, intermediates and experts, to observe any differences between them.

A hybrid approach of inductive and deductive coding and theme development was used. Some themes were taken from previous studies on app analysis, and others emerged during coding. The analysis started with multiple readings of the study notes (transcripts) to identify keywords and phrases. Relevant keywords and phrases were coded, and related codes were grouped into themes. A total of 90 usability problems were grouped into 25 themes.

From the top 10 problems identified by participants, given in Table 1, it can be observed that the major problems were app-specific usability issues such as scattered information, lack of search functionality, and disorganized interface design. These problems can be addressed with more careful design of the app's information architecture.

Table 1. Top 10 problems identified by 7 or more participants.

3.2 Analysis

The number of problems identified by each participant is shown in Fig. 1. The first bar in the figure indicates that participant 'E2' (the second expert participant) found 34 usability problems, and the fourth bar indicates that participant 'I8' (the eighth intermediate participant) found 22 usability problems. Similarly, the numbers of problems identified by all other expert and intermediate participants are shown in the figure.

Fig. 1. Number of problems identified by experts and intermediate participants.

The figure suggests a difference in the number of problems identified by intermediates and experts. To examine this statistically, a t-test was performed on the average number of usability problems identified by the two groups. The statistical test of means (t = 2.8047; p = 0.017) indicated a significant difference in the average number of problems reported by experts and intermediates.
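As an illustration of this comparison, the sketch below runs a standard two-sample t-test on per-participant problem counts. The counts here are hypothetical placeholders (only E2 = 34 and I8 = 22 are stated in the text), so the variable names and values are illustrative, not the study's data.

```python
# Minimal sketch of the expert vs. intermediate comparison.
# The counts below are hypothetical; the study's actual counts are in Fig. 1.
from scipy import stats

expert_counts = [34, 30, 28, 31, 27, 33, 29, 26, 32]        # 9 experts (illustrative)
intermediate_counts = [22, 18, 23, 21, 19, 24, 17, 20, 25]  # 9 intermediates (illustrative)

# Two-sample t-test on the number of problems found per participant.
t_stat, p_value = stats.ttest_ind(expert_counts, intermediate_counts)
print(f"t = {t_stat:.4f}, p = {p_value:.3f}")  # the paper reports t = 2.8047, p = 0.017
```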

Insights on the Number of Experts Required for the Usability Study.

“How many participants are really enough?” is a question of concern for many researchers and professionals in the field of usability engineering. Many pioneers in the field have conducted studies with varying numbers of users on different products and reported two different views. Some studies [10] found that five participants are enough to uncover 80% to 95% of the usability problems, while others [11] argued that more than five participants are needed to identify most of the problems. There is still no consensus on sample size, as it depends on various factors, including the size and properties of the product, the type of product, the usability study method, and the skills and personalities of the participants [12, 13].
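For reference, the "five users" estimate is usually traced to the problem-discovery model popularized by Nielsen and Landauer, in which the expected proportion of problems found by n evaluators is

$$\text{Found}(n) = 1 - (1 - \lambda)^n,$$

where λ is the average probability that a single evaluator detects a given problem. With the commonly reported λ ≈ 0.31, five evaluators yield 1 − 0.69⁵ ≈ 0.84, i.e., roughly 84% of the problems, consistent with the 80% to 95% range cited above.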

This study, with 18 participants (9 experts and 9 intermediates), identified a total of 90 usability problems in the app under study. To investigate how many participants would be enough to identify most of the problems, we examined all possible subsets of 5 to 9 participants, drawn separately from the experts and from the intermediates, to gain insights into what percentage of problems would be identified by how many participants (Table 2).

Table 2. All possible combinations of participants (9 choose k, for k = 5 to 9) and the percentage of problems found.

3.3 Discussion

In the first part of Table 2, on experts, the first row gives the maximum number of problems identified across all combinations of 5 of the 9 expert participants (9 choose 5 = 126). Similarly, the minimum and average numbers of problems, along with the corresponding percentages of usability problems identified, are given in the table for both expert and intermediate participants.
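As a minimal sketch of the computation behind Table 2, assume each participant's findings are represented as a set of problem identifiers out of the 90 total; the `expert_findings` data below is illustrative, not the study's.

```python
# Sketch of the Table 2 computation: for each subset size k, enumerate all
# C(9, k) combinations of the 9 experts (or intermediates) and report the
# min / max / average percentage of the 90 total problems they cover.
from itertools import combinations

TOTAL_PROBLEMS = 90

# Hypothetical data: each expert's findings as a set of problem IDs (0..89).
expert_findings = [set(range(i, TOTAL_PROBLEMS, 3)) for i in range(9)]

for k in range(5, 10):
    coverages = [
        100 * len(set().union(*subset)) / TOTAL_PROBLEMS
        for subset in combinations(expert_findings, k)
    ]
    print(f"k={k}: combos={len(coverages)}, min={min(coverages):.1f}%, "
          f"max={max(coverages):.1f}%, avg={sum(coverages) / len(coverages):.1f}%")
```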

From Table 2, it can be noted that 8 experts could find around 80% of the total usability problems, while all 9 intermediate participants together found only 70% of the total problems. This study thus required more than 5 participants to identify most of the usability problems, which differs from the number suggested by Turner, Lewis, and Nielsen. We think our study needed more than 5 participants to find 80% of the usability problems due to various factors, including the following:

  1. The type of the product: the app under study was an agricultural mobile app, and few usability studies have been conducted in this area, so the "magic number" of 5 may not apply here.

  2. The type of the study: the app was developed for rural Indian farmers, but the study was conducted with usability experts in the USA.

  3. The knowledge of the participants: in line with the previous point, experts in the USA may not have domain knowledge about the content and functionality of an app developed for crop farmers.

From this we conclude that there is no strict rule on the number of participants needed to identify most of the usability problems, as it depends on the characteristics of both the participants and the product under study.