1 Introduction

1.1 General and Most Commonly Used UX Research Methods

User interface research has become an indispensable component of web development, given the widespread demand for ergonomic, user-friendly web pages and applications.

The topic is also relevant from a business point of view, as web development projects that involve users yield superior results.

Several methods are available for surveying and researching users. Choosing the appropriate ones and following the research through (in methodology, detail, and depth) depends on available funding and professional standards. There are many ways to do research, and even the most rudimentary study or survey produces results.

Available UX research methods are usually presented according to whether they survey user attitudes or monitor user behavior, and whether they yield qualitative or quantitative results. One of the best-known and most cited charts is the work of User Experience Design and Research Executive Christian Rohrer, from 2014 [1]. The chart arranges research methods by their behavioral and attitudinal focus (i.e. what the user does and says) and by their quantitative or qualitative data yield (how often or why users do what they do). Another defining factor in method classification is the context of future use (Fig. 1).

Fig. 1.

A Landscape of User Research Methods. Christian Rohrer, 2014.

Another frequently cited chart is the one from McCrindle Research [2], which shows a range of methodologies broken down by generation. Of course, not all of these are used in examining user experience; they are more widely used in social science research, but many of these methods have become prominent in online interface testing as well (Fig. 2).

Fig. 2.

Emerging Research Methods, McCrindle Research, 2013.

Certain methods developed for social science research are highly applicable to UX research, yet they are not featured on these charts. These are participant observation, an increased sensitivity toward mental models and cognitive schemas, and the application of projective testing.

In the following, Sect. 2 describes three methods that I have used in my previous research, and introduces their potential innovative use for online interfaces.

2 Social Science Researching Methods in the Field of UX Research

2.1 Researching User’s Mental Model in the Course of Mapping User Needs

One of the big, centuries-old questions facing not only psychology, but philosophy and linguistics as well, is how we model the world inside our mind [3], how we conceptualize complex processes: for example, how we might imagine a shopping trip, mailing a letter, or unwrapping a chocolate egg. These practical and accurate mental maps determine how we set about such a complex series of tasks; essentially, each is a mental model of our attitude object [4]. Mental models are shaped and molded by previous experience [5]. These models are relevant to both online and offline environments, and pre-define our expectations of how things should work. Some important shared features of mental models are their occasional instability, irrationality, rapid emergence and mutability, scientific unfeasibility, and blurred demarcations [6].

In a sense, we would do well to start all research and development by mapping the user's relevant models of attitude objects (including services, apps, issues, etc.) [4].

Rohrer mentions the study of mental models, namely in connection to card sorting tests [1], and the same card sorting method was applied by Jakob Nielsen in redesigning SunWeb’s intranet [7].

However, if we want to become acquainted with users' mental models relevant to an application undergoing development, we can start by conducting individual in-depth interviews, followed by group interviews. Analyzing the data, we can then classify and describe the views and models we found [8]. If data visualization is more convenient, we may use Indi Young's methodology [9]. For this paper, I describe a case study involving in-depth interviews, the contents of which were categorized and analyzed.
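To give a sense of how such interview content can be classified, the sketch below tallies how many distinct participants mention each coded theme, a common first step in frequency-based qualitative coding. The participant labels and theme codes are invented for illustration; this is a minimal sketch of the general technique, not the analysis actually used in the study.

```python
from collections import defaultdict

# Hypothetical (participant, theme) codes assigned to interview
# excerpts during analysis; all labels are invented for illustration.
coded_excerpts = [
    ("P1", "invoice upload"), ("P1", "accountant contact"),
    ("P2", "invoice upload"), ("P2", "cash-flow overview"),
    ("P3", "invoice upload"), ("P3", "accountant contact"),
]

# Count how many distinct participants mention each theme.
mentions = defaultdict(set)
for participant, theme in coded_excerpts:
    mentions[theme].add(participant)

# Report themes from most to least widely mentioned.
for theme, who in sorted(mentions.items(), key=lambda kv: -len(kv[1])):
    print(f"{theme}: {len(who)} participant(s)")
```

Themes mentioned by many participants point to widely shared elements of the mental model, while single mentions flag the individual variants discussed below.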

2.2 Case Study: Researching Mental Models During a Bank Development Project

As a case study in applying this method, I will present a study commissioned by a bank. The Ergománia UX agency developed a new web interface for a Hungarian bank in 2016. The interface introduced an entirely new complex service to Hungarian banking: a digital innovation supporting small business financial administration. As per the bank's instructions, everything was to work online in a transparent and simple manner, including the uploading of photographed invoices for accountancy. The agency organized in-depth interviews around the novel service concept, attempting to explore participants' relevant attitudes and mental models. The service itself was new to Hungarian banking, so all I had to work with were extrapolations of participants' previous experience, somewhat hazy ideas overall. My central focus was a question borrowed from partner therapy: "How would this solution improve your day?" My experience during these interviews was that participants were highly enthusiastic and vocal about possible online solutions which they were not yet familiar with (i.e. which did not exist), but which, once developed, would make their business affairs more easily manageable by simplifying accountancy correspondence and financial administration.

It was my experience that these participant interviews yielded multiple mental models, both parallel and subsequent. In the end, I had far more mental models than interviewees; in fact, there were individual versions of several of these models. The reason behind this phenomenon is that mental models are elusive, mutable, and generally difficult to grasp. According to the bank, the research yielded some highly remarkable concepts, and the user needs it revealed provided plenty of inspiration for subsequent web design and development. Mental models were encountered not through thematic exploration (as in card sorting) or through assessment of a webpage's existing or missing features, but through in-depth mapping of user needs and ideas.

2.3 Participant Observation and Its Potential Application in Menu System Design and Development

About Participant Observation

Participant observation is one of the oldest and most widespread qualitative research methods. Its origins trace back to cultural anthropology, its first proponents being Bronislaw Malinowski (1884–1942) and Clifford Geertz (1926–2006); it was later adapted by social researchers to the examination of immigrant communities. This is the only method by which we can learn how subjects actually behave, rather than what they report about their own behavior. Field research offers direct and complete observation of subjects in their natural surroundings. It is suitable for exploring qualitative, unquantifiable details and minutiae, and is therefore highly suited to mapping the fine details of attitude and behavior [10].

As far as online interfaces and menu structures are concerned, mental model exploration is conducted foremost (and, in the case of online content, longest) through the card sorting test method. This was the method first applied by Nielsen to the Sun intranet development project in 1994 [7]. However, even though we also used this method in the research presented here, we chose participant observation instead for menu development, and I will now present its innovative use. These methods were used in the successful restructuring of a webshop's menu system.

Case Study: Participant Observation and Card Sorting Test, as Support for Menu Tree Design and Development

Participant observation is a widely used method in market research as well as cultural anthropology. This fact, as well as researcher curiosity, encouraged me to apply this method in developing an interface for an online household appliance webshop called markabolt.hu. The research aimed to find solutions to blocks in the shopping process, and to map the process in as much detail as possible. The research was conducted by the Hungarian UX agency Ergománia in 2017. We were interested in shoppers' product selection and purchasing behavior in the offline shop environment, and in comparing this behavior with the planned online selection and shopping process. To this end, a UX colleague and I spent a few mornings working alongside household appliance store employees, listening in on customer inquiries. Sometimes we asked shoppers questions to pinpoint their product selection and shopping issues, including their priorities in finding the right oven or washing machine. What were their basic considerations in making a choice? Another of our priorities was specifying the exact parameters they used to select a product. Would shoppers arrive at the store looking for a specific appliance, perhaps even knowing its product code? Did they have a special function or feature in mind, such as a freestanding or built-in set? How relevant were size dimensions? Our experience in this department proved fundamental to formulating the menu tree design.

To balance what we had seen and heard, we also kept up a conversation with the store employees in between sessions. We encouraged them to tell us their experience of typical shoppers and distinct shopping processes. We also asked employees to relate their previous pleasant or annoying experiences. This proved highly entertaining, as well as instructive for us.

In between participant observations, and as a follow-up, we asked employees as well as shoppers to participate in individual card sorting tests. During card sorting, researchers ask participants to arrange items destined for webpage display into a system they find intelligible. Participants can name and rename their groups. Via this card arrangement method, we can map users' concepts of webpage content and its arrangement, how users would categorize the information featured on the webpage, and their terms for groups of content. In this case, our questions pertained to how they would find it logical and navigable to arrange store goods on an online interface. They used little cards to provide us with answers. We included this input in the development of the webpage menu tree.
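Card sorting results of this kind are commonly aggregated into an item co-occurrence count: for each pair of items, how many participants placed them in the same group. Pairs that co-occur most often are candidates for the same menu branch. The sketch below illustrates this with invented item names and sorts; it is a minimal illustration of the general technique, not the agency's actual analysis.

```python
from itertools import combinations
from collections import Counter

def cooccurrence(sorts):
    """Count how often each pair of items lands in the same group
    across all participants' card sorts."""
    pairs = Counter()
    for groups in sorts:                # one sort = {group name: items}
        for items in groups.values():
            for a, b in combinations(sorted(items), 2):
                pairs[(a, b)] += 1
    return pairs

# Hypothetical sorts from three participants (items and groups invented):
sorts = [
    {"cooking": ["oven", "hob"], "laundry": ["washer", "dryer"]},
    {"kitchen": ["oven", "hob", "washer"], "other": ["dryer"]},
    {"heat": ["oven", "hob"], "water": ["washer", "dryer"]},
]

pairs = cooccurrence(sorts)
print(pairs[("hob", "oven")])      # co-grouped by all three participants
print(pairs[("dryer", "washer")])  # co-grouped by two participants
```

The resulting matrix can also feed a hierarchical clustering step, but even the raw pair counts make disagreements between participants visible at a glance.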

Our participant observation experience definitely benefited our redesign of the webpage's search bar, product filter and menu tree, down to the actual wording of interface sections. We thereby applied experience from offline behavior observation to planning an online interface.

2.4 Applying Projective and Enabling Methods to Collecting Interface-Relevant Feedback

About Projective Methods

Focus groups and the projective methods used in group interviews provide invaluable assistance in gaining insight into what lies beneath the surface of rationalized replies. These techniques are typically used in market research, and enable a deeper exploration of subjects' attitudes toward situations, products and activities [11]. They are also helpful in accessing interviewees' inner perspectives, and give us a means of looking beneath the surface of rationalization in a way perfectly acceptable to the respondents.

Projective techniques comprise five important types of method: associative, complementary, constructional, self-expressive, and sorting-arrangement. Complementary procedures, including sentence completion, as well as the picture sorting technique, were applied in the research presented below. In the picture sorting exercise, it is crucial that the participants, rather than the researcher, provide the explanation for their choices.

Case Study: Projective Tests for Deeper Examinations of Interface-Relevant Attitudes

In spring 2007, I conducted an eye-tracking study and a follow-up online focus group study examining the webpage magyarorszag.hu, which was Hungary's central administrative portal at the time.

Both in terms of its composition and the issues explored, the online focus group activity was based on traditional, in-person focus group methodology. Research participants meet in a chatroom-like online virtual environment at a given time, and participate under individual nicknames. The procedure and aims of online focus group research are much like a traditional focus group's, the only difference being that the conversation takes place in a virtual, home-accessible chat room rather than a designated offline space. Online focus group participants are introduced to a variety of stimuli, and the projective methods familiar from traditional focus group studies may also be applied [11]. Such projective methods include the semantic differential, collage making, picture sorting, and completing unfinished sentences (Fig. 3).

Fig. 3.

magyarorszag.hu (May 2007)

These were the studies I conducted in 2007 on the webpage as it then was; the same study was repeated for the since-revamped magyarorszag.hu webpage. The National Infocommunications Service Company Ltd. informed me that the new webpage was developed using agile methodology and several rounds of progressive internal testing. User testing was not applied to the new webpage, nor was it to the 2007 version.

Due to the COVID-19 lockdown, there was no way to conduct an eye-tracking study in May 2020. Instead, participants were asked to solve the projective tasks from the online focus group study. There were 11 participants in total, and the questions were relayed via Google Forms. Two participants were high school graduates; the rest held college degrees. Ages ranged from 30 to 66. 62% were women (8/5) and 38% men (8/3); participants were residents of Budapest (9/4), other towns (9/3) and villages (9/2).

In both instances, online focus group projective testing took place in the same manner. Lead-up questions focused first on participants' in-person, then on their online, experiences of managing administrative affairs.

In this study, responses relevant to in-person administration were 50% positive and 50% negative, with terms such as connection, smile, complicated or time-consuming. Online administration was characterized as positive for the most part, with terms like "simple" or "no queue", although one participant stressed "simple, as far as the site is navigable". Participant memories of the magyarorszag.hu website, however, were grouped around terms like "total waste", "annoyance", "searching", "illogical", "complicated", "tangled". Afterward, I showed participants the webpage's starting screen for that day (see Fig. 4) and asked for their feedback. Most participants reported that the webpage was simple, clear, and transparent, and only a few thought it was complicated. In comparison, every single participant rated the webpage negatively during its 2007 testing, as "illogical", "gray", "dim", "bleak", "cluttered", "austere".

Fig. 4.

magyarorszag.hu (May 2020)

My next step was to ask for feedback via a semantic differential scale. The two end parameters were, with few exceptions, identical to those applied in the 2007 study. For the sake of brevity I will only highlight the most prominent and divergent responses, based on the response diagrams. Semantic differential responses in the 2007 webpage study overall favored "complicated" over "simple", "monotonous" over "colorful", and "boring" over "interesting". Prominent values in the 2020 study characterized the webpage as "slow" rather than "fast", more "in-depth" than "superficial", "reliable" rather than "unreliable", more "thorough" than "superficial", more "boring" than "interesting", and more "official" than "casual"; the like/dislike categorization came out in favor of the dislikes.
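Semantic differential responses are conventionally summarized by averaging each bipolar scale's ratings and reading off which pole the mean leans toward. The sketch below shows this with invented 1-to-7 ratings and a neutral midpoint of 4; the adjective pairs echo those in the study, but the numbers are hypothetical, not the study's data.

```python
from statistics import mean

# Hypothetical 1-7 ratings for each bipolar pair
# (1 = left pole, 7 = right pole); all numbers are invented.
ratings = {
    ("complicated", "simple"):   [2, 3, 2, 4, 3],
    ("slow", "fast"):            [3, 2, 3, 3, 2],
    ("unreliable", "reliable"):  [6, 5, 6, 7, 5],
}

for (left, right), scores in ratings.items():
    m = mean(scores)
    pole = right if m > 4 else left  # 4 is the neutral midpoint
    print(f"{left} - {right}: mean {m:.1f} (leans '{pole}')")
```

Plotting these means as a profile line across all scales produces the response diagrams referred to above, making the two studies directly comparable.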

One of the most interesting parts of the study was picture sorting, which focused on opinions and attitudes that are more difficult to express and less readily translated into stereotypes. The software applied in the 2007 study [12] enabled the selected pictures to be assembled into collages. For the present study, this software was no longer available, so we used a Google survey to pick images, and regrettably there was no option for collage making. Participants were asked to select pictures from sets including various animals, landscapes, automobiles of different price ranges, representatives of various ages and characteristics, differently styled headquarters (from trendy office blocks to classicist halls), office interiors, and queues (from a single person to snaking lines), picking the ones they thought best reflected the magyarorszag.hu main page. I also asked them to write down why they thought their choice appropriate.

The most frequently picked animals were the vizsla dog (10/4) and the snail (10/3); justifications included "the vizsla is swift, curious and Hungarian", and for the snail: "not pretty or quick, but ours all the same". For the "Administrative support at the magyarorszag.hu webpage is…" section, there were images of different queues and administration service situations. Most participants picked personal support sessions (10/3) and the employee sitting next to a pile of paperwork (10/4). Most participants' automobile of choice was the used Honda Jazz (10/4), citing reasons like "Because Hungarian administration is so bureaucratic, I picked an older but reliable car"; some even ventured as far as "Give me a round-headlight Zhiguli any day!" The runner-up was a modern, mustard-colored Audi (10/3), with justifications like "it is up to the challenges of today" and "reliable quality". The other car choices were disparate. The most popular landscape (10/4) was the green hill familiar as a default Windows background. Interestingly, only one respondent took the time to justify their choice, namely "this is as functional as a blank Windows screen". The winning building photo was a modern, all-glass block, with justifications like "it is a well-thought-out piece of engineering", "stability", "block".

Another interesting part of the study was completing the unfinished sentences. For brevity's sake, I will only give a few examples from three questions and a summary evaluation of the respective responses. Regarding page navigability, the 2007 study had the incomplete sentence "Navigating the magyarorszag.hu webpage is…", which most respondents completed to the effect that page navigation was difficult and confusing. They specified two main reasons for this: first, that too much information was crammed into one space, and second, that menu points and information were not found where one would expect to look. The visual experience was reported as "drab", "dreary", "too much gray", "needs improvement".

The sentence “Support personnel at magyarorszag.hu are…” elicited responses including: “unimaginative”, “bureaucrats”, “helpful and quick to react”, “as confused by the webpage as I am”, “not circumspect”, “on a coffee break”, “bored and unhurried”, “dressed in gray and beige, with a seasonal red ribbon for Christmas”.

In contrast, the interface tested in 2020 was rated more positively by participants. Reporting on their navigation experience, most participants said "easy" and "simple", and only a small minority (11/3) said it was "very complicated" or "impossible". The website visuals were also received less negatively than in 2007, although one-third of respondents (9/3) were again quick to point out its drabness, with replies including terms like "official", "boring", "a bit tedious", and "navigable, but complicated". The support staff were rated mostly positively or neutrally: "proper", "likeable", "competent". Yet here again we found responses probably rooted in stereotypes about public services, like "lost in the maze of bureaucracy" or "waiting to get off work at 5".

Evidently, these projective tests, while taking longer to evaluate due to their complexity, yield subtler insights into user opinions, revealing stereotypes and attitudes as well as supporting the articulation of less straightforward views.

3 Summary

This paper attempted to present examples of how well-known methods, used in less familiar settings, can enable us to gain insights into user feedback regarding websites and online interfaces. These methods may prove highly useful in the field of UX research, giving access to user opinions, attitudes and stereotypes that lie beyond the issues of mere interface utility. These methods also help us learn how users think about issues, and user observation allows us to experience this as a reality [14]. Because the subject of observation is actually performing what we want to examine, we are in a position to ask direct questions about the reasons and motives behind their actions and choices. Projective testing is a familiar staple of market research, and can give very detailed and insightful feedback on users' unreflected opinions.