Introduction

Ill-structured problem-solving is increasingly seen as an important skill by modern learning theorists (Eichmann et al. 2019; Glazewski and Hmelo-Silver 2018; van Merriënboer 2013). Indeed, employers seek a wide array of problem-solving skills including creativity, communication, and leadership (Vogler et al. 2018). In response to the complexity inherent within domain practice, educators often advocate for active learning strategies and case-based approaches within classroom contexts (Herrington and Reeves 2017; Kim et al. 2017; Wosinski et al. 2018), also known as inquiry-based instruction (IBI) (Loyens and Rikers 2011). In contrast to information dissemination approaches, these strategies allow learners to resolve cases that are similar to the types of challenges encountered by domain practitioners (Dabbagh and Dass 2013; Valentine and Kopcha 2016). Specifically, many of these classroom strategies focus on ill-structured cases that vary in terms of structuredness, context, complexity, dynamicity, and domain-specificity (Dabbagh and Dass 2013; Jonassen 1997). It is argued these classroom strategies allow broader and deeper interaction with the important features, concepts, and goals embedded within the ill-structured problem (Hmelo-Silver 2013; Teasley and Roschelle 1993). Due to these inherent complexities, students do not merely converge on pre-established “correct” solutions within the problem space; rather, they focus efforts on meaning-making as they justify their proposed solutions given the available evidence (Ju and Choi 2017) found within various information sources (Glazewski and Hmelo-Silver 2018). As learners solve the given case, they also engage in systematic tests of their knowledge structures (Ifenthaler et al. 2011). It is thus argued that problem-solving for ill-structured cases supports deeper learning that incorporates multiple perspectives and expertise (Eichmann et al. 2019; Hmelo-Silver et al. 2007a, b).

From an implementation perspective, problem-solving instructional strategies necessitate that instructors adapt their teaching approach to one that facilitates student-centered learning (Tamim and Grant 2013; Wijnen et al. 2017). To date, there have been several theoretical models that articulate components of problem-solving and describe ways to effectively transfer its principles to learning environments. For example, Jonassen (1997), van Merriënboer and Kirschner (2012), and Ge and Land (2004) each detail how learners progress through initial understanding and solution generation stages of problem-solving. These models not only outline specific learning tasks, but also detail inherent cognitive processes embedded within each step. Other models elucidate specific subsets of complex reasoning in terms of failure processes (Tawfik et al. 2015) and stages of reflection (Hong and Choi 2011). Collectively, the approaches provide insight into the reasoning processes learners engage in as they resolve ill-structured problems, while also specifying design strategies to employ from a pedagogical perspective.

Despite their differences, each of the aforementioned theories and models of ill-structured problem-solving consistently emphasizes how individuals iterate their understanding as they resolve complex problems. That is, learners’ knowledge construction is rarely complete in a single pass; instead, they engage in multiple comprehension cycles as they elaborate their understanding of the problem space. Iterations in problem-solving result from a variety of factors, including failure (Kapur 2012; Rong and Choi 2018) or further investigation of a phenomenon encountered during inquiry (Huang et al. 2017). When learners are engaged in problem-solving, their ensuing inquiry processes and iterations are often catalyzed by a question that seeks to resolve an emergent knowledge gap or a failure to achieve a subgoal. It follows that question-asking is a critical component of the knowledge construction process. Although question-asking is implicitly acknowledged within established problem-solving theories and models, instructional design theorists have yet to explicitly describe how the type and depth of the generated question play a role in the learner’s inquiry process. Instead, many extant design principles and strategies focus on how to strategically divide content to support learning (e.g., segmentation principle; flipped classroom approach) rather than on how questioning strategies can be applied to learning design. To that end, Wang et al. (2013) asserted that “existing studies in the field have tackled problem-solving and knowledge construction separately, failing to see them as an integrated two-way process” (p. 294). Given the alignment of problem-solving and questioning, further discussion is needed about (a) mechanisms of generating strong questions and (b) how questions posed during problem-solving elucidate a trajectory of learning. What is needed, but not yet adequately articulated and validated, is a widely accepted taxonomy of question-asking as a lens into a learner’s reasoning process during inquiry learning in case-based, ill-structured problem-solving.

Based on existing learning theory and studies, we propose a theoretical taxonomy of question-asking that elucidates elements of the knowledge construction process. In doing so, this taxonomy describes various categories (Krathwohl 2002) of question types used by learners on their trajectory of understanding. The article begins by surveying prominent theories and models as they relate to ill-structured problem-solving. Some of the models presented provide a broad overview of established problem-solving processes, while others detail the subprocesses (failure, reflection) that emerge when solving ill-structured cases. In line with the expert–novice literature, we then present a proposed taxonomy of question-asking that consists of the following major categories: shallow/simple (verification, disjunctive, concept completion), testing (example, feature specification, quantification, definition, comparison), and deep/complex questions (interpretation, causal antecedent, causal consequence, goal orientation, instrumental/procedural, enablement, expectation, and judgmental). We then evaluate previously cited learning technologies through the purview of the question taxonomy. Finally, we conclude with a discussion of future implications, especially as they relate to scaffolding theories and advances in artificial intelligence in education.

Literature review

Learners engage in various reasoning skills such as information gathering (Glazewski and Hmelo-Silver 2018), causal reasoning (Eseryel et al. 2013; Jeong and Lee 2012), and decision-making (Stefaniak and Tracey 2014; Wilder 2015) during IBI. Although theories and models highlight different aspects of the ill-structured problem-solving process, they consistently underscore the fact that it is an iterative and recursive process. These iterations are often due to failures encountered (Kapur 2014), expansion of the understood problem space (Hmelo-Silver 2013), or more focused inquiry on particular concepts. Additional studies show that this is particularly true for novices given their lack of expertise with the phenomenon and unfamiliarity with the problem space (Hmelo-Silver et al. 2007a, b; Jacobson 2001; Wijnia et al. 2016).

These repeated cycles highlight two critical elements of knowledge construction during problem-solving. First, they signify that learners have identified knowledge gaps they need to resolve. Alternatively, learners iterate to fortify their initial understanding of a concept and refine it. In both cases, these iterations are often driven by the pursuit of a question generated by the individual or proffered by the instructor. For example, a case study by Hmelo-Silver and Barrows (2006) found that an expert facilitator asked over 300 questions in a problem-based learning medical case to incite further inquiry. To achieve optimal challenge during problem-solving, Kim et al. (2019) further argued that “teacher scaffolding consists of one to one support for student learning, often in the form of probing questions” (p. 8). Additional research has focused on the conditions for optimal inquiry and how open-ended questions posed by teachers play a role in inducing complex reasoning in K-12 classroom contexts (Webb 2009; Wells and Arauz 2006). Collectively, these studies show a shift towards more open-ended questions as a catalyst for complex reasoning as teachers gain expertise.

In line with studies of teachers’ facilitation strategies, we argue that various ill-structured problem-solving theories/models make reference to the significance of question-asking. In the section that follows, we provide an overview of theories and models that explore ill-structured problem-solving holistically (Jonassen 1997; van Merriënboer 2013), self-regulated problem-solving (Ge et al. 2016), scaffolding (Ge and Land 2004), failure (Tawfik et al. 2015), and reflection (Hong and Choi 2011). We further describe how the importance of question-generation is implicitly highlighted within each theory/model (see Table 1).

Table 1 Instructional design problem-solving theories and models

Jonassen (1997) model of ill-structured problem-solving

Within the field of instructional design, Jonassen (1997) was one of the first to advocate designing learning systems that espouse and support ill-structured problem-solving. In contrast to “drill-and-practice” approaches to education, Jonassen’s (1997) ill-structured problem-solving model contended that learning is contextual. He further argued that deep learning requires learners to develop viable solutions that account for the constraints, perspectives, and manipulable parameters embedded within the ill-structured case (Jonassen 2011a; Jonassen and Hung 2008). In the first stages of inquiry and problem-solving, his model calls for learners to describe the problem context (Step 1) and related constraints (Step 2). He later calls for learners to select and develop cases (Step 3), which sets the stage for knowledge construction (Step 4) and later argumentation (Step 5). Once learners have generated a solution, they are able to monitor and adapt their solution based on observed outcomes (Step 6).

Given how the model elucidates the inquiry process, Jonassen (1997) alluded to the role of questioning at various points. In the early stages, he suggested that “learners must answer questions, such as: How much do I know about this problem and its domain?” (p. 79). In later stages, he suggested that more complex questions arise during the generation of the solution, often focused on the following: What are the alternative perspectives? What is the quality of the evidence? and other complex questions. If the solution does not play out as initially intended, causal questions emerge that drive subsequent iterations. Although this model was one of the first to reference the importance of questioning in ill-structured problem-solving, it leaves open to interpretation how the type of question reflects depth of understanding.

Ge and Land (2004) conceptual scaffolding framework

In the years that followed Jonassen’s work, many researchers and theorists began to explore how to better engender students’ ill-structured problem-solving. Within the learning design community, a prominent model that emerged to support problem-solving theory was Ge and Land’s (2004) conceptual scaffolding framework. They built on the original Jonassen (1997) model by asserting that novices often maintain misconceptions and a shallow conceptualization of the problem space. Based on how experts solve problems, Ge and Land (2004) suggested that learners require scaffolds in the following phases: (a) problem representation, (b) generating solutions, (c) making justifications, and (d) monitoring and evaluation. In the first phase, the focus is on scaffolding an individual’s knowledge structure and initial domain-specific knowledge. Learners must first assess their understanding of the new problem space given their prior knowledge. If they lack adequate domain knowledge to solve the problem, their initial inquiry aims to resolve these gaps by relating the new information to what they already know. In terms of implications for learning design, this phase especially requires scaffolds that support elaboration and elicitation of preliminary understanding. The second phase (solution generation) moves towards action and decision-making. The scaffolding strategy entails mapping existing schema-driven solutions onto the new context given the learners’ problem representation. The third phase emphasizes an individual’s ability to justify their resolution using available evidence and data. An especially important scaffolding strategy in this stage includes supporting causal reasoning. The last phase of the Ge and Land (2004) model consists of conclusions about the appropriateness of the proposed solution. Given that research shows learners often fail to critically evaluate their solutions (Ertmer and Koehler 2018; Jeong and Hmelo-Silver 2016), the scaffolding strategy in this phase focuses on evaluating satisfaction with the resolution in light of alternative perspectives.

As noted earlier, the Ge and Land model builds on the Jonassen (1997) model by outlining iterative and ill-structured problem-solving through the lens of (a) the expert–novice literature and (b) the zone of proximal development (ZPD). Over time, Ge et al. (2016) expanded their model to focus on self-regulated constructs of planning, execution, and importance during the problem representation and solution generation phases. Collectively, the models seek efficient replication of the ill-structured problem-solving strategies employed by experts as they direct their learning. As in the case of Jonassen (1997), these models make implicit reference to the importance of question-asking. For example, Ge and Land (2004) underscored the following during the later stages of problem-solving: “through cycles of questioning, explaining, elaborating, and feedback, students modify their thinking, plan remedial actions, and monitor and evaluate their solution steps” (p. 13). In the subsequent model, they note that “learners need to conceptualize the problem by questioning and generating hypotheses” (Ge et al. 2016, p. 6). Again, this necessitates a shift away from convergent thinking towards iterative thinking and inquiry within the broader problem space. Of the surveyed approaches, this model arguably best emphasizes the importance of questioning given its focus on scaffolding self-regulated problem-solving through prompts. That said, the model emphasizes question-asking as replication of expert reasoning processes rather than considering how student-generated questions may serve as evidence of advanced understanding.

4C/ID model (van Merriënboer et al. 2002; van Merriënboer and Kirschner 2012)

As higher order learning became increasingly emphasized within the instructional design community, other models emerged to facilitate problem-solving. The four-component instructional design model (4C/ID) is similar to other problem-solving models, but explicitly focuses on the development of strategies that support schema construction and the transfer of gained competencies (van Merriënboer et al. 2002; van Merriënboer and Kirschner 2012). These components include the following: learning tasks, supportive information, just-in-time (JIT) information, and part-task practice. As learners progress through the problem space, the model suggests they automate recurrent tasks and abstract principles from their concrete experiences. In this view, supportive information links prior knowledge with the problem at hand. That is, learners develop strategies that elaborate on their extant schema by comparing what is known with the new information presented in the case. While supportive information specifically focuses on nonrecurrent skills (context-specific information), just-in-time support scaffolds recurrent skills that learners are expected to transfer to future situations. In contrast with the prior models described, an important aspect is the fading of scaffolding as learners gain competencies and begin to fortify their new schema. The last component (part-task practice) builds expertise in recurring tasks to facilitate future transfer.

Given the focus on refinement of understanding, the 4C/ID model argues that designers should provide options to incite inquiry and exposition. The authors argue that inquiry approaches “are very appropriate for interconnecting new information and already existing cognitive schemata. It is a form of guided discovery, because the leading questions (e.g., which parts can be distinguished in this machine?) help learners to identify relevant nonarbitrary relationships” (van Merriënboer et al. 2002, p. 49). As it relates to problem-solving, the model further asserts that learners have to “answer questions that provoke deep processing and the induction of mental models from the given example” (van Merriënboer and Kirschner 2012, p. 46). Once again, questions are highlighted as important, but what constitutes a strong question is yet to be defined.

Unified design approach for failure-based learning (Tawfik et al. 2015)

While many early models focused on providing clarity about the different phases of problem-solving (Ge and Land 2004; Jonassen 1997; van Merriënboer and Kirschner 2012), other models emerged that focused on specific subprocesses encountered during inquiry. In contrast to the more holistic approaches, Tawfik et al. (2015) contended that ill-structured problems are inherently complex and include failure experiences, which often catalyze problem-solving iterations. They therefore proffered a unified design framework based on child development theory (Piaget 1952), impasse-driven learning (Blumberg et al. 2008; VanLehn 1988), case-based reasoning (Kolodner et al. 2004; Schank 1999), productive failure (Kapur 2011), and negative knowledge (Gartmeier et al. 2008, 2010). As it relates to failure-based learning, they suggested that encountering errors activates a series of additional cognitive processes when compared with a successful experience. Specifically, the failure causes an individual to identify reasons for the error, evidence for the phenomenon, and the root cause. Once s/he engages in meaning-making about the new experience, a new solution is implemented and the viability of the outcomes is assessed. Tawfik et al. (2015) argued that an expanded mental model is then generated based on the failure experience, which affords additional opportunities for transfer.

As noted earlier, the Tawfik et al. (2015) model highlighted the role of failure in iterative problem-solving. They argued that failure produces additional cycles of the inquiry described by Jonassen (1997), Ge and Land (2004), and the 4C/ID model (van Merriënboer 2013). As in the case of the other models, Tawfik et al. (2015) suggested that failure and question-asking are inextricably linked because “questions help prompt consideration of failure scenarios and recognize potential misconceptions. This, in turn, can illuminate causal paths members of the group may not have otherwise recognized” (p. 988). They further described how questions generated from failure are often framed to support causal reasoning, decision-making, and consideration of alternative perspectives. However, the focus of this model is on the co-occurrence of errors and questions rather than on defining how the types of questions support a deeper understanding of the failure experience.

Three dimensional model of reflective thinking (Hong and Choi 2011)

While models have focused on different definitions of problem-solving stages, reflection is generally seen as an integral part of experiential meaning-making. This can be done by reflection-in-action or reflection-on-action (Schon 1984). The former is focused on in situ meaning-making, while the latter includes a retrospective assessment of the situation. In both cases, reflection serves to “correct distortions in their understanding and errors in their problem solving, but also to critically examine the presuppositions upon which their beliefs have been built” (Hong and Choi 2011, p. 696). To that end, Hong and Choi (2011) offered a three-dimensional approach to reflection during problem-solving that includes the following: timing, objects, and level. The timing refers to when reflection is introduced (problem-driven, solution-driven), while the objects focus on consideration of specific artifacts. These objects may be the internal self and consist of knowledge, experiences, feelings, attitudes, and ingrained beliefs. Alternatively, the objects may be driven by the following external sources: functions, stakeholders, and contexts. In many cases, the tension between the artifacts and self-reflection is mediated by the contextual circumstances (budget, timeline, resources, politics).

In this model/theory, question-asking is a significant aspect, especially as it relates to gaps in knowledge identified during reflection. However, different questions emerge based on the depth of reflection within each loop. The three-dimensional reflection model consists of loops (single loop, double loop, triple loop) that each contain questioning components. For example, the single loop is focused on informational questions and understanding why the failure occurred. The double loop necessitates one “to question their assumptions in relation to their understanding of the problem” (Hong and Choi 2011, p. 699). Finally, the triple loop causes one to question broader constraints, such as the efficiency of a solution or ethical considerations present within the context.

Question taxonomy

The above models/theories demonstrate that question-asking is a critical aspect of solving the complex cases posed in IBI. For example, questions are described as identifying basic elements of the problem space (Jonassen 1997), gaps in understanding (Hong and Choi 2011), functions of components (van Merriënboer and Kirschner 2012), and causal paths (Tawfik et al. 2015). Others describe the unique role of questions in supporting peer interaction (Choi et al. 2005) and procedural understanding (Dillon 1984; Lin et al. 1999). Although the theories/models highlight the importance of questions, the learning design community lacks systematic definitions of how the type of question reflects depth of understanding. A more comprehensive, interdisciplinary approach to question-asking can (a) serve as a marker of deep learning, (b) identify a trajectory of knowledge construction and solution application, and (c) assess and scaffold learners as they engage in ill-structured problem-solving.

Given the diverse problem-solving literature, an adequate model of question-asking in the design of educational technologies needs to consider contributions from various arenas, including the following: education, psychology, discourse processing, communication, linguistics, natural language processing areas of artificial intelligence, and other areas of the learning sciences. These disciplines have analyzed questions from different perspectives; therefore, there is mixed guidance on how designers of educational technologies should proceed. To date, there are some foundations of guidance on good questions to ask in classrooms; early proposals were Questioning the Author to facilitate reading comprehension (Beck et al. 1997) and Dillon’s (2004) Questions and Teaching: A Manual of Practice, which covers courses across the curriculum. At the computational end of the continuum, researchers in artificial intelligence (AI) and computational linguistics employed candidate questions based on representations and processes that can be dissected computationally, such as QUALM (Lehnert 1978), SWALE (Schank 1999), QUEST (Graesser et al. 1992), QUAID (Graesser et al. 2006), PREG (Otero and Graesser 2001), and others (Lauer et al. 1992). In other domains, learning designers conducted task analyses that identified the questions relevant to the specific procedures involved in completing a complex problem, such as the ASK systems (Jonassen 2011a). Designers of educational technologies are thus beset with a bewildering number of perspectives on how they should design question-asking and answering facilities.

Question category

Given the available literature, an interdisciplinary taxonomy is needed to analyze questions in a way that integrates subject matter knowledge/skills, pedagogy, computation, and discourse processes. To that end, questions can be classified according to the nature of the information being sought. Inquiry is thus influenced by question types, which are a function of (a) how learners organize their knowledge (knowledge structures) (Clariana 2010; Ifenthaler et al. 2011; Kim 2017) and (b) the cognitive processes needed to solve the problem (Jonassen 2011b). In line with the Chi et al. (1981) categories of expertise, studies suggest one can distinguish between shallow/simple questions (categories 1–3; see Table 2), testing questions (categories 4–8; see Table 3), and deep/complex questions (categories 9–16; see Table 4). As one moves from shallow/simple to deep/complex questions, there are more demands placed on reasoning, and the answer content goes beyond the words in the question and the immediate context. For the first category, empirical studies suggest shallow/simple questions are often posed by novices throughout their problem-solving. Research finds these questions frequently focus on defining basic parameters of the problem space (Jacobson 2001; Wolff et al. 2016), as novices tend to view variables in isolation (McIntyre and Foulsham 2018; McIntyre et al. 2017; Tawfik et al. 2019; Wolff et al. 2016). Consistent with prior studies, these questions aim to verify, compare, and complete concepts to understand the basic parameters of the problem space.

Table 2 Question taxonomy of shallow/simple
Table 3 Question taxonomy of testing questions
Table 4 Question taxonomy of deep/complex questions

While the first group of questions (shallow/simple) is designed to establish the definitions of the problem space, testing questions are focused on meaning-making and alignment with prior knowledge. These questions are marked by testing the problem space, qualifying parameters of the surface components (Chi et al. 1981), and generating initial interdependencies between the variables (Gartmeier et al. 2019; Hmelo-Silver et al. 2007a, b). As such, the literature suggests these questions aim to find similar examples, quantify the variables, and make comparisons across concepts.

Deep/complex questions align with expert-like reasoning and focus on interactions across the variables within the problem space. Specifically, these questions combine different elements of the problem space and support advanced reasoning in terms of causality, inferences, decision-making, and other qualities. For example, a study comparing expert and novice engineers found that experts “frequently asked themselves how much they could expect to achieve if they continued a particular approach” (Ahmed et al. 2003). Other studies show that questions posed by experts often make connections among the variables (Chi et al. 1981; Ertmer et al. 2008), support systems-level thinking (Hmelo-Silver, Marathe, et al. 2007a, b), consider causal effects as they manipulate problem space parameters (Chi and VanLehn 2012), generate judgments of the evidence presented (Iordanou et al. 2019; Ju and Choi 2017), notice structures across analogous contexts (Dumas 2017; Dumas et al. 2014; Malkiewich and Chase 2019), and identify opportunities for transfer (Chi and VanLehn 2012; Nussbaum and Asterhan 2016).
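To make the three tiers and their sixteen categories easier to reference, the sketch below encodes the taxonomy as a simple lookup structure. This is a minimal illustration of the classification described in this article; the code organization (a dictionary of tiers, a depth ordering, and a hypothetical `tier_of` helper) is our own convenience, not part of the taxonomy itself.

```python
# A minimal sketch of the three-tier question taxonomy as a lookup structure.
# Category names follow the taxonomy described in this article; the helper
# functions are illustrative conveniences, not prescribed components.

QUESTION_TAXONOMY = {
    "shallow/simple": [            # categories 1-3
        "verification", "disjunctive", "concept completion",
    ],
    "testing": [                   # categories 4-8
        "example", "feature specification", "quantification",
        "definition", "comparison",
    ],
    "deep/complex": [              # categories 9-16
        "interpretation", "causal antecedent", "causal consequence",
        "goal orientation", "instrumental/procedural", "enablement",
        "expectation", "judgmental",
    ],
}

# Tiers ordered by increasing reasoning demands.
TIER_DEPTH = {"shallow/simple": 1, "testing": 2, "deep/complex": 3}


def tier_of(category: str) -> str:
    """Return the taxonomy tier for a given question category."""
    for tier, categories in QUESTION_TAXONOMY.items():
        if category in categories:
            return tier
    raise ValueError(f"Unknown question category: {category}")


if __name__ == "__main__":
    print(tier_of("causal antecedent"))  # -> deep/complex
```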

Application of question taxonomy in problem-solving

The three-tiered question taxonomy (shallow/simple questions, testing questions, deep/complex questions) based on expert–novice studies can be used as a guide for educators, including designers of learning technologies and instructors. The first step is to consider the knowledge and skills of proficient performance in the tasks and subject matter of consideration, whether it be science, computer programming, or ethics. The second step is to consider the pedagogical approaches and strategies to help learners achieve the knowledge and skills, while remembering that pedagogical approaches are very different for novices and those with emerging expertise (Hmelo-Silver et al. 2007a, b; Jacobson 2001; Wolff et al. 2016). Finally, the third step is to prepare specific questions from judiciously selected question types.

Consider the task of problem-solving in the medical context. Problem-solving generally follows four stages according to many theoretical frameworks (Funke 2010; Ge and Land 2004):

  1. Exploring and understanding. Interpreting the initial information about the problem and any information uncovered during the course of initial exploration.

  2. Representing and formulating. Identifying global approaches to solving the problem, relevant strategies, procedures, and relevant artifacts (e.g., graphs, tables, formulae, symbolic representations) to assist in solving the problem.

  3. Planning and executing. Constructing and enacting goal structures, plans, steps, and actions to solve the problem. The actions can be physical, social, or verbal.

  4. Monitoring and reflecting. Tracking the steps in the plan to reach the goal states, marking progress, and reflecting on the quality of the progress or solutions.

In terms of a medical example, a patient arrives for an annual check-up with an optometrist and complains that he periodically sees psychedelic images for about 30 min before his vision returns to normal. Thus, the goal for the doctor and for students in a medical school is to diagnose the problem and find a way to decrease the images for this particular case. As such, the selection of the question categories in Tables 2, 3, 4 would be sensitive to the stage of problem-solving and one’s expertise. For example, a novice learning about medicine entering the exploration and understanding phase of problem-solving (Phase 1) would utilize concept completion, feature specification, definition, and interpretation questions, whereas a more experienced physician would likely bypass many shallow/simple types to ask more advanced questions. For instance, the exploration and understanding phase (Phase 1) would mostly include concept completion, feature specification, definition, and interpretation questions such as the following:

  • What is the technical term for the psychedelic image? (ANSWER: Ocular migraine.)

  • What are the symptoms of ocular migraines?

  • What is an aura?

  • What would be the diagnosis of the patient’s problems?

The representation and formulation phase (Phase 2) requires the individual to dig deeper into the ocular issue with questions in the disjunctive, quantification, comparison, example, and judgmental categories:

  • Does the aura occur in one eye or two eyes?

  • How many minutes does the aura persist?

  • What is the difference between an ocular migraine and a regular migraine?

  • What are some example experiences when ocular migraines have occurred?

  • To what extent is there pain associated with ocular migraines?

As the solution emerges, the planning and executing phase (Phase 3) would largely have questions from the goal orientation, instrumental/procedural, and enablement categories, as exemplified below:

  • Why does the doctor check my retina when diagnosing me for ocular migraines?

  • How can I minimize the occurrence of ocular migraines?

  • What devices can be used to diagnose ocular migraines?

The monitoring and reflection phase (Phase 4) likely has multiple questions in the verification, causal antecedent, causal consequence, and expectation categories:

  • Is an ocular migraine a symptom of a stroke?

  • What causes ocular migraines?

  • What will happen if I get frequent ocular migraines?

  • Why isn’t there pain associated with ocular migraines?

As shown above, the expertise of the learner has a significant impact on the questions that they ask and can answer (Graesser and Olde 2003; Hmelo-Silver et al. 2007a, b; Jacobson 2001). It is important to acknowledge that the answers to these questions can have many additional layers of complexity and ill-defined content; consider, for example, the question “How can I minimize the occurrence of ocular migraines?” Thus, the expertise of the learner, the complexity of the subject matter, one’s knowledge structure, and the difficulty of the cognitive processes also impact the questions that could be asked.
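The phase-by-phase selection illustrated in the ocular-migraine example can be summarized as a simple mapping, sketched below. The `PHASE_CATEGORIES` structure restates the categories named above for each phase, while the `select_categories` helper is a hypothetical convenience that mirrors the observation that more experienced solvers tend to bypass shallow/simple question types; neither is a prescribed algorithm.

```python
# Illustrative sketch: mapping the four problem-solving phases to the question
# categories used in the ocular-migraine example above. The expertise handling
# is an assumption drawn from the expert-novice discussion, not a fixed rule.

SHALLOW_SIMPLE = {"verification", "disjunctive", "concept completion"}

PHASE_CATEGORIES = {
    "exploring and understanding": [
        "concept completion", "feature specification", "definition", "interpretation",
    ],
    "representing and formulating": [
        "disjunctive", "quantification", "comparison", "example", "judgmental",
    ],
    "planning and executing": [
        "goal orientation", "instrumental/procedural", "enablement",
    ],
    "monitoring and reflecting": [
        "verification", "causal antecedent", "causal consequence", "expectation",
    ],
}


def select_categories(phase: str, expertise: str = "novice") -> list[str]:
    """Suggest question categories for a phase, filtered by learner expertise."""
    categories = PHASE_CATEGORIES[phase]
    if expertise == "expert":
        # More experienced solvers tend to skip shallow/simple question types.
        categories = [c for c in categories if c not in SHALLOW_SIMPLE]
    return categories


if __name__ == "__main__":
    print(select_categories("monitoring and reflecting", expertise="expert"))
    # -> ['causal antecedent', 'causal consequence', 'expectation']
```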

Evaluation of computer-supported learning technologies through purview of question taxonomy

Given that ill-structured problem-solving is often an iterative process, question-asking is an important element of inquiry; therefore, the questions encountered during problem-solving should align with advances in students’ understanding. It follows that the question taxonomy can be used to guide the design of, and also to evaluate, computer-supported learning environments. The following section explores the application of the aforementioned question taxonomy (shallow/simple, testing, and deep/complex) within the design of learning technologies.

Shallow/simple question learning environments

CSAL

The Center for the Study of Adult Literacy (CSAL) developed an artificial intelligence tutoring system to aid struggling adult learners in developing literacy skills (Graesser et al. 2019). When the user logs in, the computer screen presents 30 different reading lessons that cover a variety of foundational literacy topics, such as learning new words and contextual clues. The learning system is adaptive and uses question types to drive how the system responds to the learner and to develop comprehension strategies. For example, the system initially asks shallow/simple questions such as: “What is the topic of the text?” and “Does the Raines line only stop at the shopping district?” Subsequently, the system adapts and asks increasingly difficult questions for more advanced reading texts (Fig. 1).

Fig. 1 CSAL Interface using shallow questions

Through the purview of the question taxonomy, CSAL AutoTutor first employs shallow/simple questions to establish initial comprehension and later provides more complex questions based on the learner’s input. Again, initial questions strategically ask “What is the topic?” and “What is the problem in this text?”. In terms of the aforementioned taxonomy, these concept completion questions are designed to support the initial framing of the basic concepts. Based on the learner’s response to the question, the system advances to testing questions or deep/complex questions (causal antecedent: “Why is it important that the victim is no longer near smoldering material?”) that focus on a contextual analysis of the reading material. Once the learner has responded to the question, the system also provides scaffolds through different media sources, such as advanced organizer videos and hints provided by the intelligent tutoring agents. In doing so, questions drive the interaction in a way that aids students in finding solutions to presented problems.
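The adaptive pattern described above can be sketched in a few lines. To be clear, this is a hypothetical illustration of the general design pattern (shallow/simple first, deeper tiers after correct responses, scaffolds after incorrect ones), not CSAL AutoTutor’s actual implementation; the `next_step` function and its return values are our own assumptions.

```python
# Hypothetical sketch of the adaptive questioning pattern described above,
# not the actual CSAL AutoTutor logic: begin with a shallow/simple question,
# move toward deeper tiers after correct responses, and fall back to scaffolds
# (e.g., hints or an organizer video) after an incorrect response.

TIERS = ["shallow/simple", "testing", "deep/complex"]


def next_step(current_tier: str, answered_correctly: bool) -> dict:
    """Return the next tutoring move given the learner's last response."""
    index = TIERS.index(current_tier)
    if answered_correctly:
        # Move one tier deeper, capped at deep/complex.
        new_index = min(index + 1, len(TIERS) - 1)
        return {"action": "ask_question", "tier": TIERS[new_index]}
    # Stay at the current tier and provide a scaffold before re-asking.
    return {"action": "scaffold", "tier": TIERS[index], "support": "hint_or_video"}


if __name__ == "__main__":
    print(next_step("shallow/simple", answered_correctly=True))
    # -> {'action': 'ask_question', 'tier': 'testing'}
    print(next_step("testing", answered_correctly=False))
    # -> {'action': 'scaffold', 'tier': 'testing', 'support': 'hint_or_video'}
```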

Testing question learning environments

ElectronixTutor

ElectronixTutor is an intelligent tutoring system that contains learning aids for Navy trainees during their apprentice technician training. Topics include circuit basics and physics concepts, such as Ohm’s and Kirchhoff’s laws, as illustrated in Fig. 2. ElectronixTutor uses question responses to intelligently recommend learning resources. Given its coverage of electronic circuits, the system specifically includes quantification questions, such as “How many closed paths can you find in the circuit below?” and “How many bulbs will be lit in the circuit below if switch Z is open?”. Over time, there are also deep/complex questions regarding underlying causality within electronic principles, such as “What happens to the current in I if the I2 current decreases?”.

Fig. 2 ElectronixTutor

As noted earlier, the interaction in ElectronixTutor is designed around multiple interactions with questions. In some of the visual representations, a learner clicks on a hot spot to access a menu of questions (Graesser et al. 2005). In addition to being presented with diagnostic questions, students can also generate their own questions and submit them to the automated tutor. As learners gain expertise, the system increases the frequency and diversity of questions that students are exposed to and asked.

Nick’s dilemma

“Nick’s Dilemma” is an inquiry-based instructional environment designed to teach business students about sales management principles. In this case, Nick and his boss (Sheila) must hire a new individual to address the rapid turnover within a medical device company. They must also solve this problem in light of increased competition by local area medical device providers. Using a game-based approach, students consider various alternatives, such as hiring an internal candidate, an external candidate, or restarting the search process. As they evaluate each alternative, they must balance the potential increased training cost and long-term stability of the position given the emergent market realities (Fig. 3).

Fig. 3 Game-based learning environment using questions to drive interaction

This learning environment uses various strategies to support student learning including advanced organizers, related cases, and badges. Rather than merely presenting information, the system proactively proffers questions for the learner to consider as they navigate the problem-space. This design, therefore, models strong testing questions and serves as a decision-point that advances the learner within the game. For example, if the learner wants cases related to the prior hires, s/he can click on example questions such as “What can you tell me about previous hires?”, which then presents supportive cases. Other comparison questions are embedded in the learning environment so that learners can understand how related cases are similar to Nick’s Dilemma. Using the question taxonomy, learners read similar narratives and transfer the lessons learned to solve the existing problems.

Deep/complex question learning environments

A World without Oil

“A World without Oil” is an early learning environment designed to teach students about the oil crisis and climate change (Rusnak et al. 2008). Each week the environment presents a hypothetical news story outlining the current effects of the oil crisis on different governments and global economies. To further contextualize the topic, variables such as fuel price increases, shortages, and economic decline are embedded within each news story (Fig. 4).

Fig. 4 World without Oil questions

Through the purview of the proposed taxonomy, “A World without Oil” has users answer various types of deep/complex questions in the categories of interpretation, causal antecedent, causal consequence, and judgmental. At the end of each week’s update on the crisis, the site strategically employs questions such as “What will $4/GAL gas do to your finances?” (causal consequence) and “If the world’s demand for oil is greater than the supply, how will the world make up the difference?” (expectation). In accordance with problem-solving theories and models, learners reference the question to guide their inquiry. Users engage in collaborative problem-solving in an attempt to provide solutions to these difficult issues. That said, “A World without Oil” appears to focus primarily on deep/complex questions and somewhat less on shallow/simple or testing questions.

ASK systems

One of the most common types of learning environments that leverage deep/complex questions is the ASK system, which employs narratives as a model of problem-solving processes (Jonassen 2011a; Schank 1999). Specifically, the cases are “stored as nodes in an associative network that is organized to reflect (a) the objects and relationships in its domain of expertise and (b) the student’s task in that domain” (Ferguson et al. 1992, p. 98). As learners engage in ill-structured problem-solving, they select from lists of different questions and read a corresponding case that contextualizes the question. Thus, ASK systems are designed to (a) scaffold problem-solving using cases, (b) support learning transfer from the case to the main problem to solve, and (c) model complex questioning (Jonassen 2011a).

Using this design approach, Schmidt et al. (2020) constructed an ASK system to help caregivers of epilepsy patients gain access to case resources. Given that caregiving for individuals with epilepsy is a complex and nuanced topic, the system is designed around shallow/simple questions (e.g., What do I need to know about epilepsy?) as well as deep/complex questions, such as enablement questions (e.g., How do I help my child solve adherence problems?). The system presents the different types of questions as links to related cases that contextualize proper care protocols. Using questions as the driver of learning, the system anticipates queries that caregivers will encounter and offers cases that allow them to transfer the lessons learned to their individual context (Fig. 5).

Fig. 5 ASK system interface from the Epilepsy Adherence in Children and Technology (eACT) online learning environment (Schmidt et al. 2020). Used with permission

Discussion and opportunities for future research

Ill-structured problem-solving is a multifaceted construct that consists of an array of subskills, including information gathering (Glazewski and Hmelo-Silver 2018), argumentation (Ju and Choi 2017; Si et al. 2018), decision-making (Stefaniak and Tracey 2014), and others. As IBI is increasingly emphasized in classroom contexts (Lazonder and Harmsen 2016; Loyens and Rikers 2011), various theories and models aim to articulate the ill-structured problem-solving process. To date, many existing theories and models focus on how to elucidate and later replicate the expert reasoning process for novices within learning environments. While earlier theories and models outlined the holistic nature of the problem-solving process, other perspectives emerged that focused on specific aspects of failure (Rong and Choi 2018; Tawfik et al. 2015) and reflection (Hong and Choi 2011). Collectively, these theories and models present a nuanced way of understanding ill-structured problem-solving and the cognitive subprocesses that occur along the way.

The various design strategies that support ill-structured problem-solving each highlight the iterative nature of inquiry when solving ill-structured cases. We build off these prior theories/models and highlight how question-asking catalyzes inquiry during ill-structured problem-solving. Despite the consistent references to the importance of question-asking, the field has yet to differentiate the depth of questions and explore its implications for learning design. To address this gap, we propose a question taxonomy based on expert/novice literature that consists of the following question categories: shallow/simple (verification, disjunctive, concept completion), testing (example, feature specification, quantification, definition, comparison), and deep/complex questions (interpretation, causal antecedent, causal consequence, goal orientation, instrumental/procedural, enablement, expectation, and judgmental).

A well-defined taxonomy is essential to learning design for multiple reasons. Although many theories and models suggest question-asking is important, this taxonomy provides educators with clarity on how learners are progressing in their knowledge construction. For example, a learner focusing on simpler verification or concept completion questions across multiple iterations suggests s/he is entrenched in a shallow state of understanding. Alternatively, a learner who increasingly asks causal antecedent or enablement questions suggests a deeper understanding because their inquiries seek to connect complex ideas within the problem space. The taxonomy indicates a need for intervention in the former case, while the latter case provides evidence of growth and of how to scaffold deeper levels of learning. Educators could employ the taxonomy as a means to identify, define, track, and support these knowledge construction challenges across the various phases of problem-solving.
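The tracking use described above can be sketched as a short routine. This is a minimal sketch, assuming the learner’s questions have already been labeled with a taxonomy tier (by an instructor or a classifier); the thresholds and the `question_trajectory` helper are illustrative assumptions rather than validated cut-offs.

```python
# Minimal sketch: track the depth of a learner's questions across iterations
# and flag when the learner appears entrenched at the shallow/simple level.
# Assumes each question has already been labeled with a taxonomy tier.

TIER_DEPTH = {"shallow/simple": 1, "testing": 2, "deep/complex": 3}


def question_trajectory(labeled_questions: list[str]) -> dict:
    """Summarize a learner's questioning trajectory from tier labels."""
    depths = [TIER_DEPTH[tier] for tier in labeled_questions]
    recent = depths[-3:]  # look at the most recent questions
    return {
        "mean_depth": sum(depths) / len(depths),
        "trend": "deepening" if recent[-1] > recent[0] else "flat_or_shallow",
        "needs_intervention": all(d == 1 for d in recent),  # stuck at shallow/simple
    }


if __name__ == "__main__":
    asked = ["shallow/simple", "shallow/simple", "testing", "deep/complex"]
    print(question_trajectory(asked))
    # -> {'mean_depth': 1.75, 'trend': 'deepening', 'needs_intervention': False}
```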

The above analysis of ill-structured problem-solving technologies highlights additional insights and opportunities from a design perspective. As noted earlier, many learning systems employ design principles about how to parse content (e.g., flipped classroom; segmentation principle) or how to successfully complete a domain-specific activity (e.g., task analysis). As such, the interface is based on strategic placement of the content; however, the learning resources may not always align with the learner’s individual knowledge gaps. While questions are often appended to modules, research shows question-asking is often done through embedded hard scaffolds (Belland et al. 2017). If a hard scaffolding strategy is employed, this question taxonomy suggests a systematic pathway is needed for learners to identify basic parameters of the problem space and strategically move towards more advanced queries, such as example questions (testing questions) and causal antecedent questions (deep/complex questions). However, many designs employ a hard scaffold whereby simple questions are proffered and learners then progress directly to complex reasoning questions (e.g., causal antecedent). Failing to sequence questions in a systematic way that corresponds with learners’ knowledge trajectory may impede schema formation. Educators can thus use this taxonomy to guide their design strategies to ensure learners start at the appropriate levels and progress in their problem-solving. Researchers could also leverage the framework to assess how an interface design supports question-asking and problem-solving across different learning technologies.

As noted earlier, problem-solving theories and models consistently highlight that question-asking is an integral part of complex reasoning. It follows that learners should be engaged not only in answering questions during ill-structured problem-solving, but also in self-generating increasingly deep/complex questions to resolve the case. However, the above analysis of learning technology highlights the fact that many designs embed questions as hard scaffolds within the learning environment rather than asking learners to develop this skillset. That is, questions are presented rather than student-generated. In addition to supporting competencies such as decision-making and scaffolding, it follows that question-generation should be made more explicit within learning environments and an intentional part of the problem-solving process. This assertion coincides with advances in areas of machine learning and AI, where learners are provided more opportunities for input rather than being limited to what is prescribed on the interface. As advances in machine learning and AI are applied to education, one approach could be for the learner to input questions they want to pursue and have the system adapt with related content or even additional questions. Moreover, the taxonomy could be used to generate algorithms for an automated scaffolding system. In both cases, questions can be used as an additional guide for designing these advanced learning environments.
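A hedged sketch of how the taxonomy might seed such an automated scaffolding algorithm appears below: a learner types a question, the system assigns a tentative tier, and then recommends content or follow-up questions one tier deeper. The keyword heuristic, the cue lists, and both helper functions are stand-ins of our own; a deployed system would require a trained classifier rather than surface-level cues.

```python
# Hedged sketch of taxonomy-driven scaffolding: classify a learner-generated
# question into a tentative tier, then recommend content one tier deeper.
# The keyword cues below are placeholders for a trained classifier.

TIERS = ["shallow/simple", "testing", "deep/complex"]

TIER_CUES = {
    "deep/complex": ["why", "what causes", "what happens if", "what will happen", "how can"],
    "testing": ["how many", "difference between", "example", "to what extent"],
    "shallow/simple": ["is ", "does ", "what is"],
}


def classify_question(text: str) -> str:
    """Assign a tentative taxonomy tier to a learner-generated question."""
    lowered = text.lower()
    for tier in ["deep/complex", "testing", "shallow/simple"]:
        if any(cue in lowered for cue in TIER_CUES[tier]):
            return tier
    return "shallow/simple"  # default to the most conservative tier


def recommend_next(tier: str) -> str:
    """Recommend content targeted one tier deeper than the learner's question."""
    index = TIERS.index(tier)
    return TIERS[min(index + 1, len(TIERS) - 1)]


if __name__ == "__main__":
    question = "What is the difference between an ocular migraine and a regular migraine?"
    tier = classify_question(question)
    print(tier, "->", recommend_next(tier))  # testing -> deep/complex
```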

In addition to supporting design, the proposed taxonomy also provides opportunities for future research. To date, many studies focus on learner outputs during ill-structured problem-solving, such as argumentation (Evagorou and Osborne 2013; Tawfik and Jonassen 2013), learner discourse (Hou 2011; Huang et al. 2017), and concept maps (Olney et al. 2012; Si et al. 2018). Future studies could explore whether those artifacts of learning are responses to the questions posed by the teacher, a peer, or the learning system. This would provide additional understanding of the dynamic between a question and a learner’s iteration of their problem-solving. Other research could explore the degree to which questions are inherent within certain domains. While the literature cited above spans engineering (Ahmed et al. 2003; Chi et al. 1981; Malkiewich and Chase 2019), medicine (Dumas et al. 2014), teacher education (McIntyre and Foulsham 2018; Wolff et al. 2016), and others, it could be that certain questions are related to the problem types inherent within a domain (Jonassen and Hung 2008). For example, the diagnosis-solution problems embedded within the medical field could lend themselves to more interpretation or causal antecedent questions. Alternatively, teachers may be more focused on goal orientation or expectation questions as they work with their students. Future studies could therefore explore how these questions intersect with different learning contexts and the problems posed within the domain.

Conclusion

Theorists assert educational strategies that emphasize convergent thinking are not adequate to solve the types of complex and situated ill-structured problems faced by many practitioners (Greiff et al. 2014; Hmelo-Silver et al. 2007a, b; Jonassen 2000). As adoption of inquiry-based instruction has increased, more learning technologies that support complex reasoning have similarly been implemented within classroom settings (Herrington et al. 2014). To date, many of the applied design theories and models focus on how learners progress during different phases of the problem-solving process. As noted earlier, these theories/models reference the importance of question-asking during iterative problem-solving, but the field has yet to formally (a) define the types of questions asked and (b) explore the application of questions to learning design. Unless the link between questions and learning design is clarified, educators may not view questions as a marker of knowledge construction and thus fail to construct learning systems that support this area. In terms of design, one approach to support ill-structured problem-solving is to have a more sophisticated, systematic, and nuanced mechanism of defining questions.

Rather than assuming all questions are the same, the proposed taxonomy views question-asking as a problem-solving skillset that requires fostering in a similar vein to information-seeking, argumentation, decision-making, causal reasoning, and others. The taxonomy could be used as a roadmap for educators to structure their learning designs to guide learners towards higher levels of understanding, as well as an analytical tool capable of assessing relative levels of student comprehension based on the questions they ask. Based on the expert/novice literature, we argue questions should be classified according to the following taxonomy: shallow/simple (verification, disjunctive, concept completion), testing (example, feature specification, quantification, definition, comparison), and deep/complex questions (interpretation, causal antecedent, causal consequence, goal orientation, instrumental/procedural, enablement, expectation, and judgmental). Classification of questions based on associated cognitive processes can also be used in emergent systems design (e.g., machine learning algorithms) to (a) diagnose learners’ understanding and (b) guide how to foster students’ own questioning as they construct their knowledge. In doing so, our hope is that designers of learning systems can apply such a taxonomy to evaluate and guide their design of new technologies to advance higher order learning.