1 Introduction

Adaptive instructional systems (AISs) are artificially-intelligent, computer-based systems that guide learning experiences by tailoring instruction and recommendations based on the goals, needs, preferences, and interests of each individual learner or team in the context of domain learning objectives [1]. Multiple academic and industry-based communities have engaged in defining the theoretical constructs, practical approaches, and technical standards for AISs for human learning. Currently, several workgroups within the IEEE Learning Technology Standards Committee are developing AIS standards informed by prior research-based frameworks such as the Generalized Intelligent Framework for Tutoring [2] and the Knowledge-Learning Instruction Framework [3]. A consortium within the IEEE standards organization (ICICLE) is defining and developing learning engineering as a profession and as an academic discipline. Rapidly developing innovations in artificial intelligence, virtual/augmented reality, social learning platforms, instrumented learning experiences, and mobile/place-based learning are creating new opportunities to leverage technology to optimize human learning. Meanwhile, learning sciences findings continue to expand what we know about how people learn.

The offered model begins with the notion that the learner is a core component of the AIS. The academic discipline of human-computer interaction recognizes human actors as part of the system. However, conceptual and architectural models of AISs often focus on the functions of the technology, with the learner as a user external to the system; in many models only a proxy of the learner (the learner model) is considered part of the system. The offered model considers the role of the human learner and the physical/perceptual environment, as well as the digital twin representations of the real agent and environment.

The offered model is also informed by the emerging field of learning engineering, which offers mindsets, processes, and practices for those designing and developing AISs, and which may translate into new approaches to design decisions.

We start with the learner component at the center of the AIS design. As in other development models, learner needs drive design decisions, and as in other models, we begin with a prototype of the learner and learner context to set the purpose and goals for the system and to consider what problems need to be solved. The design builds outward from the learner component, considering the requirements for interoperation of the learner and the other system components.

2 Learner-Centered Adaptive Feedback

The system design is guided by the requirements for interactions between the learner and other components via human-computer interfaces, inputs from and about the learner, and feedback to the learner from the other system components. “Feedback” here is broadly defined to include recommendations, visualizations, system prompts, adaptations to the user interface (e.g. menu choices offered), and adaptations to learning experiences. The quality of the feedback generated by system components (e.g. a human or AI agent) depends on a current and accurate understanding of the learner.

The learner-centered adaptive system may give feedback to the learner at multiple levels (Fig. 1; a brief code sketch follows the list):

  • Progress level (hyper-adaptive)—feedback on where a learner is in relation to long-term learning goals, such as progress toward award of a credential, qualification for a job, or level of mastery in a domain

  • Lesson level (macro-adaptive)—feedback or adaptation between learning experiences that helps inform what the learner does next, e.g. recommending an instructional strategy or learning experience

  • Activity level (micro-adaptive)—formative feedback or adaptation during a learning experience.
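
To make these levels concrete, the following minimal sketch (illustrative only; the names FeedbackLevel and plan_feedback and the routing rules are assumptions, not part of any AIS standard) shows how an adaptive engine might tag and route feedback by level:

  from enum import Enum, auto

  class FeedbackLevel(Enum):
      PROGRESS = auto()  # hyper-adaptive: progress toward long-term goals
      LESSON = auto()    # macro-adaptive: what to do next, between experiences
      ACTIVITY = auto()  # micro-adaptive: formative feedback within an experience

  def plan_feedback(event: dict) -> tuple[FeedbackLevel, str]:
      """Route a learner event to a feedback level (toy rules for illustration)."""
      if event.get("credential_progress_changed"):
          return FeedbackLevel.PROGRESS, "You are 80% of the way to your credential."
      if event.get("lesson_completed"):
          return FeedbackLevel.LESSON, "Recommended next: a worked-example lesson."
      return FeedbackLevel.ACTIVITY, "Hint: re-check the second step of your solution."

  print(plan_feedback({"lesson_completed": True}))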

Park and Lee (2003) treat Aptitude Treatment Interactions (ATI) as a separate class of adaptive instructional approach, although it could be argued that the treatments involved may operate at the micro-adaptive or macro-adaptive levels. The ATI approach considers learner aptitudes (learner characteristics or environmental conditions that increase or impair the probability that a given treatment will result in student learning) and treatments (variations in the pace or style of instruction). According to Park and Lee (2003), “since Cronbach (1957) made his proposal, relatively few studies have found consistent results to support the paradigm or made a notable contribution to either instructional theory or practice.” However, some ATI research provides valuable insights into the variety of aptitude variables that might serve as inputs to adaptation decisions made by AISs, and into the variety of treatments (instructional strategies and conditions) that might be adapted to the learner’s needs based on those inputs.

3 Conceptual Model of an Adaptive Instructional System

According to the AIS Ontology Workgroup [4], an AIS may be conceptualized as a combination of four models: (1) learner models, (2) knowledge models (referred to below as domain models), (3) adaptive models, and (4) interface models (see also Murray, 1999; Sottilare & Brawner, 2018; Woolf, 2010; Nkambou, Mizoguchi, & Bourdeau, 2010).

3.1 The Learner Model

The learner model (implemented with data architecture) is a structured representation of a learner’s knowledge, abilities, dispositions, habits of practice, misconceptions, difficulties, and/or other learner attributes that evolve during the course of learning. Details such as the learner’s background knowledge, prior experiences, cultural values, and learning contexts have an impact on what feedback will be most effective at any given moment.

The learner model may function as an imperfect “digital twin” of the learner. It is imperfect because the mind of a learner is not directly observable and the scope of data in any system will be limited. However, along with observable event data, the learner model may include predictions or assertions about unobservable characteristics of the learner that have been inferred from observable event data.

Learning technology may handle learner data in different ways and in both structured and unstructured formats, with more or fewer links to contextual information. The learner model may use a federated approach that draws on data across multiple physical system components. The learner model has interdependencies with other system components and with other systems in the ecosystem.

3.2 The Domain Model

The domain model (implemented as information repositories) is a fundamental element of an AIS that contains the set of skills, knowledge, and strategies/tactics for the topic under instruction. It normally contains the ideal learner or expert knowledge model for the domain of instruction, along with question banks, common bugs, mal-rules, misconceptions, and content. The domain model includes data, metadata, and learning resources, and maps the relationships among them. Such content and relationships, illustrated in a rough data sketch after this list, include

  • competency frameworks,

  • competency pathways,

  • pedagogical models,

  • adaptive strategies,

  • lesson definitions,

  • activity definitions,

  • learning resources (static or interactive content),

  • knowledge maps,

  • rubrics,

  • assessment activities,

  • metadata, and

  • paradata to support adaptations, weighting, decision-making.
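
As a rough illustration of how a few of these elements might be structured and linked, consider the minimal sketch below; the class and field names are hypothetical, not a prescribed schema.

  from dataclasses import dataclass, field

  @dataclass
  class CompetencyDefinition:
      id: str
      statement: str
      prerequisites: list[str] = field(default_factory=list)  # ids of other competencies

  @dataclass
  class LearningResource:
      id: str
      competency_ids: list[str]                     # alignment to competency definitions
      metadata: dict = field(default_factory=dict)  # e.g. media type, difficulty
      paradata: dict = field(default_factory=dict)  # e.g. usage counts, ratings

  @dataclass
  class DomainModel:
      competencies: dict[str, CompetencyDefinition]
      resources: dict[str, LearningResource]

      def resources_for(self, competency_id: str) -> list[LearningResource]:
          """Map a competency to the learning resources aligned to it."""
          return [r for r in self.resources.values()
                  if competency_id in r.competency_ids]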

Information from the domain model and learner model are inputs into the adaptive model.

3.3 The Adaptive Model

The adaptive model (implemented as adaptive engines) represents the decision-making and control functions of the adaptive system. The adaptive model uses data from the learner model and the domain model as input, informs decisions about what strategies, steps, and actions the AIS should take next, and triggers feedback events. In mixed-initiative systems, learners may also take actions, ask questions, or request help (Aleven, McLaren, Roll, & Koedinger, 2006; Rus & Graesser, 2009), but the AIS must still be able to decide on next steps, a decision determined by the adaptive model and driven by pedagogical theories.
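
As a hedged sketch of this flow (the policy below is invented for illustration and reuses the hypothetical DomainModel sketch from Sect. 3.2; it is not the standard's decision logic), an adaptive engine might combine learner-model and domain-model inputs as follows:

  def decide_next_step(learner_model: dict, domain_model) -> dict:
      """Toy adaptive-model policy: target the weakest unmastered competency
      and select an aligned resource; fall back to review when all are mastered."""
      mastery = learner_model.get("mastery", {})  # competency id -> estimate in [0, 1]
      unmastered = [cid for cid, p in mastery.items() if p < 0.95]
      if not unmastered:
          return {"action": "review", "feedback": "All current objectives mastered."}
      target = min(unmastered, key=lambda cid: mastery[cid])
      resources = domain_model.resources_for(target)
      return {"action": "present",
              "competency": target,
              "resource": resources[0].id if resources else None}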

These three models (learner model, domain model, and adaptive model) represent the theoretical/conceptual components in an adaptive instructional system (Knowles et al., Draft Standard for the Classification of Adaptive Instructional Systems). A physical implementation of an AIS may have many more components that address specific functions of the system.

Adaptive learning systems employ data-informed adaptations of learning experiences and conditions for learning. The workgroup for the IEEE AIS Standard for the Classification of Adaptive Instructional Systems has identified “levels of adaptivity” for AISs. At the highest level are systems that are self-improving.

This paper examines the concept of a self-improving system in three contexts:

  1. the learner as a key component of an AIS

  2. the AIS technology and information architectures

  3. the AIS engineering team

3.4 The Interface Model

The user interface model (implemented as human-computer interfaces and machine-machine interfaces, including application programming interfaces) is a representation of how a human user or another system component interacts with a component of the system, and how the system component responds. It interprets the learner’s input (e.g. speech, typing, clicking) and produces outputs (e.g. text, diagrams, animations, agents). In addition to conventional human-computer interface features, recent systems have incorporated natural language interaction (e.g. Graesser et al., 2012), speech recognition (e.g. Litman, 2013), and the sensing of learner emotions (e.g. Baker, D’Mello, Rodrigo, & Graesser, 2010; Goldberg, Sottilare, Brawner, & Holden, 2011).

Interface models may be further classified based on whether the interface represented is between a human and the system components or between system/software modules.

  • Human-computer Interface Model—a representation of how human user(s) interact with a computer program or another device and how the system responds.

  • Application Programming Interface (API) Model—a representation of how other system components gain access to specific information, trigger particular behavior, or perform some other action in a component of the system (see https://www.w3.org/2008/webapps/; a minimal interface sketch follows this list).
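
For illustration, the contract of such a machine-machine interface between an adaptive engine and a learner-model service might resemble the hypothetical sketch below; the method names and payload shapes are assumptions for exposition, not a published API.

  from typing import Protocol

  class LearnerModelAPI(Protocol):
      """Hypothetical machine-machine contract for a learner-model component."""

      def get_learner_state(self, learner_id: str) -> dict:
          """Return the current digital-twin state for a learner."""
          ...

      def record_event(self, learner_id: str, event: dict) -> None:
          """Append an observed interaction event to the learner's record."""
          ...

      def assert_competency(self, learner_id: str, competency_id: str,
                            confidence: float) -> None:
          """Store an inferred competency assertion with a confidence level."""
          ...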

4 Learning Engineering

Given the enormous complexities, trade-offs, and uncertainties associated with learning in real-world learning contexts (Koedinger, Booth, & Klahr, 2013), building effective AISs is difficult. This issue is often magnified with scale. Scale arises not only from the number and diversity of learners, but also from the variability of time and space for learning opportunities, the resulting rich data about learner engagement and performance, the mass personalization (Schuwer & Kusters, 2014) of learners and learner groups, and the ways in which our pedagogy must adapt to these needs (Roll, Russell, & Gasevic, 2018). All of these must be taken into account as technologies, pedagogies, research and analyses, and theories of learning and teaching are combined to design effective learning interactions and experiences. Growing learning engineering efforts are beginning to shed light on the processes that help us figure out what works in AISs to promote learning, why it works, and how to scale what works.

Learning engineering is “a process and practice that applies the learning sciences, using human-centered engineering design methodologies, and data-informed decision-making to support learners and their development” [5]. Learning engineering applies the learning sciences—informed by cognitive psychology, neuroscience, and education research (Wilcox, Sarma, & Lippel, 2016)—and engineering principles to create and iteratively improve learning experiences for learners. It leverages human-centered design to guide design choices that promote robust student learning, but also emphasizes the use of data to inform iterative design, development, and improvement processes. In the following subsections, we provide examples of ways in which learner needs drive the design decisions within each aspect of the learning engineering process and practice.

4.1 The Learning Engineering Process and Practice

The Generalized Intelligent Framework for Tutoring (GIFT) [2], the Knowledge-Learning Instruction (KLI) Framework [3], and similar efforts such as ASSISTments as an open platform for research [6] are excellent examples of learning engineering in practice. The GIFT testbed methodology supports the manipulation of the learner model, instructional strategies, and domain-specific knowledge, and enables empirical evaluation of the effects of environmental attributes, tools, models, and methods on student learning, engagement, and transfer of skills [7]. The KLI framework advocates in-vivo experimentation, which enables rigorous experimental controls in real learning settings with real students. Such frameworks have led to important understandings of what works and why and, as a result, to robust learning gains (e.g., [8, 9]).

Effective learning requires the integration of research across the different fields that impact learning. The learning engineering process enables data-informed decision-making through development cycles that include learning sciences, design-based research, and learning analytics/educational data mining. It leverages advances from different fields including the learning sciences, design research, curriculum research, game design, data sciences, and computer science. It thus provides a socio-technical infrastructure to support iterative learning engineering and practice-relevant theory for scaling learning sciences through design research, deep content analytics, and iterative product improvements.

Figure 2 illustrates a learning engineering process for the design and development of an AIS. Product decisions across human-centered design and dissemination in this process may vary in method, but they still follow similar patterns of questioning, generating or accessing data, interpreting data, and applying key learnings.

Fig. 1. Levels of Feedback. From Glowa, L. and Goodell, J. (2016) Student-Centered Learning: Functional Requirements for Integrated Systems to Optimize Learning. Vienna, VA: International Association for K-12 Online Learning (iNACOL). Adapted with permission.

Fig. 2. The learning engineering process. From [10].

The process starts with decisions or hypotheses. Decisions include product design, experience design, and learning design. An example of a hypothesis could be, “If using design x, the response will be y.” From this point, areas of information need emerge. To create an AIS that is successful in terms of learning goals, engagement, and market viability, teams should answer questions such as “To what extent are product/learning assumptions true?” or “What is the behavioral response to the interaction design?” These become the research questions that drive a focused research design toward data gathering and meaning-making.
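
As one hedged illustration of closing the loop on such a hypothesis, a team might compare two design variants with a simple two-proportion z-test; the data and threshold below are invented for the example.

  from math import sqrt

  def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
      """z-statistic comparing completion rates of two design variants."""
      p_a, p_b = success_a / n_a, success_b / n_b
      p_pool = (success_a + success_b) / (n_a + n_b)
      se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
      return (p_a - p_b) / se

  # Hypothetical playtest data: design x vs. design y lesson completion
  z = two_proportion_z(success_a=168, n_a=300, success_b=132, n_b=300)
  print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 5% level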

Learners use AISs, and it is through user activity that data are generated. A learner-centered approach requires production teams to have a complete (or as close to complete as possible) view of the learner, their motivation, and their learning environments. Data can be observational, behavioral, sentiment-based, or analytic, and are gathered and understood through a variety of analytical methods. By way of these methods and analytic approaches, researchers make sense of findings and, together with product owners and designers, derive insights to inform design improvements and contribute to the broader research corpus.

4.2 Applications of Learning Sciences

The learning engineering process starts with the application of learning sciences to inform the pedagogy and design for learning and engagement. We use the broad definition of the learning sciences offered by the International Society of the Learning Sciences: research involving the “empirical investigation of learning as it happens in real-world settings” [11]. Learning sciences research is interdisciplinary and includes scholarship from areas like cognitive science, educational psychology, curricular studies, and design research. In the learning engineering process, learning sciences research is mapped to stages of design, from curriculum to immersion and interaction design, and to overall system design and final product development and implementation, with considerations of evidence and data collection [12]. In relation to adaptive design, important methodological extensions of the learning sciences such as user-centered design research, game-based learning design, learning analytics, and educational data mining [13, 14] also become relevant as part of a paradigm of data-informed AISs. As such, learning sciences applications are present at all stages of the learning engineering process [13].

At the start of the design and development process, the fundamental question in learning design is to clearly define what is being taught. Educational research in curricular design investigates methodology in this area, with the establishment of learning trajectories [15], whose fundamental components include fine-grained, measurable learning objectives and pathways that embed formative assessment for differentiated instruction. For foundational mathematics skills, for instance, the design of learning trajectories informs the approach and a specified ontology of core mathematics learning objectives in the program Building Blocks, a mathematics curriculum designed using a comprehensive Curriculum Research Framework to address numeric and geometric ideas [16].

Building on the definition of competencies and learning objectives, learning sciences research can inform the core design of learning experiences. This can be done in numerous ways.

In game-based and immersive learning contexts, for example, evidence-based design frameworks serve to fundamentally connect learning objectives with specific digital interactions designed to elicit evidence of learner knowledge, skills, or abilities (e.g. Evidence Centered Design [17]). This alignment allows insight into student learning through real-time interaction with a virtual space, providing ongoing performance data that enable formative feedback and personalized learning pathways. This enables authentic assessment and player immersion, which are vital for reaching learners in both formal and informal learning environments [18].

Learning science can also support learning design by using research in human cognition and development to support long-term learning of target competencies. For example, desirable difficulties [19] such as retrieval practice [20], interleaving [21], and distributed practice [22] can be built into an adaptive system to support long-term memory. Applications of perceptual learning principles can be used to support the development of perceptual expertise (e.g., [23]). Concreteness fading principles can be used to promote conceptual transfer [24].
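
As a hedged example of building distributed practice into an adaptive system, the sketch below schedules reviews at expanding intervals; the doubling policy and cap are assumptions for illustration, not a validated memory model.

  from datetime import date, timedelta

  def next_review(review_count: int, last_review: date, correct: bool) -> date:
      """Expanding-interval spaced practice: double the gap after each success,
      reset to one day after a miss (illustrative policy only)."""
      if not correct:
          return last_review + timedelta(days=1)
      interval_days = 2 ** min(review_count, 6)  # 1, 2, 4, ... capped at 64 days
      return last_review + timedelta(days=interval_days)

  print(next_review(review_count=3, last_review=date(2020, 3, 1), correct=True))
  # -> 2020-03-09: an 8-day gap after the third successful review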

In data-centric phases of the learning engineering process, methodological extensions of the learning sciences such as learning analytics and educational data mining [14] can guide design improvements for better learning and engagement (more in Sect. 4.4). Learning sciences approaches can inform research questions, analyses, and interpretation of results, and help derive insights for design improvements [25]. Resulting insights can enable intelligent personalization of the system, inform iterative data-driven design of core activities and mechanics, and allow for real-time visualization of learner progress.

4.3 Human-Centered Engineering Design Methodologies

Achieving learning and engagement goals requires a deep knowledge of the end user. This is where human-centered engineering design processes are critical. Consider design researchers in a learning engineering team regularly recruiting children and parents to playtest early production prototypes, investigating the ways young children demonstrate their problem-solving and meaning-making through proposed playful interactions. Data from such user testing sessions can drive the concrete interactions, user interface, and user experience design for each learning experience, all of which must be sensitive to the cognitive load, executive functioning skills, and other demands appropriate for the learners’ cognitive and developmental stages. This is particularly crucial for young learners, for whom the interaction needs are difficult to calibrate in initial design iterations.

Design research outcomes support the blending of pedagogical and engagement goals by pointing to actionable insights that allow teams to make informed design decisions through product development cycles. From this perspective, educational design research is embedded in and integral to the design work itself [26, 27]; is grounded in empathy, beginning with the needs and perspectives of the people being designed for [28]; is interested in discovering how and why people behave the way they do, and what opportunities may exist for new innovation; and highlights starting points for the design of meaningful interactions, which sit at the core of well-designed environments for teaching and learning [29].

4.4 Data-Informed Decision Making

With large numbers of diverse learners engaging with content, the extensive, detailed trace data about learner engagement and performance from AISs allow for better understanding of how learning unfolds, and permit experimentation with different analytical methodologies and approaches (such as learning analytics and educational data mining) and with content through rapid design and evaluation cycles (e.g. [14]). This means we can know more about how learners with different attributes engage and learn, and thus can differentiate educational experiences to match learners’ skills, goals, interests, and backgrounds [30]. A learning engineering approach takes each of these elements into account to work for products at scale.

An AIS with rich event-stream data, enabled by the integration of research-based design phases, supports the application of a large range of methods in learning analytics and educational data mining to drive learning insights and design iterations. Learner performance data can be surfaced in dashboard visualizations as feedback for students, parents, and teachers to monitor and provide additional individual learning support. Event-stream data of learner progress also support efficacy research to evaluate learning outcomes. For example, in educational data mining efforts for learning design and better personalization, behavior detection methods were recently used to build a predictor tracking when students are “wheel-spinning” in ABCmouse Mastering Math, an AIS designed to promote early number sense in children ages 2–8. Wheel-spinning students are those who are spending too much time struggling to learn a topic without achieving mastery, a form of unproductive persistence [31], in contrast to students who are productively persistent [32]. By detecting wheel-spinning in real time, AISs can better respond to students who need support and surface these insights to educators for additional in-person intervention. This also enables the design of dashboards summarizing learning objective mastery. Insights from these efforts can also inform pedagogical understanding and informed design for deeper learning and engagement.
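
A minimal sketch of real-time wheel-spinning detection, in the spirit of [31] but with invented thresholds and logic, might flag learners who accumulate many attempts on a topic without a run of consecutive correct responses:

  def is_wheel_spinning(attempts: list[bool],
                        mastery_streak: int = 3,
                        attempt_limit: int = 10) -> bool:
      """Flag unproductive persistence: reaching `attempt_limit` attempts on a
      topic with no run of `mastery_streak` correct responses (toy thresholds)."""
      streak = 0
      for i, correct in enumerate(attempts, start=1):
          streak = streak + 1 if correct else 0
          if streak >= mastery_streak:
              return False   # mastered: productive persistence
          if i >= attempt_limit:
              return True    # long struggle without mastery: wheel-spinning
      return False           # too few attempts to tell yet

  print(is_wheel_spinning([False, True, False, False, True,
                           False, False, True, False, False]))  # -> True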

Learning engineering iteration cycles can impact learning outcomes at large not just by feeding insights into new product decisions, but also by sharing learnings back into the broader academic community of learning engineers and product developers. The result is a learner-centered practice and process that (1) provides a comprehensive view of the learner and their environment to enable the design of effective and engaging personalized learning experiences for diverse learners, (2) enables fast learning and development, which is crucial for staying sustainable with limited resources in typical industry production environments, and (3) provides coherence across theories and methodologies to enable actionable insights toward product design, development, validation, and contribution to the field.

5 The Learner as a System Component

The models introduced earlier provide a theoretical/conceptual model of the AIS. We discussed how learners’ needs drive the design and development of AISs in the learning engineering process. When considering a functional system, we propose that the learner and the learning environment are also part of the system (Fig. 3).

Fig. 3. Diagram showing the learner and learning environment as part of the overall system, connected to AIS modules via human-computer interfaces and environmental sensors. (Goodell, 2019)

Each component in a system has a job to do. If the efficiency and effectiveness of one component can be improved, it may or may not improve the functionality of the entire system. In other theoretical AIS models, the learner model serves as a proxy for the learner as it does here, so the adaptive engine makes inferences and adaptations using the digital twin, and can then infer whether that adaptation had a positive effect on learning based on the updated digital twin. That is a valid approach. However, the expanded perspective that includes the learner and environment as part of the system might prompt learning engineering teams to ask different questions, leading to new insights for optimizing the overall system. As we have seen, the exploration of such research questions about the learner and their learning environment is an important part of the learning engineering process. For example, what environmental conditions/patterns might be affecting learning in ways that humans might not notice, but an AI agent might discover? How can we better design the conditions outside of the technology system to promote learning? What are the blind spots in the differences between the learner and the learner model’s digital twin of the learner? What if we use different kinds of interfaces to adapt conditions for the learner than what we are used to (e.g. non-verbal audio cues, natural language dialogue, physiological sensors, climate control adjustments)?

If we consider the learner and learning environment as system components, and the learner’s function is to learn, then one of the core design goals for the system should be to help the learner become better at learning. It is not just about what the other components can do to the learner to cause learning to happen. What additional components can be introduced to help the learner become a better learner, a more motivated learner, a self-regulated learner?

We might say that the overall system goal is to optimize the functionality of the “learner component” by optimally adapting interactions between the other components of the system and the learner via human-computer interfaces and by adapting the conditions within which those interactions take place.

These interactions can be expressed simply as a cycle: learner experiences and conditions are observed, measured, and analyzed by other components to produce inferences about the state of the learner, forming a diagnosis and a prescriptive adaptation that informs what the system does next. The prescription may be immediate feedback, an adaptation of a future learning experience, or a trigger for some extra-learning process (e.g. a credential assertion, an alert to an instructor) (Fig. 4; a brief code sketch follows the figure).

Fig. 4. Experience and feedback cycle illustration. CC BY Goodell, J. & Flynn, J. (2016). Adapted with permission.
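
In code-sketch form, one pass of this cycle might look like the following; the stub interface and the two-mistake diagnosis rule are placeholders for real AIS modules, offered only to make the loop’s structure concrete.

  class EchoInterface:
      """Stub human-computer interface standing in for real sensors and UI."""
      def observe(self) -> dict:
          return {"item": "q1", "correct": False}
      def present(self, feedback: str) -> None:
          print("feedback:", feedback)

  def feedback_cycle(interface, learner_state: dict) -> str:
      """Observe -> update digital twin -> diagnose -> prescribe -> act."""
      event = interface.observe()                               # experience data
      learner_state.setdefault("events", []).append(event)      # measure/record
      struggling = sum(not e["correct"] for e in learner_state["events"]) >= 2
      diagnosis = "struggling" if struggling else "on-track"    # inference
      prescription = ("offer a hint and an easier practice item"
                      if struggling else "continue current pathway")
      interface.present(prescription)                           # adaptation
      return diagnosis

  state: dict = {}
  for _ in range(3):
      feedback_cycle(EchoInterface(), state)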

5.1 Modeling the Mind (and Body) of the Learner

System components in the “learner model” category may serve as a “digital twin” of the learner. This digital twin is the information that informs the rest of the adaptive system. It is a kind of interface between the real learner component and the AIS. For cognitive learning objectives the adaptive system, just like a good human tutor, attempts to “get inside the learner’s head” to understand conceptions that are on target and misconceptions that need correction. For objectives that involve development of both mental and physical abilities, the digital twin may include information about physiology and physical development.

This information may include the following (a data-structure sketch follows the list):

  • both facts and inferences about past, current, and predicted future cognitive capabilities and functions (Examples: logs of interaction, transactional performance assessment data, and inferred competency assertions derived from those raw data)

  • physiological attributes related to learning and performance objectives. (Examples: physiological metrics/abilities/limitations that might indicate readiness, lack of readiness, or need for accommodations, scaffolding, or pre-requisite physical conditioning; eye tracking to gauge learner engagement)

  • both raw transactional data and processed/interpreted data as a log of a learner’s activities, actions, and experiences (They may include data to support spaced learning and knowledge decay models.)

  • various data collected from the learner to indicate learner preferences or provide feedback to the system

  • data collected from other people on behalf of the learner

  • data collected through sensors and external systems

  • inferences, assertions, and evidence of competency and levels of mastery

  • inferences and evidence of knowledge gaps and misconceptions

  • inferences and evidence of learning behaviors

  • contextual data about the environmental conditions, cultural contexts, human relationships that might influence a learning experience or the learner’s general perceptions that impact learning

  • data about levels of engagement, emotional states, etc.

  • records of feedback offered to the learner

  • applicable metadata inferred from patterns in other learner data sets used to predict optimum conditions for this learner’s current and future pursuit of the same learning objectives. (e.g. demographics, prior learning pathways, contextual patterns).
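
A minimal data-structure sketch of such a digital twin appears below; the fields mirror a subset of the list above and are illustrative assumptions, not a standard schema.

  from dataclasses import dataclass, field
  from typing import Any

  @dataclass
  class DigitalTwin:
      """Imperfect digital twin: observed events plus inferred learner state."""
      learner_id: str
      event_log: list[dict] = field(default_factory=list)          # raw transactional data
      competency_assertions: dict[str, float] = field(default_factory=dict)  # id -> mastery
      misconceptions: set[str] = field(default_factory=set)        # inferred gaps
      preferences: dict[str, Any] = field(default_factory=dict)    # learner-supplied input
      context: dict[str, Any] = field(default_factory=dict)        # environment, culture
      engagement: dict[str, float] = field(default_factory=dict)   # e.g. affect estimates
      feedback_history: list[dict] = field(default_factory=list)   # feedback offered so far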

5.2 From Generic Proxies of the Learner and Context to More Precise Digital Twins

The system design starts with designing for a proxy of the learner based on a set of general assumptions about how people learn, about the class of learners who will interact with the system, and about their learning environments. The domain model and the learning experiences designed for the system are based on, and may map to, definitions of pedagogical models. Pedagogical models represent the application of learning theories through a pedagogical approach, i.e. they define the kinds of interactions that in theory should promote a learner’s achievement of learning objectives [1].

The adaptive system doesn’t rest with a general proxy of the learner or general assumptions about how people in general might respond when a pedagogical model is applied in a given context. Once the specific “learner component” is plugged into the system, the other components begin to adapt. With every interaction, the learner model is augmented and corrected into a more precise “digital twin” of this specific learner. Information about learner responses to system stimuli is used to test the general theories behind pedagogical models and to enrich those models based on the specific learner, context, and conditions. The more precise pedagogical model can then be used to adapt the learning activities offered and other factors of the learning experience (e.g. content, presentation, scaffolding, motivational constructs).
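
One widely used technique for augmenting and correcting a learner estimate with every interaction is Bayesian knowledge tracing; the sketch below is offered as an illustration of the idea, with invented parameter values, not as the model this paper prescribes.

  def bkt_update(p_known: float, correct: bool,
                 slip: float = 0.1, guess: float = 0.2, learn: float = 0.15) -> float:
      """Bayesian knowledge tracing: revise P(skill known) after one observed
      response, then apply the learning transition (parameters are illustrative)."""
      if correct:
          evidence = p_known * (1 - slip)
          posterior = evidence / (evidence + (1 - p_known) * guess)
      else:
          evidence = p_known * slip
          posterior = evidence / (evidence + (1 - p_known) * (1 - guess))
      return posterior + (1 - posterior) * learn

  p = 0.3
  for outcome in [True, True, False, True]:
      p = bkt_update(p, outcome)
      print(f"P(known) = {p:.2f}")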

This kind of iterative improvement is a learning engineering process that can be and is done by human learning engineering teams such as creators of intelligent tutoring systems. It is also a learning engineering process that can be done with AI/machine learning.

5.3 Modular Architecture

While conceptually the proposed learner-centered AIS fits into four conceptual categories (learner models, domain models, adaptive models, and interface models), the functional components require further classification. We envision a distributed system with modules that perform specialized functions and require specialized data architectures. For example, the data architecture for an assessment item bank has very different structural requirements than the data architecture for a competency framework repository.

The following list defines modules that might be included in a distributed learner-centered AIS [33]:

  • Competency framework module—an information resource with competency definitions and metadata to which learning experiences, resources, assessment items, and other resources will be aligned. This includes information about

    • competency frameworks;

    • competency definitions;

    • associations between competency definitions, e.g. for optimizing competency-based pathways;

    • rubrics and/or assessment criteria profiles (profiles of how a learner’s competence level can be measured for competency definitions and given contexts); and

    • feedback profiles.

  • Customized learner profile modules that combine data from source systems and input from students, educators/instructors, parent/guardian (if applicable), supervisors, and others involved in the student’s education/training, work context, or well-being

  • Separate learner model repositories with data in multiple formats and granularities, such as a learning experience record store with granular log data versus more structured data repositories for assessment results, competency assertions, and predictive inferences driving motivational feedback

  • Personalized learning plan modules responsive to the learner as he or she progresses and changes

  • Learning resource/activities content repositories

  • Learning resource/activities metadata/paradata repositories

  • Learning resource/activities discovery module

  • Content authoring modules

  • Interface & instrumentation modules (including multisensory learner stimulus components, sensors, and data capture functions)

  • Repositories of pedagogical models

  • Repositories of adaptive strategies as inputs to adaptive engines (Fig. 5)

    Fig. 5. From Glowa, L. and Goodell, J. (2016) Student-Centered Learning: Functional Requirements for Integrated Systems to Optimize Learning. Vienna, VA: International Association for K-12 Online Learning (iNACOL). Adapted with permission.

5.4 Designing from the Inside Out, and then Through Iterative Optimization

We envision future design and development of adaptive instructional systems benefiting from an emerging learning engineering discipline that embraces a learner-centered iterative problem-solving approach. Designing from the inside out, the learner’s needs drive the design. This approach starts with imperfect but research-based assumptions about the learner, how they will interact with the system, their learning environment, the learning objectives and pedagogical models, the decisions about what functional components are needed to implement the pedagogical model, the learning experience designs that are mapped to pedagogy and learning objectives, and so on. As data from and about the users are collected, we can iterate on the design and address specific design and interface problems. The proposed learner-centered AIS model is a self-improving system by design that embodies key learning engineering processes. The human-centered design approach can enable us to better understand the learner and the problem to be solved (i.e. the learner model). Applications of the learning sciences can drive and inform improvements in the domain model, adaptive model, and interface model. Iterative design and development approaches can enable data-informed decision-making. By designing from the inside out and then optimizing iteratively, we can build AISs that address diverse learning needs at scale.