Acronyms and Definitions

A.I.:

Artificial Intelligence, for this work, is any semantic technology that aids human cognition.

AoC:

Allocation of Complexity is the relative number of nodes in the information and knowledge layers. For example, a computer model with fifty knowledge nodes and fifty information nodes, where each node counts as one unit of complexity, has an even AoC across the information and knowledge layers.
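
Expressed compactly (my notation, with $N_K$ and $N_I$ the knowledge- and information-layer node counts):

$$\mathrm{AoC} = N_K : N_I, \qquad 50 : 50 \Rightarrow \text{an even AoC}.$$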

AoU:

Allocation of Understanding is a theoretical distribution of human understanding across data, information, knowledge, wisdom, and vision.

DSM:

Decision Support Model is the computer model hidden behind the computer interface.

DSS:

Decision Support Systems go by many names, such as expert management systems, knowledge management systems, and others.

FOM:

Figure of Merit is a ruler of sorts used to rate the utility of individual solutions so that a wise decision (the best solution among alternatives) can be made.
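
Stated as a worked formula (my notation, with $S$ the set of alternative solutions):

$$s^{\star} = \arg\max_{s \in S} \mathrm{FOM}(s),$$

i.e., the wise decision selects the solution that rates highest on the ruler.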

FoU:

Flow of Understanding indicates the directional transfer of understanding from SoU to ToU. The relative positions of the ToU and SoU with respect to each other can be top-down, sideways, or bottom-up (traditional pedagogy is bottom-up).

HCI:

Human-Computer Interaction.

HoU:

Hierarchy of Understanding usually means a hierarchy of data, information, knowledge, wisdom, and vision (DIKWV). (Footnote: The phrases hierarchies of understanding (HoU), cognitive hierarchies, and semantic hierarchies have similar meaning in this work. Each refers to the layers of data, information, knowledge, and beyond, and to the processes and context that transform one layer into another.)

%RinC:

Percentage Reduction in Complexity is the percentage reduction in the number of nodes of information and knowledge within a computer model.
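
As a formula (my notation, with $N$ counting the information and knowledge nodes):

$$\%\mathrm{RinC} = \frac{N_{\mathrm{ref}} - N_{\mathrm{abridged}}}{N_{\mathrm{ref}}} \times 100\%.$$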

%RinU:

Percentage Reduction in Understanding is the percentage loss in fidelity incurred when a complex system is optimized with an abridged computer model. For instance, if the optimized goal function = 100 for a reference computer model, and 10 information or knowledge nodes are then culled from the model such that the optimized goal function for the abridged model = 99, then one defines a %RinU of 1%.
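
Restating the example above as a formula (my notation, with $F^{*}$ the optimized goal-function value):

$$\%\mathrm{RinU} = \frac{F^{*}_{\mathrm{ref}} - F^{*}_{\mathrm{abridged}}}{F^{*}_{\mathrm{ref}}} \times 100\% = \frac{100 - 99}{100} \times 100\% = 1\%.$$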

SME:

Subject Matter Expert is a person recognized as an expert on one particular subject.

SoU:

Source of Understanding can be an SME, teacher, A.I., HCI, DSS, or other semantic technology that offers Understanding (DIKWV) to a ToU.

ToU:

Target of Understanding (ToU) can be a novice, student, or User of A.I., HCI, DSS, or other semantic technology.

U.A.I.:

Ubiquitous Artificial Intelligence is a predicted property of near-future (2020s) advanced societies.

VHM:

Vision Hierarchy Model is synonymous with Vision HoU and Vision Hierarchy, and it is a hierarchy of data, information, knowledge, wisdom, and vision (DIKWV).

1 The Purpose of This Work

Once A.I. and other semantic technologies (i) achieve a type of global “knowledge” awareness (a semantic web), (ii) rationalize for best knowledge, information, and data (i.e., flatten the world’s knowledge bases, information bases, and databases), and (iii) begin to create new knowledge (by inference reasoning), then our current paradigms of HCI design, bottom-up DSS design, and bottom-up educational pedagogy become lethargic, anachronistic, and even obsolete compared to what U.A.I. will be able to accomplish. Already, machines process data and information many orders of magnitude better than humans do in speed, quality, and quantity. Figure 1 projects a trend in cognitive cybernetics into the 2020s, 2030s, and 2040s, of which this work concerns itself only with the 2020s, a period referred to as the “age of U.A.I.” The goal of this work is to provide design principles to HCI and DSS designers and to educators, to inform design and pedagogy that prepare humans to maintain primacy over U.A.I. through the 2020s.

Fig. 1. Evolution of human-machine hybrids.

For most of H. sapiens’ existence (leftmost column in Fig. 1), humans mentally performed each layer of understanding. By the 1960s, data management systems were becoming widespread. Today, access to data, information, and knowledge is commonplace, although the Internet still treats most content as data and information. For now, the “knowledge” available in print media and on the Internet must be presented by the content author (the “Source”) and interpreted by the content user (the “Target”); little is processed as knowledge by computing systems. But such is about to change. Soon A.I. and other semantic technologies may process knowledge as easily as computing systems process data and information today. This work assumes that such A.I. ability comes into effect in the 2020s and that A.I. quickly outperforms human knowledge management. The question is, what can humans do to maintain jobs, self-worth, and primacy over the machines in the coming age of U.A.I.? This work provides partial answers to this question.

2 Hierarchies of Understanding (HoU)

HoUs facilitate discussion about human learning (Fig. 2). Cleveland [1], Ackoff [2], and Bellinger [3] proposed a bottom-up HoU beginning with data and ending with wisdom. Tuomi [4] proposed a top-down HoU in which knowledge is first conjectured, then it is subdivided and contextualized to derive information, which, in turn, is subdivided and contextualized to derive data. Carpenter [5, 6] proposed the Vision HoU, a cognitive hierarchy based on context-conjecture that builds context downward (values, goals, models, categories, indexes), followed by context-validation in which real-world observation (content) filters upward to become data, information, knowledge, wisdom, and vision. To better understand trends in human-machine cognition, each HoU is briefly explained. Then the Vision HoU is used as a model for top-down education and top-down HCI, DSS, and pedagogical designs.

Fig. 2. Comparison of hierarchies of understanding (HoU).

2.1 Cleveland’s 1982 Information Hierarchy

Cleveland [1] defines a four-level, bottom-up HoU (see Fig. 3) comprising (1) facts and ideas, (2) information, (3) knowledge, and (4) wisdom. Although Cleveland’s HoU was originally referred to as an information hierarchy (consider the technology of his day), it makes sense to think of it as one of the early wisdom hierarchies, as wisdom is its top layer. Cleveland’s HoU is process driven and builds from the bottom up; that is, lower layers must be developed before his unexplained learning processes can transform them into the next higher cognitive layer. Cleveland posits that human mental capacity limits the growth of human knowledge and wisdom, yet Cleveland did not see the need for an objective disambiguation of the terms information, knowledge, and wisdom. For example, Cleveland writes (p. 34), “The distinction between information and knowledge—or knowledge and wisdom—is, of course, subjective. One person’s information may be another’s knowledge; …” However, since Cleveland’s time, Davenport and Prusak [7] wrote (p. 1), “Confusion about what data, information, and knowledge are—how they differ, what those words mean—has resulted in enormous expenditures on technology initiatives that rarely deliver what the firms spending the money needed or thought they were getting.”

Fig. 3. An interpretation of Cleveland’s 1982 Information Hierarchy [1].

2.2 Ackoff’s 1988 Knowledge Hierarchy

Ackoff’s HoU [2] has five layers: data, information, knowledge, understanding, and wisdom (see Fig. 4). Ackoff refers to such layers as “types of content of the human mind” (p. 3), in which each human’s HoU is an approximation of reality. In Ackoff’s day, his hierarchy was often called a knowledge hierarchy; however, today it is more often called a wisdom hierarchy because its uppermost layer is the wisdom layer. Like Cleveland’s HoU [1], Ackoff’s is process-driven and builds from the bottom up.

Fig. 4. An interpretation of Ackoff’s 1988 Knowledge Hierarchy [2].

The base layer starts by sensing something that becomes an observation, which then somehow turns into data. Processes then continue to build the higher layers. At the top is wisdom, which Ackoff describes as akin to “effectiveness.” Whereas Cleveland acknowledges that human mental capacity is limited, Ackoff goes further to conjecture that most of our “mental space” is allocated to the lower layers of his hierarchy. Ackoff develops the notion of “allocation of mental space” (p. 3), musing that 40% goes to the data layer, 30% to the information layer, 20% to the knowledge layer, 10% to the understanding layer, and very little human mental space remains for the wisdom layer. Figure 4 illustrates Ackoff’s allocation of mental space as a step-pyramidal shape. Ackoff’s work [2] contains many important wisdomisms and foresight not covered here.

2.3 Bellinger’s Wisdom Hierarchy (c.1997)

Bellinger’s HoU [3] incorporates context as the key to promoting one layer of understanding into the next higher layer. Only if there is sufficient context can the observer derive meaning from content. Thus, Bellinger’s cognitive hierarchy (see Fig. 5) calls out two bottom-up hierarchies that work together to create understanding: a content hierarchy (data, information, knowledge, and wisdom) and a context hierarchy (relations, patterns, and principles).

Fig. 5. An interpretation of Bellinger’s Wisdom Hierarchy [3].

Bellinger defines data as disconnected elements, which means that a datum has no recognizable relationship (shared context) with other data or with other content. However, once an observer recognizes and understands the relationships between data and other content, then, for such observer, the relationships transform data into information. Likewise, recognizing patterns among the information transforms information into knowledge, and recognizing principles transforms knowledge into wisdom. Therefore, Bellinger recognizes a relativity of understanding, that the level of each observer’s understanding is dependent upon context recognized by each observer. An expert may perceive a particular system as simple and easily identify information, knowledge, and wisdom; however, a novice may perceive said system as complex and only recognize data and information. Bellinger’s work [3] contains many important wisdomisms and foresight not covered here.

2.4 Tuomi’s 1999 Reverse Knowledge Hierarchy

Tuomi [4] defines a top-down (reverse) HoU (see Fig. 6) that posits learners acquire knowledge first. The learner applies some kind of context to the knowledge, from which information emerges. Similarly, data emerges from contextualized information. When Tuomi speaks of knowledge existing before information and data, he must be speaking of conjectured or hypothetical knowledge about something still unknown; it is more like a hypothesis. Tuomi’s work [4] contains many important wisdomisms and foresight not covered here.

Fig. 6. An interpretation of Tuomi’s 1999 Reverse Knowledge Hierarchy [4].

2.5 Vision HoU (2002)

The Vision HoU [5, 6] combines a top-down hierarchy of conjectured contexts (values, goals, models, categories, and indexes) with a bottom-up hierarchy of belief (data, information, knowledge, wisdom, and vision) (see Fig. 7). Thus, the Vision HoU combines elements of the top-down model of Tuomi [4] with the bottom-up models of Cleveland [1], Ackoff [2], and Bellinger [3]. Vision is posited to be an evolutionary adaptation that helps humans to extract increasing utility from their environments; that we are born with instincts that develop into a vision-center within the brain useful for conjecturing about the real world. In the words of Peirce [8, p. 477], “All human knowledge, up to the highest flights of science, is but the development of our inborn animal instincts.” The Environment is the SoU, and human vision is the ToU. The Vision HoU explicates the process of human learning in two phases: (1) context-conjecture and (2) context-validation.

Fig. 7. The two-phase and compact forms of the Vision HoU, adapted from [5, 6].

The Context-Conjecture Phase (Top-Down) (Steps 1–5 in Fig. 7). The learning process begins when we use our vision to conjecture context (human values, goals, cause-and-effect models, categories, and indexes). This phase of human learning is called the context-conjecture phase, and it represents a hierarchy of hypotheses about the real world. Values help us to decide our big purposes. Goals support achieving our purposes. In turn, cause-and-effect models support achieving our goals; categories of ideas support our models; and an index stores, organizes, and remembers the locations of the data. The context-conjecture phase is top-down. Consider that many organizations guide their workforce by first creating a vision statement and a core set of values, then goals are set, business models are developed, categories of ideas (such as words, names, and copular descriptions) are programmed in, and dynamic indexes handle the data.

The Context-Validation Phase (Bottom-Up) (Steps 6–10 in Fig. 7). Once the context-conjecture phase is completed, the context-validation phase begins. Context is validated once it is shown that content (a number or other type of value), having filtered into the context, represents an accurate-enough abstraction of the environment. For example, validating the context within the data layer requires only verifying that the data can be retrieved and stored without loss or unexpected transformation, and that repeated observations of the same thing under the same conditions result in the same data. As the content works its way up the layers of context (categories, models, goals, and values), the content is validated at each layer by comparing it to the environment and finding well-enough agreement. Should the context developer find that the processed content does not agree well enough with real-world observation, then the context must be re-conjectured until it meets some criterion for accuracy as validated against the real world. Once the context is validated, conjecture turns into belief, and we can say that we have learned and understand more of our environment.

3 Allocation of Understanding (AoU)

The AoU is akin to Ackoff’s “Allocation of Mental Space” [2] (review Fig. 4). Figure 8 shows a hypothetical smooth lifetime AoU for a generic human under the traditional bottom-up education system of the 19th and 20th centuries, prior to the electronic computing age, say c.1870–1940. In this age, humans performed mathematics by mind and hand (finger counting, pencil and paper, slide rule, abacus—but no electronic calculators).

Fig. 8. Human Allocation of Understanding (AoU) across the Vision HoU.

Traditional bottom-up pedagogy still measures today what it measured in the 19th and 20th centuries—data, information, and knowledge processing. Cognitive-cybernetic technologies, such as A.I., the Internet, spreadsheets, calculators, DSSs, augmented reality (AR), virtual reality (VR), and so forth, already process data and information vastly better than humans ever have. Furthermore, such machine systems have access to more data and information, and they process it many orders of magnitude faster than humans do. In the 2020s, U.A.I. may equally trivialize knowledge processing, vastly outperforming humans. Humans may quickly come to regard knowledge as trivial, just as humans regard data and information as trivial today. U.A.I., in the 2020s, may even create vast amounts of new knowledge through transitive relations and, more generally, through inference reasoning. U.A.I. may even have capabilities to validate such newly created knowledge.
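
To make “new knowledge through transitive relations” concrete, here is a minimal sketch of mine (an illustration, not from the source) that infers every is-a relation implied by transitivity over a set of known facts:

```python
# Infer new relations by transitivity: if (a, b) and (b, c) are known,
# then (a, c) follows. Facts and names here are hypothetical.

def transitive_closure(facts: set[tuple[str, str]]) -> set[tuple[str, str]]:
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(inferred):
            for c, d in list(inferred):
                if b == c and (a, d) not in inferred:
                    inferred.add((a, d))  # newly created "knowledge"
                    changed = True
    return inferred

facts = {("sphere", "shape"), ("shape", "concept")}
print(transitive_closure(facts) - facts)  # {('sphere', 'concept')}
```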

To maintain human primacy over U.A.I. in the 2020s, new models of top-down HCI, DSS, and pedagogical designs are needed. Before we introduce these new top-down models, which emphasize human vision (purpose, creativity, ambition) and wisdom (goal-setting, figure-of-merit selection, and decision-making), let us use the HoU to review the trends and hypothetical impact on human AoU caused by the implementation of semantic technologies, and then predict what impact 2020s U.A.I. may have on human cognitive-cybernetic activities. Will humans choose to be the master, or will we, little by little, become the machine’s slave? Many parents already complain about boys’ addiction to video games and girls’ addiction to social media devices.

4 The Changing Paradigm for HCI, DSS, and Pedagogy Designs

Today we exist in the age of ubiquitous computing and communication, and soon comes the age of ubiquitous artificial intelligence (U.A.I.). Let us look first at the effect that ubiquitous computing and communication have on human learning.

4.1 Brain (Data Overload) ⇒ Data Management Systems (Data Trivialization)

Our present culture classifies humans and technology as separate entities—a human and a computer with an interface (HCI). Clark [9] argues that we cannot disambiguate ourselves from our technology, that information technology is an extension of mind, that we are seamless cybernetic organisms. Thus, Clark sees no a priori limit to our cognitive ability because it expands with technology; the mind is just less and less in the head. The Vision HoU also takes this view, that humans develop computing technology, and computing technology amplifies human cognition; each iteratively amplifies the other. Figure 9 illustrates the up-shift in human AoU enabled by common data-handling technology because the human mind is relieved of mundane data handling. With data handling technology, humans can spend more of their mental energy at higher cognitive layers, to make a better estimate of reality.

Fig. 9. Data systems (c.1960s–present) should up-shift one’s allocation of understanding (AoU).

4.2 Brain (Info Overload) ⇒ Info Management Systems (Info Trivialization)

Figure 10 conceptualizes the up-shift that information-management technology SHOULD have on human AoU. Computing technology should relieve the human mind of the mundanity of data and information handling, and improve one’s estimation of reality at higher layers of understanding. In the age of information systems, humans ‘should’ spend more time extracting knowledge from the information layer. In 2018, machines primarily handle data and information; however, humans, for the most part, still manage category-creation. Nevertheless, for many applications, category schemas and taxonomies have already been created, and weighted text searches are convenient.

Fig. 10. Information systems (c.1990–present) should up-shift one’s AoU.

4.3 Brain (Knowledge Overload) ⇒ U.A.I. (Knowledge Trivialization)

Many claim that their websites contain knowledge, but whether something is knowledge, information, or data is relative to the observer’s existing AoU. For example, to a geometry student, the word “sphere” carries category meaning (information), but to a toddler, the word “sphere” may be a word of unknown usage; therefore, “sphere” is only data to the toddler. Thus, most so-called web-based knowledge is actually data and information because, today, knowledge must be extracted or recognized by human observers. Today, machines do not think; they follow routines and statistics without any thought at all—calculations, yes, but thought and reflection, no.

Figure 11 extends the trend in cognitive up-shift by considering A.I. and other semantic technologies that are able to process knowledge and information with the same ease as numbers are processed today. Our culture may come to view information and knowledge as mundane and as boring as data is today. In this new world of the 2020s, humans spend most of their time setting goals, choosing figures of merit (FOM), and making wise decisions (optimized to FOM) based on solutions (knowledge) supplied by U.A.I. Are such skills being taught in our 160-year-old traditional bottom-up educational system? No!

Fig. 11. The U.A.I. and other semantic technologies of the 2020s should up-shift one’s AoU.

4.4 Discussion

The Vision HoU and the AoU show, hypothetically, an improvement in human AoU to the wisdom and vision layers, from 10% AoU for pre-I.T. humans (c.1950) in Fig. 8, to 23% for those using data management systems (computers, calculators, spreadsheets, and others) in Fig. 9, to 40% for information management systems (text searching, Internet, communication technology, and others) in Fig. 10, to 70% under a world of 2020s U.A.I. in Fig. 11.

Because humans cannot compete with U.A.I. at the data, information, and knowledge layers, pedagogy should immediately change to top-down, to stimulate uniquely human cognitive talents: vision (creativity and ambition) and wisdom (setting goals, choosing FOMs, and making wise decisions).

We have used the Vision HoU and the AoU to give an insightful view of where U.A.I. and humans may stand in the 2020s. Recognizing and visualizing such trends in human AoU against the Vision HoU, as a function of advancing cognitive-cybernetic technologies, enables predictions about the workforce of the 2020s.

5 Competing with U.A.I. in the Workforce

5.1 A Century of Molding Idiot Savants

In the last one or two hundred years, our collective understanding has advanced so greatly that no one person can come even close to understanding it all. Consequently, we educate and train for one or more decades to become so-called subject-matter experts (SMEs). SMEs are, effectively, idiot savants; both the SME and the idiot savant know one subject well enough to be effective, but know too little to be effective in any other subject in our highly competitive, specialist society. Because so much specialized understanding is required to master a given subject, one has scant time to become an expert in more than one field. Modern problems are complex and interdisciplinary, and teams of expensive SMEs are required to resolve them. Consequently, there is not much difference between a so-called scholar on one topic and an idiot, for both know too little about most other subjects. For example, take the problem of colonizing Mars. Figure 12 illustrates just a few of the SMEs and support staff needed to tackle such a huge, complex, interdisciplinary project. How likely is it that one SME alone could successfully conclude such a project? Zero!

Fig. 12. The 2010s workforce, a workforce of subject-specific idiot-savants.

Even if one could hire all the SMEs in the world, other problems remain, such as the communication problem (or argument problem, or even political problem) between groups. How does the superconductor group explain its mass requirements to the system-architecture group, which demands mass reduction? The two groups may simply not understand the constraints behind each other’s problems, and thus spend considerable time arguing before reluctantly settling on non-optimum goals and solutions. In summary, the more expert the SME, the more expensive, powerful, and influential the SME is. Each group within a multi-group project has its own interests at heart. Each group may think itself the key group; each person may think himself or herself the indispensable person. Therefore, management must resolve conflicting arguments between groups, mostly over resource control and power within the company or project, but management is often poorly equipped to referee wisely because it does not understand the technical issues involved. Furthermore, even earnest groups cannot operate faithfully because no group can see the total picture, a picture that does not even exist until the semi-optimum design is discerned, prototyped, and implemented in the market, or in space for the Mars-colonization project indicated earlier.

5.2 The Age of U.A.I.

For the workforce, the problem is about to become much worse because, no matter how many SMEs one has, U.A.I. will outperform all of them by orders of magnitude at the lower layers of understanding: data, information, and knowledge. Consider the 2020s, perhaps the late 2020s, when U.A.I. manages knowledge and information as easily as computers currently handle data. U.A.I. will not only know all that is known; it will know the best of what is known. In such a world, humans will do more goal-setting and decision-making than information and knowledge learning, processing, and memorizing. Humans will pose goals and figures-of-merit (FOM) to U.A.I.; U.A.I. will use the FOMs to propose solutions that satisfy human goals. In such a future, Fig. 13 shows that most of the expensive staff is newly unemployed; the company saves money; and the products are cheaper, better, and produced faster.

Fig. 13. A culture of visionaries, goal-setters, and decision-makers. (Color figure online)

However, the high unemployment is scary, unless U.A.I. opens enough new opportunities to employ everyone, and everyone understands how to use U.A.I.; that is, everyone has enough vision to pose goals and to set FOMs to put U.A.I. to work. Unfortunately, our current school system is of 19th-century design, bottom-up, and it teaches primarily information and knowledge processing because that is essentially all that kindergarten-through-undergraduate schools know how to measure, to grade. It is easy to grade 30 math problems taken during a one-hour test; unfortunately, integrated understanding requires vision and wisdom, which are both difficult to measure in a fair and standardized way, and this is probably why education pedagogies have remained bottom-up for 160 years or more. Secondly, because U.A.I. is akin to a super-SME in ALL subjects, the communication and political internecine struggles for power and resources among competing departments disappear; solutions are truthfully optimized across all subjects. Thirdly, U.A.I. finds such optimizations to human-proposed goals and FOMs in real time, perhaps a million or billion or trillion times faster than could be accomplished by a team of the world’s best human SMEs. The solutions are far more reliable, accurate, and fair than humans of the 2010s could ever hope to achieve. The debate about whether the situation in Fig. 13 is inevitable is a red herring: anyone involved in the high-technology industry since 2000 has certainly seen it on small and corporate-wide scales in which only local A.I. and local semantic technologies have been applied. The 2020s, under U.A.I., are likely to see it on a massive, global scale.

6 What Can Be Done Now?

HCI, DSS, and pedagogical designs should immediately focus on the top two layers of the Vision HoU to prepare humans for the 2020s under U.A.I. Many authors since the 1990s have considered knowledge the highest-value resource of an organization. Not so anymore! In the 2020s, the workplace may look more like that of Fig. 14.

Fig. 14. A hypothetical mid-2020s U.A.I. workplace.

Organizations will logically strive to put in place, as fast as possible, semantic technologies that manage the bottom three layers of the Vision HoU—data, information, and knowledge—so that their dwindling human workforce can better create, envision, set goals, establish FOMs, and make wise decisions once U.A.I. has delivered multi-dimensional solutions to those goals; in short, so that humans can make their overarching “visions” into reality. In the 2020s, and today for that matter, wisdom and vision are the highly prized layers of understanding. Albert Einstein, T.S. Eliot, Arthur C. Clarke, most CEOs, and many others have recognized the fundamental importance of wisdom and vision over knowledge, information, and data. We must avoid competing with U.A.I.; instead, we should use U.A.I. as a tool to enhance our natural human cognitive talents—vision, goal-setting, and decision-making.

Two design approaches help HCI designers and educators prepare humans for the 2020s workplace: (1) taking into account that complexity and simplicity are relative to the observer, and (2) simplifying the source of understanding (SoU).

6.1 Understanding Is Relative to the Observer

Figure 15 represents the familiar teacher-student cognitive divide caused by the wide gap in subject-matter understanding. From the teacher’s perspective, the subject matter is relatively simple, understandable, contextual, useful, and organized. From the students’ perspectives, however, the subject matter is complex, disconnected, unusable, and disordered. The teacher may spend years helping students to achieve expert-level understanding. Consequently, complexity and simplicity are relative to the observer. A system can therefore be simple, complex, or in-between all at once, because some people understand the system partly, others not at all, and still others very well.

Fig. 15. Understanding is relative to the observer.

We can generalize the “teacher” as the SoU and the “student” as the ToU; consequently, the FoU is from the SoU to the ToU. We further generalize the SoU to be a mentor, SME, U.A.I., DSS, or anything that offers understanding to a target learner. We also generalize the ToU to be a novice, a User of HCI and DSSs, or even another semantic technology that wishes to gather understanding.

In Fig. 15, two FoUs are shown from the SoU to the ToU. FoU ① is inefficient for two reasons: (1) the expert, A.I., HCI, or DSS does not use knowledge or information (terminology) that is understood by the novice or user of the A.I., HCI, or DSS, and (2) the intended flow rate of understanding from source to target is too high (thick line) for the target’s cognitive capacity. FoU ② is better because (1) the source uses knowledge and information that is understood by the target, and (2) the rate of the FoU is low enough that it does not overwhelm the cognitive capacity of the target. How, then, to customize the design of HCI and DSSs to each User’s cognitive ability? Section 6.2 discusses approaches for doing so.

6.2 Simplify the Source of Understanding (SoU)

Because solutions to goals are unknown a priori, humans gather significantly more understanding at the lower cognitive layers than is required to understand the decision space at the top exactly. The pyramidal shape of Ackoff’s “allocation of mental space” [2] (p. 3) is the result (see Fig. 16).

Fig. 16. An inefficient, non-parsimonious AoU results from bottom-up learning.

Filtering Irrelevant and Insignificant Understanding.

Humans gather knowledge, information, and data to support making wise decisions. Figure 16 shows an inefficient AoU, which could be representative of a person’s mind on a particular subject, or of the AoU as encoded in a DSS. In either case, Fig. 16 shows two regions of understanding that impede wise decision-making: irrelevant and insignificant understanding. If the irrelevant and insignificant understanding could be culled, then what remains is a perfectly efficient vertical column of understanding, in which each layer of data, information, and knowledge holds exactly the amount of understanding required to support the decision space near the top, to enable making good decisions. The domain of the decision space determines the width of the vertical column. A decision support model (DSM) consisting of just the vertical column of “significant” understanding is considered a parsimonious DSM; that is, it incorporates only the quantity of understanding needed to satisfy an analysis of the decision space at the top.
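
As a concrete (and hypothetical) illustration of culling irrelevant understanding, the minimal sketch below treats a DSM as a dependency graph and keeps only the nodes the goal-function node can reach; everything else plays no role in the decision space. The node names and graph structure are invented for illustration:

```python
# A sketch of culling irrelevant understanding from a DSM represented
# as a dependency graph (hypothetical structure and node names).

def relevant_nodes(dsm, goal):
    """Return every node the goal node reaches through its dependencies."""
    seen, stack = set(), [goal]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(dsm.get(node, ()))
    return seen

# Toy DSM: each node maps to the nodes it depends on. "logo_color"
# feeds nothing the goal function uses, so it is culled as irrelevant.
dsm = {
    "goal": {"mass", "cost"},            # the goal-function ("wisdom") node
    "mass": {"density", "volume"},       # knowledge node (a function)
    "cost": {"mass"},                    # knowledge node
    "density": set(), "volume": set(),   # information nodes
    "logo_color": set(),                 # plays no role in the goal
}
irrelevant = set(dsm) - relevant_nodes(dsm, "goal")
print(irrelevant)  # {'logo_color'}
```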

Whereas irrelevant understanding is completely useless to the decision space, insignificant understanding is determined by the user’s needs (usually established by specifying a requisite model fidelity—the representational accuracy and precision required for the complex system being modeled). Insignificant understanding may be relevant to the goal of the DSM, but (1) it contributes too little to the overall understanding of the complex system to matter to the user, while costing much to design, maintain, and train people to use; and (2) it can obfuscate significant parts of the DSM. For example, if an organization needs its DSM to model a complex system to within 1% of its true behavior, then why should the organization pay for understanding that contributes less than 1% while adding complexity and uncertainty to the DSM to boot? Insignificant understanding is thus defined as that small quantity of understanding that falls below the requisite fidelity (precision and accuracy) needed by an organization to make good decisions.
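
One way to state the insignificance criterion formally (my notation, with $\varepsilon$ the requisite fidelity): a node is insignificant when knocking it out alone changes the optimized goal function by less than $\varepsilon$,

$$\frac{\left| F^{*}_{\mathrm{full}} - F^{*}_{\mathrm{minus\ node}} \right|}{\left| F^{*}_{\mathrm{full}} \right|} < \varepsilon, \qquad \text{e.g., } \varepsilon = 1\%.$$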

Carpenter [6] recently developed a methodology for culling irrelevant and insignificant understanding from DSMs of complex systems. Culling irrelevant understanding can be accomplished by applying a goal function to the DSM, assuming that the DSM has such a facility or that one can be added, and then culling those nodes that play no role whatsoever in determining goal-function optimization. For example, in the DSM testbed used by Carpenter [6], the original DSM had 1212 nodes of knowledge (functions) and information (independent variables, constants, and various dimensionless and dimensional units and conversion factors). After rationalizing similar nodes into best nodes, and after globalizing local variables, the DSM reduced to 1190 nodes. This 1190-node DSM formed the baseline DSM.

The next critical activity was to code dimensionality into each node (a generalized unit, such as length, time, mass, money), so that the DSM could be checked for modeling reality (conservation-of-dimensionality), as opposed to mere program data typing. Adding dimensionality is an extra quality-assurance step: whereas typing checks for model integrity, conservation-of-dimensionality checks the validity of the model against the real world.

Then a goal function was established, and a procedure was used to cull all nodes that played no role in that goal function. The node count dropped to 380: 188 information nodes, 191 knowledge nodes, and 1 goal-function node, also called the “wisdom node.” At this point, irrelevant understanding had been culled from the DSM—810 nodes, or 68% of the DSM’s complexity. Such complexity is no longer in the way of the user, and the HCI design should indicate, and possibly hide from view, such useless and possibly confusing DSM structures.

Still, 380 nodes remain—a lot of knowledge and information for a User to comprehend. For example, a User would want to know which pieces of information and knowledge are most important to satisfying the goal function, to making a wise decision. Many of the 380 nodes contribute insignificantly to optimizing the goal function. Consequently, using a procedure called the Knockout Methodology [6], the relevance of each node to optimizing the goal function was determined. Then, beginning with the node of least importance, nodes were removed until some reduction-in-fidelity limit was reached, a limit that could be set by the User. In the case of the DSM under study, 137 nodes were pruned for a further reduction in model complexity of 36%, at a reduction in model fidelity, or understanding, of only 1%. In other words, an optimized goal function connected to 242 knowledge and information nodes deviates only 1% from the optimized goal function connected to the full complement of 379 nodes.

The ability to reduce nodes in complex-systems models enables new DSS users to reduce the DSM to its bare minimum, which in the case of the DSM under review was 21 nodes while maintaining conservation-of-dimensionality. These 21 nodes are the most important nodes, critical to the operation of the DSM. As a new user learns the utility of these 21 nodes with respect to the goal function (the user’s goal), the user can add in the next most important nodes and learn the relevance each new node plays as the full DSM is slowly rebuilt. Consequently, new users can come to understand the DSM as fast as their cognitive abilities allow. A sketch of this pruning loop follows.
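
Here is a minimal sketch of the Knockout Methodology’s pruning loop, under assumptions of mine: optimize(nodes) re-optimizes the goal function over a node set and returns its optimized value; importance is measured once, as the single-node knockout impact on that value; and pruning proceeds least-important-first until the user’s %RinU limit would be exceeded. Function and variable names are illustrative, not from [6]:

```python
# A sketch (illustrative names) of knockout-style pruning of a DSM.

def knockout_prune(nodes, optimize, max_rinu_pct=1.0):
    """Prune least-important nodes until %RinU would exceed the limit."""
    reference = optimize(nodes)  # optimized goal function, full model
    # Rank every node once by its single-node knockout impact.
    impact = {n: abs(reference - optimize([m for m in nodes if m != n]))
              for n in nodes}
    kept = sorted(nodes, key=lambda n: impact[n])  # least important first
    for node in list(kept):
        trial = [m for m in kept if m != node]
        rinu = abs(reference - optimize(trial)) / abs(reference) * 100
        if rinu > max_rinu_pct:
            break  # cumulative fidelity loss would exceed the user's limit
        kept = trial
    return kept

# Toy usage: hypothetical node "weights" stand in for a real
# re-optimization of the goal function over the surviving nodes.
weights = {"a": 50.0, "b": 30.0, "c": 19.0, "d": 0.5, "e": 0.4}
kept = knockout_prune(list(weights), lambda ns: sum(weights[n] for n in ns))
print(kept)  # ['c', 'b', 'a'] -- 'e' and 'd' pruned at ~0.9% RinU
```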
Voilà! Node-ification and node ranking enable customized learning matched to any particular user’s cognitive ability. Now consider the hypothetical situation in the 2020s when U.A.I. has “node-ified” entire subjects, such as physics or economics. The number of discrete nodes may be in the trillions. For any given human goal posed to U.A.I., wouldn’t it be nice if U.A.I. then provided the minimum model satisfying our goal, to which we could add the next most important nodes so as to learn our subject as fast as our cognitive ability allows? I think so.

7 Conclusion

Today, semantic technologies are already automatically processing data and information, and the traditional bottom-up education paradigm is finding its bottom being chipped away. Our machines give us prodigious control over nearly limitless quantities of data. Likewise, soon semantic and A.I. technologies will give us prodigious control over unimaginable quantities of information and knowledge. As has already happened with data, knowledge will trivialize. Education must necessarily shift its emphasis to goal-setting, decision-making, value-setting, and creativity. These skills involve using our imagination in practical ways to maintain our primacy over U.A.I. through the 2020s. Later than that, who knows!