
1 Introduction

In his 1972 Turing Award Lecture, Edsger Dijkstra notes that LISP “has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.” Curiously, it was during that same year that Prolog was developed. We do not know if it was felt at that time just how important the discovery of the Prolog language was, but it is not surprising that the name of the language, an acronym for “Programming in Logic”, is a homophone for prologue. Robert Kowalski’s and Alain Colmerauer’s language was an introduction to a new way of thinking about programming, one which in some ways is alluded to by an old joke at the language’s expense:

Prolog is what you get when you create a language and system that has the intelligence of a six-year-old: it simply says “no” to everything.

The joke hints at just how revolutionary the language was. For the first time, we had a language that, rather than requiring the programmer to answer the question of “how”, enabled us to answer the question of “what”. In other words, the language freed us from thinking about and describing the mechanics of an algorithm, and allowed us to focus on describing the goal, or specification, that the algorithm was intended to meet. So, if we come back to the notion of a six-year-old child, it turned the programmer into a teacher, and the computer into a student. This shift, to return to Dijkstra’s quote on LISP, enabled us to think previously impossible thoughts, and therefore to ask previously impossible questions.

Two other aspects of the language’s nature, its connection to Horn clauses and its connection to context-free grammars, shed light on the kinds of heretofore impossible thoughts we now find ourselves engaged with. SLD resolution and its successor, SLDNF resolution, enabled us both to encode simply and to render computable part of the language of thought itself. This in turn shifted our gaze to the question: “What kinds of reasoning can be described (i.e., taught) to a machine?” The search for answers to this question (and others, such as uncovering the nature of negation-as-failure) gave rise to other languages and their attendant semantics, such as the well-founded [9] and answer-set semantics [1, 2], advancing our understanding of how we ourselves reason and of how the kind of reasoning we carry out can be imparted to a machine. These questions yielded further lines of inquiry into areas such as commonsense reasoning, natural language understanding, reasoning about actions and change, and algorithmics, many of which are part of the foundation of the artificial intelligence technologies in active development here at Elemental Cognition.

Elemental Cognition (EC), a company founded by Dave Ferrucci after he led IBM’s Watson Project to its landmark victory over the best humans at the question-answering game of Jeopardy, is a particular beneficiary of the foundations laid by Kowalski and those who followed him. The fields of knowledge representation, non-monotonic reasoning, and declarative programming can trace part of their ancestry to Kowalski’s work, and they provide the logical foundations of the work done at EC. In particular, our vision of artificial agents as “thought partners” capable of collaborating with humans, rather than merely acting autonomously, depends on numerous developments in these fields.

EC’s history with logic programming begins in some respects with Ferrucci’s own background, and with the IBM Watson project in particular. There, Prolog played a role in the project’s natural language pipeline and was instrumental in the detection and extraction of semantic relations in both questions and natural language corpora. Prolog’s simplicity and expressiveness enabled the developers to readily manage rule sets consisting of more than 6,000 Prolog clauses, something which prior efforts involving custom pattern-matching frameworks had failed to do. This work in no small part informed the design of EC’s neuro-symbolic reasoner, Braid [3]. The expressivity and transparency of a Prolog-like language, combined with the statistical pattern-matching power of various machine learning models, enabled a powerful HybridAI solution which has been applied to several “real-world” applications. This work in part involved the development of a backward chaining system that can be seen as an extension of Prolog’s SLD resolution algorithm with features such as statistical/fuzzy unification and probabilistic rules generated by a machine learning model. This enabled the system to circumvent the knowledge acquisition bottleneck and the potential brittleness of exact matching/unification, while retaining the elegance and simplicity of the declarative paradigm itself. Subsequent work has seen the Braid reasoning system evolve towards the use of the answer-set semantics and constraint logic programming [7].

All of this enabled a number of high-profile successes, such as our development of the PolicyPath application, which was used during Super Bowl LV in 2021 at the height of the Covid-19 pandemic [4]. The project was built on a declarative, logic-based representation of the relevant policies, and part of the reasoning mechanisms developed in the course of the project combined techniques for reasoning about actions and change with various flavors of logic programming, including answer-set programming and constraint logic programming. Other successes include our partnership with the OneWorld Alliance on the development of the virtual agent they employ for scheduling round-the-world travel.

In this paper we give an introduction to a new language called Cogent, under development at EC, which carries forward the torch lit by the introduction of Prolog.

2 From Prolog to Cogent

As was mentioned previously, the advent of logic programming enabled us to shift our focus from describing the how of a computation to the what. In other words, it enabled us to focus our attention on what Niklaus Wirth termed “the refinement of specification”. As an example, consider the following: a nurse scheduling program written in Answer-Set Prolog (a descendant of Prolog based on the answer-set semantics of logic programs, and one of the elements at the core of EC’s internal language known as Cordial).

Listing 1.1. A nurse scheduling program written in Answer-Set Prolog (code not reproduced here).
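Since the original code of Listing 1.1 is not reproduced here, the sketch below gives a rough idea of what such an encoding could look like in clingo-style ASP. The predicate names (nurse/1, day/1, shift/1, assigned/3) and the particular constraints are illustrative assumptions on our part, not the actual program.

% Illustrative sketch only, not the original Listing 1.1:
% assign every nurse exactly one shift per day, subject to two sample constraints.
nurse(ann). nurse(bob). nurse(cam). nurse(dee).
day(1..7).
shift(morning). shift(evening). shift(night). shift(off).

% Every nurse is assigned exactly one shift on every day.
1 { assigned(N,D,S) : shift(S) } 1 :- nurse(N), day(D).

% A nurse who works the night shift does not work the next morning.
:- assigned(N,D,night), assigned(N,D+1,morning).

% Every working shift on every day is covered by at least one nurse.
covered(D,S) :- assigned(N,D,S).
:- day(D), shift(S), S != off, not covered(D,S).

#show assigned/3.

The choice rule guesses an assignment, and the headless constraint rules (those beginning with “:-”) simply prune away any assignment that violates the stated requirements.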

The important aspect of the program in Listing 1.1 is that none of its statements describe an algorithm for computing a potential solution. Rather, they encode the specification itself. It is worth pausing to appreciate the power of such a syntactically simple and elegant language. Compare, for example, this program against the equivalent programs written in an imperative language using Google’s OR-Tools [8]. The difference is stark, and it raises an important question: “Why has the logic programming approach not gained momentum since its discovery?”

There are many potential answers to this question. One possibility is that, in addition to the cognitive load incurred by switching from an imperative to a declarative mindset, there is an additional cognitive load incurred by the close relationship between logic programming languages and the notations of formal logic. This dramatically increases the distance a potential user has to mentally travel in order to reach the current state of the art. Another way to view this is that logic programming languages are, in some sense, still at the level of assembly language. The declarative paradigm is a higher-level paradigm than imperative programming, but declarative languages, by and large, still sit at too low a level to be readily adopted. If this is true, then a natural question to ask is: “What could a high-level, structured, declarative programming language look like?”

At EC, we believe that one potential answer to this question is structured natural language, in particular our own version of it known as Cogent. Similar work in this area exists, namely Kowalski’s own work on Logical English [5, 6], but with Cogent we are able to leverage our expertise in both natural language understanding and knowledge representation to build a more flexible and user-friendly representation language. In particular, let’s revisit the program from Listing 1.1, only this time written in Cogent instead of ASP:

Listing 1.2. The nurse scheduling program of Listing 1.1, rewritten in Cogent (code not reproduced here).

The reader will notice that, with the exception of lines 7, 9, and 11, the text of the program is the same as the comments from the ASP encoding in Listing 1.1. Given this program, our reasoning engine is capable of finding solutions just as efficiently as with the ASP encoding, yet the Cogent program is more accessible to a reader. Moreover, the fact that the language is a structured form of natural language helps bridge the familiarity gap for aspiring users. The notion of accessibility to a reader, however, is of special importance, since at EC one of our motivating goals is to help develop explainable AI. One important aspect of this is to render the axioms of a domain that an AI system represents both inspectable and clear to as many users as possible. This kind of transparency enables deeper human-AI partnerships, which furthers our vision of artificial agents as “thought partners” capable of collaborating with humans.

Cogent has features that overlap with those found in contemporary logic programming languages, such as non-deterministic choice, aggregates, recursive definitions, costs, preferences, a declarative semantics for negation, and contradiction diagnosis. In addition, however, it offers numerous advanced term-building features that facilitate the construction of clear, concise natural language expressions. Consider the solution to the N-Queens problem given in Listing 1.3.

Listing 1.3. A solution to the N-Queens problem in Cogent (code not reproduced here).
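Since the Cogent text of Listing 1.3 is likewise not reproduced here, the following standard N-Queens encoding in clingo-style ASP is offered purely as a point of comparison; it is not Cogent and it is not the original listing.

% Standard N-Queens encoding in clingo-style ASP, shown only for comparison.
#const n = 8.

% Place exactly one queen in every row.
1 { queen(R,C) : C = 1..n } 1 :- R = 1..n.

% No two queens may share a column.
:- queen(R1,C), queen(R2,C), R1 < R2.

% No two queens may share a diagonal.
:- queen(R1,C1), queen(R2,C2), R1 < R2, |R1-R2| == |C1-C2|.

#show queen/2.

As with Listing 1.1, a choice rule generates candidate placements and constraints eliminate the mutually attacking ones; Cogent expresses the same constraints as structured natural language sentences.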

Listing 1.3 demonstrates several term-building features of Cogent, as well as a natural encoding of the constraints of the domain. In addition, the language utilizes EC’s Braid reasoning engine, making it capable of scaling to advanced production applications, such as the Round-the-World travel application developed for the OneWorld Alliance. While that application is dramatically more complex in scope than the toy examples presented above, the encoding of its various rules in Cogent (such as those shown in Listing 1.4) not only remains manageable but also clearly conveys their intent to a reader:

Listing 1.4. Sample rules from the Round-the-World travel application, written in Cogent (code not reproduced here).

In addition to narrowing the linguistic gap by being a controlled form of natural language, Cogent is coupled with a powerful AI authoring assistant that narrows the gap even further, making for a system that we believe is greater than the sum of its parts. It is our belief at EC that Cogent represents a revolution in declarative programming, and in programming at large, by elevating the notion of a high-level language to a new level.

3 Conclusion

In 1972, Kowalski and Colmerauer started a revolution with the advent of the Prolog programming language. The ability to think “previously impossible thoughts” led the community to ask previously unthinkable questions, sparking revolutions in natural language understanding, knowledge representation, commonsense reasoning, and other diverse areas. For a time these fields grew in isolation from one another, but they are now coming together rapidly and in profound ways. With the development of Cogent, in some sense a grandchild of Prolog, we at Elemental Cognition hope to carry forward the tradition and enable a new class of impossible thoughts to be given voice. The community owes a debt to Kowalski, Colmerauer, and the Prolog language, and to the great unexplored sea they revealed to us. Happy Birthday.