Prolog is a language with a long history (from the 1970s), with a few variants that are still being maintained. In researching the variants, I decided to use what seemed the most popular flavor – SWI-Prolog.
Logic programming with SWI-Prolog is a far cry from the imperative, object-oriented paradigms I’ve known. Unlike languages like C# or Java, Prolog is declarative (like SQL). You declare the data you have as “facts” in a knowledge base, and then later pose queries whose constraints return the subset of facts you’re after. You’re not telling the computer what to do; rather, you’re telling it what you want.
It utilizes facts and relationships to form a knowledge base that can be queried. The following are some examples of facts in a knowledge base:
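For instance (these particular facts are illustrative – the names are chosen to match the queries used later in the post):

```prolog
% Simple facts: blue/1 and tangible/1 describe properties of things,
% while talksLike/2 and fliesLike/2 describe relationships between them.
blue(sky).
blue(mailbox).
tangible(mailbox).
talksLike(duck, mallard).
fliesLike(duck, goose).
```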
There are also more complex relationships between facts – for example:
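A rule like the following (a sketch, using the duck example discussed later in the post) says that X qualifies as a duck if it talks, flies, and walks like one:

```prolog
% X is a duck if it talks like a duck, flies like a duck,
% and walks like a duck. The talksLike/2, fliesLike/2, and
% walksLike/2 predicates are assumed to be defined elsewhere.
duck(X) :-
    talksLike(duck, X),
    fliesLike(duck, X),
    walksLike(duck, X).
```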
Since Prolog is a declarative language, it works a bit like SQL – you start out by building queries using constraints that describe the data you want out of the knowledge base.
Compare the following:
SQL – get things that are tangible and blue:

```sql
-- SELECT [output] FROM [some place] WHERE [constraints]
SELECT things FROM MyTable WHERE (Color LIKE 'Blue' AND Tangible = 1)
```

Prolog – get things that are tangible and blue:

```prolog
% ?- constraints(Output).
?- blue(X), tangible(X).
```
In this case, what we want are things that are both blue and tangible. Prolog finds this result through a technique known as backtracking.
Oftentimes, programmers are introduced to the concept of backtracking by means of the 8-queens puzzle – how to fit 8 queens on a chessboard such that no queen threatens another’s square.
One solution, backtracking, is a smarter type of brute-force approach where the computer performs a depth-first search of the solution tree. In the case of the N-queens problem, a somewhat optimized algorithm that utilizes backtracking will start by placing the first queen in the corner, A1. It then tries putting the second queen on B1 (which fails), then B2 (also fails), then succeeds on B3. It continues in this way, iterating through the rows of each column and placing the queen on the first non-threatened square. If it ever reaches a column where no row is valid given the previous queens, it backtracks: it returns to the previous queen, advances that queen to its next valid spot, then retries all the queens that follow it. The algorithm continues in this way, sometimes backtracking to even earlier queens, as it exhausts invalid placements on the board.
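Prolog can express this algorithm almost directly. The classic formulation below is a sketch (the predicate names are mine): each queen’s column is its position in the list, select/3 proposes a row for the next column, and Prolog’s own backtracking revisits earlier choices whenever attack/2 rules a placement out.

```prolog
% N-queens via backtracking: Qs is a list of rows, one per column.
queens(N, Qs) :-
    numlist(1, N, Rows),
    queens(Rows, [], Qs).

% Pick an unused row for the next column; if no safe row exists,
% Prolog backtracks and revises an earlier column's choice.
queens([], Qs, Qs).
queens(Unplaced, Safe, Qs) :-
    select(Row, Unplaced, Rest),
    \+ attack(Row, Safe),
    queens(Rest, [Row|Safe], Qs).

% A new row is attacked if it shares a diagonal with a placed queen.
% (Same-row conflicts are impossible: select/3 never reuses a row.)
attack(Row, Placed) :- attack(Row, 1, Placed).
attack(Row, Dist, [R|_]) :- Row is R + Dist.
attack(Row, Dist, [R|_]) :- Row is R - Dist.
attack(Row, Dist, [_|Rs]) :- Dist1 is Dist + 1, attack(Row, Dist1, Rs).
```

Querying ?- queens(8, Qs). binds Qs to a safe arrangement, and pressing ; backtracks to the next one.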
Prolog utilizes this backtracking algorithm when executing a query. Consider the sample of our knowledge base above:
We can run the following query:
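For example, we might ask for everything that qualifies as a duck:

```prolog
?- duck(X).
```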
The Prolog runtime will start by searching for what makes up a duck:

```prolog
duck(X) :- ...
```

then performs a depth-first search through its component facts,
and checks whether X = mallard satisfies all the constraints. If we’re using the facts above, this query actually fails. The runtime correctly determines that the mallard does indeed talk like a duck, which meets our first constraint. Then it checks the fact fliesLike(duck, goose) against the goal fliesLike(duck, mallard), and fails. It tries any other rules for fliesLike before exhausting its search. The search then backtracks and checks whether there are any other definitions of talksLike that might still be valid. There aren’t any, so it gives up and returns false. If we wanted this query to return true, we could either statically declare some additional rules for fliesLike and walksLike so that they include mallards, or declare these rules as dynamic, allowing the definitions to be changed at runtime.
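The dynamic route might look like this (a sketch: dynamic/1 marks the fliesLike and walksLike predicates as modifiable, and assertz/1 adds new clauses at runtime):

```prolog
% Allow fliesLike/2 and walksLike/2 to be modified while running.
:- dynamic fliesLike/2, walksLike/2.

% Later, assert the missing facts about mallards, then retry the query:
?- assertz(fliesLike(duck, mallard)),
   assertz(walksLike(duck, mallard)),
   duck(mallard).
```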
Since SWI-Prolog predictably performs this depth-first search, backtracking when necessary, you can use that behavior both to craft elegant solutions and to work around some of SWI-Prolog’s quirks.
As far as elegant solutions go, Prolog can make short work of problems that imperative languages find difficult. Sudoku solvers, graph coloring problems, and a lot of math/logic riddles have very elegant solutions when implemented in Prolog.
Part of this comes from the style of decomposing problems with it. With imperative languages, I would start by asking myself:
“What small programs can I piece together in order to reach the end goal?”
With Prolog, it’s more of a top-down approach, asking
“What are some facts about the finished product?”
For example, if I wanted to build a train, a Prolog definition might start like this:
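Something like this – the predicate names here are purely illustrative:

```prolog
% Facts about the finished product: X is a train if it
% satisfies each of these properties.
train(X) :-
    has_engine(X),
    has_cars(X),
    runs_on_rails(X).
```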
There’s a little more work to do to get our train working, but you’ll notice that a lot of the logical components that make up a train are easier to tease out this way. I’ve noticed that the vaguer, non-functional program requirements can be more easily defined using Prolog’s method of stating facts about them. That framing is powerful even if you aren’t ultimately crafting the solution in Prolog.
In addition to offering a great model for decomposing problems, Prolog presents a fantastic interface for a brute-force, depth-first search with backtracking. Because of this, you can have very short, elegant solutions for otherwise lengthy or tricky problems, like graph coloring in 6 lines, Sudoku solving, and logic riddles. On a larger scale, it’s also noteworthy for its role in winning Jeopardy.
Unfortunately, with great power comes great technical gotchas. If you’re anything like me, you rely on negations in logic – adding ‘DoesntExplode’ to the list of things a train does seems like a good idea. In Prolog, this is often a poor way to organize things – or at the very least, it’s inefficient. Prolog uses what’s known as negation as failure, which in this case means it needs to fail on every local branch before it’s sure that a fact is negative. Fortunately, there are some more advanced features (cuts) you can use that can make the search more efficient at times.
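In Prolog, that negation is written with \+/1 (“not provable”), which succeeds only after every attempt to prove the goal has failed – a hypothetical example:

```prolog
% Negation as failure: \+ Goal succeeds only once every way of
% proving Goal has been tried and has failed.
explodes(firework).

safe_cargo(X) :- \+ explodes(X).
```

?- safe_cargo(coal). succeeds only after Prolog has exhausted every clause of explodes/1 – cheap here, but potentially expensive in a large knowledge base.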
Prolog also doesn’t allow you to re-assign variables once they’re bound, which can really complicate matters for the uninitiated programmer. Many imperative algorithms rely on re-assigning the same variable, so breaking this habit will be one of the first steps towards using Prolog effectively. If you decide to pick up the language, be prepared to learn some new control and style patterns.
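Once a Prolog variable is bound, unifying it with a different value simply fails; the idiom is to introduce a fresh variable for each new value. A small illustration (the predicate names are mine):

```prolog
% An attempted "reassignment" is just a unification that fails.
increment_wrong(X) :-
    X = 1,
    X = 2.          % fails: X is already bound to 1

% The Prolog idiom: bind a new variable for each new value.
increment_right(X, X1) :-
    X1 is X + 1.
```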
The difficulties in jumping over Prolog’s early hurdles are not without payoffs, though. Most functions in Prolog are implemented using recursion rather than the looping constructs of imperative languages. After all, loops are harder to implement when you can’t rewrite variables. Effectively, this means that it’s easier to determine where exactly a value is coming from. Recursion also makes Prolog a great exercise in finding solutions to problems by means of list manipulation. It uses the format [H, E1, E2 … | T] to refer to the head of the list, any following elements, and the tail of the list. This format makes reasoning about solutions using lists easier by clearly identifying the positions of the list that are being manipulated. Another neat perk is that without loop constructs and control statements (if/else), every condition is coded deliberately as a separate function. This approach greatly reduces the number of accidental states in a program, and is fantastic for reducing side effects. If you’re looking at a problem where you might have high cyclomatic complexity, you may want to consider prototyping some of the requirements in Prolog to see if there’s an easier way.
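A typical example of this style is a recursive list sum, written with one clause per condition instead of a loop (named sum_elements/2 here to avoid clashing with SWI-Prolog’s built-in sum_list/2):

```prolog
% Base case: the sum of an empty list is 0.
sum_elements([], 0).

% Recursive case: peel off the head, sum the tail, then combine.
sum_elements([H|T], Sum) :-
    sum_elements(T, TailSum),
    Sum is H + TailSum.
```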
So to recap, the features of Prolog are:
- Fact driven development – Start at “What are some facts about the finished product?”, and work your way down.
- Functional, recursive solutions – writing to variables only once helps modularity and keeps functions from picking up too many concerns.
- Elegant solutions for riddles and puzzles that can be hard to express otherwise
- List processing experience.
If you’re interested in picking up the language, SWISH is a great place to start. It provides a sleek web interface for running simple SWI-Prolog programs, along with tutorials and numerous other features that make prototyping easy.