If you're trying to solve boolean satisfiability, you want a SAT solver, not Prolog.
If you're trying to solve more general problems, you probably want an SMT solver or other more specific solver (e.g., ILP), not Prolog.
If you're trying to verify software, you want a theorem prover, not Prolog.
If you want database-ish stuff you want Datalog (or something like it), not Prolog.
Prolog falls into the uncanny valley of being too general a DSL for the sorts of optimizations that narrower solvers enjoy, while being too narrow a DSL for most general-purpose programming. I think because it was an early DSL, people got caught up in the idea of it being more "high-level" than other languages [1], but the DSL nature of it wasn't fully appreciated at the time.
Prolog being tied to this very specific execution model of SLD resolution is somewhat at the root of the problems you have described. The promise of declarative programming in general, and of logic and constraint programming specifically, is that programs are easier to reason about because the execution model is abstracted away and there's a relatively simple and mathematically pure definition of the model(s) of a program.
But again and again one uses extralogical constructs in Prolog programming.
Case in point: basically all your mentioned alternatives (Datalog, SMT, I'd add ASP) have implementations that use the Prolog syntax or something close to it with very similar semantics. But not being tied to that specific execution model can give you (better) answers or more complete answers, termination guarantees, or other things we might need.
As I mentioned in another comment, SLG resolution (implemented in XSB Prolog and more recently in SWI-Prolog) does alter the execution model in a way that makes it easier to avoid extralogical constructs. And the wide availability of deeply integrated CLP modules for eg SWI-Prolog also makes it easier to keep programs purely logical.
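For a flavour of what that buys you, here is a small sketch in SWI-Prolog syntax (edge/2 facts made up): under plain SLD resolution the left-recursive path/2 below keeps descending after its real answers are exhausted, while under SLG resolution (tabling) the same definition simply terminates.

:- table path/2.

edge(a, b).
edge(b, c).

path(X, Y):- edge(X, Y).
path(X, Y):- path(X, Z), edge(Z, Y).

% ?- path(a, Y).
% Y = b ;
% Y = c ;
% false.    % with the table directive; without it, this query loops instead of failing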
>> Case in point: basically all your mentioned alternatives (Datalog, SMT, I'd add ASP) have implementations that use the Prolog syntax or something close to it with very similar semantics. But not being tied to that specific execution model can give you (better) answers or more complete answers, termination guarantees, or other things we might need.
To clarify, datalog is a restriction of Prolog excluding functions of more than 0 arguments (i.e. constants). For example, P(x,y) :- Q(f(x), g(y)) is Prolog but not datalog. There's also a restriction in the sharing of variables between head and body literals that ensures termination. The point of datalog is that because of its function-free structure it's decidable. You could simply ground a datalog program and get its entire Herbrand model.
SMT, I don't know much about. It's got nothing to do with Prolog, as far as I know. It's not a programming language, but a framework for boolean constraint solving. I don't know why it (or SAT) is compared to Prolog which has nothing to do with boolean satisfiability problems.
ASP also has nothing much to do with Prolog. It's a strictly first-order language (Prolog is higher-order) that is based on the idea of stable model semantics. Very briefly, a stable model is often the set of queries to which Prolog will answer "true" (or "yes" depending on the interpreter). ASP programs share some syntax with Prolog, but are not definite programs. For one thing, ASP allows classical negation, alongside Prolog's Negation-As-Failure. ASP is said to be well-suited to optimisation problems but to me it's more interesting as a formal treatment of reasoning under uncertainty in a strictly logic framework.
Note that datalog is not Turing-complete (obviously, since it doesn't have functions) and ASP solving is NP-complete. They are also special-purpose languages, so for example nobody is going to write a web server in ASP anytime soon. By comparison, the SWI-Prolog website runs on a web server written in Prolog (https://eu.swi-prolog.org/dogfood.html).
> SMT, I don't know much about. It's got nothing to do with Prolog, as far as I know. It's not a programming language, but a framework for boolean constraint solving. I don't know why it (or SAT) is compared to Prolog which has nothing to do with boolean satisfiability problems.
SMT and SAT are certainly less expressive than Prolog (or more abstract systems, like LambdaProlog, Isabelle/HOL, etc.), so they cannot express the same structures/abstractions. However, given any "concrete" query/program, we can collapse all of its structure, in-line/unroll all of its higher-order parts, etc. to get a SAT formula (modulo the halting problem).
The resulting term is usually exponentially larger than the original; but SAT solvers have become fast enough to handle such giant problems. (SMT solvers are usually wrappers around a SAT solver, too.)
I think what you're saying is that you can take a Prolog program and ground it all down to the propositional order so you can then evaluate it with a SAT solver. For example, you can take a Prolog clause like this one:
p(X,Y):- q(X,Z), r(Z,Y)
And ground it in all possible ways with constants in the domain of discourse:
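For example, assuming a domain of discourse with just the constants a and b, some of the groundings would be:

p(a,a):- q(a,a), r(a,a)
p(a,a):- q(a,b), r(b,a)
p(a,b):- q(a,a), r(a,b)
p(b,b):- q(b,a), r(a,b)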
And so on, at which point you essentially have propositional formulae, like:
p1 :- q1, r1
p2 :- q2, r2
...
etc.
That's what ASP does (except it's not Prolog). But one problem with that is that it doesn't work in a language with functions, because then you get infinite groundings. So for example, ASP doesn't have lists (because you get infinite lists). Prolog does, and you can't just ground any old Prolog program, just like that.
Then again, the motivation behind Prolog's unification is exactly that you don't need to ground the entire Herbrand base of a Prolog program to evaluate it. You can unify variables, until you finally have only ground terms when the proof is complete, but no sooner than that. And of course, you can get a proof before you have only ground terms. So it's really not like SAT solving in my mind (given the very little I understand about SAT solving).
If I understand correctly, the Davis-Putnam algorithm used in SAT solving is based on Resolution in the propositional order, and Prolog is SLD-Resolution, that is a restriction of Resolution to definite clauses which are first-order (though Prolog is not first-order but higher-order as I say in another comment). So that's some common structure, there, but still, they're very different things.
sure, that's all true and most of that is part of my day-to-day research. But the post I replied to mentioned very specific problems where Prolog could be suitable at first glance but other models of computation / problem statement are usually preferred.
But most of those do have implementations that use Prolog-like syntax.
Prolog has predicates, not functions. (It has uninterpreted function symbols -- aka functors in this context.) It's based on first-order logic, i.e., it has nothing higher order in it, if you want to take it (formally) seriously. But of course there is `call/1` and so forth, which are just hacks... err, "non declarative".
There are some interesting higher-order logic programming languages, e.g. Mercury. I like the approach taken by HiLog [1] as it struck me as (semantically) honest and simple. I don't know if it's usable.
Prolog is not first-order. That's how it's described but it lacks the fundamental distinction between predicate and function symbols that makes First-Order Logic, first-order.
To clarify. A FOL language begins with an "alphabet" (sometimes called a "signature") of predicate symbols, function symbols (including constants), variables, the five logical connectives (¬, ∧, ∨, →, and ↔) and punctuation (left and right parens, comma, dot, colon). The sets of predicate, function and variable symbols are disjoint.
Function symbols are used to define terms, inductively, as follows. A variable is a term; a constant is a term; if f is a function symbol and t_1, ..., t_n are terms, then f(t_1, ..., t_n) is a term.
Terms are used to define formulae as follows: If P is a predicate symbol and t_1, ..., t_n are terms, then P(t_1, ..., t_n) is an atomic formula, or atom. If φ is a formula, then ¬φ is a formula. If φ, ψ are formulae, then φ ∧ ψ, φ ∨ ψ, φ → ψ, and φ ↔ ψ are formulae.
Literals and clauses are defined as follows: if A is an atom, then A is a positive literal and ¬A is a negative literal. A clause is a disjunction of literals. A clause with a single positive literal is definite. A definite clause with no function symbols other than constants is datalog.
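For instance, with made-up symbols: ancestor(x,y) ∨ ¬parent(x,z) ∨ ¬ancestor(z,y) is definite (exactly one positive literal) and also datalog (no function symbols besides constants), whereas len(cons(x,t), s(n)) ∨ ¬len(t,n) is definite but not datalog (cons and s are function symbols of arity greater than 0).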
Now, if you squint a bit at all of the above and then at a Prolog program you will see a serious violation of the rules of FOL syntax. Observe:
This is Prolog:
p(x, y):- q(r(x),z), r(p(x,z)).
See? r/1 and p/2 are used both as the symbols of terms and as the symbols of atomic formulae. That is a violation of FOL syntax. What's more, it's a violation of the "first-order" nature of FOL, where functions are "at a lower order" than predicates. That is, in FOL, terms map to entities in the domain of discourse, whereas formulae map to relations over entities in the domain of discourse.
But Prolog doesn't care. Prolog doesn't care if an object is a predicate, or a function, if it's a mapping, or a relation. For Prolog there's no distinction, and because there's no distinction it can treat everything as an argument to everything else: an atomic formula can be "passed" to an atomic formula, and taken apart, and put back together again. For example, you can do this sort of thing:
?- T_ =.. [p,X,Y,Z], T =.. [r,T_].
T_ = p(X, Y, Z),
T = r(p(X, Y, Z)).
That is no first-order functionality! In fact, in Prolog you can write this sort of thing:
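For instance, a sketch along the lines of the classic "vanilla" meta-interpreter, where the goals and clauses of the object program are fetched, decomposed and recombined as ordinary terms:

prove(true).
prove((A, B)):-
        prove(A),
        prove(B).
prove(A):-
        clause(A, B),   % fetch a program clause A:-B as a term
        prove(B).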
I don't disagree with what you say here, but ... I'm coming at it from a formal semantics point of view. (I find Prolog and indeed "logic programming" in a larger sense to be all promise and little delivery on this front.) Perhaps you can check if your `prove` example is expressible in HiLog ... if so, it's more-or-less first order (in a semantically clean way). I believe the hard working XSB have an implementation.
I don't know that Prolog, or logic programming, makes any promises about formal semantics. I have to say I don't think of Prolog or logic programming in that context. Except for the more general point that Prolog programs are theories, and when a query is executed, that's a theorem being proven.
Btw, I'm not at all complaining about Prolog's higher-order nature. That may be a "hidden" feature, but it's a really, really useful one. FOL is actually very restrictive in many ways. As far as I understand it, the distinction between first and higher-order logics was not made by Peirce and Frege, who tended to fudge the orders a lot, just like Prolog.
In any case, for me, logic programming begins and ends with Robinson's paper on Resolution. That was not about formal semantics, and only about a sound and complete inference rule. And that (soundness and completeness) is all I really need to know. The rest (including definite clauses, SLD-Resolution and Prolog) is all implementation details.
I don’t think those are the problems, since all sorts of solvers ship with Prolog and it’s still a better way to write down a problem.
I think its biggest headwind is that you have to be significantly more intelligent than the average developer to think in Prolog.
Prolog requires you to think about code as the implication of constraints — you just have to keep a lot more in your head. Prolog also requires you to understand the execution model before you can do much of anything.
> I think its biggest headwind is that you have to be significantly more intelligent than the average developer to think in Prolog.
This is arguably wrong. Thinking in Prolog is much easier than knowing how to solve a problem beforehand. In Prolog you just state the facts and known rules, but you have no clear idea what would be the best way to solve it. Prolog solves it for you then, if possible. No intelligence needed. A "dumb" friend of mine in the '80s only used Prolog for his work and was very happy with it. Lisp, Pascal or C was too hard for him.
The problem with Prolog is mostly that you'll throw too hard problems at it, mostly with exponential complexity. Tiny changes have huge influences then, it either works or not.
Also the tabling and SAT/SMT integration sucks.
The problem with your claims is that once you start “debugging”, it gets complicated quickly.
You’ll have to gain an intuition of how the language actually tries to solve the problem under the hood.
Sometimes the problem you encounter doesn’t match the capabilities of the language, and then you’ll have to state the problem in a roundabout way to coax the language to do what you intended.
It’s probably great for the small number of toy problems that it was designed to solve, but in my limited experience prolog doesn’t do well on real world problems that have all sorts of edge cases and dirty inputs.
You see this often in the data side of things: we delegate very laborious or very complicated tasks to the powerful engines behind Postgres, BigQuery, Spark, and the like, and while they provide you with a high-level interface that makes common tasks very convenient, mastery can take years, as you have to study the architecture and implementation behind them.
It's a huge investment on your end to do that, and people do it readily because the return is being able to process datasets of sizes and complexity that you otherwise wouldn't be able to.
People haven't found it to be a good investment in their time to commit themselves to Prolog's engine.
--------
> Thinking in Prolog is much easier than knowing how to solve a problem beforehand. In Prolog you just state the facts and known rules, but you have no clear idea what would be the best way to solve it.
While Prolog streamlines certain aspects of problem-solving due to its declarative nature, it still necessitates a clear understanding of the desired solution and the proper organization of facts and rules. Although Prolog's approach abstracts some implementation details, a well-planned strategy remains essential.
--------
> No intelligence needed.
Prolog might make some tasks seem simpler, but intelligence and a deep comprehension of logic programming are still needed to efficiently create and maintain solutions.
--------
> A "dumb" friend of mine on the 80ies only used Prolog for his work and was very happy with it. Lisp, Pascal or C was too hard for him.
I may be the "dumb" friend you referred to?
Consider this: someone skilled in producing valuable software using Prolog is likely knowledgeable and proficient enough to program professionally in various other languages, such as C, Lisp, C#, C++, Scala, Java, Pascal, Python, Assembly, and more.
As an experienced programmer with a strong background in multiple languages, I often receive requests to translate my Prolog software into languages like Python, Java, or C#. However, I usually decline and claim that "it would be too hard".
--------
> The problem with Prolog is mostly that you'll throw too hard problems at it, mostly with exponential complexity.
Prolog isn't inherently prone to exponential complexity problems. The complexity depends on the problem being solved and how the facts and rules are structured.
It's crucial to understand the problem domain and optimize the Prolog code accordingly.
The complexity of a problem and the fluid nature of control-flow shifts during software runtime often make it unfeasible to implement the software in alternative languages. Consequently, in the rare instances when I or others have attempted this, we have either developed or found a Prolog interpreter in the target language and executed the original code.
Had these tasks not been so daunting, others would have already implemented solutions to them in different languages, and there would be no need for people to continually request my assistance!
--------
> Tiny changes have huge influences then, it either works or not.
While it is true that small changes can sometimes have significant impacts on the behavior of a Prolog program, this is not unique to Prolog. Any programming language can experience similar issues if the underlying logic is not well-designed or properly tested. It is the developer's responsibility to understand the problem domain and design the logic appropriately to avoid unintended consequences.
--------
> Also the tabling and SAT/SMT integration sucks.
Tabling is a technique used in Prolog to store intermediate results of a computation to avoid redundant sub-computations, thus improving the efficiency of certain types of queries. While it may have some limitations, it is generally considered a valuable feature of Prolog that can enhance performance for specific types of problems.
Prolog's integration with SAT (Boolean Satisfiability) and SMT (Satisfiability Modulo Theories) solvers is not inherently flawed. Prolog can be used effectively in combination with these solvers to tackle complex problems in various domains. Integrating Prolog with these solvers might require a good understanding of the solver's functionality and the problem domain, but the same can be said for integrating any other programming language with these solvers. It's important to leverage the latest Prolog tools and libraries to optimize these aspects of your program.
> Prolog also requires you to understand the execution model before you can do much of anything.
This is true of most (all?) languages. Prolog's control flow is ultimately not too different from the usual stack-based semantics people are used to from imperative languages like C (if you're not too ambitious with the backtracking; if you are, best of luck!). The tricky part is perhaps understanding the dataflow that arises from unification. That can be arbitrarily complex but it seems people tend to go no further than difference structures, which are easy to comprehend in terms of pointers. (Yes yes I know, but logic variables are essentially just single-assignment pointers.)
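For instance, a minimal difference-list sketch, where the unbound tail of the first list behaves exactly like such a single-assignment pointer:

% Appending two difference lists is O(1): it just binds the first list's
% "hole" (its unbound tail) to the second list.
dl_append(Front-Hole, Hole-Tail, Front-Tail).

% ?- dl_append([1,2|H1]-H1, [3,4|H2]-H2, List-[]).
% H1 = [3, 4],
% H2 = [],
% List = [1, 2, 3, 4].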
The thing I find deeply weird about the logic programming literature is that if you want to reason about your program you're told to use things like Hoare logic [1] or intended interpretations [2] and not fundamental logical concepts like implementations implying specifications (ala Lamport and many others who grappled with refinement).
Well no, I’d go as far as to say that your average JS/Python dev will know very little about how their code is executed — and doesn’t need to know, necessarily. SQL, on the other hand, is a different story. Noobs don’t know, but pros know and consider the execution graph of a query.
You don’t really have a choice in Prolog: gotta know what the interpreter is doing from the beginning, otherwise you can’t debug.
I don't disagree with your stance but one of the most beautiful things to come from this line of work was Ehud Shapiro's declarative debugging [1]. I'd recommend taking a look at his PhD thesis if you're at all interested in what logic programming can do (that nothing else can [2]). You can also see it as a philosophical exercise: he shows how we might understand Popper's falsification criterion and use it to do science.
[2] There have been attempts to build declarative debuggers for other languages but for one reason or another (complexity, space use, incomplete coverage of the language, ...) they don't appear to get used very widely. Shapiro's setup is beautifully simple for what it does.
Here "declarative" means that you have a model in mind ("model" in the sense of logic, i.e., you expect your predicates to hold of particular arguments) and the tool helps you explore the divergence between what you wrote and what you intended. Thinking this way means you can completely ignore the details of SLD-NF resolution (the details of the Prolog implementation)... I mean, it's unlikely to be perfect in practice for any number of reasons, but I find it very disappointing that we don't dream like this any more.
It's just that everyone has the execution model of imperative languages so internalized, we mostly don't think about it. Same with OO semantics, not to mention various JS pitfalls - they are second nature to enough people that they count as nature-given, but one false move and they come back to bite you.
It’s worth pointing out that the semantics of procedural programs tend to be pairwise, while coaxing the Prolog solver to terminate is a more global, transitive problem.
I disagree. A JS/Python developer will have no problem understanding the following example, which makes absolutely no sense without some (operational) semantic model of the code:
Prolog solvers don't necessarily optimize like Gurobi, CPLEX or OR-Tools would. Yes, they can give you a solution that satisfies the constraints, and you can query for more possible answers, but it doesn't really optimize for the objective function in the same way.
Solvers are integrated into Prolog nowadays, like the other commenter said.
The real problem is that Prolog doesn't play nice with relational databases. (Too bad the people behind the short-lived NoSQL fad didn't know about Prolog, though.)
Smuggle a Prolog in YAML somewhere, call it YAMLogic or anything else really which can’t be directly traced to it and watch the kids be amazed at what you can make computers do.
Only about 33% serious here, but… still a bit serious.
Or I guess you can see it as a DSL where the "Domain" is the entire set of programs computable by a Universal Turing Machine. It's a DSL for writing programs that manipulate programs that manipulate programs.
That's a very apt description of where Prolog stands. The niche I see it fit in is as a teaching tool for first-order-logic in computer science (not necessarily math). At least that's how I learned it and I'm glad I did.
I think Prolog's main problem is that it presents the idea of "just declare the constraints and it will find the solution," but can't really live up to that. In reality it's an extremely leaky abstraction, and even with pure predicates you need to take its execution model into account to avoid getting stuck in an infinite loop.
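A standard illustration, with made-up parent/2 facts assumed to be finite and acyclic: the two definitions below are declaratively equivalent, but under Prolog's depth-first strategy the first terminates on a query like ?- ancestor(alice, D). while the second keeps recursing forever once the real answers are exhausted.

ancestor(X, Y):- parent(X, Y).
ancestor(X, Y):- parent(X, Z), ancestor(Z, Y).

% Same relation, but the recursive call now comes first in the body:
ancestor2(X, Y):- parent(X, Y).
ancestor2(X, Y):- ancestor2(X, Z), parent(Z, Y).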
I still love the language, but I think there's a point where every learner realizes it's not as beautiful and ideal as they once thought.
This is the case with any other declarative programming system: while you only care about "what", i.e. the end result matching your declarative specification, you are living a dream. Once you need to fine-tune the details of how you get there (e.g. performance, resource use), or tweak the end result in a way that is not easy to describe within the constraints of your declarative dream land, things get ugly pretty quickly. Query hints in some SQL dialects, vendor-specific hacks in HTML/CSS, Kubernetes YAML templating, etc. are all sad stories about it.
Using a template to render out text that is then interpreted as something else (YAML in this case) is always fraught with quoting issues, complex workarounds, and for YAML specifically indentation issues.
You see the same thing with the C preprocessor - there's a very good reason that basically no language since has copied that design, and it's infamous for being full of footguns.
A far simpler and more manageable solution is to generate the same data structure (the final manifest) in a language which can actually represent those data structures directly. That might be a dedicated language like Cue or Jsonnet, or it could just be a general purpose programming language. You can generate the data structure using whatever tools you like, and then render it to JSON or YAML to be consumed by whatever tool is uploading it to kubernetes (kubectl apply, helm, etc).
Using textual templating to generate a whitespace sensitive language is a massive footgun - I wouldn't want to try and generate python that way either.
It looks like you can plug in your own templating script to the Helm stuff though. I was thinking that something like https://yglu.io could retain the essential YAML-ness for comprehensibility to others while not being completely ridiculous like using a text templating system is.
Sure. Using things like loops and conditionals to stitch parts of the specification together is not a very declarative way to do things, imo. Regarding a better system, I don't know; they all have their quirks, I guess. My point was that a purely declarative approach has its limits and oftentimes breaks when facing real-world complexity.
Sure, nobody is deceived by the reality of the execution engine. But it's hard not to see a pure statement that is fast in one mode but slow in a different mode, determined entirely by the logically arbitrary order of two clauses, and not wish that the computer could just figure out those details itself.
I fully agree and came here to post something very similar. Advanced (or even non-trivial) Prolog requires such a deep understanding of what Prolog is doing under the hood that, at that point, you should be able to implement whatever you're trying to solve just as easily in your favorite non-Prolog programming language.
What makes this problem really bad is that everything about your problem that isn't directly related to logic programming is much harder to do in pure Prolog. Just reading in a list of facts from a CSV file is non-trivial in Prolog.
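For what it's worth, in SWI-Prolog specifically library(csv) gets you most of the way, but that is system-specific, which is part of the problem. A rough sketch, with a made-up file name:

:- use_module(library(csv)).

load_facts(File):-
        csv_read_file(File, Rows, [functor(fact)]),  % each row becomes a fact(...) term
        maplist(assertz, Rows).

% ?- load_facts('facts.csv').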
In the end Prolog makes trivial but hard problems easy, while non-trivial hard problems remain hard to solve, and easy problems, even some trivial ones, also remain hard.
I still love Prolog as well because it really does change the way you think about programming, but it is very far from even Haskell in allowing you to extend these new ways of thinking to solving real world problems.
Well said. Although there are a few things Prolog excels at, grammars being one of them. DCGs are one of my favorite things about any language ever and I miss them everywhere else.
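For instance, a tiny DCG sketch: a grammar for nonempty runs of the atom a, which phrase/2 can use both to recognise and to generate.

as --> [a].
as --> [a], as.

% ?- phrase(as, [a,a,a]).
% true.
%
% ?- length(Ls, 2), phrase(as, Ls).
% Ls = [a, a].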
The fact that you need to understand the execution model is not a limitation or an accident: it's a feature. One of the aims of prolog is to PROgram in logic. There are plenty of other logical frameworks that you can use to formulate a problem, make deductions etc, but then what do you do? The point of Prolog is that the operational and declarative semantics concur (excuse the non-exact verbiage). People mistakenly think this means there shouldn't be an operational semantics, or that it's a failure if you have to learn about it. If that's what you need, Answer Set Programming (or SMT solvers etc) might be a better fit. But that's not what Prolog is aiming to be.
Just try writing (without hacks/cuts) a run-length encoder that produces only one answer ([9,9,9,8,8,7] -> [[9,3], [8,2], [7,1]]) and works forwards and backwards and then tell me the execution model is not a limitation.
It's the sort of thing where people produce versions that seem to work, but fall into infinite loops or inconsistent extra outputs if you go past the first answer. I'm not sure it's actually possible without meta-programming.
There is a minor impurity in this code: `N = 1+1, rle([a,a],[[a,N]]).` succeeds, yet `rle([a,a],[[a,N]]), N = 1+1` fails. Add `:- op(150, fx, #).` and use it like `#CountPlus1 #= #Count+1` etc.
I was lumping clpfd under metaprogramming, since it acts like an extra runtime or dsl and you can't use the prolog debugger to make sense of it. I'll readily admit that prolog+clpfd is very powerful; any goalpost-moving on my part is unintentional.
I found that using peano numbers got a bit further in plain prolog, but it still eventually infloops in at least one of the directions.
>> Just try writing (without hacks/cuts) a run-length encoder that produces only one answer ([9,9,9,8,8,7] -> [[9,3], [8,2], [7,1]]) and works forwards and backwards and then tell me the execution model is not a limitation.
But, why not use the cut? It's a feature of the language and it's there exactly so you don't have to bust your head trying to find a way to do things the hard way.
I honestly don't understand what's the fuss with the poor cut. If you want an efficient language that runs on a computer then you have to be pragmatic and recognise that concessions have to be made to accommodate the limitations of the hardware. If you want something pure and ideal, then you should probably not be trying to do that on a computer. You should probably not be trying to do it at all. There are no pure formal systems, or if there are, we can't explain or understand them because they can't communicate with the external world (hey, no side effects, OK?). See Haskell, for instance. The article above says something really strange about Haskell, that it:
>> ... finally broke through the wall of pure-functional programs being seen as ivory-tower languages that were impractical, or at least really awkward, for any system that needed to do I/O or interact with users
Which to me is insane. I think it's talking about monads. Monads, as an escape from the ivory tower, impractical, or at least awkward formalisms of... what, LISP? I mean for the love of dogs.
Prolog is a balanced language. People complain about it because it's not unbalanced, that's the problem. If it were 100% declarative and nobody could program anything in it, it would have legions of doting admirers praising it for its perfection, and whining because they have to do all their work in languages that are hardly 10% declarative, like Rust, or Python or Ruby (and I'm being generous). People have to get their head around this: our computer science is primitive and it is full of holes, dark pits of despair and undecidability, intractability, and terminal inefficiency. To be practical, a language has to give up a little part of its true, pure self, that it could achieve in a perfect world. So Prolog has the cut, and you can use it to control its backtracking, which you need because it's implemented as a depth-first search, because that's efficient and convenient. So use the cut, it's a feature.
The thing about the cut is that it makes debugging extremely difficult because it (nearly always) completely nullifies the usual semantics of how you expect predicates to work. Stuff will seem to work and then break in mysterious ways. Also I've personally never found a valid use for them, there always seems to be a better, more logical alternative for any potential use case I've found.
It's kind of unfortunate IMO that most oldskool Prolog code out there is peppered with cuts and (->)/2 since it propagates a ton of bad habits.
I don't remember having any problems with debugging programs with cuts. But it should be said that I restrict myself to green cuts [1]. There are ways to avoid those, but not without making the code more complicated than it needs to be.
Debugging soft cuts is more of a problem. That's because soft cuts don't appear in the tracer and it's difficult to follow program flow when it doesn't directly correspond with the source code as-written. Or at least that's the way the tracer behaves in SWI-Prolog which I use almost exclusively these days (plus Visual Prolog, but let's not talk about that). There may be a setting to configure it. I generally don't use soft cuts, I find them a bit clutter-y anyway.
There's a run-length encoding program in the "99 Prolog problems" and it also doesn't use cuts:
[1] A "green cut" is one that does not change the program's Least Herbrand Model (LHM). Or, in plain English, a green cut does not change the set of queries to the program, that succeed. A "red cut" is one that does change the program's LHM. A green cut only stops unproductive backtracking, or backtracking that diverts a proof down infinite branches. A red cut is one that cuts finite success branches. You should never use red cuts, and you should never run on a barge :P
> There's a run-length encoding program in the "99 Prolog problems" and it also doesn't use cuts
I tested them. p10, p11 and p13 all infloop when you ask what encodes to [[3,7], [2,8], {what 9 encoded to}]. p12 claims that [7,7,7,8,8,9] and [7,7,7,8,8,[9,1]] are valid encodings for [7,7,7,8,8,9] and then gives "Arguments are not sufficiently instantiated".
As I implied in my original comment, there are plenty of examples out there, none of which actually work in both directions, without metaprogramming. Using clpfd seems to work well in all directions, which is neat, but that's a solver built atop prolog, rather than prolog itself.
Right, that's interesting. Did you try tabling the looping program? That's one way to avoid (some) looping behaviour.
Note that p10 - p13 are meant to be called in mode (+,?) according to their comments, so (-,+) (or (-,?)!) is asking for trouble. I can see you were looking for a program that would run in arbitrary mode that is also non- or semi-deterministic ("gives one answer"; did you mean more like functional?) but I replied to another comment just to point to existing RLE programs, not to give examples of what you were asking.
The instantiation error is the kind of thing that you'd use a "green" cut to avoid. The cut in that case would avoid unnecessary backtracking and not change the results of the program.
I don't know about the malformed result. I'd have to pick at it a bit and I don't want to do this now. I also don't want to try and write a "pure" version, first because I don't think it satisfies any real-world need, and second because I already have a version that seems to work OK and uses cuts. I wrote it several years ago (possibly around 2010 ish). I'm copying it here keeping the idiosyncrasies of my coding and commenting style at the time.
%! run_length_encoding(+List, -Run_length_encoding) is det.
%
% Run_length_encoding is a list of all key-value pairs where each
% key is an element in List and value the number of consecutive
% repetitions of that element up to the first differing element.
%
% For example:
% ==
% ?- run_length_encoding([a,a,a,b,b,b,b,b,b,c,c,c,a,a,f], RLE).
% RLE = [a-3, b-6, c-3, a-2, f-1].
% ==
%
run_length_encoding([], []):- !.
run_length_encoding([H|List], Run_length_encoding):-
        run_length_encoding(List, [H-1], Run_length_encoding).

% Last element in input list or single-element input list.
run_length_encoding([], [C-F|Fs], Run_length_encoding):-
        reverse([C-F|Fs], Run_length_encoding).
% Run of N consecutive identical elements.
run_length_encoding([C|Cs], [C-F|Fs], Acc):-
        !  % Orange-ish; backtracking will produce successive
           % counts of repetitions of C from different indices in the list,
           % I think.
        ,F_ is F + 1
        ,run_length_encoding(Cs, [C-F_|Fs], Acc).
% End of run of N consecutive identical elements.
run_length_encoding([C|Cs], Fs, Acc):-
        run_length_encoding(Cs, [C-1|Fs], Acc).
I give this as an example of a simple, short program you can write in Prolog to accomplish a simple, useful task, while using the cut to make your life easier, which is what I'm arguing about here.
Note the uncertainty I had at the time about the use of the cut in the second auxiliary clause. I've left that comment in, in the interest of being honest about the difficulties in learning how to use the cut correctly. As I say, that was written many years ago. I'm not arguing that it's easy to learn to use the cut, I'm just saying that it makes your life easier once you've learned how to use it. I should probably have made that more clear in my comment.
Please let me know if my code above breaks. I haven't tested it in ages. But please respect the documented call modes and determinism :)
Edit: if you're wondering about the use of reverse/2, the goal is to simplify debugging; if you put accumulators in the head, you don't see their instantiations until recursion unrolls, so you don't know what's going on.
Edit 2: To further clarify, as documented, the program only runs in one mode, (+,-). It was an auxiliary for another program that calculates the Shannon entropy of a string (tokenised as a list of characters) so there was no need for other modes. Not every Prolog program needs to run in all possible modes. And even when one does, it's often simpler to write multiple auxiliaries for additional modes. And why not do the simpler thing, when you can? The point is to write programs that have the desired behaviour. The constant complaint about Prolog (in this discussion also) is that it makes it hard to do simple things, not that it doesn't look pretty. So we should talk about how easy it is to do simple things, not how hard it is to write pretty programs.
Yeah, my "gives one answer" was an incredibly misleading phrasing - I meant the encoding should be greedy and so [7,7,7,8,8,9] should have just the one answer of [[7,3], [8,2], [9,1]] and not also become [[7,2], [7,1], [8,2], [9,1]] or any other variant.
I feel like Prolog is sold by many as being able to solve predicates in both/many directions, with all the grandfather/ancestor/append examples, and my apparent frustration is in trying to use it that way in general and finding that doesn't work and that trying to produce working predicates that do work that way is seemingly impossible (without clpfd or cuts).
My most recent foray into Prolog was in starting with solving the Countdown numbers game (without clpfd or cuts) and then trying to use the predicates from that to answer further questions like what sets of tiles can reach all target numbers or what the highest necessary intermediate result is, or similar. I didn't just fail; I failed to learn anything useful. Getting those answers in a few imperative languages was trivial and educational.
Since then, I've reduced the goal to writing an any-mode rle that can return just correct answers and not infloop, without cuts or meta-programming. Simpler things seem easy enough. Rle seems just complex enough to make it impossible but is still logically trivial.
Your rle definition looks good to me, of course, but I'm not sure I can work out how to usefully compare it to other (+,-) answers. Certainly, I've only been checking for correctness in all modes, including a lack of infloops and a lack of extra answers.
I'd be happy to engage in offline conversation about it, but I'd likely just be yelling at clouds. I am interested in discovering which logic languages can trivially provide an any-mode rle and in working up from there to larger problems. Clearly, from this thread, prolog+clpfd is one.
?- rle2(X,Y).
X = Y, Y = [] ;
X = [_A],
Y = [[_A, add1(zero)]] ;
X = [_A, _A],
Y = [[_A, add1(add1(zero))]] .
As I understand it, Prolog itself is CLP(H), that is CLP over the Herbrand terms (that is, atoms and functors and stuff). CLP(FD) just extends the capability with numbers (FD = finite domains). Although I do sort of understand how you'd consider it not "pure" Prolog, I don't see it as that different especially with things like `dif/2` in Prolog which is a constraint just like any other.
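A small illustration: dif/2 can be posted before its arguments are instantiated, whereas the purely syntactic \=/2 has to commit immediately.

% ?- dif(X, a), X = b.
% X = b.
%
% ?- dif(X, a), X = a.
% false.
%
% ?- X \= a, X = b.
% false.    % \=/2 already fails while X is still unbound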
>> As I understand it, Prolog itself is CLP(H), that is CLP over the Herbrand terms (that is, atoms and functors and stuff).
That's an interesting way to see it. Quite an unorthodox one, mind. I learned recently that it was Alain Colmerauer, co-creator of Prolog, who also came up with the idea of Constraint Logic Programming, specifically to address Prolog's er difficulties with numbers and arithmetic. See:
I was recently asked a question about the use of Peano notation in some stuff I was working on. I didn't think of it that way at the time, but one big advantage of Peano notation compared to is/2 is that it is... well, in a nutshell it's declarative integer arithmetic, just like CLP(FD). Boy I hate saying jargon-y things like that. But Peano numbers as Prolog terms in predicate's heads can be backtracked-over, and unwound when recursion unwinds, just like you do in your rle2/2 above. The downside is that they get hard to read once they get too big.
Then again, you can easily do this sort of thing:
%! peano(?Arabic,?Peano) is nondet.
%
% Convert between Arabic and Peano number notations.
%
% Use this program to quickly convert between Peano notation and
% Arabic notation.
%
% Accepted modes are (+,-) and (-,+). In mode (?,?) this predicate
% returns only 0 deterministically.
%
% Examples:
% ==
% ?- peano(1,P).
% P = s(0).
%
% ?- peano(I,s(0)).
% I = 1.
%
% ?- peano(N,P).
% N = P, P = 0.
% ==
%
peano(N,P):-
        peano(P,0,N).

peano(0,N,N):-
        !.
peano(s(P),N,Acc):-
        succ(N,N_)
        ,peano(P,N_,Acc).
I see your points - I'll certainly use clpfd more, in future.
Also, I was mistaken - my peano-ish solution worked in both directions, though it had probable performance issues (perhaps because I didn't know `dif/2` was more flexible than `\=`) and then had trouble when I tried to map back from peano to naturals. I used a fun and perhaps misleading form to try for brevity:
>> I feel like Prolog is sold by many as being able to solve predicates in both/many directions, with all the grandfather/ancestor/append examples, and my apparent frustration is in trying to use it that way in general and finding that doesn't work and that trying to produce working predicates that do work that way is seemingly impossible (without clpfd or cuts).
You're not wrong about that and it's not your fault, or any misunderstanding. Certainly Prolog has been "sold" this way, by some. I can't name those "some" because I'm not sure who they are exactly (although I have a few specific people in mind). I don't think it's something that's made a big issue of in the usually recommended textbooks (Clocksin and Mellish, Sterling and Shapiro, O'Keefe, Bratko) but I can't be sure.
Obviously, you can run some programs in multiple modes, in Prolog. You can't always write every program to be all-modes. Pretending that it's possible to do so is overselling the language. What's more it's trying to sell a nice feature, as a defining feature. The idea I think is that First-Order Logic predicates are relations and so should not have "inputs" and "outputs". But Prolog predicates are not FOL predicates. If they were, they'd be simultaneously capable of being run in all modes, and less expressive than they are now (because Prolog is higher-order, as I say in other comments, trying not to sound like a dangerous loon).
I think this is the age-old debate about purity and formal rigour in Prolog, that has never done the language any good. There has always been a part of the Prolog community who wanted Prolog to be some kind of pure and perfect jewel, that should remain untouched from the real world. There's always been another part, who wanted to treat Prolog as a programming language to do real work with. I get the feeling that the latter, more practically-minded, folks have slowly weeded themselves out and now the majority of those who remain are of the "purist" camp. Entirely coincidentally, the number of people picking up Prolog has also constantly dwindled.
Entirely coincidentally.
You can probably tell which camp I'm in. Actually, I'm not. I got a foot in each. I serve two masters and all that. I just want to know that things work, that's my schtick. Sometimes you need formal rigor, sometimes you just need to bodge some code together. And use the cut :P
>> I'd be happy to engage in offline conversation about it, but I'd likely just be yelling at clouds. I am interested in discovering which logic languages can trivially provide an any-mode rle and in working up from there to larger problems. Clearly, from this thread, prolog+clpfd is one.
If you're not trying to make your programs as close to logic as possible (that is, support as many modes as possible and make it so you can model your program as a logical expression) then what's even the point of using Prolog? Settling for having to reimplement the same logic in multiple directions completely defeats the purpose, you're just writing imperative code at that point.
I think "pragmatic therefore non-logical" versus "ivory tower therefore useless" is a false dichotomy. In my experience it's nearly impossible to make green cuts, and once you know a few tricks and rules of thumb, you can get pretty far writing logical code.
Support of as many modes as possible is an aim that is often too ambitious. Instead, unsupported modes should be indicated with an instantiation error or (much rarer) an uninstantiation_error. In this manner the logic remains intact as long as no error occurs and the program does not loop. And yes, this may come at the expense of its usefulness, think of (is)/2.
YeGoblynQueenne omitted what the mode +,- actually means. And the program does not check it. Does it mean that the first argument must be ground, or is it OK to leave some variables inside? Think of `[a,f(X),b]`. Also, the - sometimes has the meaning of being descriptive (that is, a completely steadfast argument) or prescriptive (meaning that the program may be incorrect if the argument is not an unaliased variable). Were errors used for the unintended cases, things would be much clearer.
Good points and I have to hang my head in shame. Prolog doesn't enforce mode declarations in documentation and I don't either, most of the time, in my code. I also use modes imprecisely and never define them, before I use them, in my documentation, which, if anyone ever looks at my code and tries to run it, would create confusion.
For instance, I use "-" to indicate both a term that's an unbound variable on entry and one that can be partially instantiated on entry. Obviously that will cover different behaviour in different programs. That's not very nice of me. At least, since I've been bitten many times myself by this I try to document the structure of arguments and to give an example or two of calling in different modes, when that is not too hard to do.
I've been complimented many times about the good documentation of my projects, but I feel so embarrassed when people do that. My documentation sucks. It's worse to have half-done documentation than to have no documentation at all :(
Explicit checking of the appropriate instantiations is something for more or less official interfaces, not for every internal predicate. For those, official checking via must_be/2, can_be/2 and library(si) (all in Scryer) is often all you need. (The SWI version conflates must_be/2 and can_be/2.)
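For instance, a sketch of guarding an "official" entry point while leaving internal predicates unchecked (SWI-Prolog's library(error) shown; rle_/2 is a made-up internal worker):

:- use_module(library(error)).  % must_be/2

rle(List, Encoding):-
        must_be(list, List),    % throws instantiation_error or type_error(list, List) as appropriate
        rle_(List, Encoding).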
But you could save your beloved cut with minimal effort by using dif_si/2 (see https://stackoverflow.com/a/20238931/772868) just before the cut. In this manner you would get instantiation errors for exactly the cases you cannot handle.
>> Explicit checking of the appropriate instantiations is something for more or less official interfaces, not for every internal predicate.
But there's no formal concept of "interface" in Prolog, and the module system sucks (or doesn't exist, depending on implementation) and many programs don't ever use it anyway, so instantiation errors are a possible issue at every level of a program. For me at least it's a constant headache knowing how a predicate must be called (hence my practice of documenting the structure of inputs and outputs carefully).
I really don't like the ad-hoc "type" checking in Prolog (as in integer/1 etc). Prolog doesn't have types, because pre-Church logic doesn't have types (and post-Church logic is a mess; a typed mess, but still a mess). And yet Prolog terms (in the sense of a functor followed by comma-separated terms in parentheses) are implicitly typed, under unification: T is the type of all terms that unify with T. So making sure that terms are correctly instantiated is the most prudent thing to do when checking the "type" of a term. Unfortunately that adds too much clutter and I personally resent having to do it. I have seen various attempts to solve that conundrum over the years, for example there's a package for SWI-Prolog that adds Hindley-Milner type checking (brrr). I don't like any of them.
It's a bit weird that this is the kind of thing Prolog gets wrong yet I rarely hear it as an actual criticism, as opposed to complaints about the cut, or the imperfect declarative-ness.
I guess all this would be solved in practice by the addition of a static type system, checked at compile time, which would completely change the language of course. Incidentally, I'm currently working on an ancient (30 years old) Visual Prolog project; Visual Prolog is exactly that, a compiled Prolog with a static type system. I feel that while it sure helps catch some errors, it contrives to suck all the energy and life out of Prolog. Prolog is like parkour: it lets you fly around town, jumping and rolling like Spiderman. So sometimes you get a lamppost right in the face. So what. Who needs type safety anyway?
(She says while changing yet another bloody bandage)
>> But you could save your beloved cut with minimal effort by using dif_si/2 (see https://stackoverflow.com/a/20238931/772868) just before the cut. In this manner you would get instantiation errors for exactly the cases you cannot handle.
Thanks for the pointer. I had a quick look. I think there's something similar in SWI-Prolog's Picat-style matching with the "=>" operator that is an alternative to ":-". I'm not sure exactly what it does (something about "steadfastness", that you also mentioned, and that is a new term for me) but I prefer not to use libraries that significantly change standard Prolog syntax, out of concern for backwards compatibility.
> But there's no formal concept of "interface" in Prolog
Any predicate you write for someone else has an interface. "Someone else" includes you in a couple of weeks. That goes for all predicates that are not truly auxiliary ones, which are more like the basic blocks of command-oriented programming languages - COPLs.
> instantiation errors are a possible issue at every level of a program
That is not the worst that can happen. Nor is non-termination the worst. The worst is if an unexpected failure or success happens that looks like a legitimate answer. However, for many, non-termination is seen as being worse than such real errors. The reason is that there are only very few Prolog systems that reliably catch resource errors or have reliable timeouts. Well, to be true, there is one system. And the others most often abort.
> (hence my practice of documenting the structure of inputs and outputs carefully).
> I really don't like the ad-hoc "type" checking in Prolog (as in integer/1 etc).
integer/1 is one of the original sins of DEC10, which replaced the errors of Prolog I by silent failure. And everybody then followed suit. Mindlessly. There is library(si) that offers integer_si/1 to make things much cleaner.
As for your remarks on type systems for Prolog, one feature they do not provide is a dynamic conversion between regular Prolog and the typed part. There was an attempt some time ago but unfortunately the individual left for FP.
> Who needs type safety anyway?
It seems there are two different objectives: type safety and instantiation safety. Both are often confounded.
> SWI-Prolog's Picat-style matching with the "=>" operator that is an alternative to ":-".
And then, when adding the suggested catchall rule, both fail. Both. This throws us back by almost half a century. So now DEC10-style errors get into the 21st century. Prolog 0 and Prolog I were better.
> "steadfastness", that you also mentioned, and that is a new term for me
Since 1987-09-29 in common use, see the Prolog Digest of that date, posting by Richard O'Keefe. Here is my definition: an argument Ai is steadfast w.r.t. a goal g(A1,..Ai,..AN), when executing this goal is equivalent (complete operationally) to g(A1,..CAi,..AN), CAi = Ai, with CAi a fresh new variable. And more generally an argument Ai is steadfast, if this property holds for all goals.
A good example of a steadfast argument is the second argument of append/3. You can replace any goal append(Xs, Ys, Zs) by append(Xs, CYs, Zs), CYs = Ys without observable effect (except efficiency, resource consumption and the like). But even non-termination is preserved! Another one is the 3rd argument of phrase/3.
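Concretely, as a pair of illustrative queries:

% ?- append([1,2], Ys, [1,2,3]).
% Ys = [3].
%
% ?- append([1,2], CYs, [1,2,3]), CYs = Ys.
% CYs = Ys, Ys = [3].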
Or take your run_length_encoding/2/3, where the second argument is steadfast. You (presumably on purpose) accumulated in the second argument all the elements only to reverse them finally in a highly lispeling manner. At least no destructive nreverse.
>> That is not the worst that can happen. Nor is non-termination the worst. The worst is if an unexpected failure or success happens that looks like a legitimate answer. However, for many, non-termination is seen as being worse than such real errors. The reason is that there are only very few Prolog systems that reliably catch resource errors or have reliable timeouts. Well, to be true, there is one system. And the others most often abort.
Yes, those "right for the wrong reasons" results are a bad problem, too.
Which system are you referring to? I mainly use SWI-Prolog and in my experience it does exit with error when the stack size limit is exceeded, and it will even give you a message about possible unbounded recursion. Although on my Windows machine it then often crashes immediately after because it's left with no memory to recover from the error. I normally don't try-catch such errors because they're most of the time a sign of programming error (a well-written program should go into an infinite recursion without blowing the stack).
I've had more mixed experience with SWI-Prolog's resource-limited call/1 variants, like call_with_time_limit/2 and call_with_depth_limit/3 and call_with_inference_limit/3. The first one seems to work OK. The other two are a bit hit-and-miss, although there's warnings about that in the documentation if I remember correctly.
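For what it's worth, this is roughly how I use the time-limited variant (a sketch assuming SWI-Prolog's library(time); safe_call/1 is just an illustrative name):

:- use_module(library(time)).

% Run Goal, but give up after one second of wall time instead of looping forever.
safe_call(Goal) :-
    catch(call_with_time_limit(1.0, Goal),
          time_limit_exceeded,
          fail).

% ?- safe_call((repeat, fail)).   % fails after roughly a second instead of looping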
>> Or take your run_length_encoding/2/3, where the second argument is steadfast. You (presumably on purpose) accumulated in the second argument all the elements only to reverse them finally in a highly lispeling manner. At least no destructive nreverse.
Oh, is that lisp-y style? I had no idea. I use it because it makes tracing the program easier. There's a bit of overhead, but I feel it's made up for in spades by the ability to debug errors. If you put an accumulator variable in the head of the goal, it is only bound when the recursion terminates and the stack "unrolls", so you don't get to see the bindings of the accumulator and it's harder to verify the logic of the program. I normally trace with the "unify" port non-leashed but I don't think that changes anything.
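Roughly, the two styles look like this (not the actual run_length_encoding code, just a simpler stand-in; doubles/2 is an illustrative name):

% Direct style: the result list is built in the clause head, so it only becomes
% fully visible once the recursion bottoms out.
doubles([], []).
doubles([X|Xs], [Y|Ys]) :- Y is 2*X, doubles(Xs, Ys).

% Accumulator style: the partial (reversed) result is a concrete term at every
% recursive call, which makes it easy to inspect in the tracer; reverse/2 at the
% end restores the original order.
doubles_acc(Xs, Ys) :- doubles_acc(Xs, [], Ys).
doubles_acc([], Acc, Ys) :- reverse(Acc, Ys).
doubles_acc([X|Xs], Acc, Ys) :- Y is 2*X, doubles_acc(Xs, [Y|Acc], Ys).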
>> A good example of a steadfast argument is the second argument of append/3. You can replace any goal append(Xs, Ys, Zs) by append(Xs, CYs, Zs), CYs = Ys without observable effect (except efficiency, resource consumption and the like). But even non-termination is preserved! Another one is the 3rd argument of phrase/3.
Thanks, this is a clear explanation. To be honest, I don't stay up-to-date with discussions about Prolog programming, as I did in the past. Even in the discussions that I've followed there's a lot that turns around principles that don't mean anything to me (as I said in another comment, I really can't be bothered about declarative, vs non-declarative programming, for example) and so I'm a bit suspicious of jargon that seems to be pointing to such principles. Maybe steadfastness is a more practical consideration. I'll have to think about it.
SICStus. It is the best at catching resource errors (that is, with catch/3) and it continues happily thereafter. SWI does catch many situations, but as you observe it is much less reliable. SICStus is also best for timeouts. I have not tested SWI recently, only noted that the compatibility library(timeout) has been updated. The call_with_inference_limit/3 might be interesting. The others you mention just have bad interfaces. But what both systems lack is a reproducible notion of time on a lower level. Walltime is not useful, as it is load dependent; CPU time depends on TLBs, caches and all that. Think of the number of (completed) CPU instructions, PAPI-wise. While there is some JIT-tering in both systems, this is still the best. Anyway, I start daydreaming...
> they're most of the time a sign of programming error
Often yes, but then you need to explain such errors which incurs a lot of attempts to execute fragments of the looping program. Like with a failure-slice https://stackoverflow.com/tags/failure-slice
And then, there is another kind of programs that effectively do not terminate but are still useful. Those that try to find counterexamples (posh expression for bugs).
> Oh, is that lisp-y style?
Recursive functions a la (cons x (recursive ...)) are not tail recursive (or at least were not in the 1970s), whereas the corresponding code in Prolog is. So they were often transformed with an auxiliary argument that collected all the conses in the wrong direction and were nreverse-d afterwards, since (ideally) no one else referred to that list. I believe this has been solved in the meantime.
And yes, with a step-by-step tracer, this style makes things easier to watch. OTOH, you could use purer methods for debugging (for the pure part of your programs). At least, when you are into constraints, the tracers are of no use any longer and you have to switch.
>> I think "pragmatic therefore non-logical" versus "ivory tower therefore useless" is a false dichotomy. In my experience it's nearly impossible to make green cuts, and once you know a few tricks and rules of thumb, you can get pretty far writing logical code.
I don't agree that it's hard to make green cuts. The standard ones, in recursive programs, are one in the terminating condition and one in every recursive clause after the last. Ideally those should be placed immediately after the goal in the relevant clause acting as a "test" to select the clause, as advised by Covington et al (https://www.covingtoninnovations.com/mc/plcoding.pdf; there is much sensible advice on using the cut there, too).
Those cuts are "green" as a rule (there will be exceptions) in that they avoid unnecessary backtracking after a condition is met for selecting the clause. But of course such cuts can be misused to hide a programming error. That is something the programmer must learn to avoid, but I really don't see it as an insurmountable obstacle (a small sketch of such cuts follows the quote below). To come back to Covington et al:
5.5 Never add a cut to correct an unknown problem
A common type of Prolog programming error is manifested in a predicate that yields the right result on the first try but goes wrong upon backtracking. Rather than add a cut to eliminate the backtracking, investigate what went wrong with the logic. There is a real risk that if the problem is cured by adding the cut, the cut will be far away from the actual error (even in a different predicate), which will remain present to cause other problems later.
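Here is the sketch I mentioned, a hedged illustration only (insert_sorted/3 is an illustrative name; the intended mode is first two arguments instantiated):

insert_sorted(X, [], [X]).
insert_sorted(X, [Y|Ys], [X,Y|Ys]) :- X =< Y, !.
insert_sorted(X, [Y|Ys], [Y|Zs]) :- X > Y, insert_sorted(X, Ys, Zs).

?- insert_sorted(3, [1,2,5], L).
L = [1, 2, 3, 5].

The cut sits immediately after the test that selects the clause, and in the intended mode it is green: once X =< Y has succeeded, the X > Y test in the last clause could not succeed anyway, so the cut only discards a choice point that would never produce an answer.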
And btw, "pragmatic" doesn't have to be "non-logical". I'm not arguing about that. I'm claiming that the purity of FOL is unreachable in the real world and concessions have to be made. At the end of the day, sloppy coding is sloppy coding, whether it's pragmatic, or pure, or declarative, or whatever and I'm definitely not advocating for writing code in any old way.
I mean, I don't have any rational reasons. I could try to find some but I'd just be rationalising. The things that most people consider advantages or disadvantages, I couldn't care less about. Take declarativeness for example- writing declarative code is just not something I care deeply about. Nor is being able to run code both ways. I don't care about that stuff. I just like writing my code in Prolog [1].
Of course, if you do care about being able to code declaratively, or logically, then Prolog is your best chance. Prolog is maybe 90% purely declarative. You can write your programs so they run in all modes easily maybe 60% of the time. Try getting anywhere close to either of those figures in any other language. The reason people oversell Prolog with "you can run your code backwards" is that you can do that at all in Prolog, but not in other languages.
As to "make your programs as close to logic as possible" - well, you can't. Logic is not a programming paradigm. Logic programming is. FOL says nothing about programs. Logic programming does. FOL doesn't even say anything about proofs. That's how logic works: you have your clean and tidy syntax rules and everything else is semantics.
Now, Logic Programming, that's a semantics for FOL on computers. Like I say in another comment, logic programming begins and ends with Robinson's Resolution principle [2]. That's the theory of logic programming, or in any case Resolution-based logic programming (hi ASP folks!). Prolog is an implementation of Resolution. The implementation is not perfect, but I'm fine with that, as long as I know I can count on the theory to be well-formed. If the theory is good, the implementation can do whatever it likes. At some point, if there is a real need, someone will come up with a better one. So far, I don't see anyone making a better Resolution implementation than Prolog. So I don't think it's needed. I think Prolog is just fine the way it is.
_________
[1] Not just Prolog, of course. Then again, last time I had to write a substantial bit of Python I found, to my horror, that I have turned to the proverbial FORTRAN programmer who can write FORTRAN in any language; except s/FORTRAN/Prolog/g. By which I mean my Python code looked just like my Prolog code. I may have lost the ability to write anything but Prolog, after writing almost all my stuff in Prolog for the last five years or so. Well at least it didn't happen with R...
[2] I know that is a disservice to all the people who worked on automated theorem proving and logic programming before and after Robinson, but I'm going through a bit of a phase where I am in awe of what Robinson achieved, and I think of him as the god of logic programming, more or less. I'll get over it.
(It takes some time here, before the reply link appears)
While I do not know what you were thinking either, I do know that you insisted on a mode +,- and thus refrained from using the most general query to test your program just in case. Prolog would have shown you the problem right in the first answer!
?- run_length_encoding(L,E).
L = [], E = []-0, unexpected
; ... .
While the most general query often leads to an unfair enumeration of answers/solutions, it is still a useful and effortless way to test a program, just in case.
>> (It takes some time here, before the reply link appears)
Yes, I think that happens when threads exceed a certain size, perhaps also when replies come at a certain rate. I believe it's a measure put in place to cool down hot heads revving up to a flame war, or something like that.
Or maybe it's just a measure put in place to avoid people kicking a programmer for an error she made in decades-old code >:P
(Thanks for the advice though, it's good advice and useful for programmers at any stage of their evolution).
P.S. I kept thinking of what you said that Kowalski said regarding Colmerauer's knowledge of meta-interpreters. I'm really curious to know, and there are a couple of other questions I have that he might be interested in answering, so I plan to write him an email and point him to your comment (https://news.ycombinator.com/item?id=34243808).
(It seems one needs to go into a post directly to be able to reply more rapidly. The delay seems to be reserved for the thread-mode.)
Before sending an e-mail, check https://www.softwarepreservation.org/projects/prolog/ — there are so many original sources now online that help to reconsider some perceptions. In particular, the Prolog I manual is there, but in very bad shape (a bad scan from a worn-out chain printer, probably a 3203, all caps, no accents ...).
>> (It seems one needs to go into a post directly to be able to reply more rapidly. The delay seems to be reserved for the thread-mode.)
Oh, right, good catch!
That's an amazing resource you linked me to. Thanks!
On the other hand, it's such an awful feeling sometimes that all the activity of the early years of logic programming has died out, that I've come to it too late, and that so many of the people who created the field are now retired or gone.
> ... all the activity of the early years of logic programming has died out, ...
Of course, now there is activity of the current years! What else could we have now? If you look back, there was not only progress but also quite a lot of regress. Like dif/2 in Prolog 0, which then stayed submerged for so long, resurfacing in Prolog II and Mu etc. Or errors in Prolog I, then abolished in DEC10 at the expense of incorrectness, only to resurface later on, but still not entirely recovered...
Btw, the reason I want to write to Kowalski is not just the history of meta-interpreters. There's a question that only he may be able to answer. The question is about the Subsumption Theorem in Inductive Logic Programming (ILP).
Briefly stated, the Subsumption Theorem, as it's known today, is that a clause D is entailed by a logic program S if there exists a clause C that can be derived from S and such that C subsumes D.
Similar statements are also found in the logic programming literature, for example in Lloyd's Foundations of Logic Programming.
There's a bit of a mystery about the origins of the Theorem.
A series of similar papers by Nienhuys-Cheng and de Wolf points out different definitions and apparent rediscoveries of the Theorem, and proofs of it, some of them mistaken, in the ILP literature. The paper I use as my reference is titled "The Subsumption Theorem in Inductive Logic Programming: Facts and Fallacies" (http://homepages.cwi.nl/~rdewolf/publ/ilp/ilp95.pdf, found on Ronald de Wolf's webpage, here: https://homepages.cwi.nl/~rdewolf/).
That paper claims that the Theorem was probably first stated by R. C. T. Lee, in his 1967 doctoral thesis. It seems this information comes from Kowalski, in a paper that I can't access fully (cited in the Nienhuys-Cheng and de Wolf paper). However, the earliest statement, and proof, of a Subsumption Theorem that I could find is in Robinson's resolution paper, published in 1965, where subsumption is also defined. The Subsumption Theorem in Robinson's paper is slightly different from the one found in the ILP literature; in fact, it looks to me like the modern version of the Subsumption Theorem is a corollary of Robinson's.
If I were to rephrase Robinson's Subsumption Theorem in more modern terms, what it says is as follows:
[Restated Subsumption Theorem] Let S be a finite set of clauses and C ≠ D be two clauses that are not in S and such that C ≼ D (C subsumes D). Then, S ∪ {C,D} is satisfiable if and only if S ∪ {C} is satisfiable.
So this statement is missing the explicit requirement for derivation of C from S, but if S ∪ {C} is satisfiable then C should be derivable from S by Resolution.
Now, R. C. T. Lee's thesis is not available anymore, it seems, and it wasn't available to Nienhuys-Cheng and de Wolf, either, so there's a lot of uncertainty around what, exactly, Lee stated and proved, and how it is similar or different to Robinson's Subsumption Theorem, and to subsequent statements of the Subsumption Theorem in ILP. Kowalski may be able to clarify the confusion.
I'm posting this in the very slim chance that you might have some intuition about all this, in which case I'd be grateful to hear it.
The sibling comment has a good general answer. In this particular case, if you use cut, it seems like it works, but then you try to solve rle([9,9,A,B,8,7], [[9,X], [8,Y], [7,1]]) or other bidirectional things and you find the cuts prevent you from finding some answers. Generally, I find that cuts (and implication and such) are prone to only work in the specific cases the programmer considered and to cause missed solutions, elsewhere.
Like I say above, that's the difference between using a green cut vs. a red cut. See also my link to the RLE version in "99 Prolog Problems". So far it's been my experience that it's very hard to find a Prolog program that can't be written without a cut.
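For example, here is a textbook predicate that is often written with a cut, next to a cut-free version (max_cut/3 and max_pure/3 are just illustrative names):

% With a cut and no test in the second clause:
max_cut(X, Y, X) :- X >= Y, !.
max_cut(_, Y, Y).

% Without a cut: the clauses are made mutually exclusive instead.
max_pure(X, Y, X) :- X >= Y.
max_pure(X, Y, Y) :- X < Y.

The cut version is also not steadfast: if the third argument happens to be bound already, the cut can be bypassed and a wrong answer accepted.

?- max_cut(5, 2, 2).
true.                 % wrong: 2 is not the maximum of 5 and 2

?- max_pure(5, 2, 2).
false.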
I’m not sure it’s a feature that you MUST think about the execution model even for trivial things :-) but that you can and should is certainly a strength.
It's really no different from any other programming language; I think Prolog is just held to a higher standard, and that might be due to a misconception.
Also, XSB Prolog and SLG resolution[1] should be mentioned. It solves some of the problems with Prolog that force you to think about the execution model when you'd rather not. To some extent, this can be thought of as getting the good parts of both Datalog and Prolog.
SLG resolution was introduced in SWI-Prolog as well fairly recently[2].
There's a nice book[1][2] about Prolog with modern characteristics. Moreover, there are things like ProbLog[3] and DeepProbLog[4] that let you use probabilistic reasoning and the power of machine learning. I am personally looking forward to Scryer Prolog[5] achieving its goals.
This is a good take, and historically accurate IMO. Rule engines were indeed marketed as a more practical way of getting the benefits of declarative rules, in a form where supposedly non-programmers would be able to maintain the rule base. And yes, the relational model (in the unattractive guise of SQL) won out in the database space.
Just like functional languages inspire people to write "more functional" code in other languages, I think Prolog can inspire us to think in relations (SQL) and rules. Rules engines grew into monstrosities, but more light-weight rule engines exist and can be a great tool.
Prolog feels like a really useful feature was hauled out of some other programming language and made to stand on its own. Like if I pulled regular expressions out of Perl and tried to make them their own language.
I highly recommend the Wikipedia entry [0] on regular expressions and their history; the concept goes back to at least a couple of decades before Perl, which incorporated them via a C library written by Henry Spencer a couple of years before Perl even existed.
Embedding prolog in Common Lisp was pretty normal during the first wave of AI, as I understand it.
There’s always been an odd irrational hostility to Lisp, so I wouldn’t be surprised if Prolog feels as it does to you exactly for the reason you suspect.
Yeah, imagine if Python had a 'declare these two things are equal' operator and let you just randomly summon up computations declaratively, but with all of the python standard library and data structures. That would be more approachable than Prolog imo.
Take https://p3rl.org/Language::Prolog::Yaswi and steal the ideas into python then, you can probably at least get close to sqlalchemy style in terms of surface syntax at the python level.
(the perl community has implemented and released many strange and wonderful things, often years before other languages' communities noticed they were there (because we're weird), and while it's entirely reasonable to dislike perl as a language it's well worth looking at the things we've perpetrated over the years and stealing the parts you like for your language of choice :)
Funnily enough, perl has both a native prolog implementation AI::Prolog (which is not amazingly fast but serviceable for simple cases) and Language::Prolog::Yaswi, which is bindings to SWI prolog with a perl-native DSL if you want it.
I understand that Prolog is good for creating parsers, right? Then could it, to a large extent, be a replacement for regexps, providing more powerful ways to do string manipulation?
It might partially have been inspired by Prolog, in fact. But it lacks backtracking and unification so I don't think it qualifies as an embedded Prolog system. Erlang allows so-called non-linear patterns (ie using the same variable for several argument positions, implying equality), which comes slightly closer to Prolog (and of course, Erlang was originally modeled on Prolog, but the apple ended up far from the tree).
At compile-time, sure. But ML-like languages in general have no support for runtime logic programming. They don't provide an API to do things like type inference at runtime. For a good reason, too: the only way type inference in ML-like languages works is that the computational cost is kept out of the runtime. Logic programming (a la Prolog) is very useful, but the truth is it's asymptotically inefficient. Standard ML's type inference algorithm is, for example, asymptotically exponential. It just so happens that for useful programs it halts in a reasonable amount of time. This is not something you can provide an API for so that at runtime you can run arbitrary type inferences.
they are their own language class...the regular languages, at least in pure form, not the pcre extended regular expressions (which would be a different language class).
If your aim is to develop an SAT solver, it is recommended to write it in Prolog.
For more general problem-solving, consider writing an SMT solver or another specialized solver (such as ILP) in Prolog, as it will likely be the most efficient choice.
In the case of creating type-safe software, writing the interpreter in Prolog is a good idea, as it allows for code verification during runtime.
Lastly, if you wish to integrate code and database functionality, it is advisable to use Prolog to develop an enhanced Datalog system, which will offer a more powerful and flexible solution.
If I was a company in the 80's or 90's, I'd use spreadsheets if I need declarative data stuff, and I'd use a boring blub language like BASIC for everything else. Few business problems need the type of logical inference Prolog provides.
I'm always dubious about such answers: "if I was in X camp, I'd use A; and if in Y camp, I'd use B; so C has no reason to exist". There's always room between X and Y, sometimes quite a lot. There's always people that have been shoehorning their problems to fit A or B, and would be happy to adopt C if it didn't have other major issues.
BNF was introduced by Backus (the man behind FORTRAN) and refined by Naur, and was critical both to many early RFCs (the most important standards that facilitated the growth of the internet) and to man pages in Unix. The syntax:
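A representative rule, just for illustration:

<expr> ::= <term> | <expr> "+" <term>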
What connection are you drawing between the two? BNF is used to define language grammars. Prolog is written using Horn clauses. There is not an obvious connection.
Prolog DCGs have pitfalls that aren’t immediately evident to someone only familiar with BNF, mainly that you have to specify your grammar in Greibach normal form (every nonterminal production starts with a terminal) to get any sort of performance out of them.
Woa, blast from the past! :D I used Greibach normal form grammars in my Master's, in 2014, when I was doing some grammar induction stuff with DCGs.
Now, Greibach normal form is useful, but its use is to avoid left-recursions. That's not to say it improves performance: Prolog's SLDNF-Resolution just can't handle left-recursion and it will often "go infinite" with it (although many staple Prolog programs like member/2, append/3, reverse/2 etc. are left-recursive).
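A hedged sketch of what "going infinite" looks like with a left-recursive DCG (the nonterminal names are made up):

% Left-recursive rule: the first goal is a recursive call that consumes no
% input, so the derivation never makes progress.
expr --> expr, [+], term.
expr --> term.
term --> [a].

% Right-recursive reformulation (every alternative starts by consuming a
% terminal, i.e. closer to Greibach normal form): terminates.
expr2 --> term, expr2_rest.
expr2_rest --> [+], term, expr2_rest.
expr2_rest --> [].

% ?- phrase(expr2, [a,+,a]).   % succeeds
% ?- phrase(expr,  [a,+,a]).   % loops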
DCGs are not the pinnacle of performance, sure, but that's not because of left-recursion. It's because a Definite Clause Grammar evaluated by SLDNF-Resolution is a top-down, left-to-right recursive-descent parser with backtracking, and those are just not very efficient: in the worst case they can take exponential time on a general CFG, whereas a chart parser does it in O(n^3). So if you want better performance out of a DCG, parse it with a chart parser.
"Language grammars" are the same as specifying what is valid and what is invalid. This is done using hierarchical class membership, roughly equal to mathematical sets or programming language types.
I would argue that this can attract users better than the additional formalism you can obtain through logic programming, because once you have determined the validity and class (roughly "set membership") of something, you can potentially generalize about it more easily and you have a clear cognitive boundary.
Speaking from experience, in practice, I often rigidly define one layer of a system and then implement algorithms less rigidly using a less formal approach. This seems standard across the industry. Think about UUIDs, network addresses, etc.
I believe this type of tradeoff is ultimately what killed logic programming. If you really want to do maths, do maths. If you want to define something, do so succinctly. And if you are here to implement an algorithm, use the right tool. Don't pretend the code will write itself. Logic programming always seemed to me a sort of allusion to mathematical inference that didn't transfer well to real world programming problems.
Thanks for the explanation - interesting link. The same sort of rigidity appears in functional programming styles, where it is bound more tightly into the data definition. I'm thinking about the sum types in Haskell, for example, where the programmer can make a rigid definition of the boundary that you are describing, and then it is reflected in the structure of the code that manipulates the types.
There is an algebraic similarity between Horn clauses, BNF grammar clauses (if you consider concatenation and choice to be operators) and sum types. After working in logic programming, functional programming and imperative styles I would agree that the rigid definition part makes it easier to write the code that will sit on top, even if that code is far less rigid.
I once wrote something in Turbo Prolog which had both chaining inference and menus. Chaining inference fit well in the language. Drop-down menus were really painful in pure Prolog.
Prolog has a use case, but it's just not that big.
"Declarative" is sometimes misappropriated by functional programming. Is there a better term? Perhaps explaining it, as in TFA ("specify what, not how")? Or explicitly distinguish from fp? Or some other way?
I think it's fair to say that declarative programming can only work well for "simple" problems (i.e. not requiring turing equivalence), and the strongest wins are when a declarative approach is discerned that didn't seem possible - perhaps by slightly simplifying the problem, to cross this threshold.
"Relational programming" pretty much explains what it is, I think. It also makes it clear that functional programming (at least abstractly) is a special case of relational programming, because functions are special cases of relations (namely those relations that are not one-to-many). It's perfectly possible to program in a functional style in a relational language. (In practice, Prolog lacks some of the amenities of say ML-descendant languages, but Curry has them!)
"Declarative" is in my eyes an umbrella term for logic/relational and functional programming, but the word gets used in many ways.
Relational is closer - though I'm personally wanting a term for when you specify the whole outcome in one go, rather than a composition. Like WYSIWYG (see my other reply).
Yes, the word gets used in many ways, that's basically it.
BTW there's injective programming (only one-to-one relations), so programs are reversible.
I see the declarative vs. operative concept as a multi-tiered division where none of the general purpose languages can ever be on the declarative extreme, but relational programming is indeed the closest one.
You can try running with what wikipedia says [1] or stack overflow [2]. You could say it's PL marketing terminology from the 1980s and 1990s (FP + Prolog v the world). Roughly these languages aren't based on the idea of changing state, but of course that idea eventually ends up in there somewhere (e.g. monads and laziness in Haskell, DCGs in Prolog).
On the positive side, I'd be comfortable saying that SQL is declarative.
I'd be wary about bringing Turing completeness in here. It seems irrelevant.
fp is declarative in the shallow sense of being immutable. But you can't just state what you want (e.g. what constraint programming languages enable); you must write a program. If fp is also declarative, what word is left for describing what I mean? Thus: misappropriation.
Not "Turing completeness", "Turing equivalence". Many very simple systems are turing equivalent, so they can exhibit arbitrary complexity.
A mapping, from what you have to what you want, is an example of declarative programming. WYSIWYG is declarative.
[But arguably this makes "declarative programming" an oxymoron... or we must call html a "programming" language]
Yes, SQL is a lot more declarative; the underlying relational algebra perhaps also is. Something that I'd personally like is a term for systems that enable you to specify the outcome as-is, rather than a composition of steps to achieve it. Perhaps I just want a word for this. I'll give an example:
What you describe sounds like the difference between Relational Algebra (steps) and Relational Calculus (no steps), as formulated by E. F. Codd. He called the algebra "procedural", for that reason, and by that definition functional programming languages would also be called procedural. SQL is a bit of both, but more to the calculus/non-procedural side. Prolog is trying to be both (you can think in steps when you want to, but mostly you shouldn't have to). Answer Set Programming has no notion of steps or ordering at all.
Wow, I've never heard of "relational calculus" (maybe I assumed it a synonym for the algebra?), though I've only read Codd's first relational paper. Sounds perfect! I'll google - though would you recommend a particular reference?
BTW A possible term is programming-by-example (or "exemplary programming", to parallel fp's pun), where you give an example of what you want.
I suppose you end up with a specification language to specify what you want, which might well be very complex, and use composition.
Yeah, I think in prolog you can destructure and structure (construct) in the same expression, but in practice, recursion always seemed involved (IIRC a uni class). XSLT feels kinda similar (though uses paths instead of destructuring).
Answer Set Programming (also new to me) sounds like specifying the set you want.
The difficulty is then describing what you want - sometimes it's easier to state how to make it. Reminds me of AI in golden-era science fiction, where computers would do exactly what you asked (repackaging genie's three wishes).
"Relational completeness of data base sublanguages"
"A data base sublanguage founded on the Relational Calculus"
SQL is most similar to the calculus "sublanguage", so that would be the main point of reference. ISBL was an early implementation of relational calculus[1].
The Datalog Educational System (DES)[2] is the only available implementation of relational calculus (in both its forms), and it also implements relational algebra, Datalog, and SQL!
I agree, in general it's more convenient to state "what the solution should look like", but in some cases we find it easier to give instructions to follow instead. This is true in everyday conversation between humans as well. Cooking recipes are a good example - we generally want to know the steps required, but sometimes it's helpful to add a description of what the end result should be like.
Thanks very much! I had heard of Codd's first two languages as not being "usable by mortals" (non-mathematicians). ALPHA seems to be the first one.
Re: functional programming is a subset of relational programming. I read them as brand names, so it was revealing when you pointed out that they are literally true - formal properties. I wonder, what are their consequences?
For relations, each input can have a set of outputs. So this is like a tree at each call level, ever-expanding. (in general a call graph, not a tree)
For functions, I think it's only that one input always gives the same output, including nested functions. So, only immutability?
For injections, it's just reversibility. A cute idea, implemented by only having injective functions to compose, but hasn't taken off. I expect it's too restrictive, except when reversibility is the purpose.
Just so you know your guidance wasn't wasted: I looked into all these references, and Codd's calculus is based on predicate calculus (like Answer Set Programming), so it all seems inspired by set builder notation, all x such that ...
{ x : property(x) }
Is encryption another example of having the result (the encrypted message) but not knowing how to make it?
I consider declarative and procedural to be readings of a program (i.e, a subjective property). A more interesting property is amenability to formal reasoning, which is what is really wanted.
A distinction from functional programming can be made by noting that in functional programming the input is a proof term, and in logic programming the output is a proof term (evaluation is proof search).
But excluding optimized domain-specific solvers which are making performance/expressiveness/termination tradeoffs, it doesn't seem obvious that programming via proof search has any definite complexity disadvantages in theory (assuming you encode the algorithm in a comparable way), though I admit in practice the existing implementations of logic programming are anemic when it comes to libraries, polish, etc.
I think that Prolog will rise again. If you have an hour to kill, check out this video introduction by Markus Triska about what makes Prolog unique: https://www.youtube.com/watch?v=8XUutFBbUrg
Recently I have been working on getting Trealla Prolog to run in WebAssembly. Support is quite good now, with an easy-to-use JS library[1], Go library[2], and is now one of the best-supported languages for the Spin runtime[3]. I think it's particularly nice for validations, you can write some simple Prolog rules and run the same code on the client and the server now. I would like to explore this more with a dedicated library at some point. Built-in features like DCGs make parsing/generating all kinds of data trivial. I love it.
I think the biggest barrier to Prolog adoption is that it has a really steep learning curve, and most people get burnt by half-assed introductions in college that tick the 'talked about logic programming' box in the syllabus. You really have to think in a completely different way to write a good Prolog program. But, once you get the right mindset, it makes solving complex problems incredibly easy.
I don't really buy the argument that Datalog is enough. Most Datalog implementations I've seen hack on a bunch of extensions to do things like list membership. It would be awesome to see some serious effort put into an efficient Prolog-based database. The closest thing I can think of is TerminusDB, which I've never used but I wish I could convince my company to try it out ;).
My dream is to have some kind of magical serverless persistent Prolog interpreter in the cloud. That's where I hope to take this Wasm stuff at some point. Can you imagine how powerful it would be to use Prolog instead of GraphQL or whatever? Prolog is IMO the perfect language for querying things, with a 1:1 correspondence to how you write the data. It's like SQL without all the ugly bits (I like SQL too, of course). If this sounds cool to you, check out the paper/book on Web Prolog[4] which is a fascinating dive into what a Prolog-based networked query system could look like.
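A toy sketch of what that 1:1 correspondence means (the facts and names are made up):

parent(tom, bob).
parent(bob, ann).
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

?- grandparent(tom, Who).
Who = ann.

The query is written in exactly the same notation as the data it ranges over.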
TypeDB[1] is also doing interesting things. Work on incremental view maintenance is also carrying the deductive database vision forward, but under another banner, such as Materialize[2] (based on Datalog, but speaks SQL) or pgsql-ivm[3].
There was a lot of work done in the 1980s and 1990s on deductive databases. Somehow it didn't work out. See [1] for a survey from 1995 (coauthored by the most cited guy in CS). There is also the semantic web with its query technology. Also didn't work out.
You can take a look at OPA Rego. It’s a datalog-inspired language. It will do query and transformation no problem. But it’s not Prolog. Trealla is very nice.
It's a natural fit for people with a logic background to validate their ideas computationally. That also meant that people in the 80's and 90's used it for AI research.
For me personally it was a great way to be introduced to different programming paradigms.
It's fine for a programming language to live in a niche.
A lot of the early AI community was using Lisp or Lisp derivatives. I remember playing with expert systems and Bayesian belief networks at university in the mid nineties. Bayesian belief networks were basically about assigning probabilities to correlations between facts. Early ML became about trying to learn the probabilities.
I also did a Prolog course and I got to teach programming as a teaching assistant in Java to philosophy students that had Prolog as their only programming experience. So, they understood logic but not loops or assignments.
Just because of performance issues. No other explanation needed, really. Same reason why it took some time for pure functional programming to take off too. The time for Prolog will come.
Prolog has many issues; for instance the lack of types and mode declarations, and backtracking almost never being what you want. This was addressed in Mercury [1] but much of the baby was thrown out with the bathwater. (IMHO Mercury ultimately landed closer to the MLs than to Prolog.) The academic literature suggests classic Prolog got fast enough in the late 1980s / early 1990s on stock hardware [2]; see also Peter Van Roy's intriguing book [3] for what happened next.
IMHO it's worth learning about Prolog as a way of looking at the world -- for instance Isabelle and Lambda Prolog are essentially logic programming engines, albeit with more sophisticated terms and unification algorithms.
I agree that the lack of types is a serious deficiency, but surely that can't be the reason Prolog lost steam? I mean... Python?
As for modes, I'm not convinced they are really needed? I do feel they take away a lot of the beauty of Prolog, and a lot of the incentive to think in relations. (Or at least, I fear they do. I haven't actually programmed in Mercury. Would be happy to be proven wrong.)
Ah, but Python has always had types. Just not static types. It still does not really have static types. Static type validation is done by external tools like MyPy. Python itself ignores type declarations at runtime. Still, Python objects have types; you just cannot reason about them at compile time using only pure Python.
I learnt as an undergrad that every sufficiently large program contains a type system (in the loose PL sense of operations requiring arguments of certain sorts) ... now that can be provided by the language or in an ad hoc way by the program itself. (Yes, shades of the CLOS truism.) Python has learnt this lesson (at least I get this impression from reading hacker news) since I stopped using it circa 2015.
As for modes, well! 1. they were needed for at least one Uni of Melbourne person to receive their PhD degree, and less facetiously 2. they are needed for efficient code generation. They can be inferred for this purpose. One use case you may consider is enforcing the correct dataflow in a DCG. Otherwise they are handy for preventing calls to predicates that would not terminate.
I've never programmed in Mercury but it really looks like an ML to me. It was beautifully engineered in the late 1990s and it's a shame it hasn't received more attention. Its FFI influenced the development of Haskell's.
Prolog doesn't have typed variables, but any Prolog term has a type: the set of Prolog terms that unify with it. In practice, when you write Prolog, your arguments to predicates are always implicitly typed; for example, if you write a program that takes a list as an argument, you'll write something like this:
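(A minimal sketch, with sum_list_/2 as a stand-in name:)

% The [] and [X|Xs] patterns in the clause heads act as the implicit "list" type:
sum_list_([], 0).
sum_list_([X|Xs], Sum) :- sum_list_(Xs, Sum0), Sum is Sum0 + X.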
Or at least you learn to do that quickly enough. And note that this implicit typing is used both to select clauses in a predicate, and also to index clauses in the Prolog program database by their first term (in which case the first term better be a constant though, not a list). So it's a bit of a bug-repellent, and a bit of an efficiency booster, both. Just like you get with static types.
Right, I have generalized a bit. It doesn't necessarily have to be Prolog, but I'm referring to logic-programming in general.
There is even a new language that allows you to embed logic programming inside "regular" (imperative/functional) code. I forget the name though.
There appears to be a lot of activity around minikanren [1]. Personally I'm so used to having a powerful metalanguage around (lambda calculus plus or minus a bit) that I can't imagine programming in pure Prolog any more. Which is to say this is probably the way to go in practice, if a subproblem can be solved using logic programming.
Minikanren allows you to program in relations. That is, not only unidirectional functions, X->Y, but you can leave out the X and give the Y, getting all X, which would result in that Y. I always found that talk at RacketCon impressive [1].
I mean, of course, minikanren is also implemented somehow, using lambda, but usually one would not want to rebuild the things minikanren offers.
I have not yet tried to use minikanren inside an otherwise "normal" program, but I suspect it will just work.
> same reason why it took some time for pure functional programming to take off too
To be pedantic, almost nobody is using a pure functional language even today. Almost all languages that support functional programming are hybrid systems at this point, and still have anti-functional constructs such as classes and objects.
I dare say, it would be very difficult to maintain a large system that was purely functional. Some of the concepts revolving around OOP are indeed quite good and lead to better organized, more easily understood codebases.
The future is probably a hybrid approach, not a pure approach as some advocates would prefer.
Not really, at least not in my terminology. Classes (in the pfp world) are pretty much just a bundle of free functions with some predefined bindings that are available to all those free functions, and putting those functions into their own namespace.
Those are valid things to do even in pfp.
Data classes are technically similar or even identical, but conceptually a different thing (though of course also useful and needed).
> same reason why it took some time for pure functional programming to take off too.
And when exactly did that happen? And no, I am not talking about imperative languages incorporating some functional programming features. When did a "pure functional" PL become mainstream and widely used in professional software engineering?
Because as far as I can see, I see imperative languages dominating pretty much every field.
The languages are not widely used. They play a minor role in specialised applications in the industry. Almost everyone who learns programming uses imperative languages.
So what exactly is the definition of "take off" here? Because, other than Haskell having a bit of media-echo a couple years ago, and an army of fans promising things like "the future of programming", I don't see any takeoff.
My own feeling about the speed of adoption in languages and libraries, nothing more. Something can take off but still play a minor role. Just like LLMs took off recently but still play a minor role in almost every area I know.
> Just like LLMs took off recently but still play a minor role in almost every area I know.
Okay, let's do the comparison.
LLMs became a big story less than a year ago. Since then, they have, among other things, been powering the fastest growing app of all time. They are poised to take over pretty much every NLU functionality, products are being developed every day that wouldn't be possible otherwise, and major players in the industry are throwing billions at them. The impact of LLMs, and other ML achievements, on society is estimated to be so profound that they have caused ripples in international politics, rekindled debates about UBI, made geopolitical positions on chip manufacturing and AI research even more relevant, and set lawmakers all around the globe scrambling to figure out how to deal with this new reality, regarding everything from copyright law to regulation.
So, what is going on in the purely-functional-PL space?
Well, the flagship of these languages, Haskell, had its stable release 12 years ago. Since then, the only software product I can name without googling that's written in it is pandoc. The language is currently number 34 on the TIOBE index. In fact, I am not sure any of the current top 20 could be called "purely functional", which would place these languages below Scratch, Fortran, and Classic Visual Basic in terms of that index.
Regarding usefulness, what have purely functional PLs demonstrated? That they can be challenging to use coming from an imperative background, and that they can be a drag on performance. On the plus side they offer ... yeah, what exactly is it they offer that imperative languages adopting some of their features, like functions as first-class objects, cannot?
So I have a hard time seeing any indicators of an impending take-off for purely functional PLs.
I am aware that most mainstream languages incorporated FP features. And they are the better for it.
But the topic of this discussion is PURE functional programming. See the following quote:
"same reason why it took some time for pure functional programming to take off too."
So when did that happen? Almost every codebase I see is imperative, either procedural or OO. They use mutable state, they use functions with side effects.
The reason why my argument above is built around PFP Languages, is because they are the only ones where pure functional codebases seem to exist from my PoV. I am happy to change my mind if you can point me towards major projects written in a PFP style in non PFP PLs.
> But the topic of this discussion is PURE functional programming.
Yes, exactly. It is NOT about "pure functional programming languages" nor is it about (what's nowadays called "functional programming" features). But you seem to be talking about those two things.
> So when did that happen? Almost every codebase I see is imperative, either procedural or OO. They use mutable state, they use functions with side effects.
Then maybe you need to broaden your horizon. I've worked exclusively(!) in different code bases over the last 5+ years that followed a pure(!) functional style, maybe with very few exceptions for performance or sometimes to make things simpler since it was not Haskell.
ZIO is a PFP library for Scala (even though they don't market it as that, to be more beginner-friendly; read the documentation and you'll see that it's essentially an IO monad on steroids). It is rather new but has already got lots of traction. And even TypeScript now has fp-ts, which is used much more than I expected.
Of course, being a ZIO adopter does not necessarily mean the whole codebase is using ZIO (and hence following a PFP approach) but the chance is high and I personally know two more companies that use ZIO and are not in the list, so there are very likely much more companies that use ZIO. Also, those companies were using a PFP style pretty much everywhere, in every team (backend only though). So it's not like there was just one project in that style or something like that.
When I compare that to 10 years ago, there was pretty much nothing besides Haskell, and Haskell is, as you correctly said, not very prevalent in visible production projects. But check out the list and, to your surprise, you might find some familiar names in there.
> (even though they don't market it as that to be more beginner-friendly, but read the documentation and you'll see that it's essentially an IO-monad on steroids)
> hybrid languages that also incorporate some aspects of functional programming or other programming paradigms. Mercury and Mozart/Oz seem to be two of the more prominent ones
Like the MLs, like the Lisps, etc., power is trumped by convenience.
Whether it is the syntax convenience of C, Java, C#,
or the batteries-included convenience of Python.
It's a crying shame, because we are stuck with the primitive expressiveness of 3rd-generation languages and are unlikely to break free any time soon, if ever. We'd rather get some LLM to generate the reams of boilerplate required to do anything significant.
“The Japanese Did It” is not the correct answer to “Who Killed Prolog?”. Prolog was killed by the failure in the early 1980s of mainstream AI researchers to find out about Prolog, by their willingness to play along with the “appropriate responses” to FGCS, and by their use of FGCS’s failure as an excuse for continued neglect of a promising alternative to Lisp.
Prolog doesn't have a conventional programmer-oriented introduction. You need to at least include a "Hello World" program like this:

main :-
    write('This is a sample Prolog program'),
    write('\nThis program is written into hello.pl file').

?- main.
?- halt.
Next up, there's the fact that math doesn't use =, but rather uses is to do evaluation at runtime, which drove me nuts. You have to use foreach or forall to do loops, and there are subtle differences between them:
?- forall(between(0,5,N), (A is 5-N,writeln(A))).
5
4
3
2
1
0
true.
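To make the =/is distinction above concrete, a small sketch:

?- X = 2 + 3.
X = 2+3.          % = is unification: X is bound to the term +(2,3), nothing is evaluated

?- X is 2 + 3.
X = 5.            % is/2 evaluates the arithmetic expression at runtime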
It's just not programmer-friendly. At least with spreadsheets you can see what's going on; Prolog is just weird.
It's hard to pick up users, if they can't get enough of a grasp of it to make sense of it.
I made a few attempts and finally got it by having a go at the Google Zanzibar problem using SWI-Prolog. I read TAOP and followed a bunch of videos on YouTube. Prolog is nice.
[1]: For fun, take a read through https://en.wikipedia.org/wiki/Fifth-generation_programming_l... , which was a whole thing in the 1980s.