
>> Answer Set Programming (ASP) offers a simple and powerful modeling language to solve combinatorial problems.

I wonder if this isn't the completely wrong way to explain what ASP is for. I'm saying this because until recently I used to think of ASP just like that: a logic programming language that's good for optimisation problems and that is "more declarative" because it implements classical (strong) negation and not just negation-as-failure (a.k.a. default negation) as in Prolog, with its closed-world assumption.

To be honest, I never thought that was particularly interesting. But it turns out that there is a much more interesting motivation for ASP: non-monotonic reasoning, and the ability to represent uncertainty in a purely logical framework, a kind of proof-theoretical framework for reasoning with uncertainty.

This paper is what changed my mind:

Automating commonsense reasoning with ASP and s(CASP)

https://personal.utdallas.edu/~gupta/csr-scasp.pdf

And while I cringe a bit whenever anyone says the words "commonsense reasoning" (because it's typically neither common sense nor much reasoning, in humans, let alone modelled by machines), the first few sections of the paper are a simple, plain-worded, straightforward explanation of what ASP is really about, one that I think will speak to the heart of every logician who has ever looked at Bayes' rule, looked at a gigantic, noisy dataset, and couldn't help thinking of the Mogwai and how you should never feed them after midnight lest they turn into Gremlins (like probabilities turn to statistics when you try to instantiate those random variables with actual, you know, numbers).



ASP has both classical negation and negation-as-failure, but that is not quite why I'd say it is "more declarative". Prolog is capable of performing "imperative" tasks (by this I really mean side effects), while ASP solvers exclusively find stable models (answer sets) that may (or may not) exist for a given logic program. In ASP you can only declare a model with your input, and the output is either one or more stable models, or the knowledge that the program is unsatisfiable.
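To make the "stable models may or may not exist" point concrete, here's a minimal brute-force sketch in Python of the Gelfond-Lifschitz construction; the two-rule program (`p :- not q.` / `q :- not p.`) is just a toy example of my own, not anything from the article:

```python
from itertools import chain, combinations

# Rules as (head, positive_body, negated_body); "not" is negation-as-failure.
# Toy program:  p :- not q.    q :- not p.
rules = [
    ("p", [], ["q"]),
    ("q", [], ["p"]),
]
atoms = {"p", "q"}

def reduct(rules, candidate):
    # Gelfond-Lifschitz reduct: drop rules with a NAF literal that is in
    # the candidate set; strip the NAF literals from the remaining rules.
    return [(h, pos) for (h, pos, neg) in rules
            if not any(a in candidate for a in neg)]

def minimal_model(pos_rules):
    # Least model of a negation-free program, by fixpoint iteration.
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_rules:
            if all(a in model for a in pos) and h not in model:
                model.add(h)
                changed = True
    return model

def stable_models(rules, atoms):
    # A candidate set is stable iff it equals the least model of its reduct.
    subsets = chain.from_iterable(combinations(sorted(atoms), r)
                                  for r in range(len(atoms) + 1))
    return [set(s) for s in subsets
            if minimal_model(reduct(rules, set(s))) == set(s)]

print(stable_models(rules, atoms))  # [{'p'}, {'q'}] -- two stable models
```

Drop either rule and you get exactly one stable model; with `p :- not p.` alone you get none, which is the "may not exist" case.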

You are right about it being more than just a "modeling language to solve combinatorial problems", I agree that description sells it a bit short. As you pointed out, it is well suited for problems involving non-monotonic reasoning and uncertainty. You can encode reasoning that is more reality-hardened, with logical rules to deal with imperfect information.


>> ASP has both classical negation and negation-as-failure, but that is not quite why I'd say it is "more declarative". Prolog is capable of performing "imperative" tasks (by this I really mean side effects), while ASP solvers exclusively find stable models (answer sets) that may (or may not) exist for a given logic program. In ASP you can only declare a model with your input, and the output is either one or more stable models, or the knowledge that the program is unsatisfiable.

Thanks for the insight. I must confess I still don't have a lot of experience with ASP, mainly because of my initial misunderstanding of it.

To clarify, I'm not interested in the declarative aspect of logic programming, so Prolog's side-effect-fulness doesn't bother me. In fact I prefer that Prolog is pragmatic that way and allows itself to be used for practical work that would otherwise have to be delegated to another language or tool. Prolog is a big, dirty ball of cheating, but I've kind of made my home in it and I'm comfortable there.

But my research interest is in machine learning of logic programs (Inductive Logic Programming, ILP). A big part of that is dealing with noise and uncertainty, which traditional approaches to ILP aren't very good at. In recent years there has been a flurry of work on learning either ASP, or with ASP, and I feel a bit of an idiot for only now realising why. My hope is that I can find a way to reuse the ideas in ASP within the learning framework I studied in my PhD, where first-order programs are learned by a form of higher-order SLD-resolution. I think the combination of a sound and refutation-complete inductive algorithm with an elegant treatment of uncertainty could produce something really unique.


You can do cool stuff with s(CASP).

See https://swish.swi-prolog.org/p/non-monotonic_ilp.swinb for an example. The potential hypotheses here are pre-generated, but you can imagine an algorithm, or adapt an existing one, with a tight generalise/specialise loop.

s(CASP) finds the two potential rules that cover both positive examples but not the negative example,

i.e.

flies(X,h8):-not penguin(X).

and

flies(X,h17):-bird(X),not penguin(X).

Which is cool.
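For anyone following along without the notebook, here's a rough Python sketch of the coverage test that singles out those two rules. The ground facts and example names (tweety, sam, pingu) are invented for illustration, and negation-as-failure is approximated by closed-world absence from the facts, which is much cruder than what s(CASP) actually does:

```python
# Toy coverage check; facts and example names are invented for illustration.
facts = {("bird", "tweety"), ("bird", "sam"),
         ("bird", "pingu"), ("penguin", "pingu")}

def holds(pred, x):
    return (pred, x) in facts

# Candidate bodies for flies(X); (pred, True) means "not pred(X)" (NAF),
# read here as closed-world absence from the ground facts.
candidates = {
    "h8":  [("penguin", True)],                   # flies(X) :- not penguin(X).
    "h17": [("bird", False), ("penguin", True)],  # flies(X) :- bird(X), not penguin(X).
}

def covers(body, x):
    # A body covers x when every literal holds: positive literals are in
    # the facts, negated literals are absent from them.
    return all(holds(p, x) != neg for (p, neg) in body)

pos, neg_ex = ["tweety", "sam"], ["pingu"]
good = [name for name, body in candidates.items()
        if all(covers(body, x) for x in pos)
        and not any(covers(body, x) for x in neg_ex)]
print(good)  # ['h8', 'h17'] -- both hypotheses survive
```

A generalise/specialise loop would then grow or prune those bodies instead of testing a pre-generated set.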


Thanks, that's a nice example.

>> The potential hypotheses here are pre-generated, but you can imagine an algorithm, or adapt an existing one, with a tight generalise/specialise loop.

Yes! I'm thinking of how to adapt Louise (https://github.com/stassa/louise) to do that. The fact that s(CASP) is basically a Prolog-y version of ASP (with constraints) could make it a very natural sort of modification. Or, of course, there's always Well-Founded Semantics (https://www.swi-prolog.org/pldoc/man?section=WFS).

What's hyp_gen.pl?


I don't think I have a working hyp_gen.pl anymore, but it was just a simple search over the atoms to add to a rule's body, and their negations.
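Something like this, I imagine (a hypothetical reconstruction sketched in Python, not the original hyp_gen.pl): enumerate every consistent body you can build from the given atoms and their NAF negations:

```python
from itertools import combinations

# Hypothetical hyp_gen-style search: given the atoms allowed in a rule
# body, enumerate every candidate body built from those atoms and their
# NAF negations, skipping bodies that contain an atom and its negation.
body_atoms = ["bird", "penguin"]

def candidate_bodies(atoms):
    # (atom, False) is a positive literal, (atom, True) is "not atom".
    literals = [(a, neg) for a in atoms for neg in (False, True)]
    for r in range(1, len(literals) + 1):
        for combo in combinations(literals, r):
            if len({a for a, _ in combo}) == len(combo):
                yield list(combo)

bodies = list(candidate_bodies(body_atoms))
print(len(bodies))  # 8 consistent candidate bodies for two atoms
```

Each candidate body would then be scored by which positive and negative examples it covers, as in the notebook.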


And adapting Louise would be an interesting thing to work on for sure. Why is it called Louise?


There was an earlier system called Thelma (https://github.com/stassa/thelma), an acronym for "Theory Learning Machine". Then I created a new system and, well, I couldn't resist a bad pun. They're a bit of a tradition in ILP.


Thank you. You convinced me to read the paper, and so far I’m loving it.



