Ph.D. Thesis of Don Geddis

Caching and Non-Horn Inference in Model Elimination Theorem Provers

June 1995

Caching in an inference procedure holds the promise of replacing exponential search with constant-time lookup, at the cost of slightly increased overhead for each node expansion. Caching is useful only if subgoals recur often enough during proofs.
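As a rough illustration of the idea (a minimal sketch only, not DTP's actual representation; the rule format and the names *rules*, *success-cache*, and prove are invented for this example, and the example is propositional rather than first-order), a Horn backward chainer with a subgoal cache might look like this in Common Lisp:

    ;;; A tiny propositional Horn backward chainer with a subgoal cache.
    ;;; Each rule is (head . body); a fact is a rule whose body is empty.

    (defparameter *rules*
      '((a b c) (b) (c b) (d e)))   ; a <- b,c   b is a fact   c <- b   d <- e

    (defparameter *success-cache* (make-hash-table)
      "Maps each proved goal to T, so a repeated subgoal becomes a lookup.")

    (defun prove (goal &optional path)
      "Return true if GOAL follows from *RULES*.  PATH holds the goals open
    on the current branch and is used only as a loop check."
      (cond ((gethash goal *success-cache*) t)           ; cache hit
            ((member goal path) nil)                     ; loop check
            (t (when (some (lambda (rule)
                             (and (eq (first rule) goal)
                                  (every (lambda (subgoal)
                                           (prove subgoal (cons goal path)))
                                         (rest rule))))
                           *rules*)
                 (setf (gethash goal *success-cache*) t)))))

With these rules, (prove 'a) succeeds and the second occurrence of the subgoal B (under C) is answered from the cache, while (prove 'd) fails because E is unprovable. Only successes are cached in this sketch; a goal cut off by the loop check has not really been fully explored, so recording its failure would take more bookkeeping than is shown here.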

In experiments with a backward chainer answering queries over Horn theories, caching appears to be very helpful on average. When trying to extend this success to full first-order (non-Horn) theories, however, intuition suggests that subgoal caches are no longer useful. The reason is that complete first-order backward chaining requires goal-goal resolutions in addition to resolutions with the database, and this introduces context-sensitivity into the proofs of a subgoal. A cache is only feasible if solutions are independent of context, so that they can be copied from one part of the search space to another.
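To make the context-sensitivity concrete, consider the propositional clauses {p, q} and {p, not-q}, from which p follows. Proving p produces the subgoal not-q, and that in turn produces the subgoal not-p, which matches no clause at all; it succeeds only by a goal-goal resolution against its ancestor p. A minimal propositional sketch of this reduction step (again with invented names, and not DTP's actual code) is:

    ;;; A tiny propositional Model Elimination sketch: "extension" against
    ;;; the clause set, plus the goal-goal "reduction" step against ancestors.
    ;;; Literals are symbols or (not symbol); clauses are lists of literals.

    (defparameter *clauses* '((p q) (p (not q))))   ; p-or-q and p-or-not-q

    (defun negate (lit)
      (if (and (consp lit) (eq (first lit) 'not))
          (second lit)
          (list 'not lit)))

    (defun me-prove (goal &optional ancestors)
      "Return true if literal GOAL can be established from *CLAUSES*.
    ANCESTORS lists the goals above this one in the proof tree."
      (cond ((member goal ancestors :test #'equal) nil)          ; prune repeats
            ((member (negate goal) ancestors :test #'equal) t)   ; reduction
            (t ;; Extension: pick a clause containing GOAL; the negations of
               ;; its other literals become subgoals, with GOAL as an ancestor.
             (some (lambda (clause)
                     (and (member goal clause :test #'equal)
                          (every (lambda (lit)
                                   (me-prove (negate lit) (cons goal ancestors)))
                                 (remove goal clause :test #'equal))))
                   *clauses*))))

Here (me-prove 'p) succeeds, but the success of the subgoal (not p) depends entirely on p being among its ancestors; in a different context the same subgoal fails, which is exactly what makes naive caching look infeasible.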

It is shown here that a full exploration of a subgoal in one context actually provides complete information about the solutions to the same subgoal in all other contexts of the proof. Individual solutions from one context may simply be copied into another. More importantly, failure caching is also feasible in the non-Horn case: once a subgoal has been fully explored, no additional solutions that might affect the query are possible in a new context, so there is no need to re-explore the space there. Thus most Horn clause caching schemes can be used, with minimal changes, in a non-Horn setting.

In addition, a new Horn clause caching scheme is proposed: postponement caching. This scheme explores the inference space as a graph instead of a tree, so that a given literal occurs only once in the proof space. Despite the extension of failure caching to non-Horn theories described above, postponement caching is incomplete in the non-Horn case. A counterexample is presented, and possible enhancements to reclaim completeness are investigated.

For further exploration

The full thesis is available in PostScript (230K ps gzip) or as an automatically converted Adobe PDF (550K pdf gzip).

In addition, a first-order theorem prover (automated inference engine) named DTP was created. You can view the README or read the manual in PostScript (87K ps gzip) or Adobe PDF (190K pdf gzip). Also available is the Common Lisp source code (80K tar gzip), along with some sample logical axioms (18K tar gzip).

Note that this was all created circa 1995. Minor updates were made in 2004 to convert the original CLtL2 dialect to ANSI Common Lisp. All of these files can be found in the directory http://don.geddis.org/dtp/.

