A Geometric Approach to Reasoning about the Physical World

VERY PRELIMINARY VERSION

John Nagle

December, 1986

CAVEAT

This note describes some very preliminary thinking on a new problem. It is NOT a complete paper.

INTRODUCTION

The basic problem with expert systems is that they don't know what they are doing. This criticism has been made before, most notably by Dreyfus. But we are not concerned here with philosophical arguments about the need for a sense of self in a reasoning being; our concerns are at a more concrete level.

The form of reasoning used by expert systems is fundamentally superficial, being based on high-level statements of a propositional nature without an underlying model of the applicable world to provide a sanity check on results. For some limited classes of problem, especially those stated in very formal terms and with essentially complete information available in a concise form about the problem space, this is an effective approach. Puzzle-solving and theorem proving fall into this category.

Interestingly, the expert system paradigm also seems to work when the available knowledge about the problem is rather superficial. The phrase "Mycin doesn't know about bacteria" sums this up rather nicely. Here the real situation is so complex that it is usually dealt with in a superficial cause-and-effect fashion without regard to the underlying mechanism.

But when we try to deal with simple (to humans) problems in the physical world, as in robot navigation or in the design, construction, and assembly of physical objects, the rule-based approach seems to fail us. There is a school of thought that claims that if we only had enough rules and fast enough inference engines, the rule-based approach would be able to handle such problems. We are skeptical of this claim; if more computing power were all that was needed, we would already have some very impressive but slow rule-based systems, and we don't. Something is lacking in the rule-based approach.

What seems to be lacking in expert systems is some means by which results obtained by the application of high-level rules can be tested against some notion of how the real world works. So we propose that an expert system be provided with access to some kind of simulation facility. With such a facility, the rule-based portion of the system could be used to generate possible solutions to the problem, which can then be tried out in simulation. The results of the simulation run would then be fed back to the rule-based portion of the system, both to determine whether the proposed solution was valid and to assist the rule-based portion of the system in constructing other proposed solutions. One could think of such a mechanism as a means for conducting "thought experiments".

The above description is rather nebulous; one could not start implementing it tomorrow. So we are working on a design for a preliminary system, incorporating a rule-based reasoning component and a simulation model of modest capabilities, with which we hope to find out if the basic approach described is of value.

DESIGN FOR A PRELIMINARY SYSTEM

To reason symbolically, we need a set of symbolic objects, a universe in which these objects live, and operations that can be performed upon the objects. Most artificial intelligence work has been performed in a universe in which the symbolic objects are named symbols in the mathematical sense, the universe is the space of mathematical or symbolic expressions which satisfy some syntax, and the operations are forms of symbolic manipulation along algebraic lines.

We propose a different approach. We propose that our set of symbolic objects should be solid and rigid three-dimensional objects; that the universe in which these objects live should be a three-dimensional space in which some simple rules are enforced; and that the operations allowed on the objects should be those of physical movement in three-space.

OBJECTS

An object in our system is a solid, connected, three-dimensional construct with finite dimensions. Objects are constructed by combining simple geometric components into larger structures. One approach to building objects that looks promising is to use superquadrics as primitives, along the lines of Pentland's SuperSketch system. This allows us to construct quite complex objects with a minimum of effort, according to Pentland.

During object construction, components of objects may interpenetrate; there may even be "negative components" which cut volumes out of other components. But a finished object is considered to be a rigid, impenetrable unit, and two objects cannot occupy the same space.
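
As a concrete illustration of this style of construction, here is a minimal sketch in Python of a superquadric inside-outside test and of an object built from positive and negative components. The class names and representation are ours, purely for illustration; they are not Pentland's.

    # Sketch only: superquadric primitives combined into a solid object.
    from dataclasses import dataclass

    @dataclass
    class Superquadric:
        a1: float       # semi-axis along x
        a2: float       # semi-axis along y
        a3: float       # semi-axis along z
        e1: float       # north-south "squareness" exponent
        e2: float       # east-west "squareness" exponent

        def inside(self, x, y, z):
            # Standard superquadric inside-outside function; a value of 1.0
            # is the surface, less than 1.0 is the interior.
            f = ((abs(x / self.a1) ** (2.0 / self.e2) +
                  abs(y / self.a2) ** (2.0 / self.e2)) ** (self.e2 / self.e1) +
                 abs(z / self.a3) ** (2.0 / self.e1))
            return f <= 1.0

    @dataclass
    class Component:
        shape: Superquadric
        origin: tuple            # placement in object coordinates
        negative: bool = False   # a "negative component" cuts volume away

    class SolidObject:
        """A finished object: the union of its positive components minus
        the volume covered by its negative components."""
        def __init__(self, components):
            self.components = components

        def contains(self, x, y, z):
            def hit(c):
                ox, oy, oz = c.origin
                return c.shape.inside(x - ox, y - oy, z - oz)
            return (any(hit(c) for c in self.components if not c.negative)
                    and not any(hit(c) for c in self.components if c.negative))

A block with a hole through it, for example, would be one positive component plus one negative, roughly cylindrical, component.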

THE WORKSPACE

The workspace is a three-dimensional Cartesian space in which objects can be manipulated. Objects are manipulated in the workspace but are constructed elsewhere. In the workspace, objects cannot interpenetrate: an object can be brought into being only in empty space, and no object can be moved so that it occupies the same space as another. Movement in the workspace is continuous; objects can be moved only through empty space and must be moved around, not through, obstructing objects.

The workspace has a sort of Aristotelian physics: objects at rest remain at rest, and objects move only if some external actor moves them. Objects can be grouped together and moved as a unit, but objects cannot be used to push other objects around. There is neither gravity nor inertia in the workspace.

Collisions between objects are detected, and a collision stops any attempt to move an object. The coordinates at which the colliding objects came into contact, as well as the coordinates of the handling points (perhaps the centroids) of the objects at the point the move had to stop, are reported to the external actor requesting the move. This gives the external actor a sense of touch.
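
A minimal sketch of this collision-stopped movement follows, with axis-aligned boxes standing in for real solid models; the names and the crude contact report are ours, for illustration only.

    # Sketch only: a move stepped through empty space, stopping on contact.
    from dataclasses import dataclass

    @dataclass
    class Box:
        lo: tuple    # (x, y, z) minimum corner
        hi: tuple    # (x, y, z) maximum corner

        def translated(self, d):
            return Box(tuple(a + b for a, b in zip(self.lo, d)),
                       tuple(a + b for a, b in zip(self.hi, d)))

        def overlaps(self, other):
            return all(self.lo[i] < other.hi[i] and other.lo[i] < self.hi[i]
                       for i in range(3))

        def centroid(self):
            return tuple((a + b) / 2.0 for a, b in zip(self.lo, self.hi))

    def slide(obj, obstacles, direction, distance, step=0.01):
        """Move obj along a unit-vector direction in small steps, stopping
        at the first collision.  Returns (object, distance travelled, report);
        the report is the "sense of touch" handed back to the external actor."""
        travelled = 0.0
        while travelled < distance:
            d = min(step, distance - travelled)
            trial = obj.translated(tuple(d * c for c in direction))
            hit = next((o for o in obstacles if trial.overlaps(o)), None)
            if hit is not None:
                # Approximate the contact coordinates by the centre of the
                # overlapping region; a real solid modeller would do better.
                lo = tuple(max(a, b) for a, b in zip(trial.lo, hit.lo))
                hi = tuple(min(a, b) for a, b in zip(trial.hi, hit.hi))
                contact = tuple((a + b) / 2.0 for a, b in zip(lo, hi))
                return obj, travelled, {"contact": contact,
                                        "handling_point": obj.centroid()}
            obj, travelled = trial, travelled + d
        return obj, travelled, None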

OPERATIONS IN THE WORKSPACE

So far, all we have is a 3-D modeling system. Such a system is useful in and of itself if fitted with a suitable user interface and graphic output mechanisms. But for us this is a means to an end, not an end in itself. Our goal is to implement a form of graphic reasoning, and the workspace and object construction systems are means to this end.

Now that we have our objects, we need operations on them. The workspace is passive; it sits inert until told to do something. The primary operations in the workspace are "insert", "delete", and "move". "Insert" creates a new instance of an object at a given location; "insert" fails if there are objects already at the given location that interfere with the creation of the new object. "Delete" is the inverse operation of "insert". "Move" moves an object or a collection of objects along an arbitrary path; the motion stops if the moving objects collide with other objects.
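
Continuing the sketch above (again, the names are ours), the workspace interface might look like this:

    # Sketch only: the three primitive operations on the workspace.
    class Workspace:
        def __init__(self):
            self.objects = {}          # name -> solid currently in the workspace

        def insert(self, name, solid):
            """Create an instance of an object at its given location;
            fails if anything already there interferes with it."""
            if any(solid.overlaps(other) for other in self.objects.values()):
                return False           # interference: insert fails
            self.objects[name] = solid
            return True

        def delete(self, name):
            """The inverse of insert."""
            return self.objects.pop(name, None) is not None

        def move(self, name, direction, distance):
            """Move an object along a path, stopping if it collides with
            any other object (using slide() from the earlier sketch)."""
            obstacles = [s for n, s in self.objects.items() if n != name]
            moved, travelled, report = slide(self.objects[name],
                                             obstacles, direction, distance)
            self.objects[name] = moved
            return travelled, report

A move along an arbitrary path would be issued as a sequence of such straight-line segments.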

Given these basic operations, we can simulate solutions to a wide variety of navigation, design, and assembly problems.

PICTORIAL REPRESENTATION OF RULES

Rules need not be expressed in textual form. We have devised a new form of rule we call a "detail". A detail is a cross between an architectural detail and an assembly-plant task instruction sheet. A detail has the form of a pair of pictures, one of which represents the situation in which the detail is applicable (the "before" picture) and the other of which represents the desired goal after applying the detail (the "after" picture). There is an explicit correspondence between elements of the two pictures, indicating which elements in the "after" picture correspond to matching elements in the "before" picture. Some elements of the "before" picture may be in some sense generic, so that they will match objects that are similar (in a sense to be defined later, but probably involving tuning parameters of Pentland-type objects).
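
One possible representation of a detail as a data structure, with field names of our own choosing and a deliberately oversimplified matching test:

    # Sketch only: a "detail" as a before/after pair with correspondences.
    from dataclasses import dataclass

    @dataclass
    class Element:
        shape: str               # e.g. "brick", or a superquadric description
        pose: tuple              # position and orientation within the picture
        generic: bool = False    # generic elements match merely similar objects

    @dataclass
    class Detail:
        purpose: str             # the context in which the detail applies
        before: dict             # label -> Element: the "before" picture
        after: dict              # label -> Element: the "after" picture
        correspondence: dict     # after-picture label -> before-picture label

        def applicable(self, scene):
            """Crude applicability test: every element of the before picture
            must be matched by some object in the scene.  A real test would
            compare tuned superquadric parameters, not just shape names."""
            def similar(elem, obj):
                return elem.generic or elem.shape == obj.shape
            return all(any(similar(e, o) for o in scene)
                       for e in self.before.values())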

The obvious application of this mechanism is to show how to assemble something. An "after" picture showing the completed assembly tied to a "before" picture showing an exploded view of the parts might be sufficient direction to an assembly system. More interestingly, we could provide such architectural details as how to place bricks for the types of transitions required in building a chimney, or how to arrange the plumbing for common situations, and let a suitable computer-aided design system do this detail work given a basic representation of the structure to be designed.

Details are purpose-oriented; one only applies a detail if the purpose of the detail is consistent with the problem being solved. Details associated with building construction, for example, would only be applied when a building construction problem was being worked on. In practice, a much finer granularity of purposes will probably be necessary, and some context mechanism of the frame type will almost certainly be required.

We would probably want to apply details as subgoals, rather than as rewrite rules. This would force us to try to accomplish the transformation described by the detail through the use of the primitive "insert", "delete", and "move" operations only. This seems like doing it the hard way, but it is essential to our notion that the constraints enforced by the simulation system are a form of "common sense". Thus, if the application of a detail leads to a physical impossibility, the result will be a failed operation which will be reported back to the rule level, rather than an invalid result from the system.
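
A sketch of what this might look like, continuing the earlier sketches; as_solid and path_for below are hypothetical placeholders for real geometry and motion planning.

    # Sketch only: a detail achieved purely through insert and move, so a
    # physical impossibility surfaces as a failed primitive operation.
    def apply_detail(workspace, detail, bindings, as_solid, path_for):
        """bindings maps before-picture labels to workspace object names;
        as_solid(element) builds a workspace solid from a picture element and
        path_for(solid, element) yields (direction, distance) move segments
        toward the element's pose -- both are placeholders."""
        for label, target in detail.after.items():
            source = detail.correspondence.get(label)
            if source is None:
                # A new element: it can be brought into being only in empty space.
                if not workspace.insert(label, as_solid(target)):
                    return False          # no room; failure reported upward
                continue
            # An existing element: move it toward its pose in the after picture.
            name = bindings[source]
            for direction, distance in path_for(workspace.objects[name], target):
                travelled, report = workspace.move(name, direction, distance)
                if report is not None:
                    return False          # collision: the move was stopped
        return True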

Contrast this approach with that used by Winograd in the Blocks World. There, an attempt was made to convert the geometric problem into a more symbolic one as soon as possible, so that mathematical operations could be performed on the representation. We prefer to work in the geometric domain whenever possible. We recognize that this may require extensive amounts of computation because of the cost of solid geometric modeling computations.

CONNECTING THE SIMULATION TO THE PROPOSITIONAL SYSTEM

One of the big problems is figuring out how to hook the lower-level model, which is simulation-oriented, to the higher-level model, which may be rule-oriented or at least propositional in nature. One possible mode of intercommunication is IF-THEN rules. The idea here is that the simulation level replies to state information from above not with predicate values but with IF-THEN rules relevant to the situation being simulated. This has several virtues; one obvious one is that when the situation being simulated is underconstrained, the simulator can propagate upward IF-THEN rules which describe what the simulation would report if given more information.

The converse case, that of overconstraint, can also be handled. The IF-THEN rules propagated upward should contain, in their hypotheses, terms mentioning only those simulation inputs actually required for the conclusion. Thus, if some of the information flowing downward is irrelevant, the upward flow of information will reflect this.
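
A rough sketch of the shape such an interface might take; the representation below is a guess of ours, not a settled design.

    # Sketch only: the simulation level answers with IF-THEN rules rather
    # than bare predicate values.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        conditions: list   # hypothesis terms naming the inputs actually used
        conclusion: str

    def answer(simulate, facts, needed_inputs, question):
        """facts: state information supplied from above;
        needed_inputs: the inputs this simulation actually requires."""
        missing = [k for k in needed_inputs if k not in facts]
        if missing:
            # Underconstrained: report what the simulation would conclude
            # once the missing inputs were supplied.
            return [Rule(conditions=[f"{k} is known" for k in missing],
                         conclusion=f"simulation can decide {question}")]
        result = simulate({k: facts[k] for k in needed_inputs})
        # Inputs not named in needed_inputs never appear in the hypothesis,
        # so irrelevant (overconstraining) information is visible as such.
        return [Rule(conditions=[f"{k} = {facts[k]}" for k in needed_inputs],
                     conclusion=f"{question}: {result}")]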

This idea needs substantially more development.

CONCLUSION

By no means do we claim that the mechanism described here is sufficient to perform all the functions attributed to "common-sense reasoning" in humans. But it represents a promising line of attack in an area where little progress has been made in some time. And, of course, our claims are pure speculation until something is implemented, running, and shown to be useful.