OMWG TRIP REPORT
Jon Doyle (doyle@lcs.mit.edu)
March 30, 1995
CONTENTS

  Summary of the meeting discussion
  Lessons for the PO effort
As part of the Planning
Ontology (PO) construction effort, I attended a meeting of the Joint Task Force Advanced
Technology Demonstration (JTF-ATD) Object Model
Working Group (OMWG), held at the Institute for Defense Analyses,
Alexandria, Virginia, March 28-29, 1995. The meeting was run by
Deborah Heystek (heystek@ida.org) and 1Lt. Todd Carrico of
Wright-Patterson AFB (tcarrico@alhrg.wpafb.af.mil), with 17 others
attending.
The OMWG's task within the JTF-ATD is to develop the database and
representational objects for use in storing, retrieving, and
communicating JTF-ATD information. The task of this particular
meeting (one of a series) was to identify the object structures for
representing ``plans'' and the ``planning'' process.
In the abstract, the task for this meeting was closely related to our
work on developing a planning ontology, and I believe my attending was
beneficial both for what I learned for use in our effort, and for the
influence---such as it was---I exerted on the OMWG conclusions. In
general, my status as an interloper kept me from trying to argue
points very strongly. Even though I perhaps should have spoken out
more at some points, I feared that doing so might ruin the meeting,
turning it away from the tasks required by the JTF-ATD (for which the
meeting was convened) and towards the issues of primary concern to the
PO effort. Rather than do this, I tried to let the group do its work
and speak out only when absolutely necessary. I wasn't always clear
in my statements, however, and the discussion largely proceeded with
little heed of what I said, at least when I said it. Part of
the difficulty of communication was that the background of most
participants seemed to include little or no AI (I was not positive of
this, however), and I could not think of how to bring to the table
parts of AI I considered relevant at some points without first
attempting some (probably lengthy) partial AI education. This seemed
counterproductive, at least within this meeting, but the result was
that parts of the discussion stumbled over or made steps toward issues
long discussed or understood in AI. To some extent I was surprised by
the lack of awareness of relevant AI concepts and techniques
(especially with regard to description logics and reason maintenance
ideas that extend the object-oriented and audit-trail techniques
familiar to the participants), but this may just be AI chauvinism. In
any event, I saw no reason to think this would cause the effort to
fail, as its aims are more immediate and modest than ours.
The remainder of this report divides into two parts: a summary of the
meeting discussion, and identification of some consequences or lessons
for the PO effort.
SUMMARY OF THE MEETING DISCUSSION

The first---and essentially only---presentation of the meeting was by
Todd Carrico, who described the official JTF planning process using
Harel statecharts. His characterization covered the six main steps
and their immediate substeps, and corresponded well to what I recalled of
the Purple
Book description. The diagrams (online at WPAFB) were
misleading, however, as they present only the nominal steps of the
official process. In particular, they omit all the transitions
possible due to intervening events, which may move the process back to
virtually any prior step. Moreover, the states and transitions often
taken in practice diverge from this description (e.g., the first three
sequential steps are often taken simultaneously, or incompletely, with
the process starting abruptly in the fourth step). Questions from the
audience led to the conclusion that the state transitions in reality
form almost a complete graph.
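To make the statechart point concrete, here is a minimal sketch (my
own illustration, not material from the meeting) of a process
description as named steps plus explicit transitions; the step names
follow my recollection of the six crisis-action phases and should be
treated as approximate.

    # Nominal process: a simple chain of steps. Observed process:
    # nearly the complete graph, since intervening events can return
    # the process to virtually any prior step. Step names illustrative.
    STEPS = [
        "situation development",
        "crisis assessment",
        "COA development",
        "COA selection",
        "execution planning",
        "execution",
    ]

    nominal = {(STEPS[i], STEPS[i + 1]) for i in range(len(STEPS) - 1)}
    observed = {(a, b) for a in STEPS for b in STEPS if a != b}

    print(len(nominal), "nominal transitions;", len(observed), "observed")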
I had several reservations about this description. First, it did not
seem to me to capture the process at the right level of abstraction,
even if it did capture what the Purple Book says. I suggested that
identifying some abstract informational states might help simplify the
rat's-nest reality of the diagram (e.g., whether one has a complete
understanding of the situation, or of the possible courses of action),
but did not state these suggestions very clearly, and not much came of
them.
Second, it eventually became clear to me that the process description
is intended for actual use within the project rather than as a simple
aid to understanding what is going on, and in fact plan objects will
refer back to states within this process description as a way of
indicating where the plan is in the lifecycle of construction,
authorization, and execution. This realization made the likely
abstractional inadequacies more worrying; in the long run (which may
not matter), they seem likely to freeze the planning process into an unrealistic and
inefficient form structured more to accord with the tastes of
bureaucrats lacking automated support than to reflect the variety of
realistic planning processes, especially as technological advances
permit or demand reorganization or re-engineering of these processes.
(I read the ARPI discussions on mixed-initiative
planning as expressing similar reservations about the nominal
Purple Book process.)
Third, statecharts were presented as a language adequate for
describing all processes, a view I found surprising. I did not
contest it, both because it is conceivably true (though I doubt it),
and because it may not matter for the task at hand.
On the plus side, however, it strikes me that a good task for the PO
effort is to use whatever ontology and language elements we develop to
describe realistic planning processes, in particular the JTF
deliberate and crisis planning processes.
The next topic for discussion was how to describe the variety of
organizations and their structures. By and large, the discussion
seemed superficial and confused, which seemed appropriate in
retrospect as the main conclusion that arose (or at least that I
drew) was that the JTF-ATD requires only a few distinctions among
organizations, principally along the military-nonmilitary,
friendly-foe, etc. dimensions, and only a little in the way of
representing the internal structure of organizations. I know there is
a large literature elaborating taxonomies of types of organizations
and organizational structures, and a few remarks in the OMWG
discussion led me to think that some people present also knew
something of that literature. But there was no explicit reference to
the literature, and I don't think any deeply-considered taxonomies
influenced the discussion. For my own part, I could not recall enough
details of any of the literature I think I have seen to support any
sound contribution to the discussion.
The next section of the discussion was both the longest---extending
across both days of the meeting---and the most relevant, addressing
the plan objects directly.
This discussion started by presenting or alluding to a long ``taxonomy''
of plans drawn from the Purple Book, and to some partial taxonomies of
plan types and components of uncertain origin. The Purple Book list
struck me more as a list of roles of plans, and of plans for different
organizations, a taxonomy according to things external to the plans,
rather than a description of types of plans according to internal
structure. The mystery taxonomy came closer to our interests, but
thoroughly mixed ideas from all levels of abstraction, with no
separation of general plan characteristics from extremely
military-specific elements.
I took the opportunity offered by the mixed-level mystery taxonomy to
give a two-minute sketch of the PO effort, both to urge some
abstraction in the plan objects to be developed, and to explain what I
was doing there.
Finally, the discussion turned to writing down and discussing in
detail generic objects for plans and their components. There again
was no great separation of abstract from military concepts, so that,
for example, plans describe goals, missions, and tasks, elements that
might all fall in the same type from a more abstract point of view.
But I did not find this too objectionable. By this point, it was
clear to me that the aims of the JTF-ATD are very short-term, so my
criterion for judgment shifted from conceptual clarity to mere
informational adequacy. That is, I only sought to ensure that each
element in the description had a reasonably coherent interpretation in
terms of the underlying abstractions I knew to be important, and to
ensure that the objects developed did not omit types of information I
knew would be necessary sooner if not later. In any event, much of
the mixed structure developed for ``generic'' plans aimed to capture
directly the standard elements of military operations plans. These
have reasonable divisions among the main classes of elements.
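For concreteness, here is a rough sketch of the kind of generic plan
object the group wrote down. The field names (goal, mission, tasks,
and a reference back into the process description) follow the
discussion; the particular class layout is my own guess.

    # A hedged sketch of a generic plan object; layout is illustrative.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Plan:
        goal: str                # most abstract purpose: the intent behind the mission
        mission: str             # the mission statement
        tasks: List[str] = field(default_factory=list)  # lowest-level goals
        process_state: str = ""  # where the plan sits in the official process description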
The main example of such necessary information was provision for dependency or source
information identifying the provenance of information recorded in plan
objects. I suggested this briefly on the first day as a necessary aid
to revision and adaptation of plans. The discussion on the second day
brought up the topic again, with the participants starting to realize
the need for dependency information and to reinvent the idea of
recording it. I pushed this idea as hard as I thought I could against
the objections of people unaware of the standard AI treatments, who
focused more on the extra costs (real or imagined) that recording this
information incurs than on the utility of the information in guiding
the efforts of planners in adapting plans to changed situations.
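The idea is easy to sketch. The following rough illustration (mine,
in the spirit of the standard reason-maintenance treatments, not the
group's design) records the sources of each plan element so that a
revision can locate exactly the conclusions needing reconsideration;
the example data is invented.

    # Dependency records in the reason-maintenance spirit: each plan
    # element carries the items of information it was derived from.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Justification:
        element: str        # the plan element being justified, e.g. a task
        sources: List[str]  # the information it was derived from

    def affected_by(change, justifications):
        """Plan elements whose recorded sources include the changed item."""
        return [j.element for j in justifications if change in j.sources]

    deps = [
        Justification("seize the port", ["enemy holds the port", "port needed for resupply"]),
        Justification("airlift supplies", ["airfield available"]),
    ]
    print(affected_by("enemy holds the port", deps))  # -> ['seize the port']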
There was an extended discussion of the structure of situations, both
existing and intended. The only interesting feature of this was the
question of whether goal situations should be specified directly or
indirectly as differences from the current situation. Eventually the
direct approach prevailed, in light of the possibility that situation
assessments might change over the course of developing plans without
the goals also changing.
There was an extended discussion of plan purposes. The main effort
was in sorting out the goal-related elements of standard operations
plans, which include (1) a goal, intent, or objective statement, (2) a
mission statement, and (3) a set of tasks to be accomplished. All of
these are goals at some level of abstraction, and it took considerable
discussion with the two planning experts present (Bob Butcher and
Roger Hilfinger) to determine the relations among these elements. The
upshot is that every participant in a military operation is supposed
to know not only his own goals, but also the goals of the next two
echelons above him, so as to be able to act sensibly when events
render his own goals inappropriate or meaningless. Accordingly, the
``goal'', ``objective'', or ``intent'' of a plan (``objective'' was
confusing militarily, since it usually refers to a physical location
or feature, such as Hill 57) characterizes the most abstract purpose
of the plan---the intent behind the mission. The tasks in turn form
the lowest level goals of the plan, indicating the required approach
to achieving the mission, as well as the dimensions along which to
compare alternative courses of action (i.e., how well each alternative
accomplishes each task).
Postscript, April 17, 1995: The OMWG meeting settled on terminology
in which plan objects will have goal, mission, and tasks elements. In
subsequent discussions, I learned that by-the-book commanders will not
know what is meant by ``goal''; the regulation terminology for this
concept is the ``intent'' paragraph of an order.
The final big topic of discussion was called evaluation, but really
mixed together questions of the distinction between plans and actions,
and between decisions and actions. One major discussion concerned
whether courses of action (COAs) could also use the generic plan
object, or whether they require some different representation. The
answer, after debate on both days, was that they could use the same plan
representation. To the extent that I understood the fairly confusing
discussion, the reason was that COAs are not sequences of actions but
subplans (in the planning sense; to many of the participants, subplan
means something else entirely, namely subsequences of a sequence of
primitive actions). The main obstacle to achieving this answer was
the inclusion of sets of alternative COAs in plans---one identified as
the chosen concept of operations---thus making the plan object the
record of a decision as well as of a plan of action. I think the
method of resolution was to say that COA objects need not themselves
list sets of alternatives in this slot.
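The resolution is easy to express in object terms. In the following
self-contained sketch (again my own illustration), a COA is just
another plan object; only the parent plan carries the set of
alternatives, with one marked as the chosen concept of operations.

    # A COA is itself a plan; only the parent plan lists alternatives.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Plan:
        mission: str
        courses_of_action: List["Plan"] = field(default_factory=list)  # empty for COA objects
        concept_of_operations: Optional["Plan"] = None                 # the commander's chosen COA

    coa_a = Plan(mission="amphibious assault")
    coa_b = Plan(mission="airborne assault")
    plan = Plan(mission="seize the island",
                courses_of_action=[coa_a, coa_b],
                concept_of_operations=coa_a)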
Mixed in with the plan/COA discussion was consideration of how COAs
are evaluated and where these evaluations are stored, whether in the
COA objects themselves or in the parent plan object. There was a
brief discussion of what the evaluations look like. From a planner's
point of view, each goal or task of the plan represents a dimension of
evaluation (how well the COA satisfies it), so the overall comparison
is a big matrix of Goals X COAs. I made suggestions about including
possible overall evaluations as rows or columns of this matrix, but
the sentiment seemed to be to not explicitly represent any overall
evaluations apart from the commander's selection of one COA as the
concept of operations of the plan. I did not push for more as there
was generally no provision made for explicit representation of or
reference to the knowledge, doctrine, or general preferences
underlying plans.
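Here is a small illustration of such a Goals X COAs matrix; the
tasks, COAs, and scores are invented, and no overall evaluation is
represented, in keeping with the sentiment above.

    # scores[task][coa]: how well each alternative accomplishes each task.
    tasks = ["secure the airfield", "evacuate civilians", "minimize casualties"]
    coas = ["COA-1", "COA-2"]

    scores = {
        "secure the airfield": {"COA-1": 0.9, "COA-2": 0.6},
        "evacuate civilians":  {"COA-1": 0.5, "COA-2": 0.8},
        "minimize casualties": {"COA-1": 0.4, "COA-2": 0.7},
    }

    for coa in coas:
        print(coa, [scores[t][coa] for t in tasks])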
LESSONS FOR THE PO EFFORT

This meeting convinced me that the PO will be essential to the success of
ARPI-related efforts as we look beyond the short-term horizon
represented by the JTF-ATD. There simply won't be much hope of
hooking together different automated systems, or of exploiting the
variety of AI planning techniques, unless we can clearly identify the
different objects, relationships, and processes at the relevant levels
of abstraction (which might be different for different automated
systems). The meeting also strengthened my conviction that the point
of the PO should be to support all plan-related operations across the
full lifecycle, from plan construction, through execution and
revision, to postmortem analysis and lesson-drawing.
The first surprise I had was about how the OMWG and military planners
understand the term ``plan''.
To military planners---at least the accomplished ones present at the
meeting---a plan is a fully-detailed specification of the movements
and actions of every minute component of a military operation. A plan
in this view is a (or has a) TPFDD (Time-Phased Force and Deployment
Data) or it is not a plan. This
difference in terminology is of little significance except as
something to bear in mind when talking with military planners.
The more interesting notion was one that the OMWG took for granted.
While AI discussions typically take plans to be specifications of
purposive behavior, the OMWG started from a conception of plans as
both specifying purposive behavior and describing the alternative
behaviors considered toward this purpose. To the OMWG, plans contain
several courses of action, and the standard AI notion of a plan is just
that of one of these COAs. This different focus makes sense in a
setting in which plans are not merely executed, but adapted to
changing circumstances and constraints before and during execution.
But to prevent terminological confusion, I think it best to use
``plan'' in the standard AI sense as the specification of purposive
behavior, and to use a term like ``considered plan'' or ``plan
decision record'', etc. for the broader notion. I have used the term
``plan rationale'' for an even more expansive notion, intended to
include the plan, the records of the decisions through which it was
chosen, and in addition, the explanations of the plan, alternatives,
evaluations, and method of choosing in terms of more fundamental or
general knowledge.
The meeting (and especially the comments of the military planners)
also brought out the importance of the notion of authorization in
work toward automating (parts of) the planning process. One can use
all sorts of automated planning and decision-making aids in
constructing and choosing among alternative plans, but the ``final''
decision is up to a human commander, whose decision then authorizes
the plan. Planning systems and staffs may continue to work on the
plan and consider possible revisions, but none of these potential
changes become part of ``the plan'' unless and until the human
commander declares a new authorized plan containing them. Automated
support probably offers the opportunity to revise this procedure
somewhat, to provide for more piecemeal authorization, so that
``reasons'' or ``justifications'' might contain both constructive
dependency information and change-authorization information---so
augmenting reason maintenance with ``authorization maintenance'' if
you will.
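One can sketch what such a record might look like. The following is
speculation along the lines of the paragraph above, with invented
names: a proposed change carries both its constructive dependencies
and its authorization status.

    # "Authorization maintenance": a change joins the authorized plan
    # only once a human commander signs off on it.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Change:
        description: str
        derived_from: List[str]              # constructive dependency information
        authorized_by: Optional[str] = None  # authorizing commander, if any

        @property
        def in_effect(self):
            return self.authorized_by is not None

    rev = Change("shift D-day by 48 hours", ["weather forecast update"])
    assert not rev.in_effect
    rev.authorized_by = "the commander"
    assert rev.in_effect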
Another terminological lesson is that the term ``constraint'' has a
fairly specific meaning in military planning. AI is happy to think of
virtually everything as a constraint (which in my opinion makes the
notion so vacuous as to be unhelpful), but military plans use
``constraint'' in an essentially negative sense as ``restraint'',
e.g., don't use nuclear weapons, or don't bomb hospitals. Positive
``constraints'' constitute the goal, mission, or tasks of the plan,
e.g., get all the US citizens out before Mt. Pinatubo blows up, or
destroy the Republican Guards.
Another constraint-related issue is that the planning process blurs
the distinction between hard and soft constraints. Some tasks may be
declared essential, but COAs are compared by how well they attain all
the tasks, and in particular, all the ``essential'' tasks. There
seemed to be no expectation that one can always find a COA that
achieves all essential tasks---rather, one always has to make
tradeoffs, and the essential tasks are those which count more than the
others. There may be a similar ranking of constraints and expectation
of tradeoffs in observing them, but I didn't think to ask about this
at the time.
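A simple weighted comparison captures this soft reading of
``essential'': essential tasks are not hard requirements but
dimensions that count more than the others. The weights and scores
below are invented for illustration.

    # Essential tasks carry larger weights in the comparison of COAs.
    def coa_score(scores, weights):
        """Weighted sum over tasks."""
        return sum(weights[t] * s for t, s in scores.items())

    weights = {"seize the bridge": 3.0,  # declared essential: counts more
               "jam enemy radar": 1.0}

    coa1 = {"seize the bridge": 0.9, "jam enemy radar": 0.2}
    coa2 = {"seize the bridge": 0.6, "jam enemy radar": 0.9}
    print(coa_score(coa1, weights), coa_score(coa2, weights))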
The final note is that military plans explicitly describe the
agents (plural) for whom the plan is intended, the main points of the
organizational relations among these agents (principally who is the
supported (chief) commander and who are the supporting (subordinate)
commanders), and the communication regime (security methods, etc.) to
be employed between them. Most of the PO discussion has concerned
plans for individual agents, usually left implicit. Standard military
practice indicates the need to explicitly treat at least some aspects
of organizational structure in the PO.
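As a final illustration (again mine, not the OMWG's), the
organizational elements a military plan makes explicit might be
sketched as follows; the PO should eventually cover at least these.

    # Organizational elements explicit in military plans; names invented.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PlanOrganization:
        supported_commander: str  # the chief commander
        supporting_commanders: List[str] = field(default_factory=list)
        communication_regime: str = ""  # security methods, etc.

    org = PlanOrganization("JTF commander",
                           ["air component commander", "land component commander"],
                           "secure channels as specified by the plan")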
Last updated by Jon Doyle (doyle@mit.edu) on April 17, 1995.