avoid humiliating a loved one, keep a promise, or . . .
(make up your own O. Henry story here). Failure to rise
to such an occasion might well be grounds for blaming
a human chess player. Winning or throwing a chess
match might even amount to commission of a heinous
crime (make up your own Agatha Christie story here).
Could Deep Blue’s horizons be so widened?

Deep Blue is an intentional system, with beliefs and
desires about its activities and predicaments on the
chessboard; but in order to expand its horizons to
the wider world of which chess is a relatively trivial
part, it would have to be given vastly richer sources of
“perceptual” input—and the means of coping with this
barrage in real time. Time pressure is, of course, already
a familiar feature of Deep Blue’s world. As it hustles
through the multidimensional search tree of chess,
it has to keep one eye on the clock. Nonetheless, the
problems of optimizing its use of time would increase by
several orders of magnitude if it had to juggle all these
new concurrent projects (of simple perception and self-
maintenance in the world, to say nothing of more devious
schemes and opportunities). For this hugely expanded
task of resource management, it would need extra layers
of control above and below its chess-playing software.
Below, just to keep its perceptuo-locomotor projects in
basic coordination, it would need to have a set of rigid
traffic-control policies embedded in its underlying
operating system. Above, it would have to be able to
pay more attention to features of its own expanded
resources, being always on the lookout for inefficient
habits of thought, one of Douglas Hofstadter’s “strange
loops,” obsessive ruts, oversights, and dead ends. In
other words, it would have to become a higher-order
intentional system, capable of framing beliefs about its
own beliefs, desires about its desires, beliefs about its
fears about its thoughts about its hopes, and so on.
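
To make “higher-order” concrete, here is a minimal sketch, in Python, of what such nested intentional states might look like as a data structure. Nothing in Deep Blue or HAL is claimed to be built this way; the Attitude class, its fields, and the “DB” agent label are invented purely for illustration.

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Attitude:
        """A propositional attitude whose content may itself be another
        attitude, yielding beliefs about beliefs, desires about desires,
        and so on."""
        kind: str                        # e.g. "believes", "desires", "fears", "hopes"
        agent: str                       # who holds the attitude
        content: Union[str, "Attitude"]  # a plain proposition, or another attitude

        def order(self) -> int:
            # Nesting depth: 1 for a first-order attitude,
            # 2 for an attitude about an attitude, and so on.
            return 1 if isinstance(self.content, str) else 1 + self.content.order()

    # "Beliefs about its fears about its thoughts about its hopes"
    # comes out as a fourth-order state:
    hope = Attitude("hopes", "DB", "the queenside attack succeeds")
    belief = Attitude("believes", "DB",
                      Attitude("fears", "DB",
                               Attitude("thinks", "DB", hope)))
    assert belief.order() == 4

The sketch makes only one point: “order” is nesting depth, and each attitude that takes another attitude as its content climbs one level higher.
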
Higher-order intentionality is a necessary precondition
for moral responsibility, and Deep Blue exhibits
little sign of possessing such a capability. There is, of
course, some self-monitoring implicated in any well-
controlled search: Deep Blue doesn’t make the mistake
of reexploring branches it has already explored, for
instance; but this is an innate policy designed into the
underlying computational architecture, not something
under flexible control. Deep Blue can’t converse with
you—or with itself—about the themes discernible in
its own play; it’s not equipped to notice—and analyze,
criticize, and manipulate—the fundamental
parameters that determine its policies of heuristic
search or evaluation. Adding the layers of software that
would permit Deep Blue to become self-monitoring
and self-critical, and hence teachable, in all these ways
would dwarf the already huge Deep Blue programming
project—and turn Deep Blue into a radically different
sort of agent.
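
The innate bookkeeping mentioned above, never re-exploring a branch already explored, can be made vivid with a generic sketch. This is not IBM’s code (Deep Blue’s search ran largely in special-purpose hardware); it is an ordinary memo table of the kind any game-tree searcher might use, with a toy one-pile game standing in for chess. The point is that the table lookup is hard-wired into the search routine itself: the program consults it on every call but has no vantage point from which to inspect or revise the policy.

    class Pile:
        """Toy stand-in for a chess position: one pile of stones,
        players alternately take 1 or 2; taking the last stone wins."""
        def __init__(self, n):
            self.n = n
        def legal_moves(self):
            return [m for m in (1, 2) if m <= self.n]
        def apply(self, m):
            return Pile(self.n - m)
        def evaluate(self):
            return -1 if self.n == 0 else 0   # mover with no stones has lost
        def __hash__(self):
            return self.n   # equivalent positions share a key (toy-sized shortcut)

    def search(position, depth, table=None):
        if table is None:
            table = {}
        key = (hash(position), depth)
        if key in table:
            # Innate policy: a branch already explored to this depth
            # is never explored again; the cached score is reused.
            return table[key]
        moves = position.legal_moves()
        if depth == 0 or not moves:
            score = position.evaluate()
        else:
            # Negamax: the best line for the mover is the worst for the opponent.
            score = max(-search(position.apply(m), depth - 1, table) for m in moves)
        table[key] = score
        return score

    assert search(Pile(3), depth=6) == -1   # three stones: the mover loses
    assert search(Pile(4), depth=8) == 1    # four stones: the mover wins
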
HAL purports to be just such a higher-order intentional
system—and he even plays a game of chess with Frank.
HAL is, in essence, an enhancement of Deep Blue
equipped with eyes and ears and a large array of sensors
and effectors distributed around Discovery 1. HAL is not
at all garrulous or self-absorbed; but in a few speeches
he does express an interesting variety of higher-order
intentional states, from the most simple to the most
devious.