Peter Wolff - Jul 21, 2013:
But anyway, this is one important step of a journey of a thousand miles.
True. I’m just cautious about making that journey’s first step a misstep.
We shall now try to show not only that human behavior can be regular
without being governed by formalizable rules, but, further, that it has to
be, because a total system of rules whose application to all possible
eventualities is determined in advance makes no sense.
In our earlier discussion of problem solving we restricted ourselves to
formal problems, in which the subject had to manipulate unambiguous
symbols according to a given set of rules, and to other context-free
problems such as analogy intelligence tests. But if CS is to provide a
psychological theory—and if AI programs are to count as intelligent—
they must extend mechanical information processing to all areas of
human activity, even those areas in which people confront and solve
open-structured problems in the course of their everyday lives.
Open-structured problems, unlike games and tests, raise three sorts of
difficulties: one must determine which facts are possibly relevant; which
are actually relevant; and, among these, which are essential and which
inessential. To begin with, in a given situation not all facts fall within the
realm of possible relevancy. They do not even enter the situation. Thus,
in the context of a game of chess, the weight of the pieces is irrelevant.
It can never come into question, let alone be essential or inessential for
deciding on a specific move. In general, deciding whether certain facts
are relevant or irrelevant, essential or inessential, is not like taking blocks
out of a pile and leaving others behind. What counts as essential depends
on what counts as inessential and vice versa, and the distinction cannot
be decided in advance, independently of some particular problem, or
some particular stage of some particular game. Now, since facts are not
relevant or irrelevant in a fixed way, but only in terms of human purposes,
all facts are possibly relevant in some situation. Thus for example,
if one is manufacturing chess sets, the weight is possibly relevant
(although in most decisions involved in making and marketing chess sets,
it will not actually be relevant, let alone essential). This situational
character of relevance works both ways: In any particular situation an
indefinite number of facts are possibly relevant and an indefinitely large
number are irrelevant. Since a computer is not in a situation, however,
it must treat all facts as possibly relevant at all times. This leaves AI
workers with a dilemma: they are faced either with storing and accessing
an infinity of facts, or with having to exclude some possibly relevant facts
from the computer’s range of calculations.
But even if one could restrict the universe for each particular problem
to possibly relevant facts—and so far this can only be done by the
programmer, not the program—the problem remains to determine what
information is actually relevant. Even in a nonformal game like playing
the horses—which is much more systematic than everyday open-structured
problems—an unlimited, indefinitely large number of facts remain
as possibly relevant. In placing a bet we can usually restrict ourselves to
such facts as the horse’s age, jockey, past performance, and competition.
Perhaps, if restricted to these facts from the racing form, the machine
could do fairly well, possibly better than an average handicapper; but
there are always other factors such as whether the horse is allergic to
goldenrod or whether the jockey has just had a fight with the owner,
which may in some cases be decisive. Human handicappers are no more
omniscient than machines, but they are capable of recognizing the
relevance of such facts if they come across them.
(“What Computers Still Can’t Do: A Critique of Artificial Reason”, Herbert L. Dreyfus, 1992, pages 257-258)