# FOM: accessibility of abstract notion of algorithm

Vaughan R. Pratt pratt at cs.Stanford.EDU
Wed Oct 29 02:55:41 EST 1997

>From: John Baldwin <jbaldwin at math.uic.edu>
>I taught a `welcome to the university  seminar' last Fall.  There
>was a lot of time on the internet and people playing with the
>game of Life.  Then I tried to talk about an abstract notion of
>computation with cellular automata as the model.  Maybe one
>or two of twenty students got some reasonable notion of the
>idea of algorithm as expressed by an abstract machine.  And this
>was in 5 or 6 hours of lecture supplemented by considerable
>experimentation and play.

I teach an undergraduate automata-and-computability course each
year---deterministic and nondeterministic automata (finite, pushdown,
P-time, Turing) and the languages they accept.  Early on I emphasize
that individual languages are always countable, but that there are
uncountably many languages, only countably many of which can be
regular.  As the language classes increase---regular to deterministic
context-free to context-free to P to NP to recursive to r.e. (and on
occasion I've gone further up the arithmetic hierarchy for fun)---I
keep reminding them that we continue to prove countability of those
classes, and hence uncountability of their complements, the same way.
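The counting argument can be made concrete.  Here is a small Python sketch (my illustration, not part of the original post): regular languages are countable because their descriptions are finite strings, so we can enumerate them, and diagonalizing against that enumeration produces a language agreeing with no regular language.  Python's `re` engine stands in for classical regular expressions; since the pattern alphabet below has no backslash, no backreferences can arise and every compiling pattern denotes a genuinely regular language.

```python
import re
from itertools import islice, product

def strings(alphabet):
    """Every string over `alphabet`, in length-then-lexicographic order."""
    n = 0
    while True:
        for tup in product(alphabet, repeat=n):
            yield "".join(tup)
        n += 1

def regexes():
    """Enumerate the countably many regular languages over {0,1}:
    run through pattern texts and keep those that compile."""
    for src in strings("01|*()"):
        try:
            yield re.compile(src)
        except re.error:          # skip ill-formed patterns like "(" or "*"
            pass

def diagonal_pairs(k):
    """Pair the i-th regex with the i-th string over {0,1}.  The diagonal
    language contains s_i exactly when the i-th regular language does not,
    so it disagrees with every regular language somewhere."""
    return [(r.pattern, s, r.fullmatch(s) is None)
            for r, s in islice(zip(regexes(), strings("01")), k)]
```

The same enumerate-and-diagonalize template works at every level of the hierarchy: replace "regex source texts" by Turing machine descriptions and you get the uncountability of the non-r.e. languages.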

One concept that students absolutely have to get is that size and
complexity are orthogonal properties of a language.  This recognition
does not come easily without explicit help.  Students naturally confuse
language inclusion with class inclusion, thinking of \Sigma* as "more"
than the empty language and hence more work for an automaton to deal
with.  You have to get across the idea that a language is a cut in
\Sigma* and that size (correlated with language inclusion) is the
location of the cut, while complexity (correlated with class inclusion)
is its jagginess, orthogonal to location.
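The cut metaphor can be illustrated in a few lines of Python (my example, not the post's): \Sigma* and the empty language sit at the two extreme locations of the cut yet are both decided by one-state automata, while {0^n 1^n} is a far "smaller" language whose jagged boundary forces unbounded counting — no finite automaton decides it, by the usual pumping argument.

```python
def in_sigma_star(s):
    """The 'largest' cut: every string of 0s and 1s.  Trivially regular --
    a one-state automaton accepts everything."""
    return all(c in "01" for c in s)

def in_empty(s):
    """The 'smallest' cut: no strings at all.  Equally trivial."""
    return False

def in_0n1n(s):
    """{0^n 1^n}: sandwiched between the two, yet its jagged boundary
    needs an unbounded counter -- a pushdown decides it, no DFA can."""
    zeros = 0
    while zeros < len(s) and s[zeros] == "0":
        zeros += 1
    return s[zeros:] == "1" * zeros
```

Size (how much of \Sigma* is included) and the difficulty of the membership test above vary independently: the deciders for the biggest and smallest languages are the simplest of the three.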

This distinction needs to be drawn in two ways, neither of which should
be omitted.  First, explicitly describe the distinction as a generality
as above.  Second, give lots of examples in the form of proofs of
strict language inclusions and strict class inclusions.  The latter is
a basic reason for treating more than a single class such as the
recursive sets---without other classes to compare the class of
recursive sets with, the concept is meaningless and might just as well
be the class of all languages (less the one or two languages you might
have shown are not in it) as far as the student is concerned.

Now suppose that after the 5 or 6 hours, you've told them all sorts of
interesting things about computability, but even after all that they
still have not latched onto the size-complexity distinction.  Without
this sine qua non I would say that they still did not have a clear
picture of computability.  A crash course in computability, however
short, should aim at conveying this concept early on.

I admit that the language perspective warps the exposition in favor of
one-bit outputs, but I would still give the above distinction priority
over the distinction between recursive sets and recursive functions.
Justify your initial focus on decision problems with the slogan "the
first bit is the hardest."
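To make the slogan concrete, here is a hypothetical Python sketch (my addition) of the standard reduction from function computation to decision problems: if the "bit-graph" predicate "bit i of f(x) is 1" is decidable and the output length is computably bounded, the whole function is recovered by asking one decision question per bit.

```python
def function_from_bit_decider(decide_bit, out_bits):
    """Rebuild f from the decision problem 'is bit i of f(x) equal to 1?'.
    decide_bit(x, i) -> bool is the decider; out_bits(x) bounds |f(x)| in bits."""
    def f(x):
        return sum(1 << i for i in range(out_bits(x)) if decide_bit(x, i))
    return f

# Demo with f(x) = x * x.  This decider just peeks at the answer; in the
# theory it would be an arbitrary decision procedure for the bit-graph.
square_bit = lambda x, i: (x * x >> i) & 1 == 1
square = function_from_bit_decider(square_bit, lambda x: 2 * x.bit_length())
```

Once students see this reduction, restricting attention to one-bit outputs loses no generality worth worrying about in a first course.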

>On the other hand, the statement `the domain of a rational function
>consists of all but a finite number of real numbers' seems to
>completely mystify students in my precalculus class after they spent
>a week graphing rational functions.

So it would, out of the blue.  But if you go through the sequence:
define "domain," illustrate using the functions you've been graphing,
then throw rational functions at them until they start correctly and
quickly naming the points not in the domain, and lastly throw in a few
other functions like polynomials and square root for comparison, I
can't see what would be left to mystify them.  To do less is not to
teach.
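For the record, the drill that sequence calls for can be sketched in Python (my addition; the quadratic denominator is just a convenient special case): the points not in the domain of a rational function p(x)/q(x) are exactly the real roots of q, and there are only finitely many of them.

```python
import math

def excluded_points(a, b, c):
    """Real roots of the denominator a*x^2 + b*x + c: the finitely many
    reals NOT in the domain of p(x) / (a*x^2 + b*x + c)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                     # denominator never zero: domain is all reals
    r = math.sqrt(disc)
    return sorted({(-b - r) / (2 * a), (-b + r) / (2 * a)})
```

For example, the domain of (x+1)/(x^2 + x - 6) is all reals except the two roots of the denominator, -3 and 2 — which is just the statement that mystified the class, now read off a function they have graphed.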

Vaughan
