[FOM] Learnability

Jacob Hilton jacob.hilton at gmail.com
Wed Jan 16 23:51:51 EST 2019


The paper exhibits a "learning problem" whose learnability is
equivalent to the statement that the continuum is smaller than
aleph_omega. The significance of this result lies in what it says about
definitions of "learning problem" general enough to encompass their
example: perhaps such definitions are too general for the purpose of
proving meaningful results. However, there is no "realistic" solution
to their learning problem, so the result has no significance for what
machine learning can do in practice. (The authors explain this in their
conclusion, though in my opinion they could have been clearer about it
at the start.)

The full text of the paper is available here
<https://www.nature.com/articles/s42256-018-0002-3>.

Here is the authors' plain English description of their problem:

"In other words, we are faced with the following scenario. There is an
unknown probability distribution P over some finite subset of the interval
[0, 1]. We get to see m i.i.d. (independent and identically distributed)
samples from P for m of our choice. We then need to find a finite subset of
[0, 1] whose P-measure is at least 2/3. The theorem says that the standard
axioms of mathematics cannot be used to prove that we can solve this
problem, nor can they be used to prove that we cannot solve this problem."

(This statement is slightly informal - for instance, they neglect to say
that success is only required with probability 2/3, and that the same m
must work for all P. The formal definition is given in Definition 1 (EMX
learner) and they take X to be [0, 1] and curly F to be the family of
finite subsets of X.)
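
For concreteness, here is a minimal Python sketch of the setup; the
function names and parameter choices are mine, not the paper's:

    import numpy as np

    rng = np.random.default_rng(0)

    def emx_instance(support_size=30, m=10):
        # An unknown P over a finite subset of [0, 1].  The learner sees
        # only the m i.i.d. samples, never the support or the weights.
        support = rng.uniform(0.0, 1.0, size=support_size)
        weights = rng.dirichlet(np.ones(support_size))
        samples = rng.choice(support, size=m, p=weights)
        return support, weights, samples

    def p_measure(output_set, support, weights):
        # P-measure of a learner's finite output set; the learner
        # succeeds if this is at least 2/3.
        return weights[np.isin(support, list(output_set))].sum()

    support, weights, samples = emx_instance()
    print(p_measure(samples, support, weights))  # measure of the samples alone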

The first clue that the size of the continuum is relevant to this
problem is that the order structure on [0, 1] plays no role in the
problem definition - only its cardinality does.

For an informal proof that there is no "realistic" solution to the
problem, let S be a set of 3m points sampled uniformly and
independently from [0, 1], and let P be uniform over S. Thus I am given
at most m points from S, and I need to find a finite set that covers 2m
points from S. My best realistic strategy seems to be to output the
points I have been given, together with many additional points chosen
uniformly and independently from [0, 1]. But S has Lebesgue measure
zero, so each additional point misses S with probability 1; the
strategy therefore covers at most m of the 3m points and fails with
probability 1.
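
A quick Monte Carlo check of this argument (the values of m, the number
of trials, and the number of extra guesses below are my own choices,
not anything from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    m, trials, extra = 100, 1000, 10_000
    covered = []
    for _ in range(trials):
        S = rng.uniform(0.0, 1.0, size=3 * m)        # P is uniform over S
        seen = rng.choice(S, size=m)                 # the m i.i.d. samples
        guesses = rng.uniform(0.0, 1.0, size=extra)  # miss S almost surely
        output = np.concatenate([seen, guesses])
        covered.append(np.isin(S, output).mean())    # fraction of P-mass hit

    print(np.mean(covered))  # roughly 1 - (1 - 1/(3m))^m, about 0.28 << 2/3

The random guesses contribute nothing; only the (at most m distinct)
observed points count.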

However, there is in fact a deterministic strategy that solves the problem
if the continuum is smaller than aleph_omega. This is a fun puzzle, so I've
put the proof below in white - highlight to view. Credit to Nisan Stiennon
(though presumably a similar argument falls out of the proof of Theorem 1
from the paper).

Here is a proof assuming CH; the general case is similar. Let <* be a
well-ordering of [0, 1] of order type omega_1. Given a set M of m
i.i.d. samples from P, let a be <*-maximal in M, and let
A = {x in [0, 1]: x <=* a}. Note that A is countable, and that for
large enough m (chosen independently of P), the P-measure of A is at
least 1 - epsilon with probability at least 1 - epsilon: the P-mass
lying <*-above the sample maximum exceeds epsilon with probability at
most (1 - epsilon)^m. Now let <** be a well-ordering of A of order type
omega, let b be <**-maximal in M, and let B = {x in A: x <=** b}. Then
B is finite, since <** has order type omega, and the same argument
shows that B is as required for large enough m.
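
The order-type-omega stage of this proof can even be run as an honest
computation when the ambient set is the naturals with their usual
order. A sketch under that toy assumption (mine, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)

    def omega_stage(support, weights, m):
        # With the ambient set well-ordered with order type omega, output
        # everything up to the sample maximum.  The P-mass left out
        # exceeds 1/3 with probability at most (2/3)^m.
        samples = rng.choice(support, size=m, p=weights)
        b = samples.max()
        return weights[support <= b].sum()  # P-measure of {x: x <= b}

    support = np.arange(50)               # a finite subset of N
    weights = rng.dirichlet(np.ones(50))  # an arbitrary unknown P on it
    print(np.mean([omega_stage(support, weights, m=20) >= 2/3
                   for _ in range(1000)]))  # success rate, close to 1

Under CH the proof above uses two such rounds (order types omega_1,
then omega); when the continuum is aleph_k, presumably the same idea
iterates through k + 1 rounds.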

On Wed, 16 Jan 2019 at 18:17, Victor Marek <marek at cs.uky.edu> wrote:

> During a routine perusal of the site RealClearScience.com, I read the
> piece about the article "Learnability can be undecidable", by S.
> Ben-David, P. Hrubes et al. The piece is published in the journal
> Nature Machine Intelligence. In that paper the authors appear to show
> that a certain aspect of machine learning (a pretty practical task) is
> equivalent (in ZFC) to the Continuum Hypothesis, which is (as we know
> since P. J. Cohen) undecidable in ZFC.
>
> I never did Machine Learning, and this appears to be absolutely
> incredible. The piece in RealClearScience is a product of a science
> writer, who does not necessarily know what s/he is talking about.
>
> Obviously, the matter is relevant to F.O.M. Could someone in the
> community make this matter clearer for pedestrians such as I?
>
> Thanks,
>
>                 Victor Marek
> Victor W. Marek
> Department of Computer Science, University of Kentucky
> Lexington, KY 40506-0633
> marek at cs.uky.edu / marek at cs.engr.uky.edu
> 859-257-3496 (office), 859-257-3961 (Dept), 859-257-1505 (FAX)
> http://www.cs.uky.edu/~marek