[FOM] Learnability

K. P. Hart k.p.hart at tudelft.nl
Sat Jan 19 14:34:41 EST 2019


In defense of the authors: jumping from finite/discrete to 
infinite/continuous is what we do all the time.
Even Fluid Mechanics has its Continuum Hypothesis: a gas or liquid is 
treated as a continuous body, rather than as a very large but finite 
set of tumbling molecules; see 
https://en.wikipedia.org/wiki/Fluid_mechanics#Assumptions
To say nothing of vibrating strings and membranes, or of sums 
approximated by integrals, etc.
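For instance (my illustration, nothing beyond first-year calculus): the 
harmonic sum is within a bounded error of the corresponding integral,

    \sum_{k=1}^{n} \frac{1}{k} \;=\; \int_{1}^{n} \frac{dx}{x} + O(1) \;=\; \log n + O(1),

and it is the integral, not the sum, that has a closed form.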
Theoretical results from the continuous can, sometimes, be translated 
back to the discrete.
The conclusion in this case would be that there is nothing to translate 
back and that care must be taken
when trying to exploit the continuous.
See also 
http://hartkp.weblog.tudelft.nl/2019/01/11/more-on-machine-learning-and-ch/
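
For readers who want the statement rather than the headline: as I read 
the paper (and Dennis Müller's summary quoted below), the result can be 
sketched as follows; the notation is mine, and the paper has the 
precise definitions. Let Fin([0,1]) be the class of finite subsets of 
[0,1] and let D be the family of finitely supported probability 
measures on [0,1]. An EMX ("estimating the maximum") learner is a map 
G that assigns to each m-element sample S a member of Fin([0,1]) such 
that, for every P in D,

    \Pr_{S \sim P^m}\Bigl[\, P(G(S)) \;\ge\; \sup_{F \in Fin([0,1])} P(F) - \varepsilon \,\Bigr] \;\ge\; 1 - \delta .

(Here \sup_F P(F) = 1, since P has finite support.) The theorem is that 
such a G exists, for small enough constants \varepsilon and \delta and 
some finite sample size m, if and only if 2^{\aleph_0} < \aleph_\omega, 
that is, if and only if there are only finitely many uncountable 
cardinals below the continuum; and that statement is independent of ZFC.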

KP Hart



On 1/18/19 6:27 AM, Patrik Eklund wrote:
> The authors are manipulative already at the start. They say 
> "Approximate a target concept given a bounded amount of data about 
> it." This is wrong. It's not "bounded", it's finite. It can be very 
> big (Big Data), but it is nevertheless finite.
>
> Then they bridge that to PAC (probably approximately correct), and 
> then we are in probability with the reals. Now we are departing from 
> the essence of learning.
>
> Note also things like "access to a training sample of visitors drawn 
> from an (unknown) distribution P". In learning we do not "draw". As 
> they say at the beginning, we are "given ... amount of data".
>
> Learning is approximation from given, finite data.
>
> The authors don't give up. As if by that point they feel they are on 
> thin ice, they start to speak about "posted ads are to be chosen from 
> a given pool of ads". Now they enter the realm of social media 
> marketing, with Facebook, Google, etc. Just take Facebook as an 
> example. There are Adsets and Ads. Ads in Adsets. And a Campaign is a 
> set of Adsets. But again, it's finite. It's very finite. We could play 
> around with the creative (the content of the ad) and start to include 
> real numbers for assigning a distance between one cup of coffee and 
> another in an ad, or in video clips, saying that time is continuous 
> (but in a moving picture it isn't).
>
> ---
>
> This paper is Artificial Artificial Intelligence, where their "The ad 
> problem above becomes an instance of the following EMX problem" is a 
> giant leap from AI to their AAI. It may seem like a small step in 
> their paper, but it is all too giant a leap for practical learning. 
> After that, the paper has nothing to do with AI and learning.
>
> ---
>
> What I would like to see is a totally new approach to brain, cognition 
> and AI. Why not look at reception in cells, pathways, transcription, 
> RNA/DNA and all that? Hormones come into play. They are transported by 
> the nerves and through the blood, into the cells. Intestines are 
> permeable. The large intestine and the bladder "communicate", as do 
> the kidney and the heart, e.g. in the cardiorenal syndrome, and so on 
> and so forth. Water retention in the body. The role of sodium, 
> potassium and many other things like calcium channels. Why don't we 
> analyze these things? Could we analyze stress by means other than 
> arithmetic? Doctors are totally unable to, because they are stuck with 
> populations and comparing mean values. That's baby mathematics.
>
> The brain is finite, but the brain of the gut may be infinite. The 
> brain of the gut may be undecidable?! It certainly cannot be modelled 
> by baby mathematics.
>
> Best,
>
> Patrik
>
>
>
> On 2019-01-17 11:36, Dennis Müller wrote:
>> The following twitter thread by John Carlos Baez summarizes the issue
>> in a rather concise manner:
>> https://twitter.com/johncarlosbaez/status/1083047483368890368
>>
>> Basically: the question of whether an ML agent can generalize a
>> classification scheme with a certain accuracy from a finite set of
>> training data is reducible to finding a finite subset of [0,1] with a
>> sufficiently large P-measure, where the measure P itself is unknown
>> except for N i.i.d. samples. The latter is apparently possible iff
>> there are at most finitely many uncountable cardinals < 2^aleph0.
>>
>>> During a routine perusal of the site RealClearScience.com, I read 
>>> the piece about the article "Learnability can be undecidable", by 
>>> S. Ben-David, P. Hrubes et al. The article is published in the 
>>> journal Nature Machine Intelligence. In that paper the authors 
>>> appear to show that a certain aspect of machine learning (a pretty 
>>> practical task) is equivalent (in ZFC) to the Continuum Hypothesis, 
>>> which is (as we know since P.J. Cohen) undecidable in ZFC.
>>>
>>> I never did Machine Learning, and this appears to be absolutely 
>>> incredible. The piece in RealClearScience is the product of a 
>>> science writer who does not necessarily know what s/he is talking 
>>> about.
>>>
>>> Obviously, the matter is relevant to F.O.M. Could someone in the 
>>> community make
>>> this matter clearer for pedestrians such as I?
>>>
>>> Thanks,
>>>
>>> Victor Marek
>>> Victor W. Marek Department of Computer Science
>>> marek at cs.uky.edu University of Kentucky
>>> marek at cs.engr.uky.edu Lexington, KY 40506-0633
>>> 859-257-3496 (office) 859-257-3961 (Dept)
>>> http://www.cs.uky.edu/~marek 859-257-1505 (FAX)
>>
>>
>> -- 
>> Dennis M. Müller
>> logicalphalluses.net
>> "Mathematics is the music of reason. To do mathematics is to engage in
>> an act of discovery and conjecture, intuition and inspiration; to be
>> in a state of confusion— not because it makes no sense to you, but
>> because you gave it sense and you still don’t understand what your
>> creation is up to; to have a breakthrough idea; to be frustrated as an
>> artist; to be awed and overwhelmed by an almost painful beauty; to be
>> alive, damn it. Remove this from mathematics and you can have all the
>> conferences you like; it won’t matter. Operate all you want, doctors:
>> your patient is already dead."
>>  - Paul Lockhart (on mathematics in school)
>>
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> https://cs.nyu.edu/mailman/listinfo/fom
>

-- 
E-MAIL: K.P.Hart at TUDelft.NL           PAPER: Faculty EEMCS
PHONE:  +31-15-2784572                       TU Delft
FAX:    +31-15-2787245                       Postbus 5031
URL:    http://fa.its.tudelft.nl/~hart       2600 GA  Delft
                                              the Netherlands


