Brian Dolan’s Post

Mathematical Entrepreneur. Building Enterprise Grade Artificial Intelligence for over 20 years.

I think many people interpret the Universal Approximation Theorem to say that “neural networks can do anything.” This is a misread. A key facet of the statement is that it assumes the existence of a function f we are trying to approximate.

Recall that a function maps each input x in its domain X to a single value y in Y. That is, a given input has exactly ONE result. Think of how rarely this holds in real life. How many times are you given a datum and know exactly the output? For natural language, the set of inputs where this holds essentially has measure 0. For histology, it is also about measure 0.

Bottom line: please stop telling me Deep Learning “figures it out.” It does not, and the UAT almost certainly does not apply to your situation.

#datascience #deeplearning #artificialintelligence #mathematics

Jim Cooper David Hubbard Noelle Saldana RJ Smith Anthony J. Annunziata Dave Whelan Todd Terrazas Robert Rovetti Sarah Nowak
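The single-valued point above can be illustrated with a small sketch (my own example, not from the post): when the same input appears with several different labels, the data is a relation rather than a function, so there is no f(x) for the UAT to promise an approximation of. A least-squares model fit to such data can only recover a conditional average, an output the data itself never contains.

```python
import numpy as np

# Four copies of the identical input x = 1.0, with contradictory outputs:
# the "dataset" maps 1.0 to both 0 and 2, so it is not a function.
X = np.array([1.0, 1.0, 1.0, 1.0])
y = np.array([0.0, 0.0, 2.0, 2.0])

# Fit y ≈ w*x + b by least squares.
A = np.column_stack([X, np.ones_like(X)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# The model can only predict the conditional mean of y given x.
pred = w * 1.0 + b
print(round(float(pred), 6))  # 1.0 — a value that never occurs in the data
```

Nothing here is specific to neural networks: any mean-squared-error learner, however expressive, collapses the two contradictory answers into their average rather than “figuring out” which one was meant.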

Mario Marhuenda Beltrán

PhD student at Radboud University (Nijmegen, Netherlands)

2y

Perhaps you can explain it to me, but I don't understand your complaint. Yes, the UAT assumes there is a function to approximate because, well, the hint is in the name. If you don't have a function, then ML can do nothing for you, because you don't have a good enough model of what you're studying.
