“No Free Lunch” theorems: from supervised learning to blackbox optimization

The No Free Lunch theorems prove that, under a uniform distribution over induction problems (search problems or learning problems), all induction algorithms perform equally well. In an academic paper, David Wolpert of the Santa Fe Institute explains that the importance of the theorems lies in their use for analyzing scenarios involving non-uniform distributions, and for comparing different algorithms without any assumption about the distribution over problems at all.
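
For reference, one standard formal statement of the result, in the notation of Wolpert and Macready's original 1997 NFL paper (a paraphrase of the classic result, not a formulation taken from the paper discussed here), is:

```latex
% For any two algorithms a_1 and a_2 run for m steps: summed uniformly over
% all objective functions f, the probability of observing any particular
% sequence d_y^m of m cost values is identical, so averaged over all
% problems no algorithm outperforms another.
\[
  \sum_{f} P\!\left(d_y^m \mid f, m, a_1\right)
  \;=\;
  \sum_{f} P\!\left(d_y^m \mid f, m, a_2\right)
\]
```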

In particular, the theorems prove that anti-cross-validation (choosing among a set of candidate algorithms based on which has the worst out-of-sample behavior) performs as well as cross-validation, unless one makes an assumption, one that has never been formalized, about how the distribution over induction problems is related to the set of algorithms one is choosing among with (anti-)cross-validation.
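
To make the contrast concrete, here is a minimal sketch in Python; the dataset, the candidate algorithms, and the use of scikit-learn are our illustrative choices, not the paper's. It selects one model by cross-validation and one by anti-cross-validation; the theorems say that, averaged over a uniform distribution of problems, neither selection rule is expected to beat the other.

```python
# Sketch: cross-validation vs. anti-cross-validation model selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, random_state=0)
candidates = {
    "knn": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=0),
    "logreg": LogisticRegression(max_iter=1000),
}

# Mean out-of-sample accuracy of each candidate, estimated by 5-fold CV.
scores = {name: cross_val_score(est, X, y, cv=5).mean()
          for name, est in candidates.items()}

cv_choice = max(scores, key=scores.get)       # cross-validation: best score
anti_cv_choice = min(scores, key=scores.get)  # anti-cross-validation: worst score
print(scores)
print("CV picks:", cv_choice, "| anti-CV picks:", anti_cv_choice)
```

On any one benchmark the cross-validation choice will usually look better; the theorems' point is that this advantage reflects an (often implicit) assumption about the distribution of problems, not a distribution-free guarantee.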

In addition, the theorems establish strong caveats concerning the significance of the many results in the literature that establish the strength of a particular algorithm without assuming a particular distribution over problems. They also motivate a “dictionary” between supervised learning and blackbox optimization, which allows one to “translate” techniques from supervised learning into the domain of blackbox optimization, thereby strengthening blackbox optimization algorithms.
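
As one illustration of this “translation” idea (our construction, not the paper's actual dictionary), a supervised learner can serve as a surrogate model inside a blackbox optimizer: fit a regressor to the evaluations seen so far, then query the point where the model predicts the lowest value.

```python
# Sketch: a supervised learner used as a surrogate in blackbox optimization.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def blackbox(x):
    # Stand-in for an expensive objective we want to minimize.
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=8)            # initial random probes
y = np.array([blackbox(x) for x in X])

for _ in range(20):
    # Supervised-learning step: fit a model to the evaluations so far.
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X.reshape(-1, 1), y)
    # Optimization step: evaluate where the surrogate predicts the minimum.
    # (A real method would add an exploration term to avoid getting stuck.)
    grid = np.linspace(-2, 2, 401)
    x_next = grid[np.argmin(model.predict(grid.reshape(-1, 1)))]
    X = np.append(X, x_next)
    y = np.append(y, blackbox(x_next))

print("best x:", X[np.argmin(y)], "best value:", y.min())
```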

Read the full paper
