
Foundational Papers in Complexity Science pp. 2649–2697
DOI: 10.37911/9781947864559.84

No Free Lunch

Authors: Charlie Strauss, Los Alamos National Laboratory; and Vijay Balasubramanian, University of Pennsylvania and Santa Fe Institute


Excerpt

Few theorems are as universal, as widely misunderstood, and as absolutely infuriating as the series of no-free-lunch theorems. The theorems ultimately stem from the same basic insight but have been tailored to the specific cases of “search,” “optimization,” “generalization,” and others, with profound implications for those fields. I daresay that many, many papers would never have been published if all authors were required to explain how they managed to violate the iron-clad bounds of the no-free-lunch theorem.

In a nutshell, the theorem for search or optimization says that, averaged over all possible objective functions on a finite search space, no search algorithm finds the global minimum faster than any other. This has maddening implications: whenever a paper debuts a new oh-so-clever search innovation and demonstrates on a benchmark set that it needs fewer steps than everything else to find the minimum, there is an annoying corollary. If an algorithm is shown to outperform on a selected set of tests, that is equivalent to proving that it must underperform, on average, on the problems outside the test set! That is, by claiming a better algorithm on the strength of empirical tests, you are actually proving it is worse everywhere else.
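The averaged-over-all-functions claim is small enough to check by brute force on a toy problem. The sketch below (not from the original paper; the problem sizes, algorithm names, and tie-breaking rules are illustrative assumptions) enumerates every objective function mapping five points to three values and compares a "dumb" fixed-order scan against an adaptive, hill-climbing-style heuristic, counting how many evaluations each needs before it first queries a global minimizer.

```python
from itertools import product

# Toy setup (illustrative choice, not from Wolpert's paper): every possible
# objective function f mapping 5 points to 3 values, enumerated exhaustively.
POINTS = range(5)
VALUES = range(3)

def steps_to_min(f, order):
    """Evaluations needed before a global minimizer of f is first queried,
    when points are visited in the given non-repeating order."""
    best = min(f)
    for step, x in enumerate(order, start=1):
        if f[x] == best:
            return step
    raise AssertionError("order must visit every point")

def left_to_right(f):
    """'Dumb' search: scan the points in a fixed order, ignoring f entirely."""
    return list(POINTS)

def greedy_neighbor(f):
    """'Clever' adaptive search: start in the middle, then repeatedly move to the
    unvisited point nearest (in index) to the best value observed so far.
    Each choice uses only the values of points already visited."""
    visited = [2]  # arbitrary starting point
    while len(visited) < len(POINTS):
        best_x = min(visited, key=lambda x: f[x])
        unvisited = [x for x in POINTS if x not in visited]
        visited.append(min(unvisited, key=lambda x: abs(x - best_x)))
    return visited

# Average the cost of finding the global minimum over *every* possible function.
n_funcs = len(VALUES) ** len(POINTS)
for name, algo in [("left-to-right", left_to_right), ("greedy-neighbor", greedy_neighbor)]:
    total = sum(steps_to_min(f, algo(f)) for f in product(VALUES, repeat=len(POINTS)))
    print(f"{name:>15}: average steps = {total / n_funcs:.4f}")
```

Both averages come out identical: once you average over all objective functions, the adaptive heuristic buys nothing, exactly as the theorem predicts. Any benchmark set on which it wins must be balanced by functions on which it loses.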
