How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved.

Interesting piece from The New Yorker. Ted Chiang illustrates his argument with some fundamental computer-science concepts. I think his point still holds.
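
To make the trade-off concrete, here is a minimal Python sketch of my own (not from the article): an insertion sort "specialized" for nearly-sorted input, against a general heap-based sort whose comparison count is roughly the same for any input order. The specialized sorter wins in the situation it was optimized for and collapses in one it never anticipated.

```python
# A toy illustration of the specialization/generality trade-off:
# a sorter optimized for one situation (nearly-sorted input)
# versus a general-purpose sorter with a uniform cost bound.
import heapq
import random

def insertion_sort_comparisons(xs):
    """Insertion sort: ~n comparisons on nearly-sorted input, ~n^2/2 worst case."""
    xs = list(xs)
    comparisons = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            comparisons += 1
            if xs[j - 1] <= xs[j]:
                break  # element already in place
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return comparisons

def heapsort_comparisons(xs):
    """Heap-based sort: ~n log n comparisons regardless of input order."""
    count = [0]

    class Counted:
        def __init__(self, v):
            self.v = v
        def __lt__(self, other):  # heapq compares items with <
            count[0] += 1
            return self.v < other.v

    heap = [Counted(v) for v in xs]
    heapq.heapify(heap)
    for _ in range(len(xs)):
        heapq.heappop(heap)
    return count[0]

random.seed(0)
n = 2000
nearly_sorted = list(range(n))
for _ in range(10):  # the situation the specialized sorter was built for
    i, j = random.randrange(n), random.randrange(n)
    nearly_sorted[i], nearly_sorted[j] = nearly_sorted[j], nearly_sorted[i]
never_anticipated = list(range(n, 0, -1))  # reverse order: its worst case

for name, data in [("nearly sorted", nearly_sorted),
                   ("reverse sorted", never_anticipated)]:
    print(f"{name:>14}: specialized={insertion_sort_comparisons(data):>9,}"
          f"  general={heapsort_comparisons(data):>7,}")
```

This is only a cartoon (adaptive algorithms like Timsort recover much of both behaviors), but it captures the flavor of Chiang's point: an optimization is a bet on which situations will occur, and a bet that pays off in every situation is hard to come by.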