Yossi Kreinin recently posted three excellent articles on runtime performance. Efficiency is fundamentally at odds with elegance starts off with Bjarne Stroustrup’s ludicrous claim that C++ doesn’t make a trade-off between runtime performance and developer productivity because it achieves both. Debunking this claim is hardly necessary for anyone who has ever used C++ and at least one other modern language, but Yossi shoulders the heroic task anyway, and he does not shy away from potshots in the other direction, either:
Of course there are plenty of perfectionists who, instead of rationalizing C++’s productivity problems, spend their time denying that Python is slow, or keep waiting for Python to become fast. It will not become fast. Also, all its combinations with C/C++ designed to remedy this inefficiency will forever be ugly. We had psyco, PyPy, pyrex, Cython, Unladen Swallow, CPython extension modules, Boost.Python, and who knows what else. Python is not designed to be efficient; it’s designed for productivity and for extensibility through a necessarily ugly C FFI. The tradeoff is fundamental. Python is slow forever. Python bindings are ugly forever.
Stepping back from extremist propaganda claims, however, the dichotomy is not quite so absolute. Yossi’s next article, Is program speed less important than X?, notes that sometimes higher execution speed itself results in greater developer productivity – namely when the programmer spends less time waiting for slow tools to finish their job, or researching fancy algorithms and workarounds to counteract a language’s slowness.
And if that’s not paradoxical enough for you, Amdahl’s law in reverse: the wimpy core advantage shows how n “wimpy” CPU cores, each running at 1/n the speed of a single “brawny” core, can greatly outperform the latter. This happens when tasks are dominated by external latency, e.g. due to random memory access, which takes just as long on a fast core as on a slow one: spread across multiple slower cores, those waits overlap in parallel instead of adding up.
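The arithmetic behind this can be sketched with a toy cost model (all numbers below are hypothetical, chosen only to illustrate a latency-dominated workload):

```python
# Toy cost model for "Amdahl's law in reverse": each task needs
# `compute` seconds of CPU work on a brawny core, plus `stall` seconds
# of memory latency that no core, fast or slow, can shrink.

def brawny_time(tasks, compute, stall):
    """One fast core runs all tasks back to back; stalls add up serially."""
    return tasks * (compute + stall)

def wimpy_time(tasks, compute, stall, n):
    """n cores at 1/n speed: compute takes n times longer per task,
    but the memory stalls overlap because the cores wait in parallel."""
    tasks_per_core = tasks / n
    return tasks_per_core * (n * compute + stall)

# Hypothetical latency-bound workload: 1 us of compute, 100 us of stall.
tasks, C, M, n = 1_000, 1e-6, 100e-6, 8
print(brawny_time(tasks, C, M))    # 0.101 s on the single brawny core
print(wimpy_time(tasks, C, M, n))  # 0.0135 s on eight wimpy cores
```

With compute at 1% of the stall time, eight wimpy cores finish roughly 7.5× sooner: the total compute cost is unchanged (n cores, each n× slower), while the stall time is divided by n. As compute grows relative to stall, the advantage shrinks and eventually reverses, which is the trade-off the article is about.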