Microbenchmarks Are Experiments (mrale.ph)
14 points by zdw 3 hours ago | 2 comments

It’s cool to see this kind of analysis, even if it’s analyzing a totally bogus benchmark.

If you want to compare language runtimes, compilers, or CPUs, you have to pick a larger workload than a single loop. So if a microbenchmark is an experiment, it is a truly bad experiment indeed.

Reason: loops like this are easy for compilers to analyze, in a way that makes them unrepresentative of real code. The hard part of writing a good compiler is handling the hard-to-analyze cases, not loops like this. So if a runtime does well on a bullshit loop like this, it doesn't mean it'll do well on real stuff.

(Source: I wrote a bunch of the JSC optimizations including the loop reshaping and the modulo ones mentioned by this post.)
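For concreteness, the kind of loop in question looks roughly like this (a hypothetical Kotlin stand-in, not the actual benchmark code):

    // Hypothetical stand-in for the kind of loop being discussed
    // (not the exact code from the post). The divisor is a compile-time
    // constant, so an optimizing compiler can strength-reduce the modulo
    // to multiply/shift arithmetic and reshape or even collapse the loop,
    // which is exactly why a good number here says little about real code.
    fun hotLoop(n: Int): Long {
        var sum = 0L
        for (i in 0 until n) {
            sum += i % 13          // constant modulo: an easy strength-reduction target
        }
        return sum
    }

    fun main() {
        println(hotLoop(1_000_000_000))   // keep the result observable
    }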


And what if the runtime does poorly even on such a simple loop? Go is surprisingly slow here compared to Java and Kotlin.

I agree with the author of the blog here - microbenchmarks are just experiments, and they can be very useful if you do a proper analysis of the results. You can definitely learn something about the runtimes even from such a simple for-loop benchmark.
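
Even a throwaway harness needs a few basic controls before the numbers are worth analyzing; something along these lines (a hypothetical Kotlin sketch, not the blog's actual setup):

    import kotlin.system.measureNanoTime

    // Hypothetical harness sketch for the "experiment" framing: warm the JIT
    // up, take several samples instead of one, and keep the result live so
    // the loop can't be dead-code eliminated. This doesn't make the workload
    // any more representative; it just makes the measurements meaningful.
    fun workload(n: Int): Long {
        var sum = 0L
        for (i in 0 until n) sum += i % 13
        return sum
    }

    fun main() {
        var sink = 0L
        repeat(5) { sink += workload(100_000_000) }            // warmup runs

        val samples = List(10) {
            measureNanoTime { sink += workload(100_000_000) }  // timed runs
        }
        println("sink=$sink")                                  // consume the result
        println("min=${samples.minOrNull()} ns  max=${samples.maxOrNull()} ns")
    }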




