%0 Conference Paper
%B Proceedings of the compilation of the co-located workshops on DSM'11, TMC'11, AGERE!'11, AOOPES'11, NEAT'11, & VMIL'11
%D 2011
%T A microbenchmark case study and lessons learned
%A Gil, Joseph Yossi
%A Lenz, Keren
%A Shimron, Yuval
%C New York, NY, USA
%I ACM
%K benchmark
%K measurements
%K steady-state
%P 297–308
%R 10.1145/2095050.2095100
%S SPLASH '11 Workshops
%U http://doi.acm.org/10.1145/2095050.2095100
%X The extra abstraction layer posed by the virtual machine, the JIT compilation cycles, and the asynchronous garbage collection are the main reasons that make benchmarking Java code a delicate task. The primary weapon in battling these is replication: "billions and billions of runs" is a phrase sometimes used by practitioners. This paper describes a case study, which consumed hundreds of hours of CPU time, and tries to characterize the inconsistencies in the results we encountered.
%@ 978-1-4503-1183-0