|Title||A microbenchmark case study and lessons learned|
|Publication Type||Conference Paper|
|Year of Publication||2011|
|Authors||Gil, J Y, Lenz K, Shimron Y|
|Conference Name||Proceedings of the compilation of the co-located workshops on DSM'11, TMC'11, AGERE!'11, AOOPES'11, NEAT'11, & VMIL'11|
|Conference Location||New York, NY, USA|
|Keywords||benchmark, measurements, steady-state|
The extra abstraction layer posed by the virtual machine, the JIT compilation cycles, and the asynchronous garbage collection are the main reasons that make the benchmarking of Java code a delicate task. The primary weapon in battling these is replication: "billions and billions of runs" is a phrase sometimes used by practitioners. This paper describes a case study, which consumed hundreds of hours of CPU time, and tries to characterize the inconsistencies in the results we encountered.
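The replication strategy the abstract alludes to is often implemented as a warm-up phase followed by steady-state measurement. The following is a minimal, hypothetical sketch of that idea (the class name, workload, and iteration counts are illustrative assumptions, not taken from the paper): early iterations are discarded so JIT compilation can settle, and only later samples are aggregated.

```java
// Hypothetical sketch: replicate a measurement many times, discard
// warm-up iterations (while the JIT compiles the hot path), and
// report a mean over the remaining steady-state samples.
public class MicrobenchmarkSketch {
    // Placeholder workload; any method under test would do.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        final int warmup = 50;    // iterations discarded during JIT warm-up
        final int measured = 200; // steady-state iterations actually recorded
        long[] samples = new long[measured];
        for (int i = 0; i < warmup + measured; i++) {
            long start = System.nanoTime();
            long result = workload();
            long elapsed = System.nanoTime() - start;
            if (result == 0) throw new AssertionError(); // keep workload live
            if (i >= warmup) samples[i - warmup] = elapsed;
        }
        long total = 0;
        for (long s : samples) total += s;
        System.out.println("mean ns per run: " + (total / measured));
    }
}
```

Even with such precautions, the paper's point is that results can remain inconsistent across runs; asynchronous garbage collection and recompilation can perturb any individual sample.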