Conference Paper
Georges, A, Eeckhout L, Buytaert D.  2008.  Java performance evaluation through rigorous replay compilation. Proceedings of the 23rd ACM SIGPLAN conference on Object-oriented programming systems languages and applications. :367–384.
Singer, J.  2011.  A literate experimentation manifesto. Proceedings of the 10th SIGPLAN symposium on New ideas, new paradigms, and reflections on programming and software. :91–102.
Gil, JY, Lenz K, Shimron Y.  2011.  A microbenchmark case study and lessons learned. Proceedings of the compilation of the co-located workshops on DSM'11, TMC'11, AGERE!'11, AOOPES'11, NEAT'11, & VMIL'11. :297–308.
Bailey, DH.  2009.  Misleading performance claims in parallel computations. Proceedings of the 46th Annual Design Automation Conference. :528–533.
Bachmann, A, Bird C, Rahman F, Devanbu P, Bernstein A.  2010.  The missing links: bugs and bug-fix commits. Proceedings of the eighteenth ACM SIGSOFT international symposium on Foundations of software engineering. :97–106.
Zannier, C, Melnik G, Maurer F.  2006.  On the success of empirical studies in the international conference on software engineering. Proceedings of the 28th international conference on Software engineering. :341–350.
Frachtenberg, E, Feitelson DG.  2005.  Pitfalls in parallel job scheduling evaluation. Proceedings of the 11th international conference on Job Scheduling Strategies for Parallel Processing. :257–282.
Mytkowicz, T, Diwan A, Hauswirth M, Sweeney PF.  2009.  Producing wrong data without doing anything obviously wrong! ASPLOS '09: Proceedings of the 14th international conference on Architectural support for programming languages and operating systems. :265–276.
Vitek, J, Kalibera T.  2011.  Repeatability, reproducibility, and rigor in systems research. Proceedings of the ninth ACM international conference on Embedded software. :33–38.
Drummond, C.  2009.  Replicability is not Reproducibility: Nor is it Good Science. The 4th workshop on Evaluation Methods for Machine Learning.
Kalibera, T, Jones R.  2013.  Rigorous Benchmarking in Reasonable Time. Proceedings of the 2013 International Symposium on Memory Management. :63–74.
Basili, VR.  1996.  The role of experimentation in software engineering: past, current, and future. Proceedings of the 18th international conference on Software engineering. :442–449.
Hoefler, T, Belli R.  2015.  Scientific Benchmarking of Parallel Computing Systems: Twelve Ways to Tell the Masses when Reporting Performance Results. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. :73:1–73:12.
Curtsinger, C, Berger ED.  2013.  STABILIZER: Statistically Sound Performance Evaluation. Proceedings of the Eighteenth International Conference on Architectural Support for Programming Languages and Operating Systems. :219–228.
Georges, A, Buytaert D, Eeckhout L.  2007.  Statistically rigorous java performance evaluation. OOPSLA '07: Proceedings of the 22nd annual ACM SIGPLAN conference on Object-oriented programming systems and applications. :57–76.
Clark, B, Deshane T, Dow E, Evanchik S, Finlayson M, Herne J, Matthews JN.  2004.  Xen and the art of repeated research. Proceedings of the annual conference on USENIX Annual Technical Conference. :47–47.
Journal Article
Feynman, R.  1974.  Cargo Cult Science. Engineering and Science. 37(7):10–13.