Biblio

Conference Paper
Perry, DE, Porter AA, Votta LG.  2000.  Empirical studies of software engineering: a roadmap. Proceedings of the Conference on The Future of Software Engineering. :345–355.
Mytkowicz, T, Diwan A, Hauswirth M, Sweeney PF.  2010.  Evaluating the accuracy of Java profilers. PLDI '10: Proceedings of the 2010 ACM SIGPLAN conference on Programming language design and implementation. :187–197.
Höst, M, Wohlin C, Thelin T.  2005.  Experimental context classification: incentives and experience of subjects. Proceedings of the 27th international conference on Software engineering. :470–478.
Eide, E, Stoller L, Lepreau J.  2007.  An experimentation workbench for replayable networking research. Proceedings of the 4th USENIX conference on Networked systems design & implementation. :16–16.
Bird, C, Bachmann A, Aune E, Duffy J, Bernstein A, Filkov V, Devanbu P.  2009.  Fair and balanced?: bias in bug-fix datasets. Proceedings of the 7th joint meeting of the European software engineering conference and the ACM SIGSOFT symposium on The foundations of software engineering. :121–130.
Hanenberg, S.  2010.  Faith, hope, and love: an essay on software science's neglect of human factors. Proceedings of the ACM international conference on Object oriented programming systems languages and applications. :933–946.
Wieringa, R, Heerkens H, Regnell B.  2009.  How to Write and Read a Scientific Evaluation Paper. Proceedings of the 2009 17th IEEE International Requirements Engineering Conference, RE. :361–364.
Georges, A, Eeckhout L, Buytaert D.  2008.  Java performance evaluation through rigorous replay compilation. Proceedings of the 23rd ACM SIGPLAN conference on Object-oriented programming systems languages and applications. :367–384.
Singer, J.  2011.  A literate experimentation manifesto. Proceedings of the 10th SIGPLAN symposium on New ideas, new paradigms, and reflections on programming and software. :91–102.
Gil, JY, Lenz K, Shimron Y.  2011.  A microbenchmark case study and lessons learned. Proceedings of the compilation of the co-located workshops on DSM'11, TMC'11, AGERE!'11, AOOPES'11, NEAT'11, & VMIL'11. :297–308.
Bailey, DH.  2009.  Misleading performance claims in parallel computations. Proceedings of the 46th Annual Design Automation Conference. :528–533.
Bachmann, A, Bird C, Rahman F, Devanbu P, Bernstein A.  2010.  The missing links: bugs and bug-fix commits. Proceedings of the eighteenth ACM SIGSOFT international symposium on Foundations of software engineering. :97–106.
Zannier, C, Melnik G, Maurer F.  2006.  On the success of empirical studies in the international conference on software engineering. Proceedings of the 28th international conference on Software engineering. :341–350.
Frachtenberg, E, Feitelson DG.  2005.  Pitfalls in parallel job scheduling evaluation. Proceedings of the 11th international conference on Job Scheduling Strategies for Parallel Processing. :257–282.
Mytkowicz, T, Diwan A, Hauswirth M, Sweeney PF.  2009.  Producing wrong data without doing anything obviously wrong! ASPLOS '09: Proceedings of the 14th international conference on Architectural support for programming languages and operating systems. :265–276.
Vitek, J, Kalibera T.  2011.  Repeatability, reproducibility, and rigor in systems research. Proceedings of the ninth ACM international conference on Embedded software. :33–38.
Drummond, C.  2009.  Replicability is not Reproducibility: Nor is it Good Science. The 4th workshop on Evaluation Methods for Machine Learning.
Kalibera, T, Jones R.  2013.  Rigorous Benchmarking in Reasonable Time. Proceedings of the 2013 International Symposium on Memory Management. :63–74.
Basili, VR.  1996.  The role of experimentation in software engineering: past, current, and future. Proceedings of the 18th international conference on Software engineering. :442–449.
Hoefler, T, Belli R.  2015.  Scientific Benchmarking of Parallel Computing Systems: Twelve Ways to Tell the Masses when Reporting Performance Results. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. :73:1–73:12.
Curtsinger, C, Berger ED.  2013.  STABILIZER: Statistically Sound Performance Evaluation. Proceedings of the Eighteenth International Conference on Architectural Support for Programming Languages and Operating Systems. :219–228.
Georges, A, Buytaert D, Eeckhout L.  2007.  Statistically rigorous java performance evaluation. OOPSLA '07: Proceedings of the 22nd annual ACM SIGPLAN conference on Object-oriented programming systems and applications. :57–76.
Clark, B, Deshane T, Dow E, Evanchik S, Finlayson M, Herne J, Matthews JN.  2004.  Xen and the art of repeated research. Proceedings of the annual conference on USENIX Annual Technical Conference. :47–47.