Biblio

virtual machine
Georges, A, Eeckhout L, Buytaert D.  2008.  Java performance evaluation through rigorous replay compilation. Proceedings of the 23rd ACM SIGPLAN conference on Object-oriented programming systems languages and applications. :367–384.
User evaluation
Buse, RPL, Sadowski C, Weimer W.  2011.  Benefits and Barriers of User Evaluation in Software Engineering Research. OOPSLA '11: Proceedings of the ACM international conference on Object oriented programming systems languages and applications.
Tools
tool
Bachmann, A, Bird C, Rahman F, Devanbu P, Bernstein A.  2010.  The missing links: bugs and bug-fix commits. Proceedings of the eighteenth ACM SIGSOFT international symposium on Foundations of software engineering. :97–106.
Theory
technology transfer
Sjøberg, DIK, Anda B, Arisholm E, Dybå T, Jørgensen M, Karahasanović A, Koren EF, Vokáč M.  2002.  Conducting Realistic Experiments in Software Engineering. Proceedings of the 2002 International Symposium on Empirical Software Engineering. :17–.
subject motivation
Höst, M, Wohlin C, Thelin T.  2005.  Experimental context classification: incentives and experience of subjects. Proceedings of the 27th international conference on Software engineering. :470–478.
subject experience
Höst, M, Wohlin C, Thelin T.  2005.  Experimental context classification: incentives and experience of subjects. Proceedings of the 27th international conference on Software engineering. :470–478.
steady-state
Gil, J Y, Lenz K, Shimron Y.  2011.  A microbenchmark case study and lessons learned. Proceedings of the compilation of the co-located workshops on DSM'11, TMC'11, AGERE!'11, AOOPES'11, NEAT'11, & VMIL'11. :297–308.
statistics
Hoefler, T, Belli R.  2015.  Scientific Benchmarking of Parallel Computing Systems: Twelve Ways to Tell the Masses when Reporting Performance Results. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. :73:1–73:12.
Georges, A, Buytaert D, Eeckhout L.  2007.  Statistically rigorous Java performance evaluation. OOPSLA '07: Proceedings of the 22nd annual ACM SIGPLAN conference on Object-oriented programming systems and applications. :57–76.
statistical mistakes
statistical methods
Kalibera, T, Jones R.  2013.  Rigorous Benchmarking in Reasonable Time. Proceedings of the 2013 International Symposium on Memory Management. :63–74.
static workload
Frachtenberg, E, Feitelson DG.  2005.  Pitfalls in parallel job scheduling evaluation. Proceedings of the 11th international conference on Job Scheduling Strategies for Parallel Processing. :257–282.
spec cpu
Kalibera, T, Jones R.  2013.  Rigorous Benchmarking in Reasonable Time. Proceedings of the 2013 International Symposium on Memory Management. :63–74.
SPEC
Blackburn, SM, Garner R, Hoffmann C, Khang AM, McKinley KS, Bentzur R, Diwan A, Feinberg D, Frampton D, Guyer SZ et al..  2006.  The DaCapo benchmarks: Java benchmarking development and analysis. OOPSLA '06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications. :169–190.
software engineering
Perry, DE, Porter AA, Votta LG.  2000.  Empirical studies of software engineering: a roadmap. Proceedings of the Conference on The Future of Software Engineering. :345–355.
Hanenberg, S.  2010.  Faith, hope, and love: an essay on software science's neglect of human factors. Proceedings of the ACM international conference on Object oriented programming systems languages and applications. :933–946.
Software Artifacts
simulation
Scientists
Scientific Theories