Biblio

technology transfer
Sjøberg, DIK, Anda B, Arisholm E, Dybå T, Jørgensen M, Karahasanović A, Koren EF, Vokáč M.  2002.  Conducting Realistic Experiments in Software Engineering. Proceedings of the 2002 International Symposium on Empirical Software Engineering. :17–.
statistical mistakes
statistical methods
Kalibera, T, Jones R.  2013.  Rigorous Benchmarking in Reasonable Time. Proceedings of the 2013 International Symposium on Memory Management. :63–74.
spec cpu
Kalibera, T, Jones R.  2013.  Rigorous Benchmarking in Reasonable Time. Proceedings of the 2013 International Symposium on Memory Management. :63–74.
SPEC
Blackburn, SM, Garner R, Hoffmann C, Khang AM, McKinley KS, Bentzur R, Diwan A, Feinberg D, Frampton D, Guyer SZ et al.  2006.  The DaCapo benchmarks: Java benchmarking development and analysis. OOPSLA '06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications. :169–190.
scientific method
Vitek, J, Kalibera T.  2011.  Repeatability, reproducibility, and rigor in systems research. Proceedings of the ninth ACM international conference on Embedded software. :33–38.
Research methodology
research guidelines
Reproducibility
Vitek, J, Kalibera T.  2011.  Repeatability, reproducibility, and rigor in systems research. Proceedings of the ninth ACM international conference on Embedded software. :33–38.
repeatability
Vitek, J, Kalibera T.  2011.  Repeatability, reproducibility, and rigor in systems research. Proceedings of the ninth ACM international conference on Embedded software. :33–38.
professionals
Sjøberg, DIK, Anda B, Arisholm E, Dybå T, Jørgensen M, Karahasanović A, Koren EF, Vokáč M.  2002.  Conducting Realistic Experiments in Software Engineering. Proceedings of the 2002 International Symposium on Empirical Software Engineering. :17–.
observation study
methodology
Blackburn, SM, Garner R, Hoffmann C, Khang AM, McKinley KS, Bentzur R, Diwan A, Feinberg D, Frampton D, Guyer SZ et al.  2006.  The DaCapo benchmarks: Java benchmarking development and analysis. OOPSLA '06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications. :169–190.
Java
Blackburn, SM, Garner R, Hoffmann C, Khang AM, McKinley KS, Bentzur R, Diwan A, Feinberg D, Frampton D, Guyer SZ et al.  2006.  The DaCapo benchmarks: Java benchmarking development and analysis. OOPSLA '06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications. :169–190.
Human subjects
Human participants
experiments
Experimentation
Experimental evaluation
empirical software research
Empirical software engineering
Sjøberg, DIK, Anda B, Arisholm E, Dybå T, Jørgensen M, Karahasanović A, Koren EF, Vokáč M.  2002.  Conducting Realistic Experiments in Software Engineering. Proceedings of the 2002 International Symposium on Empirical Software Engineering. :17–.
DaCapo
Blackburn, SM, Garner R, Hoffmann C, Khang AM, McKinley KS, Bentzur R, Diwan A, Feinberg D, Frampton D, Guyer SZ et al.  2006.  The DaCapo benchmarks: Java benchmarking development and analysis. OOPSLA '06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications. :169–190.
Kalibera, T, Jones R.  2013.  Rigorous Benchmarking in Reasonable Time. Proceedings of the 2013 International Symposium on Memory Management. :63–74.