Biblio

User evaluation
Buse, RPL, Sadowski C, Weimer W.  2011.  Benefits and Barriers of User Evaluation in Software Engineering Research. OOPSLA '11: Proceedings of the ACM international conference on Object oriented programming systems languages and applications. Abstract
technology transfer
Sjøberg, DIK, Anda B, Arisholm E, Dybå T, Jørgensen M, Karahasanovic A, Koren EF, Vokác M.  2002.  Conducting Realistic Experiments in Software Engineering. Proceedings of the 2002 International Symposium on Empirical Software Engineering. :17–. Abstract
steady-state
Gil, J Y, Lenz K, Shimron Y.  2011.  A microbenchmark case study and lessons learned. Proceedings of the compilation of the co-located workshops on DSM'11, TMC'11, AGERE!'11, AOOPES'11, NEAT'11, & VMIL'11. :297–308. Abstract
SPEC
Blackburn, SM, Garner R, Hoffmann C, Khang AM, McKinley KS, Bentzur R, Diwan A, Feinberg D, Frampton D, Guyer SZ, et al.  2006.  The DaCapo benchmarks: java benchmarking development and analysis. OOPSLA '06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications. :169–190. Abstract
Reproducibility
relevance
professionals
Sjøberg, DIK, Anda B, Arisholm E, Dybå T, Jørgensen M, Karahasanovic A, Koren EF, Vokác M.  2002.  Conducting Realistic Experiments in Software Engineering. Proceedings of the 2002 International Symposium on Empirical Software Engineering. :17–. Abstract
Performance
Mytkowicz, T, Diwan A, Hauswirth M, Sweeney PF.  2010.  Evaluating the accuracy of Java profilers. PLDI '10: Proceedings of the 2010 ACM SIGPLAN conference on Programming language design and implementation. :187–197. Abstract
Mytkowicz, T, Diwan A, Hauswirth M, Sweeney PF.  2009.  Producing wrong data without doing anything obviously wrong! ASPLOS '09: Proceedings of the 14th international conference on Architectural support for programming languages and operating systems. :265–276. Abstract
observation study
methodology
Blackburn, SM, Garner R, Hoffmann C, Khang AM, McKinley KS, Bentzur R, Diwan A, Feinberg D, Frampton D, Guyer SZ, et al.  2006.  The DaCapo benchmarks: java benchmarking development and analysis. OOPSLA '06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications. :169–190. Abstract
measurements
Gil, J Y, Lenz K, Shimron Y.  2011.  A microbenchmark case study and lessons learned. Proceedings of the compilation of the co-located workshops on DSM'11, TMC'11, AGERE!'11, AOOPES'11, NEAT'11, & VMIL'11. :297–308. Abstract
Measurement
Mytkowicz, T, Diwan A, Hauswirth M, Sweeney PF.  2009.  Producing wrong data without doing anything obviously wrong! ASPLOS '09: Proceedings of the 14th international conference on Architectural support for programming languages and operating systems. :265–276. Abstract
literate programming
Singer, J.  2011.  A literate experimentation manifesto. Proceedings of the 10th SIGPLAN symposium on New ideas, new paradigms, and reflections on programming and software. :91–102. Abstract
Java
Blackburn, SM, Garner R, Hoffmann C, Khang AM, McKinley KS, Bentzur R, Diwan A, Feinberg D, Frampton D, Guyer SZ, et al.  2006.  The DaCapo benchmarks: java benchmarking development and analysis. OOPSLA '06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications. :169–190. Abstract
Human study
Buse, RPL, Sadowski C, Weimer W.  2011.  Benefits and Barriers of User Evaluation in Software Engineering Research. OOPSLA '11: Proceedings of the ACM international conference on Object oriented programming systems languages and applications. Abstract
experiments
Sjøberg, DIK, Anda B, Arisholm E, Dybå T, Jørgensen M, Karahasanovic A, Koren EF, Vokác M.  2002.  Conducting Realistic Experiments in Software Engineering. Proceedings of the 2002 International Symposium on Empirical Software Engineering. :17–. Abstract
Experimentation
Mytkowicz, T, Diwan A, Hauswirth M, Sweeney PF.  2010.  Evaluating the accuracy of Java profilers. PLDI '10: Proceedings of the 2010 ACM SIGPLAN conference on Programming language design and implementation. :187–197. Abstract
Mytkowicz, T, Diwan A, Hauswirth M, Sweeney PF.  2009.  Producing wrong data without doing anything obviously wrong! ASPLOS '09: Proceedings of the 14th international conference on Architectural support for programming languages and operating systems. :265–276. Abstract
experimental write-up
Singer, J.  2011.  A literate experimentation manifesto. Proceedings of the 10th SIGPLAN symposium on New ideas, new paradigms, and reflections on programming and software. :91–102. Abstract
Experimental evaluation
Empirical software engineering
Sjøberg, DIK, Anda B, Arisholm E, Dybå T, Jørgensen M, Karahasanovic A, Koren EF, Vokác M.  2002.  Conducting Realistic Experiments in Software Engineering. Proceedings of the 2002 International Symposium on Empirical Software Engineering. :17–. Abstract
DaCapo
Blackburn, SM, Garner R, Hoffmann C, Khang AM, McKinley KS, Bentzur R, Diwan A, Feinberg D, Frampton D, Guyer SZ, et al.  2006.  The DaCapo benchmarks: java benchmarking development and analysis. OOPSLA '06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications. :169–190. Abstract
complexity