Biblio

Filters: first letter of keyword is S
Scientific evaluation papers
Wieringa, R, Heerkens H, Regnell B. 2009. How to Write and Read a Scientific Evaluation Paper. Proceedings of the 2009 17th IEEE International Requirements Engineering Conference (RE), pp. 361–364.
Scientific Experiments
scientific method
Vitek, J, Kalibera T. 2011. Repeatability, reproducibility, and rigor in systems research. Proceedings of the ninth ACM international conference on Embedded software, pp. 33–38.
Scientific Theories
Scientists
simulation
Software Artifacts
software engineering
Perry, DE, Porter AA, Votta LG. 2000. Empirical studies of software engineering: a roadmap. Proceedings of the Conference on The Future of Software Engineering, pp. 345–355.
Hanenberg, S. 2010. Faith, hope, and love: an essay on software science's neglect of human factors. Proceedings of the ACM international conference on Object oriented programming systems languages and applications, pp. 933–946.
SPEC
Blackburn, SM, Garner R, Hoffmann C, Khang AM, McKinley KS, Bentzur R, Diwan A, Feinberg D, Frampton D, Guyer SZ, et al. 2006. The DaCapo benchmarks: java benchmarking development and analysis. OOPSLA '06: Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications, pp. 169–190.
spec cpu
Kalibera, T, Jones R. 2013. Rigorous Benchmarking in Reasonable Time. Proceedings of the 2013 International Symposium on Memory Management, pp. 63–74.
static workload
Frachtenberg, E, Feitelson DG. 2005. Pitfalls in parallel job scheduling evaluation. Proceedings of the 11th international conference on Job Scheduling Strategies for Parallel Processing, pp. 257–282.
statistical methods
Kalibera, T, Jones R. 2013. Rigorous Benchmarking in Reasonable Time. Proceedings of the 2013 International Symposium on Memory Management, pp. 63–74.
statistical mistakes
statistics
Hoefler, T, Belli R. 2015. Scientific Benchmarking of Parallel Computing Systems: Twelve Ways to Tell the Masses when Reporting Performance Results. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 73:1–73:12.
Georges, A, Buytaert D, Eeckhout L. 2007. Statistically rigorous java performance evaluation. OOPSLA '07: Proceedings of the 22nd annual ACM SIGPLAN conference on Object-oriented programming systems and applications, pp. 57–76.
steady-state
Gil, JY, Lenz K, Shimron Y. 2011. A microbenchmark case study and lessons learned. Proceedings of the compilation of the co-located workshops on DSM'11, TMC'11, AGERE!'11, AOOPES'11, NEAT'11, & VMIL'11, pp. 297–308.
subject experience
Höst, M, Wohlin C, Thelin T. 2005. Experimental context classification: incentives and experience of subjects. Proceedings of the 27th international conference on Software engineering, pp. 470–478.
subject motivation
Höst, M, Wohlin C, Thelin T. 2005. Experimental context classification: incentives and experience of subjects. Proceedings of the 27th international conference on Software engineering, pp. 470–478.