Biblio

Prechelt, L. 1997. Why We Need an Explicit Forum for Negative Results. Journal of Universal Computer Science. 3:1074–1083.
Curtsinger, C, Berger ED. 2013. STABILIZER: Statistically Sound Performance Evaluation. Proceedings of the Eighteenth International Conference on Architectural Support for Programming Languages and Operating Systems. 219–228.
Drummond, C. 2009. Replicability is not Reproducibility: Nor is it Good Science. The 4th Workshop on Evaluation Methods for Machine Learning.
Vitek, J, Kalibera T. 2011. Repeatability, reproducibility, and rigor in systems research. Proceedings of the ninth ACM international conference on Embedded software. 33–38.
Georges, A, Eeckhout L, Buytaert D. 2008. Java performance evaluation through rigorous replay compilation. Proceedings of the 23rd ACM SIGPLAN conference on Object-oriented programming systems languages and applications. 367–384.
Wieringa, R, Heerkens H, Regnell B. 2009. How to Write and Read a Scientific Evaluation Paper. Proceedings of the 2009 17th IEEE International Requirements Engineering Conference, RE. 361–364.
Hanenberg, S. 2010. Faith, hope, and love: an essay on software science's neglect of human factors. Proceedings of the ACM international conference on Object oriented programming systems languages and applications. 933–946.