A metric is ignored when it is excluded despite being necessary to confirm an evaluation's claims. There are two primary manifestations of "ignored metrics". The first is to ignore the cost of an innovation while reporting only the benefit; most innovations have both a cost and a benefit, and ignoring the cost is misleading. The second is to report only ends-based or only means-based metrics. Both are important: ends-based metrics (such as end-to-end execution time) are often relevant to the claim, while means-based metrics (such as cache misses) are necessary to confirm the cause of the change in the ends-based metric. Thus reporting just ends-based or just means-based metrics is misleading.
Example: You claim that your new data-locality optimization improves performance. Your evaluation measured only execution time (an ends-based metric) but did not measure the improvement in data locality itself, e.g., by counting data cache misses (a means-based metric). It thus failed to show that the improved performance was in fact caused by an improvement in data locality.
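One way to avoid this pitfall is to record both kinds of metric in the same run. The sketch below is illustrative only: `read_counter` is a hypothetical hook standing in for a real hardware event reader (on Linux one might wire it to a cache-miss counter via the `perf_event_open` syscall or the `perf stat` tool); the harness itself just pairs an ends-based measurement (wall time) with a means-based one.

```python
import time

def evaluate(workload, read_counter):
    """Run `workload` once, recording an ends-based metric (wall time)
    together with a means-based metric sampled from `read_counter`.

    `read_counter` is a hypothetical hook: in a real evaluation it would
    read a hardware event counter (e.g., data cache misses); here any
    monotonically increasing counter serves for illustration.
    """
    before = read_counter()
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    delta = read_counter() - before
    return {"time_s": elapsed, "cache_misses": delta}

# Illustration with a fake counter standing in for real hardware events.
_fake = {"misses": 0}

def fake_counter():
    return _fake["misses"]

def workload():
    _fake["misses"] += 1000  # pretend this run incurred 1000 cache misses
    sum(range(100_000))

result = evaluate(workload, fake_counter)
print(result)
```

Reporting both numbers side by side lets the reader check that the claimed mechanism (fewer cache misses) actually accompanies the claimed effect (lower execution time).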
...