Your Own Metric System · Ian Dees @undees · OSCON 2012


SLIDE 1

Your Own Metric System

Ian Dees · @undees · OSCON 2012

Hello, and welcome. I’m Ian.

SLIDE 2

By day, I make oscilloscopes. By night, I play guitar irresponsibly.

SLIDE 3

pragprog.com/titles/dhwcr

I also write books, mostly about Ruby topics. A group of us—me, plus two major contributors to the Cucumber test framework—are working on a new book of specific testing techniques. Ruby and its various test frameworks were my gateway drug to code metrics, though for this talk we’ll be concentrating on other languages.

SLIDE 4

Oscilloscopes have been available commercially since the 1940s. Their architecture changes slowly. Software needs to last, and it tends to last whether we wish for a rewrite or not. Our team’s exploration of this mix of old and new code led to our interest in code metrics.

SLIDE 5

Setting

And even if you’re not working on a large legacy code base, there are likely issues that we face in common.

SLIDE 6

The forces against us

❥ Entropy drags our code down
❥ Apathy drags us down

There are a lot of forces that push on us and our teams. Today, I want to talk about two very different forces that have surprisingly similar effects: the entropy that drags our code down over time, and the apathy that drags us down personally over time.

SLIDE 7

Stay engaged and productive

How do we fight these forces? How do we keep our interest after our tenth straight hour wading into the weeds of an incomprehensible legacy routine? How do we prevent the code we write today from being someone’s nightmare tomorrow?

SLIDE 8

Knowing our code can help us do our jobs and have more fun

We have many tools in our chest; one is a good set of metrics—information about our code base. My hope is that you’ll consider code metrics at least as an intriguing, low-cost possibility for making the day go by a little better.

SLIDE 9

Risk #1 Missing or poor information can waste our time or lead us to cause harm

The risk with doing this—and there’s always a risk—is that we might waste our time making changes we don’t need, or worse, end up trashing our code in the name of blindly satisfying some target number.

SLIDE 10

Two steps forward

  • 1. Ask questions about your code
  • 2. Choose metrics that answer those questions

How do we address that risk? By letting our project needs dictate our metric choices, not the other way around. It sounds simple. But as we’ll see, it’s possible to misapply a metric and make a big mess.

SLIDE 11

Purpose of metrics

Since getting the reasons right is so important, let’s talk about why we’re gathering this data.

SLIDE 12

purpose of metrics

Help you answer a question

The purpose of any metric should be to help you answer a question. Since we’re developers who maybe also do a little testing, let’s ask a few example questions now.

SLIDE 13

purpose of metrics

What mess should I clean up next?

For example, if several files need some love, where should I concentrate my efforts?

SLIDE 14

purpose of metrics

The product backlog isn’t a substitute for your brain

Something else may be giving us guidance on what part of the code to work in—like the product backlog. But you may be in a situation where you’ve got a little more leeway, like an explicit charter to pay down technical debt.

SLIDE 15

Risk #2 Making structural changes can introduce new bugs (or expose existing ones)

That said, when you do wander off the map, you do risk creating a bug. With legacy code, you may also uncover an existing bug and get the blame nonetheless. One way to address this risk is to improve your test coverage, and make small changes at a time. Another is to choose the right metrics; fixing static analysis warnings has anecdotally been one of the lowest-risk change activities I’ve ever seen.

SLIDE 16

purpose of metrics

Where are the bugs (likely to be)?

Here’s another question we might ask. Where are the bugs? Where are the old bugs we haven’t found yet? Where are the new ones we might have created recently?

SLIDE 17

purpose of metrics

/**
 * REMOVE THIS CODE
 * BEFORE WE SHIP!
 */

We can also turn to our code for ideas of what questions to ask. Has anyone seen something like this comment in production code? The number of these red flags in your code is a kind of code metric you can measure and reduce.

SLIDE 18

purpose of metrics

Have we forgotten anything for this release?

That quantitative measurement—number of bad comments in the code—is helping us make a qualitative determination.

SLIDE 19

purpose of metrics

These questions are for us

The questions we’ve heard so far are things we might ask,...

SLIDE 20

purpose of metrics

Not for someone else

...not things someone else might ask.

SLIDE 21

purpose of metrics

Questions from others:

(outside the scope of our metrics)

Not that other people’s questions aren’t legitimately interesting, or that they might not apply metrics of their own.

SLIDE 22

purpose of metrics

Should we hold the release?

For example, the SQA team might be looking for red flags that could hold up the release.

SLIDE 23

purpose of metrics

(chart: errors/KLOC over time)

So they might look at aggregate errors per thousand lines of code. Not something I necessarily use to make decisions as a developer, but it doesn’t scare me if this metric is in use somewhere.

SLIDE 24

purpose of metrics

Who’s got the best KLOC or error rate?

On a more sinister note, tracking rates of code production or error creation/resolution is outright destructive to teams.

SLIDE 25

purpose of metrics

It was time to fill out the management form for the first time. When he got to the lines of code part, he thought about it for a second, and then wrote in the number: -2000. After a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied. —folklore.org

There was apparently a brief, dark time at Apple when employees were tracked by lines of code produced, until Bill Atkinson showed that you can improve and shorten the code at the same time.

SLIDE 26

purpose of metrics

Have we met our target complexity or coverage?

Another, more subtle trap is setting absolute thresholds for various metrics.

SLIDE 27

Doing so is like blindly obeying a GPS device: sooner or later, you’ll drive off a cliff.

SLIDE 28

purpose of metrics

Metrics serve you, not the other way around

Metrics are supposed to be here for our benefit.

SLIDE 29

purpose of metrics

Keep the job fun

And indeed, in addition to answering specific questions about our projects, they can make coding seem a little bit like a game where the side effect is to produce better code...

SLIDE 30

purpose of metrics

More fun than actually working?

...as long as we still get around to writing the code eventually.

SLIDE 31

Risk #3 There is a trap here for the distractible

We have to be careful not to spend all day writing fancier shell scripts and slapping our stats onto elaborate dashboards (though there are quick-and-cheap dashboards I like; see the Tranquil project).

SLIDE 32

Common metrics

Now that we have a few questions in mind about our code base, let’s look at some metrics commonly used by many projects. (Later, we’ll look at writing our own.) The nice thing about prefab metrics is that we can find open source implementations and supporting research.

SLIDE 33

common metrics

Languages

❥ C: a case study
❥ Perl: the beginner’s experience
❥ <your lang>: just ask!

Rather than present you with a laundry list, I’m going to stick to a few targeted examples in C and Perl. But similar tools likely exist for your language; catch me in the hall afterwards if you’d like to explore that together.

SLIDE 34

common metrics

Repo for this talk

github.com/undees/oscon

The code samples you’re about to see are on GitHub; feel free to send a pull request if you’d like your favorite language to be included.

SLIDE 35

common metrics

Cyclomatic complexity

The granddaddy of modern code metrics is McCabe Cyclomatic Complexity. It’s meant to be a loose measure of how many different paths there are through a piece of code.

SLIDE 36

common metrics

E – N + 2P

The fancy explanation is that you draw a graph of control flow through your function, then calculate a score from the number of edges, nodes, and return points.

SLIDE 37

common metrics

  • 1. Start with a score of 1
  • 2. Add 1 for each if, case, for, or boolean condition

The simpler explanation is that we walk through the code and add a point for each decision the code has to make.

SLIDE 38

Volume speaking_volume(bool correct_room,
                       bool correct_time) {
  if (correct_room && correct_time) {
    return INTELLIGIBLE;
  } else {
    // rehearsing
    return INAUDIBLE;
  }
}

complexity: 1

So we’d start with a value of 1 for this code sample...

SLIDE 39

Volume speaking_volume(bool correct_room,
                       bool correct_time) {
  if (correct_room && correct_time) {
    return INTELLIGIBLE;
  } else {
    // rehearsing
    return INAUDIBLE;
  }
}

complexity: 2

...add 1 point for the if statement...

SLIDE 40

Volume speaking_volume(bool correct_room,
                       bool correct_time) {
  if (correct_room && correct_time) {
    return INTELLIGIBLE;
  } else {
    // rehearsing
    return INAUDIBLE;
  }
}

complexity: 3

...and add 1 final point for the boolean operator. Depending on the implementation, we might add a point for the multiple returns.

SLIDE 41

common metrics

parisc-linux.org/~bame/pmccabe

pmccabe

One easy-to-use implementation of this metric for C code is pmccabe.

SLIDE 42

$ pmccabe *.c | sort -nr | head -10
3  3  3  6   8  oscon.c(6): speaking_volume
1  1  2  16  5  oscon.c(16): main

When we run it, it prints the complexity, size, and location of each function in our project.

SLIDE 43

common metrics

Perl::Metrics::Simple

CPAN has several metrics modules for Perl; Perl::Metrics::Simple is an easy one to get started with.

SLIDE 44

sub speaking_volume {
  my $correct_room = shift;
  my $correct_time = shift;

  if ($correct_room && $correct_time) {
    return 'intelligible';
  } else {
    # rehearsing
    return 'inaudible';
  }
}

Here’s a Perl subroutine similar to the one we saw.

SLIDE 45

$ countperl lib
...
Tab-delimited list of subroutines, with most complex at top

complexity  sub              path          size
4           speaking_volume  lib/OSCON.pm  9
...

Similar to pmccabe, Perl::Metrics::Simple gives us the size and complexity of each method.

SLIDE 46

Speaking of size and complexity, this paper reexamined several previous studies and found that several popular code metrics were effectively just expensive ways...

SLIDE 47

$ wc -l oscon.c

...of counting lines. The paper didn’t consider cyclomatic complexity alone (and there were other issues dealt with in subsequent papers by other authors), but we should always be skeptical of our own metrics. Fortunately, most tools give us both a line count and a complexity metric; we can decide for ourselves.

SLIDE 48

Risk #4 Blindly reducing one number can add complexity and bugs

Some teams set complexity targets. In the degenerate case, they turn their code into a bunch of tiny functions that do nothing—making the overall code base more complex and prone to bugs.

SLIDE 49

common metrics

Test coverage

Another widely used metric is the percentage of your code that gets executed by your tests.

SLIDE 50

common metrics

  • 1. Instrument your program
  • 2. Watch your tests run
  • 3. Report which lines get executed

Measuring this typically involves instrumenting your code, so that you can watch it as it runs your tests.

SLIDE 51

common metrics

Addresses “epic confidence” fail

opensourcebridge.org/sessions/923

Knowing our test coverage helps address the “epic confidence” problem that Laura Thomson described in her Open Source Bridge talk, “How Not To Release Software.” Teams afflicted by this bug assert without evidence that their tests are great.

SLIDE 52

common metrics

Testable code is more... testable

In addition to combating hubris, measuring coverage helps us make our code more testable. Testability is not an end in itself, but a property with beneficial side effects.

SLIDE 53

common metrics

gcov

For C projects, it’s easy to measure coverage. GCC comes with the gcov coverage tool.

SLIDE 54

int main() {
  assert(speaking_volume(true, true)
         == INTELLIGIBLE);
  return 0;
}

Here’s a test that exercises just one branch of our code from earlier.

SLIDE 55

$ gcc -fprofile-arcs \
      -ftest-coverage \
      -c oscon.c

$ gcc -fprofile-arcs \
      oscon.o

First, we’d compile and link our program with a couple of gcov’s required flags.

SLIDE 56

$ gcov oscon.c $ cat oscon.c.gcov

Then, we’d run our tests and point gcov at the logfiles.

SLIDE 57

    1:    6:Volume speaking_volume(bool correct_room, bool correct_time) {
    1:    7:  if (correct_room && correct_time) {
    1:    8:    return INTELLIGIBLE;
    -:    9:  } else {
    -:   10:    // rehearsing
 ####:   11:    return INAUDIBLE;
    -:   12:  }
    -:   13:}

The result is a list of what lines did and didn’t get executed. In this case, we never ran the “else” clause.

SLIDE 58

common metrics

Devel::Cover

Not to be outdone, Perl provides Devel::Cover.

SLIDE 59

$ cover -test $ cat cover_db/coverage.html

You just point Devel::Cover at your tests, and it produces an HTML report for you.

SLIDE 60

Devel::Cover gives us more information than gcov did. We executed line 26 once, but didn’t exercise both sides of the “&&”.

SLIDE 61

Risk #5 High code coverage can make you think your code is good

Which brings us to another thing to keep in mind. Hitting each line of code once isn’t the same as hitting each combination of branches. Code coverage is meant to help you look for holes, not to lull you into false security.

SLIDE 62

Custom metrics

The advantage of applying commonly used measurements is good support. The downside is lack of context; the creators of those metrics have nowhere near the knowledge of your project that you do. So you may want to supplement common metrics with a few of your own. I can’t tell you what those metrics are, but I can tell you a couple of the ones I’ve seen used.

SLIDE 63

X-rated-ness

First, let’s look at what I’ll call X-rated-ness.

SLIDE 64

custom metrics

1. 2. 3. 4. 5. 6. 7.

Carlin’s 7 Dirty Words

Just as George Carlin gave us his famous list of words you can’t say on television,...

SLIDE 65

custom metrics

Our 7 Dirty Words

1. XXX
2. TODO
3. FIXME
4. TBD
5. HACK
6. #if 0
7. #ifndef TESTING

...software teams have their own lists of bad words.

SLIDE 66

custom metrics

Our 7 Dirty Words

1. 2. 3. 4. 5. 6. 7.

(Sorry, I should have blurred those out. ;-)

SLIDE 67

$ ack -cl 'XXX|TODO|FIXME'
oscon.c:1

This is dead simple to do with ack, the modern-day replacement for grep. Just count string occurrences across your files, and optionally do a little sorting.
SLIDE 68

custom metrics

Test::Fixme

Grepping works on nearly every language, of course. But Perl has its own specific implementation of this metric.

SLIDE 69

use Test::Fixme;
run_tests(where => 'lib',
          match => qr/XXX|TODO|FIXME/);

All you have to do is throw a couple of lines into a “.t” file...

SLIDE 70

$ make test
...
t/test-fixme.t .. 1/1
#   Failed test ''lib/OSCON.pm''
#   at t/test-fixme.t line 2.
# File: 'lib/OSCON.pm'
#     34  # XXX: remove the temp limit before we deploy
# Looks like you failed 1 test of 1.
t/test-fixme.t .. Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests

...and Perl won’t even let your tests pass if you’ve got a naughty word in your code.

SLIDE 71

custom metrics

Churn

Another metric that’s not universally used, but can still come in handy, is code churn: how often does a given piece of code change?

SLIDE 72

custom metrics

Recently changed code may have new bugs

Churn can tell us what parts have changed recently; those parts may have new bugs.

SLIDE 73

custom metrics

Frequently-changed code may have problems

Churn can also tell us what parts change often; those parts can become trouble spots.

SLIDE 74

$ git log --pretty=oneline \
      --since=2012-05-04 \
      oscon.c | wc -l

You can get as crazy as you want with churn: examining which lines have changed the most, which functions have had the most people working on them, and so on. Git can tell you a lot more than a simple metric can, but if you’re on a centralized system you may want to just grab the data yourself and pick it apart with UNIX tools.

SLIDE 75

custom metrics

Missing documentation

If you’re writing code that’s going to get used by developers outside of your team, you might use a metric like documentation coverage to identify the parts of the code that most badly need docs.

SLIDE 76

custom metrics

Errors by time of day

Most of the metrics we’ve seen so far have been one-shot numbers. But it’s also possible to track things over time, like occurrences of compiler errors or test failures.

SLIDE 77

custom metrics

Play by Play: Zed Shaw

peepcode.com/products/play-by-play-zed-shaw

Zed Shaw does a great demo of this in his Play by Play screencast with PeepCode.

SLIDE 78

What do we get from all this?

We’ve talked about the kinds of questions we want to ask about our code, and the metrics that can help us answer those questions. Now for the bigger question: what’s the effect on our software? Well, here are some of the things that happened with my team.

SLIDE 79

Found a real dependency problem with pmccabe

One, I found a surprisingly high complexity number in what was supposed to be a simple math routine. Somebody had snuck in an unwanted dependency on an unrelated system.
SLIDE 80

Found dead code with gcov

While looking for untested code, we found some code that didn’t need any tests—because it was never called anyway!

SLIDE 81

Did a quick churn check at manual test time

I personally like to look at what features have changed when it’s time to do manual testing.

SLIDE 82

Found places we can DRY up the code

Some designs come at a time when our understanding of the domain is imperfect. As our understanding improves, we refactor the code. Complexity metrics can be handy for prioritizing.

SLIDE 83

Relative, not absolute!

One of the common themes woven through much of this discussion is that absolute limits for code metrics are not as helpful as relative measures within a project.

SLIDE 84

Content-Type: multipart/wish

My hope is that you come away from this session with a couple of ideas for metrics you’d like to try, and with the well-founded belief that you can get started with very little time investment.

SLIDE 85

❥ Find the answers you need
❥ Look like heroes
❥ Have fun

I hope you find the answers you need for your project, and that you have fun getting them.

SLIDE 86

Fin

Thank you, and have a fantastic OSCON.

SLIDE 87

Credits

flickr.com/photos/aussiegall/286709039
flickr.com/photos/bensutherland/252230820

The images in this presentation were used by permission under the terms of a Creative Commons license.