Individual Productivity Variation in Software Development
Some people have asked for more background on where the “10x” name of this blog came from. The gist of the name is that researchers have found 10-fold differences in productivity and quality between different programmers with the same levels of experience and also between different teams working within the same industries.
The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968). They studied professional programmers with an average of 7 years’ experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1; the ratio of debugging times over 25 to 1; of program size 5 to 1; and of program execution speed about 10 to 1. They found no relationship between a programmer’s amount of experience and code quality or productivity.
Detailed examination of Sackman, Erikson, and Grant’s findings shows some flaws in their methodology (including combining results from programmers working in low-level programming languages with those working in high-level programming languages). However, even after accounting for the flaws, their data still shows more than a 10-fold difference between the best programmers and the worst.
In the years since the original study, the general finding that “There are order-of-magnitude differences among programmers” has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al. 2000).
There is also lots of anecdotal support for the large variation between programmers. During the time I was at Boeing in the mid-1980s, there was a project with about 80 programmers on it that was at risk of missing a critical deadline. The project was critical to Boeing, so they moved most of the 80 people off that project and brought in one guy who finished all the coding and delivered the software on time. I didn’t work on that project, and I didn’t know the guy, so I’m not 100% sure the story is even true. But I heard the story from someone I trusted, and it seemed true at the time.
This degree of variation isn’t unique to software. A study by Norm Augustine found that in a variety of professions–writing, football, invention, police work, and other occupations–the top 20 percent of the people produced about 50 percent of the output, whether the output was touchdowns, patents, solved cases, or software (Augustine 1979). When you think about it, this just makes sense. We’ve all known people who are exceptional students, exceptional athletes, exceptional artists, exceptional parents–these differences are just part of the human experience; why would we expect software development to be any different?
Extremes in Individual Variation on the Bad Side
Augustine’s study observed that, since some people make no tangible contribution whatsoever (quarterbacks who make no touchdowns, inventors who own no patents, detectives who don’t close cases, and so on), the data probably understates the actual variation in productivity.
This appears to be true in software. In several of the published studies on software productivity, about 10% of the subjects in the experiments weren’t able to complete the experimental assignment. The write-ups of those studies say, “Therefore those experimental subjects’ results were excluded from our data set.” But in real life, if someone “doesn’t complete the assignment,” you can’t just “exclude their results from the data set.” You have to wait for them to finish, assign someone else to do their work, and so on. The interesting (and frightening) implication of this is that something like 10% of the people working in the software field might actually be contributing *negative* productivity to their projects. Again, this lines up well with real-world experience. I think many of us can think of specific people we’ve worked with who fit that description.
Team Productivity Variation in Software Development
Software experts have long observed that team productivity varies about as much as individual productivity does–by an order of magnitude (Mills 1983). Part of the reason is that good programmers tend to cluster in some organizations, and bad programmers tend to cluster in other organizations, an observation that has been confirmed by a study of 166 professional programmers from 18 organizations (DeMarco and Lister 1999).
In one study of seven identical projects, the efforts expended varied by a factor of 3.4 to 1 and program sizes by a factor of 3 to 1 (Boehm, Gray, and Seewaldt 1984). In spite of the productivity range, the programmers in this study were not a diverse group. They were all professional programmers with several years of experience who were enrolled in a computer-science graduate program. It’s reasonable to assume that a study of a less homogeneous group would turn up even greater differences.
An earlier study of programming teams observed a 5-to-1 difference in program size and a 2.6-to-1 variation in the time required for a team to complete the same project (Weinberg and Schulman 1974).
After reviewing more than 20 years of data in constructing the Cocomo II estimation model, Barry Boehm and other researchers concluded that developing a program with a team in the 15th percentile of programmers ranked by ability typically requires about 3.5 times as many staff-months as developing a program with a team in the 90th percentile (Boehm et al. 2000). The difference will be much greater if one team is more experienced than the other in the programming language or in the application area or in both.
One specific data point is the difference in productivity between Lotus 123 version 3 and Microsoft Excel 3.0. Both were desktop spreadsheet applications completed in the 1989-1990 timeframe. Finding cases in which two companies publish data on such similar projects is rare, which makes this head-to-head comparison especially interesting. The results of these two projects were as follows: Excel took 50 staff years to produce 649,000 lines of code. Lotus 123 took 260 staff years to produce 400,000 lines of code. Excel’s team produced about 13,000 lines of code per staff year. Lotus’s team produced 1,500 lines of code per staff year. The difference in productivity between the two teams was more than a factor of 8, which supports the general claim of order-of-magnitude differences not just between different individuals but also between different project teams.
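For readers who want to see the arithmetic behind those figures, here is a quick back-of-the-envelope check. The lines-of-code and staff-year numbers are the ones quoted above; the rounding and the snippet itself are mine, not part of the published data.

# Rough check of the Excel vs. Lotus 123 productivity figures quoted above.
# The LOC and staff-year inputs come from the published project data;
# everything below is simple division.
excel_loc, excel_staff_years = 649_000, 50
lotus_loc, lotus_staff_years = 400_000, 260

excel_rate = excel_loc / excel_staff_years   # ~12,980 LOC per staff-year (about 13,000)
lotus_rate = lotus_loc / lotus_staff_years   # ~1,538 LOC per staff-year (about 1,500)

print(f"Excel: {excel_rate:,.0f} LOC/staff-year")
print(f"Lotus: {lotus_rate:,.0f} LOC/staff-year")
print(f"Ratio: {excel_rate / lotus_rate:.1f} to 1")  # roughly 8.4 to 1

The ratio works out to roughly 8.4 to 1, consistent with the “more than a factor of 8” figure cited above.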
What Have You Seen?
Have you seen 10:1 differences in capabilities between different individuals? Between different teams? How much better was the best programmer you’ve worked with than the worst? Does 10:1 even cover the range?
I look forward to hearing your thoughts.
References
Augustine, N. R. 1979. “Augustine’s Laws and Major System Development Programs.” Defense Systems Management Review: 50-76.
Boehm, Barry W., and Philip N. Papaccio. 1988. “Understanding and Controlling Software Costs.” IEEE Transactions on Software Engineering SE-14, no. 10 (October): 1462-77.
Boehm, Barry, et al. 2000. Software Cost Estimation with Cocomo II. Boston, Mass.: Addison-Wesley.
Boehm, Barry W., T. E. Gray, and T. Seewaldt. 1984. “Prototyping Versus Specifying: A Multiproject Experiment.” IEEE Transactions on Software Engineering SE-10, no. 3 (May): 290-303.
Card, David N. 1987. “A Software Technology Evaluation Program.” Information and Software Technology 29, no. 6 (July/August): 291-300.
Curtis, Bill. 1981. “Substantiating Programmer Variability.” Proceedings of the IEEE 69, no. 7: 846.
Curtis, Bill, et al. 1986. “Software Psychology: The Need for an Interdisciplinary Program.” Proceedings of the IEEE 74, no. 8: 1092-1106.
DeMarco, Tom, and Timothy Lister. 1985. “Programmer Performance and the Effects of the Workplace.” Proceedings of the 8th International Conference on Software Engineering. Washington, D.C.: IEEE Computer Society Press, 268-72.
DeMarco, Tom, and Timothy Lister. 1999. Peopleware: Productive Projects and Teams, 2d Ed. New York: Dorset House.
Mills, Harlan D. 1983. Software Productivity. Boston, Mass.: Little, Brown.
Sackman, H., W. J. Erikson, and E. E. Grant. 1968. “Exploratory Experimental Studies Comparing Online and Offline Programming Performance.” Communications of the ACM 11, no. 1 (January): 3-11.
Valett, J., and F. E. McGarry. 1989. “A Summary of Software Measurement Experiences in the Software Engineering Laboratory.” Journal of Systems and Software 9, no. 2 (February): 137-48.
Weinberg, Gerald M., and Edward L. Schulman. 1974. “Goals and Performance in Computer Programming.” Human Factors 16, no. 1 (February): 70-77.