With regard to software development, productivity is an entirely subjective concept.
It’s entertaining and somewhat disheartening to read some of the definitions of productivity:
“…the state or quality of producing something, especially crops.”
“…the effectiveness of productive effort, especially in industry, as measured in terms of the rate of output per unit of input.”
Unit tests, anyone?
And my favorite:
“…the rate of production of new biomass by an individual, population, or community; the fertility or capacity of a given habitat or area.”
Wikipedia has an interesting definition:
“Productivity describes various measures of the efficiency of production. Often, a productivity measure is expressed as the ratio of an aggregate output to a single input or an aggregate input used in a production process, i.e. output per unit of input, typically over a specific period of time.”
The thing is, “input” and “output” are highly abstract concepts when it comes to defining productivity for software development. Input can range from someone saying “this needs to be done” to a full-blown spec. Output can be anything from a bug fix to an entire application.
Because of this wildly ludicrous range, we have scrum and agile methodologies which create sprints, breaking down “productivity” into more chewable (but not necessarily more digestible) units:
“A sprint is a short, time-boxed period when a scrum team works to complete a set amount of work.”
It accomplishes this by forcing an arbitrary time interval onto the work, from which, again somewhat ludicrously, the team’s “velocity” can be measured to create nice graphs for the venture capitalists that are keeping the sinking ship from, well, sinking.
Because only so much can be done within a fixed time period, we have “iterations” and “refactoring” and “only do the minimal amount necessary to get the task in the sprint done.” So velocity looks good on paper, but does anyone measure how many times the same piece of code (and its dependencies) gets refactored over a thousand or ten thousand sprints because the task wasn’t given enough time to do it right in the first place?
Of course the solution to that is to decompose the task into, you guessed it, smaller tasks which are “sprintable.” Rinse and repeat until you get a tower of babbling developers, project managers, and C-level managers, each speaking in unrecognizable tongues to the others.
Outsourcing addresses this bottomless pit by getting rid of costly developers and hiring droves of cheap developers with laser-focused, myopic vision (see the post below on the 737 Max), which results, if you’re lucky, in a failed product, and if you’re less lucky, in death: of the project, of people, of the company, of the management, any and all of the above.
So how do we then measure developer productivity? Let me ask a different question. Why should we measure developer productivity?
The productivity of developers is meaningless before the product hits the market. How can you measure “input” and “output” when the damn thing isn’t even generating any money? Velocity charts are useless; at best they might tell you when your money is going to run out or when the VCs are going to pull the plug. I feel my argument is weak here, but I stand by the premise.
The productivity of developers after the product hits the market and is generating revenue might be measurable against certain criteria, such as sales and customer satisfaction. It is also easier to perform sprints on an existing product that is in its maintenance cycle rather than its development cycle, because maintenance is mostly tooling improvements, bug fixes, specific new features, and the eternal pendulum swing between fragmented (microservices, serverless, etc.) and monolithic architectures.
Using sales as a criterion becomes useless when you have a monopoly or, more PC, have “cornered the market.” Or you have enough money to buy your competition. Customer satisfaction? Who really cares as long as you’re making sales?
So how then do we measure productivity? Simple. How much money did I make today vs. how much did my developers cost today? If that ratio is > 1, someone (not necessarily your developers) is productive. It could even be the consumer, being productive enough in whatever they do to afford your product, be they person, collective, corporation, or government. If that ratio is < 1, then you have a productivity problem. Somewhere. Not necessarily your developers. Maybe the consumer isn’t buying enough of your product due to an economic downturn. Or simply that your product sucks.
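That revenue-to-cost ratio is simple enough to sketch in a few lines. This is a toy illustration of the argument above, not a real metric; the function name and the numbers are entirely made up:

```python
def productivity_ratio(revenue_today: float, developer_cost_today: float) -> float:
    """Hypothetical daily 'productivity' signal: revenue divided by developer cost.

    ratio > 1: someone, somewhere (not necessarily your developers) is productive.
    ratio < 1: you have a productivity problem, somewhere.
    """
    if developer_cost_today <= 0:
        raise ValueError("developer cost must be positive")
    return revenue_today / developer_cost_today

# Illustrative, invented numbers:
good_day = productivity_ratio(revenue_today=12_000.0, developer_cost_today=8_000.0)
bad_day = productivity_ratio(revenue_today=4_000.0, developer_cost_today=8_000.0)
print(good_day > 1)  # someone is productive
print(bad_day < 1)   # a productivity problem, somewhere
```

The point of the sketch is that the ratio says nothing about *who* is productive; a downturn or a bad product moves it just as surely as developer output does.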
The only time you can actually measure developer productivity is when the company is too small to have a gaggle of managers, a shoal of lawyers, a caravan of tech support people, and a murder of sales “engineers” working on a product that is already bringing in revenue.
In other words, a startup that has succeeded in making some sales, usually to corporations or government, which will pay for maintenance contracts (hence some revenue stream after the initial sale); that is most likely growing too fast and too hard to keep up with customer requirements and bug fixes; but that hasn’t yet hired the gaggles, shoals, caravans, and murders that a well-greased “where did my productivity go?” company requires.
Which brings me to my Alice in Wonderland conclusion: developer productivity can only be measured in that awkward, painful, stressful, and insane period when a company has “hit it” but hasn’t “gotten it,” there is a minimal amount of obfuscation between the customer and the developer, the backlog of work is far beyond what the current team can accomplish without the tech to transfer brains upon death, and productivity is measured against “this has to get done by Friday or we lose the customer or sale.” In that specific circumstance, productivity is easy to measure. You either succeeded in keeping the customer or making the sale, or you failed. Binary. Black and white. You succeeded in producing the output or you didn’t. You were productive or you weren’t.
One final rabbit hole. Developer productivity is also meaningless because it assumes a manufacturing style of “input” and “output” over a given time period. Software isn’t like that. It might take years to write a Google or Facebook, but once it’s done, well, it’s done. The “consumption” of the product is a web link or a 30-second download (unless you’re Microsoft). So how the heck do you measure productivity <i>now</i>, when once the product (the software) is produced, the “output” is little more than a click that clones onto your hard drive (if even that) the pattern of 0 and 1 bits that define the product? Wow, my developers are insanely productive! We’ve had a million visitors to our site this year!!!
Which gets us to the evil of productivity: is Marc more productive than Joe? Meaning, given similar tasks, does Marc get the job done faster and with similar “accuracy” as Joe, or not?
Which, going back to my Alice in Wonderland scenario, is not an issue when your developers are “expert islands” and the developer-to-island ratio is 1:1. You have no basis for comparison until your company gets past that birth process and the developer-to-island ratio is 2:1 or more.
And this ratio, by the way, defines “job security” vs. “eek, I’m replaceable,” and therefore drives developers to strive to be perceived as productive when in the “eek, I’m replaceable” corporate structure. Fortunately, there are many islands, and the key to success (both for the developer and the corporation) is a healthy balance in the developer:island ratio, because developers want to feel unique and valued rather than a cog in the machine, but a healthy level of stress and knowledge sharing is also socially rewarding. Which, in terms of psychology, makes for a happier and more productive developer! And ironically, in a corporate environment, leads to the conclusion that only the developer can tell you whether he/she “feels” productive and to what degree, so your productivity measure in that scenario becomes entirely subjective. Which was the first sentence in this tome that just killed your productivity.