If Writing is Hard…

In Kislay Verma’s excellent post Why programmers don’t write documentation (April 29, 2021), he writes:

Software engineers, like everyone else, don’t write because writing clearly is very, VERY difficult.

I find this both amusing and scary. If writing documentation clearly is very, VERY difficult, then doesn’t this imply that writing code clearly is also very difficult?

And there we have it. If you can’t write documentation clearly, I suspect your code is equally unclear, which is exactly my experience.

What is Productivity in the Context of Software Development?

With regards to software development, productivity is an entirely subjective concept.

It’s entertaining and somewhat disheartening to read some of the definitions of productivity:

“…the state or quality of producing something, especially crops.”

okaaaay.

“…the effectiveness of productive effort, especially in industry, as measured in terms of the rate of output per unit of input.”

Unit tests, anyone?

And my favorite:

“…the rate of production of new biomass by an individual, population, or community; the fertility or capacity of a given habitat or area.”

Wikipedia has an interesting definition:

“Productivity describes various measures of the efficiency of production. Often, a productivity measure is expressed as the ratio of an aggregate output to a single input or an aggregate input used in a production process, i.e. output per unit of input, typically over a specific period of time.”

The thing is, the concepts of “input” and “output” with regards to productivity are highly abstract when it comes to defining productivity for software development. Input can range from someone saying “this needs to be done” to a full-blown spec. Output can be anything from a bug fix to an entire application.

Because of this wildly ludicrous range, we have scrum and agile methodologies, which create sprints, breaking “productivity” down into more chewable (but not necessarily more digestible) units:

“A sprint is a short, time-boxed period when a scrum team works to complete a set amount of work.”

It accomplishes this by imposing an arbitrary time interval on the work, from which, again somewhat ludicrously, the team’s “velocity” can be measured to create nice graphs for the venture capitalists who are keeping the sinking ship from, well, sinking.

Because only so much can be done within a fixed time period, we have “iterations” and “refactoring” and “only do the minimal amount necessary to get the task in the sprint done.” So velocity looks good on paper, but does anyone measure how many times the same piece of code (and its dependencies) gets refactored over a thousand or ten thousand sprints because the task wasn’t given enough time to be done right in the first place?

Of course, the solution to that is to decompose the task into, you guessed it, smaller tasks that are “sprintable.” Rinse and repeat until you get a tower of babbling developers, project managers, and C-level managers, each speaking in unrecognizable tongues to the others.

Outsourcing addresses this bottomless pit by getting rid of costly developers and hiring droves of cheap developers with laser-focused, myopic vision (see the post below on the 737 Max), which results, if you’re lucky, in a failed product, and if you’re less lucky, death. Of the project, of people, of the company, of the management, any and all of the above.

So how do we then measure developer productivity? Let me ask a different question. Why should we measure developer productivity?

The productivity of developers is meaningless before the product hits the market. How can you measure “input” and “output” when the damn thing isn’t even generating any money? Charts of velocity are useless; at best, they might tell you when your money is going to run out or when the VCs are going to pull the plug. I feel my argument is weak here, but I stand by the premise.

The productivity of developers after the product hits the market and is generating revenue might be measurable against certain criteria, such as sales and customer satisfaction. It is also easier to perform sprints on an existing product that is in its maintenance cycle rather than its development cycle, because maintenance is mostly tooling improvements, bug fixes, specific new features, and the eternal pendulum swing between fragmented (microservices, serverless, etc.) and monolithic architectures.

Using sales as a criterion becomes useless when you have a monopoly or, to be more PC, have “cornered the market.” Or when you have enough money to buy your competition. Customer satisfaction? Who really cares, as long as you’re making sales?

So how then do we measure productivity? Simple. How much money did I make today vs. how much did my developers cost today? If that ratio is > 1, someone (not necessarily your developers) is productive. It could even be the consumer, being productive enough in whatever they do to afford your product, be they person, collective, corporation, or government. If that ratio is < 1, then you have a productivity problem. Somewhere. Not necessarily your developers. Maybe the consumer isn’t buying enough of your product due to an economic downturn. Or simply that your product sucks.
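
To make the arithmetic concrete, here is a minimal sketch of that ratio. The function name and the sample numbers are entirely made up for illustration:

```typescript
// A minimal sketch of the revenue-vs-cost ratio described above.
// The function name and the numbers are hypothetical.
function productivityRatio(revenueToday: number, developerCostToday: number): number {
  return revenueToday / developerCostToday;
}

const ratio = productivityRatio(12000, 8000); // 1.5
if (ratio > 1) {
  console.log(`Ratio ${ratio}: someone, somewhere, is being productive.`);
} else {
  console.log(`Ratio ${ratio}: you have a productivity problem. Somewhere.`);
}
```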

The only time you can actually measure developer productivity is when the company is too small to have a gaggle of managers, a shoal of lawyers, a caravan of tech support people, and a murder of sales “engineers,” but already has a product bringing in revenue.

In other words, a startup company that has succeeded in making some sales, usually to corporations or governments that will pay for maintenance contracts (hence some revenue stream after the initial sale), that is most likely growing too fast and too hard to keep up with customer requirements and bug fixes, and that hasn’t yet hired the gaggles, shoals, caravans, and murders that a well-greased “where did my productivity go?” company requires.

Which brings me to my Alice in Wonderland conclusion: developer productivity can only be measured in that awkward, painful, stressful, and insane period when a company has “hit it” but hasn’t “gotten it,” there is a minimal amount of obfuscation between the customer and the developer, the backlog of work is far beyond what the current team can accomplish without the tech to transfer brains upon death, and productivity is measured against “this has to get done by Friday or we lose the customer or sale.” In that specific circumstance, productivity is easy to measure. You either succeeded in keeping the customer or making the sale, or you failed. Binary. Black and white. You produced the output or you didn’t. You were productive or you weren’t.

One final rabbit hole. Developer productivity is also meaningless because it assumes a manufacturing style of “input” and “output” over a given time period. Software isn’t like that. It might take years to write a Google or Facebook, but once it’s done, well, it’s done. The “consumption” of the product is a web link or a 30-second download (unless you’re Microsoft). So how the heck do you measure productivity now, when once the product (the software) is produced, the “output” is little more than a click that clones onto your hard drive (if even that) the pattern of 0 and 1 bits that defines the product? Wow, my developers are insanely productive! We’ve had a million visitors to our site this year!!!

Which gets us to the evil of productivity: is Marc more productive than Joe? Meaning, given similar tasks, does Marc get the job done faster and with similar “accuracy” compared to Joe, or not?

Which, going back to my Alice in Wonderland scenario, is not an issue when your developers are “expert islands” and the developer-to-island ratio is 1:1. You have no basis for comparison until your company gets past that birth process and the developer-to-island ratio is 2:1 or more.

And this ratio, by the way, defines “job security” vs. “eek, I’m replaceable,” and therefore drives developers to want to be perceived as productive when they’re in the “eek, I’m replaceable” corporate structure. Fortunately, there are many islands, and the key to success (both for the developer and the corporation) is to keep a healthy balance in the developer:island ratio, because developers want to feel unique and valued, not like a cog in the machine, but a healthy level of stress and knowledge sharing is also socially rewarding. Which, in terms of psychology, makes for a happier and more productive developer! And ironically, in a corporate environment, this leads to the conclusion that only the developer can tell you whether he/she “feels” productive and to what degree, so your productivity measure in that scenario becomes entirely subjective. Which was the first sentence in this tome that just killed your productivity.

Progress

In the series Halt and Catch Fire, Joe MacMillan (Season 1, Episode 2) says:

“Progress depends on changing the world to fit us, not the other way around.”

I decided to google that, and it turns out it’s a condensed version of a line from George Bernard Shaw’s Man and Superman:

“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

Feels like the story of my life (being the unreasonable man) when it comes to software development!

Microservices: Myth, Madness, or Magic

Excerpt:

If you drink the Kool-Aid, the key phrases of the microservices bandwagon are:

  • loosely coupled services
  • fine-grained
  • lightweight
  • modular
  • resilient
  • parallelizes development
  • scalable

The irony here is that we’ve heard pretty much the same mantra starting with object-oriented programming / architecture / design, so why are microservices now suddenly (one of) the in-vogue solutions to problems that have not already been solved?
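
For what it’s worth, strip away the buzzwords and the “lightweight, loosely coupled service” pitch usually reduces to something like the sketch below (using only Node’s built-in http module; the /health endpoint and port number are my own invented example), promising exactly the kind of encapsulation that OOP interfaces promised decades ago:

```typescript
// A deliberately tiny "microservice": it knows nothing about its callers
// beyond an HTTP contract. The endpoint and port are hypothetical.
import { createServer } from "http";

const server = createServer((req, res) => {
  if (req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8080);
```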

Read more on Code Project.

What is a Full Stack Developer? Two Takes

Take #1

Laurence Gellert’s blog post on “What is a Full Stack developer” is a good summary of the high-level expectations of what this phrase might mean to an employer or manager. He breaks it down into the following categories:

  1. Server, Network, and Hosting Environment.
  2. Data Modeling
  3. Business Logic
  4. API layer / Action Layer / MVC
  5. User Interface
  6. User Experience
  7. Understanding what the customer and the business need

Read his post for a more detailed breakdown of those 7 categories.
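
To make those categories concrete, here is a hypothetical vertical slice touching items 2 through 5; every name in it is invented for illustration:

```typescript
// 2. Data modeling: the shape of the data.
interface Order {
  id: string;
  items: { sku: string; unitPrice: number; quantity: number }[];
}

// 3. Business logic: a rule that operates on the model.
function orderTotal(order: Order): number {
  return order.items.reduce((sum, i) => sum + i.unitPrice * i.quantity, 0);
}

// 4. API / action layer: what an HTTP controller would expose to the UI.
function getOrderTotalAction(order: Order): { orderId: string; total: number } {
  return { orderId: order.id, total: orderTotal(order) };
}

// 5. User interface: would render the action's result.
console.log(getOrderTotalAction({
  id: "A-1",
  items: [{ sku: "widget", unitPrice: 2.5, quantity: 4 }],
})); // { orderId: 'A-1', total: 10 }
```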

Take #2

As John Simmons / Outlaw Programmer so succinctly put it recently:

…”full stack” developers (another way of saying they don’t want to hire enough people to do the job right) that can work on a technology mix that would make most real programmers wince in pain, and nine times out of ten, all they want is to hire someone long enough to clean up the last guy’s mess, or to implement some tech agenda based on some idiot manager’s wrong-headed view of how things should work.

Personally, I think John’s view is more accurate with regards to the reality of how employers use the term “full stack.” Laurence’s post is accurate at a technical high level, but in my opinion it is also growing obsolete, particularly items 2-5, as one starts to take into account emerging technologies such as AI, microservices, and agent-based and context-based computing. These technologies completely reshape how we think about data modeling, business logic, the old Model-View-Controller paradigm, and user interfaces.

The Joel Test: work vs. home

Joel Spolsky wrote “The Joel Test: 12 Steps to Better Code” 18 years ago. Granted, it’s dated, but I thought it would be amusing to score my home work environment against my work work environment:

One point for each yes:

Work:

Do you use source control?  1/2 (yes, but not properly)
Can you make a build in one step? 1
Do you make daily builds? 0 – no, manually initiated
Do you have a bug database? 0 – not that I’ve ever seen
Do you fix bugs before writing new code? 0 – hahahaha
Do you have an up-to-date schedule? 0 – schedule, what’s that?
Do you have a spec? 0 – do you count screenshots in Excel as a spec?
Do programmers have quiet working conditions? 0 – unless you count needing to wear headphones
Do you use the best tools money can buy? 0 – VS2015, .NET 4.5, etc.
Do you have testers? 1
Do new candidates write code during their interview? 0 – I wasn’t asked to.
Do you do hallway usability testing? 0

Score: 2.5

Home:

Do you use source control? 1
Can you make a build in one step? 1
Do you make daily builds? 0
Do you have a bug database? 1
Do you fix bugs before writing new code? 1/2 (I really try to practice this)
Do you have an up-to-date schedule? 0 – my clients are pretty loose about schedules…
Do you have a spec? 1 – …but they’re good about specs.
Do programmers have quiet working conditions? 1 (as in, total silence)
Do you use the best tools money can buy? 1
Do you have testers? 1 (assuming the client doing the testing counts)
Do new candidates write code during their interview? 0 – don’t interview people
Do you do hallway usability testing? 0 – unless the cats count.

Score: 7.5

It’s sad how my home environment scores considerably better than my work environment.  No wonder there’s a “no telecommuting policy”, right?
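
For the record, the tallies are just sums with half-points allowed. A throwaway sketch, where the arrays are simply my answers above in question order:

```typescript
// One point per "yes", half points where noted; the arrays are my
// scores above, in question order.
const joelScore = (answers: number[]): number =>
  answers.reduce((sum, a) => sum + a, 0);

const work = [0.5, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0];
const home = [1, 1, 0, 1, 0.5, 0, 1, 1, 1, 1, 0, 0];

console.log(joelScore(work)); // 2.5
console.log(joelScore(home)); // 7.5
```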

Building a Web-Based Diagramming App with SVG and JavaScript

I’ve been wanting to learn about SVG for a while now, and there are certainly any number of helpful websites on creating SVG drawings and animations. But I didn’t want to learn how to create static (or even animated) SVG drawings; I wanted to learn how to use SVG dynamically:

  • Create, modify, and remove SVG elements dynamically.
  • Hook events for moving elements around, changing their attributes, etc.
  • Save and restore a drawing.
  • Discover quirks and how to work around them.

That’s what this article is about — it will only teach you SVG and JavaScript insofar as needed to achieve the goals outlined above. What it will teach you is how to create dynamic SVG drawings, and what better way to do that than to actually create a simple drawing program? Then again, I learned a lot about both SVG and modern JavaScript writing this article.
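
As a taste of what “dynamic SVG” means here, a minimal sketch covering the first three bullets (written in TypeScript for this note, though the article uses plain JavaScript; the canvas element id is my own assumption):

```typescript
// Assumes the page contains <svg id="canvas" width="200" height="200"></svg>;
// the id is an assumption for this sketch, not from the article.
const SVG_NS = "http://www.w3.org/2000/svg";
const svg = document.getElementById("canvas") as unknown as SVGSVGElement;

// Create a circle element dynamically and add it to the drawing.
const circle = document.createElementNS(SVG_NS, "circle");
circle.setAttribute("cx", "50");
circle.setAttribute("cy", "50");
circle.setAttribute("r", "20");
circle.setAttribute("fill", "steelblue");
svg.appendChild(circle);

// Hook an event: nudge the circle to the right on every click.
circle.addEventListener("click", () => {
  const cx = Number(circle.getAttribute("cx"));
  circle.setAttribute("cx", String(cx + 10));
});

// Save the drawing by serializing the SVG subtree to markup...
const saved = new XMLSerializer().serializeToString(svg);

// ...and restore it later by parsing that markup back into the DOM.
svg.outerHTML = saved;
```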

Read the rest of the article on Code Project!

Code is also on GitHub.

Contextual Data Explorer

Excerpt:

Object oriented programming and relational databases create a certain mental model regarding how we think about data and its context: both are oriented around the idea that context has data. In OOP, a class has fields, thus we think of the class as the context for the data. In an RDBMS, a table has columns, and again our thinking is oriented to the idea that the table is the context for the data, the columns. Whether working with fields or record columns, these entities get reduced to native types — strings, integers, date-time structures, etc. At that point, the data has lost all knowledge of the context to which it belongs! Furthermore, thinking about context having data, while technically accurate, can actually be quite the opposite of how we, as human beings, think about data. To us, data is pretty much meaningless without some context in which to understand it. Strangely, we’ve ignored that important point when creating programming languages and databases — instead, classes and tables, though they might be named for some context, are really nothing more than containers.

Contextual data restores the data’s knowledge of its own context by preserving the information that defines the context. This creates a bidirectional relationship between context and data. The context knows what data it contains and the data knows to what context it belongs. In this article, I explore one approach to creating this bidirectional relationship — a declarative strongly typed relational contextual system using C#. Various points of interest such as data types and context relationships (“has a”, “is a”, “related to”) are explored. Issues with such a system, such as referencing sub-contexts in different physical root-level contexts, are also discussed.
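
To give a flavor of that bidirectional relationship, here is a rough sketch of the core idea. The article’s actual system is written in C# and is far richer; treat this TypeScript analogue, and every name in it, as my own invention:

```typescript
// The context knows its data; each datum keeps a reference back to its context.
class Context {
  readonly data: ContextualData<unknown>[] = [];
  constructor(readonly name: string) {}

  add<T>(value: T): ContextualData<T> {
    const entry = new ContextualData(this, value);
    this.data.push(entry);
    return entry;
  }
}

class ContextualData<T> {
  constructor(readonly context: Context, public value: T) {}
}

const person = new Context("Person");
const firstName = person.add("Alice");

console.log(firstName.context.name); // "Person" - the data knows its context
console.log(person.data.length);     // 1        - the context knows its data
```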

Read the full article on CodeProject.