The Servitude Language of Corporations and Managers

I was reading Don’t Let Employees Pick Their WFH Days and, like so many other articles of this kind, was struck by the use of “servitude” language when speaking of employees. Or, to use the more politically incorrect term, “master/slave” language. As Merriam-Webster defines servitude: a condition in which one lacks liberty especially to determine one’s course of action or way of life.

Examples:

How much choice should workers have in the matter?

When someone curtails the freedom of choice of another, that is control, and in the workplace, servitude.

On the one hand, many managers are passionate that their employees should determine their own schedule.

While that sentence sounds rather positive, the subtle message here is that managers have control over their employees’ schedules.

after talking to hundreds of organizations over the last year, have led me to change my advice from supporting to being against employees’ choosing their own WFH days.

“against employees’ choosing” – servitude.

So I have changed my mind and started advising firms that managers should decide which days their team should WFH. For example, if the manager picks WFH on Wednesday and Friday

Just amazing to me, the idea that someone else can have such control over my life.

Thankfully, there is this manager:

One manager told me “I treat my team like adults. They get to decide when and where they work, as long as they get their jobs done.”

An employer/employee relationship is an agreement that, among other things, usually includes an expectation of work hours. Where the work is done can be explicitly stated (and often the work must be done in “the office”), but it is also frequently ill-defined. However, in many cases, particularly in the tech industry, the requirement to work at the office is entirely arbitrary. It is often prejudiced, with managers having more flexibility to work from home than the non-manager employees. It is often capricious, where one manager in the organization is very flexible regarding work-from-home and another manager is militant about his team working in the office.

I have been lured into employment in the past with a stated “very flexible work from home schedule” during the interview, only to find myself “managed” by a militant dictator who doesn’t allow any work from home.

I have worked successfully as a contractor for 20 years, where requiring the contractor to work on premises, unless the work itself demands it, can run afoul of independent-contractor classification rules.

And I have worked for companies where the response to working from home is “no problem!”

During the pandemic, I have had the luxury (many do not) of being able to work from home. This has dramatically, and ironically, improved the quality of my life. I live in a very rural, artistic community. Working from home has afforded me the luxury of meeting with people in my pod with much greater flexibility. This includes artistic presentations, outdoor gatherings, and so forth, that I would have missed if I were in the office from 8 to 5 (all, of course, following the guidelines of the CDC). And simple things as well, like running an errand during lunchtime, or just taking a break to think about a problem away from the computer screen.

At the other extreme, 21% tell us they never want to spend another day working from home. These are often young single employees or empty nesters in city center apartments.

Corporations and their managers need to embrace diversity, as everyone’s individual needs are different.

They [managers] often confided that home-based employees in their teams get passed over on promotions because they are out of touch with the office … given the evidence that working from home while your colleagues are in the office can be highly damaging to your career.

This isn’t the responsibility of the employee to fix. It is the responsibility of the company and the managers.

While we have laws against slavery, in many ways working for corporate America is a silent, permissible, accepted form of servitude, and, like slavery, it does not recognize the free human being. While we often have little or no choice regarding the employer for which we work, that does not mean that the employer should treat us with any less human dignity.

To conclude, Merriam-Webster has this to say:

Servitude is slavery or anything resembling it. The entire black population of colonial America lived in permanent servitude. And millions of the whites who populated this country arrived in “indentured servitude”, obliged to pay off the cost of their journey with several years of labor. Servitude comes in many forms, of course: in the bad old days of the British navy, it was said that the difference between going to sea and going to jail was that you were less likely to drown in jail.

So I ask, why are we so willing to pay off the cost of “our journey through life” in servitude, rather than in its antonyms: freedom and liberty? What are managers so afraid of?

Code Review – What You Can Learn From a Single Line of Code


(image credit – a good article)

My first article of 2018 has been posted on Code Project!

It never fails to surprise me what can be gleaned from a single line of code. Gone are the days of BASIC, where each line did one thing: a print statement, a gosub or goto, an if-then-elseif-end if. Nowadays, a single line of code can be a chain of method calls, LINQ expressions, and operators like ?: (conditional), ?. (null-conditional), and ?? (null-coalescing), and even if-then-else implemented as extension methods.

What we’ll look at here is what can be gleaned about the implementation from just one line of code.
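The same density shows up in any modern language, not just C#. Here is a hypothetical Python sketch (the function and data are mine, not from the article) that packs a filter, an "or"-based coalescing idiom, and a chained call into one line:

```python
# One dense line: a generator expression with a filter, "or" standing in
# for a null-coalescing operator, and a sorted/join call chain.
def describe(users):
    return ", ".join(sorted(u.get("name") or "anon" for u in users if u.get("active")))

print(describe([
    {"name": "Ada", "active": True},
    {"name": None, "active": True},
    {"name": "Bob", "active": False},
]))  # Ada, anon
```

That single return statement filters, coalesces missing names, sorts, and joins, exactly the kind of line the article picks apart.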

Read more here.

My View of WPF

To tell the truth I’ve never worked with a larger pile of crap than WPF. It is a complicated, HTML duplicating, half baked, non updating, patterns over killing … anomaly … that provides zero productivity boost over WinForms and I’m still seeking for a cult that actually divides programming between a C# writing programmer and a XAML writing graphical designer. That’ll be the day.

They should’ve detached WinForms from underlying Windows and extended it instead. WinForms is a result of three decades of event driven GUI development. Bloody Bill’s Che Guevaras threw it away for a poor experiment in complicating what has already been simplified in the 90ties. — Tomaž Štih

I couldn’t have said it better!

Office Politics and Sh*tty Code

office politics.jpg

Kent Sharkey on Code Project recently asked this question:

I’m just curious to know how everyone else here deals with poorly written code in pre-existing projects.

Here’s my answer:

A few of the common questions about the code:

  • Is it bug riddled?
  • Is it hard (or impossible) to add new functionality?
  • Does it run only on/with obsolete or soon-to-be-obsolete technologies?
  • Does it have performance problems, and are those performance problems intrinsic to the architecture (or lack thereof)?
  • Is there a complete lack of unit and integration testing?  Would the code simply benefit from developing a test suite?
  • Is deployment a PITA?
  • Is the project set up correctly?  Does it have development, test, stage, and production deployment environments?
  • Are the tools being used for development archaic?

Regarding office politics:

  • Is the original coder/team still around?
  • Is there an adversarial situation between the coders and the users?
  • Does management b*tch about the problem but refuse to allocate the funds to fix it?
  • Has management become jaded with in-house development and thinks outsourcing / third party COTS, or bringing in a consulting team (at 10x the cost of in-house development) will fix the problem?
  • Does management think patching the code rather than rewriting it will fix the problem?
  • Does management even trust its developers?
  • Do the developers trust their managers?
  • Do the customers (in house or otherwise) trust the coders?
  • Do the customers trust the company / managers?

So, before even touching sh*tty code in an environment rich in office politics, those questions need to be answered and the issues addressed:

  • Managers, developers and customers need to be brought on board with small wins.  This doesn’t necessarily mean fixes in the code.  It also, and much more importantly, means communication, particularly, feeling heard.  And that means not just listening to the complaints, but coming back with a prioritized plan to address those complaints.
  • Develop trust before developing code. That’s a cute slogan, but think about how you communicate a project plan, even a small one, how you present measures of success, and how you communicate progress and obstacles, so that the coders, managers, and customers (in house or otherwise) all, and I mean all, feel confident moving forward.
  • Everyone needs to see themselves as a stakeholder. Particularly, that means management needs to be interested, involved, and engaged in the fixing process. No “disconnected” management, the typical “do this by then or else” style of management.  Customers need to be engaged too, with testing fixes and providing feedback.
  • Fix the blame game so that people are oriented toward solutions.
  • Get 100% agreement (even if everyone starts at “this all sucks”) and move rapidly towards 100% agreement on “this is how we make it great.”
  • Instead of daily stand-ups of “what did I do, what am I working on, what are my obstacles”, focus instead on “how’s my enthusiasm level, how well am I working with others, how well are others working with me.”
  • Lastly, be honest, and if you think someone isn’t being honest, call them on it. If trust and honesty don’t rapidly become the new culture, then accept it and move on to another job where the climate is healthier.

Those are all things I’ve encountered, and this is the boilerplate list of questions and the “moving forward” approach that I take.

The Dangers of Duck-Typed Languages

ducktyped.jpg

Try these examples yourself in repl.it

First, is the ambiguity of what something is.  For example, consider this Python example:

> a=[] 

We have no idea what “a” is an array of.  Now, many people will say it doesn’t matter, you’re not supposed to know, you just operate on the array.  There is a certain point to that, but it can lead to trouble.

Let’s try this:

> a=[1, 2, 3]

Ah, so you think we have an array of integers?  Think again:

> a.append('4')
> a
[1, 2, 3, '4']

Whoa!  That’s an array of mixed types.  In some ways that’s cool; in other ways, that’s dangerous.  Let’s say we want to add one to each element in the array, and we trust that the programmer who created / modified the array knows that it is supposed to be an array of ints.  But how would they know?  Someone else can come along and not realize that they’re appending a string to the array.  So now we come along, expecting a happy array of ints, and do this:

> [x+1 for x in a]
TypeError: cannot concatenate 'str' and 'int' objects

Oops – we get a runtime error!

What happens in Ruby:

> a=[1, 2, 3, '4']
[1, 2, 3, "4"]
> a.map {|x| x+1}
no implicit conversion of Fixnum into String

What happens in Javascript:

> a=[1, 2, 3, '4']
[1, 2, 3, '4']
> a.map(function(x) {return x+1})
[2, 3, 4, '41']

Holy Cow, Batman!  In Javascript, the string element is concatenated!

What does this mean?

It means that, among other things, the programmer must be defensive against, not necessarily the errors (sorry, I meant “usage”) of other programmers, but certainly the lack of strong typing in the language.  Consider these “solutions”:

Python:

> [int(x)+1 for x in a]
[2, 3, 4, 5]

Ruby:

> a.map {|x| x.to_i + 1}
[2, 3, 4, 5]

Javascript:

> a.map(function(x) {return parseInt(x)+1})
[ 2, 3, 4, 5 ]

Of course, if you have a floating point number in the array, it’ll be converted to an integer, possibly an unintended side-effect.
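That truncation is easy to demonstrate; a minimal sketch:

```python
# int() silently truncates floats, so "fixing" mixed types this way
# can quietly change values.
a = [1, 2.7, '4']
print([int(x) + 1 for x in a])  # [2, 3, 5] -- 2.7 became 2, not 3
```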

Another “stronger” option is to create a class specifically for integer arrays:

Python:

class IntArray(object):
  def __init__(self, arry = None):
    # Use None as the default: a mutable default like [] is shared across
    # calls and would leak state between instances.
    arry = [] if arry is None else arry
    self._verifyElementsAreInts(arry)
    self.arry = arry

  # support appending to array.
  def __add__(self, n):
    self._verify(n)
    self.arry.append(n)
    return self

  # support removing element from array.
  def __sub__(self, n):
    self._verify(n)
    self.arry.remove(n)
    return self

  def _verifyElementsAreInts(self, arry):
    for e in arry:
      self._verify(e)

  def _verify(self, e):
    if (not isinstance(e, int)):
      raise Exception("Array must contain only integers.")


# good array
a = IntArray([1, 2, 3])
a += 4
print(a.arry)
a -= 4
print(a.arry)

try:
  a += '4'
except Exception as e:
  print(str(e))

# bad array
try:
  IntArray([1, 2, 3, '4'])
except Exception as e:
  print(str(e))

With the results:

[1, 2, 3, 4]
[1, 2, 3]
Array must contain only integers.
Array must contain only integers.

What this accomplishes is:

  1. Creating at runtime the type checking that a strongly typed language does for you at compile-time.
  2. Inflicting a specific way for programmers to add and remove items from the array (what about inserting at a specific point?).
  3. Not actually preventing the programmer from manipulating arry directly.
  4. Javascript? It doesn’t have classes, unless you are using ECMAScript 6, in which case classes are syntactic sugar over JavaScript’s existing prototype-based inheritance.
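Point 3 is worth seeing concretely. A minimal sketch (an abbreviated IntArray, assuming the same design as above) showing the checks being bypassed:

```python
# Abbreviated version of the IntArray idea above, just enough to show
# that the internal list is still reachable from outside.
class IntArray(object):
    def __init__(self, arry=None):
        self.arry = [] if arry is None else arry
        for e in self.arry:
            if not isinstance(e, int):
                raise Exception("Array must contain only integers.")

a = IntArray([1, 2, 3])
a.arry.append('4')   # bypasses any checked __add__ entirely
print(a.arry)        # [1, 2, 3, '4'] -- the invariant is silently broken
```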

The worst part about a duck-typed language is that the “mistake” can be made but not discovered until the program executes the code that expects certain types.  Would you use a duck-typed language as the programming language for, say, a Mars reconnaissance orbiter?  It’ll be fun (and costly) to discover a type error when the code that fires up the thrusters for the orbital insertion finally executes!

Which is why developers who promote duck-typed languages also strongly promote unit testing.  Unit testing, particularly in duck-typed languages, is the “fix” for making sure you haven’t screwed up the type.
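A minimal sketch of what such a test might look like, using plain assertions (the function under test is hypothetical):

```python
# A unit test pins down the type assumption that the language won't enforce.
def add_one(values):
    return [x + 1 for x in values]

def test_add_one_accepts_only_numbers():
    assert add_one([1, 2, 3]) == [2, 3, 4]
    try:
        add_one([1, 2, 3, '4'])
    except TypeError:
        return True          # the latent bug is caught in the test run
    return False             # a string slipped through unnoticed

assert test_add_one_accepts_only_numbers()
print("tests passed")
```

The point being: without that test, the TypeError waits in production for the first mixed-type array to arrive.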

And of course the irony of it all is that, under the hood, the interpreter still knows the type.

It’s just that you don’t.

TDD is not the Failure, our Culture of Development Is

In the article TDD is dead.  Long live testing.  and a subsequent response, The pitfalls of Test-Driven Development, both authors, in my opinion, are missing the mark by a mile. In the real world, walking into an existing code base, the reason you need TDD (but can’t use it) is that programmers didn’t spend sufficient time on good architecture practices, resulting in code that is an entangled morass of intertwined concerns, and yes, as one of the authors points out, because the code itself was never intended to be tested except through, at best, acceptance test procedures (most likely the pen-and-paper variety — with the developer watching in the background hoping he rolls a 20.)  There is an overall lack of attention paid to designing a component with sufficient abstraction that it can accommodate changing requirements, and little or no attention paid to separating (decoupling) the behaviors between components such that component A shouldn’t ever care about what component B is doing. We have a philosophy of refactoring with “just get the minimum to work” to blame for that — the days of thinking about architecture and abstraction are long gone.

The metaphor of building a sky-scraper is inaccurate because anyone building a sky-scraper would know that you can’t make the walls of the first floor so weak that they can’t support a second floor. Except it is accurate because, by not paying attention to the requirements and living in a “do the minimum work” philosophy, promoted by the likes of Kent Beck’s Agile Programming and Martin Fowler’s refactoring philosophies (along with their mutant child Extreme Programming), this is exactly what ends up happening, and thus TDD is a necessary fallout of a broken coding paradigm. A more accurate metaphor would have been: the requirements called for a single-story building, then the requirements changed. Again, with sufficient abstraction up front, the straw-bale walls could be replaced with titanium-reinforced hay quite easily.

As for Rails (or rather Ruby, or rather any duck-typed language), TDD is again essential because duck-typing allows for variances in the behavior at runtime, both of type and function calls. The non-strictness of duck-typing is leveraged in lieu of good object-oriented design–why create sub-classes when I can just pass in an instance that quacks just like the other “ducks.” While object-oriented programming cannot be done well without object-oriented design (and yes, I’ve seen both the “P” and the “D” done horribly and have done it horribly myself), a duck-typed language allows the programmer to completely eliminate the “D” — class, method, quack, quack. Perhaps we should take a clue from the name, “duck-typing”, that it is actually quackery, and like medical quackery, promises rapid “feel good” development that you end up paying for in little fragments of time running unit and integration tests because you didn’t do the necessary up-front design, you haven’t clearly abstracted how the rules are handled, you haven’t clearly decoupled the behaviors of complex systems to identify the dependencies (which all complex systems will have.) If you add up all the time spent running those tests (which of course spawned whole new technologies to run those tests faster and faster) you will discover that over the lifetime of the product, you spent far more time watching little green bars (or red ones) than you would have spent on solid up-front architecture, particularly in the areas of abstraction. But nay, it quacks, and it’s fast. At first.

As for “the industry’s sorry lack of automated, regression testing”, here, let’s not blame the programmer directly even though they consistently over-promise and under-estimate, but rather, let’s blame a culture, yes, starting with “the geek” but also placing responsibility firmly on the business practices of management and the continually demonstrated lack of understanding of the importance of regression testing, and the time & cost that developing regression tests and, even more costly, maintaining those tests requires. As with essentially all other aspects of our society, we are living in a constant tension between short-term gains and long-term investment (TDD can be considered an investment) and we all know which side is winning. We have a culture that rewards quick results and punishes the methodical and (seen as) slow thinker. Some of this may be justifiable due to market pressures and real budgetary constraints, but what is lacking is the consciousness to balance planning and activity.  So what we have instead is a knee-jerk culture oriented to quick results (Agile Programming and Refactoring) which, to support a broken development paradigm, demands TDD as the “fix”, but nobody seems to see that.

When one of the authors writes “I have yet to see a concrete illustration of how to use Test-Driven Development to test a back-end system interacting with a 20-year-old mainframe validating credit card transactions” I laugh, because I have done just that — CC validation systems all have the means of simulating transactions, and it’s actually trivial to write TDD tests against such systems. Yes, the author does have a point that much of the “…source code [encountered in legacy systems]…was never designed to be tested in the first place…”, but again, that’s missing the point — TDD is clearly the wrong tool for those systems. TDD works best in an environment:

  • lacking architecture,
  • most likely using duck-typing languages,
  • and, most importantly, one that has started from ground zero with testing as one of the coding requirements.

This is independent of whether it’s a 200-line gem (as in a Ruby library, as opposed to a “great thing”) or a 100-million-line application. If those 100 million lines were written with the intention of being unit / feature tested, then there is no problem. Except that it’s TDD and probably not well architected out of the gate.  And let me be clear that TDD, when applied to a well-architected application, is a perfectly valid and beneficial practice, but then TDD is also simplified because it ends up testing behaviors, not architectural flaws and geese trying to pretend to be ducks.

So, is the failure TDD? No. The failure is in a culture entrenched at all levels of software development that says that Agile Programming and Refactoring can replace thinking and its siblings, “design” and “planning.”  We have Kent Beck and Martin Fowler (I commit sacrilege in criticizing the gods) to squarely blame for that “regression”, excuse me, “story.”  And it’s those two aspects (pun intended) of programming that should be given the boot, not TDD, which has a validity in and of itself under the right conditions.  However, even Agile & refactoring are merely symptoms (or victims) of a cultural disease that demotes thinking (we need only look at our K-12 education systems and Common Core for proof) and promotes the short-term gain (as demonstrated by our economic, medical, agricultural, etc. practices.)

Beware of Ruby’s string split function

Observe this behavior:

(screenshot: irb examples of String#split dropping trailing empty fields)

And the description in the documentation:

“If the limit parameter is omitted, trailing null fields are suppressed. If limit is a positive number, at most that number of fields will be returned (if limit is 1, the entire string is returned as the only entry in an array). If negative, there is no limit to the number of fields returned, and trailing null fields are not suppressed.”

Now, while I can somewhat understand this behavior, it is certainly at odds with the behavior of the split function in other languages that I’ve used.  Ruby’s default behavior can cause serious problems when parsing CSV files and auto-populating fields where you expect empty strings rather than nils.
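For contrast, Python’s str.split keeps those trailing empty fields, which is usually what CSV-style parsing wants; a quick check:

```python
# Python preserves trailing empty fields; Ruby's split (with no limit
# argument) would return only ["a", "b"] for the same input.
row = "a,b,,"
print(row.split(","))   # ['a', 'b', '', '']
```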

Which brings up the next point:

(screenshot: irb example of indexing past the end of an array returning nil)

When I index outside of the array length, no exception is thrown.  Come on, Ruby!  That’s just bad form.  Again, the Ruby documentation for array says:

To raise an error for indices outside of the array bounds or else to provide a default value when that happens, you can use fetch.

Sigh.
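For comparison, Python takes the opposite default: plain indexing raises, and silent behavior is opt-in via slicing. A quick check:

```python
a = [1, 2, 3]
try:
    a[10]                  # out-of-bounds indexing fails loudly
except IndexError as e:
    print("IndexError:", e)

print(a[10:11])            # [] -- slicing, by design, never raises
```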

 

LibXML — empty nodes (and the libxml-ruby gem)

I was going to title this post “LibXML — how to fail right out of the box” but then thought a more accurate description of the problem might be better.  There is something I don’t understand about the open source community: its tolerance.  I encountered this problem right out of the gate:

The XML:

<?xml version="1.0" encoding="UTF-8"?>
<root_node>
<elem1 attr1="val1" attr2="val2"/>
<elem2 attr1="val1" attr2="val2"/>
<elem3 attr="baz">
  <elem4/>
  <elem5>
    <elem6>Content for element 6</elem6>
  </elem5>
</elem3>
</root_node>

The resulting root children:

(screenshot: the root node’s children, including empty text nodes for the whitespace between elements)

Do tell me why whitespace and CRLF’s are seen as empty nodes?  Do tell me why this absurd behavior is tolerated as the default?  I had to google for some indication as to what’s going on.  This fixes the problem (Ruby code):

doc = XML::Document.file('foo.xml',
  :encoding => XML::Encoding::UTF_8,
  :options => LibXML::XML::Parser::Options::NOBLANKS)

Other than that, it looks like a decent enough package, though I haven’t explored it further.

The libxml-ruby gem

The libxml-ruby gem worked fine, but I did have to, as the documentation says, copy three binaries into one of the directories in the Windows path.  Happily, the gem comes with precompiled binaries – that’s a real help, and kudos to the gem authors for providing the MinGW32 binaries.

Ruby, Nested Yields, and Implicit Return Values

This is one of the many reasons I cringe at languages like Ruby with implicit behaviors.  Take this example:

class DoStuff
  attr_reader :accum

  def initialize
    @accum = ''
  end

  def do_a
    @accum << 'a'
    @accum << yield
  end

  def do_b
    @accum << 'b'
    @accum << yield
  end
end

def fubar
  do_stuff = DoStuff.new

  do_stuff.do_a do
    do_stuff.do_b do
      "rec\r\n"
    end
  end

  puts do_stuff.accum
end

fubar

Quick, tell me why the return is:

abrec
abrec

The reason is that the block passed to do_a implicitly returns the result of the call to do_stuff.do_b, which in turn implicitly returns its last expression — the accumulator itself, by then containing “abrec\r\n” — so do_a appends the accumulator to itself.

To fix this, one must explicitly make an empty string the last expression of the outer block:

  do_stuff.do_a do
    do_stuff.do_b do
      "rec\r\n"
    end
    ''
  end

and now the return is:

abrec

So, beware, especially beginner programmers, of the implicit return in Ruby functions.

Compare this with a C# implementation:

public class DoStuff
{
    public string Accum {get; protected set;}

    public DoStuff()
    {
        Accum = "";
    }

    public void DoA(Func<string> a)
    {
        Accum += "a";
        Accum += a();
    }

    public void DoB(Func<string> b)
    {
        Accum += "b";
        Accum += b();
    }
}

class Program
{
    static void Main(string[] args)
    {
        DoStuff doStuff = new DoStuff();
        doStuff.DoA(() => doStuff.DoB(() => "rec\r\n"));
        Console.WriteLine(doStuff.Accum);
   }
}

We get a compiler error:

Cannot implicitly convert type ‘void’ to ‘string’

This clearly tells us we have done something wrong.

If we change the return types to strings, then it becomes obvious (hopefully) that we want to return an empty string:

public string DoA(Func<string> a)
{
    Accum += "a";
    Accum += a();

    return "";
}

public string DoB(Func<string> b)
{
    Accum += "b";
    Accum += b();

    return "";
}

and we get the desired behavior.

We can of course write C# code that mirrors the implicit return of the Ruby example, and then the C# code clearly illustrates this (and therefore the error of our ways):

public string DoA(Func<string> a)
{
    Accum += "a";
    Accum += a();

    return Accum;
}

public string DoB(Func<string> b)
{
    Accum += "b";
    Accum += b();

    return Accum;
}

And indeed, we get:

abrec
abrec

just like in my “wrong” Ruby example.
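For what it’s worth, Python sits between the two: a function without an explicit return returns None, so the equivalent accumulator code fails fast at runtime instead of silently duplicating. A rough analogue of the Ruby example, with hypothetical names:

```python
# Rough Python analogue of the Ruby example above.
class DoStuff:
    def __init__(self):
        self.accum = ''

    def do_a(self, block):
        self.accum += 'a'
        self.accum += block()   # appends whatever the callable returns

    def do_b(self, block):
        self.accum += 'b'
        self.accum += block()
        # no return statement, so do_b returns None

do_stuff = DoStuff()
try:
    do_stuff.do_a(lambda: do_stuff.do_b(lambda: "rec\r\n"))
except TypeError:
    # do_b returned None, and str += None fails immediately
    print("TypeError: cannot add None to str")

print(do_stuff.accum)   # abrec -- no accidental duplication
```

Not a compile-time error like C#, but at least the mistake surfaces at the call site rather than producing a quietly doubled string.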

When Metro Fails

Here’s a great example of a fail, in my opinion, of a Metro design.  It’s the installer for Wix Toolset:

(screenshot: the Wix Toolset installer’s opening screen)

This is a Metro-looking starting screen, and it took me probably 15 seconds to figure out where I was supposed to click to install the toolset.

First issue: What’s with all the red?  This means I should be paying attention to something, right?

Second issue: What the heck is this screen?  I was expecting a standard install screen.

Third issue: Now what?  What am I supposed to do?  The icons are meaningless to me.  Oh wait, maybe I should read that teensy-weensy text for each of the boxes.

Fourth issue: Ah, there is “Install” in a tiny font.

Really, just because Microsoft says “Metro” doesn’t mean we all need to jump like automatons, does it?  And if you think Metro is the right way to go, please, please, design something that actually is intuitive.

Further Failures

After clicking on “Install”, I note the following further failures:

  1. The entry on my Windows taskbar shows an icon, with no text, that I can only assume is from Wix.  That’s helpful.
  2. The installation starts with a spinning “gear” – I have no idea what it’s doing.
  3. A lot of meaningless file information eventually flashes by, too fast to read, too long to fit on the screen.
  4. The progress bar (if you can figure out that the darker red is a progress bar) jumps right, left, right, left, like a spastic hamster.
  5. After it’s completed, the first screenshot still stays there.  Now what?  I guess I should click on “Exit”?

That’s my 2c.