How to Conduct Code Reviews


A good post to start with is here:

I thought I’d share my own “wisdom” on the subject, regarding what I do:

1) I conduct the code review to cover the good, the bad, and the ugly.
2) I ask myself the question “if I had to maintain this code, what would I want to know?”
3) I ask others to speak up if they see anything in my algorithms or structure that I’m doing wrong.
4) I often present alternate implementations with pros and cons (for example, using metadata and reflection vs. imperative code, or LINQ vs. “old style” coding).
5) And most importantly, I don’t lead code reviews of other people’s code; I ask them to lead a code review of their own code.

The result is a learning experience for everyone (including me) and particularly the discovery of algorithm deficiencies or where comments would really be helpful.


My View of WPF

To tell the truth, I’ve never worked with a larger pile of crap than WPF. It is a complicated, HTML-duplicating, half-baked, non-updating, pattern-overkilling … anomaly … that provides zero productivity boost over WinForms, and I’m still looking for the cult that actually divides programming between a C#-writing programmer and a XAML-writing graphical designer. That’ll be the day.

They should’ve detached WinForms from the underlying Windows and extended it instead. WinForms is the result of three decades of event-driven GUI development. Bloody Bill’s Che Guevaras threw it away for a poor experiment in complicating what had already been simplified in the ’90s. — Tomaž Štih

I couldn’t have said it better!

The First Nail in the Crypt of CryptoCurrency


Sorry, couldn’t help myself with that bad pun.  The National Conference of Commissioners on Uniform State Laws, or Uniform Law Commission (ULC, visit their website), which “provides states with non-partisan, well conceived, and well drafted legislation that brings clarity and stability to critical areas of state statutory law,” has a committee on the “Regulation of Virtual Currency Businesses Act.”  On July 19, 2017, it released the “approved text” of a draft “approved and recommended for enactment in all the states,” the PDF of which you can read here.

This recommends some very specific regulations with regard to cryptocurrencies, or “virtual currencies.”  Here are a couple of the more interesting recommendations:

SECTION 201. LICENSE. A person may not engage in virtual currency business activity, or hold itself out as being able to engage in virtual currency business activity, with a resident unless the person is:
(1) licensed under this [act];
(2) licensed to conduct virtual currency business activity by a state with which this state has a reciprocity agreement;
(3) a registrant operating in compliance with Section 210; or
(4) exempt from this [act] under Section 103.

And with regard to what “virtual currency” means:

“Virtual currency” means
(A) a digital representation of value that:
(1) is used as a medium of exchange, unit of account, or store of value; and
(2) is not legal tender, whether or not denominated in legal tender; and
(B) does not include:
(1) a transaction in which a merchant grants value as part of an affinity or rewards program, which value cannot be taken from or exchanged with the merchant for legal tender, bank credit, or virtual currency; or
(2) a digital representation of value issued by or on behalf of the publisher and used within an online game, game platform, or family of games sold by the same publisher or offered on the same game platform.

This is the beginning of the process of controlling cryptocurrency, both in terms of license and regulation as well as constraining the exchange of cryptocurrency to essentially “play money.”

Regardless of the many voices decrying this recommendation, and regardless of China’s recent move to ban Initial Coin Offerings (ICOs), we will most likely see laws put into place making it illegal, and therefore punishable by prison and fines (and not payable in virtual coins!), for anyone who attempts to create a cryptocurrency as a means of exchanging value, except for reward programs and games.

So if you’re one of those futurists who think they can build a better, more equitable world by creating a community based on the untaxed, unregulated exchange of goods and services (think organic food growers, alternative schools, natural building materials, and magical-thinking healers) with your own virtual currency, think again.

RIP, Cryptocurrency!

Class-less Coding – Minimalist C#, and Why F# and Functional Programming Have Some Advantages


Can we use just the native .NET classes for developing code, rather than immediately writing an application-specific class that is often little more than a container?  Can we do this using aliases, a fluent style, and extension methods?  If we’re going to use only .NET classes, we’re going to end up with generic dictionaries, tuples, and lists, which gets unwieldy very quickly.  We can alias these types with using statements, but this means copying those using statements into every .cs file where we want to use the alias.  A fluent (“dot-style”) notation reduces code lines by representing code in a “workflow-style” notation, and since we aren’t writing classes with member methods, we have to implement behaviors as extension methods.  Aliases improve semantic readability at one level at the cost of confusing generic type nesting in the alias definition, and extension methods can be taken too far, which suggests two rules: write lower-level functions for semantic expressiveness, and avoid nested parentheses that require the programmer to maintain a mental “stack” of the workflow.  Creating more complex aliases gets messy very fast; this is an experiment, not a recommendation for coding practices!

In contrast to C#’s using aliases, F# type definitions are not aliases; they are concrete types.  New type definitions can be created from existing types, and type definitions can also be used to specify a function’s parameters and return value.  The forward pipe operator |> is similar to the fluent “dot” notation in C#, but the value on the left of the |> operator “populates” the last parameter in the function’s parameter list.  When a function in the chain returns something, the last function must be piped to the ignore function, which is slightly awkward.  F# type dependencies are based on the order of the files in the project, so a type must be defined before you use it.
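To make the C# side of this concrete, here is a minimal sketch of the style described above: a using alias over a native .NET generic type, plus extension methods that give it semantic, fluent behaviors. The names (WordCounts, Tally, Top) are hypothetical, invented for illustration, not taken from the article.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The alias must be repeated in every .cs file that wants to use it —
// one of the costs noted above.
using WordCounts = System.Collections.Generic.Dictionary<string, int>;

public static class WordCountExtensions
{
    // Extension method standing in for what would otherwise be a member
    // method on an application-specific container class.
    public static WordCounts Tally(this IEnumerable<string> words)
    {
        var counts = new WordCounts();

        foreach (var w in words)
        {
            counts[w] = counts.TryGetValue(w, out var n) ? n + 1 : 1;
        }

        return counts;
    }

    // A semantically named, lower-level function, per the first rule above.
    public static IEnumerable<string> Top(this WordCounts counts, int n) =>
        counts.OrderByDescending(kv => kv.Value).Take(n).Select(kv => kv.Key);
}

public static class Program
{
    public static void Main()
    {
        // Fluent, "workflow-style" notation over native .NET types only.
        var top = "the cat and the hat".Split(' ').Tally().Top(1);
        Console.WriteLine(string.Join(",", top)); // prints "the"
    }
}
```

Note how the alias hides the Dictionary&lt;string, int&gt; nesting at the usage site, while the alias definition itself is exactly the "confusing generic type nesting" the article warns about.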
In F#, we don’t need an Action or Func class for passing functions, because F# natively supports type definitions that declare a function’s parameters and return value; in other words, everything in functional programming is actually a function.  Tuples are a class in C# but native to functional programming, though C# 7.0 makes using tuples very similar to F#.  While C# allows function parameters to be null, in F# you have to pass in an actual function, even if that function does nothing.  F# uses a nominal (“by name”) rather than a structural type inference engine, so giving types semantically meaningful names is very important for the inference engine to infer the correct type.

In C#, changing the members of a class doesn’t affect the class type.  Not so with F# (at least with vanilla records): changing the structure of a record changes the record’s type.  Changing the members of a C# class can, among other things, lead to incorrect initialization and usage.  Inheritance, particularly in conjunction with mutable fields, can cause behaviors resting on implicit understandings like “this will never happen” to suddenly break.  Extension methods and overloading create semantic ambiguity.  Overloading is not supported in F#: functions must have semantically different names, not just different types or parameter lists.  Object-oriented programming and functional programming both have their pros and cons, with some hopefully concrete discussion presented here.
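The Func-vs-function-type and tuple points above can be sketched from the C# side. This is an illustrative example only; the names (Describe, classify) are invented, and the F# comparison lives in the comments.

```csharp
using System;

public static class Program
{
    // In F# the classify parameter's shape could be a named type
    // abbreviation such as `type Classify = int -> string`; in C# we
    // must reach for the Func<int, string> class instead.
    public static (int value, string label) Describe(int x, Func<int, string> classify)
    {
        // Unlike F#, C# lets classify be null — the burden of checking
        // falls on the callee.
        if (classify == null) throw new ArgumentNullException(nameof(classify));

        // C# 7.0 value-tuple literal, close to F#'s native (x, label).
        return (x, classify(x));
    }

    public static void Main()
    {
        var result = Describe(42, n => n % 2 == 0 ? "even" : "odd");
        Console.WriteLine($"{result.value} is {result.label}"); // prints "42 is even"
    }
}
```

In F#, passing null here simply wouldn’t compile; you would have to pass a real function, even a do-nothing one such as `fun _ -> ""`.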

Full article on Code Project here.

Luna – Visual and textual functional programming language with a focus on productivity, collaboration and development ergonomics.

Take a look at what these folks are doing.   Very cool stuff!!!

Software design always starts with a whiteboard. We sketch all necessary components and connect them to visualize dependencies. Such a component diagram is an exceptionally efficient foundation for collaboration, while providing a clear view of the system architecture and effectively bridging the gap between technical and non-technical team members.


Unfortunately, it is impossible to execute the diagram itself, therefore the logic has to be implemented as code.

Now that’s the part I disagree with, in the sense that, once the code-behind is written, and written in a modular and semantic way, you should be able to visually build the workflows and reduce/map/filter operations from “primitive” blocks, which then become bigger blocks from which you build, and so on.
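A minimal sketch of that idea in C#: small, semantically named "primitive" blocks (a filter, a map, a reduce) composed into a bigger named block, which is exactly the kind of unit a diagram node could represent. All names here are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Blocks
{
    // Primitive blocks: each is a small, self-contained transformation.
    public static IEnumerable<int> OnlyPositive(IEnumerable<int> xs) =>
        xs.Where(x => x > 0);                       // filter

    public static IEnumerable<int> Squared(IEnumerable<int> xs) =>
        xs.Select(x => x * x);                      // map

    public static int Sum(IEnumerable<int> xs) =>
        xs.Aggregate(0, (acc, x) => acc + x);       // reduce

    // A bigger block built purely by composing the primitives — the
    // code-behind a visual workflow editor could wire together.
    public static int SumOfPositiveSquares(IEnumerable<int> xs) =>
        Sum(Squared(OnlyPositive(xs)));
}

public static class Program
{
    public static void Main()
    {
        // -2 is filtered out; 1 and 3 become 1 and 9; the sum is 10.
        Console.WriteLine(Blocks.SumOfPositiveSquares(new[] { -2, 1, 3 })); // prints "10"
    }
}
```

Whether the composition is drawn as boxes and wires or written as nested calls, the executable artifact is the same, which is the point: the diagram and the code need not be two separate things.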