I was reading a post on LinkedIn (which I will not link to, haha) about why we use interfaces. I’ve found that the concept of interfaces is not something that junior to mid-level developers easily understand. Here’s the answer:
Interfaces let you separate specification from implementation.
The problem with this answer is that it leads to the next question:
Why is it useful to separate specification from implementation?
This is the sticky point: if you don’t have a lot of experience writing code, you probably don’t know why, simply because you haven’t had enough real-world experience to understand this concept. So here’s the rest:
When specification is separated from implementation, it allows the implementation to change. In terms of actual usage, it means that different implementations can be concretely instantiated, each implementing the same specification. Once instantiated, the user of the specification (another part of the program) works with the implementation through the interface rather than the concrete implementing type. This way, the user of the specification doesn’t and shouldn’t care about the underlying implementing type.
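To make this concrete, here’s a minimal C# sketch. The `ILogger` interface and its two implementations are invented purely for illustration — the point is that `DoWork` depends only on the specification, never on a concrete type:

```csharp
using System;
using System.Collections.Generic;

// The specification: what a logger can do, with no commitment to how.
public interface ILogger
{
    void Log(string message);
}

// One implementation: writes to the console.
public class ConsoleLogger : ILogger
{
    public void Log(string message) => Console.WriteLine(message);
}

// Another implementation: collects messages in memory (handy for tests).
public class MemoryLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();
    public void Log(string message) => Messages.Add(message);
}

public static class Program
{
    // The user of the specification works only through ILogger;
    // it neither knows nor cares which concrete type it was given.
    public static void DoWork(ILogger logger)
    {
        logger.Log("Work done.");
    }

    public static void Main()
    {
        DoWork(new ConsoleLogger()); // logs to the console
        var memory = new MemoryLogger();
        DoWork(memory);              // logs to a list instead
    }
}
```

Swapping `ConsoleLogger` for `MemoryLogger` requires no change to `DoWork` — that’s the payoff of separating specification from implementation.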
- Design patterns are a response to the entanglement nightmare that OOD did not create, but certainly made more complex.
- While the formalization of the patterns was in some ways useful, the implementation often results in over-complexity and misapplication, especially by inexperienced programmers.
- Experienced programmers were already implementing decent ways to disentangle non-OO and OO code, so really, I think very little was gained by formalizing patterns. If anything, it made things worse for experienced developers who had to go in and fix the insanity of bad pattern application by less experienced developers.
And apparently (and sad to say), there is much agreement in the community regarding the second and third points.
This is not a long tome on software architecture. This is simply about how to write good code.
Code consists of functions, and this is what a function should look like (using C# style as an example):
// A comment describing why the function exists
ReturnType NameThatDescribesWhatTheFunctionDoes(parameters)
{
    ... how the function does what it says it does.
}
That’s it! If you follow the “why-what-how” approach, the quality of your code will start to improve!
Here’s a real-life example:
/// The user can get an instance of a service implementing
/// the specified interface, but only if the service is
/// registered as supporting instance creation.
public virtual T GetInstance<T>()
    where T : IService
{
    IService instance = CreateInstance<T>();
    ...
}
- The “why” is clear — we’re supporting a need the programmer will have for creating service instances.
- The “what” is clear — the function returns an instance of the specified generic type T.
- The “how” is clear — there are four function calls that describe how this function works.
Here’s a simple guideline for figuring out when to break a function apart into smaller functions: if your function’s body contains fragments that would each need their own “what” or “why” comment, that’s a great indicator that you should move those fragments into separate functions, so that each function name describes the “what”, and then write a “why” comment for each new function.
In other words, when you could write a comment in the body of the code that answers one of those questions, you need to move that code into a function that answers the “what”. Recurse that process until the function clearly describes “how”, then for each function you created, write an intelligent “why” comment.
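As a sketch of that process — the order-totaling domain and the 10% discount policy here are invented purely for illustration — each body fragment that would have needed a “what” comment has been pulled out into a function whose name states the “what”, leaving the top-level body to read as the “how”:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class OrderProcessor
{
    // Totals an order so the caller can present a final price
    // without knowing the discount policy.  (the "why")
    public static decimal ComputeTotal(List<decimal> itemPrices)
    {
        // The body is now pure "how": each step is a named "what".
        decimal subtotal = SumItems(itemPrices);
        decimal discount = ComputeDiscount(subtotal);
        return subtotal - discount;
    }

    // Each item's price contributes to the pre-discount subtotal.
    static decimal SumItems(List<decimal> itemPrices) => itemPrices.Sum();

    // Orders over 100 earn a flat 10% discount (invented policy).
    static decimal ComputeDiscount(decimal subtotal) =>
        subtotal > 100m ? subtotal * 0.10m : 0m;
}
```

Before the refactoring, `ComputeTotal` would have contained the summing loop and the discount arithmetic inline, each begging for its own comment — the signal that they wanted to be functions.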
Now go forth and code better!
Three blind programmers. Three blind programmers.
See how they code. See how they code.
They all went over the waterfall,
They scrambled and scrum’ed
But they weren’t agile enough,
And they drowned in soggy kanban post-its.
Did you ever see such a sight in your life,
As three blind programmers?
Recently Robert C. Martin posted The Programmer’s Oath, in response to which I came up with my own version and posted it on Code Project. It seems I hit a nerve; I don’t usually get so many up-votes for a post. Here’s my version (slightly less harsh than my original post!):
- I will not work with people that work with #1.
- I will code when my brain feels like coding, I will not code on YOUR time frame. I will however work as much as it takes to ensure that agreed upon deadlines are met.
- I will not work in a cubicle.
- I will not put up with sh***y equipment and stupid management decisions.
- I will write code that is maintainable, extensible, commented, and documented, no matter what management says.
- I will spend time testing my code, but YOU damn well better have the people, resources, and commitment to test my code independent of me.
- I will write code using my own well thought out architecture, not some fly-by-the-seat-of-your-pants Agile methodology bullsh*t.
- I will not waste my valuable time learning some half-assed open-source latest rage just because every other idiot says it’s the latest rage.
- I will always make time to work on my own stuff because frankly, it’s usually more interesting (but not always!) than the project I’m working on that actually pays the bills.
Given the number of up-votes, I think this speaks volumes to the issues developers are constantly dealing with and to the perceived problems with software development, the burgeoning of open-source frameworks, and the constant issues with management and work environment.
Download it from Syncfusion!
“The concept of a “web server” has become fuzzy because the server is now entwined with the dynamic requirements of web applications. Handling a request is no longer the simple process of “send back the content of this file,” but instead involves routing the request to the web application, which, among other things, determines where the content comes from. In Web Servers Succinctly, author Marc Clifton provides great insights on the benefits of building your own web server, and covers different options available for threading, work processes, session management, routing, and security.”