FlowSharpCode, continued…

fsc2.png

A simple example, but the “problem” is that the three DRAKON shapes (begin loop, output, and end loop) each still have their own C# code-behind.  For example, the begin loop shape has the code-behind:

 var n in Enumerable.Range(1, 10)

My original idea was that the DRAKON shape description should not define the language-specific syntax; instead, that should be implemented by the developer in the code-behind.

In practice (and I’ve written a complex application in FlowSharpCode, so I know) it becomes unwieldy to deal with one-liner code-behind.  The result is that I tend not to use DRAKON shapes, which leaves nothing better than a meaningless box with some code in it.
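
To make the problem concrete, here is a rough sketch of how per-shape code-behind fragments might be stitched together into the target language's loop syntax.  The function name and shapes-to-fragments mapping are hypothetical, not FlowSharpCode's actual generator:

```python
# Hypothetical sketch: each DRAKON shape contributes a code-behind fragment,
# and a generator stitches the fragments into the target language's syntax.

def assemble_loop(begin_fragment, body_fragments):
    """Wrap a begin-loop fragment and body fragments into C# source text."""
    lines = [f"foreach ({begin_fragment})", "{"]
    lines += [f"    {body}" for body in body_fragments]
    lines.append("}")
    return "\n".join(lines)

source = assemble_loop(
    "var n in Enumerable.Range(1, 10)",   # begin-loop shape's code-behind
    ["Console.WriteLine(n);"],            # output shape's code-behind
)
print(source)
```

The point is that the shape diagram carries only structure; every fragment is still C#-specific, which is exactly the unwieldiness described above.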

I’m also reluctant to put the code in the shape label (though this is supported), as again we’re dealing with language-specific syntax.

I’m also reluctant to create a meta-language for DRAKON shapes, for example, something that could interpret:

n = 1..10

into C#, Python, whatever.  What if the developer wants to write:

Count from 1 to 10

So, what I’m considering is letting the developer create a Domain Specific Language (DSL) so that they can expressively communicate the semantics of a DRAKON shape, and also provide the rules for how those semantics are parsed, ideally into an intermediate language (IL): something that expresses a for loop, a method call, whatever.

The advantage to this is that the developer can create whatever DSL they like to work in, the IL glues it together into the concrete language.

Two things happen then:

  1. The DSL is interchangeable.  Any DSL of your choosing can be composed on top of the IL.
  2. The IL is language independent, so it can be decomposed into language-specific syntax.
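
Here is a minimal sketch of the idea, assuming (hypothetically) a regex-based parsing rule supplied by the developer and simple per-language emitters; none of these names come from FlowSharpCode:

```python
import re

# Hypothetical DSL -> IL -> concrete-language pipeline.  The developer's
# parsing rule maps their own DSL phrasing ("Count from 1 to 10") onto a
# language-independent IL node; per-language back-ends render that node.

def parse_count_loop(text):
    """Parse the developer-chosen DSL phrase into an IL node (a dict here)."""
    m = re.match(r"Count from (\d+) to (\d+)", text)
    if not m:
        raise ValueError("not a count loop")
    return {"op": "for_range", "start": int(m.group(1)), "end": int(m.group(2))}

def emit_csharp(node):
    # Enumerable.Range takes (start, count), hence the arithmetic.
    count = node["end"] - node["start"] + 1
    return f"foreach (var n in Enumerable.Range({node['start']}, {count}))"

def emit_python(node):
    # range() excludes the upper bound, hence end + 1.
    return f"for n in range({node['start']}, {node['end'] + 1}):"

il = parse_count_loop("Count from 1 to 10")
print(emit_csharp(il))
print(emit_python(il))
```

Notice that the IL node records only the semantics (a counted loop from 1 to 10); both the DSL phrasing above it and the concrete syntax below it are replaceable.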

Item #2 of course imposes some significant limitations: what if a language doesn’t support classes, or interfaces, or the yield operator, or whatever?  I’m not particularly concerned about that, as a language-independent DSL/IL is more of a curiosity piece; it becomes rapidly untenable once your code starts calling language-, framework-, or platform-specific dependencies.

However, I’d love to hear my readers’ thoughts on this DSL/IL concept I’m considering.

 

 


FlowSharpCode Gets DRAKON Shapes

drakon1.png

I’ve added some select DRAKON shapes for creating flowcharts.  The Python code in the lower right editor is generated from the flowchart, and the output from the run is shown on the left.

PyLint is also now integrated into FlowSharpCode’s PythonCompilerService.  This really improves the development process as many syntactical errors are detected before even running the code.

Also, the code generator creates an execution tree which is independent of the language syntax, which means that support for other languages is easily added.  Now granted, the code itself in each of the DRAKON shapes is Python code, but I have some ideas for making that code language-agnostic as well.
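
An execution tree like that can be pictured as follows; this is an illustrative sketch (the node shapes and the `emit` function are mine, not FlowSharpCode's actual classes), with a single Python back-end walking the tree:

```python
# Hypothetical sketch of a language-independent execution tree.  Each node
# records only semantics; a per-language emitter turns the tree into source.

def emit(node, indent=0):
    """Render an execution-tree node as Python source text."""
    pad = "    " * indent
    kind = node["kind"]
    if kind == "loop":
        header = pad + f"for {node['var']} in range({node['start']}, {node['end'] + 1}):"
        body = "\n".join(emit(child, indent + 1) for child in node["body"])
        return header + "\n" + body
    if kind == "output":
        return pad + f"print({node['expr']})"
    raise ValueError(f"unknown node kind: {kind}")

# The DRAKON begin-loop / output / end-loop shapes become one tree:
tree = {
    "kind": "loop", "var": "n", "start": 1, "end": 10,
    "body": [{"kind": "output", "expr": "n"}],
}
print(emit(tree))
```

Adding another language then means adding another emitter over the same tree, without touching the shapes.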

What Will a 6GL Look Like?

First generation languages (1GL) were closely tied to the hardware, requiring the human operator to physically manipulate toggle switches to enter machine language instructions directly.

Second generation languages (2GL) can be loosely categorized as assembly languages.

Third generation languages (3GL) abstracted assembly language into a more human readable syntax.

Fourth generation languages (4GL) are distinguished from 3GLs in that they are typically further abstracted from the underlying hardware.

Fifth generation languages (5GL) abstract the language itself, such that programs are based on “solving problems using constraints given to the program, rather than using an algorithm written by a programmer.”

What will a 6GL look like?

In my opinion, it will look a lot like FlowSharpCode in which programs are written by piecing together the building blocks of smaller pieces of code (“behaviors”) using very visual tools, either a 2D canvas or a 3D virtual surface.

flowsharpcode

And while we’re at it, a 7GL?

Some may argue that a 7GL will be an AI, but again, in my opinion, an AI that truly succeeds at “writing” an original program will do so by building from smaller behaviors.  Expecting an AI to produce “code” in the languages that exist today is, well, a cute but absurd thought.  A successful AI will most likely utilize some kind of “visualization” (whatever that looks like to an AI) for manifesting its “imagination” into concrete behaviors.  And whatever visualization system the AI uses will most likely be mappable onto a 3D or 4D (including the time dimension) surface for us to peruse.

Writing Code Should be More Like Circuit Design

7476.png

Previously, I’ve written about FlowSharpCode and Visual Assisted Programming / Organizational Representation (V.A.P.O.R.).  Here’s a simple example of what I mean by this concept.

My first technology passion was actually hardware, but it was expensive (a 7476 flip-flop in the ’70s cost $4.50 from Radio Shack, if I remember correctly).  So I started goofing around with software: BASIC on a PDP-11, HP calculators, BASIC on a Commodore PET, etc.

But software was always missing something for me: a visual way of describing what the software does.  You see, software and hardware are very similar; they are both essentially a circuit.  With hardware, the lines describe the paths of electrons (signals) and the components describe how those signals are manipulated (their voltages and currents), like in this simple circuit that produces a tone you can vary, using a 555 timer chip, a speaker, and some discrete components:

555-2.png

(By the way, the history of the 555 timer is quite amazing.)  “Camenzind spent nearly a year testing breadboard prototypes, drawing the circuit components on paper, and cutting sheets of Rubylitha masking film. “It was all done by hand, no computer,” he says. His final design had 23 transistors, 16 resistors, and 2 diodes.”

If we want to write a simple WinForm C# app to do the same thing (more or less):

winform.png

we need about 142 lines of code, which you can view on this Gist.

The Play button acts like B1 in the schematic, the trackbar is the variable resistor RV1, the code implements the 555 timer (generating a sine wave in this case), and the speaker is actually a call to System.Media.SoundPlayer.
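
The tone-generating core of such a program is small.  Here is a minimal sketch of the “555 timer” portion in software, generating sine-wave samples at a frequency that the trackbar would control; this is an illustrative analogue in Python, not the C# code in the Gist:

```python
import math

# Sketch of the software "555 timer": generate sine-wave samples whose
# frequency is set by a parameter (the schematic's variable resistor RV1).

def sine_samples(frequency_hz, sample_rate=44100, seconds=1.0, amplitude=0.8):
    """Return `seconds` worth of sine samples in the range [-amplitude, amplitude]."""
    count = int(sample_rate * seconds)
    return [amplitude * math.sin(2 * math.pi * frequency_hz * i / sample_rate)
            for i in range(count)]

samples = sine_samples(440)   # an A4 tone; the trackbar would vary this value
print(len(samples))
```

Writing these samples to a WAV buffer and handing it to a sound player is the remaining plumbing that makes up most of those 142 lines.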

So What’s the Problem?

Someone once said to me that they would never use an editor that didn’t have outlining capability.  And you can sort of see why: even 142 lines of code is a lot to look at to glean what is going on.  Outlining helps:

outline.png

because at least it shows you what the top level methods are, so you can see what the programmer had in mind for overall structure.

That is, if the programmer wrote the code with a sufficiently fine level of granularity.  That’s a big “if.”  In fact, I refactored my original code (which was originally just Main and Play) so that there was something more to show here in the outline.

A good IDE also provides some useful information – here is what Visual Studio tells you about the file:

vsinfo.png

In both cases, what is lost is what was expressed so nicely in the hardware schematic:

555-2.png

the flow of signal!  A list of classes, fields, properties, and methods is like getting a bag of wires, chips, and discrete components:

grabbag.jpg

you still have no idea of how the program wires it all up!  To figure that out, you have to read the code and create, for yourself, a mental map (or maybe even do some pen & paper flowcharting) of what the code is doing.  For the sound player, that’s trivial.  For thousands (or hundreds of thousands, or millions) of lines of code, that is anything but trivial.

But We’ve Been Here, Done That

Or have we?  It’s ironic to me that hardware engineers are always using visual tools (software, nowadays) to design, implement, and simulate their hardware, yet we have nothing like that for software.  Sure, there have been numerous attempts, and of course we have various tools that create diagrams for us or even let us work in a diagramming mode.  Some of these tools will generate code stubs; some will reverse engineer code into diagrams (the most sophisticated of which can actually parse your code).

A few visual tools that have been tried, some with limited success, are:

UML

umlactor.png

BPEL and WWF

bpel.png

Schema Diagramming

db.png

Lego-like Programming (like Scratch)

scratch.png

Do These Tools Work?

For what they’re designed to do, yes, but I find these tools do very little to help me express visually the day-to-day work of writing code.  They are either too high level and abstract, or too low level and childish; they don’t work with the languages that I use; and, most importantly, they limit how I want to express concepts, at the granularity that I think is appropriate.

V.A.P.O.R – Visual Assisted Programming / Organic Representation

You may notice I changed that “O” to Organic (it used to be Organizational.)

This is one way to express what the tone player “circuit” looks like using FlowSharpCode (the thing that implements V.A.P.O.R.):

soundPlayer.png

This should give you a moment of pause.

Notice:

  1. Yes, this is a working, running, application.
  2. The UI is on the same surface as the implementing code.
  3. The surface is a visual, annotated representation of the program.
  4. A simple workflow is demonstrated, which helps to visualize the individual steps of a particular process.
  5. Arbitrary shapes and groups can be used for code fragments.

Where’s the Code?

That’s the beauty of it.  The code is embedded in each shape.  The shape can be anything; in fact, the speaker is actually a grouped rectangle and triangle with appropriate z-ordering.  The code-behind is in the group box containing those two shapes!

IC’s

We can even package code into re-usable “integrated circuits,” implemented either as separate assemblies (DLLs) or simply by grouping shapes into logical, re-usable compositions:

  • The Waveform Generator group is like the 555 timer.
  • The speaker is, well, a speaker.
  • The Play button is like the button in the schematic.
  • The TrackBar control is like the variable resistor that changes the frequency.

If I want to re-use an IC (or even just a code fragment), I just copy and paste the desired shapes to my own application surface, and I get the shapes, the annotations, and the code-behind.

Now, granted, there are three “ICs” that make this all work, which I haven’t shown in the picture above, consisting of:

  1. A bootstrapper to handle UI events (and internal events, but there aren’t any in this application.)
  2. A simple server that provides the communication channel to interface between the UI events and the application.  Why?  Because the UI events are actually generated from services running in the FlowSharpCode application, and we need to inform the SoundPlayer application of those events.
  3. A mechanism for updating the UI (which is hosted in FlowSharpCode) when state changes, in this case, Play and Stop.
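
The communication channel in item 2 can be pictured as a tiny publish/subscribe hub; this is a hypothetical sketch to convey the shape of the idea, not FlowSharpCode's actual server:

```python
# Hypothetical sketch of the UI-event channel: FlowSharpCode's services
# publish UI events (Play, Stop), and the hosted application subscribes.

class EventChannel:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_name, handler):
        """Register a handler to be called when event_name is published."""
        self.subscribers.setdefault(event_name, []).append(handler)

    def publish(self, event_name, **payload):
        """Deliver payload to every handler subscribed to event_name."""
        for handler in self.subscribers.get(event_name, []):
            handler(**payload)

channel = EventChannel()
log = []
channel.subscribe("Play", lambda **kw: log.append(("play", kw.get("frequency"))))
channel.publish("Play", frequency=440)   # as if the Play button were clicked
```

The bootstrapper (item 1) would wire the real UI controls to `publish` calls, and the SoundPlayer side would `subscribe` to react to them.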

More on all this later!

Is this concept limited to C# code?

Certainly not.  While I’m using C# for this demonstration, along with the SharpDevelop code editor, the code editor, compiler, etc., are services that are plugged in to FlowSharpCode.  Other services, supporting Java, JavaScript, Node, Ruby, or Python, along with syntax-highlighting editors, can be plugged in to FlowSharpCode as well.  In fact, one of the goals is to write FlowSharpCode as a web application, where your code, in whatever language you like, is built on a server, and you’re actually building web apps.

Can Coding be More Like Circuit Layout?

I certainly think so, and besides this simple demonstration, I’ve used this same process for writing an implementation of my favorite “prove the technology” game, Hunt The Wumpus.  I’ll be writing more about that soon!