Upcoming: Kirra, a language-independent API for business applications

What do NakedObjects, Apache Isis, Restful Objects, OpenXava, JMatter, Tynamo, Roma, and Cloudfier have in common?

These frameworks and platforms allow developers to focus on expressing everything they know about a business domain in the form of a rich domain model. They all support or enable automatically generating, from the application’s domain model, interfaces that make the entire functionality of the application accessible to end users, without requiring any effort in designing a user interface. They can also often auto-generate a functional (usually REST) API for non-human actors.

However, each of those frameworks/platforms implements automatic UI or API generation independently, against its own proprietary metamodel – and separately for each UI and API technology supported. So while Cloudfier supports a Qooxdoo client, Isis supports a Wicket viewer and a JQuery viewer, OpenXava seems to have a JQuery/DWR UI, and so on.

This is the motivation for Kirra: Kirra aims to decouple the interface renderers from the technologies used for creating domain-driven applications, promoting the proliferation of high-quality generic UI and API renderers that can be used across domain-driven development frameworks, or even if your application is not built with a domain-driven framework.

But what is Kirra?

Kirra is a minimalistic language-independent API specification to expose functionality of a business application in a business and technology agnostic way.

Essentially, Kirra provides a simple model for exposing metadata and data for business applications, no matter how they were implemented, enabling generic clients that have full access to the functionality exposed by those applications.
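To give a rough idea of the kind of information such an API could expose (a purely illustrative sketch, not Kirra's actual schema – the property and action shapes below are assumptions), a generic client might obtain entity metadata along these lines:

```json
{
  "entity": "Expense",
  "properties": [
    { "name": "amount", "type": "Double", "required": true },
    { "name": "status", "type": "Status", "editable": false }
  ],
  "actions": [
    { "name": "submit" },
    { "name": "approve" }
  ]
}
```

With metadata like this, a renderer can build forms, lists and action buttons for any application, without knowing anything about the underlying implementation technology.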

Watch this space for more details and the first release, planned for later this month.

Command Query Separation in TextUML

Ever heard of Command Query Separation? It was introduced by Bertrand Meyer and implemented in Eiffel. But I will let Martin Fowler explain:

The term ‘command query separation’ was coined by Bertrand Meyer in his book “Object Oriented Software Construction” – a book that is one of the most influential OO books during the early days of OO. [...]

The fundamental idea is that we should divide an object’s methods into two sharply separated categories:

  • Queries: Return a result and do not change the observable state of the system (are free of side effects).
  • Commands: Change the state of a system but do not return a value.
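As an illustration (a generic sketch, not taken from any of the frameworks discussed here), a counter object following Command Query Separation keeps the two categories strictly apart:

```javascript
// A minimal CQS sketch: queries return values and have no side effects;
// commands mutate state and return nothing.
function Counter() {
    this.value = 0;
}

// Query: reports state, changes nothing.
Counter.prototype.current = function() {
    return this.value;
};

// Command: changes state, returns no value.
Counter.prototype.increment = function() {
    this.value += 1;
};

var c = new Counter();
c.increment();
c.increment();
console.log(c.current()); // → 2
```

Note that a CQS-violating design (say, an `increment` that also returned the new value) would work just as well mechanically; the value of the discipline is that callers can tell at a glance which invocations are safe to repeat or reorder.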

Query operations in UML

UML too allows an operation to be marked as a query. The section on Operations in the UML specification states:

If the isQuery property is true, an invocation of the Operation shall not modify the state of the instance or any other element in the model.

Query operations in TextUML

The next release of TextUML (which runs in Cloudfier today) will start exposing the ability to mark an operation as a query operation. Since TextUML is just a notation for UML, the UML spec’s definition above applies to TextUML operations marked as queries.

But how do you mark an operation as a query in TextUML, you ask? You use the query keyword instead of the usual operation keyword (it is not just a modifier, it is a replacement for the usual keyword):

query totalExpenses(toSum : Expense[*]) : Double;

The TextUML compiler imposes a few rules when it sees a query operation:

  • it will require the operation to have a return value
  • it won’t let the operation perform any actions that could have side effects, such as creating or destroying objects, writing properties or linking objects, or invoking any other non-query operations
  • also, it will only let you invoke operations from a property derivation if they are query operations

Example of a query operation


    private query totalExpenses(toSum : Expense[*]) : Double;
    begin
        return (toSum.reduce((e : Expense, sum : Double) : Double {
            sum + e.amount
        }, 0) as Double);
    end;

Example of a derived attribute using a query operation


    derived attribute totalRecorded : Double := {
        self.totalExpenses(self.recordedExpenses)
    };

But why is Command Query Separation a good thing?

Allowing a modeler/programmer to explicitly state whether an operation has side effects lets a compiler or runtime take advantage of the guaranteed lack of side effects to do things such as reordering invocations, caching results, or safely reissuing an invocation in case of failure – all of which can improve performance and reliability.
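For example (a generic sketch, unrelated to the TextUML compiler itself), a runtime that knows an operation is a query can transparently cache its results, since calling it again is guaranteed to change nothing:

```javascript
// Wrap a side-effect-free function so repeated calls with the same
// argument reuse the cached result -- safe only because queries are
// guaranteed not to change observable state.
function memoizeQuery(queryFn) {
    var cache = {};
    return function(arg) {
        var key = JSON.stringify(arg);
        if (!(key in cache)) {
            cache[key] = queryFn(arg);
        }
        return cache[key];
    };
}

var calls = 0;
var totalExpenses = memoizeQuery(function(expenses) {
    calls++;
    return expenses.reduce(function(sum, e) { return sum + e.amount; }, 0);
});

var expenses = [{amount: 10}, {amount: 32}];
console.log(totalExpenses(expenses)); // → 42
console.log(totalExpenses(expenses)); // → 42, served from the cache (calls === 1)
```

The same guarantee is what makes it safe for the TextUML compiler to restrict property derivations to query operations only.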

On automatically generating fully functional mobile user interfaces

An upcoming feature in Cloudfier is the automatic generation of fully functional user interfaces that work well on both desktop:
 
Screenshot from 2014-01-29 13:57:47
 
and mobile browsers:

Screenshot_2014-01-29-13-45-01 Screenshot_2014-01-29-13-44-53
 
This is just a first stab, but it is already available to any Cloudfier app (like this one – try logging in as user: test@abstratt.com, password: Test1234). Right now the mobile UI is read-only, and does not yet expose actions and relationships as the desktop-oriented web UI does. Watch this space for new developments on that.

The case against generated UIs

Cloudfier has always had support for automatic UI generation for desktop browsers (RIA). However, the generated UI had always been intended as a temporary artifact, to be used only when gathering initial feedback from users and while a handcrafted UI (that accesses the back-end functionality via the automatically generated REST API) is being developed (or in the long term, as a power-user UI). The reason is that automatically generated user-interfaces tend to suck, because they don’t recognize that not all entities/actions/properties have the same importance, and that their importance varies between user roles.

Don’t get me wrong, we strongly believe in the model-driven approach to build fully functional applications from a high-level description of the solution (executable domain models). While we think that is the most sane way of building an application’s database, business and API layers (and that those make up a major portion of the application functionality and development costs), we recognize user interfaces must follow constraints that are not properly represented in a domain model of an application: not all use cases have the same weight, and there is often benefit in adopting metaphors that closely mimic the real world (for example, an audio player application should mimic standard controls from physical audio players).

If model-driven development is to be used for generating user interfaces, the most appropriate approach for generating the implementation of such interfaces (and the interfaces only) would be to craft UI-oriented models using a UI modeling language, such as IFML (though I have never tried it). But even if you don’t use a UI-oriented modeling tool, and you build the UI (and the UI only) using traditional construction tools (these days that would be Javascript and HTML/CSS) that connect to a back-end fully generated from executable domain models (like Cloudfier supports), you are still much, much better off than building and maintaining the whole thing the traditional way.

Enter mobile UIs

That being said, UIs on mobile devices are usually much simpler than corresponding desktop-based UIs because of the interaction, navigation and dimension constraints imposed by mobile devices, resulting in a UI that shows one application ‘screen’ at a time, with hierarchical navigation. So here is a hypothesis:

Hypothesis: Mobile UIs for line-of-business applications are inherently so much simpler than the corresponding desktop-based UIs, that it is conceivable that generated UIs for mobile devices may provide usability that is similar to manually crafted UIs for said devices.

What do you think? Do you agree that is a quest worth pursuing (and with some likelihood of being proven right)? Or is the answer somehow obvious to you already? Regardless, if you are interested or experienced in user experience and/or model-driven development, please chime in.

Meanwhile, we are setting off to test that hypothesis by building full support for automatically generated mobile UIs for Cloudfier applications. Future posts here will show the progress made as new features (such as actions, relationships and editing) are implemented.

How Cloudfier uses Orion – shell features

Following last week’s post on editor features, today I am going to cover how Cloudfier plugs into Orion’s Shell page to contribute shell commands.

The cloudfier command prefix

All Cloudfier commands must be prefixed with ‘cloudfier’.

By just typing ‘cloudfier ‘ and hitting enter, you are given a list of all Cloudfier-specific commands.

cloudfier-commands

This is how the command prefix is contributed:


provider.registerServiceProvider("orion.shell.command", {}, {   
    name: "cloudfier",
    description: "Cloudfier commands"
});

which is a command contribution without a behavior. All the subcommands you see being offered actually include the prefix in their contributions.

Typical Cloudfier command

The typical Cloudfier command takes a workspace location (a file-type parameter), performs a remote operation and returns a message to the user explaining the outcome of the command (return type is String), and looks somewhat like this:

provider.registerServiceProvider("orion.shell.command", { callback: shellAppInfo }, {   
	name: "cloudfier info",
	description: "Shows information about a Cloudfier application and database",
	parameters: [{
	    name: "application",
	    type: "file",
	    description: "Application to obtain information for"
	}],
	returnType: "string"
});

The behavior of the command is specified by the callback function. In this specific case, the callback performs a couple of HTTP requests against the server, so it returns a dojo.Deferred, which implements the Promise pattern contract used by Orion. Once the last server request is completed, it returns a string to be presented to the user with the outcome of the operation.
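A sketch of what such a callback can look like (using plain promises here for readability; the actual Cloudfier code uses dojo.xhrGet/dojo.xhrPost and dojo.Deferred, and the fetchAppInfo helper below is purely illustrative):

```javascript
// Hypothetical stand-in for the HTTP requests made against the server.
function fetchAppInfo(applicationPath) {
    return Promise.resolve({ name: "expenses", entityCount: 3 });
}

// Shell command callback: receives the parsed parameters and returns a
// promise (Orion's long-running operation contract) that resolves to
// the string shown to the user in the Shell page.
function shellAppInfo(args) {
    return fetchAppInfo(args.application).then(function(info) {
        return "Application: " + info.name + " (" + info.entityCount + " entities)";
    });
}
```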

info-command

Note that the output of a command needs to use Markdown-style notation to produce links; HTML output is not supported. Also, newlines are honored.

Commands that contribute content to the workspace

Commands that contribute content to the workspace use a “file” (single file) or “[file]” (multiple files) return type. Cloudfier has a few commands in this style:

An init-project command, which marks the current directory as a project directory:

provider.registerServiceProvider("orion.shell.command", { callback: shellCreateProject }, {   
    name: "cloudfier init-project",
    description: "Initializes the current directory as a Cloudfier project",
    returnType: "file"
});

An add-entity command which adds a new entity definition to the current directory:

provider.registerServiceProvider("orion.shell.command", { callback: shellCreateEntity }, {   
    name: "cloudfier add-entity",
    description: "Adds a new entity with the given name to the current directory",
    parameters: [{
        name: "namespace",
        type: {name: "string"},
        description: "Name of the namespace (package) for the entity (class)"
    },
    {
        name: "entity",
        type: {name: "string"},
        description: "Name of the entity (class) to create"
    }],
    returnType: "file"
});

And finally a db-snapshot command which grabs a snapshot of the current application database state and feeds it into a data.json file in the current application directory.

provider.registerServiceProvider("orion.shell.command", { callback: shellDBSnapshot }, {   
    name: "cloudfier db-snapshot",
    description: "Fetches a snapshot of the current application's database and stores it in as a data.json file in the current directory",
    returnType: "file"
});

That snapshot can be further edited and later pushed into the application database.

Note that for all file-generating commands, if files already exist (mdd.properties, <entity-name>.tuml, and data.json, respectively), they will be silently overwritten (bug 421349).

Readers beware

This ends our tour of how Cloudfier uses Orion extension points. Keep in mind this post is not documentation:
see this wiki page for the most up-to-date documentation on the orion.shell.command extension point, and this blog post by the Orion team for some interesting shell command examples.

How Cloudfier uses Orion – editor features

Cloudfier now runs on Orion 4.0 RC2. It took some learning and patience and a few false starts (I tried the same in the Orion 2.0 and 3.0 cycles), but I finally managed to port the Cloudfier Orion plug-in away from version 1.0 RC2 (shipped one year ago) to 4.0 RC2. Hopefully when 4.0 final is released (any time now?), it should be a no-brainer to integrate with it. Only then will I look into hacking/branding it a bit so it doesn’t look identical to a vanilla Orion instance.

But how does Cloudfier extend the Orion base feature set? This post will cover the editor-based features.

Content type

Cloudfier editor-based features are applicable for TextUML files only. This content type definition provides the reference for all features to be configured against.

    provider.registerServiceProvider("orion.core.contenttype", {}, {
        contentTypes: [{  id: "text/uml",
                 name: "TextUML",
                 extension: ["tuml"],
                 extends: "text/plain"
        }]
    });

Outliner

outliner
The outliner relies on the server to parse and generate an outline tree for the contents in the editor.

    var computeOutline = function(editorContext, options) {
        var result = editorContext.getText().then(function(text) {
            return dojo.xhrPost({
	             postData: text,
	             handleAs: 'json',
	             url: "/services/analyzer/?defaultExtension=tuml",
	             load: function(result) {
	                 return result;
	             }
	        });
        });
        return result;
    };


    provider.registerServiceProvider("orion.edit.outliner", { computeOutline: computeOutline }, { contentType: ["text/uml"], id: "com.abstratt.textuml.outliner", name: "TextUML outliner" });

Note the outliner API changed in 4.0: the editor buffer contents are now available via a deferred instead of directly. Also, note that in order to use this API your plugin needs to load Deferred.js (see this orion-dev thread), as it implicitly turns your service into a long-running operation.

Source validation

validator

Validation is also server-side functionality, and the server already returns a JSON tree in the format expected by the orion.edit.validator extension point.

    var checkSyntax = function(title, contents) {
        return dojo.xhrGet({
             handleAs: 'json',
             url: "/services/builder" + title,
             load: function(result) {
                 return result;
             }
        });
    };

    provider.registerServiceProvider("orion.edit.validator", { checkSyntax: checkSyntax }, { contentType: ["text/uml", "application/vnd-json-data"] });

Note the validation service uses a GET method and only sends the file path, not the contents. The reason is that the server reaches into the project contents stored on the server instead of the client contents (in order to perform multi-file validation).

Syntax highlighting

highlighter

    /* Registers a highlighter service. */    
    provider.registerServiceProvider("orion.edit.highlighter",
      {
        // "grammar" provider is purely declarative. No service methods.
      }, {
        type : "grammar",
        contentType: ["text/uml"],
        grammar: {
          patterns: [
			  {  
			     end: '"',
			     begin: '"',
			     name: 'string.quoted.double.textuml',
			  },
			  {  begin: "\\(\\*", 
			     end: "\\*\\)",
			     name: "comment.model.textuml"
			  },
			  {  
			     begin: "/\\*", 
			     end: "\\*/",
			     name: "comment.ignored.textuml"
			  },
			  {  
			     name: 'keyword.control.untitled',
			     match: '\\b(abstract|access|aggregation|alias|and|any|apply|association|as|attribute|begin|broadcast|by|class|component|composition|constant|datatype|dependency|derived|destroy|do|else|elseif|end|entry|enumeration|exit|extends|external|function|id|if|implements|interface|in|initial|inout|invariant|is|link|model|navigable|new|nonunique|not|on|operation|or|ordered|out|package|port|postcondition|precondition|private|primitive|profile|property|protected|provided|public|raise|raises|readonly|reception|reference|required|return|role|self|send|signal|specializes|state|statemachine|static|stereotype|subsets|terminate|to|transition|type|unique|unlink|unordered|var|when)\\b'
			  },
              {
                "match": "([a-zA-Z_][a-zA-Z0-9_]*)",
                "name": "variable.other.textuml"
              },                  
              {
	            "match": "<|>|<=|>=|=|==|\\*|/|-|\\+",
	            "name": "keyword.other.textuml"
              },
              {
	            "match": ";",
	            "name": "punctuation.textuml"
              }
            ]
        }
    });

Source formatting

The code formatter in Cloudfier is server-side, so the client-side code is quite simple:

    var autoFormat = function(selectedText, text, selection, resource) {
        return dojo.xhrPost({
             postData: text,
             handleAs: 'text',
             url: "/services/formatter/?fileName=" + resource,
             load: function(result) {
                 return { text: result, selection: null };
             }
        });
    }; 

    provider.registerServiceProvider("orion.edit.command", {
        run : autoFormat
    }, {
        name : "Format (^M)",
        key : [ "m", true ],
        contentType: ["text/uml"]
    });

Content assist

contentAssist
Content assist support is quite limited: basically a few shortcuts for creating new source code elements, useful for users not familiar with TextUML, the notation used in Cloudfier.

    var computeProposals = function(prefix, buffer, selection) {
        return [
            {
                proposal: "package package_name;\n\n/* add classes here */\n\nend.",
                description: 'New package' 
            },
            {
                proposal: "class class_name\n/* add attributes and operations here */\nend;",
                description: 'New class' 
            },
            { 
                proposal: "attribute attribute_name : String;",
                description: 'New attribute' 
            },
            { 
                proposal: "operation operation_name(param1 : String, param2 : Integer) : Boolean;\nbegin\n    /* IMPLEMENT ME */\n    return false;\nend;",
                description: 'New operation' 
            },
            { 
                proposal: "\tattribute status2 : SM1;\n\toperation action1();\n\toperation action2();\n\toperation action3();\n\tstatemachine SM1\n\t\tinitial state State0\n\t\t\ttransition on call(action1) to State1;\n\t\tend;\n\t\tstate State1\n\t\t\ttransition on call(action1) to State1\n\t\t\ttransition on call(action2) to State2;\n\t\tend;\n\t\tstate State2\n\t\t\ttransition  on call(action1) to State1\n\t\t\ttransition on call(action3) to State3;\n\t\tend;\n\t\tterminate state State3;\n\tend;\n\t\tend;\n",
                description: 'New state machine' 
            }
        ];
    };

    provider.registerServiceProvider("orion.edit.contentAssist",
	    {
	        computeProposals: computeProposals
	    },
	    {
	        name: "TextUML content assist",
	        contentType: ["text/uml"]
	    }
	);

Coming next

The next post will cover the Shell-based features in Cloudfier.

Presenting Cloudfier at VIJUG

I will be presenting Cloudfier at the next VIJUG meeting on May 30th.

I believe you will find the subject at least a bit intriguing (oh, and there will be some beer). It is open to anyone interested. If you intend to come, it will help if you can RSVP. There is an EventBrite page, but a comment here or on the VIJUG blog or Google+ or Facebook pages should do as well.

The goal is to give an overview of Cloudfier and gather feedback from fellow Victoria developers, in preparation for the first release later this year. VIJUG meetings have a varied audience so that should mean some interesting feedback. Also, if you are a fan of my speaking skills (I kid!), it is possibly my last talk in Victoria, given we’ll be packing and moving back to Brazil in just a few months. Honestly though, it would be great to see again some of the many great tech folks I met in Victoria in the last 6 years before heading to the other side of the Equator.

UPDATE: the presentation was a great opportunity for gathering feedback on Cloudfier. Got some very good questions and suggestions from Paul, John, Kelly, Shea and others. Thanks to all who managed to attend. These were the slides presented:

cloudfier.com and abstratt.com servers moved to new hosting

The cloudfier.com and abstratt.com servers (including this blog) moved to new hosting. Please let me know if you find broken links or any issues. If you do, please try pinging the server you found it at. You should see something like this:

ping abstratt.com
PING abstratt.com (54.244.115.27) 56(84) bytes of data.
64 bytes from ec2-54-244-115-27.us-west-2.compute.amazonaws.com (54.244.115.27): icmp_req=1 ttl=51 time=0.860 ms
64 bytes from ec2-54-244-115-27.us-west-2.compute.amazonaws.com (54.244.115.27): icmp_req=2 ttl=51 time=0.992 ms
64 bytes from ec2-54-244-115-27.us-west-2.compute.amazonaws.com (54.244.115.27): icmp_req=3 ttl=51 time=0.848 ms
^C
--- abstratt.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.848/0.900/0.992/0.065 ms

If not, it is probably a temporary DNS propagation issue.

Authenticating users in Cloudfier applications

Up until very recently, Cloudfier applications had no way to authenticate users – there was a login dialog, but all it did was allow users to assume an arbitrary identity by entering the name of an existing user.

That is no longer the case. The latest release build (#29) addresses that by implementing a full-blown authentication mechanism. Now, when you try to access a Cloudfier app, like this one, you will be greeted by this login dialog:

login

which allows you to sign up, request a password reset and sign in either with proper credentials or as a guest.

For more details about how authentication works in Cloudfier applications, check the new authentication documentation.

BTW, kudos to Stormpath for providing such a great API for managing and authenticating user credentials. Highly recommended.

What’s next?

Glad you asked. Next is authentication’s bigger friend: authorization. Right now any user can do anything to any data, and of course that is not reasonable even for the simplest applications. Stay tuned for more on that.

On MDA and reverse engineering

This appeared somewhere in a discussion in the MDA forum on LinkedIn:

If legacy code exists, MDA can be used to capture the models from the existing code. Extracting models from legacy code is difficult, but it is much better than having someone go through the code and create a model in their head

I would pose that reverse engineering has nothing to do with MDA. MDA is really about transformations in one direction only: from more abstract to more specific models (while reverse engineering goes the opposite way).

I am not saying that one cannot use models obtained via reverse engineering in an MDA context, but that is out of the scope of MDA. I’d go as far as saying that, in the general case, reverse engineering is not a good approach for producing platform-independent models. Reasons:

  1. good models require care and intent in representing the business rules and constraints in the domain being addressed, and that is really rare to see in handwritten code (Exhibit A: the incipient popularity of Domain Driven Design). If something is not there to begin with, a tool cannot extract it.
  2. manual implementation naturally results in high variability in how the same domain and technical concerns are addressed (independently and in coordination).

Those two things make it really hard (impossible, I’d say, other than in exceptional cases) for a reverse engineering tool (which is based on some sort of pattern matching) to identify code elements and map them back to platform-independent models that are not only accurate and complete, but also well designed.

Reverse engineering can be useful in getting an initial approximation of the PIMs (say, covering structure only, but not dynamics or behavior), but that will require significant manual work to become properly designed models.

New Cloudfier release supports many-to-many associations

One clear gap in Cloudfier used to be the lack of support for many-to-many associations. That has now been implemented all the way from the back-end to the UI.

For instance, in the ShipIt! sample issue tracking application, a user can watch multiple issues, and an issue can be watched by multiple users:

class Issue
end;

class User
end;

association WatchedIssues
    navigable role watchers : User[*];
    navigable role issuesWatched : Issue[*];
end;

UI

…which in the UI means there is now a way to link issues as watched issues for a user:

(and vice-versa from Issue, as the relationship is navigable both ways). Once the user triggers that action, they can pick multiple target objects (in this case, issues) to pair the source object (in this case, a User) up with, by clicking the “connector” button on the target entity instance’s toolbar (the second from left to right):

which once triggered shows a notice confirming the objects have now been linked together.

I will admit this UI may take some getting used to. It is just a first cut, and I am interested in suggestions from those of you less UX-challenged than me.

REST API

Accordingly, the application’s REST API allows querying related objects using a URI in the form:

…/services/api/<application>/instances/<entity>/<id>/relationships/<relationship-name>/

for instance:

…/services/api/demo-cloudfier-examples-shipit/instances/shipit.Issue/10/relationships/watchers/

produces a list of all users watching the base issue:

[
  {
    uri: ".../instances/shipit.User/2",
    shorthand: "rperez",
    type: ".../entities/shipit.User",
    typeName: "User",
    values: {
      ...
    },
    links: {
      ...
    },
    actions: {
      ...
    },
    relatedUri: ".../instances/shipit.Issue/10/relationships/watchers/2"
  },
  {
    uri: ".../instances/shipit.User/8",
    shorthand: "gtorres",
    type: ".../entities/shipit.User",
    typeName: "User",
    values: {
      ...
    },
    links: {
      ...
    },
    actions: {
      ...
    },
    relatedUri: ".../instances/shipit.Issue/10/relationships/watchers/8"
  }
]

and to establish links, you can POST a similar representation to the same URI, but you only really need to include the ‘uri’ attribute; everything else is ignored.
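For instance, adding user 2 as a watcher of issue 10 could look like this (paths abbreviated as above; only the uri attribute in the body matters):

```
POST …/services/api/demo-cloudfier-examples-shipit/instances/shipit.Issue/10/relationships/watchers/

{
  "uri": ".../instances/shipit.User/2"
}
```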

New tour video

There is also now a new tour video, this time with audio and much better image quality, if you gave up on watching the original one, please give this one a try!

How can modeling be harder than programming?

One argument often posed against model-driven development is that not all developers have the skills required for modeling. This recent thread in the UML Forum discussion group includes a very interesting debate on that and started with this statement by Dan George:

Don’t take his comments about the orders of magnitude reduction in code size to mean orders of magnitude reduction in required skill. I think this is the reason model-driven development is not mainstream. The stream is full of programmers that could never even develop the skills necessary to use MDD. Humbly, I know that I’m still a wannabe.

which was contested by H.S. Lahman

I have to disagree (somewhat) with this assessment. Yes, there is a substantial learning curve and OOA models need to be developed much more rigorously than they are in most shops today. Also, one could argue that the learning curve is essentially the same learning curve needed to learn to do OOA/D properly.

and later by Thomas Mercer-Hursh:

There is something confusing about the idea of good modeling being hard. After all, all one is doing is describing how the system is supposed to work without having to worry about the implementation details. If one can’t do that, then how is one supposed to manually create a correct, working system?

I sympathize with Lahman’s and Thomas’ points (and share some of their puzzlement), but I do agree with Dan’s initial point: modeling can be harder than programming.

Separation of concerns? Not in the job description

The fact is that one can deliver software that was apparently appropriately built (from a QA/product owner/user point-of-view) and yet fail to fully understand the constraints and rules of the business domain the software is meant to serve.

Also, even if a developer does understand the business requirements at the time the solution is originally implemented, it is unfortunately very common that they will fail to encode the solution in a way that clearly expresses its intent and makes it easy for other developers (or themselves) to later correlate the code to business requirements (as proposed by Domain-Driven Design), leading to software that is very hard to maintain (because it is hard to understand, or hard to change without breaking things). Model-driven development is a great approach for proper separation of concerns when building software (the greatest, if you ask me). However, as sad as that is, proper separation of concerns is not a must-have trait for delivering “appropriate” software (from a narrow, external, immediate standpoint). Ergo, one can build software without modeling, even implicitly.

I don’t think those things happen because developers are sociopaths. I think properly understanding and representing the concerns of a business domain when building software is a very desirable skill (I would say critical), but realistically not all that common in software developers. But how can hordes of arguably proficient programmers get away without such skill?

Delivering software the traditional (programming-centric) way often involves carefully patching together a mess of code, configuration and some voodoo to address a complex set of functional and non-functional requirements that works at the time of observation (a house of cards is an obvious image here). Building software that way makes it too easy to be overwhelmed by all the minutiae imposed by each technology and the complexity of making them work together, and to lose track of the high-level goals one is trying to achieve – let alone consciously represent and communicate them.

Conclusion

So even though I fully agree with the sentiment that proper programming requires a good deal of modeling skill, I do think it is indeed possible to deliver apparently working software (from an external point of view) without consciously doing any proper modeling. If you stick to the externally-facing aspects of software development, all that is valued is time to deliver, correctness, performance, and use of some set of technologies. Unfortunately, that is all that is required for most development positions. Ease of maintenance via proper separation of concerns is nowhere in that list. And model-driven development is essentially an approach for separation of concerns on steroids.

What do you think?

Checking the current state of a UML state machine

In Cloudfier, we use UML as the core language for building business applications. UML is usually well-equipped for general purpose business domain-centric application modeling, but that doesn’t mean it always does everything needed out of the box.

Case at hand: suppose one is developing an expense reporting application and has modeled an expense’s status as a state machine (in TextUML):

class Expense
    /* ... */
    attribute status : Status;
    operation review();
    operation approve();
    operation reject();
    operation submit();

    statemachine Status
        initial state Draft
            transition on call(submit) to Submitted;
        state Submitted
            transition on call(approve) to Approved
            transition on call(reject) to Rejected
            transition on call(review) to Draft;
        terminate state Approved;
        terminate state Rejected;        
    end;
end;

How do you model the following in UML?

Show me all expenses that are waiting for approval.

Turns out there is no support in UML for reasoning based on the current state of a state machine.

Creative modeling

So, what do you do when UML does not have a language element that you need? You extend it, in our case, using a stereotype applicable to the LiteralNull metaclass (in TextUML):

stereotype VertexLiteral extends LiteralNull
    property vertex : Vertex;
end;

So a vertex literal is a value specification (more specifically, a variant of LiteralNull) that can refer to a Vertex, the metaclass that represents states (including pseudo-states) in a state machine.

Notation, notation

In terms of notation, I chose to make State/Vertex literals look like enumeration literals: Status#Approved or Status#Draft. So, back to the original question, this is how you could model a query that returns all expenses that are in the Submitted state:

    static operation findAllSubmitted() : Expense[*];
    begin 
        return Expense extent.select ((e : Expense) : Boolean {
            return e.status == Status#Submitted
        });
    end;

If you are thinking to yourself: I didn’t know UML had queries or closures!?, well, it usually doesn’t. See the posts on SQL queries in UML and Closures in UML for some background on this.

Note also that if you wanted to refer to the symbol Status from a class other than the one enclosing it, you would need to qualify it (i.e. Expense::Status#Submitted).
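For illustration, here is a hypothetical sketch of how a generator might render this state-based query in plain Java: the state machine becomes an enum-typed field, and the class extent becomes a static collection. None of this is Cloudfier's actual generated code; all names here are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical rendering of the Expense model: the Status state
// machine maps to an enum, and the class extent to a static list.
public class Expense {
    enum Status { DRAFT, SUBMITTED, APPROVED, REJECTED }

    private static final List<Expense> EXTENT = new ArrayList<>();

    private Status status = Status.DRAFT;

    public Expense() {
        EXTENT.add(this); // register in the extent on creation
    }

    public void submit() {
        status = Status.SUBMITTED; // the 'on call(submit)' transition
    }

    public Status getStatus() {
        return status;
    }

    // Corresponds to findAllSubmitted(): filter the extent by state
    public static List<Expense> findAllSubmitted() {
        return EXTENT.stream()
                .filter(e -> e.getStatus() == Status.SUBMITTED)
                .collect(Collectors.toList());
    }
}
```

The point being that once the current state is queryable like any other attribute, the select expression in the model maps to an ordinary filter.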

Show me more!

You can run the Expenses application showing state machines and state-based queries in Cloudfier right now (login is “guest” or any of the employee names you will see later).

The entire Expenses sample application (currently 150 lines of generously spaced TextUML) is available on BitBucket. You can also easily check it out into Cloudfier so you can run your own copy of the application on the web (there is nothing to install). Give it a try!

What do you think?

Your feedback (questions, support or criticism) to any of the ideas presented in this post is very welcome.

UPDATE: I started a thread on the subject on the UML Forum group, and turns out you can do this kind of reasoning in OCL, but indeed, not in UML itself. Well, now you can.

Yet another Orion-based site: cloudfier.com

Okay, we are live.

I just put the last finishing touches on the developer site at cloudfier.com.

The developer site, develop.cloudfier.com, is powered by Orion. Cloudfier’s instance of Orion has several features to support modeling with TextUML, such as:

  • Syntax highlighting
  • Outline
  • Validation
  • Auto-formatting
  • Templates

and I have a picture to prove it:

but wouldn’t you rather see for yourself? If you are shy because you don’t know how to model in TextUML, just make sure you create a file with a “.tuml” extension and use the content assist templates to get a model going. Or if you are feeling lazy, just clone this Git repository: https://bitbucket.org/abstratt/cloudfier-examples.git

But what is Cloudfier, and who is it for, you may ask. I won’t tell you here, though. Please go to cloudfier.com and give it a quick read. If you don’t get it, please let me know in the comments – a main goal right now is to ensure the main page gets the message across.

TextUML Toolkit finally gets continuous integration thanks to Tycho and CloudBees

TextUML Toolkit 1.8 is now available! You can install it as usual using http://abstratt.com/update as the update site. There is also a snapshot update site, which will work from within Eclipse only:

jar:https://repository-textuml.forge.cloudbees.com/snapshot/com/abstratt/mdd/com.abstratt.mdd.oss.repository/1.0/com.abstratt.mdd.oss.repository-1.0.zip!/

This is a transition release in which the TextUML Toolkit moves to continuous-integration builds via Eclipse Tycho, as opposed to developer-initiated builds from the IDE. This benefits contributors (the development setup is much simpler), but primarily users – since it is now so much easier to obtain the source code and generate a release, users can expect much more frequent releases, and hopefully more goodies from occasional contributors.

Speaking of frequent releases, if you don’t mind living on the bleeding edge, I invite you to install the TextUML Toolkit from the snapshot update site (that is what you get if you install the Toolkit using the Eclipse Marketplace Client). That way, features and fixes will become available to you a day after they have been committed.

This release contains a number of new features and bug fixes added since 1.7 was released a year ago, but we are not documenting those yet; you will see them properly promoted in a future release. Our focus this time was to get our release engineering act together, and I think we succeeded, thanks to Tycho.

Finally, we would like to thank CloudBees for their generous free plan that allows us to set up Jenkins continuous builds for the TextUML Toolkit at no cost. On that note, we are applying for a FOSS plan so we can have our build results available for everyone to see, and as a bonus, enjoy a slightly higher monthly build quota. As you can see, we are already living up to our side of the deal by spreading the word about their cool DEV@cloud product. :)

UPDATE: CloudBees is now providing the TextUML Toolkit project with a free DEV@cloud instance.

Adding State Machines to TextUML and AlphaSimple [take 1]

I decided to go ahead and finally implement support for state machines in TextUML and AlphaSimple.

This is an example of what a state machine will look like (take 1), based on fig. 15.33 in the UML specification 2.4:


(...)
statemachine Phone

  initial state
    entry { self.startDialTone() }
    exit { self.stopDialTone() }
    transition on digit to PartialDial;

  state PartialDial
    transition on digit to PartialDial
    transition when { self.numberIsValid() } to Completed;

  final state Completed;

end;
(...)

A state machine may declare multiple states. Each state declares a number of transitions to other states. Each transition may be triggered by many events (or none), each denoted by the keyword ‘on’, and may optionally present a guard constraint (using the keyword ‘when’). The initial state is the only one that may remain unnamed. The final state cannot have outgoing transitions, but just like any other state, it may declare entry/exit behaviors.
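To make those semantics concrete, here is a minimal hand-rolled Java sketch (not TextUML output) of the Phone example: the event is a plain method call, entry/exit behaviors run when the initial state is entered and left, and the guarded transition to Completed fires only once the number is valid. The digit count and the 7-digit validity rule are invented stand-ins for numberIsValid().

```java
// Minimal hand-rolled sketch of the Phone state machine semantics
// described above; the digit-counting guard is an assumption.
public class Phone {
    enum State { DIALING, PARTIAL_DIAL, COMPLETED }

    private State state = State.DIALING;
    private int digits = 0;

    public Phone() {
        startDialTone(); // entry behavior of the initial state
    }

    private void startDialTone() { /* side effect elided */ }
    private void stopDialTone()  { /* side effect elided */ }

    private boolean numberIsValid() {
        return digits >= 7; // assumed: a valid number has 7 digits
    }

    // The 'digit' event drives transitions out of the current state
    public void digit() {
        switch (state) {
            case DIALING:
                stopDialTone(); // exit behavior runs on leaving the state
                digits++;
                state = State.PARTIAL_DIAL;
                break;
            case PARTIAL_DIAL:
                digits++;
                if (numberIsValid()) { // the 'when {...}' guard
                    state = State.COMPLETED;
                }
                break;
            default:
                break; // the final state has no outgoing transitions
        }
    }

    public State getState() {
        return state;
    }
}
```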

What do you think? I did try to find existing textual notations for UML, like this and this, but none of those seem to be documented or look like covering all the UML features I want to support. Any other pointers?

Feedback wanted: invariant constraints in AlphaSimple/TextUML

I am working on support for invariant constraints in AlphaSimple/TextUML.

Some of the basic support has already made it into the live site. For instance, the AlphaSimple project has a rule that says:

“A user may not have more than 3 private projects.”

This in TextUML looks like this:


class User 

    attribute projects : Project[*] 
        invariant Maximum\ 3\ private\ projects { 
            return self.privateProjects.size() <= 3
        };
        
    derived attribute privateProjects : Project[*] := () : Project[*] {
        return self.projects.select((p : Project) : Boolean {
            return not p.shared
        });
    };

end;

(Note the constraint relies on a derived property for more easily expressing the concept of private projects, and that backslashes are used to escape characters that otherwise would not be allowed in identifiers, such as whitespaces.)
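As an illustration of what enforcing such an invariant could look like in plain Java (an assumed mapping, not AlphaSimple's actual output): the derived property becomes a computed getter, and mutators validate the invariant before committing a change.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the invariant above: 'privateProjects' is
// derived on demand, and addProject rejects changes that would
// violate the 3-private-projects rule.
public class User {
    static class Project {
        boolean shared;
        Project(boolean shared) { this.shared = shared; }
    }

    private final List<Project> projects = new ArrayList<>();

    // derived property: the non-shared subset of projects
    public List<Project> getPrivateProjects() {
        List<Project> result = new ArrayList<>();
        for (Project p : projects) {
            if (!p.shared) {
                result.add(p);
            }
        }
        return result;
    }

    public void addProject(Project p) {
        projects.add(p);
        if (getPrivateProjects().size() > 3) { // invariant check
            projects.remove(p);                // roll back the change
            throw new IllegalStateException("Maximum 3 private projects");
        }
    }
}
```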

What do you think? Does it make sense? I know the syntax for higher-order functions could benefit from some sugar, but that can easily be fixed later. I am much more interested in feedback on the idea of modeling with executable constraints than on syntax.

Wading in unknown waters

I am in the process of modeling a real world application in AlphaSimple and for most cases, the level of support for constraints that we are building seems to be sufficient and straightforward to apply.

I have, however, found one kind of constraint that is hard to model (remember, AlphaSimple is a tool for modeling business domains, not a programming language): in general terms, you cannot modify or delete an object if the object (or a related object) is in some state. For example:

"One cannot delete a project's files if the project is currently shared".

Can you think of a feature in UML that could be used to address a rule like that? I can't think of anything obvious (ChangeEvent looks relevant at first glance, but there is no support for events in TextUML yet).

Any ideas are really appreciated.

MDD meets TDD (part II): Code Generation

Here at Abstratt we are big believers in model-driven development and automated testing. I wrote here a couple of months ago about how one could represent requirements as test cases for executable models, or test-driven modeling. But another very interesting interaction between the model-driven and test-driven approaches is test-driven code generation.

You may have seen our plan for testing code generation before. We are glad to report that that plan has materialized and code generation tests are now supported in AlphaSimple. Follow the steps below for a quick tour over this cool new feature!

Create a project in AlphaSimple

First, you will need a model to generate code from. Create a project in AlphaSimple and add a simple model:


package person;

enumeration Gender 
  Male, Female
end; 

class Person
    attribute name : String; 
    attribute gender : Gender; 
end;

end.

Enable code generation and automated testing

Create a mdd.properties file in your project to set it up for code generation and automated testing:


# declares the code generation engine
mdd.target.engine=stringtemplate

# imports existing POJO generation template projects
mdd.importedProjects=http://cloudfier.com/alphasimple/mdd/publisher/rafael-800/,http://cloudfier.com/alphasimple/mdd/publisher/rafael-548/

# declares a code generation test suite in the project
mdd.target.my_tests.template=my_tests.stg
mdd.target.my_tests.testing=true

# enables automated tests (model and templates)
mdd.enableTests=true

Write a code generation test suite

A code generation test suite has the form of a template group file (extension .stg) configured as a test template (already done in the mdd.properties above).

Create a template group file named my_tests.stg (because that is the name we declared in mdd.properties), with the following contents:


group my_tests : pojo_struct;

actual_pojo_enumeration(element, elementName = "person::Gender") ::= "<element:pojoEnumeration()>"

expected_pojo_enumeration() ::= <<
enum Gender {
    Male, Female
}
>>

A code generation test case is defined as a pair of templates: one that produces the expected contents, and another that produces the actual contents. Their names must be expected_<name> and actual_<name>. The pair of templates in the test suite above forms a test case named “pojo_enumeration”, which unsurprisingly exercises generation of enumerations in Java. pojo_enumeration is a pre-existing template defined in the “Codegen – POJO templates” project, which is why we have a couple of projects imported in the mdd.properties file, and why we declare our test suite as an extension of the pojo_struct template group. In the typical scenario, though, you would have the templates being tested and the template tests in the same project.
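The underlying mechanism can be sketched in a few lines of plain Java: render the expected and the actual templates, then compare the resulting strings verbatim, which is why even whitespace differences fail a test. Rendering is faked with string constants here, and the method names merely mirror the template pair above; the real suite renders StringTemplate templates.

```java
// Conceptual sketch of the expected/actual pairing used by codegen
// tests: a harness renders both templates and requires an exact,
// character-by-character match between the two strings.
public class CodegenTestSketch {
    // stands in for actual_pojo_enumeration (rendered by the generator)
    static String actualPojoEnumeration() {
        return "public enum Gender {\n    Male, Female\n}";
    }

    // stands in for expected_pojo_enumeration (written by the tester)
    static String expectedPojoEnumeration() {
        return "public enum Gender {\n    Male, Female\n}";
    }

    // a test case passes only on a verbatim match, whitespace included
    static boolean passes(String expected, String actual) {
        return expected.equals(actual);
    }
}
```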

Fix the test failures

If you followed the instructions up to here, you should be seeing a build error like this:



Line	File		Description
3	my_tests.stg	Test "pojo_enumeration" failed: [-public -]enum Gender {\n    Male, Female\n}

which reports that the generated code is not exactly what was expected – the template generated the enumeration with an explicit public modifier, and your test case did not expect that. It turns out that in this case the generated code is correct, and the test case is what is actually incorrect. Fix that by ensuring the expected contents also have the public modifier (note that spaces, newlines and tabs are significant and can cause a test to fail). Save and notice how the build failure goes away.

That is it!

That simple. We built this feature because otherwise crafting templates that can generate code from executable models is really hard to get right. We live by it, and hope you like it too. That is how we got the spanking new version of the POJO target platform to work (see post describing it and the actual project) – we actually wrote the test cases first before writing the templates, and wrote new test cases whenever we found a bug – in the true spirit of test-driven code generation.

Can you tell this is 100% generated code?

Can you tell this code was fully generated from a UML model?

This is all live in AlphaSimple – every time you hit those URLs the code is being regenerated on the fly. If you are curious, the UML model is available in full in the TextUML’s textual notation, as well as in the conventional graphical notation. For looking at the entire project, including the code generation templates, check out the corresponding AlphaSimple project.

Preconditions

Operation preconditions impose rules on the target object state or the invocation parameters. For instance, for making a deposit, the amount must be a positive value:


operation deposit(amount : Double);
precondition (amount) { return amount > 0 }
begin
    ...
end;

which in Java could materialize like this:


public void deposit(Double amount) {
    assert amount > 0;
    ...
}

Unrelated to preconditions, another case where assertions can be automatically generated is when a property is required (lowerBound > 0):


public void setNumber(String number) {
    assert number != null;
    ...
}

Imperative behavior

In order to achieve 100% code generation, models must specify not only structural aspects, but also behavior (i.e. they must be executable). For example, the massAdjust class operation in the model is defined like this:


static operation massAdjust(rate : Double);
begin
    Account extent.forEach((a : Account) { 
        a.deposit(a.balance*rate) 
    });
end;

which in Java results in code like this:


public static void massAdjust(Double rate) {
    for (Account a : Account.allInstances()) {
        a.deposit(a.getBalance() * rate);
    };
}

Derived properties

Another important need for full code generation is proper support for derived properties (a.k.a. calculated fields). For example, see the Account.inGoodStanding derived attribute below:


derived attribute inGoodStanding : Boolean := () : Boolean { 
    return self.balance >= 0 
};

which results in the following Java code:


public Boolean isInGoodStanding() {
    return this.getBalance() >= 0;
}

Set processing with higher-order functions

Any information management application will require a lot of manipulation of sets of objects. Such sets originate from class extents (akin to “#allInstances” for you Smalltalk heads) or association traversals. For that, TextUML supports the higher-order functions select (filter), collect (map) and reduce (fold), in addition to forEach already shown earlier. For example, the following method returns the best customers, or customers with account balances above a threshold:


static operation bestCustomers(threshold : Double) : Person[*];
begin
    return
        (Account extent
            .select((a:Account) : Boolean { return a.balance >= threshold })
            .collect((a:Account) : Person { return a->owner }) as Person);
end;        

which, even though Java does not yet support higher-order functions, results in the following code:


public static Set<Person> bestCustomers(Double threshold) {
    Set<Person> result = new HashSet<Person>();
    for (Account a : Account.allInstances()) {
        if (a.getBalance() >= threshold) {
            Person mapped = a.getOwner();
            result.add(mapped);
        }
    }
    return result;
}

which demonstrates the power of select and collect. For an example of reduce, look no further than the Person.totalWorth attribute:


derived attribute totalWorth : Double := () : Double {
    return (self<-PersonAccounts->accounts.reduce(
        (a : Account, partial : Double) : Double { return partial + a.balance }, 0
    ) as Double);
};  

which (hopefully unsurprisingly) maps to the following Java code:


public Double getTotalWorth() {
    Double partial;
    partial = 0;
    for (Account a : this.getAccounts()) {
        partial = partial + a.getBalance();
    }
    return partial;
}

Would you hire AlphaSimple?

Would you hire a developer if they wrote Java code like AlphaSimple produces? For one thing, you can’t complain about the guy not being consistent. :) Do you think the code AlphaSimple produces needs improvement? Where?

Want to try it yourself?

There are still some bugs in the code generation that we need to fix, but overall the “POJO” target platform is working quite well. If you would like to try it yourself, create an account in AlphaSimple and, to make things easier, clone a public project that has code generation enabled (like the “AlphaSimple” project).

11 Dogmas of Model-Driven Development

I prepared the following slides for my Eclipse DemoCamp presentation on AlphaSimple but ended up not having time to cover them. The goal was not to try to convert the audience, but to make them understand where we are coming from, and why AlphaSimple is the way it is.

And here they are again for the sake of searchability:

I – Enterprise Software is much harder than it should be, lack of separation of concerns is to blame.

II – Domain and architectural/implementation concerns are completely different beasts and should be addressed separately and differently.

III – What makes a language good for implementation makes it suboptimal for modeling, and vice-versa.

IV – Domain concerns can and should be fully addressed during modeling, implementation should be a trivial mapping.

V – A model that fully addresses domain concerns will expose gaps in requirements much earlier.

VI – A model that fully addresses domain concerns allows the solution to be validated much earlier.

VII – No modeling language is more understandable to end-users than a running application (or prototype).

VIII – A single architecture can potentially serve applications of completely unrelated domains.

IX – The same application can potentially be implemented according to many different architectures.

X – Implementation decisions are based on known guidelines applied consistently throughout the application, and beg for automation.

XI – The target platform should not dictate the development tools, and vice-versa.

I truly believe in those principles, and feel frustrated when I realize how far the software industry is from abiding by them.

So, what do you think? Do you agree these are important principles and values? Would you call B.S. on any of them? What are your principles and values that drive your vision of what software development should look like?

Interview on perspectives on MDD and UML

I had the honor of being interviewed by Todd Humphries, Software Engineer at Objektum Solutions, on my views on UML and model-driven development. Here is an excerpt of the interview:

Todd Humphries: Did you have a ‘Eureka!’ moment when modelling made sense for the first time and just became obvious or was there one particular time you can think of where your opinion changed?

Rafael Chaves: When I was first exposed to UML back in school it did feel cool to be able to think about systems at a higher level of abstraction, and be able to communicate your ideas before getting down to the code (we often would just model systems but never actually build them). The value of UML modeling for the purpose of communication was evident, but that was about it. I remember feeling a bit like I was cheating, as drawing diagrams gave me no confidence the plans I was making actually made a lot of sense.

After that, still early in my career, I had the opportunity of working in a team where we were using an in-house code generation tool (first, as many have done, using XML and XSLT, and later, using UML XMI and Velocity templates, also common choices). We would get reams of Java code, EJB configuration files and SQL DDL generated from the designer models, and it did feel like a very productive strategy for writing all that code. But the interesting bits (business logic) were still left to be written in Java (using the generation gap pattern). It was much better than writing all that EJB boilerplate code by hand, but it was still cumbersome and there was no true gain in the level of abstraction, as we would model thinking of the code that would be generated – no surprise, as there was no escaping the fact that we would rely on the Java compiler and JUnit tests to figure out whether the model had problems, and in order to write the actual business logic in Java, we had to be very familiar with the code that was generated. So even though I could see the practical utility of modeling by witnessing the productivity gains we obtained, there was a hackish undertone to it, and while it worked, it didn’t feel like solid engineering.

It was only later, when…

Visit The Technical Diaries, Objektum team’s blog, for the full interview. Do you agree with my views? Have your say (here or there).