New paper: Cloudfier – Automatic Architecture for Information Management Applications

What do you do when you have a conference paper rejected? That has just happened to me. I could work on improving it according to some of the feedback I got, and resubmit it to another conference, but this paper was written for a “Tools Track” of a software engineering conference, and I would have a hard time trying to fit it elsewhere (at least here in Brazil, not planning to travel abroad at this time).

So if you want to take a look at the full paper, the PDF is freely available (download). It is basically an introduction to Cloudfier, what it is meant for, and a tour over the modeling capabilities. Comments are very welcome. Below is the abstract:

Cloudfier: Automatic Architecture for Information Management Applications

Information management applications have always taken up a significant portion of the software market. The primary value in this kind of software is in how well it encodes the knowledge of the business domain and helps streamline and automate business activities. However, much of the effort in developing such applications is wasted dealing with technological aspects, which, in the grand scheme of things, are of little relevance.

Cloudfier is a model-driven platform for development and deployment of information management applications that allows developers to focus on understanding the business domain and building a conceptual solution, and later apply an architecture automatically to produce a running application. The benefit is that applications become easier to develop and maintain, and inherently future-proof.


Poll: what are the best language and frameworks for building business apps?

Imagine you were building the back-end for a brand new business application. You would need to address the domain information model (with entities, properties, associations, operations, queries, events, etc.), its persistence (on a relational database), and a REST API (to support integration with a UI and other clients). Assume the UI is somebody else’s job – it is going to be built separately using other languages/frameworks.

If it were completely up to you (not your client, boss, or co-workers), what would be your language (one) and frameworks (any required) of choice for developing such an application? Why?

These are literally the 3 questions asked in a poll I recently started. Please help by answering the poll and sharing this post (or upvoting it on the site you came from). Results will be published here.

The poll has been running for a few days, and while it is still early, there has already been quite a diverse set of responses.

What is this guy up to?

I figured someone would ask.

It is always fascinating to me to read research that shows what makes developers tick. But I have a specific motivation for finding out what language/framework characteristics are more attractive to developers: to figure out what would be a good target for generating code (MDD-style) from high-level models. The hypothesis is that the more popular (or desired) the target platform, the more interest a code generator for that platform will draw.


Upcoming: Kirra, a language-independent API for business applications

What do NakedObjects, Apache Isis, Restful Objects, OpenXava, JMatter, Tynamo, Roma, and Cloudfier have in common?

These frameworks and platforms allow developers to focus on expressing all they know about a business domain in the form of a rich domain model. And they all support or enable automatically generating interfaces based on the application’s domain model that make the entire functionality of the application accessible to end users, without requiring any effort on designing a user interface. They can also often auto-generate a functional (usually REST) API for non-human actors.

However, each of those frameworks/platforms implements automatic UI or API generation independently, against its own proprietary metamodel – for each UI and API technology supported. So Cloudfier supports a Qooxdoo client, Isis supports a Wicket viewer and a JQuery viewer, OpenXava seems to have a JQuery/DWR UI, and so on.

This is the motivation for Kirra: Kirra aims to decouple the interface renderers from the technologies used for creating domain-driven applications, promoting the proliferation of high-quality generic UI and API renderers that can be used across domain-driven development frameworks, or even if your application is not built with a domain-driven framework.

But what is Kirra?

Kirra is a minimalistic, language-independent API specification for exposing the functionality of a business application in a business- and technology-agnostic way.

Essentially, Kirra provides a simple model for exposing metadata and data for business applications, no matter how they were implemented, enabling generic clients that have full access to the functionality exposed by those applications.
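As a purely illustrative sketch (the actual Kirra schema had not been published at the time of writing, so the shape and names below are assumptions), a generic client could be driven by entity metadata like this:

```javascript
// Purely illustrative: a metadata shape a Kirra-style API might expose,
// plus a generic renderer that needs no knowledge of the business domain.
// The field names (properties, etc.) are assumptions, not the real schema.
const expenseEntity = {
    name: "Expense",
    properties: [
        { name: "description", type: "String" },
        { name: "amount", type: "Double" }
    ]
};

// A generic client walks the metadata to render any instance of any entity.
function renderInstance(entity, instance) {
    return entity.properties
        .map(function (p) { return p.name + ": " + instance[p.name]; })
        .join(", ");
}
```

The point is that `renderInstance` never mentions expenses: swap in a different entity's metadata and the same client keeps working.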

Watch this space for more details and the first release, planned for later this month.


Command Query Separation in TextUML

Ever heard of Command Query Separation? It was introduced by Bertrand Meyer and implemented in Eiffel. But I will let Martin Fowler explain:

The term ‘command query separation’ was coined by Bertrand Meyer in his book “Object Oriented Software Construction” – a book that is one of the most influential OO books during the early days of OO. [...]

The fundamental idea is that we should divide an object’s methods into two sharply separated categories:

  • Queries: Return a result and do not change the observable state of the system (are free of side effects).
  • Commands: Change the state of a system but do not return a value.
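In plain JavaScript terms (a hypothetical `Account` class, not taken from any framework mentioned here), the separation looks like this:

```javascript
// Hypothetical example: command-query separation in a plain JS class.
class Account {
    constructor() {
        this.transactions = [];
    }
    // Command: changes the observable state, returns nothing.
    deposit(amount) {
        this.transactions.push(amount);
    }
    // Query: returns a result, free of side effects.
    balance() {
        return this.transactions.reduce(function (sum, t) { return sum + t; }, 0);
    }
}
```

Calling `balance()` any number of times leaves the account unchanged; calling `deposit()` changes it but tells you nothing back.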

Query operations in UML

UML too allows an operation to be marked as a query. The section on Operations in the UML specification states:

If the isQuery property is true, an invocation of the Operation shall not modify the state of the instance or any other element in the model.

Query operations in TextUML

The next release of TextUML (which runs in Cloudfier today) will start exposing the ability to mark an operation as a query operation. Since TextUML is just a notation for UML, the same definition from the UML spec applies to TextUML operations marked as queries.

But how do you mark an operation as a query in TextUML, you ask? You use the query keyword instead of the usual operation keyword (it is not just a modifier, it is a replacement for the usual keyword):

query totalExpenses(toSum : Expense[*]) : Double;

The TextUML compiler imposes a few rules when it sees a query operation:

  • it will require the operation to have a return value
  • it won’t let the operation perform any actions that could have side effects, such as creating or destroying objects, writing properties, linking objects, or invoking any other non-query operations
  • it will only let a property derivation invoke operations that are query operations

Example of a query operation


    private query totalExpenses(toSum : Expense[*]) : Double;
    begin
        return (toSum.reduce((e : Expense, sum : Double) : Double {
            sum + e.amount
        }, 0) as Double);
    end;

Example of a derived attribute using a query operation


    derived attribute totalRecorded : Double := {
        self.totalExpenses(self.recordedExpenses)
    };

But why is Command Query Separation a good thing?

Allowing a modeler/programmer to explicitly state whether an operation has side effects lets a compiler or runtime take advantage of the guaranteed lack of side effects to do things such as reordering invocations, caching results, or safely reissuing them in case of failure, which can improve performance and reliability.
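For instance, a runtime could memoize a query, which is only safe because queries are guaranteed side-effect-free (a hypothetical sketch, not Cloudfier's actual implementation):

```javascript
// Hypothetical sketch: memoizing a query is safe precisely because
// queries are guaranteed not to change the state of the system.
function memoize(queryFn) {
    const cache = new Map();
    return function (arg) {
        if (!cache.has(arg)) {
            cache.set(arg, queryFn(arg));
        }
        return cache.get(arg);
    };
}

let invocations = 0;
function expensiveQuery(n) {
    invocations++; // stands in for a costly, side-effect-free computation
    return n * 2;
}
const cachedQuery = memoize(expensiveQuery);
```

If `expensiveQuery` were a command, caching it this way would silently drop state changes; the query guarantee is what makes the optimization valid.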


On automatically generating fully functional mobile user interfaces

An upcoming feature in Cloudfier is the automatic generation of fully functional user interfaces that work well on both desktop:
 
[Screenshot: generated desktop UI]
 
and mobile browsers:

[Screenshots: generated mobile UI]
 
This is just a first stab, but it is already available to any Cloudfier app (like this one; try logging in as user: test@abstratt.com, password: Test1234). Right now the mobile UI is read-only, and does not yet expose actions and relationships as the desktop-oriented web UI does. Watch this space for new developments on that.

The case against generated UIs

Cloudfier has always had support for automatic UI generation for desktop browsers (RIA). However, the generated UI had always been intended as a temporary artifact, to be used only when gathering initial feedback from users and while a handcrafted UI (that accesses the back-end functionality via the automatically generated REST API) is being developed (or in the long term, as a power-user UI). The reason is that automatically generated user-interfaces tend to suck, because they don’t recognize that not all entities/actions/properties have the same importance, and that their importance varies between user roles.

Don’t get me wrong, we strongly believe in the model-driven approach to build fully functional applications from a high-level description of the solution (executable domain models). While we think that is the most sane way of building an application’s database, business and API layers (and that those make up a major portion of the application functionality and development costs), we recognize user interfaces must follow constraints that are not properly represented in a domain model of an application: not all use cases have the same weight, and there is often benefit in adopting metaphors that closely mimic the real world (for example, an audio player application should mimic standard controls from physical audio players).

If model-driven development is to be used for generating user interfaces, the most appropriate approach for generating the implementation of such interfaces (and the interfaces only) would be to craft UI-oriented models using a UI modeling language, such as IFML (though I have never tried it). But even if you don’t use a UI-oriented modeling tool, and you build the UI (and the UI only) using traditional construction tools (these days, that would be JavaScript and HTML/CSS) that connect to a back-end fully generated from executable domain models (as Cloudfier supports), you are still much, much better off than building and maintaining the whole thing the traditional way.

Enter mobile UIs

That being said, UIs on mobile devices are usually much simpler than corresponding desktop-based UIs because of the interaction, navigation and dimension constraints imposed by mobile devices, resulting in a UI that shows one application ‘screen’ at a time, with hierarchical navigation. So here is a hypothesis:

Hypothesis: Mobile UIs for line-of-business applications are inherently so much simpler than the corresponding desktop-based UIs, that it is conceivable that generated UIs for mobile devices may provide usability that is similar to manually crafted UIs for said devices.

What do you think? Do you agree that is a quest worth pursuing (and with some likelihood of being proven right)? Or is the answer somehow obvious to you already? Regardless, if you are interested or experienced in user experience and/or model-driven development, please chime in.

Meanwhile, we are setting off to test that hypothesis by building full support for automatically generated mobile UIs for Cloudfier applications. Future posts here will show the progress made as new features (such as actions, relationships and editing) are implemented.


How Cloudfier uses Orion – shell features

Following last week’s post on editor features, today I am going to cover how Cloudfier plugs into Orion’s Shell page to contribute shell commands.

The cloudfier command prefix

All Cloudfier commands must be prefixed with ‘cloudfier’.

By just typing ‘cloudfier ‘ and hitting enter, you are given a list of all Cloudfier-specific commands.

[Screenshot: list of Cloudfier shell commands]

This is how the command prefix is contributed:


provider.registerServiceProvider("orion.shell.command", {}, {   
    name: "cloudfier",
    description: "Cloudfier commands"
});

which is a command contribution without a behavior. All the subcommands you see being offered actually include the prefix in their contributions.

Typical Cloudfier command

The typical Cloudfier command takes a workspace location (a file-type parameter), performs a remote operation and returns a message to the user explaining the outcome of the command (return type is String), and looks somewhat like this:

provider.registerServiceProvider("orion.shell.command", { callback: shellAppInfo }, {   
	name: "cloudfier info",
	description: "Shows information about a Cloudfier application and database",
	parameters: [{
	    name: "application",
	    type: "file",
	    description: "Application to obtain information for"
	}],
	returnType: "string"
});

The behavior of the command is specified by the callback function. In this specific case, the callback performs a couple of HTTP requests against the server, so it returns a dojo.Deferred, which implements the Promise pattern contract used by Orion. Once the last server request is completed, it returns a string to be presented to the user with the outcome of the operation.

[Screenshot: output of the cloudfier info command]

Note that the output of a command needs to use Markdown-style notation to produce links; HTML output is not supported. Also, newlines are honored.
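So a callback that wants to link to, say, an application URL would format its result along these lines (a hypothetical helper, not actual Cloudfier code):

```javascript
// Hypothetical helper: shell command output uses Markdown-style links,
// since HTML is not supported but newlines are honored.
function formatOutcome(appName, appUrl, dbStatus) {
    return "Application: [" + appName + "](" + appUrl + ")\n" +
           "Database: " + dbStatus;
}
```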

Commands that contribute content to the workspace

Commands that contribute content to the workspace use a “file” (single file) or “[file]” (multiple files) return type. Cloudfier has a few commands in this style:

An init-project command, which marks the current directory as a project directory:

provider.registerServiceProvider("orion.shell.command", { callback: shellCreateProject }, {   
    name: "cloudfier init-project",
    description: "Initializes the current directory as a Cloudfier project",
    returnType: "file"
});
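A callback for such a command produces the content of the file to contribute. The exact result shape Orion expects is defined by the orion.shell.command extension point documentation; the field names in this rough sketch are assumptions:

```javascript
// Hypothetical sketch of a file-contributing callback; the actual result
// shape is defined by the orion.shell.command docs, and the field names
// below (path, content) are assumptions for illustration only.
function shellCreateProject(args) {
    return {
        path: "mdd.properties",          // the project marker file
        content: "# Cloudfier project\n" // assumed placeholder content
    };
}
```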

An add-entity command which adds a new entity definition to the current directory:

provider.registerServiceProvider("orion.shell.command", { callback: shellCreateEntity }, {   
    name: "cloudfier add-entity",
    description: "Adds a new entity with the given name to the current directory",
    parameters: [{
        name: "namespace",
        type: {name: "string"},
        description: "Name of the namespace (package) for the entity (class)"
    },
    {
        name: "entity",
        type: {name: "string"},
        description: "Name of the entity (class) to create"
    }],
    returnType: "file"
});

And finally a db-snapshot command which grabs a snapshot of the current application database state and feeds it into a data.json file in the current application directory.

provider.registerServiceProvider("orion.shell.command", { callback: shellDBSnapshot }, {   
    name: "cloudfier db-snapshot",
    description: "Fetches a snapshot of the current application's database and stores it in as a data.json file in the current directory",
    returnType: "file"
});

That snapshot can be further edited and later pushed into the application database.

Note that for all file-generating commands, if files already exist (mdd.properties, <entity-name>.tuml, and data.json, respectively), they will be silently overwritten (bug 421349).

Readers beware

This ends our tour of how Cloudfier uses Orion extension points. Keep in mind this is not documentation.
See this wiki page for the most up-to-date documentation on the orion.shell.command extension point and this blog post by the Orion team for some interesting shell command examples.


How Cloudfier uses Orion – editor features

Cloudfier now runs on Orion 4.0 RC2. It took some learning, patience and a few false starts (I tried the same in the Orion 2.0 and 3.0 cycles), but I finally managed to port the Cloudfier Orion plug-in from version 1.0 RC2 (shipped one year ago) to 4.0 RC2. Hopefully when 4.0 final is released (any time now?), it will be a no-brainer to integrate with it. Only then will I look into hacking/branding it a bit so it doesn’t look identical to a vanilla Orion instance.

But how does Cloudfier extend the Orion base feature set? This post will cover the editor-based features.

Content type

Cloudfier editor-based features are applicable for TextUML files only. This content type definition provides the reference for all features to be configured against.

    provider.registerServiceProvider("orion.core.contenttype", {}, {
        contentTypes: [{  id: "text/uml",
                 name: "TextUML",
                 extension: ["tuml"],
                 extends: "text/plain"
        }]
    });

Outliner

[Screenshot: TextUML outline view]
The outliner relies on the server to parse and generate an outline tree for the contents in the editor.

    var computeOutline = function(editorContext, options) {
        var result = editorContext.getText().then(function(text) {
            return dojo.xhrPost({
	             postData: text,
	             handleAs: 'json',
	             url: "/services/analyzer/?defaultExtension=tuml",
	             load: function(result) {
	                 return result;
	             }
	        });
        });
        return result;
    };


    provider.registerServiceProvider("orion.edit.outliner", { computeOutline: computeOutline }, { contentType: ["text/uml"], id: "com.abstratt.textuml.outliner", name: "TextUML outliner" });

Note that the outliner API changed in 4.0: the editor buffer contents are now available via a deferred instead of directly. Also note that in order to use this API your plugin needs to load Deferred.js (see this orion-dev thread), as it implicitly turns your service into a long-running operation.

Source validation

[Screenshot: validation errors in the editor]

Also a server-side functionality, which already returns a JSON tree in the format expected by the orion.edit.validator extension point.

    var checkSyntax = function(title, contents) {
        return dojo.xhrGet({
             handleAs: 'json',
             url: "/services/builder" + title,
             load: function(result) {
                 return result
             }
        });
    };

    provider.registerServiceProvider("orion.edit.validator", { checkSyntax: checkSyntax }, { contentType: ["text/uml", "application/vnd-json-data"] });

Note that the validation service uses a GET method and only uses the file path, not the contents. The reason is that the server reaches into the project contents stored on the server instead of the client contents (in order to perform multi-file validation).

Syntax highlighting

[Screenshot: TextUML syntax highlighting]

    /* Registers a highlighter service. */    
    provider.registerServiceProvider("orion.edit.highlighter",
      {
        // "grammar" provider is purely declarative. No service methods.
      }, {
        type : "grammar",
        contentType: ["text/uml"],
        grammar: {
          patterns: [
			  {  
			     end: '"',
			     begin: '"',
			     name: 'string.quoted.double.textuml',
			  },
			  {  begin: "\\(\\*", 
			     end: "\\*\\)",
			     name: "comment.model.textuml"
			  },
			  {  
			     begin: "/\\*", 
			     end: "\\*/",
			     name: "comment.ignored.textuml"
			  },
			  {  
			     name: 'keyword.control.untitled',
			     match: '\\b(abstract|access|aggregation|alias|and|any|apply|association|as|attribute|begin|broadcast|by|class|component|composition|constant|datatype|dependency|derived|destroy|do|else|elseif|end|entry|enumeration|exit|extends|external|function|id|if|implements|interface|in|initial|inout|invariant|is|link|model|navigable|new|nonunique|not|on|operation|or|ordered|out|package|port|postcondition|precondition|private|primitive|profile|property|protected|provided|public|raise|raises|readonly|reception|reference|required|return|role|self|send|signal|specializes|state|statemachine|static|stereotype|subsets|terminate|to|transition|type|unique|unlink|unordered|var|when)\\b'
			  },
              {
                "match": "([a-zA-Z_][a-zA-Z0-9_]*)",
                "name": "variable.other.textuml"
              },                  
              {
	            "match": "<|>|<=|>=|=|==|\\*|/|-|\\+",
	            "name": "keyword.other.textuml"
              },
              {
	            "match": ";",
	            "name": "punctuation.textuml"
              }
            ]
        }
    });

Source formatting

The code formatter in Cloudfier is server-side, so the client-side code is quite simple:

    var autoFormat = function(selectedText, text, selection, resource) {
        return dojo.xhrPost({
             postData: text,
             handleAs: 'text',
             url: "/services/formatter/?fileName=" + resource,
             load: function(result) {
                 return { text: result, selection: null };
             }
        });
    }; 

    provider.registerServiceProvider("orion.edit.command", {
        run : autoFormat
    }, {
        name : "Format (^M)",
        key : [ "m", true ],
        contentType: ["text/uml"]
    });

Content assist

[Screenshot: content assist proposals]
Content assist support is quite limited: basically a few shortcuts for creating new source code elements, useful for users not familiar with TextUML, the notation used in Cloudfier.

    var computeProposals = function(prefix, buffer, selection) {
        return [
            {
                proposal: "package package_name;\n\n/* add classes here */\n\nend.",
                description: 'New package' 
            },
            {
                proposal: "class class_name\n/* add attributes and operations here */\nend;",
                description: 'New class' 
            },
            { 
                proposal: "attribute attribute_name : String;",
                description: 'New attribute' 
            },
            { 
                proposal: "operation operation_name(param1 : String, param2 : Integer) : Boolean;\nbegin\n    /* IMPLEMENT ME */\n    return false;\nend;",
                description: 'New operation' 
            },
            { 
                proposal: "\tattribute status2 : SM1;\n\toperation action1();\n\toperation action2();\n\toperation action3();\n\tstatemachine SM1\n\t\tinitial state State0\n\t\t\ttransition on call(action1) to State1;\n\t\tend;\n\t\tstate State1\n\t\t\ttransition on call(action1) to State1\n\t\t\ttransition on call(action2) to State2;\n\t\tend;\n\t\tstate State2\n\t\t\ttransition  on call(action1) to State1\n\t\t\ttransition on call(action3) to State3;\n\t\tend;\n\t\tterminate state State3;\n\tend;\n\t\tend;\n",
                description: 'New state machine' 
            }
        ];
    };

    provider.registerServiceProvider("orion.edit.contentAssist",
	    {
	        computeProposals: computeProposals
	    },
	    {
	        name: "TextUML content assist",
	        contentType: ["text/uml"]
	    }
	);

Coming next

The next post will cover the Shell-based features in Cloudfier.


Authenticating users in Cloudfier applications

Up until very recently, Cloudfier applications had no way to authenticate users – there was a login dialog, but all it did was allow users to assume an arbitrary identity by entering the name of an existing user.

That is no longer the case. The latest release build (#29) addresses that by implementing a full-blown authentication mechanism. Now, when you try to access a Cloudfier app, like this one, you will be greeted by this login dialog:

[Screenshot: login dialog]

which allows you to sign up, request a password reset and sign in either with proper credentials or as a guest.

For more details about how authentication works in Cloudfier applications, check the new authentication documentation.

BTW, kudos to Stormpath for providing such a great API for managing and authenticating user credentials. Highly recommended.

What’s next?

Glad you asked. Next is authentication’s bigger friend: authorization. Right now any user can do anything to any data, and of course that is not reasonable even for the simplest applications. Stay tuned for more on that.


New Cloudfier release supports many-to-many associations

One clear gap Cloudfier used to have was lack of support for many-to-many associations. That has now been implemented all the way from back-end to the UI.

For instance, in the ShipIt! sample issue tracking application, a user can watch multiple issues, and an issue can be watched by multiple users:

class Issue
end;

class User
end;

association WatchedIssues
    navigable role watchers : User[*];
    navigable role issuesWatched : Issue[*];
end;

UI

…which in the UI means there is now a way to link issues as watched issues for a user:

(and vice-versa from Issue, as the relationship is navigable both ways). Once the user triggers that action, they can pick multiple target objects (in this case, issues) to pair the source object (in this case, a User) up with, by clicking the “connector” button on the target entity instance’s toolbar (the second from left to right):

which, once triggered, shows a notice confirming the objects have now been linked together.

I will admit this UI may require some getting used to. It is just a first cut, and I am interested in suggestions from those of you less UX-challenged than me.

REST API

Accordingly, the application’s REST API allows querying related objects using a URI in the form:

…/services/api/<application>/instances/<entity>/<id>/relationships/<relationship-name>/

for instance:

…/services/api/demo-cloudfier-examples-shipit/instances/shipit.Issue/10/relationships/watchers/

produces a list of all users watching the base issue:

[
  {
    uri: ".../instances/shipit.User/2",
    shorthand: "rperez",
    type: ".../entities/shipit.User",
    typeName: "User",
    values: {
      ...
    },
    links: {
      ...
    },
    actions: {
      ...
    },
    relatedUri: ".../instances/shipit.Issue/10/relationships/watchers/2"
  },
  {
    uri: ".../instances/shipit.User/8",
    shorthand: "gtorres",
    type: ".../entities/shipit.User",
    typeName: "User",
    values: {
      ...
    },
    links: {
      ...
    },
    actions: {
      ...
    },
    relatedUri: ".../instances/shipit.Issue/10/relationships/watchers/8"
  }
]

and to establish links, you can POST a similar representation to the same URI; you only really need to include the ‘uri’ attribute, as everything else is ignored.
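In other words, linking a user as a watcher of an issue takes a minimal payload. The sketch below uses a hypothetical base URL and helper name; only the ‘uri’ attribute in the body actually matters:

```javascript
// Sketch: building the linking request against a hypothetical base URL.
// The server only looks at the 'uri' attribute of the posted
// representation; everything else is ignored.
const base = "http://example.com/services/api/demo-cloudfier-examples-shipit";

function linkWatcherRequest(issueId, userUri) {
    return {
        method: "POST",
        url: base + "/instances/shipit.Issue/" + issueId + "/relationships/watchers/",
        body: JSON.stringify({ uri: userUri })
    };
}
```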

New tour video

There is also now a new tour video, this time with audio and much better image quality. If you gave up on watching the original one, please give this one a try!


How can modeling be harder than programming?

One argument often posed against model-driven development is that not all developers have the skills required for modeling. This recent thread in the UML Forum discussion group includes a very interesting debate on that and started with this statement by Dan George:

Don’t take his comments about the orders of magnitude reduction in code size to mean orders of magnitude reduction in required skill. I think this is the reason model-driven development is not mainstream. The stream is full of programmers that could never even develop the skills necessary to use MDD. Humbly, I know that I’m still a wannabe.

which was contested by H.S. Lahman

I have to disagree (somewhat) with this assessment. Yes, there is a substantial learning curve and OOA models need to be developed much more rigorously than they are in most shops today. Also, one could argue that the learning curve is essentially the same learning curve needed to learn to do OOA/D properly.

and later by Thomas Mercer-Hursh:

There is something confusing about the idea of good modeling being hard. After all, all one is doing is describing how the system is supposed to work without having to worry about the implementation details. If one can’t do that, then how is one supposed to manually create a correct, working system?

I sympathize with Lahman’s and Thomas’ points (and share some of their puzzlement), but I do agree with Dan’s initial point: modeling can be harder than programming.

Separation of concerns? Not in the job description

The fact is that one can deliver software that was apparently appropriately built (from a QA/product owner/user point-of-view) and yet fail to fully understand the constraints and rules of the business domain the software is meant to serve.

Also, even if a developer does understand the business requirements at the time the solution is originally implemented, it is unfortunately very common that they will fail to encode the solution in a way that clearly expresses the intent and makes it easy for other developers (or themselves, at a later time) to correlate the code to business requirements (as proposed by Domain-Driven Design), leading to software that is very hard to maintain (because it is hard to understand, or hard to change without breaking things). Model-driven development is a great approach for proper separation of concerns when building software (the greatest, if you ask me). However, as sad as that is, proper separation of concerns is not a must-have trait for delivering “appropriate” software (from a narrow, external, immediate standpoint). Ergo, one can build software without modeling, even implicitly.

I don’t think those things happen because developers are sociopaths. I think properly understanding and representing the concerns of a business domain when building software is a very desirable skill (I would say critical), but realistically not all that common in software developers. But how can hordes of arguably proficient programmers get away without such skill?

Delivering software the traditional (programming-centric) way often involves carefully patching together a mess of code, configuration and some voodoo to address a complex set of functional and non-functional requirements that works at the time of observation (a house of cards is an obvious image here). Building software that way makes it too easy to be overwhelmed by all the minutiae imposed by each technology and the complexity of making them work together, and to lose track of the high-level goals one is trying to achieve – let alone consciously represent and communicate them.

Conclusion

So even though I fully agree with the sentiment that proper programming requires a good deal of modeling skill, I do think it is indeed possible to deliver apparently working software (from an external point of view) without consciously doing any proper modeling. If you stick to the externally-facing aspects of software development, all that is valued is time to deliver, correctness, performance, and use of some set of technologies. Unfortunately, that is all that is required for most development positions. Ease of maintenance via proper separation of concerns is nowhere in that list. And model-driven development is essentially an approach for separation of concerns on steroids.

What do you think?


Checking the current state of a UML state machine

In Cloudfier, we use UML as the core language for building business applications. UML is usually well-equipped for general purpose business domain-centric application modeling, but that doesn’t mean it always does everything needed out of the box.

Case at hand: assuming one is developing an expense reporting application and modeled an expense’s status as a state machine (in TextUML):

class Expense
    /* ... */
    attribute status : Status;
    operation review();
    operation approve();
    operation reject();
    operation submit();

    statemachine Status
        initial state Draft
            transition on call(submit) to Submitted;
        state Submitted
            transition on call(approve) to Approved
            transition on call(reject) to Rejected
            transition on call(review) to Draft;
        terminate state Approved;
        terminate state Rejected;        
    end;
end;

How do you model the following in UML?

Show me all expenses that are waiting for approval.

Turns out there is no support in UML for reasoning based on the current state of a state machine.

Creative modeling

So, what do you do when UML does not have a language element that you need? You extend it; in our case, with a stereotype applicable to the LiteralNull metaclass (in TextUML):

stereotype VertexLiteral extends LiteralNull
    property vertex : Vertex;
end;

So, a vertex literal is a value specification, more specifically, a variant of LiteralNull, that can refer to a Vertex, which is a metaclass that represents the states (including pseudo-states) in a state machine.

Notation, notation

In terms of notation, I chose to make State/Vertex literals look like enumeration literals: Status#Approved or Status#Draft. So, back to the original question, this is how you could model a query that returns all expenses that are in the Submitted state:

    static operation findAllSubmitted() : Expense[*];
    begin 
        return Expense extent.select ((e : Expense) : Boolean {
            return e.status == Status#Submitted
        });
    end;

If you are thinking to yourself: I didn’t know UML had queries or closures!?, well, it usually doesn’t. See the posts on SQL queries in UML and Closures in UML for some background on this.

Note also that if you want to refer to the symbol Status from a class other than the one enclosing it, you will need to qualify it (i.e. Expense::Status#Submitted).
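For comparison, a hand-coded Java counterpart of findAllSubmitted would just filter on a plain status field. This is a sketch of mine, not generated code; the class shape and the static extent list are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;

class Expense {
    // Mirrors the Status state machine as a plain enum
    enum Status { DRAFT, SUBMITTED, APPROVED, REJECTED }

    Status status = Status.DRAFT;

    // Stand-in for the class extent used in the model
    static final List<Expense> extent = new ArrayList<>();

    // Hand-written equivalent of the state-based query in the model
    static List<Expense> findAllSubmitted() {
        List<Expense> result = new ArrayList<>();
        for (Expense e : extent)
            if (e.status == Status.SUBMITTED)
                result.add(e);
        return result;
    }
}
```

The point of the stereotype approach, of course, is that at the model level the state machine and the query stay connected, rather than collapsing into an ordinary field.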

Show me more!

You can run the Expenses application showing state machines and state-based queries in Cloudfier right now (login is “guest” or any of the employee names you will see later).

The entire Expenses sample application (currently 150 lines of generously spaced TextUML) is available on BitBucket. You can also easily check it out into Cloudfier so you can run your own copy of the application on the web (there is nothing to install). Give it a try!

What do you think?

Your feedback (questions, support or criticism) to any of the ideas presented in this post is very welcome.

UPDATE: I started a thread on the subject on the UML Forum group, and turns out you can do this kind of reasoning in OCL, but indeed, not in UML itself. Well, now you can.


Yet another Orion-based site: cloudfier.com

Okay, we are live.

I just put the last finishing touches on the developer site at cloudfier.com.

The developer site, develop.cloudfier.com, is powered by Orion. Cloudfier’s instance of Orion has several features to support modeling with TextUML, such as:

  • Syntax highlighting
  • Outline
  • Validation
  • Auto-formatting
  • Templates

and I have a picture to prove it:

but wouldn’t you rather see for yourself? If you are shy because you don’t know how to model in TextUML, just make sure you create a file with a “.tuml” extension and use the content assist templates to get a model going. Or if you are feeling lazy, just clone this Git repository: https://bitbucket.org/abstratt/cloudfier-examples.git

But what is Cloudfier, and who is it for, you may ask? I won’t tell you here though. Please go to cloudfier.com and give it a quick read. If you don’t get it, please let me know in the comments – a main goal right now is to ensure the main page gets the message across.


TextUML Toolkit finally gets continuous integration thanks to Tycho and CloudBees

TextUML Toolkit 1.8 is now available! You can install it as usual using http://abstratt.com/update as the update site. There is also a snapshot update site, which will work from within Eclipse only:

jar:https://repository-textuml.forge.cloudbees.com/snapshot/com/abstratt/mdd/com.abstratt.mdd.oss.repository/1.0/com.abstratt.mdd.oss.repository-1.0.zip!/

This is a transition release in which the TextUML Toolkit moves to continuous integration builds via Eclipse Tycho, as opposed to developer-initiated builds from the IDE. This benefits contributors (the development setup is much simpler), but primarily users – since it is now so much easier to obtain the source code and generate a release, users can expect much more frequent releases, and hopefully more goodies from occasional contributors.

Talking about frequent releases, if you don’t mind living on the bleeding edge, I invite you to install the TextUML Toolkit from the snapshot update site (that is what you get if you install the Toolkit using the Eclipse Marketplace Client). That way, features or fixes will become available to you a day after they have been committed.

This release contains a number of new features and bug fixes added since 1.7 was released a year ago, but we are not documenting those yet. You will see them properly promoted in a future release. Our focus this time was to get our release engineering act straight, and I think we succeeded, thanks to Tycho.

Finally, we would like to thank CloudBees for their generous free plan that allows us to set up Jenkins continuous builds for the TextUML Toolkit at no cost. On that note, we are applying for a FOSS plan so we can have our build results available for everyone to see, and as a bonus, enjoy a slightly higher monthly build quota. As you can see, we are already living up to our side of the deal by spreading the word about their cool DEV@cloud product. :)

UPDATE: CloudBees is now providing the TextUML Toolkit project with a free DEV@cloud instance.


Adding State Machines to TextUML and AlphaSimple [take 1]

I decided to go ahead and finally implement support for state machines in TextUML and AlphaSimple.

This is an example of what a state machine will look like (take 1), based on fig. 15.33 in the UML specification 2.4:


(...)
statemachine Phone

  initial state
    entry { self.startDialTone() }
    exit { self.stopDialTone() }
    transition on digit to PartialDial;

  state PartialDial
    transition on digit to PartialDial
    transition when { self.numberIsValid() } to Completed;

  final state Completed;

end;
(...)

A state machine may declare multiple states. Each state declares a number of transitions to other states. Each transition may be triggered by many events (or none), each denoted by the keyword ‘on’, and may optionally present a guard constraint (using the keyword ‘when’). The initial state is the only one that may remain unnamed. The final state cannot have outgoing transitions, but just like any other state, it may declare entry/exit behaviors.
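The semantics just described can be pictured in plain code. A hand-rolled Java equivalent of the Phone machine above might look like this (an illustrative sketch of mine, not Toolkit output; entry/exit behaviors are omitted and the validity check is a stand-in):

```java
enum PhoneState { DIALING, PARTIAL_DIAL, COMPLETED }

class Phone {
    private PhoneState state = PhoneState.DIALING;
    private final StringBuilder number = new StringBuilder();

    PhoneState state() { return state; }

    // the 'digit' event: legal from the initial state and from PartialDial
    void digit(char d) {
        if (state == PhoneState.COMPLETED)
            throw new IllegalStateException("final state has no outgoing transitions");
        number.append(d);
        // guard 'when { self.numberIsValid() }' decides which transition fires
        state = numberIsValid() ? PhoneState.COMPLETED : PhoneState.PARTIAL_DIAL;
    }

    private boolean numberIsValid() {
        return number.length() == 7; // stand-in for real validation
    }
}
```

Note how the guard and the event dispatch, which the model states declaratively, end up interleaved in imperative code.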

What do you think? I did try to find existing textual notations for UML, like this and this, but none of those seem to be documented or look like covering all the UML features I want to support. Any other pointers?


Feedback wanted: invariant constraints in AlphaSimple/TextUML

I am working on support for invariant constraints in AlphaSimple/TextUML.

Some of the basic support has already made it into the live site. For instance, the AlphaSimple project has a rule that says:

“A user may not have more than 3 private projects.”

This in TextUML looks like this:


class User 

    attribute projects : Project[*] 
        invariant Maximum\ 3\ private\ projects { 
            return self.privateProjects.size() <= 3
        };
        
    derived attribute privateProjects : Project[*] := () : Project[*] {
        return self.projects.select((p : Project) : Boolean {
            return not p.shared
        });
    };

end;

(Note the constraint relies on a derived property for more easily expressing the concept of private projects, and that backslashes are used to escape characters that otherwise would not be allowed in identifiers, such as whitespaces.)
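For concreteness, the invariant and derived property above might materialize in plain Java along these lines. This is a hand-written sketch of mine; the class shapes and the isValid method are assumptions, not AlphaSimple output:

```java
import java.util.ArrayList;
import java.util.List;

class Project {
    boolean shared;
    Project(boolean shared) { this.shared = shared; }
}

class User {
    final List<Project> projects = new ArrayList<>();

    // derived attribute 'privateProjects': computed, not stored
    List<Project> getPrivateProjects() {
        List<Project> result = new ArrayList<>();
        for (Project p : projects)
            if (!p.shared)
                result.add(p);
        return result;
    }

    // invariant "Maximum 3 private projects"
    boolean isValid() {
        return getPrivateProjects().size() <= 3;
    }
}
```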

What do you think? Does it make sense? I know the syntax for higher order functions could benefit from some sugar, but that can be easily fixed later. I am much more interested in feedback on the idea of modeling with executable constraints than in syntax.

Wading in unknown waters

I am in the process of modeling a real world application in AlphaSimple and for most cases, the level of support for constraints that we are building seems to be sufficient and straightforward to apply.

I have, though, found one kind of constraint that is hard to model (remember, AlphaSimple is a tool for modeling business domains, not a programming language): in general terms, you cannot modify or delete an object if the object (or a related object) is in some state. For example:

"One cannot delete a project's files if the project is currently shared".

Can you think of a feature in UML that could be used to address a rule like that? I can't think of anything obvious (ChangeEvent looks relevant at first glance, but there is no support for events in TextUML yet).
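In the meantime, absent a modeling-level feature, the rule could be enforced imperatively. Here is a minimal hand-written Java sketch (all names are mine, for illustration only; a modeled solution would state the guard declaratively instead):

```java
import java.util.ArrayList;
import java.util.List;

// Business-rule violation signaled as an exception
class SharedProjectException extends RuntimeException {}

class ProjectFiles {
    boolean shared;
    private final List<String> files = new ArrayList<>();

    void addFile(String name) { files.add(name); }

    // guard: deleting files is only legal while the project is not shared
    void deleteFile(String name) {
        if (shared)
            throw new SharedProjectException();
        files.remove(name);
    }

    int fileCount() { return files.size(); }
}
```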

Any ideas are really appreciated.


MDD meets TDD (part II): Code Generation

Here at Abstratt we are big believers of model-driven development and automated testing. I wrote here a couple of months ago about how one could represent requirements as test cases for executable models, or test-driven modeling. But another very interesting interaction between the model-driven and test-driven approaches is test-driven code generation.

You may have seen our plan for testing code generation before. We are glad to report that that plan has materialized and code generation tests are now supported in AlphaSimple. Follow the steps below for a quick tour over this cool new feature!

Create a project in AlphaSimple

First, you will need a model so you can generate code from. Create a project in AlphaSimple and a simple model.


package person;

enumeration Gender 
  Male, Female
end; 

class Person
    attribute name : String; 
    attribute gender : Gender; 
end;

end.

Enable code generation and automated testing

Create a mdd.properties file in your project to set it up for code generation and automated testing:


# declares the code generation engine
mdd.target.engine=stringtemplate

# imports existing POJO generation template projects
mdd.importedProjects=http://cloudfier.com/alphasimple/mdd/publisher/rafael-800/,http://cloudfier.com/alphasimple/mdd/publisher/rafael-548/

# declares a code generation test suite in the project
mdd.target.my_tests.template=my_tests.stg
mdd.target.my_tests.testing=true

# enables automated tests (model and templates)
mdd.enableTests=true

Write a code generation test suite

A code generation test suite has the form of a template group file (extension .stg) configured as a test template (already done in the mdd.properties above).

Create a template group file named my_tests.stg (because that is the name we declared in mdd.properties), with the following contents:


group my_tests : pojo_struct;

actual_pojo_enumeration(element, elementName = "person::Gender") ::= "<element:pojoEnumeration()>"

expected_pojo_enumeration() ::= <<
enum Gender {
    Male, Female
}
>>

A code generation test case is defined as a pair of templates: one that produces the expected contents, and another that produces the actual contents. Their names must be expected_<name> and actual_<name>. The pair of templates in the test suite above forms a test case named “pojo_enumeration”, which unsurprisingly exercises generation of enumerations in Java. pojo_enumeration is a pre-existing template defined in the “Codegen – POJO templates” project, which is why we import a couple of projects in the mdd.properties file, and why we declare our test suite as an extension of the pojo_struct template group. In the typical scenario, though, you would have the templates being tested and the template tests in the same project.

Fix the test failures

If you followed the instructions up to here, you should be seeing a build error like this:



Line	File		Description
3	my_tests.stg	Test "pojo_enumeration" failed: [-public -]enum Gender {\n Male, Female\n}

which reports that the generated code is not exactly what was expected – the template generated the enumeration with an explicit public modifier, and your test case did not expect that. Turns out that in this case the generated code is correct, and the test case is actually incorrect. Fix that by ensuring the expected contents also include the public modifier (note that spaces, newlines and tabs are significant and can cause a test to fail). Save and notice how the build failure goes away.

That is it!

That simple. We built this feature because otherwise crafting templates that can generate code from executable models is really hard to get right. We live by it, and hope you like it too. That is how we got the spanking new version of the POJO target platform to work (see post describing it and the actual project) – we actually wrote the test cases first before writing the templates, and wrote new test cases whenever we found a bug – in the true spirit of test-driven code generation.


Can you tell this is 100% generated code?

Can you tell this code was fully generated from a UML model?

This is all live in AlphaSimple – every time you hit those URLs the code is being regenerated on the fly. If you are curious, the UML model is available in full in the TextUML’s textual notation, as well as in the conventional graphical notation. For looking at the entire project, including the code generation templates, check out the corresponding AlphaSimple project.

Preconditions

Operation preconditions impose rules on the target object state or the invocation parameters. For instance, for making a deposit, the amount must be a positive value:


operation deposit(amount : Double);
precondition (amount) { return amount > 0 }
begin
    ...
end;

which in Java could materialize like this:


public void deposit(Double amount) {
    assert amount > 0;
    ...
}

Unrelated to preconditions, another case where assertions can be automatically generated is when a property is required (lowerBound > 0):


public void setNumber(String number) {
    assert number != null;
    ...
}

Imperative behavior

In order to achieve 100% code generation, models must specify not only structural aspects, but also behavior (i.e. they must be executable). For example, the massAdjust class operation in the model is defined like this:


static operation massAdjust(rate : Double);
begin
    Account extent.forEach((a : Account) { 
        a.deposit(a.balance*rate) 
    });
end;

which in Java results in code like this:


public static void massAdjust(Double rate) {
    for (Account a : Account.allInstances()) {
        a.deposit(a.getBalance() * rate);
    };
}

Derived properties

Another important need for full code generation is proper support for derived properties (a.k.a. calculated fields). For example, see the Account.inGoodStanding derived attribute below:


derived attribute inGoodStanding : Boolean := () : Boolean { 
    return self.balance >= 0 
};

which results in the following Java code:


public Boolean isInGoodStanding() {
    return this.getBalance() >= 0;
}

Set processing with higher-order functions

Any information management application will require a lot of manipulation of sets of objects. Such sets originate from class extents (akin to “#allInstances” for you Smalltalk heads) or association traversals. For that, TextUML supports the higher-order functions select (filter), collect (map) and reduce (fold), in addition to forEach already shown earlier. For example, the following method returns the best customers, or customers with account balances above a threshold:


static operation bestCustomers(threshold : Double) : Person[*];
begin
    return
        (Account extent
            .select((a:Account) : Boolean { return a.balance >= threshold })
            .collect((a:Account) : Person { return a->owner }) as Person);
end;        

which even though Java does not yet support higher-order functions, results in the following code:


public static Set<Person> bestCustomers(Double threshold) {
    Set<Person> result = new HashSet<Person>();
    for (Account a : Account.allInstances()) {
        if (a.getBalance() >= threshold) {
            Person mapped = a.getOwner();
            result.add(mapped);
        }
    }
    return result;
}

which demonstrates the power of select and collect. For an example of reduce, look no further than the Person.totalWorth attribute:


derived attribute totalWorth : Double := () : Double {
    return (self<-PersonAccounts->accounts.reduce(
        (a : Account, partial : Double) : Double { return partial + a.balance }, 0
    ) as Double);
};  

which (hopefully unsurprisingly) maps to the following Java code:


public Double getTotalWorth() {
    Double partial;
    partial = 0.0;
    for (Account a : this.getAccounts()) {
        partial = partial + a.getBalance();
    }
    return partial;
}

Would you hire AlphaSimple?

Would you hire a developer if they wrote Java code like AlphaSimple produces? For one thing, you can’t complain about the guy not being consistent. :) Do you think the code AlphaSimple produces needs improvement? Where?

Want to try by yourself?

There are still some bugs in the code generation that we need to fix, but overall the “POJO” target platform is working quite well. If you would like to try it yourself, create an account in AlphaSimple and, to make things easier, clone a public project that has code generation enabled (like the “AlphaSimple” project).


11 Dogmas of Model-Driven Development

I prepared the following slides for my Eclipse DemoCamp presentation on AlphaSimple but ended up not having time to cover them. The goal was not to try to convert the audience, but to make them understand where we are coming from, and why AlphaSimple is the way it is.

And here they are again for the sake of searchability:

I – Enterprise Software is much harder than it should be, lack of separation of concerns is to blame.

II – Domain and architectural/implementation concerns are completely different beasts and should be addressed separately and differently.

III – What makes a language good for implementation makes it suboptimal for modeling, and vice-versa.

IV – Domain concerns can and should be fully addressed during modeling, implementation should be a trivial mapping.

V – A model that fully addresses domain concerns will expose gaps in requirements much earlier.

VI – A model that fully addresses domain concerns allows the solution to be validated much earlier.

VII – No modeling language is more understandable to end-users than a running application (or prototype).

VIII – A single architecture can potentially serve applications of completely unrelated domains.

IX – A same application can potentially be implemented according to many different architectures.

X – Implementation decisions are based on known guidelines applied consistently throughout the application, and beg for automation.

XI – The target platform should not dictate the development tools, and vice-versa.

I truly believe in those principles, and feel frustrated when I realize how far the software industry is from abiding by them.

So, what do you think? Do you agree these are important principles and values? Would you call B.S. on any of them? What are your principles and values that drive your vision of what software development should look like?


MDD meets TDD: mapping requirements as model test cases

Executable models, as the name implies, are models that are complete and precise enough to be executed. One of the key benefits is that you can evaluate your model very early in the development life cycle. That allows you to ensure the model is generally correct and satisfies the requirements even before you have committed to a particular implementation platform.

One way to perform early validation is to automatically generate a prototype that non-technical stakeholders can play with and (manually) confirm the proposed model does indeed satisfy their needs (like this).

Another less obvious way to benefit from executable models since day one is automated testing.

The requirements

For instance, let’s consider an application that needs to deal with money sums:

  • REQ1: a money sum is associated with a currency
  • REQ2: you can add or subtract two money sums
  • REQ3: you can convert a money sum to another currency given an exchange rate
  • REQ4: you cannot combine money sums with different currencies

The solution

A possible solution for the requirements above could look like this (in TextUML):

package money;

class MixedCurrency
end;

class Money
  attribute amount : Double;
  attribute currency : String;
  
  static operation make(amount : Double, currency : String) : Money;
  begin 
      var m : Money;
      m := new Money;
      m.amount := amount;
      m.currency := currency;
      return m;
  end;
  
  operation add(another : Money) : Money;
  precondition (another) raises MixedCurrency { return self.currency = another.currency }
  begin
      return Money#make(self.amount + another.amount, self.currency);
  end;
  
  operation subtract(another : Money) : Money;
  precondition (another) raises MixedCurrency { return self.currency = another.currency }  
  begin
      return Money#make(self.amount - another.amount, self.currency);      
  end;
  
  operation convert(anotherCurrency : String, exchangeRate : Double) : Money;
  begin
      return Money#make(self.amount * exchangeRate, anotherCurrency);
  end;  
end;
        
end.

Now, did we get it right? I think so, but don’t take my word for it.

The proof

Let’s start from the beginning, and ensure we satisfy REQ1 (a money sum is a pair <amount, currency>):

[Test]
operation testBasic();
begin
    var m1 : Money;
    m1 := Money#make(12, "CHF");
    Assert#assertEquals(12, m1.amount);
    Assert#assertEquals("CHF", m1.currency);
end;

It can’t get any simpler. This test shows that you create a money object by providing an amount and a currency.

Now let’s get to REQ2, which is more elaborate – you can add and subtract two money sums:

[Test]
operation testSimpleAddAndSubtract();
begin
    var m1 : Money, m2 : Money, m3 : Money, m4 : Money;
    m1 := Money#make(12, "CHF");
    m2 := Money#make(14, "CHF");

    m3 := m1.add(m2);    
    Assert#assertEquals(26, m3.amount);
    Assert#assertEquals("CHF", m3.currency);
    
    /* if m1 + m2 = m3, then m3 - m2 = m1 */
    m4 := m3.subtract(m2);
    Assert#assertEquals(m1.amount, m4.amount);
    Assert#assertEquals(m1.currency, m4.currency);
end;

We add two values, check the result, then subtract one of them from the result and expect to get the other.

REQ3 is simple as well, and specifies how amounts can be converted across currencies:

[Test]
operation testConversion();
begin
    var m1 : Money, result : Money;
    m1 := Money#make(3, "CHF");
    result := m1.convert("USD", 2.5);
    Assert#assertEquals(7.5, result.amount);
    Assert#assertEquals("USD", result.currency);
end;

We ensure conversion generates a Money object with the right amount and the expected currency.

Finally, REQ4 is not a feature, but a constraint (currencies cannot be mixed), so we need to test for rule violations:

[Test]
operation testMixedCurrency();
begin
    try
        Money#make(12, "CHF").add(Money#make(14, "USD")); 
        /* fail, should never get here */
        Assert#fail("should have failed");
    catch (expected : MixedCurrency)
        /* success */
    end;
end;

We expect the operation to fail due to a violation of a business rule. The business rule is identified by an object of a proper exception type.

There you go. Because we are using executable models, even before we decided what implementation platform we want to target, we already have a solution in which we have a high level of confidence that it addresses the domain-centric functional requirements for the application to be developed.
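Once an implementation platform is eventually chosen, the verified model could map, for instance, to Java along these lines. This is my own rough sketch of a possible mapping, mirroring the precondition-as-exception idiom from the model; none of it is actual generated output:

```java
// Business-rule violation (REQ4) surfaced as an exception type
class MixedCurrency extends RuntimeException {}

class Money {
    final double amount;
    final String currency;

    private Money(double amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    static Money make(double amount, String currency) {
        return new Money(amount, currency);
    }

    // precondition: currencies must match
    Money add(Money another) {
        if (!currency.equals(another.currency)) throw new MixedCurrency();
        return make(amount + another.amount, currency);
    }

    Money subtract(Money another) {
        if (!currency.equals(another.currency)) throw new MixedCurrency();
        return make(amount - another.amount, currency);
    }

    Money convert(String anotherCurrency, double exchangeRate) {
        return make(amount * exchangeRate, anotherCurrency);
    }
}
```

The point stands, though: the tests above validated the model itself, before any such mapping existed.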

Can you say “Test-driven modeling”?

Imagine you could encode all non-technical functional requirements for the system in the form of acceptance tests. The tests will run against your models whenever a change (to model or test) occurs. Following the Test-Driven Development approach, you alternate between encoding the next requirement as a test case and enhancing the model to address the latest test added.

Whenever requirements change, you change the corresponding test and you can easily tell how the model must be modified to satisfy the new requirements. If you want to know why some aspect of the solution is the way it is, you change the model and see the affected tests fail. There is your requirement traceability right there.

See it by yourself

Would you like to give the mix of executable modeling and test-driven development a try? Sign up to AlphaSimple now, then open the public project repository and clone the “Test Infected” project (or just view it here).

P.S.: does this example model look familiar? It should – it was borrowed from “Test Infected: Programmers Love Writing Tests“, the classical introduction to unit testing, courtesy of Beck, Gamma et al.


Testing code generation templates – brainstorming

We would like to support automated testing of templates in AlphaSimple projects. I have been “test-infected” for most of my career, and the idea of writing code generation templates that are verified manually screams “unsustainable” to me. We need a cheap and easily repeatable way of ensuring code generation templates produce what they intend to produce.

Back-of-a-napkin design for code generation testing:

  1. by convention, for each test case, declare two transformations: one will hardcode the expected results, and another will trigger the transformation to test with some set of parameters (typically, an element of a model). We can pair transformations based on their names: “expected_foo” and “actual_foo” for a test case named “foo”
  2. if the results are identical, the test passes; otherwise, the test fails (optionally, use a warning for the cases where the only differences are around layout, i.e., non significant chars like spaces/newlines – optionally, because people generating Python code will care about layout)
  3. just as we do for model test failures, report template test failures as build errors
  4. run template tests after model tests, and only if those pass
  5. (cherry on top) report text differences in a sane way (some libraries out there can do text diff’ng)
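The pairing-by-name idea in steps 1 and 2 can be sketched as a tiny comparison harness. This is illustrative Java of my own (template names and the rendered map are hypothetical stand-ins; a real implementation would plug into the transformation engine and report failures as build errors, per step 3):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class TemplateTestRunner {
    // transformation name -> rendered output, as the engine would produce it
    final Map<String, String> rendered = new LinkedHashMap<>();

    // Pair "expected_<name>" with "actual_<name>" and compare their outputs.
    Map<String, Boolean> run() {
        Map<String, Boolean> results = new LinkedHashMap<>();
        for (String name : rendered.keySet()) {
            if (!name.startsWith("expected_"))
                continue;
            String testCase = name.substring("expected_".length());
            String actual = rendered.get("actual_" + testCase);
            // identical contents -> pass; anything else (including a
            // missing actual_ counterpart) -> fail
            results.put(testCase, rendered.get(name).equals(actual));
        }
        return results;
    }
}
```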

Does that make sense? Any suggestions/comments (simpler is better)? Have you done or seen anything similar?
