Friday, December 25, 2009

My first JavaFX script

I just wrote my first JavaFX script. This script is actually a little benchmark I run when first looking at a new language. Is it rigorous and scientific? Not at all. It is a very modest benchmark that gives me a feel for the language's speed. It includes a bunch of operations typical of the kind of code that I write (string manipulations, collection manipulations, etc.). The code is not optimized to any degree and might actually run faster (or slower) if I were to use a more idiomatic style. I also use a number of Java classes. I think this is needed because this is also representative of what actual code will look like.
So what is the result? Turns out that in this benchmark JavaFX is only 2 times slower than Java (511 milliseconds in one test compared to 272 using Java). If you compare that to other JVM languages that is actually quite good. Groovy for example runs this particular benchmark in about 2600 millis (about 5 times slower than JavaFX). Of course JavaFX is a statically typed language so I was hoping I would get that kind of performance. I suspect that this will improve with subsequent versions. Scala for example runs the benchmark in about the same time as Java.
Of course speed is not the only criterion for language selection. Groovy for example, being a dynamic language, has a lot of characteristics that make it really appealing. However, in my case, for the kind of applications I work on, I cannot run 10 times slower than Java. If I can write only a small portion of an application using a language then this language becomes much less interesting. With the kind of speed that JavaFX is giving me I know I can write most of my code using it. That makes it really appealing. Now if JavaFX graphics rendering can get fast enough, then with its already compelling UI capability it becomes a clear winner in my book.
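The original benchmark code is not shown here; as a purely illustrative sketch (not the actual benchmark), the kind of string and collection work it times might look like this in Java:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the kind of work the benchmark does: string building
// and collection manipulation in a tight loop, timed in milliseconds.
class MiniBench {
    static long run(int iterations) {
        long start = System.nanoTime();
        List<String> items = new ArrayList<>();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < iterations; i++) {
            sb.setLength(0);                      // reuse the builder
            sb.append("item-").append(i % 100);   // string manipulation
            items.add(sb.toString());             // collection manipulation
            if (items.size() > 1000) items.clear();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("elapsed ms: " + run(1_000_000));
    }
}
```

The same loop, ported line for line to JavaFX Script or Groovy, is the kind of comparison the numbers above come from.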

Sunday, November 1, 2009

In a previous blog I mentioned using DbC as a technique on a project. I forgot to mention one element that is very important in DbC and that's invariants. Since I did not use any fancy tools to work with DbC I had to find a way to define invariants. I chose to simply define a private method called invariants and added a call to this method, in a conditional compile block, at the beginning and end of every public method. Like I said in my previous discussion this simple way of using DbC adds a bit of clutter to the code but this is a small cost to pay for the benefits of DbC.
Invariants are assertions about a class that are always true. For a linked list for example:

first != null || size == 0

Combined with preconditions and postconditions the impact on code correctness is just amazing. Thinking about the assertions just gets you there faster.
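The wiring is not shown in the post; here is a hypothetical Java analogue of the same idea, where assert plays the role of the conditional compile block (the checks vanish unless the JVM runs with -ea), and the linked-list invariant above is the one being checked:

```java
import java.util.NoSuchElementException;

// Hypothetical singly linked list showing a private invariants() method
// called at the start and end of every public method.
class SimpleList {
    private static class Node { int value; Node next; Node(int v) { value = v; } }
    private Node first;
    private int size;

    // The class invariant from the post: first != null || size == 0
    private boolean invariants() {
        return first != null || size == 0;
    }

    public void addFirst(int value) {
        assert invariants();            // no-op unless run with -ea
        Node n = new Node(value);
        n.next = first;
        first = n;
        size++;
        assert invariants();
    }

    public int removeFirst() {
        assert invariants();
        if (first == null) throw new NoSuchElementException();
        int v = first.value;
        first = first.next;
        size--;
        assert invariants();
        return v;
    }

    public int size() { return size; }
}
```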
On my next project I just might use Microsoft's Contract tool. The only annoying thing is that I probably won't be able to use it with Mono.

Sunday, October 18, 2009

Finally a decent Groovy and Scala IDE

Yesterday I installed IntelliJ Community Edition on my Linux system at home. I have not completed my evaluation but I must say that what I have seen so far looks good.
Groovy support is available "out of the box" while Scala support can be added by downloading a plug-in. I have been reading about both languages and I have even started writing little scripts at work in Groovy (utilities to extract data from text files and such). However, I must say that the lack of a good IDE was quite a hindrance. As a Java programmer I expect a lot from my IDE, being used to programs like Eclipse and NetBeans. Some of the productivity gains from those new languages are not so impressive when one is used to working with a good IDE. Take for example the Groovy (or Scala) "def" keyword. This is often presented as one of the advantages of Groovy over Java. You can replace:

StringBuilder myString = new StringBuilder();

with:

def myString = new StringBuilder()

The problem is that in Eclipse for example when I enter the first expression I type the following:

myString = new StringBuilder();

Then I just press +F1 and select "Create local variable" and the IDE adds the missing type at the beginning of the line.
Another example is the Groovy @Delegate annotation that generates delegate methods for a given member. Again in Eclipse I just right click the member and select "Generate delegate methods" from the Source menu.
Of course, in this category both Groovy and Scala offer much more than what an IDE like Eclipse can offer. However, those gains have to be weighed against the loss of other IDE functionality. As a Java programmer and Eclipse user I expect a good browser for my language. A browser is an essential part of a good OOP environment. This is so true that Smalltalk development kits have always included a browser. It is somewhat painful to apply good OOP principles if you don't have a browser (good OO programming tends to result in more numerous small classes).
The other must of course is code completion. The large number of core classes and APIs makes this absolutely essential.
I think a good open source environment like IntelliJ will contribute to the adoption of both Groovy and Scala.

Sunday, October 4, 2009

Design by Contract and Containers classes

I had to write a special ordered linked list class for a project at work. Since I have seen a lot of examples of Design by Contract applied to this type of code I decided to give it a try. For the DbC preconditions and postconditions I did not use anything fancy; I just wrote a class with static methods that all look like this:

public static void precondition (string description, bool assertion)
{
    if (!assertion)
        throw new AssertionException (
            "Precondition error: " + description);
}

When not running in debug mode the method becomes empty, and if you decide not to put the calls in a conditional compilation block the overhead is very small. In my case, because the ordered linked list is used in a very performance critical part of the program, all uses of the preconditions and postconditions are in a #if DEBUG/#endif block. This clutters the code but considering the gains it is not so bad. (Microsoft has something available that is somewhat cleaner but when I last checked you needed the Team Edition of VS to use it.)
Anyway, I found that for something like a container DbC is really great. Along with DbC I also wrote a good suite of unit tests for my class. It turns out the DbC checks detected a few errors in my code that would have gone undetected with my initial batch of tests. The DbC failures gave me a really good hint about the kind of tests I had to add, so in the end, with the DbC checks and the updated test suite, I was really confident about my new class (I eventually got 80% coverage with my tests and I plan to write the one or two missing tests I need to get to 100%). I felt that coming up with and writing the preconditions, postconditions and invariants really helped me quickly get to a fully working solution.
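My class was in C#; for illustration, a hypothetical Java analogue of the same helper and its use in an ordered insert (all names here are made up, not the project's real code) might look like:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Java analogue of the C# static-assertion helper.
final class Contract {
    static void precondition(String description, boolean assertion) {
        if (!assertion)
            throw new IllegalStateException("Precondition error: " + description);
    }
    static void postcondition(String description, boolean assertion) {
        if (!assertion)
            throw new IllegalStateException("Postcondition error: " + description);
    }
}

// Toy ordered container showing where the checks go.
class OrderedInts {
    private final List<Integer> items = new ArrayList<>();

    void insert(int value) {
        int oldSize = items.size();
        int i = 0;
        while (i < items.size() && items.get(i) < value) i++;
        items.add(i, value);
        Contract.postcondition("size grew by one", items.size() == oldSize + 1);
        Contract.postcondition("list stays sorted", isSorted());
    }

    int smallest() {
        Contract.precondition("list must not be empty", !items.isEmpty());
        return items.get(0);
    }

    private boolean isSorted() {
        for (int i = 1; i < items.size(); i++)
            if (items.get(i - 1) > items.get(i)) return false;
        return true;
    }

    List<Integer> items() { return items; }
}
```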
I will not hesitate to use this again despite the clutter for any class I feel will benefit from DbC.

Sunday, September 13, 2009

Unit tests and the Layers Pattern (part 4)

Last time we looked at the layers in the OPC UA module in a little more detail. Now let's look at how the tests were structured. I was not pleased with how the diagram represented the organisation of the tests so here is a modified version:

Interface (Java)   : (Unit tests)  (Unit tests)
Logical (Java)     : Unit tests  |            |
Low-level access   :             |            |
  Java             :             |            |
  JNI (ANSI C++)   :             v            |
  C++/CLI          :                          |
  C#               : Unit tests               v

The unit tests are divided into the following categories:

Horizontal (single layer)

Test of classes in the logical layer

Most are classical JUnit tests. Mostly, they test class methods in an isolated manner. I do have "behavior driven" tests here that use a mock implementation of the lower layers. Because of the layered approach, the mock implementation is quite simple. It uses a Map in the background with backdoor methods to set up parameter values. The methods that fake UA method calls don't do anything except change predefined parameter values.
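The mock itself is not shown in the post; a minimal sketch of that Map-backed idea might look like this (the interface name, method names and parameter names are all illustrative, not the real API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical slice of the lower-layer interface the logical layer uses.
interface ILowLevelAccess {
    Object read(String parameter);
    void write(String parameter, Object value);
    void callMethod(String methodName);
}

// Map-backed mock with backdoor methods to set up parameter values.
class MockLowLevelAccess implements ILowLevelAccess {
    private final Map<String, Object> values = new HashMap<>();

    @Override public Object read(String parameter) { return values.get(parameter); }
    @Override public void write(String parameter, Object value) { values.put(parameter, value); }

    // Faked UA method calls just change predefined parameter values.
    @Override public void callMethod(String methodName) {
        if ("start".equals(methodName)) values.put("state", "RUNNING");
    }

    // Backdoor for tests: set up values without going through the real layer.
    void backdoorSet(String parameter, Object value) { values.put(parameter, value); }
}
```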

Test of classes in the low-level layer

Same thing here, except that I use NUnit since this is written in C#. The difference is that I don't have lower levels in my own code; the next layer down is external, namely the OPC UA framework. I was able to expand my tests here using generics and conditional compilation. I had to do this because the OPC UA framework does not use a lot of interfaces or abstractions, so I ended up having to work very hard to test some parts of the code. A lot of the tests here are for the special Queue used for subscriptions.

Vertical (multiple layers)

Tests of the JNI interface

Here I use a separate DLL that does not use the C++/CLI layer. This allows me to test the JNI part of the code in isolation, so that if I have a bug I know the problem is in the pure native C++ layer. I could have used horizontal tests here but they would have been very limited since the code is mostly JNI mechanics.

Tests of all upper layers

These tests go from the logical layer down to the low-level layer. The low-level layer however is mocked, so this group of tests is mainly a test of the C++/CLI mechanics. Of course there is a small amount of redundant testing of the JNI code here. This is unavoidable. However, because the JNI code is tested in isolation elsewhere this is not a problem. I know that if I have a bug here there is a high probability that it is in the C++/CLI mechanics.


Structuring the code in layers makes it easier to test more of the code. Having different groups of tests makes it quicker to find the source of a bug. You avoid much debugging using this modular approach. You can also test more things as part of the build because you can use mock implementations of key components and avoid having to run an actual OPC UA server on the build machine.
The code is tested with an actual OPC UA server as part of manual tests. These are JUnit tests that I run manually on my development machine and that use all the real layers. Finally, system and integration tests close the loop.

Saturday, September 5, 2009

Unit tests and the Layers Pattern (part 3)

Last time we looked at the layers for my OPC UA client project without going into too much detail. For convenience I have repeated the diagram below.

Interface (Java)   : (Unit tests)
Logical (Java)     : Unit tests  |
Low-level access   :             |
  Java             :             |
  JNI (ANSI C++)   :             |
  C++/CLI          :             |
  C#               : Unit tests  v

Let's describe the layers in a little more detail:


Interface

As described in part 1 this is where you define the public API for the module. In my Java code this is made up mostly of Java interfaces. In C++ I would use pure abstract classes. The interface also defines things like enums and constants that are part of the API. In my project this is in a separate group of packages (namespaces) and one could go as far as putting it in a totally separate project. Putting the interface in a separate project makes the separation between the interface and the rest of the code even more explicit, and this helps avoid some types of errors where the interface is contaminated with implementation elements from other layers. In my case I kept all the Java code in the same project and it went fairly well.


Logical

The logical layer is the part that uses the low-level access layer to implement the actual business logic. Things like:

if (parameterX.value == aSpecificValue)
    // Do something
else
    // Do something else, e.g. call UA MethodY()

In this layer I actually have a state machine that switches states and takes different actions based on parameter values. The logical layer uses other sublayers (configuration persistence, ...) but we won't go into those details here because it would make things too complicated. This layer contains a good number of unit tests (horizontal).

Low-level access

This layer defines an interface of its own. In my case this interface is not visible from outside the module. It defines methods to:

  • read one or more parameters

  • write one or more parameters

  • call UA methods

  • subscribe for update notification for one or more parameters

  • fetch data updated through the subscription mechanism

The only code in this layer is the code necessary to use the UA framework to perform the tasks listed above. In fact the only part that contains more complicated logic is the part that manages the subscriptions, which is mainly a kind of smart queue mechanism. This code accounts for most of the unit tests located directly in the layer (horizontal).
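As a sketch only (the real interface is not shown in the post, and these names and signatures are invented for illustration), the five responsibilities above could be rendered in Java roughly as follows, with a trivial in-memory implementation to show the shape of the contract:

```java
import java.util.*;

// Hypothetical rendering of the internal low-level access interface.
interface LowLevelAccess {
    Map<String, Object> read(List<String> parameterIds);       // read one or more parameters
    void write(Map<String, Object> values);                    // write one or more parameters
    Object callMethod(String methodId, Object... arguments);   // call UA methods
    void subscribe(List<String> parameterIds);                 // subscribe for update notification
    Map<String, Object> fetchUpdates();                        // fetch data updated via subscription
}

// Trivial in-memory implementation, the kind a mock would be built on.
class InMemoryAccess implements LowLevelAccess {
    private final Map<String, Object> store = new HashMap<>();
    private final Set<String> subscribed = new HashSet<>();

    public Map<String, Object> read(List<String> ids) {
        Map<String, Object> out = new HashMap<>();
        for (String id : ids) out.put(id, store.get(id));
        return out;
    }
    public void write(Map<String, Object> values) { store.putAll(values); }
    public Object callMethod(String methodId, Object... arguments) { return null; }
    public void subscribe(List<String> ids) { subscribed.addAll(ids); }
    public Map<String, Object> fetchUpdates() { return read(new ArrayList<>(subscribed)); }
}
```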

Parting comments

I want to emphasize that, except for the sublayers in the low-level access layer, the layers have nothing to do with the use of different languages. The same layers would have been present in an all-Java module. In other words, if a Java OPC UA framework had been available in a sufficiently advanced state for my project, the layers would have been the same.
Next time we will keep exploring the layers and how the unit tests were structured.

Sunday, August 30, 2009

Unit tests and the Layers Pattern (part 2)

I have used the Layers pattern on my last project. This was an OPC UA client for a data acquisition module in an Industrial Continuous Data Acquisition suite of software. This suite of applications was already using a proprietary plug-in framework to allow integration of different analyzers. The idea was to take the generic approach one step further and use a standard technology (OPC UA) for interfacing with analyzers.
On this project OPC UA can be seen as providing the low-level data access module. Using it the program could read/write parameters synchronously or use a subscription mechanism to get notifications when parameters were updated. Starting the project I also faced the problem of not having much documentation available and not much in terms of sample code. It turned out that the only source of significant code examples was the C# framework. For a Java application this is a problem. While I started the project thinking that I would use an ANSI C library and JNI, I ended up having to use JNI, C++/CLI and a C# framework. In the end I had something like the following layers for my new module:

Interface (Java)   : (Unit tests)
Logical (Java)     : Unit tests  |
Low-level access   :             |
  Java             :             |
  JNI (ANSI C++)   :             |
  C++/CLI          :             |
  C#               : Unit tests  v

You can see that the low-level layer contains four sub-layers, one for each technology involved. Each of these sub-layers is fairly simple except for the C# sub-layer, because of the need to support the subscription mechanism. The diagram above also shows how the unit tests are distributed. Some tests are restricted to a layer (horizontal) and some tests span all layers (vertical).
Next time we will look more closely at each layer and at how the unit tests are structured in more detail.

Saturday, August 15, 2009

Unit tests and the Layers Pattern (part 1)

In previous postings I talked about techniques that I use when adding unit tests to legacy code. The two techniques were Introduce middle man and Extract static method. I admit that those techniques may not sound very impressive at first. However, they did help me to add unit tests to legacy code on a number of occasions where it would have been difficult otherwise. Adding tests to code late in the development (or in a later version of the software) is quite a challenge. Ideally you want to add unit tests early in the development. In fact you want to design your code for testing.
I should point out that on my blog when I talk about unit tests I mean automated unit tests that can run as part of the build process on any machine. When I talk about other types of unit tests I will use an expression like manual unit tests or some other more precise expression depending on the exact type of test.
Today I will talk about one of the most powerful design techniques I have found for writing code to be unit tested. The technique is organized around an architectural pattern called Layers.
Anyone familiar with networking theory knows about the OSI layers. The OSI model is one good example of this pattern even though sometimes the actual layers implemented in the real world do not exactly match the theory. If you are not familiar with the OSI model it's not a problem because the actual example that I am going to present is not very complicated and should allow you to understand the pattern. The Layers pattern is presented in the excellent book A System of Patterns by Buschmann, Meunier, Rohnert, Sommerlad and Stal:
The Layers pattern helps to structure applications that can be decomposed into groups of subtasks in which each group of subtasks is at a particular level of abstraction.

The pattern is particularly useful when building modules that access hardware (device drivers) or that communicate with a server or some other external entity. In the case of a device driver for example you could split the module into three layers (I show the abbreviations that I will use later for those layers in parentheses):

Interface (IL)
Logical (LL)
Low-level access (LLAL)

The responsibilities are divided as:

  1. Interface Layer (IL): defines the methods and classes used to interact with the module from a client application.

  2. Logical Layer (LL): implements the functions defined in the interface layer using the data supplied by the low-level layer. Also uses the low-level layer to request additional information when needed. It is the layer where most of the complicated business logic is located.

  3. Low-level access Layer (LLAL): handles the low-level details. This can be communicating with the hardware, communication with the server or other low-level operations.
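A minimal sketch of the three layers for a device driver, assuming an invented temperature-sensor example (none of these names come from a real project), might look like:

```java
// Interface Layer (IL): what client applications see.
interface DeviceDriver {
    double readTemperature();
}

// Low-level access Layer (LLAL): hides the hardware details.
interface RawDevice {
    byte[] readRegister(int address);
}

// Logical Layer (LL): implements the IL using the LLAL.
class DefaultDeviceDriver implements DeviceDriver {
    private final RawDevice device;

    DefaultDeviceDriver(RawDevice device) { this.device = device; }

    public double readTemperature() {
        byte[] raw = device.readRegister(0x10);
        // Business logic lives here: decode the raw register value
        // (big-endian 16-bit value in tenths of a degree).
        return (((raw[0] & 0xFF) << 8) | (raw[1] & 0xFF)) / 10.0;
    }
}
```

Because the LL depends only on the small RawDevice interface, a test can substitute a fake device and never touch real hardware, which is exactly the benefit discussed below.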

There are other benefits to using the layers Pattern but here we will focus on how it can benefit unit tests.

  • Often the problem with unit testing client/server code or code that interfaces with hardware is that it is difficult to have the setup on every machine that needs to run the automated unit tests. Layers can help mitigate this by separating the part that interfaces with the hardware or server from the rest of the code.

  • Also, it is sometimes difficult to test specific scenarios when using the actual hardware or when interacting with a server.

In both cases, the Layers pattern can help by making the building and use of mock implementation easier. Having a simple well defined low-level layer is a key ingredient here.
Before we dive into an example let's consider some of the pitfalls that you might encounter when using Layers:

Communication overhead between layers

You have to watch out for this and if needed make compromises in order to get acceptable communication costs. You may need to transfer some responsibilities between layers if things degenerate.

Coupling between layers

You want to avoid static circular dependencies. In a clean design the dependencies will look something like:

|IL| <-- |LL| --> |LLAL|

The dependencies are from LL to the other layers. Again, you may need to compromise and/or add packages (namespaces) to break circular dependencies. Sometimes simply adding an interface to a layer will do the trick (inverting the dependency). In practice you will often end up with something like:

|package-1|   |package-2|
     ^             ^
     |             |
|IL| <-- |LL| --> |LLAL|

The extra dependencies here are all towards package-1 or package-2. Of course you can have additional packages (or namespaces) inside each layer.

Next time we will look at a nice example.

Thursday, July 23, 2009

Real programmers write unit tests (part 3)

OK, you know the drill: you have to modify a feature of Program X and to do this you have to modify a class – let's call it UglyThing – and this class is a mess and it has no unit tests. The class has efferent coupling through the roof and large methods that are really difficult to understand and work with. It feels like you are going to have to build the whole application by hand just to set things up for unit testing this class. What is a pragmatic programmer to do?
In my situation at work I don't have any choice because the development process requires me to have unit tests for new or modified code. In a more permissive environment I might be tempted to skip the unit tests. However, if the code really is that bad, skipping unit tests is really not a good idea.
Of course the first thing to do is to analyze the code carefully. In the context of a commercial application you really have to be cautious about any modifications you make to the code. Yet, sometimes a few well chosen simple and safe modifications can do the trick. Other times, techniques like Introduce middle man may be required. In more extreme cases you may have to turn to other techniques.

Extract static method

In OO programming adding static methods is normally something that you want to minimize. However, if you have reviewed the code and have been unable to find more conventional ways to modify the class to add unit tests, in the context of your current project compromises are in order. This may be a case where using static methods can help. In practice I see different variations on this theme:
Sometimes a class has one central method that does all the work and that could have been static from the start. In Java I extract the body of this large method to a static method with package visibility. I use the minimum visibility required to write unit tests for it in another class in the same namespace (Java package). Sometimes one has to add a number of parameters to make this work, often more parameters than the maximum that I would normally recommend. However, since this method is not part of the public API of the class I think this is an acceptable compromise. The last step is to replace the original code with a call to the static method. Now you can write a bunch of unit tests (possibly using parametric tests) for the static version. It is then safe to perform actual functional modifications to the code for your new feature.
In some other cases it is not possible to extract whole methods and one has to limit the extracted code to portions of the original code. Even this will often be sufficient to add an adequate level of unit tests and proceed safely with the functional modifications.
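A minimal before/after sketch of Extract static method, using an invented UglyThing with made-up fields (the real class is obviously far messier), could look like this:

```java
// Hypothetical illustration of Extract static method.
class UglyThing {
    private int threshold;
    private String prefix;

    UglyThing(int threshold, String prefix) {
        this.threshold = threshold;
        this.prefix = prefix;
    }

    // After the refactoring, the original method just forwards
    // its instance state to the extracted static method.
    public String label(int value) {
        return buildLabel(prefix, threshold, value);
    }

    // Package visibility: just enough for a test class in the same package.
    // It takes more parameters than usual, an acceptable compromise since
    // it is not part of the public API.
    static String buildLabel(String prefix, int threshold, int value) {
        return value > threshold ? prefix + ":HIGH:" + value
                                 : prefix + ":LOW:" + value;
    }
}
```

The static version can now be exercised directly with a table of inputs, without constructing the heavyweight object graph the class normally needs.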

Parting comments

Next time I will talk about the Layers architectural pattern. A layered design is a great way to set the stage for unit tests in some situation where unit tests are sometimes difficult to implement.

Wednesday, July 8, 2009

Real programmers write unit tests (part 2)

In this post I will discuss some of the techniques that I use to work with external libraries and legacy code.
At this point I'm still experimenting with this. However, all of the techniques that I will discuss here have been very useful in several cases. All of them require making some compromises about the simplicity or sometimes what I would call the OO purity of the code. However, in all the situations where I have used these techniques the gains were worth the sacrifices.
But first we need to talk a little bit more about the setup for the tests.

Test setup

Since writing unit tests adds a good number of files to the source directories we decided to put our test code under a different base directory than the one used for the tested code. Because of Java's visibility rules we use the same package (namespace) for a test class as for the class that it tests. Thus, our test code directory structure mirrors that of the tested code. We put the test code under Test and the tested code under Src. This makes it easier to process the files separately as part of our daily Ant build (run unit tests, metrics and other Ant tasks).
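For illustration, assuming a hypothetical package com.acme.acquisition, the mirrored layout looks like:

```
Src/com/acme/acquisition/SampleQueue.java
Test/com/acme/acquisition/SampleQueueTest.java
```

Both files declare the same package, so the test class sees the tested class's package-visible members while living in a separate directory tree.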

Unit test best practices

I will not cover all the unit test best practices here. If you want more information about that just read JUnit best practices. Since several unit test tools (NUnit, DUnit) are ports of JUnit those best practices apply to any platform or language with few modifications. C# programmers might want to read NUnit best practices. Make sure you read the comments.

The recipes

I will use names inspired by the names used for refactorings in Martin Fowler's Refactoring for my recipes.

Introduce middle man

Normally if a class doesn't do much you want to remove it. For example, if a class just delegates calls to a member. The refactoring for this is called Remove middle man.
For unit testing involving legacy or third party libraries it can sometimes be useful to introduce such a class. I call this Introduce middle man.
For example, in my last project I decided to create an ErrorHandler class. I needed to implement error handling logic that was a little more elaborate than usual. One thing this class had to do was interact with an OPC UA Session object. Now building an OPC UA Session object is not easy. It requires a bunch of other code, and because the Session class does not implement any simple abstract interface that I could use, I was in a difficult position to write unit tests for my ErrorHandler class. Fortunately, my code only called one Session class method (Reconnect), so in order to write good unit tests I decided to introduce a middle man. I created an interface ISessionConnector with one method: Reconnect. Then I created a class that just delegated the call in the Src branch and an implementation in the Test code branch that allowed me to simulate different scenarios (reconnection failure or success). I was able to get 100% test coverage of my new class. Another advantage of this approach is that I don't actually have to get an OPC UA server running in order to run this test.
Good judgment is essential here. You should watch for middle men that get too complicated; you might end up not testing the right code. If the middle man remains simple this works well. In my case, middle men have often allowed me to test classes that would have been difficult to test otherwise in legacy code or code that used third party libraries.
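The actual project was C#; here is a hypothetical Java sketch of the same shape, with Session standing in for the hard-to-build third-party class and the other names modeled on the example above:

```java
// The small interface the middle man implements.
interface ISessionConnector {
    void reconnect();
}

// Stand-in for the third-party class that is hard to construct in tests.
class Session {
    void reconnect() { /* talks to the real server */ }
}

// Production middle man: just delegates to the real Session.
class SessionConnector implements ISessionConnector {
    private final Session session;
    SessionConnector(Session session) { this.session = session; }
    public void reconnect() { session.reconnect(); }
}

// The class under test depends only on the small interface,
// so a test double can simulate reconnection failure or success.
class ErrorHandler {
    private final ISessionConnector connector;
    private int attempts;

    ErrorHandler(ISessionConnector connector) { this.connector = connector; }

    boolean handleConnectionError() {
        attempts++;
        try {
            connector.reconnect();
            return true;            // reconnection succeeded
        } catch (RuntimeException e) {
            return false;           // reconnection failed
        }
    }

    int attempts() { return attempts; }
}
```

In a test, a one-line lambda implementing ISessionConnector plays the role of the Test-branch implementation, throwing to simulate a failed reconnection.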

Parting comments

Some people might wonder why I don't use mock libraries like JMock or others. The reason is simple: none of the mock libraries work the way I want. I plan to use mock libraries in the future but probably not exclusively.
Next time I will talk about another unit testing trick to handle legacy code: Extract static method.

Monday, July 6, 2009

Real programmers write unit tests (part 1)

Among the good practices that we have put in place at work, the one that has been consistently rated the most beneficial by the programmers using it is writing unit tests. It has a beneficial impact on the quality of the code, the number of bugs detected in the system tests and the schedule. Unit tests make development more deterministic and less stressful.

The tools

We write most of our code in Java so we use JUnit. We use version 4.4 with the Hamcrest matchers. We also use a few extensions:

  • Fest for testing GUI

  • DbUnit for testing database related code.

We do not yet use mock libraries. Because we write a lot of mathematical algorithms, the JUnit 4 parametric tests are a great tool for us. It is very productive.
We use Ant to run the unit tests as part of the daily build. The build is interrupted if there is a test failure. No release image is generated if a unit test fails.
Recently I did some development in C#. I used NUnit for unit tests on that project.
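The parametric-test idea, sketched in plain Java rather than with the JUnit runner itself, is simply one test body run over a table of input/expected pairs (the square-root example is invented for illustration):

```java
// Plain-Java sketch of a parametric test: one assertion body,
// a table of cases. The JUnit 4 Parameterized runner does this
// wiring for you, feeding each row to the same test method.
class SquareRootTable {
    static double[][] cases = {
        // input, expected
        {4.0, 2.0},
        {9.0, 3.0},
        {2.25, 1.5},
    };

    static int runAll() {
        int failures = 0;
        for (double[] c : cases) {
            double actual = Math.sqrt(c[0]);
            if (Math.abs(actual - c[1]) > 1e-9) failures++;
        }
        return failures;
    }
}
```

For numeric algorithms, adding a case is one line in the table, which is what makes this style so productive for us.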

The process

We do not yet use TDD but we do test early. Also, for me, the emphasis has shifted recently from testing methods to testing scenarios. I will probably keep using a mix of method oriented and scenario oriented tests because I find both types to be useful: method oriented tests for simpler, more technical classes and scenario oriented tests for the more complex business related classes. The details of the process depend on the type of development involved: maintenance of legacy applications and development of new applications and modules use slightly different approaches.
Development of new applications and modules gets the most benefit from unit tests. It is easier to apply good design and coding practices with new code, and those facilitate unit testing. Design principles and coding practices that are beneficial to unit tests include:

  • Small highly cohesive classes: these are easier to unit test.

  • Defining interfaces and coding to them: this makes unit testing easier because it facilitates writing mock classes and stubs. I mentioned earlier that we do not yet use mock libraries. However, we do write mock classes. Having well defined interfaces makes this process easier.

  • Loose coupling: keeping an eye on coupling is really important. I use a metrics tool for that and I refactor early if needed. High efferent coupling can make unit tests very difficult.

Even on new projects you will sometimes have to work with code that was not developed with unit tests in mind. For example, third party libraries and classes can be a source of complications. In my next post I will talk about writing unit tests in more detail. I will show some techniques that you can use when working with unit-test-unfriendly code and legacy applications.

Monday, June 22, 2009

Avoid having your Scala code turn into APL

With the rise of languages that support operator overloading, like Ruby, Groovy, Scala and C#, one is justified in wondering whether libraries will become loaded with unreadable APL-like code.
I have already seen signs of the potential chaos that could come from the abuse of operator overloading. In the Scala actor library tutorial, for example, I saw the following:

producer ! Next

I was initially unable to guess the meaning of this. (It simply sends the Next message to the producer actor.)

The following example is from a Ruby library:

(aobject / 'a string')

In this case, because I watched the full presentation, I know that the slash is actually an alias for a search method.
The speaker in the presentation was calling this cool. I call it stupid. It is the archetypal example of a bad use of operator overloading. The presenter himself said he was puzzled and could not understand what the code was doing at first. Definitely not cool.
Since I'm looking at switching to Scala as a main language I thought I needed to think about what kind of rules I would put into our code convention document under the section Operator Overloading. I thought I would share this with others to get input and hopefully constructive comments.

For me, the best applications of operator overloading make the code easier to understand. Here are some examples:

Mathematical operations (+, -, *, /)

This is the best application for operator overloading and as long as you don't start doing stupid things, like using operators in a way that conflicts with established conventions, you should be OK.

Logical operations (&&, |, ||, ...)

These should be used on boolean values. One acceptable extension of this is to use them in cases where we have an implicit conversion to boolean. They don't really look like their textbook equivalents but they have been in use long enough in enough languages to be used safely.

Comparisons (>, <, ...)

Use those on any set of ordered elements. The meaning should be obvious. For example:

myWeight > aWeight

Beware of things like:

myDog > otherDog

where we don't exactly know how things are being compared.

Operations that are metaphorically related to a mathematical or logical equivalent

Using + to append one string to another, for example.

Operators used as part of method names

Of course this is not operator overloading but since it is another use of operators that can lead to abuse and unreadable code I include it here. In those languages that allow it, using ? as a suffix for queries for example is OK. The meaning is clear and makes the query stand out. In this category I think that % and $ could also be used if allowed by the language. The only other case that I can think of is the exclamation point as a warning that a method call might have side effects. This last one is not as obvious and is at the limit of what is acceptable for me.

Operators that are already used in the core libraries of a language

If those operators have been around long enough and it is too late to remove them from the standard library then we have no choice but to use them.

All other uses of operator overloading are suspicious. The worst offenders of course are operators used as meaningless abbreviations for method names.

In some cases you will have to watch for compiler quirks and language peculiarities. With C# for example, when you define the ++ operator on a class it has the same semantics when used as a prefix (pre increment) or as a suffix (post increment). In both cases it works like a pre increment operation. This is a bug factory. In this case I think the compiler should give an error when ++ is used as a post increment operation because it will not have the expected result. You get the same thing with the -- operator.