
Friday, January 5, 2018

Fooling around with C++17 (part 1)

New Year's resolution

In 2018 I will be back writing posts here and I will try to be more consistent. I will eventually go back to the subject of 'dynamic like' Java programming, but just for fun I will start the year by looking at C++17 (including C++14 and C++11). C++ is starting to look like a 21st-century programming language, so I think it is well worth looking at a few features that are making programming genuinely more productive. WARNING: this is not a C++ style guide.

auto this, auto that

C++14 and 17 extend the power of auto and really make it more productive. C++11 added the possibility of using auto for the return type of functions, but you still had to provide the actual type using the new arrow (trailing return type) syntax. C++14 completely removed the need to explicitly provide the type. Here is an example with both syntaxes:
class LazyCoder
    {
    public:
        auto get_it_Cv11 () -> int
            {
            return 11;
            }

        auto get_it_Cv17 ()
            {
            return 17;
            }
    };
I don't think this is the best use of auto, since the type in the function declaration is also good documentation (better than comments). In C++11 you could already use auto for local variable declarations and loop variables, and C++14 added auto for lambda parameters. With C++17 you can also use structured bindings (similar to 'destructuring' in other languages), and you can declare an auto variable right inside an 'if' condition:
    // Create a tuple
    auto tuple = std::make_tuple(3, 7);

    // C++14
    int i, j;
    std::tie(i,j) = tuple;

    assert (i == 3 && j == 7);

    // C++17 (look ma, no explicit variable declaration)
    auto [x, y] = tuple;

    assert (x == 3 && y == 7);

    // and then ...
    if (auto t = (x != y))
        {
        // I can use t here
        cout << "This is getting ridiculous like t=" << t << endl;
        }
C++17 has one more trick up its sleeve: structured bindings also work with user-defined data types.
    // I have this struct defined somewhere
    struct Zmas
        {
        int z, w;
        };

    // Now I can do this:
    Zmas zmas {7, 11}; // Uniform initialization with curly brackets (next post)
    
    // The number of names between the square brackets here must match the number of
    // public data members of Zmas
    auto [z, w] = zmas;

    assert (z == 7 && w == 11);
Of course, the C++11 addition of auto was already a major improvement when working with complicated types, and the latest extensions are more icing on the cake. For more details about auto, check out C++17 Structured Bindings

Tuesday, August 19, 2014

Separating the user interface from the rest of the code

Before I go back to my exploration of alternate programming languages, a short one on design (or is it management?). Every programmer knows that when you write code that might eventually get used in a GUI application, a basic good practice is to always clearly separate the GUI code from the code implementing the actual functions. No big news here. There are a lot of tricks and design principles to help towards the goal of keeping things separate, and of course any decent programmer should know about design patterns like MVC. Unfortunately, good design principles can be ignored or neglected if someone else is writing the actual code, and additional control mechanisms are needed (even you might benefit from additional safeguards). Code reviews can help. Automated unit tests will also often help by encouraging developers to encapsulate and write more focused classes. Today's design or management trick is:
- Ask the programmer to write the functions in a separate module (module A)
- Ask him to provide a command line utility (module B) to call the functions in module A
Of course this does not provide absolute protection against bad programming, but it adds another level of control. This is similar to what automated unit tests provide, but not exactly the same, since:
- It is fairly easy to check that module B uses A but not the reverse (no circular dependencies). You might not have the same level of isolation between the unit test code and the tested code.
- It is more focused on separating the two parts of the code: the part that implements the user interface (a command line for this part of the project) from the part that implements the functions.
- The end product will often be a useful deliverable (this is the best application of this trick)
Of course writing a command line utility should not remove the requirements for automated unit tests.
If the result is not perfect, you or the other programmer doing the work will get another shot at cleaning up the API of module A when the time comes to use the module with the GUI. What do you think about this? Do you have any tricks that help you apply good design principles?
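To make the idea concrete, here is a minimal sketch of what the split could look like in Java. The class and method names (ReportGenerator, ReportCli, buildReport) are made up for illustration; the point is simply that module B is a thin command line wrapper that depends on module A, never the reverse.

// Module A: the actual functionality, with no user interface code at all.
public class ReportGenerator
{
    public String buildReport (String inputPath)
    {
        // ... the real work would happen here ...
        return "Report for " + inputPath;
    }
}

// Module B: a thin command line wrapper that only calls into module A.
public class ReportCli
{
    public static void main (String[] args)
    {
        if (args.length != 1)
        {
            System.err.println("usage: ReportCli <input-file>");
            System.exit(1);
        }
        ReportGenerator generator = new ReportGenerator();
        System.out.println(generator.buildReport(args[0]));
    }
}

Because ReportCli can only see what ReportGenerator exposes, the API of module A gets exercised from the outside early on, which is exactly the extra level of control described above.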

Sunday, June 2, 2013

The answer to the question at the end of the previous post

So, what is the answer to the question at the end of the previous post?
Is using an enum for the analyzer family a good way to customize the processing in the client application (CT)?
Well, the answer in the case of our project was no. The stable part of the application ecosystem was the CT and the analysis cycle steps in it. Adding an enum that we would then use to skip steps in the processing would have made it impossible for us to add new Analyzer classes without opening the code of the CT to support the new class (remember the OCP, or Open-Closed Principle).
What is an API designer to do in such a case? Well, what we did was define an interface for the steps in the analysis cycle of the CT and implement a kind of Decorator for it in all SM(I) instances. Of course, concrete instances of this interface are provided by the IInstrumentDefinition factory. The main interface for this is something like:

interface IAnalysisSteps
{
void doStep (String stepName);
}

The CT supplies a default implementation of this through a richer interface that extends it:

interface IAnalysisStepHandler extends IAnalysisSteps
{
void skipStep (String stepName);
}

The IInstrumentDefinition interface has a factory method that looks like:

IAnalysisSteps getAnalysisStep (IAnalysisStepHandler analysisStepHandler);

The SM(I) implements a version of IAnalysisSteps that will do one of two things when it gets called:

- Simply call the default implementation of the IAnalysisStepHandler
- Call the skipStep(String) version of the method in the IAnalysisStepHandler

Now SM(I) instances can customize the steps in the CT without actually having to open the code of the CT. What if new analysis steps are added to the CT? Well, in that case you don't have any choice but to open the CT, and to support that you simply implement the SM(I) IAnalysisSteps decorator so that an unknown analysis step is skipped. Since such a step was originally not present in the CT, skipping it should be a good default behavior for old SM(I) instances. I left out some details above, like having different parameters for different analysis steps. That, however, is easily handled using a single additional parameter of type Map<String, INamedParameter>. If you don't remember what an INamedParameter is, simply look at the previous blog entry.
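To make the decorator idea a little more tangible, here is a minimal sketch of what an SM(I) implementation could look like. The class name and the set of supported steps are hypothetical; the interfaces are the ones defined above.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical SM(I) decorator: known steps are delegated to the CT's default
// handler, anything this analyzer does not recognize is skipped.
public class ExampleAnalysisSteps implements IAnalysisSteps
{
    private final IAnalysisStepHandler defaultHandler;
    private final Set<String> supportedSteps =
        new HashSet<>(Arrays.asList("acquire", "baseline", "report")); // illustrative step names

    public ExampleAnalysisSteps (IAnalysisStepHandler defaultHandler)
    {
        this.defaultHandler = defaultHandler;
    }

    @Override
    public void doStep (String stepName)
    {
        if (supportedSteps.contains(stepName))
            defaultHandler.doStep(stepName);   // run the CT's default behavior
        else
            defaultHandler.skipStep(stepName); // unknown step: skip it
    }
}

An instance of this class is the kind of object the getAnalysisStep() factory method of the SM(I)'s IInstrumentDefinition would hand back to the CT.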

Wednesday, May 22, 2013

Good software design principles (part 5)

As you have seen, the two main interfaces of the SMF are:
- IInstrumentDefinition: an Abstract Factory that defines methods to get the different classes needed to work with analyzers.
- IInstrument: a Strategy that implements all the steps needed to acquire data from an analyzer. Concrete instances of this are returned by concrete implementations of IInstrumentDefinition dynamically loaded from a .jar file.
The only class that needs to be public (exported) in the SM(I) .jar is the implementation of IInstrumentDefinition.
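As a reminder of what that looks like in code, here is a rough sketch of the two interfaces. Only getInstrument() is mentioned explicitly in these posts; the other method names and signatures are made up for illustration.

import java.util.Map;

// Abstract Factory: the entry point the CA looks up in the SM(I) .jar.
interface IInstrumentDefinition
{
    // Factory method for the Strategy driving a specific analyzer.
    IInstrument getInstrument ();

    // "Static" information about the analyzer (hypothetical method name).
    Map<String, INamedParameter> getParameters ();
}

// Strategy: the steps needed to acquire data from an analyzer (hypothetical methods).
interface IInstrument
{
    void startAcquisition ();
    String getStatus ();
    Map<String, INamedParameter> readData ();
}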
Now, how do you customize the behaviour of the client application (CA) to work properly with a given SM(I)? After all, the steps in an analysis sequence of the CA might not all be appropriate for the data returned by a given SM(I).
Would having a method that defines an analyzer family be a good idea? Something like:
Family getFamily()
Where Family would be an enum. Something like:

public enum Family
{
FTIR,
PARTICLE_COUNTER,
MASS_SPECTROMETER
}

Is this a good idea? Think about this in light of the OCP.

Monday, May 13, 2013

Good software design principle part 4

Another thing that was a big success in the SMF was the use of Lists and Maps (Dictionary in .NET). If a class looks like a List, smells like a List and sounds like a List, then it probably is a List. This might sound trivial, but I have seen projects where a class with getters and setters was defined to handle a concept that obviously was a Map. People had to constantly update the interface when new properties were added and the whole thing really was a nightmare. Using data structures such as Lists and Maps makes programming in Java feel much more like dynamic programming. So the IInstrumentDefinition includes a number of methods that actually return unmodifiable Maps. The Maps are generic Map<String, INamedParameter>, where the INamedParameter interface is a little one-element heterogeneous container. The INamedParameter defines the name, class and value of a parameter or variable. A Map of those is a nice little package that can easily give access to the List<String> of INamedParameter names. Once you have the name, you can get the actual INamedParameter from the Map.
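Here is a minimal sketch of what such a one-element heterogeneous container could look like; the method names are hypothetical and only illustrate the name/class/value triplet described above.

// Hypothetical sketch of the little one-element heterogeneous container.
interface INamedParameter
{
    String getName ();    // the parameter name, also used as the Map key
    Class<?> getType ();  // the class of the value
    Object getValue ();   // the value itself, to be cast according to getType()
}

// Typical usage against one of the unmodifiable Maps returned by the
// IInstrumentDefinition (the accessor and parameter names are made up):
//
//     Map<String, INamedParameter> params = definition.getParameters();
//     INamedParameter resolution = params.get("resolution");
//     Integer value = (Integer) resolution.getValue();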

Saturday, May 11, 2013

Good software design principle part 3

In the previous post I gave a summary of the problem to be solved using the Successful Module. I explained that in fact the problem is solved using a Successful Module Framework (SMF) and Successful Module Instances, SM(I). The SMF is a library used by the client application (CA) and the SM(I) to interact with each other without building cumbersome circular references and dependencies. In Java all modules are .jar files. At compile time both the CA and the SM(I) know the SMF, but they don't know each other. Of course at runtime the CA needs to know how to load the SM(I), but that information is limited to the file location. Once it is loaded, the CA interacts with the SM(I) as if it consisted only of classes (more precisely interfaces) defined in the SMF. The CA loads the SM(I) .jar and looks for a class that implements an interface called IInstrumentDefinition, which marks it as the point of entry into the module. In fact the CA knows only interfaces (pure abstract classes in C++) defined in the SMF. This is one of the keys to the success of the whole project: no concrete classes in the API. To understand what the IInstrumentDefinition is, we need to enumerate some of the interactions between the CA and an analyzer. The CA needs to:
1- Define the value of parameters used by the analyzer (resolution, number of scans, etc...)
2- Find the list of values that a particular analyzer can supply (the data it can return, etc...)
3- Start an acquisition
4- Read the status of the analyzer
5- Read the data at the end of an acquisition cycle
So, does the IInstrumentDefinition need to supply operations (methods) related to all these interactions? No, not at all. In fact it turns out that many of the operations listed above are defined in a separate class called an IInstrument; operations 3, 4 and 5 in particular belong there. We found that in practice it was very useful to have a specific interface - the IInstrumentDefinition - to define the functions that provide the more "static" information about an analyzer, and that it made sense for this interface to be the entry point into the module. We also identified another very important role of the IInstrumentDefinition: Abstract Factory.
The instrument definition is a factory for a host of classes used when interacting with a specific analyzer. Of course the most important of the factory methods is the getInstrument() method that returns an IInstrument.
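For illustration, here is a rough sketch of how the CA could load an SM(I) .jar at runtime and obtain its IInstrumentDefinition with a plain URLClassLoader. The jar path and the entry-point class name would come from the application configuration; the example values in the comment are made up.

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class ModuleLoader
{
    // Loads the SM(I) .jar and returns its entry point. Typical configuration
    // values might look like "modules/ftir.jar" and "vendor.FtirDefinition".
    public static IInstrumentDefinition load (String jarPath, String className) throws Exception
    {
        URL jarUrl = new File(jarPath).toURI().toURL();
        URLClassLoader loader = new URLClassLoader(new URL[] { jarUrl },
                                                   ModuleLoader.class.getClassLoader());
        Class<?> entryPoint = Class.forName(className, true, loader);
        // The CA only ever sees the SMF interface, never the concrete class.
        return (IInstrumentDefinition) entryPoint.getDeclaredConstructor().newInstance();
    }
}

From there everything flows through the SMF interfaces: the CA calls getInstrument() on the definition and drives the analyzer through the returned IInstrument.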

Saturday, April 27, 2013

Good software design principles part 2

For the benefit of this blog we will call the module mentioned in the previous post the SM, for Successful Module. Remember that the objective for this module was to make our software capable of acquiring and analysing data from any third party instrument. When thinking about a problem like this, many of you might immediately think about the Open-Closed Principle. If you did not, and/or if you do not know what the OCP is, here is a definition:
software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification
Here is a link to the Wikipedia article: OCP
Trying to satisfy the OCP has a number of implications. One of them is that you should not have to modify the application code each time you want to support a new instrument. So of course the first thing that was decided was that the module should be loaded dynamically at runtime, based on information in the application configuration. This is not earth shattering or revolutionary, but it is a very useful concept. What we will be doing with the SM is building a plugin infrastructure for our application's instrument interface. Plugins are used in Eclipse, NetBeans and a whole menagerie of applications. In Java this is easy to do: you just pack your SM in a .jar. Now the tricky part is to define the API that this jar file should implement. Another, more immediate, task is to define how the whole thing will be structured in terms of modules. From the discussion we know that the starting point is this:

[Diagram: the Application and the SM, with communication flowing in both directions between them]

Now with a project like this you want to get your dependencies right from the beginning. In the case of this project it is easy to understand that, as illustrated on the diagram, communication between the two modules goes both ways. The Application supplies the SM with parameters and possibly other information, and the SM returns statuses and results to the Application. We will probably need to allocate objects and possibly implement interfaces on the application side as well as in the SM. The question then is: where do we define those classes and interfaces? The answer is: in a third module. Now the high level view of the project looks like this:

[Diagram: the Application and the SM(I) packages, each depending only on the SMF at compile time, plus a non-UML broken arrow for the runtime dependency from the Application to the SM(I)]

Now, except for the weird looking arrow, the elements on that diagram are packages with their dependencies. You have:

1) Application.
2) SM(I). This is a specific SM Instance (I added the I between parentheses to highlight that).
3) SMF, the Successful Module Framework.

As you can see there is no circular dependency in the standard UML elements. At compile time the Application depends only on the SMF, and the SM(I) also knows only the SMF. Now I added a non-UML element (the weird broken arrow, not quite connected) to represent the runtime dependencies in the system. I think having this extra arrow makes everything obvious and clean. In my next blog entry we will continue our analysis of the SM and SMF. In fact, for a while the emphasis will turn to the SMF and the key patterns used in that module, the most important being:

- Abstract Factory
- Strategy

Now a closing comment. It goes without saying that before you start on a project like this, a good analysis and requirements definition phase is in order. This is beyond the scope of the current thread, but we might come back to it or insert a few blog entries about that phase later.

Saturday, April 20, 2013

Good software design principles and weekends at the beach

I work on software that is used to read and analyse data from instruments made by the company I work for: Fourier Transform Infrared Spectrometers. The software I work on is a client/server type continuous acquisition and analysis application. I work on the server component of that software.
One key component used by the server is the data acquisition component - sometimes called the data acquisition driver. These days, data acquisition module is a more appropriate name for it, since this component is written in Java and does not really correspond to what we would call a driver. In the old days, when our software was written in C/C++, the thing really was a driver. However, in recent years the link between the PC and our instrument was changed from a proprietary protocol to a good old Ethernet TCP/IP link. Also, in the meantime, software development switched to Java. Of course, all of this is very nice since Java has very good network communication libraries.
Now, our software is very flexible and has very good data processing capabilities, so why not use it to analyse data from other instruments? Well, that's exactly what I was asked to do a few years ago: make our software capable of acquiring and analysing data from any third party instrument. Now, while this may sound like a simple task, doing it right really is not that simple.
In my next few blog entries I'm going to talk about how this problem was solved. Of course, I'm not going to discuss this at a level of detail that could get me in trouble with my employer, but this is not a problem, since the keys to the success of the project are not the kind of implementation details that could be considered trade secrets. No, the key to the success of this project was using well known and documented good software design principles. I think you will enjoy this.