Thursday, July 24, 2014

Home Automation (Part I)

Being a developer with a background in electronics, I sometimes miss the physical aspect of writing software. I don't mean physical as in touching other people or something, no, more like writing code that does something physical. Like operating a lamp or a motor. So, some time ago I saw my chance for a new side project: home automation.

While investigating what I would need, I quickly became convinced that besides programming and installing it myself, the biggest challenge would be to get all the bits and pieces together. There are a lot of systems and vendors and it is hard to get objective information, or just information altogether. So I decided to start sharing my experience here. Maybe it will be of some use to other brave souls crazy enough to try the same thing for their homes.

But first, let's start with a good old rant. Why oh why is the world of home automation so badly organized? Let me say that again: why? If you enter this world, set aside all open source principles. Do not hope that you are going to find centralized, useful documentation on how to do things.

Imagine taking a time machine back to a time when the Internet was not yet referred to as the 'interwebs'. When the Internet was still delivered over slow landlines owned by evil telcos wanting nothing more than your small fortune. When marketers did not yet understand that a platform can be offered almost free of charge and that there are plenty of other ways to make money. Imagine a world where not everyone speaks German.

German, you say? Yes, get used to words such as 'Schnittstelle' (interface, not to be confused with the well-known Schnitzel) or 'Zählerfernauslesesystem' (remote meter reading system). Apparently a lot of these companies are German, and while most documents are indeed also available in English, some are left untranslated, so you are stuck with the German version. Fact is that most of these sites need to be browsed in English and a second time with a German locale to make sure all information has surfaced. Some documents simply don't appear otherwise. But it is what it is, so let's suck it up and proceed.

As with every project, it all starts with the requirements. This time, being the customer myself, having bad, or no, requirements is not going to be an excuse. But here also lies the danger. Home automation is very subjective. Some people want their eggs to be baked automatically in the morning, others just want a panic button. For me it is a subtle mixture of what is doable, what is actually needed and the budget. The must haves:
  • All light points in the house should be able to be programmed and controlled via software. And by all light points I mean all of them, no exceptions
  • Some power circuits need to be controllable
  • Lights in the hallway, toilet, bathroom, garage, storage space, outside the side door and outside the house need to be operated by motion and/or presence sensors. These sensors also need to be cat-proof. I don't want to turn my house into a blinking Christmas tree if my cats decide to play at night
  • Buttons need RGB LEDs. If the toilet is occupied I want to see a red light on the switch. Depending on the mode a 'sensor controlled light' is in, I want this reflected by the LED. The brightness of the LEDs also needs to adjust to the available light
  • Light sensors with timer functionality. For example: the roof lights need to turn on when it's getting dark, but only after 19:00. If it gets dark during the daytime (bad weather or something) the lights should not switch on
  • Scene support: lights in the living-room/kitchen need to be dimmed to preset scenes
  • Blinds for the windows need to be controlled automatically. They need to be controlled together (the default) but also individually
  • Ventilation. If the unit is too loud at some point, it must be possible to control it remotely, preferably from a phone
  • Support for programming over Ethernet
  • API for controlling the entire system via software. For example an app, or a dedicated touch panel
  • The interconnection of the components should be using a bus architecture with free topology
Nice to haves:
  • Heating. This is a bit of a gray zone, but at least the possibilities need to be investigated
  • Metrics for gas/electricity/water
For me this is where the line between useful and affordable automation is drawn. Audio, for example: sure, nice things are possible, but this is better controlled separately. There is plenty of choice in media systems with Spotify integration at far better prices. The thing is that for audio one wants to have freedom of choice and not be bound to a single (expensive) product.

Next, the hardware. Easier said than done, so first let's try to identify some categories.

PLCs

A barebone PLC, for example the Simatic S7 from Siemens. These are the SAP of automation: you can build nearly anything with them and they are very robust and stable.
However, the thing is that they are barebone. For example, programming is done in STL, which is very low level, comparable to assembler. The image below gives you an idea of how this looks.



Agreed, you can also opt for a graphical way of programming, like LAD (ladder logic), but it remains pretty low level. And that's not all: these things have a high cost, which is understandable if you see the kind of factories they are being used in.
There are also other manufacturers that are S7-compliant, such as VIPA, but the price remains high.

But the biggest downside is probably that there are no ready-made HOME components. If you want a push button with LEDs, no problem, but then you have to build it yourself. I like a challenge, but this would be a bit too much for me.

Mini PLCs

Siemens Logo! is such an example

While they are cheaper, they still suffer the same drawbacks as their bigger brothers. There are also no real home components, and having something as simple as LED feedback on a button (e.g. an 'occupied' LED) means you'll have to run an extra wire and need an additional output just to drive this one LED.

While there is Profibus (really expensive, but ok) for Simatic, Logo! has no bus system. This means that if you want 16 switches you'll need at least 17 wires running in your 'data' cable. Double that amount if you want to use LEDs.

Don't get me wrong; Logo! (or PLCs in general) do have a place in home automation. First there is the ease of installation: you don't need cross switches or to wire switches directly to switching points as in a traditional installation. They are well suited for basic stuff like push buttons (instead of traditional toggle switches), programming panic buttons, simple motion detectors, etc. They are also preferable to contactors/relays, as they can be programmed dynamically and the prices are almost the same.

Home automation PLCs

Then there are what I call 'home automation PLCs'; one of them is the Loxone Miniserver.

It is a CPU just like the Logo! or Simatic, but more focused on home automation out of the box. It ships with a web server and a phone app, allowing instant remote access from your browser or phone. There are also several ready-built components which you can use straight away. And most of all, it comes with KNX/EIB support (more on that later).

But used as the main home automation solution it would suffer the same disadvantages as the previous ones: limited intelligent components(°) and no bus system.

(°) Limited, but there are at least extensions like an IP-cam-based intercom, RGBW controllers, a wireless extension and more (check it out). These home components are not available with traditional PLCs.

Real home automation

With "real" I don't mean that these system would be more stable or mature, not at all. But they are more home oriented and therefore a closer match as to what one would need for home automation.

To make it easy I divide them into closed systems, such as Niko Home Control (NHC, from the Belgian company Niko) or BTicino (Italian), etc. These are examples of a closed standard: only the manufacturer in question produces components for it.


Then there are the open systems such as KNX/EIB. The latter is an open standard and everyone is free to build components compatible with it. This means that you don't have vendor lock-in; you can mix different brands and components, as long as they are KNX/EIB compatible. For example, one can have an MDT push button combined with a Jung dimming actuator and a Basalte thermostat. Just like NHC, KNX uses a bus (EIB) with a free topology.

Ok, then what is the best solution? There is no objective answer as to which is better. It basically depends on budget, taste and mindset. For example, vendor lock-in is usually not a good idea. But if you choose a good vendor that "has it all", it can also have advantages. If you go to your local Niko dealer, someone can probably explain all the components. You get everything (information, ordering, support) from a single place. No need to endlessly browse the Internet comparing manufacturers or products. On the other hand, a closed standard might impose future risks: what if Niko decides to stop producing NHC? They already did that with its predecessor, Niko bus. Agreed, they solved it with the NHC/NB bridge, which allows adding NHC components to an existing NB installation, but not all functionality is usable this way. So if you bought Niko bus some years ago you are already limited in adding new components.

KNX excels in the fact that it has no central processor (unlike PLCs). It is message driven and each sensor/actor has its own processor. They communicate with each other using telegrams (~messages). A sensor or an actor can subscribe to a so-called 'group address' (comparable to a multicast address) and will then pick up telegrams sent to that address by other sensors or actors. Based on its configuration it can decide to do something with the telegram or ignore it. But therein also lies its weak point: each component is responsible for generating or processing telegrams, but it is limited to the logic offered by that component. For example: a light point needs to be switched on conditionally, based on the value of a light sensor and the wall-clock time. This clearly involves clock logic. While perfectly possible with KNX, you'll need a separate actor which is able to generate telegrams based on clock schedules. This starts to become a pricing issue, as each KNX actor is a processor of its own, which makes them far from cheap.
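To make the group address idea a bit more tangible, here is a purely conceptual sketch in plain Java (this is not real KNX code and uses no real KNX library; all class names are made up): every device sees every telegram on the bus and only reacts to the group addresses it has been configured for.

import java.util.ArrayList;
import java.util.List;

public class KnxBusSketch {

    interface BusDevice { void onTelegram(String groupAddress, int value); }

    static class Bus {
        private final List<BusDevice> devices = new ArrayList<>();
        void attach(BusDevice device) { devices.add(device); }
        // every device sees every telegram; each decides for itself what to do
        void send(String groupAddress, int value) {
            for (BusDevice device : devices) device.onTelegram(groupAddress, value);
        }
    }

    public static void main(String[] args) {
        Bus bus = new Bus();

        // a switch actuator configured (via its parameters) to listen on 1/2/3
        bus.attach((groupAddress, value) -> {
            if ("1/2/3".equals(groupAddress)) {
                System.out.println("light point: " + (value == 1 ? "on" : "off"));
            }
        });

        // a push button sends a telegram to group address 1/2/3
        bus.send("1/2/3", 1);
    }
}

In a real installation this 'subscription' is of course configuration you set on the device, not code you write yourself.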

But there is also a solution: the so-called KNX/EIB 'servers'. They allow more advanced and centralized programming and form an extension to the traditional KNX sensors and actuators. An example of this is Gira's HomeServer. However, its price tag is really, really high at around €2,200. Luckily, this is where Loxone comes in. Loxone offers the same functionality but at a reasonable price. It can even do more, as it is a traditional CPU with KNX/EIB integration. To program your KNX components via Ethernet you also need an IP gateway, which is again an extra (special) actor. With Loxone you get all of that combined in a single module. So by adding Loxone we get the following things in return:
  • A 'soft entry' into the home automation system. It allows remote access via an API and apps thanks to its built-in web server. Since it is KNX/EIB compliant, Loxone can be used as the programmatic interface to EIB, without needing extra KNX components
  • A KNX/EIB gateway: one can program the KNX/EIB components using Loxone as the gateway
  • The inputs on the Loxone are not really usable when using EIB. However, the outputs are. By using them you also save an 8-digital-output KNX actor
  • Actual programming with logic circuits, controllers and smart components; you can even program in pico C if that is required

A KNX gateway and an 8DO KNX actor already cost more than the entire Loxone setup. So it's a win-win deal: less money for more functionality.

So in the end I decided that an open system (such as KNX/EIB) in combination with a home automation PLC (the Loxone Miniserver) combines the best of both worlds. KNX/EIB offers an open standard and freedom of choice in vendors and components. It is also a decentralized, message-driven and open-ended architecture, which I felt comfortable with. Loxone adds the central processing power and the interface to other platforms (web/mobile). It also allows programming all the logic you would ever need for your home automation, at a reasonable price.

There is, however, one thing to keep in mind. While the Loxone Miniserver is KNX/EIB compliant, it cannot be programmed like other KNX components. In other words, Loxone is programmed separately, with different software and different 'programming techniques'. But this actually makes sense: the way KNX/EIB is programmed does not lend itself to complex logic. After all, KNX/EIB programming is more 'configuring'.
The Loxone development environment is better suited for this, taking full advantage of its possibilities. Do note that the Gira HomeServer, for example, is also programmed in a different way than traditional KNX/EIB components, so this is only natural.

In the next part I'll show the components I'm planning to use to meet the requirements, as well as the test board and the initial programming I did so I could test everything from my desk.

Tuesday, January 28, 2014

Some Of My Mostly Used Development Mantras (1/2)

Statement: I assume it works this way!
Mantra: assumption is the mother of all fuckups

Let's start with this short fragment from Under Siege2: http://www.youtube.com/watch?v=wg4trPZFUwc

In this case his assumption had a rather lethal outcome, which you probably remember if you have seen the movie. But ok, being a mercenary, his situation might seem a bit different from that of a day-to-day developer. So, going further with this example, you might remember there was also a computer whiz on the train doing stuff like operating the satellite. Now imagine you're that guy and you're asked to start up satellite communications. You quickly write some code and in doing so you write this line:
i = i++;
Normally we would write just 'i++', leaving out the assignment. At most, the assignment seems redundant. We assume that it does not have any other side effects and we start up our program. But then suddenly... BOOM! Communications fail, Casey Ryback gets his ass kicked (actually I would have loved to see that), nukes are launched and world peace is far away. Our assumption had a rather negative impact.

Ok, let's see what this is about and make a small test. What do you expect it to print, and do you have any idea why?
int i = 0;
i = i++;
System.out.println("i:"+ i);
While you're thinking about this, let me explain what happens. As you can see we have an expression consisting of a unary operator and an assignment. As discussed before, the assignment is redundant. But besides being redundant it also causes an unpleasant side effect: it makes this code behave differently than one would expect. If we drop the assignment (leaving just 'i++;') the code does exactly what you think it does.

What happens is that when using an assignment, the JVM puts the value of the operands on the so-called operand stack before doing something with them. When the expression is evaluated, the value to be assigned to the left-hand variable is popped from the operand stack. That sneaky unary operator follows different rules: it uses the 'iinc' instruction. The iinc instruction takes as parameters a position in the local variable array of the current frame and the amount by which to increment the value stored there.

So, what happens is that "0" is pushed onto the operand stack. Next, the variable "i" is incremented by 1 because of the iinc instruction. At this point the local variable 'i' actually has the value '1'. We can't show this in a normal way, since the assignment/expression is evaluated in a single step. When the assignment is made, the latest value is popped from the operand stack and assigned to the variable 'i'. This means that 'i' is overwritten with the value from the operand stack, changing its value from 1 back to 0.
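You can make this visible by disassembling the compiled class with 'javap -c'. Roughly annotated (assuming 'i' ends up in local variable slot 1), the statement compiles to something like this:
int i = 0;  // iconst_0, istore_1 : i = 0
i = i++;    // iload_1  : push the current value of i (0) onto the operand stack
            // iinc 1,1 : increment local variable i in place, i is now 1
            // istore_1 : pop 0 from the operand stack back into i, i is 0 again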

Note: when you write 'i = ++i;' things will work as expected and the assignment is just redundant, without side effects. In that case the iinc instruction is executed before the local variable value is pushed onto the operand stack. The rationale is actually no different from incrementing a variable in a method call: someMethod(++i) versus someMethod(i++). However, with an assignment it seems less clear that a pre- or post-increment makes any difference, because we tend to relate it to i++ or ++i on a line of its own, where it makes absolutely no difference whether the ++ is pre or post.
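A small, self-contained test (the class and method names are just examples) that puts the assignment variants next to the method-call analogy:
public class IncrementDemo {

    static void printValue(int value) {
        System.out.println(value);
    }

    public static void main(String[] args) {
        int i = 0;
        i = i++;                  // post-increment: the old value 0 is assigned back
        System.out.println(i);    // prints 0

        int j = 0;
        j = ++j;                  // pre-increment: j becomes 1 before the assignment
        System.out.println(j);    // prints 1

        int k = 0;
        printValue(k++);          // passes the old value: prints 0 (k is now 1)
        printValue(++k);          // increments first: prints 2
    }
}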

In nearly every line of code we make assumptions (how many depends on experience and knowledge). This works out most of the time because most of the assumptions will be correct. Incorrect assumptions do not necessarily lead to bugs, as they might get balanced out by other assumptions. Or maybe the side effects are never detected because of the boundaries within which our software operates.

A good way to track down incorrect assumptions in code is to write tests. But that alone is not enough. It is also important to accept this rule: no matter how brilliant you are, there are always (a lot of) internals you don't know or fully understand. One needs to remain skeptical; don't be lazy. Always try to look things up as much as possible. Even when there is only a little bit of doubt: look it up and/or make a small test case. It costs time and energy, but the ROI (being able to write working, high-quality code) will be well worth it.

Statement: don't worry, it's only temporary!
Mantra: there is no such thing as temporary code

Agreed, a lot of things in life are temporary. The stock of Westvleteren 12 in my fridge is just one example. But in software projects, "temporary" tends to take on a life of its own, inflicting poorly designed or badly written code along the way.

This makes sense if you think about it; I don't think anyone works with the same amount of energy or dedication on something that is considered temporary. The problem, however, is that temporary code becomes permanent before you even realize it.

An example I can share on this topic is a user management system which we designed for a previous project. The system was going to be a "temporary" user management system because the "permanent" system was delayed for some reason. I won't go into detail, but the name of the software needed to reflect that it was temporary, so it was named TUM (Temporary User Management system). And you guessed it: in the end the system stuck around for years and over time became the one and only user management system. After a while it even became hard to convince new colleagues that the "T" actually stood for "temporary".

I'm going to put this a bit harshly, but "temporary" helps no one and gives developers a wild card to justify bad decisions, laziness or incompetence. In such situations chances are high that the design will not be what it could have been and that the resulting code will suffer quality issues. So the rule is simple: always design and write code on the assumption that it will stick around for quite a while. That will be beneficial in the long run, as nearly all code does stick around, in one form or another.

But make no mistake, friends: no matter how dedicated or how good you are, the dark side is always waiting around the corner. If you are being tempted, simply ask yourself how you would feel if your car mechanic decided to fix your brakes "temporarily" because he had no idea at the time how to fix them properly. Enough said, I think.

Like most rules, there are exceptions. It would be naive to think that there are no genuinely temporary software projects. Of course there are, and fundamentally that is no big issue. The important part is to keep this decision at the project management level. Doing so you avoid infecting the team with it.

Don't forget that in the end you're never sure how temporary a project is going to be. And even if the project really is temporary, its code might still find its way into other modules or projects in the future.

Saturday, January 4, 2014

Git: Keeping Your History Clean

As we recently migrated from Subversion to Git, it was clear from the beginning that we needed some extra rules to keep the history clean. The first thing we stumbled upon were the so-called automatic 'merge commits' performed by git.

In case you don't know what they are: imagine a typical closed-project setup with a central repository to share work between multiple developers. Now suppose developer John committed and pushed some changes. In the meantime Jim is developing something on the same branch in his local repository. Jim has not pulled from central, so the central repository is one commit ahead of him. When Jim is done with his code he commits the changes to his repository. When he then tries to push his work to central, git will instruct him to pull first (to get the latest changes from central) before he is able to push again. The moment Jim does a "git pull", git creates a merge commit that merges in the commit done by John.

This is new when coming from Subversion, as with svn you wouldn't be able to commit in the first place. You would first have to "svn up" before doing "svn commit", so your changes would still be in your local working copy (unlike with git, where they are already committed to your local repository). To be clear: these merge commits don't pose a technical issue, as they are "natural" (to a certain extent), but they do clutter the history a lot.

To show how this looks in the history, we'll simulate it locally using three repositories (central, a bare git repo, and john/jim, two clones of central). First John commits and pushes john.file1. Then Jim creates and commits jim.file1. If Jim now tries to push his work, he gets:
 
! [rejected]        master -> master (fetch first)
error: failed to push some refs to '/tmp/central/'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first merge the remote changes (e.g.,
hint: 'git pull') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details
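(For reference, the simulation above can be reproduced with something along these lines; the /tmp paths match the output above and the default branch is assumed to be 'master'.)
git init --bare /tmp/central
git clone /tmp/central /tmp/john
git clone /tmp/central /tmp/jim

# John commits and pushes john.file1
cd /tmp/john
touch john.file1
git add john.file1
git commit -m "Add john.file1"
git push origin master

# Jim, who has not pulled in the meantime, commits jim.file1 and tries to push
cd /tmp/jim
touch jim.file1
git add jim.file1
git commit -m "Add jim.file1"
git push origin master   # rejected, central already contains John's commit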
If Jim simply pulls, the changes from origin will be merged and Jim will in turn be able to push his changes to central. All good. However, our history will now contain the so-called merge commit, as highlighted below:


As said before, this happens because the local branch has advanced, compared to central, with the commit of Jim. Therefore git can no longer do a fast-forward. Git will (by default) merge in the changes it received from origin and thus has to make a commit. Altering the commit message is still possible, but you'll end up with an actual commit either way. The result is that the change appears to have been done twice when looking at the history, which can be quite annoying. Also, if the team is committing/pushing fast (as they should), a lot of these entries will appear and really start to clutter the history.

A solution for this is to rebase instead of merge. This can be done with:
 
git pull --rebase
When using 'git pull', git first brings the changes from origin to your local repo and then (by default) merges them into your local branch. When rebasing instead, git (temporarily) undoes your commits, fast-forwards your branch and then re-applies your changes. These commits will be new commits, as the old ones are thrown away: they get a new SHA-1 identifier and so on.
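Conceptually, 'git pull --rebase' on the master branch boils down to something like these two steps:
git fetch origin
git rebase origin/master   # re-apply your local commits on top of what origin has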

Rebase is, however, only a solution in particular scenarios, so we would certainly not make it the default for everything. From the Git book:

If you treat rebasing as a way to clean up and work with commits before you push them, and if you only rebase commits that have never been available publicly, then you’ll be fine. If you rebase commits that have already been pushed publicly, and people may have based work on those commits, then you may be in for some frustrating trouble.

As long as you are not playing with commits that have already been pushed out to others, there is no problem. We are working on a standard closed Java project with a single central repository that adheres to these rules:
  • Developers don't synchronize repositories directly with each other, they always go via central
  • Developers don't push to anything other than central
In this case there is no risk of commits being made public before they are rebased by git pull. So we can safely make 'git pull' rebase automatically for our project's master branch using:
 
git config branch.master.rebase true
Another problem that caught my attention (triggered while looking into the automatic merge commit problem) is that our branching structure was not optimal. We are using a feature branch approach, but as feature branching (and merging) with Subversion is quite painful, we only did it when really necessary. With git, however, creating branches and merging is a breeze, so we changed our rules to be more natural:
  • There must be a (in our case, more) stable master branch into which only completed features are merged. Theoretically speaking, no one works directly on the master. This implies that almost all work is done on feature branches
  • Releases are made from a release branch which is a copy of the master branch at the point the release is triggered
  • The master log must be as clean and straightforward as possible: each feature merged back into the master must be exactly one commit, with the log message stating the ticket and the history showing all changes relevant to that feature.
When working like this, the chances that you get automatic merge commits are minimal. Reintegrating feature branches becomes an (almost) atomic operation, avoiding (nearly) any risk of those automatic merge commits, even when not rebasing. Of course, some rules must be followed before reintegrating:
  • Code reviews must have been done on the feature, one way or the other
  • The feature branch must first be updated with the latest version of the master. This happens regularly while developing the feature (you want to be as up to date with the master as possible), but certainly one last time before you plan to reintegrate the feature. With git this means you simply do "git merge master" while on the feature branch and it completes in a matter of seconds. This way, conflicts (with other features that have been reintegrated into the master in the meantime) can be resolved while still on the feature branch.
  • Tests and the like must be run on the feature and must be 100% successful before reintegrating the feature.
When following these rules, the following commands will reintegrate the feature, succeeding one after the other, making it an almost atomic operation and thus making rebase practically unnecessary:
git checkout master
git pull
git merge --no-commit --squash <feature-branch>
git commit -am "FEATURE-XYZ"
git push 
Now, even when working like this, we leave rebase set to true for the master, as it is still possible for the remote to change in the meantime. In the event this happens (after committing the merge), a simple 'git pull' (with the implicit rebase) solves the problem so that our push succeeds.

Another important element in keeping a clean history is the '--squash' option used above:

The --squash option drops any relation to the feature the changes came from. This way you can create a single commit that contains all changes, without taking any other history from your feature into your master. This is usually what you want: a feature can contain a lot of "junk" commits which you don't want to see on your master. In the typical case you simply want to see the result and not the intermediate steps. Also note that --squash implies --no-ff, as git will not fast-forward even if it were possible (otherwise you would not be able to get a single merge commit in the first place).

To illustrate this: if we simply reintegrated using 'git merge', the history would look like this:


Since the feature was updated with the master before reintegrating, git performed a fast-forward. If you look at the history, the commits from the feature are inlined into the master. This gives the impression that all work was done on the master. We also see all intermediate commits and other uninteresting commits, such as the merge commits from updating the feature with the master. Not a real problem, but to see what has been changed you would (possibly) have to go through a lot of clutter and look at each individual commit to get the big picture, which is not what you typically want.

When disabling fast-forward (git merge --no-ff) the history would look like this:


A bit better, as we can now see which commits are related to feature-xyz. However, the changes are still scattered over different commits and we still have other commits we're not interested in. Someone needing the history of feature-xyz is normally not interested in how the feature evolved or how many times a class was refactored. Typically you're only interested in the end result: the changes/additions that will be added to the master branch. By adding "--squash" we get a single, clean commit in our master which lists all changes for that feature:


Note that the history of the feature is not lost, of course. You can still go back to the feature branch and look at its history if that would ever be necessary.

The --no-commit option is convenient if you are skeptical (like me) and want to review the changed/added files one last time before making the reintegration final. If you leave this option out, git will commit the merge directly, without leaving you time to do this final check.

On a side note: to perform code reviews on features before they can be reintegrated, I tend to create a (local-only) branch of the master as a kind of 'dry run' branch. The feature is then reintegrated on that review branch (without committing the changes) just to perform the code review. This way all changed files are marked dirty by your IDE, making it easy to see what has actually changed (and to compare it with the HEAD revision). I then gradually commit the changes as I review them. When the code review is done, the merge is completely committed on the review branch, and it can then be merged back to the feature branch before the feature is finally reintegrated into the master branch. I (normally) don't make the review branch remotely visible and simply delete it locally after the review is done. Theoretically one could also merge the review branch back into the master directly, but since this review branch will be deleted, the real feature branch would never get the changes made during the code review and you would lose that history.
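A rough sketch of that review flow (branch and file names are just examples):
# start a local-only review branch from the latest master
git checkout master
git pull
git checkout -b review/feature-xyz

# bring in the feature without committing, so the IDE marks all files as dirty
git merge --no-commit --squash feature-xyz

# review and commit the files in chunks as they are checked
git add src/main/java/SomeReviewedClass.java
git commit -m "review: part 1"

# when done, any review fixes can be merged back into the feature branch,
# after which the local review branch is simply deleted
git checkout feature-xyz
git merge review/feature-xyz
git branch -D review/feature-xyz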