At Sioux we have developed several projects with a Model Driven Development (MDD) approach, which enables software designers to model the behavior of the software and to check this model for issues well known to developers: deadlocks, livelocks, starvation, race conditions, etc. This checking is performed by mathematical verification. From the verified model the source code is generated, and it is guaranteed that the implementation is correct, which in fact means that the implementation is equivalent to the model. Whether the model itself is correct is, of course, a different question.
Model Driven Development itself has been around for quite some time. In the past it was mainly discussed in the academic world; practical use in real projects was hampered because the approach (and its tools) could not scale to large and complex systems. Only in the last few years have tools become capable of handling such complex systems.
The use of such a methodology not only impacts development itself, but also has an enormous impact on testing and quality activities.
More and more companies (especially in the technical/embedded domain) use MDD, and the testing discipline needs to be able to cope with this shift.
The case study described in this presentation took place at a manufacturer of electron microscopes. An important part of these microscopes is the vacuum subsystem: vacuum is needed so that electrons do not collide with molecules before they reach the object to be magnified. Part of the vacuum subsystem has been (re)designed using Model Driven Development. The design was defined in a formal model, and large parts of the code were generated. The assumption is that this code is error-free. But what does that mean?
What impact does code generated from mathematically verified models have on the test process? Can we skip testing of the subsystem altogether? Or should we stick to our proven test process? What kinds of problems did we find, and what kinds did we not encounter? This has yielded valuable information that we have already used in follow-up projects.
The presentation starts with a brief explanation of Model Driven Development, but mainly focuses on the testing activities and the quality of the system. Although we used specific tooling at Sioux, this talk is not about the tooling, but about the methodology.
The talk addresses questions like:
– What parts of the system can be described in formal models? What are the limitations?
– How is the mathematical correctness of the models proven, and what does this mean for testers?
– Which testing activities are influenced by MDD? Which can be skipped? Which need to change? Often heard: “The code is generated, so we do not need to test it!” Of course this is not true!
– What are our own experiences with this approach in this technical environment? And what can we learn from these experiences for future projects (at the same customer, but also at other customers)?