At SoCraTes Day in Zurich I held a session on applying the Cynefin Framework to Software Engineering.
Most discussions revolved around testing. For me the mapping is quite obvious:
- Automated testing in general handles obvious errors well. After all, they are so obvious that they can be explicitly spelled out and checked for by code. There are special cases, like performance testing, that may target more complicated errors arising during development.
- “Manual" test cases will find errors situated in the complicated domain. The test cases have been written by a specialist and will follow the path the specialist has designed. The testers find a an error here and there a little off the beaten track, but will generally follow that path.
- Exploratory tests have the highest chance of finding errors that arise from complex situations. A tester sets off to challenge a certain area of the application. Nobody knows where it will take her or what she will find out. It might be clear in retrospect, but it is impossible to forecast.
Several sessions also strongly resembled parallel safe-to-fail experiments, the way to go in the complex domain.
Fitness Landscapes
Let's map the three approaches onto a (hypothetical) fitness landscape, where the fitness function describes the quality of the software under test.
- “Automated testing” (or “checking”, as some testers prefer to say) tests the same spots on the landscape over and over. It will test them every day (or on every check-in), but it will never (or only rarely and randomly) discover other errors. These checked spots will in general have very high quality, because they are tested over and over, even if they lie within weak areas of the software, because each check only covers a single spot. Automated testing shapes the fitness landscape accordingly. If we rely completely on automated testing, these high-quality spots will very likely be surrounded by valleys of lower quality where no tests are ever run… Of course, smart engineers will cover the important parts of the application with tests and hope that the untested spots are rarely hit by users.
- Working through detailed test cases leads to a landscape with high-quality “highways” that are tested again and again. They might not reach the quality of the spots covered by automated tests (as they are exercised less frequently, and humans may overlook errors that a machine would catch), but they cover a much larger area, because humans are good at spotting errors that have not been explicitly coded for. As mentioned above, I'd expect a landscape with high-quality highways that flatten out quite quickly into areas of low quality, which are almost never hit by a tester's eye.
- Exploratory testing would produce a landscape of generally high quality, because most areas get covered from time to time and the tester challenges most intensely those areas where the largest risks are suspected. I'd expect a landscape where most areas are covered by tests and therefore no deep valleys or wells occur.
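To make the metaphor concrete, here is a toy simulation, a minimal sketch in Python. The spot counts, the quality boost, and the three sampling strategies are illustrative assumptions, not anything measured; the point is only how each strategy shapes the landscape by repeatedly raising the quality of the spots it touches.

```python
import random

random.seed(42)

SIZE = 100   # the application modeled as 100 "spots" on the landscape
RUNS = 50    # test cycles (e.g. days, or check-ins)

def simulate(pick_spots):
    """Start with uniformly mediocre quality; each cycle, every
    tested spot gets a small quality boost (bugs found get fixed)."""
    quality = [0.5] * SIZE
    for _ in range(RUNS):
        for spot in pick_spots():
            quality[spot] = min(1.0, quality[spot] + 0.1)
    return quality

# Automated checks: the same fixed spots, every single cycle.
automated_spots = random.sample(range(SIZE), 10)
automated = simulate(lambda: automated_spots)

# Scripted manual tests: a fixed "highway" with slight detours each cycle.
highway = list(range(20, 40))
manual = simulate(lambda: [min(SIZE - 1, max(0, s + random.choice((-1, 0, 1))))
                           for s in highway])

# Exploratory testing: a fresh sample of spots every cycle.
exploratory = simulate(lambda: random.sample(range(SIZE), 10))

for name, q in [("automated", automated), ("manual", manual),
                ("exploratory", exploratory)]:
    untouched = sum(1 for v in q if v == 0.5)
    print(f"{name:12s} untouched spots: {untouched:3d}/{SIZE}")
```

With these (made-up) numbers, the automated strategy leaves roughly 90 of 100 spots untouched: a few very high peaks surrounded by valleys. The manual highway covers a contiguous strip and little else. Exploratory sampling reaches almost every spot over time, which is exactly the flat, broadly elevated landscape described above.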