Overused Code Reuse

 
Job Title: Sarcastic Architect
Hobbies: Thinking Aloud, Arguing with Managers, Annoying HRs,
Calling a Spade a Spade, Keeping Tongue in Cheek
 
 

First of all, I want to congratulate all fellow rabbits on the Year of the Rabbit, which started on 3rd February. I wish all rabbits all the best in this year, but want to remind you that it is not only a year of great opportunity, but also of great responsibility. Let us try to make the Year of the Rabbit as bug-free as possible! One of the important steps on this road will be understanding the pitfalls of code reuse.

Built from Rubbish

Since ancient times, using pre-existing code from somewhere else has been seen as a Holy Grail by project management. ‘Why develop functionality ourselves if we can buy it?’ With open source software becoming ubiquitous, the temptation became even stronger: the argument ‘Why spend money if we can get it for free?’ is as strong as it can possibly get for a manager. On the developers’ side the temptation to reuse is also rather strong: ‘Hey, we can use this neat 3rd-party class and get this very cool feature!’ (It often happens that nobody has actually asked for this feature, but that rarely stops this kind of reuse.)

While I certainly don’t want to claim that all code reuse is inherently evil, it does have significant drawbacks which are often overlooked. In this month’s column I will try to describe these issues, which might cause lots of problems down the road, and to provide some reflections on the question ‘when to reuse’.

Toll of code reuse: real-life disasters

As examples of things going really badly, I will describe two well-known cases where code reuse significantly contributed to disasters caused by software bugs.

Therac-25

'The Therac-25 was a radiation therapy machine produced by Atomic Energy of Canada Limited (AECL)... It was involved in at least six accidents between 1985 and 1987, in which patients were given massive overdoses of radiation.' (Wikipedia)

In 1982, a new radiation therapy machine, the Therac-25 [Therac-25], was developed by Atomic Energy of Canada Ltd. The Therac-25 was a further development of two previous models, the Therac-6 and Therac-20, and (naturally) it was decided to reuse some software from the previous models [Leveson]. The hardware was a bit different, though: in particular, while the Therac-20 had hardware interlocks to prevent the software from activating the beam in the wrong position, the Therac-25 had no such interlocks, relying instead solely on the software. The reused software contained a difficult-to-find bug which had never manifested itself on the Therac-20 because of the hardware interlocks. Reusing that software on the Therac-25, where no hardware was available to prevent the malfunction, resulted in at least six confirmed cases of massive (up to 1000x) radiation overdose, and in 3 to 5 deaths (estimates vary depending on the source) due to this bug. Some may argue that 3 deaths is nothing compared to the number of deaths in car accidents every day, but would you personally like to be responsible for them? I hope not.

Ariane 5 explosion

'Ariane 5's first test flight (Ariane 5 Flight 501) on 4 June 1996 failed, with the rocket self-destructing 37 seconds after launch because of a malfunction in the control software.' (Wikipedia)

In 1996, an Ariane 5 rocket self-destructed 37 seconds into its first launch, with estimated damages at least in the order of several hundred million euros [Robinson]. An investigation [Ariane Inquiry][Robinson] showed that the failure was caused by reusing a subsystem of the Ariane 4 software. The bug was in a piece of code which wasn't necessary for Ariane 5 but which was (naturally) kept 'for commonality reasons' [Ariane Inquiry]; the bug had never manifested itself on Ariane 4 because of its different flight dynamics.

I cannot tell whether it was a mistake to reuse any code at all in these two cases (though there are indications that it was; for example, [Leveson] says 'The reuse of Therac-6 design features or modules may explain some of problematic aspects of the Therac-25 software design'), but what is clear is that careless reuse is what essentially caused both of these disasters. Two observations can be made from these two cases.

The first rule of thumb reads:

When reusing, one needs to carefully consider the new environment where the code is moved; failure to do so can be catastrophic.

The second one (based on the Ariane 5 failure above, but also supported by personal experience) is the following:

When reusing, it is often difficult to understand how much code you’ve just added.

Resource bloat

In his presentation [Martin], Robert Martin stated that since the time of the PDP-8, hardware has improved by 27 orders of magnitude. While I can't comment on the exact numbers, it is obvious that the improvements in hardware over the last 20 years have been HUGE.

Does anybody remember the ZX Spectrum home computer from the '80s? It had a 3.5MHz Z80 CPU (not merely without floating point, but without hardware multiplication!) and 48KB of RAM (of which 7KB was video RAM); there was no HDD to swap to, not even a floppy disk, and everything had to be loaded from tape into RAM. And still, developers were able to do wonders with this hardware. One game of the time, Elite, contained an inter-planetary trading system (with prices based on supply and demand), real-time 3D space fights (OK, it was contour-only 3D, but keep the restrictions in mind), several special missions, and a galaxy map of a few thousand planets – all within 41KB of RAM (code and data combined), on a CPU which is 1000+ times slower than today's. There were also compilers, word processors and spreadsheets.

One starts to wonder: if it was possible to do this kind of thing in 41KB, how much better should software be which can use 41MB! Unfortunately, this is not the case. Modern software can do (as a rule of thumb) absolutely nothing in 41KB and just a few minor things in 41MB; for example, the Eclipse IDE requires as much as 512MB of RAM by default, which is 10,000 times more than the ZX Spectrum had. No doubt Eclipse has more capabilities than ZX Spectrum-era development tools, but is it four orders of magnitude more? I don't think so.

One can argue that these days RAM is cheap, so who cares about all this bloat? The answer is: I do, at least, because if the guys who wrote Elite were around to develop for today's more constrained environments, I'm pretty sure they would manage to write apps for a cellphone which wouldn't feel sluggish on a 'mere' 1GHz CPU (still 300 times faster than the ZX Spectrum's) or a 'mere' 512MB of RAM (four orders of magnitude more). I'm also pretty sure they would manage to write software for a Blu-ray player which wouldn't take 10 seconds to 'load' and wouldn't take a second to react to a remote control button. The whole culture of respecting resources evaporated during the late 1980s–1990s, and now it backfires in resource-constrained environments like cellphones.

Obviously, code reuse is not the only reason for this waste of resources; there are multiple reasons, but I'm sure code reuse is a significant contributing factor (just one example: even in Ariane 5, a resource-constrained environment, useless code was kept 'for commonality reasons'; even if it hadn't crashed the whole thing, it would still have been a waste of CPU resources). Usually, though, the mechanism is simpler than that: as noted above, it is often difficult to understand just how much code you've added. Therefore it frequently happens that just one tiny DLL/.so is used, which in turn pulls in a dozen other DLLs/.so's, and so on. Did you know, for example, that loading MFC42.DLL implicitly loads not only OLE/COM but also the print spooler, even if you never use any of it? Or how many DLLs depend on SHDOCVW.DLL?
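Measuring this is not hard, and is often sobering. The sketch below (Windows-specific C++, a minimal illustration rather than production code; mfc42.dll is used purely as an example) loads a single DLL and then lists every module that ends up mapped into the process as a result. Tools such as dumpbin /dependents or Dependency Walker give a similar picture without writing any code.

    // Minimal sketch: how many modules does one LoadLibrary() really pull in?
    // Windows-specific; link with psapi.lib.
    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    static DWORD countLoadedModules(HMODULE* mods, DWORD cap) {
        DWORD bytesNeeded = 0;
        if (!EnumProcessModules(GetCurrentProcess(), mods,
                                (DWORD)(cap * sizeof(HMODULE)), &bytesNeeded))
            return 0;
        return bytesNeeded / sizeof(HMODULE);
    }

    int main() {
        HMODULE mods[1024];
        DWORD before = countLoadedModules(mods, 1024);

        // The DLL name is just an example; substitute whatever you intend to reuse.
        if (!LoadLibraryA("mfc42.dll")) {
            std::printf("failed to load the DLL\n");
            return 1;
        }

        DWORD after = countLoadedModules(mods, 1024);
        std::printf("modules before: %lu, after: %lu\n",
                    (unsigned long)before, (unsigned long)after);
        for (DWORD i = 0; i < after; ++i) {
            char path[MAX_PATH];
            if (GetModuleFileNameA(mods[i], path, MAX_PATH))
                std::printf("  %s\n", path);
        }
        return 0;
    }

Running something like this (or simply watching the loader output in a debugger) before committing to a dependency gives a much better idea of how much code you've actually just added.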

3rd-party code and dependencies

Reusing 3rd-party code introduces dependencies. Such dependencies are often detrimental for several reasons:
  • if there is a bug in the 3rd-party code, it is still your application which will crash, and your users will blame you (see also [NoBugsToDll]).
  • if the 3rd-party code changes, you are at the mercy of its developers to keep the APIs stable. Moreover, it is not only their understanding of the APIs that must stay unaffected by such changes, but also your understanding (and the two are not always the same thing).
  • if a 3rd-party API does not correspond exactly to your requirements (which is almost always the case), relying on it will likely lead to lower cohesion and higher coupling, increasing overall code rigidity. While these effects can be mitigated by creating 'glue code' to isolate the 3rd-party code, this is an extra cost and is rarely done in practice.
  • careless reuse of 3rd-party code can easily lead to ‘licence hell’, with a need to handle lots of potentially incompatible licences.

Conclusions

With all the problems of code reuse outlined above, does this mean that code reuse is always wrong? Not at all; there are perfectly legitimate uses for it. For instance, I don't mean that if you're writing a business application, you should start by writing your own operating system or database. The reason I listed all these problems is to illustrate that reusing code from other projects or (even worse) from 3rd parties SHOULD NOT be taken lightly, but only with a full understanding of all the implications and consequences. Individual analysis is required in each case, but there are several rules of thumb which I and many of my fellow rabbits use, and which can be a reasonable starting point:

  • When reusing, one needs to carefully consider the new environment where the code is moved; failure to do so can be catastrophic. One example of reuse from a non-software field would be reusing bridge piers when building a new bridge in place of an old one. While such reuse is possible and is sometimes undertaken, it is always preceded by very careful analysis; such analysis often reveals that reuse would be dangerous, or more expensive than building new piers, and the new bridge is often built completely separately. Why should software be any different?
  • All decisions about reusing 3rd-party code (which includes code from within the same company but from a different project) must be made only after careful consideration at the project level; both architectural and legal analyses should be performed before deciding to reuse. Major decisions (and as discussed, the decision to reuse 3rd-party code is a major one) require some formalities; licence issues are a contributing factor here too.
  • When making decisions about reuse, remember the integration costs. This rule of thumb is of special importance for managers: while reused code may be free, integrating it with your own code never is, and in some cases the integration cost can exceed the cost of writing the code from scratch.
  • As a rule of thumb, the lower the level of an API, the higher the chances that it will be suitable for your needs. For example, the chances that a 'JPEG library' will be exactly what you're looking for are much higher than the chances that a 'business flow handling' library will suit your needs; the longer-term chances of the latter are further reduced by likely changes to the business flow logic.
  • To avoid increasing code rigidity when reusing 3rd-party code, think about adding 'glue code' around it (a minimal sketch follows after this list). Note that 'dumb' wrappers (wrapping every function 1-to-1) don't tend to help here and are essentially useless. Writing proper 'glue code' can be a tough exercise, but unless you're building something which has a 100% dependency on the 3rd-party code, it is paramount for keeping the software maintainable in the long run. Over time all kinds of things can happen: the 3rd-party code can go out of circulation, a competing product can become better, new management can strike a deal with another vendor. Proper 'glue code' can save you from rewriting the whole program (or at least reduce the amount of work significantly); the trick is to find what kind of 'glue code' is appropriate. As a rule of thumb, it is better to specify 'glue' APIs in terms of 'what we need to do' (as opposed to 'what this code can do for us'), which essentially rules out 'dumb' wrappers (where the 'glue' API merely mirrors the functionality of the API being wrapped).
  • If you're not writing something inherently reusable, like an OS or a public API, don't write for reuse; reuse existing code instead. It has been mentioned by both [Brooks] and [Kelly] that writing code aimed at reuse is about three times more expensive than writing single-use code. This is in line with practical observations by fellow rabbits: among other things, when writing code aimed at reuse it is not so easy to adhere to the 'No Bugs' Axe' principle, and deviations from it are likely to lead to 'creeping featuritis' [NoBugsAxe].
  • Know what exactly you're including, what resources it takes, and what the implications of the reused code are. Maybe the reused code includes a call which is specific to Win7, and you're required to support XP? Or maybe it won't run unless some specific version of some other library is installed? Or maybe you're writing an Internet application, and the reused code issues 100 successive RPC calls, which you won't notice over your LAN but which will cause a delay of several seconds across a transatlantic link? If you are reusing code, it is your responsibility to make sure it is suitable for your purposes.
  • It is important to note that while some of these points do not apply to 'internal reuse' (such as placing code in functions and calling them from many different places), some of these rules of thumb are still essential regardless of whether reuse is 'internal' or 'external'. In particular, the 'new environment', 'integration costs', and 'know what exactly you're including' points stand even for 'internal reuse'. When reusing small, well-defined internal functions these points may be trivial to address, but as the complexity of the reused code grows, the analysis becomes more complicated, and the lack of such analysis may cause significant problems down the road.
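To illustrate the 'glue code' rule of thumb above, here is a minimal C++ sketch of a glue API specified in terms of 'what we need to do' rather than mirroring a 3rd-party API. Everything in it (the ThumbnailMaker interface and the imgpro namespace standing in as a stub for a hypothetical 3rd-party image library) is invented for the illustration and is not taken from any real library.

    // 'Glue' API expressed in terms of what OUR application needs to do.
    #include <cstdint>
    #include <string>
    #include <vector>

    namespace imgpro {  // stand-in stub for a hypothetical 3rd-party image library
    struct Image {
        void resizeToFit(int /*w*/, int /*h*/) {}
        std::vector<std::uint8_t> encodeJpeg(int /*quality*/) { return {}; }
    };
    inline Image load(const std::string& /*path*/) { return Image(); }
    }  // namespace imgpro

    // What the application actually needs: "give me a JPEG thumbnail of this file".
    class ThumbnailMaker {
    public:
        virtual ~ThumbnailMaker() = default;
        virtual std::vector<std::uint8_t> makeJpegThumbnail(
            const std::string& imagePath, int maxWidth, int maxHeight) = 0;
    };

    // The only class that knows about imgpro. If the vendor disappears or is
    // replaced, only this class needs rewriting; callers are unaffected.
    class ImgproThumbnailMaker : public ThumbnailMaker {
    public:
        std::vector<std::uint8_t> makeJpegThumbnail(
            const std::string& imagePath, int maxWidth, int maxHeight) override {
            imgpro::Image img = imgpro::load(imagePath);
            img.resizeToFit(maxWidth, maxHeight);
            return img.encodeJpeg(85);
        }
    };

    // A 'dumb' wrapper, by contrast, would merely expose wrapLoad()/wrapResize()/
    // wrapEncode() one-to-one, leaving every caller coupled to imgpro's design.

    int main() {
        ImgproThumbnailMaker maker;
        std::vector<std::uint8_t> thumb = maker.makeJpegThumbnail("photo.jpg", 128, 128);
        (void)thumb;  // the stub library returns empty data; a real one wouldn't
        return 0;
    }

The design choice that matters here is that the glue interface is named after the application's need (a thumbnail) rather than after the library's capabilities; that is what keeps the rest of the code base independent of whichever library happens to implement it.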


Acknowledgements

This article was originally published in Overload Journal #101 in February 2011 and is also available separately on the ACCU website. It is re-posted here with the kind permission of Overload. The article has been re-formatted to fit your screen.

Cartoons by Sergey Gordeev IRL from Gordeev Animation Graphics, Prague.
