In computer programming and software engineering, software brittleness is the increased difficulty of fixing older software that may appear reliable but fails when presented with unusual data, or with data altered in a seemingly minor way. The phrase derives from an analogy to brittleness in metalworking.[1]
Causes
When software is new, it is very malleable: it can be shaped into whatever the implementers want. But as a project grows larger and acquires a base of users with long experience of the software, it becomes less and less malleable. Like a metal that has been work-hardened, the software becomes a legacy system, brittle and difficult to maintain without fracturing the entire system.[citation needed]
Brittleness in software can be caused by algorithms that do not work well across the full range of input data. Some examples follow:
- A good example is an algorithm that allows a division by zero to occur, or a curve-fitting equation used to extrapolate beyond the data to which it was fitted. Another cause of brittleness is the use of data structures that restrict values. This was commonly seen in the late 1990s, as people realized that their software only had room for a two-digit year entry; this led to the sudden updating of tremendous quantities of brittle software before the year 2000.[2]
- Another, more commonly encountered form of brittleness is found in graphical user interfaces that make invalid assumptions. For example, a user on a low-resolution display may see the software open a window too large to fit the display. The opposite can also occur: a window too small for the display, with no way to resize it, or a window whose elements do not fit correctly because the developers' assumptions about resolution no longer hold. Another common problem appears when a user chooses a color scheme other than the default, causing text to be rendered in the same color as the background, or a font other than the default, which does not fit in the allotted space and cuts off instructions and labels.
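The two-digit-year problem mentioned above can be sketched in a few lines of Python. This is a hypothetical parser, not drawn from any cited codebase, illustrating how a restrictive data structure bakes an assumption into otherwise working code:

```python
def parse_two_digit_year(yy: str) -> int:
    """Hypothetical legacy parser that stores only the last two year digits.

    The restricted representation bakes in the assumption that every
    year is 19xx, which held for decades and then suddenly did not.
    """
    return 1900 + int(yy)

# Correct for the data the program was originally written against:
print(parse_two_digit_year("85"))  # 1985

# Brittle just outside that range: "05" was meant as 2005.
print(parse_two_digit_year("05"))  # 1905
```

The parser never crashes; it silently produces wrong answers, which is exactly why such brittleness went unnoticed until the input data changed.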
Very often, an old codebase is simply abandoned in favor of a brand-new one created from scratch (a rewrite, intended to be free of many of the legacy system's burdens), but this can be an expensive and time-consuming process.
Some examples of, and reasons for, software brittleness:
- Users expect a relatively constant user interface. Once a feature has been implemented and exposed to the users, it is very difficult to convince them to accept major changes to that feature, even if the feature was not well designed or the existence of the feature blocks further progress.
- A great deal of documentation may describe the current behavior and would be expensive to change. In addition, it is essentially impossible to recall all copies of the existing documentation, so users are likely to continue to refer to obsolete manuals.
- The original implementers, who knew all the intricate details of the software, have moved on and left insufficient documentation of those details. Many such details were passed on only through the design team's oral tradition, and many are eventually irretrievably lost, although some can be rediscovered through the diligent (and expensive) application of software archaeology.
- Patches have probably been issued throughout the years, subtly changing the behavior of the software. In many cases, these patches, while correcting the overt failure for which they were issued, introduce other, more subtle, failures into the system. If not detected by regression testing, these subtle failures make subsequent changes to the system more difficult.
- More subtle forms of brittleness commonly occur in artificial intelligence systems. These systems often rely on significant assumptions about the input data. When those assumptions are not met, perhaps because they were never explicitly stated (as is often the case), the system may respond in completely unpredictable ways.
- Systems can also be brittle if their component dependencies are too rigid. One example is the difficulty of transitioning to new versions of dependencies: when one component expects another to output only a given range of values and that range changes, errors can ripple through the system, either at build (compile) time or at runtime.[citation needed]
- Fewer technical resources are available to support changes once a system is in the maintenance phase rather than in development (in terms of the systems development life cycle (SDLC)).[citation needed]
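The rigid-dependency point above can be illustrated with a minimal sketch, assuming a hypothetical consumer that hard-codes the value range an upstream component used to emit:

```python
def render_progress(value: int) -> str:
    # This consumer rigidly assumes the upstream reports progress as 0-100.
    if not 0 <= value <= 100:
        raise ValueError(f"expected a value in 0-100, got {value}")
    return f"{value}%"

# Works against version 1 of the upstream component:
print(render_progress(42))  # 42%

# Version 2 of the upstream switches to basis points (0-10000);
# the unchanged consumer now fails at runtime:
try:
    print(render_progress(4200))
except ValueError as err:
    print(f"error: {err}")
```

Nothing in either component is wrong in isolation; the brittleness lives in the unstated contract between them, so the failure only appears when the two versions are combined.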
References
1. "Definition of software brittleness". PCMAG. Retrieved 2023-05-19.
2. "Y2K bug". education.nationalgeographic.org. Retrieved 2023-05-19.
- Robert E. Filman; Tzilla Elrad; Siobhán Clarke; Mehmet Aksit (2004). Aspect-Oriented Dependency Management. Addison Wesley Professional. ISBN 0-321-21976-7. Archived from the original on January 30, 2013.
- Virginia Postrel (1999). "Power fantasies: the strange appeal of the Y2K bug – Year 2000 transition problem". Reason. Archived from the original on 2005-09-10. Retrieved 2008-07-25.