Complexity Theory: A New Paradigm for Software Integration

To say that software is ubiquitous is an understatement. Without software, and the ever-miniaturized circuits upon which it runs, there would be no Information Age. Electronic circuitry and its coded instructions have clearly become the key disruptive technology enabling this new era. There’s no turning back. At the same time, adaptive, dynamic, and highly complex networks—the unintended consequence of the information age—bring new challenges. Network science, which has emerged from complexity theory, presents opportunities to better test, understand, and harness software for its intended purposes.
Software—the code and data necessary for process execution—comes in many flavors. System software, interacting directly on silicon, supports higher-level abstractions. The smartphone’s multilayered software environment can host a multitude of diverse applications on a single system chip. At the same time, smartphones increasingly share data via networks with other smartphones and devices. In fact, the Internet of Things (IoT) overtook the Internet of people by volume in 2008. 1 The software spectrum truly spans the unit level through to massively indeterminate systems of interactive systems.
Military aircraft, for example, rely on software for integrated flight, engine, and fire control to maintain stability in inherently unstable platforms. The Joint Strike Fighter sports over 9.5 million lines of airborne code, with another 14.5 million lines of ground support code. 2 Advanced military drone software will enable totally autonomous flight, even in concert with controlled manned-flight environments. The nontrivial question is one of how to test software at ever-increasing levels of integration, interdependence, and public risk. Network science and complexity theory might hold vital clues in this increasingly perplexing arena.
Software Integration Testing
Software integration testing has traditionally been problematic for several reasons.
Open to Interpretation
As with any language, software is nothing more than another expressive convention with widely varying syntax and usage. Some argue software must be reduced to selected abstractions to be fully understood. 3 Software, ultimately a creation of human imagination, is subject to human semantic interpretation. There are simple, elegant ways to express things, but there are also convoluted ways of saying the same thing. Good designers can indeed be trained in software engineering. Unfortunately, great designers with real creative flair are rare in the field. 4
The fact that code is open to interpretation and is often convoluted affects software testing. No one testing formulation fits all interpretations. The same notion might be expressed in entirely different ways depending on the language used and the degree of syntactical compliance. In short, there’s no one “right” way to express an intended action, making language and style necessary elements of interpretation. The proliferation of software languages, architectural models, services, object methodologies, and agents ensures healthy diversity while simultaneously spawning greater incomprehensibility among those charged with interpretation through testing. Ill-formed or nonexistent semantic domains inhibit the necessary cross-mapping to implementation languages, particularly in modeling environments such as UML. 5 Likewise, higher-order languages often suffer from incomplete semantic architectures 6 and rarely account for all environmental variables. Even machine-generated code is subject to the culturally influenced style of the code-generating design firm. Formal methods and state checkers offer some limited relief but fall short of fully bridging the chasm between machines and human intent.
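To make the point concrete, consider a hypothetical illustration (the functions below are invented for this purpose, not drawn from any particular codebase): the same intended action written once in a terse, idiomatic style and once in a convoluted but functionally equivalent one. A tester must correctly interpret both.

```python
# Two functionally equivalent ways to express "sum the even numbers in a list".
# Both pass the same tests, yet each demands a different reading from a reviewer.

def sum_evens_idiomatic(values):
    """Terse, idiomatic expression of the intent."""
    return sum(v for v in values if v % 2 == 0)

def sum_evens_convoluted(values):
    """Convoluted but equivalent expression of the same intent."""
    total = 0
    index = 0
    while index < len(values):
        remainder = values[index] - (values[index] // 2) * 2
        if remainder == 0:
            total = total + values[index]
        index = index + 1
    return total

assert sum_evens_idiomatic([1, 2, 3, 4]) == sum_evens_convoluted([1, 2, 3, 4]) == 6
```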
Limited White-Box Testing
Nonetheless, if the code is available for scrutiny, we can certainly assess its capability to perform as designed. White-box testing determines the code’s integrity and capability to produce the desired output—no more and no less. It depends on the skills of other programmers to sufficiently understand the code design to produce an array of test-case scenarios. It’s best to perform white-box testing at the unit level, during development, when the source code is still easily accessible.
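The following is a minimal sketch of what white-box testing looks like at the unit level, assuming the source is in hand; the function and its tests are hypothetical, and the cases are chosen by inspecting the code's branches rather than its specification alone.

```python
import unittest

def classify_reading(value, limit):
    """Hypothetical unit under test: classify a sensor reading against a limit."""
    if value < 0:
        return "invalid"
    if value > limit:
        return "over-limit"
    return "nominal"

class WhiteBoxTests(unittest.TestCase):
    """White-box tests: one case per branch, chosen by inspecting the code."""

    def test_negative_branch(self):
        self.assertEqual(classify_reading(-1, 10), "invalid")

    def test_over_limit_branch(self):
        self.assertEqual(classify_reading(11, 10), "over-limit")

    def test_nominal_branch(self):
        self.assertEqual(classify_reading(5, 10), "nominal")

if __name__ == "__main__":
    unittest.main()
```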
However, as integration moves to the component, subsystem, system, system-of-systems, and, ultimately, ultra-large-systems level, independent white-box testing becomes increasingly impractical. In the case of distributed autonomous systems, such as a military drone, independent white-box testing is nearly impossible. The question of ownership then prevails.
Ownership Issues
A litigious part of the human condition stipulates that we should be rewarded for our creations. In software, the best way to ensure reward is to “own” the source code, which represents highly valued intellectual capital. Corporations rely on this principle to protect revenue streams, particularly during software maintenance. This is likely to hold true for larger, high-payoff systems that exhibit high-risk consequences and thus require lucrative high-assurance testing.
The natural pushback to code ownership exists in the “open systems” movement. It has been argued, however, that open source software is still proprietary based on intellectual property rights, unless it has been expressly contributed to the public domain. 7
Challenges of Open Interface Standards
Another way to achieve better software interoperability entails designing to standards. The Navy’s Future Airborne Capability Environment program involves a growing government and industry consortium committed to defining standards for interchangeable code (www.opengroup.org/getinvolved/consortia/face). The goal is to produce an “app store” for the Department of Defense.
The daunting task of developing open interface standards to account for myriad dynamic interactions among numerous disparate software components might ultimately throttle the open systems movement, where large-scale integration is unavoidable. 8 Worse, integrated software is also influenced by environmental variables, which tend to manipulate interacting modules in differing and unexpected ways. 9
Independent Black-Box Testing
Software layering lets successive layers support higher levels of abstraction. In the case of the smartphone, individual developers can create and market their own unique high-level applications that draw from subordinate features of the phone’s distinctive layered architecture. If the phone’s architecture is open, white-box testing is feasible. In a closed environment, however, independent testers are often forced to face the alternative to white-box testing—black-box testing.
Here, the software is evaluated based on its capability to produce the desired output with a given input. This is purely a behavioral approach to software testing. Ownership issues tend to force independent black-box integration testing.
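As a rough sketch, a black-box test might exercise a component purely through its external interface, with no knowledge of the implementation; the command-line tool named converter below is hypothetical.

```python
import subprocess
import unittest

class BlackBoxTests(unittest.TestCase):
    """Black-box tests: drive the component only through its external interface.

    The component here is a hypothetical command-line tool named 'converter'
    that reads a Celsius value and prints the Fahrenheit equivalent; the tests
    know nothing about how it is implemented.
    """

    def run_tool(self, celsius):
        result = subprocess.run(
            ["./converter", str(celsius)], capture_output=True, text=True, check=True
        )
        return result.stdout.strip()

    def test_freezing_point(self):
        self.assertEqual(self.run_tool(0), "32.0")

    def test_boiling_point(self):
        self.assertEqual(self.run_tool(100), "212.0")

if __name__ == "__main__":
    unittest.main()
```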
Extensive Regression Testing
According to a growing body of evidence, software, when embedded, exhibits scale-free network characteristics. 10 The same characteristics are evident in object-oriented systems. 11 Similar small-world network patterns have been found to exist in the relationships of leading technology vendors, 12 open system development teams, 13 and even the system development process in general. 14 A failure-estimating tool has been proposed that takes system-of-systems complexity into consideration for risk management. 15
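One way to probe for such structure, sketched below under the assumption that a module-dependency edge list can be extracted and that the third-party networkx library is available, is to examine the degree distribution: a heavy tail, with a few highly connected hub modules, is one rough signature of scale-free topology.

```python
import collections
import networkx as nx  # third-party graph library, assumed available

# Hypothetical module-dependency edge list: (caller, callee).
edges = [
    ("nav", "bus"), ("fuel", "bus"), ("radar", "bus"), ("display", "bus"),
    ("nav", "gps"), ("autopilot", "nav"), ("autopilot", "bus"), ("logger", "bus"),
]

graph = nx.DiGraph(edges)

# Tally how many modules have each total degree (in + out).
degree_histogram = collections.Counter(dict(graph.degree()).values())

# A long, heavy tail (a few hub modules with very high degree, many modules
# with low degree) is one rough signature of scale-free structure.
for degree, count in sorted(degree_histogram.items()):
    print(f"degree {degree}: {count} module(s)")
```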
The Software Engineering Institute notes that ultra-large-scale systems—those that transcend systems of systems—are highly networked, holistic, and capable of complex behaviors. 16 In almost all respects, software has moved from its former stand-alone status to become part of a larger network phenomenon. How else would Google’s 16,000-processor system learn to identify cats through imagery? 17
Ultimately, because of the increasing complexity born of such interdependence, traditional test methods appear inadequate for software integration testing. If embedded software modules exist in network relationships, the linear processes of separate units give way to nonlinear responses in the larger software network. In essence, software modules act as nodes in the network, responding when stimulated. Sometimes this response is at odds with the responses of neighboring nodes reacting to different stimuli. This is why introducing each new software module in an incremental integration build requires costly, time-consuming, and extensive regression testing to wring out newly introduced failure modes. This integration test paradigm is antiquated. A new paradigm is in order.
A New Testing Paradigm
Complexity theory, to which the growing field of network science is closely related, suggests that the whole is greater than the sum of its parts in complex adaptive systems. Unlike the mechanical systems of an era gone by, overall system behaviors are no longer exclusively deterministic. Software-intensive systems can’t be rationalized by a detailed understanding of the known behavior of the various modules. Rather, networked software components interact nondeterministically to generate holistic behavior patterns.
These patterns might not be well understood through complete knowledge of each of the subordinate units. As exemplified by the Joint Strike Fighter software status, 2 this might explain why large, integrated software projects historically exceed cost and schedule budgets, 18,19 despite endless regression testing. Integrated software isn’t easily reduced. Thus, reductionist approaches to integration testing are inadequate integration-assurance strategies.
Although a holistic approach seems necessary for software integration testing, the exact means are still emerging. However, some promising avenues exist, given the nature of the problem. A robust research agenda is available for applied systems integration.
Network Science
Network science, with its roots in the mathematical field of graph theory, is beginning to provide a robust mathematical framework for describing network topography and behavior. 20 Although the applied framework is still in its relative infancy, it can isolate and describe software network nodes. Assuming available source code, software “call graphs” can be visualized to track control flows. 21 Large systems employing bus architectures are hospitable to such analysis, and passive network-analysis tools can capture packet transfers at the Internet Protocol level.
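As a simple sketch of the idea, assuming Python source is available, a static call graph can be extracted with the standard-library ast module; production call-graph tools must also handle dynamic dispatch, indirect calls, and other complications the toy version below ignores.

```python
import ast

SOURCE = """
def read_sensor():
    return 42

def filter_value(v):
    return v if v >= 0 else 0

def control_loop():
    raw = read_sensor()
    return filter_value(raw)
"""

def build_call_graph(source):
    """Return {function_name: set of directly called function names}."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            callees = set()
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    callees.add(inner.func.id)
            graph[node.name] = callees
    return graph

for caller, callees in build_call_graph(SOURCE).items():
    print(caller, "->", sorted(callees))
```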
Stuxnet, allegedly designed by Israel, would never have targeted the correct Siemens controller if allegedly US-developed software hadn’t first mapped the network on which the ill-fated Iranian centrifuges operated. 22 Applied network science provides a means to numerically characterize the small-world topographies in which software operates across the systemic network.
Big-Data Analytics
Once the topographies are mapped, scalability comes into play. Big data and big-data visualization represent hotspots on the national technology agenda. Together, they provide a potential means for visualizing activity on a given network. If sufficiently scalable, this activity could be observed in real time under varying loads and conditions. Much as positron emission tomography (PET) scans capture brain activity under various stimuli, similar pattern behavior could be mapped for operating software systems and systems of systems.
Such a method puts black-box testing on steroids with the added advantage of real-time visual insight to system operations. Once functionally defined through further research, big-data analytic techniques could account for specific behaviors, including unresolved semantic differences in modeling and language. Latent semantic analysis technology, when scaled, appears adequate for the contextually sensitive task.
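A minimal sketch of the underlying mechanics, with a hypothetical event stream standing in for a real bus monitor or packet capture: messages are aggregated over a sliding time window, and each snapshot becomes one “frame” that a visualization layer could render.

```python
import collections
import time

WINDOW_SECONDS = 5.0

# Hypothetical stream of (timestamp, source_module, destination_module) events,
# e.g. captured from a bus monitor or packet sniffer.
events = collections.deque()

def record_event(src, dst, now=None):
    """Append one observed message and drop events older than the window."""
    now = time.time() if now is None else now
    events.append((now, src, dst))
    while events and events[0][0] < now - WINDOW_SECONDS:
        events.popleft()

def activity_snapshot():
    """Message counts per (src, dst) link inside the current window -
    the frame that a visualization layer would render."""
    counts = collections.Counter((src, dst) for _, src, dst in events)
    return dict(counts)

# Example: simulate a small burst and inspect the snapshot.
for _ in range(3):
    record_event("nav", "bus")
record_event("radar", "bus")
print(activity_snapshot())   # {('nav', 'bus'): 3, ('radar', 'bus'): 1}
```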
Pattern Analysis
Applying network science and complexity theory to the resulting patterns could help define a nominal operating range for a given system configuration. Theoretically, we could define limits beyond which divergent patterns either lock up the system or drive it into a chaotic state. Likewise, once operating parameters are established, we could identify and remedy the root causes of divergent behavior. In some instances, we could develop data or command “pills” to inhibit or control unacceptable behaviors. We could also define new parameters for degraded-mode operation or for introducing new components, including entirely new systems.
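One simple way to operationalize a nominal operating range, sketched here with hypothetical numbers, is to derive a statistical band from a pattern metric observed during known-good runs and flag excursions for root-cause analysis; real systems would use richer metrics and thresholds.

```python
import statistics

def nominal_range(baseline_samples, k=3.0):
    """Derive a nominal band (mean +/- k standard deviations) from baseline runs."""
    mean = statistics.mean(baseline_samples)
    stdev = statistics.stdev(baseline_samples)
    return mean - k * stdev, mean + k * stdev

def check(sample, band):
    low, high = band
    return "nominal" if low <= sample <= high else "divergent"

# Hypothetical baseline: messages per second on a bus during known-good runs.
baseline = [98, 102, 101, 99, 97, 103, 100, 98, 101, 100]
band = nominal_range(baseline)

print(check(101, band))   # nominal
print(check(180, band))   # divergent - candidate for root-cause analysis
```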
Recent research reveals that, if we can map the network graph of a complex protein reaction, understanding the behavior of only a few nodes in the network is sufficient to predict overall system dynamics. 23 The implication of this finding has profound ramifications for observing pattern behavior as a means of understanding embedded software behavior. The control implications in the system design world are even more profound.
Significantly, such pattern analysis, once perfected, begins to permit new insights into cybersecurity, as the sudden appearance of anomalous patterns could be early indicators of a system breach worthy of further investigation. Presumably, such investigation could lead to early isolation of the deviant patterns before they create too much havoc. Applied network science principles tend to bear this out, as is the case with the use of hybrid graph theory measures to detect malware in dynamic mobile networks. 24
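A toy illustration of the graph-measure idea, with invented snapshots and a deliberately simple metric: compare the centrality profile of a baseline communication graph against the current one, and flag nodes whose role has shifted sharply. Real detectors combine many measures, but the principle is the same.

```python
import networkx as nx  # third-party graph library, assumed available

def centrality_profile(edges):
    """Degree centrality per node for one communication-graph snapshot."""
    return nx.degree_centrality(nx.Graph(edges))

baseline_edges = [("app", "bus"), ("nav", "bus"), ("radar", "bus")]
current_edges = baseline_edges + [
    ("app", "nav"), ("app", "radar"), ("app", "unknown_host"),
]

baseline = centrality_profile(baseline_edges)
current = centrality_profile(current_edges)

# Flag nodes whose centrality jumped well beyond their baseline value -
# a sudden new hub, or an unfamiliar node, may indicate a breach worth a look.
THRESHOLD = 0.2
for node, value in current.items():
    if value - baseline.get(node, 0.0) > THRESHOLD:
        print(f"anomalous centrality shift: {node} ({value:.2f})")
```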
A New Architectural Model
Although these ongoing research initiatives address integration testing techniques and might eventually help develop operational monitoring tools, they fall well short of useful design constructs. Most institutional system architecture models have stayed with a cumbersome data-centric approach. This is particularly true in the military defense sector. 25 Worse, many commercial products cling steadfastly to the relational model for data representation. This sells well because most buyers finally came to grips with the relational model after digesting it through the 1970s and ’80s.
Ontological thinking, however, promotes the multiple many-to-many relationships that are today’s reality but are ill-suited to relational algebra. Although software has gone the more ontologically friendly route of object-oriented code, many institutional architecture models haven’t kept pace. This inhibits the kind of environment in which a “living ontology” can thrive and grow with the context of the world it supports. Furthermore, because of the architectural reliance on data modeling, timing—a critical part of the state-transition process—isn’t captured. Thus, the most important portion of dynamic system behavior is architecturally shunned. It should be no surprise that most formal architectural documents become expensive shelfware. Additional research is necessary to produce an architectural model that goes to the heart of system dynamics in a way compatible with a living, contextually based ontology as the system morphs over time.
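The contrast can be sketched in a few lines (the schema and relationship names are hypothetical): a relational-style join table captures a many-to-many relationship but fixes its shape in advance, whereas an ontology-style store of typed relationships can grow new relationship types as the context evolves.

```python
# Relational-style representation: many-to-many via an explicit join table.
systems = {1: "nav", 2: "radar"}
services = {10: "position_feed", 20: "threat_feed"}
system_service = [(1, 10), (2, 10), (2, 20)]   # join-table rows

# Ontology-style representation: typed relationships stored directly,
# easy to extend with new relationship types as the context evolves.
ontology = {
    ("nav", "consumes", "position_feed"),
    ("radar", "consumes", "position_feed"),
    ("radar", "produces", "threat_feed"),
    ("threat_feed", "derived_from", "position_feed"),
}

# Query: everything related to 'position_feed', regardless of relationship type.
related = {(subject, predicate) for (subject, predicate, obj) in ontology
           if obj == "position_feed"}
print(related)
```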
Although white- and black-box testing have their place in the software development cycle, neither likely belongs in software integration testing. Because of network effects, a holistic view is necessary to discover and remedy anomalous behavior in systems that embed multiple software modules. In medicine, a specialist might be required to rectify a specific organ malfunction, but it’s the general practitioner who must make the preliminary diagnosis. Just as the medical field is migrating toward multidisciplinary teams for patient treatment, software design and testing must be integrated into a team approach. The software seldom stands alone; it’s subject to myriad environmental variables that independently affect system performance.
Software design and deployment must ultimately be considered a part of the greater whole and no longer an end unto itself. This will require holistic thinking at unprecedented levels. This, indeed, harnesses the power of complexity theory in the context of the learning organization. Only then might software be viewed as a critical organ in the integrated, complex network that sustains the larger system.