Innovations and Approaches for Resilient and Adaptive Systems

Vincenzo De Florio (PATS Research Group, University of Antwerp and iMinds, Belgium)
Release Date: September 2012 | Copyright: © 2013 | Pages: 343
DOI: 10.4018/978-1-4666-2056-8
ISBN13: 9781466620568 | ISBN10: 1466620560 | EISBN13: 9781466620575
Description & Coverage

Our society increasingly depends upon systems that are built in ways that leave them inflexible and intolerant to change. There is therefore an urgent need to investigate innovations and approaches to the management of adaptive and dependable systems. Such studies are usually carried out through the design, development, and evaluation of techniques and models for structuring computer systems as adaptive systems.

Innovations and Approaches for Resilient and Adaptive Systems is a comprehensive collection of knowledge on notions and models in adaptive and dependable systems. The book aims to raise awareness of the role of adaptability and resilience in system environments among researchers, practitioners, educators, and professionals alike.


The many academic areas covered in this publication include, but are not limited to:

  • Adaptive and context-aware multimedia
  • Adaptive Fault Models
  • Adaptive Fault-Masking
  • Methods Focusing on Optimizing Quality of Experience
  • Methods, Models, and Architectures to Manage and Express Strategies and Provisions for Cross-Layer Adaptation
  • Personalization
  • Recovery-oriented computing
  • Resilience engineering
Editor Biographies
Vincenzo De Florio received his “Laurea in Scienze dell’Informazione” (MSc, computer science) from the University of Bari (Italy, 1987) and his PhD in engineering from the University of Leuven (Belgium, 2000). He was a researcher for six years at Tecnopolis, formerly an Italian research consortium, where he was responsible for the design, testing, and verification of parallel computing techniques for robotic vision and advanced image processing. Within Tecnopolis, Vincenzo was also part of SASIAM, the School for Advanced Studies in Industrial and Applied Mathematics, where he served as a researcher, lecturer, and tutor, and took part in several projects on parallel computing and computer vision funded by the Italian National Research Council. Vincenzo was then a researcher for eight years with the Catholic University of Leuven (Belgium) in its ACCA division, where he participated in several international projects on dependable computing (EFTOS, TIRAN, and DePauDE). He is currently a researcher with the Performance Analysis of Telecommunication Systems (PATS) research group at the University of Antwerp, where he is responsible for PATS’ branch on adaptive and dependable systems under the guidance of Professor Chris Blondia.

He is also a researcher with IBBT, the Flemish Interdisciplinary Institute for Broad-Band Technology. Vincenzo De Florio has published about seventy peer-reviewed research papers, fourteen of which appeared in international research journals. He is a member of various conference program committees and an editorial reviewer for several international conferences and journals. He is local team leader for the IST-NMP project ARFLEX (Adaptive Robots for Flexible Manufacturing Systems). He also served as expert reviewer for the Austrian FFF. In the last few years he has been teaching courses on computer architectures, advanced C language programming, and a seminar course in computer science. He is co-chair of the workshop ADAMUS (the Second IEEE WoWMoM Workshop on Adaptive and DependAble Mission- and bUsiness-critical mobile Systems). Vincenzo’s interests include resilient computing, dependability, adaptive systems, embedded systems, distributed and parallel computing, linguistic support for non-functional services, complex dynamic systems modelling and simulation, autonomic computing, and, more recently, service orientation.

Peer Review Process
The peer review process is the driving force behind all IGI Global books and journals. All IGI Global reviewers maintain the highest ethical standards and each manuscript undergoes a rigorous double-blind peer review process, which is backed by our full membership to the Committee on Publication Ethics (COPE).
Ethics & Malpractice
IGI Global book and journal editors and authors are provided written guidelines and checklists that must be followed to maintain the high value that IGI Global places on the work it publishes. As a full member of the Committee on Publication Ethics (COPE), all editors, authors and reviewers must adhere to specific ethical and quality standards, which includes IGI Global’s full ethics and malpractice guidelines and editorial policies. These apply to all books, journals, chapters, and articles submitted and accepted for publication. To review our full policies, conflict of interest statement, and post-publication corrections, view IGI Global’s Full Ethics and Malpractice Statement.


It was with great pleasure that I accepted the kind invitation from my friends at IGI Global to once again play the role of editor for this series of books collecting the papers from the yearly volumes of IJARAS, the International Journal of Adaptive, Resilient, and Autonomic Systems, established in 2010 and now in its third year of publication. Reflecting on the work done and rearranging the contents into a coherent new “whole picture” once more gave me the chance to get closer to, and learn from, the work of so many and diverse contributors. My message here is first and foremost a chance to express my gratitude to them for sharing their lessons learned with me and the readers of IJARAS, as well as for considering our journal for their papers. Some of these contributions are particularly important to me because of their affinity with my own research interests; others for having provided me with new and broader insight and ideas; still others for their innovative character or for the sheer pleasure I had in being shown familiar concepts and approaches from a different angle. Editing this book also allowed me to present those contributions in a new light and, hopefully, to better expose their true innovation potential.

The focus of this book is on two fundamental ideas in (computing) systems — resilience and adaptivity. Simply amazing is the number and stature of the scholars who have addressed these two intertwined concepts throughout the past centuries. Still more surprising is how much we have yet to understand about resilience and adaptivity, and how actively the scientific and industrial communities are today investigating or applying models, systems, and applications exhibiting resilient and adaptive behaviors. It is my opinion that the common nature of both concepts should be traced back to the Aristotelian concept of entelechy (Sachs, 1995; Aristotle & Lawson-Tancred, 1986; De Florio, 2012a). Entelechy is in fact one of the main conceptual cornerstones of Aristotle’s philosophy — so difficult a concept to capture in a concise definition that Sachs refers to it as a “three-ring circus of a word” (Sachs, 1995). Difficulties notwithstanding, it is that very same scholar who, in the cited reference, provides us with a practical and ingenious translation of entelechy as “being-at-work-staying-the-same.” Such a definition consists of two parts:

1. “Being at work,” which refers to a system’s ability to continuously adjust its functions so as to compensate for foreseen and/or unpredicted changes in a given execution environment.
2. “Staying the same,” which refers to that system’s ability to retain its “identity” — in other words, its peculiar and distinctive functional and non-functional features — despite both the above-mentioned environmental changes and the adjustments carried out to improve the system-environment fit.

As can be easily realized, the two abilities above are precisely adaptivity and resilience — again, the major characters of this book. The common denominator of all the articles presented here is in fact that they deal with facets, requirements, and aspects of this “three-ring circus” of a concept that is adaptive-and-resilient computing systems.

Some of the above-mentioned aspects and facets are structural and pertain to general approaches towards these “entelechial systems” (De Florio, 2012b). The first section of this book — Approaches for Resilient and Adaptive Systems — presents a number of excellent contributions in that domain. Other aspects are behavioral and focus on the emergence of autonomic properties, e.g. self-safety, self-tuning, self-optimization, self-management, and self-configuration. Accordingly, the interesting articles in our second section all deal with Autonomic Behaviors and Self-Properties. Engineering adaptive and resilient systems in an effective and cost-conscious way calls for practical solutions to common requirements for resilient and adaptive systems. An “accumulation point” of sorts for such requirements and solutions is given by middleware and frameworks. Accordingly, Section 3 offers four valuable contributions dealing with Middleware and Framework Support for Resilient and Adaptive Systems. Operative aspects constitute the subject of the final section of this book, which is entitled Algorithms and Protocols for Resilient and Adaptive Systems and provides excellent examples of ingenious solutions for achieving in practice the emergence of resilience and adaptivity. In what follows I will introduce these sections and their papers.


Section 1 includes four very interesting contributions dealing with open problems in quite different domains.
In their excellent paper “Systematic Design Principles for Cost-Effective Hard Constraint Management in Dynamic Nonlinear Systems,” authors S. Munaga and F. Catthoor argue that current design principles for hardware controllers no longer match the requirements of systems that must satisfy hard constraints while at the same time minimizing costs. Traditional non-predictive approaches privilege simplicity over effectiveness and do not exploit any predictability in the system at hand. Existing predictive or hybrid approaches such as the System Scenarios methodology (Gheorghita et al., 2009) can only handle limited forms of dynamism and nonlinearity, which results in non-optimal behaviors that still do not fully exploit the available predictability. The authors’ answer is a set of systematic design principles based on the combined strategy of dynamic bounding and proactive conditioning of the system for a predicted likely future. In a preliminary application to the design of a video decoder, this more than halved the energy expenditure. It is my personal conjecture that much more and better results can be expected by engineering the reported approach to the full extent of its potential.

A second very promising work in this section is the paper “A Recovery-Oriented Approach for Software Fault Diagnosis in Complex Critical Systems,” contributed by G. Carrozza and R. Natella. The reported approach is based on the observation that production-run and field failures make a software system’s failure modes impossible to define completely at design time. How, then, to match behaviors that continuously change over time? The authors’ clever answer is a holistic diagnostic approach in which error detection, fault location, and error recovery are integrated into a single on-line adaptive diagnosis process. When an error is detected, the corresponding fault is located by means of machine-learning algorithms, and the error recovery best matching that fault is selected and executed. Teleological behavior is ensured by including corrective actions that dynamically improve detection quality. After defining their approach, the authors put it to use in a real-life application in the field of air traffic control. Very satisfactory results are observed, including high accuracy and low overhead.
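The detect–locate–recover loop just described can be sketched as follows. This is an illustrative toy, not the authors’ implementation: the rule-based “classifier,” the fault names, and the recovery actions are all hypothetical placeholders standing in for their machine-learning models.

```python
# Illustrative sketch of an on-line detect -> locate -> recover loop
# in the spirit of recovery-oriented diagnosis. All names are made up.

def locate_fault(symptoms, classifier):
    """Map observed error symptoms to the most likely fault class."""
    return classifier(symptoms)

def diagnose_and_recover(event_stream, classifier, recovery_actions):
    history = []
    for symptoms in event_stream:            # feed from error detection
        fault = locate_fault(symptoms, classifier)
        action = recovery_actions.get(fault, "restart-component")
        history.append((fault, action))      # feedback for tuning detection
    return history

# Toy usage: a rule standing in for the machine-learning fault locator.
rules = lambda s: "memory-leak" if "rss-growth" in s else "crash"
actions = {"memory-leak": "micro-reboot", "crash": "failover"}
log = diagnose_and_recover([{"rss-growth"}, {"segfault"}], rules, actions)
```

The point of the sketch is the integration: detection events, fault location, and the choice of recovery all live in one adaptive loop rather than in separate off-line stages.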

In his brilliant position paper “Abstract Fault Tolerance: A Model-Theoretic Approach to Fault Tolerance and Fault Compensation without Error Correction,” L. Marcus explores the concept of “pure fault tolerance,” namely the emergence in a system of certain desired behaviors despite the fact that not all sub-systems behave as expected. Dr. Marcus discusses this issue and poses several important questions, including:
• What is the degree of resilience intrinsically possessed by a given architecture?
• Given a system modeled after that architecture, to what extent are the properties of that system a function of the same properties in its components? (For instance, when we consider reliability and the triple modular redundant (TMR) architecture, it is a well-known fact that the reliability of the composite can be expressed precisely in terms of that of its constituents (Johnson, 1989). In this case the formulation is simple and elegant, but when we slightly modify the architecture we get a much more complex expression (De Florio et al., 1998). Obviously it is very unlikely that analytical formulations such as these may be found for complex adaptive systems!)
• To what extent does a system achieve its goals despite misbehaviors of its constituents — a question that makes very much sense also for large collective adaptive systems and even for societal systems such as certain governments?
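For the TMR example in the second question, the closed form is well known (Johnson, 1989): with three replicas of reliability R and a perfect majority voter, the composite works whenever at least two replicas do. The snippet below is a small illustration of that fact, not code from the paper.

```python
# Classic TMR reliability: the composite succeeds when all three
# replicas work (R^3) or exactly two of the three work (3*R^2*(1-R)).

def tmr_reliability(r: float) -> float:
    """R_TMR = R^3 + 3*R^2*(1 - R) = 3R^2 - 2R^3."""
    return 3 * r**2 - 2 * r**3

# TMR pays off only when individual reliability exceeds 0.5:
good = tmr_reliability(0.9)   # better than a single 0.9 replica
poor = tmr_reliability(0.4)   # worse than a single 0.4 replica
```

This is the “simple and elegant” case; as noted above, even small architectural changes make the composite expression far less tractable.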

Clearly the focus of this position paper is not to provide an answer to these questions but rather to highlight their role in understanding the “hidden rules” that govern the emergence of resilience in (adaptive) systems (Holland, 2004). Another contribution of the author is a framework for expressing the semantics of Pure Fault Tolerance.

Finally, in paper “IoT-IMS Communication Platform for Future Internet” by C.-Y. Chen, H.-C. Chao, T.-Y. Wu, C.-I Fan, J.-L. Chen, Y.-S. Chen, and J.-M. Hsu, the authors point out how the Future Internet is likely to exhibit a convergence of two currently hot topics — Internet of Things and Cloud Computing. A possible common fabric to enable this integration was indicated in the so-called IP Multimedia Subsystem, a communication platform based on All-IP and the Open Services Architecture. The authors
discuss corresponding scenarios for the Future Internet based on aspects including cloud services, data sensing and communication technology, authentication and privacy-protection mechanisms, and mobility and energy-saving management. An approach towards the Future Internet based on the above-mentioned aspects is also described.


Five excellent papers constitute the section on self-properties and autonomic behaviors.
Self-tuning and self-optimization are the key objectives in M. Leeman’s paper “A Resource-Aware Dynamic Load-Balancing Parallelization Algorithm in a Farmer-Worker Environment.” Built on top of the dependable farmer-worker algorithm described in (De Florio et al., 1997), the system described in this paper is currently being used to dynamically balance the workloads of regression tests of a digital television content-processing embedded device. The proposed resource-aware algorithm automatically
compensates for failures in the worker processes as well as in the worker nodes, and it ensures that tasks are scheduled according to a policy that minimizes execution time. Most importantly, the algorithm assumes neither that all workers have the same capabilities nor that all assignments include the same set of requirements. Its adoption within Cisco appears to be steadily growing, mainly due to the algorithm’s ability to reach its intended design goals at very limited cost.
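The core scheduling idea — hand each task only to workers whose capabilities match its requirements, and among those pick the one with the earliest estimated completion — can be sketched as below. All names, data structures, and numbers are hypothetical; this is not Leeman’s code.

```python
# Minimal resource-aware farmer-worker assignment sketch.
# workers: name -> {"caps": set of capabilities, "speed": work units/sec}
# tasks:   (name, required capabilities, cost in work units)

def schedule(tasks, workers):
    load = {w: 0.0 for w in workers}         # accumulated seconds per worker
    plan = []
    for name, requires, cost in tasks:
        capable = [w for w in workers if requires <= workers[w]["caps"]]
        if not capable:
            plan.append((name, None))        # no matching worker: report it
            continue
        # Earliest estimated completion = current load + this task's runtime.
        best = min(capable, key=lambda w: load[w] + cost / workers[w]["speed"])
        load[best] += cost / workers[best]["speed"]
        plan.append((name, best))
    return plan

workers = {"w1": {"caps": {"hd"}, "speed": 2.0},
           "w2": {"caps": {"hd", "uhd"}, "speed": 1.0}}
tasks = [("t1", {"hd"}, 4.0), ("t2", {"uhd"}, 2.0), ("t3", {"hd"}, 2.0)]
plan = schedule(tasks, workers)
```

A failed worker would simply be dropped from the `workers` map and its pending tasks re-queued, which is the sense in which such a scheme compensates for worker failures.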

In “Non-Intrusive Autonomic Approach with Self-Management Policies Applied to Legacy Infrastructures for Performance Improvements,” by R. Sharrock, T. Monteil, P. Stolf, D. Hagimont, and L. Broto, the authors address the problem of limiting the time and costs required to operate and maintain large legacy infrastructures. How to let these infrastructures self-manage despite their being designed with software-engineering principles and practices dating back to long before the advent of autonomic computing? The authors answer this challenge by proposing an outer control loop that changes not the managed system but the environment around it. The state of the managed system is actively monitored, and corrective actions are autonomously injected when needed in a non-intrusive way. The approach
makes use of administration policies expressed in a graphical language inspired by UML. The authors prove the effectiveness of their approach by evaluating the performance of a distributed load balancer of computational jobs over a grid. They show in particular how the policy description language approach is powerful enough to express self-management of complex scenarios.

The paper “Agents Network for Automatic Safety Check in Constructing Sites” by R. Aversa, B. Di Martino, M. Di Natale, and S. Venticinque represents an interesting example of how information and communication technology may be successfully applied to enhance existing social services and organizations. Often such services and organizations are not “smart” enough, and their lock-ins (Stark, 1999) lead to situations where sub-optimal behaviors and properties ultimately emerge. In some cases this even endangers human lives, as is regrettably the case in construction processes. Despite a tradition as old as humanity itself and the availability of many proven safety standards, such processes are still commonly subject to tragic failures due, e.g., to negligence or criminal behavior. Often
such failures are the result of bad practice — for instance, safety checks missing or not being performed when due, or security violations not being filed and persisted. The authors’ answer consists in regarding the construction site as a hybrid environment in which smart devices, software agents, safety mechanisms, and human beings cooperate as a collective adaptive system able to autonomously produce self-safety behaviors. Such a new social organization automatically performs safety checks by matching observed behaviors against the safety plans and, in case of mismatches, by enforcing secure logging of violations and real-time execution of recovery actions. The authors present their ingenious design as well as a working prototype taking the form of a three-layered software/hardware architecture. It is interesting to realize how badly constructed software may lead to consequences as catastrophic as those this interesting paper is meant to tackle (Leveson, 1995); it is only natural, then, to advocate extending concepts such as the one described in this contribution so as to enhance the processes for constructing safe software and avoid what I called the “endangeneer” syndrome in (De Florio, 2009; De Florio, 2012a).

In their paper “Run-Time Compositional Software Platform for Autonomous NXT Robots,” N. Gui et al. describe an approach towards self-restructuring and self-configuring software based on the ACCADA middleware (Gui, De Florio, Sun & Blondia, 2009, 2011). Here the context does not just drive adaptation but actually the selection of the adaptation logic best matching the current run-time conditions. Such meta-adaptation allows the adaptation logic to be dynamically maintained, which paves the way to facing unprecedented environments autonomously. Robotic environments are typical cases where such a feature is very attractive — especially when the robot is set to operate in a location that forbids any form of supervision and control. The approach is validated on a Lego NXT robot system: depending on the available energy budget, different adaptation logics are dynamically selected, which leads to different quality vs. cost trade-offs.

Last but by no means least, this section reprints the paper “Self-Adaptable Discovery and Composition of Services Based on the Semantic CompAA Approach,” by J. Lacouture and P. Aniorté. As in ACCADA, the aim of the Auto-Adaptable Components approach is to reach self-adaptation, which is obtained here through an ingenious combination of component and intelligent-agent technologies. A third ingredient of self-adaptation is semantic processing, which is used to express and mechanically manipulate non-ambiguous information about the functional and non-functional properties of the auto-adaptable components. The resulting hybrid entities are shown to exhibit self-discovery and self-composition mechanisms. Auto-Adaptable Components build on top of the Ugatze Component Reuse MetaModeling tool. The authors apply their approach to an e-Portfolio service — namely a “personal digital collection of information describing and illustrating a person’s learning, career, experience and achievements” — in the domain of collaborative learning. The authors provide evidence that their approach achieves key properties for self-adaptation, including robustness, reusability, autonomy, and flexibility.


As already mentioned, middleware and software frameworks are becoming key resources to support the execution of resilient and adaptive systems. In what follows we describe four contributions in this domain.

The first paper is “Timely Autonomic Adaptation of Publish/Subscribe Middleware in Dynamic Environments,” by J. Hoffert, A. Gokhale, and D. C. Schmidt. In their excellent article the authors discuss the challenges of providing middleware support able to guarantee quality-of-service properties such as reliability, latency, and timeliness in distributed environments characterized by dynamic variability of resources. The addressed platforms are distributed real-time and embedded systems, while typical scenarios for their middleware include safety-critical and mission-critical applications for crisis management or power-grid management. Among the challenges addressed by the authors is the hard requirement of limited and bounded time for reasoning about the context, planning appropriate reactions, and enacting targeted adaptations.

In their paper “A Generic Adaptation Framework for Mobile Communication,” H. Sun, N. Gui, and C. Blondia describe an approach towards the design of user-aware adaptive systems based on coupling
aspect orientation and service-oriented architectures. Rather than taking all decisions autonomously, the described system explicitly “wraps” the user in the control loop by means of a simple graphical user interface. Dynamic trade-offs between quality of service and quality of experience allow resource consumption to be reduced without introducing performance failures or other quality losses. The approach is based on the event-condition-action model and is demonstrated through an adaptive multimedia application that dynamically trades off quality of experience and quality of service versus cost. The software platform chosen by the authors is based on OSGi and uses the AspectJ aspect-oriented programming language as well as so-called reflective and refractive variables — an application-level approach to expressing adaptation concerns in applications written in “good old” C (De Florio & Blondia, 2007).

Considerable attention is being given by both the research and the development communities to OSGi, “a light-weight standardized service management platform that allows for dynamic service provision from multiple providers” (Sun, Gui & Blondia, 2011). The OSGi framework supports the Java programming language and couples the component approach with service-oriented programming, which considerably simplifies the design of applications able to adapt to highly dynamic environments.
Ambient-specific issues call for specific fine-tuning, though, and in their paper “Various Extensions for the Ambient OSGi Framework” S. Frénot, F. Le Mouël, J. Ponge, and G. Salagnac share their lessons learned on this problem and report on approaches to extend OSGi so as to enhance its ability to cope with ambient-specific concerns.

Opportunistic communication is an emerging paradigm based on systematic wireless exchanges of data among mobile nodes in proximity of each other. Not only does this realize a greater “social memory,” but it also allows resources to be economized by intercepting long-haul communication requests and translating them into local wireless exchanges when the corresponding data is already available in the memory of nearby nodes.

As can be easily understood, measuring the performance of opportunistic communication systems is vital in order to come up with effective adaptation decisions, and it paves the way to autonomic self-adaptation through middleware services. An answer to this need is described by I. Carreras, A. Zanardi, E. Salvadori, and D. Miorandi in their paper “A Distributed Monitoring Framework for Opportunistic Communication Systems: An Experimental Approach.” The paper reports the lessons learned by the authors while developing the opportunistic communication middleware U-Hopper. The authors present the design of U-Hopper as well as that of the distributed monitoring framework that was set up to monitor U-Hopper’s performance and dynamically adapt it to context changes. The paper also demonstrates the practical utilization of the monitoring framework and reports experiments and evaluations.


The final section of this book focuses on algorithms and protocols.
In the first paper of this section, entitled “COADA: Leveraging Dynamic Coalition Peer-to-Peer Network for Adaptive Content Download of Cellular Users,” L. Vu, K. Nahrstedt, R. Malik, and Q. Wang propose a very interesting theory, namely the spontaneous emergence of dynamic clusters of mobile nodes when users move towards certain points of interest. Such nodes constitute dynamic “coalitions” whose size and distance to target follow exponential distributions. Building on top of this theory and assumption the authors propose the adaptive protocol COADA (COalition-aware Adaptive content DownloAd). As its acronym reveals, COADA aims to reduce to a minimum content download from cellular networks by making use of opportunistic peer-to-peer communication among the nodes in the current
coalition. Using an exponential coalition-size function, COADA nodes periodically monitor the current size of the coalition and use this information to predict its evolution. In turn, this makes it possible to predict how much data may be acquired within the coalition without resorting to more expensive cellular data transfers. COADA also takes timeliness into account and minimizes the probability of missing content-file download deadlines. The effectiveness of COADA is demonstrated by means of simulations, which show that COADA meets its design goals by significantly reducing the transfer of data over cellular networks with no negative impact on transfer deadlines.
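The flavor of the prediction step can be sketched as follows. This is a hedged reconstruction of the idea, not the COADA protocol itself: the model parameters, the midpoint heuristic, and the function names are all assumptions of mine.

```python
import math

# Sketch of the COADA idea: fit/assume an exponential coalition-size
# model and decide how many chunks must come over cellular so that the
# download deadline cannot be missed even if peer-to-peer falls short.

def predict_size(n0: float, growth: float, t: float) -> float:
    """Exponential coalition-size model: n(t) = n0 * exp(growth * t)."""
    return n0 * math.exp(growth * t)

def cellular_share(file_chunks, p2p_rate_per_peer, n0, growth,
                   deadline, now):
    """Chunks to pull over cellular, given expected P2P acquisition."""
    remaining = deadline - now
    # Crude estimate: coalition size at the midpoint of the window.
    expected_peers = predict_size(n0, growth, remaining / 2)
    expected_p2p = p2p_rate_per_peer * expected_peers * remaining
    return max(0.0, file_chunks - expected_p2p)

# E.g. a 100-chunk file, 2 peers, static coalition, 10 s to deadline:
needed = cellular_share(100, 1.0, 2, 0.0, 10, 0)
```

The larger and longer-lived the predicted coalition, the smaller the cellular share, which is exactly the trade-off the protocol exploits.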

The second paper in this section is “ROCRSSI++: An Efficient Localization Algorithm for Wireless Sensor Networks,” authored by F. Frattini, C. Esposito, and S. Russo. Here the authors investigate optimal localization of sensors in a wireless sensor network. Several important non-functional requirements are typical of this problem, including energy efficiency, high performance, and low error rates, but currently available algorithms typically tackle only a few of them. Improving such algorithms is key to achieving effectiveness in real-life scenarios. The authors’ answer to this problem is an improved version of the ROCRSSI algorithm consisting of better management of the information produced and consumed by the sensors. The new version, ROCRSSI++, is shown to reduce localization errors. The paper also describes experiments and reports on the new algorithm’s energy consumption and localization latency.

Situation awareness (Ye, Dobson & McKeever, 2012) is an important prerequisite for efficient and effective performance. An interesting example of this is reported in the paper “Load-Balanced Multiple Gateway Enabled Wireless Mesh Network for Applications in Emergency and Disaster Recovery,” by M. Iqbal, X. Wang, and H. Zhang. The authors observe how the performance of wireless mesh networks is strongly affected by exceptional situations such as crisis management in a disaster-stricken area. In such cases a typical group behavior arises in which large amounts of data traffic travel in the same direction to access the same destination, namely the gateway node of the network — the node acting as a bridge between the mesh nodes and the external network. In such situations the gateway node becomes a bottleneck, which translates into chaotic behaviors, strong channel contention among nodes in proximity of each other, and therefore congestion. Redundant gateways are the answer to this problem proposed by the authors. This is achieved through a Load-Balanced Gateway Discovery routing protocol called LBGD-AODV, which extends the Ad hoc On-Demand Distance Vector routing protocol and uses a periodic gateway-advertisement scheme with an efficient algorithm that load-balances the routes to the gateway nodes. The authors show that their approach reduces congestion and improves the performance of the network with no penalty with respect to the original routing scheme.

Again, situation awareness (Ye, Dobson & McKeever, 2012) and the corresponding optimizations are the key topic of the next paper in this section, which is entitled “An OMA DM Based Framework for Updating Modulation Module for Mobile Devices” and authored by H. Zhang, X. Wang, and M. Iqbal. Depending on the current situation and context, the optimal operation of mobile devices calls for specific adaptations. One such adaptation is reported in this paper: here it is the condition of the channel that is used to select dynamically among multiple modulation approaches. The corresponding updating protocol is based on OMA DM. The authors describe their protocol as well as the design of a framework to enable its use.

Often algorithms neglect the effects of the interference produced by the concurrent execution of other tasks. An example is given by algorithms for measuring the duty cycle of sensor-originated waveforms. When executed on resource-constrained microprocessors, it becomes very difficult to hold to assumptions such as the possibility of sampling at regular intervals. Algorithms requiring such regularity would be “too rigid,” to the detriment of the expected accuracy. C. Taddia, G. Mazzini, and R. Rovatti answer this problem in their paper “Duty Cycle Measurement Techniques for Adaptive and Resilient Autonomic Systems.” There they propose a duty-cycle measurement algorithm designed to coexist adaptively with other concurrent tasks, regardless of their nature and duration. Sampling at non-controlled instants can then be regarded as a common event rather than a fault. In this sense, in the terminology introduced in (De Florio, 2010), the solution proposed by the authors constitutes an example of an algorithm tolerant of assumption failures. The algorithm is shown to result in lightweight code with fast and reliable convergence to the duty-cycle value.
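Why irregular sampling need not be a fault can be seen with a small simulation; the sketch below is my own illustration under a simplifying assumption (sampling instants uncorrelated with the waveform), not the authors’ algorithm. Under that assumption the fraction of “high” samples is still an unbiased duty-cycle estimate, whatever jitter concurrent tasks introduce.

```python
import random

# Estimate the duty cycle of a square wave from level readings taken
# at arbitrary (jittered, non-equispaced) instants. If the instants
# are uncorrelated with the waveform, the high-sample fraction still
# converges to the true duty cycle.

def duty_cycle_estimate(samples):
    """samples: iterable of 0/1 level readings at arbitrary instants."""
    samples = list(samples)
    return sum(samples) / len(samples)

# Simulate a 30% duty-cycle wave (period 1.0 s, high for 0.3 s),
# sampled at 20000 random instants spread over ~1000 s.
random.seed(42)
period, high_time = 1.0, 0.3
readings = [1 if (random.uniform(0, 1000) % period) < high_time else 0
            for _ in range(20000)]
estimate = duty_cycle_estimate(readings)
```

The estimate lands close to 0.3 despite the complete absence of regular sampling, which is the intuition behind treating scheduling interference as normal operation rather than as an error.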

As a final message before leaving you to the fine contributions in this book, I would like to focus once more on the complex and intertwined relationships that characterize entelechial (that is, adaptive and resilient) systems. As suggested by Boulding (1956), mastering such relationships is likely to call for two complementary approaches:

• Extending the research scope to multidisciplinary, indeed interdisciplinary, domains, as is the case for the study of complex socio-ecological systems, and
• Narrowing down our models to some shared nucleus of ideas and concepts capable of “directing research towards the gaps which they reveal” — that is, towards what Boulding referred to as gestalts (Boulding, 1956).

Both IJARAS and this book are in fact meant to serve as conceptual and practical tools to aid in the above processes. This goal is pursued by providing researchers and practitioners with a venue designed specifically 1) to disseminate complementary views on a set of problems at the core of the disciplines that study the behaviors of adaptive and resilient systems, and 2) to expose theories and approaches aiming to capture the hidden structures (Holland, 1995) at the heart of those disciplines. The high quality of the papers submitted to IJARAS — a remarkable example of which can be found in this very book — as well as the high significance of their scientific innovations and contributions are for me a clear indication that we “are moving in the right direction” — which, incidentally, is an alternative concise definition of entelechy.

Vincenzo De Florio
University of Antwerp & IBBT, Belgium
May 9, 2012


Aristotle. (1986). De anima (On the Soul) (H. Lawson-Tancred, Trans.). Penguin classics. Penguin Books.

Boulding, K. (1956, April). General systems theory—The skeleton of science. Management Science, 2(3). doi:10.1287/mnsc.2.3.197

De Florio, V. (2009). Application-layer fault-tolerant protocols. Hershey, PA: IGI Global. doi:10.4018/978-1-60566-182-7

De Florio, V. (2010). Software assumptions failure tolerance: Role, strategies, and visions. In Casimiro, A., de Lemos, R., & Gacek, C. (Eds.), Architecting Dependable Systems VII, Lecture Notes in Computer Science (Vol. 6420, pp. 249–272). Berlin, Germany: Springer. doi:10.1007/978-3-642-17245-8_11

De Florio, V. (2012a). Preface . In De Florio, V. (Ed.), Technological innovations in adaptive and dependable systems: Advancing models and concepts (pp. 1–425). Hershey, PA: IGI Global.

De Florio, V. (2012b). On the constituent attributes of software resilience. Submitted for publication in “Assurances for Self-Adaptive Systems,” Lecture Notes in Computer Science, State-of-the-Art series. Springer.

De Florio, V., et al. (1997). An application-level dependable technique for farmer-worker parallel programs. Proceedings of the High-Performance Computing and Networking International Conference and Exhibition (HPCN Europe 1997), Lecture Notes in Computer Science, Vol. 1225, Vienna, Austria (pp. 644–653). Berlin, Germany: Springer.

De Florio, V. (1998). Software tool combining fault masking with user-defined recovery strategies. IEE Proceedings on Software: Special Issue on Dependable Computing Systems, 145(6), 203–211. doi:10.1049/ip-sen:19982441

De Florio, V., & Blondia, C. (2007). Reflective and refractive variables: A model for effective and maintainable adaptive-and-dependable software. Proceedings of the 33rd EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA 2007), Lübeck, Germany. doi: 10.1109/

Gheorghita, S. V., Palkovic, M., Hamers, J., Vandecappelle, A., Mamagkakis, S., & Basten, T. (2009). System-scenario-based design of dynamic embedded systems. ACM Transactions on Design Automation of Electronic Systems, 14, 1–45. doi:10.1145/1455229.1455232

Gui, N., De Florio, V., Sun, H., & Blondia, C. (2009). ACCADA: A framework for continuous context-aware deployment and adaptation. Proceedings of the 11th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2009), Lecture Notes in Computer Science, Vol. 5873, Lyon, France, November 2009 (pp. 325–340). Springer.

Gui, N., De Florio, V., Sun, H., & Blondia, C. (2011). Toward architecture-based context-aware deployment and adaptation. Journal of Systems and Software, 84(2), 185–197. Elsevier.

Holland, J. H. (1995). Hidden order: How adaptation builds complexity. Addison-Wesley.

Johnson, B. W. (1989). Design and analysis of fault-tolerant digital systems. New York, NY: Addison-Wesley.

Leveson, N. G. (1995). Safeware: Systems safety and computers. Addison-Wesley.

Lin, S., Jiang, S., Lin, H., & Liu, J. (2006). An introduction to OMA device management. Retrieved from

Sachs, J. (1995). Aristotle’s physics: A guided study. Masterworks of Discovery. Rutgers University Press.

Stark, D. C. (1999). Heterarchy: Distributing authority and organizing diversity. In Clippinger, J. H., III (Ed.), The biology of business: Decoding the natural laws of enterprise (pp. 153–179). Jossey-Bass.

Sun, H., Gui, N., & Blondia, C. (2011). A generic adaptation framework for mobile communication. International Journal of Adaptive, Resilient and Autonomic Systems, 2(1), 46–57. doi:10.4018/jaras.2011010103

Ye, J., Dobson, S., & McKeever, S. (2012). Situation identification techniques in pervasive computing: A review. Pervasive and Mobile Computing, 8(1), 36–66. doi:10.1016/j.pmcj.2011.01.004

1. OMA, that is, the Open Mobile Alliance, is an initiative started in 2002 grouping a large set of companies in the mobile industry sector (Lin, Jiang, Lin, & Liu, 2006). OMA’s objective is the development of interoperable mobile service enablers able to facilitate Device Management (DM). Self-management, self-optimization, self-diagnosis, and self-healing are typical OMA DM design goals.
2. Using Markov models, under the assumption of independence between occurrences of faults, it is possible to show that if R(t) is the reliability of a single, non-replicated component, then the reliability of a triple-modular redundant composite is equal to R_TMR(t) = 3R²(t) − 2R³(t). This in particular means that the reliability of the composite is larger than that of its constituents only when the latter is greater than 0.5.
3. If C is the coverage, that is, the probability associated with the process of identifying the failed module out of those available and being able to switch in the spare, then the reliability of TMR-plus-one-spare can be expressed as a function of the reliability R(t) of a single, non-replicated constituent as R_TMR+SPARE(t) = (−3C² + 6C) × R²(t)(1 − R(t))² + R_TMR(t). Note that in this case the reliability of the composite is larger than that of its constituents when the latter exceeds a given threshold (De Florio et al., 1998).
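These reliability expressions are easy to check numerically. The sketch below uses R_TMR(t) = 3R²(t) − 2R³(t) and reads the TMR-plus-one-spare polynomial as (6C − 3C²)R²(1 − R)² + R_TMR; the latter reading is my reconstruction of the note above and should be checked against (Johnson, 1989).

```python
def r_tmr(r):
    """Reliability of a triple-modular-redundant (TMR) composite built
    from independent components of reliability r."""
    return 3 * r**2 - 2 * r**3

def r_tmr_spare(r, c):
    """TMR plus one spare, where c is the coverage: the probability of
    identifying the failed module and switching in the spare.
    (Polynomial as reconstructed from the note above.)"""
    return (6 * c - 3 * c**2) * r**2 * (1 - r)**2 + r_tmr(r)

# TMR outperforms a single component only when r exceeds 0.5:
print(r_tmr(0.4) < 0.4, r_tmr(0.6) > 0.6)   # True True
# With full coverage (c = 1) the spare can only help:
print(r_tmr_spare(0.9, 1.0) > r_tmr(0.9))   # True
```

At r = 0.5 the two curves cross exactly (r_tmr(0.5) = 0.5), which is why redundancy pays off only for sufficiently reliable constituents.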