International Journal of Critical Computer-Based Systems (12 papers in press)
Using Temporal Logics for Specifying Weak Memory Consistency Models
by Maximilian Senftleben, Klaus Schneider
Abstract: The formal verification of multithreaded programs is difficult not only because of their concurrent behaviours, but also because of the underlying weak memory consistency models. Weak memory models arise from techniques such as store buffering that were introduced to increase performance. However, all of these techniques weaken memory consistency and may result in unintuitive behaviours where processors disagree on the order in which write operations occurred. Verification therefore requires unambiguous and complete specifications of such memory consistency models. In the past, specifications based on different formalisms have been presented, but they often lacked comparability and direct usability for model checking. In this paper, we therefore introduce the use of temporal logic to describe the behaviour of memory systems. In particular, we use Linear Temporal Logic (LTL) to define the weak memory models. Thereby, we can easily check the properties of a multithreaded program against several different consistency models and determine the weakest consistency guarantees required to fulfill the given specification.
Keywords: weak memory model; memory model; weak memory consistency; memory consistency; weak consistency; memory specification; specification; temporal logic; model checking; verification; LTL.
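To give a flavour of such LTL-based memory specifications, a program-order constraint could be phrased with a past-time operator as below. The propositions and encoding are illustrative assumptions only, not necessarily the formulation used in the paper.

```latex
% Illustrative sketch: if write w_1 precedes w_2 in program order, then
% w_2 may only become visible after w_1 has become visible.
% \mathbf{O} is the past-time "once" operator; commit(w) marks visibility of w.
\mathbf{G}\,\bigl(\mathit{commit}(w_2) \rightarrow \mathbf{O}\,\mathit{commit}(w_1)\bigr)
```

A weak memory model would be specified by stating which such constraints it guarantees; dropping the one above, for instance, admits the write reorderings typical of store buffering.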
Formal probabilistic performance verification of randomly-scheduled wireless sensor networks
by Maissa Elleuch, Osman Hasan, Sofiène Tahar, Mohamed Abid
Abstract: Energy efficiency in wireless sensor networks (WSNs) is one of the most critical issues regardless of the target application. While scheduling sensors by partitions to preserve energy is a simple and intuitive approach in this context, it is also important not to compromise the main performance requirements of the considered application. For mission-critical WSN applications, different quality of service (QoS) requirements on network performance have to be met. Besides, various assumptions may significantly impact the sensing performance of the network. Nevertheless, most analysis techniques focus on average performance values and consider neither the targeted QoS requirements nor the probabilistic nature of the algorithm. Based on the theorem proving approach, we first provide, in this paper, an accurate formal analysis of the network lifetime maximisation problem, under QoS constraints, for randomly-scheduled wireless sensor networks. After that, we tackle the higher-order-logic formalisation of the intrusion coverage intensity for a modified version of the randomised scheduling, with more realistic assumptions on the intrusion object, in a two- or three-dimensional space.
Keywords: theorem proving; wireless sensor networks; WSN; node scheduling; performance analysis; network lifetime; intrusion coverage.
Precise Use Cases in a Context-aware Model-Checking Approach
by Amel Benabbou, Safia Nait-Bahloul, Philippe Dhaussy
Abstract: Formal verification offers well-known benefits but comes at the price of formalizing precise and sound requirements, which often remains a challenging task for engineers. We propose a high-level formalism for expressing requirements based on interaction overview diagrams, which orchestrate activity diagrams automatically derived from use cases. Informal requirements are transformed into scenarios that feed a context-aware model-checking approach. The approach assumes the availability of a formal model of the system, such as concurrent and communicating automata, and a set of requirements specified in the form of contexts, which delineate the boundary between the system and its environment. The requirement specification approach blends elaboration and transformation phases. Thanks to this blending, the semantic gap between informal and formal requirements is reduced, while model-checking is improved by context modelling. As a consequence, engineers gain support for moving towards formal verification.
Keywords: Model-checking; context; use cases; interaction overview diagram; context-aware verification.
Performance Analysis of the M/G/c/c+r Queuing System for Cloud Computing Data Centers
by Assia Outamazirt
Abstract: In this paper, we propose the M/G/c/c+r queuing system as a model for performance evaluation of cloud server farms. The analytical resolution of this queuing system remains, to this day, an open and challenging issue because an exact analytical solution is difficult to reach. Therefore, we provide new approximate formulas to compute the transition-probability matrix of this system. In order to examine the accuracy of our approximate formulas, we test them numerically on several examples. Then, we compute the steady-state probabilities and performance indicators such as the blocking probability, mean response time, probability of immediate service and delay probability.
Keywords: Cloud Computing; Performance Analysis; M/G/c/c+r Queue; Embedded Markov Chain; Transition-probability Matrix.
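For orientation, the fully Markovian special case M/M/c/c+r does admit a standard closed-form solution, sketched below; the paper's subject is the general M/G/c/c+r system, for which no such exact solution is available. The function name and parameter values here are ours, purely for illustration.

```python
from math import factorial

def mm_c_r_probabilities(lam, mu, c, r):
    """Steady-state probabilities and blocking probability of an
    M/M/c/c+r queue (Markovian special case, illustration only):
    lam = arrival rate, mu = service rate per server,
    c = number of servers, r = buffer slots."""
    a = lam / mu               # offered load
    K = c + r                  # total system capacity
    # Unnormalised birth-death state weights pi_n
    terms = []
    for n in range(K + 1):
        if n <= c:
            terms.append(a**n / factorial(n))
        else:
            terms.append(a**n / (factorial(c) * c**(n - c)))
    p0 = 1.0 / sum(terms)
    probs = [p0 * t for t in terms]
    blocking = probs[K]        # an arrival is lost when all c+r places are busy
    return probs, blocking

# Hypothetical data-centre parameters: 10 servers, 5 buffer slots
probs, pb = mm_c_r_probabilities(lam=8.0, mu=1.0, c=10, r=5)
```

The blocking probability `pb` is the last steady-state probability; other indicators (mean response time via Little's law, delay probability as the mass on states `n >= c`) follow from the same vector.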
Special Issue on: VECoS'2015/16 Modelling and Verification
A Model Based Approach to Combine Conformance and Load Tests: An eHealth Case Study
by Moez Krichen, Afef Jmal, Mariam Lahami
Abstract: In this paper, we propose a new model-based framework that combines both conformance and load tests in the context of real-time systems. Our framework is based on the model of extended timed automata with inputs/outputs and shared integer variables. In addition, we define a new extended timed input-output conformance relation, etioco. The latter makes it possible to compare a given implementation with its specification under load conditions. We also discuss some modelling issues and provide a new technique for deriving analog-clock tests from the specification of the System Under Test (SUT). In order to show the relevance of our approach, we report on a critical case study from the healthcare field. An important contribution of this work is the use of a rich formalism to model multi-user systems and the combination of conformance and load tests in the same model. This point constitutes an important testing area that is usually misunderstood.
Keywords: Model-Based Software Testing; Conformance/Load Tests; Extended Timed Automata; Test Generation; Real-Time Systems.
Performance Evaluation of Stochastic Real-Time Systems with the SBIP Framework
by Ayoub Nouri, Braham Lotfi Mediouni, Marius Bozga, Jacques Combaz, Saddek Bensalem, Axel Legay
Abstract: The SBIP framework consists of a stochastic real-time component-based modelling formalism and a statistical model checking engine. The former is built as a stochastic extension of the real-time BIP formalism and enables the construction of stochastic real-time systems in a compositional way. The statistical engine implements a set of statistical algorithms for the quantitative and qualitative assessment of probabilistic properties. The paper provides a thorough introduction to the SBIP formalism and the associated verification method. In a second part, it surveys several case studies on the modelling and verification of real-life systems, including various network protocols, among others.
Keywords: Stochastic Systems; Real-time; Component-based; Formal Models; Generalised Semi-Markov Process; GSMP; BIP; Statistical Model Checking; SMC; Performance Evaluation.
Model-based Specification and Validation of the Dual-Mode Adaptive MAC Protocol
by Admar Ajith Kumar Somappa, Andreas Prinz, Lars M. Kristensen
Abstract: Wireless Sensor and Actuator Networks (WSANs) rely on MAC protocols to coordinate access to the wireless medium and to manage the radio unit on each device. The Dual-Mode Adaptive MAC (DMAMAC) protocol is a recently proposed protocol designed to reduce the energy consumption of radio communication in WSANs. The DMAMAC protocol targets industrial WSANs used for real-time process control. At its core, DMAMAC exploits the distinction between transient and steady states of the controlled plant process to dynamically adapt the MAC superframe structure and thereby conserve energy. The switch between steady and transient modes of operation is a safety-critical part of the protocol. The contribution of this paper is to develop a rigorous specification of the DMAMAC protocol using timed automata and the supporting Uppaal software tool. The Uppaal tool is also used to verify key functional and real-time properties of the protocol. Finally, we have used execution sequences from the formal specification model to validate an OMNET simulation model used for performance evaluation of DMAMAC, and to validate a nesC implementation of the protocol on the TinyOS platform.
Keywords: Verification; Validation; Simulation; Deployment; Industrial Wireless Sensor Network; MAC protocol; Uppaal.
Fault Diagnosis of Discrete-Event systems Based on the Symbolic Observation Graph
by Abderraouf Boussif, Mohamed Ghazel, Kais Klai
Abstract: Fault diagnosis of Discrete-Event Systems (DESs) has received a lot of attention in industry and academia during the last two decades. In DES-based diagnosis, the two main topics discussed are offline diagnosability analysis and online diagnosis. A pioneering approach that led to the development of various techniques is based on the so-called diagnoser, a deterministic automaton used both for diagnosability analysis and, if the system is diagnosable, for online diagnosis. However, this approach suffers from combinatorial explosion due to the exponential complexity of the diagnoser construction. Thus, the diagnoser approach is no longer adequate when dealing with large, complex systems. To partially overcome this problem, an efficient approach for constructing a symbolic diagnoser is proposed in this paper. The proposed approach consists in constructing a diagnoser based on the Symbolic Observation Graph (SOG), which combines symbolic and enumerative representations in order to build a deterministic observer from a partially observed model. The symbolic observation graph was first used for formal verification with event-based model-checking, as an efficient alternative to the Kripke structure. Besides, the construction of the diagnoser and the verification of diagnosability are performed simultaneously on the fly, which can considerably reduce the generated state space of the diagnoser and thus the overall running time. In order to evaluate the efficiency and scalability of the proposed approach, experimental results and comparisons with other existing methods are presented and discussed on the basis of a DES benchmark.
Keywords: Discrete-Event Systems; Diagnosability Analysis; Symbolic Observation Graph; On-the-Fly Verification.
Formal Verification of Intermittent Fault Diagnosability of Discrete-Event Systems Using Model-Checking
by Abderraouf Boussif, Mohamed Ghazel
Abstract: Fault diagnosis of complex and dynamic systems is a crucial and challenging task, especially with respect to guaranteeing the reliable, safe and efficient operation of such systems. Most research in this field has focused on permanent failure diagnosis, i.e., once a fault occurs, the system remains faulty indefinitely. However, experience with fault diagnosis in real-life systems shows that intermittent faults, i.e., faults that can be automatically recovered, are predominant and are among the most challenging kinds of faults to detect and isolate. In this paper, we address the formal verification of intermittent fault diagnosability in discrete-event systems. The system is modeled by a finite state automaton, and intermittent faults are defined as faults that can automatically recover once they have occurred. Two definitions of diagnosability, regarding the detection of a fault occurrence within a finite delay and the detection of a fault occurrence before its recovery, are discussed. Diagnosability is analyzed on the basis of the twin-plant structure, which is encoded as a Kripke structure, while diagnosability conditions are formulated in LTL temporal logic.
Keywords: Discrete-Event Systems; Fault Diagnosis; Diagnosability Analysis; Intermittent Faults; Model-Checking.
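For context, twin-plant diagnosability conditions typically have the following flavour when expressed as an LTL model-checking query; the propositions $f_L$, $f_R$ (a fault has occurred in the left/right copy of the plant) are our notation, and the paper's exact formulation may differ.

```latex
% The system is diagnosable iff the twin plant admits no "critical" run
% in which one copy becomes faulty while its observationally equivalent
% twin never does:
\mathcal{K}_{\mathrm{twin}} \models \neg\bigl(\mathbf{F}\, f_L \;\wedge\; \mathbf{G}\, \neg f_R\bigr)
```

A counterexample returned by the model checker is then exactly a pair of indistinguishable runs, one faulty and one fault-free, witnessing non-diagnosability.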
Special Issue on: Methods and Tools for Assurance of Critical Infrastructure Protection
A Versatile Approach for ranking and modelling of Non Functional Requirements
by Harsimran Kaur
Abstract: Requirement Engineering plays a very significant role in effectively encoding the domain knowledge of customers and the implementation strategies of developers in Critical Computer-Based Systems (CBS). For the development of quality software for CBS, it is necessary to specify both Functional and Non-Functional Requirements (NFRs) for the proposed software. However, unlike functional requirements, NFRs are not given much importance and are often captured only generically, which results in missing quality parameters during CBS development. This lacuna can be addressed by including details regarding NFRs at the initial stages of CBS development. On the other hand, the subjective nature of NFRs and the complex interconnections among them make it quite unrealistic to concentrate on every single NFR. This complexity can be handled by using the Interpretive Structural Modelling (ISM) and Analytical Hierarchy Process (AHP) methods to identify critical NFRs. Further, new artifacts are introduced in the Use Case Diagram and the Reference Model to document and validate the identified critical NFRs. The proposed work determines the degree of mutual reliance between NFRs so that software analysts can address all NFR-related concerns during the initial phases of software development. To establish the credibility of the proposed work, each proposed artifact in the Software Requirement Specification (SRS) document was first analysed individually and then compared with other established and prominent methodologies. By means of cognitive dimension analysis and complexity measures, it was established that the proposed artifact not only treats NFRs both informally and formally but also makes them viable, traceable and visible to different classes of users for analysis.
Keywords: Non-Functional Requirements (NFR); Software Requirement Specification (SRS); Use Case; Reference Model; Interpretive Structural Modelling; Formal Modelling.
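As an illustration of the AHP ranking step, the priority weights of a set of NFRs can be computed from a pairwise-comparison matrix via its principal eigenvector. The matrix values and the three example NFRs below are hypothetical, not taken from the paper.

```python
# Hypothetical pairwise-comparison matrix (Saaty's 1-9 scale) over three
# example NFRs: security, performance, usability. A[i][j] says how much
# more important criterion i is than criterion j.
A = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]

def ahp_weights(matrix, iters=100):
    """Approximate the principal eigenvector of the comparison matrix by
    power iteration; normalised, it gives the AHP priority weights."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

weights = ahp_weights(A)   # security ranks highest with this matrix
```

The resulting ranking is what lets an analyst focus on the critical NFRs first; ISM would then be applied to expose the dependency structure among them.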
Formal Methods in Dynamic Software Updating: A Survey
by Razika Lounas, Mohamed Mezghiche, Jean-Louis Lanet
Abstract: Dynamic Software Updating (DSU) consists in updating running programs on the fly, without the downtime that would lead to system unavailability. The use of DSU in critical applications raises several issues related to update correctness. Indeed, an erroneous dynamic update may introduce safety vulnerabilities and security breaches. In this perspective, the use of formal methods has gained considerable interest, since they respond to the high degree of rigour required by such applications. Several frameworks were developed, first to express update correctness, which is based on several criteria; the proposed formalisms are then used to specify DSU systems, express correctness criteria and establish them. In this paper, we present a review of research on the application of formal methods to DSU systems. We give a classification of systems according to the paradigms of the programming languages involved, then explain the correctness criteria and categorise the articles according to the formalisation approaches used to establish correctness. This information is useful to help ongoing research by providing an overview of the application of formal methods in DSU.
Keywords: Dynamic software updating; formal methods; correctness criteria; critical systems; systems safety.
A Fully Encrypted High-Speed Microprocessor Architecture: The Secret Computer in Simulation
by Peter Breuer, Jonathan Bowen
Abstract: The architecture of an encrypted high-performance microprocessor designed on the principle that a nonstandard arithmetic generates encrypted processor states is described here. Data in registers, in memory and on buses exists in encrypted form. Any block encryption is admissible in the architecture.

An encrypted version of the standard OpenRISC instruction set is understood by the processor. It is proved here, for programs written in a minimal subset of instructions, that the platform is secure against `Iago' attacks, in that the privileged operator or a subverted operating system cannot decrypt the program output, nor change the program's output to any designated value.

Performance measures from cycle-accurate behavioural simulation of the platform are given for 64-bit RC2 (symmetric, keyed) and 72-bit Paillier (asymmetric, additively homomorphic, no key in-processor) encryptions. Measurements are centred on a nominal 1GHz clock with 3ns cache and 15ns memory latency, which is conservative with respect to available technology.
Keywords: Computer Security; Encrypted Computation.