International Journal of Reliability and Safety (11 papers in press)
A scoping review on role of communication media for effective occupational safety and health awareness and training
by Sangeeta Bhanja Chaudhuri, Manoj Majhi, Sougata Karmakar
Abstract: Communication media (CM) play a significant role in occupational safety and health (OSH) awareness and training for employees of organisations. As there is no guideline for selecting an appropriate CM in a given context, the superiority of one CM over another for a specific OSH awareness and training (OSH-AT) programme is judged solely by individual experience and judgement. Hence, the present review aims to portray the advantages and disadvantages of different CM, their role in effective OSH awareness and training, and the factors that determine the advantage of one CM over others in a specific scenario. The article highlights the research lacunae in the domain of CM's role in effective OSH, based on peer-reviewed articles retrieved from Web of Science, Scopus, and Google Scholar, following the PRISMA-ScR model for the review. The current review is expected to motivate and assist OSH professionals in selecting an appropriate CM for delivering successful OSH awareness and training.
Keywords: communication media; effective OSH; OSH awareness; OSH training; occupational safety and health; motivate and assist; awareness and training; reliability; safety.
Investigation of reliability analysis of shallow foundation by combining Hasofer-Lind index, response surface methodology and multi-objective genetic algorithm
by Brahim Lafifi, Ammar Rouaiguia
Abstract: In this research work, a reliability analysis of a shallow foundation subjected to a central vertical load is presented, using the classical limit equilibrium theory and the Hasofer-Lind reliability approach. A deterministic model represented by the Meyerhof equations and the Holtz formula is used to compute the ultimate bearing capacity and the settlement of the foundation. The major difficulty of such problems lies in the evaluation of the performance function, which requires complicated calculations and considerable computation time. To overcome this constraint, the concepts of design of experiments and the response surface method (RSM) can be used to evaluate the performance functions by generating approximate polynomial equations. In this study, the unit weight, cohesion, and friction angle of the soil were considered as random variables for the case of bearing capacity, whereas Young's modulus and Poisson's ratio were considered for the case of settlement. The evaluation of the Hasofer-Lind reliability index and the failure probability is performed through the optimisation of a constrained response surface model, using a multi-objective genetic algorithm. From the results obtained in terms of reliability index and design points, the adopted method shows satisfactory agreement with other methods applicable to reliability problems.
Keywords: reliability analysis; shallow foundation; RSM; optimisation; genetic algorithm.
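The Hasofer-Lind index that the paper evaluates via a constrained response-surface optimisation has a simple geometric reading: in standard normal space it is the distance from the origin to the nearest point on the limit-state surface. A minimal sketch using the classic HL-RF iteration on a hypothetical linear limit state (the variables and values below are illustrative, not taken from the paper):

```python
import math

def hl_rf(g, n, tol=1e-8, max_iter=100):
    """Hasofer-Lind reliability index via the HL-RF iteration in
    standard normal space: beta = ||u*|| at the design point."""
    u = [0.0] * n
    for _ in range(max_iter):
        gu = g(u)
        h = 1e-6
        grad = []
        for i in range(n):  # forward-difference gradient of g
            up = list(u)
            up[i] += h
            grad.append((g(up) - gu) / h)
        norm2 = sum(gi * gi for gi in grad)
        # project onto the linearised limit state g(u) = 0
        c = (sum(gi * ui for gi, ui in zip(grad, u)) - gu) / norm2
        u_new = [c * gi for gi in grad]
        if max(abs(a - b) for a, b in zip(u, u_new)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(ui * ui for ui in u))
    return beta, u

# hypothetical linear limit state: capacity R ~ N(200, 20), load S ~ N(150, 15)
g = lambda u: (200 + 20 * u[0]) - (150 + 15 * u[1])
beta, u_star = hl_rf(g, 2)
pf = 0.5 * (1 + math.erf(-beta / math.sqrt(2)))  # failure probability Phi(-beta)
```

For the nonlinear performance functions in the paper, the same iteration applies with the RSM polynomial substituted for g.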
An approach to the application of the Dempster–Shafer theory in the megacity community security risk assessment
by Ke Yin, Haiyun Zhou, Jimin Chen, Chen Zhao
Abstract: Through the study of community security risks in a megacity by the application of the Dempster–Shafer theory, this paper develops an indicator system for the assessment of urban community security based on a combined array of both static and dynamic factors, including natural disasters, man-made hazards, community administration and security support. A model is then constructed to better address problems such as content inadequacy, insufficient indicators, and model inaccuracy that are frequently found in former assessments of static and dynamic factors made by others. The model is applied in a case study of a megacity. The findings show that the newly constructed model yields more effective improvements in urban community security risk management.
Keywords: risk assessment; modern community security; megacities; Dempster–Shafer theory.
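Dempster's rule, on which the assessment model rests, combines two independent bodies of evidence by multiplying masses on intersecting focal elements and renormalising by the non-conflicting mass. A minimal sketch over a hypothetical two-level risk frame (the mass values are illustrative only):

```python
def combine(m1, m2):
    """Dempster's rule of combination for mass functions whose
    focal elements are frozensets over a common frame."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    k = 1.0 - conflict  # renormalise by the non-conflicting mass
    return {s: v / k for s, v in fused.items()}

LOW, HIGH = frozenset({'low'}), frozenset({'high'})
EITHER = LOW | HIGH
# two hypothetical indicator groups reporting on community risk level
m_static = {LOW: 0.6, EITHER: 0.4}
m_dynamic = {LOW: 0.5, HIGH: 0.3, EITHER: 0.2}
m = combine(m_static, m_dynamic)
```

Combining the two bodies of evidence sharpens the belief committed to 'low' risk while discounting the 18% of mass that the sources assign to conflicting outcomes.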
Optimal allocation of the outage centres in an electrical distribution system: the strategic method for utilities in order to improve power system reliability and reduce customer service outage cost
by Arman Alimardani, Reza Dashti
Abstract: In this paper, an 'outage centre' is defined as a building from which repair crews are dispatched to the failure location in order to restore power. This paper proposes a practical model for the optimal allocation of outage centres to minimise outage costs and improve system reliability simultaneously. A stepwise approach to obtaining the optimum site of an outage centre in a work location is presented. Finally, the implementation of the proposed model is discussed on a sample network of 81 buses. A sensitivity analysis of the indices with respect to the model parameters is also carried out and discussed.
Keywords: optimal allocation of outage centres; outage centre; travel time; electricity distribution company.
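The allocation task described above is, at its core, a facility-location problem: choose outage-centre sites so that the failure-rate-weighted travel distance from each load point to its nearest centre is minimised. A brute-force sketch for small instances (all site names, failure rates and distances below are hypothetical, not the paper's data or method):

```python
from itertools import combinations

def total_cost(centres, loads, dist):
    """Expected outage cost: each load point is served by its
    nearest centre, weighted by the point's failure rate."""
    return sum(rate * min(dist[(c, p)] for c in centres)
               for p, rate in loads.items())

def best_sites(candidates, loads, dist, k):
    """Exhaustively pick the k candidate sites with minimum total cost."""
    return min(combinations(candidates, k),
               key=lambda cs: total_cost(cs, loads, dist))

candidates = ['A', 'B', 'C']      # hypothetical candidate buildings
loads = {1: 2.0, 2: 1.0, 3: 3.0}  # bus -> expected failures per year
dist = {('A', 1): 1, ('A', 2): 5, ('A', 3): 4,
        ('B', 1): 4, ('B', 2): 1, ('B', 3): 5,
        ('C', 1): 6, ('C', 2): 6, ('C', 3): 1}
best = best_sites(candidates, loads, dist, 2)
```

Exhaustive search is exponential in the number of candidate sites; for a realistic 81-bus network a heuristic or mathematical-programming formulation would be needed.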
Performance assessment of a multi-state computer network system in series configuration using copula repair
by Praveen Kumar Poonia
Abstract: This paper analyses various reliability measures of a computer network system comprising two load balancers (LB), five web servers (WS), and three database replica servers (DBRS), modelled as a series-parallel system with four subsystems. Subsystems 1 and 3 are the load balancers, which act as traffic controllers placed just in front of a group of servers. Subsystem 2 consists of five identical web servers, which store, process, and deliver webpages to users, operating under a 2-out-of-5: G policy. Subsystem 4 consists of three database replica servers, dedicated to database storage and retrieval, operating under a 1-out-of-3: G policy. Failure rates of the LB/WS/DBRS units in all subsystems are constant and follow an exponential distribution, whereas repair follows two kinds of distribution, namely general and copula distributions. Using the transition diagram, first-order partial differential equations are developed and solved using supplementary variable techniques and the copula approach. The objective of the paper is to find the main reliability indexes, such as system availability, system reliability, mean time to failure and expected profit. The results are shown in tables and graphs, through which we display the effect of failure rate and repair rate on the reliability characteristics. Lastly, the analysis of results indicates that copula repair is more effective in terms of availability and expected profit.
Keywords: computer network; load balancer; reliability; catastrophic failure; Gumbel-Hougaard family copula distribution.
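The k-out-of-n: G policies in subsystems 2 and 4 have a simple steady-state reading when units are independent and identically repairable: a unit's availability is mu/(lambda + mu), and the subsystem is up when at least k units are up. A back-of-envelope sketch (the failure and repair rates are illustrative; the paper's copula-repair model couples repairs and is not reproduced here):

```python
from math import comb

def unit_availability(lam, mu):
    """Steady-state availability of one repairable unit with
    exponential failure rate lam and repair rate mu."""
    return mu / (lam + mu)

def k_out_of_n_availability(k, n, p):
    """System is up when at least k of n independent identical
    units, each available with probability p, are up."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_ws = unit_availability(0.01, 0.5)          # illustrative web-server rates
a_web = k_out_of_n_availability(2, 5, p_ws)  # 2-out-of-5: G subsystem
```

Even with modest unit availability, the redundancy of the 2-out-of-5 arrangement pushes subsystem availability very close to one, which is why the series bottleneck is typically elsewhere.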
Generalised rule induction based model for software fault prediction
by Ashutosh Mishra, Meenu Singla
Abstract: Software fault prediction has great importance during the maintenance and evolution of software. To upgrade the quality of software, it is necessary to predict the faulty software modules. Earlier research mainly focused on the binary classification of software class modules using different fault prediction techniques. However, much less work has been done on predicting the number of faults. In this study, a descriptive technique called Generalised Rule Induction (GRI), based on association rule mining, is proposed to identify the number of faults in a faulty class module. The proposed technique is implemented on five releases of the open source Apache Ant project, taken from the PROMISE repository. The results show that the rules generated for class modules containing a single fault achieve better accuracy with fewer rules.
Keywords: software fault prediction; dependent variable; independent variable; clustering; association rule mining.
A repairable second optional service queueing system with warm and cold standbys
by Pikkala Vijaya Laxmi, Girija Bhavani Edadasari
Abstract: In this paper, we analyse the availability of a second optional service queueing system with two kinds of standby servers: one warm-standby and the other cold-standby. First essential service (FES) is provided to all arriving customers, and only some of them may further demand second optional service (SOS). Both the operating and standby servers provide FES and SOS to the customers. Replacing a failed main operating server with the warm-standby server may not always be successful. Each time an operating server or a warm-standby server fails, it is immediately sent to a repair facility. A combined approach for the two models is presented through an indicator function. We derive the steady-state solution of the model using the matrix geometric method. Numerical results are presented in which several features of the system are evaluated for both models. A cost model is developed to determine the optimal service rates during the first essential and second optional services using particle swarm optimisation.
Keywords: queueing system; second optional service; warm-standby; cold-standby; matrix geometric method; particle swarm optimisation.
Reliability analysis using change-point concept and optimal version-updating for open source software
by Ajay Kumar
Abstract: Our main objective is to analyse the reliability of open source software (OSS) using a change-point concept and to estimate the optimal time for version updating of OSS. The focus is to provide an efficient parametric decomposition that results in an analytical expression for the mean value function for predicting OSS reliability, which software developers as well as software users can use to judge the reliability of OSS. A reliability growth model for OSS has been proposed on the basis of a modified NHPP, developed by incorporating the change-point concept and imperfect debugging with a constant fault reduction factor (FRF). Owing to the varying nature of bugs and frequent software releases in OSS, the fault detection process cannot be smooth. To overcome this non-smoothness, the change-point concept has been incorporated in the model to assess OSS reliability. Such a reliability growth model can further be applied to estimate the optimal time for version updating of OSS by considering two important factors in this context, namely the involvement of volunteers and newcomers, and reliability, to ensure the quality of the software using multi-attribute utility theory. The proposed model has been tested on datasets from public releases of Apache open-source software, and it has been found that the proposed model provides more accurate reliability estimation. Moreover, the proposed reliability growth model for OSS helps management determine the optimal version-updating time using multi-attribute utility theory. The proposed model is practically applicable since it treats the multiple releases of the software jointly while considering the reliability growth model of each release individually. Furthermore, multiple change points can be considered to make the model more accurate.
Keywords: open source software; fault reduction factor; change-point; imperfect debugging; software reliability growth model; multi-attribute utility theory; non-homogeneous Poisson process.
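The change-point idea in the proposed model can be illustrated with the simplest NHPP mean value function: a Goel-Okumoto-type curve whose fault detection rate switches from b1 to b2 at the change point tau (the parameters below are illustrative; the paper's model additionally folds in imperfect debugging and a constant FRF):

```python
import math

def mean_value(t, a, b1, b2, tau):
    """Expected cumulative faults detected by time t under a
    change-point NHPP: detection rate b1 before tau, b2 after.
    a is the expected total fault content."""
    if t <= tau:
        return a * (1 - math.exp(-b1 * t))
    # the exponent accumulates b1*tau before the change and b2*(t-tau) after,
    # so the curve stays continuous at t = tau
    return a * (1 - math.exp(-b1 * tau - b2 * (t - tau)))
```

The exponent is written so that the two branches agree at t = tau; the curve is continuous but its growth rate changes, which is what lets the model absorb the non-smooth fault detection caused by frequent OSS releases.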
A multi-criteria decision-making approach for optimal selection of software reliability growth models
by Aakash Gupta, Rakesh Garg
Abstract: The software industries are major stakeholders in the economy of any country. Most business processes are now implemented through software, both online and offline. Before this software is deployed and delivered to the client, it is essential for the solution provider to extensively check its reliability. Reliability is an important factor that directly affects the future of any software in the market. The reliability of software can be enhanced by the use of software reliability growth models (SRGMs). In the present research, a hybrid multi-criteria decision making (MCDM) approach, namely Entropy-Evaluation based on Distance from Average Solution (EEDAS), is proposed and implemented to rank 16 SRGMs on nine criteria using a failure dataset. Further, the EEDAS results are compared with two well-known methods, WEDBA and TOPSIS, and validated by Kendall's tau correlation test. The present results show that the Pham-Zhang IFD model is the best one, whereas the Gompertz model is the worst.
Keywords: software reliability; SRGMs; EDAS; multi-criteria decision making.
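The EDAS core of the proposed EEDAS approach scores each alternative by its positive and negative distances from the average solution on every criterion. A minimal sketch for benefit-type criteria (the decision matrix and weights below are hypothetical; the full method also derives the criterion weights via entropy):

```python
def edas_rank(matrix, weights):
    """EDAS appraisal scores for benefit criteria: higher is better.
    matrix is alternatives x criteria; weights sum to 1."""
    n_alt, n_cr = len(matrix), len(weights)
    av = [sum(row[j] for row in matrix) / n_alt for j in range(n_cr)]
    sp, sn = [], []
    for row in matrix:
        # weighted positive/negative distances from the average solution
        sp.append(sum(w * max(0.0, x - av[j]) / av[j]
                      for j, (x, w) in enumerate(zip(row, weights))))
        sn.append(sum(w * max(0.0, av[j] - x) / av[j]
                      for j, (x, w) in enumerate(zip(row, weights))))
    msp, msn = max(sp), max(sn)
    # normalise and average into a single appraisal score in [0, 1]
    return [0.5 * (s / msp + 1 - t / msn) for s, t in zip(sp, sn)]

# three hypothetical SRGMs scored on two benefit criteria (e.g. R^2, accuracy)
scores = edas_rank([[0.9, 0.8], [0.7, 0.9], [0.5, 0.6]], [0.5, 0.5])
```

Alternatives are then ranked by descending appraisal score; an SRGM that dominates the average on every criterion scores 1.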
A discrete modelling framework for fault prediction with logistic fault reduction factor
by Avinash K. Shrivastava, Ruchi Sharma
Abstract: Many software reliability growth models have been developed by considering constant and time-dependent (increasing and decreasing) fault reduction factors (FRF). All of these models were developed under the continuous-time approach. However, sometimes the appropriate unit of fault detection is the number of test runs or the number of test cases executed; software reliability growth models developed for such cases are termed discrete models. In this work, we develop two discrete software reliability growth models with FRF, considering perfect and imperfect debugging environments, and further extend the discrete modelling approach to multi-release software. The FRF is modelled using the logistic distribution. The results obtained for the proposed discrete FRF-based models are compared with existing discrete models, and the comparison shows that the proposed models fit better on the three datasets used for numerical illustration.
Keywords: software reliability; modelling; discrete; prediction; fault reduction factor.
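The discrete, logistic-FRF idea can be sketched as a recursion over test runs: at run n a logistic factor r(n) in (0, 1) scales the fraction b of the remaining faults (a - m(n)) that is removed. The parameters below are illustrative, and the paper's perfect/imperfect-debugging and multi-release variants are not reproduced:

```python
import math

def discrete_srgm(a, b, n_runs, k=1.0, x0=10.0):
    """Discrete fault-removal recursion with a logistic fault
    reduction factor: m(n+1) = m(n) + r(n) * b * (a - m(n)),
    where a is the total fault content and b the removal fraction."""
    m = [0.0]
    for n in range(n_runs):
        r = 1.0 / (1.0 + math.exp(-k * (n - x0)))  # logistic FRF in (0, 1)
        m.append(m[-1] + r * b * (a - m[-1]))
    return m

m = discrete_srgm(100.0, 0.1, 50)
```

Because r(n)*b stays below 1, each step removes only part of the remaining faults, so the cumulative curve is non-decreasing and saturates below the total fault content a; early runs remove little (small FRF), mirroring the slow start that motivates a logistic FRF.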
Code-based CRUD analysis for prioritising test cases
by Tomohiro Takeda, Satoshi Masuda, Tohru Matsuodani, Tsuyoshi Yumoto, Kazuhiko Tsuda
Abstract: When software is modified, an impact analysis is conducted to determine the effect of the modifications on other functions. However, current impact-analysis techniques cannot reliably identify all impacted functions, so comprehensive test cases are created to compensate. Impact analysis therefore faces the twin problems of increasing the true-positive ratio (implementations correctly identified as impacted) and reducing the false-positive ratio (implementations wrongly flagged as impacted). To address this, Impact-Data-All-Used (IDAU) can be used to create and prioritise test cases based on CRUD information contained in design documents. We herein propose code-based IDAU (CB-IDAU), which applies IDAU to the source code using control-graph and call-graph analysis. Based on a performance comparison of CB-IDAU with the previously proposed IDAU, we observed an increase in the true-positive value of 157% and a reduction in the false-positive value of 60% when the full-coverage-test performance was used as the benchmark.
Keywords: software testing; impact analysis; test case creation; call flow; control flow; data flow; testing automation; test case prioritisation; graph search; intermediate language.
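The call-graph side of this kind of analysis reduces to a reachability question: which functions transitively call a modified function, and hence which test cases deserve priority. A minimal sketch over a hypothetical call graph (the CRUD/data-flow tracking at the heart of IDAU is omitted):

```python
from collections import deque

def impacted(call_graph, changed):
    """Reverse-reachability over the call graph: every function that
    directly or transitively calls a changed function is impacted."""
    callers = {}  # invert the graph: callee -> set of callers
    for fn, callees in call_graph.items():
        for callee in callees:
            callers.setdefault(callee, set()).add(fn)
    seen, queue = set(changed), deque(changed)
    while queue:  # breadth-first walk up the caller chains
        f = queue.popleft()
        for caller in callers.get(f, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

# hypothetical call graph: main -> {a, b}, a -> {util}, b -> {util}
cg = {'main': {'a', 'b'}, 'a': {'util'}, 'b': {'util'}, 'util': set()}
result = impacted(cg, {'util'})
```

Test cases exercising any function in the returned set would be run first; the false-positive problem the paper targets arises because plain reachability over-approximates, flagging callers whose behaviour the change cannot actually affect.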