- Optimizing Structure of Parallel Homogeneous Systems under Attack
- KJELL HAUSKEN and GREGORY LEVITIN
- 2012, 8(1): 5-17. doi:10.23940/ijpe.12.1.p5.mag
A system of identical parallel elements has to be purchased and deployed. The cumulative performance of the elements must meet a demand. Different types of elements, characterized by their performance and cost, are available on the market. We consider convex, linear, and concave relationships between performance and cost. The defender determines the system structure by choosing the type and the number of elements in the system. The defender distributes its limited resource between purchasing the elements and protecting them from outside attacks. The attacker chooses the number of elements to attack and distributes its limited resource evenly among the attacked elements. The vulnerability of each element is determined by a contest success function between the attacker and the defender. The damage caused by the attack is associated with the cost of the destroyed elements and the reduction of the cumulative system performance below the demand. The defender minimizes the damage, anticipating the attacker's best strategy for any system structure. An algorithm for determining the optimal system structure is suggested. Illustrative numerical examples are presented.
Received on November 04, 2010, revised on July 13, 2011
References: 29
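The element vulnerability described in the abstract is typically modeled in this literature with a ratio-form (Tullock) contest success function, v = a^m / (a^m + d^m), where a and d are the per-element attack and defense efforts and m is the contest intensity. The sketch below is an illustrative Python version; all function and parameter names, and the even-distribution assumptions, are ours, not the paper's notation:

```python
def element_vulnerability(attack_effort, defense_effort, intensity=1.0):
    """Ratio-form contest success function: probability that an attacked
    element is destroyed, given per-element attack and defense efforts."""
    a = attack_effort ** intensity
    d = defense_effort ** intensity
    if a + d == 0:
        return 0.5  # conventional tie-breaking when neither side invests
    return a / (a + d)

def expected_damage(n_attacked, attack_budget, defense_budget, n_elements,
                    element_cost, intensity=1.0):
    """Expected cost of destroyed elements when the attacker spreads its
    budget evenly over n_attacked elements and the defender spreads its
    protection budget evenly over all n_elements (illustrative only)."""
    a = attack_budget / n_attacked
    d = defense_budget / n_elements
    v = element_vulnerability(a, d, intensity)
    return n_attacked * v * element_cost
```

With m = 1 and equal per-element efforts, each attacked element is destroyed with probability 0.5; larger m makes the contest more decisive.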
- K-Round Duel with Uneven Resource Distribution
- GREGORY LEVITIN and KJELL HAUSKEN
- 2012, 8(1): 19-34. doi:10.23940/ijpe.12.1.p19.mag
The paper considers optimal resource distribution between offense and defense and among different rounds in a K-round duel. In each round of the duel, two actors exchange attacks. Each actor allocates resources to attacking the counterpart and to defending itself against the counterpart's attack. The offense resources are expendable (e.g., missiles), whereas the defense resources are not expendable (e.g., bunkers). The offense distribution across rounds can increase or decrease as determined by a geometric series. The outcome of each round is determined by contest success functions that depend on the ratio of offensive to defensive resources. The game ends when at least one target is destroyed or after K rounds. It is shown that when each actor maximizes its own survivability, both actors allocate all their resources defensively. Conversely, when each actor minimizes the survivability of the other actor, both actors allocate all their resources offensively. We then consider two cases of battle for a single target in which one of the actors minimizes the survivability of its counterpart whereas the counterpart maximizes its own survivability. It is shown that in these two cases the minmax survivabilities of the two actors are the same, and the sum of their resource fractions allocated to offense is equal to 1. However, their resource distributions are different. When both actors can choose their offense resource distribution freely, they allocate all offense to the first round. When one actor is constrained to distribute offense resources across multiple rounds, it is not necessarily optimal for the other actor to allocate all offense to the first round. We illustrate how the resources, contest intensities and number of rounds in the duel impact the survivabilities and resource distributions.
Received on August 14, 2009, revised on May 27, 2010
References: 19
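The geometric offense distribution across rounds and the per-round ratio-form contests can be sketched as follows. This is a toy version of the game described above: the simplified survivability computed here is merely the probability that both actors survive all K rounds, and all names and the termination handling are our assumptions, not the paper's model:

```python
def geometric_offense(total, q, K):
    """Split a total offense budget across K rounds in a geometric series
    with ratio q (q > 1 increasing, q < 1 decreasing, q = 1 even)."""
    weights = [q ** k for k in range(K)]
    s = sum(weights)
    return [total * w / s for w in weights]

def joint_survivability(off_a, def_b, off_b, def_a, m=1.0):
    """Probability that both actors survive every round, with the
    per-round destruction probability given by a ratio-form contest
    between one actor's per-round (expendable) offense and the other's
    fixed (non-expendable) defense."""
    p_both_alive = 1.0
    for xa, xb in zip(off_a, off_b):
        pa = xa ** m / (xa ** m + def_b ** m)  # A destroys B this round
        pb = xb ** m / (xb ** m + def_a ** m)  # B destroys A this round
        p_both_alive *= (1 - pa) * (1 - pb)
    return p_both_alive
```

Front-loading offense (q < 1) raises the early-round destruction probabilities, which is consistent with the result above that an unconstrained actor puts all offense in the first round.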
- The Evolution and History of Reliability Engineering: Rise of Mechanistic Reliability Modeling
- M. AZARKHAIL and M. MODARRES
- 2012, 8(1): 35-47. doi:10.23940/ijpe.12.1.p35.mag
To address the risk and reliability challenges in both the private and regulatory sectors, the reliability engineering discipline has gone through a number of transformations during the past few decades. This article traces the evolution of these transformations and discusses the rise of mechanistic reliability modeling approaches in reliability engineering applications in recent years. We discuss the ways reliability models have progressively become more practical by incorporating evidence from the real causes of failure. Replacing constant hazard rate life models (i.e., the exponential distribution) with other distributions such as the Weibull and lognormal was the first step toward addressing wear-out and aging in reliability models. This trend was followed by accelerated life testing, through which the aggregate effect of operational and environmental conditions was introduced into the life model by accounting for stress agents. The application of mechanistic reliability models is the logical culmination of this trend. Physics-based (or mechanistic) reliability models have proven to be the most comprehensive representation, capable of bringing many influential factors into the life and reliability models of components. The system-level reliability assessment methods currently available, however, seem to have limited capabilities when it comes to the quantity and quality of the knowledge that can be integrated from their constituent components. In this article, past and present trends as well as anticipated future trends in applications of mechanistic models in reliability assessment of structures, systems, components and products are discussed.
Received on October 31, 2010, revised on June 30, 2011
References: 26
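The first transformation mentioned above, replacing the constant hazard rate with distributions that can represent aging, can be illustrated numerically: a Weibull hazard with shape parameter k > 1 increases with time, unlike the memoryless exponential. The functions below are a standard textbook sketch, not code from the article:

```python
import math

def exponential_hazard(t, lam):
    """Constant hazard rate: memoryless, cannot represent wear-out."""
    return lam

def weibull_hazard(t, shape, scale):
    """Weibull hazard h(t) = (k/lambda) * (t/lambda)**(k-1):
    increasing for shape > 1 (wear-out/aging),
    decreasing for shape < 1 (infant mortality),
    constant (exponential) for shape == 1."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def weibull_reliability(t, shape, scale):
    """Weibull survival function R(t) = exp(-(t/lambda)**k)."""
    return math.exp(-((t / scale) ** shape))
```

With shape = 1 the Weibull hazard reduces exactly to the exponential case, which is why the Weibull family subsumes the constant-hazard model it replaced.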
- Fault Diagnosis and Failure Mode Estimation by a Data-Driven Fuzzy Similarity Approach
- ENRICO ZIO and FRANCESCO DI MAIO
- 2012, 8(1): 49-65. doi:10.23940/ijpe.12.1.p49.mag
In the present work, a data-driven fuzzy similarity approach is proposed to assist the operators in fault diagnosis tasks. The approach allows: i) prediction of the Recovery Time (RT), i.e., the time remaining until the system can no longer perform its function in an irreversible manner, ii) Fault Diagnosis (FD), i.e., the identification of the component faults and iii) estimation of the system Failure Mode (FM), i.e., the system-level outcome of the failure scenario. The approach is illustrated by way of the analysis of failure scenarios in the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS).
Received on December 10, 2010, revised on September 06, 2011
References: 31
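The trajectory-matching idea behind a data-driven fuzzy similarity approach can be sketched as follows: an observed signal trajectory is compared pointwise to stored reference failure trajectories, distances are mapped to similarity scores through a bell-shaped membership function, and known outcomes (here, Recovery Times) are combined with similarity weights. The membership function and its parameters are illustrative choices, not the ones calibrated in the paper:

```python
def pointwise_similarity(d, alpha=2.0, beta=1.0):
    """Bell-shaped membership function mapping a distance d to a
    similarity score in (0, 1]; alpha and beta are illustrative shape
    parameters, not the paper's values."""
    return 1.0 / (1.0 + (d ** 2 / beta) ** alpha)

def trajectory_similarity(observed, reference):
    """Average pointwise similarity between an observed trajectory and a
    stored reference failure trajectory of the same length."""
    scores = [pointwise_similarity(abs(x - y))
              for x, y in zip(observed, reference)]
    return sum(scores) / len(scores)

def weighted_rt_estimate(observed, references, rts):
    """Similarity-weighted Recovery Time estimate: each reference
    trajectory votes with its known RT, weighted by its similarity to
    the observed scenario."""
    weights = [trajectory_similarity(observed, r) for r in references]
    total = sum(weights)
    return sum(w * rt for w, rt in zip(weights, rts)) / total
```

The same weighting scheme can vote on fault classes or failure modes instead of RT values, which is how one sketch covers all three tasks (RT, FD, FM) listed above.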
- An Intuitionistic Fuzzy Methodology for Component-Based Software Reliability Optimization
- HENRIK MADSEN, GRIGORE ALBEANU, and FLORIN POPENTIU-VLADICESCU
- 2012, 8(1): 67-76. doi:10.23940/ijpe.12.1.p67.mag
Component-based software development is the current methodology facilitating agility in project management and software reuse in design and implementation, promoting quality and productivity, and increasing reliability and performability. This paper illustrates the use of an intuitionistic fuzzy degree approach to modelling the quality of entities in imprecise software reliability computing in order to optimize management results. Intuitionistic fuzzy optimization algorithms are proposed for the reliability optimization of complex software systems under various constraints.
Received on December 18, 2010, revised on August 18, 2011
References: 18
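As a minimal sketch of the intuitionistic fuzzy notion the paper builds on: a value carries a membership degree mu and a non-membership degree nu with mu + nu <= 1, and the residual pi = 1 - mu - nu quantifies hesitation (imprecision). The class and the min/max t-norm choice below are illustrative assumptions, not the paper's operators:

```python
class IntuitionisticFuzzyValue:
    """An intuitionistic fuzzy value (mu, nu) with mu + nu <= 1; the
    hesitation pi = 1 - mu - nu captures the imprecision in, e.g., the
    quality assessment of a software component."""

    def __init__(self, mu, nu):
        assert 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu + nu <= 1.0
        self.mu, self.nu = mu, nu

    @property
    def hesitation(self):
        # Degree of indeterminacy left after membership and non-membership.
        return 1.0 - self.mu - self.nu

    def conjunction(self, other):
        """Min/max t-norm conjunction: a pessimistic combined confidence,
        e.g., for a series composition of two components."""
        return IntuitionisticFuzzyValue(min(self.mu, other.mu),
                                        max(self.nu, other.nu))
```

Ordinary fuzzy sets are the special case nu = 1 - mu (zero hesitation); the extra hesitation degree is what makes the representation attractive for imprecise reliability data.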
- Optimal Metro-Rail Maintenance Strategy using Multi-Nets Modeling
- LAURENT BOUILLAUT, OLIVIER FRANCOIS, and STEPHANE DUBOIS
- 2012, 8(1): 77-90. doi:10.23940/ijpe.12.1.p77.mag
Reliability analysis has become an integral part of system design and operation. This is especially true for systems performing critical tasks such as mass transportation systems, which explains the numerous advances in the field of reliability modeling. More recently, studies involving the use of Bayesian Networks (BN) have proved relevant for representing complex systems and performing reliability studies. In previous works, the generic decision support tool VirMaLab, developed to evaluate maintenance strategies for complex systems, was introduced. This approach is based on a specific Dynamic BN, the Graphical Duration Model, designed to model stochastic degradation processes while allowing arbitrary state sojourn distributions along with an accurate context description. This paper deals with a multi-nets extension of VirMaLab dedicated to the maintenance of metro rails. Indeed, in order to fulfill high performance levels of safety and availability (the latter being especially critical at peak hours), the operator needs to estimate, hour by hour, its ability to detect broken rails.
Received on October 10, 2010, revised on August 24, 2011
References: 10
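The advantage of Graphical Duration Models over plain Markov chains, namely arbitrary state sojourn distributions, can be illustrated with a small hourly simulation of a degradation path. The sampler interface below is our assumption for illustration, not the VirMaLab API:

```python
import random

def simulate_degradation(sojourn_samplers, horizon, seed=0):
    """Hourly degradation path through ordered states 0..n.
    sojourn_samplers[i](rng) returns the sojourn time (in hours) spent
    in state i, drawn from any distribution; a plain Markov chain would
    force these sojourns to be geometric/exponential."""
    rng = random.Random(seed)
    # Absolute time at which each degradation state is left.
    leave, t = [], 0.0
    for sample in sojourn_samplers:
        t += sample(rng)
        leave.append(t)
    # State occupied at each hour = number of states already left.
    return [sum(1 for lt in leave if hour >= lt) for hour in range(horizon)]
```

Running many such paths against a model of the inspection schedule is one way to estimate, hour by hour, the probability that a broken rail is detected before trains reach it.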
- Replacement Models for Combining Additive Independent Damages
- XUFENG ZHAO, HAISHAN ZHANG, CUNHUA QIAN, TOSHIO NAKAGAWA, and SYOUJI NAKAMURA
- 2012, 8(1): 91-100. doi:10.23940/ijpe.12.1.p91.mag
In many practical situations, systems degrade with time and finally fail from two causes: additive damage and independent damage. From this viewpoint, the paper considers replacement models combining the two kinds of damage: the unit is replaced at a planned time or when the total additive damage exceeds a failure level, whichever occurs first, and undergoes minimal repair when independent damage occurs. First, a standard cumulative damage model, in which the unit suffers damage due to shocks and the total damage is additive, is considered. Second, the total damage is measured at periodic times and increases approximately linearly with time. Using the techniques of cumulative processes in reliability theory, expected cost rates are obtained and the optimal policies minimizing them are derived analytically. Finally, optimal policies are computed and compared numerically, and the results are discussed.
Received on October 21, 2010, revised on August 03, 2011
References: 15
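The first policy described above, replace at a planned time T or when cumulative shock damage first exceeds a failure level K, whichever occurs first, with minimal repair of independent failures, can be sketched as a Monte Carlo cost-rate estimate. The paper derives analytic expressions; the process assumptions here (Poisson shocks, exponential damage increments, Poisson independent failures charged at their expected count) are ours for illustration:

```python
import random

def cost_rate(T, K, shock_rate, mean_damage, c_replace, c_failure_replace,
              c_minrep, indep_rate, n_runs=20000, seed=1):
    """Monte Carlo estimate of the long-run expected cost per unit time.
    Each cycle ends at the planned time T (cost c_replace) or when the
    cumulative shock damage first exceeds K (cost c_failure_replace);
    independent failures occur at rate indep_rate and are removed by
    minimal repairs (cost c_minrep each) that do not reset the damage."""
    rng = random.Random(seed)
    total_cost = total_time = 0.0
    for _ in range(n_runs):
        t = damage = 0.0
        while True:
            t += rng.expovariate(shock_rate)        # time of next shock
            if t >= T:                              # planned replacement first
                total_cost += c_replace
                total_time += T
                break
            damage += rng.expovariate(1.0 / mean_damage)
            if damage > K:                          # damage-level replacement
                total_cost += c_failure_replace
                total_time += t
                break
        # Expected minimal-repair cost over the cycle (Poisson failures).
        total_cost += c_minrep * indep_rate * min(t, T)
    return total_cost / total_time
```

Scanning this estimate over a grid of T values reproduces numerically the trade-off that the paper's optimal policies resolve analytically.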
- Diagnosis Decision-Making using Threshold Interpretation Rule and Expected Monetary Value
- MOHD RADZIAN ABDUL RAHMAN, M. ITOH, and T. INAGAKI
- 2012, 8(1): 101-110. doi:10.23940/ijpe.12.1.p101.mag
The lack of information in dissolved gas analysis (DGA) evidence necessitates a Dempster-Shafer theoretic approach for combining the pieces of evidence. The threshold ground probability assignment (THG) that firmly judges a major fault condition is determined from the DGA dataset prior to the year 2009. A threshold interpretation rule is proposed. Four distinct scenarios result from the application of the interpretation rule, including one in which the system operator is uncertain about the condition of a power transformer. The DGA dataset of all power transformers that experienced electrical and thermal failures in 2009 is collected to validate the threshold interpretation rule. Six decision policies are introduced to map power transformer condition propositions to decision spaces for decision-making under uncertainty. The expected monetary value is utilized to assess each decision policy and to select the optimal one.
Received on September 28, 2010, revised on August 02, 2011
References: 16
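The evidence-combination step rests on Dempster's rule of combination. A minimal sketch over frozenset focal elements follows; the fault labels in the usage are hypothetical, not categories from the paper's DGA dataset:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two basic probability assignments, given as
    dicts mapping frozenset focal elements to masses. Mass assigned to
    empty intersections (conflict) is renormalized away."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {fs: w / (1.0 - conflict) for fs, w in combined.items()}
```

For example, combining m1 = {{'thermal'}: 0.6, {'thermal','electrical'}: 0.4} with m2 = {{'thermal'}: 0.5, {'thermal','electrical'}: 0.5} concentrates mass on the singleton 'thermal', which is exactly the sharpening effect the threshold interpretation rule then acts on.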
Print ISSN 0973-1318