Vol. 21, No. 3

■ Cover page (PDF 3223 KB) ■ Table of Contents, March 2025 (PDF 32 KB)

  
  • A Review of Software Fault Prediction Techniques in Class Imbalance Scenarios
    Ashu Mehta, Navdeep Kaur, and Amandeep Kaur
    2025, 21(3): 123-130.  doi:10.23940/ijpe.25.03.p1.123130
    Abstract    PDF (802KB)   
    References | Related Articles
    This work presents a thorough analysis of methods for addressing class imbalance in software fault prediction. Class imbalance is a common problem that significantly degrades the performance of machine learning models and frequently results in biased predictions. A range of strategies has been investigated to mitigate this difficulty, including data-level strategies such as SMOTE and MAHAKIL; algorithm-level strategies such as cost-sensitive learning; and ensemble strategies such as bagging, boosting, stacking, and two-stage ensembles. The evaluation determines how well these methods perform on several popular datasets, including PROMISE, NASA, and CPDP, against key performance criteria such as accuracy, precision, recall, and stability. Furthermore, hybrid approaches that blend sampling strategies with ensemble learning have demonstrated encouraging outcomes in enhancing prediction resilience and accuracy. By shedding light on the advantages and disadvantages of each strategy, this research aims to help practitioners choose suitable techniques for software fault prediction in imbalanced settings.
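    As a rough illustration of the data-level idea surveyed above, the following is a minimal, self-contained sketch of SMOTE-style oversampling: each synthetic point is an interpolation between a minority sample and one of its k nearest minority neighbours. The toy data and parameters are invented for illustration; real studies would typically use a library implementation such as imbalanced-learn's SMOTE.

```python
import random

def smote_sample(minority, k=2, n_new=4, seed=0):
    """Generate synthetic minority samples by interpolating between a
    minority point and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest neighbours of a within the minority class (excluding a)
        neigh = sorted((p for p in minority if p is not a),
                       key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))[:k]
        b = rng.choice(neigh)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

# Toy imbalanced data: a handful of minority-class points in 2-D feature space.
minority = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1)]
new_points = smote_sample(minority, k=2, n_new=5)
print(len(new_points))  # 5 synthetic minority samples
```

    Because every synthetic point lies on a segment between two real minority points, the new samples stay inside the region the minority class already occupies, which is the core of SMOTE's appeal over plain duplication.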
  • Optimizing Latent Dirichlet Allocation using Metaheuristic Technique: A Comparative Study
    Sneh Prabha and Neetu Sardana
    2025, 21(3): 131-140.  doi:10.23940/ijpe.25.03.p2.131140
    Abstract    PDF (473KB)   
    References | Related Articles
    Community websites offer specialized online platforms for people to connect and share knowledge about specific topics or objectives, encouraging deep engagement. The large amounts of unstructured text they generate can be analyzed to uncover valuable insights and trends. Latent Dirichlet Allocation (LDA) is a commonly used topic-modeling technique; however, it is often run with default parameters, which leads to inaccurate and less cohesive topics. Because hyperparameter selection strongly affects the effectiveness of LDA models, we investigate how different metaheuristic approaches can enhance LDA's performance. In this study, we analyze and compare five metaheuristic optimization algorithms for tuning the hyperparameters of the LDA model: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), the Grey Wolf Optimizer (GWO), the Firefly Algorithm (FA), and the Whale Optimization Algorithm (WOA). GA proved to be the superior metaheuristic for improving LDA's results: LDA+GA demonstrated a notable improvement in perplexity score, achieving a 12.4% improvement over the baseline LDA technique.
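    The tuning loop described above can be sketched with a minimal genetic algorithm. The objective below is a stand-in for LDA perplexity (the hyperparameter names and optimum are invented); in practice the fitness function would train an LDA model, e.g. with gensim, and return its perplexity.

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Minimal GA: elitism, uniform crossover, Gaussian mutation.
    Minimizes `fitness` over the box given by `bounds`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)          # lower fitness = better
        elite = scored[: pop_size // 2]            # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            i = rng.randrange(len(bounds))         # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Stand-in objective: pretend perplexity is minimized at alpha=0.3, eta=0.2.
perplexity = lambda h: (h[0] - 0.3) ** 2 + (h[1] - 0.2) ** 2
best = genetic_search(perplexity, bounds=[(0.01, 1.0), (0.01, 1.0)])
print(best)
```

    Elitism guarantees the best candidate is never lost between generations, which is one reason GA-style searches behave robustly on noisy objectives like held-out perplexity.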
  • Petri Net-Based Decision Support System for Maintenance Prioritization in Butter Oil Production Systems
    Parveen Sihmar and Vikas Modgil
    2025, 21(3): 141-148.  doi:10.23940/ijpe.25.03.p3.141148
    Abstract    PDF (473KB)   
    References | Related Articles
    This study employs Petri nets to conduct an availability analysis of the butter oil production system (BOPS), with the objective of minimizing downtime and improving system availability. Petri nets provide a robust modelling framework for analyzing the dynamic behavior of complex industrial systems, enabling the identification of critical failure-repair cycles within subsystems. Using a licensed software suite, the system's performance behavior and availability were investigated in relation to the availability of repair facilities and the effects of varying failure and repair rates. The proposed decision support system for maintenance order priority is based on the availability matrices obtained from the Petri nets. With the proposed maintenance order, maintenance personnel can identify the criticality of the various subsystems and plan maintenance policies and schedules in advance.
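    The failure-repair cycles such a model captures can be sketched as a tiny Petri net simulator: transitions fire when their input places hold enough tokens. The subsystem and the shared repair-crew token below are hypothetical, not taken from the paper's BOPS model.

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical failure-repair cycle of one subsystem:
# 'up' --fail--> 'down' --repair (needs a free crew token)--> 'up'
marking = {"up": 1, "down": 0, "crew": 1}
fail   = ({"up": 1},              {"down": 1})
repair = ({"down": 1, "crew": 1}, {"up": 1, "crew": 1})

marking = fire(marking, *fail)     # subsystem fails
assert enabled(marking, repair[0])
marking = fire(marking, *repair)   # repair crew restores it
print(marking)  # back to {'up': 1, 'down': 0, 'crew': 1}
```

    Modelling the repair crew as a token makes repair-facility contention explicit: with several subsystems down and one crew token, only one repair transition can fire at a time, which is exactly the bottleneck an availability analysis needs to expose.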
  • Data Driven Software Quality Assessment: Correlation Analysis of Code Metrics and Fault-Proneness
    Seema Kalonia and Amrita Upadhyay
    2025, 21(3): 149-156.  doi:10.23940/ijpe.25.03.p4.149156
    Abstract    PDF (390KB)   
    References | Related Articles
    Predicting software faults is essential for raising program quality and cutting maintenance expenses. Early detection of fault-prone modules minimizes debugging effort, helps avoid software failures, and increases overall software reliability. This research analyzes code metrics from NASA's Metrics Data Program (MDP) datasets to find trends and connections between software complexity and defectiveness. Using statistical methods and exploratory data analysis, we examine how different code complexity indicators relate to software flaws. We find that defect-prone modules are highly correlated with cyclomatic complexity, decision density, and unique operands. By determining threshold values for these key indicators, we offer insight into software quality and identify places where code maintainability could be improved. The analysis emphasizes the value of empirical investigation, statistical validation, and organized feature selection in defect prediction, and the comparative analysis across several NASA datasets lays the groundwork for future defect-avoidance efforts by providing practical suggestions for lowering software complexity and increasing reliability. The study advances software engineering by offering data-driven insights that can help developers optimize code architectures and reduce defect risks, and it highlights the importance of understanding software complexity early in the development process so that teams can proactively enhance maintainability and code quality. Software engineers, quality assurance teams, and companies seeking to build more reliable and fault-resistant software systems can use these findings as a guide: by methodically identifying defect-prone modules against predetermined thresholds, software teams can improve software lifecycle management, reduce post-release problems, and increase productivity. Future developments in real-time monitoring and automated flaw detection can bolster these initiatives further, increasing the effectiveness and dependability of software development.
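    The core statistical step of such a study, correlating a complexity metric with a binary defect label and applying a threshold rule, can be sketched as follows. The module data and the threshold of 10 are invented for illustration, not figures from the MDP datasets.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented toy data: cyclomatic complexity per module and a 0/1 defect label.
complexity = [2, 3, 5, 8, 13, 21, 4, 16]
defective  = [0, 0, 0, 1, 1, 1, 0, 1]
r = pearson(complexity, defective)
print(round(r, 3))

# Hypothetical threshold rule: flag modules above a complexity cutoff.
flagged = [c > 10 for c in complexity]
```

    With a binary label, this is the point-biserial correlation, a special case of Pearson's r; a strong positive value supports choosing a cutoff on the metric as a cheap defect-proneness screen.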
  • Trust Management in WSN using ML for Detection of DDoS Attacks
    Vikas, Charu Wahi, Bharat Bhushan Sagar, and Manisha Manjul
    2025, 21(3): 157-167.  doi:10.23940/ijpe.25.03.p5.157167
    Abstract    PDF (630KB)   
    References | Related Articles
    Wireless Sensor Networks (WSNs) have emerged as an attractive solution for many challenging applications, including environmental monitoring, health care, and industrial automation. However, these networks face many security threats, with DDoS (Distributed Denial of Service) attacks being one of the major challenges. This paper introduces Trust Management in WSNs using ML (TMWSNML), addressing WSN security with a particular focus on DDoS attacks. To counter such threats, the proposed model uses five machine learning models: Random Forest (RF), K-Nearest Neighbors (KNN), Decision Trees (DT), Support Vector Machines (SVM), and XGBoost. Experimental findings show that the TMWSNML algorithm outperforms existing lightweight methods in terms of detection rate (accuracy), precision, recall, and F1-score. In particular, RF achieves 99.81% across all measures, while KNN and DT also perform strongly at 99.55% and 99.74%, respectively; SVM likewise performs favorably, and XGBoost attains 99.3% accuracy with a 99.8% detection rate. The results demonstrate TMWSNML's capability for timely detection of DDoS attacks, thereby maintaining strong trust management and improving the security of WSNs. This study highlights the power of machine learning in strengthening WSNs against emerging cyber threats.
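    The evaluation metrics quoted above are all derived from the confusion matrix. As a small sketch (the classifiers themselves are not reproduced here, and the traffic labels below are invented), the computation looks like this:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall (detection rate) and F1 from label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0   # a.k.a. detection rate
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / len(y_true)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Invented labels: 1 = DDoS traffic, 0 = benign.
truth = [1, 1, 1, 0, 0, 0, 1, 0]
preds = [1, 1, 0, 0, 0, 1, 1, 0]
print(classification_metrics(truth, preds))
```

    Reporting all four metrics matters for DDoS detection because attack traffic is often a minority class, where accuracy alone can look high even when recall is poor.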
  • Intelligent Job Allocation and Adaptive Migration in Cloud Environments using a Dynamic Dual-Threshold Strategy
    Sonia Sharma and Rajendra Kumar Bharti
    2025, 21(3): 168-177.  doi:10.23940/ijpe.25.03.p6.168177
    Abstract    PDF (650KB)   
    References | Related Articles
    Cloud Computing Environments (CCEs) play a very important role in improving resource utilization in general and meeting SLAs in particular: although cloud resources can be more cost-effective than traditional on-premise resources, improper resource management can lead to significant over-usage. This paper proposes a new scheduling framework with a dual-threshold load-balancing policy for dynamic workload balancing across Virtual Machines (VMs). It uses CPU utilization as the key indicator and defines upper and lower thresholds of CPU utilization to identify overutilized and underutilized VMs; the dual-threshold mechanism then triggers the migration of jobs to reduce the load on overloaded VMs, preventing overload while keeping utilization balanced. To optimize scheduling further, a comprehensive job selection and migration algorithm is implemented that accounts for CPU demand, RAM use, and migration cost. The algorithm redistributes high-demand jobs to underutilized VMs, thereby reducing power consumption and avoiding SLA violations, and it assesses the feasibility of each migration, meeting resource constraints and maintaining energy efficiency. The proposed framework is experimentally validated over varying job loads (10,000-50,000 tasks) and a dynamic workload of up to 500 VMs, with results showcasing its ability to handle multi-VM workloads in a highly dynamic environment. The results demonstrate a notable decrease in overall power consumption and SLA violations relative to the non-migration case: power consumption was reduced by up to 16.67% during high-demand hours, and SLA violations were reduced by 50%. The approach encourages sustainable cloud computing through intelligent load balancing, CPU optimization, and reduced energy waste, supporting reliable, scalable, and energy-efficient cloud infrastructures. Future work will explore extending the framework to multi-dimensional resource metrics and real-time workload fluctuations in heterogeneous cloud environments.
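    The dual-threshold idea can be sketched in a few lines: classify VMs by CPU utilization against an upper and a lower threshold, then move load from the most-loaded overutilized VM to the least-loaded underutilized one, subject to a feasibility check. The VM names, thresholds, and loads below are invented; the paper's algorithm additionally weighs RAM use and migration cost.

```python
# Hypothetical dual-threshold policy (thresholds and loads are illustrative).
UPPER, LOWER = 0.80, 0.20   # CPU-utilization thresholds

def classify(vms):
    """Split VMs into overutilized and underutilized sets by CPU load."""
    over  = [v for v, u in vms.items() if u > UPPER]
    under = [v for v, u in vms.items() if u < LOWER]
    return over, under

def migrate_one(vms, job_load):
    """Move one job's worth of CPU load from the most loaded overutilized VM
    to the least loaded underutilized VM, if a feasible pair exists."""
    over, under = classify(vms)
    if not over or not under:
        return None
    src = max(over, key=vms.get)           # most loaded VM
    dst = min(under, key=vms.get)          # least loaded VM
    if vms[dst] + job_load > UPPER:        # feasibility: don't overload dst
        return None
    vms[src] -= job_load
    vms[dst] += job_load
    return src, dst

vms = {"vm1": 0.92, "vm2": 0.55, "vm3": 0.10}
move = migrate_one(vms, job_load=0.25)
print(move, vms)
```

    Keeping a band between the two thresholds (here 0.20-0.80) is what prevents migration thrashing: VMs inside the band are neither sources nor destinations, so a single migration cannot immediately trigger the reverse move.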
Online ISSN 2993-8341
Print ISSN 0973-1318