2026, Vol. 22, No. 2
  
  • Original article
    A Randomized Iterated Greedy Algorithm for the Minimum Partial Vertex Cover Problem
    Bouzaroura Ahlam, Bouamama Salim
    2026, 22(2): 57-66.  doi:10.23940/ijpe.26.02.p1.5766

    The Minimum Partial Vertex Cover (MPVC) problem aims to identify a minimum set of vertices that covers at least k edges in a graph and arises in performance-critical scenarios such as network monitoring, fault detection, and coverage optimization under resource constraints. As a classical NP-hard combinatorial optimization problem, MPVC requires efficient heuristic approaches to balance solution quality and computational efficiency in large-scale systems.

    This paper proposes an Improved Randomized Iterated Greedy (IRIG) algorithm that incorporates adaptive mechanisms to regulate both the construction greediness and the intensity of solution perturbation based on online performance feedback. The approach combines a reactive Restricted Candidate List, a cooling-warming destruction strategy, an elite solution pool, and a reverse-delete pruning procedure to enhance solution compactness while maintaining robustness. The effectiveness of the proposed method is evaluated on 31 benchmark instances from the DIMACS and BHOSLIB datasets. Experimental results indicate that IRIG achieves competitive and often superior performance compared to representative baseline methods, while exhibiting stable behavior across multiple independent runs. These results suggest that the proposed approach provides an effective and computationally efficient solution framework for MPVC in performance-oriented optimization contexts.
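
    As a rough illustration of the iterated greedy template the paper builds on, the following Python sketch implements a plain randomized construction with a Restricted Candidate List, a fixed-strength destruction step, and reverse-delete pruning. The adaptive components described above (reactive RCL tuning, cooling-warming destruction, the elite pool) are omitted, and all names and parameter values (alpha, destroy_frac) are illustrative assumptions, not the authors' implementation.

```python
import random

def coverage(sol, edges):
    """Number of edges with at least one endpoint in sol."""
    return sum(1 for u, v in edges if u in sol or v in sol)

def construct(edges, k, start=None, alpha=0.3):
    """Extend a (possibly empty) partial solution until at least k
    edges are covered; each step picks a vertex at random from a
    Restricted Candidate List of high-gain vertices."""
    sol = set(start or ())
    while coverage(sol, edges) < k:
        gain = {}
        for u, v in edges:
            if u in sol or v in sol:
                continue  # edge already covered
            gain[u] = gain.get(u, 0) + 1
            gain[v] = gain.get(v, 0) + 1
        best = max(gain.values())
        rcl = [w for w, g in gain.items() if g >= (1 - alpha) * best]
        sol.add(random.choice(rcl))
    return sol

def reverse_delete(sol, edges, k):
    """Prune any vertex whose removal still leaves >= k edges covered."""
    for w in list(sol):
        if coverage(sol - {w}, edges) >= k:
            sol = sol - {w}
    return sol

def iterated_greedy(edges, k, iters=500, destroy_frac=0.2, seed=0):
    random.seed(seed)
    best = reverse_delete(construct(edges, k), edges, k)
    current = set(best)
    for _ in range(iters):
        # Destruction: keep a random fraction of the current solution.
        keep = random.sample(sorted(current),
                             int(len(current) * (1 - destroy_frac)))
        # Reconstruction and pruning; accept if no larger than current.
        cand = reverse_delete(construct(edges, k, start=keep), edges, k)
        if len(cand) <= len(current):
            current = cand
        if len(current) < len(best):
            best = set(current)
    return best
```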

    Autoencoder-Guided ML for Real-Time IoT Anomaly Detection
    Vaishali N. Rane, Arunkumar M S
    2026, 22(2): 67-76.  doi:10.23940/ijpe.26.02.p2.6776

    As the volume and complexity of Internet of Things (IoT) deployments grow, new cybersecurity challenges emerge that make anomaly detection harder, particularly under limited data and real-time requirements. Traditionally, Intrusion Detection Systems (IDS) are trained on balanced datasets with access to clean normal traffic, which is rarely available in operational IoT environments. This paper presents a supervised anomaly detection framework that inverts the usual approach to data labelling: only attack traffic, rather than normal traffic, is used to train a deep autoencoder, which produces realistic pseudo-normal samples selected by low reconstruction error. These samples form the normal class of a balanced training set that statistically represents true behavior without the synthetic noise introduced by methods such as SMOTE or GANs. A high-recall, high-performance XGBoost classifier is then trained to robustly distinguish pseudo-normal from attack traffic. This method not only resolves the data imbalance problem but also eliminates the need for clean normal traffic, a major benefit in realistic deployments where such traffic is often lacking, and often unreliable when it does exist. Experiments on the BoT-IoT 5% dataset show that the framework achieves 91% recall and better than 91% accuracy, substantially outperforming baseline Isolation Forest models. The framework is computationally lightweight, runs on edge deployments, and provides explainability outputs to support operational trust. Finally, this work introduces a new learning task for reversed autoencoders, optimized for recall-first detection, and represents a shift in how anomaly detection systems can function resiliently under adversarial, constrained, and imbalanced-data conditions in IoT networks.
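
    The pipeline can be sketched roughly as below. This is one possible reading of the abstract: a stand-in autoencoder (sklearn's MLPRegressor fit input-to-input) is trained on attack flows only, candidate traffic is scored by reconstruction error, the low-error fraction is kept as pseudo-normal (the selection criterion named above), and XGBoost is trained on the resulting balanced set. The candidate pool, err_quantile, and all hyperparameters are assumptions of this sketch; the paper's exact generation procedure may differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # stand-in for a deep autoencoder
from xgboost import XGBClassifier                # assumes xgboost is installed

def train_pipeline(attack_X, candidate_X, err_quantile=0.10):
    """attack_X: labeled attack flows (the only labels assumed).
    candidate_X: unlabeled traffic mined for pseudo-normal samples
    (an assumption of this sketch)."""
    # 1. Autoencoder trained on attack traffic only (fit X -> X).
    ae = MLPRegressor(hidden_layer_sizes=(32, 8, 32), max_iter=500)
    ae.fit(attack_X, attack_X)

    # 2. Keep the lowest-reconstruction-error fraction as pseudo-normal,
    #    following the selection criterion described in the abstract.
    err = np.mean((candidate_X - ae.predict(candidate_X)) ** 2, axis=1)
    pseudo_normal = candidate_X[err <= np.quantile(err, err_quantile)]

    # 3. Balanced supervised set: pseudo-normal = 0, attack = 1.
    X = np.vstack([pseudo_normal, attack_X])
    y = np.concatenate([np.zeros(len(pseudo_normal)), np.ones(len(attack_X))])

    # 4. Recall-oriented XGBoost classifier.
    clf = XGBClassifier(n_estimators=200)
    clf.fit(X, y)
    return ae, clf
```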

    HEA-NIDS: A Hybrid-Ensemble Anomaly Detection System for Mitigating Network Intrusions and DDoS Precursors in Cloud Storage Environments
    Callistus Tochukwu Ikwuazom, Francisca Nonyelum Ogwueleka, Mohammed Baba Hammawa, Rajesh Prasad
    2026, 22(2): 77-87.  doi:10.23940/ijpe.26.02.p3.7787

    Cloud infrastructures are increasingly vulnerable to complex attacks, such as precursors of Distributed Denial of Service (DDoS) and misuse of insider privileges, which are hard to detect using traditional signature-based intrusion detection systems (IDS). This work presents HEA-NIDS, a Hybrid Ensemble-based Anomaly Detection System designed for dynamic cloud environments. A heap-ranking strategy was employed to select candidate classifiers, retaining the four most consistent models, which were integrated into a dual-engine ensemble comprising stacking with a Random Forest meta-learner and soft voting for probability aggregation. Experiments with the NF-UQ-NIDS-v2 dataset, which consists of 76 million NetFlow records spanning 21 attack types, under stratified 10-fold cross-validation showed high predictive performance: above 99 percent accuracy, a false positive rate of 0.0055, a true positive rate of 0.9898, and an AUC-ROC of approximately 1.0. Future work will address temporal drift and apply adaptive retraining and multi-dataset validation to further strengthen the model and bring it closer to practical deployment.
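
    A minimal sklearn rendering of the dual-engine idea is sketched below. The four base classifiers merely stand in for the heap-ranked models (the abstract does not list the retained candidates), and averaging the two engines' probabilities is one plausible fusion rule, not necessarily the paper's.

```python
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Placeholder base models; HEA-NIDS selects its four via heap-ranking.
base = [("rf", RandomForestClassifier(n_estimators=100)),
        ("et", ExtraTreesClassifier(n_estimators=100)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=10))]

# Engine 1: stacking with a Random Forest meta-learner.
stack = StackingClassifier(estimators=base,
                           final_estimator=RandomForestClassifier())
# Engine 2: soft voting over the base models' probabilities.
vote = VotingClassifier(estimators=base, voting="soft")

def dual_engine_predict(X):
    """Average the two engines' class probabilities (an assumed
    fusion rule) after both have been fit on the training data."""
    proba = (stack.predict_proba(X) + vote.predict_proba(X)) / 2
    return proba.argmax(axis=1)

# Usage: stack.fit(X_train, y_train); vote.fit(X_train, y_train)
# then labels = dual_engine_predict(X_test)
```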

    A Hybrid Oversampling and Cleaning Framework for Accurate and Reliable Software Fault Prediction
    Ashu Mehta
    2026, 22(2): 88-98.  doi:10.23940/ijpe.26.02.p4.8898

    Software fault prediction is an essential tool for improving software quality and minimizing maintenance costs by detecting defect-prone modules early in the development lifecycle. However, class imbalance, in which faulty modules form only a small subset of the dataset, poses a significant challenge to traditional machine learning classifiers and can result in poor detection of minority-class instances. To address this problem, the current research proposes MC-SMOTE (Meta-Clustered SMOTE with Cleaning), a new hybrid sampling method that combines clustering-based selective oversampling with mild undersampling and noise removal through ENN and Tomek Links. MC-SMOTE produces high-quality balanced data, minimizes artificial noise, and stabilizes decision boundaries to better detect the minority class. The efficiency of the proposed method is measured on six NASA PROMISE datasets (CM1, KC1, JM1, PC1, PC2, PC3) using six popular classifiers: Random Forest, Naive Bayes, K-Nearest Neighbors, Support Vector Machine, Logistic Regression, and Decision Tree. The experiments show that MC-SMOTE delivers significantly better results on all metrics, including Accuracy, Precision, Recall, F1-Score, AUC-ROC, MCC, and G-Mean, with marked improvements in minority-class recognition and false-alarm reduction. The results indicate that the hybrid methodology increases the reliability of fault prediction and generalizes effectively across classifiers, providing a strong solution to the challenge of class imbalance in software quality assurance.
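
    The general oversample-then-clean pattern that MC-SMOTE extends can be reproduced with off-the-shelf imbalanced-learn components, as in the sketch below. Note this uses plain SMOTE; the clustering-based selective oversampling and mild undersampling are the novel parts of MC-SMOTE and are not reproduced here. Parameter values are illustrative.

```python
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import EditedNearestNeighbours, TomekLinks
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier

# Oversample the minority class, then clean noisy and borderline
# samples with ENN and Tomek Links before fitting the classifier.
# An imblearn Pipeline applies the samplers to training data only.
model = Pipeline(steps=[
    ("smote", SMOTE(k_neighbors=5)),
    ("enn", EditedNearestNeighbours(n_neighbors=3)),
    ("tomek", TomekLinks()),
    ("clf", RandomForestClassifier(n_estimators=200)),
])
# model.fit(X_train, y_train); model.predict(X_test)
```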

    Requirement Engineering Framework for Target-Driven Data Warehouse Design and Optimization
    Vishal Sharma, K. K. Sharma
    2026, 22(2): 99-109.  doi:10.23940/ijpe.26.02.p5.99109

    Strategically aligning data warehouse (DW) systems with primary business objectives remains a persistent data-management challenge. Conventional requirement engineering (RE) approaches often fail to translate high-level business requirements into the unambiguous, implementable specifications needed for sound DW design, yielding systems that are functional but strategically misaligned. This paper addresses that gap by introducing a new target-driven requirement engineering (TDRE) framework. The framework offers a methodical way to elicit, decompose, and operationalize business goals and key performance indicators (KPIs) into verifiable DW specifications. It encompasses a systematic goal-decomposition model, semi-automated natural language processing (NLP) based requirement analysis, and a modified failure mode and effects analysis (FMEA) grounded in a risk-informed prioritization mechanism. At the core of the framework are the proposed target-driven requirement decomposition and traceability (TDRD-T) algorithms, which formalize the process of transforming business goals into a traceable requirement list. The framework's effectiveness is demonstrated through a comparative case study in a real-world organizational setting. The findings indicate that the TDRE framework reduces requirement ambiguity by 71.9% and achieves 98% traceability coverage. Moreover, the DW created with the framework met its key targets for query response time and data freshness, and was rated 40 percent higher by users. This research offers both a theoretical contribution to DW design methodology and a practical, validated instrument that enables practitioners to ensure DW investments deliver their intended business value.
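
    The traceability idea at the heart of the framework, with every requirement carrying an explicit link back through a KPI to a business goal, can be illustrated with a toy decomposition, sketched below. This is not the TDRD-T algorithm itself; all names and the example goal, KPI, and requirement content are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    text: str
    kpi: str    # KPI this requirement operationalizes
    goal: str   # business goal it traces back to

@dataclass
class Goal:
    name: str
    kpis: list = field(default_factory=list)

def decompose(goals, kpi_to_reqs):
    """Toy target-driven decomposition: each business goal breaks
    into KPIs, each KPI into verifiable DW requirements, so every
    requirement carries an explicit trace link."""
    reqs = []
    for g in goals:
        for kpi in g.kpis:
            for i, text in enumerate(kpi_to_reqs.get(kpi, []), 1):
                reqs.append(Requirement(f"{kpi}-R{i}", text, kpi, g.name))
    return reqs

goals = [Goal("Reduce churn", kpis=["monthly_churn_rate"])]
kpi_to_reqs = {"monthly_churn_rate": [
    "DW must store customer status snapshots at month granularity",
    "Churn query must return within 2 seconds",
]}
for r in decompose(goals, kpi_to_reqs):
    print(r.rid, "->", r.goal)  # requirement traced back to its goal
```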

    A Dynamic Resource-Aware Load Balancing Approach for Optimized Performance in Cloud Computing
    Sunaina Mehta, Sushil Bhardwaj
    2026, 22(2): 110-118.  doi:10.23940/ijpe.26.02.p6.110118

    Cloud computing has emerged as a dominant paradigm in response to the growing need for effective computing services over the internet, data sharing, and resource utilization. Efficient load balancing is crucial to managing resources and improving the performance, scalability, and reliability of cloud computing environments. This paper presents a comparative performance analysis of three dynamic load balancing algorithms, namely Least Loaded, Weighted Round Robin, and Enhanced Load Balancing (ELB), for optimizing ERP component allocation in cloud environments. Experiments were conducted using a Python-based simulation that dynamically allocates Enterprise Resource Planning (ERP) components to virtualized cloud resources under both uniform and non-uniform configurations. Simulation results reveal that under the uniform configuration, ELB achieved the highest CPU utilization (79%) and throughput (295 units) with the lowest response time (39.31%) compared to the traditional algorithms. Similarly, under the non-uniform configuration, ELB maintained superior performance, with a maximum CPU utilization of 78.89%, an average utilization of 61.43%, and a throughput of 281 units. These results highlight ELB's capability to adapt dynamically to workload variations while efficiently utilizing computational resources and reducing latency. By enhancing CPU utilization and overall system responsiveness, the ELB approach offers a feasible solution for scalable and adaptive ERP-based cloud environments.
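
    For reference, the two baseline policies compared against ELB can be sketched in a few lines of Python; ELB itself is the paper's contribution and is not reproduced here. Node names, weights, and task costs are illustrative.

```python
import itertools

class LeastLoaded:
    """Send each task to the node with the lowest accumulated load."""
    def __init__(self, nodes):
        self.load = {n: 0.0 for n in nodes}
    def assign(self, cost):
        node = min(self.load, key=self.load.get)
        self.load[node] += cost
        return node

class WeightedRoundRobin:
    """Cycle through nodes in proportion to their weights (e.g.
    relative CPU capacity); task cost is ignored by this policy."""
    def __init__(self, weights):             # e.g. {"vm1": 3, "vm2": 1}
        seq = [n for n, w in weights.items() for _ in range(w)]
        self.cycle = itertools.cycle(seq)
    def assign(self, cost):
        return next(self.cycle)

lb = LeastLoaded(["vm1", "vm2", "vm3"])
for cost in [5, 3, 8, 2]:
    print(cost, "->", lb.assign(cost))  # each task goes to the idlest VM
```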

Online ISSN 2993-8341
Print ISSN 0973-1318