Volume 20, No. 12

■ Cover Page (PDF 3220 KB)  ■ Table of Contents, December 2024 (PDF 33 KB)

  
  • Original article
    Enhanced Recognition Approach for Herb Medicine using YOLOv8 in Medical Information Systems
    Shou-Yu Lee, Yu-Sheng Chu, Tzu-Wei Hsu, I-Hsiang Yu, and W. Eric Wong
    2024, 20(12): 713-722.  doi:10.23940/ijpe.24.12.p1.713722

    This study presents an Enhanced Recognition Approach for Herb Medicine, focusing on Traditional Chinese Medicine (TCM) and leveraging advances in Artificial Intelligence (AI) and Machine Learning (ML). By integrating the YOLOv8 model with TensorFlow Lite optimization, the system enables real-time herb recognition on mobile devices with over 90% accuracy. It addresses inefficiencies in herb identification and knowledge dissemination, offering users detailed information on herb functions and applications as well as personalized health recommendations. Through optimized datasets, a user-friendly interface, and portable deployment, the system promotes TCM culture and supports health management. This work lays a foundation for digital and intelligent TCM development, with plans to expand its functionality to broader health applications.
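
    The deployment path described above can be illustrated with the public ultralytics API. The sketch below is not the authors' system: the weights file herb_yolov8.pt, the image sample_herb.jpg, and the confidence threshold are hypothetical placeholders.

```python
# Illustrative sketch only: export a YOLOv8 model to TensorFlow Lite and run it
# via the public `ultralytics` API. "herb_yolov8.pt" and "sample_herb.jpg" are
# hypothetical placeholders, not artifacts from the paper.
from ultralytics import YOLO

model = YOLO("herb_yolov8.pt")                 # assumed fine-tuned herb detector
tflite_path = model.export(format="tflite")    # export for on-device inference

tflite_model = YOLO(tflite_path)               # ultralytics can load TFLite weights
results = tflite_model("sample_herb.jpg", conf=0.5)

for box in results[0].boxes:
    label = results[0].names[int(box.cls)]
    print(f"{label}: confidence {float(box.conf):.2f}")
```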

    Discovering Elementary Discourse Units in Textual Data using Canonical Correlation Analysis
    Akanksha Mehndiratta and Krishna Asawa
    2024, 20(12): 723-732.  doi:10.23940/ijpe.24.12.p2.723732

    Canonical Correlation Analysis (CCA) has been widely exploited for learning latent representations in various fields. This study goes a step further by demonstrating the potential of CCA for identifying Elementary Discourse Units (EDUs) that capture the latent information within textual data. The probabilistic interpretation of CCA discussed in this study exploits the two-view nature of textual data, i.e., consecutive sentences in a document or consecutive turns in a dyadic conversation, and has a strong theoretical foundation. Furthermore, this study proposes a model for EDU segmentation that discovers EDUs in textual data without any supervision. To validate the model, the EDUs are used as textual units for content selection in textual similarity tasks. Empirical results on the Semantic Textual Similarity Benchmark (STS-B) and Mohler datasets confirm that, despite being represented as unigrams, the EDUs deliver competitive results and can even beat various sophisticated supervised techniques. The model is simple, linear, adaptable, and language-independent, making it an ideal baseline, particularly when labeled training data is scarce or nonexistent.
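
    As a rough illustration of the two-view setup the abstract describes, the sketch below pairs each sentence with its successor and fits scikit-learn's CCA on unigram counts. It is not the paper's segmentation model; the example sentences, the component count, and the idea of reading low latent similarity as a candidate boundary are assumptions made for illustration.

```python
# Illustrative sketch: consecutive sentences as paired views for CCA on unigram
# counts. Not the paper's model; data and thresholds are made up.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "The server processes requests in batches.",
    "Each batch is logged before it is acknowledged.",
    "Acknowledged batches are then archived nightly.",
    "Archived data is compressed to save storage.",
    "Compression runs on a dedicated worker node.",
]

vectorizer = CountVectorizer()
X_all = vectorizer.fit_transform(sentences).toarray().astype(float)

# View 1: sentence i; View 2: sentence i + 1 (the "two-view" nature of text).
X, Y = X_all[:-1], X_all[1:]

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)

# Per-pair similarity in the shared latent space; a low value could be read as
# a candidate discourse-unit boundary (a simplifying assumption here).
for i, (x, y) in enumerate(zip(X_c, Y_c)):
    score = float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-9))
    print(f"pair ({i}, {i + 1}): latent similarity {score:.3f}")
```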

    Hybrid Fuzzy-Neuro and DNN-Based Framework for VM Allocation and Resource Optimization in Cloud Systems
    Vipan and Raj Kumar
    2024, 20(12): 733-740.  doi:10.23940/ijpe.24.12.p3.733740

    Efficient resource management in cloud computing is crucial for reducing energy consumption, minimizing SLA violations, and optimizing virtual machine (VM) allocation. This paper proposes a novel VM allocation mechanism that ranks physical machines (PMs) based on their historical performance using a hybrid Fuzzy-Neuro Engine, considering both idle and execution costs. The system intelligently allocates VMs by analyzing power consumption and service reliability, ensuring optimal use of cloud resources. A Deep Neural Network (DNN) is utilized to rank PMs, enhancing decision-making for VM allocation and migration.

    The proposed method was evaluated against state-of-the-art approaches and demonstrated significant improvements. In terms of power consumption, it achieved reductions of up to 11.12%, with values ranging from 8.21 kW for 50 VMs to 10.40 kW for 550 VMs, compared with the higher consumption of the competing methods. Similarly, the proposed model maintained lower SLA violations, starting at 0.0840 for 50 VMs and outperforming Talwani et al. and Saurabh et al. in every configuration. These results indicate that the proposed method offers an effective and energy-efficient solution for cloud computing environments, optimizing resource utilization while maintaining high levels of service quality and reliability.
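
    A minimal sketch of the rank-then-allocate idea follows, assuming a simple weighted cost in place of the paper's trained Fuzzy-Neuro/DNN ranker; the PM fields, weights, and example values are illustrative assumptions.

```python
# Illustrative sketch: score each physical machine (PM) from historical idle and
# execution power plus reliability, then place a VM on the best-ranked PM with
# spare capacity. Weights and fields are assumptions, not the paper's ranker.
from dataclasses import dataclass

@dataclass
class PM:
    name: str
    idle_power_kw: float        # historical idle power draw
    exec_power_kw: float        # historical power draw under load
    sla_violation_rate: float   # fraction of past SLA violations
    free_cores: int

def pm_score(pm: PM, w_idle=0.3, w_exec=0.5, w_sla=0.2) -> float:
    # Lower is better: weighted blend of power cost and unreliability.
    return w_idle * pm.idle_power_kw + w_exec * pm.exec_power_kw + w_sla * pm.sla_violation_rate * 100

def allocate_vm(pms: list[PM], cores_needed: int) -> PM | None:
    candidates = [pm for pm in pms if pm.free_cores >= cores_needed]
    if not candidates:
        return None
    best = min(candidates, key=pm_score)
    best.free_cores -= cores_needed
    return best

pms = [
    PM("pm-1", 0.12, 0.45, 0.02, free_cores=8),
    PM("pm-2", 0.10, 0.55, 0.01, free_cores=4),
    PM("pm-3", 0.15, 0.40, 0.05, free_cores=16),
]
chosen = allocate_vm(pms, cores_needed=4)
print(f"VM placed on {chosen.name}" if chosen else "No PM has enough capacity")
```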

    ALLI: A High-Performance Approach to Data Deduplication in Hadoop using Enhanced Hashing and Two-Level Indexing Techniques
    Ammar Zakzouk, Bassim Oumran, and Hasan Hasan
    2024, 20(12): 741-752.  doi:10.23940/ijpe.24.12.p4.741752

    Many systems, such as Hadoop, have been developed to handle big data effectively. However, these systems face challenges related to duplicate files, which consume additional resources for both storage and processing. Several approaches have been developed to eliminate duplicate files using hash algorithms, but these algorithms have struggled to balance execution speed against collision probability. Furthermore, the methods employed for storing hash values lead to lengthy match times and an elevated risk of collisions. In this paper, we propose ALLI, an approach designed to accelerate execution and reduce collision probability during both the hashing and matching stages. ALLI combines the Arithmetic Logic Hash Algorithm (ALHA), which generates 1024-bit hash values, with Two-Level Indexing in HBase (TLI-HBase) for efficient storage of those values. Experiments conducted on four different datasets demonstrate that ALLI outperforms existing file-level deduplication techniques, achieving execution times twice as fast as those of other approaches. Moreover, the results indicate that ALHA is 2 to 3 times faster than other hash algorithms while further reducing collision probability. Additionally, TLI-HBase improves performance during the matching stage by significantly reducing the number of hash value comparisons compared with other storage methods.
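
    The two-stage lookup can be sketched as follows, with SHA-512 from Python's standard library standing in for the 1024-bit ALHA and nested dictionaries standing in for the TLI-HBase tables; the 4-character prefix used as the first-level key is an assumption.

```python
# Illustrative sketch of file-level deduplication with a two-level index.
# SHA-512 stands in for the paper's ALHA; dicts stand in for TLI-HBase tables.
import hashlib
from pathlib import Path

# level 1: hash prefix -> (level 2: full digest -> stored path)
index: dict[str, dict[str, str]] = {}

def file_digest(path: Path) -> str:
    h = hashlib.sha512()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def store_if_new(path: Path) -> bool:
    """Return True if the file was stored, False if it was a duplicate."""
    digest = file_digest(path)
    bucket = index.setdefault(digest[:4], {})   # level-1 key: 4-hex-char prefix
    if digest in bucket:                        # level-2 exact match
        return False
    bucket[digest] = str(path)
    return True

# Usage: feed every incoming file through store_if_new() and skip duplicates.
```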

    Optimizing Energy Efficiency and Delay in IoT Networks using M/G/1 Queuing with Adaptive Vacation Policy
    Annu Malik and Rashmi Kushwah
    2024, 20(12): 753-763.  doi:10.23940/ijpe.24.12.p5.753763

    This paper presents an M/G/1 queue model with exhaustive service and a multiple vacation policy to balance energy conservation and delay minimization in Internet of Things (IoT) networks. The proposed model employs a general service time distribution and allows the server to take several adaptive vacations when the system is idle, each with an exponentially distributed duration. The exhaustive service ensures that all packets in the queue are processed before the server goes on vacation, reducing energy consumption by placing the server in a low-power state during idle periods. This method dynamically adjusts the server's operational state, leading to significant reductions in energy consumption and overall system delay compared with conventional M/G/1 models without vacations or with predefined vacation periods. Simulation results demonstrate that the adaptive vacation policy outperforms the fixed vacation policy by effectively balancing active and idle processing times. It also outperforms the non-vacation policy, as it conserves energy without adversely affecting delay performance. The results indicate that the proposed approach provides a flexible solution that adapts to varying traffic loads and service time distributions, making it particularly valuable in IoT environments where energy efficiency is critical.
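
    For context, the classical decomposition result for an M/G/1 queue with multiple vacations adds E[V^2]/(2E[V]) to the Pollaczek-Khinchine mean wait; the sketch below evaluates that formula with illustrative parameters and is not the paper's adaptive policy or simulation setup.

```python
# Illustrative sketch: the classical waiting-time decomposition for an M/G/1
# queue with multiple vacations, W = W_PK + E[V^2] / (2 E[V]). Parameter values
# are illustrative, not taken from the paper.
def mg1_wait(lam: float, es: float, es2: float) -> float:
    """Pollaczek-Khinchine mean waiting time: lambda * E[S^2] / (2 * (1 - rho))."""
    rho = lam * es
    assert rho < 1, "queue must be stable"
    return lam * es2 / (2 * (1 - rho))

def mg1_wait_with_vacations(lam: float, es: float, es2: float, ev: float, ev2: float) -> float:
    """Decomposition result: multiple vacations add E[V^2] / (2 * E[V]) to the mean wait."""
    return mg1_wait(lam, es, es2) + ev2 / (2 * ev)

# Example: Poisson arrivals (lambda = 0.5/s), exponential service with mean 1 s
# (so E[S^2] = 2), exponential vacations with mean 0.4 s (so E[V^2] = 2 * 0.4^2).
lam, es, es2 = 0.5, 1.0, 2.0
ev = 0.4
ev2 = 2 * ev**2
print(f"no vacations  : {mg1_wait(lam, es, es2):.3f} s")
print(f"with vacations: {mg1_wait_with_vacations(lam, es, es2, ev, ev2):.3f} s")
```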

    Identifying Cyber Threats in Metaverse Learning Environment using Explainable Deep Neural Networks
    Deepika Singh, Shajee Mohan, and Preeti Dubey
    2024, 20(12): 764-774.  doi:10.23940/ijpe.24.12.p6.764774

    The rapid integration of Artificial Intelligence and Internet of Things (AI-IoT) technologies has led to the development of the Metaverse, a key component of the approaching digital era. This convergence has had a major impact on virtual learning platforms, creating more immersive, interactive, and effective learning experiences for students, teachers, and institutions. As the use of the Metaverse grows, however, strong cybersecurity measures are needed to identify and neutralize online threats and protect users. This paper proposes an explainable deep neural network (DNN) to detect and handle network intrusion attacks in Metaverse learning settings. Using the IIoT Edge Cybersecurity dataset from Kaggle, we implemented a neural network technique to build a quantitative and dependable network intrusion detection system (NIDS). To enhance the model's interpretability, we applied Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), allowing for a visual understanding of its decision-making process. By processing network traffic features from networked Metaverse devices and IoT sensors, the explainable DNN accurately and understandably separates anomalous from benign Metaverse activity. The NIDS model establishes a more dependable and secure Metaverse learning environment, achieving an accuracy of 99.87%.
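
    The explainability step can be approximated with SHAP's model-agnostic explainer, as sketched below on synthetic data; the feature semantics, network size, and toy labeling rule are assumptions and do not reproduce the paper's dataset or reported accuracy.

```python
# Illustrative sketch: a small dense network on tabular traffic features, with
# SHAP's KernelExplainer attributing one prediction. Synthetic data replaces the
# Kaggle IIoT Edge Cybersecurity dataset; feature meanings are made up.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                     # e.g. packet rate, bytes, duration, flags
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)   # toy "attack" labeling rule

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, y)

# Explain the attack-probability output for one flow against a small background set.
background = X[:50]
explainer = shap.KernelExplainer(lambda z: model.predict_proba(z)[:, 1], background)
sample = X[:1]
shap_values = explainer.shap_values(sample, nsamples=200)
print("feature contributions:", np.round(shap_values[0], 3))
```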

Online ISSN 2993-8341
Print ISSN 0973-1318