Volume 15, No. 11
Special Section on System and Software Reliability

Background

Software and system reliability is the probability of failure-free software and system operation for a specified period of time in a specified environment. Reliability is also an important factor affecting software and system dependability. Many critical systems, such as nuclear, aerospace, spacecraft, and high-speed train systems, must operate without failure for a given period of time. These systems need to be highly reliable.

International Journal of Performability Engineering (IJPE), in collaboration with the 5th International Symposium on System and Software Reliability (an IEEE technically sponsored conference), will have a special section to address the above issue. Authors of high quality papers selected from the symposium will be invited to submit an extended version to this special section. In addition, the submission is open to the community for scientists and engineers from both industry and academia to report their ongoing work, share their research outcomes and experiences, and present the best and most efficient techniques for the development of reliable, secure, and trustworthy systems.

Topics of Interest

Submissions related to the following topics are encouraged. Others relevant to system and software reliability are also welcome.

• Architecture and design-based reliability and performance

• Formalization and verification

• Fault tolerance and diagnosis

• Process improvement and maintenance

• Quality and safety

• Redundancy technology

• Reliability analysis and optimization

• Reliability measurement, estimation, and prediction

• Reliability modeling and validation

• Reliability requirement and growth models

• Testing and simulation

• Trustworthy evaluation

• Data-driven reliability model design

• Machine learning related to reliability

Submission

We welcome high quality submissions that are original work, not published, and not currently submitted elsewhere. We also encourage extensions of conference papers, unless prohibited by copyright, provided there is a significant difference in technical content. Improvements such as adding a new case study or describing additional related studies do not satisfy this requirement. A description explaining the difference between the conference paper and the journal submission is required. The overlap between each submission and other articles, including the authors' own papers and dissertations, must be less than 30%. Each submission must conform to the IJPE template, which can be downloaded from the journal's official website. Please submit your paper through the IJPE submission page.

Important Dates

• July 15, 2019 Paper submission

• September 1, 2019 First round notification

• October 15, 2019 Second round notification

Papers accepted for this special section have already been published in the November and December 2019 issues.


Guest Editors

• Professor Steven Li, Western New England University, USA

• Dr. Suprasad V. Amari, BAE Systems, USA

• Professor Fevzi Belli, University of Paderborn, Germany

• Professor Yuanshun Dai, University of Electronic Science and Technology of China, China

• Professor Junhua Ding, University of North Texas, USA


Table of Contents, November 2019

  • Performability Analysis for Feeding Unit of a Sugar Plant using Particle Swarm Optimization Technique
    Gaurav Sharma, and P. C. Tewari
    2019, 15(11): 2835-2842.  doi:10.23940/ijpe.19.11.p1.28352842
    The present study illustrates the application of the Particle Swarm Optimization (PSO) technique in a sugar plant for the purpose of enhancing performance in terms of the availability of the operational units. After unit performance is evaluated and analyzed with a Markov approach, the PSO algorithm is applied to optimize the level of performance. The results show improved solutions under PSO, from which the most advantageous combination of failure rates and repair rates can be obtained for use by the maintenance department in designing effective maintenance strategies.
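As an illustration of the optimization step this abstract describes, a minimal particle swarm sketch is shown below. The series-system availability model, the failure-rate values, and the repair-rate bounds are invented for demonstration; they are not the authors' plant model.

```python
import random

def availability(mu, lam):
    # Steady-state availability of a series system: each unit is up
    # with probability mu_i / (lam_i + mu_i); the system needs all units up.
    a = 1.0
    for m, l in zip(mu, lam):
        a *= m / (l + m)
    return a

def pso(fitness, dim, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Plain global-best PSO maximizing `fitness` within box bounds [lo, hi].
    rng = random.Random(0)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

lam = [0.02, 0.05, 0.01]  # assumed per-unit failure rates (illustrative only)
best_mu, best_a = pso(lambda mu: availability(mu, lam), dim=3, lo=0.1, hi=2.0)
```

Because availability increases monotonically with each repair rate here, the swarm simply converges toward the upper bound of the search space; in the paper's setting, cost and maintenance constraints would make the trade-off nontrivial.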
    Quality Assessment of Technical Education using SERVQUAL: S/N Ratio and Grey Relation Analysis
    Amol Nayakappa Patil, V. Mariappan, Leslie C. D'souza, and Reuben J. Nazareth
    2019, 15(11): 2843-2851.  doi:10.23940/ijpe.19.11.p2.28432851
    Education has a considerable role in shaping our future, so the quality of education is of prime importance in crafting the future of our country. The services offered by education institutes articulate the quality of the education, and this also applies to higher education. Higher education institutes in India are increasing in number; thus, to remain competitive, they strive to improve the quality of education. Technical institutes are growing at a rapid pace compared to non-technical ones, and the same trend is seen in the state of Goa. Hence, this paper focuses on technical higher education in Goa, India. This research investigates the service attributes contributing to the quality of technical education and suggests operating levels of these attributes to improve students' academic performance. To assess the quality of technical education, a face-to-face survey of stakeholders was conducted using a SERVQUAL questionnaire addressing five service quality dimensions. Further analysis was carried out using the Signal-to-Noise ratio and Grey Relation Analysis to predict the optimal levels of the service attributes for improving student performance.
    Non-Intrusive Polynomial Chaos for a Realistic Estimation of Accident Frequency
    Mourad Chebila
    2019, 15(11): 2852-2859.  doi:10.23940/ijpe.19.11.p3.28522859
    The pivotal role of frequency analysis in quantitative risk assessments necessitates realistic and efficient prediction of its metrics. Uncertainty analysis, which aims to determine the effects of the involved input uncertainties on the output of interest, forms a strong basis for all decisions taken based on the results of such assessments. The main purpose of this paper is to employ a non-intrusive polynomial chaos approach to simplify the model for estimating the frequency of major accidents. This application facilitates the propagation of the associated parametric uncertainties and makes the whole process less expensive, while assuring the desired accuracy when compared with the classical application of Monte Carlo simulation.
    Hadoop-based Parallel Algorithm for Data Mining in Remote Sensing Images
    Yanhua Wang, Yaqiu Liu, and Weipeng Jing
    2019, 15(11): 2860-2870.  doi:10.23940/ijpe.19.11.p4.28602870
    As a typical distributed parallel computing model, cloud computing can greatly reduce the execution time of computing tasks. Remote sensing image data mining, an important part of data mining, plays a significant role in meteorological analysis and earthquake prediction. By constructing a Hadoop cloud computing platform, this paper studies the Hadoop-based parallel algorithm for remote sensing image data mining. In accordance with the Hadoop distributed computing framework, the parallel algorithm for remote sensing image data mining is realized through data preprocessing, image feature extraction, and clustering analysis. The main work of this paper includes image preprocessing, Hadoop-based parallelization of remote sensing image feature extraction, and a Hadoop-based parallel algorithm for remote sensing image data mining.
    Coarse-Grained Automatic Parallelization Approach for Branch Nested Loop
    Hui Liu, Jinlong Xu, and Lili Ding
    2019, 15(11): 2871-2881.  doi:10.23940/ijpe.19.11.p5.28712881
    The GCC compiler is a retargetable compiler developed to increase the efficiency of programs in the GNU system. In recent years, compiler optimization based on data dependence analysis has become an important research area of modern compilers. Existing GCC compilers can only conduct dependence analysis on perfect nested loops. In order to better exploit the coarse-grained parallelism of nested loops, we propose a dependence test method that can deal with branch nested loops. Firstly, we identify the branch nested loops in the program. Then, we analyze the relationship between the array subscripts and the outer index variable of the branch nested loop. Finally, we calculate the distance vector of the outer loop index variable and determine whether the loop carries a dependence through distance vector detection. Experimental results show that our method can correctly and effectively analyze the dependence relationships of branch nested loops.
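The distance-vector test sketched in this abstract can be illustrated, in a much-simplified form, for constant-offset subscripts of a single loop index. The helper names are hypothetical, and the paper's actual method handles branch nested loops and outer-loop index variables in far more generality.

```python
def dependence_distance(write_offset, read_offset):
    # For a loop body containing a write A[i + write_offset] and a read
    # A[i + read_offset], the dependence distance is the number of
    # iterations separating the definition and the use of one element.
    return write_offset - read_offset

def iterations_parallelizable(subscript_pairs):
    # Iterations can run in parallel only if no write/read pair has a
    # nonzero distance, i.e. there is no loop-carried dependence.
    return all(dependence_distance(w, r) == 0 for w, r in subscript_pairs)

# A[i] = A[i] + c : distance 0, no loop-carried dependence.
independent = iterations_parallelizable([(0, 0)])
# A[i] = A[i-1] + c : distance 1, a loop-carried dependence.
carried = iterations_parallelizable([(0, -1)])
```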
    Icing Prediction of Fan Blade based on a Hybrid Model
    Cheng Peng, Jing He, Hao Chi, Xinpan Yuan, and Xiaojun Deng
    2019, 15(11): 2882-2890.  doi:10.23940/ijpe.19.11.p6.28822890
    For the problem that fan blade icing failures cannot be accurately predicted in advance, a data-driven fault prediction method is proposed in this paper. Firstly, the delay window is introduced to the PCA algorithm to extract the fault mode related features from the SCADA high-dimensional data. Then, the trained Elman neural network is adopted to predict the future value of the relevant features. Finally, a BP self-clustering algorithm is designed to predict the icing fault of the blade with the multi-source data fusion. The results show that the proposed method can effectively predict the icing failure of wind turbine blades and has reference significance for the maintenance of wind turbines.
    Gaze-Tracking Algorithm based on Infrared Gray Image
    Nan Xue, Siwen Cai, Yuanyuan Chen, Minglei Shao, and Peng Wang
    2019, 15(11): 2891-2898.  doi:10.23940/ijpe.19.11.p7.28912898
    Gaze-tracking technology is a new type of HCI (human-computer interaction) technology that has great application potential in the field of intelligent control. Based on the characteristics of the reflected spot formed by the human eye under infrared illumination, a gaze-tracking algorithm was designed according to the pupil-corneal reflection principle. Firstly, the human eye is quickly located from the characteristics of the reflected spot to obtain the eye sub-image, and the eye sub-image is binarized using an image segmentation method based on threshold iteration. Then, an edge detection algorithm is used to obtain the pupil and reflected-spot edges, and the geometric center coordinates of the pupil and the reflected spot are obtained by ellipse fitting. Finally, the relationship between the gaze and the gaze point is obtained by view-point mapping based on a polynomial function, realizing gaze-tracking. The simulation results show that the gaze point's maximum vertical error is seven pixels and its maximum horizontal error is nine pixels. The algorithm can track the gaze point in real time, is fast and precise, and has application value in human-computer interaction.
    Public Opinion Data Fusion Method based on Ontology Semantics
    Pengju Wang, Huifeng Xue, Zhe Yu, and Feng Zhang
    2019, 15(11): 2899-2907.  doi:10.23940/ijpe.19.11.p8.28992907
    In order to improve the decision-making level for public opinion responses and realize the semantic fusion of multi-level and multi-source heterogeneous public opinion information, an ontology-based public opinion information fusion method is proposed. Firstly, aiming at quick-response decision-making, the situation assessment model of public opinion information fusion is studied, and the information fusion system is constructed, forming a multi-level evaluation model of situation recognition, situation understanding, and situation prediction. Then, the multi-indicator ontology model and method for public opinion decision-making are constructed, and the public opinion data fusion model based on ontology semantics is proposed, which realizes the relevance analysis and semantic fusion of domain knowledge. Finally, a multi-level public opinion data fusion model is constructed, and the construction of the underlying emergency information knowledge base supporting these functions is studied in depth. The simulation results show that the method is feasible and efficient for situation assessment, that the time and space complexity of attribute reduction and value reduction are reduced, and that the matching efficiency of situation assessment rules is improved.
    Information Security Evaluation based on Artificial Neural Network
    Rong Li, Bing Tian, Yan Li, and Yansheng Qu
    2019, 15(11): 2908-2915.  doi:10.23940/ijpe.19.11.p9.29082915
    In order to improve the information security ability of the network information platform, an information security evaluation method based on artificial neural networks is proposed. Based on a comprehensive analysis of security events in the construction of the network information platform, a risk assessment model of the platform is constructed on artificial neural network theory. The weight calculation algorithm of artificial neural networks and the minimum artificial neural network pruning algorithm are also given, which enable a quantitative evaluation of network information security. The fuzzy neural network weighted control method is used to control information security, and a non-recursive traversal method is adopted to realize adaptive training of the information security assessment process. The adaptive learning of the artificial neural network is carried out accordingly, improving the ability to encrypt and transmit information and realizing the information security assessment. The simulation results show that the method is accurate and that information security is ensured.
    Water Saving Irrigation Decision-Making Method based on Big Data Fusion
    Xiaojuan Zhang, Feng Zhang, Yongheng Zhang, and Xiaoyan Ai
    2019, 15(11): 2916-2926.  doi:10.23940/ijpe.19.11.p10.29162926
    In order to make irrigation management intelligent and irrigation decision-making smarter, improve the efficiency of water resource utilization, and introduce information fusion technology into the field of farmland irrigation, an irrigation decision-making method based on multi-source information fusion is proposed. Firstly, according to the actual situation and specific needs of the study area, a multi-objective irrigation water quantity optimization configuration model is constructed, and a multi-objective intelligent algorithm is used to solve it. Then, using the adaptive weighted average fusion algorithm, the weight coefficients of the soil moisture of millet in different growth stages and different soil layers are constructed, and the fusion of soil moisture at the data layer is realized. Finally, in order to meet different irrigation requirements, a multi-objective particle swarm algorithm is used to solve the multi-objective canal optimal water allocation model based on the optimized configuration of irrigation water volume. The experimental results show that the fusion results obtained by the multi-source big data adaptive weighted fusion algorithm are more reasonable, the uncertainty of irrigation decision-making is greatly reduced, the reliability of irrigation decision-making is improved, and water consumption can be reduced by 25.61% using the multi-objective optimal allocation model.
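The adaptive weighted average fusion step mentioned above can be illustrated with the standard inverse-variance weighting scheme; the readings and variances below are made-up values, and the paper's actual weight construction per growth stage and soil layer may differ.

```python
def adaptive_weighted_fusion(readings, variances):
    # Weight each sensor inversely to its error variance so that more
    # reliable sensors contribute more; weights are normalized to sum to 1.
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    weights = [w / total for w in inv]
    fused = sum(w * x for w, x in zip(weights, readings))
    return fused, weights

# Three hypothetical soil-moisture readings (%) with differing variances.
fused, weights = adaptive_weighted_fusion([30.0, 32.0, 31.0], [1.0, 4.0, 2.0])
```

The fused estimate is pulled toward the lowest-variance sensor, which is the intuition behind the abstract's claim that fusion reduces decision uncertainty.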
    A Novel Ensemble Forecasting Algorithm based on Distributed Deep Learning Network
    Tao Ma, Fen Wang, Yanshan Tian, Yan Ma, and Xu Ma
    2019, 15(11): 2927-2935.  doi:10.23940/ijpe.19.11.p11.29272935
    This paper proposes an ensemble model based on a distributed deep learning network. The ensemble model is composed of a deep belief network (DBN) for reconstructing the original data, and the bidirectional long short-term memory (BLSTM) method is used for prediction due to its good results in big data applications. Dynamic weighting strategies are proposed and applied to the sub-models of the ensemble by a weighted least squares method. The weight updates with variable training sets and the predictions for each ensemble are obtained from the distributed computing engine Apache Spark. The performance of the proposed model is evaluated on wind data from a wind farm in the Hexi Corridor in China. The simulation results show that the dynamic ensemble algorithm performs well, which is a very valuable result for the forecasting of big data time series. Furthermore, the results compare favorably with back propagation neural networks (BPNN), LSTM, BLSTM, and stacked LSTMs with memory between batches (SBLSTM), improving the accuracy of prediction.
    A Heuristic Collaborative Filtering Recommendation Algorithm based on Book Personalized Recommendation
    Chaoyang Ji
    2019, 15(11): 2936-2943.  doi:10.23940/ijpe.19.11.p12.29362943
    Through data mining technology, the design of intelligent and personalized book recommendation system is an important development direction of scientific library management in the future. This paper proposes a heuristic collaborative filtering recommendation algorithm based on book personalized recommendation and data mining technology. The proposed algorithm calculates the similarity between users by inputting the two-dimensional matrix of user items and using the similarity formula to get the set of user preferences, and finally generates a recommendation list for each user. The simulation results fully show that the proposed collaborative filtering recommendation algorithm has strong personalized recommendation function, can mine the relevance between readers and books, and recommend suitable book information according to readers' personal preferences.
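A minimal user-based collaborative filtering sketch of the kind this abstract describes: similarity between users is computed from the user-item matrix, and unrated books are scored by similarity-weighted ratings. The toy rating matrix (0 meaning "unrated") and the function names are invented for illustration, not taken from the paper.

```python
import math

def cosine(u, v):
    # Cosine similarity between two users' rating vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def recommend(matrix, user, k=2, n=3):
    # Score each book the user has not rated by the similarity-weighted
    # ratings of the k most similar other users; return the top-n books.
    sims = sorted(((cosine(matrix[user], matrix[o]), o)
                   for o in range(len(matrix)) if o != user),
                  reverse=True)[:k]
    scores = {}
    for item in range(len(matrix[user])):
        if matrix[user][item] == 0:  # unrated book
            scores[item] = sum(s * matrix[o][item] for s, o in sims)
    return sorted(scores, key=scores.get, reverse=True)[:n]

ratings = [[5, 3, 0, 1],
           [4, 0, 4, 1],
           [1, 1, 0, 5],
           [0, 0, 5, 4]]
suggestions = recommend(ratings, user=0)
```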
    Crowdsourced Testing Ability for Mobile Apps: A Study on Mooctest
    Peng Yang, Jin Xu, Hongyu Sheng, Yong Huang, and Jianfeng Xu
    2019, 15(11): 2944-2951.  doi:10.23940/ijpe.19.11.p13.29442951
    As an emerging trend in software testing, crowdsourced testing is increasingly attracting attention from various fields, such as education and industry. In this paper, we studied the ability of crowdsourced testing for teaching. We mainly aimed to find the relationship between students' crowdsourcing ability and their performance in software testing courses, so as to help schools devise a better teaching model and further improve teaching quality. Our study included two parts. First, we published crowdsourced tasks of a mobile application to students on a well-known crowdsourced testing platform. Then, we conducted a follow-up survey to collect more information about the students. We found, somewhat surprisingly, that stronger grades are reflected in higher crowdsourced testing skills to an extent, but not as an absolute measure. More importantly, practical crowdsourced testing is naturally complementary to purely theoretical teaching.
    Fault Detection Capabilities of Combinatorial Testing and Random Testing for Boolean-Specifications
    Ziyuan Wang, Yang Li, Xueqing Gu, Xiaojia Zheng, and Min Yu
    2019, 15(11): 2952-2961.  doi:10.23940/ijpe.19.11.p14.29522961
    The problem of the fault detection capability of combinatorial testing has drawn a lot of attention. Many experiments have been conducted on different subjects to compare the fault detection capabilities of combinatorial testing and random testing. However, previous confusing results can hardly answer the question of whether a combinatorial test suite detects more faults than a random test suite. To answer this question more reliably, we conducted an experiment on general-form Boolean-specifications. In our experiment, fault detection frequencies and fault detection ratios in combinatorial testing, where test suites are generated by some classic combinatorial test generation algorithms, are collected by repeated runs of combinatorial testing. Moreover, fault detection probabilities in random testing are obtained by theoretical analysis. By comparing fault detection frequencies, ratios, and probabilities, our experimental results suggest that combinatorial testing has a slight advantage in fault detection capability over random testing.
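The theoretical analysis of random testing mentioned above commonly rests on the following standard formula: if a fault is triggered by a fraction θ of the input domain and test inputs are drawn i.i.d. uniformly, the probability that a suite of n random tests detects it is 1 − (1 − θ)ⁿ. This is a generic sketch under those assumptions, not the paper's full derivation for Boolean-specifications.

```python
def random_detection_probability(theta, n):
    # P(at least one of n i.i.d. uniform random tests hits the
    # failure-causing region of measure theta).
    return 1.0 - (1.0 - theta) ** n
```

For example, a fault hit by 10% of inputs is found by a 10-test random suite with probability about 0.65, which is the kind of baseline the combinatorial suites are compared against.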
    An Energy Prediction Model for Cloud Data Centers Through Performance Counter
    Sa Meng, Peng Sun, Jie Luo, and Han Xu
    2019, 15(11): 2962-2971.  doi:10.23940/ijpe.19.11.p15.29622971
    In recent years, with increased environmental protection concerns, the carbon footprints of large-scale cloud data centers have come into public view. Energy efficiency has become a key indicator for these data centers. Management personnel in cloud data centers should know the relationship between workload patterns and the energy consumption of the infrastructure in order to optimize energy efficiency. In this paper, we first discuss the energy consumption problems of cloud data centers and summarize the related work. Then, we propose an energy prediction model to estimate the energy consumption of servers in cloud data centers based on the performance counters of their processors. Afterward, the proposed model is tested and analyzed under a wide selection of benchmarks, including SPEC2006, IOzone, and Netperf. Finally, through analyzing the results of the contrast experiments, it is shown that the proposed energy prediction model can predict the energy consumption of cloud servers with high accuracy.
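A common way to build such a counter-based power model is ordinary least squares over sampled counter rates. The sketch below uses invented training data and assumes a linear model with an idle-power intercept; the paper's actual counters and model form may differ.

```python
import numpy as np

# Hypothetical training samples: columns are counter rates (e.g. normalized
# instructions/s and cache-misses/s); power is the measured draw in watts.
counters = np.array([[1.0, 0.2],
                     [2.0, 0.1],
                     [3.0, 0.4],
                     [4.0, 0.3]])
power = np.array([55.0, 60.0, 72.0, 75.0])

# Prepend a column of ones so the model learns an idle-power intercept.
X = np.column_stack([np.ones(len(counters)), counters])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)

def predict_power(c):
    # Linear prediction: idle power + weighted counter contributions.
    return coef[0] + coef[1] * c[0] + coef[2] * c[1]
```

Once fitted, the model estimates server power from live counter readings alone, which is what makes it attractive for data centers without per-server power meters.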
    Improving Extreme Learning Machine by a Level-based Learning Swarm Optimizer and its Application to Fault Diagnosis of 3D Printers
    Jianyu Long, Ying Hong, Shaohui Zhang, Diego Cabrera, and Jingjing Zhong
    2019, 15(11): 2972-2981.  doi:10.23940/ijpe.19.11.p16.29722981
    Fault diagnosis plays a significant role in the printing quality for 3D printers. In this paper, an extreme learning machine based on level-based learning swarm optimizer (LLSO-ELM) is proposed to diagnose faults of delta 3D printers. Extreme learning machine (ELM) achieves better performance in learning speed than traditional gradient descent algorithms. However, the random inputs weights and hidden biases are influential factors for the accuracy and generalization performance of ELM. LLSO has competitive performance in solution quality and computational efficiency for large scale optimization problems, and it is used to obtain the optimum configuration of the weights and biases for ELM. The proposed model is tested by using the attitude data of a delta 3D printer under different operating modes. The experimental results verify that the proposed approach performs better in generalization and stability than ELM.
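The ELM core that LLSO then tunes can be sketched as follows: the input weights and hidden biases are drawn at random (exactly the quantities the abstract says are influential and that LLSO optimizes), and the output weights are solved in closed form via the pseudoinverse. The sine-fitting demo is an invented toy task, not the 3D-printer attitude data.

```python
import numpy as np

def elm_train(X, y, n_hidden=20, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (kept fixed)
    b = rng.normal(size=n_hidden)                # random hidden biases (kept fixed)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # output weights, closed-form least squares
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy task: learn a sine curve from 100 samples.
X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = np.sin(X).ravel()
model = elm_train(X, y)
pred = elm_predict(model, X)
```

Because only `beta` is learned, training is a single linear solve, which is the speed advantage over gradient descent that the abstract refers to; swapping the random draw of `W` and `b` for an optimizer such as LLSO is the paper's contribution.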
    A Fast Inter Mode Decision Algorithm for HEVC Encode System based on Spatial and Temporal Correlations
    Yongshuang Yang, Wei An, and Qiuwen Zhang
    2019, 15(11): 2982-2989.  doi:10.23940/ijpe.19.11.p17.29822989
    With the development of video display technology, high efficiency video coding (HEVC) has recently been proposed to optimize the coding efficiency of video encoders, and it has demonstrated great improvements in coding efficiency by using the layered structure of the coding unit (CU), transform unit (TU), and prediction unit (PU). To achieve the best coding efficiency, the encoder must find the combination of CU, TU, and PU with the lowest rate distortion (RD) cost, which is time-consuming. Among CU, TU, and PU, the determination of CU size has the most significant impact on the RD optimization (RDO) of HEVC encoding, which results in a large amount of computation in the determination of PU and TU sizes. Many research works focus on reducing this complexity through fast CU dividing and early skip in the stage of intra-slice coding. In this paper, we propose a fast inter mode decision algorithm for the HEVC encode system, built on a reformed early CU SKIP detection method. For the CU block currently being encoded, we use the related spatial coding parameters to evaluate the texture complexity (TC) affecting the CU blocks being encoded. In addition, we utilize motion vectors, TU size, and block marking information to gauge the temporal complexity of CU partitions. The proposed method efficiently uses the spatial coding parameters without additional computations. The proposed techniques reduce the encoding time significantly, by up to 47.9%, with an insignificant BD rate increase of about 1.6%.
    A Fault Tolerance Aware Virtual Machine Scheduling Algorithm in Cloud Computing
    Heyang Xu, Pengyue Cheng, Yang Liu, and Wei Wei
    2019, 15(11): 2990-2997.  doi:10.23940/ijpe.19.11.p18.29902997
    Virtual machine (VM) scheduling in cloud computing is a complicated problem, particularly when taking reliability factors into account. In modern cloud datacenters, cloud providers may adopt fault tolerance techniques to improve their service reliability, which will in turn influence the performance metrics of VM scheduling. This influence is worthy of further investigation. However, few studies have considered fault tolerance in VM scheduling and explored its impact. This paper studies fault tolerance aware VM scheduling with cost optimization in clouds by considering the probability that a physical server may fail during execution. The optimization objective of the studied problem is to minimize the expectation of all cloud users' total execution costs under fault tolerance aware cloud environments. Then, a modified best fit decreasing (MBFD) algorithm is proposed based on a defined cost efficiency factor. The simulation results show that fault tolerance can significantly influence the execution time of VM requests, and the proposed MBFD algorithm can improve VM requests' successful execution rate, reduce the average execution costs of cloud users, and thus achieve better performance under fault tolerance aware cloud environments.
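The baseline that the authors' MBFD algorithm modifies is classic best fit decreasing. A single-resource sketch is shown below; the cost-efficiency factor and fault-tolerance model that define MBFD are the paper's contribution and are not reproduced here.

```python
def best_fit_decreasing(vm_demands, host_capacity):
    # Place each VM (largest demand first) on the host that would be left
    # with the least remaining capacity; open a new host when none fits.
    hosts = []        # remaining capacity of each opened host
    placement = {}    # vm index -> host index
    for vm, demand in sorted(enumerate(vm_demands), key=lambda t: -t[1]):
        best = None
        for i, free in enumerate(hosts):
            if free >= demand and (best is None or free < hosts[best]):
                best = i
        if best is None:
            hosts.append(host_capacity)
            best = len(hosts) - 1
        hosts[best] -= demand
        placement[vm] = best
    return placement, len(hosts)

# Five hypothetical VM resource demands packed onto capacity-10 hosts.
placement, num_hosts = best_fit_decreasing([5, 7, 3, 2, 6], host_capacity=10)
```

MBFD would additionally weigh each candidate host by its expected failure behavior and cost efficiency rather than by remaining capacity alone.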
    A General Formal Memory Framework for Smart Contracts Verification based on Higher-Order Logic Theorem Proving
    Zheng Yang, and Hang Lei
    2019, 15(11): 2998-3007.  doi:10.23940/ijpe.19.11.p19.29983007
    Blockchain technology is one of the newest technologies in computer science and has been employed in many important fields. The correctness verification of smart contracts must be protected by the most reliable technology. One of the most reliable methods for ensuring the security and reliability of smart contracts is a formal symbolic virtual machine based on a higher-order logic proof system. Therefore, the present work proposes a formal specification framework of memory architecture as the basis for combining symbolic execution and theorem proving. The framework is independent and customizable. It formalizes logic addresses, nonintrusive application programming interfaces, physical memory structures, and auxiliary tools in Coq. Simple case studies are employed to demonstrate its effectiveness. Finally, the proposed GERM framework is verified in Coq.
    Survivability of Distributed Fault Detection Systems
    Lijun Zhou, Haiyan Lv, Kai Liu, and Jie Zhang
    2019, 15(11): 3008-3015.  doi:10.23940/ijpe.19.11.p20.30083015
    In the design of distributed fault detection systems, an important basis is to monitor the entities in the distributed network computing system in order to achieve the full coverage of the fault detection function. Such distributed computing systems often involve many nodes, large geographical span, unstable communication delays, and loose management. It is very difficult to cover such systems functionally. Aiming at this problem, based on the idea of self-organizing networks and the realization of coverage monitoring of system nodes, this paper studies the survivability of monitoring functions caused by highly dynamic nodes and proposes a set of detection and repair methods for the system cut vertexes, which reduces the impact of highly dynamic nodes on system monitoring.
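Detecting the system cut vertexes mentioned above is the classic articulation-point problem: a node is a cut vertex if removing it disconnects the monitoring network. Below is a standard Tarjan-style DFS sketch over an adjacency list; the paper's repair method for the detected cut vertexes is not shown.

```python
def cut_vertices(adj):
    # Tarjan's rule: a non-root u is a cut vertex if some DFS child's
    # subtree cannot reach above u (low[child] >= disc[u]); a DFS root
    # is a cut vertex iff it has two or more DFS children.
    n = len(adj)
    disc = [0] * n
    low = [0] * n
    visited = [False] * n
    cuts = set()
    timer = [1]

    def dfs(u, parent):
        visited[u] = True
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if visited[v]:
                low[u] = min(low[u], disc[v])   # back edge
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent != -1 and low[v] >= disc[u]:
                    cuts.add(u)
        if parent == -1 and children > 1:
            cuts.add(u)

    for s in range(n):
        if not visited[s]:
            dfs(s, -1)
    return cuts
```

Monitoring nodes that come back as cut vertexes are exactly the ones whose departure would split the coverage graph, so they are the natural targets for the repair step the abstract describes.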
    Smart Contract Receipt based on Virtual Iterative Function
    Yifeng Yin, Tingjun Zhang, Chaofei Hu, and Yong Gan
    2019, 15(11): 3016-3023.  doi:10.23940/ijpe.19.11.p21.30163023
    Smart contracts are the most important feature of blockchain applications, and they are also the main reason why blockchains are called a disruptive technology. Traditional smart contract receipts are generated by applying SHA-256 to a UTXO (unspent transaction output), and increasing the number of receipts slows down generation. This paper introduces the operation of receipts in smart contracts and proposes generating contract receipts with a virtual iterative function (VIF). The VIF takes advantage of the excellent features of the hash function and the unreadable nature of the self-compiled system, so that different contract parameters generate unique and non-repudiable receipts through the virtual iterative function, providing a secure and reliable credential for smart contracts. Finally, the speeds at which VIF receipts and traditional UTXO receipts are generated are compared.
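The VIF construction itself is the authors' own. Purely to illustrate the general idea of deriving a unique, deterministic receipt by iterating a hash over serialized contract parameters, here is a sketch using standard SHA-256; the serialization format and round count are invented assumptions.

```python
import hashlib

def iterated_receipt(params, rounds=3):
    # Serialize the contract parameters canonically (sorted keys), then
    # feed each round's digest back into the hash; different parameters
    # therefore yield different, reproducible receipts.
    digest = "|".join(f"{k}={params[k]}" for k in sorted(params)).encode()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

r1 = iterated_receipt({"payee": "A", "amount": 10})
r2 = iterated_receipt({"payee": "A", "amount": 11})
```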
    Mining Key Users of Microblog Topics based on Trust Model
    Guozhong Dong, Bei Li, Xinhong Wei, and Tao Qin
    2019, 15(11): 3024-3030.  doi:10.23940/ijpe.19.11.p22.30243030
    Microblog topics play an important role in public opinion. The effective identification of key users in microblog topics is key for microblog public opinion mining. In this paper, we select topic-related microblog messages based on topic features. Candidate key users are selected according to the microblog topic user graph and edge weight. Key users of microblog topics are ranked according to the users' trust values. Experiments on Sina microblog topic datasets show that our method can mine high quality key users that lead to the formation of microblog topics in the early stage.
    An Improved Optimal Method for Classification Problem
    Wei Huang, Xiao Dong, Wenqian Shang, Weiguo Lin, and Menghan Yan
    2019, 15(11): 3031-3041.  doi:10.23940/ijpe.19.11.p23.30313041
    In order to better mine and analyze the massive data generated by search engine companies, this paper proposes a search traffic classification and dimension reduction method based on a logistic regression algorithm. Combined with distributed Hadoop technology, a text classification model is designed and implemented through data research, data analysis, and contrast experiments. In the feature extraction of word units, a feature combination method is used, and auxiliary information such as the URL is introduced as a signal to compensate for the low quality of the training samples. The experimental results show that the model optimization effectively improves the quality of the training set. Adding auxiliary information to the training set can mitigate under-fitting to a certain extent and improve the classification effect. The accuracy and other indicators of the search traffic classification method reach an acceptable range.
    Wireless Underground Sensor Networks
    Muhammad Sohail Sardar, Wan Xuefen, Yang Yi, Farzana Kausar, and Mohammad Wasim Akbar
    2019, 15(11): 3042-3051.  doi:10.23940/ijpe.19.11.p24.30423051
    With the development of sensing networks and the Internet of Things, wireless sensor networks have been applied in many fields of society. Driven by sensing demands in soil, mining lanes, and similar environments, wireless underground sensor networks (WUSNs) have developed rapidly in recent years. Compared with typical terrestrial wireless sensor networks, WUSNs have unique applications and features due to their wireless transmission characteristics and deployment environments, especially WUSNs in soil. In this paper, we discuss the wireless propagation characteristics and engineering implementation methods of WUSNs in tunnels/tubes/pipelines/lanes and WUSNs in soil. Prospects for the future development of WUSNs are discussed as well.
    Design of Register File for Negative Bias Temperature Instability
    Yuanyuan Ma, Bai Na, Wei Tan, and Gelan Yang
    2019, 15(11): 3052-3060.  doi:10.23940/ijpe.19.11.p25.30523060
    Negative bias temperature instability (NBTI) is becoming an important reliability problem in the semiconductor industry. Over time, NBTI aging degrades microprocessors' ability to perform correct calculations. The SRAM-based register file is one of the largest logic units and is affected by process deviations, making SRAM the bottleneck of overall process-deviation tolerance. Based on theoretical analysis, the connection between the SRAM static noise margin (SNM) value and the bitcell probability is discussed. Moreover, this paper adopts a dynamic shifter combined with a periodic bitcell inversion design to reduce the NBTI aging impact and achieve a more robust register file. Simulation results show that this design improves the bitcell probability by 3.7 times and reduces the uncertainty of the SNM caused by NBTI stress to 46.79%.
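    The intuition behind periodic bitcell inversion is that NBTI stress accumulates in a cell's PMOS while it stores one logic value, so flipping the stored data at regular intervals balances each cell's duty cycle. The toy word, interval count, and unit-time stress model below are illustrative assumptions, not the paper's circuit-level design:

```python
# Hypothetical model: 1 unit of NBTI stress per interval a bit stores '1'.
word = [1, 1, 0, 1]
stress_time = [0.0] * len(word)

for interval in range(10):
    if interval % 2 == 1:            # periodic inversion of stored data
        word = [1 - b for b in word]
    for i, b in enumerate(word):
        stress_time[i] += b          # accumulate stress while storing '1'

# With inversion, every cell spends half its time in each state, so the
# stress is equalized; without it, always-'1' cells would reach 10.0.
print(stress_time)
```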
    Negative Information Filtering Algorithm based on Text Content in Multimedia Networks
    Wenqing Chen, and Weina Fu
    2019, 15(11): 3061-3071.  doi:10.23940/ijpe.19.11.p26.30613071
    In the multimedia network environment, it is necessary to effectively filter negative information and enhance the ability to mine and identify valid data. This paper presents a new negative information filtering algorithm based on text content in multimedia networks. The principal component features of negative information are extracted, and matched filters are designed to filter the negative information reasonably. All text content and negative information are normalized, sorted, and transformed into a uniform text format for classification and processing, realizing the filtering and detection of negative information. Finally, based on the semantic features of the text content, a support vector machine algorithm is used to extract negative information features from the data. Experimental results show that the algorithm improves the filtering accuracy and performance for negative information in multimedia networks and has good application value.
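    A matched filter for text can be thought of as scoring each message against a template of negative-information terms and discarding anything above a threshold. The term list, normalization, and threshold below are stand-in assumptions, not the paper's actual filter design:

```python
import math

# Hypothetical template of negative-information terms with unit weights.
template = {"scam": 1.0, "fake": 1.0, "fraud": 1.0}

def score(text):
    # Correlate the message's words with the template and normalize,
    # a crude text analogue of a matched-filter response.
    words = text.lower().split()
    dot = sum(template.get(w, 0.0) for w in words)
    norm = math.sqrt(len(words)) * math.sqrt(len(template))
    return dot / norm if norm else 0.0

messages = ["breaking fake news fraud alert", "weather is sunny today"]
flagged = [m for m in messages if score(m) > 0.2]
print(len(flagged))
```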
    Emotional Recognition of EEG Signals based on Fractal Dimension
    Xin Xu, Meng Cao, Jiawei Ding, Hong Gu, and Wenjuan Lu
    2019, 15(11): 3072-3080.  doi:10.23940/ijpe.19.11.p27.30723080
    A method based on fractal dimension is proposed to identify emotional EEG signals, introducing the fractal dimension as an eigenvalue in emotion recognition research. A designed experiment collects the raw EEG data of the subjects, and the experimental video is used to locate and capture the effective signal within the raw data. After interference removal and low-pass filtering, the effective signal is subjected to principal component analysis to obtain reduced-dimension features. Support vector machine (SVM) and K-nearest neighbor (KNN) classification algorithms are used to classify the eigenvalues and obtain their respective accuracies. The results show that the EEG emotion recognition method based on fractal dimension can distinguish different emotions, with a highest accuracy of 83.33%. Therefore, the fractal dimension is feasible as a characteristic value for emotion recognition.
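    Fractal dimension as a scalar EEG feature can be illustrated with Higuchi's estimator, one common choice for EEG analysis; the abstract does not specify which fractal-dimension variant the authors use, so this is an assumed stand-in:

```python
import math

def higuchi_fd(signal, kmax=8):
    # Higuchi's estimator: compute average curve length L(k) at each
    # scale k, then fit the slope of log L(k) vs. log(1/k).
    n = len(signal)
    log_inv_k, log_len = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            pts = signal[m::k]
            if len(pts) < 2:
                continue
            raw = sum(abs(pts[i] - pts[i - 1]) for i in range(1, len(pts)))
            lengths.append(raw * (n - 1) / ((len(pts) - 1) * k * k))
        log_inv_k.append(math.log(1.0 / k))
        log_len.append(math.log(sum(lengths) / len(lengths)))
    mx = sum(log_inv_k) / len(log_inv_k)
    my = sum(log_len) / len(log_len)
    num = sum((x - mx) * (y - my) for x, y in zip(log_inv_k, log_len))
    den = sum((x - mx) ** 2 for x in log_inv_k)
    return num / den

# A smooth sine wave should yield a dimension close to 1; noisier,
# more irregular EEG segments yield values closer to 2.
wave = [math.sin(2 * math.pi * i / 50) for i in range(500)]
print(higuchi_fd(wave))
```

    The single number returned per signal segment is what would then be fed, alongside other eigenvalues, into the SVM or KNN classifier.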
    Speech Enhancement Algorithms in Vehicle Environment
    Chunli Wang, Yuchen Li, and Huaiwei Lu
    2019, 15(11): 3081-3089.  doi:10.23940/ijpe.19.11.p28.30813089
    During actual driving, the driver is in a complex noise environment produced by the vehicle's own mechanical vibration, passenger conversation inside the vehicle, and the sound of other equipment. To improve driving efficiency and ensure driving safety, the vehicle equipment is precisely controlled by a voice control system. Aiming at the residual musical noise of traditional spectral subtraction, an improved multi-window spectrum estimation algorithm is applied to improve the estimation accuracy of the a priori SNR (signal-to-noise ratio). The experimental results show that the algorithm significantly suppresses the musical noise; in the low-SNR case, the signal-to-noise ratio gain is improved by 0.64 dB, and the waveform similarity and speech naturalness are improved after speech enhancement. Furthermore, current single-microphone voice de-reverberation technology exploits only time-domain and frequency-domain information while making limited use of spatial information, which makes a good de-reverberation effect difficult to achieve. To address these insufficiencies, we combine the de-reverberation technique with complex cepstrum blind deconvolution, and a simulation experiment is carried out using subjective and objective evaluation indexes of the waveform and the de-reverberated voice, showing that the optimized algorithm improves the intelligibility of the de-reverberated voice.
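    The classical spectral-subtraction baseline that the paper improves on can be sketched per frequency bin: subtract an estimated noise magnitude from the noisy magnitude and clamp to a spectral floor, which is exactly where the residual "musical noise" artifacts arise. The magnitudes and floor value below are illustrative, not real speech frames:

```python
# One frame of hypothetical magnitude spectra: |Y(f)| of noisy speech and
# a noise estimate taken from speech-free frames.
noisy_mag = [2.0, 0.8, 3.5, 0.5, 1.2]
noise_mag = [0.6, 0.7, 0.6, 0.6, 0.7]

# Basic magnitude spectral subtraction with a spectral floor; bins where
# the subtraction goes negative are clamped, and these randomly
# surviving peaks are the source of musical noise.
floor = 0.1
enhanced = [max(y - n, floor * y) for y, n in zip(noisy_mag, noise_mag)]
print([round(v, 2) for v in enhanced])
```

    The paper's contribution replaces the crude noise estimate with multi-window spectrum estimation to get a more accurate a priori SNR, which reduces how often bins hit the floor.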
    Residual Network Structure-based High Accuracy Spectral Analysis Method
    Sai Wu, Zhihui Wang, Sachura Meng, Weijun Zheng, and Weiping Shao
    2019, 15(11): 3090-3098.  doi:10.23940/ijpe.19.11.p29.30903098
    A pivotal technology in spectrum analysis is the classification of normal communication signals and interference signals. Automatic modulation classification (AMC) is widely utilized to identify the modulation types of received signals. In this paper, original signals with different modulation types are taken as the raw input of the network, and a convolutional neural network with a residual network structure is designed to identify the modulation type. Meanwhile, a sliding window method is proposed to expand the data set. RadioML2016.a data sets are utilized for simulation, and the simulation results indicate that the complexity and accuracy of this method are better than those of recent methods.
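    The sliding-window expansion step can be sketched directly: one long sample stream yields many overlapping fixed-length training examples. The window length and stride below are arbitrary illustrative values, not the paper's settings:

```python
# Cut a long sample stream into overlapping fixed-length windows so that
# each window becomes an additional training example.
def sliding_windows(samples, window=4, stride=2):
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, stride)]

stream = list(range(10))  # stand-in for a modulated signal's samples
windows = sliding_windows(stream)
print(len(windows), windows[0], windows[1])
```

    With stride smaller than the window length, consecutive examples share samples, multiplying the effective size of the training set without new captures.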
ISSN 0973-1318