Keywords:
coverless information hiding;text big data;web text;web spider;location information
Abstract:
Coverless information hiding has become a hot topic because it can hide secret information (SI) in carriers without any modification. Aiming at the problems of low hiding capacity (HC) and mismatch in text big data, a novel coverless information hiding method is proposed that retrieves the massive amount of web text on the Internet. First, the proposed method uses web spider technology to capture web texts associated with the SI and construct a web-text library. Second, texts containing the SI are searched and the optimal web text is selected from them. Then, the location of the SI in the selected web text is described using a 2-D coordinate system. Finally, the URL of the web text is combined with the obtained location information and sent to the recipient. The experimental results and analysis show that the performance is improved in terms of HC, hiding success rate, and security.
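A minimal sketch of the location-encoding idea described above, assuming a candidate web page has already been fetched as plain text; the function names and the (row, column) convention are illustrative assumptions, not the authors' exact scheme.

# Sketch: locate a secret keyword in a fetched web text and encode its
# position as a (URL, row, column) triple, so that only location
# information (never the secret itself, never a modified carrier) is sent.
# Helper names and the coordinate convention are assumptions for illustration.

def locate_secret(web_text: str, secret: str):
    """Return the (row, column) of the first occurrence of `secret`, 1-indexed."""
    for row, line in enumerate(web_text.splitlines(), start=1):
        col = line.find(secret)
        if col != -1:
            return row, col + 1
    return None  # this candidate text does not contain the secret

def hide(url: str, web_text: str, secret: str):
    """Combine the page URL with the location of the secret inside it."""
    loc = locate_secret(web_text, secret)
    if loc is None:
        raise ValueError("candidate web text does not contain the secret")
    row, col = loc
    return {"url": url, "row": row, "col": col}

# The recipient re-fetches the URL and reads the characters at (row, col)
# to recover the secret; the carrier text is never modified.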
Abstract:
Encrypted image retrieval in cloud computing is a key technology for realizing the storage and management of massive images while keeping them secure. In this paper, a novel feature extraction method for encrypted image retrieval is proposed. First, an improved Harris algorithm is used to extract image features. Next, the Speeded-Up Robust Features (SURF) algorithm and the Bag-of-Words model are applied to generate the feature vector of each image. Then, the Locality-Sensitive Hashing algorithm is applied to construct a searchable index over the feature vectors. A chaotic encryption scheme is utilized to protect the security of the images and indexes. Finally, secure similarity search is executed on the cloud server. The experimental results show that, compared with existing encrypted retrieval schemes, the proposed scheme not only reduces time consumption but also improves image retrieval accuracy.
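A minimal sketch of the indexing step only, assuming Bag-of-Words feature vectors have already been computed; random-hyperplane hashing is one common LSH construction and may differ from the paper's exact hashing and encryption design.

# Sketch: build a searchable LSH index over image feature vectors using
# random-hyperplane signatures (an LSH family for cosine similarity).
# The Harris/SURF feature extraction and chaotic encryption steps are not shown.
import numpy as np

rng = np.random.default_rng(0)

def lsh_signature(vec: np.ndarray, planes: np.ndarray) -> int:
    """Hash a feature vector to an integer bucket id via hyperplane signs."""
    bits = (planes @ vec) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

def build_index(features: np.ndarray, n_bits: int = 16):
    """features: (n_images, dim) Bag-of-Words vectors -> hyperplanes and bucket table."""
    planes = rng.standard_normal((n_bits, features.shape[1]))
    index = {}
    for img_id, vec in enumerate(features):
        index.setdefault(lsh_signature(vec, planes), []).append(img_id)
    return planes, index

def query(index: dict, planes: np.ndarray, q: np.ndarray) -> list:
    """Return candidate image ids whose signatures collide with the query."""
    return index.get(lsh_signature(q, planes), [])

# Usage: feats = np.random.rand(1000, 500); planes, idx = build_index(feats)
#        candidates = query(idx, planes, feats[0])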
Journal:
International Journal of Embedded Systems, 2018, 10(2):113-119, ISSN: 1741-1068
Corresponding author:
Pan, Lili(lily_pan@163.com)
Author affiliations:
[Pan, Lili; Qin, Jiaohua; Xiang, Xuyu] College of Computer Science and Information Technology, Central South University of Forestry and Technology, 410004 Changsha, Hunan, China;[Wang, Tiane] The Commission Institute, Hunan Electric Power Transmission and Substation Construction Company, 410017 Changsha, Hunan, China
Corresponding institution:
College of Computer Science and Information Technology, Central South University of Forestry and Technology, Changsha, Hunan, China
Keywords:
Fault detection;Information use;Testing;APFD;Average of the percentage of faults detected;Class method;DU-chain coverage;Regression testing;Test case;Software testing
Abstract:
Test case prioritisation schedules test cases for execution in an order that attempts to maximise one or more objectives, such as exposing faults earlier in testing. In the past, many test case prioritisation techniques prioritised test cases mainly on test-requirement coverage and ignored many other testing factors. In view of the importance of DU-chains in programs, this paper presents a test case prioritisation approach based on method-based DU-chain coverage. The technique combines DU-chain coverage and fault detection rate as quantitative factors for test cases. Unlike existing techniques, the approach makes use of information from executed tests and module coupling, and dynamically calculates a priority value for every test case. The experiments performed show that the dynamic prioritisation approach is effective at fault detection, and the APFD of test suites constructed by the dynamic prioritisation approach is higher than that of test suites constructed by the static prioritisation technique.
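A minimal sketch of the two building blocks named above: a greedy ordering by additional DU-chain coverage and the standard APFD metric used to evaluate it. The priority model is a simplification, assuming per-test DU-chain coverage data is available; it is not the paper's exact dynamic quantitative value.

# Sketch: greedy prioritisation by additional DU-chain coverage, then APFD
# (Average Percentage of Faults Detected) to score the resulting ordering.

def prioritise(du_chains_covered: dict) -> list:
    """Order test cases so that each next test adds the most uncovered DU-chains.
    du_chains_covered maps a test name to the set of DU-chains it covers."""
    remaining = dict(du_chains_covered)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        covered |= remaining.pop(best)
        order.append(best)
    return order

def apfd(order: list, faults_detected: dict, n_faults: int) -> float:
    """APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n).
    faults_detected maps a test name to the set of fault ids it reveals."""
    n = len(order)
    first_pos = []
    for fault in range(n_faults):
        pos = next((i + 1 for i, t in enumerate(order)
                    if fault in faults_detected.get(t, set())), n)
        first_pos.append(pos)
    return 1 - sum(first_pos) / (n * n_faults) + 1 / (2 * n)

# Usage: order = prioritise({"t1": {"d1", "d2"}, "t2": {"d2"}, "t3": {"d3"}})
#        score = apfd(order, {"t1": {0}, "t3": {1}}, n_faults=2)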
Conference proceedings:
2017 IEEE 3rd International Conference on Collaboration and Internet Computing (CIC)
Keywords:
Image classification;Food recognition;Multi class classification;Deep learning;Feature extraction;Convolutional neural network
Abstract:
Deep learning has brought a series of breakthroughs in image processing. Specifically, there are significant improvements in the application of deep learning techniques to food image classification. However, very little work has studied the classification of food ingredients. Therefore, this paper proposes a new framework, called DeepFood, which not only extracts rich and effective features from a dataset of food-ingredient images using deep learning but also improves the average accuracy of multi-class classification by applying advanced machine learning techniques. First, a set of transfer learning algorithms based on Convolutional Neural Networks (CNNs) is leveraged for deep feature extraction. Then, a multi-class classification algorithm is exploited based on the performance of the classifiers on each deep feature set. The DeepFood framework is evaluated on a multi-class dataset that includes 41 classes of food ingredients and 100 images per class. Experimental results illustrate the effectiveness of the DeepFood framework for multi-class classification of food ingredients. The model that integrates ResNet deep feature sets, Information Gain (IG) feature selection, and the SMO classifier shows its superiority for food-ingredient recognition compared with several existing works in this area.
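A minimal sketch of the transfer-learning feature-extraction step, assuming PyTorch/torchvision are available and using a pretrained ResNet-50 as a stand-in; the paper's full pipeline (Information Gain feature selection and the SMO classifier) is not reproduced here.

# Sketch: extract deep features from food-ingredient images with a pretrained
# ResNet by dropping its classification head. The downstream feature
# selection and multi-class classifier stages are assumed to follow.
import torch
from torchvision import models, transforms
from PIL import Image

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()      # drop the classifier head, keep 2048-d features
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(image_path: str) -> torch.Tensor:
    """Return a 2048-dimensional deep feature vector for one image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return resnet(img).squeeze(0)

# These vectors would then feed feature selection and a multi-class classifier.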
Journal:
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2016, 10039 LNCS:156-165, ISSN: 0302-9743
Abstract:
Although anti-phishing solutions have been highly publicised, phishing attacks remain a serious problem. In this paper, a novel phishing-webpage detection algorithm using webpage noise and n-grams is proposed. First, the algorithm extracts the webpage noise from suspicious websites and expresses it as a feature vector using n-grams. Then, the similarity between the feature vectors of the protected website and the suspicious website is calculated. Experimental results on sample data of phishing sites show that this algorithm detects whether a site is a phishing website more effectively, accurately, and quickly than existing algorithms.
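A minimal sketch of the n-gram representation and similarity comparison described above, assuming the "webpage noise" strings have already been extracted from the pages; the n-gram size and cosine similarity are illustrative choices, not necessarily the paper's exact settings.

# Sketch: represent extracted webpage-noise strings as character n-gram
# frequency vectors and compare them with cosine similarity. The noise
# extraction (parsing the suspicious page) is assumed to have been done.
from collections import Counter
import math

def ngram_vector(text: str, n: int = 3) -> Counter:
    """Character n-gram frequency vector of a string."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A suspicious page whose noise vector is highly similar to a protected
# site's vector would be flagged as a likely phishing copy.
protected = ngram_vector("<!-- site template noise -->")
suspect = ngram_vector("<!-- site template noise v2 -->")
print(cosine_similarity(protected, suspect))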
Journal:
International Journal of Network Security, 2015, 17(5):637-642, ISSN: 1816-353X
Corresponding author:
Huang, Huajun(hhj0906@163.com)
Author affiliations:
[Huang, Huajun; Pang, Shuang; Deng, Qiong; Qin, Jiaohua] College of Computer and Information Engineering, Central South University of Forestry and Technology, 498 Shaoshan South Road, Changsha, Hunan Province, 410004, China
Corresponding institution:
College of Computer and Information Engineering, Central South University of Forestry and Technology, 498 Shaoshan South Road, Changsha, Hunan Province, China
Abstract:
Conventional text similarity detection usually uses word-frequency vectors to represent texts, but such vectors are high-dimensional and sparse. Therefore, this research proposes a new text similarity detection algorithm using a component histogram map (CHM-TSD). The method is based on the mathematical expression of Chinese characters, by which Chinese characters can be split into components. The occurrence frequency of each component in a text is then counted to build the component histogram map (CHM), which serves as the text's characteristic vector. Four distance formulas are compared to determine which performs best for text similarity detection. The experimental results indicate that CHM-TSD achieves better precision, recall, and F1 than the cosine theorem and the Jaccard coefficient.
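A minimal sketch of the component-histogram idea, assuming a character-to-component decomposition table is available; the tiny table below is a hypothetical stand-in, and the Euclidean distance is just one of the four candidate distance formulas mentioned in the abstract.

# Sketch: build a component histogram map (CHM) for a Chinese text and
# compare two texts with one distance formula. COMPONENTS is a hypothetical,
# illustrative decomposition table, not a real split of all Chinese characters.
from collections import Counter
import math

COMPONENTS = {
    "好": ["女", "子"],
    "妈": ["女", "马"],
    "吗": ["口", "马"],
}

def chm(text: str) -> Counter:
    """Count component occurrences over all characters in the text."""
    hist = Counter()
    for ch in text:
        hist.update(COMPONENTS.get(ch, [ch]))  # fall back to the character itself
    return hist

def euclidean(a: Counter, b: Counter) -> float:
    keys = a.keys() | b.keys()
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))

print(euclidean(chm("好妈"), chm("好吗")))   # smaller distance -> more similar texts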
Journal:
Journal of Software Engineering, 2015, 9(2):337-349, ISSN: 2152-0941
Corresponding author:
Huang, Huajun
Author affiliations:
[Qin, Jiaohua; Xie, Lili; Huang, Huajun] College of Computer and Information Engineering, Central South University of Forestry and Technology, Changsha, 410004, China
Corresponding institution:
College of Computer and Information Engineering, Central South University of Forestry and Technology, Changsha, China
Keywords:
Chinese text;Component histogram map;Component relation map;Similarity detection