International Journal of Information and Computer Security (48 papers in press)
by Aniruddha Bhattacharjya, Xiaofeng Zhong, Wang Jing
Abstract: End-to-end (E2E) secured personal messaging is essential in Future Internet Architectures. We therefore propose an E2E, mutually authenticated, double-encrypted messaging architecture based on Hybrid RSA for private messaging. Our peer-to-peer protocol runs over TCP to create direct connections between peers, using IPv4 broadcast to discover peers on the same LAN. The protocol provides Perfect Forward Secrecy via Diffie-Hellman key exchange, renegotiated in every session, together with Optimal Asymmetric Encryption Padding and random salts. To build Hybrid RSA with double encryption, the main RSA is integrated with Efficient RSA at the encryption level to increase statistical complexity; in the decryption process, the CRT is combined with Shared RSA for very high efficiency. The architecture offers a hassle-free, secure, peer-to-peer, strong and reliable platform with E2E encryption for private messaging, and it can also work with Future Internet Architectures.
Keywords: CRT; D-H; PFS; OAEP; Shared RSA; Efficient RSA; Hybrid RSA.
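The abstract above credits the Chinese Remainder Theorem (CRT) for decryption efficiency. As background only, here is a minimal sketch of standard CRT-accelerated RSA decryption, not the paper's full Hybrid/Shared RSA construction; the primes and exponent are toy, insecure teaching values.

```python
# Illustrative CRT RSA decryption: two half-size exponentiations mod p and q
# replace one full-size exponentiation mod n. Toy parameters only.

def crt_decrypt(c, d, p, q):
    """Decrypt ciphertext c using the CRT speed-up."""
    dp, dq = d % (p - 1), d % (q - 1)   # reduced private exponents
    q_inv = pow(q, -1, p)               # q^{-1} mod p for recombination
    m_p = pow(c, dp, p)                 # half-size exponentiation mod p
    m_q = pow(c, dq, q)                 # half-size exponentiation mod q
    h = (q_inv * (m_p - m_q)) % p
    return m_q + h * q                  # recombine to m mod n

# Toy parameters: small primes, e*d = 1 mod phi(n).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))
m = 42
c = pow(m, e, n)                        # textbook RSA encryption
assert crt_decrypt(c, d, p, q) == m
```

The recombination step (Garner's formula) is what makes the two partial results agree with full decryption mod n.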
Accountable Administration in Operating Systems
by Lei Zeng, Hui Chen, Yang Xiao
Abstract: Accountability implies that entities should be held responsible for their actions or behaviors so that the entities are part of larger chains of accountability. Many security models and systems are built upon the assumption that super users are trustworthy. However, holding super users accountable becomes challenging since they can erase any trace of their activities. The work at hand proposes an accountable system administration model for operating systems where all system administrators can be accounted for even if they are untrustworthy. The accountability policy and operating system primitives are designed and constructed so that the proposed model is provable. The model is implemented in Linux, which is a real-world operating system. The performance overhead of the scheme is evaluated using the implementation. The experiments show that the performance overhead is tolerable under normal I/O load, despite the fact that the overhead can be very high if the system is overwhelmed with I/O requests.
Keywords: Accountability; Operating System; System Administration; Logging; OS security.
Security Engineering Methods: In-Depth Analysis
by Shruti Jaiswal, Daya Gupta
Abstract: Providing security during the development of complex information systems is challenging because of complex networks and ubiquitous systems. Traditional mechanisms address security concerns late in the development or design of software systems, which may lead to various loopholes or an over-constrained system. The field of Security Engineering has emerged, whereby security requirements are gathered along with other requirements during the initial phase of software development. However, dealing with security concerns during the initial phases of information system development is challenging because the design and code are not yet available. In this paper, proposals of various researchers are presented and analyzed. The paper first presents the proposals for security requirements engineering based on the use-case approach, moves to the goal-oriented approach, and then explores proposals based on the process-oriented approach. These methodologies are evaluated along various parameters, such as the Security Engineering activities covered (Security Requirements Engineering, Security Design Engineering, and Security Testing), application domain, and so on. The in-depth analysis of Security Engineering methods ends with a recent proposal for Security Engineering that deals with all security concerns effectively and also specifies the unresolved issues that need to be addressed. The outcome of the paper can be exploited to drive further research in the field.
Keywords: Security Requirements; Security Engineering; Security Requirements Engineering; Security Design Engineering; Security Testing.
Opportunistic Key Management in Delay Tolerant Networks
by Sofianna Menesidou, Vasilios Katos
Abstract: Key management is considered a challenging task in Delay Tolerant Networks (DTNs) operating in environments with adverse communication conditions, such as space, due to the practical limitations and constraints that prohibit effective closed-loop communications. In this paper we propose opportunistic key management as a more suitable solution for key management in networks requiring opportunistic behaviour. We show that opportunistic key management is best exploited when used in conjunction with routing decisions made by security-aware DTN nodes.
Keywords: opportunistic key management; security-aware routing decision; delay tolerant networks.
A Standardized Data Acquisition Process Model (SDAPM) for Digital Forensic Investigations
by Reza Montasari
Abstract: As with traditional evidence, courts of law do not assume that digital evidence is reliable without evidence of empirical testing of the theories and techniques pertaining to its production. Courts take careful notice of the way in which digital evidence has been acquired and stored. In contrast with traditional crimes, for which there are well-established standards and procedures upon which courts can rely, there are no formal procedures or models for digital data acquisition to which courts of law can refer. A standardized data acquisition process model is needed to enable digital forensic investigators to follow a uniform approach, and to assist courts of law in determining the reliability of the digital evidence presented to them. This paper proposes a model that is standardized, in that it enables digital forensic investigators to follow a uniform approach, and generic, in that it can be applied in both law enforcement and corporate investigations. To carry out the research presented in the paper, the Design Science Research Process (DSRP) methodology proposed by Peffers et al. (2006) has been followed.
Keywords: digital forensics; data acquisition; process model; standardized model; digital investigations; computer forensics; formal process.
Keyed Hash Function using Bernoulli Shift Map
by Jhansi Rani Prathuri, Durga Bhavani Surampudi
Abstract: Chaos-based cryptography involves real-number computations and hence produces slow algorithms. To address this issue, existing approaches use piece-wise linear maps that speed up computations; high-dimensional linear maps have been chosen to avoid dynamical degradation. In this paper, we claim that a single one-dimensional non-linear chaotic map can produce ergodic orbits in a fast manner. We propose a keyed hash function that takes advantage of the interplay between chaos-based dynamics and Bernoulli shift dynamics. The proposed Bernoulli keyed hash function proves to be an efficient scheme, achieving speeds on par with existing schemes in the literature. Extensive validation is carried out at the byte, block and whole-message level for collision resistance and key sensitivity. We provide empirical analysis to show that the proposed scheme is preimage and second-preimage resistant.
Keywords: Logistic map; Bernoulli map; Chaotic cryptography; Secure hash function; Preimage resistance.
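As background to the keywords above, here is a minimal sketch of the two chaotic maps named there, the logistic map and the Bernoulli shift map. This reproduces only the map dynamics and their sensitivity to the seed (which a key would supply), not the paper's actual keyed hash construction.

```python
# Two one-dimensional chaotic maps and a simple orbit iterator.

def logistic(x, r=3.99):
    """Logistic map: x_{n+1} = r * x_n * (1 - x_n); chaotic for r near 4."""
    return r * x * (1.0 - x)

def bernoulli(x):
    """Bernoulli shift map: x_{n+1} = 2 * x_n mod 1."""
    return (2.0 * x) % 1.0

def orbit(f, x0, n):
    """Iterate map f from seed x0 for n steps and return the trajectory."""
    xs = []
    for _ in range(n):
        x0 = f(x0)
        xs.append(x0)
    return xs

# A key could seed the initial condition; nearby seeds soon diverge
# (sensitive dependence on initial conditions).
a = orbit(logistic, 0.3141590, 30)
b = orbit(logistic, 0.3141591, 30)
print(abs(a[-1] - b[-1]))
```

This seed sensitivity is what makes such maps usable for keyed constructions, while the Bernoulli shift contributes the fast, ergodic mixing the abstract refers to.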
Data Hiding using Lifting Scheme and Genetic Algorithm
by Geeta Kasana, Kulbir Singh, Satvinder Bhatia
Abstract: In this paper, a data hiding algorithm using the lifting scheme and a Genetic Algorithm (GA) is proposed. The Arnold transform is used to scramble the secret image so that its extraction is secure. The lifting scheme is applied to the cover image to obtain the wavelet subbands, and the scrambled secret image is embedded into significant wavelet coefficients of these subbands. A Scaling Factor (SF) parameter is used in the embedding and extraction processes, and the GA is used to optimize this parameter. The optimization maximizes the Peak Signal-to-Noise Ratio (PSNR) of the composite image and the Similarity Index Modulation (SIM) of the extracted secret image. Experimental results reveal that the proposed algorithm provides higher embedding capacity and better composite-image quality than existing data hiding techniques. To show the effectiveness of the proposed algorithm, statistical tests have been performed to confirm that imperceptibility is maintained.
Keywords: Lifting Scheme; Arnold Transform; PSNR; GA; SIM; SF.
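The Arnold transform named above is a standard, invertible scrambling step: pixel (x, y) of an N x N image moves to ((x + y) mod N, (x + 2y) mod N). A minimal sketch on a toy 4x4 "image" shows why extraction remains possible: the map has an exact inverse.

```python
# Arnold (cat map) scrambling of a square image, plus its exact inverse.

def arnold(img):
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def arnold_inverse(img):
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            # inverse of the matrix [[1, 1], [1, 2]] mod n is [[2, -1], [-1, 1]]
            out[(2 * x - y) % n][(y - x) % n] = img[x][y]
    return out

secret = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
scrambled = arnold(secret)
assert scrambled != secret                    # pixels are permuted
assert arnold_inverse(scrambled) == secret    # one inverse step restores the image
```

In practice the map is iterated k times with k kept secret, and the same k inverse steps recover the embedded secret image.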
Update Enabled Multi-keyword Searchable Encryption Scheme for Secure Data Outsourcing
by Vasudha Arora, Shyam Tyagi
Abstract: Over the last decade, cloud computing has emerged as a distinct IT environment that provides remote access to a set of decentralized IT resources. Cloud computing enables data owners to outsource their data and applications so that users can access the data from anywhere, at any time, without any concern about local hardware and software management. However, outsourcing sensitive data raises privacy concerns. Encrypting data before outsourcing protects it to some extent, but searching over encrypted data may compromise efficiency. Searchable encryption allows cloud data to be retrieved efficiently based on a relevance criterion. Our proposed scheme enables dynamic updating of existing searchable encryption schemes with high accuracy and security, so that information leakage can be eliminated.
Keywords: Cloud; Searchable Encryption; UEMSE; TRSE; Data Outsourcing; Relevance Scoring.
A Novel Quantum Distributed Key Management Protocol for Ring-organized Group
by Rima Djellab, Mohamed Benmohammed
Abstract: Key distribution is a core building block for secure communication. In group communication, key distribution is not a simple extension of two-party communication. Many approaches have been proposed in the classical setting, but those proposals still rest on the assumption that certain computational problems are hard. Based on the laws of quantum mechanics, a new field has emerged that allows two or more participants to generate and share a secret, secure key. In this paper, we propose a new multiparty key distribution protocol for group communication based on the well-known quantum key distribution protocol BB84. The security of the proposed solution rests on the unconditional security of BB84, guaranteed by the laws of quantum mechanics, and on the provably secure XOR operation. In the proposed solution, each participant contributes a partial key so that, at the end of the protocol, all participants obtain the same group key, which can then be used for encryption. We also analyze and verify some security properties of the proposed protocol using a probabilistic symbolic model checker, the PRISM tool.
Keywords: BB84; key management; PRISM model-checker; quantum key distribution; QKD in group; security; verification.
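The abstract describes each participant contributing a partial key that is combined with XOR into a common group key. A minimal classical sketch of that combination step, with random bytes standing in for BB84-derived partial keys:

```python
# XOR combination of partial keys into a group key. The partials here are
# ordinary random bytes; in the protocol they would come from BB84 runs.
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def group_key(partials):
    """Combine every participant's partial key with XOR."""
    key = bytes(len(partials[0]))   # all-zero accumulator
    for p in partials:
        key = xor_bytes(key, p)
    return key

partials = [secrets.token_bytes(16) for _ in range(5)]   # 5 participants
k = group_key(partials)
# XOR is associative and commutative, so combination order does not matter.
assert group_key(list(reversed(partials))) == k
```

Because XOR acts like a one-time pad, any coalition missing even one partial key sees the group key as uniformly random, which is the "mathematically proved secure" property the abstract invokes.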
A Trust-based Approach for Securing Data Communication in Delay Tolerant Networks
by Djoudi Touazi, Mawloud Omar, Abdelhakim Bendib, Abdelmadjid Bouabdallah
Abstract: The proliferation of network technologies has led to many different network architectures that provide a huge variety of services and contents to end clients. Securing these services becomes more difficult in networks with intermittent connections, called Delay Tolerant Networks (DTNs), where security is an important issue. In this paper, we propose a trust-based approach to secure data transfer in DTNs in the presence of malicious transporters. Our proposal is intended for a DTN architecture comprising several sub-networks geographically dispersed in isolated regions, with intermittent access to an infrastructure-based network (such as the Internet). The approach is based on a particular web-of-trust, formed from existing social relationships among clients and transporters. We conducted intensive simulations, and the obtained results show that the approach offers a high packet delivery rate and resists malicious transporter behavior.
Keywords: Trust; Web-of-trust; Public-key certification; Delay tolerant networks.
A Survey on Forensic Event Reconstruction Systems
by AbdelRahman Abdou, Abes Dabir, Ashraf Matrawy
Abstract: Security-related incidents such as unauthorized system access, data tampering and theft have been rising noticeably. Tools such as firewalls, intrusion detection systems and anti-virus software strive to prevent these incidents. Since these tools only prevent an attack, once an illegal intrusion occurs they cease to provide useful information. Consequently, system administrators are interested in identifying the vulnerability in order to (1) avoid future exploitation, (2) recover corrupted data and (3) present the attacker to law enforcement where possible. Forensic event reconstruction systems are used to provide administrators with such information. We present a survey of the approaches to forensic event reconstruction systems proposed over the past few years. Technical details are discussed, along with an analysis of their effectiveness, advantages and limitations. The surveyed tools are compared and assessed against the primary principles that a forensic technique is expected to follow.
Keywords: Forensic Event Reconstruction; ReVirt; Forensix; Backtracker.
MAM-ISSIDS: Multi-Agent Model Based Intelligent and Self-Sharing Intrusion Detection System for Distributed Networks
by K. Anusha, E. Sathiyamoorthy
Abstract: An Intrusion Detection System (IDS) is essential for protecting computer networks from various threats and attacks. The autonomous Multi-Agent Model (MAM) architecture is a scalable and smart alternative that leverages the strengths of both host-based and network-based IDSs. This paper proposes a MAM-based Intelligent and Self-Sharing IDS (MAM-ISSIDS) for distributed networks to detect host, network and web-service attacks. Feature selection is performed using an integrated Particle Swarm Optimization-Genetic Algorithm (PSO-GA) approach. Intuitionistic fuzzy rules are used to formulate the rules of known attackers for the benchmark dataset, and an ontology structure is used to share the rules across the network. The MAM detects abnormal traffic resulting from intrusion attacks. The proposed system achieves a higher attack detection rate and accuracy and a lower false positive rate, owing to the distributed sharing strategy of the MAM.
Keywords: Intrusion Detection; Intuitionistic Fuzzy Rules; Multi-Agent Model (MAM); Particle Swarm Optimization-Genetic Algorithm (PSO-GA).
Performance Analysis of Image Steganalysis Techniques and Future Research Directives
by Sanchita Pathak, Ratnakirti Roy, Suvamoy Changder
Abstract: Steganography is a technique of hiding information imperceptibly inside another medium, so that the very fact that communication is taking place remains hidden. Recently, the reach of the Internet has widened considerably through social networking and blogging websites, and a high volume of digital media interchange, especially in the form of digital images, is being witnessed. This poses a serious security threat, as the medium can be used by hackers and terrorists for covert communication, justifying the need for good steganalysis techniques to detect the existence of hidden messages in digital images. This paper analyzes steganalysis techniques that attack various kinds of spatial-domain steganography, including the chi-square attack, triples analysis, sample pair analysis, TPVD steganalysis and adjacent pixel pair steganalysis. The paper also identifies current research challenges and discusses possible directions for future research in this field.
Keywords: Steganography; Steganalysis; Spatial Domain Steganography; Chi-square attack; Triples Analysis; Sample Pair analysis; TPVD steganalysis; Analysis of Adjacent Pixel Pair steganalysis.
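As background to the chi-square attack listed above: full LSB replacement equalizes the frequencies of each pair of values (2k, 2k+1), so a chi-square statistic over those pairs collapses for stego images. A hedged sketch on synthetic pixel data (the statistic only; a real attack would also convert it to a p-value):

```python
# Chi-square statistic over pairs of values (2k, 2k+1), the core of the
# chi-square attack on LSB steganography. Synthetic data, not real images.
import random

def chi_square_stat(pixels):
    """Chi-square distance of each pair's counts from their common mean."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    stat = 0.0
    for k in range(128):
        expected = (hist[2 * k] + hist[2 * k + 1]) / 2.0
        if expected > 0:
            stat += ((hist[2 * k] - expected) ** 2
                     + (hist[2 * k + 1] - expected) ** 2) / expected
    return stat

random.seed(1)
base = [random.randint(0, 255) for _ in range(20000)]
# A skewed "natural" cover: odd values made rarer, so pairs are unbalanced.
cover = [p & ~1 if random.random() < 0.5 else p for p in base]
# Full LSB replacement with a random message randomizes every LSB.
stego = [(p & ~1) | random.randint(0, 1) for p in cover]
print(chi_square_stat(cover), chi_square_stat(stego))
# the statistic collapses after embedding, which is the attack's signal
```

The unbalanced cover yields a large statistic while the LSB-randomized version drops to roughly the chi-square expectation, which is how the attack flags suspicious images.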
Modelling a Secure Support Vector Machine Classifier for Private Data
by Sumana Maradithaya
Abstract: Privacy-preserving data mining is concerned with extracting information from distributed data without disclosing sensitive information to collaborating sites. This paper addresses the construction of a vertically distributed privacy-preserving support vector machine classifier. The learning model is built for datasets in which one of the collaborating parties holds the dependent attribute. The privacy, computation speed and accuracy of our classifier outperform other benchmark algorithms for privacy-preserving support vector machine classification. Privacy of the data distributed across multiple sites is maintained by performing secure computations: the sensitive attribute values of the cooperating sites remain unknown to the other sites, while collaborative classification is performed using those attributes. The site holding the dependent attribute acts as the master site and initiates the secure computation that identifies the support vectors. The homomorphic property is used to securely compute the data matrix over the records/tuples available at the sites. The proposed non-linear privacy-preserving classifier provides accuracy equivalent to a non-private, undistributed SVM classifier that uses all the attributes directly.
Keywords: Support Vector Machine Classification; Homomorphic Encryption; Vertically Partitioned Data; Secure Multiparty Computation; Privacy Preserving Data Mining.
Reliable and Secure Communication Using Fundamental Cut-sets and Fundamental Circuits
by Pavan Kumar C
Abstract: Ensuring reliability and security is a challenge in modern communication systems. To address this challenge, a novel reliable and secure communication system is designed in this paper. Reliability is achieved by constructing a class of error-correcting codes called concatenated kernel codes. Security, in the form of source authentication, is achieved by exploiting the graph nature of the trellis, employing the graph-theoretic notions of fundamental cut-sets and fundamental circuits. It is shown that the proposed communication framework achieves the goals of reliability and security in the presence of channel noise and cryptanalytic attacks. The theoretical basis of the proposed framework is validated and its performance is evaluated through simulations.
Keywords: Reliability; Security; Error Correcting Codes; Trellis; Fundamental cut-sets; Fundamental Circuits.
Development of an Efficient Classifier Using a Proposed Sensitivity Based Feature Selection Technique for Intrusion Detection Systems
by H.S. Hota, Dinesh Sharma, A.K. Shrivas
Abstract: An intrusion detection system protects an individual computer or a networked computer from suspicious data and protects the system from unauthorized access. Feature selection, in turn, is necessary for high-dimensional data. In this paper, we propose a Feature Selection Technique (FST) known as the Sensitivity Based Feature Selection Technique (SBFST), which selects relevant features from intrusion data based on sensitivity, an important classification measure for evaluating a classification model more accurately. We also compare various existing FSTs with the proposed technique on the NSL-KDD dataset. Experimental work is conducted using three categories of NSL-KDD data: (i) a binary-class problem with normal and Denial of Service (DoS) traffic, (ii) a binary-class problem with normal traffic and attacks, and (iii) a multiclass problem with normal traffic and four types of attack. The proposed FST, as well as existing FSTs, has been applied with a C4.5 decision tree classifier to these three categories of NSL-KDD data using different partitions. Experimental results at the testing stage reveal that SBFST performs better than the existing FSTs: C4.5 with SBFST produces high accuracies of 99.68% and 99.95% for the multiclass and binary-class problems, respectively. The performance of the proposed technique is also verified, segment by segment, using the intersection of features with other FSTs, and is found to be better.
Keywords: Feature Selection Technique (FST); Sensitivity Based Feature Selection Technique (SBFST); Intrusion Detection System (IDS); Information Security.
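The abstract scores features by sensitivity (recall, TP / (TP + FN)). A hedged sketch of that idea using a naive one-feature threshold rule on toy data; the paper's actual pipeline instead evaluates features with a C4.5 classifier on NSL-KDD, so the rule and data below are illustrative stand-ins.

```python
# Rank features by the sensitivity of a trivial one-feature threshold rule.

def sensitivity(y_true, y_pred):
    """Sensitivity (recall) = TP / (TP + FN) for binary labels 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

def rank_features(X, y):
    """Return feature indices ordered by sensitivity of a midpoint rule."""
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        thr = (min(col) + max(col)) / 2.0          # naive threshold choice
        pred = [1 if v > thr else 0 for v in col]
        scores.append((sensitivity(y, pred), j))
    return [j for _, j in sorted(scores, reverse=True)]

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.1, 5], [0.2, 1], [0.9, 4], [0.8, 2]]
y = [0, 0, 1, 1]
print(rank_features(X, y))   # feature 0 ranks first
```

Keeping only the top-ranked features is the dimensionality reduction step that the abstract then feeds into the classifier.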
Assessing Cyber-Incidents Using Machine Learning
by Ross Gore, Saikou Diallo, Jose Padilla, Barry Ezell
Abstract: One of the difficulties in effectively analyzing and combating cyber attacks is an inability to identify when, why and how they occur. Victim organizations do not reveal this data for fear of disclosing vulnerabilities and attackers do not reveal themselves for fear of being prosecuted. In this paper, we employ two machine learning algorithms to identify: (1) if a text-based report is related to a cyber-incident and (2) the topic within the field of cyber-security the incident report addresses. First, we evaluate the effectiveness of our approach using a benchmark set of cyber-incident reports from 2006. Then, we assess the current state of cyber-security by applying our approach to a 2014 set of cyber-incident reports we gathered. Ultimately, our results show that the combination of automatically gathering and organizing cyber-security reports in close to real-time yields an assessment technology with actionable results for intelligence and security analysts.
Keywords: Computer crime prevention and detection; Information warfare; National security; Wireless and mobile network security; Software vulnerabilities; Emerging malware.
High-Frequency Implementation of the Cryptographic Hash Function Keccak-512 on FPGA Devices
by Soufiane El Moumni, Mohamed Fettach, Abderrahim Tragha
Abstract: Cryptographic hash functions play an important role in numerous cryptographic mechanisms, such as computing digital signatures, checking data integrity, storing passwords and generating random numbers. Owing to cryptanalytic attacks on hash functions, NIST expressed its need for a new, resistant hash function by announcing a public competition; this competition made the Keccak hash function the new secure hash algorithm, SHA-3. SHA-3 has proved its strength against recent attacks, but it must also be implemented efficiently to retain its resistance; in other words, efficient FPGA designs of hash functions are needed, whether by increasing frequency, minimizing area consumption, or increasing throughput. In this paper we focus on increasing the frequency of Keccak-512, achieving a maximum frequency of 401.2 MHz and a throughput of 9.62 Gbps. The proposed design has been implemented on Xilinx Virtex-5 and Virtex-6 FPGA devices and compared with existing FPGA implementations.
Keywords: SHA-3; Keccak; hardware implementation; FPGA; frequency; hash function; cryptographic protocols.
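For readers who want to experiment in software: the standardized form of Keccak ships in Python's hashlib as SHA-3. Note that Keccak-512 as originally submitted to the competition uses different padding than the final SHA3-512 standard, so the digests differ; this only illustrates the sponge family and the 512-bit digest size the paper's hardware computes.

```python
# SHA3-512 (the standardized Keccak variant) from the standard library.
import hashlib

digest = hashlib.sha3_512(b"message").hexdigest()
print(len(digest))   # 128 hex characters, i.e. a 512-bit digest
```

Throughput figures like the paper's 9.62 Gbps come from how many of these 512-bit digests the hardware pipeline can absorb per second, not from anything visible at this API level.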
Innovative Data Security Model Using Forensic Audio Video Steganography for Improving Hidden Data Security and Robustness
by Sunil Moon
Abstract: Embedding data using steganography is not the major issue; the major problem is recovering the hidden data securely, without degradation of either the original or the secret data. To the best of our knowledge, many researchers are working on image steganography using the Exploiting Modification Direction (EMD) algorithm to improve hiding capacity and security. Nowadays, however, Internet use is dominated by video-rich services such as Facebook, YouTube and WhatsApp; hence, in this paper, we propose a combination of video crypto-steganography and a digital forensic technique using a Modified General EMD (MGEMD) algorithm to enhance the embedding capacity and the security of secret data. We embed the secret data, as an image and audio, behind selected frames of a video, and obtain the key security parameters using the forensic technique to improve the hiding capacity and data security, which is found to be better than existing methods.
Keywords: Modified General EMD (MGEMD); Normalized Cross Correlation (NCC); Video Crypto-Steganography; Attacks; Audio Steganography; Data Security.
A Detection Algorithm for Internet Worms that Scan Using the User Datagram Protocol
by Mohammad M. Rasheed
Abstract: The Internet pervades almost every aspect of our lives, and with the development of network technologies and applications, worm attacks greatly affect the security and safety of the network infrastructure. As a key technique in the network security domain, an Intrusion Detection System (IDS) plays a vital role in detecting various kinds of worm scanning. The main purpose of an IDS is to find intrusions among normal audit data, which can be considered a classification problem. The difficulty arises with the User Datagram Protocol (UDP), a connectionless protocol that requires no formal handshake to get data flowing and has no need for SYN, ACK or FIN flags or any other handshaking. With UDP, packets are sent and received without warning, and prior notice is not usually expected. Worms also make use of UDP to connect to or scan other hosts. In this research, UDP Scanning Worm Detection (UDPSWD) is proposed to detect UDP worm scanning by checking failed connection messages. UDPSWD focuses on ICMP unreachable messages, ICMP time exceeded messages, and UDP probes that receive no response. The results show that UDPSWD is faster than comparable techniques, with no false positive or false negative alarms.
Keywords: Internet worm detection.
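The detection idea in the abstract, counting failure indications per source host, can be sketched in a few lines. The event names and threshold below are hypothetical stand-ins, not UDPSWD's actual parameters.

```python
# Flag a host as a possible UDP scanner when its failure indications
# (ICMP unreachable, ICMP time exceeded, unanswered UDP probes) cross a
# threshold. All names and the threshold value are illustrative.
from collections import defaultdict

FAILURE_EVENTS = {"icmp_unreachable", "icmp_time_exceeded", "udp_no_response"}
THRESHOLD = 5   # failures before raising an alert

def detect_scanners(events):
    """events: list of (source_ip, event_type). Returns suspected scanners."""
    failures = defaultdict(int)
    for src, kind in events:
        if kind in FAILURE_EVENTS:
            failures[src] += 1
    return {src for src, n in failures.items() if n >= THRESHOLD}

events = [("10.0.0.9", "icmp_unreachable")] * 6 + [("10.0.0.2", "udp_no_response")]
print(detect_scanners(events))   # {'10.0.0.9'}
```

A scanner probing random addresses generates many such failures, while a legitimate UDP client rarely does, which is why failure counting works without any handshake to observe.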
SPHERES: An Efficient Server-side Web Application Protection System
by Ouissem Ben Fredj
Abstract: While web attacks grow in number and variety, current web protection methods fail to keep pace with this evolution. This paper introduces a new design for a web application protection method called SPHERES. The main idea behind SPHERES is that it is placed in the application server, intercepts the decrypted traffic, and checks it against a set of filtering rules specific to the requests. This design gives SPHERES the most accurate picture of the exchanged traffic, the website structures and workflows, the user sessions and their states, and the system state. This accurate picture of the whole system allows SPHERES to build a protection sphere around the website and to check several types and levels of protection efficiently. In addition to detecting known attacks, SPHERES is able to detect zero-day attacks at runtime. A performance study shows that SPHERES performs much better than two well-known existing web protection tools.
Keywords: Web application security; Protection method; Web application firewall; OWASP; XSS; CSRF; SQL injection.
A Novel Verifiable and Unconditionally Secure (m,t,n)-Threshold Multi-Secret Sharing Scheme Using Overdetermined Systems of Linear Equations over Finite Galois Fields
by Faraoun Kamel Mohamed
Abstract: Threshold multi-secret sharing schemes allow a set of m secrets to be shared among n participants such that the secrets can be revealed only if t or more participants collude. Although many multi-secret sharing schemes have been proposed, several improvements remain essential to meet current effectiveness and security requirements, including computational performance and suitability for large-scale data. In this paper, we present a novel multi-secret (m,t,n)-threshold scheme using overdetermined systems of linear equations defined over finite Galois fields. The scheme provides unconditional security and linear sharing/reconstruction complexities, and supports secure verifiability and t-consistency. By representing both secrets and shares as elements of the finite Galois field GF(2^r), an optimal, space-efficient representation is ensured compared to recent sharing schemes. In addition, the scheme provides dynamic secret sharing, forgery/cheating detection and robustness against common attacks, while requiring lower computational overhead.
Keywords: Verifiable multi-secrets sharing; overdetermined systems of linear equations; Galois field; unconditional security.
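For context on the threshold property, here is a minimal Shamir-style (t, n) sketch over a prime field. The paper's scheme instead uses overdetermined linear systems over GF(2^r) and shares m secrets at once; this classic single-secret construction only illustrates what "t or more participants collude" means.

```python
# Shamir-style (t, n) threshold sharing over GF(P): the secret is the
# constant term of a random degree t-1 polynomial; any t shares interpolate it.
import random

P = 2**61 - 1   # a Mersenne prime standing in for the field modulus

def share(secret, t, n):
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse, since P is prime
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```

With fewer than t shares the interpolation is underdetermined and every field element is an equally likely secret, which is the unconditional-security property the abstract claims for its own construction.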
A Generic Construction of Identity-Based Proxy Signature Schemes in the Standard Model
by Xiaoming Hu, Huajie Xu, Jian Wang, Wenan Tan, Yinchun Yang
Abstract: Recently, numerous identity-based proxy signature (IDPS) schemes have been constructed by direct or generic methods. However, most of them are proved secure only in the random oracle model or involve high computational cost. In this paper, we present a novel, generic method for constructing an IDPS scheme secure in the standard model from any identity-based signature (IDS) scheme. The security of an IDPS scheme constructed by our method rests on the security of the underlying IDS scheme, and the computational cost of constructing the IDPS scheme is almost the same as that of constructing the original IDS scheme. Compared with existing IDPS schemes constructed by direct methods or other generic methods, our IDPS scheme has better performance: its signature length and computational cost are almost half those of existing IDPS schemes. Moreover, our method can be applied to construct other identity-based proxy cryptosystems.
Keywords: cryptography; identity-based proxy signature; identity-based signature; provably secure; standard model.
Outsourcing Computation for Private Function Evaluation
by Henry Carter, Patrick Traynor
Abstract: Outsourcing secure multiparty computation (SMC) protocols has allowed resource-constrained devices to take advantage of these developing cryptographic primitives with great efficiency. While the existing constructions for outsourced SMC guarantee input and output privacy, they require that all parties know the function being evaluated. Thus, stronger security guarantees are necessary in applications where the function itself needs to be kept private. We develop the first linear-complexity protocols for outsourcing private function evaluation (PFE), a subset of SMC protocols that provide both input and function privacy. Assuming a semi-honest function holder, we build on the most efficient two-party PFE constructions to develop outsourced protocols that are secure against a semi-honest, covert, or malicious Cloud server and malicious mobile devices providing input to the function. Our protocols require minimal symmetric key operations and only two rounds of communication from the mobile participants. To make these protocols possible, we develop a technique for combining public and private sub-circuits in a single computation called partially-circuit private (PCP) garbling. This novel garbling technique allows us to apply auxiliary circuits to check for malicious behavior using only free-XOR overhead gates rather than the significantly more costly PFE gate construction. These protocols demonstrate the feasibility of outsourced PFE and provide a first step towards developing privacy-preserving applications for use in Cloud computing.
Keywords: private function evaluation; garbled circuits; server-assisted cryptography.
An Ensemble Algorithm for Discovery of Malicious Web Pages
by Hedieh Sajedi
Abstract: The Internet has become part of our daily lives, and its important role is widely acknowledged; it is equally necessary to understand how it can be misused. Identity theft, brand reputation damage and loss of customer confidence in e-commerce and online banking are examples of the damage misuse can cause. In this paper, we propose an ensemble learning algorithm for the discovery of malicious web pages. The goal is to give more learning opportunity to the data instances that are misclassified by previous classifiers. To this end, we employ a Genetic Algorithm (GA) to improve classification accuracy: a weight is assigned to each weak classifier, and the GA chooses the best set of committee members among the weak classifiers to form an optimal ensemble. Experimental results demonstrate that the algorithm improves classification accuracy.
Keywords: Genetic Algorithms; Malicious Web Pages; Evolutionary Learning; Ensemble Learning.
PrivacyContext: Identifying Malicious Mobile Privacy Leak Using Program Context
by Xiaolei Wang, Yuexiang Yang
Abstract: Serious concerns have been raised about user privacy leaks in mobile apps, and many detection approaches have been proposed. To evade detection, new mobile malware has begun to mimic the privacy-related behaviors of benign apps and to mix malicious privacy leaks with benign ones to reduce the chance of being observed. Since prior approaches focus primarily on discovering privacy disclosures, these evasive techniques make it difficult to differentiate between malicious and benign privacy disclosures during privacy leak analysis.
In this paper, we propose PrivacyContext, which identifies malicious privacy leaks using program context. PrivacyContext can be used to purify privacy leak detection results for automatic and easy interpretation by filtering out benign privacy disclosures. Experiments show that PrivacyContext performs an effective and efficient enhancement of static privacy disclosure analysis and identifies malicious privacy leaks with a 92.73% true positive rate. The evaluation also indicates that all of the proposed contexts are necessary to maintain the accuracy of privacy disclosure classification.
Keywords: Privacy Leak; Context; Activation Event; Dependent Operation; Sources; Sinks.
Special Issue on: Recent Trends in Security of Mobile Cloud Computing and the Internet of Things
Detection of Phishing attacks in financial and e-banking Websites Using Link and Visual Similarity Relation
by Ankit Jain, Brij Gupta
Abstract: Phishing is one of the major problems faced by the cyber-world and can lead to financial losses for both industries and individuals. In this paper, we present a system that detects phishing attacks in financial and e-banking websites using link and visual similarity relations. Our system analyses the keywords, hyperlinks and CSS layout of a webpage, since many links on a phishing page point to the corresponding legitimate page and the phisher always tries to mimic the visual design of that page to steal confidential information. The proposed system builds the set of all associated domains and explores the link and similarity relations. In addition, we use login-form detection and whitelist-based filtering to improve the running time and reduce the false positive rate. Our system is able to detect not only a phishing page but also its source page, and it requires no prior training to detect zero-hour phishing attacks. Experiments conducted over 6,616 phishing and legitimate sites show that the proposed system achieves approximately a 99.72% true positive rate and less than a 1.89% false positive rate.
Keywords: Phishing; Anti-phishing system; TF-IDF; Hyperlinks; DOM Tree; Webpage; Cascading Style Sheet (CSS).
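The link-analysis idea above — that the hyperlinks of a phishing page often point back to the legitimate brand's domain rather than to the page's own domain — can be sketched with standard-library tools. This is an illustrative fragment, not the authors' implementation; the function name and the interpretation of the ratio are hypothetical.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collect the href targets of all anchor tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def foreign_link_ratio(html, page_domain):
    """Fraction of absolute hyperlinks pointing outside the page's own domain.

    A high ratio is one (illustrative) phishing signal: a page mimicking a
    bank often links back to the real bank's domain instead of itself.
    """
    parser = LinkCollector()
    parser.feed(html)
    absolute = [l for l in parser.links if urlparse(l).netloc]
    if not absolute:
        return 0.0
    foreign = [l for l in absolute if urlparse(l).netloc != page_domain]
    return len(foreign) / len(absolute)
```

In a full system such a ratio would be combined with the keyword (TF-IDF) and CSS-layout features the abstract describes before any classification decision is made.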
Special Issue on: Security and Privacy for Massive Cloud Data Storage
Reconfigurable design and implementation of nonlinear Boolean function for cloud computing security platform
by Su Yang
Abstract: Nonlinear Boolean functions play a pivotal role in stream cipher algorithms and cloud computing security platforms. Based on an analysis of multiple algorithms, this paper proposes a hardware structure for reconfigurable nonlinear Boolean functions. The structure can realize arbitrary nonlinear Boolean functions used in stream cipher algorithms with up to 80 variables and AND terms. The architecture has been verified on an FPGA platform and synthesized under 0.18 μm CMOS technology, reaching a clock frequency of 248.7 MHz. The results show that the design can implement most of the published nonlinear Boolean functions used in stream ciphers and that, compared with other designs, the structure achieves relatively high flexibility and has a clear advantage in circuit area and processing speed.
Keywords: nonlinear Boolean function; reconfigurable; cloud computing; security platform.
Network Optimization for Improving Security and Safety Level of Dangerous Goods Transportation Based on Cloud Computing
by Haixing Wang, Guiping Xiao, Zhen Wei
Abstract: Network Optimization for Improving the Security and Safety Level of Dangerous Goods Transportation (NOISSLDGT) is an NP-hard problem with strict constraints, which makes it harder to solve. NOISSLDGT is an important part of a dangerous goods logistics security monitoring system. Cloud storage is one of the core technologies of such a system, ensuring system security and stability through data backup and disaster recovery. To deal with NOISSLDGT, an improved risk analysis combining its features and factors is devised, and an improved risk model is designed to balance security against the cost of the route. On this basis, a network optimization model minimizing the total cost is established, subject to the network capacity and maximum risk limits. The elements and objectives of the flow distribution process are analyzed, and a corresponding optimization model is put forward that treats the selection process as a multi-objective decision-making problem. The problem is first solved with LINGO. Furthermore, cloud computing technology is introduced and task scheduling in the cloud computing environment is analysed. A Cloud Computing Security Architecture, covering physical security, web services security, database security and platform security, is presented and provides a safe cloud computing environment for NOISSLDGT. Based on cloud computing task scheduling, a detailed design of a simulated annealing algorithm (SAA) is presented. An example is analyzed to demonstrate that the improved algorithms are efficient and feasible for solving NOISSLDGT.
Keywords: LINGO; Simulated annealing Algorithm (SAA); Improving Security and Safety Level of Dangerous Goods Transportation; Cloud Computing.
Proofs of Retrievability from Linearly Homomorphic Structure-Preserving Signatures
by Xiao Zhang, Shengli Liu, Shuai Han
Abstract: Proofs of Retrievability (PoR) enable clients to outsource huge amounts of data to cloud servers, and provide an efficient audit protocol that can be employed to check that all the data is being maintained properly and can be retrieved from the server. In this paper, we present a generic construction of PoR from Linearly Homomorphic Structure-Preserving Signatures (LHSPS), which makes public verification possible. The authenticity and retrievability of our PoR scheme are guaranteed by the unforgeability of the LHSPS. We further extend our result to Dynamic PoR, which supports dynamic updates of the outsourced data. Our construction is free of complicated data structures such as the Merkle hash tree. With an instantiation of a recent LHSPS scheme proposed by Kiltz and Wee (EuroCrypt 2015), we derive a publicly verifiable (dynamic) PoR scheme whose security is based on standard assumptions and proved in the standard model.
Keywords: Cloud Storage; Cloud Security; Data Outsourcing; Data Integrity; Proofs of Retrievability; Digital Signatures; Linearly Homomorphic Structure-Preserving Signature; Dynamic Update.
Public Key Encryption with Conjunctive and Disjunctive Keyword Search for Cloud Storage
by Siyu Xiao, Aijun Ge, Jie Zhang, Chuangui Ma
Abstract: Public key encryption with keyword search (PEKS) enables one to retrieve encrypted data stored on an untrusted server without revealing its contents. Beyond single-keyword search, more and more attention has been paid to the problem of multi-keyword search; however, existing schemes are mainly based on composite-order bilinear groups. In this paper, we propose a public key encryption with conjunctive and disjunctive keyword search (PECDK) scheme for cloud storage that simultaneously supports conjunction and disjunction within each keyword field. It is based on prime-order bilinear groups and can be proved fully secure in the standard model.
Keywords: Cloud Storage; Searchable Encryption; PECDK; Inner Product Encryption; Dual Pairing Vector Space.
A Study of the Internet Financial Interest Rate Risk Evaluation Index System in Cloud Computing
by Mu Shengdong, Tian Yi-xiang
Abstract: Cloud computing is a product of computer technologies combined with network technologies and has been widely applied in China. Experts and scholars in many fields have begun to study cloud computing infrastructure construction and effective resource utilization. With the improvement of cloud computing technology (especially security technology), Internet finance will be widely deployed and will develop rapidly. ITFIN (Internet finance) is the result of finance comprehensively combined with network technology; it is a new financial ecology fermenting in the Internet era. ITFIN integrates online transaction data generated in various social networks, studies and judges the credit standing of customers, and completes credit consumption, loans and other borrowing behavior through e-payment. With ITFIN, people can enjoy financial services for various everyday needs. However, one person can assume many identities on the network. This phenomenon poses a severe challenge to ITFIN network security and has greatly intensified its risks, including operational risk, market selection risk, and network and information security risk. These risks can be addressed by establishing a reliable, reasonable and effective risk assessment model. We conducted theoretical and empirical analysis and constructed an assessment model for China's ITFIN risk that integrates rough set theory and PSO-SVM (particle swarm optimization support vector machine). Finally, the model was used to assess ITFIN risk in China. The empirical results indicate that the model can effectively reduce redundant data with rough set theory, which also guarantees a reliable, reasonable and scientific model and enhances its classification performance. Optimizing the parameters of the SVM model with PSO effectively avoids local optima and further improves the classification performance. Overall, the model has good generalization and learning ability.
Keywords: Cloud Computing; ITFIN; Risk Assessment; Rough Set; PSO; SVM.
Automatic Verification of Security of Identity Federation Security Protocol with ProVerif in the Cloud Security Platforms
by Jintian Lu, Jinli Zhang, Yitong Yang, Bo Meng, Xu An Wang
Abstract: In recent years, several Identity Federation security protocols have been introduced and deployed by Software-as-a-Service vendors in their cloud platforms to protect cloud applications. Hence, Identity Federation has been playing an increasingly important role in cloud security. Owing to its complexity, assessing its security is a hot issue. In this study, we first review the development of formal methods for the SAML-based Identity Federation Security Protocol. Then the SAML-based Identity Federation Security Protocol is modelled in a formal language, the Applied Pi calculus, and the model is translated into the input of ProVerif. Finally, we apply the automatic formal model proposed by Blanchet to analyze its security properties. The result shows that the protocol does not provide secrecy and satisfies only some of its authentication properties. We also present a solution to these security problems to protect the security of cloud platforms and applications.
Keywords: security protocol; formal method; authentication; Applied Pi calculus.
Locality-aware and Energy-aware MapReduce Multiple Jobs Scheduling in Heterogeneous Datacenter
by Lei Chen, Jing Zhang, Lijun Cai
Abstract: Map-Reduce scheduling in heterogeneous datacenters has attracted more and more attention, and faces new challenges regarding energy consumption, execution time, and job cost. To better balance job scheduling performance among job cost, execution time and energy consumption, a locality-aware and energy-aware Map-Reduce multiple-job scheduling algorithm for heterogeneous datacenters is proposed in this paper. Firstly, the importance of the rack for data locality and energy saving is analyzed. Secondly, a capacity pre-judgement method is developed to measure the ideal capacity of a rack for different jobs, where energy efficiency is defined to measure how well rack usage balances job cost, execution time and energy consumption during job scheduling. Thirdly, based on the pre-judged ideal capacity of the racks, a multiple-job pre-assignment method is proposed to adjust the job execution order, improving resource utilization and avoiding the resource waste of the traditional first-come-first-served scheduling model. Using this method, each job is centrally assigned to the virtual machines of several booked racks to save energy and reduce data communication. Finally, after the pre-assignment stage, the tasks of each job are split into task groups, where multiple Map tasks and one Reduce task are merged into a task group and given the same label. A parallel task execution strategy then ensures that each virtual machine processes all tasks of multiple task groups, enhancing data locality and decreasing data communication. Extensive experimental comparisons with three other algorithms show that our algorithm performs well on job execution time, cross-rack traffic, and energy consumption in heterogeneous datacenters.
Keywords: energy-aware; locality-aware; Map-Reduce; heterogeneous; datacenter.
Novel Implementation of Defense Strategy of Relay Attack based on Cloud in RFID systems
by He Xu
Abstract: Radio Frequency Identification (RFID) technology is widely used in identity authentication and payment, and has become an indispensable part of daily life. Cloud-based RFID systems have broad application prospects and can be provided as a service to individuals or organizations. For example, RFID cards can be used for cash-less payment, physical access control, temporary rights and identification in a cloud environment. When an RFID card is used, there is a wireless transaction between the card and its reader, which can be attacked in several ways, including by a relay attack. Relay attacks are difficult to prevent completely and are a serious threat to the security of RFID systems: an attacker can mount such an attack with limited resources and little knowledge of the underlying protocol. In recent years, researchers have proposed solutions that use second channels to resist relay attacks, such as environmental measurements including noise, light and temperature. This paper describes research on defense techniques against relay attacks in cloud-based RFID systems, whose architecture typically consists of RFID tags, card readers (fixed or mobile) and cloud-based server functionality.
Keywords: relay attack; RFID systems; Internet of Things; NFC.
Special Issue on: Advanced Techniques in Multimedia Watermarking
A Robust Reversible Image Watermarking Scheme in DCT domain using Arnold Scrambling and Histogram Modification
by Soumitra Roy, Arup Kumar Pal
Abstract: Among the various watermarking schemes, reversible watermarking has drawn extensive attention in recent years for its application to sensitive content such as medical, military and law-enforcement images. Cover-image-dependent embedding capacity and lack of robustness are the most crucial concerns of reversible watermarking methods. To overcome these issues, a robust reversible image watermarking scheme based on DCT (Discrete Cosine Transform), histogram shifting and Arnold scrambling is presented in this paper. Initially, the image is decomposed into non-overlapping blocks. Next, the DCT is applied to each block and a binary watermark bit is embedded into each transformed block by modifying one pair of middle significant AC coefficients; a location map is generated at the same time for cover image restoration. This location map is then embedded in the cover image using a histogram modification technique. On the extracting side, the location map is first recovered from the image using the histogram modification method; the watermark is then extracted, and the cover image is restored using the location map. The proposed scheme has also been tested against several image processing attacks to verify its robustness, and satisfactory results are achieved.
Keywords: Arnold scrambling; DCT; Histogram modification; Reversible watermarking; Robustness.
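The embedding step described above — encoding one bit per block by modifying a pair of middle significant AC coefficients — can be sketched generically. This is an illustration of the coefficient-pair technique, not the authors' exact scheme; the coefficient positions `(3, 1)`/`(1, 3)` and the `margin` are illustrative choices.

```python
import numpy as np

N = 8

def dct_matrix(n=N):
    """Orthonormal DCT-II transform matrix, so that D = C @ B @ C.T."""
    C = np.zeros((n, n))
    for k in range(n):
        for j in range(n):
            C[k, j] = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

C = dct_matrix()

def embed_bit(block, bit, p1=(3, 1), p2=(1, 3), margin=5.0):
    """Embed one bit by enforcing an order on two mid-band AC coefficients."""
    D = C @ block @ C.T
    hi, lo = max(D[p1], D[p2]), min(D[p1], D[p2])
    if bit == 1:
        D[p1], D[p2] = hi + margin, lo    # bit 1: D[p1] > D[p2]
    else:
        D[p1], D[p2] = lo, hi + margin    # bit 0: D[p2] > D[p1]
    return C.T @ D @ C                    # inverse DCT back to pixels

def extract_bit(block, p1=(3, 1), p2=(1, 3)):
    """Recover the bit from the relative order of the coefficient pair."""
    D = C @ block @ C.T
    return 1 if D[p1] > D[p2] else 0
```

The `margin` trades visual quality against robustness: a larger gap survives stronger attacks (e.g. JPEG quantization of the block) but perturbs the block more.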
A Non-Linear Two Dimensional Logistic-Tent Map for Secure Image Communication
by Sujarani Rajendran, Manivannan Doraipandian
Abstract: With recent technological developments, images play a vital role in applications such as social networks, biometrics, and the medical, military and satellite fields. It is essential to protect these images from intruders during transmission over insecure networks. This paper proposes a new chaotic map for image cryptosystems that combines the tent map and the 2D logistic map. The proposed 2D Logistic-Tent map (2DLT) generates two chaotic series, which are used to perform the confusion and diffusion phases of the image cryptosystem. A comparison between the existing standard 2D logistic map and the proposed 2D logistic-tent map shows that the proposed map produces more random chaotic series. To evaluate the strength of the proposed image cryptosystem, it was subjected to different analyses, including differential analysis, key size and sensitivity analysis, and chosen plaintext and ciphertext attack analyses. All results show that the proposed cipher offers a good security level and can be used in various secure image communication applications.
Keywords: Cryptography; Image Security; Chaos Theory; Chaotic Map; Logistic Map; Tent Map; Confusion; Diffusion; Chaotic Series; Differential Analysis; Cipher Image Attack Analysis.
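The abstract does not give the 2DLT equations, so the following is only an illustrative sketch of the general pattern — a logistic update of each state perturbed by a tent update of the companion state — producing two chaotic byte series of the kind used for confusion and diffusion. The actual map of the paper will differ.

```python
def logistic_tent_2d(x, y, r=3.99, mu=1.99):
    """One iteration of an *illustrative* 2D logistic-tent coupling.

    Both states are kept in [0, 1) by the mod-1 wrap; the coupling term
    (the tent value of the other variable, scaled down) is a hypothetical
    choice, not the paper's definition.
    """
    tent_y = mu * y if y < 0.5 else mu * (1.0 - y)
    tent_x = mu * x if x < 0.5 else mu * (1.0 - x)
    x_new = (r * x * (1.0 - x) + tent_y / 4.0) % 1.0
    y_new = (r * y * (1.0 - y) + tent_x / 4.0) % 1.0
    return x_new, y_new

def keystream(x0, y0, n, burn_in=100):
    """Generate two chaotic byte sequences (for confusion and diffusion)."""
    x, y = x0, y0
    for _ in range(burn_in):              # discard transient iterations
        x, y = logistic_tent_2d(x, y)
    xs, ys = [], []
    for _ in range(n):
        x, y = logistic_tent_2d(x, y)
        xs.append(int(x * 256) % 256)
        ys.append(int(y * 256) % 256)
    return xs, ys
```

In an image cipher, one stream would typically drive a pixel permutation (confusion) while the other is XORed with pixel values (diffusion); the key sensitivity that the abstract's analyses test corresponds to tiny changes in `x0`, `y0` producing unrelated streams.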
Robust Injection Point-Based Framework for Modern Applications against XSS Vulnerabilities in Online Social Networks (OSNs)
by Shashank Gupta, Brij Gupta
Keywords: Injection Points; Script Injection Vulnerabilities; Cross-Site Scripting (XSS) Attack.
3D Reconstruction of Human Face from an input Image under Random Lighting Condition
by Yujuan Sun
Abstract: Three-dimensional reconstruction from a single input image is quite difficult due to the many unknown parameters, such as the lighting conditions and the surface normals and albedo of the object. However, different human faces share overall characteristics: the shapes and positions of the eyes, nose, mouth and ears are generally similar. This paper exploits these shared characteristics to relax the number of required input face images and reconstructs the 3D shape based on a coupled statistical model. Moreover, the lighting condition of the single input image may differ from that of the training database. Experimental results show the effectiveness of the proposed method.
Keywords: three-dimensional reconstruction; Coupled statistical Model; Human face.
The Research of Reputation Incentive Mechanism of P2P Network File Sharing System
by Shaojing Li, Wanli Su
Abstract: In the digital information age, data sharing and security are important research topics, and data sharing and information security technologies have developed rapidly. A reputation incentive mechanism based on the interests of nodes is important in P2P file sharing systems: it can reduce transaction risk, improve the transaction success rate and maintain the sound development of the network. In addition, this paper studies two typical security problems (the naive attack and the sybil attack) to minimize their damage to the network. Simulation and analysis of the success rates of resource location and transactions show that the reputation incentive mechanism is correct, feasible and effective. Furthermore, it offers significant improvements in security and simplicity.
Keywords: P2P Network; File Sharing; Data Security; Reputation Mechanism.
Panoramic Image Mosaics Via Distributed Systems Using Color Moments and Local Wavelet-features
by Feng Guo, Ying Wang
Abstract: Panoramas are widely used in virtual reality and games. This paper proposes an efficient method for panoramic image mosaicing that fuses color moments and local wavelet features. Firstly, color moments are used to extract key features of the panoramic image mosaic, representing physical quantities of the objects in the input image. Then, a wavelet transform is used to extract the macro characteristics of the input image. Finally, the color moments and the wavelet sub-band statistics are combined to construct the feature vectors for image-patch representation. On a distributed system over a local area network, the proposed mosaicing method achieves a processing speed of 295 FPS. Experimental results verify the effectiveness and satisfactory performance of the proposed method.
Keywords: Panoramic images; local features; wavelet feature; color moments.
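Color moments, as used for the features above, are commonly taken to be the first three statistical moments of each color channel, giving a compact 9-dimensional descriptor. A minimal sketch (the authors' exact feature definition may differ):

```python
import numpy as np

def color_moments(image):
    """First three color moments per channel: mean, standard deviation,
    and the signed cube root of the third central moment (skewness),
    yielding a 9-dimensional feature for an H x W x 3 image.

    The channel interpretation (RGB, HSV, ...) is up to the caller.
    """
    feats = []
    for c in range(image.shape[2]):
        chan = image[..., c].astype(float).ravel()
        mean = chan.mean()
        std = chan.std()
        third = ((chan - mean) ** 3).mean()
        skew = np.cbrt(third)   # cube root keeps the pixel-value scale
        feats.extend([mean, std, skew])
    return np.array(feats)
```

Such moment features are cheap to compute per image patch, which is consistent with the real-time mosaicing speed reported in the abstract.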
Node Authentication Algorithm for Securing Static Wireless Sensor Networks from Node Clone Attack
by Vandana Mohindru, Yashwant Singh
Abstract: Wireless Sensor Networks (WSNs) consist of small sensor nodes with limited sensing, processing, communication, and storage capabilities. These sensor nodes are vulnerable to the node clone attack, in which the attacker compromises a node, extracts its secret information, and replicates large numbers of clones of the captured node throughout the sensor network. Providing security in such networks is therefore of utmost importance. The main challenge is to make the security solution energy efficient so that it is feasible to implement on such resource-constrained nodes. In this paper, an energy-efficient node authentication algorithm is proposed. Its aim is to authenticate sensor nodes before message communication within the WSN so that cloned nodes are identified at the initial step of communication. The algorithm uses encryption and decryption operations as well as XOR, extraction, and bitwise shift operations. The performance of the proposed algorithm is analyzed in terms of communication, storage, and computation overhead metrics, and is compared with other node authentication algorithms.
Keywords: Wireless sensor network; Security; Encryption; Authentication; Node clone attack; Network security; Attacks; Message communication; Cryptography; Energy efficient.
Reversible Data Hiding in Absolute Moment Block Truncation Coding Compressed Images Using Adaptive Multilevel Histogram Shifting Technique
by Amita, Amandeep Kaur, Marut Kumar
Abstract: Due to advances in communication technology, confidential and private data are routinely transmitted over networks, so information security is one of the most critical factors to consider when secret data is exchanged between two parties. Another important issue is bandwidth utilization. Image steganography is a widely used technique for data hiding and is employed in critical applications such as military and medical areas. Most existing work operates on uncompressed images, which leads to high storage requirements and large transmission bandwidth. Keeping these two factors in mind, this paper presents a multilevel histogram shifting technique in the compressed domain, with an adaptive block division scheme to improve the embedding capacity and reduce bandwidth utilization. Absolute Moment Block Truncation Coding (AMBTC) is used for compression because of its good compression ratio.
Keywords: Reversible Data Hiding; Stego Image; Embedding Capacity; Secret Data; Image Compression; Absolute Moment Block Truncation Coding; Histogram Shifting.
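The AMBTC compression referred to above is a standard scheme: each grayscale block is reduced to a bitmap plus two quantization levels, the mean of the pixels below the block mean and the mean of the pixels at or above it. A minimal sketch of the codec (the data-hiding layer of the paper is not shown):

```python
import numpy as np

def ambtc_compress(block):
    """Absolute Moment Block Truncation Coding of one grayscale block.

    Returns a boolean bitmap (pixel >= block mean) and the low/high
    quantization levels that preserve the block's absolute moments.
    """
    block = np.asarray(block, dtype=float)
    m = block.mean()
    bitmap = block >= m
    high = block[bitmap].mean()
    low = block[~bitmap].mean() if (~bitmap).any() else high
    return bitmap, low, high

def ambtc_decompress(bitmap, low, high):
    """Rebuild the block: high where the bitmap is set, low elsewhere."""
    return np.where(bitmap, high, low)
```

Because each block is represented by only two levels and one bit per pixel, the (low, high) pairs form the compact domain in which the paper's multilevel histogram shifting then embeds the secret data.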
Improved Pixel Relevance based on Mahalanobis Distance for Image Segmentation
by Lihua Song
Abstract: Image segmentation partitions a given image into different regions; in essence, it clusters the pixels into groups according to extracted features. However, artifacts in the image contaminate these features, degrading the performance of current segmentation algorithms, so reducing the effect of image artifacts is a hot topic in image processing. Current algorithms use neighborhood information to resist image artifacts, but they still perform poorly when the image is contaminated with high-level noise. Recently, non-local information has been introduced to improve segmentation quality, for which the relevance between pixels is crucial. In this paper, pixel relevance is measured using the Mahalanobis distance; specifically, we consider the distribution of different samples and the relevance interference between samples when computing pixel relevance. A new algorithm based on this pixel relevance is then proposed, in which non-local information is incorporated into fuzzy clustering for image segmentation, greatly improving the robustness of the corresponding algorithms. Experiments on different noisy images show that the proposed algorithm produces better results than conventional algorithms.
Keywords: Image segmentation; Pixel relevance; Non-local information; Mahalanobis distance.
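The Mahalanobis distance underlying the proposed pixel relevance is standard: it weights feature differences by the inverse covariance of the samples, unlike the plain Euclidean distance. A minimal sketch of turning it into a relevance weight — the Gaussian weighting is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

def mahalanobis(u, v, cov):
    """Mahalanobis distance sqrt((u-v)^T cov^{-1} (u-v)) between two
    pixel feature vectors, given the sample covariance matrix."""
    diff = np.asarray(u, float) - np.asarray(v, float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def pixel_relevance(u, v, cov, sigma=1.0):
    """Map the distance to a (0, 1] relevance weight: identical
    features give 1, dissimilar ones decay toward 0."""
    d = mahalanobis(u, v, cov)
    return float(np.exp(-d ** 2 / (2 * sigma ** 2)))
```

With the identity covariance this reduces to Euclidean-based relevance; estimating `cov` from the actual feature distribution is what lets the measure account for correlated, differently-scaled features, as the abstract emphasizes.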
WeChat Traffic Classification Using Machine Learning Algorithms & Comparative Analysis of Datasets
by Muhammad Shafiq, Xiangzhan Yu, Asif Ali Laghari
Abstract: Identifying network traffic accurately is very important for both network operators and internet service providers (ISPs) in managing Quality of Service (QoS). Traffic classification has received much attention in computer networking in recent years, and many researchers have worked to propose effective machine learning models to identify and classify the network traffic of online applications. However, one important application has not yet been considered, nor has it been studied whether there is an essential difference between large and small instances of a dataset. In this paper, we present the first study to classify WeChat application service flow traffic (text messages, picture messages, audio calls and video calls), to investigate the effect of dataset size, and to identify the most effective of six machine learning classifiers. We first capture WeChat traffic in two different network environments and extract 44 features from the captured traffic. We then combine the captured traffic into a full-instance dataset and derive a reduced-instance dataset from it to compare large and small datasets. We perform training-and-testing classification using six well-known machine learning classifiers and apply the Wilcoxon statistical test to the datasets and classifiers for a deeper analysis of effectiveness. Experimental results show that the reduced-instance dataset yields higher accuracy than the full-instance dataset, and that the C4.5 decision tree classifier performs very well compared with the other machine learning classifiers.
Keywords: WeChat Traffic Classification; Machine Learning; Audio and Video Call; Text and Picture Messages; Comparison.
Physiological Trait Based Biometrical Authentication of Human-Face Using LGXP and ANN Techniques
by Rohit Raja, Tilendra Shishir Sinha, Raj Kumar Patra, Shrikant Tiwari
Abstract: It has been found from the literature that, in most cases, only front views of human-face images are used for the authentication of human beings; very little work has been carried out using side views and temporal views of the human face. This matters because people today commonly photograph themselves in many different poses, often from the side. Hence, the present paper focuses on the authentication process using recent methods in the field. The main objective is to handle the variability in human-face appearance due to changes in viewing direction. Pose, illumination conditions, and expression are considered the three main parameters processed in the overall authentication. An extensive feature set, including texture, contrast, correlation and shape, is extracted by employing a modified region growing algorithm, with texture features obtained using the Local Gabor XOR Pattern (LGXP) and an Artificial Neural Network (ANN) technique. The present work has been analysed using data from subjects of varying ages.
Keywords: Local Gabor XOR Pattern (LGXP); Modified region growing algorithm; artificial neural network; false matching rate; false non-matching rate; genuine acceptance rate.
Special Issue on: Cyber Attacks in Cloud Computing Security, Privacy, and Forensics Issues
MONCrypt: A Technique to Ensure the Confidentiality of Outsourced Data in Cloud Storage
by Manikandasaran S S, Arockiam L, Sheba Kezia Malarchelvi P.D
Abstract: Data management is a monotonous task for Small and Medium Scale Enterprises (SMEs). Cloud storage provides enormous virtual storage space for cloud users' data, and outsourcing data helps SMEs avoid the burden of managing it on their own premises, so many SMEs are attracted to outsourcing their data to the cloud. Once the data are outsourced, however, they are held by third-party cloud storage providers, who control and monitor them; the users have no means to control and monitor their own data in cloud storage. This creates a data security issue for outsourced data: if anything goes wrong with the data, users suspect the cloud storage providers. Ensuring the confidentiality of outsourced data therefore plays a vital role in cloud security. To this end, this paper proposes a technique called MONcrypt, based on obfuscation. Obfuscation masks the original text as irrelevant text without using any key, unlike encryption; MONcrypt uses a key only for de-obfuscation. This novel obfuscation technique is used to ensure the confidentiality of outsourced data in cloud storage. The paper compares the proposed technique with existing techniques such as Base32, Base64, hexadecimal encoding, DES, 3DES and Blowfish, and the proposed technique shows better performance and security.
Keywords: Data Outsourcing; Confidentiality; Cloud Storage; Obfuscation; Security.
Special Issue on: Recent Trends in Security of Information and Communication Technology
A Robust and Blind Image Watermarking Scheme in DCT Domain
by Arup Kumar Pal, Soumitra Roy
Abstract: In this paper, the authors present a robust and blind watermarking scheme based on the Discrete Cosine Transform (DCT) for protecting the copyright ownership of digital images. Initially, the image is decomposed into non-overlapping blocks, and the DCT is applied to each block. A binary watermark bit is embedded into each transformed block by modifying some middle significant AC coefficients using a repetition code. During embedding, the DC coefficient and some higher AC coefficients (in zigzag order) of each DCT block are kept intact to preserve the high visual quality of the watermarked image. The scheme can protect the copyright information even in compressed forms of the watermarked image, since it embeds the watermark bits in the middle DCT frequency bands, and compression generally filters the high-frequency bands. The scheme is tested on standard images, and simulation results show that the embedding procedure has little impact on the visual quality of the watermarked images. The scheme is also tested for robustness against several image processing attacks, such as image enhancement, noising, cropping, sharpening, JPEG compression, and geometric operations like rotation, with satisfactory results.
Keywords: Blind Watermarking; Discrete Cosine Transform; Digital Image Watermarking; Robust Watermarking; Repetition code.