UTD CS 4398 - Secure Cloud Computing and Cloud Forensics


Dr. Bhavani Thuraisingham
The University of Texas at Dallas (UTD)
October 2010

Secure Cloud Computing and Cloud Forensics

Outline
• Cloud Computing: NIST Definition
• Security Challenges for Clouds
• Layered Framework
• Secure Query Processing with Hadoop/MapReduce
• Principles of Secure Query Optimization
• Fine-grained Access Control with Hive
• SPARQL Query Optimizer for Secure RDF Data Processing
• Security for AMAZON S3
• XACML Design Implementation in Hadoop
• Secure VMM: Xen Architecture
• Virtual Machines
• Security Issues
• Data Collection Approaches
• Detection at the victim node (e.g., gatekeeper, head node)
• Cloud Forensics (Keyun Ruan, University College Dublin)
• Current and Future Research
• Education Program

Cloud Computing: NIST Definition
• Cloud computing is a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service-provider interaction. This cloud model promotes availability and comprises five key characteristics, three delivery models, and four deployment models.
• Key characteristics: on-demand self-service, location-independent resource pooling, rapid elasticity, pay per use.
• Delivery models: Cloud Software as a Service (SaaS), Cloud Platform as a Service (PaaS), Cloud Infrastructure as a Service (IaaS).
• Deployment models: private cloud, community cloud, public cloud, hybrid cloud.
• Our goal is to demonstrate policy-based assured information sharing on clouds.

Security Challenges for Clouds
• Policy
  – Access control and accountability
• Data security and privacy issues
  – Third-party publication of data; security challenges associated with data outsourcing
  – Data at the different sites have to be protected, with the end results made available; querying encrypted data
  – Secure query processing/updates in the cloud
• Secure storage
• Security related to virtualization
• Cloud monitoring
• Protocol and network security for clouds
• Identity management
• Cloud forensics

Layered Framework
[Figure 2: Layered Framework for Assured Cloud. Layers: Application (Law Enforcement); Hadoop/MapReduce/Storage; HIVE/SPARQL/Query; XEN/Linux/VMM; Secure Virtual Network Monitor. Cross-cutting elements: Policies (XACML), Risks/Costs, QoS, Resource Allocation, Cloud Monitors.]
• Approach: study the problem with current principles and technologies, then develop principles for secure cloud computing.

Secure Query Processing with Hadoop/MapReduce
• We have studied clouds based on Hadoop.
• Query rewriting and optimization principles defined and implemented for two types of data:
  – (i) Relational data: secure query processing with HIVE
  – (ii) RDF data: secure query processing with SPARQL
• Demonstrated with XACML policies (content, temporal, association).
• Joint demonstration with Kings College and the University of Insubria:
  – First demo (2010): each party submits their data and policies; our cloud manages the data and policies
  – Second demo (2011): multiple clouds

Principles of Secure Query Optimization
• Query optimization principles defined and strategies implemented in the 1970s and 1980s for relational data (IBM System R/DB2, Ingres):
  – query rewriting, query evaluation procedures, search strategy, cost functions
• Secure query optimization principles defined and strategies implemented in the 1980s and 1990s (Honeywell, MITRE).
• Extended secure query optimization to the cloud environment:
  – query optimization for RDF data
  – secure query optimization for RDF data
  – secure query optimization for RDF data in a cloud environment

Fine-grained Access Control with Hive
• Hive is a data warehouse infrastructure built on top of Hadoop that provides tools for easy data summarization, ad hoc querying, and analysis of large datasets stored in Hadoop files. It provides a mechanism to put structure on this data, along with a simple query language called Hive QL, based on SQL, that enables users familiar with SQL to query the data.
• Policies include content-dependent access control, association-based access control, and time-dependent access control (a sketch of policy-driven query rewriting follows below).
• Table/view definition and loading: users can create tables and load data into them; they can also upload XACML policies for the tables they are creating, and can create XACML policies for tables/views.
• Users can define views only if they have permissions for all tables specified in the query used to create the view. They can also either specify or create XACML policies for the views they are defining.
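The preview does not show the actual schemas or rewrite rules, so the following is only a minimal sketch of how a content-dependent policy could be enforced by query rewriting in Hive QL; the table, columns, and policy predicate are all hypothetical.

```sql
-- Hypothetical Hive table an analyst creates and loads (Hive QL).
CREATE TABLE incident (
  id             INT,
  region         STRING,
  classification STRING,
  details        STRING
);
LOAD DATA INPATH '/data/incident.csv' INTO TABLE incident;

-- Suppose a content-dependent policy lets this user read only
-- UNCLASSIFIED rows. A policy-aware rewriter could take the query
--   SELECT id, region, details FROM incident;
-- and inject the policy predicate before execution:
SELECT id, region, details
FROM incident
WHERE classification = 'UNCLASSIFIED';

-- The same restriction can be packaged as a view; per the slides, a
-- user may define a view only with permissions on every table that
-- the view's defining query references.
CREATE VIEW incident_unclassified AS
SELECT id, region, details
FROM incident
WHERE classification = 'UNCLASSIFIED';
```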
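The XACML policies themselves are not reproduced in the preview. Purely as a hedged illustration, a table-level read rule in XACML 2.0 style might look like the sketch below; the urn:example:* attribute IDs and all values are invented for this example.

```xml
<!-- Hedged sketch only: urn:example:* attribute IDs are invented. -->
<Policy xmlns="urn:oasis:names:tc:xacml:2.0:policy:schema:os"
        PolicyId="incident-table-read"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:deny-overrides">
  <Target/>
  <Rule RuleId="permit-analyst-read" Effect="Permit">
    <Target>
      <Subjects><Subject>
        <SubjectMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">analyst</AttributeValue>
          <SubjectAttributeDesignator AttributeId="urn:example:role"
              DataType="http://www.w3.org/2001/XMLSchema#string"/>
        </SubjectMatch>
      </Subject></Subjects>
      <Resources><Resource>
        <ResourceMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">incident</AttributeValue>
          <ResourceAttributeDesignator AttributeId="urn:example:hive-table"
              DataType="http://www.w3.org/2001/XMLSchema#string"/>
        </ResourceMatch>
      </Resource></Resources>
      <Actions><Action>
        <ActionMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">read</AttributeValue>
          <ActionAttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
              DataType="http://www.w3.org/2001/XMLSchema#string"/>
        </ActionMatch>
      </Action></Actions>
    </Target>
  </Rule>
  <!-- Anything not explicitly permitted above is denied. -->
  <Rule RuleId="deny-otherwise" Effect="Deny"/>
</Policy>
```

In the system described in the slides, a policy decision point (PDP) evaluates such rules when users query tables or define views.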
System Architecture
[Figure: system architecture for fine-grained access control with Hive; only the slide title is recoverable from the preview.]

SPARQL Query Optimizer for Secure RDF Data Processing
• Developed a secure query optimizer and query rewriter for RDF data with XACML policies, implemented on top of JENA.
• Storage support:
  – Built a storage mechanism for very large RDF graphs for JENA
  – Integrated the system with Hadoop for the storage of large amounts of RDF data (e.g., a billion triples)
  – Need to incorporate secure storage strategies developed in FY09

System Architecture
[Figure: web interface over a server backend. Data path: Data Preprocessor (N-Triples Converter, Prefix Generator, Predicate-Based Splitter, Predicate-Object-Based Splitter) on the MapReduce Framework, taking in new data. Query path: Parser, Query Validator & Rewriter, XACML PDP, Query Rewriter by Policy, Plan Generator, Plan Executor, taking in a query and returning an answer.]

Security for AMAZON S3
• Many organizations are using cloud services like Amazon S3 for data storage. A few important questions arise here:
  – Can we use S3 to store the data sources used by Blackbook? Is the data we store on S3 secure? Is it accessible by any user outside our organization? How do we restrict access to files to the users within the organization?
  – BLACKBOOK is a semantic-web-based tool used by analysts within the Intelligence Community. The tool federates queries across data sources, which are databases or applications located either locally or remotely on the network. BLACKBOOK allows analysts to make logical inferences across the data sources, add their own knowledge, and share that knowledge with other analysts using the system.
• We use Amazon S3 to store the data sources used by Blackbook.
• To keep our data secure, we encrypt the data using AES (Advanced Encryption Standard) before uploading the data files to Amazon S3 (a sketch of this encrypt-then-upload pattern follows below).
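The preview truncates the deck at this point and the slides do not show the implementation, so the following is only a minimal, modern sketch of the encrypt-then-upload pattern they describe, written in Python with the cryptography and boto3 libraries; the bucket name, object key, file name, and key handling are illustrative assumptions (the original 2010 system predates these libraries).

```python
"""Minimal sketch: AES-encrypt a file client-side, then upload it to S3."""
import os

import boto3
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def encrypt_file(path: str, key: bytes) -> bytes:
    """AES-256-CBC with PKCS7 padding; the random IV is prepended."""
    iv = os.urandom(16)
    padder = padding.PKCS7(algorithms.AES.block_size).padder()
    with open(path, "rb") as f:
        padded = padder.update(f.read()) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + encryptor.update(padded) + encryptor.finalize()


def upload_encrypted(path: str, bucket: str, s3_key: str, aes_key: bytes) -> None:
    """Encrypt locally so that only ciphertext ever reaches S3."""
    boto3.client("s3").put_object(
        Bucket=bucket, Key=s3_key, Body=encrypt_file(path, aes_key)
    )


if __name__ == "__main__":
    # Illustrative only: a real system would fetch the key from a key
    # store shared by authorized users, since this same key is needed
    # to decrypt the file after download.
    aes_key = os.urandom(32)  # 256-bit AES key
    upload_encrypted("datasource.rdf", "blackbook-data", "datasource.rdf.enc", aes_key)
```

Restricting access to users within the organization would additionally rely on S3 bucket policies and ACLs; the benefit of client-side AES is that even a misconfigured or compromised bucket exposes only ciphertext.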

