CS490D: Introduction to Data Mining
Chris Clifton
January 23, 2004
Data Preparation

Contents
• Data Preprocessing
• Why Data Preprocessing?
• Why Is Data Dirty?
• Why Is Data Preprocessing Important?
• Multi-Dimensional Measure of Data Quality
• Major Tasks in Data Preprocessing
• Data Cleaning
• Missing Data
• How to Handle Missing Data?
• Noisy Data
• How to Handle Noisy Data?
• Simple Discretization Methods: Binning
• Binning Methods for Data Smoothing
• Cluster Analysis
• Regression
• Data Integration
• Handling Redundancy in Data Integration
• Data Transformation
• Data Transformation: Normalization
• Z-Score (Example)
• Data Reduction Strategies
• Data Cube Aggregation
• Dimensionality Reduction
• Example of Decision Tree Induction
• Data Compression
• Wavelet Transformation
• DWT for Image Compression
• Principal Component Analysis
• Numerosity Reduction
• Regression and Log-Linear Models
• Regression Analysis and Log-Linear Models
• Histograms
• Clustering
• Sampling
• Hierarchical Reduction
• Discretization
• Discretization and Concept Hierarchy
• Discretization and Concept Hierarchy Generation for Numeric Data
• Definition of Entropy
• Entropy-Based Discretization
• Segmentation by Natural Partitioning
• Example of 3-4-5 Rule
• Concept Hierarchy Generation for Categorical Data
• Automatic Concept Hierarchy Generation
• Summary
• References
• Data Generalization and Summarization-based Characterization
• Characterization: Data Cube Approach
• Data Cube Approach (Cont…)
• Attribute-Oriented Induction
• Basic Principles of Attribute-Oriented Induction
• Attribute-Oriented Induction: Basic Algorithm
• Class Characterization: An Example
• Presentation of Generalized Results
• Presentation—Generalized Relation
• Presentation—Crosstab
• Implementation by Cube Technology
• What Defines a Data Mining Task?
• Task-Relevant Data (Mineable View)
• Types of Knowledge to Be Mined
• Background Knowledge: Concept Hierarchies
• Measurements of Pattern Interestingness
• Visualization of Discovered Patterns
• Data Mining Languages & Standardization Efforts

Data Preprocessing
• Why preprocess the data?
• Data cleaning
• Data integration and transformation
• Data reduction
• Discretization and concept hierarchy generation
• Summary

Why Data Preprocessing?
• Data in the real world is dirty
  – incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
    • e.g., occupation=""
  – noisy: containing errors or outliers
    • e.g., Salary="-10"
  – inconsistent: containing discrepancies in codes or names
    • e.g., Age="42" but Birthday="03/07/1997"
    • e.g., was rating "1, 2, 3", now rating "A, B, C"
    • e.g., discrepancies between duplicate records

Why Is Data Dirty?
• Incomplete data comes from
  – "n/a" data values when collected
  – differing considerations between the time the data was collected and the time it is analyzed
  – human/hardware/software problems
• Noisy data comes from the process of data
  – collection
  – entry
  – transmission
• Inconsistent data comes from
  – different data sources
  – functional dependency violations

Why Is Data Preprocessing Important?
• No quality data, no quality mining results!
  – Quality decisions must be based on quality data
    • e.g., duplicate or missing data may cause incorrect or even misleading statistics
  – A data warehouse needs consistent integration of quality data
• "Data extraction, cleaning, and transformation comprises the majority of the work of building a data warehouse." —Bill Inmon

Multi-Dimensional Measure of Data Quality
• A well-accepted multidimensional view:
  – Accuracy
  – Completeness
  – Consistency
  – Timeliness
  – Believability
  – Value added
  – Interpretability
  – Accessibility
• Broad categories:
  – intrinsic, contextual, representational, and accessibility

Major Tasks in Data Preprocessing
• Data cleaning
  – Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
• Data integration
  – Integration of multiple databases, data cubes, or files
• Data transformation
  – Normalization and aggregation
• Data reduction
  – Obtains a reduced representation that is much smaller in volume yet produces the same or similar analytical results
• Data discretization
  – Part of data reduction, but of particular importance for numerical data

Data Cleaning
• Importance
  – "Data cleaning is one of the three biggest problems in data warehousing" —Ralph Kimball
  – "Data cleaning is the number one problem in data warehousing" —DCI survey
• Data cleaning tasks
  – Fill in missing values
  – Identify outliers and smooth out noisy data
  – Correct inconsistent data
  – Resolve redundancy caused by data integration

Missing Data
• Data is not always available
  – e.g., many tuples have no recorded value for several attributes, such as customer income in sales data
• Missing data may be due to
  – equipment malfunction
  – inconsistency with other recorded data, leading to deletion
  – data not entered due to misunderstanding
  – certain data not being considered important at the time of entry
  – failure to register history or changes of the data
• Missing data may need to be inferred.

How to Handle Missing Data?
• Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably
• Fill in the missing value manually: tedious + infeasible?
• Fill it in automatically with
  – a global constant: e.g., "unknown" (a new class?!)
  – the attribute mean
  – the attribute mean for all samples belonging to the same class: smarter
  – the most probable value: inference-based, e.g., a Bayesian formula or a decision tree
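A minimal sketch of these automatic fill-in strategies, using Python with pandas; the toy table and its column names ("class", "income") are invented for illustration:

    import pandas as pd

    # Toy relation: a class label plus a numeric attribute with missing values.
    df = pd.DataFrame({
        "class":  ["A", "A", "B", "B", "B"],
        "income": [30000.0, None, 52000.0, None, 48000.0],
    })

    # Global constant: mark every hole with one sentinel value.
    by_constant = df["income"].fillna(-1)

    # Attribute mean: fill with the overall mean of the attribute.
    by_mean = df["income"].fillna(df["income"].mean())

    # Class-conditional mean (the "smarter" variant): fill with the mean of
    # the samples that share the tuple's class label.
    by_class_mean = df["income"].fillna(
        df.groupby("class")["income"].transform("mean")
    )

The inference-based option would instead fit a model (for example, a decision tree) on the tuples where the value is present and predict the missing entries.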
Noisy Data
• Noise: random error or variance in a measured variable
• Incorrect attribute values may be due to
  – faulty data collection instruments
  – data entry problems
  – data transmission problems
  – technology limitations
  – inconsistency in naming conventions
• Other data problems that require data cleaning
  – duplicate records
  – incomplete data
  – inconsistent data

How to Handle Noisy Data?
• Binning method:
  – first sort the data and partition it into (equi-depth) bins
  – then smooth by bin means, by bin medians, or by bin boundaries, etc.
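A minimal sketch of equi-depth binning and the smoothing variants in plain Python; the sample prices follow the widely used textbook binning example:

    def equi_depth_bins(values, n_bins):
        """Sort the data, then split it into n_bins bins of (roughly) equal depth."""
        data = sorted(values)
        depth, extra = divmod(len(data), n_bins)
        bins, start = [], 0
        for i in range(n_bins):
            end = start + depth + (1 if i < extra else 0)
            bins.append(data[start:end])
            start = end
        return bins

    def smooth_by_means(bins):
        """Replace every value in a bin with that bin's mean."""
        return [[sum(b) / len(b)] * len(b) for b in bins]

    def smooth_by_boundaries(bins):
        """Snap each value to the nearer of its bin's min or max boundary."""
        return [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b] for b in bins]

    prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
    bins = equi_depth_bins(prices, 3)
    print(bins)                        # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
    print(smooth_by_means(bins))       # [[9.0, ...], [22.75, ...], [29.25, ...]]
    print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]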