Outline
- Introduction
- Descriptive Data Summarization
- Data Cleaning
  - Missing values
  - Noisy data
- Data Integration
  - Redundancy
- Data Transformation

Data Cleaning: Importance
- "Data cleaning is one of the three biggest problems in data warehousing" (Ralph Kimball)
- "Data cleaning is the number one problem in data warehousing" (DCI survey)

Data cleaning tasks:
- Fill in missing values
- Identify outliers and smooth out noisy data

Missing Data
Missing data may be due to:
- equipment malfunction
- inconsistency with other recorded data, leading to deletion
- data not entered because of a misunderstanding
- certain data not being considered important at the time of entry
- failure to register the history or changes of the data
Note that a missing value does not always imply an error (for example, an attribute that allows nulls).

How to Handle Missing Data?
- Ignore the tuple: usually done when the class label is missing (assuming the task is classification). Not effective when the percentage of missing values per attribute varies considerably.
- Fill in the missing value manually: tedious and often infeasible.
- Fill in the value automatically with:
  - a global constant, e.g., "unknown" (beware: this effectively creates a new class!)
  - the attribute mean
  - the attribute mean for all samples belonging to the same class: smarter
  - the most probable value: inference-based, using e.g. a Bayesian formula or a decision tree

Noisy Data
Noise: random error or variance in a measured variable.

How to handle noisy data:
- Binning
- Regression
- Clustering

Binning
Binning methods smooth a sorted data value by consulting its "neighborhood":
1. First, sort all the values.
2. Distribute the sorted values into a number of "buckets", or "bins".
3. Smooth the values in each bin by:
   - means (each bin value is replaced by the bin mean), or
   - medians (each bin value is replaced by the bin median), or
   - boundaries (each bin value is replaced by the closest bin boundary).

Simple Discretization Methods: Binning
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34

Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34

Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29

Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34

Regression
[Figure: data smoothed by fitting the regression line y = x + 1; the observed value Y1 at X1 is replaced by the fitted value Y1'.]

Cluster Analysis
[Figure: cluster analysis.]

Data Integration
Data integration combines data from multiple sources into a coherent store.

Data integration problems:
- Schema integration: integrate metadata from different sources, e.g., recognizing that A.cust-id and B.cust-# denote the same attribute.
- Detecting and resolving data value conflicts: for the same real-world entity, attribute values from different sources may differ. Possible reasons: different representations, different scales, e.g., metric vs. British units.
Redundant Data
Redundant data often arise when multiple databases are integrated:
- Object identification: the same attribute or object may have different names in different databases.
- Derivable data: one attribute may be "derived" from attributes in another table, e.g., annual revenue.

Redundant attributes may be detected by correlation analysis. Careful integration of the data from multiple sources may help reduce or avoid redundancies and inconsistencies, and improve mining speed and quality.

Pearson's Product Moment Coefficient
The correlation coefficient (also called Pearson's product moment coefficient) is

    r(A,B) = (Σ(AB) - n·mean(A)·mean(B)) / ((n - 1)·σA·σB)

where n is the number of tuples, mean(A) and mean(B) are the respective means of A and B, σA and σB are the respective standard deviations of A and B, and Σ(AB) is the sum of the AB cross-product.

The correlation coefficient is always between -1 and +1; the closer it is to +/-1, the closer the relationship is to a perfect linear one. Here is how I tend to interpret correlations:
- -1.0 to -0.7: strong negative association
- -0.7 to -0.3: weak negative association
- -0.3 to +0.3: little or no association
- +0.3 to +0.7: weak positive association
- +0.7 to +1.0: strong positive association

Chi-Square
The χ² (chi-square) test compares observed counts o against the counts e expected under independence: χ² = Σ (o - e)²/e, summed over all cells. The larger the χ² value, the more likely the variables are related.

Chi-Square Calculation: An Example
Suppose a group of 1500 people was surveyed, and the gender of each person was noted (300 male, 1200 female). We have two attributes: Gender and Prefer-reading.

                              Male        Female        Sum (row)
    Like science fiction      250 (90)     200 (360)       450
    Not like science fiction   50 (210)   1000 (840)      1050
    Sum (col.)                300         1200            1500

The expected counts (shown in parentheses) are computed from the marginal sums, e.g.:
- E11 = count(male) * count(fiction) / N = 300 * 450 / 1500 = 90
- E12 = count(male) * count(not_fiction) / N = 300 * 1050 / 1500 = 210

χ² = (250 - 90)²/90 + (50 - 210)²/210 + (200 - 360)²/360 + (1000 - 840)²/840 = 507.93

For this 2-by-2 table, the degrees of freedom are (2 - 1)(2 - 1) = 1. For 1 degree of freedom, the χ² value needed to reject the independence hypothesis at the 0.001 significance level is 10.828. Since our value is far above this, we conclude that Gender and Prefer-reading are (strongly) correlated for the given group of people.

Data Transformation
Data transformation can involve the following:
- Smoothing: remove noise from the data, including binning, regression, and clustering
- Aggregation
- Generalization
- Normalization
- Attribute (feature) construction
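The outline at the top of the deck lists min-max, z-score, and decimal normalization, but those slides are cut off in this extract. A minimal Python sketch of the three schemes; the function names and the salary data are illustrative, not from the slides:

```python
import math

def min_max(values, new_min=0.0, new_max=1.0):
    """Map values linearly from [min, max] onto [new_min, new_max]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in values]

def z_score(values):
    """Center on the mean and scale by the sample standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [(v - mean) / std for v in values]

def decimal_scaling(values):
    """Divide by 10^j, the smallest power of ten making all |values| < 1."""
    j = 0
    while max(abs(v) for v in values) / 10 ** j >= 1:
        j += 1
    return [v / 10 ** j for v in values]

salaries = [12000, 14000, 16000, 73600, 98000]  # illustrative data
print(min_max(salaries))          # smallest salary maps to 0.0, largest to 1.0
print(z_score(salaries))          # results are centered on zero
print(decimal_scaling(salaries))  # here j = 5, so every value is divided by 100000
```

In practice these are one-liners with scikit-learn's MinMaxScaler and StandardScaler; the sketch above simply mirrors the textbook formulas.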
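To close, the two redundancy checks from the data-integration section, correlation analysis and the chi-square test, can be verified numerically. A plain-Python sketch (the function names are mine, not from the slides), using the slide's formula for r and reproducing the survey table's χ²:

```python
def pearson_r(a, b):
    """r(A,B) = (sum(AB) - n*mean(A)*mean(B)) / ((n-1)*sd(A)*sd(B))."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    sd_a = (sum((x - mean_a) ** 2 for x in a) / (n - 1)) ** 0.5
    sd_b = (sum((y - mean_b) ** 2 for y in b) / (n - 1)) ** 0.5
    return (sum(x * y for x, y in zip(a, b)) - n * mean_a * mean_b) / ((n - 1) * sd_a * sd_b)

def expected_counts(table):
    """Expected cell counts under independence: row_sum * col_sum / N."""
    n = sum(sum(row) for row in table)
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    return [[r * c / n for c in cols] for r in rows]

def chi_square(table):
    """chi2 = sum over all cells of (observed - expected)^2 / expected."""
    return sum((o - e) ** 2 / e
               for row, erow in zip(table, expected_counts(table))
               for o, e in zip(row, erow))

# Perfectly linearly related attributes give r = 1.
print(pearson_r([1, 2, 3], [2, 4, 6]))  # 1.0

# The survey table: rows = likes / dislikes science fiction, cols = male / female.
observed = [[250, 200], [50, 1000]]
print(expected_counts(observed))        # [[90.0, 360.0], [210.0, 840.0]]
print(round(chi_square(observed), 2))   # 507.94 (the slides' 507.93 sums terms rounded to two decimals)
```

scipy.stats.pearsonr and scipy.stats.chi2_contingency are the production equivalents; note that chi2_contingency applies a continuity correction to 2-by-2 tables by default, so its statistic differs slightly from the hand calculation above.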