Data Mining: Concepts and Techniques
Data Mining: Concepts and Techniques — Chapter 2 —
Assignment 1, due April 10, 2010 (programming); groups of two
Chapter 2: Data Preprocessing
General data characteristics
Basic data description and exploration
Measuring data similarity
Data cleaning
Data integration and transformation
Data reduction
Summary
Types of Attribute Values
Nominal: e.g., profession, ID numbers, eye color, zip codes
Ordinal: e.g., rankings (e.g., army, professions), grades, height in {tall, medium, short}
Binary: e.g., medical test (positive vs. negative)
Interval: e.g., calendar dates, body temperatures
Ratio: e.g., temperature in Kelvin, length, time, counts
Discrete vs. Continuous Attributes
Discrete Attribute
Has only a finite or countably infinite set of values
E.g., zip codes, profession, or the set of words in a collection of documents
Sometimes represented as integer variables
Note: binary attributes are a special case of discrete attributes
Continuous Attribute
Has real numbers as attribute values
Examples: temperature, height, or weight
Practically, real values can only be measured and represented using a finite number of digits
Continuous attributes are typically represented as floating-point variables
Chapter 2: Data Preprocessing
General data characteristics
Basic data description and exploration
Measuring data similarity
Data cleaning
Data integration and transformation
Data reduction
Summary
Mining Data Descriptive Characteristics
Motivation
To better understand the data: its distribution, central tendency, and variation (median, max, min, quartiles, outliers, variance)
Numeric dimensions correspond to sorted intervals
Boxplot or quantile analysis on sorted intervals
Measuring the Central Tendency
Mean (sample vs. population): $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$, $\mu = \frac{\sum x}{N}$
Weighted arithmetic mean: $\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$
Trimmed mean: chopping extreme values
Median: a holistic measure
Middle value if odd number of values, or average of the middle two values otherwise
Estimated by interpolation (for grouped data): $\text{median} = L_1 + \left(\frac{N/2 - (\sum \text{freq})_l}{\text{freq}_{\text{median}}}\right) \times \text{width}$
Mode
Value that occurs most frequently in the data
Unimodal, bimodal, trimodal
Empirical formula: $\text{mean} - \text{mode} = 3 \times (\text{mean} - \text{median})$
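These measures are simple to compute directly. Below is a minimal sketch in plain Python (function names and the sample data are illustrative, not from the slides):

```python
def mean(xs):
    return sum(xs) / len(xs)

def weighted_mean(xs, ws):
    return sum(w * x for x, w in zip(xs, ws)) / sum(ws)

def trimmed_mean(xs, frac=0.1):
    # Chop off frac of the values at each extreme before averaging.
    xs = sorted(xs)
    k = int(len(xs) * frac)
    return mean(xs[k:len(xs) - k])

def median(xs):
    xs = sorted(xs)
    n, mid = len(xs), len(xs) // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def mode(xs):
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return max(counts, key=counts.get)

data = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
print(mean(data), median(data), mode(data))  # 20.33..., 22.5, 21
```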
Symmetric vs. Skewed Data
Median, mean and mode of symmetric, positively and negatively skewed data
[Figure: three distributions, with panels for positively skewed, symmetric, and negatively skewed data]
Example: Employee Wages at PT. Satria Semarang

Daily Wage    f    Cumulative f
200-219       4      4
220-239       8     12
240-259      17     29
260-279      24     53
280-299      15     68
300-319       9     77
320-339       5     82

N = Σf = 82; median position = 82/2 = 41, which falls in the class 260-279
Lower class boundary: (259 + 260)/2 = 259.5; upper class boundary: (279 + 280)/2 = 279.5
From the lower boundary: Me = 259.5 + ((41 - 29)/24) × 20 = 259.5 + 10 = 269.5
From the upper boundary: Me = 279.5 - ((41 - 29)/24) × 20 = 279.5 - 10 = 269.5
A second grouped data set follows the same formula: Me = 64.5 + (14/23) × 10 = 64.5 + 6.1 ≈ 70.6
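The interpolation above is mechanical enough to script. A small sketch, assuming class boundaries at the usual half-unit offsets (the function name is illustrative):

```python
def grouped_median(boundaries, freqs):
    """boundaries: class boundaries [b0, b1, ..., bk]; freqs: class frequencies."""
    n = sum(freqs)
    cum = 0
    for i, f in enumerate(freqs):
        if cum + f >= n / 2:                       # median class found
            L1 = boundaries[i]                     # lower boundary of median class
            width = boundaries[i + 1] - boundaries[i]
            return L1 + (n / 2 - cum) / f * width
        cum += f

# Class boundaries 199.5, 219.5, ..., 339.5 and the frequencies from the table.
bounds = [199.5 + 20 * i for i in range(8)]
freqs = [4, 8, 17, 24, 15, 9, 5]
print(grouped_median(bounds, freqs))  # 269.5
```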
Measuring the Dispersion of Data
Quartiles, outliers and boxplots
Quartiles: Q1 (25th percentile), Q3 (75th percentile)
Inter-quartile range: IQR = Q3 – Q1
Five number summary: min, Q1, M, Q3, max
Boxplot: ends of the box are the quartiles, the median is marked, whiskers extend to min/max, and outliers are plotted individually
Outlier: usually, a value more than 1.5 × IQR above Q3 or below Q1
Variance and standard deviation (sample: s, population: σ)
Variance: (algebraic, scalable computation)
Sample: $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{1}{n-1}\left[\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2\right]$
Population: $\sigma^2 = \frac{1}{N}\sum_{i=1}^{n}(x_i - \mu)^2 = \frac{1}{N}\sum_{i=1}^{n} x_i^2 - \mu^2$
Standard deviation s (or σ) is the square root of variance s2 (or σ2)
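A minimal sketch of these dispersion measures in plain Python. Textbook quartile definitions vary slightly; this one interpolates linearly between order statistics (an assumption, not mandated by the slides):

```python
def quantile(xs, q):
    xs = sorted(xs)
    idx = q * (len(xs) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(xs) - 1)
    return xs[lo] + (idx - lo) * (xs[hi] - xs[lo])  # linear interpolation

def five_number_summary(xs):
    return (min(xs), quantile(xs, 0.25), quantile(xs, 0.5),
            quantile(xs, 0.75), max(xs))

def sample_variance(xs):
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

data = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
mn, q1, med, q3, mx = five_number_summary(data)
iqr = q3 - q1
outliers = [x for x in data if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
print((mn, q1, med, q3, mx), iqr, outliers, sample_variance(data) ** 0.5)
```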
Properties of Normal Distribution Curve
The normal (distribution) curve (μ: mean, σ: standard deviation)
From μ−σ to μ+σ: contains about 68% of the measurements
From μ−2σ to μ+2σ: contains about 95% of it
From μ−3σ to μ+3σ: contains about 99.7% of it
Graphic Displays of Basic Statistical Descriptions
Boxplot: graphic display of the five-number summary
Histogram: x-axis shows values, y-axis represents frequencies
Scatter plot: each pair of values is a pair of coordinates and plotted as points in the plane
Loess (local regression) curve: adds a smooth curve to a scatter plot to provide better perception of the pattern of dependence
Histogram Analysis
Graph displays of basic statistical class descriptions
Frequency histograms: a univariate graphical method
Consists of a set of rectangles that reflect the counts or frequencies of the classes present in the given data
Histograms Often Tell More than Boxplots
The two histograms shown on the left may have the same boxplot representation: the same values for min, Q1, median, Q3, and max
But they have rather different data distributions
Scatter plot
Provides a first look at bivariate data to see clusters of points, outliers, etc.
Each pair of values is treated as a pair of coordinates and plotted as points in the plane
Loess Curve
Adds a smooth curve to a scatter plot in order to provide better perception of the pattern of dependence
A loess curve is fitted by setting two parameters: a smoothing parameter, and the degree of the polynomials that are fitted by the regression
Positively and Negatively Correlated Data
The left half fragment is positively correlated; the right half is negatively correlated
Not Correlated Data
Used by permission of M. Ward, Worcester Polytechnic Institute
Scatterplot Matrices
Matrix of scatterplots (x-y diagrams) of the k-dimensional data [a total of C(k, 2) = (k² − k)/2 scatterplots]
Chapter 2: Data Preprocessing
General data characteristics
Basic data description and exploration
Measuring data similarity (Sec. 7.2)
Data cleaning
Data integration and transformation
Data reduction
Summary
Similarity and Dissimilarity
Similarity: numerical measure of how alike two data objects are; value is higher when objects are more alike; often falls in the range [0, 1]
Dissimilarity (i.e., distance): numerical measure of how different two data objects are; lower when objects are more alike; minimum dissimilarity is often 0; upper limit varies
Proximity refers to a similarity or dissimilarity
Data Matrix and Dissimilarity Matrix
Data matrix: n data points with p dimensions; two modes

$$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$$

Dissimilarity matrix: n data points, but registers only the distance; a triangular matrix; single mode

$$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$$
Example: Data Matrix and Distance Matrix

Data matrix:
point   x   y
p1      0   2
p2      2   0
p3      3   1
p4      5   1

Distance matrix (i.e., dissimilarity matrix) for Euclidean distance:
       p1      p2      p3      p4
p1     0       2.828   3.162   5.099
p2     2.828   0       1.414   3.162
p3     3.162   1.414   0       2
p4     5.099   3.162   2       0

[Figure: scatter plot of the four points in the x-y plane]
Minkowski Distance
Minkowski distance: A popular distance measure
$d(i, j) = \sqrt[q]{\,|x_{i1}-x_{j1}|^q + |x_{i2}-x_{j2}|^q + \cdots + |x_{ip}-x_{jp}|^q\,}$
where i = (x_{i1}, x_{i2}, …, x_{ip}) and j = (x_{j1}, x_{j2}, …, x_{jp}) are two p-dimensional data objects, and q is the order
Properties
d(i, j) > 0 if i ≠ j, and d(i, i) = 0 (Positive definiteness)
d(i, j) = d(j, i) (Symmetry)
d(i, j) ≤ d(i, k) + d(k, j) (Triangle Inequality)
A distance that satisfies these properties is a metric
Special Cases of Minkowski Distance
q = 1: Manhattan (city block, L1 norm) distance
E.g., the Hamming distance: the number of bits that are different between two binary vectors
$d(i, j) = |x_{i1}-x_{j1}| + |x_{i2}-x_{j2}| + \cdots + |x_{ip}-x_{jp}|$
q = 2: (L2 norm) Euclidean distance
$d(i, j) = \sqrt{|x_{i1}-x_{j1}|^2 + |x_{i2}-x_{j2}|^2 + \cdots + |x_{ip}-x_{jp}|^2}$
q → ∞: "supremum" (Lmax norm, L∞ norm) distance; this is the maximum difference between any component of the vectors
Do not confuse q with n: all these distances are defined for any number of dimensions. One can also use a weighted distance, the parametric Pearson product-moment correlation, or other dissimilarity measures
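A small sketch of the three special cases, checked against the 2-D points used in the example on the next slide (function names are illustrative):

```python
def minkowski(x, y, q):
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1 / q)

def manhattan(x, y):                 # q = 1
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):                 # q = 2
    return minkowski(x, y, 2)

def supremum(x, y):                  # q -> infinity
    return max(abs(a - b) for a, b in zip(x, y))

p1, p2 = (0, 2), (2, 0)
print(manhattan(p1, p2), euclidean(p1, p2), supremum(p1, p2))  # 4, 2.828..., 2
```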
Example: Minkowski Distance
point   x   y
p1      0   2
p2      2   0
p3      3   1
p4      5   1

L1 (Manhattan) distance matrix:
       p1   p2   p3   p4
p1     0    4    4    6
p2     4    0    2    4
p3     4    2    0    2
p4     6    4    2    0

L2 (Euclidean) distance matrix:
       p1      p2      p3      p4
p1     0       2.828   3.162   5.099
p2     2.828   0       1.414   3.162
p3     3.162   1.414   0       2
p4     5.099   3.162   2       0

L∞ (supremum) distance matrix:
       p1   p2   p3   p4
p1     0    2    3    5
p2     2    0    1    3
p3     3    1    0    2
p4     5    3    2    0
Interval-valued variables
Standardize data
Calculate the mean absolute deviation:
$s_f = \frac{1}{n}\left(|x_{1f}-m_f| + |x_{2f}-m_f| + \cdots + |x_{nf}-m_f|\right)$
where $m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$
Calculate the standardized measurement (z-score):
$z_{if} = \frac{x_{if} - m_f}{s_f}$
Using mean absolute deviation is more robust than using standard deviation
Then calculate the Euclidean distance or another Minkowski distance
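A minimal sketch of this standardization, applied column-wise to a small data matrix (names and data are illustrative):

```python
def standardize_column(col):
    m_f = sum(col) / len(col)                          # column mean
    s_f = sum(abs(x - m_f) for x in col) / len(col)    # mean absolute deviation
    return [(x - m_f) / s_f for x in col]

data = [[0, 2], [2, 0], [3, 1], [5, 1]]                # rows = objects
cols = list(zip(*data))
z = list(zip(*[standardize_column(list(c)) for c in cols]))
print(z)   # standardized rows; a Minkowski distance can now be applied
```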
Binary Variables
A contingency table for binary data:

                 Object j
                 1        0        sum
Object i   1     a        b        a + b
           0     c        d        c + d
         sum   a + c    b + d       p

Distance measure for symmetric binary variables: $d(i, j) = \frac{b + c}{a + b + c + d}$
Distance measure for asymmetric binary variables: $d(i, j) = \frac{b + c}{a + b + c}$
Jaccard coefficient (similarity measure for asymmetric binary variables): $sim_{Jaccard}(i, j) = \frac{a}{a + b + c}$
Note: the Jaccard coefficient is the same as "coherence": $coherence(i, j) = \frac{\sup(i, j)}{\sup(i) + \sup(j) - \sup(i, j)} = \frac{a}{(a + b) + (a + c) - a}$
Dissimilarity between Binary Variables
Example:

Name   Gender   Fever   Cough   Test-1   Test-2   Test-3   Test-4
Jack   M        Y       N       P        N        N        N
Mary   F        Y       N       P        N        P        N
Jim    M        Y       P       N        N        N        N

Gender is a symmetric attribute; the remaining attributes are asymmetric binary. Let the values Y and P be set to 1, and the value N be set to 0. Then:
d(Jack, Mary) = (0 + 1)/(2 + 0 + 1) = 0.33
d(Jack, Jim) = (1 + 1)/(1 + 1 + 1) = 0.67
d(Jim, Mary) = (1 + 2)/(1 + 1 + 2) = 0.75
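The example computations can be reproduced with a short sketch of the asymmetric binary distance d = (b + c)/(a + b + c); the Y/P-to-1 and N-to-0 encoding follows the slide, and names are illustrative:

```python
def asymmetric_binary_distance(x, y):
    a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)
    b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
    c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
    return (b + c) / (a + b + c)

# Fever, Cough, Test-1, Test-2, Test-3, Test-4 (gender excluded as symmetric)
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]
print(round(asymmetric_binary_distance(jack, mary), 2))  # 0.33
print(round(asymmetric_binary_distance(jack, jim), 2))   # 0.67
print(round(asymmetric_binary_distance(jim, mary), 2))   # 0.75
```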
Nominal Variables
A generalization of the binary variable in that it can take more than 2 states, e.g., red, yellow, blue, green
Method 1: Simple matching
m: # of matches, p: total # of variables: $d(i, j) = \frac{p - m}{p}$
Method 2: Use a large number of binary variables, creating a new binary variable for each of the M nominal states
Ordinal Variables
An ordinal variable can be discrete or continuous
Order is important, e.g., rank
Can be treated like interval-scaled
replace $x_{if}$ by its rank $r_{if} \in \{1, \dots, M_f\}$
map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by $z_{if} = \frac{r_{if} - 1}{M_f - 1}$
compute the dissimilarity using methods for interval-scaled variables
Ratio-Scaled Variables
Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as $Ae^{Bt}$ or $Ae^{-Bt}$
Methods:
treat them like interval-scaled variables—not a good choice! (why? the scale can be distorted)
apply logarithmic transformation: $y_{if} = \log(x_{if})$
treat them as continuous ordinal data and treat their rank as interval-scaled
Variables of Mixed Types
A database may contain all six types of variables: symmetric binary, asymmetric binary, nominal, ordinal, interval, and ratio. One may use a weighted formula to combine their effects:
$d(i, j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)} d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$
If f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$, or $d_{ij}^{(f)} = 1$ otherwise
If f is interval-based: use the normalized distance
If f is ordinal or ratio-scaled: compute ranks $r_{if}$ and $z_{if} = \frac{r_{if} - 1}{M_f - 1}$, and treat $z_{if}$ as interval-scaled
Data Mining: Concepts and Techniques
36
Vector Objects: Cosine Similarity
Vector objects: keywords in documents, gene features in micro-arrays, …
Applications: information retrieval, biologic taxonomy, …
Cosine measure: if d1 and d2 are two vectors, then cos(d1, d2) = (d1 • d2) / (||d1|| ||d2||), where • indicates the vector dot product and ||d|| is the length of vector d
Example:
d1 = (3, 2, 0, 5, 0, 0, 0, 2, 0, 0)
d2 = (1, 0, 0, 0, 0, 0, 0, 1, 0, 2)
d1 • d2 = 3×1 + 2×0 + 0×0 + 5×0 + 0×0 + 0×0 + 0×0 + 2×1 + 0×0 + 0×2 = 5
||d1|| = (3² + 2² + 0² + 5² + 0² + 0² + 0² + 2² + 0² + 0²)^0.5 = 42^0.5 = 6.481
||d2|| = (1² + 0² + 0² + 0² + 0² + 0² + 0² + 1² + 0² + 2²)^0.5 = 6^0.5 = 2.449
cos(d1, d2) = 5 / (6.481 × 2.449) = 0.315
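A direct transcription of the cosine measure into Python, reproducing the document-vector example above:

```python
def cosine(d1, d2):
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = sum(a * a for a in d1) ** 0.5
    n2 = sum(b * b for b in d2) ** 0.5
    return dot / (n1 * n2)

d1 = [3, 2, 0, 5, 0, 0, 0, 2, 0, 0]
d2 = [1, 0, 0, 0, 0, 0, 0, 1, 0, 2]
print(round(cosine(d1, d2), 3))  # 0.315
```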
Chapter 2: Data Preprocessing
General data characteristics
Basic data description and exploration
Measuring data similarity
Data cleaning
Data integration and transformation
Data reduction
Summary
Major Tasks in Data Preprocessing
Data cleaning: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
Data integration: integration of multiple databases, data cubes, or files
Data transformation: normalization and aggregation
Data reduction: obtain a reduced representation of the data volume that produces the same or similar analytical results
Data discretization: part of data reduction, of particular importance for numerical data
Data Cleaning
No quality data, no quality mining results!
Quality decisions must be based on quality data
e.g., duplicate or missing data may cause incorrect or even misleading statistics
Data extraction, cleaning, and transformation comprise the majority of the work of building a data warehouse
Data cleaning tasks:
Fill in missing values
Identify outliers and smooth out noisy data
Correct inconsistent data
Resolve redundancy caused by data integration
Data in the Real World Is Dirty
incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
e.g., children = “ ” (missing data)
noisy: containing noise, errors, or outliers
e.g., Salary = “−10” (an error)
inconsistent: containing discrepancies in codes or names, e.g.,
Age = “42”, Birthday = “03/07/1997”
Was rating “1, 2, 3”, now rating “A, B, C”
discrepancy between duplicate records
Why Is Data Dirty?
Incomplete data may come from
“Not applicable” data values when collected
Different considerations between the time when the data was collected and when it is analyzed
Human/hardware/software problems
Noisy data (incorrect values) may come from
Faulty data collection instruments
Human or computer error at data entry
Errors in data transmission
Inconsistent data may come from
Different data sources
Duplicate records also need data cleaning
Missing Data
Data is not always available
E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
Missing data may be due to
equipment malfunction
inconsistency with other recorded data, leading to deletion
data not entered due to misunderstanding
certain data not being considered important at the time of entry
no registered history or changes of the data
Missing data may need to be inferred
How to Handle Missing Data?
Ignore the tuple: usually done when the class label is missing (not effective when the percentage of missing values per attribute varies considerably)
Fill in the missing value manually
Fill in automatically with
a global constant: e.g., “unknown”, a new class?!
the attribute mean
the attribute mean for all samples belonging to the same class: smarter
the most probable value: e.g., using a Bayesian method
Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to
faulty data collection instruments
data entry problems
data transmission problems
technology limitations
Other data problems which require data cleaning
duplicate records
incomplete data
inconsistent data
How to Handle Noisy Data?
Binning
first sort data and partition into (equal-frequency) bins
then one can smooth by bin means, smooth by bin medians, smooth by bin boundaries, etc.
Regression
smooth by fitting the data into regression functions
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and check by human (e.g., deal with possible outliers)
Simple Discretization Methods: Binning
Equal-width (distance) partitioning
Divides the range into N intervals of equal size: uniform grid
if A and B are the lowest and highest values of the attribute, the width of intervals will be W = (B − A)/N
The most straightforward, but outliers may dominate presentation
Skewed data is not handled well
Equal-depth (frequency) partitioning
Divides the range into N intervals, each containing approximately the same number of samples
Good data scaling
Managing categorical attributes can be tricky
Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
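A minimal sketch of equal-frequency binning with two of the smoothing variants, reproducing the price example (function names are illustrative; it assumes the data size divides evenly into the bins):

```python
def equal_frequency_bins(sorted_data, n_bins):
    size = len(sorted_data) // n_bins
    return [sorted_data[i * size:(i + 1) * size] for i in range(n_bins)]

def smooth_by_means(bins):
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    # Replace each value with the closer of the two bin boundaries.
    return [[min(b[0], b[-1], key=lambda bd: abs(x - bd)) for x in b] for b in bins]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins = equal_frequency_bins(prices, 3)
print(smooth_by_means(bins))       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```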
Regression
[Figure: data points in the x-y plane smoothed by fitting the regression line y = x + 1; the value Y1 at X1 is smoothed to Y1′]
Cluster Analysis
Data Cleaning as a Process
Data discrepancy detection
Use metadata (e.g., domain, range, dependency, distribution)
Check field overloading
Check uniqueness rule, consecutive rule, and null rule
Use commercial tools
Data scrubbing: use simple domain knowledge (e.g., postal code, spell-check) to detect errors and make corrections
Data auditing: analyze data to discover rules and relationships to detect violators (e.g., correlation and clustering to find outliers)
Data migration and integration
Data migration tools: allow transformations to be specified
ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through a graphical user interface
Integration of the two processes
Iterative and interactive (e.g., Potter’s Wheel)
Chapter 2: Data Preprocessing
General data characteristics
Basic data description and exploration
Measuring data similarity
Data cleaning
Data integration and transformation
Data reduction
Summary
Data Integration
Data integration: combines data from multiple sources into a coherent store
Schema integration: e.g., A.cust-id ≡ B.cust-#
Integrate metadata from different sources
Entity identification problem: identify real-world entities from multiple data sources, e.g., Bill Clinton = William Clinton
Detecting and resolving data value conflicts
For the same real-world entity, attribute values from different sources are different
Possible reasons: different representations, different scales, e.g., metric vs. British units
Handling Redundancy in Data Integration
Redundant data often occur when multiple databases are integrated
Object identification: The same attribute or object may have different names in different databases
Derivable data: One attribute may be a “derived” attribute in another table, e.g., annual revenue
Redundant attributes may be detected by correlation analysis
Careful integration of the data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality
Correlation Analysis (Numerical Data)
Correlation coefficient (also called Pearson’s product moment coefficient)
$r_{p,q} = \frac{\sum (p - \bar{p})(q - \bar{q})}{(n-1)\,\sigma_p \sigma_q} = \frac{\sum (pq) - n\,\bar{p}\,\bar{q}}{(n-1)\,\sigma_p \sigma_q}$

where n is the number of tuples (records), $\bar{p}$ and $\bar{q}$ are the respective means of p and q, σp and σq are the respective standard deviations of p and q, and Σ(pq) is the sum of the pq cross-product.
If $r_{p,q}$ > 0, p and q are positively correlated (p’s values increase as q’s do); the higher the value, the stronger the correlation. $r_{p,q}$ = 0: independent; $r_{p,q}$ < 0: negatively correlated
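A plain-Python sketch of the coefficient as defined above (the sample vectors are illustrative):

```python
def pearson(p, q):
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    sp = (sum((x - mp) ** 2 for x in p) / (n - 1)) ** 0.5
    sq = (sum((y - mq) ** 2 for y in q) / (n - 1)) ** 0.5
    cross = sum((x - mp) * (y - mq) for x, y in zip(p, q))
    return cross / ((n - 1) * sp * sq)

p = [1, 2, 3, 4, 5]
q = [2, 4, 5, 4, 5]
print(round(pearson(p, q), 3))  # 0.775 -> positively correlated
```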
Correlation (viewed as linear relationship)
Correlation measures the linear relationship between objects. To compute correlation, we standardize the data objects p and q, and then take their dot product:
$p'_k = (p_k - \text{mean}(p)) / \text{std}(p)$
$q'_k = (q_k - \text{mean}(q)) / \text{std}(q)$
$\text{correlation}(p, q) = p' \cdot q'$
Visually Evaluating Correlation
Scatter plots showing correlations from −1 to 1.
Correlation Analysis (Categorical Data)
Χ2 (chi-square) test
$\chi^2 = \sum \frac{(\text{Observed} - \text{Expected})^2}{\text{Expected}}$
The larger the Χ2 value, the more likely the variables are related The cells that contribute the most to the Χ2 value are those whose actual count is very different from the expected count
Correlation does not imply causality
# of hospitals and # of car-theft in a city are correlated
Both are causally linked to the third variable: population
Chi-Square Calculation: An Example
                           Play chess   Not play chess   Sum (row)
Like science fiction       250 (90)     200 (360)          450
Not like science fiction    50 (210)   1000 (840)         1050
Sum (col.)                 300         1200               1500

Χ² (chi-square) calculation (numbers in parentheses are expected counts calculated based on the data distribution in the two categories):
$\chi^2 = \frac{(250-90)^2}{90} + \frac{(50-210)^2}{210} + \frac{(200-360)^2}{360} + \frac{(1000-840)^2}{840} = 507.93$
It shows that like_science_fiction and play_chess are correlated in the group
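A small sketch that derives the expected counts from the row/column marginals and reproduces the χ² value above:

```python
def chi_square(observed):
    rows = [sum(r) for r in observed]
    cols = [sum(c) for c in zip(*observed)]
    total = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total   # expected count from marginals
            chi2 += (obs - exp) ** 2 / exp
    return chi2

table = [[250, 200],    # like science fiction:     play chess, not play chess
         [50, 1000]]    # not like science fiction
print(round(chi_square(table), 2))  # ~507.9, matching the calculation above
```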
Data Transformation
A function that maps the entire set of values of a given attribute to a new set of replacement values s.t. each old value can be identified with one of the new values
Methods
Smoothing: remove noise from data
Aggregation: summarization, data cube construction
Generalization: concept hierarchy climbing
Normalization: scaled to fall within a small, specified range
min-max normalization
z-score normalization
normalization by decimal scaling
Attribute/feature construction: new attributes constructed from the given ones
Data Transformation: Normalization
Min-max normalization: to [new_min_A, new_max_A]
$v' = \frac{v - \min_A}{\max_A - \min_A}(\text{new\_max}_A - \text{new\_min}_A) + \text{new\_min}_A$
Ex. Let income range from $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to $\frac{73{,}600 - 12{,}000}{98{,}000 - 12{,}000}(1.0 - 0) + 0 = 0.716$
Z-score normalization (μ: mean, σ: standard deviation):
$v' = \frac{v - \mu_A}{\sigma_A}$
Ex. Let μ = 54,000, σ = 16,000. Then $\frac{73{,}600 - 54{,}000}{16{,}000} = 1.225$
Normalization by decimal scaling:
$v' = \frac{v}{10^j}$, where j is the smallest integer such that max(|v′|) < 1
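A minimal sketch of the three normalization methods, reproducing the income examples (the decimal-scaling input values are illustrative):

```python
import math

def min_max(v, mn, mx, new_mn=0.0, new_mx=1.0):
    return (v - mn) / (mx - mn) * (new_mx - new_mn) + new_mn

def z_score(v, mu, sigma):
    return (v - mu) / sigma

def decimal_scaling(values):
    # Smallest j such that every value divided by 10^j has magnitude < 1.
    j = math.floor(math.log10(max(abs(v) for v in values))) + 1
    return [v / 10 ** j for v in values]

print(round(min_max(73600, 12000, 98000), 3))  # 0.716
print(round(z_score(73600, 54000, 16000), 3))  # 1.225
print(decimal_scaling([-986, 917]))            # [-0.986, 0.917]
```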
Chapter 2: Data Preprocessing
General data characteristics
Basic data description and exploration
Measuring data similarity
Data cleaning
Data integration and transformation
Data reduction
Summary
Data Reduction Strategies
Why data reduction?
A database/data warehouse may store terabytes of data
Complex data analysis/mining may take a very long time to run on the complete data set
Data reduction: obtain a reduced representation of the data set that is much smaller in volume but yet produces the same (or almost the same) analytical results
Data reduction strategies
Dimensionality reduction — e.g., remove unimportant attributes
Numerosity reduction (some simply call it data reduction)
Data cube aggregation
Data compression
Regression
Discretization (and concept hierarchy generation)
Dimensionality Reduction
Curse of dimensionality
When dimensionality increases, data becomes increasingly sparse
Density and distance between points, which are critical to clustering and outlier analysis, become less meaningful
The possible combinations of subspaces grow exponentially
Dimensionality reduction
Avoid the curse of dimensionality
Help eliminate irrelevant features and reduce noise
Reduce time and space required in data mining
Allow easier visualization
Dimensionality reduction techniques
Principal component analysis
Singular value decomposition
Supervised and nonlinear techniques (e.g., feature selection)
Dimensionality Reduction: Principal Component Analysis (PCA)
Find a projection that captures the largest amount of variation in the data
Find the eigenvectors of the covariance matrix; these eigenvectors define the new space
[Figure: data points in the (x1, x2) plane with the principal eigenvector e]
Principal Component Analysis (Steps)
Given N data vectors from n dimensions, find k ≤ n orthogonal vectors (principal components) that can best be used to represent the data
Normalize input data: each attribute falls within the same range
Compute k orthonormal (unit) vectors, i.e., principal components
Each input data vector is a linear combination of the k principal component vectors
The principal components are sorted in order of decreasing “significance” or strength
Since the components are sorted, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance (using the strongest principal components, it is possible to reconstruct a good approximation of the original data)
Works for numeric data only
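A compact sketch of these steps, assuming NumPy is available (the sample matrix is illustrative):

```python
import numpy as np

def pca(X, k):
    Xc = X - X.mean(axis=0)                 # center each attribute
    cov = np.cov(Xc, rowvar=False)          # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: for symmetric matrices
    order = np.argsort(eigvals)[::-1]       # sort by decreasing "significance"
    components = eigvecs[:, order[:k]]
    return Xc @ components                  # project onto the k components

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])
print(pca(X, 1))  # 8 points reduced from 2 dimensions to 1
```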
Feature Subset Selection
Another way to reduce dimensionality of data
Redundant features
duplicate much or all of the information contained in one or more other attributes
E.g., purchase price of a product and the amount of sales tax paid
Irrelevant features
contain no information that is useful for the data mining task at hand
E.g., students' ID is often irrelevant to the task of predicting students' GPA
Heuristic Search in Feature Selection
There are $2^d$ possible feature combinations of d features
Typical heuristic feature selection methods:
Best single features under the feature independence assumption: choose by significance tests
Best step-wise feature selection:
The best single feature is picked first
Then the next best feature conditioned on the first, ...
Step-wise feature elimination:
Repeatedly eliminate the worst feature
Best combined feature selection and elimination
Optimal branch and bound:
Use feature elimination and backtracking
Feature Creation
Create new attributes that can capture the important information in a data set much more efficiently than the original attributes
Three general methodologies
Feature extraction: domain-specific
Mapping data to a new space (see: data reduction)
E.g., Fourier transformation, wavelet transformation
Feature construction
Combining features
Data discretization
Mapping Data to a New Space
Fourier transform
Wavelet transform
[Figure: “Two Sine Waves” and “Two Sine Waves + Noise” signals, with the frequency spectra recovered by the transform]
Numerosity (Data) Reduction
Reduce data volume by choosing alternative, smaller forms of data representation
Parametric methods (e.g., regression)
Assume the data fits some model, estimate model parameters, store only the parameters, and discard the data (except possible outliers)
Example: log-linear models — obtain a value at a point in m-D space as the product of appropriate marginal subspaces
Non-parametric methods
Do not assume models
Major families: histograms, clustering, sampling
Parametric Data Reduction: Regression and Log-Linear Models
Linear regression: Data are modeled to fit a straight line
Often uses the least-square method to fit the line
Multiple regression: allows a response variable Y to be modeled as a linear function of multidimensional feature vector
Log-linear model: approximates discrete multidimensional probability distributions
Regress Analysis and Log-Linear Models
Linear regression: Y = w X + b
Two regression coefficients, w and b, specify the line and are to be estimated by using the data at hand
Using the least-squares criterion on the known values of Y1, Y2, …, X1, X2, ….
Multiple regression: Y = b0 + b1 X1 + b2 X2
Many nonlinear functions can be transformed into the above
Log-linear models: the multi-way table of joint probabilities is approximated by a product of lower-order tables
Probability: $p(a, b, c, d) = \alpha_{ab}\,\beta_{ac}\,\chi_{ad}\,\delta_{bcd}$
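A minimal least-squares sketch for the one-variable case Y = wX + b (the sample data is illustrative):

```python
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return w, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 2.9, 4.2, 4.8, 6.1]
w, b = linear_fit(xs, ys)
print(round(w, 3), round(b, 3))  # slope ~0.99, intercept ~1.05
```

Only w and b need to be stored; the original points can be discarded (up to the residual error), which is the sense in which regression reduces the data.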
Data Cube Aggregation
The lowest level of a data cube (base cuboid)
The aggregated data for an individual entity of interest
E.g., a customer in a phone calling data warehouse
Multiple levels of aggregation in data cubes
Reference appropriate levels
Further reduce the size of data to deal with Use the smallest representation which is enough to solve the task
Queries regarding aggregated information should be answered using data cube, when possible
Data Compression
String compression
There are extensive theories and well-tuned algorithms
Typically lossless
But only limited manipulation is possible without expansion
Audio/video compression
Typically lossy compression, with progressive refinement
Sometimes small fragments of signal can be reconstructed without reconstructing the whole
Time sequences are not audio
Typically short and vary slowly with time
Data Compression
[Figure: original data reduced to compressed data by lossless compression; lossy compression yields only an approximation of the original data]
Data Reduction Method: Clustering
Partition the data set into clusters based on similarity, and store only the cluster representation (e.g., centroid and diameter)
Can be very effective if data is clustered, but not if data is “smeared”
Can have hierarchical clustering and be stored in multidimensional index tree structures
There are many choices of clustering definitions and clustering algorithms
Cluster analysis will be studied in depth in Chapter 7
Data Reduction Method: Sampling
Sampling: obtaining a small sample s to represent the whole data set N
Allows a mining algorithm to run with a complexity that is potentially sub-linear in the size of the data
Key principle: choose a representative subset of the data
Simple random sampling may have very poor performance in the presence of skew
Develop adaptive sampling methods, e.g., stratified sampling
Note: sampling may not reduce database I/Os (page at a time)
Types of Sampling
Simple random sampling
There is an equal probability of selecting any particular item
Sampling without replacement
Once an object is selected, it is removed from the population
Sampling with replacement
A selected object is not removed from the population
Stratified sampling
Partition the data set, and draw samples from each partition (proportionally, i.e., approximately the same percentage of the data)
Used in conjunction with skewed data
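A sketch of the three variants using only the standard library (function and field names are illustrative):

```python
import random
from collections import defaultdict

def srswor(data, n):                 # simple random sampling without replacement
    return random.sample(data, n)

def srswr(data, n):                  # simple random sampling with replacement
    return [random.choice(data) for _ in range(n)]

def stratified(data, key, frac):     # proportional sample from each stratum
    strata = defaultdict(list)
    for item in data:
        strata[key(item)].append(item)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * frac))
        sample.extend(random.sample(group, k))
    return sample

data = [("young", i) for i in range(70)] + [("senior", i) for i in range(30)]
print(len(stratified(data, key=lambda r: r[0], frac=0.1)))  # ~10, both strata kept
```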
Sampling: Cluster or Stratified Sampling
[Figure: raw data on the left, cluster/stratified sample on the right]
Data Reduction: Discretization
Three types of attributes:
Nominal — values from an unordered set, e.g., color, profession
Ordinal — values from an ordered set, e.g., military or academic rank
Continuous — real numbers
Discretization:
Divide the range of a continuous attribute into intervals
Some classification algorithms only accept categorical attributes.
Reduce data size by discretization
Prepare for further analysis
Discretization and Concept Hierarchy
Discretization
Reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals
Interval labels can then be used to replace actual data values
Supervised vs. unsupervised
Split (top-down) vs. merge (bottom-up)
Discretization can be performed recursively on an attribute
Concept hierarchy formation
Recursively reduce the data by collecting and replacing low level concepts (such as numeric values for age) by higher level concepts (such as young, middle-aged, or senior)
Discretization and Concept Hierarchy Generation for Numeric Data
Typical methods: All the methods can be applied recursively
Binning (covered above): top-down split, unsupervised
Histogram analysis (covered above): top-down split, unsupervised
Clustering analysis (covered above)
Either top-down split or bottom-up merge, unsupervised
Entropy-based discretization: supervised, top-down split
Interval merging by χ² analysis: unsupervised, bottom-up merge
Segmentation by natural partitioning: top-down split, unsupervised
Discretization Using Class Labels
Entropy-based approach
[Figure: discretization results with 3 categories for both x and y vs. 5 categories for both x and y]
Entropy-Based Discretization
Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the information gain after partitioning is
$I(S, T) = \frac{|S_1|}{|S|}\,\text{Entropy}(S_1) + \frac{|S_2|}{|S|}\,\text{Entropy}(S_2)$
Entropy is calculated based on the class distribution of the samples in the set. Given m classes, the entropy of S1 is
$\text{Entropy}(S_1) = -\sum_{i=1}^{m} p_i \log_2(p_i)$
where $p_i$ is the probability of class i in S1
The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization
The process is recursively applied to partitions obtained until some stopping criterion is met Such a boundary may reduce data size and improve classification accuracy
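A sketch of a single split step: evaluate every candidate boundary T and keep the one minimizing I(S, T) (names and data are illustrative; the recursion and stopping criterion are omitted):

```python
import math

def entropy(labels):
    n = len(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    return -sum(k / n * math.log2(k / n) for k in counts.values())

def best_split(values, labels):
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2   # candidate boundary T
        left = [l for v, l in pairs[:i]]
        right = [l for v, l in pairs[i:]]
        info = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if info < best[0]:
            best = (info, t)
    return best   # (I(S, T), boundary T)

values = [1, 2, 3, 10, 11, 12]
labels = ["a", "a", "a", "b", "b", "b"]
print(best_split(values, labels))  # (0.0, 6.5): a perfect split
```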
Discretization Without Using Class Labels
[Figure: the same data discretized by equal frequency, equal interval width, and K-means]
Interval Merge by χ² Analysis
Merging-based (bottom-up) vs. splitting-based methods
Merge: Find the best neighboring intervals and merge them to form larger intervals recursively
ChiMerge [Kerber AAAI 1992, See also Liu et al. DMKD 2002]
Initially, each distinct value of a numerical attribute A is considered to be one interval
χ² tests are performed for every pair of adjacent intervals
Adjacent intervals with the lowest χ² values are merged together, since low χ² values for a pair indicate similar class distributions
This merge process proceeds recursively until a predefined stopping criterion is met (such as significance level, max-interval, max inconsistency, etc.)
Segmentation by Natural Partitioning
A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, “natural” intervals.
If an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equiwidth intervals
If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals
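A simplified sketch of one level of the rule: round the range outward at the most significant digit, count the distinct msd values, and split into 3, 4, or 5 equi-width intervals (names are illustrative; the percentile trimming and boundary adjustment of the full rule, shown in the example on the next slide, are omitted):

```python
def three_four_five(low, high):
    # Unit of the most significant digit of the larger magnitude.
    msd = 10 ** (len(str(int(max(abs(low), abs(high))))) - 1)
    lo = (low // msd) * msd                 # round down to msd
    hi = -((-high) // msd) * msd            # round up to msd
    distinct = round((hi - lo) / msd)       # distinct msd values covered
    n = {3: 3, 6: 3, 7: 3, 9: 3, 2: 4, 4: 4, 8: 4, 1: 5, 5: 5, 10: 5}[distinct]
    width = (hi - lo) / n
    return [(lo + i * width, lo + (i + 1) * width) for i in range(n)]

# Low/High percentiles from the profit example on the next slide:
print(three_four_five(-159, 1838))
# [(-1000.0, 0.0), (0.0, 1000.0), (1000.0, 2000.0)]
```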
Example of 3-4-5 Rule

Step 1: collect the data extremes for profit: Min = −$351, Low (i.e., 5%-tile) = −$159, High (i.e., 95%-tile) = $1,838, Max = $4,700
Step 2: most significant digit: msd = 1,000; rounding gives Low′ = −$1,000, High′ = $2,000
Step 3: (−$1,000 - $2,000) covers 3 distinct values at the msd, so partition into 3 equi-width intervals: (−$1,000 - 0), (0 - $1,000), ($1,000 - $2,000)
Step 4: adjust the boundaries to cover Min and Max, giving the top tier (−$400 - $5,000), and partition each interval further:
(−$400 - 0): (−$400 - −$300), (−$300 - −$200), (−$200 - −$100), (−$100 - 0)
(0 - $1,000): (0 - $200), ($200 - $400), ($400 - $600), ($600 - $800), ($800 - $1,000)
($1,000 - $2,000): ($1,000 - $1,200), ($1,200 - $1,400), ($1,400 - $1,600), ($1,600 - $1,800), ($1,800 - $2,000)
($2,000 - $5,000): ($2,000 - $3,000), ($3,000 - $4,000), ($4,000 - $5,000)
Concept Hierarchy Generation for Categorical Data
Specification of a partial/total ordering of attributes explicitly at the schema level by users or experts
street < city < state < country
Specification of a hierarchy for a set of values by explicit data grouping
{Urbana, Champaign, Chicago} < Illinois
Specification of only a partial set of attributes
E.g., only street < city, not others
Automatic generation of hierarchies (or attribute levels) by the analysis of the number of distinct values
E.g., for a set of attributes: {street, city, state, country}
Automatic Concept Hierarchy Generation
Some hierarchies can be automatically generated based on the analysis of the number of distinct values per attribute in the data set
The attribute with the most distinct values is placed at the lowest level of the hierarchy
Exceptions: e.g., weekday, month, quarter, year

country (15 distinct values)
  province_or_state (365 distinct values)
    city (3,567 distinct values)
      street (674,339 distinct values)
Chapter 2: Data Preprocessing
General data characteristics
Basic data description and exploration
Measuring data similarity
Data cleaning
Data integration and transformation
Data reduction
Summary
Summary
Data preparation/preprocessing: A big issue for data mining
Data description, data exploration, and measuring data similarity set the base for quality data preprocessing
Data preparation includes
Data cleaning
Data integration and data transformation
Data reduction (dimensionality and numerosity reduction)
Many methods have been developed, but data preprocessing remains an active area of research
References
D. P. Ballou and G. K. Tayi. Enhancing data quality in data warehouse environments. Communications of the ACM, 42:73-78, 1999
W. Cleveland. Visualizing Data. Hobart Press, 1993
T. Dasu and T. Johnson. Exploratory Data Mining and Data Cleaning. John Wiley, 2003
T. Dasu, T. Johnson, S. Muthukrishnan, and V. Shkapenyuk. Mining Database Structure; Or, How to Build a Data Quality Browser. SIGMOD'02
U. Fayyad, G. Grinstein, and A. Wierse. Information Visualization in Data Mining and Knowledge Discovery. Morgan Kaufmann, 2001
H. V. Jagadish et al. Special Issue on Data Reduction Techniques. Bulletin of the Technical Committee on Data Engineering, 20(4), Dec. 1997
D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999
E. Rahm and H. H. Do. Data Cleaning: Problems and Current Approaches. IEEE Bulletin of the Technical Committee on Data Engineering, 23(4)
V. Raman and J. Hellerstein. Potter's Wheel: An Interactive Framework for Data Cleaning and Transformation. VLDB'2001
T. Redman. Data Quality: Management and Technology. Bantam Books, 1992
E. R. Tufte. The Visual Display of Quantitative Information, 2nd ed. Graphics Press, 2001
R. Wang, V. Storey, and C. Firth. A framework for analysis of data quality research. IEEE Trans. Knowledge and Data Engineering, 7:623-640, 1995
Feature Subset Selection Techniques
Brute-force approach: try all possible feature subsets as input to the data mining algorithm
Embedded approaches: feature selection occurs naturally as part of the data mining algorithm
Filter approaches: features are selected before the data mining algorithm is run
Wrapper approaches: use the data mining algorithm as a black box to find the best subset of attributes