Data Mining - Classification and Prediction - Model selection, Study notes of Data Mining

This document covers Classification and Prediction, Classifier Accuracy Measures, Predictor Error Measures, Evaluating the Accuracy of a Classifier or Predictor, Model Selection, and Prediction.


Chapter 6. Classification and Prediction

• What is classification? What is prediction?
• Issues regarding classification and prediction
• Classification by decision tree induction
• Bayesian classification
• Rule-based classification
• Classification by back propagation
• Support Vector Machines (SVM)
• Associative classification
• Lazy learners (or learning from your neighbors)
• Other classification methods
• Prediction
• Accuracy and error measures
• Ensemble methods
• Model selection
• Summary


Classifier Accuracy Measures

• Accuracy of a classifier M, acc(M): the percentage of test set tuples that are correctly classified by the model M
• Error rate (misclassification rate) of M = 1 − acc(M)
• Given m classes, CM_{i,j}, an entry in a confusion matrix, indicates the number of tuples in class i that are labeled by the classifier as class j
• Alternative accuracy measures (e.g., for cancer diagnosis; recomputed in the code sketch below):

    sensitivity = t-pos / pos        /* true positive recognition rate */
    specificity = t-neg / neg        /* true negative recognition rate */
    precision   = t-pos / (t-pos + f-pos)
    accuracy    = sensitivity × pos / (pos + neg) + specificity × neg / (pos + neg)

• This model can also be used for cost-benefit analysis:

    classes              buy_computer = yes   buy_computer = no   total   recognition (%)
    buy_computer = yes   6954                 46                  7000    99.34
    buy_computer = no    412                  2588                3000    86.27
    total                7366                 2634                10000   95.42

                  predicted C1      predicted C2
    actual C1     True positive     False negative
    actual C2     False positive    True negative
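As a quick check of the definitions above, the sketch below recomputes the four measures from the buy_computer confusion matrix, treating buy_computer = yes as the positive class (the variable names are illustrative, not from the slides):

    # Recompute the accuracy measures from the buy_computer confusion matrix.
    t_pos, f_neg = 6954, 46     # actual yes: predicted yes / predicted no
    f_pos, t_neg = 412, 2588    # actual no:  predicted yes / predicted no

    pos = t_pos + f_neg         # 7000 actual positives
    neg = f_pos + t_neg         # 3000 actual negatives

    sensitivity = t_pos / pos                  # 0.9934 (99.34% recognition)
    specificity = t_neg / neg                  # 0.8627 (86.27% recognition)
    precision   = t_pos / (t_pos + f_pos)      # 0.9441
    accuracy    = (sensitivity * pos / (pos + neg)
                   + specificity * neg / (pos + neg))  # 0.9542

    # The weighted form agrees with the direct count-based definition:
    assert abs(accuracy - (t_pos + t_neg) / (pos + neg)) < 1e-12
    print(sensitivity, specificity, precision, accuracy)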

Evaluating the Accuracy of a Classifier or Predictor (I)

• Holdout method (sketched in code below)
  • The given data is randomly partitioned into two independent sets:
    • Training set (e.g., 2/3) for model construction
    • Test set (e.g., 1/3) for accuracy estimation
  • Random sampling: a variation of holdout
    • Repeat holdout k times; accuracy = average of the accuracies obtained
• Cross-validation (k-fold, where k = 10 is most popular; sketched in code below)
  • Randomly partition the data into k mutually exclusive subsets D_1, ..., D_k, each of approximately equal size
  • At the i-th iteration, use D_i as the test set and the others as the training set
  • Leave-one-out: k folds where k = # of tuples, for small-sized data
  • Stratified cross-validation: folds are stratified so that the class distribution in each fold is approximately the same as that in the initial data
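A minimal sketch of both procedures, assuming scikit-learn and its bundled iris data (the slides name no particular library, classifier, or data set):

    # Holdout and stratified 10-fold cross-validation with scikit-learn.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                         train_test_split)
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier(random_state=0)

    # Holdout: 2/3 of the data for training, 1/3 for accuracy estimation.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1/3, stratify=y, random_state=0)
    print("holdout accuracy:", clf.fit(X_train, y_train).score(X_test, y_test))

    # Stratified 10-fold CV: each fold keeps roughly the class distribution
    # of the full data; the reported accuracy is the average over the folds.
    folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=folds)
    print("10-fold CV accuracy:", scores.mean())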

Evaluating the Accuracy of a Classifier or Predictor (II)

• Bootstrap
  • Works well with small data sets
  • Samples the given training tuples uniformly with replacement
    • i.e., each time a tuple is selected, it is equally likely to be selected again and re-added to the training set
• There are several bootstrap methods; a common one is the .632 bootstrap (sketched in code below)
  • Suppose we are given a data set of d tuples. The data set is sampled d times, with replacement, resulting in a training set of d samples. The data tuples that did not make it into the training set end up forming the test set. About 63.2% of the original data will end up in the bootstrap sample, and the remaining 36.8% will form the test set (since (1 − 1/d)^d ≈ e^(−1) ≈ 0.368)
  • Repeat the sampling procedure k times; the overall accuracy of the model combines the per-resample estimates:

    acc(M) = Σ_{i=1}^{k} ( 0.632 × acc(M_i)_test_set + 0.368 × acc(M_i)_train_set )
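A sketch of the .632 bootstrap under the same assumptions as above (scikit-learn classifier, NumPy sampling; the helper name is mine, and the k per-resample estimates are averaged to yield the overall accuracy):

    # .632 bootstrap accuracy estimate, as described above.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    def bootstrap_632(model, X, y, k=10, seed=0):
        """Average of k .632-weighted accuracy estimates (illustrative helper)."""
        rng = np.random.default_rng(seed)
        d = len(X)
        estimates = []
        for _ in range(k):
            # Sample d tuples with replacement -> training set; the tuples
            # never drawn (about 36.8% of them) form the test set.
            train = rng.integers(0, d, size=d)
            in_test = np.ones(d, dtype=bool)
            in_test[train] = False
            if not in_test.any():   # vanishingly unlikely for non-trivial d
                continue
            model.fit(X[train], y[train])
            acc_test = model.score(X[in_test], y[in_test])
            acc_train = model.score(X[train], y[train])
            estimates.append(0.632 * acc_test + 0.368 * acc_train)
        return sum(estimates) / len(estimates)

    X, y = load_iris(return_X_y=True)
    print(".632 bootstrap accuracy:",
          bootstrap_632(DecisionTreeClassifier(random_state=0), X, y))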