#notes #cs471

Recap

Maximum Likelihood for Learning:

Naïve Bayes
  • Assumes features are conditionally independent given the target label; parameters are fit by maximum likelihood, i.e. by counting (see the sketch below)
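
A minimal sketch of how those maximum-likelihood counts turn into a Naïve Bayes classifier. The tiny "spam"/"ham" data and the feature names are made up purely for illustration:

```python
from collections import defaultdict

# Toy Naive Bayes with maximum-likelihood (pure counting) estimates.
# The data below is made up: each example is ({feature: 0/1}, label).
data = [
    ({"free": 1, "meeting": 0}, "spam"),
    ({"free": 1, "meeting": 0}, "spam"),
    ({"free": 0, "meeting": 1}, "ham"),
    ({"free": 0, "meeting": 1}, "ham"),
]

label_counts = defaultdict(int)                           # counts per label
feature_counts = defaultdict(lambda: defaultdict(int))    # counts per (label, feature, value)

for features, label in data:
    label_counts[label] += 1
    for f, v in features.items():
        feature_counts[label][(f, v)] += 1

def predict(features):
    best_label, best_score = None, float("-inf")
    total = sum(label_counts.values())
    for label, count in label_counts.items():
        # P(label) * prod over features of P(feature = value | label),
        # each probability estimated as a ratio of counts
        score = count / total
        for f, v in features.items():
            score *= feature_counts[label][(f, v)] / count
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict({"free": 1, "meeting": 0}))  # -> "spam"
```

Note the weakness the pure counting estimate has: any feature value never seen with a label forces that label's score to zero, which is what Laplace smoothing (below) addresses.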

Laplace Smoothing

  • Pretend you saw every outcome k extra times beyond what you actually observed, i.e. add k to every count before normalizing (see the sketch below)
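
A minimal sketch of add-k (Laplace) smoothing applied to a single probability estimate; the counts and the choice k = 1 here are illustrative:

```python
# Laplace (add-k) smoothing: pretend every outcome was seen k extra times.
def smoothed_prob(count, total, num_outcomes, k=1):
    """Estimate P(outcome) with k pseudo-counts added to every outcome."""
    return (count + k) / (total + k * num_outcomes)

# The unsmoothed maximum-likelihood estimate would give 0/3 = 0 for an
# unseen outcome; with k = 1 and 2 possible outcomes it becomes 1/5 instead.
print(smoothed_prob(count=0, total=3, num_outcomes=2, k=1))  # 0.2
print(smoothed_prob(count=3, total=3, num_outcomes=2, k=1))  # 0.8
```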

S-Fold Cross-Validation

  • Every data point is used in both the training and validation sets (in different folds)
  • Split the data into S equal parts (folds)
  • Use each part in turn as the validation set and the remaining parts as training data
  • Choose the hyperparameter value with the best average validation performance across folds
  • Leave-one-out cross-validation: the special case S = N, so every data point is used as the validation set exactly once (see the sketch after this list)
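
A minimal sketch of S-fold cross-validation for choosing a hyperparameter. The function name cross_validate and the argument train_and_evaluate are placeholders for whatever model-training and validation-scoring routine is actually used:

```python
# S-fold cross-validation sketch; train_and_evaluate(train, val, param) is a
# placeholder that trains a model on `train` with hyperparameter `param`
# and returns its score on `val`.
def cross_validate(data, hyperparams, train_and_evaluate, s=5):
    fold_size = len(data) // s
    best_param, best_score = None, float("-inf")
    for param in hyperparams:
        scores = []
        for i in range(s):
            # Fold i is the validation set; everything else is training data.
            val = data[i * fold_size:(i + 1) * fold_size]
            train = data[:i * fold_size] + data[(i + 1) * fold_size:]
            scores.append(train_and_evaluate(train, val, param))
        # Keep the hyperparameter with the best average validation score.
        avg = sum(scores) / len(scores)
        if avg > best_score:
            best_param, best_score = param, avg
    return best_param

# Leave-one-out cross-validation is the special case s = len(data), so each
# data point serves as the (single-element) validation set exactly once.
```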