## Classifiers and decision boundaries

The capabilities of predictive models are determined by the shapes of decision boundaries that the type of model and its parameters allow, and by the learning algorithm's ability to fit them. The purpose of this task is to recall and explore the decision boundaries of some of the models we discussed.

Consider the following models:

- Classification trees of depth 1, also called stumps (limit the tree depth to 1)
- Classification trees of depth up to 3
- Logistic regression
- Linear SVM
- SVM with a 2nd degree polynomial kernel (make sure you set the kernel constant c to 1)
- SVM with a 3rd degree polynomial kernel (set c to 1)
- SVM with an RBF kernel
- k-nearest neighbours with k = 5
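If you want to reproduce the comparison outside the GUI, the models above can be sketched with scikit-learn as a stand-in (the parameter names below, such as `coef0` for the kernel constant c, are scikit-learn's, not Orange's). Fitting them on an XOR-like data set already exposes the key differences: linear models and stumps fail, while deeper trees, kernel SVMs, and k-NN succeed.

```python
# A sketch, assuming scikit-learn as a stand-in for the Orange widgets;
# each entry mirrors one model from the list above.
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

models = {
    "stump (depth 1)": DecisionTreeClassifier(max_depth=1),
    "tree (depth <= 3)": DecisionTreeClassifier(max_depth=3),
    "logistic regression": LogisticRegression(),
    "linear SVM": SVC(kernel="linear"),
    "poly-2 SVM": SVC(kernel="poly", degree=2, coef0=1),  # coef0 plays the role of c
    "poly-3 SVM": SVC(kernel="poly", degree=3, coef0=1),
    "RBF SVM": SVC(kernel="rbf"),
    "5-NN": KNeighborsClassifier(n_neighbors=5),
}

# XOR-like data: not linearly separable, so linear boundaries cannot fit it
X = [[0, 0], [0, 1], [1, 0], [1, 1]] * 10
y = [0, 1, 1, 0] * 10

for name, model in models.items():
    model.fit(X, y)
    print(name, model.score(X, y))
```

Training accuracy on this data is a crude but sufficient signal here: any model whose boundary family cannot express XOR is stuck near 0.5.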

Use the Paint widget (followed by the particular model, the Predictions widget, and a Scatter Plot) to paint different data sets that demonstrate the differences between models. For instance, construct a pair of cases that shows how the capabilities of trees differ from those of linear models; invent a data set that can be handled by trees of depth 3 but not by shallower ones; a data set that requires 3rd-degree polynomial kernels; a set that needs RBF kernels; and so on.
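If painting by hand gets tedious, a data set of the kind described above can also be generated in code and loaded into Orange through the File widget instead. The sketch below (file name `rings.csv` is our choice) produces two concentric rings, a classic case where linear models fail but RBF-kernel SVMs and k-NN cope well:

```python
# A sketch: generate a two-ring data set and save it as CSV,
# as an alternative to painting the points by hand.
import csv
import math
import random

random.seed(0)
rows = []
for label, radius in (("A", 1.0), ("B", 3.0)):
    for _ in range(100):
        angle = random.uniform(0, 2 * math.pi)
        r = radius + random.gauss(0, 0.1)  # jitter the radius a little
        rows.append((r * math.cos(angle), r * math.sin(angle), label))

with open("rings.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "class"])  # header row: two features and a class
    writer.writerows(rows)
```

The resulting file has 200 rows with a two-valued class column, which Orange reads directly.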

This is obviously an open-ended and rather creative exploration task.