7. Assignment 7#
Due: 2023-03-23
Eligible skills: (links to checklists)
7.2. Dataset and EDA#
Choose a dataset that is well suited for classification and that has all numerical features. If you want to use a dataset with non-numerical features, you will have to convert the categorical features to a one-hot encoding.
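If you do need to encode categorical features, a minimal sketch with pandas (the DataFrame and column names here are made up for illustration):

```python
import pandas as pd

# toy frame with one categorical column; substitute your own data
df = pd.DataFrame({'color': ['red', 'blue', 'red'],
                   'size_cm': [3.1, 4.5, 2.2]})

# pd.get_dummies replaces each listed categorical column with one
# 0/1 indicator column per distinct value
df_encoded = pd.get_dummies(df, columns=['color'])
print(df_encoded)
```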
Hint
Use the UCI ML repository; the “new beta version” will let you filter datasets by the attributes you need.
Include a basic description of the data (what the features are)
Describe the classification task in your own words
Use EDA to determine whether you expect the classification to achieve high accuracy (a starting sketch follows this list). What types of mistakes do you think will happen most?
Hypothesize which classifier will do better and why you think that. Does the data meet the assumptions of Naive Bayes? What is important about this classifier for this application?
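One possible EDA starting point, sketched here with seaborn’s built-in iris data as a stand-in for your dataset (the `species` column plays the role of your class label):

```python
import seaborn as sns

# iris as a stand-in; substitute your own DataFrame and label column
df = sns.load_dataset('iris')

# class balance: a heavily skewed count often means more mistakes
# on the rare class
print(df['species'].value_counts())

# pairwise scatter plots colored by class: well-separated clusters
# suggest high accuracy; heavy overlap suggests those classes will
# be confused with each other
sns.pairplot(df, hue='species')
```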
7.3. Basic Classification#
Fit your chosen classifier with the default parameters on 50% of the data
Test it on the 50% held-out data and generate a classification report (a sketch of these two steps follows this list)
Inspect the model to answer the questions appropriate to your model.
Does this model make sense?
(DT) Are there any leaves that are very small?
(DT) Is this an interpretable number of levels?
(GNB) Do the parameters fit the data well?
(GNB) Do the parameters generate similar synthetic data? (see the inspection sketch after this list)
Interpret the model and its performance in terms of the application. Example questions to consider in your response include:
do you think this model is good enough to use for real?
is this a model you would trust?
do you think that a more complex model should be used?
do you think that maybe this task cannot be done with machine learning?
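A minimal sketch of the 50/50 split, fit, and report, with scikit-learn’s iris data and `GaussianNB` standing in for your dataset and chosen classifier:

```python
from sklearn.datasets import load_iris
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# iris as a stand-in; replace with your own features X and labels y
X, y = load_iris(return_X_y=True)

# 50% train / 50% held-out test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```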
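For the inspection questions, one possible approach for each model type, continuing from the sketch above (`theta_` and `var_` are scikit-learn’s per-class feature means and variances for `GaussianNB`):

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# (DT) depth and number of leaves for a tree fit on the same split
dt = DecisionTreeClassifier().fit(X_train, y_train)
print('depth:', dt.get_depth(), 'leaves:', dt.get_n_leaves())

# (GNB) draw synthetic samples from the fitted per-class Gaussians;
# if they resemble the real data (e.g. in a pairplot), the
# parameters fit the data well
rng = np.random.default_rng(0)
synthetic = np.concatenate([
    rng.normal(mu, np.sqrt(var), size=(20, X.shape[1]))
    for mu, var in zip(clf.theta_, clf.var_)])
print(pd.DataFrame(synthetic).describe())
```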
7.4. Exploring Problem Setups#
Important
Understanding the impact of test/train size is part of classification and helps with evaluation. This exercise is also a chance at python level 2.
Do an experiment to compare test set size vs performance:
Train a model (if using a decision tree, set the max depth to 2 less than the depth it found above) on 10%, 30%, …, 90% of the data. Compute the training accuracy and test accuracy for each training-data size, collecting the results in a DataFrame with columns ['train_pct', 'n_train_samples', 'n_test_samples', 'train_acc', 'test_acc']
Plot the accuracies vs training percentage in a line graph.
Interpret these results. How does training vs test size impact the model’s performance? Does it impact training and test accuracy the same way?
use a loop for this part, possibly also a function (a sketch of the experiment follows below)
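One way to structure the experiment, again with iris and `GaussianNB` as stand-ins; the DataFrame columns match the ones required above:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

rows = []
for train_pct in [10, 30, 50, 70, 90]:
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=train_pct / 100, random_state=0)
    clf = GaussianNB().fit(X_train, y_train)
    rows.append({'train_pct': train_pct,
                 'n_train_samples': len(X_train),
                 'n_test_samples': len(X_test),
                 'train_acc': clf.score(X_train, y_train),
                 'test_acc': clf.score(X_test, y_test)})

results = pd.DataFrame(rows)

# line graph of both accuracies vs training percentage
results.plot(x='train_pct', y=['train_acc', 'test_acc'],
             xlabel='training %', ylabel='accuracy')
```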
Thinking Ahead
ideas for level 3, not required for A7
Repeat the problem setup experiment with cross validation and plot with error bars (one possible sketch appears at the end of this section).
What is the tradeoff to be made in choosing a test/train size?
What is the best test/train size for this dataset?
or with variations:
allowing it to figure out the model depth for each training size, and recording the depth in the loop as well.
repeating each size 10 times, then using summary statistics on that data
Use the extensions above to experiment further with other model parameters.
some of this we’ll learn how to automate in a few weeks, but getting the ideas by doing it yourself can help build understanding and intuition
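If you try the cross-validation variation, one possible sketch: `ShuffleSplit` repeats each training size over 10 random splits, and the standard deviation across repeats gives the error bars (iris and `GaussianNB` are stand-ins again):

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

rows = []
for train_pct in [10, 30, 50, 70, 90]:
    # 10 random train/test splits at each training size
    cv = ShuffleSplit(n_splits=10, train_size=train_pct / 100,
                      random_state=0)
    scores = cross_val_score(GaussianNB(), X, y, cv=cv)
    rows.append({'train_pct': train_pct,
                 'mean_acc': scores.mean(),
                 'std_acc': scores.std()})

results = pd.DataFrame(rows)

# error bars show the spread across the 10 repeats at each size
results.plot(x='train_pct', y='mean_acc', yerr='std_acc')
```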