13. Classification Models#

First machine learning model: Naive Bayes

import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix, classification_report, roc_auc_score
iris_df = sns.load_dataset('iris')

To start, we will look at the data:

iris_df.head()

image of an iris with the petal width, petal length, sepal length and sepal width annotated

We’re trying to build an automatic flower classifier that, for measurements of a new flower, returns the predicted species. To do this, we have a DataFrame with columns for species, petal width, petal length, sepal length, and sepal width. The species is what type of flower it is; the petal and sepal are parts of the flower.

iris_df.columns

The species will be the target and the measurements will be the features. We want to predict the target from the features, the species from the measurements.

feature_vars = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
target_var = 'species'

13.1. What does Naive Bayes do?#

docs

Naive = independent features; Bayes = most probable

Bayes Estimator
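
Putting those two ideas together: to classify a sample with features \(x_1, \dots, x_4\), the model picks the class \(c\) with the highest posterior probability, treating the features as independent. A sketch of the decision rule (this is the standard Naive Bayes formulation, not something specific to these notes):

\[
\hat{y} = \arg\max_{c} P(y = c) \prod_{i} P(x_i \mid y = c)
\]

For Gaussian Naive Bayes, each \(P(x_i \mid y = c)\) is a normal distribution with its own mean and variance.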

We can look at this data using a pair plot. It plots each pair of numerical variables in a grid of scatterplots, and on the diagonal (where it would be a variable with itself) it shows the distribution of that variable.

sns.pairplot(data=iris_df, hue=target_var)

This data is reasonably separable because the different species (indicated with colors in the plot) do not overlap much. We see that the features are distributed sort of like a normal, or Gaussian, distribution. In 2D, a Gaussian distribution is like a hill, so we expect to see more points near the center and fewer at the edges of circle-ish blobs. These blobs are slightly like ovals, but not too skewed.

This means that the assumptions of the Gaussian Naive Bayes model are met well enough that we can expect the classifier to do well.

13.2. Separating Training and Test Data#

To do machine learning, we split the data both sample-wise (rows if tidy) and variable-wise (columns if tidy). First, we’ll designate the columns to use as features and as the target.

The features are the input that we wish to use to predict the target.

Next, we’ll use a sklearn function to split the data randomly into test and train portions.

X_train, X_test, y_train, y_test = train_test_split(iris_df[feature_vars],
                                                    iris_df[target_var],
                                                    random_state=5)

This function returns multiple values; the docs say that it returns twice as many as it is passed. We passed two separate things, the features and the labels, so we get train and test portions of each.
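
For example, here is a minimal sketch with two tiny made-up arrays (the names are just for illustration):

a = np.arange(10)       # stand-in features
b = np.arange(10) * 2   # stand-in labels
# two inputs -> four outputs: a train/test pair for each input
a_train, a_test, b_train, b_test = train_test_split(a, b, random_state=0)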

Note

If you get different numbers for the index than I do here, or run the train test split multiple times and see things change, you have a different random seed than above.

X_train.head()
X_train.shape, X_test.shape

We can see what fraction of the samples it puts in the training set by default:

len(X_train)/len(iris_df)

So by default we get a 75-25 split.
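
If we want a different split, train_test_split accepts a test_size (or train_size) parameter; a quick sketch (the variable names are just for illustration):

# hold out half of the data instead of the default 25%
X_tr, X_te, y_tr, y_te = train_test_split(iris_df[feature_vars], iris_df[target_var],
                                          test_size=0.5, random_state=5)
len(X_tr)/len(iris_df)  # 0.5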

13.3. Instantiating our Model Object#

Next we will instantiate the object for our model. In sklearn they call these objects estimators. All estimators have a similar usage: first we instantiate the object and set any hyperparameters.

Instantiating the object says we are assuming a particular type of model, in this case Gaussian Naive Bayes. This sets several assumptions at once:

  • we assume data are Gaussian (normally) distributed

  • the features are uncorrelated/independent (Naive)

  • the best way to predict is to find the highest probability (Bayes)

This is one example of a Bayes Estimator.

gnb = GaussianNB()

At this point the object is not very interesting:

gnb.__dict__
{'priors': None, 'var_smoothing': 1e-09}

The fit method uses the data to learn the model’s parameters. In this case, a Gaussian distribution is characterized by a mean and a variance, so the GNB classifier is characterized by one mean and one variance for each class, on each of the 4 features in our data.

gnb.fit(X_train,y_train)

The attributes of the estimator object (gnb) describe the data (e.g. the class list) and the model’s parameters. The theta_ (often written in math as \(\theta\) or \(\mu\)) represents the means and the var_ (\(\sigma^2\)) represents the variances of the distributions.

gnb.__dict__
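
As a quick check (assuming the fit above succeeded), the learned parameter arrays have one row per class and one column per feature:

gnb.theta_.shape  # (3, 4): 3 species, 4 measurements
gnb.var_.shape    # (3, 4)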

13.3.1. Scoring a model#

Estimator objects also have a score method. If the estimator is a classifier, that score is accuracy. We will see that for other types of estimators it is a different metric.

gnb.score(X_test,y_test)

13.4. Making model predictions#

We can predict for each sample as well:

y_pred = gnb.predict(X_test)

Important

At the end of class I tried to demo this and got an error.

We can also do one single sample; the iloc attribute lets us pick out rows by integer position even if that is not the actual index of the DataFrame.

X_test.iloc[0]

But if we pick one row this way, it returns a Series, which is incompatible with the predict method.

gnb.predict(X_test.iloc[0])

If we select with a range that includes only one row, it still returns a DataFrame,

X_test.iloc[0:1]

which we can get a prediction for:

gnb.predict(X_test.iloc[0:1])

We could also convert the Series to a DataFrame with to_frame and then transpose it with T (or the transpose() method):

gnb.predict(X_test.iloc[0].to_frame().T)

We can also pass a 2D array (here, a list of lists) with values in it (I typed in values similar to the mean for setosa above):

gnb.predict([[5.1, 3.6, 1.5, 0.25]])

This way it warns us that the feature names are missing, but it still gives a prediction.

13.4.1. Evaluating Performance in more detail#

confusion_matrix(y_test,y_pred)

The raw array is a little hard to read, but we can make it a DataFrame with labeled rows and columns to read it better.

n_classes = len(gnb.classes_)
prediction_labels = [['predicted class']*n_classes, gnb.classes_]
actual_labels = [['true class']*n_classes, gnb.classes_]
conf_mat = confusion_matrix(y_test, y_pred)
conf_df = pd.DataFrame(data=conf_mat, index=actual_labels, columns=prediction_labels)
conf_df
from myst_nb import glue
c1 = gnb.classes_[1]
c2 = gnb.classes_[2]
conf12 = conf_mat[1][2]
glue('c1',c1)
glue('c2',c2)
glue('f1t2',conf12)

The conf_mat[1][2] cell above counts the flowers that were mistakenly classified as virginica when they were really versicolor.

This report is also available:

print(classification_report(y_test,y_pred))

The report includes a few metrics:

  • Recall is the percent of each species that were predicted correctly.

  • Precision is the percent of the ones predicted to be in a species that are truly that species.

  • The F1 score is a combination (the harmonic mean) of the two.
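
In symbols, writing TP, FP, and FN for the true positives, false positives, and false negatives of a given class:

\[
\text{precision} = \frac{TP}{TP + FP} \qquad \text{recall} = \frac{TP}{TP + FN} \qquad F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
\]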

We see we have perfect recall and precision for setosa, as above, but lower values for the other two because some versicolor and virginica samples were mixed up.

13.5. What does a generative model mean?#

Gaussian Naive Bayes is a very simple model, but it is a generative model (in contrast to a discriminative model), so we can use it to generate synthetic data that looks like the real data, based on what the model learned.

N = 20
n_features = len(feature_vars)
# draw N synthetic points from the learned Gaussian for each class, then stack them
gnb_df = pd.DataFrame(np.concatenate([np.random.multivariate_normal(th, sig*np.eye(n_features), N)
                 for th, sig in zip(gnb.theta_, gnb.var_)]),
                 columns=gnb.feature_names_in_)
# label each block of N samples with its class
gnb_df['species'] = [ci for cl in [[c]*N for c in gnb.classes_] for ci in cl]
sns.pairplot(data=gnb_df, hue='species')

To break this code down:

  • We extract the mean and variance parameters from the model (gnb.theta_ and gnb.var_) and zip them together to create an iterable object that on each iteration returns one value from each list (for th, sig in zip(gnb.theta_, gnb.var_)).

  • Inside a list comprehension, for each th, sig pair we use np.random.multivariate_normal to draw 20 samples. In a general multivariate normal distribution the second parameter is a covariance matrix, which describes both the variance of each individual feature and the correlations between features. Since Naive Bayes is naive, it assumes the features are independent (0 correlation), so to create the matrix from the vector of variances we multiply by np.eye(4), the identity matrix (1 on the diagonal and 0 elsewhere).

  • Finally, we stack the groups for each species together with np.concatenate (like pd.concat, but it works on numpy arrays, which is what np.random.multivariate_normal returns) and put all of that in a DataFrame using the feature names as the columns.

Then we add a species column by repeating each species N times ([c]*N for c in gnb.classes_) and then unpacking that into a single flat list instead of a list of lists.
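
The flattening pattern is easier to see on a tiny hypothetical example:

nested = [[c]*2 for c in ['setosa', 'versicolor']]  # [['setosa', 'setosa'], ['versicolor', 'versicolor']]
flat = [ci for cl in nested for ci in cl]           # ['setosa', 'setosa', 'versicolor', 'versicolor']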

13.6. How does it make the predictions?#

It computes the probability for each class and then predicts the highest one:

gnb.predict_proba(X_test)
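
One way to see this (a quick sketch, assuming the fitted gnb from above) is to check that taking the argmax of the probabilities reproduces predict:

probs = gnb.predict_proba(X_test)
# index of the most probable class for each row, mapped back to class names
manual_pred = gnb.classes_[np.argmax(probs, axis=1)]
np.all(manual_pred == gnb.predict(X_test))  # True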

We can also plot these probabilities, after a bit of reshaping:

# make the probabilities into a dataframe labeled with classes & make the index a separate column
prob_df = pd.DataFrame(data=gnb.predict_proba(X_test), columns=gnb.classes_).reset_index()
# add the predictions and the ground truth
prob_df['predicted_species'] = y_pred
prob_df['true_species'] = y_test.values
# for plotting, make a column that combines the index & prediction
pred_text = lambda r: str(r['index']) + ',' + r['predicted_species']
prob_df['i,pred'] = prob_df.apply(pred_text, axis=1)
# same for the ground truth
true_text = lambda r: str(r['index']) + ',' + r['true_species']
prob_df['i,true'] = prob_df.apply(true_text, axis=1)
# add a column for which predictions are correct
prob_df['correct'] = prob_df['predicted_species'] == prob_df['true_species']
# melt to long form: one row per (sample, class) with that class's probability
prob_df_melted = prob_df.melt(id_vars=['index', 'predicted_species', 'true_species', 'i,pred', 'i,true', 'correct'],
                              value_vars=gnb.classes_,
                              var_name=target_var, value_name='probability')
prob_df_melted.head()

and then we can plot this:

# plot a bar graph for each point labeled with the prediction
sns.catplot(data=prob_df_melted, x='species', y='probability', col='i,true',
            col_wrap=5, kind='bar', hue='species')

13.7. What if the assumptions are not met?#

Using a toy dataset here shows an easy-to-see challenge for the classifier we have seen so far. Real datasets will be hard in different ways, and since they’re higher dimensional, it’s harder to visualize the cause.

corner_data = 'https://raw.githubusercontent.com/rhodyprog4ds/06-naive-bayes/f425ba121cc0c4dd8bcaa7ebb2ff0b40b0b03bff/data/dataset6.csv'
df6 = pd.read_csv(corner_data, usecols=[1, 2, 3])
sns.pairplot(data=df6, hue='char', hue_order=['A', 'B'])
pair plot of the toy data features x0 and x1, colored by char

As we can see in this dataset, these classes are quite separated.

X_train, X_test, y_train, y_test = train_test_split(df6[['x0', 'x1']],
                                                    df6['char'],
                                                    random_state=4)
gnb_corners = GaussianNB()
gnb_corners.fit(X_train,y_train)
gnb_corners.score(X_test, y_test)
0.72

But we do not get a very good classification score.

To see why, we can look at what it learned.

N = 100
gnb_df = pd.DataFrame(np.concatenate([np.random.multivariate_normal(th, sig*np.eye(2), N)
                 for th, sig in zip(gnb_corners.theta_, gnb_corners.var_)]),
                 columns=['x0', 'x1'])
gnb_df['char'] = [ci for cl in [[c]*N for c in gnb_corners.classes_] for ci in cl]

sns.pairplot(data=gnb_df, hue='char', hue_order=['A', 'B'])
pair plot of the synthetic samples drawn from the learned per-class Gaussians
df6_pred = X_test.copy()
df6_pred['pred'] = gnb_corners.predict(X_test)
sns.pairplot(data=df6_pred, hue='pred', hue_order=['A', 'B'])
pair plot of the test points colored by predicted class

The synthetic data (first plot above) does not look much like the real data, and it is hard to tell which class is more probable at any given point in the 2D space. The actual predictions (second plot) confirm that the model has missed the mark.

If you try this again (split, fit, plot), it will learn different decisions, but at least about 25% of the data will always be classified incorrectly.
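
A minimal sketch of that experiment (the loop is just for illustration):

for seed in range(5):
    X_tr, X_te, y_tr, y_te = train_test_split(df6[['x0', 'x1']], df6['char'], random_state=seed)
    # scores vary with the split, but stay well below 1.0
    print(seed, GaussianNB().fit(X_tr, y_tr).score(X_te, y_te))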

13.8. Decision Trees#

This data does not fit the assumptions of the Naive Bayes model, but a decision tree uses a different rule. Trees can be more complex in general, but the scikit-learn one relies on splitting the data at a series of points, along one feature at a time, sequentially. It basically learns a flowchart for deciding what class a sample belongs to at test time.

It is a discriminative model, because it describes how to discriminate (in the sense of differentiate) between the classes. This is in contrast to the generative model that describes how the data is distributed.

That said, sklearn makes using new classifiers really easy once you have learned one. All of the classifiers have the same API (same methods and attributes).

dt = tree.DecisionTreeClassifier()
dt.fit(X_train,y_train)
dt.score(X_test,y_test)
1.0

The sklearn estimator objects (that correspond to different models) all have the same API, so the fit, predict, and score methods work the same as above. We will see this also in regression and clustering. What each method does in terms of the specific calculations varies depending on the model, but they’re always there.

The tree module also allows you to plot the tree to examine it:

plt.figure(figsize=(15, 20))
tree.plot_tree(dt, rounded=True, class_names=['A', 'B'],
               proportion=True, filled=True, impurity=False, fontsize=10);
plot of the learned decision tree

On the iris dataset, the sklearn docs include a diagram showing the decision boundary. You should be able to modify this for another classifier.
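
A minimal sketch of plotting a decision boundary for this tree directly, assuming a recent enough sklearn (1.1+) to have DecisionBoundaryDisplay:

from sklearn.inspection import DecisionBoundaryDisplay

# shade the plane by the class dt would predict at each point
disp = DecisionBoundaryDisplay.from_estimator(dt, X_train, response_method='predict', alpha=0.4)
# overlay the training points on the same axes
disp.ax_.scatter(X_train['x0'], X_train['x1'], c=(y_train == 'B'), edgecolor='k')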

13.9. Setting Classifier Parameters#

The decision tree we had above has a lot more layers than we would expect. This is really simple data, so we still got perfect classification. However, the more complex the model, the greater the risk that it will learn something noisy about the training data that doesn’t hold up in the test set.

Fortunately, we can control the parameters to make it find a simpler decision boundary.

dt2 = tree.DecisionTreeClassifier(max_depth=2)
dt2.fit(X_train,y_train)
dt2.score(X_test,y_test)
0.74
plt.figure(figsize=(15, 20))
tree.plot_tree(dt2, rounded=True, class_names=['A', 'B'],
               proportion=True, filled=True, impurity=False, fontsize=10);
plot of the depth-2 decision tree

13.10. Questions#

Note

I added in some questions from previous semesters because few questions were asked.

13.10.1. Are there any good introductions to ScikitLearn that you are aware of?#

The Scikit Learn User Guide is the best one, and it has a large example gallery.

13.10.3. Do predictive algorithms have pros and cons? Or is there a standard?#

Each algorithm has different properties and strengths and weaknesses. Some are more popular than others, but they all do have weaknesses.

13.10.4. How often should we use confusion matrices? Would it be better just to check the accuracy without one?#

A confusion matrix gives more detail on the performance than accuracy alone. If the accuracy is very high, say 99.99%, the confusion matrix may not be informative, but otherwise it is generally useful to understand what types of mistakes the model makes, as context for how you might use, not use, or trust it.

13.10.5. Due to the initial ‘shuffling’ of data: Is it possible to get a seed/shuffle of the data split so that it does much worse in a model?#

Yes, you can get a bad split; next week we will see a statistical technique that helps us account for this. However, the larger your dataset, the less likely this is, so we mostly now just get bigger and bigger datasets.

13.10.6. How does gnb = GaussianNB() work?#

This line calls the GaussianNB class constructor with all default parameters.

This means it creates an object of that type; we can see that as follows:

gnb_ex = GaussianNB()
type(gnb_ex)
sklearn.naive_bayes.GaussianNB

13.10.7. How will level 3s work?#

I will update on level 3s in class on Tuesday (and the site before then)

13.10.8. Could we use strata to better identify the data in the toy dataset?#

The dataset didn’t have any additional columns we could use to stratify. This data is simply not a good fit for GaussianNB because it does not fit the assumptions. Above, I included the Decision Tree example so you can see a classifier that does work for it.

13.10.9. Is it possible after training the model to add more data to it?#

For most of the models we will see in class, no: fit starts over from scratch each time. However, there is a thing called “online learning” that involves getting more data on a regular basis and updating the model incrementally to improve its performance.
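
That said, a few sklearn estimators, including GaussianNB, do support incremental updates through a partial_fit method; a minimal sketch (the slicing is just for illustration):

gnb_inc = GaussianNB()
# the first call must list every class that will ever appear
gnb_inc.partial_fit(X_train[:50], y_train[:50], classes=np.unique(y_train))
# later calls update the same model without starting over
gnb_inc.partial_fit(X_train[50:], y_train[50:])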

13.10.10. Can you use machine learning for any type of data?#

Yes. The features, for example, could be an image instead of four numbers. They could also be text. The basic ideas are the same for more complex data, so we are going to spend a lot of time building your understanding of what ML is on simple data. Past students have successfully applied ML to more complex data after this course, because once you have a good understanding of the core ideas, applying them to other forms of data is easier to learn on your own.

13.10.11. Can we check how well the model did using the y_test df?#

We could compare them directly, or use score, which does that comparison for us.

y_pred == y_test
sum(y_pred == y_test)/len(y_test)
gnb.score(X_test,y_test)

We can also use any of the other metrics we saw; we’ll practice more on Wednesday.

13.10.12. I want to know more about the train_test_split() function#

The docs are a good place to start.
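
Beyond the defaults, a few commonly used parameters (a sketch; the variable names are just for illustration):

X_tr, X_te, y_tr, y_te = train_test_split(
    iris_df[feature_vars], iris_df[target_var],
    test_size=0.2,                 # fraction (or absolute count) to hold out for testing
    stratify=iris_df[target_var],  # keep the class balance the same in both portions
    random_state=0,                # seed, so the split is reproducible
)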