
Clustering Introduction

Clustering is unsupervised learning. That means we do not have the labels to learn from. We aim to learn both the labels for each point and some way of characterizing the classes at the same time.

Computationally, this is a harder problem. Mathematically, we can typically solve problems when we have a number of equations equal to or greater than the number of unknowns. For $N$ data points in $d$ dimensions and $K$ clusters, we have $N$ equations and $N + Kd$ unknowns: one cluster assignment per data point, plus a $d$-dimensional mean for each of the $K$ clusters.

In contrast, in Gaussian Naive Bayes, our unknowns would be $2Kd$ for $K$ classes and $d$ features: a mean and a variance for each feature in each class.

This means we have a harder problem to solve.
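As a concrete check of that counting, here are the numbers for an iris-sized problem (assuming 150 samples, 4 features, and 3 clusters/classes, just for illustration):

# worked example of the counts above, using iris-sized numbers
N, d, K = 150, 4, 3
print('equations:', N)
print('clustering unknowns (one assignment per sample + K means):', N + K*d)
print('GNB unknowns (a mean and a variance per class per feature):', 2*K*d)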

For today, we'll see K-means clustering, which is defined by $K$, a number of clusters, and a mean (center) for each one. There are other K-centers algorithms for other types of centers.
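For intuition before we use the scikit-learn implementation, here is a minimal numpy sketch of the basic K-means loop (Lloyd's algorithm): pick starting means, assign each point to its nearest mean, recompute each mean from the points assigned to it, and repeat. This is only a sketch; real implementations add smarter initialization, convergence checks, and handling of empty clusters.

import numpy as np

def kmeans_sketch(X, k, n_iter=20, seed=0):
    # X: (n_samples, n_features) features (array or DataFrame), k: number of clusters
    X = np.asarray(X)
    rng = np.random.default_rng(seed)
    # initialize the means as k randomly chosen data points
    means = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # assignment step: distance from every sample to every mean, keep the closest
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: each mean becomes the average of the samples assigned to it
        means = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, means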

Clustering is, generally, a stochastic (random) algorithm, so it can be a little harder to debug the models and measure performance. For this reason, we are going to look a little more closely at what it actually does than we did with classification.

Today we will focus on the assumptions and measuring performance.

KMeans

Clustering with scikit-learn

import seaborn as sns
import numpy as np
from sklearn import datasets
from sklearn.cluster import KMeans
from sklearn import metrics
import pandas as pd
import matplotlib.pyplot as plt
sns.set_theme(palette='colorblind')

# set global random seed so that the notes are the same each time the site builds
np.random.seed(113)

Today we will load the iris data from seaborn:

iris_df = sns.load_dataset('iris')

This is how the clustering algorithm sees the data, with no labels:

sns.pairplot(iris_df)

Next we need to create a copy of the data that's appropriate for clustering. Remember that clustering is unsupervised learning, so it doesn't have a target variable. We can also do clustering with or without splitting the data into test/train sets; since clustering doesn't use a target variable, we can evaluate how good the clusters it finds are on the same data that it learned from.

We can either pick the measurement columns out or drop the species column. Remember that most DataFrame operations return a copy of the DataFrame.
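For example, picking the measurement columns out directly would look like this (using the column names from the seaborn iris dataset):

# equivalent alternative: select only the measurement columns
measurement_cols = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
iris_X = iris_df[measurement_cols]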

We will drop here:

iris_X = iris_df.drop(columns=['species'])

and inspect to see it:

iris_X.head()

Next, we create a KMeans estimator object with 3 clusters, since we know that the iris data has 3 species of flowers.

km = KMeans(n_clusters=3)

We dropped the column that tells us which of the three classes each sample (row) belongs to. We still have data from three species of flowers, so we still expect to find 3 clusters.

Again, we will inspect the object to see the before and after:

km.__dict__

We can use fit again, but this time it only requires the features, no labels.

km.fit(iris_X)

and after fitting, we can see what it learned (and compare to before)

We see it learns similar, but fewer, parameters compared to Gaussian Naive Bayes (GNB):

km.__dict__
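In particular, the fitted attributes (the ones scikit-learn adds with a trailing underscore) include the learned means and the assignment for each training sample:

# the learned cluster centers: one mean per cluster, in the 4 feature dimensions
km.cluster_centers_
# the cluster assigned to each training sample
km.labels_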

Visualizing the outputs

First we’ll save it in the dataframe, then we will plot it like we have before, but this time we will use the predicted values for the hue.

iris_df['km3'] = km.predict(iris_X).astype(str)
sns.pairplot(iris_df, hue='km3')

For one pair of features we can look in more detail:

sns.relplot(data=iris_df,x = 'petal_width',y='sepal_width',
                hue='km3',style='species')

Here I used the style to set the shape and the hue to set the color of the markers, so that we can see where the ground truth and learned groups agree and disagree.

Randomness in Kmeans

If we run that a few times, we will see different solutions each time because the algorithm is random, or stochastic.

for i in range(5):
    iris_df['km3_'+str(i)] = km.fit_predict(iris_X)

Since we don't have separate test and train data, we can use the fit_predict method. This is what the k-means algorithm always does anyway: it both learns the means and the assignment (or prediction) for each sample at the same time.
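In scikit-learn, fit_predict is documented as being equivalent to calling fit(X) followed by predict(X); for KMeans, the training assignments are also stored on the fitted estimator as labels_:

# the assignments from the most recent fit are stored on the estimator
km.labels_
# predicting on the same training data returns the same assignments
km.predict(iris_X)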

If we plot each of these solutions, we see that the points stay in approximately the same clusters, that is, the partition is roughly the same from run to run, but the labels (0, 1, 2) assigned to each group change, so the colors change from solution to solution. For example, for the first run:

sns.pairplot(iris_df, vars=iris_X.columns, hue='km3_0')
Solution to Exercise 1

Since the columns passed to hue here (e.g. 'km3_0') are numerical, seaborn uses a continuous color palette instead of a discrete one.

You can change the palette, and to force a discrete palette you can cast the column to strings (as we did for 'km3' above).
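We can also confirm that two of these runs found (nearly) the same partition with different label numbers by cross-tabulating them; most of the counts land in one cell per row:

# each cluster from one run should line up with (mostly) one cluster from the other run
pd.crosstab(iris_df['km3_0'], iris_df['km3_1'])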

These are similar to the outputs in classification, except that in classification the model can tell us a specific species for each sample. Here it can only say cluster 0, 1, or 2; it can't match those groups to the species of flower.
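If we want the labels themselves to be reproducible from run to run (for example, so these notes render the same each time), KMeans also accepts a random_state parameter that fixes its initialization (the seed value here is arbitrary):

# fixing the estimator's random state makes repeated fits give the same solution
km_fixed = KMeans(n_clusters=3, random_state=1103)
km_fixed.fit_predict(iris_X)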

Clustering Evaluations

We cannot compute the same metrics we used for classification, because clustering is unsupervised learning: there is no target variable, so there are no true labels to compare against by default. Instead we use metrics that score the quality of the grouping itself, or that compare two groupings to one another.

Silhouette Score

$$s = \frac{b - a}{\max(a, b)}$$

a: The mean distance between a sample and all other points in the same class.

b: The mean distance between a sample and all other points in the next nearest cluster.
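For example, if a sample is on average $a = 0.2$ away from the other points in its own cluster and $b = 0.8$ away from the points in the next nearest cluster, then $s = \frac{0.8 - 0.2}{\max(0.8, 0.2)} = 0.75$. Scores near 1 mean tight, well separated clusters; scores near 0 (or negative) mean the clusters overlap.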

Bad Clustering Solution

These labels are randomly assigned to data that comes from a single Gaussian distribution:
N = 100
# sample data that is one blob, in 2 dimensions
bad_cluster_data = np.random.multivariate_normal([5,5],.75*np.eye(2), size=N)
data_cols = ['x0','x1']
df = pd.DataFrame(data=bad_cluster_data,columns=data_cols)
# randomly assign cluster labels with equal probability for each sample
df['cluster'] = np.random.choice([0,1],N)

sns.relplot(data=df,x='x0',y='x1',hue='cluster',)
plt.title('S = ' + str(metrics.silhouette_score(df[data_cols],df['cluster'])));
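For contrast, a good clustering solution gets a score close to 1. Here is a sketch with two well separated blobs and labels that match the blobs (the blob centers and spread below are arbitrary illustrative choices):

# sample two well separated blobs and label each point by the blob it came from
good_a = np.random.multivariate_normal([2,2], .25*np.eye(2), size=N)
good_b = np.random.multivariate_normal([8,8], .25*np.eye(2), size=N)
good_df = pd.DataFrame(data=np.vstack([good_a, good_b]), columns=data_cols)
good_df['cluster'] = [0]*N + [1]*N

sns.relplot(data=good_df, x='x0', y='x1', hue='cluster')
plt.title('S = ' + str(metrics.silhouette_score(good_df[data_cols], good_df['cluster'])));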

Mutual Information

Mutual information scores can be used either to compare a clustering solution to the real labels (the actual flower species for the iris data) or to compare two different clusterings to see if they are consistent. The adjusted score is close to 1 when the two groupings agree (up to renaming the groups) and close to 0 when they are unrelated.

In the plots below the color of the points represents one solution and the symbol represents the other.

Low Mutual Information Solution

Both the color and symbol are randomly assigned to data that comes from a single Gaussian distribution:
N = 100
# sample data that is one blob, in 2 dimensions
bad_cluster_data = np.random.multivariate_normal([5,5],.75*np.eye(2), size=N)
data_cols = ['x0','x1']
df = pd.DataFrame(data=bad_cluster_data,columns=data_cols)
# randomly and independently assign a color and a symbol to each sample
colors = ['blue','orange']
symbols = ['o','x']
df['color'] = np.random.choice(colors,N)
df['symbol'] = np.random.choice(symbols,N)

sns.relplot(data=df,x='x0',y='x1',hue='color',style='symbol',hue_order = colors,style_order =symbols)
plt.title('AMI = ' + str(metrics.adjusted_mutual_info_score(df['color'],df['symbol'])));
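For contrast, here is a sketch of a high mutual information case: if we make the symbol a deterministic function of the color, the two groupings carve up the data identically (even though the group names differ), so the adjusted mutual information is close to 1:

# make the symbol depend on the color: same partition, different names for the groups
df['symbol_match'] = df['color'].replace({'blue':'o','orange':'x'})

sns.relplot(data=df, x='x0', y='x1', hue='color', style='symbol_match',
            hue_order=colors, style_order=symbols)
plt.title('AMI = ' + str(metrics.adjusted_mutual_info_score(df['color'], df['symbol_match'])));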

How Many Clusters?

We will apply our metrics to answer a question.

In a real clustering context, we would not know the number of clusters. One common way to figure that out is to try a few and compare them.

First we'll score the one we have already done:

metrics.silhouette_score(iris_X, iris_df['km3'])

We can also compare different numbers of clusters. We noted in class that it looked like maybe 2 might be better than 3, so let's look at that:

km2 = KMeans(n_clusters=2)
iris_df['km2'] = km2.fit_predict(iris_X)
metrics.silhouette_score(iris_X,iris_df['km2'])

This is higher than the score for 3, as expected!

We can also check 4:

km4 = KMeans(n_clusters=4)
iris_df['km4'] = km4.fit_predict(iris_X)
metrics.silhouette_score(iris_X,iris_df['km4'])

It is not better; we can check more:

km10 = KMeans(n_clusters=10)
iris_df['km10'] = km10.fit_predict(iris_X)
metrics.silhouette_score(iris_X,iris_df['km10'])

Again, not better.

Given these results, we would say that the data is best explained by two clusters.
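Rather than checking each value by hand, we could loop over candidate numbers of clusters and compare the scores in one place (a compact sketch of the same idea):

# silhouette score for several candidate numbers of clusters
for k in [2, 3, 4, 5, 10]:
    labels_k = KMeans(n_clusters=k).fit_predict(iris_X)
    print(k, metrics.silhouette_score(iris_X, labels_k))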

Which one was closest to the original?

The adjusted mutual information (MI) score lets us see which solution is the closest match to the true species labels:

metrics.adjusted_mutual_info_score(iris_df['species'],iris_df['km3'])
metrics.adjusted_mutual_info_score(iris_df['species'],iris_df['km2'])
metrics.adjusted_mutual_info_score(iris_df['species'],iris_df['km4'])

Here the 3-cluster solution is the best match, as expected.

Questions after class

To find the right number of clusters, you would simply try different cluster counts each time and determine which one has the highest silhouette score?

yes, exactly!

Why does the silhouette score not give a score for each predicted group? Since the km3 column in metrics.silhouette_score(iris_X, iris_df['km3']) is the predicted categories, why is the silhouette score not separated for each km3 value?

There is not an equivalent of the confusion matrix in this context, but we can get a score per sample and group by the cluster to create a per-group average:

iris_df['km3_silhouette'] = metrics.silhouette_samples(iris_X,iris_df['km3'])
iris_df.groupby('km3')['km3_silhouette'].mean()