17. More regression#

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

import pandas as pd
sns.set_theme(font_scale=2,palette='colorblind')

17.1. Multivariate Regression#

Recall the equation for a line:

\[ \hat{y} = mx+b\]

When we have multiple variables, instead of a scalar \(x\) we have a vector \(\mathbf{x}\), and instead of a single slope, we have a vector of coefficients \(\beta\):

\[ \hat{y} = \beta^T\mathbf{x} + \beta_0 \]

where \(\beta\) is regr_db.coef_, \(\beta_0\) is regr_db.intercept_, \(\beta^T\mathbf{x}\) is a vector multiplication, \(\hat{y}\) is y_pred, and \(y\) is y_test.

In scalar form, that vector multiplication can be written as

\[ \hat{y} = \sum_{k=0}^{d-1} \beta_k x_k + \beta_0\]

where there are \(d\) features, that is \(d\) = len(X_test[k]) for any single row \(k\), and the sum runs over the features.
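
As a quick sketch of how the two forms line up, here is a toy example with made-up numbers (these \(\beta\) values are arbitrary, not learned from any data):

import numpy as np

# one sample with d=3 made-up feature values
x = np.array([0.03, -0.05, 0.06])
# made-up coefficients and intercept
beta = np.array([150.0, -80.0, 300.0])
beta_0 = 152.0

# vector form
y_hat_vec = beta @ x + beta_0
# scalar (summation) form
y_hat_sum = sum(beta[k] * x[k] for k in range(len(x))) + beta_0
y_hat_vec, y_hat_sum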

We can also load data from scikit-learn.

This dataset includes 10 features measured on a given date and a measure of diabetes disease progression measured one year later. A predictor trained on this data might be something a doctor uses to calculate a patient’s risk.

# Load the diabetes dataset
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)
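# test_size given as an integer holds out that many samples (here 20), rather than a fraction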
X_train,X_test, y_train,y_test = train_test_split(diabetes_X, diabetes_y ,
                                                  test_size=20,random_state=0)

By default (without return_X_y=True), it would have returned a very different object:

db_bunch = datasets.load_diabetes()
type(db_bunch)
sklearn.utils._bunch.Bunch
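
The Bunch acts like a dictionary of arrays plus metadata; a quick look at a few of its documented attributes shows it holds the same arrays we unpacked above, along with the column names:

# the same arrays as diabetes_X and diabetes_y, plus the feature names
db_bunch.data.shape, db_bunch.target.shape, db_bunch.feature_names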

It has a very useful attribute, the data description:

print(db_bunch.DESCR)
.. _diabetes_dataset:

Diabetes dataset
----------------

Ten baseline variables, age, sex, body mass index, average blood
pressure, and six blood serum measurements were obtained for each of n =
442 diabetes patients, as well as the response of interest, a
quantitative measure of disease progression one year after baseline.

**Data Set Characteristics:**

  :Number of Instances: 442

  :Number of Attributes: First 10 columns are numeric predictive values

  :Target: Column 11 is a quantitative measure of disease progression one year after baseline

  :Attribute Information:
      - age     age in years
      - sex
      - bmi     body mass index
      - bp      average blood pressure
      - s1      tc, total serum cholesterol
      - s2      ldl, low-density lipoproteins
      - s3      hdl, high-density lipoproteins
      - s4      tch, total cholesterol / HDL
      - s5      ltg, possibly log of serum triglycerides level
      - s6      glu, blood sugar level

Note: Each of these 10 feature variables have been mean centered and scaled by the standard deviation times the square root of `n_samples` (i.e. the sum of squares of each column totals 1).

Source URL:
https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html

For more information see:
Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani (2004) "Least Angle Regression," Annals of Statistics (with discussion), 407-499.
(https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf)

This model predicts what measure of disease progression a patient will have one year in the future, based on lab measures from a given day. As we will see below, the \(r^2\) is not very high, so this is not a perfect predictor; a doctor, who better understands the score, would have to help interpret it.

regr_db = linear_model.LinearRegression()
regr_db.fit(X_train, y_train)
LinearRegression()

We can look at the estimator again and see what it learned. It describes the model like a line:

\[ \hat{y} = mx+b\]

except in this case it’s multivariate, so we can write it like:

\[ \hat{y} = \beta^T\mathbf{x} + \beta_0 \]

where \(\beta\) is regr_db.coef_, \(\beta_0\) is regr_db.intercept_, \(\beta^T\mathbf{x}\) is a vector multiplication, \(\hat{y}\) is y_pred, and \(y\) is y_test.

In scalar form it can be written as

\[ \hat{y} = \sum_{k=0}^{d-1} \beta_k x_k + \beta_0\]

where there are \(d\) features, that is \(d\) = len(X_test[k]) for any single row \(k\); for example, \(k=0\) gives the first test row, which we use to check this formula below.

y_pred = regr_db.predict(X_test)
r2_score(y_test,y_pred)
0.5195333332288746
np.sqrt(mean_squared_error(y_test,y_pred))
53.387672540450204
np.mean(y_test)
159.4
np.std(y_test)
77.02103608755208
X_train.shape
(422, 10)
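
To connect the summation form above to the fitted estimator, we can compute the prediction for row \(k=0\) by hand and compare it to the first entry of y_pred (a quick check using the objects defined above):

# dot product of one row of features with the learned coefficients, plus the intercept
manual_pred = np.dot(X_test[0], regr_db.coef_) + regr_db.intercept_
# should match what predict returned for that row
manual_pred, y_pred[0]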

17.2. LASSO#

LASSO allows us to pick a subset of the features at the same time as we learn the weights. It does this by penalizing the absolute values of the coefficients, which pushes many of them to exactly zero.

lasso = linear_model.Lasso()
lasso.fit(X_train, y_train)
lasso.score(X_test, y_test)
0.42249260665177746

We see it learns a model with most of the coefficients set to 0.

lasso.coef_
array([  0.        ,  -0.        , 358.27705585,   9.71139556,
         0.        ,   0.        ,  -0.        ,   0.        ,
       309.50698469,   0.        ])
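
We can count how many features it kept. The strength of the penalty is set by the alpha parameter, which defaults to 1; below we refit with a smaller value, so the coefficients are penalized less and more of them stay nonzero.

# number of nonzero coefficients under the default alpha=1
np.count_nonzero(lasso.coef_)
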
lasso = linear_model.Lasso(.25)
lasso.fit(X_train, y_train)
lasso.score(X_test, y_test)
0.5501992246861973
lasso.coef_
array([  -0.        ,  -52.1429064 ,  503.60281992,  219.24141192,
         -0.        ,   -0.        , -147.83111493,    0.        ,
        441.7507385 ,    0.        ])
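
We can also see which of the original features those nonzero coefficients belong to, using the feature names from the Bunch we loaded earlier:

# names of the features LASSO kept (nonzero coefficients)
[name for name, c in zip(db_bunch.feature_names, lasso.coef_) if c != 0]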

17.3. Polynomial Regression#

Polynomial regression is still a linear problem. Linear regression solves for the \(\beta_i\) of a \(d\)-dimensional problem:

\[ y = \beta_0 + \beta_1 x_1 + \ldots + \beta_d x_d = \beta_0 + \sum_{i=1}^d \beta_i x_i\]

Quadratic regression solves for

\[ y = \beta_0 + \sum_{i=1}^d \beta_i x_i + \sum_{i=1}^{d}\sum_{j=i+1}^{d} \beta_{ij} x_i x_j + \sum_{i=1}^d \beta_{ii} x_i^2 \]

This is still a linear problem: we can create a new \(X\) matrix that has the polynomial values of each feature and solve for more \(\beta\) values.

We use a transformer object, which works similarly to the estimators but does not use targets. First we instantiate it, then we fit and transform the data.

pol = PolynomialFeatures()
# fit_transform learns the output columns from the training data and transforms it
X_train2 = pol.fit_transform(X_train)
# the test data is only transformed, using what was learned from the training data
X_test2 = pol.transform(X_test)

This changes the shape a lot; now we have many more features.

X_train2.shape
(422, 66)

We can break down this total into different types: the original features (\(x_0, x_1, \ldots, x_9\)), their squares (\(x_0^2, x_1^2, \ldots, x_9^2\)), every pairwise product (\(x_0x_1, x_0x_2, \ldots, x_7x_9, x_8x_9\)), and a constant (so we do not need the intercept separately).

original_feats = 10
sq_feats = 10
pair_prod_feats =  sum(range(10)) 
const_feats = 1
original_feats + sq_feats + pair_prod_feats + const_feats
66
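
If your scikit-learn version provides get_feature_names_out (added in 1.0), the transformer can list exactly which feature each of the 66 columns corresponds to:

# names of the generated columns: '1', the original x0..x9, then the degree-2 terms
pol.get_feature_names_out()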

Now we can try LASSO again on the expanded features (different alpha values are worth trying; I am leaving in one that works well here).

lasso2 = linear_model.Lasso(.27)
lasso2.fit(X_train2, y_train)
lasso2.score(X_test2, y_test)
0.5501767795382546

and see that it does a bit better than plain linear regression on the original features; looking at the coefficients, though, it keeps only a handful of the original features and none of the new polynomial ones.

lasso2.coef_
array([   0.        ,    0.        ,  -36.0627981 ,  502.37020182,
        210.0069951 ,   -0.        ,   -0.        , -136.1195633 ,
          0.        ,  439.39268658,    0.        ,    0.        ,
          0.        ,    0.        ,    0.        ,    0.        ,
         -0.        ,   -0.        ,    0.        ,    0.        ,
          0.        ,   -0.        ,    0.        ,    0.        ,
          0.        ,   -0.        ,    0.        ,   -0.        ,
          0.        ,    0.        ,    0.        ,    0.        ,
          0.        ,   -0.        ,   -0.        ,    0.        ,
          0.        ,    0.        ,    0.        ,    0.        ,
          0.        ,    0.        ,    0.        ,    0.        ,
          0.        ,   -0.        ,   -0.        ,    0.        ,
         -0.        ,    0.        ,    0.        ,   -0.        ,
          0.        ,   -0.        ,    0.        ,    0.        ,
         -0.        ,   -0.        ,    0.        ,   -0.        ,
          0.        ,    0.        ,    0.        ,    0.        ,
          0.        ,    0.        ])
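
We can confirm which features the polynomial model kept by matching the nonzero coefficients to the generated feature names (again assuming get_feature_names_out is available):

# generated feature names whose coefficients were not shrunk to zero
kept = np.abs(lasso2.coef_) > 0
list(zip(pol.get_feature_names_out()[kept], lasso2.coef_[kept]))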