Machine Learning Notes—Linear Regression
August 14, 2013
This is my first post on machine learning, and hopefully not the last one. The main goal of these posts is to serve as a quick reference for simple machine learning problems and their solutions, meanwhile allowing me to get a better understanding of the field itself. That said, don’t take anything for granted.
Linear regression is one of the algorithms used to predict scalar values given some inputs. Linear regression can be modelled like this:

$$y = \beta_0 + \beta_1 x_1 + \dots + \beta_p x_p + \varepsilon$$

It can also be written in a vector form as:

$$y = X\beta + \varepsilon$$
We first need to find the model parameters $\beta$ before any predictions, $\hat{y}$, can be made for our inputs $x$.
Given a set of inputs and outputs we would have to solve a set of equations for $\beta$. The vector form can be solved using linear algebra; the solution takes the form of:

$$\hat{\beta} = (X^T X)^{-1} X^T y$$
This particular solution finds the betas using the method of ordinary least squares. If linear algebra is not an option, another way to find the betas is to minimize the difference between the actual and predicted values with some function optimization method, e.g. gradient descent.
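As an aside, the gradient descent alternative can be sketched in a few lines of NumPy. This is just an illustration, not code from this post; the learning rate and iteration count are assumptions that would need tuning for real data:

```python
import numpy as np

def fit_gradient_descent(X, y, lr=0.01, n_iters=5000):
    """Minimize the squared error by gradient descent instead of linear algebra."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        residuals = X @ beta - y               # predicted minus actual values
        gradient = 2 * X.T @ residuals / len(y)  # gradient of mean squared error
        beta -= lr * gradient                  # step against the gradient
    return beta
```

On a well-behaved data set this converges to the same betas as the closed-form solution, only more slowly.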
We are going to use a minimal version of the Iris data set in the code examples.
Implementing The Model
In the examples below I am going to implement a simple linear regression model that takes only a single input. This way we can much more easily visualize our data and model; however, the code can be easily extended to support more than one input.
Reading The Data Set
In this example we are going to model the sepal length using only the sepal width, for the setosa flower class.
One thing to note here is that we are adding an intercept column (ones) to our inputs. It's not very important in this particular case, since our inputs will never evaluate to 0, but in other cases it may be useful.
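I don't have the post's original loading code, but a sketch using scikit-learn's bundled copy of the Iris data might look like this (the variable names xs and ys are my own):

```python
from sklearn.datasets import load_iris

iris = load_iris()
setosa = iris.data[iris.target == 0]  # class 0 is setosa

# Inputs: a one for the intercept, followed by the sepal width (column 1).
xs = [(1.0, width) for width in setosa[:, 1]]
# Outputs: the sepal length (column 0).
ys = list(setosa[:, 0])
```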
Solving For Betas With NumPy And Linear Algebra
The fit() function expects a list of tuples as inputs, and a list of output
values. The model parameters will be returned as a list. If we run
this function on our data set we get these beta coefficients:
[2.63900125 0.69048972]. Thus, our regression function looks as follows:

$$\hat{y} = 2.63900125 + 0.69048972\,x$$
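A fit() along these lines, solving the normal equation with NumPy, might look like this (a sketch, not the post's original code):

```python
import numpy as np

def fit(xs, ys):
    """Solve ordinary least squares via the normal equation."""
    X = np.array(xs, dtype=float)  # one tuple per observation, intercept included
    y = np.array(ys, dtype=float)
    # beta = (X^T X)^-1 X^T y
    beta = np.linalg.inv(X.T @ X) @ X.T @ y
    return list(beta)
```

In practice np.linalg.lstsq is the more numerically stable choice, but the explicit inverse mirrors the formula above.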
Solving For Betas With scikit-learn
The scikit-learn package comes with a linear model which can be used to solve linear regression problems.
The model parameters are stored in
model.coef_, and for our data set they are
array([ 2.63900125, 0.69048972]). Predictions can be made by calling
model.predict().
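Since our inputs already carry a column of ones, the model would be fitted with fit_intercept=False so that both betas end up in coef_. A sketch with toy data in the same shape as ours (the numbers here are illustrative, not our data set):

```python
from sklearn.linear_model import LinearRegression

# Toy inputs shaped like ours: (1, sepal_width) tuples and sepal lengths.
xs = [(1.0, 3.0), (1.0, 3.2), (1.0, 3.5), (1.0, 3.6)]
ys = [4.9, 5.0, 5.1, 5.2]

# fit_intercept=False because the ones column already models the intercept.
model = LinearRegression(fit_intercept=False)
model.fit(xs, ys)

print(model.coef_)                  # both betas, intercept first
print(model.predict([(1.0, 3.4)]))  # predicted sepal length for width 3.4
```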
Assessing Your Model
For simple cases that have only a single input we can plot the regression line and the data points together to see how well our model fits the data.
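Such a plot can be produced with matplotlib; the data and betas below are stand-ins for the real values (a sketch, assuming the fitted coefficients from earlier):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in setosa measurements; b0 and b1 are the betas fitted earlier.
widths = np.array([3.0, 3.2, 3.5, 3.6, 3.9])
lengths = np.array([4.9, 5.0, 5.1, 5.2, 5.4])
b0, b1 = 2.63900125, 0.69048972

grid = np.linspace(widths.min(), widths.max(), 100)
line = b0 + b1 * grid  # regression line over the input range

plt.scatter(widths, lengths, label="data")
plt.plot(grid, line, color="red", label="model")
plt.xlabel("sepal width")
plt.ylabel("sepal length")
plt.legend()
plt.show()
```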
The red line represents our original model, which I think fits fairly well. However, if we look at our data set, the first data point looks like an outlier. We can try to build a model using a data set without it; the green regression line represents the model where the outlier was removed from the training set.
By looking at the graphs it's difficult to tell which model is the better one. We
can take a look at the coefficient of determination ($R^2$) for
each of them. The scikit-learn model provides a
score method which can be used to
obtain it. For the first model we have an $R^2$ of
, and for the second one
0.547248091457. The closer to 1, the better
our model is.
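Obtaining $R^2$ via score might look like this (toy data again, for illustration only):

```python
from sklearn.linear_model import LinearRegression

xs = [(1.0, 3.0), (1.0, 3.2), (1.0, 3.5), (1.0, 3.6)]
ys = [4.9, 5.0, 5.1, 5.2]

model = LinearRegression(fit_intercept=False).fit(xs, ys)
r2 = model.score(xs, ys)  # coefficient of determination on the training set
print(r2)
```

Note that scoring on the training set, as here, gives an optimistic estimate; a held-out test set would be a fairer assessment.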