Linear regression#
A simple machine learning model that can uncover relationships in data.
Linear regression is a simple yet widely used machine learning algorithm for modeling and analyzing data.
It is an effective technique for discovering relationships between variables and predicting future outcomes. The basic premise of linear regression is to find the best linear relationship between the independent variables (features) and the dependent variable (target) in a dataset. Doing so can help identify patterns, trends, and correlations in the data, enabling us to make informed decisions and accurate predictions.
Linear regression is a versatile tool with applications in various fields, from finance and economics to healthcare and engineering.
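Concretely, for features x1, …, xn the model is a weighted sum plus an intercept,

y ≈ w1·x1 + w2·x2 + … + wn·xn + b,

and fitting amounts to choosing the weights and the intercept that minimize the sum of squared differences between predicted and observed target values (ordinary least squares, which is what scikit-learn's LinearRegression implements).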
How To#
import pandas as pd
df = pd.read_csv("data/housing.csv")
df.head()
|   | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | median_house_value | ocean_proximity |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | -122.23 | 37.88 | 41.0 | 880.0 | 129.0 | 322.0 | 126.0 | 8.3252 | 452600.0 | NEAR BAY |
| 1 | -122.22 | 37.86 | 21.0 | 7099.0 | 1106.0 | 2401.0 | 1138.0 | 8.3014 | 358500.0 | NEAR BAY |
| 2 | -122.24 | 37.85 | 52.0 | 1467.0 | 190.0 | 496.0 | 177.0 | 7.2574 | 352100.0 | NEAR BAY |
| 3 | -122.25 | 37.85 | 52.0 | 1274.0 | 235.0 | 558.0 | 219.0 | 5.6431 | 341300.0 | NEAR BAY |
| 4 | -122.25 | 37.85 | 52.0 | 1627.0 | 280.0 | 565.0 | 259.0 | 3.8462 | 342200.0 | NEAR BAY |
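Before choosing features it can be worth a quick look at the column types, summary statistics, and missing values; a minimal sketch using standard pandas calls:

df.info()        # column dtypes and non-null counts
df.describe()    # summary statistics for the numeric columns
df.isna().sum()  # count of missing values per column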
Preparing training data#
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
    df[["housing_median_age", "total_rooms", "median_income"]],
    df.median_house_value, test_size=.5,
    stratify=df.ocean_proximity)
df.shape
(20640, 10)
x_train.shape
(10320, 3)
x_test.shape
(10320, 3)
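Because the split is random, the exact rows in each half change from run to run. Passing a fixed random_state (42 below is just an arbitrary illustrative value) makes the split reproducible:

x_train, x_test, y_train, y_test = train_test_split(
    df[["housing_median_age", "total_rooms", "median_income"]],
    df.median_house_value, test_size=.5,
    stratify=df.ocean_proximity, random_state=42)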
Building the model#
model = LinearRegression()
model.fit(x_train, y_train)
LinearRegression()
model.score(x_test, y_test)
0.5155075673179548
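For a regressor, score returns the coefficient of determination R², the fraction of the variance in the target that the model explains (1.0 would be a perfect fit). The same value can be computed explicitly with sklearn.metrics, for example:

from sklearn.metrics import r2_score
r2_score(y_test, model.predict(x_test))  # matches model.score(x_test, y_test)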
Improving the model#
from sklearn import preprocessing
x_val, x_test, y_val, y_test = train_test_split(x_test, y_test)
x_test.shape
(2580, 3)
scaler = preprocessing.StandardScaler()
model = LinearRegression()
scaler.fit(x_train)
StandardScaler()
x_scaled = scaler.transform(x_train)
x_scaled
array([[ 1.15325311, -0.95422541, -1.1951807 ],
[-1.16485605, -0.95198119, -0.83551041],
[-0.92505166, 0.36941939, 1.16487974],
...,
[ 0.11410073, -0.04172291, -1.67844611],
[ 0.51377472, -0.52602699, 0.03465452],
[-1.16485605, 3.12802262, 0.09709026]])
model.fit(x_scaled, y_train)
LinearRegression()
model.score(scaler.transform(x_val), y_val)
0.5088762237740652
scaler = preprocessing.MinMaxScaler().fit(x_train)
model = LinearRegression().fit(scaler.transform(x_train), y_train)
model.score(scaler.transform(x_val), y_val)
0.5088762237740652
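Both scalers leave the score unchanged because ordinary least squares is invariant to linear rescaling of the features: the fitted coefficients simply rescale to compensate, so the predictions and R² stay the same. Scaling tends to matter more for regularized or gradient-based models. To keep the scaler and the regression bundled in a single object, a Pipeline is a common pattern; a minimal sketch:

from sklearn.pipeline import make_pipeline
pipe = make_pipeline(preprocessing.StandardScaler(), LinearRegression())
pipe.fit(x_train, y_train)    # scales x_train, then fits the regression
pipe.score(x_val, y_val)      # applies the same scaling before scoring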
Predicting with the model#
model.predict(scaler.transform(x_test))
array([313583.36021605, 171658.13994743, 353538.70300544, ...,
337502.26109472, 290817.54385296, 264698.81541868])
y_test
18113 341400.0
19060 254700.0
18088 420000.0
12618 183600.0
13539 85500.0
...
11351 215100.0
2767 54300.0
13238 304100.0
11101 249400.0
7102 200800.0
Name: median_house_value, Length: 2580, dtype: float64
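Eyeballing the predictions against y_test only goes so far; a single error metric such as the mean absolute error summarizes how far off the predictions are on average (a sketch using sklearn.metrics):

from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_test, model.predict(scaler.transform(x_test)))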
Inspecting the model#
model.coef_
array([101136.32711246, 146929.99874837, 624960.58781183])
model.intercept_
-2293.2173970997974
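The coefficients are in the same order as the columns passed at fit time (here the scaled housing_median_age, total_rooms, and median_income), so pairing them with the column names makes them easier to read:

pd.Series(model.coef_, index=x_train.columns)  # one coefficient per feature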
Exercise#
Experiment with how different preprocessing steps affect your data and the model's score.