Machine learning classification#
Building machine learning models to assign data to classes.
Machine learning has become an increasingly popular tool for solving classification problems, where the goal is to assign data points to pre-defined classes based on their features or attributes. The technique has applications in a wide range of fields, from image and speech recognition to fraud detection and spam filtering. Building a classification model involves training an algorithm on a labelled dataset, in which each data point is associated with a specific class label. By analyzing the relationships between the input features and the output labels, the model learns to classify new, unseen data points accurately.
In this way, machine learning provides a powerful tool for automating classification tasks, enabling more efficient and effective decision-making.
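As a minimal sketch of that workflow, assuming nothing beyond scikit-learn's bundled iris dataset (a stand-in for any labelled dataset), a classifier can be fitted to labelled examples and then asked to classify an unseen point:
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
# Labelled data: X holds the input features, y the class labels.
X, y = load_iris(return_X_y=True)
# Learn the relationship between features and labels...
clf = KNeighborsClassifier()
clf.fit(X, y)
# ...then classify a new, unseen data point.
clf.predict([[5.1, 3.5, 1.4, 0.2]])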
How To#
from sklearn.model_selection import train_test_split
import pandas as pd
# Load the housing data and inspect the first few rows.
df = pd.read_csv("data/housing.csv")
df.head()
|  | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | median_house_value | ocean_proximity |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | -122.23 | 37.88 | 41.0 | 880.0 | 129.0 | 322.0 | 126.0 | 8.3252 | 452600.0 | NEAR BAY |
| 1 | -122.22 | 37.86 | 21.0 | 7099.0 | 1106.0 | 2401.0 | 1138.0 | 8.3014 | 358500.0 | NEAR BAY |
| 2 | -122.24 | 37.85 | 52.0 | 1467.0 | 190.0 | 496.0 | 177.0 | 7.2574 | 352100.0 | NEAR BAY |
| 3 | -122.25 | 37.85 | 52.0 | 1274.0 | 235.0 | 558.0 | 219.0 | 5.6431 | 341300.0 | NEAR BAY |
| 4 | -122.25 | 37.85 | 52.0 | 1627.0 | 280.0 | 565.0 | 259.0 | 3.8462 | 342200.0 | NEAR BAY |
# Drop rows with missing values.
df = df.dropna()
# Predict ocean_proximity from the remaining numeric columns; hold out half the rows.
x_train, x_, y_train, y_ = train_test_split(df.drop(["longitude", "latitude", "ocean_proximity"], axis=1),
                                            df.ocean_proximity, test_size=.5)
# Split the held-out half evenly into validation and test sets.
x_val, x_test, y_val, y_test = train_test_split(x_, y_, test_size=.5)
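The two consecutive 50/50 splits leave half of the rows for training and a quarter each for validation and testing; a quick sanity check (the exact counts depend on how many rows survive dropna):
# Roughly a 50/25/25 train/validation/test split.
print(len(x_train), len(x_val), len(x_test))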
Nearest Neighbours#
from sklearn.neighbors import KNeighborsClassifier
# Classify each point by majority vote among its 10 nearest training neighbours.
model = KNeighborsClassifier(n_neighbors=10)
model.fit(x_train, y_train)
KNeighborsClassifier(n_neighbors=10)
# Mean accuracy on the validation set.
model.score(x_val, y_val)
0.6217697729052467
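To put that accuracy in context, it is worth comparing against a naive baseline that always predicts the most frequent class; a short sketch using scikit-learn's DummyClassifier (the exact figure depends on the class balance of this particular split):
from sklearn.dummy import DummyClassifier
# Baseline: always predict the most common class seen in training.
baseline = DummyClassifier(strategy="most_frequent")
baseline.fit(x_train, y_train)
baseline.score(x_val, y_val)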
Random Forest#
from sklearn.ensemble import RandomForestClassifier
# An ensemble of decision trees, each trained on a bootstrap sample of the data.
rf = RandomForestClassifier()
rf.fit(x_train, y_train)
RandomForestClassifier()
rf.score(x_val, y_val)
0.6889193422083008
rf.feature_importances_
array([0.12451693, 0.12532868, 0.10461846, 0.12722106, 0.10961342,
0.12612266, 0.2825788 ])
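The importances are reported in the same order as the training columns, so labelling them with the column names makes the array easier to read (the values will vary from run to run):
# Pair each importance with its feature name, largest first.
pd.Series(rf.feature_importances_, index=x_train.columns).sort_values(ascending=False)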
Logistic Regression#
from sklearn.linear_model import LogisticRegression
# Raise max_iter so the solver has enough iterations to converge on these unscaled features.
model = LogisticRegression(max_iter=10000)
model.fit(x_train, y_train)
LogisticRegression(max_iter=10000)
model.score(x_val, y_val)
0.5908379013312451
model.coef_
array([[-1.43232851e-02, 6.47333918e-04, 7.20893337e-04,
1.93041959e-03, -4.51779284e-04, -3.93556244e-04,
7.48750568e-06],
[ 2.85547354e-02, 1.87433720e-03, 4.63863630e-03,
1.32377110e-03, -8.17502987e-03, 3.16443770e-03,
-5.43621261e-06],
[-2.00186550e-04, -4.59019428e-03, -9.83595853e-04,
-3.04974462e-03, -9.09377095e-04, -1.33667831e-05,
-1.03892412e-05],
[ 2.12837186e-03, 1.23226868e-03, -7.63341231e-03,
-9.44457396e-04, 1.14355356e-02, -1.30883449e-03,
3.33858117e-06],
[-1.61596357e-02, 8.36254482e-04, 3.25747852e-03,
7.40011330e-04, -1.89934939e-03, -1.44868018e-03,
4.99936701e-06]])
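With five classes and seven features, coef_ comes back as a 5 × 7 array: one row of coefficients per class (ordered as in model.classes_) and one column per feature. A small sketch to label it accordingly:
# One row of coefficients per class, one column per input feature.
pd.DataFrame(model.coef_, index=model.classes_, columns=x_train.columns)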
Exercise#
Test different numbers of neighbours for the KNN classifier, and see how pre-processing such as scaling affects the results.
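One possible starting point, sketched on the assumption that the variables from the session above are still in scope: wrap the classifier in a pipeline with a StandardScaler and compare validation accuracy across several values of n_neighbors, with and without scaling. Because KNN compares raw distances, large-valued features such as median_house_value would otherwise dominate the neighbour search.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

for k in (1, 5, 10, 25, 50):
    # Unscaled: distances are dominated by the largest-valued features.
    raw = KNeighborsClassifier(n_neighbors=k).fit(x_train, y_train)
    # Scaled: standardise each feature to zero mean and unit variance first.
    scaled = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    scaled.fit(x_train, y_train)
    print(k, raw.score(x_val, y_val), scaled.score(x_val, y_val))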