Machine learning classification#
Building machine learning models to assign data to classes.
Machine learning has become an increasingly popular tool for solving classification problems.
The goal is to assign data points to pre-defined classes based on their features or attributes. This technique has numerous applications across a wide range of fields, from image and speech recognition to fraud detection and spam filtering. Building such models involves training algorithms on labelled datasets, in which each data point is associated with a specific class label. By learning the relationships between the input features and the output labels, these models can accurately classify new, unseen data points.
In this way, machine learning provides a powerful tool for automating classification tasks and enabling more efficient and effective decision-making.
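As a minimal sketch of that idea (a toy example with synthetic data, separate from the worked example below; the variable names are illustrative):

from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

# A small synthetic labelled dataset: rows of X are data points,
# entries of y are their class labels.
X, y = make_classification(n_samples=100, n_features=4, random_state=0)

# Fit on the labelled data, then classify an unseen point.
clf = KNeighborsClassifier().fit(X, y)
clf.predict([[0.0, 0.0, 0.0, 0.0]])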
How To#
from sklearn.model_selection import train_test_split
import pandas as pd

# Load the housing dataset and preview the first rows.
df = pd.read_csv("data/housing.csv")
df.head()
|   | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | median_house_value | ocean_proximity |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | -122.23 | 37.88 | 41.0 | 880.0 | 129.0 | 322.0 | 126.0 | 8.3252 | 452600.0 | NEAR BAY |
| 1 | -122.22 | 37.86 | 21.0 | 7099.0 | 1106.0 | 2401.0 | 1138.0 | 8.3014 | 358500.0 | NEAR BAY |
| 2 | -122.24 | 37.85 | 52.0 | 1467.0 | 190.0 | 496.0 | 177.0 | 7.2574 | 352100.0 | NEAR BAY |
| 3 | -122.25 | 37.85 | 52.0 | 1274.0 | 235.0 | 558.0 | 219.0 | 5.6431 | 341300.0 | NEAR BAY |
| 4 | -122.25 | 37.85 | 52.0 | 1627.0 | 280.0 | 565.0 | 259.0 | 3.8462 | 342200.0 | NEAR BAY |
# Drop rows with missing values.
df = df.dropna()
# Predict ocean_proximity from the remaining numeric features; the coordinates
# are dropped because they encode much the same information as the target.
# Half the data goes to training; the rest is split equally into validation and test sets.
x_train, x_, y_train, y_ = train_test_split(df.drop(["longitude", "latitude", "ocean_proximity"], axis=1),
                                            df.ocean_proximity, test_size=.5)
x_val, x_test, y_val, y_test = train_test_split(x_, y_, test_size=.5)
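Before fitting any model, it can be worth sanity-checking the split; a quick look (not part of the original steps, just a suggested check) might be:

# Confirm the sizes of the three splits.
print(x_train.shape, x_val.shape, x_test.shape)

# Inspect the class balance of the target in the training set.
y_train.value_counts(normalize=True)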
Nearest Neighbours#
from sklearn.neighbors import KNeighborsClassifier

# Classify each point by majority vote among its 10 nearest training points.
model = KNeighborsClassifier(n_neighbors=10)
model.fit(x_train, y_train)
KNeighborsClassifier(n_neighbors=10)
model.score(x_val, y_val)
0.610415035238841
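k-nearest neighbours is distance-based, so features on large scales (such as median_house_value) can dominate the distance metric. One way to account for this, sketched below with scikit-learn's StandardScaler and make_pipeline (an addition to the original example), is to standardise the features before fitting:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardise each feature to zero mean and unit variance
# before computing neighbour distances.
scaled_knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=10))
scaled_knn.fit(x_train, y_train)
scaled_knn.score(x_val, y_val)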
Random Forest#
from sklearn.ensemble import RandomForestClassifier

# An ensemble of decision trees with default settings (100 trees).
rf = RandomForestClassifier()
rf.fit(x_train, y_train)
RandomForestClassifier()
rf.score(x_val, y_val)
0.6644479248238058
rf.feature_importances_
array([0.12328463, 0.1240654 , 0.10260033, 0.12654883, 0.1057744 ,
0.13034255, 0.28738387])
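The raw array is hard to read on its own; pairing each importance with its column name (a small convenience step, not in the original notebook) shows which features drive the forest's predictions:

# Pair each importance with its feature name and sort descending.
pd.Series(rf.feature_importances_, index=x_train.columns).sort_values(ascending=False)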
Logistic Regression#
from sklearn.linear_model import LogisticRegression

# Raise max_iter so the solver converges on these unscaled features.
model = LogisticRegression(max_iter=10000)
model.fit(x_train, y_train)
LogisticRegression(max_iter=10000)
model.score(x_val, y_val)
0.5857478465152701
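Accuracy alone can hide poor performance on rare classes. For a fuller picture, one option (again an addition to the original example) is scikit-learn's classification_report, which shows precision and recall per class:

from sklearn.metrics import classification_report

# Precision, recall, and F1 for each ocean_proximity class.
print(classification_report(y_val, model.predict(x_val)))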
model.coef_
array([[-1.72473975e-02, 6.12754034e-04, 1.30760638e-03,
2.98563020e-03, -1.23615621e-03, -6.67404522e-04,
5.28308491e-06],
[ 3.76171973e-02, 1.90288212e-03, 5.37687156e-03,
2.37207627e-03, -9.48749494e-03, 4.66014188e-03,
-9.49343145e-06],
[-6.82243817e-04, -4.38749818e-03, -1.83951334e-04,
-7.14357554e-03, -1.62250975e-03, -7.95482214e-05,
5.28405405e-07],
[ 3.83003358e-03, 1.06053633e-03, -8.02485942e-03,
-4.58787342e-05, 1.26072182e-02, -1.51002255e-03,
6.12692404e-07],
[-2.35175896e-02, 8.11325701e-04, 1.52433282e-03,
1.83174781e-03, -2.61057337e-04, -2.40316659e-03,
3.06924874e-06]])
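The coefficient matrix has one row per class and one column per feature; labelling it (a convenience step using model.classes_ and the training columns) makes it much easier to interpret:

# One row of coefficients per ocean_proximity class, one column per feature.
pd.DataFrame(model.coef_, index=model.classes_, columns=x_train.columns)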
Exercise#
Test different numbers of neighbours for the KNN classifier, and see how pre-processing steps such as feature scaling affect the results.
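One possible starting point (a sketch, with an arbitrary choice of neighbour counts) is to compare validation accuracy with and without scaling:

from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Compare validation accuracy with and without feature scaling
# across a few neighbour counts.
for k in [1, 5, 10, 25, 50]:
    plain = KNeighborsClassifier(n_neighbors=k).fit(x_train, y_train)
    scaled = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k)).fit(x_train, y_train)
    print(k, plain.score(x_val, y_val), scaled.score(x_val, y_val))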