# Decision trees and random forests
*Goal: change up the machine learning models.*
Decision trees and random forests are popular machine learning techniques for classification and regression tasks.
A decision tree is a tree-like model in which each internal node tests a feature against a threshold, and each branch represents an outcome of that test. A random forest, by contrast, is an ensemble of decision trees in which each tree is trained on a bootstrap sample of the data and considers only a random subset of the features at each split; averaging the trees' predictions reduces the variance of any single tree. These algorithms are powerful and widely used because they can handle large datasets, capture nonlinear relationships, and produce interpretable results.
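To make the interpretability claim concrete, here is a minimal sketch (using synthetic data, not the housing dataset used later) that prints the decision rules a shallow tree learns. The data and feature names here are invented for illustration:

```python
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic regression data, for illustration only
X, y = make_regression(n_samples=100, n_features=2, noise=0.1, random_state=0)

# A shallow tree: each internal node tests one feature against a threshold
tree_model = DecisionTreeRegressor(max_depth=2, random_state=0)
tree_model.fit(X, y)

# export_text renders the learned if/else rules in readable form
print(export_text(tree_model, feature_names=["x0", "x1"]))
```

The printout shows each split as a feature/threshold comparison, which is exactly why shallow trees are easy to inspect.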
This notebook will explore decision trees and random forests in more detail and discuss their strengths and weaknesses.
```python
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import pandas as pd

df = pd.read_csv("data/housing.csv")
df.head()
```
```python
# First split: 50% train, 50% held out
x_train, x_, y_train, y_ = train_test_split(
    df[["housing_median_age", "total_rooms", "median_income"]],
    df.median_house_value,
    test_size=0.5,
)
# Second split: divide the held-out half into validation and test (25% each overall)
x_val, x_test, y_val, y_test = train_test_split(x_, y_, test_size=0.5)
```
```python
from sklearn import preprocessing
from sklearn import tree

# Standardize features to zero mean and unit variance, then fit a tree regressor
scaler = preprocessing.StandardScaler()
model = tree.DecisionTreeRegressor()
```
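The cell above defines the scaler and the model but the fitting step is not shown here. A sketch of how they would typically be used follows, with synthetic stand-in data in place of `data/housing.csv` (the column names are borrowed from the CSV; the values are invented):

```python
import numpy as np
import pandas as pd
from sklearn import preprocessing, tree
from sklearn.model_selection import train_test_split

# Stand-in data with the housing CSV's column names (synthetic, illustrative only)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "housing_median_age": rng.uniform(1, 52, 500),
    "total_rooms": rng.uniform(100, 6000, 500),
    "median_income": rng.uniform(0.5, 15, 500),
})
df["median_house_value"] = 40000 * df.median_income + rng.normal(0, 10000, 500)

x_train, x_, y_train, y_ = train_test_split(
    df[["housing_median_age", "total_rooms", "median_income"]],
    df.median_house_value, test_size=0.5, random_state=0)
x_val, x_test, y_val, y_test = train_test_split(x_, y_, test_size=0.5, random_state=0)

# Fit the scaler on the training split only, then transform each split with it
scaler = preprocessing.StandardScaler()
x_train_s = scaler.fit_transform(x_train)
x_val_s = scaler.transform(x_val)

model = tree.DecisionTreeRegressor(random_state=0)
model.fit(x_train_s, y_train)
print("validation R^2:", model.score(x_val_s, y_val))
```

Fitting the scaler on the training split alone avoids leaking validation statistics into preprocessing; note also that an unpruned tree will fit the training data perfectly, so validation scores are the ones worth watching.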
## Build a forest of decision trees
```python
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor()
```
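The array printed below looks like the forest's `feature_importances_` after fitting, with one value per input column summing to 1. A self-contained sketch of that step, again with synthetic stand-in data (column names borrowed from the housing CSV, values invented):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the three housing features used earlier
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "housing_median_age": rng.uniform(1, 52, 300),
    "total_rooms": rng.uniform(100, 6000, 300),
    "median_income": rng.uniform(0.5, 15, 300),
})
# Target driven mostly by income, so its importance should dominate
y = 40000 * df.median_income + 500 * df.housing_median_age + rng.normal(0, 5000, 300)

rf = RandomForestRegressor(random_state=0)
rf.fit(df, y)

# One importance per column; the values sum to 1
print(rf.feature_importances_)
```

Importances are computed from how much each feature's splits reduce impurity across the forest, which makes them a quick (if rough) ranking of which columns the model relies on.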
```
array([0.14080726, 0.19557078, 0.66362196])
```
*Experiment with different machine learning models.*