A Deep Dive into Python Machine Learning Algorithms: Supervised Learning (Linear Regression, Logistic Regression, Decision Trees and Random Forests, Support Vector Machines, K-Nearest Neighbors)
Table of Contents
- A Deep Dive into Python Machine Learning Algorithms: Supervised Learning
- 1. Linear Regression
- 2. Logistic Regression
- 3. Decision Trees and Random Forests
- 4. Support Vector Machines
- 5. K-Nearest Neighbors
A Deep Dive into Python Machine Learning Algorithms: Supervised Learning
In the field of machine learning, Python's rich ecosystem of libraries and concise syntax have made it the language of choice for many data scientists and machine learning practitioners. Working through concrete code, this article examines five common supervised learning algorithms: linear regression, logistic regression, decision trees and random forests, support vector machines, and K-nearest neighbors, helping readers better understand their principles, implementation, and use.
import numpy as np
import pandas as pd
from sklearn.datasets import make_regression, load_iris
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC, SVR
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score, accuracy_score, classification_report
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.decomposition import PCA
import graphviz
import matplotlib.pyplot as plt
from statsmodels.stats.outliers_influence import variance_inflation_factor
# Configure matplotlib to support Chinese text in plots (only needed if plot labels contain Chinese)
plt.rcParams['font.family'] = 'SimHei'  # Windows
# plt.rcParams['font.family'] = 'WenQuanYi Zen Hei'  # Linux
# plt.rcParams['font.family'] = 'Arial Unicode MS'  # macOS
plt.rcParams['axes.unicode_minus'] = False  # Fix rendering of the minus sign
1. Linear Regression
Linear regression is a widely used predictive model that aims to capture linear relationships between variables. We use the make_regression function to generate a regression dataset with 1000 samples and 10 features, and split it into training and test sets in an 80:20 ratio.
X_reg, y_reg = make_regression(n_samples=1000, n_features=10, noise=0.5, random_state=42)
X_reg_train, X_reg_test, y_reg_train, y_reg_test = train_test_split(X_reg, y_reg, test_size=0.2, random_state=42)
To put all features on a comparable scale, we standardize the data.
scaler_reg = StandardScaler()
X_reg_train_scaled = scaler_reg.fit_transform(X_reg_train)
X_reg_test_scaled = scaler_reg.transform(X_reg_test)
Next, we build a linear regression model with the LinearRegression class, train it, and generate predictions.
lin_reg = LinearRegression()
lin_reg.fit(X_reg_train_scaled, y_reg_train)
y_reg_pred = lin_reg.predict(X_reg_test_scaled)
Correlation and collinearity among features can undermine the stability and accuracy of the model. We compute the variance inflation factor (VIF) to check for collinearity.
# Helper function: compute VIF
def calculate_vif(X):
    """
    Compute the variance inflation factor (VIF) of each feature,
    used to detect collinearity among features.

    Parameters:
        X (DataFrame): feature matrix

    Returns:
        vif (DataFrame): data frame with feature names and their VIF values
    """
    vif = pd.DataFrame()
    vif["Variable"] = X.columns
    vif["VIF"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
    return vif
X_reg_df = pd.DataFrame(X_reg_train_scaled)
vif_values_reg = calculate_vif(X_reg_df)
print("线性回归特征的VIF值:")
print(vif_values_reg)
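For reference, the VIF of feature j is defined as VIF_j = 1 / (1 − R_j²), where R_j² is the R² obtained by regressing feature j on all the other features; a common rule of thumb treats values above roughly 5 to 10 as a sign of problematic collinearity. Because make_regression draws independent features here, all VIF values stay close to 1 (see the output below).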
Model evaluation is a key step. We assess performance with the mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and the coefficient of determination (R²).
mse_reg = mean_squared_error(y_reg_test, y_reg_pred)
rmse_reg = np.sqrt(mse_reg)
mae_reg = mean_absolute_error(y_reg_test, y_reg_pred)
r2_reg = r2_score(y_reg_test, y_reg_pred)
print("线性回归 - 均方误差:", mse_reg)
print("线性回归 - 均方根误差:", rmse_reg)
print("线性回归 - 平均绝对误差:", mae_reg)
print("线性回归 - 决定系数:", r2_reg)
We also explore extensions of linear regression, such as polynomial regression, ridge regression, and Lasso regression. Polynomial regression adds polynomial combinations of the features to capture nonlinear relationships in the data.
poly = PolynomialFeatures(degree=2)
X_poly_reg = poly.fit_transform(X_reg_train_scaled)
X_poly_reg_test = poly.transform(X_reg_test_scaled)
poly_reg = LinearRegression()
poly_reg.fit(X_poly_reg, y_reg_train)
y_poly_reg_pred = poly_reg.predict(X_poly_reg_test)
Ridge regression and Lasso regression instead add a regularization penalty on the model coefficients to guard against overfitting: Ridge uses an L2 penalty that shrinks coefficients toward zero, while Lasso uses an L1 penalty that can set some coefficients exactly to zero.
ridge = Ridge(alpha=1.0)
ridge.fit(X_reg_train_scaled, y_reg_train)
y_ridge_pred = ridge.predict(X_reg_test_scaled)
lasso = Lasso(alpha=0.1)
lasso.fit(X_reg_train_scaled, y_reg_train)
y_lasso_pred = lasso.predict(X_reg_test_scaled)
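As a quick, minimal check using the variables defined above, we can compare the test-set R² of the three linear models and count how many coefficients the L1 penalty of Lasso has driven exactly to zero:
# Compare the plain, Ridge, and Lasso models on the held-out test set
print("Linear regression R²:", r2_score(y_reg_test, y_reg_pred))
print("Ridge regression R²:", r2_score(y_reg_test, y_ridge_pred))
print("Lasso regression R²:", r2_score(y_reg_test, y_lasso_pred))
# Lasso's L1 penalty can zero out coefficients, effectively performing feature selection
print("Lasso coefficients shrunk to zero:", int(np.sum(lasso.coef_ == 0)))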
2. Logistic Regression
Logistic regression is commonly used for binary and multiclass classification: the output of a linear model is passed through the Sigmoid function to produce a probability. Using the iris dataset, we first build a binary logistic regression model.
iris = load_iris()
X_clf = iris.data
y_clf_binary = (iris.target == 2).astype(int)
X_clf_train, X_clf_test, y_clf_train, y_clf_test = train_test_split(X_clf, y_clf_binary, test_size=0.2, random_state=42)
logreg = LogisticRegression(solver='lbfgs', max_iter=1000)
logreg.fit(X_clf_train, y_clf_train)
y_clf_pred = logreg.predict(X_clf_test)
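To make the Sigmoid link explicit, here is a small sanity check (a sketch using the fitted model above): applying the Sigmoid function to the model's linear scores should reproduce predict_proba for the positive class.
# Linear scores w·x + b for the test samples
z = logreg.decision_function(X_clf_test)
# The Sigmoid function turns scores into probabilities of the positive class
p_manual = 1 / (1 + np.exp(-z))
print("Manual Sigmoid matches predict_proba:", np.allclose(p_manual, logreg.predict_proba(X_clf_test)[:, 1]))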
For multiclass problems, we demonstrate two approaches: One-vs-Rest and Softmax (multinomial) regression.
X_clf_multiclass = iris.data
y_clf_multiclass = iris.target
X_clf_train_multiclass, X_clf_test_multiclass, y_clf_train_multiclass, y_clf_test_multiclass = train_test_split(
    X_clf_multiclass, y_clf_multiclass, test_size=0.2, random_state=42)
logreg_ovr = OneVsRestClassifier(LogisticRegression(solver='lbfgs', max_iter=1000))
logreg_ovr.fit(X_clf_train_multiclass, y_clf_train_multiclass)
y_clf_pred_ovr = logreg_ovr.predict(X_clf_test_multiclass)
logreg_softmax = LogisticRegression(solver='lbfgs', max_iter=1000)
logreg_softmax.fit(X_clf_train_multiclass, y_clf_train_multiclass)
y_clf_pred_softmax = logreg_softmax.predict(X_clf_test_multiclass)
Evaluating the models by accuracy gives a direct view of how the different approaches perform on the multiclass task.
print("逻辑回归(二分类) - 准确率:", accuracy_score(y_clf_test, y_clf_pred))
print("One-vs-Rest逻辑回归 - 准确率:", accuracy_score(y_clf_test_multiclass, y_clf_pred_ovr))
print("Softmax回归 - 准确率:", accuracy_score(y_clf_test_multiclass, y_clf_pred_softmax))
3. Decision Trees and Random Forests
A decision tree is a tree-structured model for classification and regression that builds decision rules by recursively partitioning the feature space. Using the iris dataset, we approximate the ID3, C4.5, and CART algorithms; note that scikit-learn's DecisionTreeClassifier is an optimized CART implementation, so criterion='entropy' only mimics the information-gain criterion of ID3/C4.5 rather than reproducing those algorithms exactly.
# Data preparation
iris = load_iris()
X_tree = iris.data
y_tree = iris.target
X_tree_train, X_tree_test, y_tree_train, y_tree_test = train_test_split(X_tree, y_tree, test_size=0.2, random_state=42)
dtree_id3 = DecisionTreeClassifier(criterion='entropy')
dtree_id3.fit(X_tree_train, y_tree_train)
dtree_c45 = DecisionTreeClassifier(criterion='entropy', splitter='best')
dtree_c45.fit(X_tree_train, y_tree_train)
dtree_cart = DecisionTreeClassifier(criterion='gini')
dtree_cart.fit(X_tree_train, y_tree_train)
Taking the CART tree as an example, we visualize the tree structure with the export_graphviz function, which helps in understanding the model's decision process.
dot_data = export_graphviz(dtree_cart, out_file=None,
                           feature_names=iris.feature_names,
                           class_names=iris.target_names,
                           filled=True, rounded=True,
                           special_characters=True)
graph = graphviz.Source(dot_data)
# graph.render("iris_cart_tree")
Decision trees are prone to overfitting, so we introduce two ways to improve generalization: pre-pruning and post-pruning (cost-complexity pruning).
dtree_pruned = DecisionTreeClassifier(criterion='gini', max_depth=3, min_samples_leaf=5)
dtree_pruned.fit(X_tree_train, y_tree_train)
dtree_ccp = DecisionTreeClassifier(ccp_alpha=0.01)
dtree_ccp.fit(X_tree_train, y_tree_train)
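The value ccp_alpha=0.01 above is chosen by hand; scikit-learn also provides cost_complexity_pruning_path, which enumerates the candidate alpha values for a tree so that each can be evaluated, for example by cross-validation. A minimal sketch:
# List the effective alphas produced by cost-complexity pruning on the training data
path = DecisionTreeClassifier(criterion='gini', random_state=42).cost_complexity_pruning_path(X_tree_train, y_tree_train)
print("Candidate ccp_alpha values:", path.ccp_alphas)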
Random forests are an ensemble method that builds many decision trees and combines their predictions to improve stability and accuracy. We build a random forest with the RandomForestClassifier class and tune its hyperparameters with a grid search.
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_tree_train, y_tree_train)
param_grid = {
    'n_estimators': [50, 100, 200],
    'max_depth': [None, 5, 10],
    'min_samples_leaf': [1, 2, 5]
}
grid_search = GridSearchCV(rf, param_grid, cv=5)
grid_search.fit(X_tree_train, y_tree_train)
best_params = grid_search.best_params_
print("随机森林最佳参数:", best_params)
最终,我们对比了随机森林、预剪枝决策树和后剪枝决策树在测试集上的准确率。
y_tree_pred_rf = grid_search.predict(X_tree_test)
accuracy_rf = accuracy_score(y_tree_test, y_tree_pred_rf)
print("随机森林 - 准确率:", accuracy_rf)
print("预剪枝决策树 - 准确率:", accuracy_score(y_tree_test, dtree_pruned.predict(X_tree_test)))
print("后剪枝决策树 - 准确率:", accuracy_score(y_tree_test, dtree_ccp.predict(X_tree_test)))
4. Support Vector Machines
A support vector machine looks for the optimal separating hyperplane, the one with the largest margin, to classify the data. We again use the iris dataset and standardize the features.
# Data preparation
iris = load_iris()
X_svm = iris.data
y_svm = iris.target
X_svm_train, X_svm_test, y_svm_train, y_svm_test = train_test_split(X_svm, y_svm, test_size=0.2, random_state=42)
scaler_svm = StandardScaler()
X_svm_train_scaled = scaler_svm.fit_transform(X_svm_train)
X_svm_test_scaled = scaler_svm.transform(X_svm_test)
We fit three variants: a linear-kernel SVM, a polynomial-kernel SVM, and a Gaussian (RBF) kernel SVM.
svm_linear = SVC(kernel='linear')
svm_linear.fit(X_svm_train_scaled, y_svm_train)
svm_poly = SVC(kernel='poly', degree=3)
svm_poly.fit(X_svm_train_scaled, y_svm_train)
svm_rbf = SVC(kernel='rbf')
svm_rbf.fit(X_svm_train_scaled, y_svm_train)
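Because an SVM's decision boundary is determined only by its support vectors, it can be instructive to check how many training samples each fitted model keeps as support vectors (a quick check on the models above):
# Number of support vectors per class for each kernel
print("Linear kernel support vectors per class:", svm_linear.n_support_)
print("Polynomial kernel support vectors per class:", svm_poly.n_support_)
print("RBF kernel support vectors per class:", svm_rbf.n_support_)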
A grid search over the SVM hyperparameters is then used to find the best model configuration.
param_grid = {
    'C': [0.1, 1, 10],
    'kernel': ['linear', 'poly', 'rbf'],
    'degree': [2, 3, 4],
    'gamma': ['scale', 'auto']
}
grid_search_svm = GridSearchCV(SVC(), param_grid, cv=5)
grid_search_svm.fit(X_svm_train_scaled, y_svm_train)
best_params_svm = grid_search_svm.best_params_
print("SVM最佳参数:", best_params_svm)
# 预测
y_svm_pred = grid_search_svm.predict(X_svm_test_scaled)
accuracy_svm = accuracy_score(y_svm_test, y_svm_pred)
print("SVM 准确率:", accuracy_svm)
支持向量机的扩展包括支持向量回归(SVR)以及多分类支持向量机的一对一和一对多策略。
# 支持向量机的扩展
# 支持向量回归
X_svr, y_svr = make_regression(n_samples=1000, n_features=10, noise=0.5, random_state=42)
X_svr_train, X_svr_test, y_svr_train, y_svr_test = train_test_split(X_svr, y_svr, test_size=0.2, random_state=42)
# Standardize the data
scaler_svr = StandardScaler()
X_svr_train_scaled = scaler_svr.fit_transform(X_svr_train)
X_svr_test_scaled = scaler_svr.transform(X_svr_test)
svr = SVR(kernel='linear')
svr.fit(X_svr_train_scaled, y_svr_train)
y_svr_pred = svr.predict(X_svr_test_scaled)
mse_svr = mean_squared_error(y_svr_test, y_svr_pred)
print("支持向量回归均方误差:", mse_svr)
svm_ovo = SVC(kernel='rbf', decision_function_shape='ovo')
svm_ovo.fit(X_svm_train_scaled, y_svm_train)
y_svm_pred_ovo = svm_ovo.predict(X_svm_test_scaled)
accuracy_ovo = accuracy_score(y_svm_test, y_svm_pred_ovo)
print("一对一策略SVM准确率:", accuracy_ovo)
svm_ovr = SVC(kernel='rbf', decision_function_shape='ovr')
svm_ovr.fit(X_svm_train_scaled, y_svm_train)
y_svm_pred_ovr = svm_ovr.predict(X_svm_test_scaled)
accuracy_ovr = accuracy_score(y_svm_test, y_svm_pred_ovr)
print("一对多策略SVM准确率:", accuracy_ovr)
五、K近邻算法
K近邻算法是一种基于实例的学习方法,通过计算待预测样本与训练集中样本的距离,选择最近的K个邻居进行投票或平均来预测结果。我们以鸢尾花数据集为例,构建K近邻分类模型。
iris = load_iris()
X_knn = iris.data
y_knn = iris.target
X_knn_train, X_knn_test, y_knn_train, y_knn_test = train_test_split(X_knn, y_knn, test_size=0.2, random_state=42)
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_knn_train, y_knn_train)
y_knn_pred = knn.predict(X_knn_test)
accuracy_knn = accuracy_score(y_knn_test, y_knn_pred)
print("K近邻分类 - 准确率:", accuracy_knn)
K值的选择对模型性能有重要影响。我们通过交叉验证来寻找最佳的K值,并绘制K值与准确率的关系图。
k_values = range(1, 31)
cv_scores = []
for k in k_values:
    knn = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(knn, X_knn, y_knn, cv=5)
    cv_scores.append(scores.mean())
best_k = k_values[np.argmax(cv_scores)]
print("K近邻最佳K值:", best_k)
plt.plot(k_values, cv_scores)
plt.xlabel('K值')
plt.ylabel('交叉验证准确率')
plt.title('K近邻K值选择')
plt.show()
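Once the best K has been selected by cross-validation, a final model can be refit with it and checked on the held-out test set (a minimal sketch):
# Refit KNN with the cross-validated best K and evaluate on the test split
knn_best = KNeighborsClassifier(n_neighbors=best_k)
knn_best.fit(X_knn_train, y_knn_train)
print("KNN accuracy with the best K:", accuracy_score(y_knn_test, knn_best.predict(X_knn_test)))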
KNN supports a variety of distance metrics, such as the Euclidean, Manhattan, Minkowski, Chebyshev, and cosine distances. We compare model accuracy under these different metrics.
knn_euclidean = KNeighborsClassifier(n_neighbors=3, metric='euclidean')
knn_euclidean.fit(X_knn_train, y_knn_train)
y_knn_pred_euclidean = knn_euclidean.predict(X_knn_test)
accuracy_euclidean = accuracy_score(y_knn_test, y_knn_pred_euclidean)
print("欧氏距离KNN准确率:", accuracy_euclidean)
knn_manhattan = KNeighborsClassifier(n_neighbors=3, metric='manhattan')
knn_manhattan.fit(X_knn_train, y_knn_train)
y_knn_pred_manhattan = knn_manhattan.predict(X_knn_test)
accuracy_manhattan = accuracy_score(y_knn_test, y_knn_pred_manhattan)
print("曼哈顿距离KNN准确率:", accuracy_manhattan)
knn_minkowski = KNeighborsClassifier(n_neighbors=3, metric='minkowski', p=3)
knn_minkowski.fit(X_knn_train, y_knn_train)
y_knn_pred_minkowski = knn_minkowski.predict(X_knn_test)
accuracy_minkowski = accuracy_score(y_knn_test, y_knn_pred_minkowski)
print("闵可夫斯基距离(p=3)KNN准确率:", accuracy_minkowski)
knn_chebyshev = KNeighborsClassifier(n_neighbors=3, metric='chebyshev')
knn_chebyshev.fit(X_knn_train, y_knn_train)
y_knn_pred_chebyshev = knn_chebyshev.predict(X_knn_test)
accuracy_chebyshev = accuracy_score(y_knn_test, y_knn_pred_chebyshev)
print("切比雪夫距离KNN准确率:", accuracy_chebyshev)
knn_cosine = KNeighborsClassifier(n_neighbors=3, metric='cosine')
knn_cosine.fit(X_knn_train, y_knn_train)
y_knn_pred_cosine = knn_cosine.predict(X_knn_test)
accuracy_cosine = accuracy_score(y_knn_test, y_knn_pred_cosine)
print("余弦距离KNN准确率:", accuracy_cosine)
To improve the performance of KNN, we also demonstrate KD-tree acceleration and dimensionality reduction with PCA.
knn_kdtree = KNeighborsClassifier(n_neighbors=3, algorithm='kd_tree')
knn_kdtree.fit(X_knn_train, y_knn_train)
y_pred_kdtree = knn_kdtree.predict(X_knn_test)
accuracy_kdtree = accuracy_score(y_knn_test, y_pred_kdtree)
print("KD树加速KNN准确率:", accuracy_kdtree)
pca = PCA(n_components=2)
X_knn_train_pca = pca.fit_transform(X_knn_train)
X_knn_test_pca = pca.transform(X_knn_test)
knn_pca = KNeighborsClassifier(n_neighbors=3)
knn_pca.fit(X_knn_train_pca, y_knn_train)
y_pred_pca = knn_pca.predict(X_knn_test_pca)
accuracy_pca = accuracy_score(y_knn_test, y_pred_pca)
print("PCA降维后KNN准确率:", accuracy_pca)
Output:
VIF values of the linear regression features:
   Variable       VIF
0         0  1.008836
1         1  1.005143
2         2  1.013360
3         3  1.008844
4         4  1.004043
5         5  1.008776
6         6  1.003887
7         7  1.013498
8         8  1.005017
9         9  1.002989
Linear regression - mean squared error: 0.23779787276051184
Linear regression - root mean squared error: 0.48764523248003955
Linear regression - mean absolute error: 0.3886664065048522
Linear regression - R²: 0.9999858984737383
Logistic regression (binary) - accuracy: 1.0
One-vs-Rest logistic regression - accuracy: 0.9666666666666667
Softmax regression - accuracy: 1.0
Random forest best parameters: {'max_depth': None, 'min_samples_leaf': 2, 'n_estimators': 200}
Random forest - accuracy: 1.0
Pre-pruned decision tree - accuracy: 1.0
Post-pruned decision tree - accuracy: 1.0
SVM best parameters: {'C': 10, 'degree': 2, 'gamma': 'scale', 'kernel': 'linear'}
SVM accuracy: 0.9666666666666667
SVR mean squared error: 0.238160034885163
One-vs-one SVM accuracy: 1.0
One-vs-rest SVM accuracy: 1.0
KNN classification - accuracy: 1.0
Best K for KNN: 6
Euclidean distance KNN accuracy: 1.0
Manhattan distance KNN accuracy: 1.0
Minkowski distance (p=3) KNN accuracy: 1.0
Chebyshev distance KNN accuracy: 1.0
Cosine distance KNN accuracy: 0.9666666666666667
KD-tree accelerated KNN accuracy: 1.0
KNN accuracy after PCA: 1.0
Through these code examples and explanations, we have taken a thorough look at how five common machine learning algorithms are implemented in Python. Corrections and feedback are welcome.