[Machine Learning Case Study] Predicting White Wine Quality with a Random Forest (RF)
[Machine Learning Case Study] Predicting White Wine Quality with a Random Forest (RF)
- Introduction
- Dataset Description
- Environment Setup
- Data Preprocessing
- 1. Importing the libraries and loading the data
- 2. Data exploration
- 3. Missing values and outliers
- 3.1 Missing value handling
- 3.2 Outlier handling
- Model Training
- 1. Binarizing the quality label
- 2. Feature scaling
- 3. Splitting the dataset
- 4. Training the model
- 5. Model evaluation
- 6. Feature importance
- Conclusion
Introduction
In the wine industry, quality assessment is a complex process involving many chemical and sensory factors. With the development of machine learning, we can use these techniques to predict wine quality. In this article we use Python's scikit-learn RandomForestClassifier to predict the quality of white wine: we train a model on a set of physicochemical measurements and use it to predict each wine's quality rating. In the modelling section we simplify the 0-10 score into a binary good/poor label.
Dataset Description
The dataset contains several physicochemical measurements related to wine quality:
- fixed acidity
- volatile acidity
- citric acid
- residual sugar
- chlorides
- free sulfur dioxide
- total sulfur dioxide
- density
- pH
- sulphates
- alcohol
- quality (the target, an integer score from 0 to 10)
Environment Setup
Before starting, make sure the following libraries are installed in your Python environment (plotly and termcolor are also imported by the code below):
- pandas
- numpy
- scikit-learn
- matplotlib
- seaborn
- plotly
- termcolor
You can install them with:
pip install pandas numpy scikit-learn matplotlib seaborn plotly termcolor
Data Preprocessing
1. Importing the libraries and loading the data
First, import the libraries we will use throughout the analysis.
import warnings # For warning handling
# Third-party imports
import pandas as pd # For data processing, CSV file I/O
import numpy as np # For numerical operations and mathematical functions
import matplotlib.pyplot as plt # For data visualization
import seaborn as sns # For statistical graphics
import plotly.express as px # For interactive plotting
from sklearn.model_selection import train_test_split # For data splitting for machine learning
from sklearn.preprocessing import MinMaxScaler, StandardScaler # For feature standardization
from sklearn.metrics import accuracy_score # For model evaluation
from termcolor import colored # For colored text printing
from sklearn.ensemble import RandomForestClassifier # For random forest classifier model
# Ignore warnings to keep the output clean
warnings.filterwarnings('ignore')
Then load the dataset.
# Load the data
try:
    # Relative file path
    filePath = "winequality-white.csv"
    # Read the CSV file (note the ';' separator) and store it in "data"
    data = pd.read_csv(filePath, sep=';')
    # Confirm that the data loaded correctly
    print(colored("THE DATASET LOADED SUCCESSFULLY...", "green", attrs=['reverse']))
except FileNotFoundError:
    print(colored("ERROR: File not found!", "red", attrs=['reverse']))
except Exception as e:
    print(colored(f"ERROR: {e}", "red", attrs=['reverse']))
2. Data exploration
Before doing any preprocessing, we should get a basic feel for the data, starting with the first few rows.
# Look at the first few rows of the dataset
dataset_rows = data.head(7)  # .head() defaults to 5 rows
print(colored('As you can see, the first 7 rows in the dataset:\n', 'green', attrs=['reverse']))
# Iterate over each row in the dataset_rows DataFrame
for index, row in dataset_rows.iterrows():
    # Print the index label of the current row
    print(colored(f"Row {index + 1}:", "white", attrs=['reverse']))
    # Print the content of the current row
    print(row)
    # Print a separator line
    print("--------------------------------------")
Next, look at the overall shape of the data: the number of rows, columns, features, and total values.
print("The shape =",data.shape)
# Show information about the dataset
num_rows, num_cols = data.shape
num_features = num_cols - 1
num_data = num_rows * num_cols
# Print the information
print(f"Number of Rows: {num_rows}")
print(f"Number of Columns: {num_cols}")
print(f"Number of Features: {num_features}")
print(f"Number of All Data: {num_data}")
# Check and ensure running
print(colored("The task has been completed without any errors....","green", attrs=['reverse']))
# Show column dtypes and non-null counts
data.info()
Inspect the summary statistics of each column.
data.describe().T.round(2)
Check the distribution of the quality label.
# Create a count plot using seaborn
sns.catplot(data=data, x='quality', kind='count')
# Add labels and title to the plot
plt.title('Distribution of Wine Quality')
plt.xlabel('Quality')
plt.ylabel('Count')
# Display the plot
plt.show()
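To complement the count plot, it can also help to print the exact number of wines per quality score; this is a small optional sketch using pandas.
# Print how many wines fall into each quality score
print(data['quality'].value_counts().sort_index())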
We will not repeat the full EDA here; see the companion article 【数据可视化案列】白葡萄酒质量数据的EDA可视化分析 for a detailed exploratory analysis of this dataset.
3. Missing values and outliers
3.1 Missing value handling
# Check for missing values
null_counts = data.isnull().sum()
# Display the number of null values
print(null_counts)
print("_________________________________________________________________")
print(colored(f"Totally, there are {null_counts.sum()} null values in the dataset.","green", attrs=['reverse']))
The dataset turns out to contain no missing values, so no imputation is needed.
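If your copy of the data did contain missing values, one simple option would be median imputation. The line below is only a hedged sketch of that fallback; it is not required for this dataset.
# Hypothetical handling: fill any missing numeric values with the column median
# (not needed here, since the dataset has no nulls)
data = data.fillna(data.median(numeric_only=True))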
3.2 Outlier handling
# Set the figure size
plt.figure(figsize=(22, 11))
# Add outliers to the plot
sns.stripplot(data=data, color="red", jitter=0.2, size=5)
# Set the axis labels and title
plt.title("Outliers")
plt.xlabel("X-axis label")
plt.ylabel("Y-axis label")
# Show the plot
plt.show()
# Delete the outliers
# The data before deleting outliers
print("Before Removing the outliers", data.shape)
# Deleting outliers (removing the observations where total sulfur dioxide is greater than 160)
data = data[data['total sulfur dioxide']<160]
#The data after deleting outliers
print("After Removing the outliers", data.shape)
# Set the figure size
plt.figure(figsize=(22, 11))
# Add outliers to the plot
sns.stripplot(data=data, color="red", jitter=0.2, size=5)
# Set the axis labels and title
plt.title("Outliers")
plt.xlabel("X-axis label")
plt.ylabel("Y-axis label")
# Show the plot
plt.show()
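The cut-off of 160 for total sulfur dioxide is chosen by eye from the strip plot. A more systematic, entirely optional alternative is an IQR-based rule; the sketch below is one possible implementation and uses the conventional 1.5×IQR fence, which is an assumption rather than part of the original workflow.
# Hypothetical IQR-based outlier filter for a single column
col = 'total sulfur dioxide'
q1, q3 = data[col].quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print(f"IQR fence for '{col}': [{lower:.1f}, {upper:.1f}]")
# data = data[(data[col] >= lower) & (data[col] <= upper)]  # uncomment to apply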
Model Training
1. Binarizing the quality label
# Split the data into features (X) and target variable (Y)
X = data.drop('quality',axis=1)
# Create a new series 'Y' by applying a lambda function to the 'quality' column of the 'data' DataFrame
# The lambda function assigns a value of 1 if the 'quality' value is greater than or equal to 5, otherwise assigns 0
Y = data['quality'].apply(lambda y_value: 1 if y_value >= 5 else 0)
# Print the shapes of X and Y to verify the splitting
print("Shape of X:", X.shape)
print("Shape of Y:", Y.shape)
2. Feature scaling
# Rescale / normalize the features
'''
# Standardization (z-score scaling)
standard_scaler = StandardScaler()
X = standard_scaler.fit_transform(X)
'''
# Min-Max scaling (rescaling to [0, 1])
min_max_scaler = MinMaxScaler()
X = min_max_scaler.fit_transform(X)
# One of the two scalers can be chosen during model selection based on the resulting accuracy
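Note that fitting the scaler on the full dataset before the train/test split leaks a small amount of information from the test set. A common remedy, shown here only as an optional sketch, is to wrap the scaler and the classifier in a scikit-learn Pipeline so the scaler is fitted on the training data only.
from sklearn.pipeline import Pipeline

# Hypothetical leakage-free setup: the scaler is fitted inside the pipeline,
# i.e. only on whatever data .fit() receives
pipe = Pipeline([
    ("scaler", MinMaxScaler()),
    ("rf", RandomForestClassifier(random_state=42)),
])
# pipe.fit(X_train, Y_train) would then scale X_train internally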
3. Splitting the dataset
Split the data into a training set and a test set.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=44)
# Print the shapes of the training and testing sets to verify the splitting
print("Shape of X_train:", X_train.shape)
print("Shape of X_test:", X_test.shape)
print("Shape of Y_train:", Y_train.shape)
print("Shape of Y_test:", Y_test.shape)
4. Training the model
Train the model with RandomForestClassifier. The loop below is a simple manual grid search: it tries several combinations of max_depth, random_state, and n_estimators, records the accuracies, and keeps the model with the highest test accuracy for the evaluation section.
# Initialize lists to store training and testing accuracies
scoreListRF_Train = []
scoreListRF_Test = []
# Keep track of the best model found so far (by test accuracy)
best_model = None
best_test_score = 0.0
'''
Ranges tried (can be widened):
max_dep ----------> (1, 5), (1, 10)
rand_state ----------> (1, 35), (1, 50)
n_est ----------> (1, 30), (1, 30)
'''
# Iterate over different values of max_depth
for max_dep in range(1, 5):
    # Iterate over different values of random_state
    for rand_state in range(1, 20):
        # Iterate over different values of n_estimators
        for n_est in range(1, 15):
            # Create a Random Forest model with the current combination of hyperparameters
            Model = RandomForestClassifier(n_estimators=n_est, random_state=rand_state, max_depth=max_dep)
            # Fit the model on the training data
            Model.fit(X_train, Y_train)
            # Calculate and store the training accuracy
            scoreListRF_Train.append(Model.score(X_train, Y_train))
            # Calculate and store the testing accuracy
            test_score = Model.score(X_test, Y_test)
            scoreListRF_Test.append(test_score)
            # Remember the model with the highest test accuracy
            if test_score > best_test_score:
                best_test_score = test_score
                best_model = Model
# Keep the best model for the evaluation section below
Model = best_model
# Find the maximum accuracy for both training and testing
RF_Accuracy_Train = max(scoreListRF_Train)
RF_Accuracy_Test = max(scoreListRF_Test)
# Print the best accuracies achieved
print(f"Random Forest best accuracy (Training): {RF_Accuracy_Train*100:.2f}%")
print(f"Random Forest best accuracy (Testing): {RF_Accuracy_Test*100:.2f}%")
# Print a success message indicating that the model has been trained successfully
print(colored("The Random Forest model has been trained successfully", "green", attrs=['reverse']))
5. Model evaluation
Evaluate the model's performance on the test set.
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
# Predict on the test set
y_pred = Model.predict(X_test)
# Compute the accuracy
accuracy = accuracy_score(Y_test, y_pred)
print(f'Accuracy: {accuracy:.2f}')
# Print the classification report
print(classification_report(Y_test, y_pred))
# Print the confusion matrix
print(confusion_matrix(Y_test, y_pred))
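For a more readable view of the confusion matrix, you can plot it as a heatmap. This is an optional visualization sketch using seaborn; the class names in the tick labels are assumptions based on the binarization above.
# Optional: visualize the confusion matrix as a heatmap
cm = confusion_matrix(Y_test, y_pred)
plt.figure(figsize=(5, 4))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=['Poor (0)', 'Good (1)'],
            yticklabels=['Poor (0)', 'Good (1)'])
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.title('Confusion Matrix')
plt.show()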
6. Feature importance
A convenient property of RandomForestClassifier is that it exposes the importance of each feature via the feature_importances_ attribute.
import matplotlib.pyplot as plt
import seaborn as sns
# Get the feature importances from the trained model
feature_importances = Model.feature_importances_
# Build a DataFrame pairing each feature with its importance
feature_importance_df = pd.DataFrame({
    'Feature': data.columns.tolist()[:-1],
    'Importance': feature_importances
})
# Sort the features by importance
feature_importance_df = feature_importance_df.sort_values(by='Importance', ascending=False)
# Plot the feature importances
plt.figure(figsize=(10, 8))
sns.barplot(x='Importance', y='Feature', data=feature_importance_df)
plt.title('Feature Importance')
plt.show()
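Impurity-based importances can be biased toward features with many distinct values, so permutation importance on the held-out test set is a common cross-check. The following is only an optional sketch of that approach.
from sklearn.inspection import permutation_importance

# Optional cross-check: permutation importance on the held-out test set
result = permutation_importance(Model, X_test, Y_test, n_repeats=10, random_state=42)
perm_df = pd.DataFrame({
    'Feature': data.columns.tolist()[:-1],
    'Importance': result.importances_mean,
}).sort_values(by='Importance', ascending=False)
print(perm_df)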
Conclusion
Using RandomForestClassifier, we were able to predict white wine quality as a binary good/poor label. Along the way we performed data preprocessing, label binarization, model training and evaluation, and analyzed feature importance. This is only a simple example; real-world applications usually call for more thorough preprocessing and model tuning.