
Machine Learning: Adjusting Imbalanced Samples with Oversampling and Undersampling for a Logistic Regression Model

Adjusting Imbalanced Samples with Oversampling and Undersampling for Logistic Regression

Contents

  • Adjusting Imbalanced Samples with Oversampling and Undersampling for Logistic Regression
    • 1 Oversampling
      • 1.1 Imbalanced Samples
      • 1.2 Concept
      • 1.3 Illustration
      • 1.4 The SMOTE Algorithm
      • 1.5 Importing the Algorithm
      • 1.6 Function and Usage
      • 1.7 Visualizing the Class Distribution
    • 2 Undersampling
      • 2.1 Concept
      • 2.2 Illustration
      • 2.3 Data-Processing Steps
      • 2.4 Visualizing the Class Distribution
    • 3 Training the Models

1 Oversampling


1.1 Imbalanced Samples

A dataset is imbalanced when the number of samples in its classes differs greatly, typically with one class containing far more samples than the others. In the credit-card fraud dataset used below, for example, fraudulent transactions make up well under 1% of all records.

1.2 Concept

Oversampling increases the number of minority-class samples until it matches the number of samples in the majority class.

1.3 Illustration

(Figure: illustration of oversampling, image not included)

1.4 The SMOTE Algorithm

(Figure: illustration of the SMOTE algorithm, image not included.) SMOTE (Synthetic Minority Over-sampling Technique) does not simply duplicate minority records; it generates new minority-class samples by interpolating between an existing minority sample and one of its nearest minority-class neighbors. A rough sketch of this idea follows.
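The helper below is only an illustration of the interpolation step, not imblearn's implementation; the function name and the example points are assumptions made for this sketch. A synthetic point is placed somewhere on the line segment between a minority sample and a randomly chosen minority neighbor:

import numpy as np

def smote_like_point(x_i, x_neighbor, rng=np.random.default_rng(0)):
    # Pick a random interpolation factor in [0, 1] and move that fraction
    # of the way from x_i toward its minority-class neighbor.
    lam = rng.uniform(0.0, 1.0)
    return x_i + lam * (x_neighbor - x_i)

# Two minority-class samples in a 2-feature space
x_i = np.array([1.0, 2.0])
x_neighbor = np.array([2.0, 3.0])
print(smote_like_point(x_i, x_neighbor))  # a new synthetic sample between the two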

1.5 Importing the Algorithm

from imblearn.over_sampling import SMOTE

1.6 Function and Usage

  • ov = SMOTE(random_state=0) creates the oversampler.

random_state is the random seed; using the same value makes the resampling reproducible, so the same synthetic samples are generated on every run.

  • x_ov, y_ov = ov.fit_resample(x_tr_all, y_tr_all)
    • x_ov and y_ov are the resampled features and labels after SMOTE has fitted the data and generated synthetic minority samples.
    • x_tr_all and y_tr_all are the original (imbalanced) training features and labels; a minimal usage sketch follows this list.
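A minimal usage sketch of fit_resample. The toy dataset built with sklearn's make_classification is an assumption made here purely for illustration; with the credit-card data you would pass x_tr_all and y_tr_all instead:

from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Build a deliberately imbalanced toy dataset: roughly 95% class 0, 5% class 1
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
print(Counter(y))       # heavily skewed toward class 0

ov = SMOTE(random_state=0)
X_ov, y_ov = ov.fit_resample(X, y)
print(Counter(y_ov))    # after resampling, both classes have the same count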

1.7 Visualizing the Class Distribution

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_predict, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn import metrics
from sklearn.metrics import confusion_matrix
import matplotlib as mpl

# Standardize the Amount column
scaler = StandardScaler()
data = pd.read_csv('creditcard.csv')
# z-score standardize Amount and store it back in the Amount column
data['Amount'] = scaler.fit_transform(data[['Amount']])
# Drop the Time column
data = data.drop(['Time'], axis=1)
# Feature matrix x: every column except the Class label
x_all = data.drop(['Class'], axis=1)
# Class is the label column
y_all = data.Class
# Split into training/test features and labels; test_size is the fraction held out for testing
x_tr_all, x_te_all, y_tr_all, y_te_all = \
    train_test_split(x_all, y_all, test_size=0.2, random_state=1000)
# Plot the imbalanced class distribution
mpl.rcParams['font.sans-serif'] = ['Microsoft YaHei']
mpl.rcParams['axes.unicode_minus'] = False
labels_count = y_all.value_counts()
labels_count.plot(kind='bar')
plt.title('Class counts (before resampling)')
plt.xlabel('Class')
plt.ylabel('Count')
plt.show()
# Oversample with SMOTE so the classes are balanced
from imblearn.over_sampling import SMOTE
ov = SMOTE(random_state=0)
x_tr_ov, y_tr_ov = ov.fit_resample(x_tr_all, y_tr_all)
# Plot the class distribution of the balanced (oversampled) training set
labels_count = y_tr_ov.value_counts()
labels_count.plot(kind='bar')
plt.title('Class counts (after oversampling)')
plt.xlabel('Class')
plt.ylabel('Count')
plt.show()

(Figures: bar charts of the class counts before and after oversampling, images not included)

2 Undersampling


2.1 Concept

Undersampling reduces the number of majority-class samples until it matches the number of samples in the minority class. Because records are discarded, important information may be lost.

2.2 Illustration

(Figure: illustration of undersampling, image not included)

2.3 Data-Processing Steps

  • pt_eg = data_tr[data_tr['Class'] == 0] and ng_eg = data_tr[data_tr['Class'] == 1] separate the two classes.
  • pt_eg = pt_eg.sample(len(ng_eg)) randomly draws as many majority-class rows as there are minority-class rows.
  • data_c = pd.concat([pt_eg, ng_eg]) then concatenates the two classes back into one balanced DataFrame; a small sketch of these steps follows this list.
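A small sketch of these three steps on a tiny hand-made DataFrame (the column and values below are illustrative assumptions, not the credit-card data):

import pandas as pd

# Toy data: 6 majority-class rows (Class 0) and 2 minority-class rows (Class 1)
data_tr = pd.DataFrame({
    'Amount': [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
    'Class':  [0,   0,   0,   0,   0,   0,   1,   1],
})

pt_eg = data_tr[data_tr['Class'] == 0]            # majority class
ng_eg = data_tr[data_tr['Class'] == 1]            # minority class
pt_eg = pt_eg.sample(len(ng_eg), random_state=4)  # keep only as many majority rows as minority rows
data_c = pd.concat([pt_eg, ng_eg])                # balanced data: 2 rows of each class
print(data_c['Class'].value_counts())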

2.4 Visualizing the Class Distribution

Code:

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_predict, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn import metrics
import matplotlib as mpl

# Standardize the Amount column
scaler = StandardScaler()
data = pd.read_csv('creditcard.csv')
# z-score standardize Amount and store it back in the Amount column
data['Amount'] = scaler.fit_transform(data[['Amount']])
# Drop the Time column
data = data.drop(['Time'], axis=1)
# Feature matrix x: every column except the Class label
x_all = data.drop(['Class'], axis=1)
# Class is the label column
y_all = data.Class
# Split into training/test features and labels; test_size is the fraction held out for testing
x_tr_all, x_te_all, y_tr_all, y_te_all = \
    train_test_split(x_all, y_all, test_size=0.2, random_state=1000)
# Plot the imbalanced class distribution
mpl.rcParams['font.sans-serif'] = ['Microsoft YaHei']
mpl.rcParams['axes.unicode_minus'] = False
labels_count = y_all.value_counts()
labels_count.plot(kind='bar')
plt.title('Class counts (before resampling)')
plt.xlabel('Class')
plt.ylabel('Count')
plt.show()
# Undersampling: attach the labels to the training features so the two classes
# can be separated, then draw as many majority-class rows as there are minority-class rows
np.random.seed(seed=4)   # random seed, so the sampled rows are reproducible
x_tr_all['Class'] = y_tr_all
data_tr = x_tr_all
pt_eg = data_tr[data_tr['Class'] == 0]
ng_eg = data_tr[data_tr['Class'] == 1]
pt_eg = pt_eg.sample(len(ng_eg))
data_c = pd.concat([pt_eg, ng_eg])
x_data_c = data_c.drop(['Class'], axis=1)
y_data_c = data_c['Class']
# Plot the class distribution of the balanced (undersampled) training set
labels_count = y_data_c.value_counts()
labels_count.plot(kind='bar')
plt.title('Class counts (after undersampling)')
plt.xlabel('Class')
plt.ylabel('Count')
plt.show()

(Figures: bar charts of the class counts before and after undersampling, images not included)

3 Training the Models

The code below trains a logistic regression model on the original imbalanced samples, on the undersampled data, and on the oversampled data, and prints each model's classification report on the same test set; the recall on the fraud class rises noticeably for the two resampled models. A short reminder of how recall is computed comes first, followed by the full code.
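Recall is the fraction of actual positives the model catches, recall = TP / (TP + FN). A minimal sketch with made-up label vectors (the values are illustrative assumptions, not output from the models below):

from sklearn.metrics import recall_score, classification_report

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # 4 actual frauds, 4 normal transactions
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]   # the model catches 2 of the 4 frauds
print(recall_score(y_true, y_pred))           # 2 / (2 + 2) = 0.5
print(classification_report(y_true, y_pred))  # the same per-class recall appears in the report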

Code:

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn import metrics

# Standardize the Amount column
scaler = StandardScaler()
data = pd.read_csv('creditcard.csv')
# z-score standardize Amount and store it back in the Amount column
data['Amount'] = scaler.fit_transform(data[['Amount']])
# Drop the Time column
data = data.drop(['Time'], axis=1)
# Feature matrix x: every column except the Class label
x_all = data.drop(['Class'], axis=1)
# Class is the label column
y_all = data.Class
# Split into training/test features and labels; test_size is the fraction held out for testing
x_tr_all, x_te_all, y_tr_all, y_te_all = \
    train_test_split(x_all, y_all, test_size=0.2, random_state=1000)

# 1) Imbalanced samples: cross-validate over the regularization strength C
scores = []
c_range = [0.01, 0.1, 1, 10, 100]
for i in c_range:
    lg = LogisticRegression(C=i, penalty='l2', solver='lbfgs', max_iter=1000)
    # recall of each of the 5 cross-validation folds
    score = cross_val_score(lg, x_tr_all, y_tr_all, cv=5, scoring='recall')
    # mean recall across the folds
    score_m = sum(score) / len(score)
    scores.append(score_m)
best_c = c_range[np.argmax(scores)]
lg = LogisticRegression(C=best_c, penalty='l2', max_iter=1000)
# fit the final model on the training set, then evaluate on the held-out test set
lg.fit(x_tr_all, y_tr_all)
te_pr = lg.predict(x_te_all)
print("Training on the imbalanced samples")
print(metrics.classification_report(y_te_all, te_pr))

# 2) Undersampling: attach labels, separate the classes, and sample the majority class down
np.random.seed(seed=4)   # random seed, so the sampled rows are reproducible
x_tr_all['Class'] = y_tr_all
data_tr = x_tr_all
pt_eg = data_tr[data_tr['Class'] == 0]
ng_eg = data_tr[data_tr['Class'] == 1]
pt_eg = pt_eg.sample(len(ng_eg))
data_c = pd.concat([pt_eg, ng_eg])
x_data_c = data_c.drop(['Class'], axis=1)
# Class is the label column
y_data_c = data_c.Class
# Cross-validate over the regularization strength C
scores = []
c_range = [0.01, 0.1, 1, 10, 100]
for i in c_range:
    lg = LogisticRegression(C=i, penalty='l2', solver='lbfgs', max_iter=1000)
    # recall of each of the 5 cross-validation folds
    score = cross_val_score(lg, x_data_c, y_data_c, cv=5, scoring='recall')
    # mean recall across the folds
    score_m = sum(score) / len(score)
    scores.append(score_m)
best_c = c_range[np.argmax(scores)]
# Build the final model with the best C found above
lg = LogisticRegression(C=best_c, penalty='l2', max_iter=1000)
lg.fit(x_data_c, y_data_c)
te_pr = lg.predict(x_te_all)
print("Training on the undersampled (balanced) samples")
print(metrics.classification_report(y_te_all, te_pr))

# 3) Oversampling: reload the data, split again, and balance the training set with SMOTE
scaler = StandardScaler()
data = pd.read_csv('creditcard.csv')
# z-score standardize Amount and store it back in the Amount column
data['Amount'] = scaler.fit_transform(data[['Amount']])
# Drop the Time column
data = data.drop(['Time'], axis=1)
# Feature matrix x: every column except the Class label
x_all = data.drop(['Class'], axis=1)
# Class is the label column
y_all = data.Class
x_tr_all, x_te_all, y_tr_all, y_te_all = \
    train_test_split(x_all, y_all, test_size=0.2, random_state=1000)
from imblearn.over_sampling import SMOTE
ov = SMOTE(random_state=0)
x_tr_ov, y_tr_ov = ov.fit_resample(x_tr_all, y_tr_all)
# Cross-validate over the regularization strength C
scores = []
c_range = [0.01, 0.1, 1, 10, 100]
for i in c_range:
    lg = LogisticRegression(C=i, penalty='l2', solver='lbfgs', max_iter=1000)
    # recall of each of the 5 cross-validation folds
    score = cross_val_score(lg, x_tr_ov, y_tr_ov, cv=5, scoring='recall')
    # mean recall across the folds
    score_m = sum(score) / len(score)
    scores.append(score_m)
best_c = c_range[np.argmax(scores)]
lg = LogisticRegression(C=best_c, penalty='l2', max_iter=1000)
lg.fit(x_tr_ov, y_tr_ov)
te_pr1 = lg.predict(x_te_all)
print("Training on the oversampled (SMOTE-balanced) samples")
print(metrics.classification_report(y_te_all, te_pr1))

Output:
(Figure: the three classification reports printed above, image not included)

