
【Python · PyTorch】Recurrent Neural Networks (RNN): Basic Applications

  • 1. Introduction
  • 2. Simulated Passenger-Flow Prediction (converting the dataset to Tensors)
    • 2.1 Dataset
    • 2.2 Training Procedure
  • 3. Simulated Stock Prediction (loading the dataset with DataLoader)
    • 3.1 IBM Dataset
      • 3.1.1 Dataset
      • 3.1.2 Training Procedure
        • ① Plotting with Matplotlib
        • ② Plotting with Seaborn
    • 3.2 Amazon Dataset
      • 3.2.1 Dataset
      • 3.2.2 Training Results

1. Introduction

Earlier posts covered the structure of the RNN and its variants LSTM and GRU. This chapter introduces the corresponding code and demonstrates simple application scenarios on simulated data.

Core RNN constructor:

nn.RNN(input_size, hidden_size, num_layers, nonlinearity, bias, batch_first, dropout, bidirectional)

Parameters:

  • input_size: number of input features (the dimensionality of each input vector)
  • hidden_size: number of hidden units
  • num_layers: number of stacked RNN layers, default 1
  • nonlinearity: activation function, 'tanh' (default) or 'relu'
  • bias: whether to use bias terms, default True
  • batch_first: whether the batch dimension comes first, default False
    • True: input is (batch, seq_len, input_size)
    • False: input is (seq_len, batch, input_size)
  • dropout: dropout probability applied between stacked layers (except the last), range 0–1, default 0
  • bidirectional: whether to use a bidirectional RNN, default False

Arguments at call time:

  • input: the input sequence
  • h[n-1]: the incoming hidden state

Returned values:

  • output: the output of this layer at every time step

  • h[n]: the hidden state passed onward

GRU takes the same call arguments as RNN, but LSTM differs slightly:

  • input/output: this layer's input/output
  • (h[n-1], c[n-1]) in / (h[n], c[n]) out: the hidden state and cell state passed along as a tuple

On input, seq_len is the sequence length (how many steps the state is propagated): i.e. the input tensor has shape (seq_len, batch, input_size).

  • Take language modeling as an example: suppose each sentence is 30 words long, each word is a 50-dimensional vector, and 10 sentences are trained at once.
  • Then seq_len=30, input_size=50, and batch_size=10, and the LSTM steps forward through the data 30 times before producing the final output.
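To make these shapes concrete, here is a minimal sketch (dimensions chosen to match the sentence example above; the data is random) showing what nn.LSTM produces:

```python
import torch
import torch.nn as nn

seq_len, batch_size, input_size, hidden_size = 30, 10, 50, 64

# batch_first=False (the default): input is (seq_len, batch, input_size)
lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=1)
x = torch.randn(seq_len, batch_size, input_size)

# output collects the hidden state at every time step;
# (hn, cn) are the final hidden and cell states
output, (hn, cn) = lstm(x)

print(output.shape)  # torch.Size([30, 10, 64])
print(hn.shape)      # torch.Size([1, 10, 64])
print(cn.shape)      # torch.Size([1, 10, 64])
```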

2. Simulated Passenger-Flow Prediction (converting the dataset to Tensors)

2.1 Dataset

Simulated airline-passengers data, commonly used for practicing machine-learning algorithms.

Airline passenger dataset

2.2 Training Procedure

① Import third-party libraries

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

Select the device

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")

② Read the dataset

Read the data from a CSV/Excel file

# simulated airline data
pf = pd.read_csv('./data/flight.csv')
month, passengers = pf['Month'], pf['Passengers']
scaler = MinMaxScaler(feature_range=(-1, 1))
passengers = scaler.fit_transform(passengers.values.reshape(-1,1))
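Because every prediction later has to be mapped back to passenger counts with inverse_transform, it helps to see the scaler round-trip on a toy array (the values here are made up):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

values = np.array([112.0, 118.0, 132.0, 129.0]).reshape(-1, 1)

scaler = MinMaxScaler(feature_range=(-1, 1))
scaled = scaler.fit_transform(values)        # min maps to -1, max maps to 1
restored = scaler.inverse_transform(scaled)  # back to the original scale

print(scaled.min(), scaled.max())  # -1.0 1.0
print(restored.ravel())            # [112. 118. 132. 129.]
```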

Define the train/test split function

def split_data(passengers, lookback):
    # 1. segment the series
    passengers = np.array(passengers)
    segments = []

    # create every possible subsequence of length lookback
    for index in range(len(passengers) - lookback):
        segments.append(passengers[index: index + lookback])

    segments = np.array(segments)

    # 2. determine the train and test sizes
    test_set_size = int(np.round(0.2 * segments.shape[0]))
    train_set_size = segments.shape[0] - (test_set_size)
    
    # 3. split into train/test and x/y
    x_train = segments[:train_set_size,:-1]
    y_train = segments[:train_set_size,-1]   # the last element of each sequence is y
    x_test = segments[train_set_size:,:-1]
    y_test = segments[train_set_size:,-1]
    
    return x_train, y_train, x_test, y_test

Split the dataset

lookback = 20  # sequence length
x_train, y_train, x_test, y_test = split_data(passengers, lookback)
print('x_train.shape = ',x_train.shape)
print('y_train.shape = ',y_train.shape)
print('x_test.shape = ',x_test.shape)
print('y_test.shape = ',y_test.shape)

Convert the data to Tensors

x_train = torch.from_numpy(x_train).type(torch.Tensor).to(device)
x_test = torch.from_numpy(x_test).type(torch.Tensor).to(device)
y_train = torch.from_numpy(y_train).type(torch.Tensor).to(device)
y_test = torch.from_numpy(y_test).type(torch.Tensor).to(device)

③ Build the networks

Define the input / hidden / output dimensions

input_dim = 1
hidden_dim = 32
num_layers = 2
output_dim = 1

Define the three networks

class RNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(RNN, self).__init__()
        self.rnn = nn.RNN(input_size=input_dim, hidden_size=hidden_dim, num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)
    def forward(self, x):
        # output shape: [batch_size, seq_len, hidden_size] (batch_first=True)
        # hn shape:     [num_layers, batch_size, hidden_size]
        output, hn = self.rnn(x)
        # keep only the last time step
        output = output[:,-1,:]
        # map the hidden state to the output dimension
        output = self.fc(output)
        return output
class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(LSTM, self).__init__()

        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.output_dim = output_dim
        
        self.lstm = nn.LSTM(input_size=input_dim, hidden_size=hidden_dim, num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)
    def forward(self, x):
        # initialize the hidden state and cell state
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).to(device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).to(device)
        
        output, (hn, cn) = self.lstm(x, (h0, c0))
        output = output[:,-1,:]
        output = self.fc(output)
        return output
class GRU(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(GRU, self).__init__()

        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.output_dim = output_dim
        
        self.gru = nn.GRU(input_size=input_dim, hidden_size=hidden_dim, num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)
    def forward(self, x):
        # initialize the hidden state (a GRU has no cell state)
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).to(device)
        
        output, hn = self.gru(x, h0)
        output = output[:,-1,:]
        output = self.fc(output)
        return output
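Before training, a quick sanity check confirms the shape flow these models rely on: with the dimensions defined above, an RNN plus a linear head maps a (batch, seq_len, 1) input to a (batch, 1) output. This sketch uses the raw layers rather than the classes (batch size 8 and seq_len 19 are arbitrary):

```python
import torch
import torch.nn as nn

# same dimensions as above: input_dim=1, hidden_dim=32, num_layers=2, output_dim=1
rnn_layer = nn.RNN(input_size=1, hidden_size=32, num_layers=2, batch_first=True)
fc = nn.Linear(32, 1)

x = torch.randn(8, 19, 1)      # (batch, seq_len=lookback-1, input_dim)
output, hn = rnn_layer(x)      # output: (batch, seq_len, hidden_size)
y = fc(output[:, -1, :])       # last time step -> (batch, output_dim)

print(output.shape)  # torch.Size([8, 19, 32])
print(y.shape)       # torch.Size([8, 1])
```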

④ Train the networks

Setup

# random seed
torch.manual_seed(20)
# instantiate the networks
rnn = RNN(input_dim, hidden_dim, num_layers, output_dim)
lstm = LSTM(input_dim, hidden_dim, num_layers, output_dim)
gru = GRU(input_dim, hidden_dim, num_layers, output_dim)

# move the networks to the device
rnn.to(device)
lstm.to(device)
gru.to(device)

# loss functions
rnn_loss_function = nn.MSELoss()
lstm_loss_function = nn.MSELoss()
gru_loss_function = nn.MSELoss()

# optimizers
rnn_optimizer = torch.optim.Adam(rnn.parameters(), lr=0.001)
lstm_optimizer = torch.optim.Adam(lstm.parameters(), lr=0.001)
gru_optimizer = torch.optim.Adam(gru.parameters(), lr=0.001)


# number of epochs
epochs = 200
# training-loss history
rnn_final_losses = []
lstm_final_losses = []
gru_final_losses = []

Define the training functions

def train_rnn():
    rnn.train()
    for epoch in range(epochs):
        # 1. forward pass
        y_train_pred_rnn = rnn(x_train)
        # 2. compute the loss
        rnn_loss = rnn_loss_function(y_train_pred_rnn, y_train)
        rnn_final_losses.append(rnn_loss.item())
        # 3. backward pass
        rnn_optimizer.zero_grad()
        rnn_loss.backward()
        # 4. update the parameters
        rnn_optimizer.step()

        if epoch % 10 == 0:
            print("RNN:: Epoch: {}, Loss: {} ".format(epoch, rnn_loss.item()))
    return y_train_pred_rnn

def train_lstm():
    lstm.train()
    for epoch in range(epochs):
        # 1. forward pass
        y_train_pred_lstm = lstm(x_train)
        # 2. compute the loss
        lstm_loss = lstm_loss_function(y_train_pred_lstm, y_train)
        lstm_final_losses.append(lstm_loss.item())
        # 3. backward pass
        lstm_optimizer.zero_grad()
        lstm_loss.backward()
        # 4. update the parameters
        lstm_optimizer.step()

        if epoch % 10 == 0:
            print("LSTM:: Epoch: {}, Loss: {} ".format(epoch, lstm_loss.item()))
    return y_train_pred_lstm

def train_gru():
    gru.train()
    for epoch in range(epochs):
        # 1. forward pass
        y_train_pred_gru = gru(x_train)
        # 2. compute the loss
        gru_loss = gru_loss_function(y_train_pred_gru, y_train)
        gru_final_losses.append(gru_loss.item())
        # 3. backward pass
        gru_optimizer.zero_grad()
        gru_loss.backward()
        # 4. update the parameters
        gru_optimizer.step()

        if epoch % 10 == 0:
            print("GRU:: Epoch: {}, Loss: {} ".format(epoch, gru_loss.item()))
    return y_train_pred_gru

Run the training

y_train_pred_rnn = train_rnn()
torch.save(rnn.state_dict(), "rnn_test.pth")
print("Saved PyTorch Model State to rnn_test.pth")

y_train_pred_lstm = train_lstm()
torch.save(lstm.state_dict(), "lstm_test.pth")
print("Saved PyTorch Model State to lstm_test.pth")

y_train_pred_gru = train_gru()
torch.save(gru.state_dict(), "gru_test.pth")
print("Saved PyTorch Model State to gru_test.pth")

Training log

Plot the training results (final epoch)

Inverse-transform the data and convert to DataFrames

original = pd.DataFrame(scaler.inverse_transform(torch.Tensor.cpu(y_train).detach().numpy()))

rnn_predict = pd.DataFrame(scaler.inverse_transform(torch.Tensor.cpu(y_train_pred_rnn).detach().numpy()))
lstm_predict = pd.DataFrame(scaler.inverse_transform(torch.Tensor.cpu(y_train_pred_lstm).detach().numpy()))
gru_predict = pd.DataFrame(scaler.inverse_transform(torch.Tensor.cpu(y_train_pred_gru).detach().numpy()))

Run the plotting code

import seaborn as sns
sns.set_style("darkgrid") 

fig = plt.figure(figsize=(16, 6))


# left panel: the fitted trend
plt.subplot(1, 2, 1)
ax = sns.lineplot(x = original.index, y = original[0], label="Data", color='blue')
ax = sns.lineplot(x = rnn_predict.index, y = rnn_predict[0], label="RNN Prediction", color='red')
ax = sns.lineplot(x = lstm_predict.index, y = lstm_predict[0], label="LSTM Prediction", color='darkred')
ax = sns.lineplot(x = gru_predict.index, y = gru_predict[0], label="GRU Prediction", color='black')

ax.set_title('Passengers', size = 14, fontweight='bold')
ax.set_xlabel("Days", size = 14)
ax.set_ylabel("Members", size = 14)
ax.set_xticklabels('', size=10)

# right panel: the training loss
plt.subplot(1, 2, 2)
ax = sns.lineplot(data=rnn_final_losses, label="RNN Loss", color='red')
ax = sns.lineplot(data=lstm_final_losses, label="LSTM Loss", color='darkblue')
ax = sns.lineplot(data=gru_final_losses, label="GRU Loss", color='black')
ax.set_xlabel("Epoch", size = 14)
ax.set_ylabel("Loss", size = 14)
ax.set_title("Training Loss", size = 14, fontweight='bold')
plt.show()

Plots

⑤ Test the networks

Define the test functions

# Test RNN
def test_rnn():
    rnn.eval()
    with torch.no_grad():
        y_test_pred_rnn = rnn(x_test)
    return y_test_pred_rnn
    
# Test LSTM
def test_lstm():
    lstm.eval()
    with torch.no_grad():
        y_test_pred_lstm = lstm(x_test)
    return y_test_pred_lstm

# Test GRU
def test_gru():
    gru.eval()
    with torch.no_grad():
        y_test_pred_gru = gru(x_test)
    return y_test_pred_gru

Run the tests

y_test_pred_rnn  = test_rnn()
y_test_pred_lstm = test_lstm()
y_test_pred_gru  = test_gru()

Inverse-transform the data and convert to DataFrames

test_original = pd.DataFrame(scaler.inverse_transform(torch.Tensor.cpu(y_test).detach().numpy()))

test_rnn_predict = pd.DataFrame(scaler.inverse_transform(torch.Tensor.cpu(y_test_pred_rnn).detach().numpy()))
test_lstm_predict = pd.DataFrame(scaler.inverse_transform(torch.Tensor.cpu(y_test_pred_lstm).detach().numpy()))
test_gru_predict = pd.DataFrame(scaler.inverse_transform(torch.Tensor.cpu(y_test_pred_gru).detach().numpy()))

Run the plotting code

import seaborn as sns
sns.set_style("darkgrid") 

ax = sns.lineplot(x = test_original.index, y = test_original[0], label="Data", color='blue')

ax = sns.lineplot(x = test_rnn_predict.index, y = test_rnn_predict[0], label="RNN Prediction", color='red')
ax = sns.lineplot(x = test_lstm_predict.index, y = test_lstm_predict[0], label="LSTM Prediction", color='darkred')
ax = sns.lineplot(x = test_gru_predict.index, y = test_gru_predict[0], label="GRU Prediction", color='black')

ax.set_title('Passengers', size = 14, fontweight='bold')
ax.set_xlabel("Days", size = 14)
ax.set_ylabel("Members", size = 14)
ax.set_xticklabels('', size=10)

plt.show()

Test results

3. Simulated Stock Prediction (loading the dataset with DataLoader)

3.1 IBM Dataset

3.1.1 Dataset

IBM stock-price data, commonly used for practicing machine-learning algorithms.

IBM

3.1.2 Training Procedure

① Import third-party libraries

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

Select the device

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")

② Read the dataset

PyTorch provides a standard pattern for reading datasets:

  • Define an XXDataset class that subclasses torch.utils.data.Dataset and overrides __getitem__() and __len__()
  • Use torch.utils.data.DataLoader to load the data returned by the custom Dataset instance

For example, the official CIFAR10 dataset is read this way:

import torchvision.datasets
from torch.utils.data import DataLoader

train_data=torchvision.datasets.CIFAR10(root="datasets",train=False,transform=torchvision.transforms.ToTensor(),download=True)
train_loader=DataLoader(dataset=train_data,batch_size=4,shuffle=True)
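The same pattern works for any Dataset. As a sketch with an in-memory TensorDataset of made-up data instead of CIFAR10 (so nothing needs downloading), here is what iterating a DataLoader yields:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# 10 samples of 3 features each, with scalar targets (made-up data)
features = torch.arange(30, dtype=torch.float32).reshape(10, 3)
targets = torch.arange(10, dtype=torch.float32)

dataset = TensorDataset(features, targets)
loader = DataLoader(dataset, batch_size=4, shuffle=False, drop_last=False)

# with drop_last=False the final short batch is kept
for x_batch, y_batch in loader:
    print(x_batch.shape, y_batch.shape)
# -> two batches of shape (4, 3), then a leftover batch of shape (2, 3)
```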

DataLoader constructor parameters:

DataLoader

The most commonly used parameters are:

  • dataset: the Dataset instance
  • batch_size: batch size
  • shuffle: whether to reshuffle the data every epoch
  • num_workers: number of worker processes for loading data (default 0: use only the main process)
  • drop_last: whether to drop the final incomplete batch when the data length is not divisible by batch_size
  • sampler: the strategy for drawing samples from the dataset
  • batch_sampler: like sampler, but returns one batch of indices at a time; mutually exclusive with batch_size, shuffle, sampler, and drop_last

DataLoader parameters

Brief notes on selected parameters:

How num_workers works

The DataLoader spawns num_workers worker processes and uses the batch_sampler to assign batches to them; the workers load their batches into RAM. When iterated, the DataLoader fetches batches from RAM; if none is ready yet, it waits for the workers to finish loading the next batches into RAM and then fetches again.

sampler / batch_sampler sampling strategies

  • RandomSampler (random sampling)
    • Draws samples from the dataset at random; a random seed can be set to make the sampling reproducible
  • SubsetRandomSampler (subset random sampling)
    • Draws samples at random from a specified subset of the dataset; useful for splitting it (train set, validation set, etc.)
  • WeightedRandomSampler (weighted random sampling)
    • Draws samples according to specified per-sample weights; useful for class imbalance
  • BatchSampler (batch sampling)
    • Groups sample indices into batches, each containing the specified number of indices
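For instance, SubsetRandomSampler can split a single Dataset into train and validation loaders without copying data. A sketch (the 80/20 split and made-up data are arbitrary choices):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, SubsetRandomSampler

data = torch.randn(100, 5)
labels = torch.randint(0, 2, (100,))
dataset = TensorDataset(data, labels)

# arbitrary 80/20 split of the shuffled indices
indices = torch.randperm(100).tolist()
train_sampler = SubsetRandomSampler(indices[:80])
val_sampler = SubsetRandomSampler(indices[80:])

# note: sampler is mutually exclusive with shuffle
train_loader = DataLoader(dataset, batch_size=16, sampler=train_sampler)
val_loader = DataLoader(dataset, batch_size=16, sampler=val_sampler)

print(sum(x.shape[0] for x, _ in train_loader))  # 80
print(sum(x.shape[0] for x, _ in val_loader))    # 20
```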

A nonzero num_workers can sometimes stall the program due to synchronization issues, so the author sets num_workers=0 here to avoid that.


Define a custom stock-price Dataset class

# Define StockDataset: subclass Dataset and override __getitem__() and __len__()
class StockDataset(torch.utils.data.Dataset):
    # store the data and the sequence length
    def __init__(self, data, seq_length):
        self.data = data
        self.seq_length = seq_length
        
    # idx indexes one sample; return the input sequence and its target
    def __getitem__(self, idx):
        x = self.data[idx:idx + self.seq_length]  # input sequence
        y = self.data[idx + self.seq_length]      # target value
        return torch.tensor(x, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)
        
    # number of samples; DataLoader calls this
    def __len__(self):
        return len(self.data) - self.seq_length
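As a quick check of this class, the sketch below uses a self-contained copy with synthetic prices standing in for origin_data (the post never shows how origin_data is loaded; in practice it would be a scaled CSV column, as in section 2) and verifies the shapes each sample has:

```python
import numpy as np
import torch

class StockDataset(torch.utils.data.Dataset):
    def __init__(self, data, seq_length):
        self.data = data
        self.seq_length = seq_length
    def __getitem__(self, idx):
        x = self.data[idx:idx + self.seq_length]
        y = self.data[idx + self.seq_length]
        return torch.tensor(x, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)
    def __len__(self):
        return len(self.data) - self.seq_length

# synthetic stand-in for origin_data: 100 "prices" with shape (100, 1)
origin_data = np.sin(np.linspace(0, 10, 100)).reshape(-1, 1)

ds = StockDataset(origin_data, seq_length=20)
x0, y0 = ds[0]
print(len(ds))   # 80 samples: 100 - 20
print(x0.shape)  # torch.Size([20, 1])
print(y0.shape)  # torch.Size([1])
```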

Read the Dataset with a DataLoader

train_length = 2000   # training length
seq_length   = 20     # sequence length
batch_size   = 32     # batch size

# wrap the Datasets in DataLoaders
train_dataset = StockDataset(origin_data[:train_length], seq_length)
test_dataset = StockDataset(origin_data[train_length:], seq_length)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size = batch_size, shuffle=True, num_workers = 0)
test_dataloader  = torch.utils.data.DataLoader(test_dataset, batch_size = batch_size, shuffle=True, num_workers = 0)

③ Build the network

class RNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(RNN, self).__init__()
        self.rnn = nn.RNN(input_size=input_dim, hidden_size=hidden_dim, num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)
    def forward(self, x):
        output, hn = self.rnn(x)
        output = output[:,-1,:]
        output = self.fc(output)
        return output

④ Train the network

Setup

# random seed
torch.manual_seed(20)
# instantiate the network
rnn = RNN(input_dim, hidden_dim, num_layers, output_dim)
# move the network to the device
rnn.to(device)
# loss function
rnn_loss_function = nn.MSELoss()
# optimizer
rnn_optimizer = torch.optim.Adam(rnn.parameters(), lr=0.001)
# number of epochs
epochs = 50
# training-loss history
rnn_final_losses = []

Define the training function

def train_rnn():
    min_loss = 1.0
    for epoch in range(epochs):
        rnn.train()
        rnn_loss = 0.0
        for x_train, y_train in train_dataloader:
            x_train = x_train.to(device=device)
            y_train = y_train.to(device=device)
            # 1. forward pass
            y_train_pred_rnn = rnn(x_train)
            # 2. compute the loss
            loss_func_result = rnn_loss_function(y_train_pred_rnn, y_train)
            rnn_loss += loss_func_result.item()
            # 3. backward pass
            rnn_optimizer.zero_grad()
            loss_func_result.backward()
            # 4. update the parameters
            rnn_optimizer.step()
        rnn_loss = rnn_loss / len(train_dataloader)
        rnn_final_losses.append(rnn_loss)
        
        # checkpoint whenever the loss improves
        if(rnn_loss < min_loss):
            min_loss = rnn_loss
            torch.save(rnn.state_dict(), "rnn_test.pth")
            print("Saved PyTorch Model State to rnn_test.pth")
        
        if epoch % 10 == 0:
            print("RNN:: Epoch: {}, Loss: {} ".format(epoch, rnn_loss))

Run the training (the function saves its best checkpoint and returns nothing)

train_rnn()

Training log

⑤ Test the network

# predict with the best saved model
rnn.load_state_dict(torch.load("rnn_test.pth"))
rnn.eval()
with torch.no_grad():
    # assemble all training input sequences
    X_train = torch.stack([x for x, y in train_dataset])
    train_predictions = rnn(X_train.to(device)).squeeze().cpu().detach().numpy()

with torch.no_grad():
    # assemble all test input sequences
    X_test = torch.stack([x for x, y in test_dataset])
    test_predictions = rnn(X_test.to(device)).squeeze().cpu().detach().numpy()

# inverse-transform the predictions
origin_data = scaler.inverse_transform(origin_data.reshape(-1, 1))
train_predictions = scaler.inverse_transform(train_predictions.reshape(-1, 1))
test_predictions = scaler.inverse_transform(test_predictions.reshape(-1, 1))
① Plotting with Matplotlib

Run the plotting code

# plot the results
plt.figure(figsize=(12, 6))
plt.plot(origin_data, label='Original Data')
plt.plot(range(seq_length,seq_length+len(train_predictions)),train_predictions, label='RNN Train Predictions', linestyle='--')
plt.plot(range(seq_length+train_length,len(test_predictions)+seq_length+train_length), test_predictions, label='RNN Test Predictions', linestyle='--')

plt.legend()
plt.title("Training Set Predictions")
plt.xlabel("Time Step")
plt.ylabel("Value")
plt.show()

Matplotlib plot

② Plotting with Seaborn

Convert the data to DataFrames

original = pd.DataFrame(origin_data)
df_train_predictions = pd.DataFrame(train_predictions)
df_test_predictions = pd.DataFrame(test_predictions)

Run the plotting code

import seaborn as sns
sns.set_style("darkgrid") 

fig = plt.figure(figsize=(16, 6))
fig.subplots_adjust(hspace=0.2, wspace=0.2)

# left panel: the fitted trend
plt.subplot(1, 2, 1)
ax = sns.lineplot(x = original.index, y = original[0], label="Original Data", color='blue')
ax = sns.lineplot(x = df_train_predictions.index + seq_length, y = df_train_predictions[0], label="RNN Train Prediction", color='red')
ax = sns.lineplot(x = df_test_predictions.index + seq_length + train_length, y = df_test_predictions[0], label="RNN Test Prediction", color='darkred')

ax.set_title('Stock', size = 14, fontweight='bold')
ax.set_xlabel("Days", size = 14)
ax.set_ylabel("Value", size = 14)
ax.set_xticklabels('', size=10)

# right panel: the training loss (only the RNN was trained in this section,
# so lstm_final_losses/gru_final_losses from section 2 are not plotted here)
plt.subplot(1, 2, 2)
ax = sns.lineplot(data=rnn_final_losses, label="RNN Loss", color='red')
ax.set_xlabel("Epoch", size = 14)
ax.set_ylabel("Loss", size = 14)
ax.set_title("Training Loss", size = 14, fontweight='bold')
plt.show()

Seaborn plot

3.2 Amazon Dataset

3.2.1 Dataset

Amazon stock-price data, commonly used for practicing machine-learning algorithms.

Amazon

3.2.2 Training Results

The code mirrors the IBM example, so only the results are shown here.


Training log

Training

Matplotlib plot

Matplotlib plot

Seaborn plot

Seaborn plot

