Hands-On Graph Neural Networks (3): Node Classification with Graph Neural Networks, from Theory to Practice
Preface
In the previous tutorials we gained an initial understanding of graph neural networks. This tutorial takes a closer look at how to use graph neural networks (GNNs) to solve node classification problems. In a node classification task we usually know the ground-truth labels of only a small number of nodes and have to infer the labels of all remaining nodes, a setting commonly referred to as transductive learning.
Installing the Required Packages
First, we install the required Python packages and record the version of PyTorch currently in use.
import os
import torch
os.environ['TORCH'] = torch.__version__
print(torch.__version__)
# The following commands can be used to install the required libraries
# !pip install -q torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}.html
# !pip install -q torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}.html
# !pip install -q git+https://github.com/pyg-team/pytorch_geometric.git
Visualization Helper Function
To inspect node embeddings visually, we define a helper function that projects them to two dimensions with t-SNE and plots the result.
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
def visualize(h, color):
    # Project the embeddings to 2D with t-SNE, then draw a scatter plot colored by class
    z = TSNE(n_components=2).fit_transform(h.detach().cpu().numpy())
    plt.figure(figsize=(10, 10))
    plt.xticks([])
    plt.yticks([])
    plt.scatter(z[:, 0], z[:, 1], s=70, c=color, cmap="Set2")
    plt.show()
The Dataset
We use the Cora dataset for this node classification task. Cora is a citation network in which each node represents a document, described by a 1433-dimensional bag-of-words feature vector. Two documents are connected if one cites the other. The task is to infer the category of each document, out of 7 classes in total.
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import NormalizeFeatures
dataset = Planetoid(root='data/Planetoid', name='Cora', transform=NormalizeFeatures())
print()
print(f'Dataset: {dataset}:')
print('======================')
print(f'Number of graphs: {len(dataset)}')
print(f'Number of features: {dataset.num_features}')
print(f'Number of classes: {dataset.num_classes}')
data = dataset[0]  # Get the first graph object
print()
print(data)
print('===========================================================================================================')
# Gather statistics about the graph
print(f'Number of nodes: {data.num_nodes}')
print(f'Number of edges: {data.num_edges}')
print(f'Average node degree: {data.num_edges / data.num_nodes:.2f}')
print(f'Number of training nodes: {data.train_mask.sum()}')
print(f'Training node label rate: {int(data.train_mask.sum()) / data.num_nodes:.2f}')
print(f'Has isolated nodes: {data.has_isolated_nodes()}')
print(f'Has self-loops: {data.has_self_loops()}')
print(f'Is undirected: {data.is_undirected()}')
The Cora network contains 2708 nodes and 10556 edges, with an average node degree of 3.9. Only 140 nodes are used for training (20 per class), i.e. a training label rate of just 5%. Unlike the earlier KarateClub network, this graph additionally carries val_mask and test_mask attributes, which mark the nodes used for validation and testing. We also pass transform=NormalizeFeatures() to row-normalize the bag-of-words input feature vectors.
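As a quick sanity check (a small sketch that only uses the data object defined above), we can count the validation and test nodes and confirm that NormalizeFeatures() made every non-empty feature row sum to one:
print(f'Number of validation nodes: {int(data.val_mask.sum())}')  # 500 in the standard Planetoid split
print(f'Number of test nodes: {int(data.test_mask.sum())}')       # 1000 in the standard Planetoid split
row_sums = data.x.sum(dim=-1)
print(row_sums[row_sums > 0][:5])  # each non-empty row sums to ~1.0 after row normalization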
A Multi-Layer Perceptron (MLP) Model
Defining the MLP Model
In theory, we should be able to infer a document's category from its content alone, i.e. its bag-of-words feature representation, without using any relational information between documents. Let us verify this by building a simple MLP.
import torch
from torch.nn import Linear
import torch.nn.functional as F
class MLP(torch.nn.Module):
    def __init__(self, hidden_channels):
        super().__init__()
        torch.manual_seed(12345)
        self.lin1 = Linear(dataset.num_features, hidden_channels)
        self.lin2 = Linear(hidden_channels, dataset.num_classes)

    def forward(self, x):
        x = self.lin1(x)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.lin2(x)
        return x
model = MLP(hidden_channels=16)
print(model)
This MLP consists of two linear layers with a ReLU activation and dropout regularization in between. The first linear layer reduces the 1433-dimensional feature vector to a 16-dimensional hidden embedding; the second maps that embedding onto the 7 classes.
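As a quick check of those dimensions (a small sketch reusing the model and data objects defined above), a forward pass over all node features should yield one 7-dimensional score vector per node:
with torch.no_grad():
    out = model(data.x)
print(out.shape)  # torch.Size([2708, 7]): one score per class for each of the 2708 nodes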
Training and Testing the MLP
from IPython.display import Javascript  # Restrict the output cell height (Colab only)
display(Javascript('''google.colab.output.setIframeHeight(0, true, {maxHeight: 300})'''))

model = MLP(hidden_channels=16)
criterion = torch.nn.CrossEntropyLoss()  # Define the loss function
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)  # Define the optimizer

def train():
    model.train()
    optimizer.zero_grad()  # Clear gradients
    out = model(data.x)  # Forward pass
    loss = criterion(out[data.train_mask], data.y[data.train_mask])  # Compute loss on training nodes only
    loss.backward()  # Backward pass
    optimizer.step()  # Update parameters
    return loss

def test():
    model.eval()
    out = model(data.x)
    pred = out.argmax(dim=1)  # Pick the class with the highest score
    test_correct = pred[data.test_mask] == data.y[data.test_mask]  # Compare predictions with ground truth
    test_acc = int(test_correct.sum()) / int(data.test_mask.sum())  # Compute the accuracy
    return test_acc

for epoch in range(1, 201):
    loss = train()
    print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}')

test_acc = test()
print(f'Test Accuracy: {test_acc:.4f}')
Epoch: 001, Loss: 1.9615
Epoch: 002, Loss: 1.9557
Epoch: 003, Loss: 1.9505
Epoch: 004, Loss: 1.9423
Epoch: 005, Loss: 1.9327
Epoch: 006, Loss: 1.9279
Epoch: 007, Loss: 1.9144
Epoch: 008, Loss: 1.9087
Epoch: 009, Loss: 1.9023
Epoch: 010, Loss: 1.8893
Epoch: 011, Loss: 1.8776
Epoch: 012, Loss: 1.8594
Epoch: 013, Loss: 1.8457
Epoch: 014, Loss: 1.8365
Epoch: 015, Loss: 1.8280
Epoch: 016, Loss: 1.7965
Epoch: 017, Loss: 1.7984
Epoch: 018, Loss: 1.7832
Epoch: 019, Loss: 1.7495
Epoch: 020, Loss: 1.7441
Epoch: 021, Loss: 1.7188
Epoch: 022, Loss: 1.7124
Epoch: 023, Loss: 1.6785
Epoch: 024, Loss: 1.6660
Epoch: 025, Loss: 1.6119
Epoch: 026, Loss: 1.6236
Epoch: 027, Loss: 1.5827
Epoch: 028, Loss: 1.5784
Epoch: 029, Loss: 1.5524
Epoch: 030, Loss: 1.5020
Epoch: 031, Loss: 1.5065
Epoch: 032, Loss: 1.4742
Epoch: 033, Loss: 1.4581
Epoch: 034, Loss: 1.4246
Epoch: 035, Loss: 1.4131
Epoch: 036, Loss: 1.4112
Epoch: 037, Loss: 1.3923
Epoch: 038, Loss: 1.3055
Epoch: 039, Loss: 1.2982
Epoch: 040, Loss: 1.2543
Epoch: 041, Loss: 1.2244
Epoch: 042, Loss: 1.2331
Epoch: 043, Loss: 1.1984
Epoch: 044, Loss: 1.1796
Epoch: 045, Loss: 1.1093
Epoch: 046, Loss: 1.1284
Epoch: 047, Loss: 1.1229
Epoch: 048, Loss: 1.0383
Epoch: 049, Loss: 1.0439
Epoch: 050, Loss: 1.0563
Epoch: 051, Loss: 0.9893
Epoch: 052, Loss: 1.0508
Epoch: 053, Loss: 0.9343
Epoch: 054, Loss: 0.9639
Epoch: 055, Loss: 0.8929
Epoch: 056, Loss: 0.8705
Epoch: 057, Loss: 0.9176
Epoch: 058, Loss: 0.9239
Epoch: 059, Loss: 0.8641
Epoch: 060, Loss: 0.8578
Epoch: 061, Loss: 0.7908
Epoch: 062, Loss: 0.7856
Epoch: 063, Loss: 0.7683
Epoch: 064, Loss: 0.7816
Epoch: 065, Loss: 0.7356
Epoch: 066, Loss: 0.6951
Epoch: 067, Loss: 0.7300
Epoch: 068, Loss: 0.6939
Epoch: 069, Loss: 0.7550
Epoch: 070, Loss: 0.6864
Epoch: 071, Loss: 0.7094
Epoch: 072, Loss: 0.7238
Epoch: 073, Loss: 0.7150
Epoch: 074, Loss: 0.6191
Epoch: 075, Loss: 0.6770
Epoch: 076, Loss: 0.6487
Epoch: 077, Loss: 0.6258
Epoch: 078, Loss: 0.5821
Epoch: 079, Loss: 0.5637
Epoch: 080, Loss: 0.6368
Epoch: 081, Loss: 0.6333
Epoch: 082, Loss: 0.6434
Epoch: 083, Loss: 0.5974
Epoch: 084, Loss: 0.6176
Epoch: 085, Loss: 0.5972
Epoch: 086, Loss: 0.4690
Epoch: 087, Loss: 0.6362
Epoch: 088, Loss: 0.6118
Epoch: 089, Loss: 0.5248
Epoch: 090, Loss: 0.5520
Epoch: 091, Loss: 0.6130
Epoch: 092, Loss: 0.5361
Epoch: 093, Loss: 0.5594
Epoch: 094, Loss: 0.5049
Epoch: 095, Loss: 0.5043
Epoch: 096, Loss: 0.5235
Epoch: 097, Loss: 0.5451
Epoch: 098, Loss: 0.5329
Epoch: 099, Loss: 0.5008
Epoch: 100, Loss: 0.5350
Epoch: 101, Loss: 0.5343
Epoch: 102, Loss: 0.5138
Epoch: 103, Loss: 0.5377
Epoch: 104, Loss: 0.5353
Epoch: 105, Loss: 0.5176
Epoch: 106, Loss: 0.5229
Epoch: 107, Loss: 0.4558
Epoch: 108, Loss: 0.4883
Epoch: 109, Loss: 0.4659
Epoch: 110, Loss: 0.4908
Epoch: 111, Loss: 0.4966
Epoch: 112, Loss: 0.4725
Epoch: 113, Loss: 0.4787
Epoch: 114, Loss: 0.4390
Epoch: 115, Loss: 0.4199
Epoch: 116, Loss: 0.4810
Epoch: 117, Loss: 0.4484
Epoch: 118, Loss: 0.5080
Epoch: 119, Loss: 0.4241
Epoch: 120, Loss: 0.4745
Epoch: 121, Loss: 0.4651
Epoch: 122, Loss: 0.4652
Epoch: 123, Loss: 0.5580
Epoch: 124, Loss: 0.4861
Epoch: 125, Loss: 0.4405
Epoch: 126, Loss: 0.4292
Epoch: 127, Loss: 0.4409
Epoch: 128, Loss: 0.3575
Epoch: 129, Loss: 0.4468
Epoch: 130, Loss: 0.4603
Epoch: 131, Loss: 0.4108
Epoch: 132, Loss: 0.4601
Epoch: 133, Loss: 0.4258
Epoch: 134, Loss: 0.3852
Epoch: 135, Loss: 0.4028
Epoch: 136, Loss: 0.4245
Epoch: 137, Loss: 0.4300
Epoch: 138, Loss: 0.4693
Epoch: 139, Loss: 0.4314
Epoch: 140, Loss: 0.4031
Epoch: 141, Loss: 0.4290
Epoch: 142, Loss: 0.4110
Epoch: 143, Loss: 0.3863
Epoch: 144, Loss: 0.4215
Epoch: 145, Loss: 0.4519
Epoch: 146, Loss: 0.3940
Epoch: 147, Loss: 0.4429
Epoch: 148, Loss: 0.3527
Epoch: 149, Loss: 0.4390
Epoch: 150, Loss: 0.4212
Epoch: 151, Loss: 0.4128
Epoch: 152, Loss: 0.3779
Epoch: 153, Loss: 0.4801
Epoch: 154, Loss: 0.4130
Epoch: 155, Loss: 0.3962
Epoch: 156, Loss: 0.4262
Epoch: 157, Loss: 0.4210
Epoch: 158, Loss: 0.4081
Epoch: 159, Loss: 0.4066
Epoch: 160, Loss: 0.3782
Epoch: 161, Loss: 0.3836
Epoch: 162, Loss: 0.4172
Epoch: 163, Loss: 0.3993
Epoch: 164, Loss: 0.4477
Epoch: 165, Loss: 0.3714
Epoch: 166, Loss: 0.3610
Epoch: 167, Loss: 0.4546
Epoch: 168, Loss: 0.4387
Epoch: 169, Loss: 0.3793
Epoch: 170, Loss: 0.3704
Epoch: 171, Loss: 0.4286
Epoch: 172, Loss: 0.4131
Epoch: 173, Loss: 0.3795
Epoch: 174, Loss: 0.4230
Epoch: 175, Loss: 0.4139
Epoch: 176, Loss: 0.3586
Epoch: 177, Loss: 0.3588
Epoch: 178, Loss: 0.3911
Epoch: 179, Loss: 0.3810
Epoch: 180, Loss: 0.4203
Epoch: 181, Loss: 0.3583
Epoch: 182, Loss: 0.3690
Epoch: 183, Loss: 0.4025
Epoch: 184, Loss: 0.3920
Epoch: 185, Loss: 0.4369
Epoch: 186, Loss: 0.4317
Epoch: 187, Loss: 0.4911
Epoch: 188, Loss: 0.3369
Epoch: 189, Loss: 0.4945
Epoch: 190, Loss: 0.3912
Epoch: 191, Loss: 0.3824
Epoch: 192, Loss: 0.3479
Epoch: 193, Loss: 0.3798
Epoch: 194, Loss: 0.3799
Epoch: 195, Loss: 0.4015
Epoch: 196, Loss: 0.3615
Epoch: 197, Loss: 0.3985
Epoch: 198, Loss: 0.4664
Epoch: 199, Loss: 0.3714
Epoch: 200, Loss: 0.3810
The MLP reaches a test accuracy of only about 59%. The main reason is that the model sees only a small number of training nodes, so it overfits and generalizes poorly to the unseen node representations. In addition, the MLP completely ignores the citation relationships between documents, which are an important source of information.
A Graph Neural Network (GNN) Model
Defining the GCN Model
We can easily turn the MLP into a GNN by swapping its linear layers for PyG's GNN operators. Here we use the GCNConv module.
from torch_geometric.nn import GCNConv
class GCN(torch.nn.Module):
    def __init__(self, hidden_channels):
        super().__init__()
        torch.manual_seed(1234567)
        self.conv1 = GCNConv(dataset.num_features, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, dataset.num_classes)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index)
        return x
model = GCN(hidden_channels=16)
print(model)
The GCN layer is defined as:
\mathbf{x}_v^{(\ell + 1)} = \mathbf{W}^{(\ell + 1)} \sum_{w \in \mathcal{N}(v) \, \cup \, \{ v \}} \frac{1}{c_{w,v}} \cdot \mathbf{x}_w^{(\ell)}
where \mathcal{N}(v) denotes the neighbors of node v, \mathbf{W}^{(\ell + 1)} is a trainable weight matrix, and c_{w,v} is a fixed, degree-based normalization coefficient for each edge. A plain linear layer, in contrast, is defined as:
\mathbf{x}_v^{(\ell + 1)} = \mathbf{W}^{(\ell + 1)} \mathbf{x}_v^{(\ell)}
Comparing the two formulas, the GCN layer aggregates information from a node's neighbors, whereas the linear layer only ever looks at the node's own features.
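To make the propagation rule concrete, here is a small illustrative sketch (not part of the original tutorial) that performs the normalized neighborhood aggregation by hand, assuming the common symmetric normalization c_{w,v} = sqrt(deg(w)) * sqrt(deg(v)); the trainable weight matrix W would then be applied on top, as GCNConv does internally:
from torch_geometric.utils import add_self_loops, degree

def gcn_aggregate(x, edge_index):
    # Add self-loops so each node also aggregates its own features (the "∪ {v}" term in the formula)
    edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
    row, col = edge_index  # source and target node indices of each edge
    deg = degree(col, x.size(0), dtype=x.dtype)  # node degrees, including self-loops
    norm = deg[row].pow(-0.5) * deg[col].pow(-0.5)  # 1 / c_{w,v} with symmetric normalization
    out = torch.zeros_like(x)
    out.index_add_(0, col, norm.unsqueeze(-1) * x[row])  # sum the normalized neighbor features
    return out

agg = gcn_aggregate(data.x, data.edge_index)
print(agg.shape)  # torch.Size([2708, 1433]): aggregated features, before the linear transform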
Visualizing the Untrained GCN Node Embeddings
model = GCN(hidden_channels=16)
model.eval()
out = model(data.x, data.edge_index)
visualize(out, color=data.y)
Training and Testing the GCN
from IPython.display import Javascript  # Restrict the output cell height (Colab only)
display(Javascript('''google.colab.output.setIframeHeight(0, true, {maxHeight: 300})'''))

model = GCN(hidden_channels=16)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()

def train():
    model.train()
    optimizer.zero_grad()  # Clear gradients
    out = model(data.x, data.edge_index)  # Forward pass
    loss = criterion(out[data.train_mask], data.y[data.train_mask])  # Compute loss on training nodes only
    loss.backward()  # Backward pass
    optimizer.step()  # Update parameters
    return loss

def test():
    model.eval()
    out = model(data.x, data.edge_index)
    pred = out.argmax(dim=1)  # Pick the class with the highest score
    test_correct = pred[data.test_mask] == data.y[data.test_mask]  # Compare predictions with ground truth
    test_acc = int(test_correct.sum()) / int(data.test_mask.sum())  # Compute the accuracy
    return test_acc

for epoch in range(1, 101):
    loss = train()
    print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}')

test_acc = test()
print(f'Test Accuracy: {test_acc:.4f}')
Epoch: 001, Loss: 1.9463
Epoch: 002, Loss: 1.9409
Epoch: 003, Loss: 1.9343
Epoch: 004, Loss: 1.9275
Epoch: 005, Loss: 1.9181
Epoch: 006, Loss: 1.9086
Epoch: 007, Loss: 1.9015
Epoch: 008, Loss: 1.8933
Epoch: 009, Loss: 1.8808
Epoch: 010, Loss: 1.8685
Epoch: 011, Loss: 1.8598
Epoch: 012, Loss: 1.8482
Epoch: 013, Loss: 1.8290
Epoch: 014, Loss: 1.8233
Epoch: 015, Loss: 1.8057
Epoch: 016, Loss: 1.7966
Epoch: 017, Loss: 1.7825
Epoch: 018, Loss: 1.7617
Epoch: 019, Loss: 1.7491
Epoch: 020, Loss: 1.7310
Epoch: 021, Loss: 1.7147
Epoch: 022, Loss: 1.7056
Epoch: 023, Loss: 1.6954
Epoch: 024, Loss: 1.6697
Epoch: 025, Loss: 1.6538
Epoch: 026, Loss: 1.6312
Epoch: 027, Loss: 1.6161
Epoch: 028, Loss: 1.5899
Epoch: 029, Loss: 1.5711
Epoch: 030, Loss: 1.5576
Epoch: 031, Loss: 1.5393
Epoch: 032, Loss: 1.5137
Epoch: 033, Loss: 1.4948
Epoch: 034, Loss: 1.4913
Epoch: 035, Loss: 1.4698
Epoch: 036, Loss: 1.3998
Epoch: 037, Loss: 1.4041
Epoch: 038, Loss: 1.3761
Epoch: 039, Loss: 1.3631
Epoch: 040, Loss: 1.3258
Epoch: 041, Loss: 1.3030
Epoch: 042, Loss: 1.3119
Epoch: 043, Loss: 1.2519
Epoch: 044, Loss: 1.2530
Epoch: 045, Loss: 1.2492
Epoch: 046, Loss: 1.2205
Epoch: 047, Loss: 1.2037
Epoch: 048, Loss: 1.1571
Epoch: 049, Loss: 1.1700
Epoch: 050, Loss: 1.1296
Epoch: 051, Loss: 1.0860
Epoch: 052, Loss: 1.1080
Epoch: 053, Loss: 1.0564
Epoch: 054, Loss: 1.0157
Epoch: 055, Loss: 1.0362
Epoch: 056, Loss: 1.0328
Epoch: 057, Loss: 1.0058
Epoch: 058, Loss: 0.9865
Epoch: 059, Loss: 0.9667
Epoch: 060, Loss: 0.9741
Epoch: 061, Loss: 0.9769
Epoch: 062, Loss: 0.9122
Epoch: 063, Loss: 0.8993
Epoch: 064, Loss: 0.8769
Epoch: 065, Loss: 0.8575
Epoch: 066, Loss: 0.8897
Epoch: 067, Loss: 0.8312
Epoch: 068, Loss: 0.8262
Epoch: 069, Loss: 0.8511
Epoch: 070, Loss: 0.7711
Epoch: 071, Loss: 0.8012
Epoch: 072, Loss: 0.7529
Epoch: 073, Loss: 0.7525
Epoch: 074, Loss: 0.7689
Epoch: 075, Loss: 0.7553
Epoch: 076, Loss: 0.7032
Epoch: 077, Loss: 0.7326
Epoch: 078, Loss: 0.7122
Epoch: 079, Loss: 0.7090
Epoch: 080, Loss: 0.6755
Epoch: 081, Loss: 0.6666
Epoch: 082, Loss: 0.6679
Epoch: 083, Loss: 0.7037
Epoch: 084, Loss: 0.6752
Epoch: 085, Loss: 0.6266
Epoch: 086, Loss: 0.6564
Epoch: 087, Loss: 0.6266
Epoch: 088, Loss: 0.6411
Epoch: 089, Loss: 0.6226
Epoch: 090, Loss: 0.6535
Epoch: 091, Loss: 0.6317
Epoch: 092, Loss: 0.5741
Epoch: 093, Loss: 0.5572
Epoch: 094, Loss: 0.5710
Epoch: 095, Loss: 0.5816
Epoch: 096, Loss: 0.5745
Epoch: 097, Loss: 0.5547
Epoch: 098, Loss: 0.5989
Epoch: 099, Loss: 0.6021
Epoch: 100, Loss: 0.5799
By replacing the linear layers with GNN layers, the GCN reaches a test accuracy of 81.5%, far above the MLP, showing that relational information is crucial for good performance.
Visualizing the Trained GCN Node Embeddings
model.eval()
out = model(data.x, data.edge_index)
visualize(out, color=data.y)
In this tutorial we learned how to apply GNNs to a real node classification problem and saw the clear performance gains they bring over a purely feature-based model.
(Optional) Exercises
Exercise 1
Use data.val_mask to select the model with the best validation performance and evaluate that model on the test set, aiming to push the test accuracy towards 82%.
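One possible approach (a hedged sketch, not the only solution): reuse the GCN train() function from above together with a mask-aware test(mask) helper like the one in the GAT skeleton below, track the best validation accuracy during training, and only evaluate the corresponding checkpoint on the test set:
import copy

best_val_acc = 0.0
best_state = None
for epoch in range(1, 201):
    loss = train()
    val_acc = test(data.val_mask)  # assumes test() was changed to accept a mask argument
    if val_acc > best_val_acc:     # remember the weights that do best on the validation set
        best_val_acc = val_acc
        best_state = copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)  # restore the best checkpoint before the final evaluation
print(f'Best Val: {best_val_acc:.4f}, Test: {test(data.test_mask):.4f}')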
Exercise 2
Investigate how the GCN behaves when you increase the hidden feature dimensionality or the number of layers, and whether adding more layers actually helps.
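A sketch you could start from (the DeepGCN class name and structure are illustrative, not part of the original tutorial); note that deeper GCNs on Cora often suffer from over-smoothing, so more layers do not automatically help:
from torch_geometric.nn import GCNConv

class DeepGCN(torch.nn.Module):
    def __init__(self, hidden_channels, num_layers):
        super().__init__()
        torch.manual_seed(1234567)
        assert num_layers >= 2
        self.convs = torch.nn.ModuleList()
        self.convs.append(GCNConv(dataset.num_features, hidden_channels))
        for _ in range(num_layers - 2):  # optional intermediate layers
            self.convs.append(GCNConv(hidden_channels, hidden_channels))
        self.convs.append(GCNConv(hidden_channels, dataset.num_classes))

    def forward(self, x, edge_index):
        for conv in self.convs[:-1]:
            x = conv(x, edge_index).relu()
            x = F.dropout(x, p=0.5, training=self.training)
        return self.convs[-1](x, edge_index)

model = DeepGCN(hidden_channels=64, num_layers=3)
print(model)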
Exercise 3
Try different GNN layers, for example replacing every GCNConv with a GATConv layer. Below is the skeleton of a 2-layer GAT model (a hedged example completion of the two GATConv initializations is sketched after the training loop below):
from torch_geometric.nn import GATConv

class GAT(torch.nn.Module):
    def __init__(self, hidden_channels, heads):
        super().__init__()
        torch.manual_seed(1234567)
        self.conv1 = GATConv(...)  # TODO: complete the initialization
        self.conv2 = GATConv(...)  # TODO: complete the initialization

    def forward(self, x, edge_index):
        x = F.dropout(x, p=0.6, training=self.training)
        x = self.conv1(x, edge_index)
        x = F.elu(x)
        x = F.dropout(x, p=0.6, training=self.training)
        x = self.conv2(x, edge_index)
        return x

model = GAT(hidden_channels=8, heads=8)
print(model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()

def train():
    model.train()
    optimizer.zero_grad()  # Clear gradients
    out = model(data.x, data.edge_index)  # Forward pass
    loss = criterion(out[data.train_mask], data.y[data.train_mask])  # Compute loss on training nodes only
    loss.backward()  # Backward pass
    optimizer.step()  # Update parameters
    return loss

def test(mask):
    model.eval()
    out = model(data.x, data.edge_index)
    pred = out.argmax(dim=1)  # Pick the class with the highest score
    correct = pred[mask] == data.y[mask]  # Compare predictions with ground truth
    acc = int(correct.sum()) / int(mask.sum())  # Compute the accuracy
    return acc

for epoch in range(1, 201):
    loss = train()
    val_acc = test(data.val_mask)
    test_acc = test(data.test_mask)
    print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}, Val: {val_acc:.4f}, Test: {test_acc:.4f}')
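For reference, one possible (hedged) completion of the two GATConv initializations in the skeleton above, using multi-head attention in the hidden layer and a single averaged head for the output:
# inside GAT.__init__():
self.conv1 = GATConv(dataset.num_features, hidden_channels, heads=heads, dropout=0.6)
self.conv2 = GATConv(hidden_channels * heads, dataset.num_classes, heads=1, concat=False, dropout=0.6)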
These exercises should give you a deeper understanding of how to apply and tune graph neural networks for node classification. Give them a try!