
Chapter 4.5: Connecting attention and linear layers in a transformer block

4 Implementing a GPT Model from Scratch to Generate Text

4.5 Connecting attention and linear layers in a transformer block

  • In this section, we implement the transformer block, a core component of GPT and other LLM architectures. In the 124-million-parameter GPT-2, this block is repeated many times, and it combines the concepts covered so far: multi-head attention, layer normalization, dropout, feed forward layers, and the GELU activation function, as shown in the figure below. The next section integrates it into the full GPT architecture.

  • Creating the TransformerBlock

    import tiktoken
    import torch
    import torch.nn as nn
    from torch.utils.data import Dataset, DataLoader
    
    class MultiHeadAttention(nn.Module):
        def __init__(self, d_in, d_out, context_length, dropout, num_heads, qkv_bias=False):
            super().__init__()
            assert d_out % num_heads == 0, "d_out must be divisible by num_heads"
    
            self.d_out = d_out
            self.num_heads = num_heads
            self.head_dim = d_out // num_heads  # Reduce the projection dim to match desired output dim
    
            self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
            self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
            self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)
            self.out_proj = nn.Linear(d_out, d_out)  # Linear layer to combine head outputs
            self.dropout = nn.Dropout(dropout)
            self.register_buffer('mask', torch.triu(torch.ones(context_length, context_length), diagonal=1))
    
        def forward(self, x):
            b, num_tokens, d_in = x.shape
    
            keys = self.W_key(x)  # Shape: (b, num_tokens, d_out)
            queries = self.W_query(x)
            values = self.W_value(x)
    
            # We implicitly split the matrix by adding a `num_heads` dimension
            # Unroll last dim: (b, num_tokens, d_out) -> (b, num_tokens, num_heads, head_dim)
            keys = keys.view(b, num_tokens, self.num_heads, self.head_dim)
            values = values.view(b, num_tokens, self.num_heads, self.head_dim)
            queries = queries.view(b, num_tokens, self.num_heads, self.head_dim)
    
            # Transpose: (b, num_tokens, num_heads, head_dim) -> (b, num_heads, num_tokens, head_dim)
            keys = keys.transpose(1, 2)
            queries = queries.transpose(1, 2)
            values = values.transpose(1, 2)
    
            # Compute scaled dot-product attention (aka self-attention) with a causal mask
            attn_scores = queries @ keys.transpose(2, 3)  # Dot product for each head
    
            # Original mask truncated to the number of tokens and converted to boolean
            mask_bool = self.mask.bool()[:num_tokens, :num_tokens]
    
            # Use the mask to fill attention scores
            attn_scores.masked_fill_(mask_bool, -torch.inf)
    
            attn_weights = torch.softmax(attn_scores / keys.shape[-1]**0.5, dim=-1)
            attn_weights = self.dropout(attn_weights)
    
            # Shape: (b, num_tokens, num_heads, head_dim)
            context_vec = (attn_weights @ values).transpose(1, 2)
    
            # Combine heads, where self.d_out = self.num_heads * self.head_dim
            context_vec = context_vec.contiguous().view(b, num_tokens, self.d_out)
            context_vec = self.out_proj(context_vec)  # optional projection
    
            return context_vec
    
        
    GPT_CONFIG_124M = {
        "vocab_size": 50257,    # Vocabulary size
        "context_length": 1024, # Context length
        "emb_dim": 768,         # Embedding dimension
        "n_heads": 12,          # Number of attention heads
        "n_layers": 12,         # Number of layers
        "drop_rate": 0.1,       # Dropout rate
        "qkv_bias": False       # Query-Key-Value bias
    }
    
    class LayerNorm(nn.Module):
        def __init__(self, emb_dim):
            super().__init__()
            self.eps = 1e-5
            self.scale = nn.Parameter(torch.ones(emb_dim))
            self.shift = nn.Parameter(torch.zeros(emb_dim))
    
        def forward(self, x):
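            # Normalize each token's activations across the embedding dimension,
            # then apply the learnable scale and shift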
            mean = x.mean(dim=-1, keepdim=True)
            var = x.var(dim=-1, keepdim=True, unbiased=False)
            norm_x = (x - mean) / torch.sqrt(var + self.eps)
            return self.scale * norm_x + self.shift
    
    class GELU(nn.Module):
        def __init__(self):
            super().__init__()
    
        def forward(self, x):
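            # Tanh-based approximation of the exact GELU activation (the formulation used in GPT-2)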
            return 0.5 * x * (1 + torch.tanh(
                torch.sqrt(torch.tensor(2.0 / torch.pi)) * 
                (x + 0.044715 * torch.pow(x, 3))
            ))
        
    class FeedForward(nn.Module):
        def __init__(self, cfg):
            super().__init__()
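            # Expand to 4x the embedding dimension, apply GELU, then project back down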
            self.layers = nn.Sequential(
                nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]),
                GELU(),
                nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]),
            )
    
        def forward(self, x):
            return self.layers(x)
    
    class TransformerBlock(nn.Module):
        def __init__(self, cfg):
            super().__init__()
            self.att = MultiHeadAttention(
                d_in=cfg["emb_dim"],
                d_out=cfg["emb_dim"],
                context_length=cfg["context_length"],
                num_heads=cfg["n_heads"], 
                dropout=cfg["drop_rate"],
                qkv_bias=cfg["qkv_bias"])
            self.ff = FeedForward(cfg)
            self.norm1 = LayerNorm(cfg["emb_dim"])
            self.norm2 = LayerNorm(cfg["emb_dim"])
            self.drop_shortcut = nn.Dropout(cfg["drop_rate"])
    
        def forward(self, x):
            # Shortcut connection for attention block
            shortcut = x
            x = self.norm1(x)
            x = self.att(x)  # Shape [batch_size, num_tokens, emb_size]
            x = self.drop_shortcut(x)
            x = x + shortcut  # Add the original input back
    
            # Shortcut connection for feed forward block
            shortcut = x
            x = self.norm2(x)
            x = self.ff(x)
            x = self.drop_shortcut(x)
            x = x + shortcut  # Add the original input back
    
            return x
    
    

    Using the GPT_CONFIG_124M dictionary we defined earlier, let's instantiate a transformer block and feed it some sample data:

    # Instantiate a transformer block and feed it sample data
    torch.manual_seed(123)
    
    x = torch.rand(2, 4, 768)  # Shape: [batch_size, num_tokens, emb_dim]
    block = TransformerBlock(GPT_CONFIG_124M)
    output = block(x)
    
    print("Input shape:", x.shape)
    print("Output shape:", output.shape)
    
    """输出"""
    Input shape: torch.Size([2, 4, 768])
    Output shape: torch.Size([2, 4, 768])
    

    The transformer block preserves the input dimensions in its output, which shows that the transformer architecture processes sequences of data without changing their shape as they pass through the network. By preserving the shape of the input sequence (its length and feature size) while re-encoding each output vector to incorporate global context information, the block can be applied effectively to a wide range of sequence-to-sequence tasks while maintaining a one-to-one correspondence between inputs and outputs.
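
    As a quick check of this shape-preserving property, here is a short sketch (not part of the original listing; the names stacked_blocks and out are chosen only for illustration). It stacks two TransformerBlock instances with nn.Sequential and confirms that the output shape still matches the input shape:

    # Minimal sketch: two stacked transformer blocks still preserve the tensor shape,
    # just as they will when the block is repeated inside the full GPT model
    torch.manual_seed(123)

    stacked_blocks = nn.Sequential(
        TransformerBlock(GPT_CONFIG_124M),
        TransformerBlock(GPT_CONFIG_124M),
    )

    x = torch.rand(2, 4, 768)  # Shape: [batch_size, num_tokens, emb_dim]
    out = stacked_blocks(x)
    print("Output shape:", out.shape)  # torch.Size([2, 4, 768])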

  • With this, we have now implemented all of the building blocks needed for the GPT architecture; an optional sanity check follows below.

    [Figure 4_14]
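
    Before moving on, an optional sanity check (not in the original text) is to count the trainable parameters of a single block, reusing the block variable name from the earlier example:

    # Optional sanity check: total number of trainable parameters in one block
    block = TransformerBlock(GPT_CONFIG_124M)
    total_params = sum(p.numel() for p in block.parameters())
    print(f"Parameters in one transformer block: {total_params:,}")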


