
Deploying Kafka on Kubernetes: single node and cluster

Single-node Kafka

Kafka version: 3.6.0 (bitnami image)

YAML manifest
kafka.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: kafka-stand-alone
  namespace: hgjg-common
  labels:
    app: kafka-stand-alone
  annotations:
    kubesphere.io/creator: admin
    kubesphere.io/description: kafka standalone
spec:
  replicas: 1   # single node
  selector:
    matchLabels:
      app: kafka-stand-alone
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kafka-stand-alone
      annotations:
        cni.projectcalico.org/ipv4pools: '["default-ipv4-ippool"]'
        kubesphere.io/creator: admin
        kubesphere.io/imagepullsecrets: '{"container-8coejh":"register-addr"}'
        kubesphere.io/restartedAt: '2025-01-16T01:54:10.296Z'
        logging.kubesphere.io/logsidecar-config: '{}'
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
      containers:
        - name: container-8coejh
          image: 'registry.flow.cn/library/kafka:3.6.0'
          command:
            - sh
            - '-c'
            - >
              POD_NAME=$(hostname)

              echo "POD_NAME:$POD_NAME"

              # NOTE: keep a blank line after each comment; with a folded scalar (>)
              # a comment placed directly above a command is joined onto it and
              # comments the command out

              # pod ordinal index

              REPLICA_INDEX=$(echo $POD_NAME | sed 's/.*-\([0-9]\)$/\1/')

              # node.id

              export KAFKA_CFG_NODE_ID="$REPLICA_INDEX"

              # broker.id

              export KAFKA_BROKER_ID="$REPLICA_INDEX"

              PORT=$((REPLICA_INDEX + 30700))

              PORT2=$((REPLICA_INDEX + 9092))

              PORT3=$((REPLICA_INDEX + 19092))

              # internal listeners; the default is PLAINTEXT://:9092

              export KAFKA_CFG_LISTENERS="PLAINTEXT://:$PORT2,CONTROLLER://:$PORT3"

              export KAFKA_CFG_CONTROLLER_QUORUM_VOTERS="$REPLICA_INDEX@localhost:$PORT3"

              # external access: the advertised listener (address:port) is what clients
              # use to reach Kafka; replace the node IP below with your own node IP, or
              # with the pod FQDN for in-cluster access

              export KAFKA_CFG_ADVERTISED_LISTENERS="PLAINTEXT://10.0.11.26:$PORT"

              exec /opt/bitnami/scripts/kafka/entrypoint.sh
              /opt/bitnami/scripts/kafka/run.sh
          ports:
            - name: tcp-9092
              containerPort: 9092
              protocol: TCP
            - name: tcp-9093
              containerPort: 9093
              protocol: TCP
            - name: tcp-9094
              containerPort: 9094
              protocol: TCP
          env:
            - name: KAFKA_CFG_CONTROLLER_LISTENER_NAMES
              value: CONTROLLER
            - name: KAFKA_CFG_CONTROLLER_QUORUM_VOTER_CLIENT_QUOTA_WINDOW_NUM
              value: '10'
            - name: >-
                KAFKA_CFG_CONTROLLER_QUORUM_VOTER_CLIENT_QUOTA_WINDOW_SIZE_SECONDS
              value: '1'
            - name: KAFKA_CFG_CONTROLLER_QUORUM_VOTER_REQUEST_TIMEOUT_MS
              value: '5000'
            - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
              value: 'true'
            - name: KAFKA_ENABLE_KRAFT
              value: 'YES'
            - name: KAFKA_CFG_PROCESS_ROLES
              value: 'broker,controller'
            - name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
              value: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT'
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: PLAINTEXT
          resources: {}
          volumeMounts:
            - name: host-time
              mountPath: /etc/localtime
            - name: kafka-stand-alone-pvc
              mountPath: /bitnami
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext: {}
      imagePullSecrets:
        - name: register-addr
      schedulerName: default-scheduler
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: kafka-stand-alone-pvc
        namespace: hgjg-common
        creationTimestamp: null
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: local
        volumeMode: Filesystem
      status:
        phase: Pending
  serviceName: kafka-stand-alone-headless
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
  revisionHistoryLimit: 10
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain

Create the resources and test them. The examples below use the test namespace; note that the manifest above still sets namespace: hgjg-common, so either change it to test or drop the -n test flag so the two match.

# kubectl -n test apply -f kafka.yaml
# kubectl -n test get pod
kafka-stand-alone-0                         1/1     Running   0          104m

# Exec into the container; producing and consuming work as expected
# kubectl -n test exec -it kafka-stand-alone-0 -- bash
$ kafka-console-producer.sh --bootstrap-server localhost:9092 --topic mytopic
>hello
[2024-12-01 10:50:58,483] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 3 : {mytopic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
>hello
>hello
>hello
# The warning above is harmless: the topic did not exist yet and is auto-created on first use

$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --from-beginning
hello
hello
hello
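
To double-check that the topic really was auto-created, you can list topics from the same container with the standard Kafka CLI that ships in the image; mytopic should appear in the output:

$ kafka-topics.sh --bootstrap-server localhost:9092 --list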

Set the KAFKA_CFG_ADVERTISED_LISTENERS variable above to an address your clients can actually reach, e.g. your own pod's FQDN.
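
If your clients live inside the cluster, a minimal sketch of advertising the pod FQDN instead of a node IP (assuming the headless service name from this manifest and the test namespace; adjust both to your environment, and note that with a single replica the PLAINTEXT listener is on 9092) would be:

              export KAFKA_CFG_ADVERTISED_LISTENERS="PLAINTEXT://${POD_NAME}.kafka-stand-alone-headless.test.svc.cluster.local:9092"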

How to access Kafka

  • Connect directly to the pod FQDN and port: kafka-stand-alone-0.kafka-stand-alone-headless.test.svc.cluster.local:9092
  • Connect via the headless Service: kafka-stand-alone-headless.test.svc.cluster.local:9092
    The first option is recommended, since it does not go through the Service (see the example after this list).
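
For example, a throwaway client pod can reach the broker through the pod FQDN. This is only a sketch: kafka-client is just a temporary pod name chosen here, and the image is whatever Kafka image you have mirrored.

# kubectl -n test run kafka-client --rm -it --image=registry.flow.cn/library/kafka:3.6.0 -- \
    kafka-topics.sh --bootstrap-server kafka-stand-alone-0.kafka-stand-alone-headless.test.svc.cluster.local:9092 --list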

How to configure Kafka

  • Every Kafka setting can be passed as an environment variable; for example, KAFKA_CFG_BROKER_ID maps to the Kafka property broker.id
  • In other words: add the KAFKA_CFG_ prefix, uppercase the property name, and replace the dots with underscores (see the examples below)
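
A few illustrative mappings, written as env entries in the same style as the manifests in this post (the values are placeholders, not recommendations):

            # environment variable                            -> server.properties key
            - name: KAFKA_CFG_BROKER_ID                       # broker.id
              value: '0'
            - name: KAFKA_CFG_LOG_RETENTION_HOURS             # log.retention.hours
              value: '72'
            - name: KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR  # offsets.topic.replication.factor
              value: '1'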

Cluster deployment with ZooKeeper

Deploy a 3-node ZooKeeper + Kafka cluster with StatefulSets

zookeeper: 3.8.4 (bitnami image)
kafka: 3.9.0 (bitnami image; note the manifest below still references a privately mirrored kafka:3.6.0 tag, so adjust the image to the tag you actually use)

The official Apache Kafka 3.9 image cannot be used for this kind of cluster, because it defaults to KRaft mode and cannot run with ZooKeeper.
Kafka 3.9 is the last release that supports ZooKeeper; Kafka 4.0+ drops ZooKeeper support.

kafka-cluster.yaml

# 1. headless Service for kafka-cluster
---
kind: Service
apiVersion: v1
metadata:
  name: kafka-cluster-headless
  namespace: hgjg-common
  labels:
    app: kafka-cluster-headless
  annotations:
    kubesphere.io/creator: admin
spec:
  ports:
    - name: tcp-9092
      protocol: TCP
      port: 9092
      targetPort: 9092
    - name: tcp-9093
      protocol: TCP
      port: 9093
      targetPort: 9093
  selector:
    app: kafka-cluster
  clusterIP: None
  clusterIPs:
    - None
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
---
# 2. client Service (NodePort) for kafka-cluster
kind: Service
apiVersion: v1
metadata:
  name: kafka-cluster-service
  namespace: hgjg-common
  labels:
    app: kafka-cluster-service
  annotations:
    kubesphere.io/creator: admin
spec:
  ports:
    - name: tcp-9092
      protocol: TCP
      port: 9092
      targetPort: 9092
      nodePort: 30601
  selector:
    app: kafka-cluster
  clusterIP: 10.233.39.97
  clusterIPs:
    - 10.233.39.97
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster

---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: kafka-cluster
  namespace: hgjg-common
  labels:
    app: kafka-cluster
  annotations:
    kubesphere.io/creator: admin
    kubesphere.io/description: kafka cluster
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kafka-cluster
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kafka-cluster
      annotations:
        cni.projectcalico.org/ipv4pools: '["default-ipv4-ippool"]'
        kubesphere.io/creator: admin
        kubesphere.io/imagepullsecrets: '{"container-zl59w2":"register-addr"}'
        kubesphere.io/restartedAt: '2025-01-14T03:07:29.848Z'
        logging.kubesphere.io/logsidecar-config: '{}'
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
      containers:
        - name: container-zl59w2
          image: 'registry.flow.cn/library/kafka:3.6.0'
          command:
            - sh
            - '-c'
            - >

              POD_NAME=$(hostname)

              echo "POD_NAME:$POD_NAME"

              # pod ordinal index (blank lines after comments are required here, see the
              # note in the single-node manifest above)

              REPLICA_INDEX=$(echo $POD_NAME | sed 's/.*-\([0-9]\)$/\1/')

              echo "REPLICA_INDEX:$REPLICA_INDEX"

              # take the ordinal from the hostname and use it as node.id / broker.id

              export KAFKA_CFG_NODE_ID=${POD_NAME##*-}

              PORT=$((REPLICA_INDEX + 30601)) PORT2=$((REPLICA_INDEX + 9093))

              # advertised listeners: the addresses clients connect to; replace the node IP with your own

              export KAFKA_CFG_ADVERTISED_LISTENERS="INSIDE://:9092,OUTSIDE://10.0.11.26:$PORT"

              # internal listeners; the default is PLAINTEXT://:9092

              export KAFKA_CFG_LISTENERS="INSIDE://:9092,OUTSIDE://:$PORT2"

              exec /opt/bitnami/scripts/kafka/entrypoint.sh
              /opt/bitnami/scripts/kafka/run.sh
          ports:
            - name: tcp-9092
              containerPort: 9092
              protocol: TCP
            - name: tcp-9093
              containerPort: 9093
              protocol: TCP
            - name: tcp-9094
              containerPort: 9094
              protocol: TCP
            - name: tcp-9095
              containerPort: 9095
              protocol: TCP
          env:
            - name: TZ
              value: Asia/Shanghai
              # ZooKeeper connection (Service address)
            - name: KAFKA_CFG_ZOOKEEPER_CONNECT
              value: 'zk-cluster-headless:2181'
              # Kafka JVM heap options
            - name: KAFKA_HEAP_OPTS
              value: '-Xmx2g'
              # with a 3-node cluster the replication factor can be set to 3
            - name: KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: '3'
              # default number of partitions
            - name: KAFKA_CFG_NUM_PARTITIONS
              value: '2'
              # retain logs for 72 hours
            - name: KAFKA_CFG_LOG_RETENTION_HOURS
              value: '72'
            - name: KAFKA_CFG_LOG_ROLL_HOURS
              value: '72'
            - name: KAFKA_CFG_LOG_SEGMENT_BYTES
              value: '1073741824'
              # allow automatic topic creation and deletion
            - name: KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE
              value: 'true'
            - name: KAFKA_CFG_DELETE_TOPIC_ENABLE
              value: 'true'
            - name: KAFKA_CFG_AUTO_LEADER_REBALANCE_ENABLE
              value: 'false'
            - name: KAFKA_CFG_LOG_CLEANUP_POLICY
              value: delete
            - name: KAFKA_CFG_MESSAGE_MAX_BYTES
              value: '2000000000'
            - name: KAFKA_CFG_MAX_REQUEST_SIZE
              value: '2000000000'
            - name: KAFKA_CFG_BATCH_SIZE
              value: '16384'
              # plaintext (no encryption) is allowed here; otherwise you must configure encryption or startup fails
            - name: ALLOW_PLAINTEXT_LISTENER
              value: 'true'
            - name: POD_NAME
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: INSIDE
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: 'INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT'
            - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
              value: 'true'
            # JMX port: enable it if you use a Kafka management tool such as CMAK. Note that once
            # JMX_PORT is set you can no longer exec into the kafka pod and run the kafka-*.sh scripts
            # (they report the port as already in use); run them from a separate pod instead.
            # It does not affect normal clients; enable as needed.
            # - name: JMX_PORT
            #   value: "9898"
          resources:
            requests:
              memory: 512Mi
          volumeMounts:
            - name: host-time
              mountPath: /etc/localtime
              # Kafka data directory inside the container
            - name: kafka-cluster-pvc
              mountPath: /bitnami
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext: {}
      imagePullSecrets:
        - name: register-addr
      # pod anti-affinity (soft/preferred): try to avoid scheduling the pods on the same node
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - kafka-cluster
                topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: kafka-cluster-pvc
        namespace: hgjg-common
        creationTimestamp: null
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: local
        volumeMode: Filesystem
      status:
        phase: Pending
  serviceName: kafka-cluster-headless
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
  revisionHistoryLimit: 10
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain

Create the resources. The ZooKeeper cluster should be created first, then the Kafka cluster.

Deploying the ZooKeeper cluster on k8s is covered in a separate post.

kubectl -n test apply -f kafka-cluster.yaml
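
Once both StatefulSets are up, a quick sanity check is to confirm that all three brokers registered themselves in ZooKeeper. This is only a sketch: it assumes the bitnami ZooKeeper pod is named zk-cluster-0, matching the zk-cluster-headless service used above. The znode should list three broker ids.

# kubectl -n test exec -it zk-cluster-0 -- zkCli.sh -server localhost:2181 ls /brokers/ids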

Test the Kafka cluster: check that producing and consuming messages works.

# kubectl -n test exec -it kafka-cluster-0 -- bash

# Produce messages (works)
$ kafka-console-producer.sh --bootstrap-server kafka-cluster-0.kafka-cluster-headless.debug.svc.cluster.local:9092,kafka-cluster-1.kafka-cluster-headless.debug.svc.cluster.local:9092,kafka-cluster-2.kafka-cluster-headless.debug.svc.cluster.local:9092 --topic hello
>hello
>hello
>hello
>hello
>hello

# Consume messages (works)
$ kafka-console-consumer.sh --bootstrap-server kafka-cluster-0.kafka-cluster-headless.debug.svc.cluster.local:9092,kafka-cluster-1.kafka-cluster-headless.debug.svc.cluster.local:9092,kafka-cluster-2.kafka-cluster-headless.debug.svc.cluster.local:9092 --topic hello --from-beginning 
hello
hello
hello
hello
hello

# Produce messages via the Kafka Service (works)
$ kafka-console-producer.sh --bootstrap-server kafka-cluster-headless.debug.svc.cluster.local:9092 --topic hello 
>v1
>v2
>v3

# Consume messages via the Kafka Service (works)
$ kafka-console-consumer.sh  --bootstrap-server kafka-cluster-headless.debug.svc.cluster.local:9092 --topic hello  --from-beginning  
v1
v2
v3
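
To see that partitions and replicas really are spread across the three brokers, you can also describe the test topic from inside any of the kafka pods; the output lists the leader and replica brokers per partition.

$ kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic hello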

KRaft-mode cluster deployment

Kafka 4.0 will remove ZooKeeper and support only KRaft.

In production, if you have enough machines and need performance and stability, a full multi-node cluster is the recommended choice.
Since Kafka 3.0+, Raft-based management of cluster metadata has matured, so the KRaft protocol is used for the deployment here; it simplifies the setup.
The older ZooKeeper-based approach is not provided, explored, or maintained further here.
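
The manifest below pins KAFKA_KRAFT_CLUSTER_ID to a fixed value, and all replicas must share the same ID. If you prefer to generate your own, the Kafka CLI that ships in the image can produce one; run it once anywhere and paste the result into the env var:

$ kafka-storage.sh random-uuid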

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: kafka-cluster
  namespace: hgjg-common
  labels:
    app: kafka-cluster
  annotations:
    kubesphere.io/creator: admin
    kubesphere.io/description: kafka cluster
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kafka-cluster
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kafka-cluster
      annotations:
        cni.projectcalico.org/ipv4pools: '["default-ipv4-ippool"]'
        kubesphere.io/creator: admin
        kubesphere.io/imagepullsecrets: '{"container-zl59w2":"register-addr"}'
        kubesphere.io/restartedAt: '2025-01-15T05:12:59.968Z'
        logging.kubesphere.io/logsidecar-config: '{}'
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
      containers:
        - name: container-zl59w2
          image: 'registry.flow.cn/library/kafka:3.6.0'
          command:
            - sh
            - '-c'
            - >

              POD_NAME=$(hostname)

              echo "POD_NAME:$POD_NAME"

              REPLICA_INDEX=$(echo $POD_NAME | sed 's/.*-\([0-9]\)$/\1/')

              echo "REPLICA_INDEX:$REPLICA_INDEX"

              export KAFKA_CFG_NODE_ID=${POD_NAME##*-}

              exec /opt/bitnami/scripts/kafka/entrypoint.sh
              /opt/bitnami/scripts/kafka/run.sh
          ports:
            - name: kafka
              containerPort: 9092
              protocol: TCP
            - name: tcp-9093
              containerPort: 9093
              protocol: TCP
          env:
            - name: TZ
              value: Asia/Shanghai
            - name: KAFKA_HEAP_OPTS
              value: '-Xmx2g'
            - name: KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: '3'
              # default number of partitions per topic; a partition is Kafka's unit of
              # message storage and parallel processing
            - name: KAFKA_CFG_NUM_PARTITIONS
              value: '2'
            - name: KAFKA_CFG_LOG_RETENTION_HOURS
              value: '72'
            - name: KAFKA_CFG_LOG_ROLL_HOURS
              value: '72'
            - name: KAFKA_CFG_LOG_SEGMENT_BYTES
              value: '1073741824'
            - name: KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE
              value: 'true'
            - name: KAFKA_CFG_DELETE_TOPIC_ENABLE
              value: 'true'
            - name: KAFKA_CFG_AUTO_LEADER_REBALANCE_ENABLE
              value: 'false'
            - name: KAFKA_CFG_LOG_CLEANUP_POLICY
              value: delete
            - name: KAFKA_CFG_MESSAGE_MAX_BYTES
              value: '5242880'
            - name: KAFKA_CFG_MAX_REQUEST_SIZE
              value: '4194304'
            - name: KAFKA_CFG_BATCH_SIZE
              value: '16384'
            - name: ALLOW_PLAINTEXT_LISTENER
              value: 'true'
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: PLAINTEXT
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: 'PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT'
            - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
              value: 'true'
            - name: KAFKA_PROCESS_ROLES
              value: 'broker,controller'
              # the listeners Kafka binds to
            - name: KAFKA_LISTENERS
              value: 'PLAINTEXT://:9092,CONTROLLER://:9093'
              # the address clients use to connect. Note that every replica advertises the same
              # node IP and port here, so clients may be routed to the wrong broker; for
              # in-cluster access, advertise a per-broker address (e.g. the pod FQDN) instead
            - name: KAFKA_ADVERTISED_LISTENERS
              value: 'PLAINTEXT://10.0.11.26:30601'
              # controller-election voters: the node ID and controller listener address of each broker
            - name: KAFKA_CONTROLLER_QUORUM_VOTERS
              value: >-
                0@kafka-cluster-0.kafka-cluster-headless:9093,1@kafka-cluster-1.kafka-cluster-headless:9093,2@kafka-cluster-2.kafka-cluster-headless:9093
            - name: KAFKA_CFG_CONTROLLER_QUORUM_VOTER_CLIENT_QUOTA_WINDOW_NUM
              value: '10'
            - name: >-
                KAFKA_CFG_CONTROLLER_QUORUM_VOTER_CLIENT_QUOTA_WINDOW_SIZE_SECONDS
              value: '2'
            - name: KAFKA_CFG_CONTROLLER_QUORUM_VOTER_REQUEST_TIMEOUT_MS
              value: '5000'
            - name: KAFKA_CFG_CONTROLLER_LISTENER_NAMES
              value: CONTROLLER
            - name: KAFKA_KRAFT_CLUSTER_ID
              value: Jc7hwCMorEyPprSI1Iw4sW
          resources:
            limits:
              cpu: '1'
              memory: 2560Mi
            requests:
              cpu: 500m
              memory: 512Mi
          volumeMounts:
            - name: host-time
              mountPath: /etc/localtime
            - name: kafka-cluster-pvc
              mountPath: /bitnami
          livenessProbe:
            tcpSocket:
              port: kafka
            initialDelaySeconds: 20
            timeoutSeconds: 1
            periodSeconds: 3
            successThreshold: 1
            failureThreshold: 2
          readinessProbe:
            tcpSocket:
              port: kafka
            initialDelaySeconds: 20
            timeoutSeconds: 1
            periodSeconds: 3
            successThreshold: 1
            failureThreshold: 2
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext: {}
      imagePullSecrets:
        - name: register-addr
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - kafka-cluster
                topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: kafka-cluster-pvc
        namespace: hgjg-common
        creationTimestamp: null
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: local
        volumeMode: Filesystem
      status:
        phase: Pending
  serviceName: kafka-cluster-headless
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
  revisionHistoryLimit: 10
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
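
No test run is shown for the KRaft variant. As a quick health check once the pods are Running, you can describe the metadata quorum from inside any broker (a sketch, using the same test namespace as above) and verify that one node is the controller leader and all three voters have caught up:

# kubectl -n test exec -it kafka-cluster-0 -- kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status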

