
2. Data Mocking

1. Usage Instructions

1) Upload application.yml, gmall-remake-mock-2023-05-15-3.jar, path.json, and logback.xml to the /opt/module/applog directory on hadoop102.

(1) Create the applog directory

[shuidi@hadoop102 module]$ mkdir /opt/module/applog

(2) Upload the files to the /opt/module/applog directory

2) Configuration files

(1) The application.yml file

It lets you generate user behavior logs for whatever business date you need.

[shuidi@hadoop102 applog]$ vim application.yml

Edit the file as follows:

# enable the external logback config
logging.config: ./logback.xml

# in http mode, the address to send logs to
mock:
  log:
    type: "file"      # "file" "http" "kafka" "none"
    http:
      url: "http://localhost:8090/applog"
    kafka:
        server: "hadoop102:9092,hadoop102:9092,hadoop102:9092"
        topic: "topic_log"

spring:
    datasource:
      type: com.alibaba.druid.pool.DruidDataSource
      druid:
        url: jdbc:mysql://hadoop102:3306/gmall?characterEncoding=utf-8&allowPublicKeyRetrieval=true&useSSL=false&serverTimezone=GMT%2B8
        username: root
        password: "000000"
        driver-class-name:  com.mysql.cj.jdbc.Driver
        max-active: 20
        test-on-borrow: true


mybatis-plus.global-config.db-config.field-strategy: not_null
mybatis-plus:
  mapper-locations: classpath:mapper/*.xml

mybatis:
   mapper-locations: classpath:mapper/*.xml

#业务日期, 并非Linux系统时间的日期,而是生成模拟数据的日期
mock.date: "2022-06-08"

# 日志是否写入数据库一份  写入z_log表中
mock.log.db.enable: 1

# 清空
mock.clear.busi: 1

# 清空用户
mock.clear.user: 0

# 批量生成新用户
mock.new.user: 0
  #session次数
mock.user-session.count: 200
  #设备最大值
mock.max.mid: 1000000

# 是否针对实时生成数据,若启用(置为1)则数据的 yyyy-MM-dd 与 mock.date 一致而 HH:mm:ss 与系统时间一致;若禁用则数据的 yyyy-MM-dd 与 mock.date 一致而 HH:mm:ss 随机分布,此处禁用
mock.if-realtime: 0
#访问时间分布权重
mock.start-time-weight: "10:5:0:0:0:0:5:5:5:10:10:15:20:10:10:10:10:10:20:25:30:35:30:20"

#支付类型占比 支付宝 :微信 :银联
mock.payment_type_weight: "40:50:10"

  #页面平均访问时间
mock.page.during-time-ms: 20000
  #错误概率 百分比
mock.error.rate: 3
  #每条日志发送延迟 ms
mock.log.sleep: 100
  #课程详情来源  用户查询,商品推广,智能推荐, 促销活动
mock.detail.source-type-rate: "40:25:15:20"

mock.if-cart-rate: 100

mock.if-favor-rate: 70

mock.if-order-rate: 100

mock.if-refund-rate: 50



  #搜索关键词
mock.search.keyword: "java,python,多线程,前端,数据库,大数据,hadoop,flink"


  #用户数据变化概率
mock.user.update-rate: 20


# 男女浏览品牌比重(11 品牌)
mock.tm-weight.male: "3:2:5:5:5:1:1:1:1:1:1"
mock.tm-weight.female: "1:5:1:1:2:2:2:5:5:5:5"


# 外连类型比重(5 种)
mock.refer-weight: "10:2:3:4:5"

# 线程池相关配置
mock.pool.core: 20
mock.pool.max-core: 100
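The 24 colon-separated values in mock.start-time-weight give each hour of the business day a relative weight. Below is a minimal sketch of how such weights can drive timestamp sampling when mock.if-realtime is 0; the jar's exact logic is not documented here, so treat this as an illustration only.

```python
import random
from datetime import datetime, timedelta

# Relative weights for hours 00-23, copied from mock.start-time-weight.
weight_str = "10:5:0:0:0:0:5:5:5:10:10:15:20:10:10:10:10:10:20:25:30:35:30:20"
weights = [int(w) for w in weight_str.split(":")]

def sample_event_time(mock_date):
    """Pick an hour by weight, then a uniform minute and second within it."""
    hour = random.choices(range(24), weights=weights, k=1)[0]
    base = datetime.strptime(mock_date, "%Y-%m-%d")
    return base + timedelta(hours=hour,
                            minutes=random.randrange(60),
                            seconds=random.randrange(60))

print(sample_event_time("2022-06-08"))  # e.g. 2022-06-08 19:41:07
```

Hours with weight 0 (02:00-05:59 above) are never sampled, which matches the intuition that a shopping app sees no traffic in the small hours.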

(2) path.json, which configures the visit paths

User click paths can be configured flexibly as needed.

[
  {"path": ["start_app","home","search","good_list","good_detail","good_detail","good_detail","cart","order","payment","mine","order_list","end"], "rate": 100},
  {"path": ["start_app","home","good_list","good_detail","good_detail","good_detail","cart","end"], "rate": 30},
  {"path": ["start_app","home","activity1111","good_detail","cart","good_detail","cart","order","payment","end"], "rate": 30},
  {"path": ["activity1111","good_detail","activity1111","good_detail","order","payment","end"], "rate": 200},
  {"path": ["start_app","home","activity1111","good_detail","order","payment","end"], "rate": 200},
  {"path": ["start_app","home","good_detail","order","payment","end"], "rate": 30},
  {"path": ["good_detail","order","payment","end"], "rate": 650},
  {"path": ["good_detail"], "rate": 30},
  {"path": ["start_app","home","mine","good_detail"], "rate": 30},
  {"path": ["start_app","home","good_detail","good_detail","good_detail","cart","order","payment","end"], "rate": 200},
  {"path": ["start_app","home","search","good_list","good_detail","cart","order","payment","end"], "rate": 200}
]
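Each entry's rate acts as a relative weight when a session picks its click path, so the rate-650 path is chosen far more often than a rate-30 one. A small illustration of this kind of weighting (the jar's real selection logic may differ):

```python
import json
import random

# Three entries in the style of path.json; "rate" is a relative weight.
paths_json = """
[
  {"path": ["start_app", "home", "good_detail", "order", "payment", "end"], "rate": 30},
  {"path": ["good_detail", "order", "payment", "end"], "rate": 650},
  {"path": ["good_detail"], "rate": 30}
]
"""
entries = json.loads(paths_json)

def pick_path(entries):
    """Weighted choice: an entry with rate 650 wins about 650/710 of the time."""
    return random.choices(entries, weights=[e["rate"] for e in entries], k=1)[0]["path"]

print(pick_path(entries))
```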

(3) The logback configuration file

It configures where logs are written; edit it as follows.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_HOME" value="/opt/module/applog/log" />
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>

    <appender name="console_em" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.err</target>
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>

    <appender name="rollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/app.%d{yyyy-MM-dd}.log</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>

    <!-- Print logs from this one package to a dedicated appender -->
    <logger name="com.atguigu.mock.util.LogUtil"
            level="INFO" additivity="false">
          <appender-ref ref="rollingFile" />
<!--           <appender-ref ref="console" />-->
    </logger>
    <logger name="com.atguigu.gmallre.mock.task.UserMockTask" level="INFO" additivity="false" >
        <appender-ref ref="console_em" />
    </logger>

<!--    <logger name="com.alibaba.druid.pool" level="error" additivity="false" >-->
<!--        <appender-ref ref="console" />-->
<!--    </logger>-->

<!--    <logger  name="com.atguigu.edu2021.mock.mapper" level="debug">-->
<!--         <appender-ref ref="console" />-->
<!--    </logger>-->

<!--      <logger  name="com.atguigu.edu2021.mock.service.impl.UserInfoServiceImpl" level="debug">
             <appender-ref ref="console" />
       </logger>-->

    <root level="error"  >
       <appender-ref ref="console_em" />
        <!-- <appender-ref ref="async-rollingFile" />  -->
    </root>
</configuration>

3) Generate logs

(1) Go to /opt/module/applog and run the following command

[shuidi@hadoop102 applog]$ java -jar gmall-remake-mock-2023-05-15-3.jar test 100 2022-06-08
① The test argument enables test mode, which generates only user behavior data and no business data.
② 100 is the number of user sessions to produce; by default each session yields 1 startup log and 5 page logs.
③ The third argument is the business date of the log data. Test mode does not load the config file, so the data date can only be set through this command-line argument.
④ The three arguments must be given in the order shown.
⑤ The second and third arguments may be omitted; if nothing follows test, the session count defaults to 1000.

(2) Check the generated logs in the /opt/module/applog/log directory

[shuidi@hadoop102 log]$ ll
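Each line of app.log should be a self-contained JSON event. A quick sanity check (assuming one JSON object per line, which is worth verifying against your own output):

```python
import json
import os
import tempfile

def count_valid_events(path):
    """Count (valid, invalid) lines, treating each non-empty line as one JSON event."""
    valid = invalid = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                json.loads(line)
                valid += 1
            except json.JSONDecodeError:
                invalid += 1
    return valid, invalid

# Quick self-check on a throwaway file; on the cluster, point it at
# /opt/module/applog/log/app.log instead.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write('{"common": {"mid": "mid_1"}}\n')
    f.write('not json at all\n')
    f.write('{"ts": 1654646400000}\n')
    sample = f.name
print(count_valid_events(sample))  # (2, 1)
os.remove(sample)
```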

2. Cluster Log Generation Script

(1) Create the script lg.sh in the /home/shuidi/bin directory

[shuidi@hadoop102 bin]$ vim lg.sh

(2) Put the following content in the script

#!/bin/bash
echo "========== hadoop102 =========="
ssh hadoop102 "cd /opt/module/applog/; nohup java -jar gmall-remake-mock-2023-05-15-3.jar $1 $2 $3 >/dev/null 2>&1 &"

Notes:

① /opt/module/applog/ is the directory containing the jar and its config files.

② /dev/null is the Linux null device: everything written to it is discarded, which earns it the nickname "black hole".

Standard input (fd 0): read from the keyboard, /proc/self/fd/0

Standard output (fd 1): written to the screen (console), /proc/self/fd/1

Standard error (fd 2): written to the screen (console), /proc/self/fd/2
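The redirection used in the ssh command can be demonstrated on any throwaway command; note that `>/dev/null 2>&1` discards both streams, and that the order of the two redirections matters:

```shell
# stdout (fd 1) is pointed at /dev/null first, then 2>&1 duplicates
# stderr (fd 2) onto fd 1's current target -- so both streams vanish.
out=$( { echo "to stdout"; echo "to stderr" >&2; } >/dev/null 2>&1 )
echo "captured: '$out'"     # nothing survives the redirection

# Order matters: with `2>&1 >/dev/null` stderr would still reach the
# terminal, because it was duplicated before stdout was redirected.
```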

(3) Make the script executable

[shuidi@hadoop102 bin]$ chmod 777 lg.sh

(4) Upload the jar and config files to /opt/module/applog/ on hadoop103

Note: the startup script must also be updated so it launches the jar on hadoop103 (for example, by adding a matching ssh hadoop103 line).

(5) Run the script

[shuidi@hadoop102 bin]$ lg.sh test 100

(6) Check the generated data in the /opt/module/applog/log directory on both hadoop102 and hadoop103
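To backfill several business dates, a wrapper script can rewrite mock.date before each run. The sketch below only rehearses the config edit against a local copy; on the real cluster the sed would run over ssh on each host, followed by lg.sh (the paths and the wait step are assumptions to adapt):

```shell
#!/bin/bash
# Sketch: loop over business dates, rewriting mock.date before each run.
# APPLOG stands in for /opt/module/applog; on the cluster the sed below
# would run via ssh on hadoop102 and hadoop103, followed by `lg.sh`.
APPLOG=${APPLOG:-/tmp/applog-demo}
mkdir -p "$APPLOG"
echo 'mock.date: "2022-06-08"' > "$APPLOG/application.yml"

for dt in 2022-06-05 2022-06-06 2022-06-07; do
  sed -i "s/^mock.date:.*/mock.date: \"$dt\"/" "$APPLOG/application.yml"
  grep '^mock.date' "$APPLOG/application.yml"
  # lg.sh      # on the real cluster: launch the jar, then wait for it to finish
done
```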

