
SeaweedFS (master, volume, filer) docker run parameter help documentation

Contents

  • Captured by running the commands inside the container
    • weed -h
      • English
      • Chinese annotations
    • weed server -h
      • English
      • Chinese annotations
    • weed volume -h
      • English
      • Chinese annotations
    • Key points
    • Tested it: `-volume.minFreeSpace string` is pretty aggressive; set it to 10 (10%) and it leaves the system only 10% free space, claiming the rest up front
    • Trying to set the maximum number of volumes with only `-volume.max string` (each volume seems to be roughly 1 GB)

Captured by running the commands inside the container

weed -h

English

/data # weed

SeaweedFS: store billions of files and serve them fast!

Usage:

        weed command [arguments]

The commands are:

    autocomplete install autocomplete
    autocomplete.uninstall uninstall autocomplete
    backup      incrementally backup a volume to local folder
    benchmark   benchmark by writing millions of files and reading them out
    compact     run weed tool compact on volume file
    download    download files by file id
    export      list or export files from one volume data file
    filer       start a file server that points to a master server, or a list of master servers
    filer.backup resume-able continuously replicate files from a SeaweedFS cluster to another location defined in replication.toml
    filer.cat   copy one file to local
    filer.copy  copy one or a list of files to a filer folder
    filer.meta.backup continuously backup filer meta data changes to anther filer store specified in a backup_filer.toml
    filer.meta.tail see continuous changes on a filer
    filer.remote.gateway resumable continuously write back bucket creation, deletion, and other local updates to remote object store
    filer.remote.sync resumable continuously write back updates to remote storage
    filer.replicate replicate file changes to another destination
    filer.sync  resumable continuous synchronization between two active-active or active-passive SeaweedFS clusters
    fix         run weed tool fix on files or whole folders to recreate index file(s) if corrupted
    fuse        Allow use weed with linux's mount command
    iam         start a iam API compatible server
    master      start a master server
    master.follower start a master follower
    mount       mount weed filer to a directory as file system in userspace(FUSE)
    mq.broker   <WIP> start a message queue broker
    s3          start a s3 API compatible server that is backed by a filer
    scaffold    generate basic configuration files
    server      start a master server, a volume server, and optionally a filer and a S3 gateway
    shell       run interactive administrative commands
    update      get latest or specific version from https://github.com/seaweedfs/seaweedfs
    upload      upload one or a list of files
    version     print SeaweedFS version
    volume      start a volume server
    webdav      start a webdav server that is backed by a filer

Use "weed help [command]" for more information about a command.

For Logging, use "weed [logging_options] [command]". The logging options are:
  -alsologtostderr
        log to standard error as well as files (default true)
  -config_dir value
        directory with toml configuration files
  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
  -logdir string
        If non-empty, write log files in this directory
  -logtostderr
        log to standard error instead of files
  -options string
        a file of command line options, each line in optionName=optionValue format
  -stderrthreshold value
        logs at or above this threshold go to stderr
  -v value
        log levels [0|1|2|3|4], default to 0
  -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging

Chinese annotations

SeaweedFS: store billions of files and serve them fast!  # 海量文件存储与快速服务

Usage:

        weed command [arguments]  # 使用格式:weed 命令 [参数]

The commands are:

    autocomplete install autocomplete  # 安装自动补全功能
    autocomplete.uninstall uninstall autocomplete  # 卸载自动补全功能
    backup      incrementally backup a volume to local folder  # 增量备份卷数据到本地目录
    benchmark   benchmark by writing millions of files and reading them out  # 通过读写百万文件进行性能测试
    compact     run weed tool compact on volume file  # 压缩卷文件
    download    download files by file id  # 通过文件ID下载文件
    export      list or export files from one volume data file  # 从卷数据文件列出/导出文件
    filer       start a file server that points to a master server, or a list of master servers  # 启动文件服务器连接主节点
    filer.backup resume-able continuously replicate files from a SeaweedFS cluster to another location defined in replication.toml  # 持续备份文件到replication.toml定义的位置
    filer.cat   copy one file to local  # 复制单个文件到本地
    filer.copy  copy one or a list of files to a filer folder  # 复制文件到filer目录
    filer.meta.backup continuously backup filer meta data changes to anther filer store specified in a backup_filer.toml  # 持续备份元数据到备份配置指定位置
    filer.meta.tail see continuous changes on a filer  # 实时查看filer元数据变化
    filer.remote.gateway resumable continuously write back bucket creation, deletion, and other local updates to remote object store  # 将本地存储操作同步到远程对象存储
    filer.remote.sync resumable continuously write back updates to remote storage  # 持续同步更新到远程存储
    filer.replicate replicate file changes to another destination  # 文件变更复制到其他目标
    filer.sync  resumable continuous synchronization between two active-active or active-passive SeaweedFS clusters  # 集群间持续同步
    fix         run weed tool fix on files or whole folders to recreate index file(s) if corrupted  # 修复损坏的索引文件
    fuse        Allow use weed with linux's mount command  # 支持Linux挂载命令
    iam         start a iam API compatible server  # 启动IAM兼容API服务
    master      start a master server  # 启动主节点
    master.follower start a master follower  # 启动主节点跟随者
    mount       mount weed filer to a directory as file system in userspace(FUSE)  # 挂载FUSE文件系统
    mq.broker   <WIP> start a message queue broker  # 启动消息队列代理(开发中)
    s3          start a s3 API compatible server that is backed by a filer  # 启动S3兼容服务
    scaffold    generate basic configuration files  # 生成基础配置文件
    server      start a master server, a volume server, and optionally a filer and a S3 gateway  # 启动完整服务(主节点+存储节点+可选组件)
    shell       run interactive administrative commands  # 进入交互式管理命令行
    update      get latest or specific version from https://github.com/seaweedfs/seaweedfs  # 更新SeaweedFS版本
    upload      upload one or a list of files  # 上传单个或多个文件
    version     print SeaweedFS version  # 显示版本信息
    volume      start a volume server  # 启动存储节点
    webdav      start a webdav server that is backed by a filer  # 启动WebDAV服务

日志选项说明(每个命令前均可添加):
  -alsologtostderr
        同时输出日志到标准错误和文件(默认true)
  -config_dir value
        包含toml配置文件的目录
  -log_backtrace_at value
        当记录到指定行时输出堆栈跟踪
  -logdir string
        日志文件存储目录(非空时生效)
  -logtostderr
        日志输出到标准错误而非文件
  -options string
        命令行选项配置文件(每行格式为optionName=optionValue)
  -stderrthreshold value
        高于此级别的日志输出到标准错误
  -v value
        日志级别 [0|1|2|3|4],默认为0
  -vmodule value
        文件过滤日志设置(逗号分隔的pattern=N格式)
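
As noted above, the logging options are global and go in front of whichever sub-command you run. A minimal sketch (the log directory and verbosity level are arbitrary examples, not taken from the article):

    # send logs to a directory instead of stderr, at verbosity level 2
    weed -logdir=/data/logs -v=2 server -dir=/data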

weed server -h

English

/data # weed server -h
Example: weed server -dir=/tmp -volume.max=5 -ip=server_name
Default Usage:
  -cpuprofile string
        cpu profile output file
  -dataCenter string
        current volume server's data center name
  -debug
        serves runtime profiling data, e.g., http://localhost:6060/debug/pprof/goroutine?debug=2
  -debug.port int
        http port for debugging (default 6060)
  -dir string
        directories to store data files. dir[,dir]... (default "/tmp")
  -disableHttp
        disable http requests, only gRPC operations are allowed.
  -filer
        whether to start filer
  -filer.collection string
        all data will be stored in this collection
  -filer.concurrentUploadLimitMB int
        limit total concurrent upload size (default 64)
  -filer.defaultReplicaPlacement string
        default replication type. If not specified, use master setting.
  -filer.dirListLimit int
        limit sub dir listing size (default 1000)
  -filer.disableDirListing
        turn off directory listing
  -filer.disk string
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -filer.downloadMaxMBps int
        download max speed for each download request, in MB per second
  -filer.encryptVolumeData
        encrypt data on volume servers
  -filer.filerGroup string
        share metadata with other filers in the same filerGroup
  -filer.localSocket string
        default to /tmp/seaweedfs-filer-<port>.sock
  -filer.maxMB int
        split files larger than the limit (default 4)
  -filer.port int
        filer server http listen port (default 8888)
  -filer.port.grpc int
        filer server grpc listen port
  -filer.port.public int
        filer server public http listen port
  -filer.saveToFilerLimit int
        Small files smaller than this limit can be cached in filer store.
  -filer.ui.deleteDir
        enable filer UI show delete directory button (default true)
  -iam
        whether to start IAM service
  -iam.port int
        iam server http listen port (default 8111)
  -idleTimeout int
        connection idle seconds (default 30)
  -ip string
        ip or server name, also used as identifier (default "172.17.0.6")
  -ip.bind string
        ip address to bind to. If empty, default to same as -ip option.
  -master
        whether to start master server (default true)
  -master.defaultReplication string
        Default replication type if not specified.
  -master.dir string
        data directory to store meta data, default to same as -dir specified
  -master.electionTimeout duration
        election timeout of master servers (default 10s)
  -master.garbageThreshold float
        threshold to vacuum and reclaim spaces (default 0.3)
  -master.heartbeatInterval duration
        heartbeat interval of master servers, and will be randomly multiplied by [1, 1.25) (default 300ms)
  -master.metrics.address string
        Prometheus gateway address
  -master.metrics.intervalSeconds int
        Prometheus push interval in seconds (default 15)
  -master.peers string
        all master nodes in comma separated ip:masterPort list
  -master.port int
        master server http listen port (default 9333)
  -master.port.grpc int
        master server grpc listen port
  -master.raftHashicorp
        use hashicorp raft
  -master.resumeState
        resume previous state on start master server
  -master.volumePreallocate
        Preallocate disk space for volumes.
  -master.volumeSizeLimitMB uint
        Master stops directing writes to oversized volumes. (default 30000)
  -memprofile string
        memory profile output file
  -metricsPort int
        Prometheus metrics listen port
  -mq.broker
        whether to start message queue broker
  -mq.broker.port int
        message queue broker gRPC listen port (default 17777)
  -options string
        a file of command line options, each line in optionName=optionValue format
  -rack string
        current volume server's rack name
  -s3
        whether to start S3 gateway
  -s3.allowDeleteBucketNotEmpty
        allow recursive deleting all entries along with bucket (default true)
  -s3.allowEmptyFolder
        allow empty folders (default true)
  -s3.auditLogConfig string
        path to the audit log config file
  -s3.cert.file string
        path to the TLS certificate file
  -s3.config string
        path to the config file
  -s3.domainName string
        suffix of the host name in comma separated list, {bucket}.{domainName}
  -s3.key.file string
        path to the TLS private key file
  -s3.port int
        s3 server http listen port (default 8333)
  -s3.port.grpc int
        s3 server grpc listen port
  -volume
        whether to start volume server (default true)
  -volume.compactionMBps int
        limit compaction speed in mega bytes per second
  -volume.concurrentDownloadLimitMB int
        limit total concurrent download size (default 64)
  -volume.concurrentUploadLimitMB int
        limit total concurrent upload size (default 64)
  -volume.dir.idx string
        directory to store .idx files
  -volume.disk string
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -volume.fileSizeLimitMB int
        limit file size to avoid out of memory (default 256)
  -volume.hasSlowRead
        <experimental> if true, this prevents slow reads from blocking other requests, but large file read P99 latency will increase. (default true)
  -volume.images.fix.orientation
        Adjust jpg orientation when uploading.
  -volume.index string
        Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance. (default "memory")
  -volume.index.leveldbTimeout int
        alive time for leveldb (default to 0). If leveldb of volume is not accessed in ldbTimeout hours, it will be off loaded to reduce opened files and memory consumption.
  -volume.inflightUploadDataTimeout duration
        inflight upload data wait timeout of volume servers (default 1m0s)
  -volume.max string
        maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")
  -volume.minFreeSpace string
        min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.
  -volume.minFreeSpacePercent string
        minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead). (default "1")
  -volume.port int
        volume server http listen port (default 8080)
  -volume.port.grpc int
        volume server grpc listen port
  -volume.port.public int
        volume server public port
  -volume.pprof
        enable pprof http handlers. precludes --memprofile and --cpuprofile
  -volume.preStopSeconds int
        number of seconds between stop send heartbeats and stop volume server (default 10)
  -volume.publicUrl string
        publicly accessible address
  -volume.readBufferSizeMB int
        <experimental> larger values can optimize query performance but will increase some memory usage,Use with hasSlowRead normally (default 4)
  -volume.readMode string
        [local|proxy|redirect] how to deal with non-local volume: 'not found|read in remote node|redirect volume location'. (default "proxy")
  -webdav
        whether to start WebDAV gateway
  -webdav.cacheCapacityMB int
        local cache capacity in MB
  -webdav.cacheDir string
        local cache directory for file chunks (default "/tmp")
  -webdav.cert.file string
        path to the TLS certificate file
  -webdav.collection string
        collection to create the files
  -webdav.disk string
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -webdav.filer.path string
        use this remote path from filer server (default "/")
  -webdav.key.file string
        path to the TLS private key file
  -webdav.port int
        webdav server http listen port (default 7333)
  -webdav.replication string
        replication to create the files
  -whiteList string
        comma separated Ip addresses having write permission. No limit if empty.
Description:
  start both a volume server to provide storage spaces
  and a master server to provide volume=>location mapping service and sequence number of file ids

  This is provided as a convenient way to start both volume server and master server.
  The servers acts exactly the same as starting them separately.
  So other volume servers can connect to this master server also.

  Optionally, a filer server can be started.
  Also optionally, a S3 gateway can be started.
/data #

Chinese annotations

/data # weed server -h
Example: weed server -dir=/tmp -volume.max=5 -ip=server_name  # 示例命令
Default Usage:
  -cpuprofile string  # CPU性能分析输出文件
        cpu profile output file  
  -dataCenter string  # 当前卷服务器的数据中心名称
        current volume server's data center name  
  -debug  # 启用调试模式,提供运行时分析数据
        serves runtime profiling data, e.g., http://localhost:6060/debug/pprof/goroutine?debug=2  
  -debug.port int  # 调试用的HTTP端口号 (默认6060)
        http port for debugging (default 6060)  
  -dir string  # 数据存储目录列表,多个目录用逗号分隔 (默认"/tmp")
        directories to store data files. dir[,dir]... (default "/tmp")  
  -disableHttp  # 禁用HTTP请求,只允许gRPC操作
        disable http requests, only gRPC operations are allowed.  
  -filer  # 是否启动文件管理器服务
        whether to start filer  
  -filer.collection string  # 所有数据将存储在此集合中
        all data will be stored in this collection  
  -filer.concurrentUploadLimitMB int  # 总并发上传大小限制(单位MB)(默认64)
        limit total concurrent upload size (default 64)  
  -filer.defaultReplicaPlacement string  # 默认副本放置策略(未指定时使用主设置)
        default replication type. If not specified, use master setting.  
  -filer.dirListLimit int  # 子目录列表显示数量限制 (默认1000)
        limit sub dir listing size (default 1000)  
  -filer.disableDirListing  # 关闭目录列表功能
        turn off directory listing  
  -filer.disk string  # 磁盘类型标签 [hdd|ssd|<自定义标签>]
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag  
  -filer.downloadMaxMBps int  # 单个下载请求的最大速度(MB/秒)
        download max speed for each download request, in MB per second  
  -filer.encryptVolumeData  # 加密卷服务器上的数据
        encrypt data on volume servers  
  -filer.filerGroup string  # 与同组文件管理器共享元数据
        share metadata with other filers in the same filerGroup  
  -filer.localSocket string  # 本地socket文件路径 (默认/tmp/seaweedfs-filer-<port>.sock)
        default to /tmp/seaweedfs-filer-<port>.sock  
  -filer.maxMB int  # 文件分割阈值(单位MB)(默认4)
        split files larger than the limit (default 4)  
  -filer.port int  # 文件管理器HTTP监听端口 (默认8888)
        filer server http listen port (default 8888)  
  -filer.port.grpc int  # 文件管理器gRPC监听端口
        filer server grpc listen port  
  -filer.port.public int  # 文件管理器公共HTTP监听端口
        filer server public http listen port  
  -filer.saveToFilerLimit int  # 可缓存到文件管理器的小文件大小阈值
        Small files smaller than this limit can be cached in filer store.  
  -filer.ui.deleteDir  # 在文件管理器UI显示删除目录按钮 (默认true)
        enable filer UI show delete directory button (default true)  
  -iam  # 是否启动IAM服务
        whether to start IAM service  
  -iam.port int  # IAM服务HTTP监听端口 (默认8111)
        iam server http listen port (default 8111)  
  -idleTimeout int  # 连接空闲超时秒数 (默认30)
        connection idle seconds (default 30)  
  -ip string  # 服务器IP或名称,也作为标识符 (默认"172.17.0.6")
        ip or server name, also used as identifier (default "172.17.0.6")  
  -ip.bind string  # 绑定的IP地址(空则使用-ip设置)
        ip address to bind to. If empty, default to same as -ip option.  
  -master  # 是否启动主服务器 (默认true)
        whether to start master server (default true)  
  -master.defaultReplication string  # 默认副本策略(未指定时使用)
        Default replication type if not specified.  
  -master.dir string  # 主服务器元数据存储目录(默认同-dir)
        data directory to store meta data, default to same as -dir specified  
  -master.electionTimeout duration  # 主服务器选举超时时间 (默认10s)
        election timeout of master servers (default 10s)  
  -master.garbageThreshold float  # 触发空间回收的垃圾占比阈值 (默认0.3)
        threshold to vacuum and reclaim spaces (default 0.3)  
  -master.heartbeatInterval duration  # 主服务器心跳间隔(随机乘以1~1.25)(默认300ms)
        heartbeat interval of master servers, and will be randomly multiplied by [1, 1.25) (default 300ms)  
  -master.metrics.address string  # Prometheus网关地址
        Prometheus gateway address  
  -master.metrics.intervalSeconds int  # Prometheus推送间隔(秒)(默认15)
        Prometheus push interval in seconds (default 15)  
  -master.peers string  # 所有主节点列表(逗号分隔的ip:port)
        all master nodes in comma separated ip:masterPort list  
  -master.port int  # 主服务器HTTP监听端口 (默认9333)
        master server http listen port (default 9333)  
  -master.port.grpc int  # 主服务器gRPC监听端口
        master server grpc listen port  
  -master.raftHashicorp  # 使用Hashicorp Raft实现
        use hashicorp raft  
  -master.resumeState  # 启动时恢复之前的状态
        resume previous state on start master server  
  -master.volumePreallocate  # 为卷预分配磁盘空间
        Preallocate disk space for volumes.  
  -master.volumeSizeLimitMB uint  # 主服务器停止写入超大卷的阈值(单位MB)(默认30000)
        Master stops directing writes to oversized volumes. (default 30000)  
  -memprofile string  # 内存分析输出文件
        memory profile output file  
  -metricsPort int  # Prometheus指标监听端口
        Prometheus metrics listen port  
  -mq.broker  # 是否启动消息队列代理
        whether to start message queue broker  
  -mq.broker.port int  # 消息队列代理gRPC监听端口 (默认17777)
        message queue broker gRPC listen port (default 17777)  
  -options string  # 命令行选项配置文件(每行格式optionName=optionValue)
        a file of command line options, each line in optionName=optionValue format  
  -rack string  # 当前卷服务器的机架名称
        current volume server's rack name  
  -s3  # 是否启动S3网关
        whether to start S3 gateway  
  -s3.allowDeleteBucketNotEmpty  # 允许递归删除非空桶 (默认true)
        allow recursive deleting all entries along with bucket (default true)  
  -s3.allowEmptyFolder  # 允许空文件夹 (默认true)
        allow empty folders (default true)  
  -s3.auditLogConfig string  # 审计日志配置文件路径
        path to the audit log config file  
  -s3.cert.file string  # TLS证书文件路径
        path to the TLS certificate file  
  -s3.config string  # 配置文件路径
        path to the config file  
  -s3.domainName string  # S3域名后缀(逗号分隔列表,格式{bucket}.{domainName})
        suffix of the host name in comma separated list, {bucket}.{domainName}  
  -s3.key.file string  # TLS私钥文件路径
        path to the TLS private key file  
  -s3.port int  # S3服务HTTP监听端口 (默认8333)
        s3 server http listen port (default 8333)  
  -s3.port.grpc int  # S3服务gRPC监听端口
        s3 server grpc listen port  
  -volume  # 是否启动卷服务器 (默认true)
        whether to start volume server (default true)  
  -volume.compactionMBps int  # 压缩速度限制(MB/秒)
        limit compaction speed in mega bytes per second  
  -volume.concurrentDownloadLimitMB int  # 总并发下载大小限制(单位MB)(默认64)
        limit total concurrent download size (default 64)  
  -volume.concurrentUploadLimitMB int  # 总并发上传大小限制(单位MB)(默认64)
        limit total concurrent upload size (default 64)  
  -volume.dir.idx string  # .idx文件存储目录
        directory to store .idx files  
  -volume.disk string  # 卷磁盘类型标签 [hdd|ssd|<自定义标签>]
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag  
  -volume.fileSizeLimitMB int  # 文件大小限制以避免内存溢出(单位MB)(默认256)
        limit file size to avoid out of memory (default 256)  
  -volume.hasSlowRead  # <实验性> 防止慢速读取阻塞其他请求(默认true)
        <experimental> if true, this prevents slow reads from blocking other requests, but large file read P99 latency will increase. (default true)  
  -volume.images.fix.orientation  # 上传时自动调整JPG方向
        Adjust jpg orientation when uploading.  
  -volume.index string  # 索引模式选择 [memory|leveldb|leveldbMedium|leveldbLarge] (默认"memory")
        Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance. (default "memory")  
  -volume.index.leveldbTimeout int  # leveldb存活超时时间(小时),0表示禁用
        alive time for leveldb (default to 0). If leveldb of volume is not accessed in ldbTimeout hours, it will be off loaded to reduce opened files and memory consumption.  
  -volume.inflightUploadDataTimeout duration  # 传输中上传数据等待超时时间 (默认1m0s)
        inflight upload data wait timeout of volume servers (default 1m0s)  
  -volume.max string  # 最大卷数量(设为0则自动根据磁盘空间计算)(默认"8")
        maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")  
  -volume.minFreeSpace string  # 最小空闲磁盘空间(百分比<=100,或如10GiB)
        min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.  
  -volume.minFreeSpacePercent string  # 最小空闲磁盘空间百分比(已弃用,改用minFreeSpace)(默认"1")
        minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead). (default "1")  
  -volume.port int  # 卷服务器HTTP监听端口 (默认8080)
        volume server http listen port (default 8080)  
  -volume.port.grpc int  # 卷服务器gRPC监听端口
        volume server grpc listen port  
  -volume.port.public int  # 卷服务器公共端口
        volume server public port  
  -volume.pprof  # 启用pprof HTTP处理器(与--memprofile/--cpuprofile互斥)
        enable pprof http handlers. precludes --memprofile and --cpuprofile  
  -volume.preStopSeconds int  # 停止发送心跳到停止服务的时间间隔(秒)(默认10)
        number of seconds between stop send heartbeats and stop volume server (default 10)  
  -volume.publicUrl string  # 公开访问地址
        publicly accessible address  
  -volume.readBufferSizeMB int  # <实验性> 读缓冲区大小(MB)(默认4)
        <experimental> larger values can optimize query performance but will increase some memory usage,Use with hasSlowRead normally (default 4)  
  -volume.readMode string  # 非本地卷处理模式 [local|proxy|redirect] (默认"proxy")
        [local|proxy|redirect] how to deal with non-local volume: 'not found|read in remote node|redirect volume location'. (default "proxy")  
  -webdav  # 是否启动WebDAV网关
        whether to start WebDAV gateway  
  -webdav.cacheCapacityMB int  # 本地缓存容量(MB)
        local cache capacity in MB  
  -webdav.cacheDir string  # 文件块本地缓存目录 (默认"/tmp")
        local cache directory for file chunks (default "/tmp")  
  -webdav.cert.file string  # TLS证书文件路径
        path to the TLS certificate file  
  -webdav.collection string  # 文件创建的目标集合
        collection to create the files  
  -webdav.disk string  # WebDAV磁盘类型标签 [hdd|ssd|<自定义标签>]
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag  
  -webdav.filer.path string  # 使用的远程文件管理器路径 (默认"/")
        use this remote path from filer server (default "/")  
  -webdav.key.file string  # TLS私钥文件路径
        path to the TLS private key file  
  -webdav.port int  # WebDAV服务HTTP监听端口 (默认7333)
        webdav server http listen port (default 7333)  
  -webdav.replication string  # 文件创建的副本策略
        replication to create the files  
  -whiteList string  # 拥有写权限的IP白名单(逗号分隔,空表示无限制)
        comma separated Ip addresses having write permission. No limit if empty.  
Description:
  start both a volume server to provide storage spaces  # 同时启动卷服务器提供存储空间
  and a master server to provide volume=>location mapping service and sequence number of file ids  # 和主服务器提供卷位置映射及文件ID序列服务

  This is provided as a convenient way to start both volume server and master server.  # 本命令是同时启动卷服务器和主服务器的便捷方式
  The servers acts exactly the same as starting them separately.  # 服务表现与单独启动时完全相同
  So other volume servers can connect to this master server also.  # 其他卷服务器也可以连接到此主服务器

  Optionally, a filer server can be started.  # 可选项:可启动文件管理器服务
  Also optionally, a S3 gateway can be started.  # 可选项:可启动S3网关
/data #
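
To make the flag groups above concrete, here is a sketch of a docker run invocation that enables the filer and the S3 gateway alongside the default master and volume server (the image name, port mappings and host path are illustrative assumptions, not from the article):

    # master (9333) + volume (8080) + filer (8888) + S3 gateway (8333) in one container
    docker run -d --name seaweedfs \
        -p 9333:9333 -p 8080:8080 -p 8888:8888 -p 8333:8333 \
        -v /opt/seaweedfs/data:/data \
        chrislusf/seaweedfs server \
        -dir=/data -filer -s3 -volume.max=20

The flag names (-dir, -filer, -s3, -volume.max) are exactly the ones documented in the help output above.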

weed volume -h

English

/data # weed volume -h
Example: weed volume -port=8080 -dir=/tmp -max=5 -ip=server_name -mserver=localhost:9333
Default Usage:
  -compactionMBps int
        limit background compaction or copying speed in mega bytes per second
  -concurrentDownloadLimitMB int
        limit total concurrent download size (default 256)
  -concurrentUploadLimitMB int
        limit total concurrent upload size (default 256)
  -cpuprofile string
        cpu profile output file
  -dataCenter string
        current volume server's data center name
  -dir string
        directories to store data files. dir[,dir]... (default "/tmp")
  -dir.idx string
        directory to store .idx files
  -disk string
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -fileSizeLimitMB int
        limit file size to avoid out of memory (default 256)
  -hasSlowRead
        <experimental> if true, this prevents slow reads from blocking other requests, but large file read P99 latency will increase. (default true)
  -idleTimeout int
        connection idle seconds (default 30)
  -images.fix.orientation
        Adjust jpg orientation when uploading.
  -index string
        Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance. (default "memory")
  -index.leveldbTimeout int
        alive time for leveldb (default to 0). If leveldb of volume is not accessed in ldbTimeout hours, it will be off loaded to reduce opened files and memory consumption.
  -inflightUploadDataTimeout duration
        inflight upload data wait timeout of volume servers (default 1m0s)
  -ip string
        ip or server name, also used as identifier (default "172.17.0.6")
  -ip.bind string
        ip address to bind to. If empty, default to same as -ip option.
  -max string
        maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")
  -memprofile string
        memory profile output file
  -metricsPort int
        Prometheus metrics listen port
  -minFreeSpace string
        min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.
  -minFreeSpacePercent string
        minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead). (default "1")
  -mserver string
        comma-separated master servers (default "localhost:9333")
  -options string
        a file of command line options, each line in optionName=optionValue format
  -port int
        http listen port (default 8080)
  -port.grpc int
        grpc listen port
  -port.public int
        port opened to public
  -pprof
        enable pprof http handlers. precludes --memprofile and --cpuprofile
  -preStopSeconds int
        number of seconds between stop send heartbeats and stop volume server (default 10)
  -publicUrl string
        Publicly accessible address
  -rack string
        current volume server's rack name
  -readBufferSizeMB int
        <experimental> larger values can optimize query performance but will increase some memory usage,Use with hasSlowRead normally. (default 4)
  -readMode string
        [local|proxy|redirect] how to deal with non-local volume: 'not found|proxy to remote node|redirect volume location'. (default "proxy")
  -whiteList string
        comma separated Ip addresses having write permission. No limit if empty.
Description:
  start a volume server to provide storage spaces

Chinese annotations

/data # weed volume -h
Example: weed volume -port=8080 -dir=/tmp -max=5 -ip=server_name -mserver=localhost:9333
Default Usage:
  -compactionMBps int
        limit background compaction or copying speed in mega bytes per second
        [限制后台压缩或复制速度,单位MB/秒]
  -concurrentDownloadLimitMB int
        limit total concurrent download size (default 256)
        [限制并发下载总大小,默认256MB]
  -concurrentUploadLimitMB int
        limit total concurrent upload size (default 256)
        [限制并发上传总大小,默认256MB]
  -cpuprofile string
        cpu profile output file
        [CPU性能分析输出文件名]
  -dataCenter string
        current volume server's data center name
        [当前卷服务器的数据中心名称]
  -dir string
        directories to store data files. dir[,dir]... (default "/tmp")
        [数据文件存储目录,多个目录用逗号分隔,默认/tmp]
  -dir.idx string
        directory to store .idx files
        [索引文件存储目录]
  -disk string
        [hdd|ssd|<tag>] hard drive or solid state drive or any tag
        [磁盘类型标识:hdd/ssd/自定义标签]
  -fileSizeLimitMB int
        limit file size to avoid out of memory (default 256)
        [限制单个文件大小防止内存溢出,默认256MB]
  -hasSlowRead
        <experimental> if true, this prevents slow reads from blocking other requests, but large file read P99 latency will increase. (default true)
        [实验性:启用后慢读不会阻塞其他请求,但大文件读取延迟会增加]
  -idleTimeout int
        connection idle seconds (default 30)
        [连接空闲超时时间(秒),默认30秒]
  -images.fix.orientation
        Adjust jpg orientation when uploading.
        [上传时自动调整JPG方向]
  -index string
        Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance. (default "memory")
        [索引存储模式:内存优先或不同级别的LevelDB]
  -index.leveldbTimeout int
        alive time for leveldb (default to 0). If leveldb of volume is not accessed in ldbTimeout hours, it will be off loaded to reduce opened files and memory consumption.
        [LevelDB存活时间(小时),超时后卸载以节省资源]
  -inflightUploadDataTimeout duration
        inflight upload data wait timeout of volume servers (default 1m0s)
        [上传数据等待超时时间,默认1分钟]
  -ip string
        ip or server name, also used as identifier (default "172.17.0.6")
        [服务器IP/名称,也作为唯一标识]
  -ip.bind string
        ip address to bind to. If empty, default to same as -ip option.
        [绑定IP地址,默认与-ip相同]
  -max string
        maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")
        [最大卷数量(自动计算磁盘空间与卷大小的比值)]
  -memprofile string
        memory profile output file
        [内存性能分析输出文件名]
  -metricsPort int
        Prometheus metrics listen port
        [Prometheus指标监听端口]
  -minFreeSpace string
        min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.
        [最小磁盘剩余空间(百分比或易读字节单位如10GiB),空间不足时将卷设为只读]
  -minFreeSpacePercent string
        minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead). (default "1")
        [已弃用,改用minFreeSpace参数]
  -mserver string
        comma-separated master servers (default "localhost:9333")
        [主服务器地址列表,用逗号分隔]
  -options string
        a file of command line options, each line in optionName=optionValue format
        [配置文件路径(每行格式为optionName=optionValue)]
  -port int
        http listen port (default 8080)
        [HTTP监听端口]
  -port.grpc int
        grpc listen port
        [gRPC监听端口]
  -port.public int
        port opened to public
        [对外开放端口]
  -pprof
        enable pprof http handlers. precludes --memprofile and --cpuprofile
        [启用pprof性能分析(与--memprofile/--cpuprofile互斥)]
  -preStopSeconds int
        number of seconds between stop send heartbeats and stop volume server (default 10)
        [停止发送心跳到停止服务之间的等待秒数]
  -publicUrl string
        Publicly accessible address
        [公开访问地址]
  -rack string
        current volume server's rack name
        [当前卷服务器的机架名称]
  -readBufferSizeMB int
        <experimental> larger values can optimize query performance but will increase some memory usage,Use with hasSlowRead normally. (default 4)
        [实验性:增大可优化查询性能但增加内存占用,默认4MB]
  -readMode string
        [local|proxy|redirect] how to deal with non-local volume: 'not found|proxy to remote node|redirect volume location'. (default "proxy")
        [非本地卷处理模式:本地无/代理请求/重定向]
  -whiteList string
        comma separated Ip addresses having write permission. No limit if empty.
        [白名单IP地址(逗号分隔),空表示无限制]
Description:
  start a volume server to provide storage spaces
  [启动卷服务器提供存储空间]
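
Because `weed server` also exposes an ordinary master on port 9333, extra capacity can be added later by starting standalone volume servers against it. A sketch, assuming the master is reachable at 192.168.1.10:9333 (the address, port and directory are placeholders):

    # register an additional volume server with an existing master
    weed volume -dir=/data/vol1 -max=10 -port=8081 -mserver=192.168.1.10:9333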

Key points

-master.garbageThreshold float  # garbage ratio that triggers vacuuming and space reclamation (default 0.3)
        threshold to vacuum and reclaim spaces (default 0.3)  

-volume.max string  # maximum number of volumes; 0 means auto-configure from free disk space divided by volume size (default "8")
        maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")  
-volume.minFreeSpace string  # minimum free disk space (a value <= 100 is a percentage, otherwise human-readable bytes such as 10GiB); once the threshold is hit, all volumes are marked read-only (e.g. 30 means 30%)
        min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.  

Tested it: `-volume.minFreeSpace string` is pretty aggressive; set it to 10 (10%) and it leaves the system only 10% free space, claiming the rest up front

[screenshot]
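
According to the help text, `-volume.minFreeSpace` accepts either a percentage (values <= 100) or a human-readable byte size, so the threshold can also be expressed in absolute terms. A sketch of both forms (the values are arbitrary examples):

    # percentage form: mark all volumes read-only once free space drops below 10%
    weed server -dir=/data -filer -volume.minFreeSpace=10
    # absolute form: the same, but with a 50GiB threshold
    weed server -dir=/data -filer -volume.minFreeSpace=50GiB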

Trying to set the maximum number of volumes with only `-volume.max string` (each volume seems to be roughly 1 GB)

I tried setting it to 20:

    docker run \
        -d -i -t --restart always \
        --name $CONTAINER_NAME \
        -p $MASTER_PORT:9333 \
        -p $FILER_PORT:8888 \
        -v $SCRIPT_LOCATION/mount/masterVolumeFiler/data/:/data/ \
        -v /etc/localtime:/etc/localtime:ro \
        --log-driver=json-file \
        --log-opt max-size=100m \
        --log-opt max-file=3 \
        $IMAGE_NAME:$IMAGE_TAG \
        server -filer -volume.max=20

[screenshot]

As files keep being uploaded, it expands in stages:

[screenshots]
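
One way to watch this staged growth is to compare the disk usage inside the container with what the master reports; the sketch below assumes the container name and the default 9333 port mapping from the command above (/dir/status is the master's volume/topology status API):

    # how much space the data directory has actually claimed
    docker exec <container-name> df -h /data
    # the master's view of volume counts and sizes (JSON)
    curl http://localhost:9333/dir/status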


