SeaweedFS (master / volume / filer): docker run parameter reference
Table of contents
- Run inside the container to get the help text
- weed -h
- weed server -h
- weed volume -h
- Key points
  - From testing, `-volume.minFreeSpace string` is aggressive: set it to 10 (i.e. 10%) and it leaves the system only 10% free space, claiming the rest up front
  - Trying to cap the volume count with `-volume.max string` alone (each volume seems to be about 1 GB)
Run the following inside the container to get the help text
weed -h
/data # weed
SeaweedFS: store billions of files and serve them fast!

Usage: weed command [arguments]

The commands are:
  autocomplete            install autocomplete
  autocomplete.uninstall  uninstall autocomplete
  backup                  incrementally backup a volume to local folder
  benchmark               benchmark by writing millions of files and reading them out
  compact                 run weed tool compact on volume file
  download                download files by file id
  export                  list or export files from one volume data file
  filer                   start a file server that points to a master server, or a list of master servers
  filer.backup            resume-able continuously replicate files from a SeaweedFS cluster to another location defined in replication.toml
  filer.cat               copy one file to local
  filer.copy              copy one or a list of files to a filer folder
  filer.meta.backup       continuously backup filer meta data changes to another filer store specified in a backup_filer.toml
  filer.meta.tail         see continuous changes on a filer
  filer.remote.gateway    resumable continuously write back bucket creation, deletion, and other local updates to remote object store
  filer.remote.sync       resumable continuously write back updates to remote storage
  filer.replicate         replicate file changes to another destination
  filer.sync              resumable continuous synchronization between two active-active or active-passive SeaweedFS clusters
  fix                     run weed tool fix on files or whole folders to recreate index file(s) if corrupted
  fuse                    allow use weed with linux's mount command
  iam                     start an iam API compatible server
  master                  start a master server
  master.follower         start a master follower
  mount                   mount weed filer to a directory as file system in userspace (FUSE)
  mq.broker               <WIP> start a message queue broker
  s3                      start a s3 API compatible server that is backed by a filer
  scaffold                generate basic configuration files
  server                  start a master server, a volume server, and optionally a filer and a S3 gateway
  shell                   run interactive administrative commands
  update                  get latest or specific version from https://github.com/seaweedfs/seaweedfs
  upload                  upload one or a list of files
  version                 print SeaweedFS version
  volume                  start a volume server
  webdav                  start a webdav server that is backed by a filer

Use "weed help [command]" for more information about a command.

For logging, use "weed [logging_options] [command]". The logging options are:
  -alsologtostderr           log to standard error as well as files (default true)
  -config_dir value          directory with toml configuration files
  -log_backtrace_at value    when logging hits line file:N, emit a stack trace
  -logdir string             if non-empty, write log files in this directory
  -logtostderr               log to standard error instead of files
  -options string            a file of command line options, each line in optionName=optionValue format
  -stderrthreshold value     logs at or above this threshold go to stderr
  -v value                   log levels [0|1|2|3|4], default to 0
  -vmodule value             comma-separated list of pattern=N settings for file-filtered logging
weed server -h
/data # weed server -h
Example: weed server -dir=/tmp -volume.max=5 -ip=server_name
Default Usage:
  -cpuprofile string                       cpu profile output file
  -dataCenter string                       current volume server's data center name
  -debug                                   serves runtime profiling data, e.g., http://localhost:6060/debug/pprof/goroutine?debug=2
  -debug.port int                          http port for debugging (default 6060)
  -dir string                              directories to store data files. dir[,dir]... (default "/tmp")
  -disableHttp                             disable http requests, only gRPC operations are allowed
  -filer                                   whether to start filer
  -filer.collection string                 all data will be stored in this collection
  -filer.concurrentUploadLimitMB int       limit total concurrent upload size (default 64)
  -filer.defaultReplicaPlacement string    default replication type; if not specified, use master setting
  -filer.dirListLimit int                  limit sub dir listing size (default 1000)
  -filer.disableDirListing                 turn off directory listing
  -filer.disk string                       [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -filer.downloadMaxMBps int               download max speed for each download request, in MB per second
  -filer.encryptVolumeData                 encrypt data on volume servers
  -filer.filerGroup string                 share metadata with other filers in the same filerGroup
  -filer.localSocket string                default to /tmp/seaweedfs-filer-<port>.sock
  -filer.maxMB int                         split files larger than the limit (default 4)
  -filer.port int                          filer server http listen port (default 8888)
  -filer.port.grpc int                     filer server grpc listen port
  -filer.port.public int                   filer server public http listen port
  -filer.saveToFilerLimit int              small files smaller than this limit can be cached in filer store
  -filer.ui.deleteDir                      enable filer UI show delete directory button (default true)
  -iam                                     whether to start IAM service
  -iam.port int                            iam server http listen port (default 8111)
  -idleTimeout int                         connection idle seconds (default 30)
  -ip string                               ip or server name, also used as identifier (default "172.17.0.6")
  -ip.bind string                          ip address to bind to; if empty, default to same as -ip option
  -master                                  whether to start master server (default true)
  -master.defaultReplication string        default replication type if not specified
  -master.dir string                       data directory to store meta data, default to same as -dir specified
  -master.electionTimeout duration         election timeout of master servers (default 10s)
  -master.garbageThreshold float           threshold to vacuum and reclaim spaces (default 0.3)
  -master.heartbeatInterval duration       heartbeat interval of master servers, randomly multiplied by [1, 1.25) (default 300ms)
  -master.metrics.address string           Prometheus gateway address
  -master.metrics.intervalSeconds int      Prometheus push interval in seconds (default 15)
  -master.peers string                     all master nodes in comma separated ip:masterPort list
  -master.port int                         master server http listen port (default 9333)
  -master.port.grpc int                    master server grpc listen port
  -master.raftHashicorp                    use hashicorp raft
  -master.resumeState                      resume previous state on start master server
  -master.volumePreallocate                preallocate disk space for volumes
  -master.volumeSizeLimitMB uint           master stops directing writes to oversized volumes (default 30000)
  -memprofile string                       memory profile output file
  -metricsPort int                         Prometheus metrics listen port
  -mq.broker                               whether to start message queue broker
  -mq.broker.port int                      message queue broker gRPC listen port (default 17777)
  -options string                          a file of command line options, each line in optionName=optionValue format
  -rack string                             current volume server's rack name
  -s3                                      whether to start S3 gateway
  -s3.allowDeleteBucketNotEmpty            allow recursive deleting all entries along with bucket (default true)
  -s3.allowEmptyFolder                     allow empty folders (default true)
  -s3.auditLogConfig string                path to the audit log config file
  -s3.cert.file string                     path to the TLS certificate file
  -s3.config string                        path to the config file
  -s3.domainName string                    suffix of the host name in comma separated list, {bucket}.{domainName}
  -s3.key.file string                      path to the TLS private key file
  -s3.port int                             s3 server http listen port (default 8333)
  -s3.port.grpc int                        s3 server grpc listen port
  -volume                                  whether to start volume server (default true)
  -volume.compactionMBps int               limit compaction speed in mega bytes per second
  -volume.concurrentDownloadLimitMB int    limit total concurrent download size (default 64)
  -volume.concurrentUploadLimitMB int      limit total concurrent upload size (default 64)
  -volume.dir.idx string                   directory to store .idx files
  -volume.disk string                      [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -volume.fileSizeLimitMB int              limit file size to avoid out of memory (default 256)
  -volume.hasSlowRead                      <experimental> if true, prevents slow reads from blocking other requests, but large file read P99 latency will increase (default true)
  -volume.images.fix.orientation           adjust jpg orientation when uploading
  -volume.index string                     choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance (default "memory")
  -volume.index.leveldbTimeout int         alive time for leveldb (default 0); if a volume's leveldb is not accessed within this many hours, it is offloaded to reduce open files and memory consumption
  -volume.inflightUploadDataTimeout duration  inflight upload data wait timeout of volume servers (default 1m0s)
  -volume.max string                       maximum numbers of volumes, count[,count]...; if set to zero, the limit is auto configured as free disk space divided by volume size (default "8")
  -volume.minFreeSpace string              min free disk space (value<=100 as percentage like 1, otherwise human readable bytes like 10GiB); low disk space will mark all volumes as ReadOnly
  -volume.minFreeSpacePercent string       minimum free disk space percentage; low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead) (default "1")
  -volume.port int                         volume server http listen port (default 8080)
  -volume.port.grpc int                    volume server grpc listen port
  -volume.port.public int                  volume server public port
  -volume.pprof                            enable pprof http handlers; precludes --memprofile and --cpuprofile
  -volume.preStopSeconds int               seconds between stopping heartbeats and stopping the volume server (default 10)
  -volume.publicUrl string                 publicly accessible address
  -volume.readBufferSizeMB int             <experimental> larger values can optimize query performance but increase memory usage; use with hasSlowRead normally (default 4)
  -volume.readMode string                  [local|proxy|redirect] how to deal with a non-local volume: 'not found|read in remote node|redirect volume location' (default "proxy")
  -webdav                                  whether to start WebDAV gateway
  -webdav.cacheCapacityMB int              local cache capacity in MB
  -webdav.cacheDir string                  local cache directory for file chunks (default "/tmp")
  -webdav.cert.file string                 path to the TLS certificate file
  -webdav.collection string                collection to create the files
  -webdav.disk string                      [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -webdav.filer.path string                use this remote path from filer server (default "/")
  -webdav.key.file string                  path to the TLS private key file
  -webdav.port int                         webdav server http listen port (default 7333)
  -webdav.replication string               replication to create the files
  -whiteList string                        comma separated IP addresses having write permission; no limit if empty
Description:
  start both a volume server to provide storage spaces
  and a master server to provide volume=>location mapping service and sequence number of file ids.
  This is provided as a convenient way to start both volume server and master server.
  The servers act exactly the same as starting them separately,
  so other volume servers can connect to this master server too.
  Optionally, a filer server can be started.
  Also optionally, a S3 gateway can be started.
/data #
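Putting the key server flags together, a minimal invocation might look like the sketch below. The directory path and the 10 GiB threshold are illustrative placeholders I chose, not tested recommendations, and this fragment obviously needs the `weed` binary to run:

```shell
# Sketch: master + volume server + filer in one process, capping the volume
# count at 20 and keeping at least 10 GiB free (all values are placeholders).
weed server \
  -dir=/data \
  -filer \
  -volume.max=20 \
  -volume.minFreeSpace=10GiB
```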
weed volume -h
/data # weed volume -h
Example: weed volume -port=8080 -dir=/tmp -max=5 -ip=server_name -mserver=localhost:9333
Default Usage:
  -compactionMBps int               limit background compaction or copying speed in mega bytes per second
  -concurrentDownloadLimitMB int    limit total concurrent download size (default 256)
  -concurrentUploadLimitMB int      limit total concurrent upload size (default 256)
  -cpuprofile string                cpu profile output file
  -dataCenter string                current volume server's data center name
  -dir string                       directories to store data files. dir[,dir]... (default "/tmp")
  -dir.idx string                   directory to store .idx files
  -disk string                      [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -fileSizeLimitMB int              limit file size to avoid out of memory (default 256)
  -hasSlowRead                      <experimental> if true, prevents slow reads from blocking other requests, but large file read P99 latency will increase (default true)
  -idleTimeout int                  connection idle seconds (default 30)
  -images.fix.orientation           adjust jpg orientation when uploading
  -index string                     choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance (default "memory")
  -index.leveldbTimeout int         alive time for leveldb (default 0); if a volume's leveldb is not accessed within this many hours, it is offloaded to reduce open files and memory consumption
  -inflightUploadDataTimeout duration  inflight upload data wait timeout of volume servers (default 1m0s)
  -ip string                        ip or server name, also used as identifier (default "172.17.0.6")
  -ip.bind string                   ip address to bind to; if empty, default to same as -ip option
  -max string                       maximum numbers of volumes, count[,count]...; if set to zero, the limit is auto configured as free disk space divided by volume size (default "8")
  -memprofile string                memory profile output file
  -metricsPort int                  Prometheus metrics listen port
  -minFreeSpace string              min free disk space (value<=100 as percentage like 1, otherwise human readable bytes like 10GiB); low disk space will mark all volumes as ReadOnly
  -minFreeSpacePercent string       minimum free disk space percentage; low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead) (default "1")
  -mserver string                   comma-separated master servers (default "localhost:9333")
  -options string                   a file of command line options, each line in optionName=optionValue format
  -port int                         http listen port (default 8080)
  -port.grpc int                    grpc listen port
  -port.public int                  port opened to public
  -pprof                            enable pprof http handlers; precludes --memprofile and --cpuprofile
  -preStopSeconds int               seconds between stopping heartbeats and stopping the volume server (default 10)
  -publicUrl string                 publicly accessible address
  -rack string                      current volume server's rack name
  -readBufferSizeMB int             <experimental> larger values can optimize query performance but increase memory usage; use with hasSlowRead normally (default 4)
  -readMode string                  [local|proxy|redirect] how to deal with a non-local volume: 'not found|proxy to remote node|redirect volume location' (default "proxy")
  -whiteList string                 comma separated IP addresses having write permission; no limit if empty
Description:
  start a volume server to provide storage spaces
Key points
- `-master.garbageThreshold float`: threshold to vacuum and reclaim space (default 0.3)
- `-volume.max string`: maximum number of volumes, as count[,count]...; if set to zero, the limit is auto-configured as free disk space divided by volume size (default "8")
- `-volume.minFreeSpace string`: minimum free disk space; a value <= 100 is a percentage (e.g. 1), anything else is a human-readable byte size (e.g. 10GiB). When free space drops below the threshold, all volumes are marked ReadOnly (so writing 30 means 30%).

From testing, `-volume.minFreeSpace` is aggressive: set it to 10 (i.e. 10%) and it leaves the system only 10% free space, claiming all the rest up front.
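The two accepted value forms can be sketched as follows. This is only my reading of the help text above, not SeaweedFS source code, and the function name is made up:

```shell
# Illustrative sketch of how -volume.minFreeSpace values read, per the help
# text: a bare number <= 100 is a percentage, anything else is a byte size
# like 10GiB. Function name and logic are my own, not SeaweedFS code.
interpret_min_free_space() {
  value=$1
  case "$value" in
    *[!0-9]*) echo "absolute size: $value" ;;   # contains non-digits, e.g. 10GiB
    *) if [ "$value" -le 100 ]; then
         echo "percentage: ${value}%"
       else
         echo "absolute size: $value bytes"
       fi ;;
  esac
}

interpret_min_free_space 10     # -> percentage: 10%
interpret_min_free_space 10GiB  # -> absolute size: 10GiB
```

In other words, `-volume.minFreeSpace=10` and `-volume.minFreeSpace=10GiB` mean very different thresholds on a large disk.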

Next I tried capping the volume count with the `-volume.max string` parameter alone (each volume seems to be about 1 GB).
I tried setting it to 20:
docker run \
  -d -i -t --restart always \
  --name $CONTAINER_NAME \
  -p $MASTER_PORT:9333 \
  -p $FILER_PORT:8888 \
  -v $SCRIPT_LOCATION/mount/masterVolumeFiler/data/:/data/ \
  -v /etc/localtime:/etc/localtime:ro \
  --log-driver=json-file \
  --log-opt max-size=100m \
  --log-opt max-file=3 \
  $IMAGE_NAME:$IMAGE_TAG \
  server -filer -volume.max=20

While files were continuously uploaded, disk usage expanded in stages rather than being claimed all at once.
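A rough upper bound for this experiment, assuming the documented `-master.volumeSizeLimitMB` default of 30000 applies (an assumption; the growth steps I observed looked closer to 1 GB each, so the effective volume size in this image may be smaller):

```shell
# Worst-case disk usage: number of volumes times the per-volume size limit.
# 30000 MB per volume is the documented default; actual steps may be smaller.
vol_max=20
vol_size_limit_mb=30000
echo "worst case: $(( vol_max * vol_size_limit_mb / 1000 )) GB"
```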



ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ
ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ ᅟᅠ