Deployment notes: LeoFS, a distributed storage cluster with an S3-compatible interface

I had some free time recently and was browsing GitHub when I came across LeoFS, a distributed storage system developed in Japan. Its overall architecture is quite good and it implements much of the Amazon S3 API. In the spirit of learning by doing, I built a cluster myself to test it. It feels solid and is worth following, but I would not recommend it for production yet; let's keep watching it for a while…

I spun up a few virtual servers on Linode for testing. Process notes follow:

LeoFS is an unstructured web object store: a highly available, distributed, eventually consistent storage system.
Basic layout
Manager
IP: 45.79.96.27, 45.79.75.81
Name: manager_0@45.79.96.27, manager_1@45.79.75.81

Gateway
IP: 45.33.47.145
Name: gateway_0@45.33.47.145

Storage
IP: 173.255.242.49, 45.33.36.211, 45.33.48.247
Name: storage_01@173.255.242.49, storage_02@45.33.36.211, storage_03@45.33.48.247

First, set up the control host (here, manager master 0):

yum install screen epel-release -y && screen -S leofs
yum install ansible -y

——————————————–
Configure Ansible

vi /etc/ansible/hosts

Add each server's IP (not grouped here; in production you should group hosts to simplify maintenance tasks such as upgrades).
(If Ansible is installed on the master itself, set a local connection after its own IP.)

45.79.96.27 ansible_connection=local
45.79.75.81
45.33.47.145
173.255.242.49
45.33.36.211
45.33.48.247

Edit the control host's Ansible configuration file:

vi /etc/ansible/ansible.cfg

Disable the SSH host-key check prompt on first connection:

host_key_checking = False

——————Log in to the remote servers with SSH keys————————–
Generate a key pair on the control host and push it to every server:
ssh-keygen -t rsa
Press Enter through all the prompts.
Then copy the control host's public key to each server (see https://xiaohost.com/2472.html for details on ssh-copy-id).
Run the following commands one by one:

ssh-copy-id -p 22 -i /root/.ssh/id_rsa.pub root@45.79.96.27
ssh-copy-id -p 22 -i /root/.ssh/id_rsa.pub root@45.79.75.81
ssh-copy-id -p 22 -i /root/.ssh/id_rsa.pub root@45.33.47.145
ssh-copy-id -p 22 -i /root/.ssh/id_rsa.pub root@173.255.242.49
ssh-copy-id -p 22 -i /root/.ssh/id_rsa.pub root@45.33.36.211
ssh-copy-id -p 22 -i /root/.ssh/id_rsa.pub root@45.33.48.247

Type yes and press Enter, then enter each managed host's root SSH password.

——————————————–
PS: when rotating keys later, you can push the new key to all servers with Ansible:

ansible all -m authorized_key -a "user=root  key='{{ lookup('file','~/.ssh/id_rsa.pub')}}'"

——————————————–
Verify connectivity:

ansible all -m ping

——————————————–
Stop and disable the firewall:

ansible all -m shell -a "systemctl stop firewalld.service"
ansible all -m shell -a "systemctl disable firewalld.service"

——————————————–
Set each server's hostname and /etc/hosts (the default entries can be removed):

Note: /etc/hostname should contain only the hostname itself; the IP-to-name mapping belongs in /etc/hosts.

echo "M0" > /etc/hostname && cat /dev/null > /etc/hosts && echo "45.79.96.27 M0" >> /etc/hosts
echo "M1" > /etc/hostname && cat /dev/null > /etc/hosts && echo "45.79.75.81 M1" >> /etc/hosts
echo "G0" > /etc/hostname && cat /dev/null > /etc/hosts && echo "45.33.47.145 s3.xiaohost.com" >> /etc/hosts && echo "45.33.47.145 *.s3.xiaohost.com" >> /etc/hosts
echo "S1" > /etc/hostname && cat /dev/null > /etc/hosts && echo "173.255.242.49 S1" >> /etc/hosts
echo "S2" > /etc/hostname && cat /dev/null > /etc/hosts && echo "45.33.36.211 S2" >> /etc/hosts
echo "S3" > /etc/hostname && cat /dev/null > /etc/hosts && echo "45.33.48.247 S3" >> /etc/hosts

Note: the gateway's hosts file must include the domain resolved earlier (later, on the master console, you also need to run leofs-adm add-endpoint s3.xiaohost.com so the S3 store can be accessed by domain name):

45.33.47.145 s3.xiaohost.com
45.33.47.145 *.s3.xiaohost.com

**** After changing the hostname and hosts file on every server, save and reboot them all.
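From the control host you can reboot every node in one go (a convenience sketch; the SSH sessions dropping mid-task is expected):

ansible all -m shell -a "reboot"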
——————————————–
Verify the new hostnames:

ansible all -m shell -a "hostname"
ansible all -m shell  -a "cat /etc/hosts"

——————————————–
Use Ansible to run shell commands on all managed hosts.

Install the required packages:

ansible all -m shell -a "yum update -y && yum install wget curl git nc redhat-lsb-core gcc gcc-c++ glibc-devel make automake ncurses-devel openssl-devel autoconf libuuid-devel cmake check check-devel -y && yum install gcc gcc-c++ glibc-devel make ncurses-devel openssl-devel java-1.8.0-openjdk-devel -y && yum install gcc* vim nc -y"

——————————————–
Download and build Erlang/OTP 18.3 (Ansible executes this host by host, which makes the build very slow; you may prefer to do this step manually on each host):

ansible all -m shell -a "cd / && wget -c --no-check-certificate http://erlang.org/download/otp_src_18.3.tar.gz && tar -zxvf otp_src_18.3.tar.gz && cd /otp_src_18.3"
ansible all -m shell -a "cd /otp_src_18.3 && ./otp_build autoconf"
ansible all -m shell -a "cd /otp_src_18.3 && ./configure && make && sudo make install"

——————————————-
Download leofs_ansible on the control host:

cd / &&mkdir codefiles && cd /codefiles && git clone https://github.com/leo-project/leofs_ansible.git && cd leofs_ansible

——————————————–
Edit the leofs_ansible hosts inventory:

cp hosts.sample hosts && cat /dev/null > hosts && vi hosts

—————- Write the following —————–

# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.2
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"

[builder]
45.79.96.27

# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
45.79.96.27

# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
45.79.75.81

[leo_storage]
173.255.242.49 leofs_module_nodename=S1@173.255.242.49
45.33.36.211 leofs_module_nodename=S2@45.33.36.211
45.33.48.247 leofs_module_nodename=S3@45.33.48.247

[leo_gateway]
45.33.47.145 leofs_module_nodename=G0@45.33.47.145

[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage

——————————————–
Install LeoFS and manage the cluster (run on the manager master where Ansible is installed)

The ansible-playbook commands must be run from the /codefiles/leofs_ansible/ directory (they reference the .yml playbooks located there).

## Step 1: Build LeoFS

ansible-playbook -i hosts build_leofs.yml

## Step 2: Install LeoFS

ansible-playbook -i hosts install_leofs.yml

## Config LeoFS: configure the system (before any config change, stop first, reconfigure, then start)

ansible-playbook -i hosts config_leofs.yml

——————System management (manage LeoFS only through these playbooks; they conflict with each node's local systemctl units)—————————
## Start LeoFS: start the system

ansible-playbook -i hosts start_leofs.yml

## Stop LeoFS: stop the system

ansible-playbook -i hosts stop_leofs.yml

## Purge LeoFS: wipe the system

ansible-playbook -i hosts purge_leofs.yml

——————————————–
Set the endpoint (on the manager master):

leofs-adm add-endpoint s3.xiaohost.com

——————————————–
Create a user (an access key can be specified) (on the manager master):
leofs-adm create-user yourname

——————————————-
Create a bucket (on the manager master):

leofs-adm add-bucket bucket1

——————————————–
Set the bucket's access permissions (on the manager master):

leofs-adm update-acl bucket1 16f027afc1585117c1f9 public-read
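At this point the S3 interface can be smoke-tested from any client. A minimal sketch using s3cmd, assuming the access key created above, your own secret key, and the gateway on its default port 8080 (all values here are illustrative):

# ~/.s3cfg (minimal)
access_key = 16f027afc1585117c1f9
secret_key = <your-secret-access-key>
host_base = s3.xiaohost.com:8080
host_bucket = %(bucket)s.s3.xiaohost.com:8080
use_https = False

# then, from the client:
s3cmd ls s3://bucket1
s3cmd put /etc/hosts s3://bucket1/hosts.txt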

——————————————–
Change the gateway port (from the manager master, via Ansible).
The cluster-wide configuration files live under /codefiles/leofs_ansible.
Stop the system first:

ansible-playbook -i hosts stop_leofs.yml
vi /codefiles/leofs_ansible/roles/leo_gateway/defaults/main.yml

Then re-apply the configuration and start the system again:

ansible-playbook -i hosts config_leofs.yml
ansible-playbook -i hosts start_leofs.yml
leofs-adm status

—————————————————————————————————————–
Related admin commands
Full command reference: http://leo-project.net/leofs/docs/admin/index_of_commands/
——————————————–
—————————————————————-
Configuring and querying the LeoFS object store
——————————————————————
Using the s3-api commands
http://leo-project.net/leofs/docs/admin_guide/admin_guide_8.html

List users:

leofs-adm get-users

/usr/local/leofs/1.4.2/leofs-adm get-users
user_id     | role_id | access_key_id          | created_at
------------+---------+------------------------+---------------------------
_test_leofs | 9       | 05236                  | 2017-03-30 15:03:54 +0800

Delete the default test user:

leofs-adm delete-user <user-id>

/usr/local/leofs/1.4.2/leofs-adm delete-user _test_leofs

For example:

leofs-adm delete-user _test_leofs

OK

———————————————-
Create a user:

leofs-adm create-user <user-id> [<password>]

/usr/local/leofs/1.4.2/leofs-adm create-user test test

———————————————-
Add an endpoint:

leofs-adm add-endpoint <endpoint>
leofs-adm add-endpoint s3.xiaohost.com

———————————————-
List endpoints:

leofs-adm get-endpoints
/usr/local/leofs/1.4.2/leofs-adm get-endpoints

———————————————-
Delete an endpoint:

leofs-adm delete-endpoint <endpoint>

———————————————-
Create a bucket:

leofs-adm add-bucket <bucket> <access-key-id>
/usr/local/leofs/1.4.2/leofs-adm add-bucket abc

———————————————-
Query a specific bucket:

leofs-adm get-bucket <access-key-id>

———————————————-
List all buckets:

leofs-adm get-buckets

———————————————-
Change a bucket's ACL (optional):

leofs-adm update-acl <bucket> <access-key-id> public-read

Updates the bucket's ACL (access control list):
- private (default): no one but the owner has access
- public-read: all users have READ access
- public-read-write: all users have READ and WRITE access

———————- Performance tuning ————————

# Gateway worker-pool settings
## Large Object Handler - put worker pool size
large_object.put_worker_pool_size = 16
## Large Object Handler - put worker buffer size
large_object.put_worker_buffer_size = 32
## Memory cache capacity in bytes
cache.cache_ram_capacity = 0

## Disk cache capacity in bytes
cache.cache_disc_capacity = 0
## When the length of the object exceeds this value, store the object on disk
cache.cache_disc_threshold_len = 1048576
## Directory for the disk cache data
cache.cache_disc_dir_data = ./cache/data
## Directory for the disk cache journal
cache.cache_disc_dir_journal = ./cache/journal

Install LeoCenter on the master:

git clone https://github.com/leo-project/leo_center.git
yum install ruby-devel -y
cd leo_center/
gem install bundler
bundle install

Edit the configuration:

vi config.yml
:managers:
- "localhost:10020" # master
- "localhost:10021" # slave

:credential:
:access_key_id: "YOUR_ACCESS_KEY_ID"
:secret_access_key: "YOUR_SECRET_ACCESS_KEY"

Start the service:

thin start -a ${HOST} -p ${PORT} > /dev/null 2>&1
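For example, to serve LeoCenter on all interfaces on port 8080 (illustrative values; substitute your own host and port):

thin start -a 0.0.0.0 -p 8080 > /dev/null 2>&1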

Create an admin account:

leofs-adm create-user leo_admin password
access-key-id: ab96d56258e0e9d3621a
secret-access-key: 5c3d9c188d3e4c4372e414dbd325da86ecaa8068

Grant the account admin privileges (role 9):

leofs-adm update-user-role leo_admin 9

Note: after a user is deleted, their buckets still exist; the data is only truly removed once the buckets themselves are deleted.
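To finish the cleanup, remove the buckets as well; a sketch using the bucket and access key from earlier (delete-bucket takes the bucket name and the owner's access key):

leofs-adm delete-bucket bucket1 16f027afc1585117c1f9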

LeoFS commands are:
General Commands:
version [all]
status [<node>]
whereis <file-path>

Storage Operation:
detach <storage-node>
suspend <storage-node>
resume <storage-node>
rollback <storage-node>
start
rebalance
mq-stats <storage-node>
mq-suspend <storage-node> <mq-id>
mq-resume <storage-node> <mq-id>

Recover Commands:
recover-file <file-path>
recover-disk <storage-node> <disk-id>
recover-consistency <storage-node>
recover-node <storage-node>
recover-ring <storage-node>
recover-cluster <cluster-id>

Compaction Commands:
compact-start <storage-node> (all | <num-of-targets>) [<num-of-compaction-procs>]
compact-suspend <storage-node>
compact-resume <storage-node>

——————————————–

Change the consistency settings
## Changes the consistency level to [w:2, d:2, r:1]

leofs-adm update-consistency-level 2 2 1
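The three arguments are the write, delete, and read quorums, respectively; with three replicas, [w:2, d:2, r:1] still accepts writes while one replica is unavailable. The general form (placeholders mine):

leofs-adm update-consistency-level <write-quorum> <delete-quorum> <read-quorum>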

——————————————–
Add a storage node
After a new LeoStorage node starts, LeoFS provisionally adds it to the members table in the LeoManager database. If you decide to join it to the cluster, you need to run the leofs-adm rebalance command.
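Before starting the new node, point its leo_storage.conf at the managers and give it a unique nodename. A sketch for a hypothetical fourth node (the name S4 and <new-node-ip> are illustrative):

# /usr/local/leofs/1.4.2/leo_storage/etc/leo_storage.conf on the new node
managers = [manager_0@45.79.96.27, manager_1@45.79.75.81]
obj_containers.path = [/data/leofs]
nodename = S4@<new-node-ip>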
——————————————–
First check the current status:

leofs-adm status

Join the new storage node:

leofs-adm rebalance

Check the status after joining:

leofs-adm status

——————————————–
Remove a storage node
Use the following steps to shrink the cluster.
The storage node to be removed must currently be in the running or stopped state.
Then run the leofs-adm detach command.
Finally, run leofs-adm rebalance to rebalance the cluster's data.
——————————————–

leofs-adm status

From the listed nodes, suppose we remove storage_3@127.0.0.1:

leofs-adm detach storage_3@127.0.0.1
leofs-adm rebalance
leofs-adm status

——————————————–
Roll back a storage node
If you detached a node by mistake, you can roll it back as follows.
Check the cluster's current state and identify the wrongly detached node.
Then run leofs-adm rollback on each detached node.
Run leofs-adm status to check that the node has returned to the running state.
——————————————–

leofs-adm status
leofs-adm rollback storage_1@127.0.0.1
leofs-adm status

——————————————–
Take over a storage node
To have a new LeoStorage node take over from a node slated for removal, follow this procedure.
Run leofs-adm detach to remove the target node from the cluster.
Then start a new node to take over from the detached one.
Finally, run leofs-adm rebalance to start rebalancing the data across the cluster.
——————————————–

leofs-adm status

Detach the target node:

leofs-adm detach storage_0@127.0.0.1

Start the new node (run on the new storage server), then rebalance:

systemctl start leofs-storage
leofs-adm status
leofs-adm rebalance
leofs-adm status

——————————————–
Suspend a storage node
When a node needs maintenance, you can temporarily suspend it.
A suspended node receives no requests from the LeoGateway and LeoStorage nodes.
LeoFS eventually propagates the cluster state to every node.
——————————————–

leofs-adm suspend storage_1@127.0.0.1
leofs-adm status

leofs-adm status now shows the node in the suspend state.
Once maintenance on the suspended node is finished, resume it with:

leofs-adm resume storage_1@127.0.0.1

Check the status:

leofs-adm status

——————————————–
Disk usage:

leofs-adm du
leofs-adm du detail

——————————————–
Data compaction, consistency recovery, etc.
http://leo-project.net/leofs/docs/admin/system_operations/data/

Periodically check and repair data
To keep data consistent in an eventually consistent system, regular housekeeping is needed,
so we recommend running recover-consistency on a schedule, keeping the following in mind:
run recover-consistency node by node during the system's off-peak hours;
if a LeoStorage node fails during recovery, simply re-run the command after the failed node comes back.
How often to run it depends largely on your data size and the consistency levels you chose (W and D):
the lower the consistency levels, the more frequently you should run recover-consistency.
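A hedged example of the per-node invocation (assuming recover-consistency takes a target storage node, as recover-node does):

leofs-adm recover-consistency S1@173.255.242.49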

——————————————–
Message queues for asynchronous operations
Some work can be stored in queues for later processing,
for example:
failed PUT/DELETE operations;
failed multi-DC replication (MDCR);
rebalance / recover-(file|node|cluster) invoked via leofs-adm.
——————————————–

File structure
Multiple AVS/KVS pairs can be placed on one node, letting LeoFS handle as many use cases and hardware requirements as possible. See Concept and Architecture / LeoStorage's Architecture - Data Structure.

Container : AVS/KVS pair = 1 : N
Multiple AVS/KVS pairs can be stored under one OS directory, which is called a Container.
'N' can be specified through leo_storage.conf.
How to choose the optimal 'N':
data compaction is executed per AVS/KVS pair, and at least the size of one AVS/KVS pair of free space is needed to run it, so the larger 'N', the less disk space LeoFS uses for compaction.
However, the larger 'N', the more disk seeks LeoFS suffers.
That said, the optimal N is the largest value that does not affect the online throughput you expect.

Node : Container = 1 : N
Each Container can be stored under a different OS directory.
N can be specified through leo_storage.conf.
Setting N > 1 can be useful when there are multiple JBOD disks on the node: one JBOD disk can be mapped to one container.
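A sketch of both ratios in leo_storage.conf, assuming two hypothetical data disks mounted at /disk1 and /disk2 (one container directory per disk, 32 AVS/KVS pairs in each):

obj_containers.path = [/disk1/leofs, /disk2/leofs]
obj_containers.num_of_containers = [32, 32]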

——————————————–
Data compaction
This section explains how data compaction can affect online performance.

Parallelism
Data compaction can run across multiple AVS/KVS pairs in parallel.
The number of compaction processes can be given as a parameter to leofs-adm.
Increasing it is useful when the online load is relatively low and you want compaction to proceed as fast as possible.
Note that too large a number will actually slow compaction down.
Concurrency with online operations:
GET/HEAD are never blocked by data compaction;
PUT/DELETE can be blocked while compaction is processing the tail of an AVS file.
Given these constraints, if the cluster handles a write-intensive workload, we recommend suspending a node before running data compaction on it.

——————————————–
Compaction commands

leofs-adm compact-start <node> (all | <num-of-targets>) [<num-of-compaction-procs>]

Starts compaction (moves its state to running).

num-of-targets: how many AVS/KVS pairs to compact.
num-of-compaction-procs: how many processes run in parallel.
leofs-adm compact-suspend <node>: suspends compaction (moves its state to suspended).
leofs-adm compact-resume <node>: resumes compaction (moves its state from suspended back to running).
leofs-adm compact-status <node>: shows the current compaction status.
leofs-adm diagnose-start <node>: starts a diagnosis (performs no actual compaction; scans all AVS/KVS pairs and reports which objects/metadata exist in the files).

Disk usage:

leofs-adm du
leofs-adm du detail

——————————————–
Examples
compact-start

## Note:
## All AVS/KVS pairs on storage_0@127.0.0.1
## will be compacted with 3 concurrent processes (default concurrency is 3)
## Example:
Start compaction:
$ leofs-adm compact-start storage_0@127.0.0.1 all
OK

## Note:
## Five AVS/KVS pairs on storage_0@127.0.0.1
## will be compacted with 2 concurrent processes
$ leofs-adm compact-start storage_0@127.0.0.1 5 2
OK
compact-suspend

## Example:
Suspend the compaction process:
$ leofs-adm compact-suspend storage_0@127.0.0.1
OK
compact-resume

## Example:
Resume the suspended compaction:
$ leofs-adm compact-resume storage_0@127.0.0.1
OK
compact-status

## Example:
Check the compaction status:
$ leofs-adm compact-status storage_0@127.0.0.1
current status: running
last compaction start: 2013-03-04 12:39:47 +0900
total targets: 64
# of pending targets: 5
# of ongoing targets: 3
# of out of targets : 56
diagnose-start
See also diagnosis-log format to understand the output log format.

## Example:
Start a diagnosis:
$ leofs-adm diagnose-start storage_0@127.0.0.1
OK

du: disk usage stats
leofs-adm du storage_0@127.0.0.1
## Example:
$ leofs-adm du storage_0@127.0.0.1
active number of objects: 19968
total number of objects: 39936
active size of objects: 198256974.0
total size of objects: 254725020.0
ratio of active size: 77.83%
last compaction start: 2013-03-04 12:39:47 +0900
last compaction end: 2013-03-04 12:39:55 +0900

du detail: detailed per-file output of the du command
leofs-adm du detail storage_0@127.0.0.1
## Example:
$ leofs-adm du detail storage_0@127.0.0.1
[du(storage stats)]
file path: /home/leofs/dev/leofs/package/leofs/storage/avs/object/0.avs
active number of objects: 320
total number of objects: 640
active size of objects: 3206378.0
total size of objects: 4082036.0
ratio of active size: 78.55%
last compaction start: 2013-03-04 12:39:47 +0900
last compaction end: 2013-03-04 12:39:55 +0900
.
.
.
file path: /home/leofs/dev/leofs/package/leofs/storage/avs/object/63.avs
active number of objects: 293
total number of objects: 586
active size of objects: 2968909.0
total size of objects: 3737690.0
ratio of active size: 79.43%
last compaction start: ____-__-__ __:__:__
last compaction end: ____-__-__ __:__:__

Diagnosis
leofs-adm diagnose-start <node>

This section explains the file format generated by leofs-adm diagnose-start in detail.
## Example:
------+------------------------------------------+------------------------------------------------------------+-----------+-----------+------------------+--------------------------+----
Offset | RING's address-id | Filename | Child num | File Size | Unixtime | Localtime | del?
------+------------------------------------------+------------------------------------------------------------+-----------+-----------+------------------+--------------------------+----
194 296754181484029444656944009564610621293 photo/leo_redundant_manager/Makefile 0 2034 1413348050768344 2014-10-15 13:40:50 +0900 0
2400 185993533055981727582172380494809056426 photo/leo_redundant_manager/ebin/leo_redundant_manager.beam 0 24396 1413348050869454 2014-10-15 13:40:50 +0900 0
38446 53208912738248114804281793572563205919 photo/leo_rpc/.git/refs/remotes/origin/HEAD 0 33 1413348057441546 2014-10-15 13:40:57 +0900 0
38658 57520977797167422772945547576980778561 photo/leo_rpc/ebin/leo_rpc_client_utils.beam 0 2576 1413348057512261 2014-10-15 13:40:57 +0900 0
69506 187294034498591995039607573685274229706 photo/leo_backend_db/src/leo_backend_db_server.erl 0 13911 1413348068031188 2014-10-15 13:41:08 +0900 0
83603 316467020376888598364250682951088839795 photo/leo_backend_db/test/leo_backend_db_api_prop.erl 0 3507 1413348068052219 2014-10-15 13:41:08 +0900 1
The file is formatted as tab-separated values (TSV), apart from the header (the first three lines). Each column is described below:

Column | Description
1 | Byte-wise offset where the object is located in the AVS file.
2 | Address ID on the RING (distributed hash routing table).
3 | File name.
4 | Number of children in the file.
5 | File size in bytes.
6 | Timestamp in Unix time.
7 | Timestamp in local time.
8 | Flag (0/1) indicating whether the object has been removed.

——————– Use the XFS filesystem on storage nodes ————————
LeoFS recommends XFS, since XFS I/O handles large files well.
Add a disk and format it as XFS:

fdisk /dev/vdc    # interactive: n (new partition), p (primary), accept the defaults, then w (write)
mkfs.xfs /dev/vdc1
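Then mount the new filesystem where leo_storage will keep its containers and persist it across reboots (a sketch; the mount point matches obj_containers.path configured below):

mkdir -p /data/leofs
mount /dev/vdc1 /data/leofs
echo "/dev/vdc1 /data/leofs xfs defaults,noatime 0 0" >> /etc/fstab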

——————————————–

vi /usr/local/leofs/1.4.2/leo_storage/etc/leo_storage.conf

Configure as follows:

managers = [manager_0@45.79.96.27, manager_1@45.79.75.81]
obj_containers.path = [/data/leofs]
obj_containers.num_of_containers = [8]

——————————————–
# Read/write parameter tuning
Disk watchdog settings:

##  Watchdog.DISK
##
## Is disk-watchdog enabled - default: false
watchdog.disk.is_enabled = false

## disk - raised error times
watchdog.disk.raised_error_times = 5

## disk - watch interval - default: 1sec
watchdog.disk.interval = 10

## Threshold use(%) of a target disk's capacity - default: 85%
watchdog.disk.threshold_disk_use = 85

## Threshold disk utilization(%) - default: 90%
watchdog.disk.threshold_disk_util = 90

## Threshold disk read kb/sec - default: 98304(KB) = 96MB
#watchdog.disk.threshold_disk_rkb = 98304
## 131072(KB) = 128MB
watchdog.disk.threshold_disk_rkb = 131072

## Threshold disk write kb/sec - default: 98304(KB) = 96MB
#watchdog.disk.threshold_disk_wkb = 98304
## 131072(KB) = 128MB
watchdog.disk.threshold_disk_wkb = 131072

nodename = storage_01@173.255.242.49

——————————————————

Gateway gw_01 (45.33.47.145) configuration

Configure the gateway protocol and port,

and the gateway cache size:

vi /usr/local/leofs/1.4.2/leo_gateway/etc/leo_gateway.conf
managers = [manager_0@45.79.96.27, manager_1@45.79.75.81]
protocol = s3
http.port = 8080
cache.cache_ram_capacity = 268435456
cache.cache_disc_capacity = 0
cache.cache_expire = 300
cache.cache_max_content_len = 1048576
nodename = gateway_01@45.33.47.145
# Gateway worker-pool settings
## Large Object Handler - put worker pool size
large_object.put_worker_pool_size = 64
## Large Object Handler - put worker buffer size
large_object.put_worker_buffer_size = 32
## Memory cache capacity in bytes
cache.cache_ram_capacity = 0
## Disk cache capacity in bytes
cache.cache_disc_capacity = 0
## When the length of the object exceeds this value, store the object on disk
cache.cache_disc_threshold_len = 1048576
## Directory for the disk cache data
cache.cache_disc_dir_data = ./cache/data
## Directory for the disk cache journal
cache.cache_disc_dir_journal = ./cache/journal
Original post: https://xiaohost.com/2756.html. Please credit the source when reposting.