Lab Environment
Host/IP | Role | OS |
server01/192.168.1.201 | config server | Ubuntu 22.04 |
server02/192.168.1.202 | mongos | / |
server03/192.168.1.203 | shard1-primary | / |
server04/192.168.1.204 | shard1-secondary | / |
server05/192.168.1.205 | shard1-secondary | / |
server06/192.168.1.206 | shard2-primary | / |
server07/192.168.1.207 | shard2-secondary | / |
server08/192.168.1.208 | shard2-secondary | / |
All Nodes
Install MongoDB
sudo apt-get install gnupg curl
curl -fsSL https://www.mongodb.org/static/pgp/server-8.0.asc | \
sudo gpg -o /usr/share/keyrings/mongodb-server-8.0.gpg \
--dearmor
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-8.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/8.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-8.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
Edit the Configuration Files
server01
vim /etc/mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0
sharding:
  clusterRole: configsvr
replication:
  replSetName: config
- sharding.clusterRole is set to configsvr
- replication.replSetName is the name of the config server replica set
systemctl start mongod
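The snippet above shows only the sections that change. For reference, a fuller /etc/mongod.conf for the config server, keeping the Ubuntu package's default storage and log paths, might look like this sketch:

```yaml
# Sketch of a complete config-server mongod.conf; the storage and
# systemLog paths are the Ubuntu package defaults, not custom values.
storage:
  dbPath: /var/lib/mongodb
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 0.0.0.0
sharding:
  clusterRole: configsvr
replication:
  replSetName: config
```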
server02
vim /etc/mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0
sharding:
  configDB: config/192.168.1.201:27017
- sharding.configDB is the config server replica set's name, a slash, and its member list
Manage mongos with systemd
Note: because this is a test environment, the config server replica set has only one member, so when mongos runs under systemd it logs a warning that this setup is not suitable for production; expanding the config replica set to 3 members makes the warning go away.
chown mongodb:mongodb /var/log/mongodb
vim /etc/systemd/system/mongos.service
[Unit]
Description=MongoDB Sharding Router (mongos)
After=network.target
[Service]
User=mongodb
Group=mongodb
ExecStart=/usr/bin/mongos -f /etc/mongod.conf
Restart=always
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start mongos
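Here mongos reuses /etc/mongod.conf, which works because this host runs no mongod. Since that file sets no systemLog section, mongos logs to stdout (captured by journald), and the chown of /var/log/mongodb only matters once file logging is configured. A minimal sketch of a dedicated config file, assuming a hand-created /etc/mongos.conf (the package does not ship one), could be:

```yaml
# /etc/mongos.conf -- hypothetical file name, created by hand; point the
# unit's ExecStart at it ("/usr/bin/mongos -f /etc/mongos.conf") to use it.
systemLog:
  destination: file
  path: /var/log/mongodb/mongos.log   # this is why /var/log/mongodb was chown'd
  logAppend: true
net:
  port: 27017
  bindIp: 0.0.0.0
sharding:
  configDB: config/192.168.1.201:27017
```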
server03 && server04 && server05
vim /etc/mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0
sharding:
  clusterRole: shardsvr
replication:
  replSetName: shard1
- sharding.clusterRole is set to shardsvr
- replication.replSetName is the name of this shard's replica set
systemctl start mongod
server06 && server07 && server08
vim /etc/mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0
sharding:
  clusterRole: shardsvr
replication:
  replSetName: shard2
systemctl start mongod
Initialize the Config Server Replica Set
server01
mongosh
rs.initiate(
  {
    _id: "config",
    configsvr: true,
    members: [
      { _id: 0, host: "192.168.1.201:27017" }
    ]
  }
)
Check the status
rs.status()
Initialize the Shard Replica Sets
server03
mongosh
rs.initiate(
  {
    _id: "shard1",
    members: [
      { _id: 0, host: "192.168.1.203:27017" },
      { _id: 1, host: "192.168.1.204:27017" },
      { _id: 2, host: "192.168.1.205:27017" }
    ]
  }
)
Check the status
rs.status()
server06
mongosh
rs.initiate(
  {
    _id: "shard2",
    members: [
      { _id: 0, host: "192.168.1.206:27017" },
      { _id: 1, host: "192.168.1.207:27017" },
      { _id: 2, host: "192.168.1.208:27017" }
    ]
  }
)
Check the status
rs.status()
Add the Shards to the Cluster
server02
mongosh
sh.addShard("shard1/192.168.1.203:27017,192.168.1.204:27017,192.168.1.205:27017")
sh.addShard("shard2/192.168.1.206:27017,192.168.1.207:27017,192.168.1.208:27017")
Enable Sharding
server02
mongosh
// Create the database (it is created implicitly on first use)
use testdb
// Enable sharding for the database
sh.enableSharding("testdb")
// Set the sharding rule for the collection, using a hashed index on _id as the shard key
sh.shardCollection("testdb.order", { _id: "hashed" })
Verify
// Insert 1000 documents
for (let i = 1; i <= 1000; i++) {
  db.order.insertOne({ price: 1 })
}
Check the insert count
db.order.find().count()
server03 && server06
mongosh
use testdb
db.order.find().count()
The result shows that the data is spread across the different shard replica sets (the two per-shard counts sum to the 1000 documents inserted).
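Besides counting on each shard member, the split can also be read directly from mongos with db.collection.getShardDistribution(). In mongosh on server02:

```javascript
// Run inside mongosh connected to the mongos (server02); this is a
// shell session, not a standalone script.
use testdb
db.order.getShardDistribution()  // per-shard document counts, data size, percentages
```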
Going Further
To add a new member to a replica set, connect to that set's primary and run rs.add("<member IP>:<port>").
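As a sketch, assuming a new host 192.168.1.209 (hypothetical, not part of the table above) is to join shard1:

```javascript
// In mongosh on shard1's primary; 192.168.1.209:27017 is a hypothetical new member.
rs.add("192.168.1.209:27017")
rs.status()  // the new member appears, syncing before it becomes SECONDARY
```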