root@master:~# gluster peer status
Number of Peers: 2

Hostname: gluster2
Uuid: 03d75df2-382d-4ce8-b09d-656b8c81131e
State: Peer in Cluster (Connected)

Hostname: gluster3
Uuid: 34a785cd-74d3-420c-b064-a7066360fb5f
State: Peer in Cluster (Connected)
Verify pool list
### verify pool list
root@master:~# gluster pool list
UUID                                  Hostname   State
03d75df2-382d-4ce8-b09d-656b8c81131e  gluster2   Connected
34a785cd-74d3-420c-b064-a7066360fb5f  gluster3   Connected
502d946f-5921-4d17-b6d4-94aa5d3b7904  localhost  Connected
The last entry shows localhost because I am running the command on that node (gluster1).
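If you want a quick scripted sanity check instead of eyeballing the output (just a rough sketch), you can count the entries reported as Connected; with three nodes it should print 3.
### optional: count pool members that are Connected (should be 3 here)
gluster pool list | awk 'NR > 1 && $NF == "Connected"' | wc -l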
Adding a DISK to all your machines (all nodes)
If you have not done it so far, you can do it now. At this point I added a new disk and rebooted my VMs, so after booting up again I had /dev/sdb on all nodes.
### sample output
root@worker-1:~# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   16G  0 disk
|-sda1                      8:1    0    1M  0 part
|-sda2                      8:2    0    1G  0 part /boot
`-sda3                      8:3    0   15G  0 part
  `-ubuntu--vg-ubuntu--lv 253:0    0   15G  0 lvm  /
sdb                         8:16   0   16G  0 disk   <===== I added this one
sr0                        11:0    1  969M  0 rom
Partitioning the DISK we added (all nodes)
# A one-liner
echo -e "o\nn\np\n1\n\n\nw" | fdisk /dev/sdb
# it tells fdisk to:
# o => create a new (empty) DOS partition table
# n => add a new partition
# p => make it a primary partition
# 1 => give it partition number 1
#      (the two empty lines accept the default first and last sectors)
# w => write the changes and quit
For more about these commands, you can watch this video.
# sample running
root@worker-2:~# echo -e "o\nn\np\n1\n\n\nw" | fdisk /dev/sdb
Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x5f468501.
Command (m for help): Created a new DOS disklabel with disk identifier 0x923e0d4c.
Command (m for help): Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): Partition number (1-4, default 1): First sector (2048-33554431, default 2048): Last sector, +sectors or +size{K,M,G,T,P} (2048-33554431, default 33554431):
Created a new partition 1 of type 'Linux' and of size 16 GiB.
Command (m for help): The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
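If you would rather not pipe keystrokes into fdisk, the same single primary partition can be created non-interactively with parted; this is just a sketch of the equivalent, not something the rest of the guide depends on.
### alternative: one primary partition over the whole disk with parted
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary xfs 1MiB 100%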
Verify the result of partitioning (all nodes)
### result
root@worker-2:~# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   16G  0 disk
|-sda1                      8:1    0    1M  0 part
|-sda2                      8:2    0    1G  0 part /boot
`-sda3                      8:3    0   15G  0 part
  `-ubuntu--vg-ubuntu--lv 253:0    0   15G  0 lvm  /
sdb                         8:16   0   16G  0 disk
`-sdb1                      8:17   0   16G  0 part   <===== a new partition
sr0                        11:0    1  969M  0 rom
Format the partition (all nodes)
### Format the partition
mkfs.xfs -i size=512 /dev/sdb1
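The -i size=512 option sets a 512-byte inode size, the value the Gluster documentation recommends so the extended attributes Gluster relies on fit inside the inode. A quick way to confirm the partition was formatted as expected:
### optional: confirm the filesystem type on the new partition
blkid /dev/sdb1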
Add an entry to /etc/fstab (all nodes)
### Add an entry to /etc/fstab
echo "/dev/sdb1 /export/sdb1 xfs defaults 0 0" >> /etc/fstab
Mount the partition as a Gluster "brick" (all nodes)
### Mount the partition as a Gluster "brick"
mkdir -p /export/sdb1 && mount -a && mkdir -p /export/sdb1/brick
Verify if it has been mounted (all nodes)
### verify if it has been mounted
root@master:~# df -h | grep sdb
/dev/sdb1 16G 49M 16G 1% /export/sdb1
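If one node can SSH to the others (that is an assumption, nothing in this guide sets it up), you can check the brick mount on all three machines in one go:
### check the brick mount on every node (assumes passwordless SSH)
for h in gluster1 gluster2 gluster3; do echo "== $h =="; ssh "$h" 'df -h | grep sdb'; done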
Set up a Gluster volume (just on one of nodes)
### syntax
gluster volume create gv0 replica 2 <NAME>:/export/sdb1/brick <NAME>:/export/sdb1/brick
### for me (run on just one of the nodes; I have 3 nodes)
gluster volume create gv0 replica 3 gluster1:/export/sdb1/brick gluster2:/export/sdb1/brick gluster3:/export/sdb1/brick
### output of above command
> gluster volume create gv0 replica 3 gluster1:/export/sdb1/brick gluster2:/export/sdb1/brick gluster3:/export/sdb1/brick
volume create: gv0: success: please start the volume to access data
### if you run it on any of the other nodes, you get:
volume create: gv0: failed: Volume gv0 already exists
Verify our volume (all nodes)
### verify our volume
> gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: f529029b-420c-4750-877c-273a86af415b
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/export/sdb1/brick
Brick2: gluster2:/export/sdb1/brick
Brick3: gluster3:/export/sdb1/brick
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
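Note that the Status is still Created; as the create command's output said, the volume has to be started before clients can access data. Run this on just one of the nodes:
### start the volume (just on one of nodes)
gluster volume start gv0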
If you have not set up the client already, install it now; otherwise there is no need.
### install the client
# yum install -y glusterfs-client            # CentOS / RHEL
# sudo apt-get install -y glusterfs-client   # Ubuntu
### I already installed the client earlier
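A quick way to confirm the client is actually installed:
### verify the client
glusterfs --version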
Make a directory for gv0 (all nodes)
### make a directory for gv0 (all nodes)
mkdir -p /mnt/glusterfs/gv0
Mount the volume
### mount the volume
### each node mounts the volume via its own hostname
### node 1 gluster1
mount -t glusterfs gluster1:/gv0 /mnt/glusterfs/gv0
### node 2 gluster2
mount -t glusterfs gluster2:/gv0 /mnt/glusterfs/gv0
### node 3 gluster3
mount -t glusterfs gluster3:/gv0 /mnt/glusterfs/gv0
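These mounts will not survive a reboot. If you want them to, the usual approach is an fstab entry like the sketch below (shown for gluster1; use each node's own hostname, and _netdev delays the mount until the network is up):
### optional: make the mount persistent (example for node 1)
echo "gluster1:/gv0 /mnt/glusterfs/gv0 glusterfs defaults,_netdev 0 0" >> /etc/fstab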
Verify the mounted volume
### verify the mount
df -h | grep gv0
gluster1:/gv0 16G 213M 16G 2% /mnt/glusterfs/gv0
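You can also ask Gluster itself whether all the bricks are online:
### check brick and process status for the volume
gluster volume status gv0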
Test it (final)
### final test
cd /mnt/glusterfs/gv0
touch one.txt
Now you should see the one.txt file on the other nodes, and if you modify it, the change shows up on the other nodes as well.
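To convince yourself the file really is replicated, you can also look inside the brick directory on each node (only look there; keep doing your actual work through the /mnt/glusterfs/gv0 mount):
### the file should also show up in every node's brick directory
ls -l /export/sdb1/brick/one.txt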