Installing GlusterFS on Ubuntu 18.04 server (three nodes)


This setup has been tested on Ubuntu 18.04 only (not on other releases), on VMware VMs, with three nodes in my local lab:

gluster1

gluster2

gluster3

All the commands have been run as the root user.

Find the latest version

You can go to the Gluster team's Launchpad page (https://launchpad.net/~gluster) and see all the available versions. For this installation I picked version 9, via the link below:

https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-9

Add the repository and update the list (all nodes)

add-apt-repository ppa:gluster/glusterfs-9
apt-get update
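
Optionally, you can check which package version apt will now install; apt-cache is part of the standard apt tooling, so this is just a sanity check:

### optional: check the candidate version after adding the PPA
apt-cache policy glusterfs-server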

DNS setups (all nodes)

I just used /etc/hosts, but any other means of DNS resolution should work as well. Append the following on each node:

### setup DNS
### gluster-fs ###
192.168.1.192 gluster1
192.168.1.11  gluster2
192.168.1.249 gluster3
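
To be safe, you can verify that every node resolves the others. A minimal check, assuming the /etc/hosts entries above:

### verify name resolution (run on each node)
ping -c 1 gluster1
ping -c 1 gluster2
ping -c 1 gluster3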

Install the glusterfs-server (all nodes)

# first, software-properties-common (it provides add-apt-repository,
# so on a minimal install you may need it before the previous step)
apt-get install software-properties-common

# second, the server itself
apt-get install glusterfs-server

Start and enable the service (all nodes)

### start and enable
systemctl start glusterd.service
systemctl enable glusterd.service
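
You can confirm the daemon came up with the usual systemd commands:

### verify the service is running
systemctl is-active glusterd.service
systemctl status glusterd.service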

Add peers (on just one of the nodes; for me, gluster1)

### add peers
### syntax: gluster peer probe { <HOSTNAME> | <IP-address> }

gluster peer probe gluster2
gluster peer probe gluster3

### sample output
root@master:~# gluster peer probe gluster2
peer probe: success
root@master:~# gluster peer probe gluster3
peer probe: success
root@master:~#
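
If you probe a wrong host by mistake, you can remove it again; gluster peer detach is the counterpart of probe:

### remove a peer that was probed by mistake
gluster peer detach <HOSTNAME>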

Verify adding peers

root@master:~# gluster peer status
Number of Peers: 2

Hostname: gluster2
Uuid: 03d75df2-382d-4ce8-b09d-656b8c81131e
State: Peer in Cluster (Connected)

Hostname: gluster3
Uuid: 34a785cd-74d3-420c-b064-a7066360fb5f
State: Peer in Cluster (Connected)

Verify pool list

### verify pool list
root@master:~#  gluster pool list
UUID					Hostname 	State
03d75df2-382d-4ce8-b09d-656b8c81131e	gluster2 	Connected
34a785cd-74d3-420c-b064-a7066360fb5f	gluster3 	Connected
502d946f-5921-4d17-b6d4-94aa5d3b7904	localhost	Connected

The last entry shows localhost because I am running the command on that node (= gluster1).

Adding a DISK to all your machines (all nodes)

If you have not done so already, you can do it now. At this point I added a new disk to each VM and rebooted them, so after booting up again I had /dev/sdb on all nodes.

### sample output
root@worker-1:~# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   16G  0 disk
|-sda1                      8:1    0    1M  0 part
|-sda2                      8:2    0    1G  0 part /boot
`-sda3                      8:3    0   15G  0 part
  `-ubuntu--vg-ubuntu--lv 253:0    0   15G  0 lvm  /
sdb                         8:16   0   16G  0 disk  <===== I added this one
sr0                        11:0    1  969M  0 rom
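
As an alternative to rebooting, you can usually make the kernel rescan the SCSI bus so the new disk appears immediately. Treat this as a sketch: the host number (host0 here) may differ on your VM.

### rescan the SCSI bus instead of rebooting (host number may differ)
echo "- - -" > /sys/class/scsi_host/host0/scan
lsblk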

Partitioning the DISK we added (all nodes)

# A one-liner
echo -e "o\nn\np\n1\n\n\nw" |  fdisk /dev/sdb

# it tells fdisk:
# o    => create a new empty DOS partition table
# n    => new partition
# p    => primary partition
# 1    => partition number 1
# \n\n => accept the default first and last sectors
# w    => write the table and quit

For more about these commands, you can watch the video linked in the resources at the end of this page.

# sample running
root@worker-2:~# echo -e "o\nn\np\n1\n\n\nw" |  fdisk /dev/sdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x5f468501.

Command (m for help): Created a new DOS disklabel with disk identifier 0x923e0d4c.

Command (m for help): Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): Partition number (1-4, default 1): First sector (2048-33554431, default 2048): Last sector, +sectors or +size{K,M,G,T,P} (2048-33554431, default 33554431): 
Created a new partition 1 of type 'Linux' and of size 16 GiB.

Command (m for help): The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
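
If you prefer a truly non-interactive tool over piping keystrokes into fdisk, parted can do the same partitioning in one scripted call. A sketch that should produce the same /dev/sdb1:

### alternative: the same partitioning with parted
parted -s /dev/sdb mklabel msdos mkpart primary 0% 100%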

Verify result of partitioning (all nodes)

### result
root@worker-2:~# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   16G  0 disk
|-sda1                      8:1    0    1M  0 part
|-sda2                      8:2    0    1G  0 part /boot
`-sda3                      8:3    0   15G  0 part
  `-ubuntu--vg-ubuntu--lv 253:0    0   15G  0 lvm  /
sdb                         8:16   0   16G  0 disk
`-sdb1                      8:17   0   16G  0 part  <===== a new partition
sr0                        11:0    1  969M  0 rom

Format the partition (all nodes)

### Format the partition
mkfs.xfs -i size=512 /dev/sdb1
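
The -i size=512 follows the Gluster quick-start guide: the larger inodes leave room for Gluster's extended attributes. You can confirm the filesystem was created with blkid:

### verify the new filesystem
blkid /dev/sdb1
### the output should contain TYPE="xfs"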

Add an entry to /etc/fstab (all nodes)

### Add an entry to /etc/fstab
echo "/dev/sdb1 /export/sdb1 xfs defaults 0 0"  >> /etc/fstab

Mount the partition as a Gluster "brick" (all nodes)

### Mount the partition as a Gluster "brick"
mkdir -p /export/sdb1 && mount -a && mkdir -p /export/sdb1/brick

Verify if it has been mounted (all nodes)

### verify if it has been mounted
root@master:~#  df -h | grep sdb
/dev/sdb1                           16G   49M   16G   1% /export/sdb1

Set up a Gluster volume (just on one of nodes)

### syntax
gluster volume create <VOLNAME> replica <COUNT> <NODE1>:/export/sdb1/brick <NODE2>:/export/sdb1/brick ...

### for me (just on one of nodes) + (I have 3 nodes)
gluster volume create gv0 replica 3 gluster1:/export/sdb1/brick gluster2:/export/sdb1/brick gluster3:/export/sdb1/brick

### output of above command 
> gluster volume create gv0 replica 3 gluster1:/export/sdb1/brick gluster2:/export/sdb1/brick gluster3:/export/sdb1/brick
volume create: gv0: success: please start the volume to access data

### if you run it again on another node, it fails:
volume create: gv0: failed: Volume gv0 already exists

Verify our volume (all nodes)

### verify our volume 
> gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: f529029b-420c-4750-877c-273a86af415b
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/export/sdb1/brick
Brick2: gluster2:/export/sdb1/brick
Brick3: gluster3:/export/sdb1/brick
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Start the volume (on the node where you set it up)

###  start the volume.
gluster volume start gv0

### output
> gluster volume start gv0
volume start: gv0: success
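
Once started, you can also check that a brick process is running on every node:

### check the brick processes
gluster volume status gv0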

Install glusterfs-client (optional)

If you have not already installed the client, install it now; otherwise there is no need.

### install the client
# yum install -y glusterfs-client           # CentOS / RHEL
# sudo apt-get install -y glusterfs-client  # Ubuntu
### I had already installed the client at the start

Make a directory for gv0 (all nodes)

### make a directory for gv0 (all nodes)
mkdir -p /mnt/glusterfs/gv0

Mount the volume (all nodes)

### mount the volume
### each node mounts the volume from its own hostname
### node 1 gluster1
mount -t glusterfs gluster1:/gv0 /mnt/glusterfs/gv0

### node 2 gluster2
mount -t glusterfs gluster2:/gv0 /mnt/glusterfs/gv0

### node 3 gluster3
mount -t glusterfs gluster3:/gv0 /mnt/glusterfs/gv0
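
To make the mount survive reboots, you can add it to /etc/fstab as well; _netdev tells the system to wait for the network before mounting. Shown for gluster1, adjust the hostname on each node:

### make the mount persistent (example for gluster1)
echo "gluster1:/gv0 /mnt/glusterfs/gv0 glusterfs defaults,_netdev 0 0" >> /etc/fstab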

Verify the mounted volume

### verify the mount 
df -h | grep gv0
gluster1:/gv0                       16G  213M   16G   2% /mnt/glusterfs/gv0

Test it (final)

### final test
cd /mnt/glusterfs/gv0
touch one.txt

Now you should see the one.txt file on the other nodes, and if you modify it, the modification shows up on the other nodes as well.
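
A quick way to check all three nodes at once, assuming you have root SSH access between the nodes (a convenience of my lab, not something Gluster requires):

### check the file on every node (assumes root SSH access)
for n in gluster1 gluster2 gluster3; do
    ssh "$n" 'hostname; ls -l /mnt/glusterfs/gv0'
done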

Node naming in my lab: node 1 (= master) (= gluster1), node 2 (= worker-1) (= gluster2), node 3 (= worker-2) (= gluster3).

Also, please note that the following entries in my /etc/hosts are not for GlusterFS; they were for a k8s cluster I had already set up:

192.168.1.192 master
192.168.1.11  worker-1
192.168.1.249 worker-2

resources

For other operating systems, you can refer to these links:

https://docs.gluster.org/en/v3/Install-Guide/Install/
https://www.youtube.com/watch?v=CUCJJmYyiII
https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
https://launchpad.net/~gluster