Testing GlusterFS on CentOS
Following along with the CentOS howto, using CentOS 7.2.
Just a couple of things have changed since it was written:
- As per the CentOS storage special interest group, you can now get the glusterfs packages without using wget to retrieve additional repos:
yum install centos-release-gluster
yum install glusterfs-server samba
- Evidently XFS filesystems should be formatted with an inode size of 512 bytes, not 256.
- If you want to delete a volume immediately after creating it, you’ll need some incantations to re-use the bricks without re-formatting them.
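For reference, formatting a brick with the larger inode size looks something like the following; the device name and mount point are placeholders for your own layout:

```shell
# Format a brick with 512-byte inodes (GlusterFS stores its extended
# attributes in the inode, and 256 bytes can be too small).
# /dev/sdb1 and /bricks/brick1 are placeholders.
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /bricks/brick1
mount /dev/sdb1 /bricks/brick1
```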
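The brick re-use incantations are roughly these: GlusterFS stamps each brick root with extended attributes and a hidden .glusterfs directory, and refuses to reuse the brick until they are cleared. Brick paths below are placeholders:

```shell
# Remove the volume-id and gfid xattrs from the brick root, then the
# .glusterfs metadata directory. Run on every brick being reused.
setfattr -x trusted.glusterfs.volume-id /bricks/brick1
setfattr -x trusted.gfid /bricks/brick1
rm -rf /bricks/brick1/.glusterfs
```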
Along the way I asked myself the following questions
- Should each physical disk be surfaced as a separate volume group, with a single logical volume allocated at 100% of the VG? Allocating 100% prevents using LVM snapshots, but why would you ever want those (or are they a building block for GlusterFS snapshots)? Is there something that could be done with HSM and LVM to balance blocks across the nodes?
- What is the best filesystem for the bricks? XFS is the default; as above, it should be formatted with 512-byte inodes.
- What should the network segmentation and firewall zone configuration be?
- What brick layout, replica count and stripe count make sense? The number of bricks must be a multiple of the replica count times the stripe count.
- How best to utilise SSDs?
- Through glusterfs file-level tiering in the March Gluster / Red Hat Storage tech preview? This version also allows access to erasure coding, lowering the cost of storage replication.
- Immediately available in the CentOS 7.2 build. BUT… it works at the file level, so it is not so helpful with VM images. However, if sharding is turned on, perhaps it works at the shard level?
- Through block level management with dm-cache or similar, preferably integrated to LVM?
- Through hardware RAID controller?
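For the volume group question above, the one-disk-one-VG layout with a full-size LV might be sketched like this; the device and the VG/LV names are placeholders:

```shell
# One volume group per physical disk, one logical volume using all of it.
# /dev/sdb, vg_brick1 and lv_brick1 are placeholder names.
pvcreate /dev/sdb
vgcreate vg_brick1 /dev/sdb
lvcreate -l 100%VG -n lv_brick1 vg_brick1
mkfs.xfs -i size=512 /dev/vg_brick1/lv_brick1
```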
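On the firewall question, a port-based firewalld rule set is one conservative starting point. The zone name is a placeholder, and the brick port range depends on how many bricks the node hosts:

```shell
# Gluster management ports plus a range for brick ports
# (bricks take ports from 49152 upward, one per brick).
firewall-cmd --permanent --zone=internal --add-port=24007-24008/tcp
firewall-cmd --permanent --zone=internal --add-port=49152-49251/tcp
firewall-cmd --reload
```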
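As a concrete example of the bricks = replicas × stripes constraint above, a 2-replica, 2-stripe volume needs four bricks. Hostnames, volume name and brick paths are placeholders:

```shell
# 4 bricks = replica 2 x stripe 2.
gluster volume create testvol stripe 2 replica 2 \
  server1:/bricks/brick1 server2:/bricks/brick1 \
  server3:/bricks/brick1 server4:/bricks/brick1
gluster volume start testvol
```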
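For the dm-cache option, LVM has integrated cache support (lvconvert --type cache) that can front a brick LV with an SSD. A rough sketch, with all devices, sizes and LV names as placeholder assumptions:

```shell
# Add an SSD to the brick's VG and attach it as an LVM dm-cache.
# /dev/sdc, vg_brick1, lv_brick1 and the sizes are placeholders.
vgextend vg_brick1 /dev/sdc
lvcreate -L 50G -n lv_cache vg_brick1 /dev/sdc
lvcreate -L 1G -n lv_cache_meta vg_brick1 /dev/sdc
lvconvert --type cache-pool --poolmetadata vg_brick1/lv_cache_meta \
  vg_brick1/lv_cache
lvconvert --type cache --cachepool vg_brick1/lv_cache vg_brick1/lv_brick1
```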