On 27 August, we released important additions to the EuroLinux 8 operating system, namely Resilient Storage (which implements the Global File System 2) and High Availability (which keeps applications running regardless of the state of individual nodes in the cluster). Both modules greatly simplify operations work and extend standard system capabilities.
According to our policy, add-ons are always available within the basic subscription, and we do not charge additional fees for them. It is worth mentioning that the availability of these modules is one of the commitments we included in the Roadmap presented alongside the release of EuroLinux 8.4; its remaining elements will be released successively in the near future. As an aside, it is also worth noting that some vendors artificially split such add-ons into separate, paid products.
Resilient Storage add-on
The Resilient Storage add-on allows you to use GFS2 (Global File System 2), which enables shared, direct access to a block device from multiple nodes. Importantly, GFS2 runs directly on top of the Linux kernel's VFS (Virtual File System), an abstraction layer that allows the use of standard user-space calls. This means excellent compatibility and no need for multiple additional abstraction layers or specialized programs. A very desirable feature of GFS2 is full consistency: a change saved on one node is immediately visible to all nodes in the cluster. To maintain high availability, GFS2 uses software shared with the High Availability add-on, such as Corosync and Pacemaker, together with the Distributed Lock Manager (DLM), available only in this add-on, which provides file-system locking in a deadlock-free manner.
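To illustrate how this looks in practice, here is a minimal sketch of creating and mounting a GFS2 file system. The cluster name `elcluster`, the label `gfs2vol`, and the device `/dev/sdb` are hypothetical placeholders; the commands assume a running Pacemaker/Corosync cluster with DLM already configured.

```shell
# Hypothetical names: cluster "elcluster", FS label "gfs2vol", shared device /dev/sdb.
# GFS2 needs one journal per node that will mount the file system (-j 2 here),
# and the lock_dlm protocol so that DLM coordinates access between nodes.
mkfs.gfs2 -p lock_dlm -t elcluster:gfs2vol -j 2 /dev/sdb

# Mount on each node; the dlm and gfs2 kernel modules must be active,
# which in a Pacemaker cluster is normally arranged via dedicated resources.
mount -t gfs2 /dev/sdb /mnt/shared
```

Note that the `-t` value must match the cluster name configured in Corosync, otherwise the mount will be refused.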
Supported technical parameters are consistent with those offered by Red Hat®: the maximum number of nodes in a GFS2 cluster is 16 on the x86_64 architecture, and the maximum file-system size is 100 TiB.
High Availability add-on
The High Availability add-on is based on two main projects: Corosync and Pacemaker. In terms of the CAP theorem (Consistency, Availability and Partition tolerance), this solution favors consistency: if quorum or a specific resource is lost, or another configured rule is triggered, the cluster stops working in order to remain consistent. Thanks to the resource-fencing mechanism and its node-level implementation, STONITH (Shoot The Other Node In The Head), it also avoids split-brain situations, because in the event of a failure it can apply very aggressive policies, up to and including an immediate power-off of the affected node.
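As a hedged sketch of what node-level STONITH configuration can look like with the `pcs` tool: the node name, BMC address, and credentials below are hypothetical, and `fence_ipmilan` is just one of many available fence agents.

```shell
# Hypothetical example: fence node1 via its IPMI management controller.
# All addresses and credentials are placeholders.
pcs stonith create node1-fence fence_ipmilan \
    pcmk_host_list=node1 ip=10.0.0.101 \
    username=admin password=secret lanplus=1

# Check that the fencing device is configured and running.
pcs stonith status
```

With such a device in place, Pacemaker can forcibly power off an unresponsive node before recovering its resources elsewhere, which is what prevents split-brain.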
Clusters based on Corosync and Pacemaker can, among other things:
- support active/passive and active/active clusters
- detect node failures, automatically remove a failed node from the cluster, and restore it once it is healthy
- group resources together with their monitoring and access management (e.g. a given resource can run only on one node)
- maintain the cluster in a synchronized state
- handle communication between nodes.
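The resource-grouping capability from the list above can be sketched with `pcs` as follows. The resource names, IP address, and Apache configuration path are hypothetical; the `IPaddr2` and `apache` agents are standard OCF resource agents shipped with the cluster stack.

```shell
# Hypothetical example: a floating IP and a web server grouped so that they
# always run together, on exactly one node at a time.
pcs resource create vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.50 cidr_netmask=24 op monitor interval=30s
pcs resource create web ocf:heartbeat:apache \
    configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s

# Group members start in order and stop in reverse order,
# and the whole group fails over as a unit.
pcs resource group add webgroup vip web
```

Grouping also covers the monitoring aspect mentioned above: the `op monitor` operations let Pacemaker detect a failed resource and restart or relocate the whole group.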
Finally, it is worth mentioning the maximum supported number of nodes:
- the maximum cluster size is 32 nodes (16 for EuroLinux 6 with an extended support subscription)
- for more than 16 nodes the cluster cannot use GFS2.
How to enable High Availability/Resilient Storage repositories and install their software?
This topic is described in a how-to that is part of the open documentation project for EuroLinux.
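In broad strokes, the procedure boils down to enabling the repositories and installing the packages. The repository IDs below are hypothetical placeholders; check the EuroLinux documentation for the exact names used by your subscription.

```shell
# Hypothetical repository IDs -- consult the EuroLinux how-to for the real ones.
dnf config-manager --set-enabled highavailability resilientstorage
dnf repolist --enabled

# Install the cluster stack and the GFS2 user-space utilities.
dnf install -y pcs pacemaker corosync fence-agents-all gfs2-utils
```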