RADOS
Reliable Autonomic Distributed Object Storage, the object store on which every Ceph™ deployment is built. For a more detailed discussion, see "What is Ceph™".

CRUSH
Controlled Replication Under Scalable Hashing, the algorithm with which clients deterministically calculate where data is placed instead of consulting a central lookup table. For a more detailed discussion, see "What is Ceph™".
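The key idea is that placement is computed, not looked up. The following toy sketch illustrates that principle only; it is not the real CRUSH algorithm, and the PG count, replica count, and PG-to-OSD table are invented for the example.

```python
import hashlib

# Hypothetical toy cluster: 8 placement groups (PGs) spread over 4 OSDs,
# two replicas each. NOT the real CRUSH algorithm, only an illustration
# of calculation-based placement.
NUM_PGS = 8
PG_TO_OSDS = {pg: [(pg + i) % 4 for i in range(2)] for pg in range(NUM_PGS)}

def place(object_name: str):
    """Deterministically map an object name to a PG and its OSDs."""
    digest = hashlib.sha1(object_name.encode()).digest()
    pg = int.from_bytes(digest[:4], "big") % NUM_PGS
    return pg, PG_TO_OSDS[pg]

# Every client computes the same answer without asking a central server.
print(place("my-object"))
```

Because every client runs the same calculation over the same map, no central metadata server needs to be consulted on the data path.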

librados
A Ceph™ library which enables direct interaction with RADOS. It has bindings for languages such as C, C++, Java, Python, Ruby and PHP. It is used mainly in advanced projects which consciously choose to exploit the full capabilities of Ceph™.
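As a minimal sketch of what direct access looks like, here is an example using the official `rados` Python binding; the config path, pool name, and object name are assumptions for illustration.

```python
import rados

# Assumed config path; adjust to your cluster.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Write and read an object directly in RADOS, bypassing any gateway.
    ioctx = cluster.open_ioctx('mypool')   # assumes a pool named 'mypool'
    try:
        ioctx.write_full('hello', b'Hello, RADOS!')
        print(ioctx.read('hello'))         # b'Hello, RADOS!'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```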

Monitor (mon)
Monitors are responsible for coordinating the cluster. When a client wants to connect to the cluster, it first contacts a monitor, from which it receives the cluster map: a set of maps describing the cluster, including the CRUSH map and information about the other available services. (The monitor in fact stores five maps, the monitor, OSD, PG, CRUSH and MDS maps, which are collectively called the cluster map.) With these, the client can calculate where a given object is located. A cluster should contain at least three monitors, and odd numbers are generally chosen. For the cluster to operate, a majority of the monitors must be available to form a quorum; if a quorum cannot be formed, the cluster becomes unavailable in order to protect its data.
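The quorum can be inspected by asking a monitor directly. A minimal sketch using the `rados` Python binding's `mon_command()`; the config path is an assumption:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
cluster.connect()
try:
    # 'quorum_status' reports which monitors currently form the quorum.
    cmd = json.dumps({'prefix': 'quorum_status', 'format': 'json'})
    ret, outbuf, errs = cluster.mon_command(cmd, b'')
    status = json.loads(outbuf)
    print('quorum members:', status['quorum_names'])
finally:
    cluster.shutdown()
```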

OSD
An Object Storage Device (OSD) stores data and provides replication, recovery of data, and rebalancing. An OSD can be backed by an entire disk, an LVM volume, or a file, with BlueStore as the preferred backend. One node in a cluster may host many OSDs.
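One way to see how many OSDs each node hosts is the `osd tree` monitor command; a sketch via the Python binding, with the config path assumed:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
cluster.connect()
try:
    # 'osd tree' lists hosts and the OSDs placed on each of them.
    cmd = json.dumps({'prefix': 'osd tree', 'format': 'json'})
    ret, outbuf, errs = cluster.mon_command(cmd, b'')
    for node in json.loads(outbuf)['nodes']:
        if node['type'] == 'host':
            print(node['name'], '->', node['children'])  # OSD ids on this host
finally:
    cluster.shutdown()
```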

Manager (mgr)
Contrary to what the name suggests, it does not play a management role: it monitors the cluster more than it manages it. The manager gathers runtime metrics and monitoring records, and can also expose these statistics to other systems. Generally, a manager runs on each node where a monitor is operating.

Ceph™ Object Gateway; Rados Gateway (RGW)
RGW is an interface that provides a RESTful gateway to RADOS clusters, allowing objects (as understood by OpenStack™ Swift and Amazon™ S3) to be managed in a cluster. RadosGW supports the following interfaces (an example using the S3-compatible path follows the list):
- interfaces aiming for compatibility with Amazon™ S3 (see "What is Ceph™")
- interfaces compatible with OpenStack™ Swift.
Importantly, RGW has its own user management mechanisms, which differ from the internal mechanisms of the cluster.
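Because the interface aims for S3 compatibility, a standard S3 client can talk to RGW. A minimal sketch with `boto3`; the endpoint and credentials are placeholders, as RGW users and their keys are created with its own tooling rather than with the cluster's internal authentication:

```python
import boto3

# Assumed endpoint and credentials for illustration only.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'Hello, RGW!')
print(s3.get_object(Bucket='demo-bucket', Key='hello.txt')['Body'].read())
```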

Rados Block Device (RBD)
Block devices are the most popular type of device for storing data. A block here simply means a portion of data on a randomly accessible device; it does not matter whether that device is a CD/DVD or an NVMe SSD. The same idea applies to RBD (Rados Block Device): from the user's perspective, Ceph™ exposes block devices, which in reality are thinly provisioned. RBD supports replication and snapshots, and can be accessed through a kernel module as well as from user space (for example via FUSE). The advantage of RBD is its use of RADOS, which distributes traffic across individual OSDs while also providing safety and integrity.
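A thin-provisioned image can be created and written programmatically through the `rbd` Python binding. A minimal sketch, where the config path, pool name, image name, and the 1 GiB size are assumptions:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')  # assumes a pool named 'rbd'
    try:
        # Create a 1 GiB image; being thin-provisioned, it consumes
        # space only as data is actually written.
        rbd.RBD().create(ioctx, 'demo-image', 1024 ** 3)
        with rbd.Image(ioctx, 'demo-image') as image:
            image.write(b'hello block device', 0)   # write at offset 0
            print(image.read(0, 18))                # read it back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```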

MetaData Server (MDS)
A service which stores metadata and coordinates CephFS operations. Metadata includes, among other things, directory structures, file owners and access permissions, and ACLs. Multiple MDSs may operate in a cluster.

CephFS
CephFS, the Ceph™ File System: a file system which aims for POSIX compatibility and is built on RADOS. Thanks to this approach, CephFS is a high-performance, shared, highly available file system.
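Besides kernel and FUSE mounts, CephFS can be used programmatically through the `cephfs` Python binding. A minimal sketch; the config path, directory, and file names are assumptions:

```python
import cephfs

fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')  # assumed path
fs.mount()
try:
    # Directory structure and permissions are metadata handled by the MDS.
    fs.mkdir(b'/demo', 0o755)
    fd = fs.open(b'/demo/hello.txt', 'w', 0o644)
    fs.write(fd, b'Hello, CephFS!', 0)
    fs.close(fd)
finally:
    fs.unmount()
    fs.shutdown()
```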

FUSE
Filesystem in Userspace (FUSE) is an interface which allows a file system implemented in user space to be exported to the Linux kernel. The most significant benefit of FUSE is that such file systems can be used safely by non-root system users.