Deploying Ceph cluster using Cephadm

Nikhil Patil
Nov 17, 2020

What is Ceph?

Ceph is open-source, software-defined storage maintained by Red Hat. It is capable of block, object, and file storage. Ceph clusters are designed to run on any hardware using an algorithm called CRUSH (Controlled Replication Under Scalable Hashing), which ensures that data is evenly distributed across the cluster and can be retrieved quickly without any central bottleneck. Replication, thin provisioning, and snapshots are key features of Ceph storage.

Ceph object storage is accessible through Amazon Simple Storage Service (S3)- and OpenStack Swift-compatible REST APIs, as well as a native API for integration with software applications.
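For example, once a RADOS Gateway (RGW) daemon has been deployed (not covered in this demo), any standard S3 client can talk to the cluster. A minimal sketch, where the user name and the RGW endpoint are hypothetical:

$ radosgw-admin user create --uid=demo --display-name="Demo User"

$ aws --endpoint-url http://rgw-host:8000 s3 ls

The first command prints an access key and secret key that you would feed to aws configure before running the second.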

Ceph block storage makes use of a Ceph Block Device, which is a virtual disk that can be attached to bare-metal Linux-based servers or virtual machines. The Ceph Reliable Autonomic Distributed Object Store (RADOS) provides block storage capabilities, such as snapshots and replication. The Ceph RADOS Block Device is integrated to work as a back end with OpenStack Block Storage.
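As a quick illustration of the block layer, once the cluster we build below is healthy you could create and map an RBD image from a host with the Ceph packages and kernel RBD support; the pool and image names here are just examples:

$ ceph osd pool create rbd
$ rbd pool init rbd
$ rbd create disk1 --size 1024
$ rbd map disk1

The mapped image then shows up as a regular block device (e.g. /dev/rbd0) that you can format and mount.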

What is Podman?

Podman is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. Containers can be run either as root or in rootless mode. Podman relies on an OCI-compliant container runtime (runc, crun, runv, etc.) to interface with the operating system and create the running containers.
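A quick sanity check that Podman itself works on your host (the image is just an example):

$ podman run --rm docker.io/library/alpine:latest echo "rootless containers work"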

What is Cephadm?

Cephadm deploys and manages a Ceph cluster by connecting to hosts from the manager daemon via SSH to add, remove, or update Ceph daemon containers. It does not rely on external configuration or orchestration tools like Ansible, Rook, or Salt. Cephadm manages the full lifecycle of a Ceph cluster. It starts by bootstrapping a tiny Ceph cluster on a single node (one monitor and one manager) and then uses the orchestration interface to expand the cluster to include all hosts and to provision all Ceph daemons and services.

Cephadm is new in the Octopus v15.2.0 release and does not support older versions of Ceph.

Requirements

  • Systemd
  • Podman or Docker for running containers
  • Time synchronization (such as chrony or NTP)
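You can quickly confirm these prerequisites on each node before going any further, for example:

$ systemctl is-active chronyd
$ podman --version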

Let's start the demo.

Install cephadm on the node

$ dnf -y install centos-release-ceph-octopus epel-release

$ dnf -y install cephadm
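Verify the installation (your exact point release may differ):

$ cephadm version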

Create a directory for the Ceph configuration files:

$ mkdir -p /etc/ceph

Now that we have set up all the prerequisites, we can start the bootstrap process:

$ cephadm bootstrap --mon-ip 192.168.122.139

INFO:cephadm:Verifying podman|docker is present…
INFO:cephadm:Verifying lvm2 is present…
INFO:cephadm:Verifying time synchronization is in place…
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Repeating the final host check…
INFO:cephadm:podman|docker (/usr/bin/podman) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: 73ea2064-2501-11eb-a403-525400e6a4a9
INFO:cephadm:Verifying IP 192.168.122.139 port 3300 …
INFO:cephadm:Verifying IP 192.168.122.139 port 6789 …
INFO:cephadm:Mon IP 192.168.122.139 is in CIDR network 192.168.122.0/24
INFO:cephadm:Pulling container image docker.io/ceph/ceph:v15…
INFO:cephadm:Extracting ceph user uid/gid from container image…
INFO:cephadm:Creating initial keys…
INFO:cephadm:Creating initial monmap…
INFO:cephadm:Creating mon…
INFO:cephadm:firewalld ready
INFO:cephadm:Enabling firewalld service ceph-mon in current zone…
INFO:cephadm:Waiting for mon to start…
INFO:cephadm:Waiting for mon…
INFO:cephadm:mon is available
INFO:cephadm:Assimilating anything we can from ceph.conf…
INFO:cephadm:Generating new minimal ceph.conf…
INFO:cephadm:Restarting the monitor…
INFO:cephadm:Setting mon public_network…
INFO:cephadm:Creating mgr…
INFO:cephadm:Verifying port 9283 …
INFO:cephadm:firewalld ready
INFO:cephadm:Enabling firewalld service ceph in current zone…
INFO:cephadm:firewalld ready
INFO:cephadm:Enabling firewalld port 9283/tcp in current zone…
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start…
INFO:cephadm:Waiting for mgr…
INFO:cephadm:mgr not available, waiting (1/10)…
INFO:cephadm:mgr not available, waiting (2/10)…
INFO:cephadm:mgr not available, waiting (3/10)…
INFO:cephadm:mgr not available, waiting (4/10)…
INFO:cephadm:mgr is available
INFO:cephadm:Enabling cephadm module…
INFO:cephadm:Waiting for the mgr to restart…
INFO:cephadm:Waiting for Mgr epoch 6…
INFO:cephadm:Mgr epoch 6 is available
INFO:cephadm:Setting orchestrator backend to cephadm…
INFO:cephadm:Generating ssh key…
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost’s authorized_keys…
INFO:cephadm:Adding host node1…
INFO:cephadm:Deploying mon service with default placement…
INFO:cephadm:Deploying mgr service with default placement…
INFO:cephadm:Deploying crash service with default placement…
INFO:cephadm:Enabling mgr prometheus module…
INFO:cephadm:Deploying prometheus service with default placement…
INFO:cephadm:Deploying grafana service with default placement…
INFO:cephadm:Deploying node-exporter service with default placement…
INFO:cephadm:Deploying alertmanager service with default placement…
INFO:cephadm:Enabling the dashboard module…
INFO:cephadm:Waiting for the mgr to restart…
INFO:cephadm:Waiting for Mgr epoch 15…
INFO:cephadm:Mgr epoch 15 is available
INFO:cephadm:Generating a dashboard self-signed certificate…
INFO:cephadm:Creating initial admin user…
INFO:cephadm:Fetching dashboard port number…
INFO:cephadm:firewalld ready
INFO:cephadm:Enabling firewalld port 8443/tcp in current zone…
INFO:cephadm:Ceph Dashboard is now available at:

URL: https://node1:8443/
User: admin
Password: 2plpxwuiaz

INFO:cephadm:You can access the Ceph CLI with:

sudo /usr/sbin/cephadm shell --fsid 73ea2064-2501-11eb-a403-525400e6a4a9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

ceph telemetry on

For more information see:

https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.

As you can see, the bootstrap process covers a few important phases:

  • Creates a monitor and manager daemon for the new cluster on the local host.
  • Generates a new SSH key for the Ceph cluster and adds it to the root user's /root/.ssh/authorized_keys file.
  • Writes a minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf.
  • Writes a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.
  • Writes a copy of the public key to /etc/ceph/ceph.pub.
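You can confirm that these files were written:

$ ls -l /etc/ceph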

Enable the Ceph CLI

$ alias ceph='cephadm shell -- ceph'

$ echo "alias ceph='cephadm shell -- ceph'" >> ~/.bashrc
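Alternatively, if you would rather have the native ceph command on the host instead of the alias, cephadm can install the packages for you:

$ cephadm add-repo --release octopus

$ cephadm install ceph-common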

Now let's verify the Ceph version:

$ ceph -v
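You should see output along these lines (the exact point release and build hash will vary):

ceph version 15.2.x (...) octopus (stable)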

Verify that a container is running for each service:

$ podman ps

Transfer the SSH public key to the target nodes

$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2

$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@node3
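A quick way to confirm that passwordless SSH now works:

$ ssh root@node2 hostname

$ ssh root@node3 hostname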

Add the target nodes to the cluster

$ ceph orch host add node2

$ ceph orch host add node3

To view the current hosts and labels:

$ ceph orch host ls
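Once all three nodes are added, the output should look something like this:

HOST   ADDR   LABELS  STATUS
node1  node1
node2  node2
node3  node3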

Configure OSD

$ ceph orch daemon add osd node1:/dev/vdb

$ ceph orch daemon add osd node2:/dev/vdb

$ ceph orch daemon add osd node3:/dev/vdb
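Here we add each device explicitly. Alternatively, cephadm can list the devices it considers eligible and consume all of them automatically:

$ ceph orch device ls

$ ceph orch apply osd --all-available-devices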

Give cephadm a minute or two to deploy those daemons, and then use the ceph -s command to verify that our cluster is healthy:

$ ceph -s
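With all three monitors, managers, and OSDs up, the output should look something like this (daemon name suffixes abbreviated):

  cluster:
    id:     73ea2064-2501-11eb-a403-525400e6a4a9
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1.xxxx(active), standbys: node2.xxxx
    osd: 3 osds: 3 up, 3 in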

Let's verify that our Dashboard is available as well.

You can access Ceph Dashboard on the active MGR node.
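If you are not sure which node holds the active MGR, you can ask the cluster for its service endpoints; the dashboard URL is included in the JSON output:

$ ceph mgr services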

I hope this has shown you how to deploy Ceph using Cephadm.

If you face any issues during the installation, put them in the comments section and I will try to help resolve them.

Thank you!!
