kowabunga.cloud.ceph role – Install, configure, and start Ceph storage cluster.

Note

This role is part of the kowabunga.cloud collection (version 0.1.0).

It is not included in ansible-core. To check whether it is installed, run ansible-galaxy collection list.

To install it use: ansible-galaxy collection install kowabunga.cloud.

To use it in a playbook, specify: kowabunga.cloud.ceph.
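
For example, a minimal playbook applying the role could look like the following (the ceph host group name is an illustrative assumption, adjust to your inventory):

  - name: Deploy Ceph storage cluster
    hosts: ceph
    become: true
    roles:
      - role: kowabunga.cloud.ceph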

Entry point main – Install, configure, and start Ceph storage cluster.

Synopsis

  • Install, configure, and start Ceph storage cluster.

Parameters

kowabunga_ceph_clients

list / elements=dictionary

List of client keyrings to be supported.

Keyrings must be handed over to client applications afterwards.

Typical clients are libvirt or Kubernetes, which need to access Ceph RBD block devices.

Default: [{"caps": {"mon": "profile rbd", "osd": "profile rbd pool=rbd"}, "name": "libvirt"}]

caps

dictionary

Map of client capabilities.

Refer to https://docs.ceph.com/en/latest/cephfs/client-auth/ for help.

name

string / required

Ceph client name.
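
For illustration, a client keyring definition mirroring the default value (libvirt client with RBD profiles) can be expressed as:

  kowabunga_ceph_clients:
    - name: libvirt
      caps:
        mon: "profile rbd"
        osd: "profile rbd pool=rbd"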

kowabunga_ceph_fs_enabled

boolean

Defines whether support for CephFS must be enabled.

Only useful if you intend to provide filesystem-as-a-service (e.g. with Kylo).

Choices:

  • false ← (default)

  • true

kowabunga_ceph_fs_filesystems

list / elements=dictionary

List of Ceph filesystems to be created on the storage cluster.

Requires the kowabunga_ceph_fs_enabled feature to be enabled.

Default: []

data_pool

string / required

Name of one of the previously created OSD pools, which will host the filesystem’s data.

default

boolean

Defines whether this filesystem is to be set as the default one.

Choices:

  • false ← (default)

  • true

fstype

string

Filesystem target type.

Choices:

  • "fs" ← (default)

  • "nfs"

metadata_pool

string / required

Name of one of the previously created OSD pools, which will host the filesystem’s metadata.

name

string / required

Filesystem name.

No whitespace or special characters are allowed.
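
As a sketch, assuming OSD pools named fs_data and fs_metadata have been declared in kowabunga_ceph_osd_pools (hypothetical pool names), a filesystem could be defined as:

  kowabunga_ceph_fs_enabled: true
  kowabunga_ceph_fs_filesystems:
    - name: nas
      fstype: fs
      data_pool: fs_data
      metadata_pool: fs_metadata
      default: true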

kowabunga_ceph_fsid

string / required

Ceph cluster filesystem ID.

Must be unique across your entire network (usually the case, unless you have multiple Ceph clusters).

Consists of a UUID (can be generated through the ‘uuidgen’ command).
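
For example, generate the UUID once with uuidgen and pin it in your variables (the value below is a placeholder, use your own):

  # Generated once with: uuidgen
  kowabunga_ceph_fsid: "2e3c9c8a-0d9f-4d5e-9c6b-7a1b2c3d4e5f"  # placeholder value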

kowabunga_ceph_group

string / required

Name of the group of machines that are part of the Ceph cluster.
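
For instance, assuming the cluster nodes belong to an inventory group named ceph (an assumption, adjust to your inventory):

  kowabunga_ceph_group: ceph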

kowabunga_ceph_local_keyrings_dir

path / required

Defines the local directory (relative to the playbook execution directory) where generated keyrings are stored.

Keyring files will be generated once (and for all) per regional cluster.

Generated keyring files are then deployed on cluster peers.

Once generated, keep these files under source control.
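
A minimal sketch, assuming keyrings are kept in a keyrings/ directory next to the playbook (hypothetical layout):

  kowabunga_ceph_local_keyrings_dir: keyrings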

kowabunga_ceph_manager_admin_password

string / required

Password for the ‘admin’ user.

Can be used to connect to the Ceph web client on port 8080.

It is recommended to keep it encrypted with Ansible Vault or SOPS.

Defaults to the encrypted secret_kowabunga_ceph_manager_admin_password variable.
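
As suggested above, the password can simply reference a vault-encrypted variable (variable name taken from the default mentioned above):

  kowabunga_ceph_manager_admin_password: "{{ secret_kowabunga_ceph_manager_admin_password }}"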

kowabunga_ceph_manager_enabled

boolean

Defines whether the Ceph Manager component must be deployed on the node.

The Ceph Manager provides monitoring and serviceability features.

It is recommended to enable it on at least one cluster node.

A group of 2 (or 3) is recommended for high-availability (active-passive).

Choices:

  • false ← (default)

  • true

kowabunga_ceph_monitor_enabled

boolean

Defines whether the Ceph Monitor component must be deployed on the node.

Ceph Monitors implement the cluster API and handle the clients’ workload.

At least one cluster node must have a monitor present.

A group of 3 is recommended for load balancing and high-availability.

Having more than 3 is usually not useful.

Choices:

  • false ← (default)

  • true

kowabunga_ceph_monitor_listen_addr

string

Network address for the Ceph Monitor to listen on.

Defaults to the private LAN address if unspecified.

kowabunga_ceph_osd_enabled

boolean

Defines whether Ceph OSD components must be deployed on the node.

An OSD (Object Storage Daemon) targets an atomic cluster storage entity (usually a disk or a partition).

There are as many OSD instances as there are disks taking part in the cluster.

Should be enabled, unless the node does not feature any data disk to be part of the cluster.

Choices:

  • false

  • true ← (default)
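
To illustrate how the manager, monitor, and OSD flags combine, a possible per-node layout (a sketch only, host names are assumptions) enables monitor and manager on the first nodes and OSDs everywhere, e.g. in host_vars:

  # host_vars/ceph-node-1.yml (same on nodes 2 and 3)
  kowabunga_ceph_monitor_enabled: true
  kowabunga_ceph_manager_enabled: true
  kowabunga_ceph_osd_enabled: true

  # host_vars/ceph-node-4.yml (OSD-only node)
  kowabunga_ceph_monitor_enabled: false
  kowabunga_ceph_manager_enabled: false
  kowabunga_ceph_osd_enabled: true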

kowabunga_ceph_osd_max_pg_per_osd

integer

Maximum number of PG (Placement Groups) per OSD.

Default: 500

kowabunga_ceph_osd_pools

list / elements=dictionary

List of storage pools to be created on the Ceph cluster.

Refer to https://linuxkidd.com/ceph/pgcalc.html for per-pool PG value calculation.

Values of at least 256 (or 512) are recommended for a multi-node cluster, 128 for a single-node one.

Warning: pg_autoscaler is enabled by default, see https://docs.ceph.com/en/latest/rados/operations/placement-groups

Default: [{"compression": {"algorithm": "snappy", "mode": "passive"}, "name": "rbd", "pgs": 256, "ptype": "rbd", "replication": {"min": 1, "request": 2}}]

compression

dictionary

Data compression settings.

algorithm

string

Compression algorithm to be used.

Choices:

  • "lz4"

  • "snappy" ← (default)

  • "zlib"

  • "zstd"

mode

string

Compression mode, tradeoff between storage size and CPU usage.

Choices:

  • "none"

  • "passive" ← (default)

  • "aggressive"

  • "force"

name

string / required

Pool name.

No whitespace or special characters are allowed.

pgs

integer / required

Number of PGs to be allocated to the pool.

ptype

string / required

Type of storage pool.

Use ‘rbd’ for block devices or ‘fs’ for filesystems.

Choices:

  • "rbd"

  • "fs"

replication

dictionary / required

Defines pool data replication factor.

Data fragments are copied over multiple OSDs for redundancy.

The bigger it is, the more resilient your cluster is to failures, but the less usable space you get.

min

integer / required

Minimum number of replicas that must be alive for the cluster to remain safe.

request

integer / required

Target replica count.
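
For illustration, a pool definition extending the default rbd pool with an additional filesystem pool might look like this (the fs_data pool name and PG counts are assumptions; check the PG calculator linked above):

  kowabunga_ceph_osd_pools:
    - name: rbd
      ptype: rbd
      pgs: 256
      replication:
        min: 1
        request: 2
      compression:
        algorithm: snappy
        mode: passive
    - name: fs_data
      ptype: fs
      pgs: 128
      replication:
        min: 1
        request: 2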

kowabunga_ceph_osds

list / elements=dictionary

List of instance OSD definitions.

Each disk/partition from the host must be declared in this list.

Default: []

dev

string / required

Unique Linux special device file representing the disk/partition to be mapped by the OSD.

Example: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNS0X101300

WARNING: Device WILL be formatted for Ceph usage.

id

integer / required

OSD unique identifier across the whole Ceph cluster.

Usually iterates incrementally over disks and instances (e.g. 0 for the first disk of the first instance, 1 for the second disk of the first instance, …).

weight

float / required

Weight of the OSD in Ceph crush map.

Value will determine the object placement and priority.

The bigger the value, the more chances one disk has to be elected to store data fragments.

Usually defined as the disk size in TB (e.g. a 1.92 TB SSD would be assigned a weight of 1.92).

Can be overridden to enforce placement (for example, if some disks are faster than others).
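
A minimal sketch of OSD declarations for a node with two 1.92 TB disks (device paths and IDs are illustrative placeholders):

  kowabunga_ceph_osds:
    - id: 0
      dev: /dev/disk/by-id/nvme-EXAMPLE-DISK-1  # placeholder device path
      weight: 1.92
    - id: 1
      dev: /dev/disk/by-id/nvme-EXAMPLE-DISK-2  # placeholder device path
      weight: 1.92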