Shared Storage (Ceph) - Funky Penguin's Geek Cookbook

Sadly not - neither with the IPs in the /etc/hosts file, nor using the IPs themselves directly in the fstab file

What OS are you using?

I’m using Ubuntu 18.04

You’re missing the port number. Also, as I understand it, you only need to specify one node, not all three - if all 3 nodes are up when the initial mount to (say) Node 1 is made, won’t the mount become aware of the other 2 nodes?

Perhaps just test the mount (using the mount command) directly to Node 1’s IP (and only Node 1) first?
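Something like this should do it (just a sketch - the IP 192.168.0.11 and mount point /mnt/test are placeholders, and I’m assuming the admin name/secret from your keyring):

sudo mkdir -p /mnt/test
sudo mount -t ceph 192.168.0.11:6789:/ /mnt/test -o name=admin,secret=<your-ceph-key>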

FYI, it’s working for me without the port number (provided you’ve not deviated from the default port). And yes, you only need to specify one node, but it can’t be the node you’re on, since the mon daemon may not have started yet (@TNTechnoHermit had this problem initially). Since you can’t predict which of your nodes will be up/down when this particular node boots, it’s more resilient to just add them all :slight_smile:

@funkypenguin I’ve finally got round to setting this up and it all appears to be working great, thank you. My only question: ceph status shows a total of 30GiB (3 x 10GB disks) with 27GiB available, but a df -h on the mounted ceph path shows only 8.5G available. Here’s the output from ceph status:

  cluster:
    id:     630cf0f4-a389-11ea-978c-fa163ec47a7e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum dn0,dn1,dn2 (age 11h)
    mgr: dn0.qomczu(active, since 11h), standbys: dn1.cpxaem
    mds: data:1 {0=data.dn2.moduyx=up:active} 1 up:standby
    osd: 3 osds: 3 up (since 11h), 3 in (since 12h)

  task status:
    scrub status:
        mds.data.dn2.moduyx: idle

  data:
    pools:   3 pools, 65 pgs
    objects: 27 objects, 74 KiB
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     65 active+clean

Thank you

Edit: Also, I notice only one of my nodes is in standby as a manager - how do I get the 3rd node into standby too? If I reboot dn1 (the standby) or dn2 (the one not in standby) for a kernel update, ceph continues to operate with a warning. However, if I reboot dn0 (the one with cephadm on it), ceph hangs and doesn’t operate until the VPS is back online - which concerns me, given what would happen once I have docker volumes mounted here with DBs running etc. if ceph stops! Any ideas?

Edit 2: (Sorry) I’m also receiving a health warning of 1 hosts fail cephadm check. How do I go about finding out which host has the issue, and any ideas on what might cause it? This is only a mess-around, so I’m happy to wipe and try again, but thought I’d best ask first :smiley:

P.S. I’m a complete noob to all of this stuff, so forgive my ignorance :slight_smile:

This means that you have 30GiB of “raw” storage, but due to the number of replicas you’ve chosen (3, by default) in the pool used to back cephfs, you’re only presented with an availability of ~8.5G for cephfs (roughly the 27GiB of raw space divided by 3, less overhead).

If (for example) you created another pool (say, for rbd storage) with 2 replicas instead of 3, you’d have around 13.5G available in that pool (the same 27GiB of raw space divided by 2 replicas rather than 3), and as you consumed space in each pool, the available storage would be reduced accordingly.
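If you want to check or change the replica count on a pool, something like this should work (a sketch - substitute your actual pool names, which ceph osd pool ls will show you):

ceph osd pool ls                      # list your pools
ceph osd pool get <pool-name> size    # show the current replica count
ceph osd pool set <pool-name> size 2  # reduce to 2 replicas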


I’ve not tried this yet, but based on https://ceph.readthedocs.io/en/latest/mgr/orchestrator/, I think you’d do something like ceph orch apply mgr 3 dn2. However, how do you know that ceph hangs when dn0 reboots? What does ceph -s on dn1/2 tell you when this happens?
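Re the mgr count - untested, but per those docs, I’d expect something like this (the --placement syntax is my reading of the Octopus orchestrator docs, with your hostnames swapped in):

ceph orch apply mgr --placement="3 dn0 dn1 dn2"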

@funkypenguin Thank you David for explaining that - it makes more sense now. My next question, of course, is how would I go about specifying 2 replicas for the pool I have, or for a new one (happy to start again if that’s easier)? The idea in my mind is that I could then lose one VPS (for a reboot) while still having enough space left over across the 3 nodes I currently have set up.

@funkypenguin Thanks again, I’ll have a play with this and report back. Noticed I’ve only got 2/3 nodes for MDS too. Out of interest, did you find this was the case when you tried this or did you have all hosts showing for all the various services (like the old guide used to)?

When running ceph -s on dn1/2 (while dn0 is offline), it just sits and never returns. Also, if I put a test file in the ceph mount during this time, it is not copied over to the remaining hosts. If I reboot either of the other two (but not dn0), ceph -s returns immediately with the warning that a host is down (as expected), and everything continues to operate normally.

In fstab you need to put:

manager1,manager2,manager3:/ /var/data ceph name=admin,secret=[cephkey],noatime,_netdev 0 0

[cephkey] is generated by sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring.

On Debian Buster you must install ceph-common first.
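To test the fstab entry without a reboot, you can do (assuming the entry above is already in place):

sudo mount -a
df -h /var/data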

Sorry for my bad English… I hope I have helped.

@funkypenguin, you forgot to add the secret option to fstab in the recipe.

Welcome @cristain_dkb! It turns out I didn’t need a secret option - provided I installed ceph-common using the Octopus repo installed by cephadm, it just worked :slight_smile:

Maybe it’s a distro thing. On Debian Buster without the secret option I had error 22, like @zeiglecm. But with the secret option it works perfectly.

Another thing: to run ceph orch host add [node-name], I first needed to execute sudo ./cephadm shell --fsid [fsid] -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring.
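In other words, something like this (with [fsid] taken from your own cluster, and node2 as a placeholder hostname):

sudo ./cephadm shell --fsid [fsid] -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
ceph orch host add node2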

@funkypenguin, this is a great job, thank you very much. I’m going to deploy a Swarm at a university in Argentina, with on-premises servers, and these guides are very useful.


@cristain_dkb I’ve added a section on Debian Buster, but I can’t test it currently: https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/shared-storage-ceph/#mount-cephfs-volume

Would you mind validating that my example works for you?

Thanks!
D

Hi, I’m totally new to this topic, so go gentle on me.
I’m trying to create a one-node cluster just to see if I’ve got the basics right.
(And do I really need 3 cluster hosts? If so, I might have to give up now, as I only have two.)

I get this error (and yes, I had to do the secret export):

root@docker01:/etc# mount -a
mount error: no mds server is up or the cluster is laggy

How long does it take to create the MDS?

root@docker01:/etc# ceph -s
  cluster:
    id:     9fa14f50-af83-11ea-a832-0211322d0d4d
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            Reduced data availability: 65 pgs inactive
            Degraded data redundancy: 65 pgs undersized
            OSD count 1 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum docker01 (age 58m)
    mgr: docker01.xnkndd(active, since 56m)
    mds: data:1 {0=data.docker01.tvnvea=up:creating}
    osd: 1 osds: 1 up (since 50m), 1 in (since 50m)

  task status:
    scrub status:
        mds.data.docker01.tvnvea: idle

  data:
    pools:   3 pools, 65 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 499 GiB / 500 GiB avail
    pgs:     100.000% pgs not active
             65 undersized+peered

Hey @scyto!

Hmm, I’ve not tried this with a single node - but it does look unhappy about only having 1 OSD. I’d still have expected the MDS to start, though. What are the specs of the node you’re using? If it’s too old/slow, you may be bumping into some default IO limits…
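One hunch: since all 65 of your PGs are inactive (undersized+peered), the default pools are probably waiting for 3 replicas, which a single OSD can’t provide - and the MDS can’t finish creating until its pools go active. If this is purely an experiment, you could try relaxing the replica count (a sketch - pool names come from ceph osd pool ls, and newer Ceph releases may insist on --yes-i-really-mean-it when setting size 1):

ceph config set global osd_pool_default_size 1  # new pools default to 1 replica
ceph osd pool ls                                # find the existing pool names
ceph osd pool set <pool-name> size 1            # repeat for each existing pool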

You could also try checking the logs of the mds container, in case that gives a clue?
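For example, using the daemon name from your ceph -s output above (this just wraps journalctl for that daemon’s unit):

sudo cephadm logs --name mds.data.docker01.tvnvea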

D

Thanks for the pointers, will try again soon. I tore down the docker host completely to start again, in case I mucked something up the first time through.

The docker host is actually a Debian VM running on a Synology VM host. The data volume is an iSCSI volume on the VM host, mapped into the Debian VM.

I could see how that could cause issues if Ceph is expecting extremely high IO, but mapping the iSCSI volume into Windows to test its speed showed it was reasonably fast.

tl;dr I lost the logs in the process of tearing it all down :slight_smile:

If you’re doing a 1-node cluster, you don’t really even need Ceph. Just create /var/data on the host and have the Docker containers access it there. Ceph is for keeping data highly available between nodes. So, unless you’re setting this up now and meaning to add more nodes to the Swarm later, I think you could just ignore Ceph and move on.
