Hi. I've set up a 3-node cluster in a test environment. Ceph is configured with 3 OSDs, one per node, and a pool with size=3, min_size=2.
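For reference, this is roughly how the pool was set up (the pool name "lxc-pool" is just a placeholder here, 64 is the pg count):

ceph osd pool create lxc-pool 64
ceph osd pool set lxc-pool size 3       # 3 replicas, one per node
ceph osd pool set lxc-pool min_size 2   # keep I/O going with 2 replicas up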
Then I created a single LXC container under HA and powered off the node it was running on. The container came back up fine on the second node, but ceph health keeps showing this:
Degraded data redundancy: 201/600 objects degraded (33.500%), 64 pgs unclean, 64 pgs degraded, 64 pgs undersized
pg 2.0 is stuck undersized for 873.596481, current state active+undersized+degraded, last acting [1,2]
pg 2.1 is stuck undersized for 873.602337, current state active+undersized+degraded, last acting [2,1]
pg 2.2 is stuck undersized for 873.596553, current state active+undersized+degraded, last acting [1,2]
pg 2.3 is stuck undersized for 873.596615, current state active+undersized+degraded, last acting [1,2]
pg 2.4 is stuck undersized for 873.602252, current state active+undersized+degraded, last acting [2,1]
pg 2.5 is stuck undersized for 873.600040, current state active+undersized+degraded, last acting [2,1]
pg 2.6 is stuck undersized for 873.602196, current state active+undersized+degraded, last acting [2,1]
pg 2.7 is stuck undersized for 873.596685, current state active+undersized+degraded, last acting [1,2]
pg 2.8 is stuck undersized for 873.596961, current state active+undersized+degraded, last acting [1,2]
pg 2.9 is stuck undersized for 873.600627, current state active+undersized+degraded, last acting [1,2]
pg 2.a is stuck undersized for 873.600086, current state active+undersized+degraded, last acting [2,1]
pg 2.b is stuck undersized for 873.600212, current state active+undersized+degraded, last acting [1,2]
Is it normal that it just sits in this state, 201/600 objects degraded (33.500%), and hasn't moved for half an hour already? I expected the degraded data to simply stay on the two remaining OSDs, like a mirror, with just a warning that one OSD is missing.
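If more output would help, I can also post the results of these (pool name is a placeholder):

ceph osd tree                      # shows which OSD is down/out
ceph osd pool get <pool> size      # should return 3
ceph osd pool get <pool> min_size  # should return 2
ceph pg 2.0 query                  # detail on why this PG stays undersized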