Greetings! In short: after upgrading Slackware from 14.1 to 14.2, the software RAID on two HDDs went to hell. Everything is still alive, but the arrays started coming up as md127, md126, md125, md124 instead of md1, md2, md3, md4. Since md1 is root (with /boot on it), md2 is swap, and md3 is home, naturally everything stopped working and the box wouldn't even boot. The stock mdadm.conf was present in the system, but everything in it was commented out, and no initramfs was used. It started out as Slackware 14.1; first I updated it within 14.1, which installed a new kernel, and after a reboot everything was fine. After the upgrade to 14.2, it no longer was.
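For reference, this is how I look at what the kernel actually assembled and under which names (standard commands; the md127 below is just one of the auto-named devices as an example):
cat /proc/mdstat                  # shows which mdX names the arrays came up under and their member partitions
mdadm --detail /dev/md127         # prints the array UUID, RAID level and member disks for one of the renamed arrays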
So I made an mdadm.conf:
ARRAY /dev/md1 metadata=0.90 UUID=7cc47bea:832f8260:208cdb8d:9e23b04b
ARRAY /dev/md2 metadata=0.90 UUID=cce81d3a:78965aa5:208cdb8d:9e23b04b
ARRAY /dev/md3 metadata=0.90 UUID=f0bc71fc:8467ef54:208cdb8d:9e23b04b
ARRAY /dev/md4 metadata=0.90 UUID=3f4daae2:cbf37a2a:208cdb8d:9e23b04b
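The UUIDs were taken from the superblocks; something along these lines produces the same ARRAY lines (the /etc/mdadm.conf path is the stock location on Slackware):
mdadm --examine --scan                      # same as mdadm -Es: one ARRAY line with UUID per detected array
mdadm --examine --scan >> /etc/mdadm.conf   # append, then adjust the /dev/mdN names by hand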
Why did half of the arrays get picked up correctly and the other half not? They are all set up identically, so either all of them should have come up right or none of them, shouldn't they? I'm at a complete loss here... Does anyone have any ideas about this?
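As far as I understand, with 0.90 metadata the kernel autodetect takes the array number from the "Preferred Minor" field stored in each superblock, so one thing I'm going to compare across the partitions (just a guess at where the difference might be hiding, not a claim that this is the cause):
mdadm --examine /dev/sda1 | grep -i 'preferred minor'   # should say 1 if the superblock asks for md1
mdadm --examine /dev/sda3 | grep -i 'preferred minor'   # the suspects: the arrays that come up as md124/md125
mdadm --examine /dev/sda4 | grep -i 'preferred minor'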
df -h :
Filesystem Size Used Avail Use% Mounted on
/dev/root 99G 85G 9.3G 91% /
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.8G 1008K 3.8G 1% /run
tmpfs 3.8G 0 3.8G 0% /dev/shm
cgroup_root 3.8G 0 3.8G 0% /sys/fs/cgroup
cgmfs 100K 0 100K 0% /run/cgmanager/fs
mount:
/dev/md1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /dev/shm type tmpfs (rw)
dmesg (excerpt):
[ 0.215951] amd_nb: Cannot enumerate AMD northbridges
[ 0.316910] ata7: PATA max UDMA/100 cmd 0x2018 ctl 0x2024 bmdma 0x2000 irq 17
[ 0.318177] ata9: PATA max PIO4 cmd 0x1f0 ctl 0x3f6 irq 14
[ 3.335294] ata10: PATA max PIO4 cmd 0x170 ctl 0x376 irq 15
[ 3.494385] md: linear personality registered for level -1
[ 3.494518] md: raid0 personality registered for level 0
[ 3.494649] md: raid1 personality registered for level 1
[ 3.494781] md: raid10 personality registered for level 10
[ 3.494956] md: raid6 personality registered for level 6
[ 3.495092] md: raid5 personality registered for level 5
[ 3.495223] md: raid4 personality registered for level 4
[ 3.495355] md: multipath personality registered for level -4
[ 3.825256] md: Waiting for all devices to be available before autodetect
[ 3.825390] md: If you don't use raid, use raid=noautodetect
[ 3.825718] md: Autodetecting RAID arrays.
[ 3.933280] md: Scanned 8 and added 8 devices.
[ 3.933422] md: autorun ...
[ 3.933550] md: considering sdb4 ...
[ 3.933683] md: adding sdb4 ...
[ 3.933812] md: sdb3 has different UUID to sdb4
[ 3.933943] md: sdb2 has different UUID to sdb4
[ 3.934078] md: sdb1 has different UUID to sdb4
[ 3.934211] md: adding sda4 ...
[ 3.934340] md: sda3 has different UUID to sdb4
[ 3.934471] md: sda2 has different UUID to sdb4
[ 3.934602] md: sda1 has different UUID to sdb4
[ 3.934910] md: created md125
[ 3.935045] md: bind<sda4>
[ 3.935187] md: bind<sdb4>
[ 3.935327] md: running: <sdb4><sda4>
[ 3.935667] md/raid1:md125: active with 2 out of 2 mirrors
[ 3.935830] md125: detected capacity change from 0 to 514872442880
[ 3.935971] md: considering sdb3 ...
[ 3.936107] md: adding sdb3 ...
[ 3.936237] md: sdb2 has different UUID to sdb3
[ 3.936368] md: sdb1 has different UUID to sdb3
[ 3.936501] md: adding sda3 ...
[ 3.936630] md: sda2 has different UUID to sdb3
[ 3.936761] md: sda1 has different UUID to sdb3
[ 3.937055] md: created md124
[ 3.937183] md: bind<sda3>
[ 3.937323] md: bind<sdb3>
[ 3.937464] md: running: <sdb3><sda3>
[ 3.937787] md/raid1:md124: active with 2 out of 2 mirrors
[ 3.937941] md124: detected capacity change from 0 to 375809572864
[ 3.938087] md: considering sdb2 ...
[ 3.938219] md: adding sdb2 ...
[ 3.938349] md: sdb1 has different UUID to sdb2
[ 3.938481] md: adding sda2 ...
[ 3.938610] md: sda1 has different UUID to sdb2
[ 3.938902] md: created md2
[ 3.939035] md: bind<sda2>
[ 3.939176] md: bind<sdb2>
[ 3.939317] md: running: <sdb2><sda2>
[ 3.939646] md/raid1:md2: active with 2 out of 2 mirrors
[ 3.939799] md2: detected capacity change from 0 to 2147418112
[ 3.939938] md: considering sdb1 ...
[ 3.940076] md: adding sdb1 ...
[ 3.940207] md: adding sda1 ...
[ 3.940495] md: created md1
[ 3.940624] md: bind<sda1>
[ 3.940765] md: bind<sdb1>
[ 3.940905] md: running: <sdb1><sda1>
[ 3.941233] md/raid1:md1: active with 2 out of 2 mirrors
[ 3.941388] md1: detected capacity change from 0 to 107374116864
[ 3.941526] md: ... autorun DONE.
[ 3.943359] EXT4-fs (md1): couldn't mount as ext3 due to feature incompatibilities
[ 3.985138] EXT4-fs (md1): mounted filesystem with ordered data mode. Opts: (null)
[ 9.235215] Adding 2097084k swap on /dev/md2. Priority:-1 extents:1 across:2097084k
[ 9.998237] EXT4-fs (md1): re-mounted. Opts: (null)
[ 105.253940] md124: detected capacity change from 375809572864 to 0
[ 105.253950] md: md124 stopped.
[ 105.253959] md: unbind<sdb3>
[ 105.258033] md: export_rdev(sdb3)
[ 105.258083] md: unbind<sda3>
[ 105.262015] md: export_rdev(sda3)
[ 107.749713] md125: detected capacity change from 514872442880 to 0
[ 107.749723] md: md125 stopped.
[ 107.749733] md: unbind<sdb4>
[ 107.754051] md: export_rdev(sdb4)
[ 107.754084] md: unbind<sda4>
[ 107.759025] md: export_rdev(sda4)
[ 114.843287] md: md3 stopped.
[ 114.844140] md: bind<sdb3>
[ 114.844329] md: bind<sda3>
[ 114.850069] md/raid1:md3: active with 2 out of 2 mirrors
[ 114.850109] md3: detected capacity change from 0 to 375809572864
[ 114.910224] md: md4 stopped.
[ 114.911789] md: bind<sdb4>
[ 114.911992] md: bind<sda4>
[ 114.923705] md/raid1:md4: active with 2 out of 2 mirrors
[ 114.923745] md4: detected capacity change from 0 to 514872442880
uname -a:
Linux drago 4.4.29 #2 SMP Mon Oct 31 15:02:12 CDT 2016 x86_64 Intel(R) Core(TM)2 Duo CPU E4500 @ 2.20GHz GenuineIntel GNU/Linux
drago.domain.com
mdadm -Es:
ARRAY /dev/md1 UUID=7cc47bea:832f8260:208cdb8d:9e23b04b
ARRAY /dev/md2 UUID=cce81d3a:78965aa5:208cdb8d:9e23b04b
ARRAY /dev/md124 UUID=f0bc71fc:8467ef54:208cdb8d:9e23b04b
ARRAY /dev/md125 UUID=3f4daae2:cbf37a2a:208cdb8d:9e23b04b
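One idea I want to try, assuming the "Preferred Minor" in those two superblocks really did end up as 124/125 (not presenting this as the answer, and /home has to be unmounted and backed up first): reassemble the two misnamed arrays once under the names I want with --update=super-minor, which rewrites the minor stored in a 0.90 superblock:
mdadm --stop /dev/md124                                              # or /dev/md3 if it is currently running under that name
mdadm --assemble /dev/md3 --update=super-minor /dev/sda3 /dev/sdb3
mdadm --stop /dev/md125                                              # or /dev/md4
mdadm --assemble /dev/md4 --update=super-minor /dev/sda4 /dev/sdb4
mdadm -Es                                                            # afterwards this should report /dev/md3 and /dev/md4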