Good day. I've run into a problem I can't solve:
there's a Debian 4.9.51-1~bpo8+1 box (openmediavault) with two NICs; both interfaces (eth0 at 1 Gbit and eth1 at 100 Mbit) are bonded in active-backup mode. The machine runs around the clock, and at some point the server becomes unreachable.
After a forced reboot I found an interface rename in the logs:
# grep -E "r8169|bond|eth0" messages
Dec 15 18:04:30 storage kernel: [ 1.236400] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
Dec 15 18:04:30 storage kernel: [ 1.236409] r8169 0000:06:00.0: can't disable ASPM; OS doesn't have ASPM control
Dec 15 18:04:30 storage kernel: [ 1.236821] r8169 0000:06:00.0 eth0: RTL8168c/8111c at 0xffffa9a0018bd000, 00:80:c8:3b:94:b8, XID 1c4000c0 IRQ 28
Dec 15 18:04:30 storage kernel: [ 1.236823] r8169 0000:06:00.0 eth0: jumbo features [frames: 6128 bytes, tx checksumming: ko]
Dec 15 18:04:30 storage kernel: [ 14.468829] r8169 0000:06:00.0 rename2: renamed from eth0
Dec 15 18:04:30 storage kernel: [ 106.062866] bond0: Setting MII monitoring interval to 100
Dec 15 18:04:30 storage kernel: [ 106.062916] bond0: Setting down delay to 200
Dec 15 18:04:30 storage kernel: [ 106.062960] bond0: Setting up delay to 200
Dec 15 18:04:30 storage kernel: [ 106.089189] bond0: Adding slave eth1
Dec 15 18:04:30 storage kernel: [ 106.096828] bond0: Enslaving eth1 as a backup interface with a down link
Dec 15 18:04:30 storage kernel: [ 106.103975] bond0: interface eth0 does not exist!
Dec 15 18:04:30 storage kernel: [ 106.422676] bond0: link status up for interface eth1, enabling it in 0 ms
Dec 15 18:04:30 storage kernel: [ 106.429373] bond0: link status definitely up for interface eth1, 100 Mbps full duplex
Dec 15 18:04:30 storage kernel: [ 106.429375] bond0: making interface eth1 the new active one
Dec 15 18:04:30 storage kernel: [ 106.429416] bond0: first active interface up!
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: rename2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:80:c8:3b:94:b8 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:80:c8:3b:94:b8 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:80:c8:3b:94:b8 brd ff:ff:ff:ff:ff:ff
    inet x.x.x.x/24 brd x.x.x.x scope global bond0
       valid_lft forever preferred_lft forever
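As a temporary workaround to get the link back without another reboot, the stuck device can usually be renamed by hand and re-enslaved (a sketch, assuming the device still sits there as `rename2` and the `ifenslave` package is installed):

```shell
# Hypothetical recovery: give the stuck device back its persistent name
# and add it to the bond. Guarded so it is a no-op when rename2 is absent.
if [ -d /sys/class/net/rename2 ]; then
    ip link set rename2 down
    ip link set rename2 name eth0      # rename back to the expected name
    ip link set eth0 up
    ifenslave bond0 eth0               # bond-primary eth0 should then take over
fi
```

Without ifenslave, the bonding sysfs interface does the same: `echo +eth0 > /sys/class/net/bond0/bonding/slaves`.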
Nothing has changed in the configs:
/etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="90:e6:ba:85:40:4d", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:80:c8:3b:94:b8", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
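One thing that stands out: the kernel log shows the r8169 (MAC 00:80:c8:3b:94:b8) coming up as eth0, while the persistent rule maps that MAC to eth1, so udev has to swap the two names at boot. To see which name udev would assign to the stuck device, a dry-run check like this may help (a sketch; `rename2` is the temporary name from `ip a` above, and `udevadm test` itself changes nothing):

```shell
# Dry-run udev rule processing for the stuck device and show the
# name-related lines. Guarded so it only runs if rename2 actually exists.
if [ -e /sys/class/net/rename2 ]; then
    udevadm test /sys/class/net/rename2 2>&1 | grep -iE 'NAME|rename'
fi
```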
/etc/network/interfaces
auto lo
iface lo inet loopback

# bond0 network interface
auto bond0
iface bond0 inet static
    address x.x.x.x
    gateway x.x.x.x
    netmask x.x.x.x
    dns-nameservers x.x.x.x
    bond-slaves eth1 eth0
    bond-primary eth0
    bond-mode 1
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200

iface bond0 inet6 manual
    pre-down ip -6 addr flush dev $IFACE
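After a reboot it is worth checking what the bond actually enslaved; the bonding driver exposes its state under /proc (a quick check, assuming nothing beyond the bond0 name used in the config above):

```shell
# Show mode, active slave, and per-slave link state of bond0, if it exists.
if [ -r /proc/net/bonding/bond0 ]; then
    grep -E 'Bonding Mode|Currently Active Slave|Slave Interface|MII Status' \
        /proc/net/bonding/bond0
fi
```

With both NICs healthy this should list eth0 and eth1 as slaves, with eth0 as the currently active one.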