
device mapper is stuck in an infinite “mount/remount” loop

asked 2016-04-23 02:53:03 +0000 by jonnyx

updated 2016-04-24 01:01:54 +0000

In a 5 node cluster, one of the nodes is stuck in a loop:

Apr 22 22:43:51 atomic05 kernel: XFS (dm-6): Mounting V4 Filesystem
Apr 22 22:43:51 atomic05 kernel: XFS (dm-6): Ending clean mount
Apr 22 22:43:51 atomic05 kernel: XFS (dm-6): Unmounting Filesystem
Apr 22 22:43:51 atomic05 systemd: Device dev-disk-by\x2duuid-d7f87303\x2d30f2\x2d402a\x2dbdb6\x2dd25140573d0e.device appeared twice with different sysfs paths /sys/devices/virtual/block/dm-5 and /sys/devices/virtual/block/dm-6
Apr 22 22:43:51 atomic05 kernel: XFS (dm-6): Mounting V4 Filesystem
Apr 22 22:43:51 atomic05 kernel: XFS (dm-6): Ending clean mount
Apr 22 22:43:51 atomic05 kernel: XFS (dm-6): Unmounting Filesystem
Apr 22 22:43:52 atomic05 systemd: Device dev-disk-by\x2duuid-d7f87303\x2d30f2\x2d402a\x2dbdb6\x2dd25140573d0e.device appeared twice with different sysfs paths /sys/devices/virtual/block/dm-5 and /sys/devices/virtual/block/dm-6
Apr 22 22:43:52 atomic05 kernel: XFS (dm-6): Mounting V4 Filesystem
Apr 22 22:43:52 atomic05 kernel: XFS (dm-6): Ending clean mount
Apr 22 22:43:52 atomic05 systemd: Started docker container 8de2c2765d39c71332f210846b37dd4ff994d8ac1e351c50a366ea9331f99b52.
Apr 22 22:43:52 atomic05 systemd: Starting docker container 8de2c2765d39c71332f210846b37dd4ff994d8ac1e351c50a366ea9331f99b52.
Apr 22 22:43:52 atomic05 systemd: Stopped docker container 8de2c2765d39c71332f210846b37dd4ff994d8ac1e351c50a366ea9331f99b52.
Apr 22 22:43:52 atomic05 systemd: Stopping docker container 8de2c2765d39c71332f210846b37dd4ff994d8ac1e351c50a366ea9331f99b52.
Apr 22 22:43:52 atomic05 kernel: XFS (dm-6): Unmounting Filesystem
Apr 22 22:44:00 atomic05 kubelet: I0422 22:44:00.940622 2346 manager.go:2038] Back-off 5m0s restarting failed container=wp-forkedblog pod=wp-forkedblog-ls0dm_default

This is causing a pod to fail to start, leaving it stuck in CrashLoopBackOff.

I built a fresh node and ran atomic host upgrade, but this problem still affects only one node in the cluster.

All hosts in the cluster are on version 7.20160404.
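For anyone hitting something similar, these are the commands I'd use to watch the loop and confirm the pod state (the pod name comes from the log above; swap in your own):

# on the affected node, tail the journal to watch the mount/unmount cycle
journalctl -f

# from the kubernetes master, check the pod status and recent events
kubectl get pods
kubectl describe pod wp-forkedblog-ls0dm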


1 answer


answered 2016-04-25 15:18:00 +0000 by jonnyx

I found the fix for this issue. It was related to SELinux and NFS:

/usr/sbin/setsebool -P virt_use_nfs 1

Once this command was run, the pod started successfully.
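For completeness, you can check the boolean before and after with getsebool (standard SELinux tooling, nothing Atomic-specific):

# check the current value of the SELinux boolean
getsebool virt_use_nfs

# enable it persistently (-P makes it survive reboots)
/usr/sbin/setsebool -P virt_use_nfs 1

# verify it is now on
getsebool virt_use_nfs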




Stats

Asked: 2016-04-23 02:53:03 +0000

Seen: 572 times

Last updated: Apr 25 '16