Sahina Bose
2018-Feb-05 05:38 UTC
[Gluster-users] [ovirt-users] VM paused due unknown storage error
Adding gluster-users.

On Wed, Jan 31, 2018 at 3:55 PM, Misak Khachatryan <kmisak at gmail.com> wrote:

> Hi,
>
> here is the output from virt3 - the problematic host:
>
> [root at virt3 ~]# gluster volume status
> Status of volume: data
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick virt1:/gluster/brick2/data            49152     0          Y       3536
> Brick virt2:/gluster/brick2/data            49152     0          Y       3557
> Brick virt3:/gluster/brick2/data            49152     0          Y       3523
> Self-heal Daemon on localhost               N/A       N/A        Y       32056
> Self-heal Daemon on virt2                   N/A       N/A        Y       29977
> Self-heal Daemon on virt1                   N/A       N/A        Y       1788
>
> Task Status of Volume data
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick virt1:/gluster/brick1/engine          49153     0          Y       3561
> Brick virt2:/gluster/brick1/engine          49153     0          Y       3570
> Brick virt3:/gluster/brick1/engine          49153     0          Y       3534
> Self-heal Daemon on localhost               N/A       N/A        Y       32056
> Self-heal Daemon on virt2                   N/A       N/A        Y       29977
> Self-heal Daemon on virt1                   N/A       N/A        Y       1788
>
> Task Status of Volume engine
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick virt1:/gluster/brick4/iso             49154     0          Y       3585
> Brick virt2:/gluster/brick4/iso             49154     0          Y       3592
> Brick virt3:/gluster/brick4/iso             49154     0          Y       3543
> Self-heal Daemon on localhost               N/A       N/A        Y       32056
> Self-heal Daemon on virt1                   N/A       N/A        Y       1788
> Self-heal Daemon on virt2                   N/A       N/A        Y       29977
>
> Task Status of Volume iso
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> and one of the logs.
>
> Thanks in advance
>
> Best regards,
> Misak Khachatryan
>
>
> On Wed, Jan 31, 2018 at 9:17 AM, Sahina Bose <sabose at redhat.com> wrote:
> > Could you provide the output of "gluster volume status" and the gluster
> > mount logs to check further?
> > Are all the hosts shown as active in the engine (that is, is the
> > monitoring working)?
> >
> > On Wed, Jan 31, 2018 at 1:07 AM, Misak Khachatryan <kmisak at gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> After the upgrade to 4.2 I'm getting "VM paused due unknown storage
> >> error". While upgrading I had a gluster problem with one of the
> >> hosts, which I fixed by re-adding it to the gluster peers. Now I see
> >> something weird in the brick configuration, see attachment - one of
> >> the bricks uses 0% of space.
> >>
> >> How can I diagnose this? Nothing wrong in the logs as far as I can see.
> >>
> >> Best regards,
> >> Misak Khachatryan
> >>
> >> _______________________________________________
> >> Users mailing list
> >> Users at ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
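[Editor's note: for readers hitting the same symptom, a minimal diagnostic sketch, not taken from the thread itself. It checks the suspect brick's contents, the self-heal queue, and split-brain state, and shows where the fuse mount logs requested above usually live. Volume names (data, engine, iso) and brick paths come from the status output in the thread; the oVirt mount-log filename pattern is an assumption based on typical oVirt/gluster deployments and may differ on your hosts.]

    # Run on the problematic gluster host (virt3 in this thread).

    # 1. Check whether the 0%-usage brick actually holds data -- a
    #    freshly re-added peer can come back with an empty brick that
    #    was never healed.
    df -h /gluster/brick2/data
    ls /gluster/brick2/data | head

    # 2. List files pending self-heal on each volume; a long or stuck
    #    list here often correlates with oVirt pausing VMs on storage
    #    errors.
    for vol in data engine iso; do
        gluster volume heal "$vol" info
    done

    # 3. Check explicitly for split-brain entries.
    gluster volume heal data info split-brain

    # 4. If entries are pending but not split-brained, trigger a heal;
    #    for a brick that came back empty, a full heal may be needed.
    gluster volume heal data
    gluster volume heal data full

    # 5. The gluster fuse mount logs live under /var/log/glusterfs/ on
    #    the hypervisors, named after the mount path (assumed pattern):
    tail -n 100 /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log

If step 1 shows the brick is empty while its replicas are not, the heal in step 4 is usually the fix; if step 3 reports split-brain entries, those need to be resolved per-file before healing can complete.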
Seemingly Similar Threads
- Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
- grep contents of file on remote server