Displaying 4 results from an estimated 4 matches for "rhev_vms".
2018 May 30
2
RDMA inline threshold?
Forgot to mention, sometimes I have to force start other volumes as
well; it's hard to determine which brick process is locked up from the logs.
Status of volume: rhev_vms_primary
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary
                                                           0         49157      Y       15666
Brick deadpool.ib.runlevelone.lan:/gl...
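For reference, the force start the message describes uses the standard gluster CLI; a minimal sketch, assuming the rhev_vms_primary volume name shown in the status output above:

# Check which brick processes are online (Online column Y/N) for the volume
gluster volume status rhev_vms_primary
# Force start the volume; this respawns any brick processes that are not running
gluster volume start rhev_vms_primary force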
2018 May 30
0
RDMA inline threshold?
...ect result.
best wishes,
Stefan
> On 30.05.2018 at 03:00, Dan Lavu <dan at redhat.com> wrote:
>
> Forgot to mention, sometimes I have to force start other volumes as well; it's hard to determine which brick process is locked up from the logs.
>
>
> Status of volume: rhev_vms_primary
> Gluster process                                            TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary...
2018 May 30
0
RDMA inline threshold?
Stefan,
Sounds like a brick process is not running. I have noticed some strangeness
in my lab when using RDMA; I often have to forcibly restart the brick
process, as in every single time I do a major operation: add a new
volume, remove a volume, stop a volume, etc.
gluster volume status <vol>
Do any of the self-heal daemons show N/A? If that's the case, try forcing
a restart on
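A minimal sketch of the check being suggested here, again assuming the rhev_vms_primary volume name from elsewhere in the thread:

# Self-heal Daemon entries showing N/A for the port or N under Online indicate the daemon is not running
gluster volume status rhev_vms_primary | grep -i "self-heal"
# If a daemon or brick is down, a force start brings the missing processes back up
gluster volume start rhev_vms_primary force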
2018 May 29
2
RDMA inline threshold?
Dear all,
I faced a problem with a glusterfs volume (pure distributed, _not_ dispersed) over RDMA transport. One user had a directory with a large number of files (50,000 files), and just doing an "ls" in this directory yields a "Transport endpoint not connected" error. The effect is that "ls" only shows some files, but not all.
The respective log file shows this
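For anyone reproducing this, one quick check is whether the volume is actually configured for RDMA transport; the volume name below is only illustrative, borrowed from the other messages in the thread:

# The Transport-type line shows whether the volume uses tcp, rdma, or tcp,rdma
gluster volume info rhev_vms_primary | grep -i transport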