search for: 41b4

Displaying 8 results from an estimated 8 matches for "41b4".

2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
...1data.master2.opt-tier1data2019-brick -p /var/run/gluster/vols/tier1data/master2-opt-tier1data2019-brick.pid -S /var/run/gluster/97da28e3d5c23317.socket --brick-name /opt/tier1data2019/brick -l /var/log/glusterfs/bricks/opt-tier1data2019-brick.log --xlator-option *-posix.glusterd-uuid=c1591bde-df1c-41b4-8cc3-5eaa02c5b89d --process-name brick --brick-port 49152 --xlator-option tier1data-server.listen-port=49152
root 2710196 0.0 0.0 1298116 11544 ? Ssl Jan27 0:01 /usr/sbin/glusterfs -s localhost --volfile-id shd/tier1data -p /var/run/gluster/shd/tier1data/tier1data-shd.pid -l /var/log...
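The ps output above shows a glusterfsd brick process and a glusterfs self-heal daemon still running after the graceful shutdown. A minimal sketch for spotting and stopping the leftovers, assuming standard process names (packaged installs may also ship a stop-all-gluster-processes.sh helper):

    # list any gluster processes still alive after stopping glusterd
    pgrep -af gluster

    # stop the brick and self-heal daemons explicitly; verify the
    # process names on your system before killing anything
    pkill glusterfsd
    pkill -f "glusterfs.*shd"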
2018 Feb 27
2
Quorum in distributed-replicate volume
...ation change,
> and since it is a live system you may end up in data unavailability or data
> loss.
> Can you give the output of "gluster volume info <volname>"
> and which brick is of what size.

Volume Name: palantir
Type: Distributed-Replicate
Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: saruman:/var/local/brick0/data
Brick2: gandalf:/var/local/brick0/data
Brick3: azathoth:/var/local/brick0/data
Brick4: yog-sothoth:/var/local/brick0/data
Brick5: cthulhu:/var/local/bri...
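The output above is what the quoted reply asked for. A sketch of the commands involved, using the volume name from the excerpt:

    # volume layout, replica counts and brick list
    gluster volume info palantir

    # per-brick status including free/total disk space
    gluster volume status palantir detail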
2016 Jul 26
0
Re: How can I run a command in containers on the host?
...2016 at 05:19:22PM +0800, John Y. wrote:
> Hi Daniel,
>
> I forgot to tell you that I am using mips64 fedora. Does it have any effect
> on this case?
>
> 2016-07-26 09:05:59.634+0000: 16406: debug : virDomainLxcEnterNamespace:131
> : dom=0xaaad4067c0, (VM: name=fedora2,
> uuid=42b97e4d-54dc-41b4-b009-2321a1477a9a), nfdlist=0, fdlist=0xaaad4007c0,
> noldfdlist=(nil), oldfdlist=(nil), flags=0
> libvirt: error : Expected at least one file descriptor
> error: internal error: Child process (16406) unexpected exit status 125

This shows nfdlist=0, which means that virDomainLxcOpen...
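The debug line shows nfdlist=0, i.e. no namespace file descriptors were handed to the child process. A hedged way to check whether the container init's namespaces are even visible on the host; the PID placeholder must be filled in (for example from the children of the libvirt_lxc controller process):

    # the namespace entries libvirt needs to open and pass to the child
    ls -l /proc/<container-init-pid>/ns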
2016 Jul 26
2
How can I run a command in containers on the host?
How can I run a command in containers on the host, just like the lxc command lxc-attach? I run:

virsh -c lxc:/// lxc-enter-namespace fedora2 --noseclabel /bin/ls

but get this error:

libvirt: error : Expected at least one file descriptor
error: internal error: Child process (14930) unexpected exit status 125

Here is my libvirt.xml:

<domain type='lxc'>
<name>fedora2</name>
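When lxc-enter-namespace fails like this, a common workaround (not from this thread, just a sketch) is nsenter against the container's init process; the PID is a placeholder:

    # enter the container's mount, UTS, IPC, network and PID namespaces
    nsenter -t <container-init-pid> -m -u -i -n -p /bin/ls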
2018 Feb 27
0
Quorum in distributed-replicate volume
...ve system you may end up in data unavailability or data
> > loss.
> > Can you give the output of "gluster volume info <volname>"
> > and which brick is of what size.
>
> Volume Name: palantir
> Type: Distributed-Replicate
> Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: saruman:/var/local/brick0/data
> Brick2: gandalf:/var/local/brick0/data
> Brick3: azathoth:/var/local/brick0/data
> Brick4: yog-sothoth:/var/loc...
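For reference, a sketch of the quorum options this thread is about; the option names are the standard Gluster ones, with values shown purely for illustration:

    # client-side quorum: "auto" requires more than half of each replica
    # set to be up (the first brick breaks ties in replica 2)
    gluster volume set palantir cluster.quorum-type auto

    # server-side quorum, enforced by glusterd across the cluster
    gluster volume set palantir cluster.server-quorum-type server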
2019 Nov 17
23
[PATCH 00/18] rhv-upload: Various fixes and cleanups
This series extracts oVirt SDK and imageio code to make it easier to follow the code, and improves error handling in open() and close(). The first small patches can be considered fixes for downstream. Tested based on libguestfs v1.41.5, since I had trouble building virt-v2v and libguestfs from master.

Nir Soffer (18):
  rhv-upload: Remove unused exception class
  rhv-upload: Check status more
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
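The quoted passage describes the replica 2 limitation: with client-quorum set to auto, the first brick becomes a hard dependency. A sketch of the usual way out, converting to replica 3 with arbiters so quorum is a real majority; the arbiter hosts and paths here are hypothetical:

    # one arbiter brick per replica set on a 3 x 2 volume
    gluster volume add-brick palantir replica 3 arbiter 1 \
        arb1:/var/local/arbiter/data \
        arb2:/var/local/arbiter/data \
        arb3:/var/local/arbiter/data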
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
>
> By default client-quorum is
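The excerpt cuts off before the default value; a quick way to see what a given volume actually uses is to query its effective options, sketched here with the volume name from this thread:

    # show the effective quorum-related settings
    gluster volume get palantir all | grep -i quorum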