Displaying 20 results from an estimated 62 matches for "vol0".
2003 Aug 09
18
[releng_4 tinderbox] failure on i386/i386
TB --- 2003-08-09 16:00:11 - starting RELENG_4 tinderbox run for i386/i386
TB --- 2003-08-09 16:00:11 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/i386/i386
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-08-09 16:00:11 - /usr/bin/cvs returned exit code 1
TB --- 2003-08-09 16:00:11 - ERROR: unable to check out the source tree
TB ---
2003 Aug 09
13
[releng_4 tinderbox] failure on i386/pc98
TB --- 2003-08-09 16:00:12 - starting RELENG_4 tinderbox run for i386/pc98
TB --- 2003-08-09 16:00:12 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/i386/pc98
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-08-09 16:00:12 - /usr/bin/cvs returned exit code 1
TB --- 2003-08-09 16:00:12 - ERROR: unable to check out the source tree
TB ---
2003 Aug 09
28
[releng_4 tinderbox] failure on alpha/alpha
TB --- 2003-08-09 16:00:11 - starting RELENG_4 tinderbox run for alpha/alpha
TB --- 2003-08-09 16:00:11 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/alpha/alpha
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-08-09 16:00:11 - /usr/bin/cvs returned exit code 1
TB --- 2003-08-09 16:00:11 - ERROR: unable to check out the source tree
TB ---
2003 Oct 01
0
[releng_4 tinderbox] failure on i386/i386
...ary build tree
>>> stage 1: bootstrap tools
>>> stage 2: cleaning up the object tree
>>> stage 2: rebuilding the object tree
>>> stage 2: build tools
>>> stage 3: cross tools
>>> stage 4: populating /home/des/tinderbox/RELENG_4/i386/i386/obj/vol/vol0/users/des/tinderbox/RELENG_4/i386/i386/src/i386/usr/include
>>> stage 4: building libraries
>>> stage 4: make dependencies
>>> stage 4: building everything..
TB --- 2003-10-02 05:16:57 - building generic kernel
TB --- cd /home/des/tinderbox/RELENG_4/i386/i386/src
TB --- /...
2003 Jul 20
0
[-STABLE tinderbox] failure on i386/pc98
...uild tree
>>> stage 1: bootstrap tools
>>> stage 2: cleaning up the object tree
>>> stage 2: rebuilding the object tree
>>> stage 2: build tools
>>> stage 3: cross tools
>>> stage 4: populating /home/des/tinderbox/RELENG_4/i386/pc98/obj/pc98/vol/vol0/users/des/tinderbox/RELENG_4/i386/pc98/src/i386/usr/include
>>> stage 4: building libraries
>>> stage 4: make dependencies
>>> stage 4: building everything..
TB --- 2003-07-21 06:02:33 - building generic kernel
TB --- cd /home/des/tinderbox/RELENG_4/i386/pc98/src
TB --- /...
2003 Oct 01
0
[releng_4 tinderbox] failure on alpha/alpha
...d tree
>>> stage 1: bootstrap tools
>>> stage 2: cleaning up the object tree
>>> stage 2: rebuilding the object tree
>>> stage 2: build tools
>>> stage 3: cross tools
>>> stage 4: populating /home/des/tinderbox/RELENG_4/alpha/alpha/obj/alpha/vol/vol0/users/des/tinderbox/RELENG_4/alpha/alpha/src/i386/usr/include
>>> stage 4: building libraries
>>> stage 4: make dependencies
>>> stage 4: building everything..
TB --- 2003-10-02 04:41:55 - building generic kernel
TB --- cd /home/des/tinderbox/RELENG_4/alpha/alpha/src
TB -...
2012 Apr 12
1
CentOS 6.2 anaconda bug?
I have a kickstart file with the following partitioning directives:
part /boot --fstype ext3 --onpart=sda1
part pv.100000 --onpart=sda2 --noformat
volgroup vol0 pv.100000 --noformat
logvol / --vgname=vol0 --name=lvol1 --useexisting --fstype=ext4
logvol /tmp --vgname=vol0 --name=lvol2 --useexisting --fstype=ext4
logvol swap --vgname=vol0 --name=lvol3 --useexisting
logvol /data --vgname=vol0 --name=lvol4 --noformat
The purpose is reinstalling a machine wit...
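A hedged aside, not from the thread: kickstart directives like the ones above can be syntax-checked before an install with pykickstart's ksvalidator (the file name below is a placeholder):
    # report unknown options and bad syntax in the kickstart file
    ksvalidator ks.cfg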
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
...e a cluster of 10 servers all running Fedora 24 along with
Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
Gluster 3.12. I saw the documentation and did some testing but I
would like to run my plan through some (more?) educated minds.
The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1: glt01:/vol/vol0
Brick2: glt02:/vol/vol0
Brick3: glt05:/vol/vol0 (arbiter)
Brick4: glt03:/vol/vol0
Brick5: glt04:/vol/vol0
Brick6: glt06:/vol/vol0 (arbiter)
Volume Name: vol1
Distributed-Replicate
Number of Bricks: 2 x (2 + 1)...
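A minimal per-server sketch of the rolling-upgrade step described above, assuming systemd service names, dnf, and that healing is checked before moving to the next server; volume names are taken from the listing:
    # on one server at a time
    systemctl stop glusterd
    pkill glusterfsd; pkill glusterfs      # stop brick and auxiliary processes
    dnf upgrade 'glusterfs*'               # or the full Fedora 27 upgrade
    systemctl start glusterd
    gluster volume heal vol0 info          # wait until no entries are pending
    gluster volume heal vol1 info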
2014 Jun 27
1
geo-replication status faulty
...cation log grab #
[2014-06-26 17:09:08.794359] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2014-06-26 17:09:08.795387] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2014-06-26 17:09:09.358588] I [gsyncd(/data/glusterfs/vol0/brick0/brick):532:main_i] <top>: syncing: gluster://localhost:gluster_vol0 -> ssh://root at node003:gluster://localhost:gluster_vol1
[2014-06-26 17:09:09.537219] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2014-06-26 17:09:09.5400...
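For a faulty session like the one logged above, a hedged first check is the geo-replication status command; the master volume, slave host, and slave volume names are taken from the log line, and "detail" is optional:
    gluster volume geo-replication gluster_vol0 node003::gluster_vol1 status detail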
2013 Jul 09
2
Re: Libvirt and Glusterfs
...a 4 and libvirt 1.0.6.
>
> [root bbox ~]# virsh start test
> error: Failed to start domain test
> error: internal error process exited while connecting to monitor: char
> device redirected to /dev/pts/3 (label charserial0)
> qemu-system-x86_64: -drive
> file=gluster://127.0.0.1/vol0/test0.img,if=none,id=drive-virtio-disk1,format=raw:
> Gluster connection failed for server=127.0.0.1 port=0 volume=vol0
> image=test0.img transport=tcp
> qemu-system-x86_64: -drive
> file=gluster://127.0.0.1/vol0/test0.img,if=none,id=drive-virtio-disk1,format=raw:
> could not open di...
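Two hedged diagnostics for a "Gluster connection failed" error like the one above: confirm the volume is reachable from the host, and try opening the image with qemu-img directly. The last line is an assumption, a setting commonly suggested for qemu/libvirt gluster access; verify before applying it:
    gluster volume status vol0                          # bricks online, ports listed
    qemu-img info gluster://127.0.0.1/vol0/test0.img    # can qemu itself open the image?
    gluster volume set vol0 server.allow-insecure on    # assumption: may be needed for libvirt clients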
2009 Jan 29
1
7.1, mpt and slow writes
Hello,
I think this needs a few more eyes:
http://lists.freebsd.org/pipermail/freebsd-scsi/2009-January/003782.html
In short, writes are slow, likely due to the write-cache being enabled on
the controller. The sysctls used in 6.x to turn the cache off don't seem
to be in 7.x.
Thanks,
Charles
___
Charles Sprickman
NetEng/SysAdmin
Bway.net - New York's Best Internet - www.bway.net
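A hedged sketch for the cache question above: camcontrol can show and edit the drive-level caching mode page (the WCE bit). This targets the disk rather than the controller cache, so it may or may not apply here, and da0 is a placeholder device:
    camcontrol modepage da0 -m 8        # display the caching page, including WCE
    camcontrol modepage da0 -m 8 -e     # open the page in an editor to change WCE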
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
...ning Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but I
> would like to run my plan through some (more?) educated minds.
>
> The current setup is:
>
> Volume Name: vol0
> Distributed-Replicate
> Number of Bricks: 2 x (2 + 1) = 6
> Bricks:
> Brick1: glt01:/vol/vol0
> Brick2: glt02:/vol/vol0
> Brick3: glt05:/vol/vol0 (arbiter)
> Brick4: glt03:/vol/vol0
> Brick5: glt04:/vol/vol0
> Brick6: glt06:/vol/vol0 (arbiter)
>
> Volume Name: vol...
2013 Jul 10
1
Re: Libvirt and Glusterfs
...>>> bbox ~]# virsh start test error: Failed to start domain test error:
>>> internal error process exited while connecting to monitor: char
>>> device redirected to /dev/pts/3 (label charserial0)
>>> qemu-system-x86_64: -drive
>>> file=gluster://127.0.0.1/vol0/test0.img,if=none,id=drive-virtio-disk1,format=raw:
>>> Gluster connection failed for server=127.0.0.1 port=0 volume=vol0
>>> image=test0.img transport=tcp qemu-system-x86_64: -drive
>>> file=gluster://127.0.0.1/vol0/test0.img,if=none,id=drive-virtio-disk1,format=raw:
>...
2006 May 30
1
Cannot remove Maildir folder
...s removed but I get additional ..DOVECOT-TRASHED directory -
next time it won't be possible to remove it
Some more info:
* Just before removal I run lsof to see which files are in use:
bash# lsof /home/marcin
...
imap 21360 marcin cwd DIR 0,15 344064 3307008 /home/marcin (filer:/vol/vol0/home/marcin)
imap 21360 marcin 8u DIR 0,15 4096 4223651 /home/marcin/Maildir/.Trash.test/new (filer:/vol/vol0/home/marcin)
imap 21360 marcin 9u DIR 0,15 4096 4223650 /home/marcin/Maildir/.Trash.test/cur (filer:/vol/vol0/home/marcin)
imap 21360 marcin 11r REG 0,15...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...skipped: 0
Checking my logs, the new stor3data node was added and the rebalance task was executed on
2018-02-10. From that date to now I have been storing new files.
The sequence of commands to add the node was:
gluster peer probe stor3data
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0
2018-03-01 6:32 GMT+01:00 Nithya Balachandran <nbalacha at redhat.com>:
> Hi Jose,
>
> On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
>
>> Hi Nithya,
>>...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...account the algorithm of
DHT with filename patterns)
Thanks,
Greetings.
Jose V.
Status of volume: volumedisk0
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick stor1data:/mnt/glusterfs/vol0/brick1          49152     0          Y       13533
Brick stor2data:/mnt/glusterfs/vol0/brick1          49152     0          Y       13302
Brick stor3data:/mnt/disk_b1/glusterfs/vol0/brick1  49152     0          Y       17...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0  101T  3,3T   97T   4% /volumedisk0
stor1data:/volumedisk1  197T   61T  136T  31% /volumedisk1
[root at stor2 ~]# df -h
Filesystem Size Used Avail Use% Moun...
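A hedged pointer, assuming the "workaround" mentioned above refers to the shared-brick-count accounting issue seen on 3.12 when several bricks share one filesystem: the value can be inspected in glusterd's brick info files (the path below is the usual default location):
    grep -R shared-brick-count /var/lib/glusterd/vols/volumedisk0/bricks/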
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...Checking my logs, the new stor3data node was added and the rebalance task was executed on
2018-02-10. From that date to now I have been storing new files.
The exact sequence of commands to add the new node was:
gluster peer probe stor3data
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b2/glusterfs/vol0
gluster volume add-brick volumedisk1 stor3data:/mnt/disk_c/glusterfs/vol1
gluster volume add-brick volumedisk1 stor3data:/mnt/disk_d/glusterfs/vol1
gluster volume rebalance volumedisk0 start force
gluster volume rebalan...
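A short hedged follow-up to the sequence above: after add-brick and rebalance, these read-only checks confirm the new bricks are part of the volume and the rebalance has finished:
    gluster volume info volumedisk0              # brick count should include stor3data
    gluster volume rebalance volumedisk0 status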
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...> Thanks,
> Greetings.
>
> Jose V.
>
> Status of volume: volumedisk0
> Gluster process                                     TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick stor1data:/mnt/glusterfs/vol0/brick1          49152     0          Y       13533
> Brick stor2data:/mnt/glusterfs/vol0/brick1          49152     0          Y       13302
> Brick stor3data:/mnt/disk_b1/glusterfs/vol0/brick1...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
> That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
> stor1data:/volumedisk0  101T  3,3T   97T   4% /volumedisk0
> stor1data:/volumedisk1  197T   61T  136T  31% /volumedisk1
>
>
> [root at stor2 ~]# df -h
> Files...