Displaying 20 results from an estimated 400 matches similar to: "Cannot remove Maildir folder"
2011 Mar 14
3
ideas on sorting
Hi, I have a character vector as below:
a<-c('10','3R','4','4R','5','5R','6','6R','7','8','9','7R','1','10R','11'
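The labels mix bare numbers with R-suffixed variants, so a plain lexical sort puts '10' before '3R'. The question is asked in R, but as a quick illustration of the natural ordering the poster most likely wants (each number followed by its R variant), GNU sort's version sort applied to the visible elements gives the same result:

printf '%s\n' 10 3R 4 4R 5 5R 6 6R 7 8 9 7R 1 10R 11 | sort -V
# prints, one per line: 1 3R 4 4R 5 5R 6 6R 7 7R 8 9 10 10R 11

In R the usual approach is the same idea: order on the numeric prefix first and the suffix second.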
2003 Oct 01
0
[releng_4 tinderbox] failure on i386/i386
TB --- 2003-10-02 04:44:07 - starting RELENG_4 tinderbox run for i386/i386
TB --- 2003-10-02 04:44:07 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/i386/i386
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-10-02 04:49:12 - building world
TB --- cd /home/des/tinderbox/RELENG_4/i386/i386/src
TB --- /usr/bin/make -B buildworld
>>>
2003 Jul 20
0
[-STABLE tinderbox] failure on i386/pc98
TB --- 2003-07-21 05:27:28 - starting RELENG_4 tinderbox run for i386/pc98
TB --- 2003-07-21 05:27:28 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/i386/pc98
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-07-21 05:33:30 - building world
TB --- cd /home/des/tinderbox/RELENG_4/i386/pc98/src
TB --- /usr/bin/make -B buildworld
>>>
2003 Oct 01
0
[releng_4 tinderbox] failure on alpha/alpha
TB --- 2003-10-02 04:00:01 - starting RELENG_4 tinderbox run for alpha/alpha
TB --- 2003-10-02 04:00:01 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/alpha/alpha
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-10-02 04:07:38 - building world
TB --- cd /home/des/tinderbox/RELENG_4/alpha/alpha/src
TB --- /usr/bin/make -B buildworld
>>>
2014 Jun 27
1
geo-replication status faulty
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'.
As Steve mentions, the nonexistent reference in the logs looks like the culprit, especially since the ssh command being run is printed on an earlier line with the incorrect remote path.
I have followed the configuration steps as documented in
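For anyone hitting the same '/nonexistent/gsyncd' symptom, one low-risk first step is to dump the session configuration and check which remote gsyncd path it carries. Volume and slave names below are placeholders, and the exact option name to override should be read from that dump rather than from this sketch:

gluster volume geo-replication mastervol slavehost::slavevol config | grep -i gsyncd
# if the remote gsyncd command points at /nonexistent/gsyncd, the commonly
# reported fix is to override it with the real path installed on the slave
# (often under /usr/libexec/glusterfs/) via the session's config command
gluster volume geo-replication mastervol slavehost::slavevol status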
2012 Apr 12
1
CentOS 6.2 anaconda bug?
I have a kickstart file with the following partitioning directives:
part /boot --fstype ext3 --onpart=sda1
part pv.100000 --onpart=sda2 --noformat
volgroup vol0 pv.100000 --noformat
logvol / --vgname=vol0 --name=lvol1 --useexisting --fstype=ext4
logvol /tmp --vgname=vol0 --name=lvol2 --useexisting --fstype=ext4
logvol swap --vgname=vol0 --name=lvol3 --useexisting
logvol /data --vgname=vol0
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi,
I have a cluster of 10 servers all running Fedora 24 along with
Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
Gluster 3.12. I saw the documentation and did some testing but I
would like to run my plan through some (more?) educated minds.
The current setup is:
Volume Name: vol0
Type: Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1:
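A rolling upgrade of this kind is normally done one server at a time, waiting for self-heal to finish before moving to the next node. A rough per-node sketch under that assumption (volume name vol0 taken from the output above; this is a sketch, not the official 3.8-to-3.12 upgrade procedure):

# on the node being upgraded
systemctl stop glusterd
pkill glusterfs                      # also stops the glusterfsd brick processes
# ... upgrade the OS and gluster packages, reboot ...
systemctl start glusterd
# before moving to the next node
gluster peer status                  # all peers connected again?
gluster volume heal vol0 info        # wait until no entries are pending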
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a cluster of 10 servers all running Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but I
> would like to run my plan through some (more?) educated minds.
>
2013 Jul 10
1
Re: Libvirt and Glusterfs
On 07/09/2013 08:18 PM, Olivier Mauras wrote:
> On 2013-07-09 09:40, Vijay Bellur wrote:
>
>>> Hi, I'm trying to use qemu native glusterfs integration with libvirt.
>>> It's all working well from the qemu side, but libvirt fails to start
>>> a domain with a gluster drive or attach a drive. I have exactly the
>>> same error as this person:
2013 Jul 07
0
Libvirt and Glusterfs
Hi,
I'm trying to use qemu native glusterfs integration with
libvirt. It's all working well from the qemu side, but libvirt fails to
start a domain with a gluster drive or attach a drive.
I have exactly
the same error as this person:
https://www.redhat.com/archives/libvirt-users/2013-April/msg00204.html
I use qemu 1.5.1 with glusterfs 3.4 beta 4 and libvirt
1.0.6.
[root@bbox ~]#
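For context, a gluster-backed disk in libvirt is described as a network disk with protocol 'gluster'. The sketch below uses placeholder host, volume, image and domain names and gluster's default port, so it only shows the shape of what is being attached, not this poster's actual configuration:

# qemu side (reported working): access the image over gluster directly
qemu-img info gluster://gluster1/volname/guest.raw
# libvirt side: define the disk and attach it to the domain's persistent config
cat > gluster-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='gluster' name='volname/guest.raw'>
    <host name='gluster1' port='24007'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
virsh attach-device guestvm gluster-disk.xml --config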
2002 Feb 22
1
Head & Rotor VE(CHINA-LuTong) 2/23
Dear Sir,
My name is ChenHua, and I'm writing on behalf of the
China-Lutong mechanical company. Located in the south east
of China, we specialize in hydraulic heads for the VE
distributor pump.
We can supply standard, good quality units at a very
competitive price. The following types are available:
Engine model VE PUMS code NO UNIT PRICE(EX WORKS)
ISUZU) NP-VE4/11L
2013 Jul 09
0
Re: Libvirt and Glusterfs
On 2013-07-09 09:40, Vijay Bellur wrote:
>> Hi, I'm trying to use
qemu native glusterfs integration with libvirt. It's all working well
from the qemu side, but libvirt fails to start a domain with a gluster
drive or attach a drive. I have exactly the same error as this person:
https://www.redhat.com/archives/libvirt-users/2013-April/msg00204.html
I use qemu 1.5.1 with glusterfs
2013 Jul 09
2
Re: Libvirt and Glusterfs
> Hi,
>
> I'm trying to use qemu native glusterfs integration with libvirt. It's
> all working well from the qemu side, but libvirt fails to start a domain
> with a gluster drive or attach a drive.
> I have exactly the same error as this person:
> https://www.redhat.com/archives/libvirt-users/2013-April/msg00204.html
>
> I use qemu 1.5.1 with glusterfs 3.4 beta 4
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, then to add the new peer with the bricks I did the 'balance
> force' operation.
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
> That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1
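When chasing this kind of df discrepancy it helps to put the brick filesystems and the client view side by side. A small sketch with placeholder mount points (the real brick paths are listed in the volume info):

gluster volume info volumedisk1 | grep -E '^Type|^Brick'   # bricks behind the volume
df -h /mnt/glusterfs/vol0 /mnt/glusterfs/vol1              # brick filesystems on this node
df -h /mnt/volumedisk1                                     # the fuse mount as clients see it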
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, then to add the new peer with the bricks I did the 'balance
force' operation. This task finished successfully (you can see info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I