Displaying 20 results from an estimated 2000 matches similar to: "rsync not overwriting files on destination"
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
> That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, to add the new peer with its bricks I then ran the 'rebalance
> force' operation.
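A minimal sketch of the add-brick plus rebalance sequence described above, assuming the stor3data node name from this thread; the brick paths are made-up placeholders:
gluster peer probe stor3data
gluster volume add-brick volumedisk0 stor3data:/mnt/glusterfs/vol0/brick1 stor3data:/mnt/glusterfs/vol0/brick2
gluster volume add-brick volumedisk1 stor3data:/mnt/glusterfs/vol1/brick1 stor3data:/mnt/glusterfs/vol1/brick2
# the 'rebalance force' step, to spread existing data onto the new bricks
gluster volume rebalance volumedisk0 start force
gluster volume rebalance volumedisk1 start force
gluster volume rebalance volumedisk1 status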
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, to add the new peer with its bricks I then ran the 'rebalance
force' operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node        Rebalanced-files     size     scanned     failures     skipped     status     run time in h:m:s
---------   ----------------     ----     -------     --------     -------     ------     -----------------
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node        Rebalanced-files     size     scanned     failures     skipped     status     run time in h:m:s
---------   ----------------     ----     -------     --------     -------     ------     -----------------
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
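For anyone hitting the same thing, the check being asked for is just a grep on each node; what to look for in the output is sketched below (the expected value of 1 applies to bricks that sit on their own filesystem, as in this thread):
# run on stor1data, stor2data and stor3data
grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
# healthy bricks on separate filesystems should show
#   option shared-brick-count 1
# larger values make the client divide the brick size, shrinking the df total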
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
A few days ago my whole glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and is now
smaller than the aggregated capacity of all the bricks in the volume.
I checked that all volumes report a healthy status, all the glusterd daemons
are running and there are no errors in the logs; however, df still shows a wrong total size.
My configuration for one volume:
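Independent of the exact layout, the usual first checks for a df mismatch like this are roughly the following (a sketch; <volname> and the mount points are placeholders):
gluster volume status                                # every brick online, with a PID and port
gluster volume info <volname>                        # brick list and options as expected
df -h <brick-mountpoint>                             # per-brick filesystem size on each node
df -h <client-mountpoint>                            # aggregated size reported to clients
grep -n "share" /var/lib/glusterd/vols/<volname>/*   # shared-brick-count values, which can skew the totals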
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi,
I have a cluster of 10 servers all running Fedora 24 along with
Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
Gluster 3.12. I saw the documentation and did some testing but I
would like to run my plan through some (more?) educated minds.
The current setup is:
Volume Name: vol0
Type: Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1:
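A rough per-node sketch of the rolling-upgrade steps for a replicated setup like this; the dnf system-upgrade route and the package handling are assumptions, not the official Gluster procedure:
# one node at a time, while the other replicas stay online
systemctl stop glusterd
pkill glusterfs; pkill glusterfsd              # make sure no brick/client processes remain
dnf install dnf-plugin-system-upgrade
dnf system-upgrade download --releasever=27    # pulls the Fedora 27 packages, including Gluster 3.12
dnf system-upgrade reboot
gluster volume heal vol0 info                  # after reboot: wait for heals to finish before the next node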
2006 Jun 21
2
startup script for icecast
Hello,
I was wondering about the feasibility of including a startup script for
icecast for Red Hat/Fedora installs. I've had to do an RPM install on an FC4
box, and a source install on an RH9 machine where no RPMs could be found (yeah, I
know that's old). In both cases I had to drop in a custom-made startup
script, see below. I was wondering, especially in the case of the RPM, and
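The sort of startup script being described would look roughly like this on a Red Hat-style init system (a sketch only; the icecast binary and config paths are assumptions):
#!/bin/bash
# icecast - start/stop the Icecast streaming server
# chkconfig: 345 85 15
. /etc/rc.d/init.d/functions

ICECAST=/usr/bin/icecast
CONFIG=/etc/icecast.xml

case "$1" in
  start)
    echo -n "Starting icecast: "
    daemon $ICECAST -b -c $CONFIG
    echo
    ;;
  stop)
    echo -n "Stopping icecast: "
    killproc icecast
    echo
    ;;
  restart)
    $0 stop; $0 start
    ;;
  *)
    echo "Usage: icecast {start|stop|restart}"
    exit 1
    ;;
esac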
2017 Jun 15
1
How to expand Replicated Volume
Hi Nag Pavan Chilakam
Can I use this command "gluster vol add-brick vol1 replica 2
file01g:/brick3/data/vol1 file02g:/brick4/data/vol1" on the two existing file
servers 01 and 02, without adding new servers? Is that OK for expanding the volume?
Thanks for your support
Regards,
Giang
2017-06-14 22:26 GMT+07:00 Nag Pavan Chilakam <nag.chilakam at gmail.com>:
> Hi,
> You can use add-brick
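A hedged sketch of how that command is typically followed up when the new bricks live on the existing servers (the brick paths are the ones from the question; the rebalance step is standard practice, not something stated in this thread):
gluster volume add-brick vol1 replica 2 file01g:/brick3/data/vol1 file02g:/brick4/data/vol1
# then spread existing data over the new replica pair
gluster volume rebalance vol1 start
gluster volume rebalance vol1 status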
2006 Oct 31
3
zfs: zvols minor #''s changing and causing probs w/ volumes
Team,
**Please respond to me and my coworker listed in the Cc, since neither
one of us is on this alias**
QUICK PROBLEM DESCRIPTION:
The customer created a dataset which contains all the zvols for a particular
zone. The zone is then given access to all the zvols in the dataset
using a match statement in the zone config (see the long problem description
for details). After the initial boot of the zone
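The zonecfg match statement being referred to normally looks like this (a sketch; the zone, pool and dataset names are made up):
# zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/zvol/rdsk/mypool/zonevols/*
zonecfg:myzone:device> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit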
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a cluster of 10 servers all running Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but I
> would like to run my plan through some (more?) educated minds.
>
2001 Jan 22
3
Possible funny with /sbin/fsck
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Content-Type: text/plain; charset=us-ascii
Howdy - we have a bunch of dual processor Compaqs with 180GB RAID
partitions for email, running with ext2 for the last year or so. I
thought I'd try out ext3 (on our development machine :-) to see
whether it was a practical proposition for this kind of thing yet.
Appears to be working so far, with a
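For context, the in-place ext2-to-ext3 conversion being tried here usually amounts to adding a journal and updating fstab (a sketch; the device name is a placeholder):
tune2fs -j /dev/sda5        # add an ext3 journal to the existing ext2 filesystem
# edit the fstab entry for that partition from "ext2" to "ext3", then
umount /dev/sda5 && mount /dev/sda5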
2017 Jul 11
2
Public file share Samba 4.6.5
I am trying to configure a public file share on \\fs1\vol1
From a Windows 7 command prompt, I enter: dir \\fs1\vol1
Windows says: Logon failure: unknown user name or bad password.
Where am I going wrong?
The error log says: "SPNEGO login failed: NT_STATUS_NO_SUCH_USER" - that
must have something to do with this, but I thought that was the point of
"map to guest = Bad User".
2010 Dec 24
1
node crashing on 4 replicated-distributed cluster
Hi,
I run into trouble after a few minutes of glusterfs operation.
I set up a 4-node replica-4 storage volume, with 2 bricks on every server:
# gluster volume create vms replica 4 transport tcp
192.168.7.1:/srv/vol1 192.168.7.2:/srv/vol1 192.168.7.3:/srv/vol1
192.168.7.4:/srv/vol1 192.168.7.1:/srv/vol2 192.168.7.2:/srv/vol2
192.168.7.3:/srv/vol2 192.168.7.4:/srv/vol2
I started copying files with
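The client side of a setup like this is typically just a FUSE mount before the copy workload starts (a sketch; the mount point and source path are placeholders):
mount -t glusterfs 192.168.7.1:/vms /mnt/vms
cp -a /srv/images/. /mnt/vms/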
2006 Aug 04
3
OCFS2 and ASM Question
Ok guys & gals here is the scenario:
1.) Host RHEL 4 U3 2.6.9-34.0.2.EL
2.) OCFS2 latest version
3.) Successfully formatted & mounted OCFS2 filesystems on 2 nodes
/dev/sdb1 /u02/oradata/usdev/voting
/dev/sdc1 /u02/oradata/usdev/data01
/dev/sdd1 /u02/oradata/usdev/data02
/dev/sde1 /u02/oradata/usdev/data03
4.) Downloaded & installed ASMLib 2.0 on both nodes
5.) Ran
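For reference, marking partitions like these for ASM with ASMLib typically looks like the following sketch (the disk labels are made up; the devices follow the list above):
/etc/init.d/oracleasm configure                      # run once per node
/etc/init.d/oracleasm createdisk DATA01 /dev/sdc1    # on the first node only
/etc/init.d/oracleasm createdisk DATA02 /dev/sdd1
/etc/init.d/oracleasm createdisk DATA03 /dev/sde1
/etc/init.d/oracleasm scandisks                      # on the second node
/etc/init.d/oracleasm listdisks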
2008 Aug 07
1
Bug in format.default(): na.encode does not have any effect for (PR#12318)
Hi!
If I use format() on a numeric vector, the na.encode argument does not have
any effect. This was reported before:
- https://stat.ethz.ch/pipermail/r-help/2007-October/143881.html
- http://tolstoy.newcastle.edu.au/R/e2/devel/06/09/0360.html
It works for other (say character) classes!
> format(c("a", NA), na.encode=TRUE)
[1] "a " "NA"
>
2011 Feb 20
1
initlog is deprecated
Hello Centos,
I am getting an error that I am not familiar with when I restart ssh.
[root at virtcent01:~] #service sshd restart
Stopping sshd: [ OK ]
Starting sshd:WARNING: initlog is deprecated and will be removed in a
future release
[ OK ]
[root at virtcent01:~] #
I was just
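The warning is printed whenever something still calls /sbin/initlog, usually the service's init script or /etc/rc.d/init.d/functions; a quick way to locate the caller (a sketch):
grep -n initlog /etc/rc.d/init.d/sshd /etc/rc.d/init.d/functions /etc/sysconfig/init 2>/dev/null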
2018 Jan 18
2
Segfaults after upgrade to GlusterFS 3.10.9
Hi,
after upgrading to 3.10.9 I'm seeing ganesha.nfsd segfaulting all the time:
[12407.918249] ganesha.nfsd[38104]: segfault at 0 ip 00007f872425fb00 sp 00007f867cefe5d0 error 4 in libglusterfs.so.0.0.1[7f8724223000+f1000]
[12693.119259] ganesha.nfsd[3610]: segfault at 0 ip 00007f716d8f5b00 sp 00007f71367e15d0 error 4 in libglusterfs.so.0.0.1[7f716d8b9000+f1000]
[14531.582667]
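To turn those kernel messages into something the Gluster developers can use, the usual next step is a core dump and a backtrace with the matching debuginfo packages installed (a sketch; the binary and core file paths are assumptions):
ulimit -c unlimited                 # allow ganesha.nfsd to write a core on the next crash
# after reproducing the crash:
gdb -batch -ex "bt full" /usr/bin/ganesha.nfsd /path/to/core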