Displaying 20 results from an estimated 3000 matches similar to: "NT 4 Issue"
2005 Jun 22
0
NT 4 Issue (Summary)
Thanks to Wolfgang Ratzka.
The issue is that NT 4 will not map to a subdirectory (and with a mixed
NT and 2000 network I can't use SUBST).
Guess I will end up setting up a bunch of shares.
Thanks for the help.
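In smb.conf terms that workaround usually comes down to one share per subdirectory, roughly like this (share and path names here are hypothetical, not taken from the thread):

    [projects]
        path = /data/projects
        read only = no

    [projects-acct]
        path = /data/projects/acct
        read only = no

NT 4 clients can then map each share directly, e.g. net use P: \\server\projects-acct.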
--
*********************
Doug Hubbard - IT Manager
TrackMaster, an Equibase Company
email doug@trackmaster.com
Website www.trackmaster.com
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.
> [root@stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root@stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, after adding the new peer with its bricks I ran the 'rebalance
> force' operation.
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer with its bricks I ran the 'rebalance
force' operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
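For context, that kind of expansion normally boils down to the sequence below; the stor3data brick paths here are hypothetical, only the volume and peer names come from the thread:

    gluster peer probe stor3data
    gluster volume add-brick volumedisk1 stor3data:/bricks/vol1/b1 stor3data:/bricks/vol1/b2
    gluster volume rebalance volumedisk1 start force
    gluster volume rebalance volumedisk1 status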
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below is the output of both volumes:
[root@stor1t ~]# gluster volume rebalance volumedisk1 status
Node    Rebalanced-files    size    scanned    failures    skipped    status    run time in h:m:s
---------    -----------    -----------    -----------    -----------    -----------    ------------    --------------
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below is the output of both volumes:
[root@stor1t ~]# gluster volume rebalance volumedisk1 status
Node    Rebalanced-files    size    scanned    failures    skipped    status    run time in h:m:s
---------    -----------    -----------    -----------    -----------    -----------    ------------    --------------
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
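As a rough illustration of that check (the path assumes a default install), each brick volfile carries a shared-brick-count option; on nodes where every brick sits on its own filesystem the expected value is 1:

    grep -n "shared-brick-count" /var/lib/glusterd/vols/volumedisk1/*.vol
    # each brick volfile should contain a line like:  option shared-brick-count 1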
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
A few days ago my whole glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all volumes is fine, all the glusterd daemons
are running, and there are no errors in the logs; however, df shows a wrong total size.
My configuration for one volume:
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi,
I have a cluster of 10 servers all running Fedora 24 along with
Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
Gluster 3.12. I saw the documentation and did some testing but I
would like to run my plan through some (more?) educated minds.
The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1:
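For reference, the per-node loop of a rolling upgrade on a replicated volume is usually along these lines (a sketch only, reusing the vol0 name from above; the exact 3.8 -> 3.12 steps should be checked against the upgrade guide):

    systemctl stop glusterd
    pkill glusterfs; pkill glusterfsd     # stop client and brick processes on this node
    dnf upgrade 'glusterfs*'              # or the full Fedora 24 -> 27 system-upgrade
    systemctl start glusterd
    gluster volume heal vol0 info         # wait for pending heals to drain before the next node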
2017 Jun 15
1
How to expand Replicated Volume
Hi Nag Pavan Chilakam
Can I use this command, "gluster vol add-brick vol1 replica 2
file01g:/brick3/data/vol1 file02g:/brick4/data/vol1", on the existing file
servers 01 and 02 without adding new servers? Is that OK for expanding the volume?
Thanks for your support
Regards,
Giang
2017-06-14 22:26 GMT+07:00 Nag Pavan Chilakam <nag.chilakam at gmail.com>:
> Hi,
> You can use add-brick
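For what it's worth, that expansion usually ends up as exactly the command from the question followed by a rebalance so existing data spreads onto the new bricks (a sketch, assuming vol1 is already a replica 2 volume and the brick paths exist):

    gluster volume add-brick vol1 replica 2 file01g:/brick3/data/vol1 file02g:/brick4/data/vol1
    gluster volume rebalance vol1 start
    gluster volume rebalance vol1 status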
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team,
**Please respond to me and my coworker listed in the Cc, since neither
one of us is on this alias**
QUICK PROBLEM DESCRIPTION:
The customer created a dataset which contains all the zvols for a particular
zone. The zone is then given access to all the zvols in the dataset
using a match statement in the zoneconfig (see long problem description
for details). After the initial boot of the zone
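The 'match statement' mentioned above typically looks something like this in zonecfg (zone and dataset names are hypothetical):

    zonecfg -z myzone
    zonecfg:myzone> add device
    zonecfg:myzone:device> set match=/dev/zvol/rdsk/tank/zonevols/*
    zonecfg:myzone:device> end
    zonecfg:myzone> commit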
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a cluster of 10 servers all running Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but I
> would like to run my plan through some (more?) educated minds.
>
2003 Jun 04
1
rsync not overwriting files on destination
Hi,
I am rsyncing from my source server A to a destination server B.
A/vol1 contains two files syslog.txt and syslog.bak
B/vol1 contains five files syslog.txt, syslog.bak, initlog.txt,
internal.txt, and internal.bak.
I want to preserve the 5 files on B/vol1 when I do rsync from A to B.
Here is the command I use:
rsync -av --delete --exclude-from=EXCLUDEFILE A/ B
I've tried the option
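For reference, the usual way to keep receiver-only files while still using --delete is to list them in the exclude file, since excluded patterns are also protected from deletion unless --delete-excluded is given; a sketch using the filenames from the message:

    # EXCLUDEFILE
    initlog.txt
    internal.txt
    internal.bak

    rsync -av --delete --exclude-from=EXCLUDEFILE A/ B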
2017 Jul 11
2
Public file share Samba 4.6.5
I am trying to configure a public file share on \\fs1\vol1
From a Windows 7 command prompt, I enter: dir \\fs1\vol1
Windows says: Logon failure: unknown user name or bad password.
Where am I going wrong?
Error log says: "SPNEGO login failed: NT_STATUS_NO_SUCH_USER" - that
must have something to do with this, but I thought that was the point of
"map to guest = Bad User"
2010 Dec 24
1
node crashing on 4 replicated-distributed cluster
Hi,
I've run into trouble after a few minutes of glusterfs operation.
I set up a 4-node replica 4 volume, with 2 bricks on every server:
# gluster volume create vms replica 4 transport tcp
192.168.7.1:/srv/vol1 192.168.7.2:/srv/vol1 192.168.7.3:/srv/vol1
192.168.7.4:/srv/vol1 192.168.7.1:/srv/vol2 192.168.7.2:/srv/vol2
192.168.7.3:/srv/vol2 192.168.7.4:/srv/vol2
I started copying files with
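After a create like that, the volume still has to be started and mounted before copying; a minimal sketch (the mount point is hypothetical):

    gluster volume start vms
    mount -t glusterfs 192.168.7.1:/vms /mnt/vms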
2006 Aug 04
3
OCFS2 and ASM Question
Ok guys & gals here is the scenario:
1.) Host RHEL 4 U3 2.6.9-34.0.2.EL
2.) OCFS2 latest version
3.) Successfully formatted & mounted OCFS2 filesystems on 2 nodes
/dev/sdb1 /u02/oradata/usdev/voting
/dev/sdc1 /u02/oradata/usdev/data01
/dev/sdd1 /u02/oradata/usdev/data02
/dev/sde1 /u02/oradata/usdev/data03
4.) Downloaded & installed ASMLib 2.0 on both nodes
5.) Ran
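For reference, initializing ASMLib 2.0 and marking a partition for ASM on RHEL 4 usually looks like this (the device name /dev/sdf1 and disk label are hypothetical, deliberately not one of the OCFS2 partitions above):

    /etc/init.d/oracleasm configure
    /etc/init.d/oracleasm createdisk ASMVOL1 /dev/sdf1
    /etc/init.d/oracleasm listdisks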
2018 Jan 18
2
Segfaults after upgrade to GlusterFS 3.10.9
Hi,
after upgrading to 3.10.9 I'm seeing ganesha.nfsd segfaulting all the time:
[12407.918249] ganesha.nfsd[38104]: segfault at 0 ip 00007f872425fb00 sp 00007f867cefe5d0 error 4 in libglusterfs.so.0.0.1[7f8724223000+f1000]
[12693.119259] ganesha.nfsd[3610]: segfault at 0 ip 00007f716d8f5b00 sp 00007f71367e15d0 error 4 in libglusterfs.so.0.0.1[7f716d8b9000+f1000]
[14531.582667]
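A quick way to turn those kernel log lines into an actionable backtrace, assuming systemd-coredump is in use on the node:

    coredumpctl list ganesha.nfsd
    coredumpctl gdb ganesha.nfsd      # then run: bt full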
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I
need to make sure it stays up, or schedule some downtime if it won't.
Thanks.
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
> wrote:
>>
>> Hi,
>>
2009 Sep 10
3
zfs send of a cloned zvol
Hi,
I have a question: let's say I have a zvol named vol1 which is a clone of a snapshot of another zvol (its origin property is tank/myvol@mysnap).
If I send this zvol to a different zpool through a zfs send, does it send the origin too? That is, does an automatic promotion happen, or do I end up with a broken zvol?
Best regards.
Maurilio.
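For what it's worth, the two cases are usually distinguished like this (the pool name backup and snapshot name move are hypothetical): a plain full send of the clone produces a self-contained stream, while an incremental send from the origin snapshot only works if that snapshot already exists on the receiving pool:

    zfs snapshot tank/vol1@move
    zfs send tank/vol1@move | zfs recv backup/vol1                        # full, self-contained stream
    zfs send -i tank/myvol@mysnap tank/vol1@move | zfs recv backup/vol1   # incremental; needs mysnap on backup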