Displaying 20 results from an estimated 143 matches for "vol1".
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
197T 61T 136T 31% /volumedisk1
[root at stor2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterf...
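A rough way to sanity-check those totals (a sketch only; hostnames and mount points are taken from the df output above, it assumes a GNU df new enough for --output, and the loop only covers the two hosts shown here, so extend it if the volume has bricks on more servers): the size of the distributed volume should be close to the sum of its brick filesystems.

for host in stor1data stor2data; do
    # size in bytes of the vol1 brick filesystem on each server
    ssh "$host" df -B1 --output=size /mnt/glusterfs/vol1 | tail -n 1
done | awk '{sum += $1} END {printf "bricks total: %.1f TiB\n", sum/2^40}'
# compare with what the fuse mount reports for the whole volume
df -B1 --output=size /volumedisk1 | tail -n 1 | awk '{printf "volume mount: %.1f TiB\n", $1/2^40}'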
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
....node1.X.rd
run.node2.X.rd
(X ranging from 0000 to infinity)
Curiously stor1data and stor2data maintain similar ratios in bytes:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdc1 52737613824 17079174264 35658439560 33%
/mnt/glusterfs/vol1 -> stor1data
/dev/sdc1 52737613824 17118810848 35618802976 33%
/mnt/glusterfs/vol1 -> stor2data
However, the ratio on stor3data differs by too much (about 1 TB):
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdc1 52737613824 154791...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...this bug and now df shows the right size:
>
> That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 101T 3,3T 97T 4% /volumedisk0
> stor1data:/volumedisk1
> 197T 61T 136T 31% /volumedisk1
>
>
> [root at stor2 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...ranging from 0000 to infinite )
>
> Curiously stor1data and stor2data maintain similar ratios in bytes:
>
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/sdc1 52737613824 17079174264 35658439560 33%
> /mnt/glusterfs/vol1 -> stor1data
> /dev/sdc1 52737613824 17118810848 35618802976 33%
> /mnt/glusterfs/vol1 -> stor2data
>
> However, the ratio on stor3data differs by too much (about 1 TB):
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/s...
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...r df shows a bad total size.
My configuration for one volume: volumedisk1
[root at stor1 ~]# gluster volume status volumedisk1 detail
Status of volume: volumedisk1
------------------------------------------------------------------------------
Brick : Brick stor1data:/mnt/glusterfs/vol1/brick1
TCP Port : 49153
RDMA Port : 0
Online : Y
Pid : 13579
File System : xfs
Device : /dev/sdc1
Mount Options : rw,noatime
Inode Size : 512
Disk Space Free : 35.0TB
Total Disk Space : 49.1TB
Ino...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...iously stor1data and stor2data maintain similar ratios in bytes:
>>
>> Filesystem 1K-blocks Used Available Use% Mounted on
>> /dev/sdc1 52737613824 17079174264 35658439560 33% /mnt/glusterfs/vol1 -> stor1data
>> /dev/sdc1 52737613824 17118810848 35618802976 33%
>> /mnt/glusterfs/vol1 -> stor2data
>>
>> However, the ratio on stor3data differs by too much (about 1 TB):
>> Filesystem 1K-blocks Used Available Use...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...My configuration for one volume: volumedisk1
> [root at stor1 ~]# gluster volume status volumedisk1 detail
>
> Status of volume: volumedisk1
> ------------------------------------------------------------------------------
> Brick : Brick stor1data:/mnt/glusterfs/vol1/brick1
> TCP Port : 49153
> RDMA Port : 0
> Online : Y
> Pid : 13579
> File System : xfs
> Device : /dev/sdc1
> Mount Options : rw,noatime
> Inode Size : 512
> Disk Space Free...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...t sequence of commands to add the new node was:
gluster peer probe stor3data
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b2/glusterfs/vol0
gluster volume add-brick volumedisk1 stor3data:/mnt/disk_c/glusterfs/vol1
gluster volume add-brick volumedisk1 stor3data:/mnt/disk_d/glusterfs/vol1
gluster volume rebalance volumedisk0 start force
gluster volume rebalance volumedisk1 start force
For some reason, could the assigned DHT range for the stor3data bricks be
unbalanced? Could it be smaller than stor1data and st...
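One way to look at that (a sketch; the brick paths are the ones quoted in these messages, and the xattr has to be read on each server itself): the DHT layout assigned to a brick's root directory is stored in the trusted.glusterfs.dht extended attribute, whose last two 32-bit words are roughly the start and end of the hash range.

# on stor1data
getfattr -n trusted.glusterfs.dht -e hex /mnt/glusterfs/vol1/brick1
# on stor3data
getfattr -n trusted.glusterfs.dht -e hex /mnt/disk_c/glusterfs/vol1
getfattr -n trusted.glusterfs.dht -e hex /mnt/disk_d/glusterfs/vol1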
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
...educated minds.
The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1: glt01:/vol/vol0
Brick2: glt02:/vol/vol0
Brick3: glt05:/vol/vol0 (arbiter)
Brick4: glt03:/vol/vol0
Brick5: glt04:/vol/vol0
Brick6: glt06:/vol/vol0 (arbiter)
Volume Name: vol1
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1: glt07:/vol/vol1
Brick2: glt08:/vol/vol1
Brick3: glt05:/vol/vol1 (arbiter)
Brick4: glt09:/vol/vol1
Brick5: glt10:/vol/vol1
Brick6: glt06:/vol/vol1 (arbiter)
After performing the upgrade because of differences in checksums, the...
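If the checksum differences show up as peers being rejected after such an upgrade (an assumption about what the truncated sentence refers to), a first check is usually to compare the volume checksum glusterd keeps on each node:

gluster peer status
# run on every node and compare the values
for v in vol0 vol1; do echo "== $v =="; cat /var/lib/glusterd/vols/$v/cksum; done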
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
...ce
match: /dev/zvol/dsk/d1000pool/biscotti-vols/*
There are 4 volumes in the dataset
[/]
root at tofu# zfs list -r d1000pool/biscotti-vols
NAME USED AVAIL REFER MOUNTPOINT
d1000pool/biscotti-vols 400M 197G 49K none
d1000pool/biscotti-vols/vol1 11.2M 197G 11.2M -
d1000pool/biscotti-vols/vol2 10.7M 197G 10.7M -
d1000pool/biscotti-vols/vol3 11.0M 197G 11.0M -
d1000pool/biscotti-vols/vol4 10.5M 197G 10.5M -
The volumes are mounted in the zone via the zone's vfstab
/dev/zvol/dsk/d1000pool/biscotti-vols/vol1...
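A quick way to see which device nodes (and which major/minor numbers) those links currently resolve to, before and after the renumbering (a sketch; run in the global zone):

ls -lL /dev/zvol/dsk/d1000pool/biscotti-vols/
ls -lL /dev/zvol/rdsk/d1000pool/biscotti-vols/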
2017 Jun 15
1
How to expand Replicated Volume
Hi Nag Pavan Chilakam
Can I use this command "gluster vol add-brick vol1 replica 2
file01g:/brick3/data/vol1 file02g:/brick4/data/vol1" on the existing file
servers 01 and 02, without adding new servers? Is that OK for expanding the volume?
Thanks for your support
Regards,
Giang
2017-06-14 22:26 GMT+07:00 Nag Pavan Chilakam <nag.chilakam at gmail.com>:
> Hi,
>...
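A sketch of the usual follow-up to that expansion (brick names as in the question; the rebalance spreads existing data onto the new replica pair):

gluster volume add-brick vol1 replica 2 file01g:/brick3/data/vol1 file02g:/brick4/data/vol1
gluster volume rebalance vol1 start
gluster volume rebalance vol1 status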
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
...ol0
> Distributed-Replicate
> Number of Bricks: 2 x (2 + 1) = 6
> Bricks:
> Brick1: glt01:/vol/vol0
> Brick2: glt02:/vol/vol0
> Brick3: glt05:/vol/vol0 (arbiter)
> Brick4: glt03:/vol/vol0
> Brick5: glt04:/vol/vol0
> Brick6: glt06:/vol/vol0 (arbiter)
>
> Volume Name: vol1
> Distributed-Replicate
> Number of Bricks: 2 x (2 + 1) = 6
> Bricks:
> Brick1: glt07:/vol/vol1
> Brick2: glt08:/vol/vol1
> Brick3: glt05:/vol/vol1 (arbiter)
> Brick4: glt09:/vol/vol1
> Brick5: glt10:/vol/vol1
> Brick6: glt06:/vol/vol1 (arbiter)
>
> After performing...
2003 Jun 04
1
rsync not overwriting files on destination
Hi,
I am rsyncing from my source server A to a destination server B.
A/vol1 contains two files syslog.txt and syslog.bak
B/vol1 contains five files syslog.txt, syslog.bak, initlog.txt,
internal.txt, and internal.bak.
I want to preserve the 5 files on B/vol1 when I do rsync from A to B.
Here is the command I use:
rsync -av --delete --exclude-from=EXCLUDEFILE A/ B
I'v...
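For the stated goal of keeping initlog.txt, internal.txt and internal.bak on B while still using --delete, one option (a sketch with the paths adjusted for illustration; it needs an rsync new enough to support --filter) is a protect rule, which shields files from deletion without excluding anything from the transfer:

rsync -av --delete \
      --filter='protect initlog.txt' \
      --filter='protect internal.txt' \
      --filter='protect internal.bak' \
      A/vol1/ B/vol1/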
2011 Jul 10
3
How create a FAT filesystem on a zvol?
The 'lofiadm' man page describes how to export a file as a block
device and then use 'mkfs -F pcfs' to create a FAT filesystem on it.
Can't I do the same thing by first creating a zvol and then creating
a FAT filesystem on it? Nothing I've tried seems to work. Isn't the
zvol just another block device?
--
-Gary Mills- -Unix Group-
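One combination worth trying (an untested sketch, not an answer from the thread; the pool and volume names are made up): build the filesystem on the raw zvol device and pass the geometry explicitly, since there is no fdisk label for pcfs to read. The size suboption is in 512-byte sectors, so a 512 MB zvol is 1048576 sectors.

zfs create -V 512m rpool/fatvol
mkfs -F pcfs -o fat=32,nofdisk,size=1048576 /dev/zvol/rdsk/rpool/fatvol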
2017 Jul 11
2
Public file share Samba 4.6.5
I am trying to configure a public file share on \\fs1\vol1
From a Windows 7 command prompt, I enter: dir \\fs1\vol1
Windows says: Logon failure: unknown user name or bad password.
Where am I going wrong?
Error log says: " SPNEGO login failed: NT_STATUS_NO_SUCH_USER" - that
must have something to do with this, but I thought that was the point...
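It probably is related: NT_STATUS_NO_SUCH_USER suggests the anonymous connection is not being mapped to the guest account. A minimal sketch of a guest share (share name and path assumed; written to a scratch file so testparm can check it rather than touching the live config):

cat > /tmp/smb-guest-example.conf <<'EOF'
[global]
    map to guest = Bad User

[vol1]
    path = /srv/vol1
    guest ok = yes
    read only = yes
EOF
testparm -s /tmp/smb-guest-example.conf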
2006 Aug 04
3
OCFS2 and ASM Question
...ystems on 2 nodes
/dev/sdb1 /u02/oradata/usdev/voting
/dev/sdc1 /u02/oradata/usdev/data01
/dev/sdd1 /u02/oradata/usdev/data02
/dev/sde1 /u02/oradata/usdev/data03
4.) Downloaded & installed ASMLib 2.0 on both nodes
5.) Ran /etc/init.d/oracleasm configure
6.) Ran /etc/init.d/oracleasm createdisk vol1 /dev/sdc1
Error: asmtool: unable to clear device /dev/sdc1
7.) Shutdown node 2
8.) Ran /etc/init.d/oracleasm createdisk vol1 /dev/sdc1
Error: asmtool: unable to clear device /dev/sdc1
9.) Unmounted OCFS2 filesystems
10.) Ran /etc/init.d/oracleasm createdisk vol1 /dev/sdc1
Success
11.) Ran /etc/in...
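The pattern above (createdisk failing while the OCFS2 filesystems were mounted and succeeding once they were unmounted) suggests the partition was still held open. A sketch of the checks to run on both nodes before createdisk:

mount | grep /dev/sdc1
# processes using the filesystem mounted from it, if any
fuser -vm /dev/sdc1
/etc/init.d/oracleasm listdisks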
2005 Apr 08
0
windows copy versus move
Running 3.0.4 on FreeBSD 5.2.1...
I have two directories...
drwxrwxr-x root data_current /usr/vol1/current
drwxrwx--- root data_current /usr/vol1/hold
If a file is in /usr/vol1/hold with the following
attributes...
-rwxrwx--- root data_hold file1
...and a user MOVES it to /usr/vol1/current, it
has the following attributes...
-rwxrwx--- root data_hold file1
...if the user COPIES (then...
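That difference is expected: a move within the same filesystem keeps the file's original owner, group and mode, while a copy creates a new file that picks up the target directory's settings. If both paths should end up looking the same, share-level options along these lines are one approach (an smb.conf sketch; the share name is assumed):

[current]
    path = /usr/vol1/current
    force group = data_current
    inherit permissions = yes
    create mask = 0775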
2013 Jan 03
0
Resolve brick failed in restore
Hi,
I have a lab with 10 machines acting as storage servers for some compute
machines, using glusterfs to distribute the data as two volumes.
Created using:
gluster volume create vol1 192.168.10.{221..230}:/data/vol1
gluster volume create vol2 replica 2 192.168.10.{221..230}:/data/vol2
and mounted on the client and server machines using:
mount -t glusterfs 192.168.10.221:/vol1 /mnt/vol1
mount -t glusterfs 192.168.10.221:/vol2 /mnt/vol2
Everything worked great for almost two mo...
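If the failure on restore is the common one where a recreated brick directory is missing its volume ID, a sketch of the usual fix (brick path from the create command above; the hex value is a placeholder to be copied from a healthy brick):

# on a node whose brick is fine
getfattr -n trusted.glusterfs.volume-id -e hex /data/vol1
# on the restored node, set the same value, then restart the volume
setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-above> /data/vol1
gluster volume start vol1 force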
2010 Dec 24
1
node crashing on 4 replicated-distributed cluster
Hi,
I've got trouble after a few minutes of glusterfs operation.
I setup a 4-node replica 4 storage, with 2 bricks on every server:
# gluster volume create vms replica 4 transport tcp
192.168.7.1:/srv/vol1 192.168.7.2:/srv/vol1 192.168.7.3:/srv/vol1
192.168.7.4:/srv/vol1 192.168.7.1:/srv/vol2 192.168.7.2:/srv/vol2
192.168.7.3:/srv/vol2 192.168.7.4:/srv/vol2
I started copying files with rsync from node1, and after a few minutes the
network traffic stalled.
Inspecting the brick logs on node4, I've...
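A sketch of the first checks in that situation (volume and brick names as in the create command; the brick log file name just follows the brick path with '/' replaced by '-'):

gluster volume status vms
gluster peer status
# on node4
tail -n 100 /var/log/glusterfs/bricks/srv-vol1.log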
2010 Apr 09
1
windows live mail + dovecot and nfs
Hi all. My clients have a problem accessing Dovecot with the "Windows Live
Mail (Windows 7)" client. The problem is that users can't delete two or
more folders one by one.
There are no errors deleting the first folder (a new dir
/storage/vol1/mail/domain/user/Maildir/..DOVECOT-TRASHED appeared and
wasn't deleted). If I try to delete a second one, an error occurs (user,
domain and IP address changed):
dovecot: IMAP(user at domain):
unlink_directory(/storage/vol1/mail/domain/user/Maildir/..DOVECOT-TRASHED)
failed: Directory not empty...
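Leftover ..DOVECOT-TRASHED directories and "Directory not empty" on NFS-backed maildirs can be related to attribute and data caching between hosts. The NFS-related settings Dovecot documents (available since 1.1; shown here as a dovecot.conf excerpt worth checking, not a guaranteed fix) are:

mmap_disable = yes
mail_nfs_storage = yes
mail_nfs_index = yes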