Displaying 20 results from an estimated 76 matches for "testvol".
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with libvirt libgfapi access
...ess storage using
libgfapi are no longer able to start. The libvirt log file shows:
[2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify]
0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up
[2016-11-02 14:26:41.864075] I [MSGID: 114020] [client.c:2356:notify]
0-testvol-client-0: parent translators are ready, attempting connect on
transport
[2016-11-02 14:26:41.882975] I [rpc-clnt.c:1947:rpc_clnt_reconfig]
0-testvol-client-0: changing port to 49152 (from 0)
[2016-11-02 14:26:41.889362] I [MSGID: 114057]
[client-handshake.c:1446:select_server_supported_programs]
0-...
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika,
I also did one more test. I re-created another volume (a single volume; the old one was destroyed and deleted), then ran two dd tests, one for 1GB and one for 2GB, both with a 32MB shard size and eager-lock off.
Samples:
sr:~# gluster volume profile testvol start
Starting volume profile on testvol has been successful
sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.2708 s, 87.5 MB/s
sr:~# gluster volume profile testvol info > /32mb_shard_and_1gb_dd.log
sr...
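As a quick sanity check on the dd numbers in the excerpt above, the reported throughput is just bytes divided by elapsed seconds, in decimal megabytes (a worked check, not part of the original post):

```python
# dd copied 1073741824 bytes in 12.2708 s and reported 87.5 MB/s;
# confirm that the rate is bytes / seconds in decimal megabytes.
bytes_copied = 1_073_741_824   # 1 GiB
elapsed_s = 12.2708
rate_mb_s = bytes_copied / elapsed_s / 1e6
print(f"{rate_mb_s:.1f} MB/s")  # 87.5 MB/s
```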
2017 Jul 10
0
Very slow performance on Sharded GlusterFS
...users] Very slow performance on Sharded GlusterFS
Hi Krutika,
I also did one more test. I re-created another volume (a single volume; the old one was destroyed and deleted), then ran two dd tests, one for 1GB and one for 2GB, both with a 32MB shard size and eager-lock off.
Samples:
sr:~# gluster volume profile testvol start
Starting volume profile on testvol has been successful
sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.2708 s, 87.5 MB/s
sr:~# gluster volume profile testvol info > /32mb_shard_and_1gb_dd.log
sr...
2017 Jul 12
1
Very slow performance on Sharded GlusterFS
...rutika,
>
> I also did one more test. I re-created another volume (single volume. Old
> one destroyed-deleted) then do 2 dd tests. One for 1GB other for 2GB. Both
> are 32MB shard and eager-lock off.
>
> Samples:
>
> sr:~# gluster volume profile testvol start
> Starting volume profile on testvol has been successful
> sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.2708 s, 87.5 MB/s
>
> sr:~# gluster volume pr...
2009 Jun 05
1
DRBD+GFS - Logical Volume problem
...on. So, after syncronized my (two) /dev/drbd0 block
devices, I start the clvmd service and try to create a clustered
logical volume. I get this:
On "alice":
[root at alice ~]# pvcreate /dev/drbd0
Physical volume "/dev/drbd0" successfully created
[root at alice ~]# vgcreate testvol /dev/drbd0
Clustered volume group "testvol" successfully created
[root at alice ~]# vgdisplay
--- Volume group ---
VG Name               VolGroup00
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG St...
2013 Dec 04
1
Testing failover and recovery
...similar usecases with DRBD+NFS setups) so I set up a
test case to try out failover and recovery.
For this I have a setup with two glusterfs servers (each is a VM) and one
client (also a VM).
I'm using GlusterFS 3.4 btw.
The servers manage a gluster volume created as:
gluster volume create testvol rep 2 transport tcp gs1:/export/vda1/brick
gs2:/export/vda1/brick
gluster volume start testvol
gluster volume set testvol network.ping-timeout 5
Then the client mounts this volume as:
mount -t glusterfs gs1:/testvol /import/testvol
Everything seems to work well in normal use cases; I can write/rea...
2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
...;> [2017-05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20
>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2
>> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol
>> /rhev/data-center/mnt/glusterSD/s1:_testvol)
>> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20
>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2
>&...
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
...essage:
>
> [2017-05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20
> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2
> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol
> /rhev/data-center/mnt/glusterSD/s1:_testvol)
> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20
> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2
> --volfile-server...
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
...stributed over all bricks.
2. Hm.. This is really weird.
And others;
No. I use only one volume. When I tested sharded and striped volumes, I manually stopped the volume, deleted it, purged the data (inside the bricks/disks) and re-created it using this command:
sudo gluster volume create testvol replica 2 sr-09-loc-50-14-18:/bricks/brick1 sr-10-loc-50-14-18:/bricks/brick1 sr-09-loc-50-14-18:/bricks/brick2 sr-10-loc-50-14-18:/bricks/brick2 sr-09-loc-50-14-18:/bricks/brick3 sr-10-loc-50-14-18:/bricks/brick3 sr-09-loc-50-14-18:/bricks/brick4 sr-10-loc-50-14-18:/bricks/brick4 sr-09-loc-50-14-1...
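For a `replica 2` create command like the one above, GlusterFS groups consecutive bricks in the argument list into replica pairs, so the ordering of the brick arguments matters. A small illustration of that grouping (hostnames shortened; this sketch is not from the original mail):

```python
# With "gluster volume create ... replica 2 b1 b2 b3 b4 ...",
# consecutive bricks form replica sets: (b1, b2), (b3, b4), ...
bricks = [
    "sr-09:/bricks/brick1", "sr-10:/bricks/brick1",
    "sr-09:/bricks/brick2", "sr-10:/bricks/brick2",
]
replica = 2
replica_sets = [bricks[i:i + replica] for i in range(0, len(bricks), replica)]
for n, rs in enumerate(replica_sets, 1):
    print(f"replica set {n}: {rs}")
```

Alternating hosts in each pair, as the command above does, keeps both copies of every file on different servers.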
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
...05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main]
>>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20
>>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2
>>> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol
>>> /rhev/data-center/mnt/glusterSD/s1:_testvol)
>>> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main]
>>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20
>>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile...
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
...23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main]
>>>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20
>>>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2
>>>> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol
>>>> /rhev/data-center/mnt/glusterSD/s1:_testvol)
>>>> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main]
>>>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20
>>>> (args: /usr/sbin/glusterfs --volfile-ser...
2019 Jun 12
1
Proper command for replace-brick on distribute–replicate?
...> commit messages of these 2 patches for more details.
>
> You can play around with most of these commands in a 1 node setup
> if you want to convince yourself that they work. There is no need
> to form a cluster.
> [root at tuxpad glusterfs]# gluster v create testvol replica 3
> 127.0.0.2:/home/ravi/bricks/brick{1..3} force
> [root at tuxpad glusterfs]# gluster v start testvol
> [root at tuxpad glusterfs]# mount -t glusterfs 127.0.0.2:testvol
> /mnt/fuse_mnt/
> [root at tuxpad glusterfs]# touch /mnt/fuse_mnt/FILE
> [roo...
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
...stributed over all bricks.
2. Hm.. This is really weird.
And others;
No. I use only one volume. When I tested sharded and striped volumes, I manually stopped the volume, deleted it, purged the data (inside the bricks/disks) and re-created it using this command:
sudo gluster volume create testvol replica 2 sr-09-loc-50-14-18:/bricks/brick1 sr-10-loc-50-14-18:/bricks/brick1 sr-09-loc-50-14-18:/bricks/brick2 sr-10-loc-50-14-18:/bricks/brick2 sr-09-loc-50-14-18:/bricks/brick3 sr-10-loc-50-14-18:/bricks/brick3 sr-09-loc-50-14-18:/bricks/brick4 sr-10-loc-50-14-18:/bricks/brick4 sr-09-loc-50-14-1...
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
...e of the following log message:
[2017-05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol /rhev/data-center/mnt/glusterSD/s1:_testvol)
[2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 --volfile-server=s3 --volfile-server=s4 -...
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
...;> [2017-05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20
>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2
>> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol
>> /rhev/data-center/mnt/glusterSD/s1:_testvol)
>> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20
>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2
>&...
2019 Jun 11
1
Proper command for replace-brick on distribute–replicate?
Dear list,
In a recent discussion on this list Ravi suggested that the documentation
for replace-brick was out of date. For a distribute-replicate volume the
documentation currently says that we need to kill the old brick's PID,
create a temporary empty directory on the FUSE mount, check the xattrs,
replace-brick with commit force.
Is all this still necessary? I'm running Gluster 5.6 on
2012 May 03
2
[3.3 beta3] When should the self-heal daemon be triggered?
...ote more data to a few folders from the client machine.
Then I restarted the second brick server.
At this point, the second server seemed to "self-heal" enough that it
registered the new directories, but all the files inside were zero-length.
I then ran the command:
gluster volume heal testvol
After I ran that, there was some activity, and now all the files were
populated.
Was that supposed to happen automatically, eventually, or am I missing
something about how the self-heal daemon works?
Thanks,
Toby
1997 Oct 22
0
R-alpha: na.woes
...76
are NA-filled. To find the interesting case, you need to go through
the following contortions:
> menar.t1 <- juul$menarche=="Yes"&juul$tanner=="I"
> menar.t1 <- menar.t1 & !(is.na(menar.t1))
> juul[menar.t1,]
      age menarche sex igf1 tanner testvol
962 12.33      Yes   F   NA      I      NA
I can't think of a single case where the current behavior is useful.
Indexing with numerical NA's makes sense, e.g. when recoding, and then
there's the boundary case of indexing with a vector where all elements
are logical NA's (should the...
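The same filter-rows-matching-a-condition pattern, in a modern pandas analogue (a hypothetical miniature of the `juul` data, using column names from the thread; not from the original R discussion): comparing against a missing value yields False rather than NA, so the extra `!is.na()` contortion is unnecessary.

```python
import pandas as pd

# Hypothetical miniature of the juul data frame from the thread.
juul = pd.DataFrame({
    "age":      [12.33, 10.10, 14.20],
    "menarche": ["Yes", None, "Yes"],
    "tanner":   ["I",   "I",  "II"],
})

# In pandas, `None == "Yes"` evaluates to False inside the mask,
# so missing values are simply excluded from the selection.
mask = (juul["menarche"] == "Yes") & (juul["tanner"] == "I")
print(juul[mask])  # only the age 12.33 row survives
```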
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
Hi Krutika,
Sure, here is volume info:
root at sr-09-loc-50-14-18:/# gluster volume info testvol
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 30426017-59d5-4091-b6bc-279a905b704a
Status: Started
Snapshot Count: 0
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: sr-09-loc-50-14-18:/bricks/brick1
Brick2: sr-09-loc-50-14-18:/bricks/brick2
Brick3: sr-0...
2009 Jan 24
2
rsync with --copy-devices patch and device-target with --write-batch doesn't work
...append, no ACLs, xattrs, iconv, symtimes
rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. See the GNU
General Public Licence for details.
root@xp8main3:/usr/local/src/rsync# ./rsync -v --progress
--write-batch=/mnt/testvol/diff1_2_usb_copydiff
/mnt/sdc1/snapshotvergleich/rootbackup1.img /dev/vg0/rootbackup
rootbackup1.img
53,116,928 0% 50.62MB/s 0:03:26
rsync error: error in file IO (code 11) at io.c(1565) [sender=3.1.0dev]
root@xp8main3:/usr/local/src/rsync#
probably more informative:
root@xp8main3:/us...