similar to: Reconstructing files from shards

Displaying 20 results from an estimated 10000 matches similar to: "Reconstructing files from shards"

2018 Apr 23
0
Reconstructing files from shards
From some old May 2017 email. I asked the following: "From the docs, I see you can identify the shards by the GFID # getfattr -d -m. -e hex path_to_file # ls /bricks/*/.shard -lh | grep GFID Is there a gluster tool/script that will recreate the file? or can you just sort them properly and then simply cat/copy them back together? cat shardGFID.1 .. shardGFID.X > thefile
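A minimal sketch of the manual reassembly being asked about here, assuming a hypothetical brick path and an already-known GFID; note that a later reply in this listing advises against stitching shards straight off the bricks, and a plain cat also mishandles sparse (absent) shards:

  BRICK=/bricks/brick1                 # hypothetical brick path
  GFID=<gfid-of-the-base-file>         # dashed-UUID form of the trusted.gfid xattr
  # Block 0 is the base file itself; blocks 1..N live under .shard as <GFID>.1, <GFID>.2, ...
  cp "$BRICK/path/to/file" /tmp/rebuilt
  ls "$BRICK/.shard" | grep "^$GFID\." | sort -t. -k2,2n | \
      while read shard; do cat "$BRICK/.shard/$shard" >> /tmp/rebuilt; done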
2018 Apr 23
1
Reconstructing files from shards
> On Apr 23, 2018, at 10:49 AM, WK <wkmail at bneit.com> wrote: > > From some old May 2017 email. I asked the following: > "From the docs, I see you can identify the shards by the GFID > # getfattr -d -m. -e hex path_to_file > # ls /bricks/*/.shard -lh | grep GFID > > Is there a gluster tool/script that will recreate the file? > > or can you just sort
2018 Apr 27
0
Reconstructing files from shards
The short answer is - no, there exists no script currently that can piece the shards together into a single file. Long answer: IMO the safest way to convert from sharded to a single file _is_ by copying the data out into a new volume at the moment. Picking up the files from the individual bricks directly and joining them, although fast, is a strict no-no for many reasons - for example, when you
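A hedged sketch of that copy-out route, assuming hypothetical volume names and FUSE mount points; reading through a client mount keeps the shard translator in the path, so it reassembles the pieces transparently:

  mount -t glusterfs server1:/sharded-vol /mnt/old    # existing volume with features.shard on
  mount -t glusterfs server1:/plain-vol   /mnt/new    # new volume created without sharding
  rsync -avP --sparse /mnt/old/ /mnt/new/             # copy via the mounts, never via the bricks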
2018 Apr 23
1
Reconstructing files from shards
2018-04-23 9:34 GMT+02:00 Alessandro Briosi <ab1 at metalit.com>: > Is that really so? yes, I've opened a bug asking developers to block removal of sharding when the volume has data on it, or to write a huge warning message saying that data loss will happen > I thought that sharding was an extended attribute on the files created when > sharding is enabled. > > Turning off
2018 Apr 22
4
Reconstructing files from shards
On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> wrote: > Imho the easiest path would be to turn off sharding on the volume and > simply do a copy of the files (to a different directory, or rename and > then copy i.e.) > > This should simply store the files without sharding. > If you turn off sharding on a sharded volume with data in it, all sharded
2018 Apr 22
0
Reconstructing files from shards
On 20/04/2018 21:44, Jamie Lawrence wrote: > Hello, > > So I have a volume on a gluster install (3.12.5) on which sharding was enabled at some point recently. (Don't know how it happened, it may have been an accidental run of an old script.) So it has been happily sharding behind our backs and it shouldn't have. > > I'd like to turn sharding off and reverse the
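Before attempting a reversal like the one asked about above, it can help to confirm what the volume currently has set; a minimal check, assuming a hypothetical volume name gv0:

  gluster volume get gv0 features.shard              # on / off
  gluster volume get gv0 features.shard-block-size   # e.g. 64MB, the default
  gluster volume info gv0 | grep -i shard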
2018 Apr 23
0
Reconstructing files from shards
On 22/04/2018 11:39, Gandalf Corvotempesta wrote: > On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com > <mailto:ab1 at metalit.com>> wrote: > > Imho the easiest path would be to turn off sharding on the volume and > simply do a copy of the files (to a different directory, or rename > and > then copy i.e.) > > This
2018 Apr 22
0
Reconstructing files from shards
So a stock oVirt with gluster install that uses sharding: A. can't safely have sharding turned off once files are in use, and B. can't be expanded with additional bricks. Ouch. On April 22, 2018 5:39:20 AM EDT, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote: >On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> >wrote: > >> Imho
2017 Dec 08
2
Testing sharding on tiered volume
Hi, I'm looking to use sharding on a tiered volume. This is a very attractive feature that could benefit tiered volumes by letting them handle larger files without hitting the "out of (hot) space" problem. I decided to set up a test configuration on GlusterFS 3.12.3 where the tiered volume has 2TB cold and 1GB hot segments. Shard size is set to 16MB. For testing, 100GB files are used. It seems writes
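A sketch of how such a test configuration might be expressed, assuming a hypothetical volume name and brick paths; the shard options are standard, while the tier attach line follows the 3.x-era tiering syntax (removed in later releases):

  gluster volume set tiervol features.shard on
  gluster volume set tiervol features.shard-block-size 16MB
  gluster volume tier tiervol attach replica 2 hot1:/bricks/ssd/hot hot2:/bricks/ssd/hot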
2017 Dec 18
0
Testing sharding on tiered volume
----- Original Message ----- > From: "Viktor Nosov" <vnosov at stonefly.com> > To: gluster-users at gluster.org > Cc: vnosov at stonefly.com > Sent: Friday, December 8, 2017 5:45:25 PM > Subject: [Gluster-users] Testing sharding on tiered volume > > Hi, > > I'm looking to use sharding on tiered volume. This is very attractive > feature that could
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or work around, the following issue? Thanks! Bug 1549714 - On sharded tiered volume, only first shard of new file goes on hot tier. https://bugzilla.redhat.com/show_bug.cgi?id=1549714 On sharded tiered volume, only first shard of new file goes on hot tier. On a sharded tiered volume, only the first shard of a new file goes on the hot tier, the rest
2017 Sep 03
3
Poor performance with shard
Hey everyone! I have deployed gluster on 3 nodes with 4 SSDs each and 10Gb Ethernet connection. The storage is configured with 3 gluster volumes, every volume has 12 bricks (4 bricks on every server, 1 per ssd in the server). With the 'features.shard' option off, my writing speed (using the 'dd' command) is approximately 250 Mbs, and when the feature is on the writing speed is
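A sketch of the kind of before/after measurement described above, assuming a hypothetical FUSE mount point; oflag=direct keeps the client page cache out of the numbers:

  dd if=/dev/zero of=/mnt/vol1/ddtest bs=1M count=4096 oflag=direct   # ~4 GiB sequential write
  # run once on a volume with features.shard off and once with it on, then compare the MB/s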
2019 Nov 28
1
Stale File Handle Errors During Heavy Writes
I have already tried disabling sharding on a test oVirt volume... The results were devastating for the app, so please do not disable sharding. Best Regards, Strahil Nikolov On Nov 27, 2019 20:55, Olaf Buitelaar <olaf.buitelaar at gmail.com> wrote: > > Hi Tim, > > That issue also seems to point to a stale file. Best I suppose is first to determine if you indeed have the same shard
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com> wrote: > The gfid mismatch here is between the shard and its "link-to" file, the > creation of which happens at a layer below that of shard translator on the > stack. > > Adding DHT devs to take a look. > Thanks Krutika. I assume shard doesn't do any dentry operations like rename,
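One way to see such a mismatch from the brick side, assuming hypothetical brick paths and a hypothetical shard name; the trusted.gfid xattr should be identical on every copy of the same shard, and a DHT link-to file is recognizable by its trusted.glusterfs.dht.linkto xattr and its sticky-bit-only (T) mode:

  getfattr -d -m. -e hex /bricks/b1/.shard/<base-gfid>.128
  getfattr -d -m. -e hex /bricks/b2/.shard/<base-gfid>.128
  # compare the trusted.gfid values across bricks; a link-to file also carries
  # trusted.glusterfs.dht.linkto pointing at the subvolume that holds the real data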
2017 Jul 04
2
Very slow performance on Sharded GlusterFS
Hi Gencer, I just checked the volume-profile attachments. Things that seem really odd to me as far as the sharded volume is concerned: 1. Only the replica pair having bricks 5 and 6 on both nodes 09 and 10 seems to have witnessed all the IO. No other bricks witnessed any write operations. This is unacceptable for a volume that has 8 other replica sets. Why didn't the shards get distributed
2017 Jul 04
2
Very slow performance on Sharded GlusterFS
Thanks. I think reusing the same volume was the cause of lack of IO distribution. The latest profile output looks much more realistic and in line with what I would expect. Let me analyse the numbers a bit and get back. -Krutika On Tue, Jul 4, 2017 at 12:55 PM, <gencer at gencgiyen.com> wrote: > Hi Krutika, > > > > Thank you so much for your reply. Let me answer all: > >
2018 Mar 25
2
Sharding problem - multiple shard copies with mismatching gfids
Hello all, We are having a rather interesting problem with one of our VM storage systems. The GlusterFS client is throwing errors relating to GFID mismatches. We traced this down to multiple shards being present on the gluster nodes, with different gfids. Hypervisor gluster mount log: [2018-03-25 18:54:19.261733] E [MSGID: 133010] [shard.c:1724:shard_common_lookup_shards_cbk]
2017 Jun 30
1
Very slow performance on Sharded GlusterFS
Just noticed that the way you have configured your brick order during volume-create makes both replicas of every set reside on the same machine. That apart, do you see any difference if you change shard-block-size to 512MB? Could you try that? If it doesn't help, could you share the volume-profile output for both the tests (separate)? Here's what you do: 1. Start profile before starting
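A sketch of that profiling sequence, assuming the volume name testvol used elsewhere in this listing and a hypothetical mount point:

  gluster volume profile testvol start
  dd if=/dev/zero of=/mnt/testvol/ddfile bs=1G count=1    # the workload being measured
  gluster volume profile testvol info > /tmp/profile-after-dd.txt
  gluster volume profile testvol stop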
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika, After that setting: $ dd if=/dev/zero of=/mnt/ddfile bs=1G count=1 1+0 records in 1+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.7351 s, 91.5 MB/s $ dd if=/dev/zero of=/mnt/ddfile2 bs=2G count=1 0+1 records in 0+1 records out 2147479552 bytes (2.1 GB, 2.0 GiB) copied, 23.7351 s, 90.5 MB/s $ dd if=/dev/zero of=/mnt/ddfile3 bs=1G count=1 1+0 records
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika, I also did one more test. I re-created another volume (single volume. Old one destroyed-deleted) then do 2 dd tests. One for 1GB other for 2GB. Both are 32MB shard and eager-lock off. Samples: sr:~# gluster volume profile testvol start Starting volume profile on testvol has been successful sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1 1+0 records in 1+0