search for: shards

Displaying 20 results from an estimated 441 matches for "shards".

2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
...On a sharded tiered volume, only the first shard of a new file goes on the hot tier; the rest are written to the cold tier. This is unfortunate for archival applications where the hot tier is fast but the cold tier is very slow. After the tier-promote-frequency (default 120 seconds), all of the shards do migrate to the hot tier, but for archival applications this migration is not helpful, since the file is unlikely to be accessed again right after being copied in, and it will later just migrate back to the cold tier. Sharding should be, and needs to be, used with very large archive files because of...
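
For context, both behaviours discussed here are driven by ordinary volume options; a minimal sketch, assuming a sharded tiered volume named archvol (hypothetical name):

# gluster volume set archvol features.shard on
# gluster volume set archvol cluster.tier-promote-frequency 120    (seconds between promotion cycles; 120 is the default mentioned above)
# gluster volume get archvol cluster.tier-promote-frequency

Note that lowering cluster.tier-promote-frequency only makes the after-the-fact migration described above happen sooner; it does not change which tier newly written shards land on.
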
2011 Oct 23
4
summarizing a data frame i.e. count -> group by
Hello, This is one problem at a time :) I have a data frame df that looks like this:
  time partitioning_mode workload runtime
1    1          sharding    query     607
2    1          sharding    query      85
3    1          sharding    query      52
4    1          sharding    query      79
5    1          sharding    query      77
6    1          sharding    query      67
7    1
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
Raghavendra, Sorry for the late follow up. I have some more data on the issue. The issue tends to happen when the shards are created. The easiest time to reproduce this is during an initial VM disk format. This is a log from a test VM that was launched, and then partitioned and formatted with LVM / XFS: [2018-04-03 02:05:00.838440] W [MSGID: 109048] [dht-common.c:9732:dht_rmdir_cached_lookup_cbk] 0-ovirt-350-zon...
2011 Oct 23
1
unfold list (variable number of columns) into a data frame
Hello, I used R a lot one year ago and now I am a bit rusty :) I have my raw data, which corresponds to the list of runtimes per minute (minute "1" "2" "3", in two database modes, "sharding" and "query", and two workload types, "query" and "refresh"), stored as a list of char arrays that looks like this: > str(data) List of 122 $ :
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
...ing this diagnostic information as early as you can. regards, Raghavendra On Tue, Apr 3, 2018 at 7:52 AM, Ian Halliday <ihalliday at ndevix.com> wrote: > Raghavendra, > > Sorry for the late follow up. I have some more data on the issue. > > The issue tends to happen when the shards are created. The easiest time to > reproduce this is during an initial VM disk format. This is a log from a > test VM that was launched, and then partitioned and formatted with LVM / > XFS: > > [2018-04-03 02:05:00.838440] W [MSGID: 109048] > [dht-common.c:9732:dht_rmdir_cached_lo...
2018 Apr 22
4
Reconstructing files from shards
On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> wrote: > Imho the easiest path would be to turn off sharding on the volume and > simply do a copy of the files (to a different directory, or rename and > then copy, for example). > > This should simply store the files without sharding. > If you turn off sharding on a sharded volume with data in it, all sharded
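
A minimal sketch of the copy-first approach suggested above, assuming the sharded volume is FUSE-mounted at /mnt/vol and /backup is plain local storage (both hypothetical paths); reads through the mount go via the shard translator, so the copies arrive as whole files:

# rsync -a --progress /mnt/vol/ /backup/vol-copy/

Only after verifying those copies would it be safe to think about touching features.shard on the volume, given the warning that follows.
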
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
...of which happens at a layer below that of shard translator on the > stack. > > Adding DHT devs to take a look. > Thanks Krutika. I assume shard doesn't do any dentry operations like rename, link, unlink on the path of the file (not the gfid handle based path) internally while managing shards. Can you confirm? If it does these operations, what fops does it do? @Ian, I can suggest the following way to fix the problem: * Since one of the files listed is a DHT linkto file, I am assuming there is only one shard of the file. If not, please list out gfids of other shards and don't proceed with...
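
To illustrate the kind of per-shard gfid listing being asked for, a hedged sketch, assuming each brick is rooted at /bricks/*/brick (hypothetical layout) and using the base file gfid from the hypervisor log quoted elsewhere in this thread:

# ls -l /bricks/*/brick/.shard/ | grep 87137cac-49eb-492a-8f33-8e33470d8cb7
# getfattr -d -m . -e hex /bricks/b1/brick/.shard/87137cac-49eb-492a-8f33-8e33470d8cb7.7

Comparing the trusted.gfid value reported for the same shard on each brick shows which copies disagree, which is the mismatch the client log complains about.
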
2018 Mar 26
1
Sharding problem - multiple shard copies with mismatching gfids
...that of shard translator on the >> stack. >> >> Adding DHT devs to take a look. >> > > Thanks Krutika. I assume shard doesn't do any dentry operations like > rename, link, unlink on the path of the file (not the gfid handle based path) > internally while managing shards. Can you confirm? If it does these > operations, what fops does it do? > > @Ian, > > I can suggest the following way to fix the problem: > * Since one of the files listed is a DHT linkto file, I am assuming there is > only one shard of the file. If not, please list out gfids of other s...
2018 Mar 25
2
Sharding problem - multiple shard copies with mismatching gfids
Hello all, We are having a rather interesting problem with one of our VM storage systems. The GlusterFS client is throwing errors relating to GFID mismatches. We traced this down to multiple shards being present on the gluster nodes, with different gfids. Hypervisor gluster mount log: [2018-03-25 18:54:19.261733] E [MSGID: 133010] [shard.c:1724:shard_common_lookup_shards_cbk] 0-ovirt-zone1-shard: Lookup on shard 7 failed. Base file gfid = 87137cac-49eb-492a-8f33-8e33470d8cb7 [Stale file...
2018 Apr 23
1
Reconstructing files from shards
...would not turn off sharding on the files, > but on newly created files ... No, because sharded files are reconstructed on-the-fly based on the volume's sharding property. If you disable sharding, gluster knows nothing about the previous shard configuration, and thus won't be able to read all the shards for each file. It will only return the first shard, resulting in data loss or corruption.
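
Before any such experiment it is worth checking what the volume currently has configured; a small sketch, assuming a volume named vol1 (hypothetical name), using the standard gluster volume get CLI:

# gluster volume get vol1 features.shard
# gluster volume get vol1 features.shard-block-size

If features.shard were turned off while sharded data is still on the bricks, reads through the mount would return only the first shard of each file, which is exactly the data-loss scenario described above.
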
2018 Mar 26
0
Sharding problem - multiple shard copies with mismatching gfids
...Mon, Mar 26, 2018 at 1:09 AM, Ian Halliday <ihalliday at ndevix.com> wrote: > Hello all, > > We are having a rather interesting problem with one of our VM storage > systems. The GlusterFS client is throwing errors relating to GFID > mismatches. We traced this down to multiple shards being present on the > gluster nodes, with different gfids. > > Hypervisor gluster mount log: > > [2018-03-25 18:54:19.261733] E [MSGID: 133010] [shard.c:1724:shard_common_lookup_shards_cbk] > 0-ovirt-zone1-shard: Lookup on shard 7 failed. Base file gfid = > 87137cac-49eb-492a-...
2018 Apr 23
0
Reconstructing files from shards
On 22/04/2018 11:39, Gandalf Corvotempesta wrote: > On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> wrote: > > Imho the easiest path would be to turn off sharding on the volume and > simply do a copy of the files (to a different directory, or rename > and > then copy, for example). > > This
2018 Apr 20
7
Reconstructing files from shards
...disruptive for us, and the language I've seen reading tickets and old messages to this list makes me think that isn't needed anymore, but confirmation of that would be good. The only discussion I can find is these videos[1]: http://opensource-storage.blogspot.com/2016/07/de-mystifying-gluster-shards.html , and some hints[2] that are old enough that I don't trust them without confirmation that nothing's changed. The videos don't acknowledge the existence of file holes. Also, the hint in [2] mentions using trusted.glusterfs.shard.file-size to get the size of a partly filled hol...
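
The xattr mentioned in [2] can be read directly on a brick; a hedged example, assuming a brick path of /bricks/b1/brick and a file images/vm1.img (hypothetical names):

# getfattr -n trusted.glusterfs.shard.file-size -e hex /bricks/b1/brick/images/vm1.img
# getfattr -n trusted.glusterfs.shard.block-size -e hex /bricks/b1/brick/images/vm1.img

The value is a packed binary blob; by most accounts the logical file size is encoded in its first 64-bit field, but that interpretation should be verified against the shard translator source before relying on it for reconstruction.
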
2018 Apr 23
1
Reconstructing files from shards
> On Apr 23, 2018, at 10:49 AM, WK <wkmail at bneit.com> wrote: > > From some old May 2017 email, I asked the following: > "From the docs, I see you can identify the shards by the GFID > # getfattr -d -m. -e hex path_to_file > # ls /bricks/*/.shard -lh | grep GFID > > Is there a gluster tool/script that will recreate the file? > > or can you just sort them properly and then simply cat/copy them back together? > > cat shardGFID.1 ....
2017 Sep 03
3
Poor performance with shard
Hey everyone! I have deployed gluster on 3 nodes with 4 SSDs each and a 10Gb Ethernet connection. The storage is configured with 3 gluster volumes, every volume has 12 bricks (4 bricks on every server, 1 per SSD in the server). With the 'features.shard' option off, my write speed (using the 'dd' command) is approximately 250 MB/s, and when the feature is on the write speed is
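
For anyone repeating this comparison, a sketch of the kind of dd run described, assuming the volume is named gvol and mounted at /mnt/gvol (both hypothetical):

# dd if=/dev/zero of=/mnt/gvol/ddtest1.bin bs=1M count=4096 oflag=direct
# gluster volume set gvol features.shard on
# dd if=/dev/zero of=/mnt/gvol/ddtest2.bin bs=1M count=4096 oflag=direct

oflag=direct (where the mount allows it) keeps the client page cache out of the measurement; without it the numbers largely reflect caching rather than volume throughput. Sharding only applies to files created after it is enabled, so the second run must write a new file.
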
2017 Dec 08
2
Testing sharding on tiered volume
Hi, I'm looking to use sharding on a tiered volume. This is a very attractive feature that could benefit a tiered volume by letting it handle larger files without hitting the "out of (hot) space" problem. I set up a test configuration on GlusterFS 3.12.3 where the tiered volume has 2TB cold and 1GB hot segments. Shard size is set to 16MB. For testing, 100GB files are used. It seems writes
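
A sketch of the volume options behind such a test, assuming a tiered volume named tiervol (hypothetical name); the 16MB value comes from the message above:

# gluster volume set tiervol features.shard on
# gluster volume set tiervol features.shard-block-size 16MB
# gluster volume get tiervol features.shard-block-size    (confirm the value took effect)

The block size applies only to files created after it is set; existing files keep whatever layout they were written with.
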
2018 Apr 23
0
Reconstructing files from shards
From some old May 2017 email, I asked the following: "From the docs, I see you can identify the shards by the GFID # getfattr -d -m. -e hex path_to_file # ls /bricks/*/.shard -lh | grep GFID Is there a gluster tool/script that will recreate the file? or can you just sort them properly and then simply cat/copy them back together? cat shardGFID.1 .. shardGFID.X > thefile "...
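
As a hedged sketch of the cat approach being asked about, assuming a single brick rooted at /bricks/b1/brick, a base file at images/bigfile.img, a known shard count of 42, and a placeholder gfid variable (all hypothetical):

# GFID=<base-file-gfid>    (as printed by the getfattr command above)
# cp /bricks/b1/brick/images/bigfile.img /restore/bigfile.img    (the first block lives in the base file itself)
# for i in $(seq 1 42); do cat /bricks/b1/brick/.shard/$GFID.$i >> /restore/bigfile.img; done

Using seq avoids the lexical .1/.10/.2 ordering trap. On a distributed volume the shards are spread across bricks and would first have to be gathered from all of them, and plain cat assumes every shard exists: with sparse files some shards may be absent and would need to be replaced by runs of zeros at the right offsets, which is exactly the file-hole concern raised in the 2018 Apr 20 message above.
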
2017 Dec 18
0
Testing sharding on tiered volume
...ot of time waiting for your hot files to rebalance to the cold tier since it's out of space, you will also probably have other files being written to the cold tier with the hot tier full, further using up your IOPS. I don't know how tiering would treat sharded files: would it only promote the shards of the file that are in use, or would it try to put the whole file / all the shards on the hot tier? If you get a free minute, update me on what you are trying to do; happy to help however I can. -b > > Best regards, > > Viktor Nosov
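
For reference, the hot-tier fill level that drives this behaviour is governed by the tier watermarks, and the tier daemon's activity can be inspected per volume; a hedged sketch, assuming a tiered volume named tiervol and example percentages (all hypothetical):

# gluster volume tier tiervol status
# gluster volume set tiervol cluster.watermark-hi 90     (promotion stops once the hot tier is fuller than this %)
# gluster volume set tiervol cluster.watermark-low 75    (demotion kicks in as usage rises past this %)
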
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote: > Hi, > > Thank you for the answer and sorry for the delay: > > 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > 1. What does the glustershd.log say on all 3 nodes when you run > the command? Does it complain about these files?
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > Could you check if the self-heal daemon on all nodes is connected to the 3 > bricks? You will need to check the glustershd.log for that. > If it is not connected, try restarting the shd using `gluster volume start > engine force`, then launch the heal command like you did earlier and see if > heals
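
For reference, a short sketch of the sequence being suggested here; the volume name "engine" comes from the thread itself, the rest is standard gluster CLI:

# gluster volume start engine force    (restarts any brick or self-heal daemon processes that are not running)
# gluster volume heal engine           (triggers a heal of the entries in the heal indices)
# gluster volume heal engine info      (lists entries still pending heal on each brick)
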