search for: sharding

Displaying 20 results from an estimated 441 matches for "sharding".

2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
...slow. After the tier-promote-frequency interval (default 120 seconds), all of the shards do migrate to the hot tier, but for archival applications this migration is not helpful: the file is unlikely to be accessed again right after being copied in, and it will later just migrate back to the cold tier. Sharding should be, and needs to be, used with very large archive files because of Bug 1277112 (Data Tiering: file creation and new writes to an existing file fail when the hot tier is full instead of redirecting/flushing the data to the cold tier), which sharding with tiering helps mitigate. This...
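For reference, the tunables discussed in this thread are per-volume options; a minimal sketch, assuming a hypothetical tiered volume named tiervol:

  # enable sharding and pick a block size suited to large archive files
  gluster volume set tiervol features.shard on
  gluster volume set tiervol features.shard-block-size 512MB
  # how often the tier daemon promotes files to the hot tier (default 120s)
  gluster volume set tiervol cluster.tier-promote-frequency 120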
2011 Oct 23
4
summarizing a data frame i.e. count -> group by
Hello, this is one problem at a time :) I have a data frame df that looks like this: time partitioning_mode workload runtime 1 1 sharding query 607 2 1 sharding query 85 3 1 sharding query 52 4 1 sharding query 79 5 1 sharding query 77 6 1 sharding query 67 7 1 sharding query 98 8 1 shardin...
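A minimal base-R sketch of the count-by-group the subject line asks for, assuming df has the columns shown in the excerpt:

  # number of rows (runs) per partitioning_mode/workload combination
  aggregate(runtime ~ partitioning_mode + workload, data = df, FUN = length)
  # the same counts as a contingency table
  table(df$partitioning_mode, df$workload)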
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
...Krutika Dhananjay" <kdhananj at redhat.com> Cc: "Ian Halliday" <ihalliday at ndevix.com>; "gluster-user" <gluster-users at gluster.org>; "Nithya Balachandran" <nbalacha at redhat.com> Sent: 3/26/2018 2:37:21 AM Subject: Re: [Gluster-users] Sharding problem - multiple shard copies with mismatching gfids >Ian, > >Do you've a reproducer for this bug? If not a specific one, a general >outline of what operations where done on the file will help. > >regards, >Raghavendra > >On Mon, Mar 26, 2018 at 12:55 PM, Raghave...
2011 Oct 23
1
unfold list (variable number of columns) into a data frame
Hello, I used R a lot one year ago and now I am a bit rusty :) I have my raw data, which corresponds to the list of runtimes per minute (minute "1" "2" "3" in two database modes "sharding" and "query" and two workload types "query" and "refresh"), as a list of char arrays that looks like this: > str(data) List of 122 $ : chr [1:163] "1" "sharding" "query" "607" "85" "52" "79"...
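A minimal sketch of unfolding such a list into the data frame from the earlier post, assuming each element is minute, mode, workload followed by the runtimes:

  # one small data.frame per list element, then stack them; the three scalar
  # columns recycle against the runtime vector
  df <- do.call(rbind, lapply(data, function(x) {
    data.frame(time              = as.integer(x[1]),
               partitioning_mode = x[2],
               workload          = x[3],
               runtime           = as.integer(x[-(1:3)]),
               stringsAsFactors  = FALSE)
  }))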
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
..." <kdhananj at redhat.com> > Cc: "Ian Halliday" <ihalliday at ndevix.com>; "gluster-user" < > gluster-users at gluster.org>; "Nithya Balachandran" <nbalacha at redhat.com> > Sent: 3/26/2018 2:37:21 AM > Subject: Re: [Gluster-users] Sharding problem - multiple shard copies with > mismatching gfids > > Ian, > > Do you have a reproducer for this bug? If not a specific one, a general > outline of what operations were done on the file will help. > > regards, > Raghavendra > > On Mon, Mar 26, 2018 at 12:5...
2018 Apr 22
4
Reconstructing files from shards
On Sun 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> wrote: > Imho the easiest path would be to turn off sharding on the volume and > simply do a copy of the files (to a different directory, or rename and > then copy, for instance) > > This should simply store the files without sharding. > If you turn off sharding on a sharded volume with data in it, all sharded files would be unreadable > ---------...
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com> wrote: > The gfid mismatch here is between the shard and its "link-to" file, the > creation of which happens at a layer below that of shard translator on the > stack. > > Adding DHT devs to take a look. > Thanks Krutika. I assume shard doesn't do any dentry operations like rename,
2018 Mar 26
1
Sharding problem - multiple shard copies with mismatching gfids
Ian, Do you have a reproducer for this bug? If not a specific one, a general outline of what operations were done on the file will help. regards, Raghavendra On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > > > On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com> > wrote: > >> The gfid mismatch
2018 Mar 25
2
Sharding problem - multiple shard copies with mismatching gfids
Hello all, We are having a rather interesting problem with one of our VM storage systems. The GlusterFS client is throwing errors relating to GFID mismatches. We traced this down to multiple shards being present on the gluster nodes, with different gfids. Hypervisor gluster mount log: [2018-03-25 18:54:19.261733] E [MSGID: 133010] [shard.c:1724:shard_common_lookup_shards_cbk]
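When chasing a mismatch like this, a first step is to locate every copy of the offending shard on the bricks and compare their gfid xattrs; a hedged sketch, where the brick path and shard file name are placeholders, not values from the log:

  # run on each gluster node
  B=/bricks/b1/brick                          # hypothetical brick path
  S=d9f1f4a5-0000-0000-0000-000000000000.7    # hypothetical shard file name
  ls -l "$B/.shard/$S"
  getfattr -n trusted.gfid -e hex "$B/.shard/$S"   # gfids must agree across all copies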
2018 Apr 23
1
Reconstructing files from shards
2018-04-23 9:34 GMT+02:00 Alessandro Briosi <ab1 at metalit.com>: > Is it really so? Yes, I've opened a bug asking the developers to block disabling of sharding when the volume has data on it, or to print a huge warning message saying that data loss will happen > I thought that sharding was an extended attribute on the files created when > sharding is enabled. > > Turning off sharding on the volume would not turn off sharding on the files, > but...
2018 Mar 26
0
Sharding problem - multiple shard copies with mismatching gfids
The gfid mismatch here is between the shard and its "link-to" file, the creation of which happens at a layer below that of shard translator on the stack. Adding DHT devs to take a look. -Krutika On Mon, Mar 26, 2018 at 1:09 AM, Ian Halliday <ihalliday at ndevix.com> wrote: > Hello all, > > We are having a rather interesting problem with one of our VM storage >
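The "link-to" file Krutika refers to is DHT's zero-byte pointer to a file's real location on another brick. A hedged way to tell which copy is the link-to, with an example path:

  # a DHT link-to file is typically 0 bytes with mode ---------T (sticky bit only)
  F=/bricks/b1/brick/.shard/d9f1f4a5-0000-0000-0000-000000000000.7   # example
  stat -c '%s %A %n' "$F"
  getfattr -n trusted.glusterfs.dht.linkto -e text "$F"   # present only on the link-to copy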
2018 Apr 23
0
Reconstructing files from shards
On 22/04/2018 11:39, Gandalf Corvotempesta wrote: > On Sun 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com > <mailto:ab1 at metalit.com>> wrote: > > Imho the easiest path would be to turn off sharding on the volume and > simply do a copy of the files (to a different directory, or rename > and > then copy, for instance) > > This should simply store the files without sharding. > > > If you turn off sharding on a sharded volume with data in it, all > sharded files...
2018 Apr 20
7
Reconstructing files from shards
Hello, So I have a volume on a gluster install (3.12.5) on which sharding was enabled at some point recently. (Don't know how it happened; it may have been an accidental run of an old script.) So it has been happily sharding behind our backs and it shouldn't have. I'd like to turn sharding off and revert the files back to normal. Some of these are sparse f...
2018 Apr 23
1
Reconstructing files from shards
...ted and unsupported except for reading the files off on a client, blowing away the volume and reconstructing. Which is a problem. -j > -wk > > > On 4/20/2018 12:44 PM, Jamie Lawrence wrote: >> Hello, >> >> So I have a volume on a gluster install (3.12.5) on which sharding was enabled at some point recently. (Don't know how it happened; it may have been an accidental run of an old script.) So it has been happily sharding behind our backs and it shouldn't have. >> >> I'd like to turn sharding off and revert the files back to normal. Some of...
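A minimal sketch of the "read the files off on a client" path, assuming a fuse mount at /mnt/gv0 and enough staging space; --sparse keeps the holes in sparse images:

  # copy everything off through a normal client mount, preserving sparseness
  rsync -aHX --sparse /mnt/gv0/ /staging/gv0-copy/
  # after recreating the volume without sharding, copy the data back
  rsync -aHX --sparse /staging/gv0-copy/ /mnt/gv0/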
2017 Sep 03
3
Poor performance with shard
Hey everyone! I have deployed gluster on 3 nodes with 4 SSDs each and a 10Gb Ethernet connection. The storage is configured with 3 gluster volumes, every volume has 12 bricks (4 bricks on every server, 1 per SSD in the server). With the 'features.shard' option off my writing speed (using the 'dd' command) is approximately 250 MB/s, and when the feature is on the writing speed is
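For comparing the two cases, a hedged sketch of a repeatable write test plus the knob that usually dominates shard write throughput; the mount point and volume name are examples:

  # sequential write through the fuse mount; fdatasync makes the number honest
  dd if=/dev/zero of=/mnt/gv0/ddtest bs=1M count=4096 conv=fdatasync
  # larger shards mean fewer shard lookups/creates per GB written
  gluster volume set gv0 features.shard-block-size 512MB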
2017 Dec 08
2
Testing sharding on tiered volume
Hi, I'm looking to use sharding on a tiered volume. This is a very attractive feature that could benefit a tiered volume by letting it handle larger files without hitting the "out of (hot) space" problem. I decided to set up a test configuration on GlusterFS 3.12.3 where the tiered volume has a 2TB cold segment and a 1GB hot segment. Shard size is s...
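A minimal sketch of such a test setup, with hypothetical host and brick names; the hot tier is attached to an existing (cold) volume and sharding is then enabled on it:

  # attach a small hot tier to an existing volume, then turn on sharding
  gluster volume tier testvol attach replica 2 node1:/ssd/brick node2:/ssd/brick
  gluster volume set testvol features.shard on
  gluster volume set testvol features.shard-block-size 64MB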
2018 Apr 23
0
Reconstructing files from shards
...the file from the mount to that of your stitched file." We tested it with some VM files and it indeed worked fine. That was probably on 3.10.1 at the time. -wk On 4/20/2018 12:44 PM, Jamie Lawrence wrote: > Hello, > > So I have a volume on a gluster install (3.12.5) on which sharding was enabled at some point recently. (Don't know how it happened; it may have been an accidental run of an old script.) So it has been happily sharding behind our backs and it shouldn't have. > > I'd like to turn sharding off and revert the files back to normal. Some of these are...
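A hedged sketch of the stitch-and-compare idea described above, not a supported procedure. Everything here is an assumption: the brick path, the file's gfid, a 64MB shard block size, and that this one brick holds the base file and all of its shards:

  BRICK=/bricks/b1/brick
  GFID=faf59566-1111-2222-3333-444444444444   # hypothetical gfid of the file
  BS=$((64*1024*1024))                        # assumed features.shard-block-size
  cp --sparse=always "$BRICK/images/vm.img" /tmp/restitched.img
  for s in "$BRICK/.shard/$GFID."*; do
      n=${s##*.}                              # shard index = offset / block size
      dd if="$s" of=/tmp/restitched.img bs=$BS seek="$n" conv=notrunc
  done
  md5sum /tmp/restitched.img /mnt/gv0/images/vm.img   # hashes should match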
2017 Dec 18
0
Testing sharding on tiered volume
----- Original Message ----- > From: "Viktor Nosov" <vnosov at stonefly.com> > To: gluster-users at gluster.org > Cc: vnosov at stonefly.com > Sent: Friday, December 8, 2017 5:45:25 PM > Subject: [Gluster-users] Testing sharding on tiered volume > > Hi, > > I'm looking to use sharding on a tiered volume. This is a very attractive > feature that could benefit a tiered volume by letting it handle larger files > without hitting the "out of (hot) space" problem. > I decided to set up a test configuration...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote: > Hi, > > Thank you for the answer and sorry for the delay: > > 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com > <mailto:ravishankar at redhat.com>>: > > 1. What does the glustershd.log say on all 3 nodes when you run > the command? Does it complain about these files? > > >
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > Could you check if the self-heal daemon on all nodes is connected to the 3 > bricks? You will need to check the glustershd.log for that. > If it is not connected, try restarting the shd using `gluster volume start > engine force`, then launch the heal command like you did earlier and see if > heals
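For reference, a hedged sketch of the checks suggested here; the volume name "engine" comes from the thread, and the log path is the usual default:

  # is the self-heal daemon up and connected on every node?
  gluster volume status engine shd
  grep -i 'connected' /var/log/glusterfs/glustershd.log | tail
  # restart shd without touching the bricks, then re-run the heal
  gluster volume start engine force
  gluster volume heal engine
  gluster volume heal engine info    # entries still pending heal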