Displaying 20 results from an estimated 457 matches for "sharded".
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or work around, the
following issue?
Thanks!
Bug 1549714 - On sharded tiered volume, only first shard of new file
goes on hot tier.
https://bugzilla.redhat.com/show_bug.cgi?id=1549714
On a sharded tiered volume, only the first shard of a new file
goes on the hot tier, the rest are written to t...
2011 Oct 23
4
summarizing a data frame i.e. count -> group by
Hello,
This is one problem at a time :)
I have a data frame df that looks like this:
time partitioning_mode workload runtime
1 1 sharding query 607
2 1 sharding query 85
3 1 sharding query 52
4 1 sharding query 79
5 1 sharding query 77
6 1 sharding query 67
7 1
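For illustration only (not part of the original thread): a minimal Python/pandas sketch of the count-per-group summary being asked about, using the column names from the printout above; the values are copied from the visible rows and everything else is invented.

import pandas as pd

# Toy data shaped like the data frame shown above.
df = pd.DataFrame({
    "time": [1, 1, 1, 1, 1, 1],
    "partitioning_mode": ["sharding"] * 6,
    "workload": ["query"] * 6,
    "runtime": [607, 85, 52, 79, 77, 67],
})

# "count -> group by": number of rows and mean runtime per
# (partitioning_mode, workload) combination.
summary = (
    df.groupby(["partitioning_mode", "workload"])["runtime"]
      .agg(count="count", mean_runtime="mean")
      .reset_index()
)
print(summary)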
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
Raghavendra,
Sorry for the late follow-up. I have some more data on the issue.
The issue tends to happen when the shards are created. The easiest time
to reproduce this is during an initial VM disk format. This is a log
from a test VM that was launched, and then partitioned and formatted
with LVM / XFS:
[2018-04-03 02:05:00.838440] W [MSGID: 109048]
2011 Oct 23
1
unfold list (variable number of columns) into a data frame
Hello,
I used R a lot one year ago and now I am a bit rusty :)
I have my raw data which correspond to the list of runtimes per minute (minute "1" "2" "3" in two database modes "sharding" and "query" and two workload types "query" and "refresh") and as a list of char arrays that looks like this:
> str(data)
List of 122
$ :
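The str() output above is cut off, so the exact structure of the list isn't visible; purely as an illustration in Python, assuming each list element holds the per-minute runtimes for one mode/workload combination, the "unfold" step could look like this (all names and values are invented):

import pandas as pd

# Assumed input shape: one entry per run, each with a variable number of
# per-minute runtime strings.
raw = [
    {"partitioning_mode": "sharding", "workload": "query",   "runtimes": ["607", "85", "52"]},
    {"partitioning_mode": "sharding", "workload": "refresh", "runtimes": ["120", "98"]},
]

rows = []
for run in raw:
    for minute, runtime in enumerate(run["runtimes"], start=1):
        rows.append({
            "time": minute,
            "partitioning_mode": run["partitioning_mode"],
            "workload": run["workload"],
            "runtime": int(runtime),
        })

df = pd.DataFrame(rows)   # one row per minute, regardless of how many values each run had
print(df)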
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
Sorry for the delay, Ian :).
This looks to be a genuine issue that will require some effort to fix.
Can you file a bug? I need the following information attached to the bug:
* Client and brick logs. If you can reproduce the issue, please set
diagnostics.client-log-level and diagnostics.brick-log-level to TRACE. If
you cannot reproduce the issue or if you cannot accommodate such big logs,
please set
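For reference, the two options named above are set through the gluster CLI; a minimal sketch (the volume name is a placeholder) that raises both log levels to TRACE before reproducing the problem:

import subprocess

VOLUME = "myvol"   # placeholder: substitute the affected volume's name

# Raise client and brick log verbosity to TRACE, as requested above.
for option in ("diagnostics.client-log-level", "diagnostics.brick-log-level"):
    subprocess.run(["gluster", "volume", "set", VOLUME, option, "TRACE"], check=True)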
2018 Apr 22
4
Reconstructing files from shards
...metalit.com> wrote:
> Imho the easiest path would be to turn off sharding on the volume and
> simply do a copy of the files (to a different directory, or rename and
> then copy, for example)
>
> This should simply store the files without sharding.
>
If you turn off sharding on a sharded volume with data in it, all sharded
files would be unreadable
>
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> The gfid mismatch here is between the shard and its "link-to" file, the
> creation of which happens at a layer below that of shard translator on the
> stack.
>
> Adding DHT devs to take a look.
>
Thanks Krutika. I assume shard doesn't do any dentry operations like
rename,
2018 Mar 26
1
Sharding problem - multiple shard copies with mismatching gfids
Ian,
Do you have a reproducer for this bug? If not a specific one, a general
outline of what operations were done on the file will help.
regards,
Raghavendra
On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa <rgowdapp at redhat.com>
wrote:
>
>
> On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com>
> wrote:
>
>> The gfid mismatch
2018 Mar 25
2
Sharding problem - multiple shard copies with mismatching gfids
Hello all,
We are having a rather interesting problem with one of our VM storage
systems. The GlusterFS client is throwing errors relating to GFID
mismatches. We traced this down to multiple shards being present on the
gluster nodes, with different gfids.
Hypervisor gluster mount log:
[2018-03-25 18:54:19.261733] E [MSGID: 133010]
[shard.c:1724:shard_common_lookup_shards_cbk]
2018 Apr 23
1
Reconstructing files from shards
...a huge warning message
saying that data loss will happen
> I thought that sharding was an extended attribute on the files created when
> sharding is enabled.
>
> Turning off sharding on the volume would not turn off sharding on the files,
> but on newly created files ...
No, because sharded files are reconstructed on the fly based on the
volume's sharding property.
If you disable sharding, gluster knows nothing about the previous
shard configuration, and thus won't be able to read
all the shards for each file. It will only return the first shard,
resulting in data loss or corruption.
2018 Mar 26
0
Sharding problem - multiple shard copies with mismatching gfids
The gfid mismatch here is between the shard and its "link-to" file, the
creation of which happens at a layer below that of shard translator on the
stack.
Adding DHT devs to take a look.
-Krutika
On Mon, Mar 26, 2018 at 1:09 AM, Ian Halliday <ihalliday at ndevix.com> wrote:
> Hello all,
>
> We are having a rather interesting problem with one of our VM storage
>
2018 Apr 23
0
Reconstructing files from shards
...Imho the easiest path would be to turn off sharding on the volume and
> simply do a copy of the files (to a different directory, or rename
> and
> then copy, for example)
>
> This should simply store the files without sharding.
>
>
> If you turn off sharding on a sharded volume with data in it, all
> sharded files would be unreadable
Is that really so?
I thought that sharding was an extended attribute on the files created
when sharding is enabled.
Turning off sharding on the volume would not turn off sharding on the
files, but on newly created files ...
An...
2018 Apr 20
7
Reconstructing files from shards
....shard.file-size to get the size of a partly filled hole; that value looks like base64, but when I attempt to decode it, base64 complains about invalid input.
In short, I can't find sufficient information to reconstruct these. Has anyone written a current, step-by-step guide on reconstructing sharded files? Or has someone written a tool so I don't have to?
Thanks,
-j
[1] Why one would choose to annoy the crap out of their fellow gluster users by using video to convey about 80 bytes of ASCII-encoded information, I have no idea.
[2] http://lists.gluster.org/pipermail/gluster-devel/201...
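No official step-by-step guide appears in this thread, but as a rough sketch of the usual manual approach, and assuming the stock shard layout (the base file on the brick holds block 0, the remaining pieces live under the brick's .shard directory named "<gfid>.<index>", and the GFID comes from the base file's trusted.gfid xattr), naive reconstruction is just ordered concatenation:

import os
import sys

def reconstruct(base_file, shard_dir, gfid, out_path):
    """Naive sketch: concatenate block 0 (the base file) and .shard/<gfid>.<n>.

    Stops at the first missing index, so it does NOT handle holes; see the
    notes about holes and partially filled shards later in this listing.
    """
    with open(out_path, "wb") as out:
        with open(base_file, "rb") as f:      # block 0 lives at the file's own path
            out.write(f.read())
        index = 1
        while True:
            piece = os.path.join(shard_dir, "%s.%d" % (gfid, index))
            if not os.path.exists(piece):     # first gap: the naive version stops here
                break
            with open(piece, "rb") as f:
                out.write(f.read())
            index += 1

if __name__ == "__main__":
    reconstruct(*sys.argv[1:5])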
2018 Apr 23
1
Reconstructing files from shards
...work with files with holes.
Quoting from: http://lists.gluster.org/pipermail/gluster-devel/2017-March/052212.html
- - snip
1. A non-existent/missing shard anywhere between offset $SHARD_BLOCK_SIZE
through ceiling ($FILE_SIZE/$SHARD_BLOCK_SIZE)
indicates a hole. When you reconstruct data from a sharded file of this
nature, you need to take care to retain this property.
2. The above is also true for partially filled shards between offset
$SHARD_BLOCK_SIZE through ceiling ($FILE_SIZE/$SHARD_BLOCK_SIZE).
What do I mean by partially filled shards? Shards whose sizes are not equal
to $SHARD_BLOCK_SIZ...
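A sketch of what retaining those two properties could look like when writing the reassembled file: missing or short pieces are skipped over with seek() so they remain holes, and the output is truncated to the original logical size at the end. The file size and shard block size must be known (e.g. from the trusted.glusterfs.shard.file-size xattr and the volume's features.shard-block-size); all names here are illustrative, not a published tool.

import os

def rebuild_with_holes(base_file, shard_dir, gfid, out_path,
                       file_size, block_size=64 * 1024 * 1024):
    # Number of shard blocks = ceiling(FILE_SIZE / SHARD_BLOCK_SIZE).
    nblocks = -(-file_size // block_size)
    with open(out_path, "wb") as out:
        for index in range(nblocks):
            if index == 0:
                piece = base_file                      # block 0 is the base file
            else:
                piece = os.path.join(shard_dir, "%s.%d" % (gfid, index))
            if os.path.exists(piece):
                out.seek(index * block_size)           # seeking past a gap leaves a hole
                with open(piece, "rb") as f:
                    out.write(f.read())                # a short piece leaves the rest of
                                                       # its block as a hole too
            # a missing piece is a hole: write nothing, just leave the gap
        out.truncate(file_size)                        # fix up the final logical size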
2017 Sep 03
3
Poor performance with shard
Hey everyone!
I have deployed gluster on 3 nodes with 4 SSDs each and 10Gb Ethernet
connection.
The storage is configured with 3 gluster volumes, every volume has 12
bricks (4 bricks on every server, 1 per ssd in the server).
With the 'features.shard' option off, my writing speed (using the 'dd'
command) is approximately 250 Mbs, and when the feature is on the writing
speed is
2017 Dec 08
2
Testing sharding on tiered volume
Hi,
I'm looking to use sharding on a tiered volume. This is a very attractive
feature that could help a tiered volume handle larger files
without hitting the "out of (hot) space" problem.
I decided to set up a test configuration on GlusterFS 3.12.3 where the tiered volume
has 2TB cold and 1GB hot segments. The shard size is set to 16MB.
For testing, 100GB files are used. It seems writes
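Some quick arithmetic on this setup (illustrative only) shows why the hot tier fills up immediately: with 16MB shards, a single 100GB file breaks into 6400 pieces, while the 1GB hot segment can hold only 64 of them.

shard_size = 16 * 1024 ** 2      # 16 MB shard size from the test setup above
file_size = 100 * 1024 ** 3      # one 100 GB test file
hot_tier = 1 * 1024 ** 3         # 1 GB hot segment

pieces_per_file = -(-file_size // shard_size)
print(pieces_per_file)           # 6400 pieces per file
print(hot_tier // shard_size)    # only 64 pieces fit in the hot tier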
2018 Apr 23
0
Reconstructing files from shards
...ile-size to get the size of a partly filled hole; that value looks like base64, but when I attempt to decode it, base64 complains about invalid input.
>
> In short, I can't find sufficient information to reconstruct these. Has anyone written a current, step-by-step guide on reconstructing sharded files? Or has someone written a tool so I don't have to?
>
> Thanks,
>
> -j
>
>
> [1] Why one would choose to annoy the crap out of their fellow gluster users by using video to convey about 80 bytes of ASCII-encoded information, I have no idea.
> [2] http://lists.glu...
2017 Dec 18
0
Testing sharding on tiered volume
...eq times out. You may end up spending a lot of time waiting for your hot files to rebalance to the cold tier since it's out of space, and you will also probably have other files being written to the cold tier while the hot tier is full, further using up your IOPS.
I don't know how tiering would treat sharded files: would it only promote the shards of the file that are in use, or would it try to put the whole file / all the shards on the hot tier?
If you get a free minute, update me on what you are trying to do; happy to help however I can.
-b
>
> Best regards,
>
> Viktor Nosov
>
>...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote:
> Hi,
>
> Thank you for the answer and sorry for delay:
>
> 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> 1. What does the glustershd.log say on all 3 nodes when you run
> the command? Does it complain about these files?
>
>
>
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> Could you check if the self-heal daemon on all nodes is connected to the 3
> bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume start
> engine force`, then launch the heal command like you did earlier and see if
> heals