Displaying 20 results from an estimated 4000 matches similar to: "Testing sharding on tiered volume"
2017 Dec 18
0
Testing sharding on tiered volume
----- Original Message -----
> From: "Viktor Nosov" <vnosov at stonefly.com>
> To: gluster-users at gluster.org
> Cc: vnosov at stonefly.com
> Sent: Friday, December 8, 2017 5:45:25 PM
> Subject: [Gluster-users] Testing sharding on tiered volume
>
> Hi,
>
I'm looking to use sharding on a tiered volume. This is a very attractive
feature that could
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or to work-around the
following issue?
Thanks!
Bug 1549714 - On sharded tiered volume, only first shard of new file
goes on hot tier.
https://bugzilla.redhat.com/show_bug.cgi?id=1549714
On sharded tiered volume, only first shard of new file goes on hot tier.
On a sharded tiered volume, only the first shard of a new file
goes on the hot tier, the rest
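(Not from the original report; a rough way to observe the placement described above, assuming hypothetical brick paths /bricks/hot1 and /bricks/cold1 and a placeholder <GFID> taken from the base file's trusted.gfid xattr. Shards 1..N are stored as <GFID>.<index> under the .shard directory of whichever tier they were written to:)
# ls -lh /bricks/hot1/.shard/ | grep <GFID>
# ls -lh /bricks/cold1/.shard/ | grep <GFID>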
2018 Jan 18
1
Deploying geo-replication to local peer
Hi Kotresh,
Thanks for response!
After running more tests with this specific geo-replication configuration I realized that
the file extended attributes trusted.gfid and trusted.gfid2path.*** are synced as well during geo-replication.
I'm concerned about the attribute trusted.gfid because the value of that attribute has to be unique for the glusterfs cluster.
But this is not the case in my tests. File on
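(A minimal check of that concern, not from the original mail, assuming hypothetical brick paths on the master and slave nodes; trusted.gfid is visible on the brick backend rather than through the client mount:)
# getfattr -n trusted.gfid -e hex /bricks/master-brick/dir/file
# getfattr -n trusted.gfid -e hex /bricks/slave-brick/dir/file
If geo-replication preserves the gfid, the two values will match; for a file created independently on a separate volume one would normally expect them to differ.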
2018 Jan 16
2
Deploying geo-replication to local peer
Hi,
I'm looking for a glusterfs feature that can be used to transform data between
volumes of different types provisioned on the same nodes.
It could be, for example, a transformation from a disperse to a distributed
volume.
One possible option is to invoke geo-replication between the volumes. It seems
to work properly.
But I'm concerned about a requirement from the Administration Guide for Red Hat
Gluster
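(For reference, a rough outline of the commands involved in such a setup, assuming placeholder names "mastervol", "slavevol" and "slavehost"; the slave volume must already exist, and running both ends on the same nodes is exactly the non-standard part being asked about here:)
# gluster system:: execute gsec_create
# gluster volume geo-replication mastervol slavehost::slavevol create push-pem
# gluster volume geo-replication mastervol slavehost::slavevol start
# gluster volume geo-replication mastervol slavehost::slavevol status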
2018 Jan 17
0
Deploying geo-replication to local peer
Hi Viktor,
Answers inline
On Wed, Jan 17, 2018 at 3:46 AM, Viktor Nosov <vnosov at stonefly.com> wrote:
> Hi,
>
> I'm looking for a glusterfs feature that can be used to transform data
> between
> volumes of different types provisioned on the same nodes.
> It could be, for example, a transformation from a disperse to a distributed
> volume.
> One possible option is to
2018 Jan 30
2
Tiered volume performance degrades badly after a volume stop/start or system restart.
I am fighting this issue:
Bug 1540376 - Tiered volume performance degrades badly after a
volume stop/start or system restart.
https://bugzilla.redhat.com/show_bug.cgi?id=1540376
Does anyone have any ideas on what might be causing this, and
what a fix or work-around might be?
Thanks!
~ Jeff Byers ~
Tiered volume performance degrades badly after a volume
stop/start or system restart.
The
2018 Jan 31
1
Tiered volume performance degrades badly after a volume stop/start or system restart.
Tested it in two different environments lately with exactly the same results.
I was trying to get better read performance from local mounts with
hundreds of thousands of maildir email files by using an SSD,
hoping that the .gluster file stat reads would improve, since those do migrate
to the hot tier.
After seeing what you described for 24 hours, and confirming that all the
movement between the tiers was done, I killed it.
Here are my
2018 Feb 01
0
Tiered volume performance degrades badly after a volume stop/start or system restart.
This problem appears to be related to the sqlite3 DB files
that are used for the tiering file access counters, stored on
each hot and cold tier brick in .glusterfs/<volname>.db.
When the tier is first created, these DB files do not exist,
so they are created, and everything works fine.
On a stop/start or service restart, the .db files are already
present, albeit empty since I don't have
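(Not from the original mail; a read-only way to peek at those counter files, assuming a hypothetical brick path and a placeholder volume name, and working on a copy rather than the live DB:)
# cp /bricks/hot1/.glusterfs/<volname>.db /tmp/tier.db
# sqlite3 /tmp/tier.db ".tables"
# sqlite3 /tmp/tier.db ".schema"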
2018 Mar 26
1
Sharding problem - multiple shard copies with mismatching gfids
Ian,
Do you have a reproducer for this bug? If not a specific one, a general
outline of what operations were done on the file will help.
regards,
Raghavendra
On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa <rgowdapp at redhat.com>
wrote:
>
>
> On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com>
> wrote:
>
>> The gfid mismatch
2018 Apr 20
7
Reconstructing files from shards
Hello,
So I have a volume on a gluster install (3.12.5) on which sharding was enabled at some point recently. (I don't know how it happened; it may have been an accidental run of an old script.) It has been happily sharding behind our backs when it shouldn't have.
I'd like to turn sharding off and revert the files back to normal. Some of these are sparse files, so I need to account
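(One hedged approach, not from the original mail: while sharding is still enabled, the FUSE mount presents each file whole, so the files can be copied out through the mount with sparseness preserved before sharding is touched; the paths below are placeholders:)
# cp --sparse=always /mnt/gvol/images/vm1.img /backup/vm1.img
or
# rsync -avS /mnt/gvol/images/ /backup/images/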
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> The gfid mismatch here is between the shard and its "link-to" file, the
> creation of which happens at a layer below that of shard translator on the
> stack.
>
> Adding DHT devs to take a look.
>
Thanks Krutika. I assume shard doesn't do any dentry operations like
rename,
2018 Mar 26
0
Sharding problem - multiple shard copies with mismatching gfids
The gfid mismatch here is between the shard and its "link-to" file, the
creation of which happens at a layer below that of shard translator on the
stack.
Adding DHT devs to take a look.
-Krutika
On Mon, Mar 26, 2018 at 1:09 AM, Ian Halliday <ihalliday at ndevix.com> wrote:
> Hello all,
>
> We are having a rather interesting problem with one of our VM storage
>
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
Sorry for the delay, Ian :).
This looks to be a genuine issue which will require some effort to fix.
Can you file a bug? I need the following information attached to the bug:
* Client and brick logs. If you can reproduce the issue, please set
diagnostics.client-log-level and diagnostics.brick-log-level to TRACE. If
you cannot reproduce the issue or if you cannot accommodate such big logs,
please set
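(The option names are given in the mail above; as a sketch, with <volname> as a placeholder, remembering to reset the levels afterwards since TRACE logs grow very quickly:)
# gluster volume set <volname> diagnostics.client-log-level TRACE
# gluster volume set <volname> diagnostics.brick-log-level TRACE
... reproduce the problem and collect the logs, then:
# gluster volume reset <volname> diagnostics.client-log-level
# gluster volume reset <volname> diagnostics.brick-log-level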
2018 Apr 23
0
Reconstructing files from shards
From some old May 2017 email. I asked the following:
"From the docs, I see you can identify the shards by the GFID
# getfattr -d -m. -e hex path_to_file
# ls /bricks/*/.shard -lh | grep GFID
Is there a gluster tool/script that will recreate the file?
Or can you just sort them properly and then simply cat/copy
them back together?
cat shardGFID.1 .. shardGFID.X > thefile
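(A rough single-brick sketch of that idea, assuming the default 64MB features.shard-block-size, that one brick holds the base file and all of its shards, and that <GFID> and the highest shard index <X> are filled in by hand; dd with seek= places each shard at its offset, which also copes with shards that are missing because of sparse regions:)
# cp /bricks/brick1/path/to/file /tmp/restored
# for i in $(seq 1 <X>); do dd if=/bricks/brick1/.shard/<GFID>.$i of=/tmp/restored bs=64M seek=$i conv=notrunc 2>/dev/null; done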
2018 Mar 25
2
Sharding problem - multiple shard copies with mismatching gfids
Hello all,
We are having a rather interesting problem with one of our VM storage
systems. The GlusterFS client is throwing errors relating to GFID
mismatches. We traced this down to multiple shards being present on the
gluster nodes, with different gfids.
Hypervisor gluster mount log:
[2018-03-25 18:54:19.261733] E [MSGID: 133010]
[shard.c:1724:shard_common_lookup_shards_cbk]
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
Raghavendra,
Sorry for the late follow up. I have some more data on the issue.
The issue tends to happen when the shards are created. The easiest time
to reproduce this is during an initial VM disk format. This is a log
from a test VM that was launched, and then partitioned and formatted
with LVM / XFS:
[2018-04-03 02:05:00.838440] W [MSGID: 109048]
2018 Apr 23
1
Reconstructing files from shards
> On Apr 23, 2018, at 10:49 AM, WK <wkmail at bneit.com> wrote:
>
> From some old May 2017 email. I asked the following:
> "From the docs, I see you can identify the shards by the GFID
> # getfattr -d -m. -e hex path_to_file
> # ls /bricks/*/.shard -lh | grep GFID
>
> Is there a gluster tool/script that will recreate the file?
>
> or can you just sort
2018 Apr 23
1
Reconstructing files from shards
2018-04-23 9:34 GMT+02:00 Alessandro Briosi <ab1 at metalit.com>:
> Is it really so?
Yes, I've opened a bug asking the developers to block removal of sharding
when the volume has data on it, or to write a huge warning message
saying that data loss will happen.
> I thought that sharding was an extended attribute on the files created when
> sharding is enabled.
>
> Turning off
2018 Feb 09
1
Tiering Volumes
Hello everyone.
I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier"
volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the
tiers for each volume?
I will be adding 2 more HDDs to each server. I would then like to change
from Replicate to Distributed-Replicate. Not sure if that makes a
difference in the tiering setup.
[root at
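(As far as I understand gluster tiering, the hot tier is not a second volume; it is attached to an existing volume, after which gluster migrates files between the two sets of bricks. A rough sketch with placeholder brick paths, assuming an existing replica-3 volume named coldvol:)
# gluster volume tier coldvol attach replica 3 server1:/nvme/brick server2:/nvme/brick server3:/nvme/brick
# gluster volume tier coldvol status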
2018 Apr 22
4
Reconstructing files from shards
On Sun 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> wrote:
> Imho the easiest path would be to turn off sharding on the volume and
> simply do a copy of the files (to a different directory, or rename and
> then copy, for example)
>
> This should simply store the files without sharding.
>
If you turn off sharding on a sharded volume with data in it, all sharded