Displaying 20 results from an estimated 4000 matches similar to: "btrfs RAID1 woes and tiered storage"
2012 Feb 13
1
Cross-subvolume reflink copy (BTRFS_IOC_CLONE over subvolume boundaries)
It's been nearly a year since the patches needed to implement a reflinked copy
between subvolumes were posted
(http://permalink.gmane.org/gmane.comp.file-systems.btrfs/9865) and I still
get an "Invalid cross-device link" error with Linux 3.2.4 when I try to do a cp
--reflink between subvolumes.
This is a *very* useful feature to have (think offline file-level
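A minimal sketch of the operation in question (paths and file names are hypothetical; the cross-subvolume clone support discussed here was merged later, in Linux 3.6 as far as I know):

```shell
# Hypothetical btrfs layout: /mnt/btrfs with subvolumes 'data' and 'backup'.
# On kernels without the cross-subvolume clone patches this fails with
# "Invalid cross-device link" (EXDEV):
cp --reflink=always /mnt/btrfs/data/big.img /mnt/btrfs/backup/big.img

# --reflink=auto falls back to an ordinary copy when cloning is unsupported:
cp --reflink=auto /mnt/btrfs/data/big.img /mnt/btrfs/backup/big.img
```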
2012 Jan 05
1
Set primary group of file on samba share from windows
Hi all!
I want to use 'acl group control' setting to delegate privileges to specific
administrators.
Unfortunately, I'm unable to set the primary group using windows file
permissions dialog, I can only add and remove ACL groups.
I tried to do this by removing all groups but one from Windows. This doesn't
work: the primary group has all privs removed and the extra group is
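A hedged sketch of the kind of share definition involved (share name and path are made up; `acl group control` is the smb.conf parameter referred to above):

```ini
[projects]
    path = /srv/samba/projects
    read only = no
    # Allow members of a file's owning group to modify its permissions,
    # which is what delegating ACL control to group admins relies on:
    acl group control = yes
```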
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or work around, the
following issue?
Thanks!
Bug 1549714 - On sharded tiered volume, only first shard of new file
goes on hot tier.
https://bugzilla.redhat.com/show_bug.cgi?id=1549714
On sharded tiered volume, only first shard of new file goes on hot tier.
On a sharded tiered volume, only the first shard of a new file
goes on the hot tier, the rest
2017 Dec 18
0
Testing sharding on tiered volume
----- Original Message -----
> From: "Viktor Nosov" <vnosov at stonefly.com>
> To: gluster-users at gluster.org
> Cc: vnosov at stonefly.com
> Sent: Friday, December 8, 2017 5:45:25 PM
> Subject: [Gluster-users] Testing sharding on tiered volume
>
> Hi,
>
> I'm looking to use sharding on a tiered volume. This is a very attractive
> feature that could
2023 Jan 11
1
Upgrading system from non-RAID to RAID1
I plan to upgrade an existing C7 computer, which currently has one 256 GB SSD, to use mdadm software RAID1 after adding two 4 TB M.2 SSDs, the rest of the system remaining the same. The system also has one additional internal and one external hard disk but these should not be touched. The system will continue to run C7.
If I remember correctly, the existing SSD does not use an M.2 slot so they
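For illustration, a hedged sketch of the mdadm step (device names /dev/nvme0n1 and /dev/nvme1n1 are assumptions, not from the post):

```shell
# Mirror one partition from each new M.2 SSD into a RAID1 array
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/nvme0n1p1 /dev/nvme1n1p1

# Record the array so it assembles at boot
mdadm --detail --scan >> /etc/mdadm.conf
```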
2017 Dec 08
2
Testing sharding on tiered volume
Hi,
I'm looking to use sharding on a tiered volume. This is a very attractive
feature that could benefit a tiered volume by letting it handle larger files
without hitting the "out of (hot) space" problem.
I decided to set up a test configuration on GlusterFS 3.12.3 where the tiered
volume has 2TB cold and 1GB hot segments. Shard size is set to 16MB.
For testing 100GB files are used. It seems writes
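The configuration described corresponds roughly to these CLI options (volume and brick names are placeholders; `features.shard` and `features.shard-block-size` are the actual sharding options):

```shell
gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 16MB
# Attach the small SSD hot tier on top of the large cold volume:
gluster volume tier myvol attach replica 2 ssd1:/bricks/hot ssd2:/bricks/hot
```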
2018 Jan 31
1
Tiered volume performance degrades badly after a volume stop/start or system restart.
Tested it in two different environments lately with exactly the same results.
I was trying to get better read performance from local mounts with
hundreds of thousands of maildir email files by using SSD,
hoping that .gluster file stat reads would improve once files
migrate to the hot tier.
After seeing what you described for 24 hours and confirming all movement
between the tiers was done, I killed it.
Here are my
2007 Oct 02
2
Folder renaming oddities.
Hi Timo,
We are observing some weird behaviour when we try to rename an inferior
folder, followed by the superior folder.
These folders have an asterisk in the name.
* LIST (\HasChildren) "." "*Own Family"
* LIST (\HasChildren) "." "*Own Family.Tour"
* LIST (\HasNoChildren) "." "*Own Family.Maid"
* LIST (\HasNoChildren)
2023 Jan 12
2
Upgrading system from non-RAID to RAID1
On 01/11/2023 01:33 PM, H wrote:
> On 01/11/2023 02:09 AM, Simon Matter wrote:
>> What I usually do is this: "cut" the large disk into several pieces of
>> equal size and create individual RAID1 arrays. Then add them as LVM PVs to
>> one large VG. The advantage is that with one error on one disk, you won't
>> lose redundancy on the whole RAID mirror but only on
2018 Jan 30
2
Tiered volume performance degrades badly after a volume stop/start or system restart.
I am fighting this issue:
Bug 1540376 - Tiered volume performance degrades badly after a
volume stop/start or system restart.
https://bugzilla.redhat.com/show_bug.cgi?id=1540376
Does anyone have any ideas on what might be causing this, and
what a fix or work-around might be?
Thanks!
~ Jeff Byers ~
Tiered volume performance degrades badly after a volume
stop/start or system restart.
The
2018 Feb 01
0
Tiered volume performance degrades badly after a volume stop/start or system restart.
This problem appears to be related to the sqlite3 DB files
that are used for the tiering file access counters, stored on
each hot and cold tier brick in .glusterfs/<volname>.db.
When the tier is first created, these DB files do not exist,
they are created, and everything works fine.
On a stop/start or service restart, the .db files are already
present, albeit empty since I don't have
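For anyone reproducing this, the per-brick counter DB can be inspected directly with sqlite3 (the brick path and volume name are placeholders; the table name gf_file_tb is my assumption about the gfdb schema, so verify it with .tables first):

```shell
sqlite3 /bricks/brick1/.glusterfs/myvol.db '.tables'
sqlite3 /bricks/brick1/.glusterfs/myvol.db 'SELECT COUNT(*) FROM gf_file_tb;'
```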
2023 Jan 12
1
Upgrading system from non-RAID to RAID1
> Follow-up question: Is my proposed strategy below correct:
> - Make a copy of all existing directories and files on the current disk using clonezilla.
> - Install the new M.2 SSDs.
> - Partition the new SSDs for RAID1 using an external tool.
> - Do a minimal installation of C7 and mdraid.
> - If choosing three RAID partitions, one for /boot, one for /boot/efi and the
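A hedged sketch of what that partitioning might look like (devices, sizes, and partition order are assumptions):

```shell
sgdisk -n1:0:+512M -t1:EF00 /dev/nvme0n1   # /boot/efi (EFI System)
sgdisk -n2:0:+1G   -t2:FD00 /dev/nvme0n1   # /boot (RAID member)
sgdisk -n3:0:0     -t3:FD00 /dev/nvme0n1   # remainder (RAID member)
# Same layout on /dev/nvme1n1, then e.g. mirror /boot with old-style
# metadata so the bootloader can read it as a plain filesystem:
mdadm --create /dev/md1 --level=1 --metadata=1.0 \
      --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
```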
2007 May 18
1
High Latency With Tiered Queues
Hello,
I'm trying to set up what I thought would be a fairly basic tiered
shaping system. I have a 6mbit (768kbps) link coming into my eth1
device, with my LAN IPs on the eth0 device. I want to limit outgoing
traffic so that certain IPs are limited to 400kbps, with 3 classes under
that 400k so certain machines get prioritized (main servers in 1:21,
other servers in 1:22, workstations
2023 Jan 12
1
Upgrading system from non-RAID to RAID1
> On 01/11/2023 01:33 PM, H wrote:
>> On 01/11/2023 02:09 AM, Simon Matter wrote:
>>> What I usually do is this: "cut" the large disk into several pieces of
>>> equal size and create individual RAID1 arrays. Then add them as LVM PVs
>>> to
>>> one large VG. The advantage is that with one error on one disk, you
>>> won't
>>>
2023 Jan 12
1
Upgrading system from non-RAID to RAID1
> On 01/11/2023 02:09 AM, Simon Matter wrote:
>> What I usually do is this: "cut" the large disk into several pieces of
>> equal size and create individual RAID1 arrays. Then add them as LVM PVs
>> to
>> one large VG. The advantage is that with one error on one disk, you won't
>> lose redundancy on the whole RAID mirror but only on a partial segment.
2006 Jul 21
1
Handling a tiered subscription service
Hi all,
I'm working on a project that involves a tiered pricing structure for
various features, all of which are accessed via a secure admin area.
The features available in each tier, the tier structure and pricing
will change over time, so new tier schemes will come into play,
requiring feature/pricing schemes to be timestamped.
I guess what I'm aiming for is
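A minimal sketch of the effective-dated scheme idea (in Python purely to illustrate the data model, not Rails; all names are made up):

```python
from bisect import bisect_right
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TierScheme:
    effective_from: date   # scheme applies from this date onward
    prices: dict           # tier name -> monthly price

# Kept sorted by effective_from; a pricing change just appends a new scheme.
SCHEMES = [
    TierScheme(date(2006, 1, 1), {"basic": 5, "pro": 15}),
    TierScheme(date(2006, 7, 1), {"basic": 7, "pro": 20, "enterprise": 50}),
]

def scheme_at(when: date) -> TierScheme:
    """Return the scheme in force at `when` (latest effective_from <= when)."""
    i = bisect_right([s.effective_from for s in SCHEMES], when) - 1
    if i < 0:
        raise ValueError(f"no scheme in force at {when}")
    return SCHEMES[i]

# A subscription would then record which scheme was current when it was
# created, so existing customers keep their old pricing after a rollout.
```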
2023 Jan 11
2
Upgrading system from non-RAID to RAID1
On 01/11/2023 02:09 AM, Simon Matter wrote:
> What I usually do is this: "cut" the large disk into several pieces of
> equal size and create individual RAID1 arrays. Then add them as LVM PVs to
> one large VG. The advantage is that with one error on one disk, you won't
> lose redundancy on the whole RAID mirror but only on a partial segment.
> You can even lose another
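A hedged sketch of that layout (device and partition names are assumptions):

```shell
# Several small RAID1 arrays instead of one disk-sized mirror:
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Pool them into one volume group; an error then degrades only one array:
pvcreate /dev/md10 /dev/md11 /dev/md12
vgcreate bigvg /dev/md10 /dev/md11 /dev/md12
```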
2011 Jan 25
3
How to fasten btrfs?
Hi,
I am using a 2.6.36.3 kernel with btrfs, 512MB memory and a very slow
disk, with no special mount options for btrfs except noatime. I find
it very slow: when I rm a 5GB movie, it takes 20 secs.
--
Dense bamboo cannot stop the flowing water;
high mountains cannot block the wild clouds.
2017 Nov 03
1
Ignore failed connection messages during copying files with tiering
Hi, All,
We created a GlusterFS cluster with tiers. The hot tier is a
distributed-replicated volume on SSDs. The cold tier is an n*(6+2) disperse
volume. When copying millions of files to the cluster, we see these logs:
W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
or directory)
W
2010 Mar 10
39
SSD Optimizations
I'm looking to try BTRFS on a SSD, and I would like to know what SSD
optimizations it applies. Is there a comprehensive list of what the ssd
mount option does? How are the blocks and metadata arranged? Are there
options available, comparable to ext2/ext3, to help reduce wear and
improve performance?
Specifically, on ext2 (journal means more writes, so I don't use ext3 on
SSDs,
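For what it's worth, the commonly discussed options can also be given explicitly at mount time (the device is a placeholder; btrfs auto-detects non-rotational media and enables ssd by itself on recent kernels, as far as I know):

```shell
mount -o ssd,noatime,discard /dev/sdb1 /mnt
```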