similar to: time for "balance"

Displaying 20 results from an estimated 2000 matches similar to: "time for "balance""

2011 Apr 09
16
wrong values in "df" and "btrfs filesystem df"
Hello, linux-btrfs, First I create an array of 2 disks with mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1 and mount it at /srv/MM. Then I fill it with about 1.6 TByte. And then I add /dev/sde1 via btrfs device add /dev/sde1 /srv/MM and btrfs filesystem balance /srv/MM (it ran about 20 hours). Then I work on it, copy some new files, delete some old files - all works well. Only df
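A minimal sketch of the sequence described above, for reference (device names and mount point are the poster's; exact output varies by btrfs-progs version, and the difference between plain df and btrfs filesystem df is what the thread is about):

    mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1
    mount /dev/sdb1 /srv/MM
    btrfs device add /dev/sde1 /srv/MM
    btrfs filesystem balance /srv/MM      # redistributes existing chunks across all three devices
    btrfs filesystem df /srv/MM           # per-profile allocation; plain df shows raw device space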
2013 May 01
9
Best Practice - Partition, or not?
Hello, If I want to manage a complete disk with btrfs, what's the "Best Practice"? Would it be best to create the btrfs filesystem on "/dev/sdb", or would it be better to create just one partition from start to end and then do "mkfs.btrfs /dev/sdb1"? Would the same recommendation hold true if we're talking about huge disks, like 4TB or so?
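The two options under discussion, as a sketch (/dev/sdb is the poster's example disk; the parted invocation is one possible way to create the single full-size partition, and GPT is assumed because MBR tops out around 2 TiB):

    # whole-disk filesystem
    mkfs.btrfs /dev/sdb
    # or: one partition spanning the disk, then the filesystem on the partition
    parted /dev/sdb mklabel gpt
    parted /dev/sdb mkpart primary 0% 100%
    mkfs.btrfs /dev/sdb1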
2011 Apr 01
15
btrfs balancing start - and stop?
Hi, My company is testing btrfs (kernel 2.6.38) on a slave MySQL database server with a 195GB filesystem (of which about 123GB is used). So far, we're quite impressed with the performance. Our database loads are high, and if filesystem performance wasn't good, MySQL replication wouldn't be able to keep up and the slave latency would begin to climb. This though, is
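For reference, later kernels and btrfs-progs expose separate start/status/cancel subcommands for balance; on 2.6.38 only the blocking "btrfs filesystem balance" form exists. A sketch assuming a newer kernel, with a hypothetical mount point since the thread does not name one:

    btrfs balance start /var/lib/mysql
    btrfs balance status /var/lib/mysql
    btrfs balance cancel /var/lib/mysql   # stops after the block group currently being relocated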
2012 May 05
5
Is it possible to reclaim block groups once they are allocated to data or metadata?
Hello list, I recently reformatted my home partition from XFS to RAID1 btrfs. I used the default options to mkfs.btrfs except for enabling raid1 for data as well as metadata. The filesystem is made up of two 1TB drives. mike@mercury (0) pts/3 ~ $ sudo btrfs filesystem show Label: none uuid: f08a8896-e03e-4064-9b94-9342fb547e47 Total devices 2 FS bytes used 888.06GB devid 1 size 931.51GB used
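Empty or nearly-empty block groups can be handed back with a filtered balance; the usage filter needs kernel 3.3 or later, so it may be newer than the setup described here. A sketch, assuming the filesystem is mounted at /home (an assumption, the thread does not say):

    btrfs balance start -dusage=0 /home    # drop completely empty data block groups
    btrfs balance start -dusage=20 /home   # also repack data block groups less than 20% full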
2017 Jun 26
1
mirror block devices
Hi folks, I have to migrate a set of iSCSI backstores to a new target over the network. To reduce downtime I would like to mirror the active volumes first, then stop the initiators, and then do a final incremental sync. The backstores are between 256 GByte and 1 TByte each; in total it's about 8 TByte. Of course I have found the --copy-devices patch, but I wonder if this works as expected? Is
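One rough sketch of the pattern being described: a block-level full copy while the volumes are still active, then a final pass after stopping the initiators. Device paths and host name are placeholders, and whether a patched rsync with --copy-devices really handles the device-to-device case correctly is exactly the open question in the thread:

    # initial full copy over the network while the backstore is still in use
    dd if=/dev/vg0/lun1 bs=4M | ssh newtarget 'dd of=/dev/vg0/lun1 bs=4M'
    # final pass with an rsync built with the --copy-devices patch;
    # --inplace writes into the existing destination rather than a temporary file
    rsync -av --copy-devices --inplace /dev/vg0/lun1 newtarget:/dev/vg0/lun1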
2013 May 10
5
Btrfs balance invalid argument error
Hi list, I am using kernel 3.9.0, btrfs-progs 0.20-rc1-253-g7854c8b. I have a three-disk array of level single: # btrfs fi sh Label: none uuid: 2e905f8f-e525-4114-afa6-cce48f77b629 Total devices 3 FS bytes used 3.80TB devid 1 size 2.73TB used 2.25TB path /dev/sdd devid 2 size 2.73TB used 1.55TB path /dev/sdc devid 3 size 2.73TB used 0.00 path /dev/sdb
2010 Jun 28
1
AEC does not work for me at all.
Hello, all. 1) AEC does not work for me I am in a VoIP project using Speex, and failed to get the Speex AEC (echo canceller) to work. Here is how I initialize it: /** * Configurations : * #define BITS_PER_SAMPLE (16) * #define SAMPLE_RATE (8000) * #define CHANNEL_NB (1) * #define DURATION (20) * SPEEX_MODEID_NB */ _eco_state = speex_echo_state_init(_encframe_size, 10*_encframe_size); speex_echo_ctl(_eco_state,
2013 Feb 06
3
btrfs balance -> hang/crash
Hi, my btrfs "hangs" when doing a balance operation. I'm using a 3.7.1 kernel from openSUSE: linux-opzz 3.7.1-2.10-m4 #11 SMP PREEMPT Fri Jan 11 18:04:04 CET 2013 x86_64 x86_64 x86_64 GNU/Linux and Btrfs v0.19+ I did a scrub which completed without errors. Then I tried "btrfs filesystem balance /" which worked fine for the first 23 of 46 chunks, then it stopped
2015 Jun 10
2
[PATCH] New API: btrfs_replace_start
Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com> --- daemon/btrfs.c | 40 +++++++++++++++++++++++++++++++++++++++ generator/actions.ml | 19 +++++++++++++++++++ tests/btrfs/test-btrfs-devices.sh | 8 ++++++++ 3 files changed, 67 insertions(+) diff --git a/daemon/btrfs.c b/daemon/btrfs.c index 39392f7..acc300d 100644 --- a/daemon/btrfs.c +++
2015 Jun 12
2
Re: [PATCH] New API: btrfs_replace_start
On 2015-06-12 17:12, Pino Toscano wrote: > On Friday 12 June 2015 10:58:34 Pino Tsao wrote: >> Hi, >> >> On 2015-06-11 17:43, Pino Toscano wrote: >>> Hi, >>> >>> On Wednesday 10 June 2015 17:54:18 Pino Tsao wrote: >>>> Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com> >>>> --- >>>> daemon/btrfs.c
2006 Aug 04
3
OCFS2 and ASM Question
Ok guys & gals here is the scenario: 1.) Host RHEL 4 U3 2.6.9-34.0.2.EL 2.) OCFS2 latest version 3.) Successfully formatted & mounted OCFS2 filesystems on 2 nodes /dev/sdb1 /u02/oradata/usdev/voting /dev/sdc1 /u02/oradata/usdev/data01 /dev/sdd1 /u02/oradata/usdev/data02 /dev/sde1 /u02/oradata/usdev/data03 4.) Downloaded & installed ASMLib 2.0 on both nodes 5.) Ran
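A sketch of what steps 3-5 look like as commands; labels, node-slot count and the ASM disk device are illustrative, not taken from the thread:

    mkfs.ocfs2 -N 2 -L data01 /dev/sdc1
    mount -t ocfs2 /dev/sdc1 /u02/oradata/usdev/data01
    # ASMLib, by contrast, marks raw devices for ASM instead of mounting a filesystem:
    /etc/init.d/oracleasm createdisk DATA1 /dev/sdf1   # /dev/sdf1 is hypothetical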
2015 Jun 12
2
Re: [PATCH] New API: btrfs_replace_start
Hi, On 2015-06-11 17:43, Pino Toscano wrote: > Hi, > > On Wednesday 10 June 2015 17:54:18 Pino Tsao wrote: >> Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com> >> --- >> daemon/btrfs.c | 40 +++++++++++++++++++++++++++++++++++++++ >> generator/actions.ml | 19 +++++++++++++++++++ >> tests/btrfs/test-btrfs-devices.sh |
2015 Feb 28
9
Looking for a life-save LVM Guru
Dear All, I am in desperate need of LVM data rescue for my server. I have a VG called vg_hosting consisting of 4 PVs, each contained in a separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1). The LV lv_home was created to use all the space of the 4 PVs. Right now, the third hard drive is damaged, and therefore the third PV (/dev/sdc1) cannot be accessed anymore. I would like
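For reference, the usual LVM tools for a missing PV, sketched with the poster's VG name; how much of lv_home survives depends entirely on how its extents were laid out across the four drives:

    vgchange -ay --partial vg_hosting      # activate what can be activated without the missing PV
    vgreduce --removemissing vg_hosting    # drop the lost PV from the VG metadata; its extents are gone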
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
Hi, When I upgraded my cluster, df started returning some odd numbers for my legacy volumes. For volumes newly created after the upgrade, df works just fine. I have been researching since Monday and have not found any reference to this symptom. "vm-images" is the old legacy volume, "test" is the new one. [root@st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep
2012 Nov 21
5
mixing WD20EFRX and WD2002FYPS in one pool
Hi, after a flaky 8-drive Linux RAID10 just shredded about 2 TByte worth of my data at home (conveniently just before I could make a backup), I've decided to go both full redundancy and all-ZFS at home. A couple of questions: is there a way to make the WD20EFRX (2 TByte, 4k sectors) and the WD2002FYPS (4k internally, reported as 512 bytes?) work well together on a current OpenIndiana? Which
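Mixing native-4k and 512-byte-emulated drives usually comes down to forcing the pool's sector size at creation time. A sketch, assuming a ZFS build that accepts the ashift pool property at create time (ZFS on Linux and current OpenZFS do; older OpenIndiana releases may not), with placeholder device names:

    zpool create -o ashift=12 tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0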
2013 Dec 02
2
backup mdbox best strategy
Hello, I have to back up (to a tape library) a mail system with about 300,000 mailboxes on 2 backends. The sum of all mailboxes is 2 TByte. The mail store is mdbox. Is it safe to do a simple filesystem backup (full and incremental) with backup software? What is the preferred strategy for disaster-recovery backups (mail system crash) and for restoring single user mailboxes? Regards, Claus
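One commonly suggested complement to a filesystem-level full backup is a doveadm/dsync per-mailbox backup, which makes single-user restores straightforward. A sketch only; the user and the target location are placeholders:

    doveadm backup -u someuser@example.com mdbox:/backup/mail/someuser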
2015 Feb 28
1
Looking for a life-save LVM Guru
Dear James, Thank you for being quick to help. Yes, I could see all of them: # vgs # lvs # pvs Regards, Khem On Sat, February 28, 2015 7:37 am, James A. Peltier wrote: > > > ----- Original Message ----- > | Dear All, > | > | I am in desperate need of LVM data rescue for my server. > | I have a VG called vg_hosting consisting of 4 PVs, each contained in a > | separate
2011 Apr 09
2
switching "balance" into background
Hello, linux-btrfs, I can't switch a running "btrfs filesystem balance ..." into the background via ctrl-z and bg; with other jobs this works. The stopping command "ctrl-z" doesn't work. (Maybe on other keyboards it's "ctrl-y".) What goes wrong? Best regards! Helmut
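Since suspending the running command apparently does not work, one workaround is to start it detached in the background in the first place; a sketch, with the mount point as a placeholder:

    nohup btrfs filesystem balance /mnt/pool > /var/log/balance.log 2>&1 &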
2013 May 13
7
Remove a materially failed device from a Btrfs "single-raid" using partitions
Hello, I am on Ubuntu Server 13.04 with Linux 3.8. I've created a "single-raid" using /dev/sd{a,b,c,d}{1,3}. One of my hard drives has failed; I mean it's physically dead. :~$ sudo btrfs filesystem show Label: none uuid: 40886f51-8c9b-4be1-8721-83bf5653d2a0 Total devices 5 FS bytes used 226.90GB devid 4 size 37.27GB used 31.01GB path /dev/sdd1
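When the dead disk is physically gone, the usual sequence is a degraded mount followed by deleting the missing device; a sketch with a placeholder mount point, and note that with the "single" data profile any extents that lived only on the dead disk cannot be recovered:

    mount -o degraded /dev/sda1 /mnt/btrfs
    btrfs device delete missing /mnt/btrfs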
2011 Nov 08
2
Multiple Patitions with with mdbox
With a > 10 TByte mailstore, filesystem checks take too much time. At the moment we have four different partitions, but I don't like having to set symlinks or LDAP flags to sort customers and their domains onto their individual mount points. I'd like to work with mdbox:/mail/%d/%n to calculate the path automatically. How do you handle a >> 10 TB mailstore? I'm very interested in the
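The layout the poster describes maps onto a single mail_location with Dovecot's path variables; a sketch of the relevant dovecot.conf line (the filesystems behind /mail would still be split across mount points underneath that path):

    mail_location = mdbox:/mail/%d/%n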