Displaying 20 results from an estimated 10000 matches similar to: "Unable to remove failing drive from storage pool"
2011 Nov 22
1
Recovering data from old corrupted file system
I have a multi-device file system that got corrupted ages
ago (as I recall, one of the drives stopped responding, causing btrfs
to panic). I am hoping to recover some of the data. For what it's
worth, here is the dmesg output from trying to mount the file system
on a 3.0 kernel:
device label Media devid 6 transid 816153 /dev/sdq
device label Media devid 7 transid 816153
2013 Apr 30
1
Panic while running defrag
I ran into a panic while running find -xdev | xargs btrfs fi defrag 
'{}'. I don't remember the exact command because the history was not 
saved. I had also started and stopped it a few times.
The kernel logs were on a different filesystem. Here is the 
kern.log:http://fpaste.org/9383/36729191/
My setup is two 2TB hard drives in raid 1. They are both sata drives so
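A defensible version of that loop might look like the sketch below; the mount point is a placeholder, and the exact flags the poster used are unknown, so this is only an illustration of the per-file defrag pattern:

```shell
# Per-file defragment pass over one filesystem (sketch).
# -xdev keeps find from crossing mount points; -print0/-0 handles
# filenames with spaces; -n 64 batches files per defragment invocation.
defrag_tree() {
  local mnt=$1   # e.g. a btrfs mount point such as /mnt/data (placeholder)
  find "$mnt" -xdev -type f -print0 |
    xargs -0 -n 64 btrfs filesystem defragment
}
```

Usage would be `defrag_tree /mnt/data`, run from a shell whose history you keep.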
2012 Sep 14
1
Issues with routing IPv6 to KVM Guests
Hi People,
I have some issues with routing IPv6 to my KVM guests. I use a bridge 
interface with bridge-utils, as recommended in most howtos.
Bridge conf: http://fpaste.org/hh9U/
ip -6 route show output: http://fpaste.org/c5Rd/
sysctl.conf: http://fpaste.org/oMjD/
Thanks for your help in advance. If you need more information, just let 
me know.
David Hackl
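The knobs that usually matter in this setup can be sketched as below; the bridge name `br0` is an assumption, since the poster's actual interface names are only in the pasted configs:

```shell
# Sketch: settings commonly involved when routing IPv6 to bridged guests.
# The bridge name (br0) is a placeholder.
enable_v6_routing() {
  local br=${1:-br0}
  sysctl -w net.ipv6.conf.all.forwarding=1       # host must forward IPv6
  sysctl -w "net.ipv6.conf.${br}.accept_ra=2"    # keep accepting RAs even while forwarding
  ip -6 route show dev "$br"                     # verify the on-link route exists
}
```

A common pitfall is that setting `forwarding=1` makes the kernel ignore router advertisements unless `accept_ra=2` is also set on the uplink.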
2012 Sep 25
3
[PATCH] Btrfs: limit thread pool size when remounting
For some asynchronous threads, such as the submit worker and cache worker, we
limit their thread pool size when mounting, so we need to apply the same
limit when remounting.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
---
 fs/btrfs/super.c |   13 ++++++++-----
 1 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index 83d6f9f..a58e834 100644
---
2014 Mar 11
0
JABBER_STATUS issue
Hello Everyone, 
I am using this bash script to pull resource id 
http://fpaste.org/84173/51169913/ 
this whole macro http://fpaste.org/84174/51176013/ 
This is what I see when the dialplan ran: 
Executing [s at macro-missed-call-in:3] Set("SIP/babytel-00000022", "RES=9c32ecc4 
-- ") in new stack 
-- Executing [s at macro-missed-call-in:4] GotoIf("SIP/babytel-00000022",
2017 Aug 10
0
Errors on an SSD drive
On Thu, Aug 10, 2017, 6:48 AM Robert Moskowitz <rgm at htt-consult.com> wrote:
>
>
> On 08/09/2017 10:46 AM, Chris Murphy wrote:
> > If it's a bad sector problem, you'd write to sector 17066160 and see if
> the
> > drive complies or spits back a write error. It looks like a bad sector in
> > that the same LBA is reported each time but I've only ever
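The write test Chris describes can be sketched as follows; the device path is a placeholder (the thread never names it), and note that this deliberately destroys the contents of that one sector:

```shell
# Sketch: overwrite the suspect LBA so the drive either completes the
# write (remapping the sector if it was bad) or returns a write error.
# DESTRUCTIVE for that sector -- double-check device and LBA first.
rewrite_sector() {
  local dev=$1 lba=$2
  dd if=/dev/zero of="$dev" bs=512 count=1 seek="$lba" oflag=direct
  smartctl -A "$dev" | grep -i -e reallocated -e pending   # did the counters move?
}
# rewrite_sector /dev/sdX 17066160    # /dev/sdX is hypothetical
```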
2011 May 05
1
Converting 1-drive ext4 to 4-drive raid10 btrfs
Hello!
I have a 1 TB ext4 drive that's quite full (~50 GB free space, though I
could free up another 100 GB or so if necessary) and two empty 0.5 TB
drives.
Is it possible to get another 1 TB drive and combine the four drives to
a btrfs raid10 setup without (if all goes well) losing my data?
Regards,
Paul
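One possible sequence for this conversion, assuming reasonably current btrfs-progs (with balance convert filters) and hypothetical device names, would be:

```shell
# Sketch: convert the ext4 disk in place, add the empty drives, then
# rebalance everything into raid10. Device names are placeholders.
convert_to_raid10() {
  local ext4_dev=$1 mnt=$2; shift 2            # remaining args: the empty drives
  btrfs-convert "$ext4_dev"                    # in-place ext4 -> btrfs (keep a backup!)
  mount "$ext4_dev" "$mnt"
  for dev in "$@"; do
    btrfs device add "$dev" "$mnt"
  done
  btrfs balance start -dconvert=raid10 -mconvert=raid10 "$mnt"
}
# convert_to_raid10 /dev/sda /mnt /dev/sdb /dev/sdc /dev/sdd
```

The in-place convert keeps a rollback snapshot, but a backup of the full 1 TB is still the safe prerequisite before any of this.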
2015 Oct 12
1
Megacli issue with Xen PERC 5/i Poweredge 2950 II
I am having an issue with Megacli when adding or removing virtual 
disks.  I will post a link to fpaste that has the issue shown.
In brief, whenever I add or remove a virtual disk via MegaCli, the 
attempt succeeds and the adapter is configured, but the OS remounts dm-0 
read-only.  dm-0 is my root LV.  This issue does not happen if I boot 
into the default, non-Xen, CentOS kernel.  On stock
2010 Nov 21
0
[JFS] Kernel oops when tried to access mounted but unplugged storage
Hello.
I've built a kernel from
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
(Date: Fri Nov 19 19:46:45 2010 -0800)
and got a kernel oops when I tried to access an unplugged,
but still mounted, external USB storage drive formatted with JFS.
 
Steps to reproduce:
  mkfs.jfs /dev/sdb1 (unpluggable USB hard drive)
  mount /dev/sdb1 /mnt/drive
  cd /mnt/drive
  touch test
  sync
 
2013 Mar 28
1
question about replacing a drive in raid10
Hi all,
I have a question about replacing a drive in raid10 (and linux kernel 3.8.4).
A bad disk was physically removed from the server. After this a new disk
was added with "btrfs device add /dev/sdg /btrfs" to the raid10 btrfs
FS.
After this the server was rebooted and I mounted the filesystem in
degraded mode. It seems that a previous started balance continued.
At this point I want to
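The usual way to finish this kind of swap, sketched below under the assumption of 3.8-era btrfs-progs (paths are from the thread), is to tell btrfs to drop the missing device so the raid10 rebuilds onto the added disk:

```shell
# Sketch: after "btrfs device add /dev/sdg /btrfs" on a degraded mount,
# remove the phantom entry for the physically absent disk.
finish_replace() {
  local mnt=$1
  btrfs device delete missing "$mnt"   # rebuilds raid10 data onto the new disk
  btrfs balance status "$mnt"          # check on any balance still running
}
# finish_replace /btrfs
```

Letting a previously started balance finish first, as the poster observed happening, is generally harmless but slow.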
2011 Jan 21
0
btrfs RAID1 woes and tiered storage
I've been experimenting lately with the btrfs RAID1 implementation and have to say 
that it is performing quite well, but there are a few problems:
* when I purposefully damage partitions on which btrfs stores data (for 
  example, by changing the case of letters) it will read the other copy and 
  return correct data. It doesn't report this fact in dmesg every time, but it 
  does
2013 Jan 18
0
[Lsf-pc] [CFP] Linux Storage, Filesystem and Memory Management Summit 2013
The annual Linux Storage, Filesystem and Memory Management Summit for
2013 will be held on April 18th and 19th following the Linux Foundation
Collaboration Summit at Parc 55 Hotel in San Francisco, CA:
	https://events.linuxfoundation.org/events/collaboration-summit
	https://events.linuxfoundation.org/events/lsfmm-summit
We'd therefore like to issue a call for agenda proposals that are
2012 Jul 05
1
fpaste-server pastebin service
Hi,
Any step by step guide for setting up fpaste-server on CentOS 5.8?
Regards
Kaushal
2012 Jun 18
1
Nagios 3.4.1 on CentOS 5.8
Hi
I am trying to build a nagios RPM from the nagios.spec file on CentOS 5.8.  I am
running into issues.
nagios.spec http://fpaste.org/crOs/
rpmbuild -ba nagios.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.36485
+ umask 022
+ cd /usr/src/redhat/BUILD
+ cd /usr/src/redhat/BUILD
+ rm -rf nagios-3.4.1
+ /bin/gzip -dc /usr/src/redhat/SOURCES/nagios-3.4.1.tar.gz
+ tar -xf -
+ STATUS=0
+ '['
2013 Aug 13
0
Re: Modify Iptables Rules (virbr0 & virbr1)
On 08/13/2013 06:31 AM, Laine Stump wrote:
> Correct. That is a known problem since 2008:
> 
>    https://bugzilla.redhat.com/show_bug.cgi?id=453580
Thanks Laine for confirming it is a known issue.  I googled it a lot but
couldn't find that bugzilla entry.
Do you know if this is still the case with the upcoming Fedora 20 &
firewalld? (these rules are still being created)?
>
2014 Oct 05
1
CentOS 7 - Have 2 disks, each with a biosboot partition, can only boot off one of them
Hi all,
   I used a kickstart script to set up a new machine of mine with RAID 1 
(I couldn't get anaconda to create matching partition schemes). So I've 
now got /dev/sdg1 and /dev/sdh1 as 'bios_grub' (/dev/sd{a-f} are a 
separate array).
   0 root at an-nas02:~# parted /dev/sdg print free
Model: ATA ST3000NC000 (scsi)
Disk /dev/sdg: 3001GB
Sector size (logical/physical):
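On a GPT disk booted via BIOS, GRUB's core image lives in the bios_grub partition of whichever disk it was installed to, so a sketch of the likely fix (disk names from the thread) is simply to install GRUB to each RAID member:

```shell
# Sketch: install GRUB's boot code into the bios_grub partition of each
# member disk, so the machine can boot from either one.
install_grub_both() {
  local disk
  for disk in /dev/sdg /dev/sdh; do
    grub2-install "$disk"
  done
}
```

grub2-install finds the bios_grub partition on the given disk automatically; no partition argument is needed.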
2017 Mar 20
2
doveadm-sync stateful
Hello,
I'm trying to migrate mail accounts from an old server to a new one.
As I need to migrate dozens of accounts of about 1 GB each, I need 
to do a stateful sync so the migration happens in two passes:
1 - I run a :
     doveadm -D -o mail_fsync=never -o imapc_user=user1 at olddomain.fr 
sync -s "" -R -1 -u user1 at newdomain.fr imapc: > /tmp/firstsync.log 2>&1
my
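The two-pass flow can be sketched as below; names and options follow the poster's command, while the state-file path is an assumption. With `-s`, doveadm sync prints a state string on stdout, and feeding it back on the second run transfers only the changes:

```shell
# Sketch: stateful two-pass migration for one account.
# Pass 1 does the bulk copy and records the sync state; pass 2 (run at
# cutover) uses that state to sync only what changed since.
stateful_sync() {
  local user=$1 statefile="/tmp/state.$user"
  # pass 1: full copy, state string captured on stdout
  doveadm -o mail_fsync=never -o imapc_user="${user}@olddomain.fr" \
    sync -1 -R -s "" -u "${user}@newdomain.fr" imapc: > "$statefile"
  # pass 2: incremental, resuming from the recorded state
  doveadm -o mail_fsync=never -o imapc_user="${user}@olddomain.fr" \
    sync -1 -R -s "$(cat "$statefile")" -u "${user}@newdomain.fr" imapc: \
    > "${statefile}.new"
}
```

Note that with `-s` the state goes to stdout, so redirecting stdout to a log file (as in the quoted command) would mix the state string into the log.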
2013 Aug 13
1
Re: Modify Iptables Rules (virbr0 & virbr1)
On 08/13/2013 07:07 AM, Jorge Fábregas wrote:
> On 08/13/2013 06:31 AM, Laine Stump wrote:
>> Correct. That is a known problem since 2008:
>>
>>    https://bugzilla.redhat.com/show_bug.cgi?id=453580
> Thanks Laine for confirming it is a known issue.  I googled it a lot but
> couldn't find that bugzilla entry.
>
> Do you know if this is still the case with the
2012 Jun 04
2
system date using ntp client is drifting
Hi,
I have a set of servers whose system time is drifting. I am running the ntp
client on CentOS 5.8. My config is here -> http://fpaste.org/s55U/
Is there anything I am missing?
Regards,
Kaushal
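A first triage step for this kind of report, before digging into the config, is to check whether ntpd has actually selected a peer; a sketch using the ntp-4.x tools shipped with CentOS 5:

```shell
# Sketch: quick health check of a drifting ntpd.
check_ntp() {
  ntpq -p    # a '*' in column one marks the selected sync peer
  ntpstat    # one-line summary: synchronised or not, estimated error
}
```

No `*` in the `ntpq -p` output means the daemon never synchronised, which produces exactly the steady drift described.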
2009 Nov 05
7
Unexpected ENOSPC on a SSD-drive after day of uptime, kernel 2.6.32-rc5
I've just finished installing onto an OCZ Agilent v2 SSD with btrfs as
the filesystem. However, to my surprise, I've hit an ENOSPC condition on
one of the partitions within less than a day of uptime, while the
filesystem on that partition only reported 50% to be in use, which is
far from the 75% limit people mention on the ML.
Note that this occurs using a vanilla 2.6.32-rc5 kernel