Displaying 20 results from an estimated 10000 matches similar to: "btrfs-convert complains that fs is mounted even if it isn't"
2006 Mar 02 · 3 replies · Advice on setting up Raid and LVM
Hi all,
I'm setting up CentOS 4.2 on 2 x 80 GB SATA drives.
The partition scheme is like this:
/boot = 300MB
/ = 9.2GB
/home = 70GB
swap = 500MB
The RAID is RAID 1.
md0 = 300MB = /boot
md1 = 9.2GB = LVM
md2 = 70GB = LVM
md3 = 500MB = LVM
Now, the confusing part is:
1. When creating VolGroup00, should I include all PVs (md1, md2, md3) and
then create the LVs?
2. When setting up RAID 1, should I
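For reference, a minimal sketch of the usual answer to question 1: all three md devices can become PVs in a single VolGroup00, with the LVs carved out afterwards. Device names are taken from the post above; the LV names and exact sizes here are illustrative only:

pvcreate /dev/md1 /dev/md2 /dev/md3            # label each array as an LVM PV
vgcreate VolGroup00 /dev/md1 /dev/md2 /dev/md3 # one VG spanning all three
lvcreate -L 9G   -n lv_root VolGroup00         # /
lvcreate -L 500M -n lv_swap VolGroup00         # swap
lvcreate -l 100%FREE -n lv_home VolGroup00     # /home gets the remainder (or give a fixed -L size)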
2014 Jan 24 · 4 replies · Booting Software RAID
I installed CentOS 6.x 64-bit from the minimal ISO and put two disks
in a RAID 1 array.
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/md2      97G  918M    91G    1%  /
tmpfs         16G     0    16G    0%  /dev/shm
/dev/md1     485M   54M   407M   12%  /boot
/dev/md3     3.4T  198M   3.2T    1%  /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
511936 blocks super 1.0
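A common follow-up for this kind of setup: the installer often puts the boot loader only on the first disk, so the usual advice is to install it on both mirror members so the box still boots if sda dies. A minimal sketch, assuming the legacy GRUB 0.97 tooling that CentOS 6 ships:

grub-install /dev/sda    # boot loader on both halves of the /boot mirror
grub-install /dev/sdb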
2007 Dec 18 · 1 reply · How can I extract the AIC score from a mixed model object produced using lmer?
I am running a series of candidate mixed models using lmer (package lme4)
and I'd like to be able to compile a list of the AIC scores for those
models so that I can quickly summarize and rank the models by AIC. When I
do logistic regression, I can easily generate this kind of list by creating
the model objects using glm, and doing:
> md <- c("md1.lr", "md2.lr",
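A minimal sketch of the same list-based workflow for lmer fits, assuming the fitted model objects are named md1.lm, md2.lm, ... in the workspace (the names are placeholders; AIC() has a method for lmer fits just as for glm fits):

mods <- mget(c("md1.lm", "md2.lm", "md3.lm"))  # fetch the fitted models by name
aics <- sapply(mods, AIC)                      # one AIC per model
sort(aics)                                     # rank, smallest (best) first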
2007 Oct 17 · 2 replies · Hosed my software RAID/LVM setup somehow
CentOS 5, original kernel (xen and normal) and everything, Linux RAID 1.
I rebooted one of my machines after doing some changes to RAID/LVM and now
the two RAID partitions that I made changes to are "gone". I cannot boot
into the system.
On bootup it tells me that the devices md2 and md3 are busy or mounted and
drops me to the repair shell. When I run fsck manually, it just tells
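A sketch of the usual first steps from that repair shell: stop whatever half-assembled array is holding the devices busy, inspect the superblocks, and reassemble by hand. The member partition names here are guesses:

cat /proc/mdstat                        # what does the kernel think is assembled?
mdadm --stop /dev/md2                   # release a busy, half-assembled array
mdadm --examine /dev/sda3 /dev/sdb3     # check the on-disk superblocks
mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3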
2007 Nov 29 · 1 reply · RAID, LVM, extra disks...
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB  -> sda2 + sdd2 -> forms VolGroup00 with md2
/dev/md2 -> 18 GB  -> sdb1 + sde1 -> forms VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added two 36 GB 10k SCSI drives; they are detected as sdc and
sdf.
What should I do if I
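A sketch of the common pattern here: mirror the new pair, then grow the existing volume group onto it. The md3 name and the single full-disk partitions are assumptions; partition the new disks with one type-fd partition each first:

mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
pvcreate /dev/md3                # label the new mirror as an LVM PV
vgextend VolGroup00 /dev/md3     # VolGroup00 now spans md1, md2 and md3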
2017 Oct 17 · 2 replies · Distribute rebalance issues
Hi,
I have a rebalance that has failed on one peer twice now. Rebalance
logs below (directories anonymised and some irrelevant log lines cut).
It looks like it loses connection to the brick, but immediately stops
the rebalance on that peer instead of waiting for reconnection - which
happens a second or so later.
Is this normal behaviour? So far it has been the same server and the
same (remote)
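For reference, per-node rebalance state and brick connectivity can be checked with the standard CLI (the volume name is a placeholder):

gluster volume rebalance VOLNAME status
gluster volume status VOLNAME      # are all brick processes online?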
2023 Jan 09 · 2 replies · RAID1 setup
Hi
> Continuing this thread, and focusing on RAID1.
>
> I got an HPE ProLiant Gen10+ that has hardware RAID support (I can turn
> it off if I want).
What exact model of RAID controller is this? If it's an S100i SR Gen10 then
it's not hardware RAID at all.
>
> I am planning two groupings of RAID1 (it has 4 bays).
>
> There is also an internal USB boot port.
>
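One way to settle what the controller actually is from a running Linux system (a sketch; output formats vary, and ssacli is only present if HPE's tooling is installed):

lspci | grep -i -E 'raid|storage'    # PCI ID of the storage controller
ssacli ctrl all show                 # HPE Smart Array details, if ssacli is installed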
2018 Apr 30 · 1 reply · Gluster rebalance taking many years
I cannot easily count the exact number of files. From df -i, the
approximate number of files is 63694442:
[root@CentOS-73-64-minimal ~]# df -i
Filesystem   Inodes     IUsed     IFree      IUse%  Mounted on
/dev/md2     131981312  30901030  101080282    24%  /
devtmpfs     8192893    435       8192458       1%  /dev
tmpfs
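df -i only reports inode usage for the whole filesystem, so one rough cross-check is to count regular files on the brick itself, skipping gluster's internal .glusterfs hardlink tree. The brick path is a placeholder, and this walk can take a long time on millions of files:

find /path/to/brick -type f -not -path '*/.glusterfs/*' | wc -l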
2010 Nov 04 · 1 reply · orphan inodes deleted issue
Dear All,
My servers are running CentOS 5.5 x86_64 with kernel 2.6.18-194.17.4.el5 on
Gigabyte motherboards with two hard disks (Seagate 500 GB) each.
The boxes are configured with RAID 1; yesterday and today I had the same
problem on two servers with the same configuration. See the following error
messages for details:
EXT3-fs: INFO: recovery required on readonly filesystem.
EXT3-fs: write access will be enabled during
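The usual advice for errors like this is a forced full check from rescue media, with the filesystem unmounted (a sketch; device names depend on the layout):

e2fsck -f /dev/md0    # repeat for each md array carrying an ext3 filesystem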
2013 Mar 03 · 4 replies · Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
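A sketch of the standard way to reconcile a stale mdadm.conf with reality: regenerate the ARRAY lines from the superblocks actually on disk and compare UUIDs (remember the initrd carries its own copy of the file, so rebuild it after editing):

mdadm --examine --scan    # prints current ARRAY lines; diff against /etc/mdadm.conf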
2017 Oct 17 · 0 replies · Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde@gaist.co.uk>
wrote:
> Hi,
>
>
> I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens
2008 Oct 05 · 3 replies · Software Raid Expert Needed
Hello all,
I have 2 x 250 GB SATA disks (sda and sdb).
# fdisk -l /dev/sda
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start       End      Blocks  Id  System
/dev/sda1   *           1     14939   119997486  fd  Linux raid autodetect
/dev/sda2           14940     29878
2007 Mar 06 · 1 reply · blocks 256k chunks on RAID 1
Hi, I have a RAID 1 (using mdadm) on CentOS Linux and in /proc/mdstat I
see this:
md7 : active raid1 sda2[0] sdb2[1]
26627648 blocks [2/2] [UU] [-->> it's OK]
md1 : active raid1 sdb3[1] sda3[0]
4192896 blocks [2/2] [UU] [-->> it's OK]
md2 : active raid1 sda5[0] sdb5[1]
4192832 blocks [2/2] [UU] [-->> it's OK]
md3 : active raid1 sdb6[1] sda6[0]
4192832 blocks [2/2]
2005 May 21 · 1 reply · Software RAID CentOS4
Hi,
I have a system with two IDE controllers running RAID1.
As a test I powered down, removed one drive (hdc), and powered back up.
The system came up fine, so I powered down, installed a new drive (hdc),
and powered back up.
/proc/mdstat indicated RAID1 active with hda only. I thought it would
auto-add the new hdc drive... Also, when I removed the new drive and
added the original hdc back, the swap partitions
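md does not adopt a blank replacement disk by itself; the usual sequence is to partition it to match the surviving disk and re-add the members by hand (a sketch; array and partition names are assumptions):

sfdisk -d /dev/hda | sfdisk /dev/hdc    # copy the partition table to the new disk
mdadm /dev/md0 --add /dev/hdc1          # repeat per array: hdc2 into md1, etc.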
2009 Feb 26 · 1 reply · smbd could not access share directory on drbd (8.3 on Centos 5 i386)
Dear all, I am pulling my hair out because I could not find any error
messages that would point me to a fix for my problem.
The directory I want to share was mounted on /home with drbd and
heartbeat, but then my users could not access any shares / their home
directories. However, if I set up shares elsewhere on my box, like a
share under /opt or /usr/local, then the same users would be able to
access
2017 Oct 17 · 1 reply · Distribute rebalance issues
Nithya,
Is there any way to increase the logging level of the brick? There is
nothing obvious (to me) in the log (see below for the same time period as
the latest rebalance failure). This is the only brick on that server that
has disconnects like this.
Steve
[2017-10-17 02:22:13.453575] I [MSGID: 115029]
[server-handshake.c:692:server_setvolume] 0-video-server: accepted
client from
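For the logging question: brick log verbosity can be raised per volume with a standard option (the volume name is a placeholder; drop it back once the failure has been captured):

gluster volume set VOLNAME diagnostics.brick-log-level DEBUG
gluster volume set VOLNAME diagnostics.brick-log-level INFO    # restore when done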
2011 Apr 14 · 3 replies · Debian Squeeze hangs with kernel 2.6.32-5-xen-686
Hi all!
After upgrading to Squeeze, I have a Xen VM host that hangs after a
while. This did not happen when I was using Xen on Debian Lenny (there,
as with Squeeze, the Xen components came from the Debian repositories).
In each case I connected a keyboard and monitor to the computer, and the
screen stayed black without responding to any key.
This problem seems to also affect domUs,
2007 Sep 25 · 2 replies · mdadm problem.
So I'm trying to RAID-1 this system which has two identical disks
installed in it, and it isn't working for some reason.
I started by doing a CentOS-4 install on /dev/sda1 as root, and with
/dev/sda2 as my swap.
I finish the install, yum update, and then I want to make the mirrors.
I copy the partition table from one disk to the other:
# sfdisk -d /dev/sda | sfdisk /dev/sdb
I create
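A sketch of the classic live-migration sequence this is usually heading toward: build each mirror degraded on the second disk, copy the running system over, then add the first disk's partitions back as the missing members. Partition names are assumptions, and the mkfs/copy/grub steps are only summarized here:

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# mkfs, copy /, fix fstab and grub, reboot onto /dev/md0, then:
mdadm /dev/md0 --add /dev/sda1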
2018 Apr 30 · 0 replies · Gluster rebalance taking many years
Hi,
This value is an ongoing rough estimate based on the amount of data
rebalance has migrated since it started. The values will change as the
rebalance progresses.
A few questions:
1. How many files/dirs do you have on this volume?
2. What is the average size of the files?
3. What is the total size of the data on the volume?
Can you send us the rebalance log?
Thanks,
Nithya
On 30
2007 Dec 01 · 2 replies · Looking for Insights
Hi Guys,
I had a strange problem yesterday and I'm curious as to what everyone
thinks.
I have a client with a Red Hat Enterprise 2.1 cluster. All quality HP
equipment with an MSA 500 storage array acting as the shared storage
between the two nodes in the cluster.
This cluster is configured for reliability, not load balancing. All
work is handled by one node or the other, not both.