Displaying 20 results from an estimated 10000 matches similar to: "'partitions not contstrained to drive'???"
2005 Jul 11
1
Disk Druid - control RAID partition layout?
Is it possible to partition a disk with Disk Druid and control the order
of the partitions? I want to create my own boot and swap as RAID
partitions but Disk Druid won't keep them in the order I want, no
matter what I specify first. I've done this before by using fdisk
to make the partitions before starting Disk Druid but I thought
someone said this was an old bug and should have been
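The fdisk-first workaround mentioned in this post can be sketched roughly like this; the device name, partition sizes, and the use of modern sfdisk script syntax are my assumptions, not details from the thread:

```shell
# Pre-create the partitions in the desired order before starting the
# installer, so Disk Druid inherits the layout instead of reordering it.
# Each sfdisk input line is start,size,type[,bootable]; type fd is
# "Linux raid autodetect". Sizes below are purely illustrative.
sfdisk /dev/sda <<'EOF'
,200M,fd,*
,2G,fd
,,fd
EOF
# Verify the order before launching Disk Druid:
sfdisk -l /dev/sda
```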
2008 Mar 15
0
Re: CentOS Digest, Vol 38, Issue 15
Nmhxc
Sent from my BlackBerry® wireless handheld

-----Original Message-----
From: centos-request at centos.org
Date: Sat, 15 Mar 2008 12:00:07
To:centos at centos.org
Subject: CentOS Digest, Vol 38, Issue 15
Send CentOS mailing list submissions to
centos at centos.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.centos.org/mailman/listinfo/centos
or, via email,
2008 Feb 01
2
RAID Hot Spare
I've googled this question without finding a great deal of information.
Monday I'm rebuilding a Linux server at work. Instead of purchasing 3
drives for this system I purchased 4 with intent to create a hot spare.
Here is my usual setup which I'll do again but with a hot spare for each
partition.
Create /dev/md0 mount point /boot RAID1 3 drives with 1 hot spare
Create two more raid setups
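The layout described above can be built with mdadm's spare support; this is only a sketch, and the device names are assumptions:

```shell
# Three active RAID1 members plus one hot spare, as described for /boot.
mdadm --create /dev/md0 --level=1 --raid-devices=3 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext3 /dev/md0        # then mount it as /boot
mdadm --detail /dev/md0   # the fourth device should be listed as a spare
```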
2006 Sep 28
1
adding a usb drive to an existing raid1 set
it seems like I keep running into a wall.
The present raid array...well let me do an fdisk -l:
----------------------------------------
Disk /dev/hda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 13 104391 fd Linux
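For what it's worth, adding a USB disk as an extra RAID1 member usually looks like the following sketch; /dev/sdc is an assumed name for the USB drive, and copying the partition table keeps the member sizes matched:

```shell
# Clone the existing disk's partition layout onto the USB drive.
sfdisk -d /dev/hda | sfdisk /dev/sdc
mdadm /dev/md0 --add /dev/sdc1           # joins as a spare first
mdadm --grow /dev/md0 --raid-devices=3   # promote it to an active mirror
cat /proc/mdstat                         # watch the rebuild progress
```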
2008 Feb 25
2
ext3 errors
I recently set up a new system to run backuppc on centOS 5 with the
archive stored on a raid1 of 750 gig SATA drives created with 3 members
with one specified as "missing". Once a week I add the 3rd partition,
let it sync, then remove it. I've had a similar system working for a
long time using a firewire drive as the 3rd member, so I don't think the
raid setup is the cause
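The weekly add/sync/remove cycle described above can be sketched like this, with /dev/sdc1 as an assumed name for the rotated third member:

```shell
mdadm /dev/md0 --add /dev/sdc1     # attach the third partition
# Wait until the resync finishes before detaching:
while grep -q resync /proc/mdstat; do sleep 60; done
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
```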
2008 Jun 10
1
raid1 disk format?
If you have a disk with several partitions set up as members of a raid1
md devices, can you make a dd image of that disk to replace its matching
drive with identical partitions or are there differences between the
mirrored partitions?
--
Les Mikesell
lesmikesell at gmail.com
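For reference, the per-member metadata can be compared directly; this summary is mine, not from the thread, and the device names are assumptions:

```shell
# Array-wide fields (UUID, level) are identical across RAID1 members,
# while per-device fields such as the device number/role differ -- which
# is why md typically resyncs after a raw dd copy of one member.
mdadm --examine /dev/sda1
mdadm --examine /dev/sdb1
```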
2010 May 13
1
raid resync speed?
Has anything changed in updates that would affect md raid1 resync speed?
I regularly swap a 750G drive and resync to keep an offsite copy and
haven't paid enough attention to know when things changed, but it seems
to take much longer to sync than it did months ago, even if I unmount
the partition and stop most other processes that might compete with it.
--
Les Mikesell
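The usual knobs that bound md resync throughput are the raid speed-limit sysctls; the values below are examples, not recommendations:

```shell
# Both limits are in KiB/s; resync is throttled between them depending
# on competing I/O.
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max
cat /proc/mdstat   # the current resync speed is reported per array
```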
2011 Jan 12
3
variable raid1 rebuild speed?
I have a 750Gb 3-member software raid1 where 2 partitions are always
present and the third is regularly rotated and re-synced (SATA disks in
hot-swap bays). The timing of the resync seems to be extremely variable
recently, taking anywhere from 3 to 10 hours even if the partition is
unmounted and the drives aren't doing anything else and regardless of
what I echo into
2014 Dec 04
2
DegradedArray message
Thanks for all the responses. A little more digging revealed:
md0 is made up of two 250G disks on which the OS and a very large /var
partition reside for a number of virtual machines.
md1 is made up of two 2T disks on which /home resides.
Challenge is that disk 0 of md0 is the problem and it has a 524M /boot
partition outside of the raid partition.
My plan is to back up /home (md1) and at a
2006 Oct 14
0
automatic shutdown of UPS (and RAID1)
I have an MGE UPS Pulsar Ellipse 600 USBS that I'm setting up with CentOS 4.4
I tried for some time to get it working with USB, but failed and I
have had much more success using the serial port.
I have managed to complete the whole install procedure (shown here:
http://www.networkupstools.org/doc/2.0.1/INSTALL.html) but I'm lost
when it comes to implementing the shutdown script.
I'm
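The pieces NUT's shutdown path needs are small; this is a hedged sketch based on the 2.0.x documentation linked above, with example values rather than anything from the post:

```shell
# In /etc/ups/upsmon.conf (values are illustrative):
#   SHUTDOWNCMD "/sbin/shutdown -h +0"
#   MONITOR myups@localhost 1 upsmon mypass master
# The forced-shutdown sequence can be tested without pulling the plug:
upsmon -c fsd
```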
2015 Feb 03
2
Very slow disk I/O
On Mon, Feb 2, 2015 at 11:37 PM, Jatin Davey <jashokda at cisco.com> wrote:
>
> I will test and get the I/O speed results with the following and see what
> works best with the given workload:
>
> Create 5 volumes each with 150 GB in size for the 5 VMs that i will be
> running on the server
> Create 1 volume with 600GB in size for the 5 VMs that i will be running on
>
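A simple way to compare the two volume layouts being discussed is a direct-I/O dd pass per layout; the path is an assumption and this is only a rough sequential benchmark, not a substitute for the real VM workload:

```shell
# Sequential write, then read, bypassing the page cache via O_DIRECT.
dd if=/dev/zero of=/vm/testfile bs=1M count=1024 oflag=direct
echo 3 > /proc/sys/vm/drop_caches
dd if=/vm/testfile of=/dev/null bs=1M iflag=direct
```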
2015 Feb 03
0
Very slow disk I/O
Lol - spinning disks? Really?
SSD is down to like 50 cents a gig. And they have 1TB disks... slow disks = you get what you deserve... welcome to 2015. Auto-lacing shoes, self-drying jackets, hoverboards - oh, yeah, and 110k IOPS 1TB Samsung 850 Pro SSD drives for $449 on NewEgg.
dumbass
-----Original Message-----
From: centos-bounces at centos.org [mailto:centos-bounces at centos.org] On Behalf
2007 Oct 02
1
change md uuid?
I sometimes clone working machines by separating RAID1 mirrors and
letting each re-sync with a new partner in different machines. What
will happen if a mismatched pair of these drives ever end up together in
the same machine? If there is any chance that they would automatically
be paired, is there a way to change the uuid when moving to a new
machine so that wouldn't happen?
--
Les
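One way to give a moved mirror a new identity at assembly time is mdadm's uuid update; this is a sketch with an assumed member name, and my understanding is that --update=uuid picks a fresh random UUID when none is supplied:

```shell
mdadm --assemble /dev/md0 --update=uuid /dev/sdb1
mdadm --detail /dev/md0 | grep UUID   # confirm it no longer matches the donor
```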
2011 Oct 12
1
raid on large disks?
What's the right way to set up >2TB partitions for raid1 autoassembly?
I don't need to boot from this but I'd like it to come up and mount
automatically at boot.
--
Les Mikesell
lesmikesell at gmail.com
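For members larger than 2TB, the usual pattern is GPT partitions plus an mdadm.conf entry, since GPT has no 0xfd "raid autodetect" type; the sketch below uses assumed device names (repeat the parted steps for the second disk):

```shell
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 100%
parted -s /dev/sdb set 1 raid on
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# Record the array so it is assembled at boot instead of autodetected:
mdadm --detail --scan >> /etc/mdadm.conf
```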
2012 Apr 18
3
3TB system drive partitioning question
so I want to install c6.2 x86_64 onto a 2.7TB /dev/sda ... it's a
virgin machine with no software, using pxe boot.
Disk Druid or whatever seems to only want to let me have like 2TB of
default stuff, I'm guessing because it's not using GPT?
do I need to preboot into a shell or something and use parted before I
can install ?
--
john r pierce N 37, W 122
santa
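One answer to the question above, sketched: drop to the installer's shell (e.g. on tty2) and put a GPT label on the disk first so the full 2.7TB is usable. The device name is from the post; note this wipes the disk:

```shell
parted -s /dev/sda mklabel gpt
parted -s /dev/sda print   # should now report "Partition Table: gpt"
```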
2007 May 15
5
Make Raid1 2nd disk bootable?
On earlier versions of Centos, I could boot the install CD in rescue
mode, let it find and mount the installed system on the HD even when it
was just one disk of RAID1 partitions (type=FD). When booting from the
centos5 disk, the attempt to find the system gives a box that says 'You
don't have any Linux partitions'. At the bottom of the screen there is
something that says:
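Once such a system is mountable again, the classic grub-legacy recipe (CentOS 4/5 era) for making the second RAID1 disk bootable looks like this sketch, with /dev/sdb as an assumed name for the second mirror:

```shell
# Map the second disk as (hd0) so grub writes an MBR that can boot alone.
grub --batch <<'EOF'
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
```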
2014 Dec 05
0
DegradedArray message
On 12/04/2014 05:45 AM, David McGuffey wrote:
> md0 is made up of two 250G disks on which the OS and a very large /var
> partition reside for a number of virtual machines.
...
> Challenge is that disk 0 of md0 is the problem and it has a 524M /boot
> partition outside of the raid partition.
Assuming that you have an unused drive port, you can fix that pretty easily.
Attach a new
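The replacement procedure being suggested typically looks like the following sketch; the device names are assumptions, not from the thread:

```shell
# Clone the failing disk's partition layout onto the new drive.
sfdisk -d /dev/sda | sfdisk /dev/sdc
mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md0 --add /dev/sdc2
cat /proc/mdstat   # wait for the rebuild to complete
# /boot lives outside the array, so copy it separately (e.g. cp -a)
# and reinstall the boot loader on the new disk.
```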
2006 Apr 17
0
one-shot raid mirror to remote?
I have a filesystem on a RAID1 partition created with a 'missing'
mirror so I can periodically sync with an external drive for
backup. Now I'd like to move the contents to a different machine
that has a matching partition with minimal downtime. Is it
possible to export the new partition via iscsi or an nbd device
so that the old system will sync to it over the network allowing
an
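The nbd variant of this idea can be sketched as follows; the hostname, port, and device names are all assumptions, and the nbd-server invocation shown is the legacy port-based syntax:

```shell
# On the new machine: export the empty matching partition.
nbd-server 2000 /dev/sdb1
# On the old machine: attach it and let md mirror onto it over the network.
nbd-client newhost 2000 /dev/nbd0
mdadm /dev/md0 --add /dev/nbd0
cat /proc/mdstat   # once the sync completes, fail/remove /dev/nbd0
```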
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
On Wed, Jan 7, 2015 at 9:52 AM, Gordon Messmer <gordon.messmer at gmail.com> wrote:
>
> Every regular file's directory entry on your system is a hard link. There's
> nothing particular about links (files) that make a filesystem fragile.
Agreed, although when there are millions of them, the fsck fixing it is somewhat slow.
>> It is mostly on aging hardware, so it
>> is
2015 Jan 07
2
reboot - is there a timeout on filesystem flush?
On Wed, January 7, 2015 10:33 am, Les Mikesell wrote:
> On Wed, Jan 7, 2015 at 9:52 AM, Gordon Messmer <gordon.messmer at gmail.com>
> wrote:
>>
>> Every regular file's directory entry on your system is a hard link.
>> There's
>> nothing particular about links (files) that make a filesystem fragile.
>
> Agreed, although when there are millions, the