Displaying 20 results from an estimated 20000 matches similar to: "Cloning Centos server with RSync - what to exclude?"
2013 Dec 10
0
DR hot/warm spare node patterns / anti-patterns?
With or without puppet. We are looking at setting up DR nodes with
some draft ideas. We use puppet against a git repository (with ppg)
instead of against a puppet server. Nodes are actually VMs.
Configuration ideas:
- a filesystem-level flag that indicates that the node is running as
standby spare, to use as conditional in bash/python scripts as well as
in puppet
- some services
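As a minimal sketch of the flag-file idea above (the flag path /etc/standby_spare is invented here for illustration, not taken from the post), a bash script on the node could bail out when it finds it is running on the standby spare:
# /etc/standby_spare is a placeholder path for the standby flag.
if [ -e /etc/standby_spare ]; then
    echo "standby spare node, skipping" >&2
    exit 0
fi
# ...normal primary-node work follows here...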
2013 Sep 13
2
Cloning CentOS workstations
I manage a set of CentOS operations workstations which are all clones of each
other (3 "live" and 1 "spare" kept powered down); each has a single drive with
four partitions (/boot, /, /home, swap). I've already set up cron'd rsync jobs
to copy the operations accounts between the workstations on a daily basis,
so that when one fails, it is a simple, quick process to swap
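Not from the original post, but a hedged sketch of the kind of cron'd rsync job described (hostnames and paths are placeholders), plus the sort of excludes that matter when cloning a whole CentOS system:
# Daily copy of the operations accounts to the powered-down spare (placeholder names).
rsync -aH --delete /home/ops/ spare-ws:/home/ops/
# Whole-system clone: skip pseudo-filesystems and volatile directories.
rsync -aH --delete \
  --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
  --exclude='/tmp/*' --exclude='/mnt/*' --exclude='/media/*' \
  --exclude='/lost+found' \
  / spare-ws:/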
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 16:33, mark wrote:
> Alessandro Baggi wrote:
>> On 30/01/19 14:02, mark wrote:
>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>> On 29/01/19 20:42, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 18:47, mark wrote:
>>>>>>> Alessandro Baggi wrote:
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 16:33, mark wrote:
>
>> Alessandro Baggi wrote:
>>
>>> On 30/01/19 14:02, mark wrote:
>>>
>>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>>
>>>>> On 29/01/19 20:42, mark wrote:
>>>>>
>>>>>> Alessandro Baggi wrote:
2011 Mar 08
0
Race condition with mdadm at bootup?
Hello folks,
I am experiencing a weird problem at bootup with large RAID-6 arrays.
After Googling around (a lot) I find that others are having the same
issues with CentOS/RHEL/Ubuntu/whatever. In my case it's Scientific
Linux-6 which should behave the same way as CentOS-6. I had the same
problem with the RHEL-6 evaluation version. I'm posting this question
to the SL mailing list
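A commonly suggested mitigation for this class of boot-time race (not from this post; EL6-style commands shown as a sketch) is to pin the arrays explicitly in /etc/mdadm.conf and rebuild the initramfs so assembly no longer depends on discovery timing:
# Record the existing arrays, then rebuild the initramfs (EL6).
mdadm --detail --scan >> /etc/mdadm.conf
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)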
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 14:02, mark wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
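For context only (this is not part of the proposal text shown here; device names are made up), the hot-spare syntax that later shipped in zpool lets spares be named at pool creation or added afterwards:
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 spare c1t3d0
zpool add tank spare c1t4d0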
2007 Feb 18
3
prio not seeming to work
Hello,
I am trying to mess with a prio type qdisc, and must be missing something.
Here's my sample code:
tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1:0 prio 1 protocol ip u32 \
   match ip dst 208.0.0.0/8 flowid 1:1
tc filter add dev eth0 parent 1:0 prio 3 protocol ip u32 \
   match ip dst 0.0.0.0/0 flowid 1:3
I would assume that any traffic going to
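A counter check that was not in the original post, but which shows which band traffic actually lands in (prio exposes its bands as classes 1:1 through 1:3):
tc -s class show dev eth0
tc filter show dev eth0 parent 1: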
2002 Dec 10
5
2gb limit & weird filenames
Hello,
I'm trying to set up a Samba server for a friend who has a Mac. He's running
OS X, version 10.2. He's got some really big video editing files that are well
in excess of 2GB. We're trying to back these up to the Samba server, but it
quits right around 2GB. That seems to be a magical number. This is with
version 2.2.3 of Samba, and Linux-Mandrake 8.2, ext3 file system.
I
2019 Jan 30
0
C7, mdadm issues
> On 01/30/19 03:45, Alessandro Baggi wrote:
>> On 29/01/19 20:42, mark wrote:
>>> Alessandro Baggi wrote:
>>>> On 29/01/19 18:47, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>
>>>>>>> I've no idea what happened, but the box I was working
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The messages in /var/adm/messages for the disks were 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
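The usual recovery sequence here (a sketch, not from the original message; pool and device names are placeholders) is to bring the original disk back or replace it, then detach the in-use spare so it returns to AVAIL:
zpool online tank c3t2d0      # if the 'device busy' condition was transient
zpool replace tank c3t2d0     # or physically replace the disk and resilver it
zpool detach tank c4t0d0      # then detach the hot spare back to the spare list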
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 14:02, mark wrote:
> On 01/30/19 03:45, Alessandro Baggi wrote:
>> On 29/01/19 20:42, mark wrote:
>>> Alessandro Baggi wrote:
>>>> On 29/01/19 18:47, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>
>>>>>>> I've no idea what
2009 Aug 27
0
How are you supposed to remove faulted spares from pools?
We have a situation where all of the spares in a set of pools have
gone into a faulted state and now, apparently, we can't remove them
or otherwise de-fault them. I'm confident that the underlying disks
are fine, but ZFS seems quite unwilling to do anything with the spares
situation.
(The specific faulted state is 'FAULTED corrupted data' in
'zpool
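For reference (not from this thread; pool and device names are placeholders), a hot spare is normally dropped from a pool with zpool remove, optionally after trying to clear its fault first:
zpool clear tank c4t1d0     # try to clear the FAULTED state on the spare
zpool remove tank c4t1d0    # remove the hot spare from the pool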
2019 Jan 31
0
C7, mdadm issues
> On 30/01/19 16:49, Simon Matter wrote:
>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>> On 29/01/19 20:42, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 18:47, mark wrote:
>>>>>>> Alessandro Baggi wrote:
>>>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 30
0
C7, mdadm issues
On 29/01/19 20:42, mark wrote:
> Alessandro Baggi wrote:
>> On 29/01/19 18:47, mark wrote:
>>> Alessandro Baggi wrote:
>>>> On 29/01/19 15:03, mark wrote:
>>>>
>>>>> I've no idea what happened, but the box I was working on last week
>>>>> has a *second* bad drive. Actually, I'm starting to wonder about
2010 May 12
6
ZFS High Availability
I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access to a shared pool that is imported during a failover.
The problem is that we use ZFS for a specialized purpose that results in tens of thousands of filesystems (mostly snapshots and clones). All versions of
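The import/export dance the FAQ refers to is roughly the following (pool name is a placeholder), and the import step is presumably what becomes slow with tens of thousands of datasets:
zpool export tank        # on the node giving up the pool, if it is still alive
zpool import -f tank     # on the standby node taking over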
2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev"
device, I did a test which made a disk unavailable -- all attempts to
read from it report EIO.
I expected my configuration (which is a 3 disk test, with 2 disks in a
RAIDZ and a hot spare) to work where the hot spare would automatically
be activated. But I'm finding that ZFS does not behave this way
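If the FMA-driven automatic activation never fires, the spare can be attached by hand (a sketch with placeholder names, not from the original post):
zpool replace testpool c1t1d0 c1t3d0   # c1t1d0 = failed disk, c1t3d0 = the hot spare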
2009 Nov 07
1
Net-SNMP interfaces out of order
Hello Centos People,
I have a CentOS 5.3 box that had a total of 5 ethernet cards in it. It
functions to share an internet connection with 4 different subnets. All
works fine, except I'm noticing that my MRTG traffic graphs are wrong.
Further digging with snmpwalk reveals that the order of the ethernet
interfaces changes every time the machine is rebooted.
For
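A common workaround on the MRTG side (not from the original post; the community string and address are placeholders) is to key targets on IP address rather than ifIndex, so the boot-time reordering stops mattering:
cfgmaker --ifref=ip --global 'WorkDir: /var/www/mrtg' \
    public@192.0.2.1 > /etc/mrtg/mrtg.cfg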
2010 Nov 23
1
drive replaced from spare
I have an x4540 with a single pool made from a bunch of raidz1s with
2 spares (Solaris 10 u7). It's been running great for over a year, but
I've had my first event.
A day ago the system activated one of the spares, c4t7d0, but given the
status below, I'm not sure what to do next.
# zpool status
pool: pool1
state: ONLINE
scrub: resilver completed after 2h25m
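Two usual ways forward once a spare has resilvered in (not from the original post; the failed disk name below is a placeholder): either replace the bad disk, which sends c4t7d0 back to the spare list, or detach the bad disk and keep c4t7d0 in its place:
zpool replace pool1 c4t1d0      # new disk in the failed slot; the spare returns to AVAIL
zpool detach pool1 c4t1d0       # or: drop the failed disk and promote the spare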
2010 Dec 05
4
Zfs ignoring spares?
Hi all
I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 VDEVs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After zfs offlining these and then zfs replacing them with online spares, resilver ended and I thought it'd be ok. Apparently not. Although the resilver succeeds, the pool status