Displaying 20 results from an estimated 4000 matches similar to: "zfs"
2019 Jun 14
0
zfs [SOLVED]
mark wrote:
> Hi, folks,
>
>
> testing zfs. I'd created a zpool (raidz2), ran a large backup onto it. Then I
> pulled one drive (11-drive, one hot spare pool), and it resilvered with
> the hot spare. zpool status -x shows me state: DEGRADED
> status: One or more devices could not be used because the label is missing
> or invalid. Sufficient replicas exist for the pool to
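For reference, the usual cleanup once a hot spare has finished resilvering looks something like this (the pool and device names below are only examples, not taken from the thread):

zpool status -v tank          # confirm the resilver onto the spare completed
zpool detach tank sda         # detach the failed/pulled disk; the spare becomes a normal member
# or, if the original disk is actually healthy and back in place:
zpool replace tank sda        # resilver back onto it, returning the spare to AVAIL
zpool clear tank              # clear the error counters / DEGRADED status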
2019 Jul 01
1
Was, Re: raid 5 install, is ZFS
Speaking of ZFS, got a weird one: we were testing ZFS (ok, it was on
Ubuntu, but that shouldn't make a difference, I would think). And I've got
a zpool (raidz2). I pulled one drive, to simulate a drive failure, and it
rebuilt with the hot spare. Then I pushed the drive I'd pulled back in...
and it does not look like I've got a hot spare. zpool status shows
config:
NAME STATE
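A sketch of how a spare is normally handed back once the original disk has been reinserted (again, the pool and device names are hypothetical):

zpool replace tank sdc        # resilver onto the re-inserted original disk
zpool status tank             # wait for the resilver to finish
zpool detach tank sdl         # detach the in-use spare; it should drop back to the AVAIL spares list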
2019 Jul 01
5
raid 5 install
On Jul 1, 2019, at 7:56 AM, Blake Hudson <blake at ispn.net> wrote:
>
> I've never used ZFS, as its Linux support has been historically poor.
When was the last time you checked?
The ZFS-on-Linux (ZoL) code has been stable for years. In recent months, the BSDs have rebased their offerings from Illumos to ZoL. The macOS port, called O3X, is also mostly based on ZoL.
That leaves
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root@r1k1 ~]# hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: Alarm clock
> [root@r1k1 ~]#
Hi Kelly,
Try running 'iostat -xdmc 1'. Look for a single drive that has
substantially greater await than ~10msec. If all the drives
except one are taking 6-8msec, but one is very
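One quick way to surface the outlier, assuming the el7 sysstat column layout where await is field 10 (adjust the field number for your iostat version):

iostat -xdm 1 5 | awk '/^sd/ {print $1, $10}' | sort -k2 -n | tail -5
# crude: aggregates all reports, but the slow disk still floats to the top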
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
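The md "check" itself can be paced so it doesn't starve production I/O; the typical knobs look like this (values here are illustrative, not from this thread):

cat /sys/block/md0/md/sync_action                   # shows "check" while the scrub is running
echo 10000  > /proc/sys/dev/raid/speed_limit_min    # KB/s guaranteed to the check
echo 50000  > /proc/sys/dev/raid/speed_limit_max    # KB/s ceiling for the check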
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest - we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root@r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2012 Sep 05
3
BTRFS thinks device is busy [kernel 3.5.3]
Hi,
I'm running OpenSuse 12.2 with kernel 3.5.3
HBA= LSI 1068e using the MPTSAS driver (patched)
(https://patchwork.kernel.org/patch/1379181/)
SANOS1:/media # uname -a
Linux SANOS1 3.5.3 #3 SMP Sun Sep 2 18:44:37 CEST 2012 x86_64 x86_64
x86_64 GNU/Linux
I've tried to simulate a disk replacement but it seems that now
/dev/sdg is stuck in the btrfs pool (RAID10)
SANOS1:/media #
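A sketch of the usual ways to get a disk out of a mounted btrfs RAID10, assuming a hypothetical mount point /media/pool:

btrfs filesystem show                       # list the devices the pool still claims
btrfs device delete /dev/sdg /media/pool    # migrate data off the stuck device and drop it
# on newer kernels an in-place swap is also possible:
btrfs replace start /dev/sdg /dev/sdh /media/pool
btrfs replace status /media/pool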
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using Centos 4.5 right now, and I had a RAID 5 array stop because
two drives became unavailable. After adjusting the cables on several
occasions and shutting down and restarting, I was able to see the
drives again. This is when I snatched defeat from the jaws of
victory. Please, someone with vast knowledge of how RAID 5 with mdadm
works, tell me if I have any chance at all
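For context, the usual recovery attempt when array members dropped out but the disks are readable again looks roughly like this (device names are examples only):

mdadm --stop /dev/md0
mdadm --examine /dev/sd[b-f]1       # compare event counters across the members
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
# recreating with --assume-clean (exact original device order, level, and chunk size) is a last resort only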
2010 May 28
2
permanently add md device
Hi All
Currently I'm setting up a 5.4 server and trying to create a 3rd RAID device. When I run:
$mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq
the device file "md2" is created and the raid is being configured. but somehow
2024 Oct 19
2
How much disk can fail after a catastrophic failure occur?
Hi there.
I have 2 servers with this number of disks in each side:
pve01:~# df | grep disco
/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
/dev/sdc 2.0T 19G 2.0T 1%
2010 Sep 17
1
multipath troubleshoot
Hi,
My storage admin just assigned a LUN (fibre) to my server. Then I rescanned using
echo "1" > /sys/class/fc_host/host5/issue_lip
echo "1" > /sys/class/fc_host/host6/issue_lip
I can see the scsi device using dmesg
But an mpath device is not created for this LUN.
Please see below. The last 4 should be active, and I think this is the problem.
Kernel:
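Typical checks when a freshly scanned LUN shows up as sd devices but never gets an mpath map (a sketch, not taken from the thread):

multipath -ll                       # what device-mapper multipath currently knows about
multipath -v2                       # try to (re)build maps, verbose
multipathd -k"show config"          # confirm the new WWID isn't blacklisted
grep -A5 blacklist /etc/multipath.conf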
2011 Nov 22
1
Recovering data from old corrupted file system
I have a multi-device file system that got corrupted ages
ago (as I recall, one of the drives stopped responding, causing btrfs
to panic). I am hoping to recover some of the data. For what it's
worth, here is the dmesg output from trying to mount the file system
on a 3.0 kernel:
device label Media devid 6 transid 816153 /dev/sdq
device label Media devid 7 transid 816153
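Read-only salvage options from the 3.x era, assuming /dev/sdq is one member of the filesystem and /srv/salvage is scratch space elsewhere:

mount -o ro,recovery,degraded /dev/sdq /mnt      # try the built-in recovery mount first
btrfs restore -v /dev/sdq /srv/salvage           # copy files out without mounting
btrfs-find-root /dev/sdq                         # locate older tree roots to feed restore -t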
2009 Jan 13
2
mounted.ocfs2 -f returns Unknown: Bad magic number in inode
Hello,
I have installed ocfs2 without problems and use it for a RAC 10gR2.
Only the Clusterware files are of ocfs2 type.
Multipath is also used.
When I issue: mounted.ocfs2 -f
I have a strange result:
Device FS Nodes
/dev/sda ocfs2 Unknown: Bad magic number in inode
/dev/sda1 ocfs2 pocrhel2, pocrhel1
/dev/sdb ocfs2 Not mounted
/dev/sdf
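A quick way to see which block devices actually carry an ocfs2 superblock (the whole-disk node usually doesn't, only the partition does); device names here mirror the output above but the commands are only a sketch:

debugfs.ocfs2 -R "stats" /dev/sda1 | head    # a valid superblock prints volume/cluster info
debugfs.ocfs2 -R "stats" /dev/sda | head     # expected to fail on the raw, partitioned disk
mounted.ocfs2 -d                             # detect mode: lists devices without querying nodes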
2020 Sep 09
4
Btrfs RAID-10 performance
Hi, thank you for your reply. I'll continue inline...
On 09.09.2020 at 3:15, John Stoffel wrote:
> Miloslav> Hello,
> Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got a reply:
> Miloslav> "RAID-1 would be preferable"
> Miloslav> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112@lechevalier.se/T/).
>
2024 Oct 20
1
How much disk can fail after a catastrophic failure occur?
If it's replica 2, you can lose up to 1 brick per replica (distribution) group. For example, if you have a volume TEST with a setup like this:
server1:/brick1
server2:/brick1
server1:/brick2
server2:/brick2
You can lose any one brick of the "/brick1" replica and any one brick of the "/brick2" replica. So if you lose server1:/brick1 and server2:/brick2 -> no data loss will be experienced.
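For illustration, that brick ordering would come from a create command like the following; consecutive bricks form one replica set:

gluster volume create TEST replica 2 \
    server1:/brick1 server2:/brick1 \
    server1:/brick2 server2:/brick2
gluster volume info TEST    # shows the distribute-replicate layout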
2017 Mar 14
2
systemd, oh my
Ok, folks, I don't get this one at all.
I've got a server that I just rebuilt last week, from C5 to C7. It used to
export filesystems. Those were moved to another server, and NFS wasn't
turned up when I built it. I just turned it down again. And yet, I see
Mar 14 10:26:33 <servername> systemd: Job
dev-disk-by\x2dlabel-export1.device/start timed out.
Mar 14 10:26:33
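The timed-out unit decodes to /dev/disk/by-label/export1, so something still references that label; a sketch of where to look:

systemctl list-units --type=device --all | grep export1
grep export1 /etc/fstab                  # e.g. a leftover LABEL=export1 entry
# after removing the stale entry (or marking it noauto,nofail):
systemctl daemon-reload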
2020 Sep 07
4
Btrfs RAID-10 performance
Hello,
I sent this into the Linux Kernel Btrfs mailing list and I got a reply:
"RAID-1 would be preferable"
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112@lechevalier.se/T/).
May I ask you for the comments as from people around the Dovecot?
We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro
server with Intel(R) Xeon(R) CPU E5-2620 v4 @
2013 Nov 07
1
IBM Storwize V3700 storage - device names
Hello,
I have IBM Storwize V3700 storage, connected to 2 IBM x3550 M4 servers
via Fibre Channel. The servers have QLogic ISP2532-based 8Gb Fibre Channel
to PCI Express HBA cards and run CentOS 5.10.
When I export a volume to the servers, each of them sees the volume
twice, i.e. /dev/sdb and /dev/sdc, with the same size.
Previously I have installed many systems with IBM DS3500 series of
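The two sd nodes are just two FC paths to the same LUN; dm-multipath is the usual way to collapse them into one stable device. A sketch (mpathconf exists on EL6+; on CentOS 5, edit /etc/multipath.conf from the package's sample instead):

yum install device-mapper-multipath
mpathconf --enable         # or hand-edit /etc/multipath.conf on CentOS 5
service multipathd start
multipath -ll              # should show one mpath device with two paths underneath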
2024 Oct 21
1
How much disk can fail after a catastrophic failure occur?
OK! I get it now about how many disks I can lose, and so on.
But regarding the arbiter issue, I always set these parameters on the gluster
volume in order to avoid split-brain, and I might add that they work pretty well
for me.
I already have a Proxmox VE cluster with 2 nodes and about 50 VMs, running
different Linux distros - and Windows as well - with cPanel and other stuff,
in production.
Anyway here the
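Quorum options commonly used on replica volumes to avoid split-brain look like this (illustrative only; the poster's actual parameters are not shown above):

gluster volume set VOLNAME cluster.quorum-type auto
gluster volume set VOLNAME cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%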
2006 Oct 13
3
error running webserver 7 with the DTrace dvm agents...
I am attempting to run the Sun webserver 7