similar to: biggest disk partition on 5.8?

Displaying 20 results from an estimated 400 matches similar to: "biggest disk partition on 5.8?"

2012 Jul 18
1
RAID card selection - JBOD mode / Linux RAID
I don't think this is off topic, since I want to use JBOD mode so that Linux can do the RAID. I'm hoping to run this under CentOS 5 and Ubuntu 12.04 on a Sunfire x2250. Hard to get answers I can trust out of vendors :-) I have a Sun RAID card which I am pretty sure is LSI OEM. It is a 3 Gb/s SAS1 card with 2 external connectors like the one on the right here :
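If the card's JBOD mode does hand the raw disks to the OS, the software-RAID side is straightforward; a minimal sketch, assuming the four disks appear as /dev/sd[b-e] (device names and RAID level are assumptions):

    mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
    cat /proc/mdstat          # watch the initial sync
    mkfs.ext4 /dev/md0        # ext4 chosen only as an example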
2012 May 23
1
pvcreate limitations on big disks?
OK folks, I'm back at it again. Instead of taking my J4400 (24 x 1T disks) and making one big RAID60 out of it, which Linux cannot make a filesystem on, I've created 4 x RAID6, each 3.64T. I then do: sfdisk /dev/sd{b,c,d,e} <<EOF ,,8e EOF to make a big LVM partition on each one. But then when I do: pvcreate /dev/sd{b,c,d,e}1 and then pvdisplay, it shows each one as
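The usual culprit here is the MS-DOS label that sfdisk writes, which cannot describe a partition past 2 TiB; two common workarounds, sketched with the thread's device names:

    # Option 1: skip partitioning and give LVM the whole device.
    pvcreate /dev/sd{b,c,d,e}
    # Option 2: GPT label with one full-size LVM partition per disk.
    parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
    parted -s /dev/sdb set 1 lvm on
    pvcreate /dev/sdb1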
2019 Mar 14
1
howto monitor disks on a serveraid-8k?
On 3/14/19 2:31 PM, isdtor wrote: > >> I'd like to monitor the disks connected to a ServeRaid-8k controller in a >> server running Centos 7 such that I can know when one fails. >> >> What's the best way to do that? > > It's been a long time since I worked with ServeRaid, and things may have changed in the meantime. > > IBM used to have an iso
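These days the generic route is often smartmontools, which has an aacraid passthrough for Adaptec-family controllers like the 8k; a hedged sketch (the host/lun/id triplet below is hypothetical and needs adjusting):

    # -d aacraid,<host>,<lun>,<id>; the 0,0,0 triplet is a guess.
    smartctl -d aacraid,0,0,0 -a /dev/sda
    # Or let smartd poll and mail on failure (line for /etc/smartd.conf):
    # /dev/sda -d aacraid,0,0,0 -a -m root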
2012 May 30
11
Disk failure chokes all the disks attached to the failing disk HBA
Dear All, This may not be the correct mailing list, but I'm having a ZFS issue when a disk is failing. The system is a Supermicro motherboard X8DTH-6F in a 4U chassis (SC847E1-R1400LPB) plus an external SAS2 JBOD (SC847E16-RJBOD1). That makes a system with a total of 4 backplanes (2x SAS + 2x SAS2), each of them connected to a different HBA (2x LSI 3081E-R (1068 chip) + 2x LSI
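A hedged first-response sketch for this failure mode, i.e. finding and isolating the suspect disk before it stalls the shared HBA (pool and device names are hypothetical):

    zpool status -x              # which pool and vdev are unhappy
    iostat -xen 5                # Solaris: per-disk error counts and service times
    fmdump -e | tail             # recent FMA error telemetry
    zpool offline tank c7t5d0    # hypothetical pool/disk; isolate the suspect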
2009 Nov 20
1
fsck.btrfs assertion failure with large number of disks in fs
Hello all, We are experimenting with btrfs and we've run into some problems. We are running on two Sun Storage J4400 arrays containing a total of 48 1 TB disks. With 24 disks in the btrfs: # mkfs.btrfs /dev/sd[b-y] WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL WARNING! - see http://btrfs.wiki.kernel.org before using adding device /dev/sdc id 2 ... adding device /dev/sdy id 24 fs
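For reference, the multi-device creation plus the usual sanity checks with current btrfs-progs look like this (the v0.19-era tools spelled these btrfs-show and btrfsck; device names as in the thread):

    mkfs.btrfs /dev/sd[b-y]       # one filesystem spanning all 24 devices
    btrfs filesystem show         # confirm all devices joined
    btrfs check /dev/sdb          # read-only by default; successor to btrfsck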
2012 Sep 12
1
sysutils/arcconf errors on 9.x versions
Back in July, this error was discussed briefly on the mailing list(s). It appears that a fix (r238182) was submitted for inclusion in 9.1 (early). This problem still appears in 9.1-RC1. Will the fix be included in 9.1-RELEASE (or better yet, 9.1-RC2)? Thanks. David Boyd. ---------- 1st e-mail from pluknet responding to
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all, Sorry if it's kind of off-topic for the list, but after talking to lots of vendors I'm running out of ideas... We are looking for JBOD systems which (1) hold 20+ 3.5" SATA drives, (2) are rack mountable, (3) have all the nice hot-swap stuff, (4) allow 2 hosts to connect via SAS (4+ lanes per host) and see all available drives as disks, no RAID volume. In a
2006 Nov 10
3
aaccli on recent controllers?
I have just built a new SunFire X4100 server with an Adaptec 2230SLP RAID card using FreeBSD 6.2-PRE kernel (from September 20). Everything is working extremely well except I cannot run the aaccli utility on this controller. When I try to open the controller, it gives this error: Command Error: <The current AFAAPI.DLL is too old to work with the current controller software.> On
2013 Jul 04
6
Trouble creating DomU with 2 NICs
Hey folks, I created a DomU, installed Linux, and then realized I'd only given it 1 NIC, so I brought it down to edit the cfg file and give it another NIC. Originally I just had: vif = [''] And so I guess the defaults worked for the 1 NIC. So I changed it to: vif =
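For reference, a second interface is normally just a second element in the vif list; a minimal sketch in the xm/xl cfg format (the bridge names are assumptions):

    vif = [ 'bridge=xenbr0', 'bridge=xenbr1' ]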
2009 Aug 18
2
OT: RAID5, RAID50 and RAID60 performance??
We have several Dell servers with MD1000 arrays connected to them. The servers will run the CentOS 5.x x86_64 version. My questions are: 1. Which configuration gives better performance: RAID5, RAID50, or RAID60? 2. How much performance difference is there?
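As a rule of thumb, every random small write costs about 4 disk I/Os on RAID5 and 6 on RAID6 (read-modify-write of data plus parity), so RAID50 generally beats RAID60 on write-heavy loads; the honest answer, though, is to measure on the actual MD1000. A hedged fio sketch (device name hypothetical, and destructive to its contents):

    # DESTRUCTIVE to data on the target device; run only on an empty array.
    fio --name=randw --filename=/dev/sdb --rw=randwrite --bs=8k \
        --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based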
2019 Mar 14
4
howto monitor disks on a serveraid-8k?
Hi, I'd like to monitor the disks connected to a ServeRaid-8k controller in a server running Centos 7 such that I can know when one fails. What's the best way to do that?
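One hedged approach, assuming Adaptec's arcconf recognizes the 8k (which is Adaptec OEM silicon): poll the logical-device status from cron and mail on anything non-Optimal.

    #!/bin/sh
    # Sketch only; controller number 1 and the status wording are assumptions.
    STATUS=$(arcconf GETCONFIG 1 LD | grep -i 'Status of logical device')
    echo "$STATUS" | grep -qv 'Optimal' && \
        echo "$STATUS" | mail -s 'ServeRAID logical device not Optimal' root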
2009 Nov 11
0
[storage-discuss] ZFS on JBOD storage, mpt driver issue - server not responding
miro at cybershade.us said: > So at this point this looks like an issue with the MPT driver or these SAS > cards (I tested two) when under heavy load. I put the latest firmware for the > SAS card from LSI's web site - v1.29.00 - without any change; the server still > locks. > > Any ideas or suggestions how to fix or work around this issue? The adapter is > supposed to be
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact with: is 700-800 IOPS reasonable for a 7200 RPM SATA drive (a 1 TB Sun-badged Seagate ST31000N in a J4400)? I have a resilver running and am seeing about 700-800 writes/sec on the hot spare as it resilvers. There is no other I/O activity on this box, as this is a remote replication target for production data. I have the
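For context, a bare 7200 RPM drive manages on the order of 100-150 random IOPS, but resilver writes to a hot spare are largely sequential and write-cached, so 700-800 writes/sec is plausible. Hedged commands for watching it from the pool side (pool name assumed):

    zpool iostat -v tank 5       # per-vdev ops/sec during the resilver
    iostat -xn 5                 # Solaris: per-device w/s and service times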
2011 May 19
2
Faulted Pool Question
I just got a call from another of our admins, as I am the resident ZFS expert. They have opened a support case with Oracle, but I figured I'd ask here as well, as this forum often provides better, faster answers :-) We have a server (M4000) with 6 FC-attached SE-3511 disk arrays (some behind a 6920 DSP engine). There are many LUNs, all about 500 GB and mirrored via ZFS. The LUNs
2009 May 28
3
IBM ServeRAID Manager software
Hi there, I'm in the process of installing CentOS 5.2 on an IBM x236 w/ ServeRAID 7k I recently acquired, to act as a Samba file server. The hardware has passed all the stress tests I could throw at it, so we're okay there. My question is: has anyone had any luck getting the latest IBM ServeRAID Manager v9.0 working in CentOS? If so, how? ServeRAID Manager is based on Adaptec's
2010 Apr 15
6
ZFS for ISCSI ntfs backing store.
I'm looking to move our file storage from Windows to OpenSolaris/ZFS. The Windows box will be connected to the storage through 10G for iSCSI. The Windows box will continue to serve the Windows clients and will be hosting approximately 4TB of data. The physical box is a SunFire X4240: single AMD 2435 processor, 16G RAM, LSI 3801E HBA, ixgbe 10G card. I'm looking for suggestions
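A minimal COMSTAR sketch for carving such a LUN out of ZFS, assuming OpenSolaris-era commands and hypothetical pool/volume names:

    zfs create -V 4T tank/ntfs-lun                  # zvol backing the Windows LUN
    svcadm enable stmf                              # SCSI target framework
    svcadm enable -r svc:/network/iscsi/target      # iSCSI target service
    sbdadm create-lu /dev/zvol/rdsk/tank/ntfs-lun   # prints the LU GUID
    stmfadm add-view 600144f0...                    # GUID elided; use sbdadm's output
    itadm create-target                             # default iSCSI target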
2013 Dec 04
3
Adaptec 5805 as a guest on ESXi 5.5 - problem
Hi, I've installed FreeBSD (stable/9.2) as a guest on ESXi 5.5 and added the Adaptec controller via passthrough. Unfortunately FreeBSD does not show the hard drives. Any clue? aac0: <Adaptec RAID 5805> mem 0xfd200000-0xfd3fffff irq 18 at device 0.0 on pci3 aac0: Enabling 64-bit address support aac0: Enable Raw I/O aac0: Enable 64-bit array aac0: New comm. interface enabled aac0:
2010 May 03
2
Is the J4200 SAS array suitable for Sun Cluster?
I'm setting up a two-node cluster with 1U x86 servers. It needs a small amount of shared storage, with two or four disks. I understand that the J4200 with SAS disks is approved for this use, although I haven't seen this information in writing. Does anyone have experience with this sort of configuration? I have a few questions. I understand that the J4200 with SATA disks will
2016 Jul 12
4
CentOS 6, mptfusion software?
Hi, folks, I've got an older Dell R410 with an LSI 1068E PCI-Express Fusion-MPT SAS (rev 08). It *appears*, a) from trying MegaRAID, and b) from what I'm googling, that what I need are mptfusion-related packages. Unfortunately, yum shows me nothing available in base, EPEL, or rpmfusion. Am I looking for the wrong thing, or does anyone have a source? (No, I haven't looked at LSI, sorry
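One hedged observation: the 1068E is a Fusion-MPT part, normally driven by the in-kernel mptsas module rather than anything MegaRAID-related, so no extra package may be needed; a quick way to check what is bound:

    lspci -k | grep -iA3 lsi      # look for the "Kernel driver in use:" line
    lsmod | grep -i mpt           # mptsas / mptscsih / mptbase
    dmesg | grep -i mpt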
2010 Oct 24
3
ZFS with STK raid card w battery
We have Sun STK RAID cards in our x4170 servers. These are battery-backed with 256MB cache. What is the recommended ZFS configuration for these cards? Right now, I have created a one-to-one logical-volume-to-disk mapping on the RAID card (one disk == one volume on the RAID card). Then I mirror them using ZFS; no hardware mirror. What I am a little confused about is whether it is better not to do any
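For what it's worth, the layout described above translates to a plain ZFS mirror over the per-disk volumes; a small sketch (device names are hypothetical Solaris cXtYdZ names):

    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
    zpool status tank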