Displaying 17 results from an estimated 17 matches for "j4400".
2012 May 23
5
biggest disk partition on 5.8?
Hey folks,
I have a Sun J4400 SAS1 disk array with 24 x 1T drives in it, connected
to a Sunfire x2250 running 5.8 (64 bit).
I used 'arcconf' to create a big RAID60 out of it (see below).
But then when I mount it, it is way too small.
This should be about 20TB:
[root@solexa1 StorMan]# df -h /dev/sdb1
Filesystem...
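A likely cause of the "too small" filesystem is an MBR (msdos) disk label, whose partition entries cap at 2 TiB (2^32 sectors x 512 bytes); a GPT label avoids the cap. A hedged sketch, assuming the array shows up as /dev/sdb:

```shell
# Hedged sketch: relabel the array with GPT so the partition can exceed 2 TiB.
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 100%
mkfs.xfs /dev/sdb1     # XFS chosen here because ext3 tops out near 16 TB
mount /dev/sdb1 /mnt/big
df -h /mnt/big         # should now show the full ~20 TB
```

The filesystem choice is an assumption; the key point is the GPT label.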
2009 Nov 11
0
[storage-discuss] ZFS on JBOD storage, mpt driver issue - server not responding
...re for the
> SAS card from LSI's web site - v1.29.00 without any changes, server still
> locks.
>
> Any ideas, suggestions how to fix or workaround this issue? The adapter is
> supposed to be enterprise-class.
We have three of these HBAs, used as follows:
X4150, J4400, Solaris-10U7-x86, mpt patch 141737-01
V245, J4200, Solaris-10U7, mpt patch 141736-05
X4170, J4400, Solaris-10U8-x86, mpt/kernel patch 141445-09
None of these systems are suffering the issues you describe. All of
their SAS HBAs are running the latest Sun-supported firmware I could
fin...
2009 Nov 20
1
fsck.btrfs assertion failure with large number of disks in fs
Hello all
We are experimenting with btrfs and we've run into some problems.
We are running on two Sun Storage J4400 Arrays containing a total of
48 1 TB disks.
With 24 disks in the btrfs:
# mkfs.btrfs /dev/sd[b-y]
WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
adding device /dev/sdc id 2
...
adding device /dev/sdy id 24
fs created label (null) on /dev/sd...
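For a setup like this, a hedged sketch of creating the multi-device btrfs with explicit data/metadata RAID profiles (rather than the defaults) and inspecting the result, using the device names from the post:

```shell
# Hedged sketch: multi-device btrfs with explicit data/metadata RAID profiles.
mkfs.btrfs -d raid10 -m raid10 /dev/sd[b-y]
btrfs filesystem show            # list the devices the new fs spans
mount /dev/sdb /mnt/test         # mounting any member device mounts the whole fs
btrfs filesystem df /mnt/test    # per-profile space accounting
```

The mount point and profile choices are assumptions, not a fix for the fsck assertion itself.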
2012 May 23
1
pvcreate limitations on big disks?
OK folks, I'm back at it again. Instead of taking my J4400 (24 x 1T
disks) and making one big RAID60 out of it, which Linux cannot make a
filesystem on, I've created 4 x RAID6 volumes which are each 3.64T.
I then do :
sfdisk /dev/sd{b,c,d,e} <<EOF
,,8e
EOF
to make a big LVM partition on each one.
But then when I do :
pvcreate /dev/sd{b,c,d,e}1
and then...
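Worth noting: sfdisk writes an MBR table, so the `,,8e` partition on a 3.64T volume will silently cap at 2 TiB, which may be what pvcreate is tripping over. A hedged sketch that sidesteps partitioning entirely, since LVM is happy on whole disks (volume group and LV names are made up):

```shell
# Hedged sketch: put LVM directly on the whole devices, avoiding the
# 2 TiB MBR partition limit entirely.
pvcreate /dev/sd{b,c,d,e}
vgcreate bigvg /dev/sd{b,c,d,e}         # one VG spanning all four RAID6 LUNs
lvcreate -l 100%FREE -n biglv bigvg     # single LV using all free extents
pvs; vgs; lvs                           # verify the sizes add up (~14.5T)
```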
2010 Apr 15
6
ZFS for ISCSI ntfs backing store.
I'm looking to move our file storage from Windows to OpenSolaris/ZFS. The Windows box will be connected through 10g iSCSI to the storage. The Windows box will continue to serve the Windows clients and will be hosting approximately 4TB of data.
The physical box is a sunfire x4240, single AMD 2435 processor, 16G ram, LSI 3801E HBA, ixgbe 10g card.
I'm looking for suggestions
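One common OpenSolaris-side layout for this is a zvol exported through COMSTAR; a hedged sketch, with pool and volume names made up:

```shell
# Hedged COMSTAR sketch: export a 4 TB zvol as an iSCSI LUN for the Windows box.
zfs create -V 4T tank/ntfsvol                  # backing zvol (name assumed)
svcadm enable stmf                             # start the target framework
stmfadm create-lu /dev/zvol/rdsk/tank/ntfsvol  # prints the new LU name
stmfadm add-view <LU-name-from-create-lu>      # expose the LU (here: to all hosts)
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target                            # create the iSCSI target
```

The Windows side then formats the LUN as NTFS over the 10g link as usual.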
2012 Jul 18
1
RAID card selection - JBOD mode / Linux RAID
...12.04 on a Sunfire x2250
Hard to get answers I can trust out of vendors :-)
I have a Sun RAID card which I am pretty sure is LSI OEM. It is a
3Gb/s SAS1 card with 2 external connectors like the one on the right here:
http://www.cablesondemand.com/images/products/CS-SAS1MUKBCM.jpg
And I have 2 x Sun J4400 JBOD cabinets, each with 24 disks.
If I buy a new card that is 6Gb/s SAS2 with the same connector, can I
connect my cabinets to it and have them work? Even if they only work
at 3Gb/s I don't care.
I've also hit an issue with the number of logical devices allowed, and
am wondering whether th...
2010 May 03
2
Is the J4200 SAS array suitable for Sun Cluster?
I'm setting up a two-node cluster with 1U x86 servers. It needs a
small amount of shared storage, with two or four disks. I understand
that the J4200 with SAS disks is approved for this use, although I
haven't seen this information in writing. Does anyone have experience
with this sort of configuration? I have a few questions.
I understand that the J4200 with SATA disks will
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact
with, is 700-800 I/Ops reasonable for a 7200 RPM SATA drive (a 1 TB Sun-
badged Seagate ST31000N in a J4400)? I have a resilver running and am
seeing about 700-800 writes/sec. on the hot spare as it resilvers.
There is no other I/O activity on this box, as this is a remote
replication target for production data. I have the replication
disabled until the resilver completes.
Solaris 10U9
zpool version...
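To put numbers on a resilver like this, a hedged sketch of the usual observation commands (pool name assumed):

```shell
# Hedged sketch: observe resilver progress and per-disk write rates.
zpool status tank        # resilver % complete and scan progress
zpool iostat -v tank 5   # per-vdev ops/s and bandwidth every 5 seconds
iostat -xnz 5            # Solaris: per-device service times and %busy
```

Comparing the hot spare's w/s in `zpool iostat -v` against its %busy in `iostat -xnz` shows whether the drive itself is the bottleneck.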
2011 May 19
2
Faulted Pool Question
...ol online returning? The system is running
10U9 with (I think) the September 2010 CPU and a couple multipathing /
SAS / SATA point patches (for a MPxIO and SATA bug we found). zpool
version is 22 zfs version is either 4 or 5 (I forget which). We are
moving off of the 3511s and onto a stack of five J4400 with 750 GB
SATA drives, but we aren't there yet :-(
P.S. The other zpools on the box are still up and running. The ones
that had devices on the faulted 3511 are degraded but online, the ones
that did not have devices on the faulted 3511 are OK. Because of these
other zpools we can'...
2009 Nov 18
0
open(2), but no I/O to large files creates performance hit
...or does tweaking zfs_prefetch_disable just
trigger some other effect? For this test, setting that flag (disabling
prefetch) doesn't seem to hurt performance (at least for this test scenario).
System specs:
-S10,U7
-2-way Nehalem (Xeon X5550, 8 virtual processors), 2.67GHz
-24GB RAM
- J4400 for storage (JBOD)
# zpool status zpl1
  pool: zpl1
 state: ONLINE
 scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        zpl1          ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0...
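For reference, the prefetch tunable discussed above can be flipped without a reboot; a sketch of the usual Solaris 10 method:

```shell
# Sketch: toggle ZFS file-level prefetch on Solaris 10.
echo "zfs_prefetch_disable/W0t1" | mdb -kw   # disable prefetch immediately
echo "zfs_prefetch_disable/W0t0" | mdb -kw   # re-enable it
# To persist across reboots, add this line to /etc/system:
#   set zfs:zfs_prefetch_disable = 1
```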
2009 Nov 17
14
X45xx storage vs 7xxx Unified storage
We are looking at adding to our storage. We would like ~20-30 TB.
We have ~200 nodes (1100 cores) to feed data to using NFS, and we are looking for high reliability, good performance (up to at least 350 MBytes/second over a 10 GigE connection) and large capacity.
For the X45xx (aka thumper), capacity and performance seem to be there (we have 3 now).
However, for system upgrades, maintenance
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all
Sorry if it's kind of off-topic for the list but after talking
to lots of vendors I'm running out of ideas...
We are looking for JBOD systems which
(1) hold 20+ 3.5" SATA drives
(2) are rack mountable
(3) have all the nice hot-swap stuff
(4) allow 2 hosts to connect via SAS (4+ lines per host) and see
all available drives as disks, no RAID volume.
In a
2010 Aug 30
5
pool died during scrub
I have a bunch of sol10U8 boxes with ZFS pools, most all raidz2 8-disk
stripe. They're all supermicro-based with retail LSI cards.
I've noticed a tendency for things to go a little bonkers during the
weekly scrub (they all scrub over the weekend), and that's when I'll
lose a disk here and there. OK, fine, that's sort of the point, and
they're
2009 Jan 30
35
j4200 drive carriers
apparently if you don't order a J4200 with drives, you just get filler
sleds that won't accept a hard drive. (had to look at a parts breakdown
on sunsolve to figure this out -- the docs should simply make this clear.)
it looks like the sled that will accept a drive is part #570-1182.
anyone know how i could order 12 of these?
2012 May 30
11
Disk failure chokes all the disks attached to the failing disk HBA
Dear All,
This may not be the correct mailing list, but I'm having a ZFS issue
when a disk is failing.
The system is a supermicro motherboard X8DTH-6F in a 4U chassis
(SC847E1-R1400LPB) and an external SAS2 JBOD (SC847E16-RJBOD1).
It makes a system with a total of 4 backplanes (2x SAS + 2x SAS2) each
of them connected to 4 different HBAs (2x LSI 3081E-R (1068 chip) + 2x
LSI
2011 Apr 07
40
X4540 no next-gen product?
While I understand everything at Oracle is "top secret" these days.
Does anyone have any insight into a next-gen X4500 / X4540? Does some
other Oracle / Sun partner make a comparable system that is fully
supported by Oracle / Sun?
http://www.oracle.com/us/products/servers-storage/servers/previous-products/index.html
What do X4500 / X4540 owners use if they'd like more
2009 Jan 06
11
zfs list improvements?
To improve the performance of scripts that manipulate zfs snapshots, and the zfs snapshot service in particular, there needs to be a way to list all the snapshots for a given object, and only the snapshots for that object.
There are two RFEs filed that cover this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6352014 :
'zfs list' should have an option to only present direct
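Later zfs versions addressed these RFEs with the -t, -r and -d options to 'zfs list'; a sketch of the eventual usage, with the dataset name made up and contingent on a new enough zfs build:

```shell
# Sketch: listing only the snapshots of a given dataset (newer zfs builds).
zfs list -t snapshot -r tank/home      # all snapshots under tank/home
zfs list -t snapshot -d 1 tank/home    # only tank/home's own snapshots, no children
```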