2009 Jan 13
12
OpenSolaris better than Solaris 10u6 with regards to ARECA RAID card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
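If the messages show timeouts on every drive like this, a quick way to confirm whether the error counters are actually climbing per device, rather than scanning /var/adm/messages by hand, is the standard Solaris error summary; the grep pattern below is just an illustration, not something from the original post:

# per-device error counters (Soft/Hard/Transport) plus vendor info
iostat -En | egrep 'Errors|Vendor'
# FMA ereports, if the fault manager logged the same timeouts
fmdump -e | tail -20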
2010 Feb 12
13
SSD and ZFS
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here, so sunmanagers please excuse the double
post:
I have inherited an X4140 (8 SAS slots) and have just set up the system
with Solaris 10 09. I first set up the system on a mirrored pool over
the first two disks:
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME
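Since the subject is SSDs with ZFS, a minimal sketch of a common way SSDs get used alongside a pool like this may help; the pool name and device names below are hypothetical, not taken from the post:

# hypothetical data pool on two of the free SAS slots
zpool create tank mirror c0t2d0 c0t3d0
# SSD as a dedicated intent log (slog) device
zpool add tank log c0t4d0
# SSD as an L2ARC read cache device
zpool add tank cache c0t5d0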
2009 Oct 10
11
SSD over 10GbE not any faster than 10K SAS over GigE
GigE wasn't giving me the performance I had hoped for, so I sprang for some 10GbE cards. So what am I doing wrong?
My setup is a Dell 2950 without a RAID controller, just a SAS6 card. The setup is as follows:
mirror rpool (boot) SAS 10K
raidz SSD 467 GB on 3 Samsung 256 MLC SSD (220MB/s each)
To create the raidz I did a simple zpool create raidz SSD c1xxxxx c1xxxxxx c1xxxxx. I have
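As quoted, the argument order looks transposed: zpool create takes the pool name first, so that command would try to create a pool literally named raidz. A minimal sketch of the usual form, with placeholder device names since the originals are elided in the excerpt:

# create a pool named SSD as a single raidz vdev over the three SSDs
zpool create SSD raidz c1t0d0 c1t1d0 c1t2d0
# confirm the layout
zpool status SSD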
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, the reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, whereby a directory of files,
including some large ones 100+ MB in size, being written can cause other
clients over NFS to pause for seconds (5-30 or so). This is on B70 bits.
I've gotten used to this behavior over NFS, but
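For slog testing of the kind described here, a sketch of the usual setup may be useful; the pool and device names are hypothetical and this is not the poster's exact procedure:

# add a dedicated log device so synchronous NFS writes land on the slog
# instead of competing with reads on the main vdevs
zpool add tank log c3t0d0
# watch per-vdev activity while an NFS client streams large writes
zpool iostat -v tank 5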