Displaying 20 results from an estimated 500 matches similar to: "StorageTek 2540 performance radically changed"
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting
up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
connected via load-shared 4Gbit FC links. This week I have tried many
different configurations, using firmware managed RAID, ZFS managed
RAID, and with the controller cache enabled or disabled.
My objective is to obtain the best single-file write performance.
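[Editor's note: for a 12-drive 2540 presented as individual LUNs, a typical ZFS-managed layout is six mirrored pairs. A minimal sketch; the pool name and the c0tXd0 device names are placeholders for whatever format(1M) reports:]

  # Let ZFS manage redundancy across the 12 drives (hypothetical device names):
  zpool create tank \
      mirror c0t0d0 c0t1d0 \
      mirror c0t2d0 c0t3d0 \
      mirror c0t4d0 c0t5d0 \
      mirror c0t6d0 c0t7d0 \
      mirror c0t8d0 c0t9d0 \
      mirror c0t10d0 c0t11d0
  # Verify the layout, then test single-file sequential write throughput:
  zpool status tank
  dd if=/dev/zero of=/tank/testfile bs=1M count=8192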
2006 Nov 03
2
Filebench, X4200 and Sun Storagetek 6140
Hi there
I'm busy with some tests on the above hardware and will post some scores soon.
For those that do _not_ have the above available for tests, I'm open to suggestions on potential configs that I could run for you.
Pop me a mail if you want something specific _or_ you have suggestions concerning filebench (varmail) config setup.
Cheers
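[Editor's note: for anyone wanting a reproducible baseline, the stock varmail personality can be driven interactively like this. A sketch; the dataset path and file count are example values:]

  filebench> load varmail
  filebench> set $dir=/pool/fbtest
  filebench> set $nfiles=10000
  filebench> run 60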
2008 Feb 01
2
Un/Expected ZFS performance?
I'm running PostgreSQL (v8.1.10) on Solaris 10 (Sparc) from within a non-global zone. I originally had the database "storage" in the non-global zone (e.g. /var/local/pgsql/data on a UFS filesystem) and was getting performance of "X" (e.g. from a TPC-like application: http://www.tpc.org). I then wanted to try relocating the database storage from the zone (UFS
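[Editor's note: a common tuning step when moving a PostgreSQL data directory onto ZFS is matching the dataset recordsize to the database's 8 KB page size. A sketch; the pool and dataset names are hypothetical:]

  # Create a dataset for the database with an 8K recordsize to match
  # PostgreSQL's page size (names here are examples only):
  zfs create -o recordsize=8k -o mountpoint=/var/local/pgsql/data pool1/pgdata
  zfs get recordsize,mountpoint pool1/pgdata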
2008 Mar 26
0
different read i/o performance for equal guests
Hello,
I'm using Xen 3.0 on Debian Linux Etch / Dell PowerEdge 860 / 4GB
RAM / Pentium 4 Dual Core 3GHz. The machine uses a SAS 5iR RAID
controller, configured with two 500GB disks in RAID-1 (mirroring). I was
getting I/O throughput problems, but then I searched the Internet
and found a solution saying that I needed to enable the write cache on
the RAID controller. Well,
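[Editor's note: one way to inspect and set the write-cache bit on the underlying disks from Linux is sdparm; whether this helps depends on whether the controller or the disks do the caching. A sketch; /dev/sda is a placeholder:]

  # Query the Write Cache Enable (WCE) bit in the caching mode page:
  sdparm --get WCE /dev/sda
  # Enable it (risks data loss on power failure without a battery-backed cache):
  sdparm --set WCE=1 /dev/sda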
2003 Mar 25
2
AS/400 - Unix Connectivity
Hi Folks,
I've used Samba in the past for Windows NT - Unix connectivity situations.
I wondered if Samba also supported AS/400 - Unix connectivity, specifically
to allow AS/400 to read/write to a Unix file system...??
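[Editor's note: assuming the AS/400 side can act as an SMB client (e.g. via its integrated NetServer/QNTC support), the Unix side only needs an ordinary Samba share. A sketch; the share name, path, and user are examples only:]

  # smb.conf fragment (hypothetical names):
  [as400data]
      path = /export/as400
      read only = no
      guest ok = no
      valid users = as400user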
Regards
Ian Gill
Principal Consultant
Professional Services
(+1) 727.784.4475 Office
(+1) 727.784.1278 Fax
(+1) 727.560.6710 Cell.
Ian_Gill@StorageTek.com
2009 Jun 18
7
7110 questions
Hi all,
(down to the wire here on EDU grant pricing :)
I'm looking at buying a pair of 7110s in the EDU grant sale.
The price is sure right. I'd use them in a mirrored, cold-failover
config.
I'd primarily be using them to serve a VMware cluster; the current config
is two standalone ESX servers with local storage, 450GB of SAS RAID10 each.
the 7110 price
2007 Apr 19
14
Experience with Promise Tech. arrays/JBODs?
Greetings,
In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've
run across the recent "VTrak" SAS/SATA systems from Promise Technologies,
specifically their E-class and J-class series:
E310f FC-connected RAID:
http://www.promise.com/product/product_detail_eng.asp?product_id=175
E310s SAS-connected RAID:
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss]
We are occasionally seeing massive time-to-completions for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using a SSD drive as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
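[Editor's note: for reference, a dedicated ZIL device is what lets synchronous NFS COMMITs land on the SSD instead of the data disks. A sketch of how one is attached and monitored; the device name is a placeholder:]

  # Add an SSD as a separate intent-log device (c2t0d0 is hypothetical):
  zpool add tank log c2t0d0
  # Confirm the log device is present and watch its utilization:
  zpool status tank
  zpool iostat -v tank 5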
2011 Jan 27
0
Move zpool to new virtual volume
Hello all,
I want to reorganize the virtual disk / storage pool / volume layout on a StorageTek 6140 with two CSM200 expansion units attached (for example, striping LUNs across trays, which is not the case at the moment). On a data server I have a zpool "pool1" over one of the volumes on the StorageTek. The zfs file systems in the pool are mounted locally and exported via nfs to clients. Now
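[Editor's note: one low-downtime way to move a single-device pool onto a newly laid-out volume is to mirror it over and then detach the old side. A sketch; both device names are placeholders, and the new volume must be at least as large as the old one:]

  # Mirror the existing volume onto the new one (old device, then new device):
  zpool attach pool1 c3t0d0 c3t1d0
  # Wait until resilvering completes, then drop the old side:
  zpool status pool1
  zpool detach pool1 c3t0d0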
2007 May 24
9
No zfs_nocacheflush in Solaris 10?
Hi,
I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit
and in /etc/system I put:
set zfs:zfs_nocacheflush = 1
And after rebooting, I get the message:
sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module
So is this variable not available in the Solaris kernel?
I'm getting really poor
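[Editor's note: you can confirm whether the running kernel actually exports the symbol with mdb. A sketch; if the symbol cannot be found, that kernel build simply predates the tunable:]

  # Print the current value of the tunable from the live kernel:
  echo "zfs_nocacheflush/D" | mdb -k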
2008 Dec 14
1
Is that iozone result normal?
5-nodes server and 1 node client are connected by gigabits Ethernet.
#] iozone -r 32k -r 512k -s 8G
           KB  reclen   write  rewrite    read  reread
      8388608      32   10559     9792   62435   62260
      8388608     512   63012    63409   63409   63138
It seems the 32k write/rewrite performance is very
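[Editor's note: by default iozone does not include flush or close time in its throughput figures, which can flatter small-record writes over a network filesystem. A sketch of a stricter rerun; all flags shown are standard iozone options:]

  # -e includes flush (fsync) in timing, -c includes close();
  # -i 0 -i 1 restricts the run to write/rewrite and read/reread:
  iozone -e -c -i 0 -i 1 -r 32k -r 512k -s 8G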
2011 Dec 08
4
Backup Redux
Hey folks,
I just went through the archives to see what people are doing for backups,
and here is what I found:
- amanda
- bacula
- BackupPC
- FreeNAS
Here is my situation : we have pretty much all Sun hardware with a Sun
StorageTek SL24 tape unit backing it all up. OSes are a combination of
RHEL and CentOS. The software we are using is EMC
NetWorker Management Console version
2016 Mar 18
3
Incorrect memory usage returned from virsh
When I run `virsh dominfo <domain>` I get the following:
Id: 455
Name: instance-000047e0
UUID: 50722aa0-d5c6-4a68-b4ef-9b27beba48aa
OS Type: hvm
State: running
CPU(s): 4
CPU time: 123160.4s
Max memory: 33554432 KiB
Used memory: 33554432 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model:
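[Editor's note: dominfo reports the balloon size, so "Used memory" equal to "Max memory" just means the balloon is fully inflated. The hypervisor-side view comes from dommemstat. A sketch, reusing the domain name from the output above:]

  # Per-domain memory statistics from the balloon driver / hypervisor:
  virsh dommemstat instance-000047e0
  # Typical fields include actual (balloon target) and rss (host-resident memory).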
2011 Nov 23
3
P2Vs seem to require a very robust Ethernet
Now that we can gather diagnostic info, I think I know why our P2Vs kept
failing last week. Another one just died right in front of my eyes. I
think either the Ethernet or NFS server at this site occasionally
"blips" offline when it gets busy and that messes up P2V migrations.
The RHEV export domain is an NFS share offered by an old Storagetek NAS,
connected over a 10/100 Ethernet.
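[Editor's note: if the NAS really does blip offline, hard-mounting the export domain with longer timeouts at least makes transient outages block and retry instead of erroring out the migration. A sketch; the server name, export path, and mount point are placeholders, and RHEV normally manages this mount itself:]

  # Hard-mount with a 60s timeout and 5 retransmits before a major timeout:
  mount -t nfs -o hard,timeo=600,retrans=5 nas:/export/rhev /mnt/export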
2002 Aug 06
7
Mirroring 2 IDE Drive Linux
I have two drives in a Linux 7.1 server, one primary and one secondary. I am trying to clone to the secondary slave drive, then take the primary out, change the slave to primary, and boot from it. The problem I am having is that the copied drive will not boot; it is not getting the lilo boot configuration properly. When I try to boot the copied drive, it boots to a blank screen with an
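[Editor's note: the usual fix after block-copying a boot disk is to rewrite the boot loader on the clone before swapping it in. A sketch, assuming a rescue boot and that the clone's root partition is /dev/hdb1 (hypothetical):]

  # Mount the clone and chroot into it:
  mount /dev/hdb1 /mnt
  chroot /mnt
  # In /etc/lilo.conf, make sure boot= and root= point at the disk the
  # clone will become (e.g. boot=/dev/hda once moved to primary), then:
  lilo -v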
2004 Jan 25
0
RE: SAMBA 15TB Volume?
Thanks Paul, I really appreciate your help.
Regards,
Thomas Massano
Systems Engineer
One World Financial Center
New York, NY
212.416.0710 Office
212.416.0740 FAX
Thomas_Massano@StorageTek.com
INFORMATION made POWERFUL
-----Original Message-----
From: Green, Paul [mailto:Paul.Green@stratus.com]
Sent: Saturday, January 24, 2004 7:11 PM
To: Green, Paul; Massano, Thomas
Cc: 'Samba
2010 Feb 24
0
disks in zpool gone at the same time
Hi,
Yesterday I got all my disks in two zpools disconnected.
They are not real disks - LUNs from a StorageTek 2530 array.
What could that be - a failing LSI card or the mpt driver in 2009.06?
After reboot I got four disks in FAILED state - zpool clear fixed
things with resilvering.
Here is how it started (/var/adm/messages)
Feb 23 12:39:03 nexus scsi: [ID 365881 kern.info]
/pci@0,0/pci10de,5d@
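[Editor's note: when every LUN behind one HBA drops at once, the per-device error counters usually point at the card or cabling rather than the disks. A sketch of the standard Solaris checks:]

  # Transport vs. media error counts per device:
  iostat -En
  # Fault-management telemetry from around the time of the event:
  fmdump -eV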
2002 Oct 03
2
How to run rsync as a daemon program that runs at a particular time given
Hi,
I want to run rsync as a daemon so that it can run at a particular
time at night, as specified by us.
Any help greatly appreciated.
Thanks in advance
--Venkat
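[Editor's note: rsync itself has no scheduler; the usual approach is a cron entry that runs the transfer nightly. A sketch; the source path, destination host, and log file are placeholders:]

  # crontab -e entry: run at 02:30 every night (hypothetical paths/host):
  30 2 * * * rsync -az /data/ backuphost:/backups/data/ >> /var/log/rsync.log 2>&1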
2009 Jan 06
11
zfs list improvements?
To improve the performance of scripts that manipulate zfs snapshots, and the zfs snapshot service in particular, there needs to be a way to list all the snapshots for a given object, and only the snapshots for that object.
There are two RFEs filed that cover this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6352014 :
'zfs list' should have an option to only present direct
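[Editor's note: later ZFS builds grew exactly this combination, a type filter plus a recursion depth limit. A sketch of that eventual syntax; it is not available in the builds contemporary with this post:]

  # List only the snapshots directly under a given dataset:
  zfs list -t snapshot -r -d 1 pool/fs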
2002 Jul 03
3
EXT3-fs error on kernel 2.4.18-pre3
Hi,
I just noticed that my file server running 2.4.18-pre3 + IDE patches &
NTFS patches has this error message in the logs:
EXT3-fs error (device md(9,4)): ext3_free_blocks: Freeing blocks not in
datazone - block = 33554432, count = 1
This is the only ext3 error I have seen and the uptime is currently over
74 days. The error actually appeared two weeks ago. The timing coincides
well with
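[Editor's note: block 33554432 is exactly 2^25, a single set bit, so a flipped bit somewhere in the I/O path is a plausible suspect; a forced fsck will confirm whether the on-disk bitmaps are actually damaged. A sketch, taking the md(9,4) device from the log line above to be /dev/md4:]

  # Unmount (or boot rescue media) and force a full check of the array:
  umount /dev/md4
  e2fsck -f /dev/md4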