similar to: really large file systems with centos

Displaying 20 results from an estimated 10000 matches similar to: "really large file systems with centos"

2009 Oct 09
2
DNS is confusing! (I really need some help understanding!)
OK, I am confused and DNS is the reason. So, Comcast: 13 public IPs bound to my modem. Each public IP has a DNS name from Comcast (they assign it automatically), like: 173.13.167.209 --> 173-13-167-209-sfba.hfc.comcastbusiness.net. I created a DNS entry at GoDaddy for 173.13.167.209 that is 'inhouse.theindiecompanyllc.com'. When eth0 is alive, I see that it tells me my name
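For reference, the forward and reverse records described above can be checked from any host with dig; the hostname and IP are the ones quoted in the post, so substitute your own:

    # Forward lookup: does the GoDaddy A record point at the Comcast IP?
    dig +short A inhouse.theindiecompanyllc.com

    # Reverse lookup: what PTR name does Comcast publish for that IP?
    dig +short -x 173.13.167.209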
2007 May 12
3
zfs and jbod-storage
Hi. I'm managing an HDS storage system which is slightly larger than 100 TB, of which we have used approx. 3/4. We use VxFS. The storage system is attached to Solaris 9 on SPARC via a fibre switch, and the storage is shared via NFS to our webservers. If I were to replace VxFS with ZFS, I could utilize raidz(2) instead of the built-in hardware RAID controller. Are there any JBOD-only storage
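For context, a double-parity raidz2 pool on bare JBOD disks and an NFS-shared dataset would be created roughly like this (pool, dataset, and device names are placeholders):

    # raidz2 vdev across six whole disks, no hardware RAID underneath
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

    # Dataset exported over NFS straight from ZFS
    zfs create tank/web
    zfs set sharenfs=on tank/web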
2007 Sep 04
23
I/O freeze after a disk failure
Hi all, yesterday we had a drive failure on an FC-AL JBOD with 14 drives. Suddenly the zpool using that JBOD stopped responding to I/O requests and we got tons of the following messages in /var/adm/messages: Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g20000004cfd81b9f (sd52): Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
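When a pool hangs on a failed drive like this, the usual first step is to check which device ZFS considers faulted and, if possible, offline it so I/O can resume; a sketch with illustrative pool and device names:

    # Show only pools with problems and the affected vdevs
    zpool status -x

    # Explicitly offline the dead disk so the pool stops retrying it
    zpool offline mypool c4t20000004CFD81B9Fd0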
2008 Jan 31
16
Hardware RAID vs. ZFS RAID
Hello, I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and I would like to get some feedback about which situation would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that is presented as a single volume to OpenSolaris, or using the drives separately and creating the RAID0 with OpenSolaris and ZFS? Or
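The ZFS-side equivalent of that stripe is simply a pool with both disks as separate top-level vdevs (pool and device names are placeholders):

    # Two-disk stripe managed by ZFS instead of the PERC
    zpool create fastpool c1t0d0 c1t1d0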
2018 Apr 09
0
JBOD / ZFS / Flash backed
> Is a flash-backed RAID required for JBOD, and should it be 1GB, 2GB, or 4GB flash? Is anyone able to clarify this requirement for me?
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC. I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system, which is intended to be used as a backup SAN during storage migration, so it's built on a tight budget. The system currently has 4GB RAM, a 3GHz Core 2 Quad, and 8x 500GB WD REII SATA HDDs attached to an 8-port Areca ARC-1220 controller
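Attaching the SSDs as a separate intent log and as a read cache is done against an existing pool; a sketch with placeholder pool and device names:

    # One SSD as dedicated ZIL (slog) for synchronous writes
    zpool add tank log c2t0d0

    # A second SSD as L2ARC read cache
    zpool add tank cache c2t1d0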
2009 Nov 17
13
ZFS storage server hardware
Hi, I know (from the zfs-discuss archives and other places [1,2,3,4]) that a lot of people are looking to use ZFS as a storage server in the 10-100TB range. I'm in the same boat, but I've found that hardware choice is the biggest issue. I'm struggling to find something which will work nicely under Solaris and which meets my expectations in terms of hardware.
2018 Apr 09
2
JBOD / ZFS / Flash backed
On 09/04/18 16:49, Vincent Royer wrote: > Is a flash-backed RAID required for JBOD, and should it be 1GB, 2, or 4GB flash? RAID and JBOD are completely different things. JBODs are just that, bunches of disks, and they don't have any cache above them in hardware. If you're going to use ZFS under Gluster, look at the ZFS docs first. Short answer is no.
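When ZFS sits underneath Gluster, the brick is typically just a dataset on a raidz pool built straight on the JBOD disks, with no hardware write cache involved; a minimal sketch assuming ZFS on Linux and placeholder pool/device names:

    # Double-parity pool directly on the JBOD members
    zpool create brickpool raidz2 sda sdb sdc sdd sde sdf

    # Dataset that will back the Gluster brick
    zfs create -o mountpoint=/bricks/brick1 brickpool/brick1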
2010 Jan 16
95
Best 1.5TB drives for consumer RAID?
Which consumer-priced 1.5TB drives do people currently recommend? I have had zero read/write/checksum errors in 2 years with my trusty old Western Digital WD7500AAKS drives, but now I want to upgrade to a new set of drives that are big, reliable, and cheap. As of Jan 2010 it seems the price sweet spot is the 1.5TB drives. As I had a lot of success with Western Digital drives I thought I would
2007 Oct 20
4
Distributed ZFS
Hi Ged, at the moment ZFS is neither a shared file system nor a parallel file system. However, Lustre integration, which will take some time, will provide parallel file system abilities. I am unsure whether Lustre currently supports redundancy between storage nodes (it was on the roadmap). But ZFS at the moment supports Sun Cluster 3.2 (no parallel access is supported) and new upcoming SAS JBODs
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all, sorry if it's kind of off-topic for the list, but after talking to lots of vendors I'm running out of ideas... We are looking for JBOD systems which (1) hold 20+ 3.5" SATA drives, (2) are rack mountable, (3) have all the nice hot-swap stuff, and (4) allow 2 hosts to connect via SAS (4+ lanes per host) and see all available drives as disks, no RAID volume. In a
2008 Apr 12
5
Newbie question: ZFS on Xserve RAID with Solaris 10
Apologies if you get two copies of this message - it was submitted for moderation and hasn't appeared on the list in two days, so I'm resubmitting. Hi all, just a quick question: is it possible to utilise an Apple Xserve RAID as an array for use with ZFS with RAID-Z in Solaris? I've seen various mentions of 'zfs' and 'xserve
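If the Xserve RAID presents its disks (or hardware LUNs) individually to Solaris, building a RAID-Z vdev over them works the same as over local disks; a sketch with placeholder controller/target names:

    # Single-parity RAID-Z across four LUNs exported by the array
    zpool create xraid raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0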
2012 Jul 02
14
HP Proliant DL360 G7
Hello, has anyone out there been able to qualify the ProLiant DL360 G7 for your Solaris/OI/Nexenta environments? Any pros/cons/gotchas (vs. previous-generation HP servers) would be greatly appreciated. Thanks in advance! -Anh
2012 Jun 17
26
Recommendation for home NAS external JBOD
Hi, my oi151-based home NAS is approaching a frightening "drive space" level. Right now the data volume is a 4*1TB RAID-Z1: 3.5" local disks individually connected to an 8-port LSI 6Gbit controller. So I can either exchange the disks one by one with autoexpand, use 2-4 TB disks, and be happy; this was my original approach. However, I am totally unclear about the 512B vs 4KB sector issue.
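The disk-by-disk upgrade path hinges on the autoexpand property, and the 512B/4KB question comes down to the pool's ashift; roughly, with placeholder pool and device names (the -o ashift syntax is the ZFS on Linux / current OpenZFS form):

    # Let the vdev grow once every member has been replaced with a larger disk
    zpool set autoexpand=on tank
    zpool replace tank c2t0d0 c2t6d0

    # For a new pool on 4K-sector drives, force 4K alignment up front
    zpool create -o ashift=12 bigtank raidz1 disk1 disk2 disk3 disk4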
2018 Apr 09
0
JBOD / ZFS / Flash backed
Yes, the flash-backed RAID cards use a supercapacitor to back up the flash cache. You have a choice of flash module sizes to include on the card. The card supports RAID modes as well as JBOD. I do not know if Gluster can make use of battery-backed, flash-based cache when the disks are presented by the RAID card in JBOD. The hardware vendor asked, "Do you know if Gluster makes use of
2006 Jul 28
20
3510 JBOD ZFS vs 3510 HW RAID
Hi there, is it fair to compare the two solutions using Solaris 10 U2 and a commercial database (SAP SD scenario)? The cache on the HW RAID helps, and the CPU load is lower... but the solution costs more, and you _might_ not need the performance of the HW RAID. Has anybody with access to these units done a benchmark comparing the performance and, with the price list in hand, come to a conclusion?
2018 Apr 09
2
JBOD / ZFS / Flash backed
Your question is difficult to parse. Typically RAID and JBOD are mutually exclusive. By "flash-backed", do you mean a battery backup unit (BBU) on your RAID controller? On Mon, Apr 9, 2018 at 8:49 AM, Vincent Royer <vincent at epicenergy.ca> wrote: > Is a flash-backed RAID required for JBOD, and should it be 1GB, 2, or 4GB flash? > Is anyone
2009 Jan 30
35
j4200 drive carriers
Apparently if you don't order a J4200 with drives, you just get filler sleds that won't accept a hard drive. (I had to look at a parts breakdown on SunSolve to figure this out -- the docs should simply make this clear.) It looks like the sled that will accept a drive is part #570-1182. Anyone know how I could order 12 of these?
2018 Apr 04
5
JBOD / ZFS / Flash backed
Hi, trying to make the most of a limited budget. I need fast I/O for operations under 4MB and high availability of VMs in an oVirt cluster. I have 3 nodes running oVirt and want to rebuild them with hardware for converged storage. Should I use two 960GB SSDs in RAID1 in each node, replica 3? Or can I get away with one larger SSD per node, JBOD, replica 3? Is a flash-backed RAID required for
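For the replica-3 layout being weighed here, the Gluster volume is assembled from one brick per node; host names and brick paths below are made up for illustration:

    # Run on one node once the brick directories exist on all three
    gluster volume create vmstore replica 3 \
        node1:/bricks/ssd/vmstore node2:/bricks/ssd/vmstore node3:/bricks/ssd/vmstore
    gluster volume start vmstore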
2008 Nov 17
14
Storage 7000
I'm not sure if this is the right place for the question or not, but I'll throw it out there anyway. Does anyone know: if you create your pool(s) with a system running Fishworks, can that pool later be imported by a standard Solaris system? I.e., if for some reason the head running Fishworks were to go away, could I attach the JBOD/disks to a system running snv/mainline
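Mechanically, moving the disks to another head and importing the pool looks like this (pool name is a placeholder; whether that Solaris build understands the appliance's pool version is the real question):

    # Scan attached devices for importable pools
    zpool import

    # Import by name, forcing if the pool was never cleanly exported by the old head
    zpool import -f mypool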