similar to: More than 2TB RAID...

Displaying 20 results from an estimated 4000 matches similar to: "More than 2TB RAID..."

2009 Aug 21
3
p800 and HP
I was wondering if anyone here has experience with the HP MSA60 with P400 and P800 controllers. How reliable are they for a 24x7 shop? TIA
2015 Oct 07
2
OT hardware issue: HP controller to 3rd party RAID
On 10/07/15 11:13, m.roth at 5-cent.us wrote: > Jack Bailey wrote: > >>> controller that expects to talk to individual SAS or SATA drives. you >>> can manage it with hpssacli from centos. >> I have the P822. It has no JBOD or RAID 0. hpacucli works with CentOS >> 7.0, but is broken on 7.1 -- it cannot find the controller. > Can it do RAID 6? > This page
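For reference, Smart Array controllers in this class are normally driven entirely from hpssacli/hpacucli; a rough sketch of checking the controller and building a RAID 6 logical drive might look like the following (the slot number and drive bay addresses are placeholders, not taken from the thread):

    # hedged sketch: slot number and drive addresses are examples only
    hpssacli ctrl all show config detail      # list controllers, arrays, physical drives
    hpssacli ctrl slot=1 create type=ld \
        drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4,1I:1:5 raid=6
    hpssacli ctrl slot=1 ld all show          # verify the new logical drive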
2015 Oct 07
3
OT hardware issue: HP controller to 3rd party RAID
On 10/07/15 10:06, John R Pierce wrote: > On 10/7/2015 8:42 AM, m.roth at 5-cent.us wrote: >> Got an old HP box with a P800 Smart Array controller. The HP RAID >> box >> plugged into it's failing, and we got a new JetStor. Anyone know if we >> can just plug the JetStor in and set it to passthrough, or if we have >> to use the P800's firmware to set up
2015 Oct 07
4
OT hardware issue: HP controller to 3rd party RAID
Hi, folks, Got an old HP box with a P800 Smart Array controller. The HP RAID box plugged into it is failing, and we got a new JetStor. Anyone know if we can just plug the JetStor in and set it to passthrough, or if we have to use the P800's firmware to set up the RAID, or other gotchas? I *think* we could use the RAID box's firmware to build the RAID, but last resort would be
2014 Aug 21
3
HP ProLiant DL380 G5
I have CentOS 6.x installed on an "HP ProLiant DL380 G5" server. It has eight 750GB drives in a hardware RAID6 array. It's acting as a host for a number of OpenVZ containers. Seems like every time I reboot this server, which is not very often, it sits for hours running a disk check or something on boot. The server is located 200+ miles away, so it's not very convenient to look at. Is
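If the long delay is the periodic ext3/ext4 filesystem check rather than a controller task, one common workaround is to disable the mount-count and interval based checks; a minimal sketch, assuming the filesystem lives on the usual cciss logical drive (the device path is an assumption):

    # assumes an ext3/ext4 filesystem on the Smart Array logical drive;
    # adjust the device path to match the actual system
    tune2fs -l /dev/cciss/c0d0p1 | grep -i 'mount count\|check'   # current settings
    tune2fs -c 0 -i 0 /dev/cciss/c0d0p1                           # disable periodic fsck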
2008 Dec 24
6
Bug when using /dev/cciss/c0d2 as mdt/ost
I am trying to build lustre-1.6.6 against the pre-patched kernel downloaded from Sun. But as written in the Operations Manual, it creates RPMs for 2.6.18-92.1.10.el5_lustrecustom. Is there a way to ask it not to append "custom" as the extraversion? The running kernel is 2.6.18-92.1.10.el5_lustre.1.6.6smp. -- Regards -- Rishi Pathak National PARAM Supercomputing Facility Center for Development of Advanced
2012 Sep 13
5
Partition large disk
Hi, I have a 24TB RAID6 disk with a GPT partition table on it. I need to partition it into 2 partitions, one of 16TB and one of 8TB, to put ext4 filesystems on both. But I really need to do this remotely. (If I could get to the site I could use gparted.) Now fdisk doesn't understand GPT partition tables and pat
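parted handles GPT labels and works non-interactively over ssh; a minimal sketch, assuming the array shows up as /dev/sdb (device name and mount of the existing GPT label are assumptions):

    # assumes the 24TB array is /dev/sdb and already carries a GPT label
    parted -s /dev/sdb unit TB print                # confirm label type and size
    parted -s /dev/sdb mkpart primary 0% 16TB       # first partition, ~16TB
    parted -s /dev/sdb mkpart primary 16TB 100%     # second partition, ~8TB
    mkfs.ext4 /dev/sdb1 && mkfs.ext4 /dev/sdb2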
2013 Oct 14
1
Many questions from a potential btrfs user
Hi. I am seriously considering employing btrfs on my systems, particularly due to some space-saving features that it has (namely, deduplication and compression). In fact, I was (a few moments ago) trying to back up some of my systems to a 2TB HD that has an ext4 filesystem and, in the middle of the last one, I got the error message that the backup HD was full. Given that what I back up there are
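For what it's worth, btrfs compression is just a mount option and only affects data written after mounting; a hedged sketch (the device and mount point are assumptions, not from the post):

    # assumes the backup disk is /dev/sdb1 and already formatted as btrfs
    mount -o compress=lzo /dev/sdb1 /mnt/backup
    btrfs filesystem df /mnt/backup    # data vs. metadata usage, useful on "disk full"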
2004 Jan 07
5
Client for P800/P900
Hi Guys, is there a client which can be used on the Sony Ericsson P800/P900...? IAX would be cool, but I'd take anything that can connect (via Bluetooth) to an Asterisk server ;-). The phone is Symbian, and can also execute Java stuff... Greez Andreas
2012 May 23
1
pvcreate limitations on big disks?
OK folks, I'm back at it again. Instead of taking my J4400 (24 x 1T disks) and making a big RAID60 out of it, which Linux cannot make a filesystem on, I created 4 x RAID6, each of which is 3.64T. I then do: sfdisk /dev/sd{b,c,d,e} <<EOF ,,8e EOF to make a big LVM partition on each one. But then when I do: pvcreate /dev/sd{b,c,d,e}1 and then pvdisplay, it shows each one as
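One likely culprit is that sfdisk writes an msdos (MBR) label, which caps a partition at 2TiB, so LVM only sees part of each 3.64T array; a sketch of the same setup with GPT labels (device names copied from the post, everything else an assumption):

    # GPT instead of MBR so the full 3.64T per array is visible to LVM
    for d in /dev/sd{b,c,d,e}; do
        parted -s "$d" mklabel gpt mkpart primary 0% 100% set 1 lvm on
    done
    pvcreate /dev/sd{b,c,d,e}1
    pvdisplay | grep 'PV Size'     # should now report the full size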
2009 Sep 24
4
mdadm size issues
Hi, I am trying to create a 10-drive RAID6 array. OS is CentOS 5.3 (64-bit). All 10 drives are 2T in size. Devices sd{a,b,c,d,e,f} are on my motherboard; devices sd{i,j,k,l} are on a PCI Express Areca card (relevant lspci info below). #lspci 06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller The controller is set to JBOD the drives. All
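A minimal sketch of the array creation (member names copied from the post; using whole devices and the metadata version are assumptions). With 2T members, the old 0.90 superblock's ~2TB per-device limit is worth avoiding, hence forcing 1.x metadata:

    # 10 x 2T drives in RAID6; metadata 1.2 avoids the 0.90 superblock size limit
    mdadm --create /dev/md0 --level=6 --raid-devices=10 --metadata=1.2 \
        /dev/sd[abcdef] /dev/sd[ijkl]
    cat /proc/mdstat    # watch the initial resync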
2011 Apr 12
17
40TB File System Recommendations
Hello All, I have a brand spanking new 40TB hardware RAID6 array to play around with. I am looking for recommendations for which filesystem to use. I am trying not to break this up into multiple file systems, as we are going to use it for backups. Other factors are performance and reliability. CentOS 5.6; the array is /dev/sdb. So here is what I have tried so far: reiserfs is limited to 16TB; ext4
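On CentOS 5 the usual answer for a single >16TB backup filesystem is XFS, since the ext4 userspace tools there cannot create filesystems above 16TB; a hedged sketch, assuming a single partition on /dev/sdb and an available xfsprogs package:

    parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%
    mkfs.xfs /dev/sdb1
    mount -o inode64 /dev/sdb1 /backup   # inode64 lets inodes live beyond the first 1TB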
2020 Jun 06
3
Change in package.skeleton behavior from R 3.6.3 to R 4.0.0 ?
The Rcpp package and some related packages such as RcppArmadillo make use of (local) wrappers around the utils::package.skeleton() function for creating (basic yet functional) packages using Rcpp or RcppArmadillo. RStudio also exposes this under the graphical menu as a nice way to construct a package. But it seems that something changed quite recently in R. I looked into this a little yesterday
2006 Jul 19
3
create very large file system
SUSE Linux Enterprise Server 9 SP3. I've tried to create a large 5TB file system using both reiserfs and ext3, and both have failed. I end up with only a 1.5TB file system. Does anyone know why this doesn't work, or what to do to fix it? Others have suggested that only XFS or JFS will work. Is this so? Thanks, -Mark
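Before blaming the filesystem it is worth checking what size the kernel and the partition table actually report; a couple of hedged checks, assuming the device is /dev/sdb (msdos/MBR labels top out at 2TiB, a common cause of silently truncated sizes on older distributions):

    blockdev --getsize64 /dev/sdb   # raw device size in bytes, as the kernel sees it
    parted /dev/sdb print           # label type (msdos vs gpt) and partition sizes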
2015 Oct 07
0
OT hardware issue: HP controller to 3rd party RAID
On 10/7/2015 11:47 AM, Jack Bailey wrote: > On 10/07/15 11:13, m.roth at 5-cent.us wrote: >> Jack Bailey wrote: >> >>>> controller that expects to talk to individual SAS or SATA drives. you >>>> can manage it with hpssacli from centos. >>> I have the P822. It has no JBOD or RAID 0. hpacucli works with CentOS >>> 7.0, but is broken on 7.1
2010 Jan 25
3
Debian Lenny - Samba 3.2.5 + OpenLDAP (slapd) 2.4.11
I have a serious problem. I have for some time now tried to get a Samba-based domain controller working. I have tried with OpenLDAP and tdbsam as the backend, but I get the same error every time. I would prefer to use LDAP as my backend. I have read tons of Samba + LDAP how-tos, but none of them seems to work for me. Is there someone who can maybe see what I have done wrong in my config? I have
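For comparison, the smb.conf side of a Samba 3.x + OpenLDAP PDC usually boils down to a handful of passdb/ldap directives; a hedged sketch (suffix, admin DN and server address are placeholders, not taken from the poster's setup):

    [global]
        workgroup = EXAMPLE
        security = user
        domain logons = yes
        domain master = yes
        passdb backend = ldapsam:ldap://127.0.0.1
        ldap suffix = dc=example,dc=com
        ldap admin dn = cn=admin,dc=example,dc=com
        ldap user suffix = ou=Users
        ldap group suffix = ou=Groups
        ldap machine suffix = ou=Computers
        ldap ssl = off

The ldap admin dn password also has to be stored in secrets.tdb with "smbpasswd -w", otherwise Samba cannot bind to the directory.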
2015 Jul 10
2
OT, hardware: HP smart array drive issue
Jason Warr wrote: > On July 10, 2015 11:47:09 AM CDT, m.roth at 5-cent.us wrote: >> Hi. Anyone working with these things? I've got a drive in "predictive >> failure" in a RAID5. Now here's the thing: there was an issue >> yesterday when I got in, and I wound up power cycling the RAID; >> first boot of attached server had issues, and said the
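The drive and array state on these controllers can be read back from the OS with hpacucli; a hedged sketch (the slot number is a placeholder):

    hpacucli ctrl all show status                        # controller / cache / battery state
    hpacucli ctrl slot=0 physicaldrive all show status   # per-drive state, incl. predictive failure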
2017 Jan 12
4
Network Storage
Hello, I have looked into the various network-attached storage devices and software-based solutions. Can't really find one I like. Would it be possible to add a couple of HDs to my existing 6.8 server and set them up as RAID drives? If so, how would I keep it from mirroring the system drive? Just spent $2500 on a transmission for my truck, so I'm broke right now and need to go the
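Linux software RAID only touches the devices you name, so the system drive is left alone simply by not listing it; a minimal sketch, assuming the two new disks come up as /dev/sdb and /dev/sdc:

    # only the named devices become RAID members; /dev/sda (system disk) is untouched
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    mdadm --detail --scan >> /etc/mdadm.conf   # so the array assembles on boot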
2013 Dec 09
3
Gluster infrastructure question
Heyho guys, I've been running glusterfs for years in a small environment without big problems. Now I'm going to use GlusterFS for a bigger cluster, but I have some questions :) Environment: * 4 servers * 20 x 2TB HDD each * RAID controller * RAID 10 * 4x bricks => replicated, distributed volume * Gluster 3.4 1) I'm wondering whether I can
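For the 4-server layout described, a distributed-replicated volume in Gluster 3.4 is created by listing the bricks in replica order; a hedged sketch with made-up hostnames and brick paths:

    # replica 2 across 4 servers => 2 replica pairs, data distributed over both pairs
    gluster volume create gv0 replica 2 \
        srv1:/bricks/b1 srv2:/bricks/b1 \
        srv3:/bricks/b1 srv4:/bricks/b1
    gluster volume start gv0
    gluster volume info gv0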
2013 Jan 30
9
Poor performance of btrfs. Suspected unidentified btrfs housekeeping process which writes a lot
Welcome, I've been using btrfs for over 3 months to store my personal data on my NAS server. Almost all interactions with files on the server are done using the unison synchronizer. After another run of bedup (https://github.com/g2p/bedup) on my btrfs volume I experienced a huge performance loss with synchronization. What used to take only 15 minutes now takes over 3 hours! File