similar to: Network Storage

Displaying 20 results from an estimated 9000 matches similar to: "Network Storage"

2017 Jan 12
0
Network Storage
On 1/12/2017 1:55 PM, TE Dukes wrote: > Would it be possible to add a couple HDs to my existing 6.8 server and set > them up as RAID drives? assuming your server has room for more drives and sata or sas ports, sure. physically add the disks, use mdraid to put them in a mirror or raid5/6 or whatever, use vgcreate to define a new volume group from that raid, then use lvcreate to create
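A minimal sketch of that sequence, assuming CentOS 6 userland and that the two new disks appear as /dev/sdc and /dev/sdd (hypothetical names):

    # mirror the two new disks with mdraid
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
    # layer LVM on top of the new array
    pvcreate /dev/md1
    vgcreate vg_storage /dev/md1
    lvcreate -n lv_data -l 100%FREE vg_storage
    mkfs.ext4 /dev/vg_storage/lv_data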
2010 Jan 25
24
Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard
My current home fileserver (running Open Solaris 111b and ZFS) has an ASUS M2N-SLI DELUXE motherboard. This has 6 SATA connections, which are currently all in use (mirrored pair of 80GB for system zfs pool, two mirrors of 400GB both in my data pool). I've got two more hot-swap drive bays. And I'm getting up towards 90% full on the data pool. So, it's time to expand,
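On OpenSolaris the usual way to grow a ZFS pool is to add another mirror vdev once the new drives are in the spare bays; a sketch, assuming the pool is named data and the new drives show up as c2t6d0/c2t7d0 (hypothetical device names):

    # add a third mirrored pair to the existing pool
    zpool add data mirror c2t6d0 c2t7d0
    zpool status data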
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck everything. But I cannot. The setup consists of 2 hds, carrying 3 raid1 (ext3) file systems (boot, /, swap). OS is up-to-date CentOS 5. So I boot from CentOS 5.3 dvd in rescue mode, do not mount the file systems, and try to run fsck -y /dev/md0 fsck -y /dev/md1 fsck -y /dev/md2 For each try I get an error message:
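In rescue mode the md arrays are often simply not assembled yet, so the /dev/md* nodes don't exist; a sketch of the usual fix, assuming standard CentOS 5 tooling:

    # assemble all arrays found by scanning member superblocks
    mdadm --assemble --scan
    cat /proc/mdstat      # confirm md0/md1/md2 are active
    fsck -y /dev/md0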
2017 Jan 12
0
Network Storage
TE Dukes wrote: > > I have looked into the various network attached storage devices and > software based solutions. > > Can't really find one I like. > > Would it be possible to add a couple HDs to my existing 6.8 server and set > them up as RAID drives? If so, how would I keep it from mirroring the > system drive? Of course, and you need to read up on RAID a bit.
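mdraid only touches the devices you explicitly list, so the system drive stays out of it; a sketch, assuming the two new disks are /dev/sdb and /dev/sdc (hypothetical names):

    # only sdb and sdc become mirror members; sda (the system drive) is untouched
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc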
2017 Jan 05
2
[OT] Network Attached Storage
> -----Original Message----- > From: CentOS [mailto:centos-bounces at centos.org] On Behalf Of John R > Pierce > Sent: Tuesday, January 3, 2017 1:50 PM > To: centos at centos.org > Subject: Re: [CentOS] [OT] Network Attached Storage > > I've been using a HP Microserver for the last couple years as my home file > server, with FreeNAS, and 4x3TB drives. > >
2016 Mar 01
10
Any experiences with newer WD Red drives?
Might be slightly OT as it isn't necessarily a CentOS related issue. I've been using WD Reds as mdraid components which worked pretty well for non-IOPS intensive workloads. However, the latest C7 server I built ran into problems with them on an Intel C236 board (SuperMicro X11SSH) with tons of "ata bus error write fpdma queued". Googling on it threw up old suggestions to
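Those old suggestions generally boil down to ruling out NCQ; a sketch of how that is commonly tested, using the stock libata kernel parameter (the port ID 3.00 is a hypothetical example):

    # on the kernel command line: disable NCQ for one port...
    libata.force=3.00:noncq
    # ...or for all ports
    libata.force=noncq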
2011 Aug 15
1
SAS storage arrays, C6, and SES lights
So I'm curious how SAS JBOD arrays and linux MDraid as implemented in CentOS6, and SES (SCSI/SAS Enclosure Services) backplane controllers 'get along' and how much configuration is needed to get the warning lights to work properly. scenario: whitebox server with a SAS backplane or two, daisy chained on a SAS HBA (like an LSI Logic 2008), and disks organized as several raid5/6
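The usual glue between mdraid and SES slot LEDs on CentOS is the ledmon package; a sketch, assuming an SES-capable backplane and a hypothetical member disk /dev/sdc:

    yum install ledmon
    ledctl locate=/dev/sdc       # blink the locate LED on that drive's slot
    ledctl locate_off=/dev/sdc
    ledmon                       # daemon that drives fault LEDs from md state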
2017 Feb 15
1
RAID questions
> -----Original Message----- > From: CentOS [mailto:centos-bounces at centos.org] On Behalf Of John R > Pierce > Sent: Tuesday, February 14, 2017 8:13 PM > To: centos at centos.org > Subject: Re: [CentOS] RAID questions > > On 2/14/2017 5:08 PM, Digimer wrote: > > Note; If you're mirroring /boot, you may need to run grub install on > > both disks to ensure
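A sketch of that step on CentOS 7 with BIOS boot, assuming the mirror members are /dev/sda and /dev/sdb (hypothetical names):

    # put the boot loader on both members so either disk can boot alone
    grub2-install /dev/sda
    grub2-install /dev/sdb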
2017 Feb 15
3
RAID questions
On 14/02/17 07:58 PM, John R Pierce wrote: > On 2/14/2017 4:48 PM, tdukes at palmettoshopper.com wrote: >> 1- Better to go with a hardware RAID (mainboard supported) or software? > > I would only use hardware raid if it's a card with battery (or > supercap+flash) backed writeback cache, such as a megaraid, areca, etc. > otherwise I would use mdraid mirroring. > >
2017 Feb 15
3
RAID questions
Hello, Just a couple questions regarding RAID. Here's the situation. I bought a 4TB drive before I upgraded from 6.8 to 7.3. I'm not so far into this that I can't start over. I wanted disk space to back up 3 other machines. I way overestimated what I needed for full, incremental and image backups with UrBackup. I've used less than 1TB so far. I would like to add an additional drive
2010 Nov 12
4
Opinion on best way to use network storage
I need the community's opinion on the best way to use my storage SAN to host xen images. The SAN itself is running iSCSI and NFS. My goal is to keep all my xen images on the SAN device, and to be able to easily move images from one host to another as needed while minimizing storage requirements and maximizing performance. What I see are my options: 1) Export a directory through NFS.
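A sketch of option 1, assuming a hypothetical export named san:/vol/xen and classic xm-style domU configs:

    # on each xen host, mount the shared image store
    mount -t nfs san:/vol/xen /var/lib/xen/images
    # a domU then references a file-backed image on the share, e.g.:
    # disk = [ 'file:/var/lib/xen/images/guest1.img,xvda,w' ]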
2009 Oct 06
1
Disc layout advice
I have a DAS w/ 6 750GB and 6 1TB discs I am setting up using Linux raid. The controller is a POS so each disc is exported as an R0 single volume. I used a parted and fdisk script to create 1 max size partition labeled as Linux Raid Autodetect and created the first r6 array w/ mdadm. I normally create and mark partitions as raid or lvm so people know what's going on. The first md array is
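A sketch of the first array's creation, assuming the six 750GB partitions come out as /dev/sdb1 through /dev/sdg1 (hypothetical names):

    # raid6 across the six 750GB members
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[bcdefg]1
    # record the array so it assembles at boot
    mdadm --detail --scan >> /etc/mdadm.conf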
2015 Oct 07
3
Software RAID1 Drives
I have 3 4TB WD drives I want to put in a RAID1 array. Two WD4000FYYZ and one WD4000F9YZ, all enterprise class, but two are WD Re and one is WD Se. I ordered the first two thinking 2 drives in the raid array would be sufficient, but later decided it's a long drive to the server, so I would rather have 3 drives; in ordering a third I accidentally did not get the EXACT same thing. Would there be ANY
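mdraid does not care that the models differ, so a 3-way mirror is straightforward; a sketch, assuming the drives are /dev/sdb, /dev/sdc and /dev/sdd (hypothetical names):

    # 3-way RAID1: the array survives the loss of any two members
    mdadm --create /dev/md0 --level=1 --raid-devices=3 \
        /dev/sdb /dev/sdc /dev/sdd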
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present. This prevents starting arrays in a degraded state. The second mdadm call (after LVM is scanned) scans as-yet-unused devices and attempts to run all found arrays even if they are in a degraded state. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark
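A sketch of the two-pass scheme, using mdadm's incremental mode as documented in mdadm(8) (the member device name is hypothetical):

    # pass 1: add devices as they appear, but refuse to start degraded arrays
    mdadm -v --incremental --no-degraded /dev/sda1
    # pass 2, after the LVM scan: try the rest and run arrays even if degraded
    mdadm -v --incremental --run --scan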
2016 Oct 12
5
Backup Suggestion on C7
Hi list, I'm building a backup server for 3 hosts (1 workstation, 2 servers). I will use bacula to perform backups. The backup is performed on disks (2 x 3TB on mdraid mirror) and for each host I've created a logical volume of a suitable size. These 3 hosts have different data sizes and different disk change rates. Each host must have a limited-size resource and a reserved space. If a
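A sketch of the per-host layout, assuming a hypothetical volume group vg_backup on the md mirror and made-up sizes:

    # one fixed-size LV per host caps the space its backups can consume
    lvcreate -n lv_workstation -L 500G vg_backup
    lvcreate -n lv_server1     -L 1T   vg_backup
    lvcreate -n lv_server2     -L 1T   vg_backup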
2008 Jun 17
3
LSI SAS SATA card and MB compatibility questions?
Hello, I am new to open solaris and am trying to setup a ZFS based storage solution. I am looking at setting up a system with the following specs: Intel BOXDG33FBC Intel Core 2 Duo 2.66GHz 2 or 4 GB ram For the drives I am looking at using an LSI SAS3081E-R I've been reading around and it sounds like LSI solutions work well in terms of compatibility with solaris. Could someone help
2006 Dec 06
5
LVM & volume groups
Can anybody tell me if it makes a difference if domUs have separate LVM volume groups? For instance, the Xen User Manual ( http://tx.downloads.xensource.com/downloads/docs/user/#SECTION03330000000000000000) says, when setting up a domU's disks with LVM, to do a vgcreate vg /dev/sda10 Should each domU have its own volume group, or can all the domUs share
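Either layout works, since individual LVs (not VGs) are what get handed to domUs; one shared VG is the common choice. A sketch, assuming a hypothetical VG named vg_xen:

    vgcreate vg_xen /dev/sda10
    lvcreate -n domu1-disk -L 20G vg_xen
    # the domU config then references the LV directly, e.g.:
    # disk = [ 'phy:/dev/vg_xen/domu1-disk,xvda,w' ]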
2023 May 19
3
[libguestfs PATCH 0/3] test "/dev/mapper/VG-LV" with "--key"
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2168506 This small set covers the new /dev/mapper/VG-LV "--key" ID format in the libguestfs LUKS-on-LVM inspection test. Thanks, Laszlo Laszlo Ersek (3): update common submodule LUKS-on-LVM inspection test: rename VGs and LVs LUKS-on-LVM inspection test: test /dev/mapper/VG-LV translation common
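A sketch of the ID format under test, using the documented --key ID:key:KEY_STRING syntax (the disk image and passphrase are made up):

    # address the LUKS layer by its device-mapper name instead of /dev/sdX
    guestfish --key /dev/mapper/VG-LV:key:PASSPHRASE -a disk.img run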
2019 Jan 14
2
Samba shares no longer visible
All, I came into work to find a strange problem today: The shares on a Samba server were no longer accessible. After working on it for a while, I finally turned up logging and found the following in the client connection logs: [2019/01/14 14:59:21.384622, 1] ../auth/gensec/spnego.c:1218(gensec_spnego_server_negTokenInit_step) gensec_spnego_server_negTokenInit_step: ntlmssp: parsing
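Turning up logging for a case like this is typically done in smb.conf; a sketch (the level is an arbitrary example):

    # smb.conf, [global] section
    log level = 5
    # then tell the running daemons to re-read the config
    smbcontrol all reload-config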
2023 May 19
3
[guestfs-tools PATCH 0/3] test "/dev/mapper/VG-LV" with "--key"
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2168506 This small set covers the new /dev/mapper/VG-LV "--key" ID format in the LUKS-on-LVM virt-inspector test. Thanks, Laszlo Laszlo Ersek (3): update common submodule inspector: rename VGs and LVs in LUKS-on-LVM test inspector: test /dev/mapper/VG-LV translation in LUKS-on-LVM test common