similar to: Basic questions

Displaying 20 results from an estimated 5000 matches similar to: "Basic questions"

2017 Sep 11
0
3.10.5 vs 3.12.0 huge performance loss
Here are my results. Summary: I am not able to reproduce the problem; IOW, I get relatively equivalent numbers for sequential IO when going against 3.10.5 or 3.12.0. Next steps: - Could you pass along your volfiles (the client vol file, from /var/lib/glusterd/vols/<yourvolname>/patchy.tcp-fuse.vol, and a brick vol file from the same place)? - I want to check
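For reference, collecting those volfiles looks roughly like this (a sketch; "patchy" is the volume name used above, substitute your own):

VOL=patchy
cd /var/lib/glusterd/vols/$VOL
cat ${VOL}.tcp-fuse.vol   # FUSE client volfile
ls ${VOL}.*.vol           # all volfiles; brick ones are named <vol>.<host>.<escaped-brick-path>.vol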
2017 Sep 06
0
3.10.5 vs 3.12.0 huge performance loss
On 09/06/2017 05:48 AM, Serkan Çoban wrote:
> Hi,
>
> Just did some ingestion tests on a 40-node 16+4 EC, 19 PB single volume.
> 100 clients are writing, each with 5 threads, 500 threads total.
> With 3.10.5 each server has 800 MB/s network traffic, cluster total is 32 GB/s.
> With 3.12.0 each server has 200 MB/s network traffic, cluster total is 8 GB/s.
> I did not change any volume
2017 Sep 07
2
3.10.5 vs 3.12.0 huge performance loss
It is a sequential write with 2 GB file size. The same behavior is observed with 3.11.3 too.

On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan <srangana at redhat.com> wrote:
> On 09/06/2017 05:48 AM, Serkan Çoban wrote:
>>
>> Hi,
>>
>> Just did some ingestion tests on a 40-node 16+4 EC, 19 PB single volume.
>> 100 clients are writing, each with 5 threads, 500 threads total.
2014 Apr 07
3
Software RAID10 - which two disks can fail?
Hi All. I have a server which uses RAID10 made of 4 partitions for / and boots from it. It looks like so:

mdadm -D /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Mon Apr 27 09:25:05 2009
     Raid Level : raid10
     Array Size : 973827968 (928.71 GiB 997.20 GB)
  Used Dev Size : 486913984 (464.36 GiB 498.60 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
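A sketch of how the question is usually answered for a 4-disk array (the device names and the near=2 layout are assumptions, since the snippet cuts off before the layout line): with md's default near=2 layout, RaidDevices 0+1 form one mirror pair and 2+3 the other, so any single disk can fail, and a second failure is survivable only if it hits the other pair.

mdadm -D /dev/md1 | grep -E 'Layout|RaidDevice|active'
#     Layout : near=2
#     Number   Major   Minor   RaidDevice State
#        0       8        1        0      active sync   /dev/sda1   <- pair A
#        1       8       17        1      active sync   /dev/sdb1   <- pair A
#        2       8       33        2      active sync   /dev/sdc1   <- pair B
#        3       8       49        3      active sync   /dev/sdd1   <- pair B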
2017 Aug 14
0
Is Intel Omni-Path tested/supported?
Hi, I am new to GlusterFS. I have 2 computers with Intel Omni-Path hardware/software installed. RDMA is running OK on those according to "ibv_devices" output. The GlusterFS "server" packages are installed and set up OK on both. Peer probes over TCP-over-OPA worked OK. I created a replicated volume with transport=tcp, then tested it by mounting the volume on one of the servers. Things worked
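If the goal is to exercise the Omni-Path fabric directly rather than TCP-over-OPA, the RDMA transport can be requested at volume-creation time. A sketch with hypothetical host and brick names (not from the thread):

gluster volume create gv0 replica 2 transport rdma \
    server1:/bricks/b1 server2:/bricks/b1
gluster volume start gv0
mount -t glusterfs -o transport=rdma server1:/gv0 /mnt/gv0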
2016 Jun 13
1
Slow RAID Check/high %iowait during check after updgrade from CentOS 6.5 -> CentOS 7.2
On 2016-06-01 20:07, Kelly Lesperance wrote:
> Software RAID 10. Servers are HP DL380 Gen 8s, with 12x4 TB 7200 RPM drives.
>
> On 2016-06-01, 3:52 PM, "centos-bounces at centos.org on behalf of m.roth at 5-cent.us" <centos-bounces at centos.org on behalf of m.roth at 5-cent.us> wrote:
>
> >Kelly Lesperance wrote:
> >> I did some additional testing - I
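One knob worth checking in this situation (an assumption on my part, not something suggested in the snippet): md throttles checks and resyncs between two sysctl limits, so a high floor can starve application I/O during the weekly raid-check.

sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # current limits, KB/s per device
sysctl -w dev.raid.speed_limit_max=50000                   # cap the check rate
echo 50000 > /sys/block/md0/md/sync_speed_max              # per-array equivalent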
2009 Dec 10
3
raid10, centos 4.x
I just created a 4-drive mdadm --level=raid10 array on a CentOS 4.8-ish system here, and shortly thereafter remembered I hadn't updated it in a while, so I ran yum update... While installing/updating stuff, I got these errors:

Installing: kernel ####################### [14/69]
raid level raid10 (in /proc/mdstat) not recognized
...
Installing: kernel-smp
2012 Mar 29
3
RAID-10 vs Nested (RAID-0 on 2x RAID-1s)
Greetings- I'm about to embark on a new installation of CentOS 6 x64 on 4x SATA HDDs. The plan is to use RAID-10 as a nice combination of data security (RAID1) and speed (RAID0). However, I'm finding either a lack of raw information on the topic, or I'm having a mental block preventing the implementation from sinking in. Option #1: My understanding of RAID10 using 4
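For concreteness, here is how the two candidate layouts are typically built with mdadm (a sketch; device names are hypothetical and not from the thread):

# Option 1: native md RAID10 in a single array.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Option 2: nested - two RAID1 mirrors striped together with RAID0.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2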
2013 Mar 28
1
question about replacing a drive in raid10
Hi all, I have a question about replacing a drive in RAID10 (on Linux kernel 3.8.4). A bad disk was physically removed from the server. After this, a new disk was added with "btrfs device add /dev/sdg /btrfs" to the RAID10 btrfs FS. After this the server was rebooted and I mounted the filesystem in degraded mode. It seems that a previously started balance continued. At this point I want to
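The step this usually leads to (my assumption; the snippet ends before the question itself) is removing the now-missing device so btrfs rebuilds the RAID10 redundancy onto the new disk:

mount -o degraded /dev/sdb /btrfs     # /dev/sdb here is a hypothetical surviving member
btrfs device delete missing /btrfs    # drops the absent disk and restores redundancy

# On kernels >= 3.3, a one-step alternative using the failed device's id:
btrfs replace start <failed-devid> /dev/sdg /btrfs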
2011 Jul 22
0
Strange problem with LVM, device-mapper, and software RAID...
Running on an up-to-date CentOS 5.6 x86_64 machine:

[heller at ravel ~]$ uname -a
Linux ravel.60villagedrive 2.6.18-238.19.1.el5 #1 SMP Fri Jul 15 07:31:24 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

with a TYAN Computer Corp S4881 motherboard, which has an nVidia 4-channel SATA controller. It also has a Marvell Technology Group Ltd. 88SX7042 PCI-e 4-port SATA-II (rev 02). This machine has a 120G
2017 Sep 12
0
3.10.5 vs 3.12.0 huge performance loss
Serkan, Will it be possible to provide "gluster volume profile <volname> info" output with 3.10.5 vs 3.12.0? That should give us clues about what could be happening.

On Tue, Sep 12, 2017 at 1:51 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi,
> Servers are in production with 3.10.5, so I cannot provide 3.12
> related information anymore.
> Thanks for help,
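For anyone following along, profiling is a start/collect/stop cycle in the gluster CLI (standard commands, with <volname> as above):

gluster volume profile <volname> start
# ... run the workload under test ...
gluster volume profile <volname> info    # per-brick latency and fop statistics
gluster volume profile <volname> stop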
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options? Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).

mdadm.conf:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3

/proc/mdstat:

Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
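The usual recovery sequence for this situation looks like the following sketch (the replacement disk /dev/sdg and the healthy member /dev/sde are hypothetical; sdf1 is the failed member shown in /proc/mdstat above):

mdadm /dev/md127 --remove /dev/sdf1   # remove the member already marked (F)
# ...swap the physical drive, then partition it like an existing member...
sfdisk -d /dev/sde | sfdisk /dev/sdg  # copy the partition table
mdadm /dev/md127 --add /dev/sdg1      # kicks off the rebuild
watch cat /proc/mdstat                # monitor resync progress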
2007 May 07
5
Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was finally able to create a raid 10 device by installing the system, copying the md modules onto a floppy, and loading the raid10 module during the install. Now the problem is that I can't get it to show up in anaconda. It detects the other arrays (raid0 and raid1) fine, but the raid10 array won't show up. Looking through the logs
2011 Jun 24
1
How long should resize2fs take?
Hullo! First mail, sorry if this is the wrong place for this kind of question. I realise this is a "piece of string" type question. tl;dr version: I have resize2fs shrinking an ext4 filesystem from ~4TB to ~3TB and it's been running for ~2 days. Is this normal? strace shows lots of:

lseek(3, 42978250752, SEEK_SET) = 42978250752
read(3,
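Shrinking is slow by design: every block that lives past the new boundary has to be relocated, and on a ~4 TB filesystem that means a great deal of single-threaded seeking, which matches the lseek/read pattern above. A more observable invocation looks like this sketch (the flag is standard e2fsprogs; the device name is hypothetical):

resize2fs -p /dev/mapper/vg0-data 3T   # -p prints progress through each pass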
2014 May 28
0
Failed Disk RAID10 Problems
Hi, I have a Btrfs RAID 10 (data and metadata) file system that I believe suffered a disk failure. In my attempt to replace the disk, I think that I've made the problem worse and need some help recovering it. I happened to notice a lot of errors in the journal:

end_request: I/O error, dev dm-11, sector 1549378344
BTRFS: bdev /dev/mapper/Hitachi_HDS721010KLA330_GTA040PBG71HXF1 errs: wr
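Before attempting further repair, it usually helps to establish which device is actually failing. These are standard btrfs-progs commands, offered as a sketch (the mount point and member device are hypothetical):

btrfs device stats /mnt               # per-device write/read/flush/corruption counters
btrfs filesystem show /mnt            # lists member devices and flags missing ones
mount -o degraded,ro /dev/sdX /mnt    # read-only degraded mount before any writes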
2012 Feb 06
1
Unknown KERNEL Warning in boot messages
CentOS Community, Would someone who is familiar with reading boot messages and kernel errors be able to assist with advising me on what the following errors might mean in dmesg? They seem to come up randomly towards the end of the logfile.

------------[ cut here ]------------
WARNING: at arch/x86/kernel/cpu/mtrr/generic.c:467 generic_get_mtrr+0x11e/0x140() (Not tainted)
Hardware name: empty
2013 Mar 15
0
[PATCH] btrfs-progs: mkfs: add missing raid5/6 description
Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 man/mkfs.btrfs.8.in | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/man/mkfs.btrfs.8.in b/man/mkfs.btrfs.8.in
index 41163e0..db8c57c 100644
--- a/man/mkfs.btrfs.8.in
+++ b/man/mkfs.btrfs.8.in
@@ -37,7 +37,7 @@ mkfs.btrfs uses all the available storage for the filesystem.
 .TP
 \fB\-d\fR, \fB\-\-data
2020 Sep 18
0
Drive failed in 4-drive md RAID 10
> I got the email that a drive in my 4-drive RAID10 setup failed. What are
> my options?
>
> Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
>
> mdadm.conf:
>
> # mdadm.conf written out by anaconda
> MAILADDR root
> AUTO +imsm +1.x -all
> ARRAY /dev/md/root level=raid10 num-devices=4
> UUID=942f512e:2db8dc6c:71667abc:daf408c3
>
2011 Aug 30
3
OT - small hd recommendation
A little OT - but I've seen a few opinions voiced here by various admins and I'd like to benefit. Currently running a single combined server for multiple operations - fileserver, mailserver, webserver, virtual server, and whatever else pops up. Current incarnation of the machine, after the last rebuild, is an AMD Opteron 4180 with a Supermicro MB using ATI SB700 chipset - which
2007 Apr 24
2
setting up CentOS 5 with Raid10
I would like to set up CentOS on 4 SATA hard drives configured as RAID10. I read somewhere that RAID10 support is in the latest kernel, but I can't seem to get anaconda to let me create it; I only see RAID 0, 1, 5, and 6. Even when I tried to set up RAID5 or RAID1, it would not let me put the /boot partition on it, and I thought that this was now possible. Is it
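A workaround often suggested for installers of this era (a sketch of my own, not confirmed by the thread): let anaconda handle /boot on a plain RAID1, which it does support, and assemble the RAID10 for the root filesystem by hand from a shell:

mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[abcd]1   # small 4-way /boot mirror
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2   # RAID10 for everything else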