Displaying 20 results from an estimated 2000 matches similar to: "Filesystem writes unexpectedly slow (CentOS 6.4)"
2013 Aug 21
2
fsck.ext4 Failed to optimize directory
I had a rather large ext4 partition on an Areca RAID shut down uncleanly
while it was writing. When I mount it again, it recommends fsck, which I
do, and I get the following error:
Failed to optimize directory ... EXT2 directory corrupted
This error shows up every time I run fsck.ext4 on this partition.
How can I fix this? The file system seems to work ok otherwise, I can
mount it and it
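For anyone wanting to experiment with the fsck workflow described here without risking a real array, the same commands can be exercised against a file-backed image (a sketch only; the filenames are examples, and this does not reproduce the directory-optimization error itself):

```shell
# Device-free illustration of an e2fsck run on a scratch ext4 image,
# so nothing here touches a real RAID device.
export PATH="$PATH:/sbin:/usr/sbin"   # e2fsprogs often lives in sbin
img=$(mktemp)
truncate -s 64M "$img"        # sparse backing file
mkfs.ext4 -q -F "$img"        # small scratch ext4 filesystem
e2fsck -f -y "$img"           # -f: force full check, -y: auto-answer yes
echo "e2fsck exit status: $?" # 0 = clean, 1 = errors were corrected
rm -f "$img"
```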
2013 Mar 24
5
How to make a network interface come up automatically on link up?
I have a recently installed Mellanox VPI interface in my server. This is
an InfiniBand interface, which, through the use of adapters, can also do
10GbE over fiber. I have one of the adapter's two ports configured for
10GbE in this way, with a point-to-point link to a Mac workstation with
a Myricom 10GbE card.
I've configured this interface on the Linux box (eth2) using
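On CentOS 6 that configuration lives in an ifcfg file; a minimal sketch for the eth2 device mentioned above (the address and the NM_CONTROLLED choice are assumptions, and re-activation on link up normally needs NetworkManager or a helper such as ifplugd):

```
# /etc/sysconfig/network-scripts/ifcfg-eth2  (sketch; values are placeholders)
DEVICE=eth2
ONBOOT=yes           # activate at boot
NM_CONTROLLED=yes    # NetworkManager can re-activate on carrier/link up
BOOTPROTO=none
IPADDR=192.0.2.1     # placeholder point-to-point address
NETMASK=255.255.255.252
```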
2014 Oct 14
2
CentOS 6.4 kernel panic on boot after upgrading kernel to 2.6.32-431.29.2
I'm on a Supermicro server, X9DA7 motherboard, Intel C602 chipset, 2x 2.4GHz
Intel Xeon E5-2665 8-core CPU, 96GB RAM, and I'm running CentOS 6.4.
I just tried to use yum to upgrade the kernel from 2.6.32-358 to
2.6.32-431.29.2. However, I get a kernel panic on boot. The first kernel panic I
got included stuff about acpi, so I tried adding noacpi noapic to the kernel
boot parameters,
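As an aside, the canonical spellings of those parameters are `acpi=off` and `noapic`; an illustrative grub.conf stanza showing where they would go (the root LV name is hypothetical, not taken from the post):

```
# /boot/grub/grub.conf (sketch)
title CentOS (2.6.32-431.29.2.el6.x86_64)
    root (hd0,0)
    kernel /vmlinuz-2.6.32-431.29.2.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root acpi=off noapic
    initrd /initramfs-2.6.32-431.29.2.el6.x86_64.img
```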
2013 Mar 26
1
ext4 deadlock issue
I'm having an occasional problem with a box. It's a Supermicro 16-core
Xeon, running CentOS 6.3 with kernel 2.6.32-279.el6.x86_64, 96 gigs of
RAM, and an Areca 1882ix-24 RAID controller with 24 disks, 23 in RAID6
plus a hot spare. The RAID is divided into 3 partitions, two of 25 TB
plus one for the rest.
Lately, I've noticed sporadic hangs on writing to the RAID, which
2013 Mar 23
2
"Can't find root device" with lvm root after moving drive on CentOS 6.3
I have an 8-core SuperMicro Xeon server with CentOS 6.3. The OS is
installed on a 120 GB SSD connected by SATA, the machine also contains
an Areca SAS controller with 24 drives connected. The motherboard is a
SuperMicro X9DA7.
When I installed the OS, I used the default options, which creates an
LVM volume group to contain / and /home, and keeps /boot and /boot/efi
outside the volume group.
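A typical recovery sequence for a "can't find root device" LVM failure, run from a rescue shell (volume-group and LV names below are placeholders, not taken from the post):

```
lvm vgscan                          # rescan for volume groups
lvm vgchange -ay vg_server          # activate all LVs in the group
mount /dev/vg_server/lv_root /mnt   # mount root, then chroot /mnt
# inside the chroot, rebuild the initramfs so it can find the moved root:
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
```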
2013 Apr 26
1
Why is my default DISPLAY suddenly :3.0?
I'm on CentOS 6.3. After a reboot, some proprietary software didn't want
to run. I found out that the startup script for said software manually
sets DISPLAY to :0.0, which I know is not a good idea, and I can fix.
However, this still doesn't explain why my default X DISPLAY is suddenly
:3.0.
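One quick way to see which X displays actually exist on a box: each running X server on display :n owns a socket /tmp/.X11-unix/Xn, so listing that directory shows what is live (harmless to run anywhere):

```shell
# List the X server sockets; e.g. X0 and X3 would mean displays :0 and :3
ls /tmp/.X11-unix/ 2>/dev/null
echo "current DISPLAY=${DISPLAY:-unset}"
```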
--
Joakim Ziegler - Post-production supervisor - Terminal
joakim at terminalmx.com
2013 Aug 19
1
LVM RAID0 and SSD discards/TRIM
I'm trying to work out the kinks of a proprietary, old, and clunky
application that runs on CentOS. One of its main problems is that it
writes image sequences extremely non-linearly and in several passes,
using many CPUs, so the sequences get very fragmented.
The obvious solution to this seems to be to use SSDs for its output, and
some scripts that will pick up and copy out the sequences
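The SSD discard/TRIM knobs involved are roughly these (a sketch; LV and mount-point names are placeholders, and periodic fstrim is generally preferred over mount-time discard for write-heavy loads):

```
# /etc/lvm/lvm.conf: pass discards down when LVs are removed or shrunk
#   devices { issue_discards = 1 }

mount -o discard /dev/vg_ssd/lv_scratch /mnt/scratch  # per-write TRIM (adds latency)
fstrim -v /mnt/scratch                                # batched TRIM, e.g. from cron
```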
2010 Apr 13
6
12-15 TB RAID storage recommendations
Hello listmates,
I would like to build a 12-15 TB RAID 5 data server to run under
CentOS. Any recommendations as far as hardware, configuration, etc.?
Thanks.
Boris.
2007 Feb 06
2
Samba enterprise performance?
Hi list,
Due to possible budget cuts, I am looking at finding alternatives to the Netapp
filers we currently have. Obviously, one of the key drivers is the performance
required for our specific application.
On https://www.fotoloog.org/fs.png you can see the load on our main filer. The
key question I have is: looking at that graph, do you think it is worthwhile
looking into Samba further as a
2009 Oct 21
4
Recommendation for PCI-e SATA RAID 5 card?
Hello:
I am looking for a recommendation for a PCI-e
RAID card for my server. The server has a
PCI-e x16 low profile slot so the card has
to be at most 6.6 inches long x 2.536 inches
high. I would like to use RAID 5 with 3 drives
so I have to have those capabilities.
It has to be CentOS 5.4 compatible (Of course!).
I took a look at the offerings from 3Ware, but
their cards are too long.
If
2017 Oct 27
5
Poor gluster performance on large files.
Hi gluster users,
I've spent several months trying to get any kind of high performance out of
gluster. The current XFS/samba array is used for video editing and
300-400MB/s for at least 4 clients is the minimum required (currently a
single Windows client gets at least 700/700 MB/s over Samba, peaking at 950
at times with the Blackmagic speed test). Gluster has been getting me as low
as
2010 Feb 16
3
SAS raid controllers
Is anyone running either the newish Adaptec 5805 or the new LSI (3ware) 9750
SAS RAID controllers in a production environment with CentOS 5.3/5.4?
The low price of these cards makes me suspicious, compared to the more
expensive pre-merger 3ware cards and considerably more expensive Areca
ARC-1680. I've been 'burned' by the low cost of Promise raid cards (just as
this group pointed
2017 Sep 07
2
3.10.5 vs 3.12.0 huge performance loss
It is sequential write with file size 2GB. Same behavior observed with
3.11.3 too.
On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan <srangana at redhat.com> wrote:
> On 09/06/2017 05:48 AM, Serkan Çoban wrote:
>>
>> Hi,
>>
>> Just did some ingestion tests on a 40-node, 16+4 EC, 19PB single volume.
>> 100 clients are writing, each with 5 threads, 500 threads in total.
2017 Oct 30
0
Poor gluster performance on large files.
Hi Brandon,
Can you please turn OFF client-io-threads? We have seen performance
degradation with io-threads ON for both sequential and random
reads/writes. Server event threads default to 1 and client event threads
to 2.
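The option being referred to can be toggled per volume (the volume name below is a placeholder):

```
gluster volume set myvol performance.client-io-threads off
gluster volume get myvol performance.client-io-threads   # verify the new value
```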
Thanks & Regards
On Fri, Oct 27, 2017 at 12:17 PM, Brandon Bates <brandon at brandonbates.com>
wrote:
> Hi gluster users,
> I've spent several
2013 Aug 21
1
Gluster 3.4 Samba VFS writes slow in Win 7 clients
Hello,
We have used GlusterFS 3.4 with the latest samba-glusterfs-vfs lib to test Samba performance from Windows clients.
Two GlusterFS server nodes export a share named "gvol":
Hardware:
each brick uses a RAID 5 logical disk with 8 x 2TB SATA HDDs
10G network connection
One Linux client mounts "gvol" with the command:
[root at localhost current]# mount.cifs //192.168.100.133/gvol
2017 Oct 27
0
Poor gluster performance on large files.
Why don't you set the LSI to passthrough mode and use one brick per HDD?
Regards,
Bartosz
> Message written by Brandon Bates <brandon at brandonbates.com> on 27.10.2017 at 08:47:
>
> Hi gluster users,
> I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing and 300-400MB/s for
2003 May 19
2
Illegal instruction on a new asterisk build.
Hi. I have asterisk (cvs build from tonight) running fine on a RedHat9
box, with zaptel hardware.
On a second RedHat9 box (with no zaptel hardware) I've built the
same version, apparently with no errors, but immediately upon invoking
asterisk I get: Illegal instruction.
Running under gdb, it shows that it's failing in ast_ulaw_init as shown
below:
[New Thread 16384 (LWP 1097)]
Program
2008 Jun 03
6
development machine with xen on gentoo
Hi all.
We just put a relatively large server into operation (large by our
standards ;-)): a 4x dual-core Intel with 16GB RAM and a 3ware 9550SX
SATA RAID with 4 drives. The operating system for the Dom0 is an
up-to-date Gentoo Linux.
Everything runs really fine. There are 4 DomU's running: 1x Gentoo,
2x Debian, and 1x Windows Server 2003. DomU's are running blazing fast in
normal
2010 Jan 06
16
8-15 TB storage: any recommendations?
Hello everyone,
This is not directly related to CentOS but still: we are trying to set up
some storage servers to run under Linux - most likely CentOS. The storage
volume would be in the range specified: 8-15 TB. Any recommendations as far
as hardware?
Thanks.
Boris.
2017 Nov 27
3
core_udp_sendto: no mapping
On Mon, 2017-11-27 at 18:18 -0500, Gene Cumm wrote:
>
> On Mon, Nov 27, 2017 at 6:07 PM, Joakim Tjernlund
> <Joakim.Tjernlund at infinera.com> wrote:
> > On Mon, 2017-11-27 at 18:03 -0500, Gene Cumm wrote:
> > > On Mon, Nov 27, 2017 at 2:14 PM, Gene Cumm <gene.cumm at gmail.com> wrote:
> > > > On Mon, Nov 27, 2017 at 12:07 PM, Joakim Tjernlund
>