similar to: Re: reiserfs3/ext4/btrfs RAID read performance

Displaying 20 results from an estimated 100 matches similar to: "Re: reiserfs3/ext4/btrfs RAID read performance"

2005 May 04
0
Re: ReiserFS3 Support
Along the lines of the recent discussion about ReiserFS support, I was wondering if it is compatible with SELinux? I know there are some file system support issues regarding it... Another random question: is there a way to get/use Disk Druid under a running installation? This would make adding new disks, RAIDs, LVM management, etc. a breeze compared to what it can be like sometimes
2014 Jun 24
3
How to remove LVM Physical Volume from Volume Group?
Hi. I have a volume group (let's say) vg_data. It consists of /dev/sdd5, sdd6 and sdd7. I added sdc5. Now I want to remove (free) sdd7 and use it for a RAID partition. What are the commands (in order) I need to perform? I failed to find a clear howto. vg_data has only one partition, total size is over 1TB, free space is about 500GB so
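The usual sequence for freeing a physical volume is sketched below; this is a minimal sketch assuming the group is named vg_data and has enough free extents elsewhere to absorb what currently sits on /dev/sdd7 (the thread does not show the actual output):

    pvmove /dev/sdd7              # migrate all allocated extents off sdd7
    vgreduce vg_data /dev/sdd7    # drop the now-empty PV from the volume group
    pvremove /dev/sdd7            # wipe the LVM label so the partition can be reused for RAID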
2002 Feb 28
5
Problems with ext3 fs
Hi, Apologies, this is going to be quite long - I'm going to provide as much info as possible. I'm running a system with ext3 fs on software RAID. The RAID set-up is as shown below:

jlm@nijinsky:~$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
      96256 blocks [2/2] [UU]
md5 : active raid1 hdk1[1] hde1[0]
2000 Dec 15
0
sshd daemons
Hi there, I'm having a problem with sshd daemons not shutting down after the connection is closed. The strange thing is that this is happening on both my Redhat 6.2 server and Redhat 7.0, both running OpenSSH_2.3.0p1. I'm positive that KeepAlive is set to yes! Is this a common problem? I'm suspecting that it has something to do with the client as well. Think we're all using
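A quick way to check both halves of that claim from the shell; these commands are my illustration, not from the original post:

    grep -i '^KeepAlive' /etc/ssh/sshd_config   # confirm the setting really is yes
    ps aux | grep '[s]shd'                      # one listener plus one sshd per live session
    netstat -tn | grep ':22'                    # half-closed connections holding daemons open?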
2010 Nov 01
3
btrfs benchmark with 2.6.37-rc1
Here is a small btrfs vs. ext4 benchmark with kernel 2.6.37-rc1. compilebench with options -i 10 -r 30 on 2.6.37-rc1:

btrfs
==========================================================================
intial create total runs 10 avg 73.11 MB/s (user 0.34s sys 1.96s)
create total runs 5 avg 49.53 MB/s (user 0.41s sys 1.62s)
patch total runs 4 avg 22.13 MB/s (user 0.09s sys 1.79s)
compile total runs
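For reference, an invocation matching those options would look roughly like this; the target directory is illustrative, as the post does not name it:

    # compilebench: -D = working directory, -i = initial dirs, -r = runs
    ./compilebench -D /mnt/btrfs-test -i 10 -r 30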
2003 Jan 10
1
Thread extension
Just figured I'd mention that CVS now supports the THREAD extension. I also did a bit of benchmarking using a folder with 4685 mails (evolution mailing list):

dovecot+mbox:
- 0.59s user 0.01s system 98% cpu 0.608 total
- malloc() memory usage 45072 -> 825685

dovecot+maildir:
- 0.60s user 0.17s system 98% cpu 0.780 total
- malloc() memory usage: 45003 -> 825480

Meaning it takes almost
2001 Nov 11
2
Software RAID and ext3 problem
Hi, I'm having a problem with ext3 on my system. I'm running 2.4.13 with the appropriate ext3 patch and a software RAID array with partitions as shown below:

Filesystem            Size  Used Avail Use% Mounted on
/dev/md5              939M  237M  654M  27% /
/dev/md0               91M   22M   65M  25% /boot
/dev/md6              277M  8.1M  254M   4% /tmp
/dev/md7              1.8G  1.3G
2008 Jun 21
3
[LLVMdev] llvm-gcc -O0 compile times
I've started investigating -O0 -g compile times with llvm-gcc, which are pretty important for people in development mode (e.g. all debug builds of llvm itself!). I've found some interesting things. I'm testing with mainline as of r52596 in a Release build and with checking disabled in the front end. My testcase is a large C++ source file: my friend
2003 Jan 14
2
2.4.21-pre3 - problems with ext3
Hello. Since 2.4.20, we have problems with ext3. The machine is a 2x Pentium III (1GHz), 2GB RAM, 1GB swap, RH 8.0 (glibc-2.3.1-21), gcc (GCC) 3.2 20020903. We have a lot of users:

oceanic:~# wc -l /etc/passwd
6694 /etc/passwd

connected via SAMBA (2.2.7) from 200-300 Windows-XX workstations. The partition with ext3 looks like this:

oceanic:~# mount | grep ext3
/dev/sdb5 on /home1 type ext3
2012 Jan 17
2
Transition to CentOS - RAID HELP!
Hi Folks, I've inherited an old RH7 system that I'd like to upgrade to CentOS 6.1 by wiping it clean and doing a fresh install. However, the system has a software RAID setup that I wish to keep untouched, as it has data on it that I must keep. Or at the very least, TRY to keep. If all else fails, then so be it and I'll just recreate the thing. I do plan on backing up
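Before wiping the OS, the existing array layout can be recorded so the installer can assemble the arrays rather than recreate them. A minimal sketch; the device name is illustrative, not from the original post:

    cat /proc/mdstat                 # which md devices exist and their member disks
    mdadm --detail --scan            # ARRAY lines (UUIDs, RAID levels) suitable for mdadm.conf
    mdadm --examine /dev/sdb1        # per-member superblock details
    # save this output somewhere off the machine before the reinstall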
2015 Jan 30
4
C6 server responding extremely slow on ssh interactive
On 29-01-15 at 21:21, Gordon Messmer wrote: > > I haven't seen delays anywhere near that long before, even with heavy swapping. But I guess I'd look at that sort of thing first. > > Run "iostat -x 2" and see if your disks are being fully utilized during the pauses. Run "top" and see if there's anything useful there. Check swap use with
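Pulling that advice together, a diagnostic pass during one of the pauses might look like this; the last two commands are my illustration, since the quoted sentence is cut off:

    iostat -x 2    # per-disk %util; a saturated disk points at I/O
    top            # CPU hogs, load average, iowait
    free -m        # illustrative: overall swap consumption
    vmstat 2       # illustrative: nonzero si/so columns mean active swapping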
2005 May 03
4
Compiling Kernel Modules
Hi, Here's a question - is it possible to compile a single module (distributed in the kernel source tree) for the current CentOS kernel (2.6.9-5.0.5) without recompiling the entire kernel and all other modules? I basically need reiserfs3 (nb. why is it disabled? it's a module, you use it, it doesn't wreck anything...) and I don't really want to change the rest of the kernel, and
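A minimal sketch of the single-module build being asked about, assuming the matching kernel source tree is installed and configured to match the running kernel (the paths and config step are illustrative):

    cd /usr/src/linux-2.6.9          # matching kernel source
    make oldconfig                   # start from the running kernel's config
    # set CONFIG_REISERFS_FS=m in .config, then prepare for module builds:
    make modules_prepare
    make M=fs/reiserfs modules       # build only the reiserfs module
    # copy fs/reiserfs/reiserfs.ko under /lib/modules/$(uname -r)/ and run depmod -a

The usual caveat applies: the module's version magic has to match the running kernel, so the source and config must correspond exactly to 2.6.9-5.0.5.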
2012 Nov 14
0
[LLVMdev] Using LLVM to serialize object state -- and performance
I've been profiling more; see <https://dl.dropbox.com/u/46791180/perf.png>. One thing I'm a bit confused about is why I see a FunctionPassManager there. I use a FunctionPassManager at the end of LLVM IR code generation, write the IR to disk, then read it back later. Why is apparently another FunctionPassManager being used during the JIT'ing of the IR code? And how do I
2012 Nov 13
3
[LLVMdev] Using LLVM to serialize object state -- and performance
Switching to CodeGenOpt::None reduced the execution time from 5.74s to 0.84s. By just tweaking things randomly, changing to CodeModel::Small reduced it further to 0.22s. We have some old, ugly, pure C++ code that we're trying to replace (both because it's ugly and because it's slow). Its execution time is about 0.089s, so that's the time to beat. Hence, I'd like to
2012 Nov 14
2
[LLVMdev] Using LLVM to serialize object state -- and performance
The passes run are determined by TargetMachine::addPassesToEmitMachineCode (or addPassesToEmitMC in the case of MCJIT), which is called from the JIT constructor. You can step through that to see where the passes are coming from, or you can create a custom target machine instance to control it. -Andy
2003 Oct 09
2
IPC connections and utmp
Hi, I am running Samba version 2.2.5 with utmp turned on. I have a problem with utmp not displaying who is currently logged in. The basic idea is that even though a user has logged off the computer (win2k pro), a connection to IPC$ remains. It gives this:

[root@lifesaver root]# smbstatus -u scott
Samba version 2.2.5
Service      uid      gid      pid     machine
2015 Feb 05
1
lost at 'repository' entry installing centos7
On 02/02/2015 03:15 PM, Tim wrote: > What are you exactly searching for? Sounds like he is doing a network install, and is looking for the network path that must be supplied in order to do the install. If he doesn't have a local repository, then he has to supply the first part of the path (e.g. http://..../xyz/ ) and he has to stop at the directory level above .../7/ or some such. I
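As a concrete illustration (this URL shows the usual mirror layout and is not something given in the thread), the installer wants the directory that contains the repodata/ subdirectory:

    # example CentOS 7 network-install repository path:
    # http://mirror.centos.org/centos/7/os/x86_64/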
2011 Nov 12
1
Using require_relative to speed up rspec require time.
Hi, I noticed recently that require 'rspec' on my machine was taking close to half a second. That's not a huge amount of time, but it is still the single slowest part of my test suite. It boils down to Ruby 1.9's rather slow require. I'm using 1.9.3, but I'd still like to shave off some of the require time. As an experiment, I went into
2014 Mar 02
1
No speed improvement with FTS for iOS 7?
Hi, I recompiled Dovecot with Lucene FTS to try to improve iOS 7 IMAP search speed. Unfortunately this does not seem to help. I have 60 mailboxes, totaling 300 MB; lucene-indexes is 30 MB in size.

% doveadm mailbox status -t all '*'
messages=16335 recent=0 unseen=1736 vsize=280049586

Searching for a single word which is present in two messages of one mailbox takes 40 seconds to
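For context, wiring up the Lucene back end is normally just a couple of plugin settings; this is a minimal sketch of a typical configuration, not the poster's actual one:

    # in dovecot.conf (or the split conf.d/ files on packaged installs):
    mail_plugins = $mail_plugins fts fts_lucene
    plugin {
      fts = lucene
      fts_lucene = whitespace_chars=@.
    }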