similar to: Finding which files are written to

Displaying 20 results from an estimated 900 matches similar to: "Finding which files are written to"

2011 Apr 09
3
5.6 is out, great my first upgrade didn't work...
5.6 is out. That is good news. I did a yum update on one of my non-critical servers, and the server stopped responding to ping after the reboot and never answered back. It's now been 10 minutes, so I'll have to take a ride to the colo... Nice work dev team, keep up the good work. Let's hope that C6 will come soon ! I'm eager to upgrade.
2012 Mar 09
2
iotop :: OSError: Netlink error: Invalid argument (22)
Hi! I have a problem with iotop:

root at alien: ~ # iotop
Traceback (most recent call last):
  File "/usr/bin/iotop", line 16, in ?
    main()
  File "/usr/lib/python2.4/site-packages/iotop/ui.py", line 567, in main
    main_loop()
  File "/usr/lib/python2.4/site-packages/iotop/ui.py", line 557, in <lambda>
    main_loop = lambda: run_iotop(options)
  File
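iotop talks to the kernel over the netlink taskstats interface, and a netlink "Invalid argument" error usually means the running kernel lacks the accounting options iotop needs. A quick sanity check, assuming a standard /boot/config-* file is installed:

    # iotop needs the taskstats netlink interface plus per-task I/O accounting
    uname -r
    grep -E 'CONFIG_TASKSTATS|CONFIG_TASK_DELAY_ACCT|CONFIG_TASK_IO_ACCOUNTING' \
        /boot/config-$(uname -r)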
2010 Jul 03
6
Disk performance
Hi Everyone, My Xen host has 2 x 1TB hard drives in a RAID1 setup. If one DomU starts to dd a 5GB file (I ran it in a loop as a test), ssh access on the other DomU becomes very slow. Is this normal? Many Thanks
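One way to keep a bulk dd from starving the other guest's I/O is to run it at idle I/O priority and bypass the page cache. A sketch, assuming the CFQ scheduler (which ionice's idle class requires) and a placeholder path:

    # write 5 GB at idle I/O priority, bypassing the page cache
    ionice -c3 dd if=/dev/zero of=/tmp/bigfile bs=1M count=5120 oflag=direct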
2010 Jun 23
2
iotop
Is there something like iotop in rpm form available for CentOS 4.x? Matt
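iotop needs kernel features (taskstats I/O accounting, roughly 2.6.20 and later) that the CentOS 4 kernel (2.6.9) predates, so a per-process iotop is unlikely there. Per-device stats from sysstat are the usual fallback:

    yum install sysstat
    iostat -x 5      # extended per-device stats every 5 seconds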
2011 Apr 16
2
centosplus kernel not up to date ?
Is it possible that the kernel in the centosplus repo is not up to date ? The centosplus kernel I have is kernel-PAE-2.6.18-238.5.1.el5 and the regular one is 2.6.18-238.9.1.el5... I need the plus because of the firewire drivers. Regards,
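One way to confirm what centosplus actually ships, assuming the repo is configured but disabled by default:

    # list every kernel-PAE build visible in the centosplus repo
    yum --enablerepo=centosplus list available kernel-PAE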
2011 Jun 07
1
daemon user can't access files while nobody user can.
I have a server where /home is on NFS. I installed an Apache server (my own compile). By default, it runs as daemon. That user can't access files in /home/*/public_html, while the nobody user can. So if I change my Apache config to run as nobody, it works.

/home is : drwxr-xr-x
/home/user is : drwxr-xr-x
/home/user/public_html is : drwxr-xr-x

SELinux is disabled. In the error log of Apache I get :
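A quick way to reproduce the failure outside Apache and compare the two identities as the NFS server sees them (paths are placeholders):

    id daemon; id nobody      # compare the UIDs/GIDs each user maps to
    su -s /bin/sh daemon -c 'ls -l /home/user/public_html'
    su -s /bin/sh nobody -c 'ls -l /home/user/public_html'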
2008 Mar 31
1
my CentOS switched its root fs to read-only on its own
My test server was under very heavy load, running Kolab's components: postfix, cyrus-imap and openldap. A second test machine was sending and reading emails as fast as it could. The CPU was never idle ! I left the server at about 17H30 and at 22H I found it "uncommitted" (difficult to work when the partition is read-only !). However the mount command reports it as RW ! The /boot is still RW
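ext3 remounts itself read-only when it hits an I/O or journal error if it was mounted with errors=remount-ro, and mount keeps reporting the stale rw flag from /etc/mtab, which matches the symptom. A way to confirm, with /dev/sda2 as a placeholder root device:

    dmesg | grep -iE 'remount|journal|i/o error'
    grep ' / ' /proc/mounts     # /proc/mounts shows the real ro flag, unlike /etc/mtab
    tune2fs -l /dev/sda2 | grep -i 'errors behavior'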
2019 Nov 11
2
cli Checking disk i/o
OK. That is interesting. I am assuming tps is transfers per sec? I would have to get a stopwatch, but it seems to go a bit of time, and then a write. Is there something that would accumulate this and give me a summary over some period of time? Of course it better NOT be doing its own IOs...

On 11/10/19 6:03 PM, shimi wrote:
> iostat 1
>
> On Mon, 11 Nov 2019, 00:11 Robert
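iostat itself can do the accumulating: the first report covers the time since boot, and with an interval argument each later report averages over that interval, so no stopwatch is needed. For instance:

    # first report = averages since boot; second = average over the 300 s interval
    iostat -d 300 2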
2006 Jun 16
5
[slightly OT] Problem with subversion 1.3.1 on OSX Tiger
I have a subversion repo on a Debian Sarge server. I do Rails development on two Debian workstations (home, work) and also a MacBookPro. I installed subversion from DarwinPorts. Things worked ok for a while and I did commits from and updates to all three machines until today. When I did svn status I noticed a lock on the working dir.

$ svn status
?      Rakefile
?      readme
! L    .
.... etc

I cannot
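The L flag in svn status marks a locked working copy, usually left behind by an interrupted operation; svn cleanup releases stale locks:

    svn cleanup      # remove stale working-copy locks, finish pending operations
    svn status       # the L flag should be gone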
2011 Aug 09
17
Re: Applications using fsync cause hangs for several seconds every few minutes
On 06/21/2011 01:15 PM, Jan Stilow wrote:
> Hello,
>
> Nirbheek Chauhan <nirbheek <at> gentoo.org> writes:
>> [...]
>>
>> Every few minutes, (I guess) when applications do fsync (firefox,
>> xchat, vim, etc), all applications that use fsync() hang for several
>> seconds, and applications that use general IO suffer extreme
>> slowdowns.
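A crude way to see those stalls from the shell is to time a tiny synced write in a loop; a sketch, with fsync-probe as a placeholder filename:

    # one 4 KiB write followed by fsync; watch for multi-second outliers
    while true; do
        time dd if=/dev/zero of=fsync-probe bs=4k count=1 conv=fsync 2>/dev/null
        sleep 5
    done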
2011 Sep 20
2
Finding i/o bottleneck
Hi list ! We have a very busy webserver hosted in a clustered environment where the document root and data are on a GFS2 partition on a fiber-attached disk array. Now at busy moments, I can see in htop and nmon that there is a fair percentage of CPU waiting for I/O. In nmon, I can spot that the busiest block device corresponds to our GFS2 partition, where many times it shows that
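To put numbers on the wait, extended iostat on the block device backing the GFS2 LUN is the usual first step:

    # watch await (avg ms per request) and %util on the device backing GFS2;
    # sustained %util near 100 with high await points at the array itself
    iostat -x 5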
2020 Apr 30
2
io_uring cause data corruption
On Thu, Apr 30, 2020 at 10:25:49AM +0200, A L wrote:
> So I did some more tests. smbclient mget does not copy in the same way
> Windows Explorer does. When copying in Windows Explorer, there are many
> multiple concurrent threads used to transfer the files. With smbclient mget
> there are no corruptions, both locally and over the network from another
> Linux machine.
>
> I
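In Samba, io_uring is opt-in through the vfs_io_uring module (Samba 4.12 and later), so a quick A/B test is to toggle it per share in smb.conf; a sketch with a placeholder share:

    [data]
        path = /srv/data
        # with the module enabled, reads/writes go through io_uring;
        # remove the line and restart smbd to fall back to plain pread/pwrite
        vfs objects = io_uring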
2008 Apr 30
1
X just died then restarted
Just had a weird thing happen that I've never encountered before. Running CentOS 5.x (updated) on 32-bit (Athlon XP). I was going along in Firefox when suddenly my screen went black and the hard drive light came on mostly steady for several seconds. After a little bit I got back the X login screen. It normally comes up on F7, but F7 was dead (with some text I'll paste in below in case it
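When X crashes and respawns, the dying server's log is rotated aside, so the evidence usually survives; grepping for (EE) lines is the quick triage, assuming the default log location:

    # Xorg.0.log.old is the log of the session that died
    grep -nE '\(EE\)|\(WW\)' /var/log/Xorg.0.log.old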
2013 Sep 04
10
Performance test regarding xenstored
Hello, I am running a mail benchmark test using smtp-source[1] with a VM running postfix on a Xen hypervisor. My system configuration is: uname -a -> Linux cadlab 3.1.10-1.19-xen #1 SMP Mon Feb 25 10:32:50 UTC 2013 (f0b13a3) x86_64 x86_64 x86_64 GNU/Linux. I am observing very high disk write usage by xenstored (some 5 Mbps) without anything even running on the VM. Is it normal? During test, I
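Per-process write volume can be confirmed straight from /proc (this needs CONFIG_TASK_IO_ACCOUNTING, which a 3.1 kernel has; assumes a single xenstored process):

    # sample write_bytes twice, 10 s apart, to estimate xenstored's write rate
    grep write_bytes /proc/$(pidof xenstored)/io
    sleep 10
    grep write_bytes /proc/$(pidof xenstored)/io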
2013 Sep 04
10
[Xen-users] Performance test regarding xenstored
Hello, I am running a mail benchmark test using smtp-source[1] with a VM running postfix on a Xen hypervisor. My system configuration is: uname -a -> Linux cadlab 3.1.10-1.19-xen #1 SMP Mon Feb 25 10:32:50 UTC 2013 (f0b13a3) x86_64 x86_64 x86_64 GNU/Linux. I am observing very high disk write usage by xenstored (some 5 Mbps) without anything even running on the VM. Is it normal? During test, I
2010 Nov 05
2
xServes are dead ;-( / SAN Question
Hi ! As some of you might know, Apple has discontinued its Xserve servers as of January 31st, 2011. We have a server rack with 12 Xserves ranging from dual G5s to latest-generation dual quad-core Xeons, 3 Xserve RAIDs and one ActiveRAID 16 TB disk enclosure. We also use Xsan to access a shared file system among the servers. Services are run from this shared filesystem, spread
2011 Dec 15
3
Cause for kernel panic
Hi ! On an 8-node cluster, one of the nodes did a kernel panic. The only bit of information I have is on an ssh console I had open, which said :

Message from syslogd at node108 at Dec 14 19:00:15 ...
kernel:------------[ cut here ]------------
Message from syslogd at node108 at Dec 14 19:00:15 ...
kernel:invalid opcode: 0000 [#1] SMP
Message from syslogd at node108 at Dec 14 19:00:15 ...
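To catch the full trace next time, netconsole can mirror kernel messages to another host over UDP; a sketch with placeholder addresses and interface:

    # on the panicking node: send kernel messages out eth0 to 192.168.1.10:6666
    modprobe netconsole netconsole=@/eth0,6666@192.168.1.10/
    # on the receiving host (nc flags vary by implementation):
    nc -l -u -p 6666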
2017 Oct 03
4
[PATCH v2 0/2] builder: Choose better weights in the planner.
v1 -> v2:

- Removed the f_type field from StatVFS.statvfs structure.
- New function StatVFS.filesystem_is_remote, written in C. [Thinking about it, this should probably be called "is_network_filesystem", but I can change that before pushing].
- Use statvfs instead of fstatvfs, and statfs instead of fstatfs.
- Rejigged the comments in builder/builder.ml to make them simpler
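The remote-vs-local test the patch describes boils down to inspecting the filesystem type behind a path; from a shell, GNU stat exposes the same information (path is a placeholder):

    # print the filesystem type name for the directory the planner weighs
    stat -f -c %T /var/tmp      # e.g. "ext2/ext3", "tmpfs", "nfs"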
2012 Jun 26
2
Question about storage for virtualisation
Hi ! I'm about to deploy a new server that will host several virtual hosts, mainly for website hosting purposes. My server will be a Xeon 3440 or 3450 with 32 gigs of ram (the max for that board), so I will have 8 logical cores. At the moment, I don't know how many VMs I will have, on the order of 5 or 6. I am quite new to managing VMs; I did play a lot with them over the course of the
2020 Feb 03
3
Hard disk activity will not die down
I updated my backup server this weekend from CentOS 7 to CentOS 8. OS disk is SSD, /dev/md0 is two 4TB WD mechanical drives. No hardware was changed.

1. wiped all drives
2. installed new copy of 8 on system SSD
3. re-created the 4TB mirror /dev/md0 with the same WD mechanical drives
4. created the largest single partition possible on /dev/md0 and formatted it ext4
5. waited several hours for the
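A freshly created RAID1 of 4 TB drives spends many hours on its initial resync, which looks exactly like disk activity that will not die down; progress and the throttle limits are visible here:

    cat /proc/mdstat      # shows resync progress, speed, and ETA
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # kB/s resync bounds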