
Displaying 20 results from an estimated 40000 matches similar to: "Will samba work on Linux/486 with heavy swapping?"

2002 Mar 01
2
Re: ext3, S/W RAID-5 and many services
I am having the exact same problems with my Linux box. I have a 486 with 24MB of RAM and a RAID 1 partition. The box runs dhcp, xinetd, telnetd, and samba, and currently only serves the RAID 1 partition via samba to a single Win98 box. During any large file transfer to the Linux box, regardless of which machine initiates the transfer, the kernel panics and the box dies. Currently, I have tried
2009 Jan 29
1
7.1, mpt and slow writes
Hello, I think this needs a few more eyes: http://lists.freebsd.org/pipermail/freebsd-scsi/2009-January/003782.html In short, writes are slow, likely due to the write cache being enabled on the controller. The sysctls used in 6.x to turn the cache off don't seem to be in 7.x. Thanks, Charles ___ Charles Sprickman NetEng/SysAdmin Bway.net - New York's Best Internet - www.bway.net
2013 Sep 02
1
heavy IO load when working with sparse files (centos 6.4)
Dear List, We have noticed a variety of reproducible conditions working with sparse files on multiple servers under load with CentOS 6.4. The short story is that processes that read / write sparse files with large "holes" can generate an IO storm. Oddly, this only happens with holes and not with the sections of the files that contain data. We have seen extremely high IO load for
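For context, a sparse file with a large hole is easy to reproduce. A minimal C sketch (the file name sparse.img and the 1 GiB hole size are illustrative, not taken from the report above) that creates a hole and prints the apparent size versus the blocks actually allocated:
--------------snip--------------
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("sparse.img", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Seek 1 GiB past the start and write a single byte; the
       skipped range becomes a "hole" that occupies no disk blocks. */
    if (lseek(fd, 1024L * 1024 * 1024, SEEK_SET) < 0) { perror("lseek"); return 1; }
    if (write(fd, "x", 1) != 1) { perror("write"); return 1; }
    close(fd);

    struct stat st;
    if (stat("sparse.img", &st) == 0)
        printf("apparent size: %lld bytes, allocated: %lld bytes\n",
               (long long)st.st_size,
               (long long)st.st_blocks * 512LL);  /* st_blocks is in 512-byte units */
    return 0;
}
--------------snip--------------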
2006 Apr 01
1
6.1 responsiveness under heavy (cpu?) load + thread monitoring
Hello, I have a RELENG_6/AMD64 box with an AMD64 X2 CPU, 1GB of memory, and 512MB of swap. I noticed that when I run 2 simulation programs [1], one of them sometimes runs one or several of its threads [2] very slowly, but sometimes it doesn't occur. I noticed that it doesn't depend on SMP/UP, ULE/4BSD, or i386/AMD64. It seems to occur also with the kernel stress test under the same conditions (all
2002 Feb 22
0
(no subject)
Is there a known bug in samba on older machines in low-memory environments? I have set up a 486 with 24MB of memory running RedHat 7.1. I set it up to run dhcpd, xinetd, and samba; samba was not run through xinetd. The samba server had a tendency to crash the machine if a large upload was sent to it. Occasionally it wouldn't crash, but Windows would error and stop uploading. No problems
2005 May 13
4
Gigabit Throughput too low
Hi, I was wondering if you ever got better performance out of your Gigabit/IDE/FC2 setup? I am facing a similar situation. I am running FC2 with Samba 3.x. My problem is that I am limited to 10 MBytes per second sustained. I think it's related to pdflush and how its buffers are set up. (I have been doing some research, and before the 2.6 kernels bdflush was the method that was used and
2008 Feb 01
3
swapping on centos 5.1
Hi all, I used to run CentOS 4.5 on an AMD 4800+ with 2GB of RAM. Now I run CentOS 5.1 on an AMD 6400+ with 4GB of RAM. The system responsiveness is different between the two. I noticed that CentOS 5.1 seems to swap programs out of memory at times, resulting in slowness (perceived by me). I played with the swappiness setting (/proc/sys/vm/), setting it to 10, then 1, then 0. Still resulted in the same perceived
2009 Dec 20
6
storage servers crashing, hair being pulled out!
I have a trio of servers that like to reboot during high disk/network IO operations. They don't appear to panic, as I have kernel.panic = 0 in sysctl.conf. The syslog just shows normal messages, like samba complaining about the browse master, and then syslogd starting up again. The machines seem to crash when I'm not near the console, usually when I'm trying to pull data off them to
2000 Jan 13
5
Inhibiting swapping with mlock
There's one vulnerability that's bugged me for some time. It applies to nearly all crypto software, including ssh. That's the swapping of sensitive info (such as keys and key equivalents) onto hard drives where they could possibly be recovered later. The Linux kernel provides a system call, mlock(), that inhibits swapping of a specified region of virtual memory. It locks it into real
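As a rough sketch of the call being described (illustrative only; the buffer size and error handling are assumptions, not code from this message):
--------------snip--------------
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096;
    unsigned char *key = malloc(len);
    if (!key) return 1;

    /* mlock() pins these pages into RAM so they can never be written
       to swap; it may require root or a sufficient RLIMIT_MEMLOCK. */
    if (mlock(key, len) != 0) { perror("mlock"); free(key); return 1; }

    /* ... fill and use the key material here ... */

    /* Wipe before the pages become swappable again. (A compiler may
       elide a plain memset; explicit_bzero(3) is safer where available.) */
    memset(key, 0, len);
    munlock(key, len);
    free(key);
    return 0;
}
--------------snip--------------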
2005 Apr 25
4
Suse 9.3 boot problem
Hi there, I have Suse 9.3 on RAID 1 (md0: boot and root; md1: home). When I try to boot the xen kernel, the process goes up to a certain point and then reboots. I have two problems: 1) Since I used the default Suse parameters, I would assume all my settings should be OK, so why does it not boot? 2) When booting and getting to the reboot point, it holds the messages for only one second. How can
2005 Feb 03
2
RAID 1 sync
Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to sync!!???
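For scale: 300 GB in 18936 minutes works out to about 300,000 MB / 1,136,160 s, i.e. roughly 0.26 MB/s, far below the streaming rate of even a single disk of that era. If this is a Linux md array (an assumption), that usually points at resync throttling while the array is in use, tunable via /proc/sys/dev/raid/speed_limit_min and speed_limit_max, or at competing IO on the member disks.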
2003 May 22
3
Tuning system response degradation under heavy ext3/2 activity.
Hello. I'm looking for assistance or pointers for the following problem. OS: RHAS 2.1 enterprise16. HW: HP ProLiant, 2 CPUs, 6GB RAM, internal RAID1 + RAID5 (4 x 10K 72GB). When we run any kind of process (especially tar, for some reason) that creates heavy disk activity, the machine becomes very slow (e.g. it takes 30-45 seconds to get a reply from ls at the console, or a minute to log in). I
2009 Feb 27
3
ext3 heavy file fragmentation with NFS write
Hello, Does anybody know how to avoid the file fragmentation when a file is created over NFSv3? A file created locally is OK: dd bs=32k if=/dev/zero of=test count=32x1024 conv=fsync filefrag test test: 10 extents found, perfection would be 9 extents When I create the file in the same dir, but from another machine, mounted over NFS: filefrag test test: 4833 extents found, perfection would be
2008 Mar 03
1
Quota setup fails because of OST ordering
Hi all, after installing a Lustre test file system consisting of 34 OSTs, I encountered a strange error when trying to set up quotas: lfs quotacheck gave me an "Input/Output error", while in /var/log/kern.log I found a Lustre error LustreError: 20807:0:(quota_check.c:227:lov_quota_check()) lov idx 32 inactive Indeed, in /proc/fs/lustre/lov/.../target_obd all 34 OSTs were listed
2007 Feb 23
2
OCFS 1.2.4 memory problems still?
I have a 2-node cluster of HP DL380 G4s. These machines are attached via SCSI to an external HP disk enclosure. They run 32-bit RH AS 4.0 and OCFS 1.2.4, the latest release. They were upgraded from 1.2.3 only a few days after 1.2.4 was released. I had reported on the mailing list that my developers were happy and things seemed faster. However, twice in that time, the cluster has gone down due
2007 Mar 19
3
net.ipv4 TCP/IP Optimizations = sysctl.conf?
If I execute these via the command line, will they persist after a reboot? Or should I be putting them into a file like /etc/sysctl.conf? --------------snip-------------- /sbin/sysctl -w net.ipv4.tcp_max_syn_backlog=2048 /sbin/sysctl -w net.ipv4.tcp_fin_timeout=30 /sbin/sysctl -w net.ipv4.tcp_keepalive_intvl=10 /sbin/sysctl -w net.ipv4.tcp_keepalive_probes=7 /sbin/sysctl -w
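They will not persist: sysctl -w changes only the running kernel, and the settings are lost at reboot. Values meant to survive a reboot go into /etc/sysctl.conf, which the boot scripts apply (the same file sysctl -p reads). The equivalents of the complete commands above would be:
--------------snip--------------
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 7
--------------snip--------------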
2003 Apr 06
1
load testing and tuning a 4GB RAM server
Hello everyone, First of all, great job on 4.8-R. We have been long-standing users of FreeBSD and are very happy with everything. Now my question: I am trying to stress test a new Dell PowerEdge server and find the limits of its hardware and my tuning. Here are the server stats: * 2x Xeon 2.8 with SMP compiled, hyperthreading NOT compiled into the kernel * 4 GB of RAM, 8 GB of swap on RAID 1
2017 Oct 22
2
Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
From Noam Bernstein on the CentOS mailing list: > Is anyone running any Areca RAID controllers with the latest CentOS 7 kernel, >
2009 Mar 24
3
LSI Logic raid status
Hi, I have an LSI Logic SATA/SAS RAID running. Is there a way to see the state of the volume, like optimal, degraded, or resyncing? I've tried several commands with camcontrol but I can't figure it out. -- Peter Ankerstål peter@pean.org http://www.pean.org/
2006 Nov 09
2
How to create a huge file system - 3-4TB?
We have a server with 6x750GB SATA drives on a hardware RAID controller. We created a hardware RAID 5 array on these 6x750GB HDDs; the effective size after RAID 5 is 3.4TB. We want to use this server as a data backup server. Here is the problem we are stuck with: when we use fdisk -l, we can see the drive specs and its size as 3.4TB. But when we want to create two different
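The likely obstacle (an assumption, since the message is cut off) is that fdisk writes an MBR partition table, which cannot describe a partition boundary beyond 2TB. A GPT label created with parted avoids that limit. Illustrative commands only, assuming the array shows up as /dev/sda and a reasonably recent parted:
--------------snip--------------
parted /dev/sda mklabel gpt
parted /dev/sda mkpart primary 0% 50%
parted /dev/sda mkpart primary 50% 100%
mkfs.ext3 /dev/sda1
mkfs.ext3 /dev/sda2
--------------snip--------------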