Displaying 10 results from an estimated 10 matches for "read_ahead_kb".
2014 Mar 22
4
suggestions for a "fast" fileserver - 1G / 10G
Hi,
I'm looking for some recommendations for a "fast" fileserver, particularly
regarding the hardware you use.
We have different fileservers, as our requirements changed over time.
The "main" problem we are faced with is that with SMB (Windows 7 and OS
X) clients we never get really close to GBit speed on reads or writes.
Using the same servers/storage with FTP, SSH, rsync, NFS we
2012 Feb 07
1
Recommendations for busy static web server replacement
Hi all,
after being a silent reader for some time and not being very successful in
getting good performance out of our test set-up, I'm finally coming to the
list with questions.
Right now, we are operating a web server serving out 4MB files for a
distributed computing project. Data is requested from all over the world at a
rate of about 650k to 800k downloads a day. Each data file is usually
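Taking those figures at face value, the back-of-the-envelope load (my arithmetic, not stated in the thread) is 650k-800k downloads/day x 4 MB, roughly 2.6-3.2 TB/day, which averages out to about 240-300 Mbit/s of sustained outbound traffic before allowing for peak hours.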
2015 Apr 29
0
nfs (or tcp or scheduler) changes between centos 5 and 6?
> ...aggressive caching, does this match
> the needs of your application? That is, do you have the situation
> where the client NFS layer does an aggressive read-ahead that is never
> used by the application?
That was one of our early theories. On 6, you can adjust this via
/sys/class/bdi/X:Y/read_ahead_kb (use stat on the mountpoint to
determine X and Y). This file doesn't exist on 5. But we tried
increasing and decreasing it from the default (960), and didn't see
any changes.
> Are C5 and C6 using the same NFS protocol version? How about TCP vs
> UDP? If UDP is in play, have a lo...
2012 Nov 01
15
[RFC PATCH v2 0/3] mm/fs: Implement faster stable page writes on filesystems
Hi all,
This patchset makes some key modifications to the original 'stable page writes'
patchset. First, it provides users (devices and filesystems) of a
backing_dev_info the ability to declare whether or not it is necessary to
ensure that page contents cannot change during writeout, whereas the current
code assumes that this is true. Second, it relaxes the
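To make "page contents cannot change during writeout" concrete, here is a small userspace illustration (mine, not part of the patchset): a process that keeps dirtying an mmap()ed page while asynchronous writeback may be copying it to disk, which is exactly the situation a checksumming or encrypting device cannot tolerate without stable pages.

/* Illustration only: repeatedly modify a shared file mapping while
 * writeback is in flight, so the on-disk copy may not match any single
 * in-memory state unless the kernel enforces stable page writes. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("scratch.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, 4096) != 0)
        return 1;

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    for (int i = 0; i < 1000000; i++) {
        memset(p, i & 0xff, 4096);        /* dirty the page again... */
        if (i % 100000 == 0)
            msync(p, 4096, MS_ASYNC);     /* ...while writeback is kicked off */
    }

    munmap(p, 4096);
    close(fd);
    return 0;
}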
2015 Apr 29
5
nfs (or tcp or scheduler) changes between centos 5 and 6?
We have a "compute cluster" of about 100 machines that do a read-only
NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on
these boxes are analysis/simulation jobs that constantly read data off
the NAS.
We recently upgraded all these machines from CentOS 5.7 to CentOS 6.5.
We did a "piecemeal" upgrade, usually upgrading five or so machines at
a time, every few
2011 Nov 09
4
Please advise on very fast search
Hello,
I'm trying to create some kind of mail backup system. What I need is a
system that will store mail for the whole domain and allow me to restore
messages from/to a specified email address at that domain.
The scheme is pretty simple: on our main mail server the SMTP server
itself has a rule to send a copy of every message to
'backup at backupserver.host', and the backupserver.host domain is
2018 May 23
3
[PATCH] block drivers/block: Use octal not symbolic permissions
..._requests_entry = {
- .attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
+ .attr = {.name = "nr_requests", .mode = 0644 },
.show = queue_requests_show,
.store = queue_requests_store,
};
static struct queue_sysfs_entry queue_ra_entry = {
- .attr = {.name = "read_ahead_kb", .mode = S_IRUGO | S_IWUSR },
+ .attr = {.name = "read_ahead_kb", .mode = 0644 },
.show = queue_ra_show,
.store = queue_ra_store,
};
static struct queue_sysfs_entry queue_max_sectors_entry = {
- .attr = {.name = "max_sectors_kb", .mode = S_IRUGO | S_IWUSR },
+ .att...
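The equivalence the patch relies on is straightforward: S_IRUGO is the kernel's shorthand for "readable by user, group and other" (0444), so S_IRUGO | S_IWUSR is 0644. A small userspace check, spelled out with the POSIX macros that S_IRUGO expands to:

/* Verify that the symbolic mode replaced above equals the octal literal.
 * S_IRUGO is kernel-internal, so it is expanded here by hand. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    unsigned int irugo = S_IRUSR | S_IRGRP | S_IROTH;   /* 0444 */
    unsigned int mode  = irugo | S_IWUSR;               /* 0444 | 0200 */

    printf("S_IRUGO | S_IWUSR = %04o\n", mode);          /* prints 0644 */
    return mode == 0644 ? 0 : 1;
}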
2012 Mar 15
2
Usage Case: just not getting the performance I was hoping for
All,
For our project, we bought 8 new Supermicro servers. Each server has a
quad-core Intel CPU in a 2U chassis supporting 8 x 7200 RPM SATA drives.
To start out, we only populated 2 x 2TB enterprise drives in each
server and added all 8 peers, with their total of 16 drives as bricks, to
our Gluster pool as distributed replicated (2). The replica worked as
follows:
1.1 -> 2.1
1.2
2010 Sep 28
18
[PATCH] Btrfs: add a disk info ioctl to get the disks attached to a filesystem
This was a request from the systemd guys. They need a quick and easy way to get
all devices attached to a Btrfs filesystem in order to check if any of the disks
are SSD for...something, I didn't ask :). I've tested this with the
btrfs-progs patch that accompanies this patch. Thanks,
Signed-off-by: Josef Bacik <josef@redhat.com>
---
fs/btrfs/ioctl.c | 64