Displaying 20 results from an estimated 700 matches similar to: "folder with no permissions"
2012 Jun 18
1
dovecot-sieve and LMTP
Dear list,
My mail server is working perfectly. So I am trying to add feature after feature, until I have all the features I need. This has worked fine until now. I am trying to get dovecot-sieve to work. So I activated dovecot-lda and the sieve plugin and told postfix to use deliver instead of procmail. After restarting all services I then created a test sieve file. Obviously I have not yet
2010 Aug 06
0
Re: PATCH 3/6 - direct-io: do not merge logically non-contiguous requests
On Fri, May 21, 2010 at 15:37:45AM -0400, Josef Bacik wrote:
> On Fri, May 21, 2010 at 11:21:11AM -0400, Christoph Hellwig wrote:
>> On Wed, May 19, 2010 at 04:24:51PM -0400, Josef Bacik wrote:
>> > Btrfs cannot handle having logically non-contiguous requests submitted. For
>> > example if you have
>> >
>> > Logical: [0-4095][HOLE][8192-12287]
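For reference, a layout like that can be reproduced with dd; a hedged sketch, assuming a btrfs mount at /mnt:
# data at 0-4095 and 8192-12287, an unwritten hole at 4096-8191
dd if=/dev/zero of=/mnt/testfile bs=4096 count=1
dd if=/dev/zero of=/mnt/testfile bs=4096 count=1 seek=2 conv=notrunc
# a single direct-I/O read spanning the whole file then covers the hole
dd if=/mnt/testfile of=/dev/null bs=12288 count=1 iflag=direct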
2010 Dec 25
2
predict.lrm vs. predict.glm (with newdata)
Hi all
I have run into a case where I don't understand why predict.lrm and
predict.glm don't yield the same results. My data look like this:
set.seed(1)
library(Design); ilogit <- function(x) { 1/(1+exp(-x)) }
ORDER <- factor(sample(c("mc-sc", "sc-mc"), 403, TRUE))
CONJ <- factor(sample(c("als", "bevor", "nachdem",
2007 Nov 19
0
Solaris 8/07 Zfs Raidz NFS dies during iozone test on client host
Hi,
Well I have a freshly built system with ZFS raidz.
Intel P4 2.4 Ghz
1GB Ram
Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller
(2) Intel Dual Port 1Gbit nics
I have (5) 300GB disks in a Raidz1 with Zfs.
I've created a couple of FS on this.
/export/downloads
/export/music
/export/musicraw
I've shared these out as well.
First with ZFS 'zfs
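The excerpt cuts off at the zfs command; a sketch of how such a layout is commonly built and shared over NFS (pool name, device names and mountpoint are assumptions, not from the post):
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
zfs create tank/export
zfs set mountpoint=/export tank/export
zfs create tank/export/downloads
zfs create tank/export/music
zfs create tank/export/musicraw
zfs set sharenfs=on tank/export    # inherited by the child filesystems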
2010 May 25
0
Magic parameter "-ec" of IOZone to increase the write performance of samba
Hi,
I am measuring the performance of my newly bought NAS with IOZone.
The NAS is of an embedded linux with samba installed. (CPU is Intel Atom)
IOZone reported the write performance to be over 1 GB/s as long as the file
size was less than or equal to 1 GB.
Since the NIC is 1 Gbit/s, the maximum speed should be roughly 125 MB/s at
most.
The IOZone test report is amazing.
Later I found that if the
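The excerpt stops before the explanation, but 1 Gbit/s tops out around 125 MB/s, so anything above that is almost certainly the client page cache being measured. iozone's -e and -c flags fold fsync/fflush and close() into the timing; a sketch, with an assumed test path:
# include flush (-e) and close (-c) in the timing, and use a file
# larger than the client's RAM so the cache cannot hold it all
iozone -a -e -c -s 2g -r 64k -f /mnt/nas/iozone.tmp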
2010 Jul 06
0
[PATCH 0/6 v6][RFC] jbd[2]: enhance fsync performance when using CFQ
Hi Jeff,
On 07/03/2010 03:58 AM, Jeff Moyer wrote:
> Hi,
>
> Running iozone or fs_mark with fsync enabled, the performance of CFQ is
> far worse than that of deadline for enterprise class storage when dealing
> with file sizes of 8MB or less. I used the following command line as a
> representative test case:
>
> fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s
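For anyone without fs_mark handy, a roughly comparable small-file, fsync-heavy workload can be approximated with iozone's synchronous-write flag; sizes and paths below are assumptions, not from the thread:
# -o opens the file O_SYNC, -i 0 restricts the run to write/rewrite
iozone -s 8m -r 64k -o -i 0 -f /mnt/test/iozone.tmp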
2017 Sep 11
0
3.10.5 vs 3.12.0 huge performance loss
Here are my results:
Summary: I am not able to reproduce the problem; IOW I get roughly
equivalent numbers for sequential IO when running against 3.10.5 or 3.12.0
Next steps:
- Could you pass along your volfiles (both the client vol file from
/var/lib/glusterd/vols/<yourvolname>/patchy.tcp-fuse.vol and a brick vol
file from the same place)?
- I want to check
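A hedged sketch of how those volfiles are usually gathered (the volume name "myvol" is hypothetical; the actual .vol file names depend on the volume name, transport and brick paths):
gluster volume info myvol
ls /var/lib/glusterd/vols/myvol/*.vol
tar czf /tmp/myvol-volfiles.tar.gz /var/lib/glusterd/vols/myvol/*.vol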
2011 Jan 08
1
how to graph iozone output using OpenOffice?
Hi all,
Can anyone please steer me in the right direction with this one? I've
searched the net, but couldn't find a clear answer.
How do I actually generate graphs from iozone, using OpenOffice? Every
website I've been to simply mentions that iozone can output an xls
file which can be used in MS Excel to generate a 3D graph. But I
can't see how it's actually done. Can anyone
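For what it's worth, the usual workflow looks something like this; the OpenOffice menu steps are from memory and may differ between versions:
# -R produces an Excel-compatible report, -b writes it to a spreadsheet file
iozone -a -R -b iozone_results.xls -g 512m
# then in OpenOffice Calc:
#   1. open iozone_results.xls
#   2. select one result table (e.g. the writer report)
#   3. Insert -> Chart, choose a 3D type; file size and record size
#      become the two horizontal axes, throughput the vertical one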
2008 Feb 01
2
Un/Expected ZFS performance?
I'm running Postgresql (v8.1.10) on Solaris 10 (Sparc) from within a non-global zone. I originally had the database "storage" in the non-global zone (e.g. /var/local/pgsql/data on a UFS filesystem) and was getting performance of "X" (e.g. from a TPC-like application: http://www.tpc.org). I then wanted to try relocating the database storage from the zone (UFS
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
Greetings Gentlemen,
I'm currently testing a new setup for a ZFS based storage system with
dedup enabled. The system is setup on OI 148, which seems quite stable
w/ dedup enabled (compared to the OpenSolaris snv_136 build I used
before).
One issue I ran into, however, is quite baffling:
With iozone set to 32 threads, ZFS's ARC seems to consume all available
memory, making
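One common mitigation on Solaris/OpenIndiana is to cap the ARC; a sketch with a hypothetical 4 GiB ceiling (the right value depends on how much RAM the dedup table itself needs):
# add to /etc/system and reboot; the value is in bytes
echo 'set zfs:zfs_arc_max = 0x100000000' >> /etc/system
# watch the current ARC size and ceiling
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max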
2009 Dec 15
1
IOZone: Number of outstanding requests..
Hello:
Sorry for asking iozone ques in this mailing list but couldn't find
any mailing list on iozone...
In IOZone, is there a way to configure # of outstanding requests
client sends to server side? Something on the lines of IOMeter option
"Number of outstanding requests".
Thanks a lot!
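If I remember the iozone options correctly, -H <n> (POSIX async I/O with n outstanding operations) is the closest analogue to IOMeter's setting; otherwise fio exposes queue depth directly. A sketch with assumed paths and sizes:
# iozone: 16 outstanding async operations (if -H is available in your build)
iozone -H 16 -s 1g -r 8k -i 0 -i 2 -f /mnt/test/iozone.tmp
# fio: explicit queue depth via libaio
fio --name=randread --filename=/mnt/test/fio.dat --size=1g \
    --rw=randread --bs=8k --ioengine=libaio --iodepth=16 --direct=1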
2012 Aug 24
1
Typical setup questions
All,
I am curious what is typically used for file system replication and
how you make sure that it is consistent.
So for example, when using large 3TB+ SATA/NL-SAS drives, is it typical
to replicate three times to get protection similar to RAID 6?
Also, what is typically done to ensure that all replicas are in place and
consistent? A cron job that stats or ls's the file system from a
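With a Gluster replica volume the usual consistency checks are the built-in heal commands rather than a cron'd stat/ls; a sketch, assuming a volume named "myvol":
gluster volume status myvol                  # are all bricks online?
gluster volume heal myvol info               # files still pending heal
gluster volume heal myvol info split-brain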
2008 Feb 19
1
ZFS and small block random I/O
Hi,
We're doing some benchmarking at a customer (using IOzone) and for some
specific small block random tests, performance of their X4500 is very
poor (~1.2 MB/s aggregate throughput for a 5+1 RAIDZ). Specifically,
the test is the IOzone multithreaded throughput test of an 8GB file size
and 8KB record size, with the server physmem'd to 2GB.
I noticed a couple of peculiar
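One knob worth checking for an 8KB random workload is the dataset recordsize; a sketch with a hypothetical dataset name (recordsize only affects blocks written after the change, so the test file must be recreated):
zfs set recordsize=8k tank/bench
zfs get recordsize,atime tank/bench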
2015 Apr 14
0
Re: VM Performance using KVM Vs. VMware ESXi
Dear Jatin,
Maybe it’s a good idea to first implement Spice:
<video>
<model type='qxl' ram='65536' vram='65536' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
2006 Feb 09
0
strange behaviour of domU - i/o performance tests
Hi,
I am currently running some I/O performance tests inside domUs with iozone3.
One scenario is a domU with file-backed VBDs (root and swap) as sda1 and sda2,
stored on an nfs-kernel-server (2.6.14.4, Debian Sarge).
The exact iozone command is:
iozone -a -R -b result.xls -f /tmp/iozone.test -n 1m -g 256m -i 0 -i 1
As soon as the iozone test reaches a file size of 132M, the complete(!) system
2008 Jul 16
1
[Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]
Dear ALL,
IHAC who would like to use a Sun Fire X4500 as the NFS server for their
backend services, and would like to see the potential performance gain
compared to their existing systems. However, the output of the iozone I/O
stress test shows mixed results, as follows:
* The read performance degrades sharply (almost down to 1/20, i.e.
from 2,000,000 down to 100,000) when the
2017 Oct 27
0
Poor gluster performance on large files.
Why don't you set the LSI to passthrough mode and use one brick per HDD?
Regards,
Bartosz
> Message written by Brandon Bates <brandon at brandonbates.com> on 27.10.2017, at 08:47:
>
> Hi gluster users,
> I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing and 300-400MB/s for
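The suggestion above (controller in passthrough/JBOD mode, one brick per disk) would look roughly like this; server names, mount points and the replica count are assumptions:
gluster volume create vidvol replica 2 \
    server1:/bricks/disk1/brick server2:/bricks/disk1/brick \
    server1:/bricks/disk2/brick server2:/bricks/disk2/brick
gluster volume start vidvol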
2015 Apr 14
3
VM Performance using KVM Vs. VMware ESXi
Hi All
We are currently testing our product using KVM as the hypervisor. We are
not using KVM as a bare-metal hypervisor. We use it on top of a RHEL
installation. So basically RHEL acts as our host and using KVM we deploy
guests on this system.
We have all along tested and shipped our application image for VMware
ESXi installations, so this is the first time we are trying our
application
2008 Jul 03
2
iozone remove_suid oops...
Having done a current checkout, creating a new FS and running iozone
[1] on it results in an oops [2]. remove_suid is called, accessing
offset 14 of a NULL pointer.
Let me know if you'd like me to test any fix, do further debugging or
get more information.
Thanks,
Daniel
--- [1]
# mkfs.btrfs /dev/sda4
# mount /dev/sda4 /mnt
/mnt# iozone -a .
--- [2]
[ 899.118926] BUG: unable to
2017 Oct 11
0
iozone results
I'm testing iozone inside a VM booted from a gluster volume.
By looking at network traffic on the host (the one connected to the
gluster storage) I can
see that a simple
iozone -w -c -e -i 0 -+n -C -r 64k -s 1g -t 1 -F /tmp/gluster.ioz
will generate about 1200 Mbit/s on a bonded dual-gigabit NIC (probably
with a badly configured bonding mode).
fio returns about 50000 kB/s, which is roughly 400000 kbit/s.
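For a closer apples-to-apples comparison, a fio job that roughly mirrors the iozone line above (sequential 64k writes of a 1 GiB file with a final fsync, matching -e); the file path is hypothetical:
fio --name=seqwrite --filename=/tmp/gluster.fio --rw=write \
    --bs=64k --size=1g --end_fsync=1 --ioengine=psync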