similar to: using KVM on glusterfs

Displaying 19 results from an estimated 7000 matches similar to: "using KVM on glusterfs"

2011 Jul 29
3
issue with GlusterFS to store KVM guests
I'm having difficulty running KVM virtual machines off of a GlusterFS volume mounted using the GlusterFS client. I am running CentOS 6, 64-bit. I am using virt-install to create my images but am encountering the following error: qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img: Invalid argument (see below for a lengthier version of the error). I have found an example of
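That "Invalid argument" on open usually points at qemu requesting O_DIRECT (cache=none) on a FUSE mount that rejects it. A minimal sketch of the two common workarounds, reusing the volume path from the message; neither command is from the original thread, and the server name is a placeholder:

    # remount the volume with direct I/O enabled on the FUSE client
    mount -t glusterfs -o direct-io-mode=enable server1:/myreplicatestvolume /mnt/myreplicatestvolume

    # or keep the mount as-is and have virt-install avoid O_DIRECT
    virt-install --name testvm --ram 2048 \
        --disk path=/mnt/myreplicatestvolume/testvm.img,size=10,cache=writethrough \
        --cdrom /tmp/CentOS-6-x86_64.iso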
2011 Jun 27
2
Using TSM to back-up glusterfs
Hi, we have been trying to back up a glusterfs (v3.1.4) area using the Tivoli TSM software to an off-site area. The back-up keeps failing with the following typical error messages: 06/14/2011 22:22:58 ANS1587W I/O error reading file attributes for: /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in. errno = 22, Invalid argument 06/14/2011 22:22:59 ANS4007E Error processing
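A quick diagnostic sketch (not a fix) to check whether the attribute reads fail outside TSM as well, using the path from the error above:

    stat /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in
    # extended attributes are a common source of EINVAL on FUSE mounts
    getfattr -d -m . /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in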
2013 Mar 18
2
How to evaluate the glusterfs performance with small file workload?
Hi guys, I have run into some trouble trying to evaluate glusterfs performance with a small-file workload. 1: What kind of benchmark should I use to test small-file operations? As we all know, we can use the iozone tool to test large-file operations, but because of the memory cache, if we test small-file operations with iozone the results will not be correct.
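One common workaround is to run iozone with O_DIRECT so the page cache is bypassed, dropping caches between runs; a sketch with sizes picked arbitrarily for illustration:

    sync && echo 3 > /proc/sys/vm/drop_caches
    # -I uses O_DIRECT; four threads, 4k records, 128k files named f1..f4
    iozone -I -i 0 -i 1 -i 2 -r 4k -s 128k -t 4 \
        -F /mnt/gluster/f1 /mnt/gluster/f2 /mnt/gluster/f3 /mnt/gluster/f4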
2009 Mar 11
1
Enterprise Application with O_DIRECT access
Hello everyone, I am learning about and evaluating glusterfs for film/video editing facilities. Some major real-time film/video editing applications use O_DIRECT file access for video/audio data files. The GLFS client, via the FUSE mechanism, disallows opening files with the O_DIRECT flag. I wrote a little sample program that reads a file with the O_DIRECT flag and tried opening files on GLFS volumes. It
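The same open failure can be reproduced without custom sample code, since dd can request O_DIRECT; a sketch assuming a GLFS mount at /mnt/glfs with made-up file and volume names:

    # expected to fail with EINVAL where the FUSE client rejects O_DIRECT
    dd if=/mnt/glfs/clip.dv of=/dev/null bs=1M iflag=direct

    # remounting with direct I/O enabled may change the outcome
    mount -t glusterfs -o direct-io-mode=enable server1:/glfsvol /mnt/glfs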
2018 Mar 05
6
SQLite3 on 3 node cluster FS?
Raghavendra, Thanks very much for your reply. I fixed our data corruption problem by disabling the volume performance.write-behind flag as you suggested, and simultaneously disabling caching in my client side mount command. In very modest testing, the flock() case appears to me to work well - before, it would corrupt the db within a few transactions. Testing using built-in sqlite3 locks is
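For reference, the fix described maps to something like the following; the thread only names performance.write-behind explicitly, so the volume name and the client-side caching options are assumptions:

    gluster volume set myvol performance.write-behind off
    # client side: keep FUSE from caching attributes and directory entries
    mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 server1:/myvol /mnt/myvol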
2018 Mar 05
2
SQLite3 on 3 node cluster FS?
Hi, tl;dr summary of below: flock() works, but what does it take to make sync()/fsync() work in a 3 node GFS cluster? I am under the impression that POSIX flock, POSIX fcntl(F_SETLK/F_GETLK,...), and POSIX read/write/sync/fsync are all supported in cluster operations, such that in theory, SQLite3 should be able to atomically lock the file (or a subset of pages), modify pages, flush the pages to
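The flock() case can be exercised from the shell with flock(1), which serializes writers across nodes on a shared lock file; a sketch with made-up paths, assuming table t already exists in the database:

    # run concurrently on each node; flock takes an exclusive lock first
    flock /mnt/gfs/test.db.lock \
        sqlite3 /mnt/gfs/test.db 'INSERT INTO t VALUES (1);'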
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
+Csaba. On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson <pha at umich.edu> wrote: > Raghavendra, > > Thanks very much for your reply. > > I fixed our data corruption problem by disabling the volume > performance.write-behind flag as you suggested, and simultaneously > disabling caching in my client side mount command. > Good to know it worked. Can you give us the
2018 Mar 05
0
SQLite3 on 3 node cluster FS?
On Mon, Mar 5, 2018 at 8:21 PM, Paul Anderson <pha at umich.edu> wrote: > Hi, > > tl;dr summary of below: flock() works, but what does it take to make > sync()/fsync() work in a 3 node GFS cluster? > > I am under the impression that POSIX flock, POSIX > fcntl(F_SETLK/F_GETLK,...), and POSIX read/write/sync/fsync are all > supported in cluster operations, such that in
2018 Mar 06
2
SQLite3 on 3 node cluster FS?
Raghavendra, I've committed my test case to https://github.com/powool/gluster.git - it's grungy, and a work in progress, but I am happy to take change suggestions, especially if it will save folks significant time. For the rest, I'll reply inline below... On Mon, Mar 5, 2018 at 10:39 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > +Csaba. > > On Tue, Mar 6,
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
On Tue, Mar 6, 2018 at 10:22 PM, Paul Anderson <pha at umich.edu> wrote: > Raghavendra, > > I've committed my test case to https://github.com/powool/gluster.git - > it's grungy, and a work in progress, but I am happy to take change > suggestions, especially if it will save folks significant time. > > For the rest, I'll reply inline below... > > On Mon,
2012 Jun 14
4
RAID options for Gluster
I think this discussion has probably come up here already, but I couldn't find much in the archives. Would you be able to comment on or correct whatever might look wrong? What options do people think are most adequate to use with Gluster in terms of the RAID underneath, for a good balance between cost, usable space, and performance? I have thought about two main options with their pros and cons. No RAID (individual
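The two layouts under discussion reduce to something like the following; device names, brick paths, and the replica count are placeholders, not recommendations from the thread:

    # option A: no RAID, JBOD bricks, redundancy from Gluster replication
    gluster volume create gv0 replica 2 node1:/bricks/b1 node2:/bricks/b1

    # option B: RAID-6 under each brick, redundancy below Gluster
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]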
2018 Dec 29
5
guest access
Hello list, I installed a Samba 4.6.5 server for Active Directory authentication and shares. I have a number of Samba shares, some of which I would like to open to guest access from Windows machines. If a Windows machine tries to access a "guest" share, it asks for a username and password. Please help me connect to the share without a username and password. Thanks. Here is my smb.conf: # Global
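The full config is cut off above; the usual smb.conf settings for password-less guest access look like this (the share name and path are placeholders, and "map to guest = Bad User" makes unknown logins fall through to the guest account):

    [global]
        map to guest = Bad User

    [pub]
        path = /srv/pub
        guest ok = yes
        read only = no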
2018 Mar 06
1
SQLite3 on 3 node cluster FS?
On Tue, Mar 6, 2018 at 10:58 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > > > On Tue, Mar 6, 2018 at 10:22 PM, Paul Anderson <pha at umich.edu> wrote: > >> Raghavendra, >> >> I've committed my test case to https://github.com/powool/gluster.git - >> it's grungy, and a work in progress, but I am happy to take change >>
2004 Dec 01
2
cp --o_direct
Another question. When my database is running, I do [oracle@LNCSTRTLDB03 LPTE3]$ cp --o_direct xdb01.dbf /tmp cp: cannot open `xdb01.dbf' for reading: Permission denied [oracle@LNCSTRTLDB03 LPTE3]$ When the database is shut down, it works. Is this normal for ocfs? Because with any other filesystem I can just copy a file at any time. (It's only a test, I know I can't copy datafiles and have
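As an aside, a direct-I/O copy can also be done with stock dd, avoiding the patched cp; whether it succeeds while the database holds the file open still depends on ocfs locking (a sketch, not verified against ocfs):

    dd if=xdb01.dbf of=/tmp/xdb01.dbf bs=1M iflag=direct oflag=direct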
2007 Jan 16
4
ocfs Vs ocfs2
Hi everybody, this is my first post. I have two test servers (both of them idle): db1: RHEL4, OCFS2; db2: RHEL3, OCFS. I tested the I/O on both of them; the results are below.

Test (1 GB)   db1 (time)  db2 (time)  Command
dd write      0m0.796s    0m18.420s   time dd if=/dev/zero of=./sill.t bs=1M count=1000
dd read       0m0.241s    8m16.406s   time dd of=/dev/zero if=./sill.t bs=1M count=1000
cp
2018 Apr 10
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
You definitely need mount options in /etc/fstab; use the ones from here: http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html. I went on to use local mounts to achieve performance as well. Also, the 3.12 or 3.10 branches would be preferable for production. On Fri, Apr 6, 2018 at 4:12 AM, Artem Russakovskii <archon810 at gmail.com> wrote: > Hi again, > > I'd like to
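The specific options live in the linked post; the general shape of a glusterfs entry in /etc/fstab is as follows (server, volume, and backup servers are placeholders):

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3  0 0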
2012 Jun 01
3
Striped replicated volumes in Gluster 3.3.0
Hi all, I'm very happy to see the release of 3.3.0. One of the features I was waiting for is striped replicated volumes. We plan to store KVM images (from an OpenStack installation) on it. I read through the docs and found the following phrase: "In this release, configuration of this volume type is supported only for Map Reduce workloads." What does that mean exactly? Hopefully not,
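For reference, creating such a volume in 3.3.0 looks like this (server and brick paths are placeholders; the stripe count times the replica count must equal the number of bricks):

    gluster volume create gv0 stripe 2 replica 2 \
        server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4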
2010 Oct 09
2
[PATCH 1/2] Ocfs2: Add a mount option "coherency=*" for O_DIRECT writes.
Currently, the default behavior of O_DIRECT writes is to allow concurrent writing among nodes with no cluster coherency guaranteed (no EX locks are taken), which hurts buffered reads on other nodes by serving stale data from cache. The new mount option introduces the chance to choose between two different behaviors for O_DIRECT writes: * coherency=full, as the default value, will disallow concurrent
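With the patch applied, the behavior is selected at mount time; the companion value in mainline ocfs2 is coherency=buffered (device and mount point below are placeholders):

    # default: full coherency, O_DIRECT writes take EX cluster locks
    mount -t ocfs2 -o coherency=full /dev/sdb1 /mnt/ocfs2

    # old behavior: concurrent O_DIRECT writes, no coherency guarantee
    mount -t ocfs2 -o coherency=buffered /dev/sdb1 /mnt/ocfs2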
2018 Apr 10
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Wish I knew, or was able to get, a detailed description of those options myself. Here is direct-io-mode: https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode. Same as you, I ran tests on a large volume of files and found that the main delays are in attribute calls, ending up with those mount options to gain performance. I discovered those options basically by googling this user list with
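The attribute-call overhead mentioned here is typically tuned through the FUSE caching timeouts alongside direct-io-mode; an illustrative mount line whose names and values are assumptions, not the poster's:

    mount -t glusterfs \
        -o direct-io-mode=disable,attribute-timeout=600,entry-timeout=600 \
        server1:/myvol /mnt/myvol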