Similar to: Wiki entry under wrong category

Displaying 20 results from an estimated 20,000 matches similar to: "Wiki entry under wrong category"

2018 Apr 10
0
glusterfs disperse volume input output error
Hi, could you help me? I have a problem with a file on a disperse volume. When I try to read it from the mount point I receive an error:

# md5sum /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2
md5sum: /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2: Input/output error

The configuration and status of the volume are:

# gluster volume info vol1
Volume Name: vol1
Type: Disperse
Volume ID:
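A reasonable first diagnostic for an input/output error on a disperse (EC) volume, assuming the volume name vol1 from the output above, is to check whether bricks are down and whether heals are pending; reads fail with EIO when more fragments are unavailable than the redundancy allows:

# gluster volume status vol1
# gluster volume heal vol1 info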
2018 May 07
0
Compiling 3.13.2 under FreeBSD 11.1?
On 05/07/2018 04:29 AM, Roman Serbski wrote:
> Hello,
>
> Has anyone managed to successfully compile the latest 3.13.2 under
> FreeBSD 11.1? ./autogen.sh and ./configure seem to work but make
> fails:

See https://review.gluster.org/19974

3.13 reached EOL with 4.0. There will be a fix posted for 4.0 soon. In the meantime I believe your specific problem with 3.13.2 should be
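For reference, a change hosted on Gerrit, such as the one linked above, can typically be pulled into a local tree along these lines; the refs/changes path and patchset number shown here are assumptions (the change page displays the exact ref to fetch):

$ git fetch https://review.gluster.org/glusterfs refs/changes/74/19974/1
$ git cherry-pick FETCH_HEAD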
2010 Apr 09
1
[Gluster-devel] Gluster health/status
Gluster devs,

I found the message below in the archives. glfs-health.sh is not included in the v3.0.3 sources - is there any plan to add this to the "extras" directory? What's its status?

Ian

== snip ==
Raghavendra G, Mon, 22 Feb 2010 20:20:33 -0800

Hi all, here is some work related to health monitoring. glfs-health.sh is a shell script to check the health of glusterfs.
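The script itself is not quoted in this excerpt; a minimal sketch of the same idea, probing a FUSE mount with a small write and read-back (the mount path is a placeholder, and this is not the actual glfs-health.sh), might look like:

#!/bin/sh
# Sketch of a glusterfs health probe; not the actual glfs-health.sh.
MNT=/mnt/glusterfs
TESTFILE="$MNT/.glfs-health.$$"
# Fail fast if the path is not a live mount point.
mountpoint -q "$MNT" || { echo "CRITICAL: $MNT is not mounted"; exit 2; }
# A small write plus read-back exercises the client stack and the bricks.
if echo ok > "$TESTFILE" && cat "$TESTFILE" > /dev/null 2>&1; then
    rm -f "$TESTFILE"
    echo "OK: glusterfs at $MNT is responding"
else
    echo "CRITICAL: I/O on $MNT failed"
    exit 2
fi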
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with libvirt libgfapi access
Hi, after updating the glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo), the KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using libgfapi are no longer able to start. The libvirt log file shows:

[2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up
[2016-11-02 14:26:41.864075] I [MSGID:
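A quick way to test libgfapi access outside of libvirt is to point qemu-img at the gluster URI directly (hostname, volume, and image path are placeholders); if this fails too, the problem is in the gfapi/server combination rather than in libvirt:

$ qemu-img info gluster://gluster-server/volname/vm-image.qcow2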
2017 Nov 09
0
[Gluster-devel] Poor performance of block-store with RDMA
Hi Kalever! First of all, I really appreciate your test results for block-store (https://github.com/pkalever/iozone_results_gluster/tree/master/block-store) :-) My teammate and I tested block-store (glfs backstore with tcmu-runner), but we have run into a performance problem. We tested some cases with one server that has an RDMA volume and one client connected to the same RDMA network. Two
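When comparing numbers like these, a synthetic baseline against both the gluster mount and the exported block device helps isolate where the slowdown is; a hedged fio invocation (the device path and job parameters are assumptions, not the poster's test) could be:

$ fio --name=seqwrite --filename=/dev/sdX --ioengine=libaio --direct=1 \
      --rw=write --bs=4k --iodepth=32 --runtime=60 --time_based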
2008 Oct 24
1
performance lower then expected
I've set up an eight-node server stripe using gluster 1.4.0pre5, following the stripe example from the wiki. Each of these eight nodes has a 100Mbit ethernet card and a single hard disk. I've connected them all together using a gigabit switch, and I have a gigabit workstation connected, with gluster mounted and running fine. However, when I try to do a dd test to the disk "dd if=/dev/zero
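The command is cut off in this excerpt; a typical sequential-write test of this shape (the target path is a placeholder; conv=fdatasync forces dd to flush before reporting, so the rate is not just page-cache speed) is:

$ dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1M count=1024 conv=fdatasync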
2009 Mar 11
1
Enterprise Application with O_DIRECT access
Hello everyone, I am learning about and evaluating glusterfs for film/video editing facilities. Some major real-time film/video editing applications use O_DIRECT file access for video/audio data files. The GLFS client, via the FUSE mechanism, disallows opening files with the O_DIRECT flag. I made a little sample program that reads a file with the O_DIRECT flag and tried to open files on GLFS volumes. It
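The same behaviour can be reproduced without writing C, since GNU dd can request O_DIRECT on the read side (the file path is a placeholder); on a FUSE mount that rejects the flag, this fails immediately:

$ dd if=/mnt/glfs/sample.dat of=/dev/null bs=1M iflag=direct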
2018 May 07
2
Compiling 3.13.2 under FreeBSD 11.1?
Hello, has anyone managed to successfully compile the latest 3.13.2 under FreeBSD 11.1? ./autogen.sh and ./configure seem to work but make fails:

Making all in src
  CC       glfs.lo
cc: warning: argument unused during compilation: '-rdynamic' [-Wunused-command-line-argument]
cc: warning: argument unused during compilation: '-rdynamic' [-Wunused-command-line-argument]
fatal
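For anyone retrying this, a from-source build on FreeBSD generally needs the GNU autotools installed first; the package list below is an assumption about this particular tree's requirements, and gmake is used because the generated Makefiles are GNU-make style:

# pkg install autoconf automake libtool pkgconf gmake
$ ./autogen.sh && ./configure && gmake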
2017 Sep 13
0
glusterfs expose iSCSI
On Wed, Sep 13, 2017 at 1:03 PM, GiangCoi Mr <ltrgiang86 at gmail.com> wrote:
> Hi all

Hi GiangCoi, the good news is that we now have gluster-block [1], which makes it very easy to configure block storage using gluster. gluster-block takes care of all the targetcli and tcmu-runner configuration for you; all you need as a prerequisite is a gluster volume. And the sad part is
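A hedged example of the workflow (volume name, block name, host IP, and size are placeholders; check gluster-block help for the exact syntax of your version):

# gluster-block create blockvol/sample-block ha 1 192.168.1.11 10GiB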
2013 Sep 10
4
compiling samba vfs module
Hi all, the system is Ubuntu 12.04. I downloaded and extracted the source packages of samba and glusterfs, and I built glusterfs, so I get the right necessary structure (the glusterfs version is 3.4, from the ppa):

# ls /data/gluster/glusterfs-3.4.0final/debian/tmp/usr/include/glusterfs/api/glfs.h
/data/gluster/glusterfs-3.4.0final/debian/tmp/usr/include/glusterfs/api/glfs.h

Unfortunately I'm
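When the gfapi headers and library are installed system-wide rather than in a build tree, they are normally advertised through pkg-config; a quick hedged check (glusterfs-api is the .pc name recent samba builds look for when enabling vfs_glusterfs):

$ pkg-config --exists glusterfs-api && echo "gfapi found"
$ pkg-config --cflags --libs glusterfs-api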
2012 Nov 06
2
I am very confused about stripe: in what way does it hold space?
I have 4 Dell 2970 servers; three of them have 146G x 6 hard disks, and one has 72G x 6. Each server's mount info is:

/dev/sda4 on /exp1 type xfs (rw)
/dev/sdb1 on /exp2 type xfs (rw)
/dev/sdc1 on /exp3 type xfs (rw)
/dev/sdd1 on /exp4 type xfs (rw)
/dev/sde1 on /exp5 type xfs (rw)
/dev/sdf1 on /exp6 type xfs (rw)

I created a gluster volume with 4 stripes: gluster volume create test-volume3 stripe 4
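For context, a 4-way stripe takes one brick per stripe member, along these lines (brick paths are placeholders, not the poster's actual command, which is cut off above); since every file is split across all four bricks, usable capacity is bounded by roughly four times the smallest brick:

# gluster volume create test-volume3 stripe 4 transport tcp \
      server1:/exp1 server2:/exp1 server3:/exp1 server4:/exp1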
2009 Jan 05
4
wiki page edit request HowTos/Disk_Optimization
Hi, I have gone through the painful operation of registering just to edit one page, and then I found out I also have to get on this mailing list to get permission... Why can't the wiki page display that I have no permission to edit? And why can't edits be sent to a moderation queue for approval? Anyway, this is my request to edit the http://wiki.centos.org/HowTos/Disk_Optimization page, as it
2008 Oct 15
1
Glusterfs performance with large directories
We at Wiseguys are looking into GlusterFS to run our Internet Archive. The archive stores webpages collected by our spiders. The test setup consists of three data machines, each exporting a volume of about 3.7TB, and one nameserver machine. The file layout is such that each host has its own directory; for example, the GlusterFS website would be located in:
2017 Oct 02
1
nfs-ganesha locking problems
Hi Soumya, what I can say so far: it is working on a standalone system but not on the clustered system. From reading the ganesha wiki I have the impression that it is possible to change the log level without restarting ganesha. I was playing with dbus-send but so far was unsuccessful; if you can help me with that, this would be great. Here are some details about the tested machines. the nfs client
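For the log-level part, the ganesha wiki describes a D-Bus properties interface; the invocation is reportedly along these lines, but the object path, property name, and COMPONENT/level strings below are assumptions to verify against your ganesha version:

$ dbus-send --system --print-reply --dest=org.ganesha.nfsd \
    /org/ganesha/nfsd/log/component org.freedesktop.DBus.Properties.Set \
    string:org.ganesha.nfsd.log.component string:COMPONENT_ALL variant:string:FULL_DEBUG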
2017 Apr 20
0
qemu-kvm-ev-2.6.0-28.el7_3.9.1 now available for testing
Hi, just pushed to testing a new build of qemu-kvm-ev; here's the ChangeLog:

* Thu Apr 20 2017 Sandro Bonazzola <sbonazzo at redhat.com> - ev-2.6.0-28.el7_3.9.1
- Removing RH branding from package name
* Fri Mar 24 2017 Miroslav Rezanina <mrezanin at redhat.com> - rhev-2.6.0-28.el7_3.9
- kvm-block-gluster-memory-usage-use-one-glfs-instance-per.patch [bz#1413044]
-
2011 Feb 24
0
No subject
which is a stripe of the gluster storage servers, this is the performance I get (note: use a file size greater than the amount of RAM on the client and server systems, 13GB in this case):

4k block size: 111
pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -y
pir4: Write test (dd): 142.281 MB/s 1138.247 mbps 93.561 seconds
pir4: Read test (dd): 274.321 MB/s 2194.570 mbps 48.527 seconds

testing from 8k -
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi, I've got this strange problem where a striped endpoint crashes when I try to use cp to copy files off of it, but not when I use rsync:

[user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/
cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py': Software caused connection abort
cp: closing
2015 Dec 24
0
[PATCH v2] btrfs: Fix logical to physical block address mapping
The current btrfs support did not handle multiple stripes stored in chunk items, and hence skipped the physical addresses that were needed to do the mapping. Besides, the chunk tree may contain DEV_ITEM keys, which store information on all of the underlying block devices, so we must skip them instead of finishing the lookup. The bug was reproduced with btrfs-progs v4.2.2. Cc: Gene Cumm <gene.cumm
2015 Dec 27
0
[PATCH v3] btrfs: Fix logical to physical block address mapping
The current btrfs support did not handle multiple stripes stored in chunk items, and hence skipped the physical addresses that were needed to do the mapping. Besides, the chunk tree may contain DEV_ITEM keys, which store information on all of the underlying block devices, so we must skip them instead of finishing the lookup. The bug was reproduced with btrfs-progs v4.2.2. Cc: Gene Cumm <gene.cumm
2008 Apr 21
1
rejecting I/O to offline device (PERC woes)
Haven't gotten any tips on a solution to the problem below. It happened again this weekend. My next test steps (order not determined):

1. Downgrade to CentOS 4
2. Swap out the PERC controller with a spare

I have never had a problem with the PERC4/DC controllers on our other machines (RHEL3/4, CentOS 4), although I've no other machine that has 5 300G Fujitsu SCSI drives either. Any