similar to: Writing to the data brick path instead of fuse mount?

Displaying 20 results from an estimated 3000 matches similar to: "Writing to the data brick path instead of fuse mount?"

2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi, I have a cluster of 10 servers all running Fedora 24 along with Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with Gluster 3.12. I saw the documentation and did some testing but I would like to run my plan through some (more?) educated minds. The current setup is: Volume Name: vol0 Distributed-Replicate Number of Bricks: 2 x (2 + 1) = 6 Bricks: Brick1:
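[Editor's note: a minimal sketch of the usual rolling-upgrade sequence for a replica volume, run one node at a time. The volume name vol0 is taken from the post; the final op-version number is an assumption and must be checked against the 3.12 release notes.]

    # on the node being upgraded: stop all gluster processes first
    systemctl stop glusterd
    pkill glusterfs; pkill glusterfsd
    # upgrade the OS and packages (Fedora 27 + Gluster 3.12) here, then:
    systemctl start glusterd
    # wait for self-heal to finish before touching the next node
    gluster volume heal vol0 info
    # only after every node is upgraded, bump the cluster op-version once
    gluster volume set all cluster.op-version <version-from-release-notes>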
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> wrote: > Hi, > > I have a cluster of 10 servers all running Fedora 24 along with > Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with > Gluster 3.12. I saw the documentation and did some testing but I > would like to run my plan through some (more?) educated minds. >
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I need to make sure it stays up, or schedule some downtime if it doesn't. Thanks. On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote: > > > On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> > wrote: >> >> Hi, >>
2017 Nov 30
2
Problems joining new gluster 3.10 nodes to existing 3.8
Hi, I have a problem joining four Gluster 3.10 nodes to an existing Gluster 3.8 cluster. My understanding is that this should work and not be too much of a problem. Peer probe is successful but the node is rejected: gluster> peer detach elkpinfglt07 peer detach: success gluster> peer probe elkpinfglt07 peer probe: success. gluster> peer status Number of Peers: 6 Hostname: elkpinfglt02
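[Editor's note: for anyone hitting the same "Peer Rejected" state, the usual recovery is to resync the rejected node's config from the cluster. A sketch, run on the rejected node, assuming the default /var/lib/glusterd location; the probed hostname elkpinfglt02 is taken from the post.]

    systemctl stop glusterd
    # keep glusterd.info (the node's UUID), wipe the rest of the local store
    cd /var/lib/glusterd
    find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    systemctl start glusterd
    # probe any healthy peer so the config is pulled back, then restart
    gluster peer probe elkpinfglt02
    systemctl restart glusterd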
2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
Looks like a bug, as I see tier-enabled = 0 is an additional entry in the info file on shchhv01. As per the code, this field should be written into the glusterd store if the op-version is >= 30706. What I am guessing is that since we didn't have the commit 33f8703a1 "glusterd: regenerate volfiles on op-version bump up" in 3.8.4, while bumping up the op-version the info and volfiles were
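[Editor's note: the mismatch described here can be confirmed by comparing two files on each node; a sketch assuming the default /var/lib/glusterd path and the volume name shchst01 from the thread.]

    # the op-version this glusterd has persisted
    grep operating-version /var/lib/glusterd/glusterd.info
    # the entry that differs between peers in this report
    grep tier-enabled /var/lib/glusterd/vols/shchst01/info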
2017 Dec 01
0
Problems joining new gluster 3.10 nodes to existing 3.8
On Fri, Dec 1, 2017 at 1:55 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> wrote: > Hi, > > I have a problem joining four Gluster 3.10 nodes to an existing > Gluster 3.8 cluster. My understanding is that this should work and not be > too much of a problem. > > Peer probe is successful but the node is rejected: > > gluster> peer detach elkpinfglt07 > peer
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
I was attempting the same on a local sandbox and also have the same problem. Current: 3.8.4 Volume Name: shchst01 Type: Distributed-Replicate Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3 Status: Started Snapshot Count: 0 Number of Bricks: 4 x 3 = 12 Transport-type: tcp Bricks: Brick1: shchhv01-sto:/data/brick3/shchst01 Brick2: shchhv02-sto:/data/brick3/shchst01 Brick3:
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
Yes Atin. I'll take a look. On Wed, Dec 20, 2017 at 11:28 AM, Atin Mukherjee <amukherj at redhat.com> wrote: > Looks like a bug as I see tier-enabled = 0 is an additional entry in the > info file in shchhv01. As per the code, this field should be written into > the glusterd store if the op-version is >= 30706 . What I am guessing is > since we didn't have the commit
2009 Jan 28
1
Mount fails with error status -22?
Hi, I am a little puzzled. I looked through the mailing list archive and some other sources, but this doesn't seem like anything anyone has encountered. I have two systems with an attached HP SAN. I'm using SLES 10.1 with multipath-tools. When trying to mount the OCFS2 device I get this: SERVER:/ # mount.ocfs2 /dev/mapper/mpath0 /mnt/temp/ mount.ocfs2: Invalid argument while mounting
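[Editor's note: -22 is EINVAL, which with OCFS2 most often means the cluster stack is not online or the device carries no (clean) OCFS2 filesystem. A few checks that usually narrow it down, assuming the standard o2cb stack on SLES.]

    dmesg | tail             # the kernel logs the concrete reason behind EINVAL
    /etc/init.d/o2cb status  # is the o2cb cluster stack configured and online?
    mounted.ocfs2 -d         # is there actually an OCFS2 filesystem on the device?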
2013 Apr 30
1
Volume heal daemon 3.4alpha3
gluster> volume heal dyn_coldfusion Self-heal daemon is not running. Check self-heal daemon log file. gluster> Is there a specific log? When I check /var/log/glusterfs/glustershd.log: glustershd.log:[2013-04-30 15:51:40.463259] E [afr-self-heald.c:409:_crawl_proceed] 0-dyn_coldfusion-replicate-0: Stopping crawl for dyn_coldfusion-client-1 , subvol went down
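[Editor's note: glustershd.log, quoted above, is the right file. Beyond that, the daemon's state and a non-destructive respawn are usually checked like this; a sketch, using the volume name from the post.]

    # is the self-heal daemon listed as online for this volume?
    gluster volume status dyn_coldfusion
    # 'start ... force' respawns missing shd/brick processes; data is untouched
    gluster volume start dyn_coldfusion force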
2013 Sep 06
2
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
It's a pity I don't know how to re-create the issue, but there are 1-2 crashed clients out of 120 clients in total every day. Below is the gdb result: (gdb) where #0 0x0000003267432885 in raise () from /lib64/libc.so.6 #1 0x0000003267434065 in abort () from /lib64/libc.so.6 #2 0x000000326746f7a7 in __libc_message () from /lib64/libc.so.6 #3 0x00000032674750c6 in malloc_printerr () from
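[Editor's note: to get more out of a core like this one, a full backtrace of all threads is the usual next step, assuming the matching glusterfs-debuginfo package is installed; the core path is a placeholder.]

    gdb /usr/sbin/glusterfs /path/to/core
    (gdb) thread apply all bt full
    (gdb) info registers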
2013 Aug 08
2
not able to restart the brick for distributed volume
Hi All, I am facing issues restarting a gluster volume. When I start the volume after stopping it, gluster fails to start it. Below is the message that I get on the CLI: /root> gluster volume start _home volume start: _home: failed: Commit failed on localhost. Please check the log file for more details. The logs say it was unable to start the brick: [2013-08-08
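[Editor's note: as the CLI message says, the brick log carries the real failure reason. A sketch of where to look and of the usual non-destructive respawn; the log filename is derived from the brick path, shown here as a placeholder.]

    less /var/log/glusterfs/bricks/<brick-path>.log
    gluster volume start _home force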
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all, I am having problems with painfully slow directory listings on a freshly created replicated volume. The configuration is as follows: 2 nodes with 3 replicated drives each. The total volume capacity is 5.6T. We would like to expand the storage capacity much more, but first we need to figure this problem out. Soon after loading up about 100 MB of small files (about 300 KB each), the
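[Editor's note: a hedged way to localize this kind of slowness is the built-in profiler, available in releases of this era; the volume name is a placeholder, not from the post.]

    gluster volume profile <volname> start
    # reproduce the slow directory listing, then read per-brick FOP latencies
    gluster volume profile <volname> info
    gluster volume profile <volname> stop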
2013 Mar 28
1
Glusterfs gives up with endpoint not connected
Dear all, Right out of the blue glusterfs is not working fine any more; every now and then it stops working, telling me "Endpoint not connected" and writing core files: [root at tuepdc /]# file core.15288 core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV), SVR4-style, from 'glusterfs' My version: [root at tuepdc /]# glusterfs --version glusterfs 3.2.0 built on Apr 22 2011
2011 Sep 06
1
Inconsistent md5sum of replicated file
I was wondering if anyone would be able to shed some light on how a file could end up with inconsistent md5sums on Gluster backend storage. Our configuration is running on Gluster v3.1.5 in a distribute-replicate setup consisting of 8 bricks. Our OS is Red Hat 5.6 x86_64. Backend storage is an ext3 RAID 5. The 8 bricks are in RR DNS and are mounted for reading/writing via NFS automounts.
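[Editor's note: the usual way to see which copy AFR considers stale is to dump the changelog xattrs on each brick's copy of the file; the path is a placeholder, and the command is read-only on the backend.]

    # non-zero trusted.afr.* pending counters point at the copy needing heal
    getfattr -d -m . -e hex /path/to/brick/path/to/file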
2013 Nov 28
1
how to recover an accidentally deleted brick directory?
hi all, I accidentally removed the brick directory of a volume on one node; the replica count for this volume is 2. Now the situation is: there is no corresponding glusterfsd process on this node, and 'gluster volume status' shows that the brick is offline, like this: Brick 192.168.64.11:/opt/gluster_data/eccp_glance N/A Y 2513 Brick 192.168.64.12:/opt/gluster_data/eccp_glance
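[Editor's note: the standard recovery is to recreate the directory, stamp it with the volume-id taken from the surviving replica, then force-start and trigger a full heal. A sketch using the brick path from the post; the volume name is a placeholder.]

    # read the id from the intact replica (on 192.168.64.12)
    getfattr -n trusted.glusterfs.volume-id /opt/gluster_data/eccp_glance
    # recreate the directory on the damaged node and stamp the same id
    mkdir -p /opt/gluster_data/eccp_glance
    setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-above> /opt/gluster_data/eccp_glance
    gluster volume start <volname> force
    gluster volume heal <volname> full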
2012 Oct 23
1
Problems with striped-replicated volumes on 3.3.1
Good afternoon, I am playing around with GlusterFS 3.3.1 in CentOS 6 virtual machines to see if I can get a proof of concept for a bigger project. In my setup, I have 4 GlusterFS servers with two bricks each of 10GB with XFS (per your quick-start guide). So, I have a total of 8 bricks. I have no problem with distributed-replicated volumes. However, when I set up a striped replicated
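[Editor's note: for striped-replicated volumes in 3.3, brick order matters: consecutive bricks form the replica sets. A sketch with hypothetical hostnames and paths, using 8 bricks as in the post.]

    gluster volume create test-vol stripe 2 replica 2 transport tcp \
        srv1:/export/b1 srv2:/export/b1 srv3:/export/b1 srv4:/export/b1 \
        srv1:/export/b2 srv2:/export/b2 srv3:/export/b2 srv4:/export/b2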
2013 Aug 28
1
volume on btrfs brick and copy-on-write
Hello Is it possible to take advantage of copy-on-write implemented in btrfs if all bricks are stored on it? If not, is there any other mechanism (in glusterfs) which supports CoW? regards -- Maciej Gałkiewicz Shelly Cloud Sp. z o. o., Sysadmin http://shellycloud.com/, macias at shellycloud.com KRS: 0000440358 REGON: 101504426
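[Editor's note: as far as I know, glusterfs itself has no CoW file clones, and btrfs reflinks do not pass through the FUSE mount; what does work is snapshotting at the brick level, outside gluster's view. A sketch with hypothetical paths; every brick would need to be snapshotted consistently for this to be useful.]

    # if each brick is its own btrfs subvolume, a CoW snapshot is cheap
    btrfs subvolume snapshot /data/bricks/vol1 /data/snap/vol1-2013-08-28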
2012 Oct 19
1
copying failed once brick-replace is starting
hi, all While I was copying a file (about 1GB) into the volume, I tried to replace one brick of the volume. The copying then halted with the error msg: cp: writing "./d", transport endpoint is not connected. Besides, I use two server bricks without AFR. I'm not sure: is it a bug, or a yet-to-be-added feature of glusterfs? Best Regards. Jules Wang.
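[Editor's note: the replace-brick flow in releases of this era was a start/status/commit sequence; a sketch with placeholder volume and brick names.]

    gluster volume replace-brick <vol> srv1:/old-brick srv2:/new-brick start
    gluster volume replace-brick <vol> srv1:/old-brick srv2:/new-brick status
    gluster volume replace-brick <vol> srv1:/old-brick srv2:/new-brick commit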
2013 Mar 25
1
A problem when mount glusterfs via NFS
Hi: I run glusterfs with four nodes, 2x2 Distributed-Replicate. I mounted it via FUSE and did some tests, and it was OK. However, when I mounted it via NFS, a problem was found: when I copied 200G of files to the glusterfs, the glusterfs process on the server node (mounted by the client) was killed because of OOM, and all terminals of the client hung. Trying the test many times, I got the
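[Editor's note: one detail worth checking in setups like this: gluster's built-in NFS server speaks NFSv3 only, so mounts should pin the version explicitly. The hostname and volume name below are placeholders.]

    mount -t nfs -o vers=3,nolock,tcp server1:/volname /mnt/gluster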