similar to: issue with sharesmb and sharenfs properties enabled on the same pool

Displaying 20 results from an estimated 10000 matches similar to: "issue with sharesmb and sharenfs properties enabled on the same pool"

2009 Feb 28
4
possibly a stupid question, why can I not set sharenfs="sec=krb5, rw"?
x86, snv 108. I have a pool called home with around 5300 file systems. I can do: zfs set sharenfs=on home. However, zfs set sharenfs="sec=krb5,rw" home complains: cannot set property for 'home': 'sharenfs' cannot be set to invalid options. I feel I must be overlooking something elementary. Thanks, Alastair
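A minimal diagnostic sketch (not from the thread; /home is an assumed mountpoint for the home pool): since sharenfs option strings end up at the NFS share machinery, trying the same options with share(1M) directly shows whether the option string itself is valid.

    share -F nfs -o sec=krb5,rw /home
    # if share accepts these options but 'zfs set sharenfs=...' still
    # objects, the property parser is the stricter of the two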
2008 Jul 12
2
sharenfs=off, but still being shared?
I noticed an oddity on my 2008.05 box today. I created a new ZFS file system that I was planning to NFS-share out to an old FreeBSD box. After I set sharenfs=on for it, I noticed there were a bunch of others shared too:

-bash-3.2# dfshares -F nfs
RESOURCE              SERVER  ACCESS  TRANSPORT
reaver:/store/movies  reaver  -       -
reaver:/export
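A quick way to see where those extra shares come from (a sketch; store is the pool name from the post): list the sharenfs property recursively and check the SOURCE column, since sharenfs=on set near the pool root is inherited by every descendant dataset.

    zfs get -r -s local,inherited sharenfs store
    # datasets whose SOURCE reads 'inherited from store' picked the
    # setting up from the pool root, not from an explicit zfs set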
2010 Jan 17
3
I can't seem to get the pool to export...
root at nas:~# zpool export -f raid
cannot export 'raid': pool is busy

I've disabled all the services I could think of. I don't see anything accessing it. I also don't see any of the filesystems mounted with mount or "zfs mount". What's the deal? This is not the rpool, so I'm not booted off it or anything like that.
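A hedged checklist for the "pool is busy" case (raid is the pool from the post; /raid is an assumed mountpoint):

    zfs list -r -o name,mounted raid   # confirm nothing is still mounted
    fuser -c /raid                     # PIDs with files open under the mountpoint
    swap -l                            # a swap device on a zvol keeps the pool busy
    dumpadm                            # ...as does a dump device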
2017 Oct 18
0
warning spam in the logs after tiering experiment
Forgot to mention: Gluster version 3.10.6. On 18 October 2017 at 13:26, Alastair Neil <ajneil.tech at gmail.com> wrote: > A short while ago I experimented with tiering on one of my volumes. I > decided it was not working out, so I removed the tier. I now have spam in > the glusterd.log every 7 seconds: > > [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
2017 Jun 30
0
Multi petabyte gluster
Did you test healing by increasing disperse.shd-max-threads? What are your heal times per brick now? On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <ajneil.tech at gmail.com> wrote: > We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the > rebuild times are bottlenecked by matrix operations which scale as the square > of the number of data stripes. There are
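For reference, a sketch of the tuning being asked about (myvol is a placeholder volume name; disperse.shd-max-threads exists in Gluster 3.9+):

    gluster volume set myvol disperse.shd-max-threads 4   # default is 1
    gluster volume heal myvol info                        # watch outstanding heals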
2017 Oct 18
2
warning spam in the logs after tiering experiment
A short while ago I experimented with tiering on one of my volumes. I decided it was not working out, so I removed the tier. I now have spam in the glusterd.log every 7 seconds:

[2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd: Ignore failed connection attempt on /var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or directory)
[2017-10-18
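One hedged sequence to try (myvol is a placeholder; whether this silences the spam is an assumption): confirm the detach was actually committed, then restart glusterd so it stops probing the defunct tierd socket.

    gluster volume tier myvol detach status
    gluster volume tier myvol detach commit
    systemctl restart glusterd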
2017 Sep 08
0
GlusterFS as virtual machine storage
Seems to be so, but if we look back at the described setup and procedure: what is the reason for IOPS to stop/fail? Rebooting a node is somewhat similar to updating gluster, replacing cabling, etc. IMO this should not always end up with the arbiter blaming the other node, and even though I did not investigate this issue deeply, I do not believe the blame is the reason for IOPS to drop. On Sep 7, 2017
2017 Sep 07
2
GlusterFS as virtual machine storage
True, but working your way into that problem with replica 3 is a lot harder than with just replica 2 + arbiter. On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi Neil, docs mention two live nodes of a replica 3 volume blaming each other and > refusing to do IO. > > https://gluster.readthedocs.io/en/latest/Administrator% >
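For context, a sketch of the two layouts being compared (host and brick paths are placeholders):

    # full replica 3: three complete copies of the data
    gluster volume create vmstore replica 3 \
        h1:/bricks/b1 h2:/bricks/b1 h3:/bricks/b1
    # replica 2 + arbiter: the third brick holds only metadata
    gluster volume create vmstore replica 3 arbiter 1 \
        h1:/bricks/b1 h2:/bricks/b1 h3:/bricks/arb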
2017 Jun 30
2
Multi petabyte gluster
We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the rebuild times are bottlenecked by matrix operations, which scale as the square of the number of data stripes. There are some savings because of larger data chunks, but we ended up using 8+3, and heal times are about half those of 16+3. -Alastair On 30 June 2017 at 02:22, Serkan Çoban <cobanserkan at gmail.com>
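A sketch of the 8+3 layout described (hostnames and brick paths are placeholders): eleven bricks per subvolume, eight carrying data and three carrying redundancy.

    gluster volume create bigvol disperse-data 8 redundancy 3 \
        h{1..11}:/bricks/b1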
2017 Sep 07
0
GlusterFS as virtual machine storage
Hi Neil, docs mention two live nodes of a replica 3 volume blaming each other and refusing to do IO. https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote: > *shrug* I don't use arbiter for VM workloads, just straight replica 3.
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers: one running FreeBSD with a zpool v28, and a Nexenta (OpenSolaris b134) running zpool v26. Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type cd /remotepool/us (for /remotepool/users) and autocomplete with the tab key, I get a panic. Check the panic @
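The replication flow being described, as a hedged sketch (snapshot, host, and dataset names are placeholders):

    zfs snapshot -r tank/users@repl
    zfs send -R tank/users@repl | ssh freebsd-box zfs receive -dF remotepool
    # streams generated on a newer zpool/zfs version may not be
    # receivable on an older one, so sending from the v26 pool toward
    # the v28 pool is the safe direction here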
2007 Aug 27
1
Nested ZFS sharenfs exports are empty on automount clients
Hello, I've got nested ZFS filesystems exported via NFS. They are mounted on the clients using automount (from a NIS map). But only the root exported filesystem shows any contents on the clients. Any sub-directories it has are fine, but any sub-filesystems are empty. i.e. NIS map auto.stuff contains "stuff server:/stuff/images" server% zfs get sharenfs stuff/images
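Each nested ZFS filesystem is a separate NFS export, so a client that mounts only the parent sees empty stubs where the children live; the children need mounts of their own (or NFSv4 mirror mounts, where supported). A sketch, with placeholder paths:

    # one automount map entry per sub-filesystem:
    #   stuff          server:/stuff
    #   stuff/images   server:/stuff/images
    mount -F nfs -o vers=4 server:/stuff /mnt/stuff   # NFSv4 can cross into child exports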
2008 Oct 09
0
"zfs set sharenfs" takes a long time to return.
I have an X4500 fileserver (NFS, Samba) running OpenSolaris 2008.05, pkg-upgraded to snv_91, with ~3200 filesystems (and ~27429 datasets, including snapshots). I've been encountering some pretty big slow-downs on this system when running certain zfs commands. The one causing me the most pain at the moment is that setting the "sharenfs" property on a filesystem takes a little under 7
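A simple way to quantify the slowdown (a sketch; the dataset name is a placeholder), since the cost appears to grow with the number of active shares:

    time zfs set sharenfs=on tank/home/user1
    share | wc -l   # count of active shares on the server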
2006 Oct 31
0
6345875 The zfs "sharenfs" option fails after an alternate-root mount, until reboot
Author: lling
Repository: /hg/zfs-crypto/gate
Revision: 470ed1fa8c0b5104bdf6c9dcdb194eb4781ddc19
Log message: 6345875 The zfs "sharenfs" option fails after an alternate-root mount, until reboot
Files:
update: usr/src/cmd/zpool/zpool_dataset.c
update: usr/src/cmd/zpool/zpool_main.c
update: usr/src/cmd/zpool/zpool_util.h
2006 Oct 24
2
zfs set sharenfs=on
I started sharing out zfs filesystems via NFS last week using sharenfs=on. That seemed to work fine until I rebooted. It turned out the NFS server wasn't enabled: I had to enable nfs/server, nfs/lockmgr and nfs/status manually. This is a stock SXCR b49 (ZFS root) install; I don't think I'd changed anything much. Shouldn't a ZFS share permanently enable NFS?
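The manual fix described, as an SMF sketch; -r also enables the dependencies (nfs/lockmgr, nfs/status) so they don't need enabling one by one:

    svcadm enable -r svc:/network/nfs/server:default
    svcs -l nfs/server   # verify it is online and enabled across reboots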
2008 Jul 15
1
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone, I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read; I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
2008 Jul 15
2
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone, I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read; I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
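Two hedged things to check for the read-only symptom (client1 is a placeholder hostname): whether the share grants rw (and root access, since root maps to nobody by default), and whether the on-disk permissions let the mapped user write.

    zfs set sharenfs=rw,root=client1 tank
    ls -ld /tank   # the mapped (non-root) user still needs write permission here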
2012 Jul 17
0
[LLVMdev] RFC: Profiling Enhancements (GSoC)
Hi Alastair, In addition to your planned tasks, you might want to put in some work to ensure branch probabilities are not lost during optimization. One known issue is that the LLVM optimizer can turn branchy code into switch statements and completely discard the probabilities. Here is a simple example: static void func2(int N, const int *a, const int *b, int *c) __attribute__((always_inline)); void
2017 Apr 12
1
[PATCH v3 10/10] drm/nouveau: Enable stereoscopic 3D output over HDMI
On Tue, Apr 11, 2017 at 1:32 PM, Ilia Mirkin <imirkin at alum.mit.edu> wrote: > On Tue, Apr 11, 2017 at 1:11 PM, Alastair Bridgewater > <alastair.bridgewater at gmail.com> wrote: > > + /* HDMI 3D support */ > > + if ((disp->disp.oclass >= NV50_DISP) > > You probably meant G82_DISP. Although I don't know if there were any > G80's
2017 Apr 11
0
[PATCH v3 10/10] drm/nouveau: Enable stereoscopic 3D output over HDMI
On Tue, Apr 11, 2017 at 1:11 PM, Alastair Bridgewater <alastair.bridgewater at gmail.com> wrote: > Enable stereoscopic output for HDMI and DisplayPort connectors on > NV50+ (G80+) hardware. We do not enable stereoscopy on older > hardware in case there is some older board that still has HDMI > output but for which we have no logic for setting the Vendor > InfoFrame. > >