similar to: Temporary mount Properties, small bug?

Displaying 20 results from an estimated 2000 matches similar to: "Temporary mount Properties, small bug?"

2008 Jun 05, 2 replies: ZFS NFS cannot write
This is the first time I tried NFS with ZFS. I shared the ZFS filesystem over NFS, but I can't write to the files even though I mount it read-write. This is on Solaris 10 update 4. I wonder if there is a bug?
--------------- server (sdw2-2)
# zfs create -o sharenfs=on data/nfstest
# zfs get all data/nfstest
NAME          PROPERTY  VALUE  SOURCE
data/nfstest  type
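A common cause of this symptom is that sharenfs=on exports read-write but maps the client's root user to nobody, so writes made as root fail even on an rw mount. A minimal server-side sketch, assuming the writes are made as root and the client host is named sdw2-1 (hypothetical):

  # zfs set sharenfs='rw,root=sdw2-1' data/nfstest    (re-share read-write, granting root access to that client)
  # share                                             (confirm the options actually in effect for the export)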
2010 Mar 04, 8 replies: Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10? What, other than zfs send/receive, can be done to free the fragmented space? One ZFS filesystem was used for some months to store large disk images (each 50 GByte) which were copied there with rsync. This filesystem reports 6.39 TByte used by zfs list but only 2 TByte used by du. The other ZFS filesystem was used for similar
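In reports like this, the gap between du and zfs list is usually space held by snapshots or clones rather than fragmentation, since du only walks the live file tree. A quick way to check, with the dataset name as a placeholder:

  # zfs list -o space tank/images          (splits USED into USEDSNAP, USEDDS, USEDCHILD, etc.)
  # zfs list -r -t snapshot tank/images    (lists any snapshots still referencing old image blocks)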
2008 Jul 15, 1 reply: Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone, I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read; I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
2008 Jul 15, 2 replies: Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone, I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read; I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
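Besides the export options, the on-disk POSIX permissions and UID mapping still apply to NFS clients, and sharenfs=on grants rw to everyone except root. A hedged sketch (client hostnames hypothetical):

  # zfs set sharenfs='rw=client1:client2,root=client1' tank    (explicit rw and root access per host)
  # ls -ld /tank                                               (verify the directory is writable by the users' UIDs)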
2010 Jan 06, 0 replies: ZFS filesystem size mismatch
A ZFS file system reports 1007GB being used (df -h / zfs list). When doing a 'du -sh' on the filesystem root, I only get approx. 300GB, which is the correct size. The file system became full during Christmas and I increased the quota from 1 to 1.5 to 2TB, then decreased it to 1.5TB. No reservations. The files and processes that filled up the file system have been removed/stopped.
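When the offending files and processes are already gone, the leftover usage is very often pinned by snapshots taken while the filesystem was full. To check (dataset name is a placeholder):

  # zfs list -r -t snapshot -o name,used,referenced pool/fs    (large USED here is space only a snapshot still holds)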
2010 Jun 16, 0 replies: files lost in the zpool - retrieval possible?
Greetings, my OpenSolaris 06/2009 installation on a Thinkpad X60 notebook is a little unstable. From the symptoms during installation it seems there might be something wrong with the ahci driver. There is no problem with the OpenSolaris LiveCD system. Some weeks ago, during a copy of about 2 GB from a USB stick to the ZFS filesystem, the system froze and afterwards refused to boot. Now when investigating
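From the LiveCD, the pool can usually be imported under an alternate root for inspection before attempting any repair; a sketch, with rpool assumed as the pool name:

  # zpool import                     (list pools the LiveCD can see)
  # zpool import -f -R /mnt rpool    (import under an alternate root, ignoring the old hostid)
  # zpool status -v rpool            (shows permanent errors, including affected file names)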
2009 Apr 15, 3 replies: MySQL On ZFS Performance (fsync) Problem?
Hi all, I did some testing of MySQL's insert performance on ZFS and hit a big performance problem; I'm not sure what the cause is. Environment: 2 Intel X5560 (8 cores), 12GB RAM, 7 SLC SSDs (Intel). A Java client runs 8 threads concurrently inserting into one InnoDB table: ~600 qps when sync_binlog=1 & innodb_flush_log_at_trx_commit=1, ~600 qps when sync_binlog=10
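Two tunings that come up repeatedly for InnoDB on ZFS are matching the dataset recordsize to InnoDB's 16K page size and giving the intent log its own fast device; illustrative only, with dataset and device names assumed:

  # zfs set recordsize=16K data/mysql    (align ZFS records with InnoDB pages before loading data)
  # zpool add data log c1t7d0            (dedicate one of the SSDs as a separate ZIL device)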
2009 Jan 28, 2 replies: ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
We have been using ZFS for user home directories for a good while now. When we discovered the problem with full filesystems not allowing deletes over NFS, we became very anxious to fix it; our users fill their quotas on a fairly regular basis, so it's important that they have a simple recourse (e.g., rm). I played around with this on my OpenSolaris box at home, read around
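The workaround usually passed along for this is to truncate a file in place first, so the subsequent unlink does not need to allocate space under the exhausted refquota, e.g.:

  $ cp /dev/null bigfile && rm bigfile    (free the blocks first, then remove the name)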
2010 Oct 01, 1 reply: File permissions getting destroyed with M$ software on ZFS
All, running Samba 3.5.4 on Solaris 10 with a ZFS file system. I have issues with our shared group folders. In these folders, userA in GroupA creates files just fine with the correct inherited permissions, 660. The problem is that when userB in GroupA reads and modifies such a file with M$ Office apps, the permissions get whacked to 060+ and the file becomes read-only for everyone. I did
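This is typically handled on the Samba side by forcing the create modes on the share instead of trusting what the Office client sets during its save-and-rename dance; a sketch for smb.conf, with the share name and path hypothetical:

  [groupshare]
     path = /tank/groups
     create mask = 0660
     force create mode = 0660
     directory mask = 0770
     force directory mode = 0770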
2009 Nov 11, 0 replies: libzfs zfs_create() fails on sun4u daily bits (daily.1110)
I encountered a strange libzfs behavior while testing a zone fix and want to make sure that I found a genuine bug. I'm creating zones whose zonepaths reside in ZFS datasets (i.e., the parent directories of the zones' zonepaths are ZFS datasets). In this scenario, zoneadm(1M) attempts to create ZFS datasets for zonepaths. zoneadm(1M) has done this for a long time (since
2009 Oct 15, 8 replies: sub-optimal ZFS performance
Hello, ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome. I am running OSOL on my laptop, currently b124, and I found that the performance of ZFS is not optimal in all situations. If I check how much space the package cache for pkg(1) uses, it takes a bit longer on this host than on a comparable machine to which I transferred all the data. user at host:/var/pkg$ time
2010 Jun 08, 1 reply: ZFS Index corruption and Connection reset by peer
Hello, I'm currently using Dovecot 1.2.11 on FreeBSD 8.0 with ZFS filesystems. So far, so good; it works quite nicely, but I have a couple of glitches. Each user has his own ZFS partition mounted on /home/<user> (easier to set per-user quotas) and mail is stored in their home. From day one, when people check their mail via IMAP, a lot of index corruption has occurred: dovecot:
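A mitigation often suggested for index corruption with per-user filesystems is to keep Dovecot's indexes on a separate local path, outside the mail location; a sketch for dovecot.conf (index path hypothetical):

  mail_location = maildir:~/Maildir:INDEX=/var/dovecot/index/%u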
2009 Aug 21, 9 replies: Not sure how to do this in zfs
Hello all, I've tried changing all kinds of attributes on the ZFS filesystems, but I can't seem to find the right configuration. I'm trying to move some filesystems under another one; it looks like this: /pool/joe_user should move to /pool/homes/joe_user. I know I can do this with zfs rename, and everything is fine. The problem I'm having is, when I mount
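The rename itself is a single command; the catch is that a mountpoint set locally on the dataset stays put, while an inherited one follows the new parent:

  # zfs rename pool/joe_user pool/homes/joe_user
  # zfs inherit mountpoint pool/homes/joe_user    (clear any local mountpoint so it becomes /pool/homes/joe_user)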
2006 Mar 23, 17 replies: Poor performance on NFS-exported ZFS volumes
I'm seeing some pretty pitiful performance using ZFS on an NFS server, with a ZFS volume exported (only with rw=host.foo.com,root=host.foo.com opts) and mounted on a Linux host running kernel 2.4.31. The Linux kernel I'm working with is limited in that I can only do NFSv2 mounts... regardless of that aspect, I'm sure something's amiss. I mounted the zfs-based
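A first step that separates the network from the filesystem is timing the same write on the server locally and through the mount (paths hypothetical):

  # dd if=/dev/zero of=/tank/vol/testfile bs=64k count=1024    (local baseline on the server)
  # dd if=/dev/zero of=/mnt/nfs/testfile bs=64k count=1024     (same write through the NFSv2 mount)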
2011 Jun 30, 1 reply: cross platform (freebsd) zfs pool replication
Hi, I have two servers running: FreeBSD with a zpool v28, and a Nexenta (OpenSolaris b134) running zpool v26. Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type the command cd /remotepool/us (for /remotepool/users) and autocomplete with the Tab key, I get a panic. Check the panic @
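The replication described boils down to a snapshot piped between the hosts; a minimal sketch, with the SSH target name assumed:

  # zfs snapshot remotepool/users@rep1
  # zfs send remotepool/users@rep1 | ssh freebsd-host zfs receive -F remotepool/users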
2011 Aug 11, 6 replies: unable to mount zfs file system, please help
# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa | grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1
# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
pool1      120K  228G   21K    /pool1
pool1/fs1  21K   228G   21K    /vik
[root at
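With these early ZFS-on-Linux builds the dataset usually just needs to be mounted explicitly (the listing above already shows its mountpoint as /vik); a sketch:

  # zfs mount pool1/fs1    (mount the one dataset)
  # zfs mount -a           (or mount every dataset in every imported pool)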
2010 Aug 13, 15 replies: NFS issue with ZFS
I have Solaris 10 U7 that is exporting a ZFS filesystem. The client is Solaris 9 U7. I can mount the filesystem just fine, but I am unable to write to it. showmount -e shows my mount is set for everyone; the dfstab file has the rw option set. So what gives? Phillip -- This message posted from opensolaris.org
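On Solaris 10 a filesystem can end up shared both via /etc/dfs/dfstab and via the sharenfs property, and the options actually in effect are what sharetab records; a quick server-side check (dataset shown as a placeholder):

  # zfs get sharenfs pool/fs     (see whether ZFS itself is sharing, and with what options)
  # cat /etc/dfs/sharetab        (the options the NFS server is really using for each share)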
2013 Mar 06, 0 replies: where is the free space?
Hi all, Ubuntu 12.04 and glusterfs 3.3.1.
root at tipper:/data# df -h /data
Filesystem    Size  Used  Avail  Use%  Mounted on
tipper:/data  2.0T  407G  1.6T   20%   /data
root at tipper:/data# du -sh .
10G  .
root at tipper:/data# du -sh /data
13G  /data
It's quite confusing. I also tried to free up the space by stopping the machine (actually an LXC VM), with no luck. After umounting, the space
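With GlusterFS the numbers come from the brick filesystems, so the first check is usually on the bricks themselves; a sketch, with the brick path hypothetical:

  root at tipper:~# gluster volume info data          (find the brick paths backing the volume)
  root at tipper:~# du -sh /export/brick1             (measure each brick directly)
  root at tipper:~# lsof +L1 | grep /export/brick1    (deleted-but-open files still holding space)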
2010 Mar 23, 0 replies: zfs send/receive and file system properties
I am trying to coordinate properties and data between 2 file servers. On file server 1 I have:
# zfs get all zfs52/export/os/sles10sp2
NAME                       PROPERTY  VALUE                  SOURCE
zfs52/export/os/sles10sp2  type      filesystem             -
zfs52/export/os/sles10sp2  creation  Mon Mar 22 15:28 2010
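Properties travel with a replication stream when it is generated with -R from a recursive snapshot; a sketch using the dataset from the excerpt, with the second server's name and pool layout assumed:

  # zfs snapshot -r zfs52/export/os@sync1
  # zfs send -R zfs52/export/os@sync1 | ssh server2 zfs receive -F -d zfs52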
2012 Feb 18, 6 replies: Cannot mount encrypted filesystems.
Looking for help regaining access to encrypted ZFS file systems that stopped accepting the encryption key. I have a file server with a setup as follows: Solaris 11 Express 2010.11/snv_151a, 8 x 2-TB disks, each one divided into three equal-size partitions, with three raidz3 pools built from a "slice" across matching partitions:
Disk 1        Disk 8    zpools
+--+          +--+
|p1|   ..     |p1|   <-
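On Solaris 11 Express, an encrypted dataset's wrapping key is loaded explicitly before mounting; a sketch of the usual sequence, with the dataset name hypothetical:

  # zfs get keysource,keystatus tank/secure    (shows where the key should come from and whether it is loaded)
  # zfs key -l tank/secure                     (load the wrapping key, prompting if keysource is a passphrase)
  # zfs mount tank/secure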