Displaying 20 results from an estimated 4000 matches similar to: "High load when 'zfs send' to the file"
2009 Mar 09
3
cannot mount '/export' directory is not empty
Hello,
I am desperate. Today I realized that my OpenSolaris build 108 system doesn't want to boot.
I have no idea what I screwed up. I upgraded to build 108 last week without
any problems.
Here is where I'm stuck:
Reading ZFS config: done.
Mounting ZFS filesystems: (1/17) cannot mount '/export': directory is
not empty (17/17)
$ svcs -x
svc:/system/filesystem/local:default (local file
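A common cause is that something wrote files into /export while the dataset was unmounted, so the mountpoint directory is no longer empty. A minimal recovery sketch, assuming the dataset is rpool/export and the stray contents are disposable (both are assumptions, check first):
# ls -la /export                          (with the dataset unmounted, this shows the stray files)
# mkdir /var/tmp/export.stray
# mv /export/* /var/tmp/export.stray/     (or delete them if you are sure they are junk)
# zfs mount -a
Alternatively, zfs mount -O rpool/export overlay-mounts on top of the non-empty directory, hiding whatever is underneath.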
2009 Apr 22
1
prstat -Z and load average values in different zones give same numeric results
Folks,
Perplexing question about load average display with prstat -Z
Solaris 10 OS U4 (08/07)
We have 4 zones with very different processes and workloads.
The prstat -Z command issued within each of the zones correctly displays
the number of processes and lwps, but the load average values look
exactly the same on all non-global zones; that is, all 3 values
(the 1, 5 and 15 minute load averages) are the same
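That is expected when the zones all run on the default processor set: as far as I know, the load averages reported inside a non-global zone come from the pset the zone runs on, so zones sharing one pset see identical values. A hedged sketch of giving a zone its own pset via resource pools (pool, pset and zone names below are made up):
# pooladm -e
# poolcfg -c 'create pset pset_zone1 (uint pset.min = 2; uint pset.max = 2)'
# poolcfg -c 'create pool pool_zone1'
# poolcfg -c 'associate pool pool_zone1 (pset pset_zone1)'
# pooladm -c
# zonecfg -z zone1 set pool=pool_zone1
# zoneadm -z zone1 reboot
After that, prstat -Z inside zone1 should report load averages for its own pset only.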
2008 Apr 18
2
plockstat: failed to add to aggregate: Abort due to drop
When checking lock statistics for a Java process, plockstat failed; please see below:
# prstat -mLp 21162
PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
21162 7677 0.9 0.1 0.0 0.0 0.0 99 0.0 0.3 83 89 215 0 java/81
21162 7677 0.3 0.1 0.0 0.0 0.0 0.0 99 0.2 106 33 305 0 java/35
21162 7677 0.1 0.0 0.0 0.0 0.0 100 0.0 0.1 79 6 85 0 java/59
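"Abort due to drop" generally means the DTrace aggregation backing plockstat overflowed before it could be read out. A sketch of one workaround, assuming plockstat's -x option for passing DTrace tunables (the buffer sizes below are guesses, tune to taste):
# plockstat -x aggsize=16m -x aggrate=10ms -A -e 30 -p 21162
Raising aggsize gives the aggregation more room and a faster aggrate drains it more often; -n can also limit how many entries are reported.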
2008 Jul 25
11
send/receive
I created a snapshot of my whole zpool (ZFS version 3):
zfs snapshot -r tank@`date +%F_%T`
then tried to send it to the remote host:
zfs send tank@2008-07-25_09:31:03 | ssh user@10.0.1.14 -i identitykey 'zfs
receive tank/tankbackup'
but got the error "zfs: command not found", since the user is not the superuser, even
though it is in the root group.
I found
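Two things usually show up here: the non-interactive ssh shell often does not have /usr/sbin in PATH, and a non-root user needs delegated ZFS permissions to receive. A sketch, reusing the host and user from above (the permission list is an assumption, adjust as needed):
local$ zfs send tank@2008-07-25_09:31:03 | ssh -i identitykey user@10.0.1.14 '/usr/sbin/zfs receive tank/tankbackup'
remote# zfs allow user create,mount,receive tank
On older builds the mount step of the receive may still require root; if your build supports zfs receive -u (do not mount), receiving with it and mounting later as root is an option.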
2013 Mar 06
0
where is the free space?
hi All,
Ubuntu 12.04 and glusterfs 3.3.1.
root@tipper:/data# df -h /data
Filesystem Size Used Avail Use% Mounted on
tipper:/data 2.0T 407G 1.6T 20% /data
root@tipper:/data# du -sh .
10G .
root@tipper:/data# du -sh /data
13G /data
It's quite confusing.
I also tried to free up the space by stopping the machine (actually an LXC VM), with no luck.
After umounting the space
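When df and du disagree this much on a Gluster volume, the first thing to rule out is the bricks themselves: deleted-but-still-open files, or data sitting outside the brick directories on the same backing filesystem, inflate df without showing up in du on the mount. A rough checklist, assuming the volume is named data and a brick lives under /bricks/data1 (both made-up names):
root@tipper:~# gluster volume status data detail      (per-brick disk used/free as Gluster sees it)
root@server1:~# df -h /bricks/data1                   (the brick's backing filesystem)
root@server1:~# du -sh /bricks/data1
root@server1:~# lsof +L1 | grep bricks                (files deleted but still held open by a process)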
2008 Jul 15
1
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone,
I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read, I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
2008 Jul 15
2
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone,
I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read, I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
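sharenfs=on exports the filesystem read/write, but root on the client is mapped to nobody and the directory itself is writable only by root, which looks exactly like this symptom. A sketch of the two usual fixes, with a made-up client host and user name:
# zfs set sharenfs='rw,root=clienthost' tank     (only if you really want client root to stay root)
# chown someuser /tank                           (or chmod g+w / o+w as appropriate)
clienthost$ touch /net/server/tank/testfile      (retest as the ordinary user)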
2007 Feb 24
1
zfs received vol not appearing on iscsi target list
Just installed Nexenta and I've been playing around with zfs.
root@hzsilo:/tank# uname -a
SunOS hzsilo 5.11 NexentaOS_20070105 i86pc i386 i86pc Solaris
root@hzsilo:/tank# zfs list
NAME USED AVAIL REFER MOUNTPOINT
home 89.5K 219G 32K /export/home
tank 330K 1.78T 51.9K /tank
tank/iscsi_luns 147K
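On builds of that era the iSCSI export is driven by the zvol's shareiscsi property, and it is worth checking whether it survived the zfs receive; if not, setting it and re-listing is usually enough. A sketch, assuming the received volume ended up as tank/iscsi_luns/vol1 (name made up):
root@hzsilo:/tank# zfs get shareiscsi tank/iscsi_luns/vol1
root@hzsilo:/tank# zfs set shareiscsi=on tank/iscsi_luns/vol1
root@hzsilo:/tank# iscsitadm list target -v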
2007 Apr 24
2
zfs submounts and permissions with autofs
Hi,
Is it expected that if I have filesystems tank/foo and tank/foo/bar
(mounted under /tank), then in order to be able to browse via
/net down into tank/foo/bar I need to have group/other permissions
on /tank/foo open?
# zfs create tank/foo
# zfs create tank/foo/bar
# chown gavinm /tank/foo /tank/foo/bar
# zfs set sharenfs=rw tank/foo
# ls -laR /tank/foo
/tank/foo:
total 9
drwxr-xr-x 3 gavinm
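If the answer turns out to be "yes, that is how it works" (the automounter walks /tank/foo as the requesting user, not as root), the simplest workaround is to open execute permission on the intermediate directory; a minimal sketch, with 'server' standing in for the NFS host:
# chmod o+rx /tank/foo
$ ls /net/server/tank/foo/bar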
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The message in /var/adm/messages for the disks was 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
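When the "busy" disks turn out to be healthy, the usual sequence is to bring them back online and, once the resilver finishes, detach the spares so they return to the spare list. A sketch with made-up pool and device names:
# zpool online tank c3t5d0
# zpool status tank              (wait until the resilver completes)
# zpool detach tank c7t2d0       (the hot spare that took over)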
2010 Dec 17
6
copy complete zpool via zfs send/recv
Hi,
I want to move all the ZFS fs from one pool to another, but I don't want
to "gain" an extra level in the folder structure on the target pool.
On the source zpool I used zfs snapshot -r tank@moveTank on the root fs
and I got a new snapshot in all sub fs, as expected.
Now, I want to use zfs send -R tank@moveTank | zfs recv targetTank/...
which would place all zfs fs
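The -d option to zfs recv strips the leading pool name from the sent paths, which avoids gaining the extra level; a sketch (the -F is an assumption, add it only if the target datasets may be rolled back or overwritten):
# zfs send -R tank@moveTank | zfs recv -d -F targetTank
With that, a child such as tank/data@moveTank lands as targetTank/data rather than targetTank/tank/data.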
2012 Dec 03
0
Nested ANCOVA question
Hello R experts,
I am having a difficult time figuring out how to perform and interpret an ANCOVA of my nested experimental data and would love any suggestions that you might have.
Here is the deal:
1) I have twelve tanks of fish (1-12), each with a bunch of fish in them
2) I have three treatments (1-3); 4 tanks per treatment. (each tank only has one treatment applied to it)
3) I sampled
2024 Jan 03
1
Files exist, but sometimes are not seen by the clients: "No such file or directory"
Hello all,
We're having problems with files that suddenly stop being seen on the fuse clients.
I couldn't yet find a way to reproduce this. It happens every once in a while.
Sometimes you try to ls some file and it can't be found.
When you run ls on the parent directory, it is shown on the output, and, after that, you can access it.
I'm mentioning ls, but the problem also
2024 Jul 05
0
Problems creating or renaming directories and files in gluster volume, via SAMBA
Hello all,
We have a distributed volume running in 7 hosts and 28 bricks.
We've been experiencing some strange behaviors over time; some of them are solved when the gluster services are restarted, but other problems persist.
In the case below, which happens frequently, some people access the gluster filesystem via Samba.
They have a mapped drive in their windows machine.
2010 Oct 11
0
Ubuntu iSCSI install to COMSTAR zfs volume Howto
I apologize if this has been covered before. I have not seen a blow-by-blow installation guide for Ubuntu onto an iSCSI target.
The install guides I have seen assume that you can make a target visible to all, which is a problem if you want multiple iSCSI installations on the same COMSTAR target. During install Ubuntu generates three random initiators and you have to deal with them to get things
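The way around the "visible to all" problem with COMSTAR is to put each install's initiator into its own host group and scope the LUN view to it; a rough sketch, with a made-up host-group name and the initiator IQN that Ubuntu prints during the install (placeholders in angle brackets):
# stmfadm create-hg ubuntu-install
# stmfadm add-hg-member -g ubuntu-install iqn.1993-08.org.debian:01:<initiator-from-installer>
# stmfadm list-lu                                    (note the LU name of the zfs volume)
# stmfadm add-view -h ubuntu-install <lu-name>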
2007 Nov 16
0
ZFS mirror and sun STK 2540 FC array
Hi all,
we have just bought a Sun X2200 M2 (4 GB RAM / 2 Opteron 2214 / 2 250 GB
SATA2 disks, Solaris 10 Update 4)
and a Sun STK 2540 FC array (8 SAS disks of 146 GB, 1 RAID controller).
The server is attached to the array with a single 4 Gb Fibre Channel link.
I want to make a mirror using ZFS with this array.
I have created 2 volumes on the array
in RAID0 (stripe of 128 KB) presented to the host
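Assuming the two array volumes show up on the host as two LUNs, the mirror itself is then just (device names below are placeholders, use the ones format reports):
# zpool create tank mirror c4t0d0 c4t1d0
# zpool status tank
Whether mirroring two RAID0 volumes from the same controller is the layout you want is a separate question; ZFS only sees the two LUNs.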
2006 Sep 06
2
creating zvols in a non-global zone (or 'Doctor, it hurts when I do this')
A colleague just asked if zfs delegation worked with zvols too.
Thought I'd give it a go and got myself in a mess
(tank/linkfixer is the delegated dataset):
root@non-global / # zfs create -V 500M tank/linkfixer/foo
cannot create device links for 'tank/linkfixer/foo': permission denied
cannot create 'tank/linkfixer/foo': permission denied
Ok, so
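The error suggests that creating a zvol needs to create device nodes, which a non-global zone is not allowed to do, so delegation alone does not cover it. The usual workaround (a sketch, zone name made up) is to create the zvol in the global zone and pass the device into the zone:
global# zfs create -V 500M tank/linkfixer/foo
global# zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/zvol/rdsk/tank/linkfixer/foo
zonecfg:myzone:device> end
zonecfg:myzone> commit
zonecfg:myzone> exit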
2010 Jan 09
0
Activity after LU with ZFS/Zone working
Hi all,
recently I upgraded a T5120 to S10U8 using LU. The system had zones
configured, and at the time of the upgrade the zones were still running
and working fine. The LU procedure ended successfully. The zones on the
system were installed on a ZFS filesystem. Here is the result at the end
of LU (ABE-from: s10Aug2007, ABE-to: s10Set2009):
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
2008 Oct 28
4
blktap, vmdk, vdi, and disk management support
Just a quick fyi...
We've recently added support for blktap along with
support for managing virtual disks (disk file images).
There are some differences from a Linux dom0.
This is available in b101 @
http://www.opensolaris.org/os/downloads/sol_ex_dvd_1/
This allows you to create and manage vmdk and vdi
(VirtualBox) disk files. By default, virt-install
will now use a vmdk vdisk when
2008 Mar 12
5
[Bug 752] New: zfs set keysource no longer works on existing pools
http://defect.opensolaris.org/bz/show_bug.cgi?id=752
Summary: zfs set keysource no longer works on existing pools
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: blocker
Priority: P1
Component: other
AssignedTo: