similar to: Problems with send/receive

Displaying 20 results from an estimated 20000 matches similar to: "Problems with send/receive"

2010 Jun 16
0
files lost in the zpool - retrieval possible?
Greetings, my OpenSolaris 06/2009 installation on a Thinkpad x60 notebook is a little unstable. From the symptoms during installation it seems there might be an issue with the ahci driver. No problems with the OpenSolaris LiveCD system. Some weeks ago, during a copy of about 2 GB from a USB stick to the zfs filesystem, the system froze and afterwards refused to boot. Now when investigating
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers running: freebsd with a zpool v28 and a nexenta (opensolaris b134) running zpool v26. Replication (with zfs send/receive) from the nexenta box to the freebsd box works fine, but I have a problem accessing my replicated volume. When I type the command cd /remotepool/us (for /remotepool/users) and autocomplete with the Tab key, I get a panic. Check the panic @
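
For reference, a minimal sketch of snapshot-based replication between two such boxes, assuming a source dataset tank/users and a target pool remotepool (both names are illustrative, not from the post). Streams from an older pool/filesystem version can generally be received by a newer one, but not the reverse:

  # full send of an initial snapshot to the freebsd receiver
  zfs snapshot tank/users@repl1
  zfs send tank/users@repl1 | ssh freebsd-host zfs receive -F remotepool/users
  # later, send only the delta between two snapshots
  zfs snapshot tank/users@repl2
  zfs send -i tank/users@repl1 tank/users@repl2 | ssh freebsd-host zfs receive remotepool/users
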
2009 Apr 15
3
MySQL On ZFS Performance (fsync) Problem?
Hi all, I did some tests of MySQL's insert performance on ZFS and hit a big performance problem; *I'm not sure what the cause is*. Environment: 2 Intel X5560 (8 cores), 12GB RAM, 7 SLC SSDs (Intel). A Java client runs 8 threads concurrently inserting into one InnoDB table: *~600 qps when sync_binlog=1 & innodb_flush_log_at_trx_commit=1 ~600 qps when sync_binlog=10
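
Tunables frequently suggested for fsync-heavy InnoDB workloads on ZFS; a sketch only (the dataset name is illustrative), not a verified fix for the numbers above:

  # match InnoDB's 16k page size to avoid read-modify-write amplification
  zfs set recordsize=16k tank/mysql
  # InnoDB has its own buffer pool, so cache only metadata in the ARC
  zfs set primarycache=metadata tank/mysql
  # bias the intent log toward throughput for synchronous-write-heavy loads
  zfs set logbias=throughput tank/mysql
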
2009 Nov 11
0
libzfs zfs_create() fails on sun4u daily bits (daily.1110)
I encountered a strange libzfs behavior while testing a zone fix and want to make sure that I found a genuine bug. I'm creating zones whose zonepaths reside in ZFS datasets (i.e., the parent directories of the zones' zonepaths are ZFS datasets). In this scenario, zoneadm(1M) attempts to create ZFS datasets for zonepaths. zoneadm(1M) has done this for a long time (since
2011 Aug 11
6
unable to mount zfs file system - please help
# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1
# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool1       120K   228G    21K  /pool1
pool1/fs1    21K   228G    21K  /vik
[root@
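
A quick diagnostic sketch for a dataset that will not mount, using the names from the listing above:

  # where does the dataset think it should mount, and is it currently mounted?
  zfs get mountpoint,mounted pool1/fs1
  # try an explicit mount and watch the error
  zfs mount pool1/fs1
  # a missing or non-empty mountpoint directory is a common culprit
  ls -la /vik
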
2008 Jul 12
2
sharenfs=off, but still being shared?
I noticed an oddity on my 2008.05 box today. I created a new zfs file system that I was planning to NFS-share out to an old FreeBSD box. After I set sharenfs=on for it, I noticed a bunch of others were shared too:
-bash-3.2# dfshares -F nfs
RESOURCE              SERVER  ACCESS  TRANSPORT
reaver:/store/movies  reaver  -       -
reaver:/export
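
sharenfs is an inherited property, so setting it on a parent dataset shares every descendant as well, which is the usual explanation for surprises like the above. A quick way to see where each value comes from (dataset names illustrative):

  # show sharenfs across the tree, with the source of each value
  zfs get -r -o name,value,source sharenfs store
  # to share a single filesystem only, keep the parent off
  zfs set sharenfs=off store
  zfs set sharenfs=on store/movies
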
2009 Jan 28
2
ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
We have been using ZFS for user home directories for a good while now. When we discovered the problem with full filesystems not allowing deletes over NFS, we became very anxious to fix this; our users fill their quotas on a fairly regular basis, so it's important that they have a simple recourse to fix this (e.g., rm). I played around with this on my OpenSolaris box at home, read around
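
The workaround usually suggested when rm itself fails with EDQUOT on a full filesystem is to release the file's blocks without needing new space first; a sketch (the path is illustrative):

  # truncating frees the data blocks, after which the unlink can proceed
  cat /dev/null > /home/user/bigfile
  rm /home/user/bigfile
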
2010 Jan 06
0
ZFS filesystem size mismatch
A ZFS file system reports 1007GB being used (df -h / zfs list). When doing a 'du -sh' on the filesystem root, I only get approx. 300GB, which is the correct size. The file system became full during Christmas, and I increased the quota from 1 to 1.5 to 2TB and then decreased to 1.5TB. No reservations. Files and processes that filled up the file system have been removed/stopped.
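
When zfs list/df and du disagree, the usual suspects are snapshots and deleted-but-still-open files; a quick accounting sketch (the dataset name is illustrative):

  # break usage down into data, snapshots, children, and reservations
  zfs list -o space tank/fs
  # snapshots that may still reference the deleted blocks
  zfs list -t snapshot -r tank/fs
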
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive methods, and they work without a problem. What I am trying to do is recreate the rpool and the underlying zfs filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home). I then mount the pool at an alternate root and restore the tar
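
A rough outline of the recreate-and-restore path; a sketch under several assumptions (device and dataset names are illustrative, dump/swap sizing is site-specific, x86 is assumed for the boot-block step, and the real BE layout uses legacy mountpoints this sketch glosses over):

  # recreate the pool under an alternate root and rebuild the skeleton
  zpool create -R /a rpool c0t0d0s0
  zfs create rpool/ROOT
  zfs create rpool/ROOT/s10_root
  zfs create -V 2G rpool/dump
  zfs create -V 2G rpool/swap
  zpool set bootfs=rpool/ROOT/s10_root rpool
  # restore the archive into the mounted root, then reinstall boot blocks
  (cd /a && tar xf /backup/root.tar)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
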
2013 Mar 06
0
where is the free space?
hi All, Ubuntu 12.04 and glusterfs 3.3.1.
root@tipper:/data# df -h /data
Filesystem    Size  Used  Avail  Use%  Mounted on
tipper:/data  2.0T  407G  1.6T   20%   /data
root@tipper:/data# du -sh .
10G  .
root@tipper:/data# du -sh /data
13G  /data
It's quite confusing. I also tried to free up the space by stopping the machine (actually an LXC VM), with no luck. After unmounting, the space
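
Space that df reports but du cannot find is often held by deleted-but-still-open files on the bricks; a quick check (the brick path is illustrative):

  # open files with link count zero, i.e. deleted but still held open
  lsof +L1
  # compare against the brick directories on each gluster server
  du -sh /export/brick1
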
2009 Oct 15
8
sub-optimal ZFS performance
Hello, ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome. I am running OSOL on my laptop, currently b124, and I found that the performance of ZFS is not optimal in all situations. If I check how much space the package cache for pkg(1) uses, it takes a bit longer on this host than on a comparable machine to which I transferred all the data. user@host:/var/pkg$ time
2010 May 05
3
[indiana-discuss] image-update doesn't work anymore (bootfs not supported on EFI)
On 5/5/10 1:44 AM, Christian Thalinger wrote:
> On Tue, 2010-05-04 at 16:19 -0600, Evan Layton wrote:
>> Can you try the following and see if it really thinks it's an EFI label?
>> # dd if=/dev/dsk/c12t0d0s2 of=x skip=512 bs=1 count=10
>> # cat x
>>
>> This may help us determine if this is another instance of bug 6860320
>
> # dd
2010 Jun 08
1
ZFS Index corruption and Connection reset by peer
Hello, I'm currently using dovecot 1.2.11 on FreeBSD 8.0 with ZFS filesystems. So far, so good; it works quite nicely, but I have a couple of glitches. Each user has his own zfs partition, mounted on /home/<user> (easier to set per-user quotas), and mail is stored in their home. From day one, when people check their mail via imap, a lot of index corruption occurred: dovecot:
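
A sketch of the per-user-dataset layout the post describes (pool and user names are illustrative):

  # one dataset per user, mounted at the home directory, each with its own quota
  zfs create -o mountpoint=/home/alice tank/home/alice
  zfs set quota=2G tank/home/alice
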
2008 Jul 22
2
Problems mounting ZFS after install
Let me thank everyone in advance. I've read a number of posts here and they helped tremendously in getting the install done. I have a couple of remaining issues which I can't seem to overcome. Here are the basics: dom0 - CentOS 5.2 32-bit, Xen 3.2.1 compiled from source; domU - os200805.iso. The install config: [root@internetpowagroup oshman]# cat opensolaris.install name =
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10? What besides zfs send/receive can be done to free the fragmented space? One ZFS was used for some months to store large disk images (each 50GByte) which were copied there with rsync. This ZFS then reports 6.39TByte usage with zfs list and only 2TByte usage with du. The other ZFS was used for similar
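
If snapshots exist, every rsync rewrite of a 50GByte image keeps the old blocks referenced, which shows up in zfs list but not in du; a sketch of checking and reclaiming that (names illustrative):

  # how much space do snapshots alone still reference?
  zfs list -o name,used,usedbysnapshots tank/images
  # destroying stale snapshots releases those blocks
  zfs destroy tank/images@old
  # --inplace makes rsync overwrite blocks instead of writing a new copy
  rsync --inplace host:/images/disk.img /tank/images/
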
2010 Oct 01
1
File permissions getting destroyed with M$ software on ZFS
All, Running Samba 3.5.4 on Solaris 10 with a ZFS file system. I have issues with our shared group folders. In these folders, userA in GroupA creates files just fine with the correct inherited permissions (660). The problem is when userB in GroupA reads and modifies that file with M$ Office apps: the permissions get whacked to 060+ and the file becomes read-only for everyone. I did
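
A common countermeasure is to pin the mode bits in smb.conf, since Office's save-rename-replace cycle recreates files with fresh permissions; a sketch (share name and path are illustrative):

  [groupshare]
     path = /tank/groupshare
     create mask = 0660
     force create mode = 0660
     directory mask = 0770
     force directory mode = 0770
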
2010 Jan 17
3
I can't seem to get the pool to export...
root@nas:~# zpool export -f raid
cannot export 'raid': pool is busy
I've disabled all the services I could think of. I don't see anything accessing it. I also don't see any of the filesystems mounted with mount or "zfs mount". What's the deal? This is not the rpool, so I'm not booted off it or anything like that.
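
Some ways to hunt down what keeps a pool busy (the mountpoint name is illustrative):

  # processes holding files open under the pool's mountpoint
  fuser -c /raid
  # zvols on the pool used as swap or dump devices keep it busy too
  swap -l
  dumpadm
  zfs list -t volume -r raid
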
2008 Sep 13
3
Restore a ZFS Root Mirror
Hi all, after installing OpenSolaris 2008.05 in VirtualBox I've created a ZFS root mirror with "zpool attach rpool <Disk A> <Disk B>", and it works like a charm. Now I tried to restore the rpool from the worst-case scenario: the disk the system was installed to (Disk A) fails. I replaced Disk A with another virtual disk C and tried to restore the rpool, but my problem is that I
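
A rough sketch of rebuilding the mirror after swapping in the replacement disk (device names are illustrative; x86 assumed for the boot-block step):

  # boot from the surviving disk, then attach the new one to the mirror
  zpool attach rpool c1t1d0s0 c1t2d0s0
  # wait for the resilver to complete
  zpool status rpool
  # make the new disk bootable (SPARC would use installboot instead)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t2d0s0
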
2009 Dec 27
7
How to destroy your system in funny way with ZFS
Hi all, I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows because snv_130 doesn't boot anymore after installation of the VirtualBox guest additions. Older builds before snv_129 were running fine too. I like some features of this OS, but now I end up with something funny. I installed default snv_129, installed guest additions -> reboot, set
2009 Oct 31
1
Kernel panic on zfs import
Hi, I've got an OpenSolaris 2009.06 box that will reliably panic whenever I try to import one of my pools. What's the best practice for recovering (before I resort to nuking the pool and restoring from backup)? There are two pools on the system: rpool and tank. The rpool seems to be fine, since I can boot from a 2009.06 CD and 'zpool import -f rpool'; I can
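
Builds from around snv_128 onward have a recovery-mode import that discards the last few transactions to reach a consistent state; a sketch (requires boot media new enough to support the flags):

  # dry run: report whether discarding recent transactions would allow import
  zpool import -nF tank
  # then the real attempt
  zpool import -F tank
  # on builds that support it, a read-only import avoids replaying anything
  zpool import -o readonly=on tank
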