Displaying 6 results from an estimated 6 matches for "snv_129".
2009 Dec 27 (7): How to destroy your system in a funny way with ZFS
Hi all,
I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows, because snv_130 doesn't boot anymore after installing the VirtualBox guest additions. Older builds before snv_129 ran fine too. I like some features of this OS, but now I've ended up with something funny.
I installed a default snv_129, installed guest addit...
2009 Dec 19 (2): ZFS upgrade freezes desktop
On snv_129, a zfs upgrade (*not* a zpool upgrade) from version 3 to version 4 caused the
desktop to freeze: no response to keyboard or mouse events, and the clock stopped updating.
ermine% uname -a
SunOS ermine 5.11 snv_129 i86pc i386 i86pc
ermine% zpool upgrade
This system is currently running ZFS pool version 22....
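For context: zfs upgrade and zpool upgrade are separate commands with separate version numbers; the former bumps the on-disk filesystem version, the latter the pool version. A minimal sketch of both, with a hypothetical dataset name:

  zfs upgrade                  # list filesystems not running the latest version
  zfs upgrade -V 4 tank/home   # upgrade one filesystem to version 4, as above
  zpool upgrade                # show the pool versions this system supports
  zpool upgrade tank           # upgrade the pool itself; this is irreversible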
2010 Jan 17 (3): I can't seem to get the pool to export...
...led all the services I could think of. I don't see anything accessing it. I also don't see any of the filesystems mounted with mount or "zfs mount". What's the deal? This is not the rpool, so I'm not booted off it or anything like that. I'm on snv_129.
I'm attempting to move the main storage to a new pool. I created the new pool, used "zfs send | zfs recv" for the filesystems. That's all fine. The plan was to export both pools, and use the import to rename them. I've got the new pool exported, but the older...
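For reference, the rename-via-export-and-import sequence being attempted looks roughly like this (pool names here are hypothetical):

  zpool export newpool
  zpool export oldpool        # the step that hangs here
  zpool import oldpool temp   # re-import under a new name
  zpool import newpool tank   # the new pool takes over the old name

On Solaris, fuser -c on the pool's mountpoint is one way to find processes still holding files open on a pool that refuses to export.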
2010 Feb 02 (7): Help needed with zfs send/receive
Hi folks,
I'm having (as the title suggests) a problem with zfs send/receive.
The command line looks like this:
pfexec zfs send -Rp tank/tsm@snapshot | ssh remotehost pfexec zfs recv
-v -F -d tank
This works like a charm as long as the snapshot is small enough.
When it gets too big (meaning somewhere between 17G and 900G), I get
ssh errors (can't read from remote host).
I tried
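Two workarounds commonly suggested for long-running send streams are ssh keepalives and buffering the stream; both sketches below are illustrative, not tuned:

  # keepalives so the session survives long stalls
  pfexec zfs send -Rp tank/tsm@snapshot | \
      ssh -o ServerAliveInterval=30 remotehost pfexec zfs recv -v -F -d tank

  # or buffer both ends with mbuffer to smooth out stalls
  pfexec zfs send -Rp tank/tsm@snapshot | mbuffer -s 128k -m 1G | \
      ssh remotehost 'mbuffer -s 128k -m 1G | pfexec zfs recv -v -F -d tank'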
2013 Sep 05 (1): Primary GID based access for user in 16 supplementary groups
We observe a difference between a Windows 7 client and a Windows 2003/XP client when accessing directories that should be accessible via the UNIX account's primary group GID: the Windows client refuses access.
Ignoring for now why the two client behaviours differ (either some subtle difference in the requests or in the way the Samba reply is dealt with), the question is what should be the correct
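A first diagnostic step is to confirm what the server side actually resolves for the account (user, group, and path names here are hypothetical):

  id someuser               # primary GID and supplementary groups as UNIX sees them
  getent group somegroup    # membership of the group that should grant access
  ls -ld /export/share/dir  # the directory's owning group and mode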
2010 Mar 18 (13): ZFS/OSOL/Firewire...
An interesting thing I just noticed here while testing out some Firewire drives with OpenSolaris.
Setup:
OpenSolaris 2009.06 and a dev version (snv_129)
Two 500 GB Firewire 400 drives with integrated hubs for daisy-chaining (net: four devices on the chain):
- one SATA bridge
- one PATA bridge
Created a zpool with both drives as simple vdevs
Started a zfs send/recv to back up a local filesystem
Watching zpool iostat, I see that the total throughput maxes...
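For context, the test reduces to something like this (device, pool, and snapshot names are hypothetical):

  zpool create fwpool c5t0d0 c6t0d0   # two-disk stripe across the Firewire devices
  zfs send -R rpool/export@snap | zfs recv -F -d fwpool   # replicate a local filesystem
  zpool iostat -v fwpool 1            # watch per-device throughput once a second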