search for: caselog

Displaying 20 results from an estimated 23 matches for "caselog".

2007 Sep 19
7
ZFS Solaris 10 Update 4 Patches
...C 2006/479 zfs receive -F PSARC 2006/486 ZFS canmount property PSARC 2006/497 ZFS create time properties PSARC 2006/502 ZFS get all datasets PSARC 2006/504 ZFS user properties PSARC 2006/622 iSCSI/ZFS Integration PSARC 2006/638 noxattr ZFS property Go to http://www.opensolaris.org/os/community/arc/caselog/ for more details on the above. See http://www.opensolaris.org/jive/thread.jspa?threadID=39903&tstart=0 for the complete list of CRs. Thanks, George
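
To illustrate one of the features listed above, PSARC 2006/504 ZFS user properties, a minimal sketch (the dataset name tank/home and the property name are hypothetical):

    # user property names must contain a colon, to keep them apart from native properties
    zfs set com.example:backup=yes tank/home
    # list only locally set properties to read it back
    zfs get -s local all tank/home
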
2007 Aug 13
2
ZFS Boot for Solaris SPARC
Hi, Searching this alias I can find a number of guides and scripts that describe the configuration of Solaris to boot from a ZFS root pool. However, these guides appear to be Solaris 10 x86 specific. Is the creation of a ZFS boot disk available for Solaris SPARC? If so, could you point me to where I can obtain details of this new feature. Thanks and Regards, Paul. PS:
2009 Oct 23
5
PSARC 2009/571: ZFS deduplication properties
I haven't seen any mention of it in this forum yet, so FWIW you might be interested in the details of ZFS deduplication mentioned in this recently-filed case. Case log: http://arc.opensolaris.org/caselog/PSARC/2009/571/ Discussion: http://www.opensolaris.org/jive/thread.jspa?threadID=115507 Very nice -- I like the interaction with "copies", and (like a few others) I think the default threshold could be much lower. Like *many* others, I look forward to trying it out. =-) Also see PSA...
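
For context, the dedup property proposed in the case behaves like other dataset properties; a sketch on a build that includes it (the pool name tank is hypothetical):

    # enable deduplication; blocks are matched by checksum (sha256)
    zfs set dedup=on tank
    # or additionally verify matching blocks byte for byte before sharing them
    zfs set dedup=verify tank
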
2007 Sep 19
8
ZFS Solaris 10u5 Proposed Changes
ZFS Fans, Here's a list of features that we are proposing for Solaris 10u5. Keep in mind that this is subject to change. Features: PSARC 2007/142 zfs rename -r PSARC 2007/171 ZFS Separate Intent Log PSARC 2007/197 ZFS hotplug PSARC 2007/199 zfs {create,clone,rename} -p PSARC 2007/283 FMA for ZFS Phase 2 PSARC/2006/465 ZFS Delegated Administration PSARC/2006/577 zpool property to
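
Two of the proposed features lend themselves to a quick sketch (dataset and snapshot names hypothetical):

    # PSARC 2007/199: -p creates missing intermediate datasets
    zfs create -p tank/projects/web/logs
    # PSARC 2007/142: rename a snapshot recursively across all descendants
    zfs snapshot -r tank@today
    zfs rename -r tank@today tank@monday
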
2010 Oct 23
2
No ACL inheritance with aclmode=passthrough in onnv-134
...= 0 _exit(0) touch is not calling chmod(); the same happens with mkdir[1] (which also doesn't call chmod()). To summarize: ACLs are not inherited when aclmode = discard. Why is this? Afaik this should not be the case. Thanks! -f [1] http://arc.opensolaris.org/caselog/PSARC/2010/029/20100126_mark.shellenbaum
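
A minimal way to reproduce the inheritance check described above, assuming standard Solaris ACL syntax (path and user name hypothetical):

    # put an inheritable allow ACE on the parent directory
    chmod A+user:alice:read_data/write_data:file_inherit/dir_inherit:allow /tank/dir
    # create a file without an explicit chmod and inspect its ACL
    touch /tank/dir/newfile
    ls -V /tank/dir/newfile
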
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool, I was just wondering if it is vdev specific, or pool wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB sector) drives, and some normal 512B sector drives, and was wondering if the ashift can be set per vdev, or only per pool. Theoretically, this would save me some size on
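
Since zdb -C prints one stanza per top-level vdev, the per-vdev values can be checked directly (pool name hypothetical):

    # each top-level vdev stanza carries its own ashift entry
    zdb -C tank | grep -w ashift
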
2008 Aug 04
6
[Fwd: [networking-discuss] code-review: fine-grained privileges for datalink administration]
...of those lines are modified or new. There are over 1000 lines of deleted code. Workspace with cscope database: /net/zhadum.east/export/ws/seb/dladm-privs-hg/ The high-level architecture behind the change is described in the PSARC fast-track case log: http://www.opensolaris.org/os/community/arc/caselog/2008/473/ One thing to keep in mind while reviewing these changes is that the dld control device is no longer a STREAMS device, but a regular character device. For one, STREAMS serves no purpose here. The device simply processes GLDv3 ioctls, and that's it. Removing STREAMS from this p...
2007 Oct 09
1
Moving default snapshot location
Hi, We have implemented a ZFS file system for home directories and have enabled it with quotas+snapshots. However the snapshots are causing an issue with the user quotas. The default snapshot files go under ~username/.zfs/snapshot, which is part of the user file system. So if the quota is 10G and the snapshots total 2G, this adds to the disk space used by the user. Is there any workaround
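
On later ZFS versions the snapshot share of that usage can at least be measured; a sketch assuming the usedbysnapshots property is available (dataset name hypothetical):

    # break down usage into live data, snapshots, and children
    zfs get used,usedbydataset,usedbysnapshots,usedbychildren home/username
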
2009 May 21
2
About ZFS compatibility
I have created a pool on external storage with B114. Then I exported this pool and imported it on another system with B110, but the import fails with the error: cannot import 'tpool': pool is formatted using a newer ZFS version. Did some big change in ZFS with B114 lead to this compatibility issue? Thanks Zhihui
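
The standard way to diagnose and avoid this (tpool is from the post; the device name and version number are illustrative):

    # compare the pool's on-disk version with what each build supports
    zpool get version tpool
    zpool upgrade -v
    # on the newer build, create the pool at an older version so B110 can import it
    zpool create -o version=14 tpool c0t1d0
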
2007 Nov 04
7
Status of Samba/ZFS integration
I've tried to set up a Samba file server that behaves identically to a Microsoft Windows 2000 or 2003 one. First of all, the problem with the ACI ordering is simple: the Microsoft ACI specification requires that the DENY ACIs are put on top. It can be solved with a simple chmod. Problem no. 2: the Samba NFSv4 ACL module doesn't interpret owner@, group@, everyone@. While
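
The "simple chmod" referred to above would be an explicit ACL rewrite that lists the DENY entries first, along these lines (path and ACEs hypothetical):

    # A= replaces the whole ACL; deny entries are placed before allow entries
    chmod A=user:guest:write_data:deny,owner@:read_data/write_data:allow /tank/share/file
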
2008 Oct 19
9
My 500-gig ZFS is gone: insufficient replicas, corrupted data
Hi, I'm running FreeBSD 7.1-PRERELEASE with a 500-gig ZFS drive. Recently I've encountered a FreeBSD problem (PR kern/128083) and decided to update the motherboard BIOS. The update seemed to go fine, but afterwards I was shocked to see my ZFS destroyed! Rolling the BIOS back did not help. Now it looks like this: # zpool status pool: tank state: UNAVAIL status:
2010 Feb 19
4
ZFS unit of compression
Hello. I want to know what the unit of compression in ZFS is. Is it 4 KB or larger? Is it tunable? Thanks. Thanos
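
For what it's worth, ZFS compresses per record, and the record size is a per-dataset property (128K by default); a sketch with a hypothetical dataset:

    # inspect the compression unit and achieved ratio, then tune it
    zfs get recordsize,compression,compressratio tank/fs
    zfs set recordsize=16K tank/fs
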
2010 Jun 02
11
ZFS recovery tools
Hi, I have just recovered from a ZFS crash. During the agonizing time this took, I was surprised to learn how undocumented the tools and options for ZFS recovery were. I managed to recover thanks to some great forum posts by Victor Latushkin; without his posts I would still be crying at night... I think the worst example is the zdb man page, which does little more than ask you
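
For later readers, the recovery options that eventually appeared include a rewind at import time; a sketch assuming a build recent enough to have them (pool name hypothetical):

    # try to discard the last few transaction groups and import the pool
    zpool import -F tank
    # zdb can also be pointed read-only at an exported pool
    zdb -e tank
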
2009 Nov 14
2
Patch: event port-based ioloop and notify
Greetings, thanks to all of you who work on Dovecot! I have prepared a small patch to support the event port mechanism of Solaris 10 and OpenSolaris for both the ioloop and the notify subsystems. It seems to work fine for me, but I haven't conducted any extensive testing. It would be great if someone could review and/or test it (and if it could eventually enter the code base). I have
2008 Jul 05
4
iostat and monitoring
Hi gurus, I like zpool iostat and I like system monitoring, so I set up a script within sma to let me get the zpool iostat figures through snmp. The problem is that as zpool iostat is only run once for each snmp query, it always reports a static set of figures, like so: root@exodus:snmp # zpool iostat -v capacity operations bandwidth pool used avail read
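
The static figures are expected: without an interval argument, zpool iostat prints averages since boot. A polling script would instead request two samples and keep the second, e.g.:

    # the first sample is since boot, the second is the 5-second delta
    zpool iostat -v 5 2
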
2010 Jan 15
4
Bridging firewall with snv_125 and ipfilter
Has anyone gotten a transparent firewall working? I'm using snv_125 on an IBM x346 (snv_130 goes into endless boot loops on this hardware). I can create a working bridge with dladm, but can't stop packets, even with "block in quick all". That stops packets on my management interface bge0, but not on the bridge. :( tim@ghost:~# ifconfig -a lo0:
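
For reference, the bridge-creation step alluded to above looks like this (link names hypothetical):

    # create a bridge over two data links and verify it
    dladm create-bridge -l bge1 -l bge2 bridge0
    dladm show-bridge
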
2008 Mar 20
5
Snapshots silently eating user quota
All, I assume this issue is pretty old given the time ZFS has been around. I have tried searching the list but could not understand how ZFS actually accounts for snapshot space. I have a user walter for whom I try the following ZFS operations: bash-3.00# zfs get quota store/catB/home/walter NAME PROPERTY VALUE SOURCE
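
One commonly cited mitigation, assuming a ZFS version that has it, is refquota, which limits only the space referenced by the live dataset and keeps snapshot space out of the user's limit:

    # cap live data at 10G without counting snapshots against walter
    zfs set refquota=10G store/catB/home/walter
    zfs get quota,refquota store/catB/home/walter
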
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
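
Log devices are attached per pool, but the same physical SSDs can be sliced so each pool gets its own mirrored pair of slices (device names hypothetical):

    # one slice from each SSD per pool
    zpool add pool1 log mirror c4t0d0s0 c4t1d0s0
    zpool add pool2 log mirror c4t0d0s1 c4t1d0s1
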
2009 Nov 02
24
dedupe is in
Deduplication was committed last night by Mr. Bonwick: > Log message: > PSARC 2009/571 ZFS Deduplication Properties > 6677093 zfs should have dedup capability http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html Via c0t0d0s0.org.
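
Once running a build with this commit, the pool-wide effect is visible as a read-only property (pool name hypothetical):

    # ratio of referenced data to physically stored data
    zpool get dedupratio tank
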
2007 Sep 04
23
I/O freeze after a disk failure
Hi all, yesterday we had a drive failure on a fc-al JBOD with 14 drives. Suddenly the zpool using that JBOD stopped responding to I/O requests and we got tons of the following messages in /var/adm/messages: Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g20000004cfd81b9f (sd52): Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
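
When a single drive wedges a redundant pool like this, the usual manual step is to take it out of service so I/O can resume on the remaining replicas (the ctd device name here is hypothetical, and the pool must have redundancy to survive it):

    # offline the suspect disk, then recheck the pool
    zpool offline tank c3t20000004CFD81B9Fd0
    zpool status tank
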