search for: laotsao

Displaying 20 results from an estimated 20 matches for "laotsao".

2011 Sep 14
3
Is there any implementation of VSS for a ZFS iSCSI snapshot on Solaris?
I am using a Solaris + ZFS environment to export an iSCSI block-layer device and use the snapshot facility to take a snapshot of the ZFS volume. Is there an existing Volume Shadow Copy (VSS) implementation on Windows for this environment? Thanks S Joshi
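No ZFS VSS provider is referenced in the thread, but the Solaris side of such a setup is a one-liner. A minimal sketch, assuming a zvol named tank/iscsivol (a hypothetical name); note that without a VSS provider on the Windows initiator the snapshot is only crash-consistent:

    # Snapshot the exported zvol; NTFS is not quiesced, so this is
    # crash-consistent only (pool/volume names are hypothetical).
    zfs snapshot tank/iscsivol@backup1
    # Expose a writable copy for recovery testing:
    zfs clone tank/iscsivol@backup1 tank/iscsivol_restore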
2010 Aug 28
4
ufs root to zfs root liveupgrade?
...w zfsroot will be active on next boot. After init 6 it came up with UFS root; lustatus shows ufsroot active, and zpool rpool is mounted but not used by boot. Is this a known bug? I do not have access to sunsolve now. regards -------------- next part -------------- A non-text attachment was scrubbed... Name: laotsao.vcf Type: text/x-vcard Size: 221 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100828/53f6a943/attachment.vcf>
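For reference, the usual UFS-root to ZFS-root Live Upgrade sequence looks like the sketch below (pool, slice, and BE names are assumptions, not taken from the post):

    # Create a root pool on a suitable slice, then a ZFS boot environment:
    zpool create rpool c0t0d0s0
    lucreate -n zfsBE -p rpool
    luactivate zfsBE       # mark the new BE active for next boot
    init 6                 # reboot
    lustatus               # afterwards, verify which BE actually booted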
2011 May 17
3
Reboots when importing old rpool
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the old hard disk and tried to import it with: # zpool import -f <long id number> Old_rpool but the computer reboots. Why is that? On my old hard disk, I have 10-20 BEs, starting with OpenSolaris 2009.06 and upgraded through b134 up to snv_151a. I also have a WinXP entry in GRUB. This hard disk is partitioned, with a
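A common workaround for trouble when importing an old root pool is to import it under an alternate root, so the old BEs' mountpoints do not overlay the running system. A hedged sketch, reusing the placeholder id from the post:

    # -R mounts everything under /a instead of on top of the live system:
    zpool import -f -R /a <long id number> Old_rpool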
2010 Aug 19
3
SSD caching of MDT
Article by Jeff Layton: http://www.linux-mag.com/id/7839 anyone have views on whether this sort of caching would be useful for the MDT? My feeling is that MDT reads are probably pretty random but writes might benefit...? GREG -- Greg Matthews 01235 778658 Senior Computer Systems Administrator Diamond Light Source, Oxfordshire, UK
2010 Sep 13
3
Proper procedure when device names have changed
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool: mirror sdd sde mirror sdf sdg Recently the device names shifted on my box and the devices are now sdc, sdd, sde and sdf. The pool is of course very unhappy that the mirrors are no longer matched up and one device is "missing". What is the proper procedure to deal with this? -brian
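The usual answer is to export and re-import, since ZFS identifies vdevs by their on-disk labels rather than by device name; a minimal sketch with a hypothetical pool name:

    # Force a rescan of the devices by label:
    zpool export mypool
    zpool import -d /dev mypool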
2011 Aug 09
7
Disk IDs and DD
Hiya, Is there any reason (and anything to worry about) if disk target IDs don't start at 0 (zero). For some reason mine are like this (3 controllers - 1 onboard and 2 PCIe);
AVAILABLE DISK SELECTIONS:
0. c8t0d0 <ATA -ST9160314AS -SDM1 cyl 19454 alt 2 hd 255 sec 63>
   /pci@0,0/pci10de,cb84@5/disk@0,0
1. c8t1d0 <ATA -ST9160314AS -SDM1
2012 Jul 25
8
online increase of zfs after LUN increase ?
Hello, There is a feature of zfs (autoexpand, or zpool online -e) that can consume an increased LUN immediately and grow the zpool. That would be a very useful (vital) feature in enterprise environments. Though when I tried to use it, it did not work: the LUN expanded and is visible in format, but the zpool did not increase. I found a bug SUNBUG:6430818 (Solaris Does Not Automatically
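For reference, the two mechanisms mentioned look like this (pool and device names are hypothetical):

    # Grow automatically on LUN expansion (must be set beforehand):
    zpool set autoexpand=on mypool
    # Or expand a single device explicitly after the LUN has grown:
    zpool online -e mypool c2t0d0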
2012 Jan 11
3
Unable to allocate dma memory for extra SGL
Hi all, We have a Solaris 10 U9 x86 instance running on Silicon Mechanics / SuperMicro hardware. Occasionally under high load (a ZFS scrub, for example), the box becomes non-responsive (it continues to respond to ping but nothing else works -- not even the local console). Our only solution is to hard reset, after which everything comes up normally. Logs are showing the following: Jan 8
2008 Feb 12
4
xVM and VirtualBox
Hi, unfortunately VirtualBox does not work yet in a Solaris Dom0: I installed the VirtualBox beta and it runs fine on bare metal. Unfortunately the necessary driver does not load automatically in an xVM Dom0. It can be loaded manually, but it looks like it does not work in a Dom0:
bash-3.2# modinfo | grep vbox
# this is a one time task:
bash-3.2# cp /platform/i86pc/kernel/drv/amd64/vboxdrv
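A hedged sketch of loading the driver by hand, based on the path quoted in the post (whether add_drv registration is also needed depends on the VirtualBox package):

    add_drv vboxdrv                                    # register, if needed
    modload /platform/i86pc/kernel/drv/amd64/vboxdrv   # load the module
    modinfo | grep vbox                                # confirm it is loaded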
2011 May 19
8
Mapping sas address to physical disk in enclosure
Hi, we have a SunFire X4140 connected to a Dell MD1220 SAS enclosure, single path, MPxIO disabled, via an LSI SAS9200-8e HBA. Disks are visible with sas-addresses such as this in "zpool status" output:
NAME        STATE   READ WRITE CKSUM
cuve        ONLINE     0     0     0
  mirror-0  ONLINE     0     0     0
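One common way to map SAS addresses to enclosure slots on LSI SAS2 HBAs is LSI's sas2ircu utility; a sketch, assuming the controller is at index 0:

    sas2ircu LIST          # enumerate controllers, note the index
    sas2ircu 0 DISPLAY     # lists enclosure/slot for each SAS address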
2011 May 24
2
ndmp?
When I search around, I see that nexenta has ndmp, and solaris 10 does not, and there was at least some talk about supporting ndmp in opensolaris ... So ... Is ndmp present in solaris 11 express? Is it an installable 3rd party package? How would you go about supporting ndmp if you wanted to?
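A quick way to answer the Solaris 11 Express part locally is to ask the package system and SMF directly; a minimal sketch:

    pkg search ndmp          # is an NDMP package in the configured repos?
    svcs -a | grep -i ndmp   # is an ndmpd service present/enabled?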
2011 Aug 10
1
Scripting
Hiya, Now I have figured out how to read disks using dd to make LEDs blink, I want to write a little script that iterates through all drives, dd's them with a few thousand counts, stops, then dd's them again with another few thousand counts, so I end up with maybe 5 blinks. I don't want somebody to write something for me, I'd like to be pointed in the right
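In that spirit, a pointer rather than a finished script: the shape of such a loop might be the sketch below (the device glob, block counts, and delay are all assumptions to tune):

    #!/bin/sh
    # Blink each drive ~5 times by alternating short read bursts with pauses.
    for d in /dev/rdsk/c*t*d*s2; do
        for i in 1 2 3 4 5; do
            dd if="$d" of=/dev/null bs=512 count=5000 2>/dev/null
            sleep 1
        done
    done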
2012 Jan 04
9
Stress test zfs
Hi all, I've got a Solaris 10 9/10 instance running on a T3. It's an Oracle box with 128GB memory. I've been trying to load test the box with bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more than a couple K for reads. Any suggestions? Or should I take this to a bonnie++ mailing list? Any help is appreciated. I'm kinda
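One common bonnie++ pitfall on a 128GB box is that the working set must exceed RAM, or the ARC serves everything from cache. A hedged sketch (the mount point and user are hypothetical):

    # -s should be at least twice physical memory:
    bonnie++ -d /pool/bench -s 256g -u nobody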
2011 May 28
7
Have my RMA... Now what??
I have a raidz2 pool with one disk that seems to be going bad; several errors are noted in iostat. I have an RMA for the drive; however, now I am wondering how to proceed. I need to send the drive in and then they will send me one back. If I had the drive on hand, I could do a zpool replace. Do I do a zpool offline? zpool detach? Once I get the drive back and put it in the same drive bay..
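For reference, the usual sequence with a raidz2 pool (pool and device names are placeholders): offline the disk now, replace it when the RMA drive returns to the same bay:

    zpool offline tank c1t5d0    # take the failing disk out of service
    # ...after the replacement is installed in the same bay:
    zpool replace tank c1t5d0    # resilver onto the new disk
    zpool status tank            # watch the resilver complete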
2010 Aug 25
6
(preview) Whitepaper - ZFS Pools Explained - feedback welcome
Hello list, while following this list for more than 1 year, I feel that this list has been a great way to get insights into ZFS. Thank you all for contributing. Over the last months I have been writing a little "whitepaper" trying to consolidate the knowledge collected here. It has now reached a "beta" state and I would like to share the result with you. I call it -
2012 Jul 11
5
Solaris derivative with the best long-term future
As a napp-it user who needs to upgrade from NexentaCore, I recently saw "preferred for OpenIndiana live but running under Illumian, NexentaCore and Solaris 11 (Express)" as a system recommendation for napp-it. I wonder about the future of OpenIndiana and Illumian: which fork is likely to see the most continued development, in your opinion? Thanks.
2012 Jan 07
14
zfs defragmentation via resilvering?
Hello all, I understand that relatively high fragmentation is inherent to ZFS due to its COW and the possible intermixing of metadata and data blocks (of which metadata path blocks are likely to expire and get freed relatively quickly). I believe it was sometimes implied on this list that such fragmentation for "static" data can currently be combated only by zfs send-ing existing
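The send-based rewrite alluded to above looks roughly like this sketch (dataset names are hypothetical); the received copy is written out anew, which is the whole point:

    zfs snapshot tank/data@defrag
    zfs send tank/data@defrag | zfs receive tank/data_rewritten
    # then rename/swap the datasets once the copy is verified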
2007 Dec 03
31
How to enable 64bit solaris guest on top of solaris dom0
I can enable a 32-bit Solaris guest on top of a Solaris dom0, but I don't know how to enable a 64-bit Solaris guest on top of a Solaris dom0. What configuration do I need to modify?
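If the guest is a Solaris PV domU, the usual knob is the kernel path in the domain configuration; a hedged sketch (these are the standard Solaris xVM paths, but the exact config file depends on how the guest was created):

    # 32-bit PV kernel:
    kernel = "/platform/i86xpv/kernel/unix"
    # 64-bit PV kernel:
    kernel = "/platform/i86xpv/kernel/amd64/unix"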
2010 Aug 18
10
Networker & Dedup @ ZFS
Hi, We are considering using a ZFS based storage as a staging disk for Networker. We're aiming at providing enough storage to be able to keep 3 months worth of backups on disk, before it's moved to tape. To provide storage for 3 months of backups, we want to utilize the dedup functionality in ZFS. I've searched around for these topics and found no success stories,
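Two commands worth knowing before committing to such a plan (pool and dataset names are hypothetical): dedup is a per-dataset property, and zdb can estimate the achievable ratio in advance:

    zdb -S tank                      # simulate dedup, report expected ratio
    zfs set dedup=on tank/networker  # enable for the staging dataset only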
2012 May 30
11
Disk failure chokes all the disks attached to the failing disk HBA
Dear All, This may not be the correct mailing list, but I'm having a ZFS issue when a disk is failing. The system is a Supermicro motherboard X8DTH-6F in a 4U chassis (SC847E1-R1400LPB) and an external SAS2 JBOD (SC847E16-RJBOD1). That makes a system with a total of 4 backplanes (2x SAS + 2x SAS2), each of them connected to a different HBA (4 in total: 2x LSI 3081E-R (1068 chip) + 2x LSI