search for: openstorag

Displaying 19 results from an estimated 19 matches for "openstorag".

2010 Jul 09
4
resilver of older root pool disk
This is a hypothetical question that could actually happen: Suppose a root pool is a mirror of c0t0d0s0 and c0t1d0s0 and for some reason c0t0d0s0 goes off line, but comes back on line after a shutdown. The primary boot disk would then be c0t0d0s0 which would have much older data than c0t1d0s0. Under normal circumstances ZFS would know that c0t0d0s0 needs to be resilvered. But in this case
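A minimal sketch of how the recovery might be checked and nudged along by hand, assuming the root pool is named rpool and the stale half is c0t0d0s0 as in the scenario above; normally ZFS notices the divergent transaction groups on its own and resilvers the stale side from the current one:

    zpool status -v rpool          # see which side is reported as degraded or out of date
    zpool online rpool c0t0d0s0    # reattach the returned disk; this kicks off a resilver
    zpool scrub rpool              # optional: verify checksums across the whole pool afterwards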
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now. Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to
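For what it's worth, on OpenSolaris-era kernels the resilver throttle is exposed as kernel tunables; a hedged sketch, with the exact names and defaults varying by build and the values below purely illustrative:

    # de-prioritise resilver I/O so foreground NFS traffic gets more of the disks
    echo zfs_resilver_delay/W0t4 | mdb -kw
    echo zfs_resilver_min_time_ms/W0t1000 | mdb -kw

Changes made with mdb -kw do not survive a reboot; the equivalent set zfs:... lines in /etc/system would be needed to make them stick.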
2010 Sep 12
3
Failed zfs send "invalid backup stream".............
I'm trying to replicate a 300 GB pool with this command: zfs send alpha@3 | zfs receive -F omega. About 2 hours into the process it fails with this error: "cannot receive new filesystem stream: invalid backup stream". I have tried setting the target read-only (zfs set readonly=on omega) and also disabling Time Slider, thinking it might have something to do with it. What could be
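One hedged way to narrow down whether the stream or the receive side is at fault, assuming the same alpha@3 snapshot and enough scratch space to hold the captured stream; zstreamdump ships with OpenSolaris-era releases:

    zfs send alpha@3 > /var/tmp/alpha3.stream        # capture the stream once
    zstreamdump < /var/tmp/alpha3.stream             # sanity-check the stream records
    zfs receive -vF omega < /var/tmp/alpha3.stream   # replay it with verbose output

If the capture itself fails a couple of hours in, the problem is on the sending pool rather than in the receive path.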
2011 Apr 07
40
X4540 no next-gen product?
While I understand everything at Oracle is "top secret" these days, does anyone have any insight into a next-gen X4500 / X4540? Does some other Oracle / Sun partner make a comparable system that is fully supported by Oracle / Sun? http://www.oracle.com/us/products/servers-storage/servers/previous-products/index.html What do X4500 / X4540 owners use if they'd like more
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
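A hedged sketch of what adding the SSDs would look like, assuming the pool is named tank and the four X25-Es appear as c3t0d0 through c3t3d0 (pool and device names are illustrative):

    zpool add tank log mirror c3t0d0 c3t1d0   # mirrored slog to absorb synchronous writes
    zpool add tank cache c3t2d0 c3t3d0        # L2ARC devices; no redundancy needed

Losing a cache device is harmless (reads simply fall back to the pool), which is why the log is mirrored here and the cache is not.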
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a, to host b, twice. Host b has two pools, one ashift=9, one ashift=12. I sent the zvol to each of the pools on b. The original source pool is ashift=9, and an old revision (2009_06 because it's still running xen). I sent it twice, because something strange happened on the first send, to the ashift=12 pool. "zfs list -o space" showed figures at
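A hedged way to confirm what each pool is actually doing, with pool and zvol names purely illustrative (zdb dumps the cached config, where the ashift of each top-level vdev is visible):

    zdb -C pool9  | grep ashift
    zdb -C pool12 | grep ashift
    zfs list -o space -r pool9 pool12
    zfs get volblocksize,refreservation pool9/vol pool12/vol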
2010 Aug 21
8
ZFS with Equallogic storage
I'm planning on setting up an NFS server for our ESXi hosts and plan on using a virtualized Solaris or Nexenta host to serve ZFS over NFS. The storage I have available is provided by Equallogic boxes over 10GbE iSCSI. I am trying to figure out the best way to provide both performance and resiliency given that the Equallogic provides the redundancy. Since I am hoping to provide a 2TB
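A hedged sketch of the ZFS side of such an export once the pool exists, with the dataset name and the ESXi subnet purely illustrative:

    zfs create tank/esx
    zfs set sharenfs='rw=@10.10.0.0/24,root=@10.10.0.0/24' tank/esx
    zfs get sharenfs tank/esx

The root= entry matters because ESXi mounts NFS datastores as root; without it the writes get squashed and the datastore typically cannot be created.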
2010 Sep 09
37
resilver = defrag?
A) Resilver = Defrag. True/false? B) If I buy larger drives and resilver, does defrag happen? C) Does zfs send | zfs receive mean it will defrag? -- This message posted from opensolaris.org
2010 Oct 16
4
resilver question
Hi all. I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question? -- Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is essential that the curriculum is presented
2010 May 08
6
Mirrored Servers
Let's say I have two servers, both running OpenSolaris with ZFS. I basically want to be able to create a filesystem where the two servers have a common volume that is mirrored between the two. Meaning, each server keeps an identical, real-time backup of the other's data directory. Set them both up as file servers, and load balance between the two for incoming requests. How would anyone
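ZFS itself has no built-in two-way, real-time mirror between hosts, so the usual approximation in this era was one-way snapshot replication (with failover handled above ZFS); a hedged sketch, host and dataset names purely illustrative:

    # on serverA: initial full copy, then periodic incrementals
    zfs snapshot tank/data@rep1
    zfs send tank/data@rep1 | ssh serverB zfs receive -F tank/data
    # later, after more changes:
    zfs snapshot tank/data@rep2
    zfs send -i tank/data@rep1 tank/data@rep2 | ssh serverB zfs receive -F tank/data

True synchronous mirroring between the boxes would need something outside ZFS, such as AVS/SNDR, or exporting each server's disks over iSCSI and building mirror vdevs that span both machines.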
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices Guide says something like "Don't put more than ___ disks into a single vdev." At first, I challenged this idea, because I see no reason why a 21-disk raidz3 would be bad. It seems like a good thing. I was operating on the assumption that resilver time was limited by sustainable throughput of disks, which
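A rough back-of-the-envelope under that throughput-limited assumption, with the numbers purely illustrative: a 2 TB drive sustaining ~100 MB/s would resilver in roughly 2,000,000 MB / 100 MB/s ≈ 20,000 s, about 5.5 hours, independent of vdev width. The usual counter-argument on this list is that raidz resilver walks block pointers rather than streaming the disk, so a wide vdev degenerates into small random reads on every member and the seek rate, not bandwidth, sets the time.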
2012 Dec 14
12
any more efficient way to transfer snapshot between two hosts than ssh tunnel?
Assuming in a secure and trusted env, we want to get the maximum transfer speed without the overhead from ssh. Thanks. Fred
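The usual answer from this era is to pipe the stream over a raw TCP connection with mbuffer (or plain netcat) instead of ssh; a hedged sketch, with host names, port and buffer sizes purely illustrative:

    # on the receiving host:
    mbuffer -I 9090 -s 128k -m 1G | zfs receive -F tank/backup
    # on the sending host:
    zfs send tank/fs@snap | mbuffer -O recvhost:9090 -s 128k -m 1G

mbuffer also smooths out the bursty producer/consumer pattern of send/receive, which often helps even when ssh is kept in the path.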
2010 Oct 19
8
Balancing LVOL fill?
Hi all I have this server with some 50TB disk space. It originally had 30TB on WD Greens, was filled quite full, and another storage chassis was added. Now, space problem gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2TB drive. Now, those green drives suck quite hard, but not
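A hedged way to see how uneven the fill and the load actually are, assuming the pool is named tank:

    zpool iostat -v tank 10   # per-vdev alloc/free plus ops and bandwidth, every 10 seconds
    zpool list tank

ZFS of this vintage has no rebalance operation; new writes are simply biased toward emptier vdevs, so the only thorough fix is to copy or send/receive the data off the pool and back onto it.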
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M. Right? This is one of the reasons the raidzN resilver code is inefficient. Since you end up waiting for the
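A worked example of that division, with the disk count purely illustrative: in a 10-disk raidz2, M = 8 data disks, so a 128K record is split into 128K / 8 = 16K per data disk (plus two parity pieces of the same size). Every block read back during resilver therefore costs a seek on each member disk for a fairly small transfer, which is the inefficiency being described.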
2010 Apr 27
42
Performance drop during scrub?
Hi all. I have a test system with snv_134 and 8x2TB drives in raidz2 and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool. How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down the scrub's priority somehow? Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at
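On snv_134 the scrub throttle is a kernel tunable; a hedged sketch, with the name taken from that era's scan code and the value purely illustrative:

    echo zfs_scrub_delay/W0t8 | mdb -kw    # more idle ticks between scrub I/Os = lower scrub priority

To make it persistent across reboots the equivalent line would go in /etc/system, e.g. set zfs:zfs_scrub_delay = 8.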
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane -- but we recently got one of the new SC847-based models with 24 disks up front and 12 in the
2017 Feb 17
0
Wine release 2.2
...ommand stream calls. Thierry Vermeylen (1): wnaspi32: Do not crash on SC_GETSET_TIMEOUTS. Wei Xie (1): qcap: Add O_CLOEXEC flag to prevent child process from inheriting handles. Zebediah Figura (2): storage.dll16: Simplify operations in IStream16::Seek. storage.dll16: Set OpenStorage/OpenStream output to NULL on failure. -- Alexandre Julliard julliard at winehq.org
2011 Oct 12
33
weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive. Its marketing name is: Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102 format(1M) shows it identify itself as: Seagate-External-SG11-2.73TB Under both Solaris 10 and Solaris 11x, I receive the evil message: | I/O request is not aligned with 4096 disk sector size. | It is handled through Read Modify Write but the performance
2010 Oct 08
74
Performance issues with iSCSI under Linux
Hi! We're trying to pinpoint our performance issues and we could use all the help the community can provide. We're running the latest version of Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for the cache, 50GB Samsung SSD for the ZIL, 10GbE on a dedicated switch, 11x pairs of 15K HDDs for the pool). We're connecting a single Linux
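As a hedged first pass at isolating the bottleneck on the Nexenta side (pool name illustrative), watching where the latency actually accumulates tends to be more telling than raw throughput numbers:

    zpool iostat -v tank 5   # per-vdev and per-log-device ops and bandwidth
    iostat -xn 5             # per-LUN service times, wait queues and %busy

If the single 50 GB ZIL SSD shows high %busy while the data disks sit idle, synchronous iSCSI writes are serialising on the slog, and that, rather than the pool itself, would be the limit.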