similar to: Re: zfs-discuss Digest, Vol 89, Issue 12

Displaying 20 results from an estimated 2000 matches similar to: "Re: zfs-discuss Digest, Vol 89, Issue 12"

2013 Jan 07
5
mpt_sas multipath problem?
Greetings, We're trying out a new JBOD here. Multipath (mpxio) is not working, and we could use some feedback and/or troubleshooting advice. The OS is oi151a7, running on an existing server with a 54TB pool of internal drives. I believe the server hardware is not relevant to the JBOD issue, although the internal drives do appear to the OS with multipath device names (despite the fact
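On illumos-based systems, MPxIO for mpt_sas HBAs is typically enabled per driver and then verified with mpathadm; a minimal sketch, assuming stock oi151a7 tooling (device names omitted):

    # Enable MPxIO for all mpt_sas-attached ports (takes effect after reboot)
    stmsboot -D mpt_sas -e
    # After reboot, each JBOD disk should appear once, with two operational paths
    mpathadm list lu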
2012 Jul 23
0
[zfs] LZ4 compression algorithm
----- Forwarded message from Bob Friesenhahn <bfriesen at simple.dallas.tx.us> ----- From: Bob Friesenhahn <bfriesen at simple.dallas.tx.us> Date: Mon, 23 Jul 2012 12:55:44 -0500 (CDT) To: zfs at lists.illumos.org cc: Radio młodych bandytów <radiomlodychbandytow at o2.pl>, Pawel Jakub Dawidek <pjd at FreeBSD.org>, developer at lists.illumos.org Subject: Re: [zfs] LZ4
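LZ4 did subsequently land in illumos ZFS as the org.illumos:lz4_compress feature flag; once present, enabling it is a per-dataset property. A sketch, with 'tank/data' as a placeholder dataset:

    # Enable LZ4; only blocks written after this point are compressed
    zfs set compression=lz4 tank/data
    zfs get compression,compressratio tank/data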
2007 Sep 18
1
zfs-discuss Digest, Vol 23, Issue 34
Hello, I am a final-year computer engineering student and I am planning to implement ZFS on Linux. I have gone through the articles posted on Solaris. Please let me know about the feasibility of implementing ZFS on Linux. Waiting for valuable replies. Thanks in advance. On 9/14/07, zfs-discuss-request at opensolaris.org <zfs-discuss-request at opensolaris.org> wrote: > Send
2013 Oct 23
0
Fwd: [dilos-dev] DilOS 1.3.4 has been released
Hi List, Please refer to the email below from Igor Kozhukhov. Illumos (DilOS) is back as dom0 for Xen. Give it a shot :) Both PV and HVM guests are working well. Illumos (DilOS) also works as a PV guest with recent updates. You can send DilOS-related queries to dilos-dev@lists.illumos.org. Regards, Rushikesh -------- Original Message -------- Subject: [dilos-dev] DilOS 1.3.4 has been released
2007 Apr 19
14
Experience with Promise Tech. arrays/JBODs?
Greetings, In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've run across the recent "VTrak" SAS/SATA systems from Promise Technologies, specifically their E-class and J-class series: E310f FC-connected RAID: http://www.promise.com/product/product_detail_eng.asp?product_id=175 E310s SAS-connected RAID:
2007 Sep 28
4
Sun 6120 array again
Greetings, Last April, in this discussion... http://www.opensolaris.org/jive/thread.jspa?messageID=143517 ...we never found out how (or if) the Sun 6120 (T4) array can be configured to ignore cache flush (sync-cache) requests from hosts. We're about to reconfigure a 6120 here for use with ZFS (S10U4), and the evil tuneable zfs_nocacheflush is not going to serve us well (there is a ZFS
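For context, zfs_nocacheflush is a system-wide /etc/system tuneable, which is exactly what makes it evil: it suppresses cache-flush requests for every pool, not just the one backed by nonvolatile cache. A sketch:

    * /etc/system -- suppress ZFS cache-flush (sync-cache) requests globally
    set zfs:zfs_nocacheflush = 1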
2009 Nov 11
0
[storage-discuss] ZFS on JBOD storage, mpt driver issue - server not responding
miro at cybershade.us said: > So at this point this looks like an issue with the MPT driver or these SAS > cards (I tested two) when under heavy load. I put the latest firmware for the > SAS card from LSI's web site - v1.29.00 without any changes, server still > locks. > > Any ideas, suggestions how to fix or work around this issue? The adapter is > supposed to be
2017 Feb 18
2
[lld] Has anybody ever run into the Solaris linker before?
Recently LLD made it to the front page of HN (yay!): https://news.ycombinator.com/item?id=13670458 This comment about the Solaris linker surprised me: https://news.ycombinator.com/item?id=13672364 """ > To me, the biggest advantage is cross compiling Not all system linkers have this problem. For example, Solaris ld(1) is perfectly capable of cross-linking any valid ELF file.
2017 Jun 30
0
Multi petabyte gluster
>Thanks for the reply. We will mainly use this for archival - near-cold storage. Archival usage is good for EC. >Anything, from your experience, to keep in mind while planning large installations? I am using 3.7.11 and the only problem is slow rebuild time when a disk fails. It takes 8 days to heal an 8TB disk. (This might be related to my EC configuration, 16+4.) 3.9+ versions have some
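As a sanity check on that figure, 8 days for an 8TB disk works out to:

    8 TB / (8 * 86400 s) ≈ 11.6 MB/s effective heal throughput per brick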
2017 Jun 29
2
Multi petabyte gluster
Thanks for the reply. We will mainly use this for archival - near-cold storage. Anything, from your experience, to keep in mind while planning large installations? Sent from my Verizon, Samsung Galaxy smartphone -------- Original message -------- From: Serkan Çoban <cobanserkan at gmail.com> Date: 6/29/17 4:39 AM (GMT-05:00) To: Jason Kiebzak <jkiebzak at gmail.com> Cc: Gluster
2012 Apr 10
2
[LLVMdev] [cfe-dev] where to send test suite errors
illumos has done a free and clear implementation of locale support for libc, and, in fact, it was taken from FreeBSD. Have a look at: https://www.illumos.org/projects/illumos-gate/repository/show/usr/src/lib/libc/port/locale I'd love it if we could get libc++ ported to illumos. I've got two bugs already filed for issues I had with the build infrastructure that our porting system turned
2017 Jun 30
0
Multi petabyte gluster
Did you test healing by increasing disperse.shd-max-threads? What are your heal times per brick now? On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <ajneil.tech at gmail.com> wrote: > We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the > rebuild times are bottlenecked by matrix operations which scale as the square > of the number of data stripes. There are
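The option referenced above is an ordinary volume setting; a sketch, with 'bigvol' and the thread count as placeholders:

    # Allow more parallel self-heal threads per brick on a disperse volume
    gluster volume set bigvol disperse.shd-max-threads 8
    # Monitor heal progress
    gluster volume heal bigvol statistics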
2018 Apr 08
0
[cfe-dev] [RFC] Open sourcing and contributing TAPI back to the LLVM community
To belatedly second Juergen, yes I think the concept of TBD files is great, and not just useful to the specific Xcode situation of proprietary libraries. For example the mapfiles[1] of Illumos are exactly analogous and used not because the libc of Illumos is closed source (it isn't) but rather to ensure compatibility across Illumos versions. The libc (shared library) ABI of Illumos is the
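For readers who have not seen one, a version-2 Illumos mapfile that versions a library's exported symbols looks roughly like this (a sketch with a hypothetical symbol name):

    $mapfile_version 2
    SYMBOL_VERSION ILLUMOS_0.1 {
        global:
            my_exported_func;
        local:
            *;
    };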
2012 Apr 10
0
[LLVMdev] [cfe-dev] where to send test suite errors
Hi Bayard, (and apologies to anyone for whom this is off-topic) On 10 Apr 2012, at 19:56, Bayard Bell wrote: > illumos has done a free and clear implementation of locale support for > libc, and, in fact, it was taken from FreeBSD. Have a look at: > > https://www.illumos.org/projects/illumos-gate/repository/show/usr/src/lib/libc/port/locale I saw that, but unfortunately it's
2017 Jun 30
2
Multi petabyte gluster
We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the rebuild times are bottlenecked by matrix operations which scale as the square of the number of data stripes. There are some savings because of larger data chunks but we ended up using 8+3 and heal times are about half compared to 16+3. -Alastair On 30 June 2017 at 02:22, Serkan Çoban <cobanserkan at gmail.com>
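A quick check of that square-law claim (assuming heal cost scales with the square of the data-stripe count k):

    16+3: k = 16, relative matrix cost 16^2 = 256
     8+3: k =  8, relative matrix cost  8^2 =  64   (4x cheaper)

The observed improvement is ~2x rather than 4x, consistent with the larger data chunks of 16+3 clawing some of that back, as noted above.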
2012 Apr 10
0
[LLVMdev] [cfe-dev] where to send test suite errors
Open bug reports, and if you could add me to the CC list that would be great. I did the Solaris port for a customer and I have a couple of diffs that I need to commit, which may fix some things for you. I'm not sure how applicable it is to Illumos, but on Solaris 10 and 11 on x86-64 they've got LLVM/Clang (self-hosting) running on top of libc++ / libcxxrt and able to build their own
2017 Jun 28
1
Multi petabyte gluster
Has anyone scaled to a multi-petabyte gluster setup? How well does erasure coding do with such a large setup? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20170628/b030376d/attachment.html>
2019 Apr 12
1
gencache.tdb: device busy
Hi Jeremy, I got some info on that topic from the illumos devs: > It's a sporadic issue, you're lucky enough to not encounter it on 4.9.5. > > I confirmed in 4.10.2, it happens: > > winbindd.log: tdb(/tmw-nas-3p/samba/var/lock/gencache.tdb): tdb_open_ex: tdb_mutex_init failed for /tmw-nas-3p/samba/var/lock/gencache.tdb: Device busy > > So either apply OS fix, or
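The truncated message does not show the suggested alternative, but a commonly cited Samba-side mitigation (an assumption here, not confirmed above) is to disable mutex-based TDB locking in smb.conf, which sidesteps tdb_mutex_init entirely:

    [global]
        # Fall back to fcntl-based TDB locking instead of robust mutexes
        tdb mutexes = no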
2007 Oct 10
0
Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation.
Just as I create a ZFS pool and copy the root partition to it... the performance seems really good, then suddenly the system hangs all my sessions and displays on the console: Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma map got 'no resources' Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma allocate fail Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0:
2017 Aug 01
1
openindiana GSSAPI failure to samba 4.6.6
2017-07-31 17:41 GMT+02:00 Greg Dickie via samba <samba at lists.samba.org>: > Hey guys, > > Thanks for the ideas. I made life easier for myself and just replaced the > SunOS (illumos) implementation with real Samba. That works very well so > we're all good. Is it just me or is Kerberos complicated? > At first, no it is not you : ) But after a while (and thanks to