search for: summit2010

Displaying 17 results from an estimated 17 matches for "summit2010".

2010 Jul 09
4
resilver of older root pool disk
This is a hypothetical question that could actually happen: suppose a root pool is a mirror of c0t0d0s0 and c0t1d0s0 and for some reason c0t0d0s0 goes offline, but comes back online after a shutdown. The primary boot disk would then be c0t0d0s0, which would have much older data than c0t1d0s0. Under normal circumstances ZFS would know that c0t0d0s0 needs to be resilvered. But in this case
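A minimal sketch of the normal (non-hypothetical) recovery path, assuming the conventional root pool name "rpool" and the device names from the scenario above; whether ZFS still treats c0t0d0s0 as the stale side when you boot from it is exactly the question being asked:

    # check mirror health; the disk that dropped out should show as degraded or faulted
    zpool status rpool
    # bring the returned disk back online; ZFS then resilvers it from the current side
    zpool online rpool c0t0d0s0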
2010 Nov 14
0
[LLVMdev] tot clang/llvm and tot gcc performance comparison
...for us to investigate this. One question: can you tell if gcc is inlining significantly more than llvm? We have reports that this is one of the issues plaguing eon performance. There are also some relatively well-known SPEC optimizations that we haven't implemented, e.g. http://gcc.gnu.org/wiki/summit2010?action=AttachFile&do=get&target=meissner2.pdf Are there more? Evan On Nov 13, 2010, at 12:56 PM, Xinliang David Li wrote: > > Hi, I have looked at the LLVM code generation quality using small test cases and in general it is better than I thought and in some cases better than gcc....
2010 Nov 13
3
[LLVMdev] tot clang/llvm and tot gcc performance comparison
Hi, I have looked at the LLVM code generation quality using small test cases and in general it is better than I thought, and in some cases better than gcc. However, there are still some gaps in SPEC performance. I have not looked at the root cause of those gaps. Anyone who cares about LLVM performance needs to take this seriously. For a fair comparison, I used -fno-strict-aliasing in gcc to turn off
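A hedged sketch of the flag pairing implied by the post (benchmark file names and the -O2 level are assumptions, not taken from the thread): strict aliasing is disabled on the gcc side so that gcc cannot exploit type-based alias analysis the way llvm of that era did not.

    # gcc with type-based aliasing turned off, to level the field
    gcc   -O2 -fno-strict-aliasing -o bench.gcc   bench.c
    # clang/llvm with its defaults
    clang -O2                      -o bench.clang bench.c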
2010 Sep 12
3
Failed zfs send "invalid backup stream".............
I'm trying to replicate a 300 GB pool with this command: zfs send alpha@3 | zfs receive -F omega. About 2 hours into the process it fails with this error: "cannot receive new filesystem stream: invalid backup stream". I have tried setting the target read only (zfs set readonly=on omega) and also disabled Time Slider, thinking it might have something to do with it. What could be
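As a hedged troubleshooting sketch, using the dataset names from the post and a hypothetical scratch path, capturing the stream to a file first can separate a bad stream from a problem on the receiving side:

    # write the stream out, then replay it; if the replay also fails,
    # the stream (or the source snapshot) is the likely culprit
    zfs send alpha@3 > /var/tmp/alpha3.stream
    zfs receive -F omega < /var/tmp/alpha3.stream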
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
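For reference, a minimal sketch of how dedicated log (ZIL) and cache (L2ARC) devices are attached; the pool and device names here are hypothetical. A separate log device only helps synchronous write latency, while L2ARC caches reads.

    # mirrored SLOG for the ZIL (two of the SSDs)
    zpool add tank log mirror c2t0d0 c2t1d0
    # the other two SSDs as L2ARC read cache
    zpool add tank cache c2t2d0 c2t3d0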
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now. Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisy hardware less susceptible to
2010 Aug 21
8
ZFS with Equallogic storage
I'm planning on setting up an NFS server for our ESXi hosts and plan on using a virtualized Solaris or Nexenta host to serve ZFS over NFS. The storage I have available is provided by Equallogic boxes over 10GbE iSCSI. I am trying to figure out the best way to provide both performance and resiliency, given that the Equallogic provides the redundancy. Since I am hoping to provide a 2TB
2010 Sep 09
37
resilver = defrag?
A) Resilver = Defrag. True/false? B) If I buy larger drives and resilver, does defrag happen? C) Does zfs send | zfs receive mean it will defrag? -- This message posted from opensolaris.org
2010 Oct 16
4
resilver question
Hi all, I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question? -- Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is essential that the curriculum is presented
2019 Feb 07
2
RFC: [DebugInfo] Improving Debug Information in LLVM to Recover Optimized-out Function Parameters
...rfstd.org/ShowIssue.php?issue=100909.1. [2] Jakub Jelínek, Roland McGrath, Jan Kratochvíl, and Alexandre Oliva. DWARF DW_TAG_call_site extension proposal. http://dwarfstd.org/ShowIssue.php?issue=100909.2 [3] J. Jelinek, “Improving debug info for optimized away parameters”, https://gcc.gnu.org/wiki/summit2010?action=AttachFile&do=view&target=jelinek.pdf [4] FOSDEM talk http://bofh.nikhef.nl/events/FOSDEM/2019/K.4.201/llvm_debug.webm [5] Elfutils https://sourceware.org/elfutils/
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices Guide says something like "Don't put more than ___ disks into a single vdev." At first, I challenged this idea, because I see no reason why a 21-disk raidz3 would be bad. It seems like a good thing. I was operating on the assumption that resilver time was limited by sustainable throughput of disks, which
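Under the poster's stated assumption, a rough lower bound would be resilver time ≈ data per disk / sustained disk throughput; as a worked example, a full 2TB drive at roughly 100MB/s works out to about 2,000,000MB / 100MB/s = 20,000s, or about 5.5 hours, independent of how many disks share the vdev.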
2019 Feb 08
3
RFC: [DebugInfo] Improving Debug Information in LLVM to Recover Optimized-out Function Parameters
...e.php?issue=100909.1. > [2] Jakub Jelínek, Roland McGrath, Jan Kratochvíl, and Alexandre Oliva. DWARF DW_TAG_call_site extension proposal. http://dwarfstd.org/ShowIssue.php?issue=100909.2 > [3] J. Jelinek, “Improving debug info for optimized away parameters”, https://gcc.gnu.org/wiki/summit2010?action=AttachFile&do=view&target=jelinek.pdf > [4] FOSDEM talk http://bofh.nikhef.nl/events/FOSDEM/2019/K.4.201/llvm_debug.webm > [5] Elfutils https://sourceware.org/elfutils/ ...
2010 Oct 19
8
Balancing LVOL fill?
Hi all, I have this server with some 50TB of disk space. It originally had 30TB on WD Greens, was filled quite full, and another storage chassis was added. Now the space problem is gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2TB drive. Now, those green drives suck quite hard, but not
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M. Right? This is one of the reasons the raidzN resilver code is inefficient. Since you end up waiting for the
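As a worked instance of that arithmetic (the disk counts are hypothetical): with the default 128K recordsize, an 11-disk raidz3 has M = 8 data disks, so each data disk would hold about 128K / 8 = 16K of every block, versus the full 128K per disk in a mirror.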
2019 Feb 08
2
RFC: [DebugInfo] Improving Debug Information in LLVM to Recover Optimized-out Function Parameters
...akub Jelínek, Roland McGrath, Jan Kratochvíl, and Alexandre Oliva. DWARF DW_TAG_call_site extension proposal. http://dwarfstd.org/ShowIssue.php?issue=100909.2 > [3] J. Jelinek, “Improving debug info for optimized away parameters”, https://gcc.gnu.org/wiki/summit2010?action=AttachFile&do=view&target=jelinek.pdf > [4] FOSDEM talk http://bofh.nikhef.nl/events/FOSDEM/2019/K.4.201/llvm_debug.webm > [5] Elfutils https://sourceware.org/elfutils/ ...
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago [1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane -- but we recently got one of the new SC847-based models with 24 disks up front and 12 in the
2010 Oct 08
74
Performance issues with iSCSI under Linux
Hi! We're trying to pinpoint our performance issues and we could use all the help the community can provide. We're running the latest version of Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for the cache, 50GB Samsung SSD for the ZIL, 10GbE on a dedicated switch, 11x pairs of 15K HDDs for the pool). We're connecting a single Linux