Search for: 3mib

Displaying 17 results from an estimated 17 matches for "3mib".

2018 Mar 12 · 4 · Expected performance for WORM scenario
...only hopped up to around 2.1MiB/s. Perplexed, I tried it first with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My last resort was to use a single node running on ramdisk, just to 100% exclude any network shenanigans, but the write performance stayed at an absolutely abysmal 3MiB/s. Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I don't actually remember the numbers, but my test that took 2 minutes with gluster completed before I had time to blink). Writing straight to the backing SSD drives gives me a throughput of 96MiB/sec. The te...

2018 Mar 12 · 0 · Expected performance for WORM scenario
...round 2.1MiB/s. Perplexed, I tried it first > with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My > last resort was to use a single node running on ramdisk, just to 100% > exclude any network shenanigans, but the write performance stayed at an > absolutely abysmal 3MiB/s. > > Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I > don't actually remember the numbers, but my test that took 2 minutes with > gluster completed before I had time to blink). Writing straight to the > backing SSD drives gives me a throughp...

2018 Mar 13 · 5 · Expected performance for WORM scenario
...round 2.1MiB/s. Perplexed, I tried it first > with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My > last resort was to use a single node running on ramdisk, just to 100% > exclude any network shenanigans, but the write performance stayed at an > absolutely abysmal 3MiB/s. > > > > Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I > don't actually remember the numbers, but my test that took 2 minutes with > gluster completed before I had time to blink). Writing straight to the > backing SSD drives gives me...

2018 Mar 12 · 0 · Expected performance for WORM scenario
...only hopped up to around 2.1MiB/s. Perplexed, I tried it first with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My last resort was to use a single node running on ramdisk, just to 100% exclude any network shenanigans, but the write performance stayed at an absolutely abysmal 3MiB/s. Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I don't actually remember the numbers, but my test that took 2 minutes with gluster completed before I had time to blink). Writing straight to the backing SSD drives gives me a throughput of 96MiB/sec. The te...

2006 Mar 29 · 3 · load file RData which store in zip file
Dear R users, my situation: (1) I have limited workspace on my work hard disk (about 10 GiB). (2) I have a lot of data files in R workspace format (*.RData), most of them > 200 MiB. To save space I zip some of them, for instance "filename.RData" (250 MiB) becomes "filename.zip" (3MiB), which frees up a lot of disk space. Normally, if I want to use "filename.RData" for my experiment, I can load it with load("filename.RData"). Then I tried to open/load > load("filename.zip") Error: bad restore file magic number (file may...
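
For the question above, a minimal workaround sketch (my own suggestion, not taken from the thread): load() cannot read a .zip archive directly, which is why it reports "bad restore file magic number", but the contained .RData can be extracted to a temporary directory with utils::unzip() and loaded from there. The names "filename.zip" and "filename.RData" are just the ones used in the question.

    ## Sketch: extract the .RData from the zip, load it, then discard the copy.
    ## Assumes the archive filename.zip contains a single entry filename.RData.
    extracted <- unzip("filename.zip", files = "filename.RData",
                       exdir = tempdir())
    load(extracted)     # restores the saved objects into the current workspace
    unlink(extracted)   # remove the temporary uncompressed copy to save space

After loading, the objects are available as usual; only the small .zip has to stay on disk.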

2018 Mar 14 · 0 · Expected performance for WORM scenario
...Perplexed, I tried it first >> with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My >> last resort was to use a single node running on ramdisk, just to 100% >> exclude any network shenanigans, but the write performance stayed at an >> absolutely abysmal 3MiB/s. >> >> >> >> Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I >> don't actually remember the numbers, but my test that took 2 minutes with >> gluster completed before I had time to blink). Writing straight to the >>...

2018 Mar 13 · 0 · Expected performance for WORM scenario
...only hopped up to around 2.1MiB/s. Perplexed, I tried it first with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My last resort was to use a single node running on ramdisk, just to 100% exclude any network shenanigans, but the write performance stayed at an absolutely abysmal 3MiB/s. Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I don't actually remember the numbers, but my test that took 2 minutes with gluster completed before I had time to blink). Writing straight to the backing SSD drives gives me a throughput of 96MiB/sec. The te...

2018 Mar 13 · 3 · Expected performance for WORM scenario
...round 2.1MiB/s. Perplexed, I tried it first > with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My > last resort was to use a single node running on ramdisk, just to 100% > exclude any network shenanigans, but the write performance stayed at an > absolutely abysmal 3MiB/s. > > > > Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I > don't actually remember the numbers, but my test that took 2 minutes with > gluster completed before I had time to blink). Writing straight to the > backing SSD drives gives me...

2018 Mar 13 · 0 · Expected performance for WORM scenario
...only hopped up to around 2.1MiB/s. Perplexed, I tried it first with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My last resort was to use a single node running on ramdisk, just to 100% exclude any network shenanigans, but the write performance stayed at an absolutely abysmal 3MiB/s. Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I don't actually remember the numbers, but my test that took 2 minutes with gluster completed before I had time to blink). Writing straight to the backing SSD drives gives me a throughput of 96MiB/sec. The te...

2018 Mar 14 · 2 · Expected performance for WORM scenario
...round 2.1MiB/s. Perplexed, I tried it first > with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My > last resort was to use a single node running on ramdisk, just to 100% > exclude any network shenanigans, but the write performance stayed at an > absolutely abysmal 3MiB/s. > > > > Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I > don't actually remember the numbers, but my test that took 2 minutes with > gluster completed before I had time to blink). Writing straight to the > backing SSD drives gives me...

2018 Mar 13 · 1 · Expected performance for WORM scenario
...round 2.1MiB/s. Perplexed, I tried it first > with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My > last resort was to use a single node running on ramdisk, just to 100% > exclude any network shenanigans, but the write performance stayed at an > absolutely abysmal 3MiB/s. > > > > Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I > don't actually remember the numbers, but my test that took 2 minutes with > gluster completed before I had time to blink). Writing straight to the > backing SSD drives gives me...

2013 Jan 31 · 1 · Installing RHEL On Laptop.....
...sical id: 3 slot: L2-Cache size: 256KiB capacity: 256KiB capabilities: synchronous internal write-through data *-cache:2 description: L3 cache physical id: 4 slot: L3-Cache size: 3MiB capacity: 3MiB capabilities: synchronous internal write-back unified *-logicalcpu:0 description: Logical CPU physical id: 0.1 width: 64 bits capabilities: logical *-logicalcpu:1 descr...

2010 Mar 10 · 8 · [Bug 26986] New: TNT2 crashed on startup with 2.6.34-rc1
http://bugs.freedesktop.org/show_bug.cgi?id=26986 Summary: TNT2 crashed on startup with 2.6.34-rc1 Product: xorg Version: git Platform: x86 (IA32) OS/Version: Linux (All) Status: NEW Severity: normal Priority: medium Component: Driver/nouveau AssignedTo: nouveau at lists.freedesktop.org

2020 Jan 21 · 2 · Re: USB-hotplugging fails with "failed to load cgroup BPF prog: Operation not permitted" on cgroups v2
...exe="/usr/bin/libvirtd" hostname=? addr=? terminal=? res=failed' I honestly don't know how to even begin debugging what's happening, what the reason for the rejection is. -- Pol P.S. My previous message got rejected from the list because I wasn't thinking and attached 3MiB of debug logs. Since Pavel hasn't found what he was looking for in them I figure it makes little sense to upload and link them, the rest of the message is included inline.

2015 Aug 04 · 18 · [Bug 91557] New: [NVE4] freezes: HUB_INIT timed out
https://bugs.freedesktop.org/show_bug.cgi?id=91557 Bug ID: 91557 Summary: [NVE4] freezes: HUB_INIT timed out Product: xorg Version: unspecified Hardware: x86-64 (AMD64) OS: Linux (All) Status: NEW Severity: normal Priority: medium Component: Driver/nouveau Assignee: nouveau at

2020 Jan 18 · 3 · USB-hotplugging fails with "failed to load cgroup BPF prog: Operation not permitted" on cgroups v2
Hi all, I've disabled cgroups v1 on my system with the kernel boot option "systemd.unified_cgroup_hierarchy=1". Since doing so, USB hotplugging fails to work, seemingly due to a permissions problem with BPF. Please note that the technique I'm going to describe worked just fine for hotplugging USB devices to running domains until this change. Attaching / detaching USB devices

2008 Apr 04 · 1 · Driver Problem with 7150M
...NOUVEAU(0): [drm] added 1 reserved context for kernel (II) NOUVEAU(0): X context handle = 0x1 (II) NOUVEAU(0): [drm] installed DRM signal handler (II) NOUVEAU(0): Allocated 64MiB VRAM for framebuffer + offscreen pixmaps (II) NOUVEAU(0): GART: PCI DMA - using 3840KiB (II) NOUVEAU(0): GART: Allocated 3MiB as a scratch buffer (II) NOUVEAU(0): Opened GPU channel 1 Backtrace: 0: X(xf86SigHandler+0x65) [0x492fc5] 1: /lib64/libc.so.6 [0x3ea7032f80] 2: /usr/lib64/xorg/modules/drivers//nouveau_drv.so [0x7fcbae846e86] 3: /usr/lib64/xorg/modules/drivers//nouveau_drv.so(nv30UpdateArbitrationSettings+0x23) [0...