Displaying 20 results from an estimated 31 matches for "26gb".
2012 Jun 19
3
Memory recognition in 6.2
Hi All:
I have an HP DL380 G5 server onto which I am loading CentOS 6.2, and it does
not appear to recognize all of the RAM installed in the server. The BIOS
is reporting 26GB; however, top is reporting:
Mem: 15720140k total, 418988k used, 15301152k free, 30256k buffers
Swap: 17956856k total, 0k used, 17956856k free, 135536k cached
and free is reporting:
             total       used       free     shared    buffers     cached
Mem:      15720140     418848   15301292          0...
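A quick cross-check of the numbers above (a minimal sketch, not from the thread itself) is to convert the kB totals that top and free report into GiB:

# Convert the kB totals reported by top/free into GiB for comparison
# with the 26GB the BIOS reports (values copied from the post above).
mem_total_kb = 15720140
swap_total_kb = 17956856
print("MemTotal:  %.1f GiB" % (mem_total_kb / 1024.0 / 1024.0))   # ~15.0 GiB
print("SwapTotal: %.1f GiB" % (swap_total_kb / 1024.0 / 1024.0))  # ~17.1 GiB

In other words, the kernel is only seeing about 15 GiB (~16GB) of the installed RAM, consistent with the top output rather than with the BIOS figure.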
2006 Sep 20
6
ocfs2 - disk usage inconsistencies
Hi all.
I have a 50GB OCFS2 file system. I'm currently using ~26GB of space,
but df is reporting 43GB used. Any ideas on how to find out where the
missing 17GB went?
The file system was formatted with a 16K cluster & 4K block size.
Thanks,
Matt
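One possible contributor, offered here only as a hedged guess, is cluster-granular allocation: with a 16K cluster size every file is rounded up to a whole number of 16 KiB clusters, so a tree of many small files can make df-reported usage far larger than the sum of the file sizes. A minimal Python sketch of that rounding, using a purely hypothetical file mix:

# Estimate allocation overhead when every file is rounded up to a
# whole number of 16 KiB clusters (hypothetical mix of small files).
CLUSTER = 16 * 1024

def allocated(size_bytes, cluster=CLUSTER):
    # round up to the next multiple of the cluster size
    return ((size_bytes + cluster - 1) // cluster) * cluster

files = [2 * 1024] * 1000000          # one million 2 KiB files (made up)
logical = sum(files)
physical = sum(allocated(s) for s in files)
print("sum of file sizes: %.1f GiB" % (logical / float(2**30)))   # ~1.9 GiB
print("space allocated:   %.1f GiB" % (physical / float(2**30)))  # ~15.3 GiB

Whether that accounts for the full 17GB gap on this particular volume is something only du/df on the filesystem itself can answer.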
2008 Jul 10
1
When installing EXE, not enough space.......
When I am installing my EXE file it tells me there is not enough free space on drive C. I have been into the Wine file manager and I have 26GB free on C:\, so this is confusing, but I am slightly new to all this. Can anyone help?
Ta Very Much
Chris
2011 Jun 13
2
cause 'memory not mapped'
...on.
I am trying to do microarray normalization in R.
I use the justRMA function from the affy package and got an error about a segmentation fault.
I don't know why it happens.
I have attached the error below.
Please help me.
Thank you.
Cheers,
Won
=======================
OS : Red Hat Linux
CPU : Intel Xeon X5570
Memory : 26GB
&
OS : Ubuntu
CPU : Intel Q6600
Memory : 8GB
=======================
Loading required package: Biobase
Loading required package: methods
Welcome to Bioconductor
Vignettes contain introductory material. To view, type
'browseVignettes()'. To cite Bioconductor, see
'citation...
2008 Jan 18
1
Recover lost data from LVM RAID1
Guys,
The other day while working on my old workstation it froze, and
after a reboot I lost almost all of my data unexpectedly.
I have a RAID1 configuration with LVM. 2 IDE HDDs.
md0 .. store /boot (100MB)
--------------------------
/dev/hda2
/dev/hdd1
md1 .. store / (26GB)
/dev/hda3
/dev/hdd2
The only info that was left was what I restored after the
fresh install. It seems that the disks were having problems and weren't
syncing :(. I confess I didn't check that at first, but after I
lost the data and checked /var/log/messages I saw it.
>Fr...
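For a RAID1 that stopped syncing, /proc/mdstat is usually the first place to look; a minimal Python sketch (generic, not tied to this poster's layout) that simply dumps it:

# Dump the md RAID status. For a healthy two-disk RAID1 the status line
# ends in "[UU]"; "[U_]" or "[_U]" means one member is missing or failed.
with open("/proc/mdstat") as f:
    print(f.read())

mdadm --examine on the individual members then gives more detail once a degraded array has been spotted.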
2008 Jan 18
1
HowTo Recover Lost Data from LVM RAID1 ?
Guys,
The other day while working on my old workstation it froze, and
after a reboot I lost almost all of my data unexpectedly.
I have a RAID1 configuration with LVM. 2 IDE HDDs.
md0 .. store /boot (100MB)
--------------------------
/dev/hda2
/dev/hdd1
md1 .. store / (26GB)
--------------------------
/dev/hda3
/dev/hdd2
The only info that was left was what I restored after the
fresh install. It seems that the disks were having problems and weren't
syncing :(. I confess I didn't check that at first, but after I
lost the data and checked /var/log/m...
2004 Nov 23
1
Samba + CFS
...ess and delete files from that folder over the Samba share, but I
cannot add files to that directory over the share (which I can do on the
Linux box).
Windows Explorer says that the directory has 0 bytes of free space and
that the total space is 20MB.
The total free space should be around 26GB.
I think the problem is Samba. A CFS directory can only be accessed by
the user who set it up. Even root won't see anything.
Samba will access the directory with the user's privileges, but it tries to read the
disk-free info as root or samba or whatever, not as that user.
So it will always be 0 bec...
2013 Nov 19
0
qemu-dm memory leak?
...boot, a.k.a. xl destroy + xl create, is the only way to get it back.
This *could* be related to "[Xen-devel] qemu-system-i386: memory leak?" http://xen.markmail.org/message/chqpifrj46lxdxx2
The domUs by themselves don't use any abnormal amount of memory or swap.
To give an overview, Dom0 currently uses 26GB of swap with 8 active domUs. Swap per process:
Pid Swap Process Uptime
3766 98452 kB qemu-dm -d 29 -domain-name [hostname] -nographic -M xenpv 160 days
6100 276988 kB qemu-dm -d 42 -domain-name [hostname] -nographic -M xenpv 108 days
6790 121620 kB qemu-dm -d 46 -domain-n...
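The per-process swap column above can be rebuilt from /proc, since each /proc/<pid>/status exposes a VmSwap line on reasonably recent kernels; a minimal Python sketch (the formatting is just illustrative):

# Sum VmSwap per process, largest first, similar to the table above.
import glob, re

usage = []
for status in glob.glob("/proc/[0-9]*/status"):
    try:
        text = open(status).read()
    except IOError:                    # process exited while scanning
        continue
    name = re.search(r"^Name:\s+(.+)$", text, re.M)
    swap = re.search(r"^VmSwap:\s+(\d+) kB", text, re.M)
    if name and swap and int(swap.group(1)) > 0:
        usage.append((int(swap.group(1)), name.group(1)))

for kb, name in sorted(usage, reverse=True):
    print("%10d kB  %s" % (kb, name))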
2009 Nov 18
0
open(2), but no I/O to large files creates performance hit
..., I don't see the performance degradation.
It doesn't make sense that this would help unless there is some cache/VM
pollution resulting from the open(2).
The basic situation is that there are 8 SAS applications running a
statistical procedure. The 8 applications read the same 26GB file, but
each writes its own unique 41GB output file (using vanilla
read(2)/write(2) with 4K/8K/16K I/O sizes). All I/O goes to the same
mirrored ZFS filesystem. The performance problem occurs only if the
41GB output files already exist. In this situation, SAS will open the file to be
overwritten fo...
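Since the slowdown is tied to the output files already existing, one thing worth measuring in isolation is just the open-for-overwrite step; a minimal Python sketch, assuming the overwrite open truncates the existing file (the path is hypothetical):

# Time only the open+truncate of a pre-existing large output file,
# with no reads or writes afterwards. The path is a made-up example.
import os, time

path = "/pool/sas/output1.dat"        # hypothetical pre-existing 41GB file
t0 = time.time()
fd = os.open(path, os.O_WRONLY | os.O_TRUNC)
print("open(2) with O_TRUNC took %.3f s" % (time.time() - t0))
os.close(fd)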
2019 Oct 14
2
[RFC] Propeller: A frame work for Post Link Optimizations
...low. ThinLTO has enabled much broader adoption of whole
program optimization, by making it non-monolithic.
* For Chromium builds,
https://chromium-review.googlesource.com/c/chromium/src/+/695714/3/build/toolchain/concurrent_links.gni,
the linker process memory is set to 10GB with ThinLTO.
It was 26GB with Full LTO before that, and individual processes would run out
of memory beyond that.
* Here,
https://gotocon.com/dl/goto-chicago-2016/slides/AysyluGreenberg_BuildingADistributedBuildSystemAtGoogleScale.pdf,
a distributed build system at Google scale is shown where 5 million binary and test b...
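The gni setting referenced above is essentially a per-link memory budget; a toy version of that arithmetic (the 64GB machine and the variable names are made up, only the 10GB/26GB per-link figures come from the text):

# How many link jobs fit in RAM under the per-link figures quoted above.
machine_ram_gb = 64                   # hypothetical build machine
for mode, per_link_gb in (("ThinLTO", 10), ("Full LTO", 26)):
    jobs = max(1, machine_ram_gb // per_link_gb)
    print("%-8s ~%2dGB/link -> %d concurrent links" % (mode, per_link_gb, jobs))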
2009 Apr 12
2
Indexing speed benchmark - Xapian, Solr
I came across this benchmark between Xapian & Solr:
http://www.anur.ag/blog/2009/03/xapian-and-solr/
According to the benchmark, a doc set that took Solr 34 min to index took Xapian 7 hours. Solr's index is also much smaller: 2.5GB versus Xapian's 8.9GB.
I'm new to Xapian. I'm just wondering whether results like these are typical. Are indexing speed and index size known issues in Xapian? Or is
2014 Jan 16
0
[PATCH net-next v4 3/6] virtio-net: auto-tune mergeable rx buffer size for improved performance
...ark CPUs. Trunk includes
SKB rx frag coalescing.
net-next w/ virtio_net before 2613af0ed18a (PAGE_SIZE bufs): 14642.85Gb/s
net-next (MTU-size bufs): 13170.01Gb/s
net-next + auto-tune: 14555.94Gb/s
Jason Wang also reported a throughput increase on mlx4 from 22Gb/s
using MTU-sized buffers to about 26Gb/s using auto-tuning.
Signed-off-by: Michael Dalton <mwdalton at google.com>
---
v2->v3: Remove per-receive queue metadata ring. Encode packet buffer
base address and truesize into an unsigned long by requiring a
minimum packet size alignment of 256. Permit attempts to fill...
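The v2->v3 note above packs a buffer's base address and its truesize into one unsigned long by forcing 256-byte alignment; the sketch below illustrates that encoding idea in Python (it is not the kernel's actual code, just the bit-packing trick it describes):

# With 256-byte alignment the low 8 bits of the base address are zero,
# so a truesize token can live there and be recovered on the way out.
ALIGN = 256

def encode(base, truesize):
    assert base % ALIGN == 0 and truesize % ALIGN == 0
    token = truesize // ALIGN - 1      # 0..255 for truesize up to 64 KiB
    assert 0 <= token < ALIGN
    return base | token

def decode(ctx):
    base = ctx & ~(ALIGN - 1)
    truesize = ((ctx & (ALIGN - 1)) + 1) * ALIGN
    return base, truesize

ctx = encode(0x7f3a12340000, 1536)
print(decode(ctx) == (0x7f3a12340000, 1536))   # True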
2019 Oct 17
2
[RFC] Propeller: A frame work for Post Link Optimizations
...build/toolchain/concurrent_links.gni, the linker process memory is set to 10GB with
> ThinLTO.
> It was 26GB with Full LTO before that, and individual processes would run out
> of memory beyond that.
>
> * Here,
>
> https://gotocon.com/dl/goto-chicago-2016/slides/AysyluGreenberg_BuildingADistributedBuildSystemAtGoogleScale.pdf
2014 Jan 07
0
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...ark CPUs. Trunk includes
SKB rx frag coalescing.
net-next w/ virtio_net before 2613af0ed18a (PAGE_SIZE bufs): 14642.85Gb/s
net-next (MTU-size bufs): 13170.01Gb/s
net-next + auto-tune: 14555.94Gb/s
Jason Wang also reported a throughput increase on mlx4 from 22Gb/s
using MTU-sized buffers to about 26Gb/s using auto-tuning.
Signed-off-by: Michael Dalton <mwdalton at google.com>
---
v2: Add per-receive queue metadata ring to track precise truesize for
mergeable receive buffers. Remove all truesize approximation. Never
try to fill a full RX ring (required for metadata ring in v2).
d...
2019 Oct 11
2
[RFC] Propeller: A frame work for Post Link Optimizations
Is there large value from deferring the block ordering to link time? That
is, does the block layout algorithm need to consider global layout issues
when deciding which blocks to put together and which to relegate to the
far-away part of the code?
Or, could the propeller-optimized compile step instead split each function
into only 2 pieces -- one containing an "optimally-ordered" set of
2019 Oct 18
3
[RFC] Propeller: A frame work for Post Link Optimizations
...build/toolchain/concurrent_links.gni, the linker process memory is set to 10GB with
> ThinLTO.
> It was 26GB with Full LTO before that, and individual processes would run out
> of memory beyond that.
>
> * Here,
>
> https://gotocon.com/dl/goto-chicago-2016/slides/AysyluGreenberg_BuildingADistributedBuildSystemAtGoogleScale.pdf
2014 Jan 07
10
[PATCH net-next v2 1/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
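The fallback described above is a simple loop: try the highest order first and step down to order-0 if the allocation fails. A rough sketch of that shape in Python (try_alloc_pages stands in for the real page allocator and is not a kernel API):

# Try a high-order allocation first, then successively smaller orders.
def alloc_frag(try_alloc_pages, max_order=3):
    for order in range(max_order, -1, -1):     # e.g. 3, 2, 1, 0
        pages = try_alloc_pages(order)         # 2**order contiguous pages
        if pages is not None:
            return pages, order
    return None, None

# Toy allocator that can only satisfy order-1 and order-0 requests.
pages, order = alloc_frag(lambda o: ("pages", 2 ** o) if o <= 1 else None)
print(order)                                   # 1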
2014 Jan 08
3
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...rag coalescing.
>
> net-next w/ virtio_net before 2613af0ed18a (PAGE_SIZE bufs): 14642.85Gb/s
> net-next (MTU-size bufs): 13170.01Gb/s
> net-next + auto-tune: 14555.94Gb/s
>
> Jason Wang also reported a throughput increase on mlx4 from 22Gb/s
> using MTU-sized buffers to about 26Gb/s using auto-tuning.
>
> Signed-off-by: Michael Dalton <mwdalton at google.com>
> ---
> v2: Add per-receive queue metadata ring to track precise truesize for
> mergeable receive buffers. Remove all truesize approximation. Never
> try to fill a full RX ring (required...