Displaying 15 results from an estimated 15 matches for "119m".
2012 Apr 12
1
6.2 x86_64 "mtrr_cleanup: can not find optimal value"
...55M
Apr 11 17:25:36 kernel: gran_size: 64M chunk_size: 1G num_reg: 7 lose cover RAM: 55M
Apr 11 17:25:36 kernel: gran_size: 64M chunk_size: 2G num_reg: 8 lose cover RAM: 55M
Apr 11 17:25:36 kernel: gran_size: 128M chunk_size: 128M num_reg: 6 lose cover RAM: 119M
Apr 11 17:25:36 kernel: gran_size: 128M chunk_size: 256M num_reg: 7 lose cover RAM: 119M
Apr 11 17:25:36 kernel: gran_size: 128M chunk_size: 512M num_reg: 8 lose cover RAM: 119M
Apr 11 17:25:36 kernel: gran_size: 128M chunk_size: 1G num_reg: 7 lose cover RA...
2007 Sep 14
3
Convert Raid-Z to Mirror
Is there a way to convert a 2 disk raid-z file system to a mirror without backing up the data and restoring?
We have this:
bash-3.00# zpool status
pool: archives
state: ONLINE
scrub: none requested
config:
NAME        STATE   READ WRITE CKSUM
archives    ONLINE     0     0     0
  raidz1    ONLINE     0     0     0
    c1t2d0  ONLINE     0     0     0
2010 Feb 19
0
[PATCH] Help reduce size of iso by symlinking initrds from isolinux/ and EFI/boot if md5s match
116M ovirt-node-image/ovirt-node-image.iso
119M ovirt-node-image.iso.edited.iso
---
edit-livecd.py | 24 +++++++++++++++++++++++-
1 files changed, 23 insertions(+), 1 deletions(-)
diff --git a/edit-livecd.py b/edit-livecd.py
index 279b225..ebcb7a6 100644
--- a/edit-livecd.py
+++ b/edit-livecd.py
@@ -26,7 +26,7 @@ import shutil
import subpro...
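The patch subject above describes deduplicating identical initrd copies via symlinks. A minimal sketch of that idea, assuming the md5-compare-then-symlink approach the subject names (file names here are illustrative, not taken from edit-livecd.py):

```shell
# Hypothetical sketch: when two copies of an initrd hash identically,
# keep one and replace the other with a relative symlink.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/isolinux" "$tmp/EFI/boot"
echo "initrd-payload" > "$tmp/isolinux/initrd0.img"
cp "$tmp/isolinux/initrd0.img" "$tmp/EFI/boot/initrd0.img"

a=$(md5sum "$tmp/isolinux/initrd0.img" | cut -d' ' -f1)
b=$(md5sum "$tmp/EFI/boot/initrd0.img" | cut -d' ' -f1)
if [ "$a" = "$b" ]; then
    # replace the duplicate with a relative symlink back to isolinux/
    ln -sf ../../isolinux/initrd0.img "$tmp/EFI/boot/initrd0.img"
fi
readlink "$tmp/EFI/boot/initrd0.img"
```

Since both trees end up pointing at one payload, the ISO only carries the initrd bytes once, which is where the ~3M saving in the sizes above would come from.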
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...r volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB + 49.1TB
= 196,4 TB, but df shows:
[root at stor1 ~]# df -h
Filesystem              Size  Used  Avail  Use%  Mounted on
/dev/sda2                48G   21G    25G   46%  /
tmpfs                    32G   80K    32G    1%  /dev/shm
/dev/sda1               190M   62M   119M   35%  /boot
/dev/sda4               395G  251G   124G   68%  /data
/dev/sdb1                26T  601G    25T    3%  /mnt/glusterfs/vol0
/dev/sdc1                50T   15T    36T   29%  /mnt/glusterfs/vol1
stor1data:/volumedisk0   76T  1,6T    74T    3%  /volumedisk0
stor1data:/volumedisk1...
2003 Jun 07
1
Wish-to-have in Asterisk
Dear Pals
I think all of us in this business (or most) are looking at the back of the
Corn Flakes box, wishing to find a NAT-friendly VoIP solution. That's
ridiculous, I know, but what about this wish list (personal point of view):
UPnP support for Asterisk
STUN support for Asterisk
Small embedded Asterisk (just to redirect IAX), SIP-IAX, H.323-IAX or
something like that, on the endpoint.
As a Nat
2011 Jan 12
2
Samba Share Access Delay !
> Hello Samba Users,
>
>
>
> I am using Samba for our project to share folders between a Windows Server
> 2003 machine and a RedHat Linux machine. I am facing issues with Samba
> shares (Samba Version 3.5.5 for RHEL 4 x86_64). The scenario is as below:
>
>
> The windows machine has a couple of shared folders, one of them being
> C:\output
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...49.1TB + 49.1TB +49.1TB
> = 196,4 TB, but df shows:
>
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda2 48G 21G 25G 46% /
> tmpfs 32G 80K 32G 1% /dev/shm
> /dev/sda1 190M 62M 119M 35% /boot
> /dev/sda4 395G 251G 124G 68% /data
> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 76T 1,6T 74T 3% /volumedisk0
>...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
....1TB = 196,4 TB, but df shows:
>>
>> [root at stor1 ~]# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sda2 48G 21G 25G 46% /
>> tmpfs 32G 80K 32G 1% /dev/shm
>> /dev/sda1 190M 62M 119M 35% /boot
>> /dev/sda4 395G 251G 124G 68% /data
>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
>> stor1data:/volumedisk0
>> 76T 1,6T 74T 3...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...f shows:
>>>
>>> [root at stor1 ~]# df -h
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/sda2 48G 21G 25G 46% /
>>> tmpfs 32G 80K 32G 1% /dev/shm
>>> /dev/sda1 190M 62M 119M 35% /boot
>>> /dev/sda4 395G 251G 124G 68% /data
>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>>> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
>>> stor1data:/volumedisk0
>>>...
2012 Jun 14
5
(fwd) Re: ZFS NFS service hanging on Sunday morning
...3814 root 5 59 0 30M 3928K sleep 0:00 0.32% pkgserv
3763 root 1 60 0 8400K 1256K sleep 0:02 0.20% zfs
3826 root 1 52 0 3516K 2004K cpu/1 0:00 0.05% top
3811 root 1 59 0 7668K 1732K sleep 0:00 0.02% pkginfo
1323 noaccess 18 59 0 119M 1660K sleep 4:47 0.01% java
174 root 50 59 0 8796K 1208K sleep 1:47 0.01% nscd
332 root 1 49 0 2480K 456K sleep 0:06 0.01% dhcpagent
8 root 15 59 0 14M 640K sleep 0:07 0.01% svc.startd
1236 root 1 59 0 15M 5172K sleep 2:06...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...;
>>>> [root at stor1 ~]# df -h
>>>> Filesystem Size Used Avail Use% Mounted on
>>>> /dev/sda2 48G 21G 25G 46% /
>>>> tmpfs 32G 80K 32G 1% /dev/shm
>>>> /dev/sda1 190M 62M 119M 35% /boot
>>>> /dev/sda4 395G 251G 124G 68% /data
>>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>>>> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
>>>> stor1data:/volumedisk0
>>>>...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...t; [root at stor1 ~]# df -h
>>>>> Filesystem Size Used Avail Use% Mounted on
>>>>> /dev/sda2 48G 21G 25G 46% /
>>>>> tmpfs 32G 80K 32G 1% /dev/shm
>>>>> /dev/sda1 190M 62M 119M 35% /boot
>>>>> /dev/sda4 395G 251G 124G 68% /data
>>>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>>>>> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
>>>>> stor1data:/volumedisk0
>...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...1 ~]# df -h
>>>>>> Filesystem Size Used Avail Use% Mounted on
>>>>>> /dev/sda2 48G 21G 25G 46% /
>>>>>> tmpfs 32G 80K 32G 1% /dev/shm
>>>>>> /dev/sda1 190M 62M 119M 35% /boot
>>>>>> /dev/sda4 395G 251G 124G 68% /data
>>>>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>>>>>> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
>>>>>> stor1data:...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...>>>>>> Filesystem Size Used Avail Use% Mounted on
>>>>>>> /dev/sda2 48G 21G 25G 46% /
>>>>>>> tmpfs 32G 80K 32G 1% /dev/shm
>>>>>>> /dev/sda1 190M 62M 119M 35% /boot
>>>>>>> /dev/sda4 395G 251G 124G 68% /data
>>>>>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>>>>>>> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
>>>>>>...
2019 Apr 30
6
Disk space and RAM requirements in docs
...clang/tools/libclang/CMakeFiles/libclang.dir
122M build/tools/clang/tools/libclang/CMakeFiles
122M build/tools/clang/tools/libclang
120M build/tools/clang/lib/Serialization/CMakeFiles/clangSerialization.dir
120M build/tools/clang/lib/Serialization/CMakeFiles
120M build/tools/clang/lib/Serialization
119M build/tools/clang/unittests/Basic
119M build/lib/Target/NVPTX/CMakeFiles/LLVMNVPTXCodeGen.dir
119M build/lib/Target/NVPTX/CMakeFiles
111M build/lib/CodeGen/GlobalISel/CMakeFiles/LLVMGlobalISel.dir
111M build/lib/CodeGen/GlobalISel/CMakeFiles
111M build/lib/CodeGen/GlobalISel
108M build/lib/Target/L...
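The listing above looks like per-directory `du` output sorted largest-first; a minimal sketch under that assumption (the paths here are hypothetical stand-ins for the LLVM build tree):

```shell
# Sketch: summarize per-directory sizes in MiB, largest first, the way
# the build-tree listing above appears to have been produced.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/build/lib" "$tmp/build/tools"
head -c 3145728 /dev/zero > "$tmp/build/lib/big.o"      # 3 MiB object file
head -c 1048576 /dev/zero > "$tmp/build/tools/small.o"  # 1 MiB object file
# -m reports sizes in MiB; sort -rn puts the largest directories first
du -m "$tmp/build" | sort -rn
```

Running something like `du -m build | sort -rn | head` over a real build tree yields exactly this kind of ranked listing, which is useful for documenting disk-space requirements.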