Displaying 20 results from an estimated 205 matches for "16g".
2009 Jan 15
2
i386 hypervisor seeing only ~16G RAM, amd64 required?
Hi,
I have several machines using xen-hypervisor-3.2-1-i386 from
etch-backports and they recently got upgraded to 20 or 24G RAM. I
have seen talk of a 16G RAM limit with 32-bit PAE Xen, and indeed
this is what I am seeing.
I am guessing there is still no way to get the 32bit hypervisor to
see more than 16G RAM, and I must go to 64bit.
Can I boot a 64bit hypervisor and still keep the same 32bit dom0?
What're people's experiences running 32bit domUs...
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
...is still
>> stuck that way; ovirt gui and gluster volume heal engine info both show the
>> volume fully healed, but it is not:
>> [root at ovirt1 ~]# df -h
>> Filesystem                     Size  Used Avail Use% Mounted on
>> /dev/mapper/centos_ovirt-root   20G  4.2G   16G  21% /
>> devtmpfs                        16G     0   16G   0% /dev
>> tmpfs                           16G   16K   16G   1% /dev/shm
>> tmpfs                           16G   26M   16G   1% /run
>> tmpfs                           16G     0   16G   0% /sys/fs/cgroup
>> /...
2011 Jan 12
6
ZFS slows down over a couple of days
Hi all,
I have exchanged my Dell R610 in favor of a Sun Fire 4170 M2 which has
32 GB RAM installed. I am running Sol11Expr on this host and I use it to
primarily serve Netatalk AFP shares. From day one, I have noticed that
the amount of free RAM decreased, and along with that decrease the
overall performance of ZFS dropped as well.
Now, since I am still quite a Solaris newbie, I seem to
2021 Jul 05
3
Problems with CentOS 8 kickstart
On Mon, 5 Jul 2021 at 07:15, Hooton, Gerard <g.hooton at ucc.ie> wrote:
>
> Hi All,
> I am having problems with a kickstart install of CentOS 8
> When I try to do a completely automated install using PXE/UEFI, it gets to the point where it reads the kickstart config file.
> Then I see the following message
> "kickstart install Started cancel waiting for multipath
2012 Nov 08
5
map two names into one
Thanks.
Yes. Your approach can identify:
Glaxy ace S 5830 and
S 5830 Glaxy ace
But you cannot identify, using the same program:
Iphone 4S 16 G
Iphone 4S 16G
How can I handle both at the same time?
Kind regards, Tammy
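Not part of the original thread, but one common approach to this matching problem is to normalize each name into an order-insensitive token set before comparing; a minimal Python sketch (the function and regex are illustrative, not from the thread):

```python
import re

def normalize(name: str) -> frozenset:
    """Map a product name to an order-insensitive key:
    lowercase it, glue a number to a following lone unit letter
    ('16 G' -> '16g'), then split into a set of tokens."""
    s = name.lower()
    s = re.sub(r"(\d+)\s+([a-z])\b", r"\1\2", s)
    return frozenset(s.split())

# Both word orders and both spellings of the size collapse to one key
assert normalize("Glaxy ace S 5830") == normalize("S 5830 Glaxy ace")
assert normalize("Iphone 4S 16 G") == normalize("Iphone 4S 16G")
```

Grouping records by this key then merges both spellings in one pass, which handles both cases at once.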
2014 Jan 24
4
Booting Software RAID
I installed Centos 6.x 64 bit with the minimal ISO and used two disks
in RAID 1 array.
Filesystem   Size  Used Avail Use% Mounted on
/dev/md2      97G  918M   91G   1% /
tmpfs         16G     0   16G   0% /dev/shm
/dev/md1     485M   54M  407M  12% /boot
/dev/md3     3.4T  198M  3.2T   1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
511936 blocks super 1.0 [2/2] [UU]
md3 : active raid1 sda4[0] sdb4[1]
3672901440 blocks super 1.1 [2/2] [UU]...
2012 Jun 14
5
(fwd) Re: ZFS NFS service hanging on Sunday morning
...le going on on the system. eg.,
root at server5:/tmp# /usr/local/bin/top
last pid: 3828; load avg: 4.29, 3.95, 3.84; up 6+23:11:44  07:12:47
79 processes: 78 sleeping, 1 on cpu
CPU states: 73.0% idle, 0.0% user, 27.0% kernel, 0.0% iowait, 0.0% swap
Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
784 root 17 60 -20 88M 632K sleep 270:03 13.02% nfsd
2694 root 1 59 0 1376K 672K sleep 1:45 0.69% touch
3814 root 5 59 0 30M 3928K sleep 0:00 0.32% pkgse...
2010 Oct 21
2
Bug? Mount and fstab
...swap defaults 0 0
/dev/sdb1 /gluster ext3 defaults 0 1
/etc/glusterfs/glusterfs.vol /pifs/ glusterfs defaults 0 0
[root at vm-container-0-0 ~]# df -h
Filesystem   Size  Used Avail Use% Mounted on
/dev/sda1     16G  2.6G   12G  18% /
/dev/sda5    883G   35G  803G   5% /state/partition1
/dev/sda2    3.8G  121M  3.5G   4% /var
tmpfs        7.7G     0  7.7G   0% /dev/shm
/dev/sdb1    917G  200M  871G   1% /gluster
none         7.7G  104K  7.7G   1% /var/lib/xenstored...
2008 Nov 14
23
Still more questions WRT selecting a mobo for small ZFS RAID
Like many others, I am looking to put together a SOHO NAS based on ZFS/CIFS. The plan is 6 x 1TB drives in RAIDZ2 configuration, driven via mobo with 6 SATA ports.
I've read most, if not all, of the threads here, as well as sbredon's excellent article on building a home NAS, yet I still have a number of unanswered questions.
I was leaning heavily towards the M2N-E for a while,
2011 Jul 21
0
Templates and self-knowledge
...;
>
> fdisk all solaris all
>
> boot_device any preserve
>
> filesys rootdisk.s1 16384 swap
>
> filesys rootdisk.s0 40960 /
>
> filesys rootdisk.s7 free /export
>
> <% elsif zfs_root == "c3s" %>
>
> pool rootpool auto 16g 16g mirror c3t0d0s0 c3t4d0s0
>
> fdisk c3t0d0 solaris all
>
> fdisk c3t4d0 solaris all
>
> <% else %>
>
> pool rootpool auto 16g 16g mirror <%= zfs_root %>t0d0s0 <%= zfs_root
>> %>t1d0s0
>
> fdisk <%= zfs_root %>t0d0 solaris all
>
>...
2015 Apr 02
1
mounted NFS does not show in df -h
.../shm
tmpfs                       1001M  101M  901M  11% /run
tmpfs                       1001M     0 1001M   0% /sys/fs/cgroup
s3fs                         256T     0  256T   0% /backup/cassandradb
s3fs                         256T     0  256T   0% /backup/mysql
nfs1.jokefire.com:/var/www    20G  3.1G   16G  17% /var/www
Yet, when I do a df -h on the directory I mounted the NFS share on, I see
that it's mounted via NFS as expected:
[root at web1:~] #df -h /mnt/home
Filesystem Size Used Avail Use% Mounted on
nfs1.jokefire.com:/home 20G 3.1G 16G 17% /mnt/home
So, what do you...
2007 Mar 23
1
Consolidating LVM volumes..
...my current disk setup..
[root at server1 /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-RootVol00
                       15G  1.5G   13G  11% /
/dev/md0              190M   42M  139M  24% /boot
/dev/mapper/VolGroup00-DataVol00
                       39G   16G   22G  42% /data
none                  157M     0  157M   0% /dev/shm
/dev/mapper/VolGroup00-HomeVol00
                       77G   58G   15G  80% /home
/dev/mapper/VolGroup00-VarVol00
                       16G  382M   15G   3% /var
Rather than try and reduce the size of the VarVol00 volume to...
2015 Nov 04
2
getting a CentOS6 VM on VMware ESXi platform to recognize a new disk device
...ice, no
> problem. Now, I ask - do I have to reboot the VM? Logically I hope there
> ought to be a way for me not to have to do that - but I have yet to figure
> out how to get there.
>
vmware esxi 5.5.0 (free, using vsphere client to manage), vm is minimal
centos 7 64bit. I added a 16gb vdisk and immediately see this in dmesg...
[155484.386792] vmw_pvscsi: msg type: 0x0 - MSG RING: 1/0 (5)
[155484.386796] vmw_pvscsi: msg: device added at scsi0:1:0
[155484.388250] scsi 0:0:1:0: Direct-Access     VMware   Virtual disk     1.0  PQ: 0 ANSI: 2
[155484.391275] sd 0:0:1:0: [sdb] 33554...
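For the underlying question (seeing a hot-added vdisk without rebooting), the usual mechanism is a SCSI host rescan through sysfs; a sketch to be run as root, with host paths that vary per system:

```shell
# Ask every SCSI host adapter to rescan channel/target/lun wildcards;
# a newly attached virtual disk should then appear in dmesg and /dev.
for scan in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$scan"
done
```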
2010 Jan 25
9
memory not released to dom0
After shutting down many domUs, the memory is not released to dom0.
before:
host: 8g phy memory
16g swap (4g used)
dom0: 600m
domu: consumed all memory, no new domu could be created
after shutdown many domu:
host: 8g phy memory
16g swap (4g used)
dom0: 600m
domu: 2g
Why is dom0's memory still 600M? How can more be released to dom0?
Please advise.
2010 Jul 14
5
Matrix Size
...igure out how to perform a linear
regression on a huge matrix.
i am sure this topic has passed through the email list before but could
not find anything in the archives.
i have a matrix that is 2,000,000 x 170,000; the values right now are
arbitrary.
i try to allocate this on an x86_64 machine with 16G of RAM and i get the
following:
> x <- matrix(0,2000000,170000);
Error in matrix(0, 2e+06, 170000) : too many elements specified
>
is R capable of handling data of this size? am i doing it wrong?
cheers
paul
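Back-of-the-envelope arithmetic (not in the original post) shows why this fails twice over: a dense double matrix of that shape needs terabytes of RAM, and R at the time also capped any single vector at 2^31 - 1 elements, which is what "too many elements specified" is reporting:

```python
nrow, ncol = 2_000_000, 170_000
cells = nrow * ncol                  # 340,000,000,000 elements
bytes_needed = cells * 8             # doubles are 8 bytes each
tib = bytes_needed / 2**40
print(f"{tib:.1f} TiB needed")       # ~2.5 TiB, far beyond 16G of RAM
print(cells > 2**31 - 1)             # True: over R's vector length cap
```

So even with unlimited swap the allocation cannot succeed; a sparse representation or an out-of-core method is needed at this scale.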
2015 Nov 21
5
CPU Limit in Centos
A few years ago, I vaguely recall some issue with RHEL needing a special license or something like that if you had more than a certain number of CPUs or a certain amount of RAM.
Does CentOS work fine with 2 CPUs, 16 cores, 32 threads, and 256 GB of RAM?
CentOS 6 specifically.
2006 Jun 06
3
memory.limit function not found
I have installed R 2.2.1 on Solaris 10 and am trying to increase the memory capacity (the system has 16G RAM) to 3 or 4G, but I keep getting:
> memory.limit(size=3000)
Error: couldn't find function "memory.limit"
Am I missing anything? I do that all the time under Windows.
Any help would be appreciated.
Thanks
Priscila
2013 Nov 19
6
[PATCH] Btrfs: fix very slow inode eviction and fs unmount
...scenarios I have experienced umount times higher than 15
minutes, even when there's no pending IO (after a btrfs fs sync).
A quick way to reproduce this issue:
$ mkfs.btrfs -f /dev/sdb3
$ mount /dev/sdb3 /mnt/btrfs
$ cd /mnt/btrfs
$ sysbench --test=fileio --file-num=128 --file-total-size=16G \
--file-test-mode=seqwr --num-threads=128 \
--file-block-size=16384 --max-time=60 --max-requests=0 run
$ time btrfs fi sync .
FSSync '.'
real 0m25.457s
user 0m0.000s
sys 0m0.092s
$ cd ..
$ time umount /mnt/btrfs
real 1m38.234s
user 0m0.000s
sys 1m25.760s
The same test...
2019 Nov 26
5
debug build busts memory
The linking stage of a Debug build seems to require a humongous amount of
memory. My poor little Linux machine with 16G of RAM swaps its brains out.
Waiting for it to finish (if it ever does) is like the old days when you
submitted your deck of cards and waited until the next day to see the results.
To debug a new backend, is there a way to just get the debug info for the
Target/Foo directory?
Is there a way to...
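Not from this thread, but the knobs usually suggested for this situation come from LLVM's own CMake options: serialize the link jobs and build shared libraries so each debug link is far smaller; a configuration sketch (build/source paths assumed):

```shell
# From an empty build directory next to the llvm source tree:
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Debug \
  -DBUILD_SHARED_LIBS=ON \
  -DLLVM_PARALLEL_LINK_JOBS=1 \
  -DLLVM_USE_LINKER=gold
```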
2015 Jan 30
4
HugePages - can't start guest that requires them
...of my guests and 16777216 KiB is the amount of
memory I'm trying to give to the guest.
Yes, i can see the hugepages via numastat -m and hugetlbfs is mounted
via /dev/hugepages and there is a dir structure
/dev/hugepages/libvirt/qemu (it's empty).
HugePages is big enough to accommodate the 16G i'm allocating... and
changing the perms on that directory structure to 777 doesn't work
either.
Any help is much appreciated.
HOST: http://sprunge.us/SEdc
GUEST: http://sprunge.us/VCYB
Regards,
Richard
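For scale: backing a 16 GiB guest with 2 MiB hugepages needs 16 GiB / 2 MiB = 8192 pages reserved on the host (e.g. vm.nr_hugepages=8192) before the guest starts. On the guest side this is requested with a memoryBacking element in the domain XML; a generic fragment, not taken from the poster's linked configs:

```xml
<!-- domain XML fragment: back all guest RAM with host hugepages -->
<memoryBacking>
  <hugepages/>
</memoryBacking>
```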