search for: 194m

Displaying 12 results from an estimated 12 matches for "194m".

2012 Jul 27
2 replies
Modifying a netinstall ISO image
...info-table -R -J -v -T . I got the mkisofs command line from http://www.centos.org/docs/5/html/5.2/Installation_Guide/s2-steps-make-cd.html (I couldn't find anything similar in the CentOS/RHEL 6.X documentation). This appears to work just fine, but the resulting ISO image is about 20% bigger (194M vs. 162M) and has an extra TRANS.TBL file at the top level. Any ideas what is causing this? It's no big deal as I've got things working, but I'm curious nonetheless... Alfred
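
The extra TRANS.TBL file points at a likely cause: mkisofs writes TRANS.TBL name-translation tables when given -T, and Rock Ridge (-R) already preserves long filenames, so dropping -T should remove them. A minimal sketch of the rebuild, assuming the stock isolinux layout of a netinstall tree (output name and paths are illustrative):

  mkisofs -o custom-netinstall.iso \
          -b isolinux/isolinux.bin -c isolinux/boot.cat \
          -no-emul-boot -boot-load-size 4 -boot-info-table \
          -R -J -v .

TRANS.TBL files alone are too small to explain a 32M size difference; loop-mounting both images and running diff -r between them would show what else changed.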
2009 Sep 30
9 replies
du vs df size difference
Hi all, curious issue: looking into how much disk space is being used on a machine (CentOS 5.3). When I compare the output of du vs df, I am seeing a 12GB difference, with du saying 8G used and df saying 20G used.

# du -hcx /
8.0G    total
# df -h /
Filesystem   Size  Used Avail Use% Mounted on
/dev/xvda3    22G   20G  637M  97% /

I recognize that in most cases du and df
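
The usual explanation for du reporting far less than df is space held by files that were deleted while still open (df counts the blocks, du can no longer see the names), or files hidden underneath a mount point. Two hedged checks:

  # open files whose on-disk link count is zero (deleted but still held)
  lsof +L1
  # expose anything shadowed by mounts: bind-mount / and measure it
  mount --bind / /mnt
  du -shx /mnt
  umount /mnt

Restarting whichever process holds the deleted file, typically a log writer, releases the space without a reboot.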
2005 Jan 27
2 replies
Disk Space Error
...hare/gmp-4.1.4.tar.gz': No space left on device" from the linux machine. I have over 16 GB of space on the partition that the share resides on.

[root samba-share]# df -h
Filesystem   Size  Used Avail Use% Mounted on
/dev/hda1    726M  668M   58M  92% /
/dev/hda3    194M   19M  175M  10% /var
/dev/hda4     17G  318M   16G   2% /home
[root samba-share]# pwd
/home/samba-share

smb.conf:

[global]
workgroup = Workgroup
server string = Samba Server
os level = 33
preferred master = Yes
remote announce = 192.168.168.255...
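
"No space left on device" on a filesystem with free blocks is often inode exhaustion or a quota limit rather than Samba itself. Hedged checks, assuming the write really lands on /home:

  df -i /home          # IUse% at 100 means no new files can be created
  quota -u someuser    # hypothetical user name; only applies if quotas are enabled

It is also worth confirming the share path: if a typo sent writes to the root filesystem, the 92%-full /dev/hda1 above would run out long before /home does.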
2009 Nov 16
5 replies
how to mount domU images on dom0
How do I mount domU images on dom0 so that I can chroot into them? I want to mount the image on the dom0 and chroot to it, but I am getting an error on the xen console for mydomU.
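
A hedged sketch for a raw, partitioned file-backed image (image path and device names are illustrative; LVM-backed guests additionally need vgscan/vgchange after kpartx). The domU must be shut down first, or the filesystem will be corrupted:

  losetup /dev/loop0 /var/lib/xen/images/mydomU.img
  kpartx -av /dev/loop0              # creates /dev/mapper/loop0p1, p2, ...
  mount /dev/mapper/loop0p1 /mnt
  chroot /mnt /bin/bash
  # ...work inside the guest filesystem, then:
  exit
  umount /mnt
  kpartx -dv /dev/loop0
  losetup -d /dev/loop0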
2019 Oct 12
0 replies
qemu on CentOS 8 with NVMe disk
I have CentOS 8 installed solely on one NVMe drive and it works fine and relatively quickly.

/dev/nvme0n1p4   218G   50G  168G  23% /
/dev/nvme0n1p2   2.0G  235M  1.6G  13% /boot
/dev/nvme0n1p1   200M  6.8M  194M   4% /boot/efi

You might want to partition the device (p3 is swap).

Alan

On 13/10/2019 10:38, Jerry Geis wrote:
> Hi All - I use qemu on my CentOS 7.7 box that has software RAID of 2 SSD
> disks.
>
> I installed an NVMe drive in the computer also. I tried to install CentOS 8
> on i...
2011 Feb 22
0 replies
Problem with xapi and stunnel on XenServer 5.6.1
...125 sleeping, 0 stopped, 1 zombie
Cpu(s):  4.2%us,  5.3%sy,  0.0%ni, 86.5%id,  0.3%wa,  0.0%hi,  0.0%si,  3.7%st
Mem:    314368k total,  306676k used,    7692k free,    1320k buffers
Swap:   524280k total,    6252k used,  518028k free,  109556k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
23343 root  20   0  194m 9272 3236 S 37.5  2.9 29:17.48  xapi
23570 root  20   0  5908 2696 1792 S  9.6  0.9  7:31.16  stunnel

thanks in advance
Asen
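
xapi burning CPU on XenServer is frequently an API client polling through stunnel rather than xapi misbehaving on its own. A hedged first step is to restart the toolstack (running VMs are not affected) and watch whether the load returns:

  xe-toolstack-restart
  top -b -n 1 | grep -E 'xapi|stunnel'

If the load climbs back, correlating the spikes with client sessions in /var/log/xensource.log should identify who is hammering the API.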
2006 Feb 23
7 replies
ipp2p don't block Ares
...body using ipp2p to block the latest Ares version? My system settings are:

kernel:   2.6.13
iptables: 1.3.3
ipp2p:    0.8.1_rc1

iptables -L -v output:

Chain FORWARD (policy ACCEPT 53M packets, 22G bytes)
 pkts bytes target prot opt in  out source   destination
2321K  194M DROP   all  --  any any anywhere anywhere   ipp2p v0.8.1_rc1 --kazaa --gnu --edk --dc --bit --apple --soul --winmx --ares --mute --waste --xdcc

Thanks for any help.

roberto
--
Ing. Roberto Pereyra
ContenidosOnline
BSD, Solaris and Linux servers
Technical support...
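
Newer Ares builds obfuscate their handshake, which the 0.8.x pattern set may simply no longer match. One hedged way to verify is to log matches before dropping and then watch the rule counters (rule positions are illustrative):

  iptables -I FORWARD 1 -m ipp2p --ares -j LOG --log-prefix "ipp2p-ares: "
  iptables -I FORWARD 2 -m ipp2p --ares -j DROP
  iptables -L FORWARD -v -n | head

If the counters stay at zero while Ares traffic flows, the signature is stale and an ipp2p update (or a different classifier such as l7-filter) is needed.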
2015 Feb 18
5 replies
CentOS 7: software RAID 5 array with 4 disks and no spares?
...Utilisé Dispo Uti% Monté sur
/dev/md127    226G  1,1G  213G   1% /
devtmpfs      1,4G     0  1,4G   0% /dev
tmpfs         1,4G     0  1,4G   0% /dev/shm
tmpfs         1,4G  8,5M  1,4G   1% /run
tmpfs         1,4G     0  1,4G   0% /sys/fs/cgroup
/dev/md125    194M   80M  101M  45% /boot
/dev/sde1     917G   88M  871G   1% /mnt

The root partition (/dev/md127) only shows 226 G of space. So where has everything gone?

[root@nestor:~] # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md125 : active raid1 sdc2[2] sdd2[3] sdb2[1] sda...
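
mdstat shows md125 (/boot) as raid1; if the root array md127 was also assembled as RAID 1 across all four disks, its capacity is that of a single member, which would match the 226G seen. A hedged check, plus the shape of the command that would build a RAID 5 set instead (destructive, shown only for illustration; partition names are assumptions):

  mdadm --detail /dev/md127        # look at "Raid Level" and member count
  # four members at RAID 5 would yield roughly 3x one disk's capacity:
  mdadm --create /dev/md127 --level=5 --raid-devices=4 /dev/sd[a-d]3

Converting a mounted root array in place is not practical; it would mean rebuilding from rescue media and restoring data.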
2015 Feb 18
0 replies
CentOS 7: software RAID 5 array with 4 disks and no spares?
...Utilisé Dispo Uti% Monté sur
/dev/md127    226G  1,1G  213G   1% /
devtmpfs      1,4G     0  1,4G   0% /dev
tmpfs         1,4G     0  1,4G   0% /dev/shm
tmpfs         1,4G  8,5M  1,4G   1% /run
tmpfs         1,4G     0  1,4G   0% /sys/fs/cgroup
/dev/md125    194M   80M  101M  45% /boot
/dev/sde1     917G   88M  871G   1% /mnt

The root partition (/dev/md127) only shows 226 G of space. So where has everything gone?

[root@nestor:~] # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md125 : active raid1 sdc2[2] sdd2[3] sdb2[1] sda...
2009 Aug 22
6 replies
Fw: Re: my bootlog
...inuz-2.6.30-rc6-tip root=/dev/mapper/VolGroup-lv_root ro console=tty0
        module /boot/initrd-2.6.30-rc6-tip.img

[root@localhost boot]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   77G   11G   62G  15% /
/dev/sda7                     194M   37M  148M  20% /boot
tmpfs                        1002M  672K 1002M   1% /dev/shm

[root@localhost boot]# ll
total 30922
-rw-r--r--. 1 root root  97799 2009-05-28 03:39 config-2.6.29.4-167.fc11.i686.PAE
-rw-r--r--. 1 root root  97469 2009-08-15 11:20 config-2.6.29.6-217.2.8.fc11.i686.PAE
drwxr-xr-x. 3 r...
2019 Oct 12
7 replies
qemu on CentOS 8 with NVMe disk
Hi All - I use qemu on my CentOS 7.7 box, which has a software RAID of two SSD disks. I installed an NVMe drive in the computer also. I tried to install CentOS 8 on it (the physical /dev/nvme0n1, with -hda /dev/nvme0n1 as the disk). The process started installing but is really "slow" - I was expecting that with the NVMe device it would be much quicker. Is there something I am missing how to
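
-hda attaches the disk as emulated IDE, which throttles a fast NVMe device badly; virtio with host caching off is the usual cure. A hedged sketch (memory size, CPU count, and ISO name are illustrative):

  qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
          -drive file=/dev/nvme0n1,format=raw,if=virtio,cache=none,aio=native \
          -cdrom CentOS-8-x86_64-dvd1.iso -boot d

The guest needs virtio disk drivers, which the CentOS 8 installer carries; if the install is still slow, checking the host for an in-progress RAID resync (cat /proc/mdstat) is worth a look.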
2013 Jun 13
4 replies
puppet: 3.1.1 -> 3.2.1 load increase
Hi, I recently updated from puppet 3.1.1 to 3.2.1 and noticed quite a bit of increased load on the puppetmaster machine. I'm using the Apache/passenger/rack way of puppetmastering. Main symptom: higher load on the puppetmaster machine (8 cores):

- 3.1.1: around 4
- 3.2.1: around 9-10

Any idea why there's more load on the machine with 3.2.1?
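
Puppet 3.2 changed enough master-side internals that per-compile cost can differ, but it is worth first ruling out Passenger simply spawning more workers than eight cores can feed. Hedged knobs (directive names are Passenger's; values are illustrative), set in the puppetmaster vhost:

  passenger-status                  # live worker count and request queue

  PassengerMaxPoolSize 8            # roughly one worker per core
  PassengerMaxRequests 1000         # recycle workers to curb memory growth

Comparing passenger-status output between 3.1.1 and 3.2.1 under the same agent schedule shows whether the extra load is more workers or more CPU per request.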