search for: 23g

Displaying 19 results from an estimated 19 matches for "23g".

2009 Jun 28
1
Partitioning for future.
...machine intended to have mainly mysql database, apache and some web data. I didn't use LVM for / and /boot during the installation. Could I easily extend the /var partition in the future when I add another disk? Filesystem Size Used Avail Use% Mounted on /dev/cciss/c0d0p6 23G 432M 22G 2% / /dev/mapper/VolGroup00-LogVol00 5.0G 139M 4.7G 3% /home /dev/mapper/VolGroup00-LogVol03 98G 275M 93G 1% /var /dev/mapper/VolGroup00-LogVol02 5.0G 2.9G 1.9G 61% /usr /dev/cciss/c0d0p1 99M 19M...
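A minimal sketch of how the /var logical volume (VolGroup00-LogVol03 above) could be grown once a second disk is added, assuming the new disk shows up as /dev/sdb and the filesystem is ext3/ext4; both of those details are illustrative, not taken from the post:

pvcreate /dev/sdb                          # make the new disk an LVM physical volume
vgextend VolGroup00 /dev/sdb               # add it to the existing volume group
lvextend -L +20G /dev/VolGroup00/LogVol03  # grow the LV backing /var by 20G
resize2fs /dev/VolGroup00/LogVol03         # grow the filesystem to match (online for ext3/ext4)

Note that / itself (/dev/cciss/c0d0p6) is a plain partition, so only the LVM-backed mounts can be extended this way.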
2004 Sep 09
1
Bug in rsync? (--delete[-after])
...fore and after status of the most troublesome disk; the first is the from-disk and the last two the to-disk; of these, the first was taken after using --delete-after and the last after using --delete: hox /# df -h /mn/hox/u7 Filesystem Size Used Available Capacity Mounted enhalvtil#dynmet2 29G 23G 6003M 80% /mn/hox/u7 hox /# ssh anvil df -h /mn/anvil/dynmet-u2 Filesystem Size Used Available Capacity tera3scsi#dynmet-u2 52G 50G 2734M 95% /mn/anvil/dynmet-u2 hox /# ssh anvil df -h /mn/anvil/dynmet-u2 Filesystem Size Used Available Capac...
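For context on the two flags being compared: with plain --delete (which in rsync releases of that era behaved like --delete-before), extraneous files on the destination are removed before the transfer, so their space is reclaimed up front; with --delete-after they are kept until the whole transfer has finished, so the destination temporarily needs room for both the old and the new data. A hedged example with placeholder paths:

rsync -a --delete /src/ remote:/dest/        # old files removed first, space freed early
rsync -a --delete-after /src/ remote:/dest/  # old files kept until the transfer completes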
2016 Apr 07
2
Suddenly increased my hard disk
Hi John, Ashish, Still no luck. I have tried your commands in the root folder. It shows a max size of 384 only in the home directory, but if I try df -h it shows 579. Is there any way to find the recycle bin folder? On Thu, Apr 7, 2016 at 2:16 PM, Ashish Yadav <gwalashish at gmail.com> wrote: > Hi Chandran, > > > On Thu, Apr 7, 2016 at 10:38 AM, Chandran Manikandan <tech2mani at
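One common cause of df reporting far more usage than du can account for is files that have been deleted but are still held open by a running process: df counts their blocks, du cannot see them. Assuming lsof is installed, a quick check (illustrative, not the thread's confirmed answer):

lsof +L1                               # open files with link count zero, i.e. deleted but still open
lsof 2>/dev/null | grep '(deleted)'    # alternative listing of deleted-but-open files

Restarting or signalling the process holding such a file releases the space.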
2018 May 08
1
mount failing client to gluster cluster.
...ch.co.nz:/gv0 /isos works fine. =========== root at kvm01:/var/lib/libvirt# df -h Filesystem Size Used Avail Use% Mounted on udev 7.8G 0 7.8G 0% /dev tmpfs 1.6G 9.2M 1.6G 1% /run /dev/mapper/kvm01--vg-root 23G 3.8G 18G 18% / tmpfs 7.8G 0 7.8G 0% /dev/shm tmpfs 5.0M 4.0K 5.0M 1% /run/lock tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup /dev/mapper/kvm01--vg-home 243G 61M 231G 1% /home /dev/mapper/kvm01--vg-tm...
2018 Feb 02
3
Run away memory with gluster mount
..._size >>>> # grep itable <client-statedump> | grep purge | wc -l >>>> # grep itable <client-statedump> | grep purge_size >>>> >>> >>> Had to restart the test and have been running for 36 hours now. RSS is >>> currently up to 23g. >>> >>> Working on getting a bug report with link to the dumps. In the mean >>> time, I'm including the results of your above queries for the first >>> dump, the 18 hour dump, and the 36 hour dump: >>> >>> # grep itable glusterdump.153904.d...
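For readers joining the thread here: the figures above come from GlusterFS client statedumps. A statedump of the fuse client can usually be produced by sending SIGUSR1 to the glusterfs client process, with the dump written under /var/run/gluster/ (the exact path can vary by build); a rough sketch, assuming a single client process on the host:

pid=$(pidof glusterfs)                    # assumes only one glusterfs client process is running
kill -USR1 "$pid"                         # ask the process to write a statedump
dump=$(ls -t /var/run/gluster/glusterdump.* | head -1)
grep itable "$dump" | grep lru | wc -l
grep itable "$dump" | grep lru_size
grep itable "$dump" | grep purge | wc -l
grep itable "$dump" | grep purge_size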
2018 Feb 01
0
Run away memory with gluster mount
...e <client-statedump> | grep lru_size >>> # grep itable <client-statedump> | grep purge | wc -l >>> # grep itable <client-statedump> | grep purge_size >> >> Had to restart the test and have been running for 36 hours now. RSS is >> currently up to 23g. >> >> Working on getting a bug report with link to the dumps. In the mean >> time, I'm including the results of your above queries for the first >> dump, the 18 hour dump, and the 36 hour dump: >> >> # grep itable glusterdump.153904.dump.1517104561 | grep ac...
2016 Apr 07
0
Suddenly increased my hard disk
...-h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_free-lv_root 50G 7.7G 39G 17% / tmpfs 12G 0 12G 0% /dev/shm /dev/sda1 477M 146M 306M 33% /boot /dev/mapper/vg_free-lv_home 30G 7.1G 23G 25% /home /dev/mapper/vg_free-lvpgsql 30G 671M 30G 3% /var/lib/pgsql /dev/mapper/vg_free-lvimages 150G 61G 90G 41% /var/lib/libvirt/images /dev/mapper/vgdata-lvhome2 1.8T 470G 1.4T 26% /home2 # du -hs /home/* 398M...
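When df and du disagree like this, a quick way to see which top-level directories account for the space on a given filesystem is a depth-limited du that stays on that one filesystem; this assumes GNU du and sort, which support -x and -h:

du -xh --max-depth=1 / | sort -h          # per-directory totals on the root filesystem, smallest to largest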
2008 Feb 22
0
mddisk (ramdisk) root system - image size limit?
...T after system up] 1. dd if=/dev/zero of=/image bs=1k count=128k 2. mdconfig -a -t vnode -f /image -u 0 3. bsdlabel -Bw /dev/md0 auto 4. newfs /dev/md0a 5. mount /dev/md0a /mnt When the system is up normally: df -h Filesystem Size Used Avail Capacity Mounted on /dev/ad0s1a 23G 1.5G 20G 7% / devfs 1.0K 1.0K 0B 100% /dev /dev/md0a 124M 4.0K 114M 0% /mnt the system does not panic. But if I configure loader.rc and add "load -t mfs_root /image" or "load -t md_image /image", it will panic. I tried configuring the kerne...
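For readability, the five setup commands quoted in the snippet, one per line with comments (128k blocks of 1k gives a 128 MB image, matching the ~124M md0a filesystem shown above):

dd if=/dev/zero of=/image bs=1k count=128k   # create a 128 MB backing file
mdconfig -a -t vnode -f /image -u 0          # attach it as memory disk md0
bsdlabel -Bw /dev/md0 auto                   # write bootcode and a default label
newfs /dev/md0a                              # create a UFS filesystem on the a partition
mount /dev/md0a /mnt                         # mount it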
2007 Sep 03
1
Re: OT: Suggestions for RAID HW for 2 SATA drives in
On 31 August 2007, Phil Schaffner <Philip.R.Schaffner at NASA.gov> wrote: > > Message: 21 > <snip> > > As discussed recently on-list, VMware CPU requirements to support > > virtualization are not nearly so rigorous as for Xen. You are > > probably OK with VMware on most any relatively modern x86 or x86_64 > > CPU. > >
2007 Sep 09
0
Re: OT: Suggestions for RAID HW for 2 SATA drives in
...> No new partition is needed. VMware virtual disks are created as files > within the host OS. That's not a lot of free space to play with, but > enough to experiment with. Here's a sample of a directory of assorted > VMware VMs: > > [prs at lynx vmware]$ du -sh * > 23G C5_64 > 4.0G CentOS_3_9 > 4.7G CentOS-QA > 6.7G fedora-7-i386 > 4.1G PCLinuxOS_2007 > 21G W2K_Pro > 22G XP Phil: Thank you! Not sure if I have enough space available, but I am going to try this. :-) Lanny
2018 Feb 03
0
Run away memory with gluster mount
...# grep itable <client-statedump> | grep purge | wc -l > # grep itable <client-statedump> | grep purge_size > > > Had to restart the test and have been running for 36 hours > now. RSS is > currently up to 23g. > > Working on getting a bug report with link to the dumps. In > the mean > time, I'm including the results of your above queries for > the first > dump, the 18 hour dump, and the 36 hour dump: > >...
2016 Apr 07
1
Suddenly increased my hard disk
...e Used Avail Use% Mounted on > /dev/mapper/vg_free-lv_root > 50G 7.7G 39G 17% / > tmpfs 12G 0 12G 0% /dev/shm > /dev/sda1 477M 146M 306M 33% /boot > /dev/mapper/vg_free-lv_home > 30G 7.1G 23G 25% /home > /dev/mapper/vg_free-lvpgsql > 30G 671M 30G 3% /var/lib/pgsql > /dev/mapper/vg_free-lvimages > 150G 61G 90G 41% /var/lib/libvirt/images > /dev/mapper/vgdata-lvhome2 > 1.8T 470G 1.4T 26% /ho...
2018 Feb 21
1
Run away memory with gluster mount
...ble <client-statedump> | grep purge | wc -l >> # grep itable <client-statedump> | grep purge_size >> >> >> Had to restart the test and have been running for 36 hours >> now. RSS is >> currently up to 23g. >> >> Working on getting a bug report with link to the dumps. In >> the mean >> time, I'm including the results of your above queries for >> the first >> dump, the 18 hour dump, and the 36 hour dump...
2018 Feb 05
1
Run away memory with gluster mount
...client-statedump> | grep purge | wc -l > > # grep itable <client-statedump> | grep purge_size > > > > > > Had to restart the test and have been running for 36 hours > > now. RSS is > > currently up to 23g. > > > > Working on getting a bug report with link to the dumps. In > > the mean > > time, I'm including the results of your above queries for > > the first > > dump, the 18 hour dump, and the 36 ho...
2018 Jan 29
2
Run away memory with gluster mount
...> | grep lru | wc -l > # grep itable <client-statedump> | grep lru_size > # grep itable <client-statedump> | grep purge | wc -l > # grep itable <client-statedump> | grep purge_size Had to restart the test and have been running for 36 hours now. RSS is currently up to 23g. Working on getting a bug report with link to the dumps. In the mean time, I'm including the results of your above queries for the first dump, the 18 hour dump, and the 36 hour dump: # grep itable glusterdump.153904.dump.1517104561 | grep active | wc -l 53865 # grep itable glusterdump.15390...
2018 Jan 30
1
Run away memory with gluster mount
...> # grep itable <client-statedump> | grep lru_size > > # grep itable <client-statedump> | grep purge | wc -l > > # grep itable <client-statedump> | grep purge_size > > Had to restart the test and have been running for 36 hours now. RSS is > currently up to 23g. > > Working on getting a bug report with link to the dumps. In the mean > time, I'm including the results of your above queries for the first > dump, the 18 hour dump, and the 36 hour dump: > > # grep itable glusterdump.153904.dump.1517104561 | grep active | wc -l > 5386...
2018 Jan 29
0
Run away memory with gluster mount
----- Original Message ----- > From: "Ravishankar N" <ravishankar at redhat.com> > To: "Dan Ragle" <daniel at Biblestuph.com>, gluster-users at gluster.org > Cc: "Csaba Henk" <chenk at redhat.com>, "Niels de Vos" <ndevos at redhat.com>, "Nithya Balachandran" <nbalacha at redhat.com>, > "Raghavendra
2009 Aug 22
6
Fw: Re: my bootlog
...11:59 AM 1. Please, don't break the thread (I mean cc:). 2. Could you post "df -h" output. In my case:- [root@ServerXen341F xen-3.4.1]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_serverxen341f-LogVol00 29G 23G 4.5G 84% / /dev/sdb8 194M 55M 130M 30% /boot tmpfs 3.9G 528K 3.9G 1% /dev/shm /dev/sda14 36G 15G 20G 43% /mnt It is for an Intel ICH9R (AHCI) South Bridge with a SATA drive attached. In your case I would suspect the SAS Controller setup in BIOS as &qu...
2018 Jan 27
6
Run away memory with gluster mount
On 01/27/2018 02:29 AM, Dan Ragle wrote: > > On 1/25/2018 8:21 PM, Ravishankar N wrote: >> >> >> On 01/25/2018 11:04 PM, Dan Ragle wrote: >>> *sigh* trying again to correct formatting ... apologize for the >>> earlier mess. >>> >>> Having a memory issue with Gluster 3.12.4 and not sure how to >>> troubleshoot. I don't