search for: 13g

Displaying 20 results from an estimated 38 matches for "13g".

2016 Jul 12
3
Broken output for fdisk -l
...63G 0 63G 0% /sys/fs/cgroup /dev/mapper/MD3200-results 19T 9.0T 9.3T 50% /data/results /dev/sdb1 1.5T 234M 1.5T 1% /share /dev/mapper/centos-home 1.8T 5.5G 1.7T 1% /home /dev/sda1 497M 158M 340M 32% /boot tmpfs 13G 32K 13G 1% /run/user/1000 tmpfs 13G 36K 13G 1% /run/user/0 /dev/sde 917G 625G 247G 72% /media/usb [root at localhost ~]# fdisk -l /dev/sdc fdisk: cannot open /dev/sdc: Input/output error [root at localhost ~]# ------------------------------...
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
...7.3G 18G 29% /gluster/brick4 > 192.168.8.11:/engine 15G 9.7G 5.4G 65% > /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine > 192.168.8.11:/data 136G 125G 12G 92% > /rhev/data-center/mnt/glusterSD/192.168.8.11:_data > 192.168.8.11:/iso 13G 7.3G 5.8G 56% > /rhev/data-center/mnt/glusterSD/192.168.8.11:_iso > > View from ovirt2: > Filesystem Size Used Avail Use% Mounted on > /dev/mapper/gluster-engine 15G 9.7G 5.4G 65% /gluster/brick1 > /dev/mapper/gluster-data 174G 119G 56G 69...
2005 Nov 07
1
Re Phrase Tuning.
...been running it on a machine with 5G of memory -but it has to contend with other processes for resources, I have also run it on its own on a 1G system -but quartz was atrocious on this (I haven't tried flint yet). If the management agrees then I'll probably try splitting the index (now 12-13G) over 2 machines and running flint -obviously the fact that it's beta will give them a bit of a problem, but it looks like it's that or nothing (or Lucene). Cheers, jeremy. -------------------------------------------------------------------- mail2web - Check your email from the web at ht...
2011 Feb 24
0
No subject
which is a stripe of the gluster storage servers, this is the performance I get (note use a file size > amount of RAM on client and server systems, 13GB in this case) : 4k block size : 111 pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -y pir4: Write test (dd): 142.281 MB/s 1138.247 mbps 93.561 seconds pir4: Read test (dd): 274.321 MB/s 2194.570 mbps 48.527 seconds testing from 8k - 128k block size on the dd, best performance was achiev...
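The dd-based write/read test quoted in this excerpt can be sketched as follows. The `nfsSpeedTest` script and the 13G file size are specific to that setup; this hypothetical version scales the file down to 16 MiB so it runs anywhere (the original deliberately used a file larger than RAM to defeat caching, which a small file does not do).

```shell
# Hypothetical scaled-down version of the dd write/read test above.
# TESTFILE path and 16 MiB size are illustrative choices, not from the post.
TESTFILE=/tmp/ddspeed.bin
dd if=/dev/zero of="$TESTFILE" bs=4k count=4096 conv=fsync 2>/dev/null  # write test, flushed to disk
dd if="$TESTFILE" of=/dev/null bs=4k 2>/dev/null                        # read test
wc -c < "$TESTFILE"   # 4096 blocks * 4 KiB = 16777216 bytes
rm -f "$TESTFILE"
```

Timing the two dd invocations (e.g. with `time`) and dividing bytes by elapsed seconds reproduces the MB/s figures reported in the post.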
2018 Jul 30
2
Issues booting centos7 [dracut is failing to enable centos/root, centos/swap LVs]
...having a strange problem booting a new centos7 installation. Below some background on this. [I have attached the tech details at the bottom of this message] I started a new CentOS7 installation on a VM, so far all good, o/s boots fine. Then I decided to increase VM disk size (initially was 10G) to 13G. Powered off the VM, increased the vhd via the hypervisor, booted from CentOS livecd, selected "recover my centos installation". Then I used the following sequence of commands to make the new vhd size "visible" to the o/s ... - ran fdisk /dev/sda and deleted partition 2. (This...
2012 Jun 28
2
Strange du/df behaviour.
...*/Maildir/.Spam/cur/* --exclude=*/Maildir/.Spam/new/* --use-compress-program /usr/bin/pigz -cf /home/paczki-workdir/abaksa-mail-20120628-0413.tgz and it writes so much data: du -sh /home/paczki-workdir/abaksa-mail-20120628-0413.tgz;sleep 3;du -sh /home/paczki-workdir/abaksa-mail-20120628-0413.tgz 13G /home/paczki-workdir/abaksa-mail-20120628-0413.tgz 13G /home/paczki-workdir/abaksa-mail-20120628-0413.tgz du -sk /home/paczki-workdir/abaksa-mail-20120628-0413.tgz;sleep 3;du -sk /home/paczki-workdir/abaksa-mail-20120628-0413.tgz 13410988 /home/paczki-workdir/abaksa-mail-20120628-04...
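Discrepancies like the one being debugged above often come down to apparent size versus blocks actually allocated. As a hedged illustration (GNU coreutils `du` assumed; the file path is hypothetical), a sparse file shows the gap directly:

```shell
# Illustration: apparent size vs. allocated blocks (GNU du assumed).
# A sparse file has a large apparent size but typically occupies no blocks.
F=/tmp/sparse-demo.bin
dd if=/dev/zero of="$F" bs=1 count=0 seek=1M 2>/dev/null  # 1 MiB sparse file, no data written
du --apparent-size -k "$F"   # reports 1024 (KiB, by file length)
du -k "$F"                   # typically reports 0 (no blocks allocated)
rm -f "$F"
```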
2009 Dec 29
2
ext3 partition size
...t: /dev/sdb8 on /srv/multimedia type ext3 (rw,relatime) $ df -hT Filesystem Type Size Used Avail Use% Mounted on /dev/sdb2 ext3 30G 1.1G 28G 4% / /dev/sdb7 ext3 20G 1.3G 18G 7% /var /dev/sdb6 ext3 30G 12G 17G 43% /usr /dev/sdb5 ext3 40G 25G 13G 67% /home /dev/sdb1 ext3 107M 52M 50M 52% /boot */dev/sdb8 ext3 111G 79G 27G 76% /srv/multimedia* tmpfs tmpfs 2.9G 35M 2.9G 2% /dev/shm Parted info: (parted) select /dev/sdb Using /dev/sdb (parted) print Model: ATA ST3500630AS (scsi) Disk /dev/sdb: 500GB Se...
2002 Aug 13
0
[EXT3-fs error with RH7.2 and RH7.3]
...i ! My system is RH7.2 with Adaptec/DPT/I2O drivers (http://people.redhat.com/tcallawa/dpt/). There is a 2 disk RAID 1 array which had no disk fail. Several partition on it: Filesystem Size Used Avail Use% Mounted on /dev/sda1 1.9G 435M 1.4G 24% / /dev/sda2 13G 3.2G 9.1G 26% /home none 504M 0 503M 0% /dev/shm At this time, all seems to be ok. But there was in /etc/log/message Aug 13 10:55:40 web kernel: EXT3-fs error (device sd(8,1)): ext3_free_blocks: Freeing blocks not in datazone - block = 439612, count = 1 Aug 13 10:55:41...
2007 Mar 23
1
Consolidating LVM volumes..
Hi, Something I haven't done before is reduce the number of volumes on my server.. Here is my current disk setup.. [root at server1 /]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-RootVol00 15G 1.5G 13G 11% / /dev/md0 190M 42M 139M 24% /boot /dev/mapper/VolGroup00-DataVol00 39G 16G 22G 42% /data none 157M 0 157M 0% /dev/shm /dev/mapper/VolGroup00-HomeVol00 77G 58G 15G 80% /home /dev/mapper/VolGroup0...
2018 Jul 30
0
Issues booting centos7 [dracut is failing to enable centos/root, centos/swap LVs]
...new centos7 installation. Below > some > background on this. [I have attached the tech details at the bottom of > this > message] > > I started a new CentOS7 installation on a VM, so far all good, o/s boots > fine. Then I decided to increase VM disk size (initially was 10G) to 13G. > Powered off the VM, increased the vhd via the hypervisor, booted from > CentOS livecd, selected "recover my centos installation". Then I used the > following sequence of commands to make the new vhd size "visible" to the > o/s ... > > - ran fdisk /dev/sda an...
2017 Nov 13
1
Shared storage showing 100% used
...M 13T 1% /glusterfs/a4/b2 /dev/md151p1 13T 34M 13T 1% /glusterfs/a2/b1 /dev/md151p2 13T 34M 13T 1% /glusterfs/a2/b2 /dev/md152p1 26T 4.4T 22T 17% /glusterfs/a3/b1 /dev/md122 20G 6.1G 13G 33% /var /dev/md126 976M 233M 677M 26% /boot /dev/md150p1 13T 1.1T 12T 9% /glusterfs/a1/b1 /dev/md150p2 13T 6.7T 6.2T 52% /glusterfs/a1/b2 /dev/md123 1.7T 77M 1.6T 1% /home /dev/md153p1 ...
2001 Nov 11
2
Software RAID and ext3 problem
...65M 25% /boot /dev/md6 277M 8.1M 254M 4% /tmp /dev/md7 1.8G 1.3G 595M 69% /usr /dev/md8 938M 761M 177M 82% /var /dev/md9 9.2G 2.6G 6.1G 30% /home /dev/md10 11G 2.1G 8.7G 19% /scratch /dev/md12 56G 43G 13G 77% /global The /usr and /var filesystems keep switching to ro mode following the detection of errors. This has been happening on a daily basis since switching to ext3 (usually the /var switches, /usr has done it once). I suppose my first question should be: do ext3 and software raid mix? If th...
2018 Aug 01
0
(EXT) CentOS Digest, Vol 162, Issue 29
...having a strange problem booting a new centos7 installation. Below some background on this. [I have attached the tech details at the bottom of this message] I started a new CentOS7 installation on a VM, so far all good, o/s boots fine. Then I decided to increase VM disk size (initially was 10G) to 13G. Powered off the VM, increased the vhd via the hypervisor, booted from CentOS livecd, selected "recover my centos installation". Then I used the following sequence of commands to make the new vhd size "visible" to the o/s ... - ran fdisk /dev/sda and deleted partition 2. (This...
2023 Jul 04
1
remove_me files building up
...The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server: /dev/sdd1 15G 12G 3.3G 79% /data/glusterfs/gv1/brick3 /dev/sdc1 15G 2.8G 13G 19% /data/glusterfs/gv1/brick1 /dev/sde1 15G 14G 1.6G 90% /data/glusterfs/gv1/brick2 And this is the df -hi output for the bricks on the arb server: /dev/sdd1 7.5M 2.7M 4.9M 35% /data/glusterfs/gv1/brick3 /dev/sdc1 7.5M 643K 6.9M 9% /data/glust...
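The excerpt contrasts `df -h` (block usage) with `df -hi` (inode usage) to rule out inode exhaustion. A minimal sketch of the same check, assuming GNU df and using the current directory's filesystem:

```shell
# Sketch of the block-vs-inode comparison from the excerpt above (GNU df
# assumed). High Use% here with low IUse% below points at data, not
# inode, exhaustion -- the conclusion drawn in the post.
df -h . | tail -n 1    # Size / Used / Avail / Use% for this filesystem
df -hi . | tail -n 1   # Inodes / IUsed / IFree / IUse%
```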
2002 Jun 06
1
Backuo problem from ext3 file system
...ental. This is happening only for /proj and proj1 is working fine. Filesystem Size Used Avail Use% Mounted on /dev/md1 8.1G 3.4G 4.3G 44% / /dev/sda1 48M 14M 31M 30% /boot /dev/md0 39G 29G 7.7G 79% /proj /dev/md2 16G 13G 2.4G 84% /proj1 none 262M 0 262M 0% /dev/shm Backup Device : HP SURESTORE DAT40 Dump Version : dump-0.4b28-1.i386.rpm Regards, Rajeesh Kumar M.P System Administrator Aalayance E-Com Services Ltd, Bangalore - India
2001 Nov 11
0
(no subject)
...65M 25% /boot /dev/md6 277M 8.1M 254M 4% /tmp /dev/md7 1.8G 1.3G 595M 69% /usr /dev/md8 938M 761M 177M 82% /var /dev/md9 9.2G 2.6G 6.1G 30% /home /dev/md10 11G 2.1G 8.7G 19% /scratch /dev/md12 56G 43G 13G 77% /global The /usr and /var filesystems keep switching to ro mode following the detection of errors. This has been happening on a daily basis since switching to ext3 (usually the /var switches, /usr has done it once). I suppose my first question should be: do ext3 and software raid mix? If th...
2012 Dec 03
1
recommended procedure for mandatory roaming profiles for win7 with samba 3
Hello, I have a PDC and a File (member) server for homes and profiles (Samba 3.4.17). For XP clients I have mandatory profiles with all user shell folders redirected to their respective home share. Now I'm adding win 7 clients to the mix and I want the same thing. It's (almost) working but I think my procedure is a bit dirty (i.e. I use "windows enabler" to build my ntuser.man
2004 Jul 19
2
large Xapian index files
Hello Arjen van der Meijden, on xapian-discuss you mentioned that your Xapian installation has got up to 15 GB database size. Can you tell me about the largest index filesize you got? According to <http://xapian.org/docs/scalability.html>, it seems that the quartz database filesize is limited only by the OS and file system. Can you confirm from your experience that there is no 2GB limit?
2013 Jul 04
3
odd inconsistency with nfs
...crs1.mirror -soft,intr,retrans=1 goblin:/scrs1.mirror summit.mirror -soft,intr,retrans=1 goblin:/summit.mirror ( cd /var/yp ; make ) on boltzmann: (nfs server) df -h Filesystem Size Used Avail Use% Mounted on /dev/sdb2 50G 13G 37G 26% / tmpfs 3.9G 1.2M 3.9G 1% /dev/shm /dev/sdb3 177G 188M 175G 1% /aux /dev/sda3 208G 44G 164G 21% /aux2 mkdir /aux/scrs1_bolt mkdir /aux2/summit_bolt ln -s /aux/scrs1_bolt /scrs1_bolt...
2003 Sep 03
1
Weird DISKS behaviour on 4.8-STABLE
...ox, I am still experiencing the same symptoms. For starters, `mount` does not report the correct disk sizes: da0 is 17GB da1 is 36GB but `df -h` gives following output Filesystem Size Used Avail Capacity Mounted on /dev/da0s1a 16G 4.5G 10.0G 31% / /dev/da1s1e 34G 17G 13G 56% /wananchi The output of `mount` is: wash@ns2 ('tty') ~ 129 -> mount /dev/da0s1a on / (ufs, local, soft-updates) /dev/da1s1e on /wananchi (ufs, local, soft-updates) I am beginning to think that disk space is being freed, but the blocks are not being reallocated/reassigned,...