search for: 18g

Displaying 20 results from an estimated 49 matches for "18g".

2006 May 11
5
Issue with hard links, please help!
Hello, Sometimes when creating hard links to the rsync destination directory, it seems like the new directory (created by the cp -al command) ends up with all the data. This causes a problem: if the rsync destination directory had 21GB, after the cp -al command it ends up having only 8MB, and the rsync source directory then determines that it now requires 21.98GB to update the
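For reference, a minimal sketch of the hard-link snapshot rotation being described, with placeholder paths: cp -al only creates hard links, so the data is shared between the trees, and du simply attributes the common blocks to whichever directory it counts first.

  # rotate the current backup into a hard-linked snapshot (placeholder paths)
  cp -al /backup/current /backup/snapshot-$(date +%F)
  # refresh the live copy; rsync writes changed files to new inodes, breaking only their links
  rsync -a --delete /source/ /backup/current/
  # du counts shared blocks once, so per-directory totals can look wildly uneven
  du -sh /backup/current /backup/snapshot-*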
2007 May 04
3
NFS issue
Hi List, I must be going mad or something, but I have got a really odd problem with an NFS mount and a DVD-ROM. Here is the situation: /dev/md7 58G 18G 37G 33% /data, which is shared out by NFS (/etc/exports). This has been working since I installed the OS, CentOS 4.4. I have a DVD drive that is device /dev/scd0, which I can mount anywhere I like, no problem. However, the problem comes when I try to mount it under /data/shared/Photos/Archive1....
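If the DVD ends up mounted underneath the exported directory, NFS clients will not see anything below that mount point unless the submount is also exported (or the parent export allows crossing into it). A hedged /etc/exports sketch, with a placeholder client network and a placeholder mount point:

  # /etc/exports on the server (placeholder network and path)
  /data       192.168.1.0/24(rw,sync)
  /data/dvd   192.168.1.0/24(ro,nohide)   # or add crossmnt to the /data export
  # apply the changes without restarting NFS
  exportfs -ra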
2005 Sep 12
2
New to the list and one quick question
Hi all, Let me start off with a little background. I am currently running RH9, fully updated, on a Dell PowerEdge 1600SC with 512 MB of RAM and an 18G SCSI HD. I run this at home so it is no critical server, unless you ask my girls when I have it down. I had been debating upgrading to Fedora until I started looking at CentOS. I have downloaded the ISOs and burned them with X-CD-Roast. Here is my question: I tried running the mediacheck...
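A way to cross-check burned media outside the installer's mediacheck, hedged and with a placeholder ISO filename, is to compare a checksum of the ISO against the same number of sectors read back from the disc:

  # checksum the downloaded image (placeholder filename)
  sha1sum centos-disc1.iso
  # read back exactly the ISO's size from the burned disc and compare
  # (sector count = ISO size in bytes / 2048)
  dd if=/dev/cdrom bs=2048 count=SECTORS | sha1sum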
2007 Sep 18
2
Windows2003 P2V migration, need help creating a shrunken disk image.
I have a Windows server with a 250G drive. The drive is partitioned as follows: partition 1: Dell utility partition; partition 2: FAT32 Windows C: partition; partition 3: extended partition; partition 5: logical NTFS partition. The NTFS partition was set up with 220G, but only about 18G was being used, so I shrank NTFS down to 50G. Now I want to make a drive image to create an HVM domain. How can I create the image so that it does not contain the 170G or so of free space I freed up? Technically I need the image with partition 2 and partition 5, which has the MBR code to boot from par...
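One hedged way to capture only what is needed is to save the MBR and partition table separately and use ntfsclone's special image format, which skips unallocated blocks. Device names below are placeholders, and the target image still needs a partition table sized for the shrunk filesystem before the pieces are written back:

  # from rescue media on the source machine (placeholder devices)
  dd if=/dev/sda of=mbr-and-table.bin bs=512 count=1         # MBR boot code + partition table
  dd if=/dev/sda2 of=part2-fat32.img bs=1M                   # small FAT32 C: partition, copied whole
  ntfsclone --save-image --output part5-ntfs.img /dev/sda5   # NTFS data only, free space skipped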
2009 Aug 11
4
how to migrate 40T data and 180M files
...nc them in parallel or in serial? If yes, how many groups would be better? The second question is about memory. How much memory should I install in the Linux box? The rsync FAQ (http://rsync.samba.org/FAQ.html#4) says one file will use 100 bytes to store relevant information, so 180M files will use about 18G of memory. How much memory should be installed in total? And is there anything else I could do to reduce the risk? Thanks in advance. Gao, Ming
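The arithmetic behind that estimate, as a quick sanity check: 180,000,000 files x ~100 bytes per file is about 18,000,000,000 bytes, i.e. roughly 18 GB (about 16.8 GiB), which matches the figure quoted from the FAQ.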
2009 Dec 29
2
ext3 partition size
...12.fc11.i586 e2fsprogs-devel-1.41.4-12.fc11.x86_64 e2fsprogs-debuginfo-1.41.4-12.fc11.x86_64 mount: /dev/sdb8 on /srv/multimedia type ext3 (rw,relatime) $ df -hT Filesystem Type Size Used Avail Use% Mounted on /dev/sdb2 ext3 30G 1.1G 28G 4% / /dev/sdb7 ext3 20G 1.3G 18G 7% /var /dev/sdb6 ext3 30G 12G 17G 43% /usr /dev/sdb5 ext3 40G 25G 13G 67% /home /dev/sdb1 ext3 107M 52M 50M 52% /boot */dev/sdb8 ext3 111G 79G 27G 76% /srv/multimedia* tmpfs tmpfs 2.9G 35M 2.9G 2% /dev/shm Parted info: (parted) s...
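The gap between Size and Used+Avail on an ext3 filesystem (here 111G versus 79G + 27G) is usually the roughly 5% of blocks reserved for root. A hedged check and adjustment, using the device name from the post:

  # show the reserved block count
  tune2fs -l /dev/sdb8 | grep -i 'reserved block'
  # reduce the reservation to 1% on a non-root data filesystem
  tune2fs -m 1 /dev/sdb8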
2009 Nov 05
2
MySQL error 28, can't write temp files - how to debug?
...iting file '/tmp/MYL4qeT5' (Errcode: 28) SQL: , , select * from tips order by rand() limit 0, 1 , According to a Google search, error code 28 means the HDD is full. But it isn't: root at vps:[~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 84G 18G 62G 23% / none 640M 0 640M 0% /dev/shm /usr/tmpDSK 485M 11M 449M 3% /tmp What else could cause this kind of problem? -- Kind Regards Rudi Ahlers CEO, SoftDux Hosting Web: http://www.SoftDux.com Office: 087 805 9573 Cell: 082 554 7532
2009 May 19
2
14.4G samba filesystem limit?
...0G free. But for some reason, all my CIFS clients report only 14.4G empty. Depending on what I'm trying to do with the share, the client may happily ignore the supposed free space limitation, but some programs actually give me a warning and refuse to work, "Error, this operation requires 18G but the destination only has 14.4G free..." If I force the operation to happen, it will happily write 18G or whatever ... and then it will still report 14.4G free. Anybody have any idea where this 14.4G number is coming from, or how to correct it? My server is the latest release of...
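When CIFS clients insist on a free-space figure that does not match what the server reports, smbd's disk-free reporting can be overridden per share; a hedged smb.conf sketch, where the share path and script path are placeholders and the script prints the total and available block counts smbd should advertise:

  [share]
     path = /export/share
     dfree command = /usr/local/bin/dfree    # external script supplies the free-space numbers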
2009 Nov 05
1
MySQL error 28, can't write temp files - how to debug? [SOLVED]
...Rudi Ahlers <Rudi at softdux.com> wrote: >> According to a Google search, error code 28 means the HDD is full. But it >> isn't: >> >> >> root at vps:[~]$ df -h >> Filesystem            Size  Used Avail Use% Mounted on >> /dev/sda1              84G   18G   62G  23% / >> none                  640M     0  640M   0% /dev/shm >> /usr/tmpDSK           485M   11M  449M   3% /tmp >> >> >> What else could cause this kind of problem? > > You only have 449MB free on /tmp. It could easily fill that up during the > query,...
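Since the error comes from the small /tmp filesystem (485M) rather than /, one hedged fix is to point MySQL's temporary directory at a partition with more room and restart mysqld; the directory below is a placeholder and must be writable by the mysql user:

  # /etc/my.cnf
  [mysqld]
  tmpdir = /home/mysqltmp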
2007 Nov 29
1
RAID, LVM, extra disks...
Hi, This is my current config: /dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot; /dev/md1 -> 36 GB -> sda2 + sdd2 -> forms VolGroup00 with md2; /dev/md2 -> 18 GB -> sdb1 + sde1 -> forms VolGroup00 with md1; sda,sdd -> 36 GB 10k SCSI HDDs; sdb,sde -> 18 GB 10k SCSI HDDs. I have added two 36 GB 10K SCSI drives to it; they are detected as sdc and sdf. What should I do if I
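A typical sequence for folding the new pair into the existing layout would be to partition sdc and sdf like the other drives, build another RAID1 array, and add it to the volume group; a hedged sketch, where /dev/md3 is simply the next free md device:

  # after creating sdc1 and sdf1 (partition type fd) to match the existing layout:
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
  pvcreate /dev/md3
  vgextend VolGroup00 /dev/md3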
2013 Apr 18
1
Memory usage reported by gc() differs from 'top'
...g 7.2g 2612 S 1 93.4 16:26.73 R So the R process is holding on to 18.2g of memory, but it only seems to account for 1.5g or so. Where is the rest? I tried searching the archives and found answers like "just buy more RAM", which doesn't exactly answer my question. And come on, 18g is pretty big; sure it doesn't fit in my RAM (only 7.2g are in), but that's beside the point. The huge memory demand is specific to R version 2.15.3 Patched (2013-03-13 r62500) -- "Security Blanket". The same test runs without issues under R version 2.15.1 beta (2012-06-11 r5955...
2003 Jun 17
1
efficiency issue with rsync.....
Hi rsync team, I thought that rsync would try to overlap computing and IO on both machines. I'm rsyncing a large tree (18G) and am keeping an eye on that. Suddenly my side completely stopped. No IO visible, no CPU time spent. The other side was doing 100% CPU. Then the other side started to do disk IO. Then suddenly the activities moved over to my side, and I saw things moving again in the "-v --progress" out...
2006 Jan 27
2
Do I have a problem? (longish)
Hi, to keep the story short, I will describe the situation. I have 4 disks in a zfs/svm config: c2t9d0 9G, c2t10d0 9G, c2t11d0 18G, c2t12d0 18G. c2t11d0 is divided in two: selecting c2t11d0 [disk formatted] /dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M). /dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M). /dev/dsk/c2t11d0s2 is in use by zpool storedge. Please see zpool(1M). /dev/...
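A quick, read-only way to see which slices each layer believes it owns before changing anything; the pool name is taken from the post, and these are standard Solaris commands of that era:

  zpool status storedge    # slices claimed by the ZFS pool
  metastat -p              # SVM metadevices and the slices behind them
  format                   # per-slice layout of c2t11d0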
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
...engine; brick2=data, brick4=iso): > Filesystem Size Used Avail Use% Mounted on > /dev/mapper/gluster-engine 25G 12G 14G 47% /gluster/brick1 > /dev/mapper/gluster-data 136G 125G 12G 92% /gluster/brick2 > /dev/mapper/gluster-iso 25G 7.3G 18G 29% /gluster/brick4 > 192.168.8.11:/engine 15G 9.7G 5.4G 65% > /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine > 192.168.8.11:/data 136G 125G 12G 92% > /rhev/data-center/mnt/glusterSD/192.168.8.11:_data > 192.168.8.11:/iso 13G 7.3G...
2006 Aug 09
1
Re: URGENT: OCFS2 hang - 32 node cluster POC
...eg for RAC in here. > > Thanks > > > Colin Laird wrote: >> Hi, >> >> We are in the middle of a very large bid (Centrelink, Australia) with >> time at a premium. So PLEASE HELP. We have been experiencing >> machine hangs whenever we do large copies (5-18G) into OCFS2, either >> from ftp or local disk. The whole machine just freezes and we need >> to power off and on. We now cannot get the data available for the POC >> across the nodes! >> >> The setup is: >> >> 32 clustered Dell 6850 nodes running RHEL4...
2006 Jul 13
7
system unresponsive after issuing a zpool attach
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM partitions to ZFS. I used Live Upgrade to migrate from U1 to U2 and that went without a hitch on my SunBlade 2000. And the initial conversion of one side of the UFS mirrors to a ZFS pool and subsequent data migration went fine. However, when I attempted to attach the second side mirrors as a mirror of the ZFS pool, all
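For reference, attaching the second half of an old SVM mirror to the new pool is one command per existing vdev, and the resilver that follows can generate heavy I/O; a hedged sketch with a placeholder pool name and device names:

  # attach new_device as a mirror of a device already in the pool
  zpool attach tank c1t0d0s0 c1t1d0s0
  # watch resilver progress; heavy resilver I/O is one plausible cause of an unresponsive system
  zpool status -v tank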
2010 Sep 29
4
XFS on a 25 TB device
Hello all, I have just configured a 64-bit CentOS 5.5 machine to support an XFS filesystem as specified in the subject line. The filesystem will be used to store an extremely large number of files (in the tens of millions). Due to its extremely large size, would there be any non-standard XFS build/configuration options I should consider? Thanks. Boris.
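One commonly raised point for a filesystem of this size, offered as a hedged note rather than a definitive recommendation: mkfs.xfs defaults are generally sensible, but on very large filesystems the inode64 mount option is often suggested so that inodes can be allocated anywhere on the device; the device and mount point below are placeholders:

  mkfs.xfs /dev/mapper/bigvol                        # defaults; placeholder device
  mount -o inode64 /dev/mapper/bigvol /export/big    # allow 64-bit inode numbers across the whole fs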
2017 May 05
4
CentOS 7 cloned VM cannot boot
On Fri, May 5, 2017 at 2:38 PM, Nikolaos Milas <nmilas at noa.gr> wrote: > On 5/5/2017 3:15 ??, Gianluca Cecchi wrote: > > ... >> grub2-install /dev/vda >> ... >> Was this one of the command you already tried? >> > > Yes, I have tried that multiple times, both from Troubleshooting Mode > (booting using CentOS 7 Installation CD) and from within the
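For reference, the usual rescue-mode sequence on CentOS 7, hedged since the thread indicates grub2-install had already been attempted; /dev/vda is the device named in the thread:

  # from the installation media's rescue environment, after it mounts the installed system:
  chroot /mnt/sysimage
  grub2-install /dev/vda
  grub2-mkconfig -o /boot/grub2/grub.cfg
  exit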
2018 May 08
1
mount failing client to gluster cluster.
...0 /isos works fine. =========== root at kvm01:/var/lib/libvirt# df -h Filesystem Size Used Avail Use% Mounted on udev 7.8G 0 7.8G 0% /dev tmpfs 1.6G 9.2M 1.6G 1% /run /dev/mapper/kvm01--vg-root 23G 3.8G 18G 18% / tmpfs 7.8G 0 7.8G 0% /dev/shm tmpfs 5.0M 4.0K 5.0M 1% /run/lock tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup /dev/mapper/kvm01--vg-home 243G 61M 231G 1% /home /dev/mapper/kvm01--vg-tmp 1.8G...
2010 Sep 16
0
Free space issue
...,debugfs vermagic: 2.6.9-89.0.26.ELsmp SMP gcc-3.4 # When I run df -h to show available disk space, it shows as follows: # df -h cfs Filesystem Size Used Avail Use% Mounted on /dev/emcpowera1 67G 42G 26G 62% /d00/cfs But actual usage is much less: # du -sh /d00/cfs 18G cfs I have the output of stat_sysdir.sh if needed, but it is over 11,000 lines long. I think this might be to do with inodes or something not being freed up, but I am not 100% sure. Given I am stuck on Red Hat 4 until next year, I can't upgrade OCFS2 to 1.4.x where I think some of these proble...
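Beyond unreleased inodes, a common cause of df reporting far more used space than du is files that were deleted while a process still holds them open; the space is only returned when the holder closes the file or exits. A hedged check:

  # list open files whose on-disk link count is zero (i.e. deleted but still held open)
  lsof +L1 | grep /d00/cfs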