Displaying 20 results from an estimated 33 matches for "17gb".
2009 May 07
1
df & du - that old chestnut
.../eva_mpio_myserver07_08_oracle_bkup0 /data/orabackup
ocfs2 _netdev,nointr,defaults 0 0
We have an RMAN retention policy of 3 days, so backups older than that do
get deleted.
As a test I did a df & du of this filesystem and then deleted (via RMAN)
the oldest backup, a file 17Gb in size, after which I again did a
du & df. These both reported that 17Gb had been released to the
filesystem.
What I still don't get is why df reports that the Used space is at 198Gb
- that seems an awful waste.
Any insight into this major discrepancy would be much appreciated....
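A common cause of this kind of du/df mismatch is a deleted file that some process still holds open: df keeps counting its blocks until the last descriptor closes. A minimal check, assuming lsof is installed (mount point taken from the fstab line above):

# Files that are unlinked but still open; their blocks are not
# returned to the filesystem until the owning process exits.
lsof +L1 /data/orabackup

# Compare the two views: du walks the directory tree, df asks
# the filesystem for its block counters.
du -sh /data/orabackup
df -h /data/orabackup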
2002 Mar 12
2
Using Backup from Windows -> Samba : >4GB file limit?
Hi,
I was backing up my laptop the other day, and the Windows backup utility said
it was going to do about 17GB of data - and I was dumping this to a Samba
share from my Linux machine. However, it looks like the data wrapped around
when it got past 4GB. Not really sure what happened though. When I saw
that the .bkf file was smaller (the next morning) than what a previous 'ls -l'
showed (the night...
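For what it's worth, a quick way to test whether a server/share combination handles files past the 4GB mark is to write one directly and check that the reported size does not wrap (paths illustrative, GNU dd assumed):

# Write a 5GiB file onto the mounted share, then verify the size.
dd if=/dev/zero of=/mnt/share/bigfile.test bs=1M count=5120
ls -l /mnt/share/bigfile.test   # expect 5368709120 bytes, not a ~1GB remainder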
2002 Oct 27
3
rsync with large gzip files.
Hi,
I tried performing a complete copy of 17GB of filesystems over the WAN
(0.8GB/hr) at a speed of 16Mbps. The filesystem consists of several
large gzipped files, which have actually been zipped out of other
sub-filesystems and directories. I noticed that while
transferring a list of large gzipped files, rsync tends...
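Context for readers: compressed archives normally defeat rsync's delta transfer, because a single changed input byte re-encodes everything after it in the gzip stream. Where the gzip build carries the --rsyncable patch (not all do), recompressing with it keeps most of the stream stable between runs; a hedged sketch:

# Recompress with periodic resync points, then transfer as usual.
gzip --rsyncable -c /backup/subfs.tar > /backup/subfs.tar.gz
rsync -av --partial --progress /backup/ remotehost:/backup/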
2015 Feb 23
1
Re: HugePages - can't start guest that requires them
...u're not using hugepages. Or you've
just posted the wrong XML?
Then again, the kernel's approach to hugepages is not as awesome as to
regular system pages. Either at boot (1GB) or at runtime (2MB) one must
cut a slice of memory off to be used by hugepages and nothing else. So
even if you have ~17GB of RAM free on both nodes, it is reserved for
hugepages, hence the OOM.
Michal
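For reference, hugepage-backed memory is carved out of the normal page pool and never shows up as ordinary free memory again. Reserving and inspecting 2MB pages at runtime looks roughly like this (the count is illustrative):

echo 8192 > /proc/sys/vm/nr_hugepages   # reserve 8192 x 2MB = 16GB
grep Huge /proc/meminfo                 # HugePages_Total / Free / Rsvd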
2008 Mar 03
7
DO NOT REPLY [Bug 5299] New: 2.6.9 client cannot receive files from 3.0.0 server
https://bugzilla.samba.org/show_bug.cgi?id=5299
Summary: 2.6.9 client cannot receive files from 3.0.0 server
Product: rsync
Version: 3.0.0
Platform: x86
OS/Version: Windows XP
Status: NEW
Severity: major
Priority: P3
Component: core
AssignedTo: wayned@samba.org
ReportedBy:
2006 Sep 20
6
ocfs2 - disk usage inconsistencies
Hi all.
I have a 50GB OCFS2 file system. I'm currently using ~26GB of space,
but df is reporting 43GB used. Any ideas how to find out where the
missing 17GB went?
The file system was formatted with a 16K cluster & 4K block size.
Thanks,
Matt
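One thing worth checking in a case like this: with a 16K cluster size every file occupies a multiple of 16KB, so large numbers of small files can allocate far more than the sum of their sizes. Comparing apparent bytes against allocated blocks gives a rough measure of that overhead (GNU coreutils assumed, mount point illustrative):

du -sh --apparent-size /ocfs2/data   # sum of file sizes
du -sh /ocfs2/data                   # blocks actually allocated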
2006 Aug 22
1
rsync performance
...l bytes received: 5017339
sent 68 bytes received 5017339 bytes 56060.41 bytes/sec
total size is 6716087965 speedup is 1338.56
lion:/homes/ ========= Tue Aug 22 12:14:58 CEST 2006 ==================
233780 files/89 sec = 2626 files/sec
On the large filesystems with ~1.200.000 files/17GB, rsync takes 30
minutes, even when only a few files change (and even when invoked with -n):
lion:/homes/ ========= Tue Aug 22 12:14:58 CEST 2006 ==================
receiving file list ... done
--<snip-snip>--
Number of files: 1232323
Number of files transferred: 124
Total...
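With ~1.2 million files, most of those 30 minutes is the file-list walk and the per-file stat comparison, which happens even under -n. A quick way to see how much is pure tree traversal versus rsync overhead (host and direction illustrative):

time find /homes/ | wc -l            # raw directory walk alone
time rsync -an /homes/ lion:/homes/  # full dry run: list + compare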
2006 Jun 08
7
Wrong reported free space over NFS
NFS server (b39):
bash-3.00# zfs get quota nfs-s5-s8/d5201 nfs-s5-p0/d5110
NAME PROPERTY VALUE SOURCE
nfs-s5-p0/d5110 quota 600G local
nfs-s5-s8/d5201 quota 600G local
bash-3.00#
bash-3.00# df -h | egrep "d5201|d5110"
nfs-s5-p0/d5110 600G 527G 73G 88% /nfs-s5-p0/d5110
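When chasing this kind of mismatch it helps to compare the server-side ZFS view with the client's df directly, since quota'd datasets advertise quota-derived sizes to NFS clients:

# On the server: per-dataset accounting.
zfs list -o name,used,avail,refer,quota nfs-s5-s8/d5201 nfs-s5-p0/d5110

# On the NFS client: what statfs() over NFS reports.
df -h /nfs-s5-p0/d5110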
2010 Mar 04
0
fsck.ext4 huge memory usage
I have a 5.4TB ext4 file system that is currently reporting problems;
however, every time I run fsck.ext4 on the file system, fsck itself
grows to more than 17GB.
Can anyone tell me WTF is going on here? Why is it using so much disk
space?
--
James A. Peltier
Systems Analyst (FASNet), VIVARIUM Technical Director
HPC Coordinator
Simon Fraser University - Burnaby Campus
Phone : 778-782-6573
Fax : 778-782-3045
E-Mail : jpeltier at sfu.ca
Website :...
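Background for readers hitting the same thing: e2fsck keeps its inode and block-tracking tables in memory, which on a multi-terabyte filesystem with many inodes can run to many gigabytes. Builds of e2fsprogs with scratch_files support can spill those tables to disk instead; a minimal sketch (paths and device name illustrative):

mkdir -p /var/cache/e2fsck
cat >> /etc/e2fsck.conf <<'EOF'
[scratch_files]
directory = /var/cache/e2fsck
EOF
fsck.ext4 -f /dev/mapper/bigvol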
2002 Dec 24
1
large file handling problems in 2.2.7a
...properly, if I run 'smbclient -Tc' to create a tarball from such a share,
only the first N bytes of a large file in the share get stuffed in the
tarball, where N is (I think) the incorrectly reported size, e.g., reported
in the first example.
The large file I'm dealing with is about a 17GB Exchange database on a W2K
server, but I suspect anything over 2GB will exhibit similar errors. I'm
running RH7.3.
Can anybody else confirm these errors?
Regards,
Carey
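A hedged way to reproduce the report: tar up a share containing one file known to exceed 4GB and compare the size recorded in the tarball against the original (server, share, and file names illustrative):

smbclient //w2kserver/exchbackup -U user -Tc /tmp/share.tar priv1.edb
tar -tvf /tmp/share.tar    # does the recorded size match the real file?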
2005 Jul 15
0
[Fwd: Encoding: theora legalities]
...Theora] Encoding: theora legalities
Date: Tue, 7 Jun 2005 13:51:18 +1000
From: Sime Mardesic <simem@traffic.redflex.com.au>
To: <theora@xiph.org>
Hi,
I am having trouble understanding the legalities of the whole encoding
issue. Here is what I am trying to do:
Problem: We get 17GB's worth of 12-second-long MPEG2 video clips from a
capture card. We want some way of compressing these clips so that we can
save space and bandwidth. These clips are used in an activity that
generates income (nothing sinister).
Proposed solution: Use some sort of encoder to encode the clips thus
s...
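On the technical half of the question, transcoding MPEG2 clips to Theora is a one-liner with a reasonably current ffmpeg build that has libtheora enabled (quality scale runs 0-10, higher is better; file names illustrative):

ffmpeg -i clip.mpg -c:v libtheora -q:v 6 -c:a libvorbis clip.ogv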
2003 Sep 03
1
Weird DISKS behaviour on 4.8-STABLE
...salvage of blocks, etc. Then as sure as day follows night, the space
is back to normal.
At some point I retired two disks, replacing them with two new ones.
Now on the same Compaq box, I am still experiencing the same symptoms.
For starters, `mount` does not report the correct disk sizes:
da0 is 17GB
da1 is 36GB
but `df -h` gives following output
Filesystem Size Used Avail Capacity Mounted on
/dev/da0s1a 16G 4.5G 10.0G 31% /
/dev/da1s1e 34G 17G 13G 56% /wananchi
The output of `mount` is:
wash@ns2 ('tty') ~ 129 -> mount
/dev/da0s1a on / (ufs, lo...
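Part of this is expected arithmetic: a "17GB" disk is decimal gigabytes, roughly 15.8GiB, so df's 16G Size is about right, and UFS hides the minfree reserve (8% by default), which is why Used + Avail (4.5G + 10.0G = 14.5G) comes up short of Size. The reserve is visible with:

tunefs -p /dev/da0s1a    # prints minfree among the current parameters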
2008 Apr 29
4
Applying user function over a large matrix
Respected R experts,
I am trying to apply a user function that calls the R loess function
from the stats package over each time series. I have a large matrix of
size 21 x 9,000,000 and need to apply loess to each column, so I have
implemented a separate user function that applies loess over each
column. I am calling this function foo as follows:
2015 Jan 29
0
Indexing Mail faster
Dear Peter,
My inbox is MDA_external
Storage: 17GB of 24GB
Subject / From / To searches are fast, but FTS (full-text search) of the
body is horribly slow. I suppose this is where we need Apache Solr.
Do you think my mail storage format is bad? Do I need to change for better
performance?
Please advise
Kevin
On Thu, Jan 29, 2015 at 12:25 PM, Peter Hodur <peteho...
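For reference, wiring Dovecot's FTS to Solr is a small config change; a minimal sketch (config path and Solr URL illustrative, the Solr core itself is set up separately):

cat >> /etc/dovecot/conf.d/90-fts.conf <<'EOF'
mail_plugins = $mail_plugins fts fts_solr
plugin {
  fts = solr
  fts_solr = url=http://localhost:8983/solr/dovecot/
}
EOF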
2013 Apr 07
2
"btrfs send" fails with having too many open fd's
...5GB
devid 2 size 298.09GB used 142.01GB path /dev/sdc1
devid 1 size 298.09GB used 142.03GB path /dev/sdb1
The send fs is on a dm-crypt device and the receive on a btrfs raid1.
A scrub of both filesystems ran fine without errors. I can, however,
send/receive my root fs, which is much smaller (~17GB).
The error also occurs if I just use "btrfs send" and pipe the output to
a file.
It takes a very long time before my system crashes (several hours), so I
wasn't able to monitor when exactly the fd's increase.
In the beginning "btrfs send" just opens less tha...
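A crude way to watch the descriptor count during a long send, without waiting hours for the crash (snapshot and output paths illustrative):

btrfs send /mnt/snapshots/root@today > /backup/root.send &
SEND_PID=$!
while kill -0 "$SEND_PID" 2>/dev/null; do
    echo "$(date +%T) open fds: $(ls /proc/$SEND_PID/fd | wc -l)"
    sleep 60
done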
2011 Dec 13
1
Dovecot 2.1rc1 + 2.0.16 woes regarding fts_squat
...and working.
During a large mail import with 2.0.16 today, I ran across a worrying message in the logs during an fts_squat reindex: out of memory. The plugin doesn't obey the mmap_disable configuration directive, which I've confirmed in the plugin source.
The mailbox in question has only 17GB (mdbox style), with about 90,000 emails in it. Its "index" (for the purposes of normal IMAP retrieval as opposed to IMAP TEXT/BODY searching) is fine and uncorrupted. I freshly import these mailboxes between test iterations and any version changes anyway, so if there's corruption, it...
2013 Dec 09
0
FreeBSD 10.0-RC1 now available
...available in QCOW2, VHD, and VMDK formats. The
image download size is approximately 135 MB, which decompresses to a 20GB
sparse image.
The partition layout is:
- 512k - freebsd-boot GPT partition type (bootfs GPT label)
- 1GB - freebsd-swap GPT partition type (swapfs GPT label)
- ~17GB - freebsd-ufs GPT partition type (rootfs GPT label)
Changes between -BETA4 and -RC1 include:
- Fix to a regression in bsdinstall(8) that prevents ZFS on GELI
installation from working correctly.[*]
*Please note: a last-minute problem was found in 10.0-RC1
testing with this inst...
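For anyone inspecting the image, the three partitions listed above are visible from inside the booted VM with:

gpart show        # boot / swap / rootfs GPT partitions
df -h /           # the ~17GB rootfs, minus the UFS reserve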
2015 Jan 29
6
Indexing Mail faster
> * Kevin Laurie <superinterstellar at gmail.com> 2015.01.24 19:41:
>
> > Currently the time it takes to search 25,000mails is 4mins. If indexed
> how
> > much faster are we looking at?
>
> With a current version of Dovecot a search is pretty fast _without_ using
> external indexes. I have a view defined (virtual plugin) with around 22.000
> messages in it,
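For context, a "view" here is Dovecot's virtual plugin: a mailbox directory whose dovecot-virtual file lists source mailbox patterns plus an IMAP SEARCH query. A minimal sketch (the plugin and a virtual namespace must already be configured; patterns illustrative):

mkdir -p ~/Maildir/virtual/all
cat > ~/Maildir/virtual/all/dovecot-virtual <<'EOF'
*
  all
EOF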
2013 Nov 05
1
FreeBSD 10.0-BETA3 now available
...s are available in both QCOW2 and VMDK formats. The image
download size is approximately 136 MB, which decompresses to a 20GB sparse
image.
The partition layout is:
- 512k - freebsd-boot GPT partition type (bootfs GPT label)
- 1GB - freebsd-swap GPT partition type (swapfs GPT label)
- ~17GB - freebsd-ufs GPT partition type (rootfs GPT label)
ISO Checksums:
amd64:
SHA256 (FreeBSD-10.0-BETA3-amd64-bootonly.iso) = 2fd1c59c94f0e30a8a23cf5a8b2b6caa565e45fc2c97dd5d831d38bf60db47e8
SHA256 (FreeBSD-10.0-BETA3-amd64-disc1.iso) = ffae9adf91e6030e0f83fecb4fe1a1cc3e8478efddbd0e2cfa5457...
2015 Feb 10
2
Re: HugePages - can't start guest that requires them
On 09.02.2015 18:19, G. Richard Bellamy wrote:
> First I'll quickly summarize my understanding of how to configure numa...
>
> In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to
> use hugepages for the guest, and to get those hugepages from a
> particular host NUMA node.
No, @nodeset refers to guest NUMA nodes.
>
> In
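For contrast with the thread above, the stanza in question looks like this; per the reply, @nodeset names guest NUMA nodes, not host ones (page sizes illustrative):

<memoryBacking>
  <hugepages>
    <page size='1' unit='G' nodeset='0'/>
    <page size='2' unit='M' nodeset='1'/>
  </hugepages>
</memoryBacking>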