Displaying 20 results from an estimated 27 matches for "21g".
2006 May 11
5
Issue with hard links, please help!
Hello,
Sometimes when creating hard links to the rsync destination directory,
it seems like the new directory (created by the cp -al command) ends
up with all the data. This causes a problem: if the rsync destination
directory held 21GB, then after the cp -al command it ends up holding
only 8MB, and rsync then determines that it needs 21.98GB to update
the destination directory.
Here is an example of a test that I was doing. I have no idea why
sometimes it works like it should, and sometimes it doesn't. My...
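For context, the hard-link rotation pattern under discussion usually looks something like the following sketch (the paths and backup names here are illustrative, not taken from the message):
$ cp -al /backup/daily.0 /backup/daily.1      # hard-link the previous backup; cheap, shares inodes
$ rsync -a --delete /source/ /backup/daily.0/ # refresh daily.0; changed files get new inodes
Note that du counts a hard-linked file only against the first directory it scans, so either copy can appear to hold all of the data while the other shows almost nothing.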
2011 Aug 14
4
Space usage
I'm just uploading all my data to my server and the space used is much more than what I'm uploading:
Documents = 147MB
Videos = 11G
Software= 1.4G
By my calculations, that equals 12.547G, yet zpool list is showing 21G as being allocated:
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
dpool 27.2T 21.2G 27.2T 0% 1.00x ONLINE -
It doesn't look like any snapshots have been taken, according to zfs list -t snapshot. I've read about the 'copies' parameter but I...
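A few commands commonly used to narrow down where the space went (a sketch only; dpool is the pool name from the message, and the options shown are standard zpool/zfs usage):
$ zpool list dpool                            # pool-level allocation, includes redundancy overhead
$ zfs list -o space -r dpool                  # per-dataset used/refer/avail breakdown
$ zfs get copies,compression,recordsize dpool # properties that change how much space data consumes
$ zfs list -t snapshot -r dpool               # confirm nothing is held by snapshots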
2013 May 21
2
rsync behavior on copy-on-write filesystems
.../jobarchive_Ajobarchivetest2
## 4) Make a snapshot of the second volume called job1. Note that it
takes up almost no space.
$ btrfs subvolume snapshot current job1
$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/jobarchive-Ajobarchivetest2
300G 21G 273G 7% /vol/jobarchive_Ajobarchivetest2
## 5) Change the first 4k bytes of the original file
$ time dd if=/dev/urandom of=src/10gb bs=4k count=1 conv=notrunc
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.000601676 s, 6.8 MB/s
0.001u 0.001s 0:00.03 0.0% 0+0k 32+8io 1pf+0w...
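The variant usually compared in threads like this is an in-place update, since rsync's default behavior of writing a temporary copy and renaming it allocates fresh extents and duplicates the snapshotted data. A sketch, with paths following the excerpt's naming:
$ rsync -a --inplace --no-whole-file src/ current/
# --inplace rewrites the existing file instead of a temp copy;
# --no-whole-file forces delta transfer even for local copies,
# so unchanged extents can stay shared with the job1 snapshot.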
2018 May 22
1
Re: Create qcow2 v3 volumes via libvirt
...production again.
The VM itself reports ample space available:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 789M 8.8M 780M 2% /run
/dev/mapper/RT--vg-root 51G 21G 28G 42% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/vda1 472M 155M 293M 35% /boot
192.168.0.16:/volume1/file...
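On the host side, the qcow2 version and actual allocation of the backing volume can be checked with qemu-img (a sketch; the image path is hypothetical):
$ qemu-img info /var/lib/libvirt/images/guest.qcow2
# 'file format: qcow2' with 'compat: 1.1' under the format-specific
# details indicates a v3 volume; compare 'disk size' (allocated)
# against 'virtual size'.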
2007 Sep 03
1
Re: OT: Suggestions for RAID HW for 2 SATA drives in
On 31 August 2007, Phil Schaffner <Philip.R.Schaffner at NASA.gov> wrote:
> > Message: 21
> <snip>
> > As discussed recently on-list, VMware CPU requirements to support
> > virtualization are not nearly so rigorous as for Xen. You are
> > probably OK with VMware on most any relatively modern x86 or x86_64
> > CPU.
> >
2007 Sep 09
0
Re: OT: Suggestions for RAID HW for 2 SATA drives in
...ot of free space to play with, but
> enough to experiment with. Here's a sample of a directory of assorted
> VMware VMs:
>
> [prs at lynx vmware]$ du -sh *
> 23G C5_64
> 4.0G CentOS_3_9
> 4.7G CentOS-QA
> 6.7G fedora-7-i386
> 4.1G PCLinuxOS_2007
> 21G W2K_Pro
> 22G XP
Phil: Thank you! Not sure if I have enough space available, but I am
going to try this. :-) Lanny
2012 Jun 23
3
How to upgrade from 5.8 to 6.2
Good day,
Please, I am new to CentOS. Could you help me with the upgrade from 5.8
to 6.2?
Thanks a lot
--
--
You Truly
Eric Kom
System Administrator - Metropolitan College
_________________________________________
/ You are scrupulously honest, frank, and \
| straightforward. Therefore you have few |
\ friends. /
-----------------------------------------
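In-place upgrades across CentOS major releases (5.x to 6.x) are not supported; the usual recommendation is a fresh install of 6.x followed by migrating data and configuration. Staying current within the 5.x line, by contrast, is just (a sketch):
$ yum clean all
$ yum update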
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...ace : 49.1TB
Inode Count : 5273970048
Free Inodes : 5273127036
Then full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB + 49.1TB
= 196.4 TB, but df shows:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 48G 21G 25G 46% /
tmpfs 32G 80K 32G 1% /dev/shm
/dev/sda1 190M 62M 119M 35% /boot
/dev/sda4 395G 251G 124G 68% /data
/dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
/dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
stor1...
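When df over a distributed volume looks wrong, a common first step is to compare the per-brick numbers Gluster itself reports against df on each brick filesystem (a sketch; volumedisk1 is the volume name from the message, and the mount points follow the df output above):
$ gluster volume status volumedisk1 detail        # per-brick total/free disk space and inode counts
$ gluster volume info volumedisk1                 # brick list and distribute/replica layout
$ df -h /mnt/glusterfs/vol0 /mnt/glusterfs/vol1   # local view of the brick filesystems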
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...73970048
> Free Inodes : 5273127036
>
>
> Then full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB +49.1TB
> = *196,4 TB *but df shows:
>
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda2 48G 21G 25G 46% /
> tmpfs 32G 80K 32G 1% /dev/shm
> /dev/sda1 190M 62M 119M 35% /boot
> /dev/sda4 395G 251G 124G 68% /data
> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
> /dev/sdc1 50T 15T 36T 29%...
2009 Nov 05
7
Unexpected ENOSPC on a SSD-drive after day of uptime, kernel 2.6.32-rc5
..., 4448256 delalloc bytes, 10704134144 bytes_used,
0 bytes_reserved, 0 bytes_pinned, 0 bytes_readonly, 0 may use
10708582400 total
Further details:
0) The partition that reports ENOSPC is mounted as:
/dev/sda3 /usr btrfs defaults,rw,nodev,noatime
1) df -h reports : /dev/sda3 21G 11G 9.5G 53% /usr
2) btrfs-show :
Label: none uuid: 0a89100d-096d-4c67-b3c7-745c9b7c3dc5
Total devices 1 FS bytes used 10.60GB
devid 1 size 20.00GB used 20.00GB path /dev/sda3
3) The other partitions using btrfs show a similar relatively large
difference between the space reported by df...
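ENOSPC while df still shows free space on btrfs usually means the block-group allocation is exhausted rather than the raw space (note the devid line above: size 20.00GB, used 20.00GB). The breakdown and the usual remedy, with current btrfs-progs, look roughly like this sketch:
$ btrfs filesystem df /usr               # data vs metadata block-group usage
$ btrfs balance start -dusage=5 /usr     # reclaim nearly-empty data block groups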
2003 Apr 15
8
repost (passive FTP server in DMZ and shorewall 1.4.2)
I apologize for the first message. :)
---------------------------------------
I have an FTP server running in the DMZ section of my home network. It uses port 23000 for connection and ports 19990 to 19994 for data transfer.
I have setup the following rule for outside people to connect to it:
DNAT net dmz:192.168.2.2 tcp 23000
I'm at work right now and I can't use
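For passive FTP the advertised data ports need a matching rule alongside the control-port one; in the same rules-file format, that would be something like this sketch (ports taken from the description above):
DNAT net dmz:192.168.2.2 tcp 19990:19994
Depending on the setup, the FTP connection-tracking helper may also need to be loaded on the firewall for passive transfers to be handled correctly.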
2018 Jul 25
2
[RFC 0/4] Virtio uses DMA API for all devices
...;/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
In the host back end it's a QEMU raw image on a tmpfs file system.
disk:
-rw-r--r-- 1 libvirt-qemu kvm 5.0G Jul 24 06:26 disk2.img
mount:
size=21G on /mnt type tmpfs (rw,relatime,size=22020096k)
TEST CONFIG
===========
FIO (https://linux.die.net/man/1/fio) is being run with and without
the patches.
Read test config:
[Sequential]
direct=1
ioengine=libaio
runtime=5m
time_based
filename=/dev/vda
bs=4k
numjobs=16
rw=read
unlink=1
iodepth=256...
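To reproduce a run, a job section like the one above is saved to a file and passed to fio directly (a sketch; the file name is arbitrary):
$ fio seq-read.fio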
2018 Jul 25
2
[RFC 0/4] Virtio uses DMA API for all devices
...;/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
In the host back end it's a QEMU raw image on a tmpfs file system.
disk:
-rw-r--r-- 1 libvirt-qemu kvm 5.0G Jul 24 06:26 disk2.img
mount:
size=21G on /mnt type tmpfs (rw,relatime,size=22020096k)
TEST CONFIG
===========
FIO (https://linux.die.net/man/1/fio) is being run with and without
the patches.
Read test config:
[Sequential]
direct=1
ioengine=libaio
runtime=5m
time_based
filename=/dev/vda
bs=4k
numjobs=16
rw=read
unlink=1
iodepth=256...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...: 5273127036
>>
>>
>> Then full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB
>> +49.1TB = *196,4 TB *but df shows:
>>
>> [root at stor1 ~]# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sda2 48G 21G 25G 46% /
>> tmpfs 32G 80K 32G 1% /dev/shm
>> /dev/sda1 190M 62M 119M 35% /boot
>> /dev/sda4 395G 251G 124G 68% /data
>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>> /dev/sdc1 5...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...;>>
>>> Then full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB
>>> +49.1TB = *196,4 TB *but df shows:
>>>
>>> [root at stor1 ~]# df -h
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/sda2 48G 21G 25G 46% /
>>> tmpfs 32G 80K 32G 1% /dev/shm
>>> /dev/sda1 190M 62M 119M 35% /boot
>>> /dev/sda4 395G 251G 124G 68% /data
>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>>> /dev...
2011 Dec 31
1
problem with missing bricks
Gluster-user folks,
I'm trying to use gluster in a way that may be considered an unusual use
case for gluster. Feel free to let me know if you think what I'm doing
is dumb. It just feels very comfortable doing this with gluster.
I have been using gluster in other, more orthodox configurations, for
several years.
I have a single system with 45 inexpensive sata drives - it's a
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...>>>> Then full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB
>>>> +49.1TB = *196,4 TB *but df shows:
>>>>
>>>> [root at stor1 ~]# df -h
>>>> Filesystem Size Used Avail Use% Mounted on
>>>> /dev/sda2 48G 21G 25G 46% /
>>>> tmpfs 32G 80K 32G 1% /dev/shm
>>>> /dev/sda1 190M 62M 119M 35% /boot
>>>> /dev/sda4 395G 251G 124G 68% /data
>>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
&...
2018 May 01
4
Re: Create qcow2 v3 volumes via libvirt
I have been using internal snapshots on production qcow2 images for a
couple of years, admittedly as infrequently as possible, with one
exception; that exception has had multiple snapshots taken and
removed using virt-manager's GUI.
I was unaware of this:
> There are some technical downsides to
> internal snapshots IIUC, such as inability to free the space used by the
> internal
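For reference, internal qcow2 snapshots can also be listed and deleted offline with qemu-img, though deleting them does not shrink the image; reclaiming the space requires rewriting it. A sketch (image paths and snapshot tag hypothetical):
$ qemu-img snapshot -l /var/lib/libvirt/images/guest.qcow2        # list internal snapshots
$ qemu-img snapshot -d snap1 /var/lib/libvirt/images/guest.qcow2  # delete one by tag
$ qemu-img convert -O qcow2 guest.qcow2 guest-compact.qcow2       # rewrite to reclaim space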
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...r volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB
>>>>> +49.1TB = *196,4 TB *but df shows:
>>>>>
>>>>> [root at stor1 ~]# df -h
>>>>> Filesystem Size Used Avail Use% Mounted on
>>>>> /dev/sda2 48G 21G 25G 46% /
>>>>> tmpfs 32G 80K 32G 1% /dev/shm
>>>>> /dev/sda1 190M 62M 119M 35% /boot
>>>>> /dev/sda4 395G 251G 124G 68% /data
>>>>> /dev/sdb1 26T 601G 25T 3% /mnt/...
2019 Sep 12
2
Fw: Btrfs Samba and Quotas
Hello Hendrik
Could you run the two commands 'mount' and 'df -TPh' on OMV
and post the output to us? Thank you.
--
Regards,
Jones Syue
QNAP Systems, Inc.