Displaying 6 results from an estimated 6 matches for "176g".
2009 Jan 12
1
ZFS size is different?
Hi all,
I have 2 questions about ZFS.
1. I created a snapshot of pool1/data1 and used zfs send/recv to copy it to pool2/data2, but the USED column in zfs list is different:
NAME USED AVAIL REFER MOUNTPOINT
pool2/data2 160G 1.44T 159G /pool2/data2
pool1/data 176G 638G 175G /pool1/data1
It keeps about 30,000,000 files.
The content of p_pool/p1 and backup/p_backup is almost the same, so why is the size different?
2. /pool2/data2 is on a RAID5 disk array with 8 disks, and /pool1/data1 is a RAIDZ2 with 5 disks.
The configuration looks like this:
NAME...
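
One common explanation for a gap like this is that raidz2 charges its parity and padding blocks to each dataset's USED, while a hardware RAID5 hides its parity below ZFS; per-dataset settings (compression, copies) and leftover snapshots can also differ. A minimal sketch to compare the relevant properties, assuming the dataset names above (usedbysnapshots needs a reasonably recent ZFS version):

  zfs get used,referenced,usedbysnapshots,compression,compressratio,recordsize,copies pool1/data1
  zfs get used,referenced,usedbysnapshots,compression,compressratio,recordsize,copies pool2/data2
  zfs list -t snapshot -r pool1/data1 pool2/data2   # snapshots kept on one side only also inflate USED
  zpool status pool1 pool2                          # shows the raidz2 vs. single-device layout behind each pool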
2019 Apr 20
3
Do devtmpfs and tmpfs use underlying hard disk storage or physical memory (RAM)?
...0 7.8G 0% /dev/shm
tmpfs tmpfs 7.8G 817M 7.0G 11% /run
tmpfs tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/995
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/1000
total - 185G 8.8G 176G 5% -
#
Do devtmpfs and tmpfs use underlying hard disk storage, or do they use
physical memory (RAM)? What is the purpose of devtmpfs, which is mounted on
/dev, of tmpfs mounted on /dev/shm, and so on? What is the
difference between devtmpfs and tmpfs?
I would appreciate it if anyone can h...
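
Both devtmpfs and tmpfs hold their data in memory, not on disk (tmpfs pages can be pushed to swap under pressure); devtmpfs is simply the kernel-populated instance that carries the device nodes under /dev. A minimal sketch to confirm this on a box like the one above, with /mnt/scratch as a hypothetical mount point:

  findmnt -t tmpfs,devtmpfs                # list every tmpfs/devtmpfs mount and its size/options
  grep -E 'MemTotal|Shmem' /proc/meminfo   # Shmem grows as tmpfs fills, i.e. the data sits in RAM
  mount -t tmpfs -o size=512M tmpfs /mnt/scratch   # size= is only a cap; pages are allocated lazily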
2012 May 23
5
biggest disk partition on 5.8?
...0 running 5.8 ( 64 bit )
I used 'arcconf' to create a big RAID60 (see below).
But when I mount it, it is way too small.
It should be about 20TB:
[root@solexa1 StorMan]# df -h /dev/sdb1
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 186G 60M 176G 1% /mnt/J4400-1
Here is how I created it:
./arcconf create 1 logicaldrive name J4400-1-RAID60 max 60 0 0 0 1 0 2
0 3 0 4 0 5 0 6 0 7 0 8 0 9 0 10 0 11 0 12 0 13 0 14 0 15 0 16 0 17 0
18 0 19 0 20 0 21 0 22 0 23 noprompt
[root@solexa1 StorMan]# ./arcconf getconfig 1 ld
Controllers found: 1...
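
A likely culprit when a ~20 TB logical drive shows up as ~186 G is the partition table: an msdos/MBR label cannot describe a partition larger than 2 TiB, so the device needs a GPT label instead. A minimal sketch with parted, assuming /dev/sdb is the arcconf logical drive as in the df output above:

  parted /dev/sdb mklabel gpt                # destroys the existing (too small) partition table
  parted /dev/sdb mkpart primary 0% 100%     # one partition spanning the whole device
  mkfs.xfs /dev/sdb1                         # XFS copes with ~20 TB; ext3 tops out at 16 TiB
  mount /dev/sdb1 /mnt/J4400-1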
2006 May 03
2
Rsync error on client end: unexpected tag 3 [sender] rsync error: error in rsync protocol data stream (code 12) at io.c(843) [sender]
...celery tmp]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda1 9.7G 3.6G 5.6G 39% /
none 379M 0 379M 0% /dev/shm
/dev/hda4 17G 675M 15G 5% /home
/dev/hda2 9.7G 360M 8.8G 4% /var
/dev/hdb1 276G 176G 87G 68% /disk2
/dev/hdc1 276G 175G 87G 67% /disk3
/dev/sda1 276G 225G 37G 87% /media/usbdisk
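
rsync's code 12 usually means the rsync process on the other end died mid-transfer; a filled-up destination disk, an OOM kill, or a dropped ssh connection are common causes. A minimal diagnostic sketch, with the path and host below purely hypothetical:

  rsync -av --dry-run --itemize-changes /disk2/ remotehost:/backup/disk2/   # re-run with verbosity, without writing anything
  df -h          # check free space on *both* ends; the receiver running out of room is a classic trigger
  dmesg | tail   # look for OOM-killer or disk errors on whichever side dropped the connection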
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance.
I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
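
To see whether the layout or something else is the ceiling, one rough approach is to build a pool from several small raidz2 (or mirror) vdevs and watch per-vdev throughput while repeating the same test. A minimal sketch, with the c*t*d0 disk names as placeholders only:

  zpool create -f tank raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
                       raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0
  ptime mkfile 512g /tank/testfile   # same sequential-write test as in the post
  zpool iostat -v tank 5             # sustained MB/s per vdev while the test runs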
2011 Mar 20
2
task md1_resync:9770 blocked for more than 120 seconds and OOM errors
...read+0x0/0xc4
kernel: [<ffffffff80032870>] kthread+0x0/0x132
kernel: [<ffffffff8005dfa7>] child_rip+0x0/0x11
kernel:
The /var/log/mcelog is empty.
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 20G 1.4G 18G 8% /
/dev/md3 176G 754M 166G 1% /var
/dev/md0 993M 30M 913M 4% /boot
/dev/md2 263G 352M 250G 1% /home
tmpfs 2.0G 0 2.0G 0% /dev/shm
Does anybody have any advice, please? :-(
(Besides "contact or change your hoster", because that doesn't work)....
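
The "blocked for more than 120 seconds" warning during an md resync is often just the resync saturating the disks and starving other I/O. A minimal sketch of things to check and throttle, assuming md1 is the array being resynced:

  cat /proc/mdstat                                   # which array is resyncing and at what speed
  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
  echo 10000 > /proc/sys/dev/raid/speed_limit_max    # cap resync at ~10 MB/s (value is in KB/s)
  echo 0 > /proc/sys/kernel/hung_task_timeout_secs   # silences the 120-second warning itself, as the kernel message suggests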