Displaying 8 results from an estimated 8 matches for "168g".
2007 Apr 20
0
problem mounting one of the zfs file systems during boot
.../mypool
mypool/d                  271G  2.12G  143G  /d/d2
mypool/d@2006_month_10   3.72G      -  123G  -
mypool/d@2006_month_12   22.3G      -  156G  -
mypool/d@2007_month_01   23.3G      -  161G  -
mypool/d@2007_month_02   16.1G      -  172G  -
mypool/d@2007_month_03   13.8G      -  168G  -
mypool/d@2007_month_04   15.7G      -  168G  -
mypool2                   489G   448G   52K  /mypool2
mypool2/d3                171G   448G  171G  legacy
Regards,
Chris
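The "legacy" mountpoint on mypool2/d3 in the listing above is a plausible cause of a boot-time mount failure: ZFS does not mount legacy datasets itself but leaves them to vfstab. A hedged sketch of the two usual fixes (the /d3 mount path is an assumption, not taken from the post):

```shell
# Assumption: the boot problem is the legacy mountpoint shown above.
# Option 1: let ZFS manage the mount again (path /d3 is a guess):
zfs set mountpoint=/d3 mypool2/d3
# Option 2: keep it legacy and add an /etc/vfstab entry along the lines of:
#   mypool2/d3  -  /d3  zfs  -  yes  -
```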
2007 Apr 14
3
zfs snaps and removing some files
...0G 24.5K /mypool
mypool/d            271G  2.40G  143G  /d/d2
mypool/d@month_10  3.72G      -  123G  -
mypool/d@month_12  22.3G      -  156G  -
mypool/d@month_01  23.3G      -  161G  -
mypool/d@month_02  16.1G      -  172G  -
mypool/d@month_03  13.8G      -  168G  -
mypool/d@month_04  15.7G      -  168G  -
mypool/d@day_14     185M      -  143G  -
Anyway, the snapshots I have contain certain files that are a few
gigs in size. I went into the snapshots and tried to remove them, but I got the
message:
[11:42:50] root at chrysek...
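Snapshots are read-only, which is why deleting files under .zfs/snapshot fails; the space a file pins can only be reclaimed by destroying the snapshot(s) that reference it. A hedged sketch, reusing a snapshot name from the listing above:

```shell
# Dry run first: -n shows what would be destroyed, -v the space reclaimed
zfs destroy -nv mypool/d@month_10
# Then actually destroy the snapshot to free the blocks it holds
zfs destroy mypool/d@month_10
```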
2019 Oct 12
0
qemu on centos 8 with nvme disk
I have CentOS 8 install solely on one nvme drive and it works fine and
relatively quickly.
/dev/nvme0n1p4  218G   50G  168G  23% /
/dev/nvme0n1p2  2.0G  235M  1.6G  13% /boot
/dev/nvme0n1p1  200M  6.8M  194M   4% /boot/efi
You might want to partition the device (p3 is swap)
Alan
On 13/10/2019 10:38, Jerry Geis wrote:
> Hi All - I use qemu on my centOS 7.7 box that has software raid of 2- SSD
>...
2009 Apr 27
5
Wine and /home partition woes on Gentoo/amd64
I apologize if the question has been posed before, but I did not find any similar posts by browsing or searching through the forums. Here it goes:
I have been using Gentoo Linux for a year now, and never have I had a problem I couldn't solve. However, not long ago I bought a 1 TB hard disk which I divided into 7 partitions: /boot, (swap), /, /var, /tmp, /usr and /home (yes, that's FreeBSD
2019 Oct 12
7
qemu on centos 8 with nvme disk
Hi All - I use qemu on my centOS 7.7 box that has software raid of 2- SSD
disks.
I installed an NVMe drive in the computer also. I tried to install CentOS 8
on it
(using the physical /dev/nvme0n1, with -hda /dev/nvme0n1 as the disk).
The process started installing but is really "slow" - I was expecting with
the nvme device it would be much quicker.
Is there something I am missing how to
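Bare -hda gives the guest emulated IDE, which by itself can explain the slowness; a virtio drive with host caching off is the usual alternative. A minimal sketch, assuming whole-disk pass-through of /dev/nvme0n1 (the memory size and other options are placeholders):

```shell
qemu-system-x86_64 -enable-kvm -m 4096 \
  -drive file=/dev/nvme0n1,if=virtio,format=raw,cache=none
```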
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.>
I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part.
The system reboots immediately.
Here is the log in /var/adm/messages
Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
Feb 8 16:07:09 amber panic[cpu1]/thread=ffffff014ba86e40:
Feb 8
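For context, -d changes how the received dataset is named: it strips only the pool portion of the sent snapshot's name and recreates the rest of the hierarchy under the target. A sketch of the kind of command involved (pool and snapshot names are borrowed from threads above, not from this report):

```shell
# A replicated send of mypool/d@... received with -d lands as mypool2/d@...
zfs send -R mypool/d@2007_month_04 | zfs receive -d mypool2
```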
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper with respect to read AND write performance.
I did some simple mkfile 512G tests and found that, on average, ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
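mkfile is Solaris-specific; a portable equivalent of that kind of sequential-write probe is dd. A minimal sketch (the path and size are placeholders — a meaningful test would write well beyond RAM, e.g. the 512G used above):

```shell
# Write 64 MiB of zeros and force it to disk; dd reports throughput on stderr.
dd if=/dev/zero of=/tmp/seq_write_test bs=1M count=64 conv=fsync
rm -f /tmp/seq_write_test
```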
2010 Nov 11
8
zpool import panics
...segments 30640 maxsize 1.23G
        freepct 7%
metaslab 22 offset 58000000000 spacemap 2896
        free 10.0G
        segments 1769 maxsize 10.0G
        freepct 3%
metaslab 23 offset 5c000000000 spacemap 3078
        free 168G
        segments 29401 maxsize 113G
        freepct 65%
metaslab 24 offset 60000000000 spacemap 3187
        free 11.2G
        segments 2230 maxsize 9.20G
        freepct 4%
metaslab 25 offset 64000000000 spac...