search for: 115g

Displaying 7 results from an estimated 7 matches for "115g".

2001 Dec 19
3
ext3 inode error 28
hello: I have been reviewing my message log and have found the following message: Dec 19 06:27:28 server02 kernel: EXT3-fs error (device sd(8,7)) in ext3_new_inode: error 28 What is error 28, and should I be worried about it? Ray Turcotte
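
To answer the question in this thread: on Linux, kernel filesystem errors like this one report raw errno values, and errno 28 is ENOSPC ("No space left on device") — so ext3 failed to allocate a new inode because the filesystem ran out of space or inodes. A quick way to look up any errno number, using Python's standard `errno` module:

```python
import errno
import os

# errno 28 on Linux is ENOSPC: "No space left on device".
# errno.errorcode maps numeric values back to their symbolic names.
print(errno.errorcode[28])   # ENOSPC
print(os.strerror(28))       # No space left on device
```

With ext3_new_inode failing on ENOSPC, checking `df -h` and `df -i` (inode usage) on the affected partition would be the next step.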
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines
Hi, After upgrading to b95 of OSOL/Indiana, and doing a ZFS upgrade to the newer revision, all arrays I have using ZFS mirroring are displaying errors. This started happening immediately after ZFS upgrades. Here is an example: ormandj at neutron.corenode.com:~$ zpool status pool: rpool state: DEGRADED status: One or more devices has experienced an unrecoverable error. An attempt was
2013 May 09
0
Memory reservation for 32bit guests on high RAM systems
...with 384G of RAM: dom0_mem total_available_memory 32bit 64bit min:3G,max:128G 128 125G 127G -128G 128 65G 128G min:3G,max:-128G 240 65G 239G min:3G,max:160G 150 115G 149G We're either missing something fundamental about how to configure this, or there's a bug in ballooning or reservation that is preventing us from using the full 128G towards 32bit domains while being able to use all of the remaining RAM for 64 bit guests. Anyone have an...
2017 Sep 11
2
Cannot chainload a formerly working Linux system
...tion. Works OK (as far as it goes). Device Boot Start End Sectors Size Id Type /dev/sdb1 * 2048 2042239 2040192 996.2M ef EFI (FAT-12/16/32) /dev/sdb2 2042240 3070399 1028160 502M 82 Linux swap / Solaris /dev/sdb3 3072000 244244479 241172480 115G 83 Linux /dev/sdb4 244244480 499196591 254952112 121.6G 7 HPFS/NTFS/exFAT The BLFS system (on sdb3) used to be booted from a Win7 (sdb4), no longer active now (hence the current attempts at booting through syslinux). First sector of sdb3 (with "legacy", 0.97, GRUB STAGE2 used to b...
2018 Mar 14
2
rsync of a reflink from OCFS2
...896 Mar 13 13:25 lost+found 47427620 -rwxr-xr-x 1 root root 107374182400 Mar 14 13:36 sa.raw 47410284 -rwxr-xr-x 1 root root 107374182400 Mar 14 11:37 sa.raw.snap ha-idg-1:/cluster/guests/servers_alive # df -h Filesystem Size Used Avail Use% Mounted on ... /dev/dm-9 115G 49G 67G 42% /cluster/guests/servers_alive You see that just 49GB are allocated, because the source has not grown to the maximum, and the reflink occupies no space in the beginning. Maximum size is 100GB. I would now expect a rsync from the snap would transfer just some megabytes to the fi...
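
The distinction this thread hinges on — apparent file size versus actually allocated space — is what makes the 100GB reflink show up as only 49GB used in `df`. A minimal sketch of how to observe that gap from Python (the filename and 10 MiB size are arbitrary for illustration; this assumes a filesystem that supports sparse files, as OCFS2, ext4, and tmpfs do):

```python
import os
import tempfile

# Create a file with a large apparent size but no allocated data blocks,
# analogous to a fresh reflink/snapshot that has not diverged yet.
fd, path = tempfile.mkstemp()
os.close(fd)
os.truncate(path, 10 * 1024**2)   # apparent size: 10 MiB, nothing written

st = os.stat(path)
print(st.st_size)          # apparent size in bytes (what ls -l shows)
print(st.st_blocks * 512)  # allocated bytes (what du / df account for)
os.remove(path)
```

A plain rsync reads the file by its apparent size, which is why the poster's expectation (transfer only the allocated megabytes) does not hold without options such as `--sparse`.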
2010 May 18
25
Very serious performance degradation
Hi, I'm running OpenSolaris 2009.06, and I'm facing a serious performance loss with ZFS! It's a raidz1 pool, made of 4 x 1TB SATA disks: zfs_raid ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c7t2d0 ONLINE 0 0 0 c7t3d0 ONLINE 0 0 0 c7t4d0 ONLINE 0 0
2010 Mar 02
9
Filebench Performance is weird
Greetings all, I am using the Filebench benchmark in "interactive mode" to test ZFS performance with a randomread workload. My Filebench settings & run results are as follows ------------------------------------------------------------------------------------------ filebench> set $filesize=5g filebench> set $dir=/hdd/fs32k filebench> set $iosize=32k filebench> set