Displaying 20 results from an estimated 23 matches for "124g".
2008 Jan 04
3
Can't access my data
...l/homes': mountpoint or dataset is busy
The data still seems to exist (the correct amount of used space is
reported), but /homespool
doesn't provide me any obvious way to get to it any more.
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
homespool 9.91G 124G 18K /homespool
homespool/homes 9.91G 124G 9.91G /homes
ls -laR /homespool
/homespool:
total 5
drwxr-xr-x 2 root sys 2 Jan 4 11:48 .
drwxr-xr-x 48 root root 1024 Jan 4 20:45 ..
I'd dearly love to recover this pool (no backup, yes I know :-( ).
It WA...
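A common first step here, assuming the pool itself is healthy as the zfs list output suggests, is to check whether anything is still holding the old mountpoint and then remount the dataset by hand. The commands below are a hedged sketch using standard Solaris/ZFS tools, not quoted from the thread:
# fuser -c /homes                        (reports processes holding the mount, if it is still mounted)
# zfs unmount homespool/homes
# zfs mount homespool/homes
# zfs get mounted,mountpoint homespool/homes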
2015 Jan 16
2
Guests using more ram than specified
Hi,
today I noticed that one of my HVs started swapping aggressively and
noticed that the two guests running on it use quite a bit more ram than
I assigned to them. They respectively were assigned 124G and 60G with
the idea that the 192G system then has 8G for other purposes. In top I
see the VMs using about 128G and 64G which means there is nothing left
for the system. This is on a CentOS 7 system.
Any ideas what causes this or how I can calculate the actual maximum
amount of RAM I can assign to...
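For context, a qemu-kvm process always consumes somewhat more than the RAM assigned to the guest (video RAM, the emulator itself and per-vCPU overhead come on top). A rough way to compare assigned versus actual usage, sketched with standard libvirt/procps commands and a <guest> placeholder rather than anything from this thread:
# virsh dominfo <guest>                  (assigned max/used memory as libvirt sees it)
# virsh dommemstat <guest>               (balloon statistics reported by the guest)
# ps -o rss,args -C qemu-kvm             (resident size of each qemu-kvm process on the host)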
2016 Jan 27
2
CentOS 7, 327 kernel still crashing
...ink* they're all R420's, but I
could be wrong, just all do the same thing on boot.
*****************
I've just updated a CentOS 7 server to the latest kernel,
vmlinuz-3.10.0-327.4.5.el7.x86_64, and the server fails to boot. It has
failed on every 327 kernel.
Server: Dell R420, 2 Xeons, 124G RAM.
From the rdsosreport.txt, the relevant portion is:
[ 3.317974] <servername> systemd[1]: Starting File System Check on
/dev/disk/by-label/\x2f...
[ 3.320089] <servername> systemd-fsck[590]: Failed to detect device
/dev/disk/by-label/
[ 3.320567] <servername> systemd[...
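Since systemd-fsck is apparently failing to resolve the root file system by label, one hedged check (from a rescue shell or an older, still-booting kernel) is to confirm that the label exists and matches what /etc/fstab and the new initramfs expect. These are standard util-linux/dracut commands, not quoted from the thread:
# blkid                                  (list devices with their LABEL/UUID)
# cat /etc/fstab                         (check the LABEL= entry for /)
# lsinitrd /boot/initramfs-3.10.0-327.4.5.el7.x86_64.img | less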
2011 Jun 30
14
700GB gone?
I have a 1.5TB disk that has several partitions. One of them is 900GB. Now I can only see 300GB. Where is the rest? Is there a command I can do to reach the rest of the data? Will scrub help?
--
This message posted from opensolaris.org
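If the 900GB partition holds a ZFS pool (the mention of scrub suggests it does), the usual way to see where the "missing" space went is to break usage down by snapshots, children and reservations; a scrub verifies data integrity but does not reclaim space. A hedged sketch with a <pool> placeholder:
# zpool list
# zfs list -o space -r <pool>            (shows usedbysnapshots, usedbychildren, etc.)
# zfs list -t snapshot -r <pool>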
2010 Aug 19
0
Unable to mount legacy pool in to zone
...currently in use
errors: No known data errors
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tol-pool 1.08T 91.5G 39.7K /tol-pool
tol-pool/db01 121G 78.7G 121G legacy
tol-pool/db02 112G 87.9G 112G legacy
tol-pool/db03 124G 75.8G 124G legacy
tol-pool/db04 110G 89.5G 110G legacy
tol-pool/db05 118G 82.1G 118G legacy
tol-pool/oracle 16.8G 13.2G 16.8G legacy
tol-pool/redo01 2.34G 17.7G 2.34G legacy
tol-pool/redo02 2.20G 17.8G 2.20G legacy
tol-pool/redo03 1.17G...
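Datasets with mountpoint=legacy are not picked up by zfs mount -a; inside a zone they have to be mounted via the legacy mount path or delegated through an fs resource in the zone configuration. A hedged sketch, with the /u01 mountpoint invented purely for illustration:
# mount -F zfs tol-pool/db01 /u01        (manual legacy mount, Solaris syntax)
# zonecfg -z <zone>
zonecfg:<zone>> add fs
zonecfg:<zone>:fs> set type=zfs
zonecfg:<zone>:fs> set special=tol-pool/db01
zonecfg:<zone>:fs> set dir=/u01
zonecfg:<zone>:fs> end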
2015 Jan 17
1
Re: Guests using more ram than specified
...znik wrote:
> On 16.01.2015 13:33, Dennis Jacobfeuerborn wrote:
>> Hi,
>> today I noticed that one of my HVs started swapping aggressively and
>> noticed that the two guests running on it use quite a bit more ram than
>> I assigned to them. They respectively were assigned 124G and 60G with
>> the idea that the 192G system then has 8G for other purposes. In top I
>> see the VMs using about 128G and 64G which means there is nothing left
>> for the system. This is on a CentOS 7 system.
>> Any ideas what causes this or how I can calculate the actual m...
2016 Jan 29
2
CentOS 7, 327 kernel still crashing
...all do the same thing on boot.
> > *****************
> > I've just updated a CentOS 7 server to the latest kernel,
> > vmlinuz-3.10.0-327.4.5.el7.x86_64, and the server fails to boot. It has
> > failed on every 327 kernel.
> >
> > Server: Dell R420, 2 Xeons, 124G RAM.
> >
>
> I have the same issue on a 2011 iMac. Usually it takes one or two rounds of kernels more and it starts working, but I have to stay on
> 3.10.0-229.20.1 right now. All the 327's crash on boot.
>
> -wes
The `rpm -q --changelog ` of the 327 kernel looks like th...
2009 Feb 01
4
Automounter issue
....18-53.el5/
drwxr-xr-x 4 pdbuild everyone 4.0K Sep 24 2007 2.6.18-8.el5/
drwxr-xr-x 7 pdbuild everyone 4.0K Oct 16 2007 2.6.18-xen/
[pdbuild at build-c5u1: ~] df /tools/vault/kernels
Filesystem Size Used Avail Use% Mounted on
nas02:/vol/tools/vault
779G 655G 124G 85% /tools/vault
Now I've seen this before where some processes don't wait for the automounter
to do its thing before continuing; they just report "fail" and move on
to the failure handling.
I'm guessing that I need some magic on the automounter configuration to
change th...
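One low-tech workaround, if the real fix in the automounter maps proves elusive, is to have the offending script touch the path and retry briefly before giving up, giving the automounter time to complete the mount. Purely a hedged sketch, not from the thread:
for i in 1 2 3 4 5; do
    ls /tools/vault/kernels >/dev/null 2>&1 && break   # first access kicks the automounter
    sleep 1
done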
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
...16G 26M 16G 1% /run
>> tmpfs 16G 0 16G 0% /sys/fs/cgroup
>> /dev/mapper/gluster-engine 25G 12G 14G 47% /gluster/brick1
>> /dev/sda1 497M 315M 183M 64% /boot
>> /dev/mapper/gluster-data 136G 124G 13G 92% /gluster/brick2
>> /dev/mapper/gluster-iso 25G 7.3G 18G 29% /gluster/brick4
>> tmpfs 3.2G 0 3.2G 0% /run/user/0
>> 192.168.8.11:/engine 15G 9.7G 5.4G 65%
>> /rhev/data-center/mnt/glusterSD/192.168.8.11:_eng...
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines
Hi,
After upgrading to b95 of OSOL/Indiana, and doing a ZFS upgrade to the newer
revision, all arrays I have using ZFS mirroring are displaying errors. This
started happening immediately after ZFS upgrades. Here is an example:
ormandj at neutron.corenode.com:~$ zpool status
pool: rpool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was
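The excerpt cuts off before the device list, but the usual follow-up when a mirrored pool reports unrecoverable errors after an upgrade is to scrub and, if the errors prove stale, clear the counters. Standard zpool commands, offered only as a hedged reminder:
# zpool status -v rpool                  (list the affected files, if any)
# zpool scrub rpool
# zpool clear rpool                      (reset error counters once the cause is understood)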
2016 Jan 28
0
CentOS 7, 327 kernel still crashing
...t I
> could be wrong, just all do the same thing on boot.
> *****************
> I've just updated a CentOS 7 server to the latest kernel,
> vmlinuz-3.10.0-327.4.5.el7.x86_64, and the server fails to boot. It has
> failed on every 327 kernel.
>
> Server: Dell R420, 2 Xeons, 124G RAM.
>
I have the same issue on a 2011 iMac. Usually it takes one or two rounds of kernels more and it starts working, but I have to stay on 3.10.0-229.20.1 right now. All the 327's crash on boot.
-wes
2011 Jun 20
2
using a cross partition WINEPREFIX
...ext4 (rw,commit=0)
> /dev/sda1 on /media/2c512d2e-fcf5-4ef5-8200-e3c79a8a1aca type ext4 (rw,nosuid,nodev,uhelper=udisks)
> root at kayve-laptop:/media/2c512d2e-fcf5-4ef5-8200-e3c79a8a1aca/home/kayve# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda6 124G 98G 20G 84% /
> none 1.9G 300K 1.9G 1% /dev
> none 1.9G 1.3M 1.9G 1% /dev/shm
> none 1.9G 220K 1.9G 1% /var/run
> none 1.9G 0 1.9G 0% /var/lock
> /dev/sr0 239M 239M 0 100% /m...
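Assuming the goal is simply to point Wine at a prefix living on that other ext4 partition, WINEPREFIX can be exported before running Wine. A minimal hedged example; the .wine directory name under the quoted mount point is an assumption:
$ export WINEPREFIX=/media/2c512d2e-fcf5-4ef5-8200-e3c79a8a1aca/home/kayve/.wine
$ winecfg                                (creates or updates the prefix at that location)
$ wine notepad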
2015 Jan 16
0
Re: Guests using more ram than specified
On 16.01.2015 13:33, Dennis Jacobfeuerborn wrote:
> Hi,
> today I noticed that one of my HVs started swapping aggressively and
> noticed that the two guests running on it use quite a bit more ram than
> I assigned to them. They respectively were assigned 124G and 60G with
> the idea that the 192G system then has 8G for other purposes. In top I
> see the VMs using about 128G and 64G which means there is nothing left
> for the system. This is on a CentOS 7 system.
> Any ideas what causes this or how I can calculate the actual maximum
> amou...
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...+49.1TB
= *196,4 TB* but df shows:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 48G 21G 25G 46% /
tmpfs 32G 80K 32G 1% /dev/shm
/dev/sda1 190M 62M 119M 35% /boot
/dev/sda4 395G 251G 124G 68% /data
/dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
/dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
stor1data:/volumedisk0
76T 1,6T 74T 3% /volumedisk0
stor1data:/volumedisk1
*148T* 42T 106T 29% /volumedisk...
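When df for a distributed volume looks too small, it usually means the client is only counting a subset of the bricks; comparing what the servers report per brick against what the fuse client sees is a reasonable first check. Standard gluster CLI, hedged and not quoted from the thread:
# gluster volume info volumedisk1        (brick list and volume type)
# gluster volume status volumedisk1 detail   (per-brick total/free disk space)
# df -h /volumedisk1                     (what the fuse mount reports)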
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...:
>
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda2 48G 21G 25G 46% /
> tmpfs 32G 80K 32G 1% /dev/shm
> /dev/sda1 190M 62M 119M 35% /boot
> /dev/sda4 395G 251G 124G 68% /data
> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 76T 1,6T 74T 3% /volumedisk0
> stor1data:/volumedisk1
> *148T...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...stor1 ~]# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sda2 48G 21G 25G 46% /
>> tmpfs 32G 80K 32G 1% /dev/shm
>> /dev/sda1 190M 62M 119M 35% /boot
>> /dev/sda4 395G 251G 124G 68% /data
>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
>> stor1data:/volumedisk0
>> 76T 1,6T 74T 3% /volumedisk0
>> stor1data:/volumedisk1
>>...
2011 Jun 19
4
Trying to install ChessBase 8.0 and Hiarcs
I have Ubuntu 11.04 and I am trying to install my ChessBase 8.0 on it using Wine. I have copied the CD to my hard disk and I manage to get the installation splash page, but when I click on Install, it says "Can't Run Setup.Exe"
[Image: http://kayve.net/cannot_setup.png ]
[Image: http://kayve.net/InstallChessBase.png ]
[Image: http://kayve.net/install_Hiarcs.png ]
Can
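It can help to launch the installer from a terminal instead of the file manager, so Wine prints why Setup.Exe refuses to run. A hedged sketch, assuming the CD copy lives under ~/chessbase-cd (an invented path):
$ cd ~/chessbase-cd
$ wine Setup.Exe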
2007 Feb 27
16
understanding zfs/thunoer "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance.
I did some simple mkfile 512G tests and found that, on average, ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
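A single large mkfile mostly measures the aggregate sequential write path; to see whether the ~500 MB/s ceiling is per-controller or per-disk it helps to watch the per-vdev breakdown while the test runs. Standard ZFS/Solaris tooling, sketched with a <pool> placeholder rather than taken from the thread:
# zpool iostat -v <pool> 5               (per-vdev bandwidth every 5 seconds)
# iostat -xn 5                           (per-device utilisation and service times)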
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...;>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/sda2 48G 21G 25G 46% /
>>> tmpfs 32G 80K 32G 1% /dev/shm
>>> /dev/sda1 190M 62M 119M 35% /boot
>>> /dev/sda4 395G 251G 124G 68% /data
>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>>> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
>>> stor1data:/volumedisk0
>>> 76T 1,6T 74T 3% /volumedisk0
>>> stor1data:/volu...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...stem Size Used Avail Use% Mounted on
>>>> /dev/sda2 48G 21G 25G 46% /
>>>> tmpfs 32G 80K 32G 1% /dev/shm
>>>> /dev/sda1 190M 62M 119M 35% /boot
>>>> /dev/sda4 395G 251G 124G 68% /data
>>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>>>> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
>>>> stor1data:/volumedisk0
>>>> 76T 1,6T 74T 3% /volumedisk0
>>>...