Displaying 16 results from an estimated 16 matches for "190m".
2016 Jun 09
1
Unable to setup messaging listener
...a/private/smbd.tmp/msg/msg.14033.41':NT_STATUS_DISK_FULL
My first reaction was to check the disk, but there are still 5G free:
[root at bcd ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 14G 5,7G 7,1G 45% /
tmpfs 1,9G 0 1,9G 0% /dev/shm
/dev/sda1 190M 47M 134M 26% /boot
Even with this "error", Samba is working fine.
My question is: is it safe to ignore it, or do I have a real problem?
Rafael
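Samba can report NT_STATUS_DISK_FULL even when `df -h` shows free blocks, because the filesystem holding smbd's private directory can be out of inodes instead. A quick check, sketched here rather than taken from the thread (which path to test depends on where Samba's private dir actually lives):

```shell
# Compare block usage against inode usage; DISK_FULL with free blocks
# often means the IUse% column is at 100% on the affected filesystem.
df -h /
df -i /    # inode usage: look at Inodes / IUsed / IUse%
```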
2007 Mar 23
1
Consolidating LVM volumes..
...omething I haven't done before is reduce the number of volumes on my
server.. Here is my current disk setup..
[root at server1 /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-RootVol00
15G 1.5G 13G 11% /
/dev/md0 190M 42M 139M 24% /boot
/dev/mapper/VolGroup00-DataVol00
39G 16G 22G 42% /data
none 157M 0 157M 0% /dev/shm
/dev/mapper/VolGroup00-HomeVol00
77G 58G 15G 80% /home
/dev/mapper/VolGroup00-VarVol00...
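Reducing the number of volumes in a layout like the one above generally means emptying a volume, removing it, and growing a survivor into the freed extents. A generic sketch using the volume names from the post — these are not the poster's actual commands, and it assumes the data has already been copied off /data and that the surviving filesystem is resizable (e.g. ext3):

```shell
# 1. Unmount the volume being consolidated away (data already migrated).
umount /data
# 2. Remove the logical volume, returning its extents to VolGroup00.
lvremove -y /dev/VolGroup00/DataVol00
# 3. Grow the surviving volume into all freed space in the group.
lvextend -l +100%FREE /dev/VolGroup00/HomeVol00
# 4. Grow the filesystem to fill the enlarged volume.
resize2fs /dev/VolGroup00/HomeVol00
```

Remember to drop the removed volume's /etc/fstab entry afterwards.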
2010 Feb 23
2
how to show only quota limit to users via SSH?
...on this user, but when he logs in he
can see all the limits:
-sh-3.2$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fluid01-root
38G 36G 530M 99% /
/dev/mapper/fluid01-home
48G 15G 30G 34% /home
/dev/md0 190M 33M 148M 19% /boot
tmpfs 881M 0 881M 0% /dev/shm
/dev/mapper/fluid01-cpbackup
203G 184G 9.4G 96% /cpbackup
-sh-3.2$
Is it possible to show him only his own limits and, for that matter, only
his own mounted partition, which in this case is /cpbackup/knocky?
--...
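One way to approximate this (a sketch, not an answer from the thread): `df` restricted to an explicit path reports only the filesystem containing that path, so the user's shell startup file could alias df accordingly. The alias below is hypothetical:

```shell
# df with an explicit path prints only that path's filesystem,
# hiding all other mounts from the listing.
df -h /cpbackup/knocky

# Hypothetical per-user alias, e.g. in that user's ~/.bashrc:
alias df='df -h /cpbackup/knocky'
```

Note that an alias does not stop the user from running /bin/df directly; actually hiding other mounts needs something stronger, such as a chroot or jailed shell.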
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB + 49.1TB
= 196.4 TB, but df shows:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 48G 21G 25G 46% /
tmpfs 32G 80K 32G 1% /dev/shm
/dev/sda1 190M 62M 119M 35% /boot
/dev/sda4 395G 251G 124G 68% /data
/dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
/dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
stor1data:/volumedisk0
76T 1,6T 74T 3% /volumedisk0
stor1data:/volumed...
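The expected figure in the post is simply the sum of the four brick sizes; a quick check of the arithmetic (the df output above reports roughly 76T for the volume, far short of this):

```shell
# Sum the four 49.1 TB bricks; awk prints 196.4.
echo 49.1 49.1 49.1 49.1 | awk '{ print $1 + $2 + $3 + $4 }'
```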
2006 Aug 05
0
Memory Usage after upgrading to pre-release and removing sendfile
After the upgrade my memory usage is shown like this:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4592 flipl 16 0 197m 150m 2360 S 0.0 14.9 6:17.28 mongrel_rails
4585 mongrel 16 0 190m 140m 1756 S 0.0 13.9 0:52.86 mongrel_rails
4579 mongrel 16 0 200m 157m 1752 S 0.0 15.5 0:56.31 mongrel_rails
4582 mongrel 16 0 189m 139m 1752 S 0.0 13.8 1:05.89 mongrel_rails
5427 foo 16 0 184m 139m 1732 S 0.0 13.8 3:30.28 mongrel_rails
5092 blah 16 0 175m...
2006 Apr 17
1
Smbd using too much CPU
...nning so high I can't even think.
This is an extract from top:
------------------------------------------------------------------------------
[root@localhost /]# top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13320 root 16 0 190m 181m 2400 R 77.2 36.1 43:36.14 smbd
This is the result from running strace for about five seconds:
------------------------------------------------------------------------------
[root@localhost /]# strace -p 13320 -cfqrT
% time seconds usecs/cal...
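The strace invocation above is a syscall-profiling run. A sketch of the same approach with the flags spelled out — the PID lookup is illustrative, and attaching requires suitable privileges:

```shell
# -c : count time and calls per syscall, print a summary table on exit
# -f : follow forked child processes
# -q : suppress attach/detach messages
# -r / -T : per-call relative timestamps and time spent in each syscall
pid=$(pidof smbd | awk '{ print $1 }')   # first smbd process (illustrative)
strace -p "$pid" -cfqrT &
tracer=$!
sleep 5
kill "$tracer"    # detach after ~5 s; strace prints the summary table
```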
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
2008 Nov 09
2
Managesieve: Remote login fails
...dap.conf
socket:
  type: listen
  client:
    path: /var/run/dovecot/auth-client
    mode: 432
  master:
    path: /var/run/dovecot/auth-master
    mode: 438
    user: dovecot
    group: mail
plugin:
  fts: squat
  sieve: ~/.dovecot.sieve
  quota: maildir
  quota_rule: *:storage=190M
  quota_rule2: Trash:storage=50M
  acl: vfile:/etc/dovecot/dovecot-acls
  trash: /etc/dovecot/dovecot-trash.conf
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
2010 Jul 01
1
Superblock Problem
.../proc proc defaults 0 0
/dev/md3 swap swap defaults 0 0
== END cat /etc/fstab ==
== BEGIN df -h ==
Filesystem Size Used Avail Use% Mounted on
/dev/md1 450G 72G 355G 17% /
/dev/md0 190M 45M 136M 25% /boot
== END df -h ==
== BEGIN fdisk -l ==
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md3 doesn't contain a valid partition table
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sec...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
2014 Sep 16
5
[PATCH 0/3] tests: Introduce test harness for running tests.
These are my thoughts on adding a test harness to run tests instead of
using automake. The aim of this exercise is to allow us to run the
full test suite on an installed copy of libguestfs. Another aim is to
allow us to work around all the limitations and problems of automake.
The first patch makes an observation that since the ./run script sets
up $PATH to contain all the directories
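The ./run idea can be sketched as a small wrapper that prepends the build tree's tool directories to $PATH before executing a test, so the suite picks up just-built binaries. Directory names here are illustrative assumptions, not libguestfs's actual layout:

```shell
#!/bin/sh
# run: execute a command with the build tree's tools first on $PATH.
builddir=$(dirname "$0")
PATH="$builddir/tools:$builddir/fish:$PATH"
export PATH
exec "$@"
```

Usage would then look like `./run ./tests/some-test`, with the same script working both in the build tree and against an installed copy if $PATH is set up appropriately.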