Displaying 20 results from an estimated 21 matches for "48g".
2008 Oct 16
3
strict memory
Hello All:
Running 5.2 at our university. We have several students' processes
that take up too much memory. Our systems have 64G of RAM, and some
processes take close to 32-48G of RAM. This is causing many problems
for other users. I was wondering if there is a way to restrict memory usage
per process? If a process goes over 32G, simply kill it. Any thoughts
or ideas?
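A minimal sketch of per-process memory capping, assuming bash and the pam_limits setup shipped with CentOS 5; the 32G figure matches the threshold mentioned above, and the group name is hypothetical:

```shell
# Cap virtual memory for one command via a subshell (size in kB;
# 32G = 32 * 1024 * 1024 = 33554432 kB). Allocations beyond the
# limit fail, which typically kills the offending process.
(ulimit -v 33554432; ./big_job)

# To enforce the same cap for every login in a group, pam_limits can
# do it in /etc/security/limits.conf ("as" = address space, in kB;
# "@students" is a hypothetical group name):
#   @students  hard  as  33554432
```

The ulimit approach only covers processes started from shells that set it; the limits.conf entry applies at login via pam_limits.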
TIA
2016 Jan 22
4
LVM mirror database to ramdisk
I'm still running CentOS 5 with Xen.
We recently replaced a virtual host system board with an Intel
S1400FP4, so the host went from a 4 core Xeon with 32G RAM to a 6 core
Xeon with 48G RAM, max 96G. The drives are SSD.
I was recently asked to move an InterBase server from Windows 7 to
Windows Server. The database is 30G.
I'm speculating that if I put the database on a 35G virtual disk and
mirror it to a 35G RAM disk, the speed of database access might improve.
I us...
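A sketch of the mirror-to-RAM-disk idea with LVM, assuming the database LV lives in a volume group named vg0 as logical volume dbvol (both names hypothetical):

```shell
# Load the RAM block driver with one 35G device (rd_size is in kB:
# 35 * 1024 * 1024 = 36700160)
modprobe brd rd_nr=1 rd_size=36700160

# Add the RAM device to the volume group and mirror the database LV
# onto it; --mirrorlog core keeps the mirror log in memory, which is
# acceptable here since the RAM leg vanishes on reboot anyway
pvcreate /dev/ram0
vgextend vg0 /dev/ram0
lvconvert -m1 --mirrorlog core vg0/dbvol /dev/ram0
```

Two caveats: LVM does not guarantee reads are served from the RAM leg, and the mirror has to be re-added and fully resynced after every reboot.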
2009 Feb 25
2
No space left on device
...ce loaded
But we are unable to create files bigger than 972 KB. Here is the output:
[root at t1 root]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda1 16G 8.7G 6.3G 58% /
none 1.0G 0 1.0G 0% /dev/shm
/dev/hda3 299G 252G 48G 85% /home/nfs
[root at t1 root]# mount
/dev/hda1 on / type ext3 (rw)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
none on /dev/shm type tmpfs (rw)
configfs on /sys/kernel/config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
nfsd on /proc/fs/nfsd type...
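When df -h shows free blocks but file creation still fails, one quick check is inode exhaustion; a sketch (the troubled mount in the post is /home/nfs; / is shown here so the command runs anywhere):

```shell
# "No space left on device" while df -h shows free blocks often means
# the filesystem has run out of inodes; -i reports inode usage instead
df -i /
```

If IUse% is at 100% the fix is deleting files (or rebuilding the filesystem with more inodes), not freeing blocks.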
2015 Jun 23
3
installing Centos Question
Hi, I just partitioned my hard drive to 2GB but was not sure how many MB or GB CentOS needs. Can someone help me please? That's all I need to know. Mike
2016 Jan 22
0
LVM mirror database to ramdisk
On Fri, Jan 22, 2016 at 11:02 AM, Ed Heron <Ed at heron-ent.com> wrote:
> I'm still running CentOS 5 with Xen.
>
> We recently replaced a virtual host system board with an Intel
> S1400FP4, so the host went from a 4 core Xeon with 32G RAM to a 6 core
> Xeon with 48G RAM, max 96G. The drives are SSD.
>
> I was recently asked to move an InterBase server from Windows 7 to
> Windows Server. The database is 30G.
>
> I'm speculating that if I put the database on a 35G virtual disk and
> mirror it to a 35G RAM disk, the speed of database a...
2016 Jan 23
0
LVM mirror database to ramdisk
On 01/22/2016 11:02 AM, Ed Heron wrote:
> I'm still running CentOS 5 with Xen.
>
> We recently replaced a virtual host system board with an Intel
> S1400FP4, so the host went from a 4 core Xeon with 32G RAM to a 6 core
> Xeon with 48G RAM, max 96G. The drives are SSD.
>
> I was recently asked to move an InterBase server from Windows 7 to
> Windows Server. The database is 30G.
>
> I'm speculating that if I put the database on a 35G virtual disk and
> mirror it to a 35G RAM disk, the speed of database...
2015 Jun 23
0
installing Centos Question
..., I generally allocated a 20-40GB /
partition. /home is however large your users are. if you run
databases or webservers, those can be as big as you need them to be.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/c7test1-root 50G 2.8G 48G 6% /
/dev/mapper/c7test1-home 76G 5.8G 70G 8% /home
/dev/sda1 497M 264M 234M 54% /boot
but this is a server, there's no desktop stuff at all.
--
john r pierce, recycling bits in santa cruz
2010 Feb 23
2
how to show only quota limit to users via SSH?
...to
SSH?
For example, I set a soft limit of 10GB on this user, but when he logs in he
can see all the limits:
-sh-3.2$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fluid01-root
38G 36G 530M 99% /
/dev/mapper/fluid01-home
48G 15G 30G 34% /home
/dev/md0 190M 33M 148M 19% /boot
tmpfs 881M 0 881M 0% /dev/shm
/dev/mapper/fluid01-cpbackup
203G 184G 9.4G 96% /cpbackup
-sh-3.2$
Is it possible to show him only his limits, and for that matter mounted
partit...
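One way to show a user only his own figures, assuming the quota tools are installed and usrquota is enabled on /home (both assumptions about this setup):

```shell
# Per-user soft/hard limits and current usage, human-readable sizes
# (guarded so the line is skipped where quota tools are absent)
command -v quota >/dev/null && quota -s

# If df must stay available, it can at least be narrowed to the
# filesystem that holds the user's home directory
df -h "$HOME"
```

Hiding the other mounts entirely would take something like a chroot or a restricted shell; df itself has no per-user filtering.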
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...isk Space : 49.1TB
Inode Count : 5273970048
Free Inodes : 5273127036
Then the full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB + 49.1TB
= 196.4 TB, but df shows:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 48G 21G 25G 46% /
tmpfs 32G 80K 32G 1% /dev/shm
/dev/sda1 190M 62M 119M 35% /boot
/dev/sda4 395G 251G 124G 68% /data
/dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
/dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1...
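The arithmetic in the post can be checked directly, and Gluster itself reports per-brick capacity; volumedisk1 is the volume name from the post, and the gluster command assumes a running glusterd:

```shell
# Expected aggregate size of a pure distribute volume is the sum of
# its bricks
awk 'BEGIN { printf "%.1f TB\n", 4 * 49.1 }'   # 196.4 TB

# Per-brick sizes as Gluster sees them (run on a Gluster node):
#   gluster volume status volumedisk1 detail
```

Comparing the per-brick figures against df on the bricks' backing partitions is usually the first step when the aggregate looks wrong.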
2006 Dec 01
2
/var goes read-only
...ev/fd0 /media/floppy auto
pamconsole,exec,noauto,managed 0 0
Here is the output of df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 22G 2.0G 19G 10% /
none 4.0G 0 4.0G 0% /dev/shm
/dev/sdb1 269G 48G 208G 19% /opt
/dev/sda3 9.7G 2.3G 7.0G 25% /var
Kernel version: 2.6.9-42.0.3.ELsmp
Any help would be greatly appreciated!!
--
Thx
Joshua Gimer
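ext3 mounted with errors=remount-ro (a common fstab setting) drops to read-only when the kernel sees an I/O or filesystem error; a sketch of the usual first steps, with the mount point taken from the df output above:

```shell
# Find the error that triggered the remount ('|| true' so an empty or
# unreadable log doesn't abort the script)
dmesg 2>/dev/null | grep -iE 'remount|i/o error|ext3-fs' || true

# After fixing the underlying cause (often a failing disk), remount
# read-write (needs root):
#   mount -o remount,rw /var
```

If the errors point at the disk itself, remounting read-write without replacing the drive just postpones the next read-only drop.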
2007 Jan 09
0
FOSDEM Request For Proposal
...; http://www.fosdem.org
/\\ FOSDEM 2007 :: 24 + 25 February 2007 in Brussels
_\_v Free and Opensource Software Developers European Meeting
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.5 (GNU/Linux)
iD8DBQFFmHL+r3NMWliFcXcRAm7UAJ9j6uuJy8Hkag9KPC9j150Y8Z70GgCgpHJp
RI8xA337GQsPq3mD4K4/48g=
=usLr
-----END PGP SIGNATURE-----
2016 Jan 22
2
LVM mirror database to ramdisk
...016 at 11:02 AM, Ed Heron <Ed at heron-ent.com> wrote:
> > I'm still running CentOS 5 with Xen.
> >
> > We recently replaced a virtual host system board with an Intel
> > S1400FP4, so the host went from a 4 core Xeon with 32G RAM to a 6 core
> > Xeon with 48G RAM, max 96G. The drives are SSD.
> >
> > I was recently asked to move an InterBase server from Windows 7 to
> > Windows Server. The database is 30G.
> >
> > I'm speculating that if I put the database on a 35G virtual disk and
> > mirror it to a 35G RAM...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...: 5273970048
> Free Inodes : 5273127036
>
>
> Then full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB +49.1TB
> = *196,4 TB *but df shows:
>
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda2 48G 21G 25G 46% /
> tmpfs 32G 80K 32G 1% /dev/shm
> /dev/sda1 190M 62M 119M 35% /boot
> /dev/sda4 395G 251G 124G 68% /data
> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
> /dev/sdc1 50T 15T 36T...
2009 Apr 22
6
WinXP Xen guest: compare VNC vs RDP
I'm experimenting with using WinXP Xen guests as an alternative to
upgrading workstations. The administrative advantages seem overwhelming.
Please share thoughts about using VNC vs RDP for remote desktop
connections.
Please share any anecdotal information regarding user reactions and/or
implementation issues.
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...: 5273127036
>>
>>
>> Then full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB
>> +49.1TB = *196,4 TB *but df shows:
>>
>> [root at stor1 ~]# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sda2 48G 21G 25G 46% /
>> tmpfs 32G 80K 32G 1% /dev/shm
>> /dev/sda1 190M 62M 119M 35% /boot
>> /dev/sda4 395G 251G 124G 68% /data
>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>> /dev/sdc1...
2011 Nov 23
3
P2Vs seem to require a very robust Ethernet
Now that we can gather diagnostic info, I think I know why our P2Vs kept
failing last week. Another one just died right in front of my eyes. I
think either the Ethernet or NFS server at this site occasionally
"blips" offline when it gets busy and that messes up P2V migrations.
The RHEV export domain is an NFS share offered by an old Storagetek NAS,
connected over a 10/100 Ethernet.
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...t;
>>>
>>> Then full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB
>>> +49.1TB = *196,4 TB *but df shows:
>>>
>>> [root at stor1 ~]# df -h
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/sda2 48G 21G 25G 46% /
>>> tmpfs 32G 80K 32G 1% /dev/shm
>>> /dev/sda1 190M 62M 119M 35% /boot
>>> /dev/sda4 395G 251G 124G 68% /data
>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
>>>...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...>> Then full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB
>>>> +49.1TB = *196,4 TB *but df shows:
>>>>
>>>> [root at stor1 ~]# df -h
>>>> Filesystem Size Used Avail Use% Mounted on
>>>> /dev/sda2 48G 21G 25G 46% /
>>>> tmpfs 32G 80K 32G 1% /dev/shm
>>>> /dev/sda1 190M 62M 119M 35% /boot
>>>> /dev/sda4 395G 251G 124G 68% /data
>>>> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...ize for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB
>>>>> +49.1TB = *196,4 TB *but df shows:
>>>>>
>>>>> [root at stor1 ~]# df -h
>>>>> Filesystem Size Used Avail Use% Mounted on
>>>>> /dev/sda2 48G 21G 25G 46% /
>>>>> tmpfs 32G 80K 32G 1% /dev/shm
>>>>> /dev/sda1 190M 62M 119M 35% /boot
>>>>> /dev/sda4 395G 251G 124G 68% /data
>>>>> /dev/sdb1 26T 601G 25T 3%...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...should be: 49.1TB + 49.1TB + 49.1TB
>>>>>> +49.1TB = *196,4 TB *but df shows:
>>>>>>
>>>>>> [root at stor1 ~]# df -h
>>>>>> Filesystem Size Used Avail Use% Mounted on
>>>>>> /dev/sda2 48G 21G 25G 46% /
>>>>>> tmpfs 32G 80K 32G 1% /dev/shm
>>>>>> /dev/sda1 190M 62M 119M 35% /boot
>>>>>> /dev/sda4 395G 251G 124G 68% /data
>>>>>> /dev/sdb1 26T...