
Displaying 6 results from an estimated 6 matches for "78mb".

2010 Oct 14
1
Installing CentOS on an extended partition
Can CentOS be installed on an extended partition? The system has the following partitions: OEM (reserved) - 78MB (primary), System - 100MB (primary), C - 55GB (primary), D - 100GB (extended). Can I divide D into two parts, 70GB and 30GB, and install CentOS in the 70GB logical partition?
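In principle, yes: the installer can target a logical partition inside the extended one, provided the bootloader setup is handled. A rough sketch with parted; the device name /dev/sda and the GB offsets are assumptions, not the poster's actual layout:

  parted /dev/sda unit GB print                     # inspect the existing primary/extended layout
  parted /dev/sda mkpart logical ext4 55GB 125GB    # ~70GB logical slice inside the extended partition for CentOS
  parted /dev/sda mkpart logical ntfs 125GB 155GB   # remaining ~30GB kept as a data partition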
2004 Jul 16
1
/proc/xen/memory_target patch
...request is parsed with the same function that parses the mem= boot-time kernel param. Memory sizes in bytes are internally truncated to pages. Minimum change is PAGE_SIZE (4k).
dragonfly:~# cat /proc/xen/memory_target
134217728l
dragonfly:~# echo 50m > /proc/xen/memory_target
Relinquish 78MB to xen. Domain now has 50MB
dragonfly:~# echo 100m > /proc/xen/memory_target
Reclaim 50MB from xen. Domain now has 100MB
dragonfly:~# cat /proc/xen/memory_target
104857600l
David (btw, updating twisted to 1.3 solved my console problems) *** xeno-unstable.bk/linux-2.4.26-xen-sparse/ar...
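The numbers in the transcript line up as follows (plain shell arithmetic, nothing beyond what the excerpt already shows):

  echo $((134217728 / 1024 / 1024))   # 128 (MiB): writing 50m therefore relinquishes 128 - 50 = 78MB to xen
  echo $((104857600 / 1024 / 1024))   # 100 (MiB): the target after reclaiming 50MB back from xen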
2016 Oct 24
5
Server migration
Hi, I have to migrate, online, a dovecot 1.2.15 to a new server. Which is the best way to accomplish this? I have two possibilities: 1) migrate from the very old server to a newer server with the same dovecot version, or 2) migrate from the very old server to a new server with the latest dovecot version. Can I simply use rsync to sync everything and, when the sync is quick, move the mailbox from the old
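A common pattern for this kind of cutover is a two-pass rsync (a sketch only: the mail location /var/vmail, the host name newserver, and index compatibility between Dovecot 1.2 and a newer release are all assumptions to verify first):

  rsync -avH /var/vmail/ newserver:/var/vmail/           # first pass while the old server is still serving mail
  # stop dovecot on the old server, then catch the remaining delta quickly
  rsync -avH --delete /var/vmail/ newserver:/var/vmail/
  # switch clients/MX to the new server and start dovecot there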
2007 Oct 02
23
Mongrel using way more memory on production than staging. Any ideas why?
I've been trying to track down the culprit of erratic behaviour and crashes on my production server (which is split into a number of Xen instances), so I set up a staging server so that I could really try to get to the bottom of it. The staging server (also split with Xen) is set up pretty much identically as far as the mongrel_cluster server is concerned (the production box has two
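A first comparison step is simply to watch per-mongrel memory on both boxes (a generic sketch; the process match and cluster layout are assumptions):

  ps -C ruby -o pid,rss,vsz,args | grep mongrel_rails   # resident vs virtual size of each mongrel
  free -m                                               # overall memory picture inside each Xen guest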
2007 May 05
13
Optimal strategy (add or replace disks) to build a cheap and growing raidz?
Hello, I have an 8-port SATA controller and I don't want to spend the money for 8 x 750 GB SATA disks right now. I'm thinking about an optimal way of building a growing raidz pool without losing any data. As far as I know there are two ways to achieve this: - Adding 750 GB disks from time to time. But this would lead to multiple groups with multiple redundancy/parity disks. I
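The two strategies map roughly onto these zpool operations (disk names are made up; note that growing by replacement only exposes the extra capacity once every disk in the vdev has been swapped):

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0   # start with a 4-disk raidz group now
  zpool add tank raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0      # option 1: add a second raidz vdev later (costs another parity disk)
  zpool replace tank c1t0d0 c2t0d0                      # option 2: replace disks with larger ones, one at a time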
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
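For what it's worth, dd defaults to a 512-byte block size; a sketch of the same comparison with an explicit large block size (the device path and file name are placeholders):

  dd if=/dev/rdsk/c2t0d0s0 of=/dev/null bs=1024k count=1000   # raw device read, ~1GB
  dd if=/tank/testfile of=/dev/null bs=1024k count=1000       # same-size read through ZFS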