search for: swapouts

Displaying 18 results from an estimated 20 matches for "swapouts".

2001 Nov 06
2
ext3-0.9.15 against linux-2.4.14
Download details and documentation are at http://www.uow.edu.au/~andrewm/linux/ext3/ Changes since ext3-0.9.13 (which was against linux-2.4.13): - Fixed a null-pointer dereference oops which could hit on SMP machines. This fix was applied to 2.4.12-ac6, but the oops has never been reported against -ac kernels. - Large amounts of developer debug code have been removed. This will now be
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
On Mon, Jun 22, 2020 at 4:02 PM John Hubbard <jhubbard at nvidia.com> wrote: > > On 2020-06-22 15:33, Yang Shi wrote: > > On Mon, Jun 22, 2020 at 3:30 PM Yang Shi <shy828301 at gmail.com> wrote: > >> On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: > >>> On 22 Jun 2020, at 17:31, Ralph Campbell wrote: > >>>> On
2005 Jul 29
1
sda of CentOS 4 and hda Windows dual boot possible?
...dev/hda5 d: w95 fat 32 Currently I can boot into the CentOS 4 install, and it does not see the PATA drive by default unless I go into fdisk, of course; it is not mounted. Or I can switch them and boot into Win98, whatever is best for this learning test. I've always tried to have 2 or more (hot/cold swapouts) of everything, client and server hardware, so I never needed to dual boot. Can someone show me what steps they would take using linux rescue to get this "test" situation to dual boot, please? I did try with "linux rescue" and was receiving grub not found errors when running gru...
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
On Mon, Jun 22, 2020 at 3:30 PM Yang Shi <shy828301 at gmail.com> wrote: > > On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: > > > > On 22 Jun 2020, at 17:31, Ralph Campbell wrote: > > > > > On 6/22/20 1:10 PM, Zi Yan wrote: > > >> On 22 Jun 2020, at 15:36, Ralph Campbell wrote: > > >> > > >>> On
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
On 22 Jun 2020, at 17:31, Ralph Campbell wrote: > On 6/22/20 1:10 PM, Zi Yan wrote: >> On 22 Jun 2020, at 15:36, Ralph Campbell wrote: >> >>> On 6/21/20 4:20 PM, Zi Yan wrote: >>>> On 19 Jun 2020, at 17:56, Ralph Campbell wrote: >>>> >>>>> Support transparent huge page migration to ZONE_DEVICE private memory. >>>>> A
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
On 2020-06-22 15:33, Yang Shi wrote: > On Mon, Jun 22, 2020 at 3:30 PM Yang Shi <shy828301 at gmail.com> wrote: >> On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: >>> On 22 Jun 2020, at 17:31, Ralph Campbell wrote: >>>> On 6/22/20 1:10 PM, Zi Yan wrote: >>>>> On 22 Jun 2020, at 15:36, Ralph Campbell wrote:
2006 Nov 04
0
page allocation failure. order:0, mode:0x50
Just because I've been pulling my hair out for over a month about this problem, I'd like to get it into the archives. When copying large files (multigigabyte) on a CentOS 4.4 4GB Xeon (running 32 bit) server (2.6.9-42.0.3.ELsmp), I was getting: page allocation failure. order:0, mode:0x50 (snip lines and lines of debug output) This was accompanied by a huge swapout storm that brought
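For anyone decoding that failure line: order:0 means a single-page allocation, and on a 2.6.9-era kernel mode:0x50 is __GFP_WAIT | __GFP_IO, i.e. GFP_NOFS. A minimal sketch, assuming the flag values from that kernel generation (later kernels renamed and renumbered these bits):

#include <stdio.h>

/* GFP flag values as defined in 2.6-era kernels (assumption: these
 * moved around in later releases). */
#define __GFP_WAIT 0x10u   /* allocator may sleep */
#define __GFP_IO   0x40u   /* allocator may start disk I/O */

int main(void)
{
    unsigned int mode = 0x50; /* from "page allocation failure ... mode:0x50" */

    /* 0x50 == __GFP_WAIT | __GFP_IO == GFP_NOFS: the caller may sleep
     * and do I/O but must not re-enter the filesystem -- typical of
     * buffer-cache allocations during a large file copy. */
    printf("may sleep: %s, may do I/O: %s\n",
           (mode & __GFP_WAIT) ? "yes" : "no",
           (mode & __GFP_IO) ? "yes" : "no");
    return 0;
}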
2003 Aug 27
0
In the beginning, there was mud...
A couple of days ago, I was given the task of replacing a mail server -- old hardware, network topology changes, and personnel changes led up to this. Now, the actual swapout of a mail server is relatively straightforward and (somewhat) easy to do. But this one had a wrench -- I had to authenticate off of an NT4 domain. There were some other, smaller, wrenches as well, but they have nothing to
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
Hello Jerome, On Thu, Mar 07, 2019 at 03:17:22PM -0500, Jerome Glisse wrote: > So for the above the easiest thing is to call set_page_dirty() from > the mmu notifier callback. It is always safe to use the non locking > variant from such callback. Well, it is safe only if the page was > mapped with write permission prior to the callback, so here I assume > nothing stupid is going on and
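A minimal sketch of the shape being discussed, to make the locking point concrete; this is not the actual vhost patch, and struct my_ctx with its fields is hypothetical bookkeeping for a batch of pinned pages:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

struct my_ctx {
    struct mmu_notifier mn;
    struct page **pages;
    unsigned long npages;
    bool mapped_writable;   /* were the pages pinned with write access? */
};

static int my_invalidate_range_start(struct mmu_notifier *mn,
                                     const struct mmu_notifier_range *range)
{
    struct my_ctx *ctx = container_of(mn, struct my_ctx, mn);
    unsigned long i;

    for (i = 0; i < ctx->npages; i++) {
        /* set_page_dirty() (the non-locking variant, as opposed to
         * set_page_dirty_lock()) is only safe here because the page
         * was mapped with write permission before this callback ran. */
        if (ctx->mapped_writable)
            set_page_dirty(ctx->pages[i]);
        put_page(ctx->pages[i]);
    }
    ctx->npages = 0;
    return 0;
}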
2020 Jun 23
0
[PATCH 13/16] mm: support THP migration to device private memory
On 6/22/20 4:54 PM, Yang Shi wrote: > On Mon, Jun 22, 2020 at 4:02 PM John Hubbard <jhubbard at nvidia.com> wrote: >> >> On 2020-06-22 15:33, Yang Shi wrote: >>> On Mon, Jun 22, 2020 at 3:30 PM Yang Shi <shy828301 at gmail.com> wrote: >>>> On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: >>>>> On 22 Jun 2020, at
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 02:38:38PM -0500, Andrea Arcangeli wrote: > On Thu, Mar 07, 2019 at 02:09:10PM -0500, Jerome Glisse wrote: > > I thought this patch was only for anonymous memory, i.e. not file-backed? > > Yes, the other common usages are on hugetlbfs/tmpfs that also don't > need to implement writeback and are obviously safe too. > > > If so then set dirty is
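The distinction drawn here is that backing stores with no real writeback path (anonymous memory, tmpfs/shmem, hugetlbfs) can be dirtied without racing a filesystem's writeback of clean pages. A hedged sketch of that predicate, not code from the patch, using the standard in-kernel VMA helpers:

#include <linux/mm.h>
#include <linux/hugetlb_inline.h>

/* Dirtying is uncontroversial only when the VMA's backing store has
 * no real writeback machinery. */
static bool dirty_is_safe(struct vm_area_struct *vma)
{
    return vma_is_anonymous(vma) ||   /* anon memory */
           vma_is_shmem(vma)     ||   /* tmpfs/shmem */
           is_vm_hugetlb_page(vma);   /* hugetlbfs */
}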
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: > > On 22 Jun 2020, at 17:31, Ralph Campbell wrote: > > > On 6/22/20 1:10 PM, Zi Yan wrote: > >> On 22 Jun 2020, at 15:36, Ralph Campbell wrote: > >> > >>> On 6/21/20 4:20 PM, Zi Yan wrote: > >>>> On 19 Jun 2020, at 17:56, Ralph Campbell wrote: > >>>>
2019 Mar 08
2
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/8 5:27 PM, Andrea Arcangeli wrote: > Hello Jerome, > > On Thu, Mar 07, 2019 at 03:17:22PM -0500, Jerome Glisse wrote: >> So for the above the easiest thing is to call set_page_dirty() from >> the mmu notifier callback. It is always safe to use the non locking >> variant from such callback. Well, it is safe only if the page was >> mapped with write permission
2006 Sep 21
1
Page allocation failure and slow system
Hi all. Every few days I am getting a message that looks like the one I have cut and pasted below. It is accompanied by a severe slowdown of the system; in fact, it's practically locked. The machine runs gnome-desktops for 40 users plus about a hundred sessions of a curses-based point-of-sale application, plus a few other functions, so it's a busy machine. Dual Xeon 3.2GHz processors.
2005 Jan 05
13
Digium T100P T1 Card
Hello All, I could use a recommendation if anyone has a moment. I have the T100P but I have not gotten my service yet. I want to have at least 12 lines of digital voice with DID. Should I just seek out a PRI ISDN provider or is there something else I should look for? I want to keep cost as low as possible. Also, I want to own my own router for the phones since it is always a hassle to get
2007 Dec 05
13
[PATCH] unshadow the page table page which are used as data page
The patch deals with the situation in which the guest OS uses unused page table pages as data pages and writes data to them. The pages are still held by Xen as page table pages, so lots of unnecessary page faults occur. The patch checks whether the data the guest writes to the page table contains a valid mfn; if not, we conclude the page is now a data page and unshadow it. The patch
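Roughly, the described heuristic reads as below; this is a simplified sketch, with mfn_is_valid() and unshadow_page() as hypothetical stand-ins for the real Xen internals:

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT  12
#define PTE_PRESENT 0x1u

extern bool mfn_is_valid(uint64_t mfn);      /* does this machine frame exist? */
extern void unshadow_page(void *pt_page);    /* stop shadowing as a page table */

/* The guest wrote 'val' to a page Xen still shadows as a page table.
 * If the value cannot be a valid PTE, assume the guest has recycled
 * the page as ordinary data and unshadow it, so later writes stop
 * taking unnecessary shadow page faults. */
void on_guest_pagetable_write(void *pt_page, uint64_t val)
{
    uint64_t mfn = val >> PAGE_SHIFT;

    if (!(val & PTE_PRESENT) || !mfn_is_valid(mfn))
        unshadow_page(pt_page);
}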
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
On 22 Jun 2020, at 15:36, Ralph Campbell wrote: > On 6/21/20 4:20 PM, Zi Yan wrote: >> On 19 Jun 2020, at 17:56, Ralph Campbell wrote: >> >>> Support transparent huge page migration to ZONE_DEVICE private memory. >>> A new flag (MIGRATE_PFN_COMPOUND) is added to the input PFN array to >>> indicate the huge page was fully mapped by the CPU. >>>
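A hedged sketch of how a consumer of the source PFN array might honor the proposed flag; MIGRATE_PFN_COMPOUND comes from this patch series (it is not in mainline), its bit position here is assumed, and the loop body is illustrative only:

#include <linux/migrate.h>

/* Proposed by this patch series, not mainline; bit position assumed. */
#ifndef MIGRATE_PFN_COMPOUND
#define MIGRATE_PFN_COMPOUND (1UL << 4)
#endif

static void scan_src_pfns(struct migrate_vma *migrate)
{
    unsigned long i;

    for (i = 0; i < migrate->npages; i++) {
        unsigned long mpfn = migrate->src[i];

        if (!(mpfn & MIGRATE_PFN_VALID))
            continue;

        if (mpfn & MIGRATE_PFN_COMPOUND) {
            /* The CPU had the whole huge page mapped: copy and
             * remap it as a single THP-sized unit. */
        } else {
            /* Fall back to migrating one base page. */
        }
    }
}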
2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since zdb process start and:

  PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  827 root     4936M 4931M sleep   59    0   0:50:47 0.2% zdb/209

Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why?