search for: pg_data_t

Displaying 16 results from an estimated 16 matches for "pg_data_t".

2010 Aug 06
5
[PATCH] GSoC 2010 - Memory hotplug support for Xen guests - second fully working version - once again
...eturn balloon_stats.target_pages; > > Why does this need its own version? Because the original version returns values no bigger than the initial memory allocation, which does not allow memory hotplug to function. > >+int __ref xen_add_memory(int nid, u64 start, u64 size) > >+{ > >+ pg_data_t *pgdat = NULL; > >+ int new_pgdat = 0, ret; > >+ > >+ lock_system_sleep(); > >+ > >+ if (!node_online(nid)) { > >+ pgdat = hotadd_new_pgdat(nid, start); > >+ ret = -ENOMEM; > >+ if (!pgdat) > >+ goto out; > >+ new_pgdat = 1; > >...
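For readability, here is the hotplug-add path quoted in the excerpt above, unfolded from the quote markers into plain C. This is only a sketch of what the search excerpt shows (kernel context assumed); the remainder of the function is truncated in the result, and the unlock/return at the end is assumed for completeness rather than taken from the patch.

    /* Reconstruction of the xen_add_memory() fragment quoted above (sketch only):
     * if the target node is not online yet, allocate and initialise its
     * pg_data_t before handing the new range to the hotplug core. */
    int __ref xen_add_memory(int nid, u64 start, u64 size)
    {
            pg_data_t *pgdat = NULL;
            int new_pgdat = 0, ret;

            lock_system_sleep();

            if (!node_online(nid)) {
                    /* Node has no pg_data_t yet: create one for it. */
                    pgdat = hotadd_new_pgdat(nid, start);
                    ret = -ENOMEM;
                    if (!pgdat)
                            goto out;
                    new_pgdat = 1;
            }

            /* ... the rest of the function is truncated in the search excerpt;
             * the unlock and return below are assumed for the sketch ... */
    out:
            unlock_system_sleep();
            return ret;
    }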
2010 Aug 12
13
[PATCH] GSoC 2010 - Memory hotplug support for Xen guests - third fully working version
...start, u64 size) > > Could this be __meminit too then? Good question. I looked through the code and could not find any simple explanation why the mm/memory_hotplug.c authors used __ref instead of __meminit. Could you (mm/memory_hotplug.c authors/maintainers) tell us why ??? > >+{ > >+ pg_data_t *pgdat = NULL; > >+ int new_pgdat = 0, ret; > >+ > >+ lock_system_sleep(); > > What's this for? I see all its other users are in the memory hotplug > code, but presumably they're concerned about a real S3 suspend. Do we > care about that here? Yes,...
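As background for the __ref vs. __meminit question above, a minimal sketch of the two annotations (the function names here are illustrative, not from the patch): __meminit code is discarded from the image when memory hotplug is not configured in, and __ref marks a caller whose reference into such a section is intentional, so modpost does not report it as a section mismatch.

    #include <linux/types.h>        /* u64 */
    #include <linux/init.h>         /* __ref, __meminit */

    /* Hypothetical helper: placed in .meminit.text and dropped when
     * CONFIG_MEMORY_HOTPLUG is off. */
    static int __meminit setup_new_node(int nid, u64 start)
    {
            /* node bring-up would go here */
            return 0;
    }

    /* Hypothetical caller: __ref tells modpost that calling into __meminit
     * code from normal text is deliberate, not a section-mismatch bug. */
    int __ref example_add_memory(int nid, u64 start, u64 size)
    {
            return setup_new_node(nid, start);
    }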
2016 Feb 09
4
IR with no optimization
Hi all, I'm compiling the Linux kernel with clang. I want to generate IR with no optimization. However, the kernel can only be compiled with -O2, not -O0. Here is the source code snippet: struct zone *next_zone(struct zone *zone) { pg_data_t *pgdat = zone->zone_pgdat; } I want to know that there is an assignment from "zone" to "pgdat". I'm trying to iterate "store" instructions in the IR. When I compile with -O2, I have the following IR: define %struct.zone* @next_zone(%struct.zone* readonly %zone) #0...
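As a standalone illustration of the question (the kernel itself will not build at -O0, but the snippet can be reproduced outside it), here is a reduced version of next_zone() with a stripped-down struct zone; the field name zone_pgdat is kept, everything else is simplified and made up for the example. Built with clang -O0 -S -emit-llvm, the local pgdat stays as a stack slot, so the assignment shows up as an explicit store instruction in the IR; at -O2, mem2reg/SROA typically promotes it to a register and that store is gone, which is why there is nothing to iterate in the optimized output.

    /* store_repro.c - reduced sketch of the snippet from the question.
     * Structures are cut down to the single field that matters.
     * Build: clang -O0 -S -emit-llvm store_repro.c -o store_repro.ll */
    typedef struct pglist_data pg_data_t;

    struct pglist_data {
            int node_id;            /* placeholder member */
    };

    struct zone {
            pg_data_t *zone_pgdat;
    };

    struct zone *next_zone(struct zone *zone)
    {
            pg_data_t *pgdat = zone->zone_pgdat;  /* -O0: load + store; -O2: folded away */
            (void)pgdat;
            return zone;    /* simplified: the real kernel function walks to the next zone */
    }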
2016 Feb 09
2
IR with no optimization
...>> I'm compiling the Linux kernel with clang. I want to generate IR with no >> optimization. However, the kernel can only be compiled with -O2, not -O0. >> >> Here is the source code snippet: >> >> struct zone *next_zone(struct zone *zone) >> >> { pg_data_t *pgdat = zone->zone_pgdat; >> >> } >> >> I want to know that there is an assignment from "zone" to "pgdat". I'm trying >> to iterate "store" instructions in the IR. >> >> When I compile with -O2, I have the following IR: >>...
2016 Feb 09
2
IR with no optimization
...nt to generate IR with no >>>> optimization. However, the kernel can only be compiled with -O2, not -O0. >>>> >>>> Here is the source code snippet: >>>> >>>> struct zone *next_zone(struct zone *zone) >>>> >>>> { pg_data_t *pgdat = zone->zone_pgdat; >>>> >>>> } >>>> >>>> I want to know that there is an assignment from "zone" to "pgdat". I'm >>>> trying to iterate "store" instructions in the IR. >>>> >>>>...
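Since the stated goal above is to iterate store instructions, here is a small sketch using the LLVM C API (kept in C to match the rest of the thread): it loads a .ll or .bc file and prints every store. The API calls come from llvm-c/Core.h and llvm-c/IRReader.h; the tool name, file layout and build command are assumptions for the example, not part of the thread.

    /* list_stores.c - sketch: walk a module and print each store instruction.
     * Build (assumption):
     *   clang list_stores.c $(llvm-config --cflags --ldflags --libs core irreader) -o list_stores */
    #include <stdio.h>
    #include <llvm-c/Core.h>
    #include <llvm-c/IRReader.h>

    int main(int argc, char **argv)
    {
            LLVMContextRef ctx = LLVMContextCreate();
            LLVMMemoryBufferRef buf;
            LLVMModuleRef mod;
            char *err = NULL;

            if (argc != 2) {
                    fprintf(stderr, "usage: %s <module.ll|module.bc>\n", argv[0]);
                    return 1;
            }
            if (LLVMCreateMemoryBufferWithContentsOfFile(argv[1], &buf, &err)) {
                    fprintf(stderr, "read error: %s\n", err);
                    return 1;
            }
            if (LLVMParseIRInContext(ctx, buf, &mod, &err)) {
                    fprintf(stderr, "parse error: %s\n", err);
                    return 1;
            }

            /* Walk functions -> basic blocks -> instructions, keeping only stores. */
            for (LLVMValueRef fn = LLVMGetFirstFunction(mod); fn;
                 fn = LLVMGetNextFunction(fn))
                    for (LLVMBasicBlockRef bb = LLVMGetFirstBasicBlock(fn); bb;
                         bb = LLVMGetNextBasicBlock(bb))
                            for (LLVMValueRef inst = LLVMGetFirstInstruction(bb); inst;
                                 inst = LLVMGetNextInstruction(inst))
                                    if (LLVMIsAStoreInst(inst)) {
                                            char *s = LLVMPrintValueToString(inst);
                                            printf("store in %s:%s\n",
                                                   LLVMGetValueName(fn), s);
                                            LLVMDisposeMessage(s);
                                    }

            LLVMDisposeModule(mod);
            LLVMContextDispose(ctx);
            return 0;
    }

Run against the -O0 module from the reproduction above, this prints the store of zone->zone_pgdat into the pgdat stack slot; against the -O2 module it prints nothing for that function.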
2016 May 20
0
[PATCH v6 02/12] mm: migrate: support non-lru movable page migration
...rn void __ClearPageMovable(struct page *page); extern int sysctl_compact_memory; extern int sysctl_compaction_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos); @@ -151,6 +154,19 @@ extern void kcompactd_stop(int nid); extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx); #else +static inline int PageMovable(struct page *page) +{ + return 0; +} +static inline void __SetPageMovable(struct page *page, + struct address_space *mapping) +{ +} + +static inline void __ClearPageMovable(struct page *page) +{ +} + static inline enu...
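Unfolded from the flattened diff in the excerpt above, the stubs it adds for kernels built without compaction look like this (a reconstruction of the quoted hunk, kernel context assumed); every movable-page hook collapses to a no-op, so callers need no #ifdefs of their own.

    /* Reconstruction of the !CONFIG_COMPACTION stubs added in the quoted hunk. */
    static inline int PageMovable(struct page *page)
    {
            return 0;
    }

    static inline void __SetPageMovable(struct page *page,
                                        struct address_space *mapping)
    {
    }

    static inline void __ClearPageMovable(struct page *page)
    {
    }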
2016 May 31
0
[PATCH v6v3 02/12] mm: migrate: support non-lru movable page migration
...rn void __ClearPageMovable(struct page *page); extern int sysctl_compact_memory; extern int sysctl_compaction_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos); @@ -151,6 +154,19 @@ extern void kcompactd_stop(int nid); extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx); #else +static inline int PageMovable(struct page *page) +{ + return 0; +} +static inline void __SetPageMovable(struct page *page, + struct address_space *mapping) +{ +} + +static inline void __ClearPageMovable(struct page *page) +{ +} + static inline enu...
2016 May 30
5
PATCH v6v2 02/12] mm: migrate: support non-lru movable page migration
...rn void __ClearPageMovable(struct page *page); extern int sysctl_compact_memory; extern int sysctl_compaction_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos); @@ -151,6 +154,19 @@ extern void kcompactd_stop(int nid); extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx); #else +static inline int PageMovable(struct page *page) +{ + return 0; +} +static inline void __SetPageMovable(struct page *page, + struct address_space *mapping) +{ +} + +static inline void __ClearPageMovable(struct page *page) +{ +} + static inline enu...
2016 May 30
5
PATCH v6v2 02/12] mm: migrate: support non-lru movable page migration
...rn void __ClearPageMovable(struct page *page); extern int sysctl_compact_memory; extern int sysctl_compaction_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos); @@ -151,6 +154,19 @@ extern void kcompactd_stop(int nid); extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx); #else +static inline int PageMovable(struct page *page) +{ + return 0; +} +static inline void __SetPageMovable(struct page *page, + struct address_space *mapping) +{ +} + +static inline void __ClearPageMovable(struct page *page) +{ +} + static inline enu...
2016 May 20
5
[PATCH v6 00/12] Support non-lru page migration
Recently, I got many reports about performance degradation in embedded systems (Android mobile phones, webOS TVs and so on) and fork failing easily. The problem was mainly fragmentation caused by zram and GPU drivers. Under memory pressure, their pages were spread across the pageblocks and could not be migrated by the current compaction algorithm, which supports only LRU pages. In the end, compaction cannot
2016 May 20
5
[PATCH v6 00/12] Support non-lru page migration
Recently, I got many reports about performance degradation in embedded systems (Android mobile phones, webOS TVs and so on) and fork failing easily. The problem was mainly fragmentation caused by zram and GPU drivers. Under memory pressure, their pages were spread across the pageblocks and could not be migrated by the current compaction algorithm, which supports only LRU pages. In the end, compaction cannot
2020 Nov 06
0
[PATCH v3 3/6] mm: support THP migration to device private memory
...tail, lruvec, list); + if (remap) + lru_add_page_tail(head, page_tail, lruvec, list); } static void __split_huge_page(struct page *page, struct list_head *list, - pgoff_t end, unsigned long flags) + pgoff_t end, unsigned long flags, bool remap) { struct page *head = compound_head(page); pg_data_t *pgdat = page_pgdat(head); @@ -2447,7 +2470,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, } for (i = nr - 1; i >= 1; i--) { - __split_huge_page_tail(head, i, lruvec, list); + __split_huge_page_tail(head, i, lruvec, list, remap); /* Some pages can be bey...
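A hedged reading of the hunk above, unfolded from the flattened diff: both split helpers grow a remap flag, and the tail page is only put back on the LRU when remapping is requested. The sketch below reconstructs just that change; the surrounding function bodies are elided in the search excerpt and are left elided here.

    /* Sketch of the control-flow change in the quoted hunk (reconstruction,
     * not the full kernel function): 'remap' gates lru_add_page_tail(). */
    static void __split_huge_page_tail(struct page *head, int tail,
                                       struct lruvec *lruvec,
                                       struct list_head *list, bool remap)
    {
            struct page *page_tail = head + tail;

            /* ... tail-page setup elided in the search excerpt ... */

            if (remap)
                    lru_add_page_tail(head, page_tail, lruvec, list);
    }

__split_huge_page() then forwards its own remap argument when it walks the tail pages in the for (i = nr - 1; i >= 1; i--) loop shown in the hunk.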
2016 May 31
7
[PATCH v7 00/12] Support non-lru page migration
Recently, I got many reports about performance degradation in embedded systems (Android mobile phones, webOS TVs and so on) and fork failing easily. The problem was mainly fragmentation caused by zram and GPU drivers. Under memory pressure, their pages were spread across the pageblocks and could not be migrated by the current compaction algorithm, which supports only LRU pages. In the end, compaction cannot
2016 May 31
7
[PATCH v7 00/12] Support non-lru page migration
Recently, I got many reports about performance degradation in embedded systems (Android mobile phones, webOS TVs and so on) and fork failing easily. The problem was mainly fragmentation caused by zram and GPU drivers. Under memory pressure, their pages were spread across the pageblocks and could not be migrated by the current compaction algorithm, which supports only LRU pages. In the end, compaction cannot
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. Earlier versions were posted previously [1] and [2]. The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a lot of other THP patches being posted. I don't think there are any semantic conflicts but there may be some merge conflicts depending on
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. An earlier version was posted previously [1]. This version now supports splitting a THP midway in the migration process which led to a number of changes. The patches apply cleanly to the current linux-mm tree. Since there are a couple of patches in linux-mm from Dan