Displaying 16 results from an estimated 16 matches for "668,15".
2016 Apr 18
0
[PATCH v4 35/37] clk: set clocks to pre suspend state after suspend
...dev *subdev)
 	if (clk->func->init)
 		return clk->func->init(clk);
-	clk->astate = -1;
-	clk->pstate = NULL;
-	clk->exp_cstate = NVKM_CLK_CSTATE_DEFAULT;
-	clk->set_cstate = NULL;
-	nvkm_clk_update(clk, true);
+	nvkm_clk_update_impl(clk, true);
 	return 0;
 }
@@ -672,8 +668,15 @@ nvkm_clk_ctor(const struct nvkm_clk_func *func, struct nvkm_device *device,
 	clk->func = func;
 	INIT_LIST_HEAD(&clk->states);
 	clk->domains = func->domains;
+
+	clk->pstate = NULL;
+	clk->astate = -1;
 	clk->ustate_ac = -1;
 	clk->ustate_dc = -1;
+
+	clk->exp...
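For illustration, a minimal standalone C sketch of the pattern this hunk applies (the struct and helper names are simplified stand-ins for the nvkm originals, not the driver's actual code): the one-time defaults move out of the init/resume path and into the constructor, so a suspend/resume cycle no longer clobbers the user-requested clock state.

/* sketch: defaults set once in the ctor, not reset on every resume */
#include <stddef.h>
#include <stdio.h>

struct clk {
	void *pstate;     /* active performance state           */
	int astate;       /* adjusted state index, -1 = none    */
	int ustate_ac;    /* user-requested state on AC, -1     */
	int ustate_dc;    /* user-requested state on battery    */
};

static void clk_ctor(struct clk *clk)
{
	/* defaults belong here: they are set exactly once */
	clk->pstate = NULL;
	clk->astate = -1;
	clk->ustate_ac = -1;
	clk->ustate_dc = -1;
}

static int clk_init(struct clk *clk)
{
	/* on init/resume, only re-apply the already-recorded state;
	 * in the patch, nvkm_clk_update_impl(clk, true) plays that role */
	return 0;
}

int main(void)
{
	struct clk clk;
	clk_ctor(&clk);
	clk_init(&clk);   /* resume leaves astate/ustate untouched */
	printf("astate=%d ustate_ac=%d\n", clk.astate, clk.ustate_ac);
	return 0;
}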
2014 Jan 16
0
[PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
...12 ++++++------
net/core/net-sysfs.c | 33 ++++++++++++++++-----------------
3 files changed, 58 insertions(+), 27 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 5c88ab1..71b8bc4 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -668,15 +668,28 @@ extern struct rps_sock_flow_table __rcu *rps_sock_flow_table;
 bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index, u32 flow_id,
 			 u16 filter_id);
 #endif
+#endif /* CONFIG_RPS */
 /* This structure contains an instance of an RX queue. */
 struct netdev_rx_queue {
+#if...
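For illustration, a small self-contained C sketch of the #ifdef restructuring this hunk implies (the members behind the truncated "+#if..." are assumptions for the example, not copied from the patch): the struct stays defined so its sysfs kobject can carry device-specific rx queue attributes, while RPS-only members remain behind CONFIG_RPS.

#include <stdio.h>

#define CONFIG_RPS 1                 /* toggle to check both layouts */

struct kobject { int dummy; };       /* stand-in for the kernel type */

struct netdev_rx_queue {
#ifdef CONFIG_RPS
	void *rps_map;               /* RPS-only state */
	void *rps_flow_table;
#endif
	struct kobject kobj;         /* sysfs hook, now always present */
};

int main(void)
{
	struct netdev_rx_queue q = { 0 };
	printf("sizeof(netdev_rx_queue) = %zu\n", sizeof q);
	return 0;
}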
2014 Jan 16
0
[PATCH net-next v4 4/6] net-sysfs: add support for device-specific rx queue sysfs attributes
...12 ++++++------
net/core/net-sysfs.c | 33 ++++++++++++++++-----------------
3 files changed, 53 insertions(+), 27 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 5c88ab1..38929bc 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -668,15 +668,28 @@ extern struct rps_sock_flow_table __rcu *rps_sock_flow_table;
 bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index, u32 flow_id,
 			 u16 filter_id);
 #endif
+#endif /* CONFIG_RPS */
 /* This structure contains an instance of an RX queue. */
 struct netdev_rx_queue {
+#if...
2014 Jan 16
13
[PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
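For illustration, a userspace C sketch of the fallback loop this summary describes (MAX_ORDER_TRY and the malloc stand-in are illustrative; the kernel path uses alloc_pages()): try a large multi-page allocation first, then retry with successively smaller orders down to order 0.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096
#define MAX_ORDER_TRY 3        /* start with 2^3 = 8 pages */

static void *alloc_frag(int *got_order)
{
	/* walk down the orders; the highest one that succeeds wins */
	for (int order = MAX_ORDER_TRY; order >= 0; order--) {
		void *p = malloc((size_t)PAGE_SIZE << order);
		if (p) {
			*got_order = order;
			return p;
		}
	}
	return NULL;           /* even the order-0 (single page) attempt failed */
}

int main(void)
{
	int order;
	void *p = alloc_frag(&order);
	if (p) {
		printf("got order-%d block (%zu bytes)\n",
		       order, (size_t)PAGE_SIZE << order);
		free(p);
	}
	return 0;
}

The kernel version additionally flags the opportunistic high-order attempts (with __GFP_NORETRY-style flags, as I understand it) so a failed large allocation fails fast instead of triggering heavy reclaim.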
2019 Apr 25
6
[PATCH v4 0/4] vmw_balloon: Compaction and shrinker support
VMware balloon enhancements: adding support for memory compaction, a
memory shrinker (to prevent OOM), and splitting of refused pages to
prevent recurring inflations.
Patches 1-2: Support for compaction
Patch 3: Support for memory shrinker - disabled by default
Patch 4: Split refused pages to improve performance
v3->v4:
* "get around to" comment [Michael]
* Put list_add under page lock
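For illustration, a hedged kernel-style C sketch of the shrinker mechanism this cover letter refers to (the page counter and deflate helper are invented for the example, not vmw_balloon's actual internals; register_shrinker() is shown in its 2019-era single-argument form): under memory pressure, the shrinker callbacks let the driver hand balloon pages back instead of pushing the system toward OOM.

#include <linux/shrinker.h>
#include <linux/atomic.h>

static atomic64_t balloon_pages = ATOMIC64_INIT(0); /* pages held in balloon */

/* illustrative helper: release up to nr pages, return how many were freed */
static unsigned long balloon_deflate(unsigned long nr)
{
	/* ... driver-specific: unlock pages, return them to the system ... */
	return nr;
}

static unsigned long balloon_shrink_count(struct shrinker *s,
					  struct shrink_control *sc)
{
	/* how much could we free if asked? */
	return atomic64_read(&balloon_pages);
}

static unsigned long balloon_shrink_scan(struct shrinker *s,
					 struct shrink_control *sc)
{
	unsigned long freed = balloon_deflate(sc->nr_to_scan);

	atomic64_sub(freed, &balloon_pages);
	return freed;
}

static struct shrinker balloon_shrinker = {
	.count_objects = balloon_shrink_count,
	.scan_objects  = balloon_shrink_scan,
	.seeks         = DEFAULT_SEEKS,
};

/* in the driver's init path: register_shrinker(&balloon_shrinker); */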
2014 Jan 17
7
[PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This
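For illustration, a userspace C sketch of the auto-tuning idea (the weight and clamp bounds are assumptions for the example): keep an EWMA of observed packet sizes and size the next posted buffer from it, clamped between an MTU-sized floor and PAGE_SIZE.

#include <stdio.h>

#define MIN_BUF   1536        /* ~MTU-sized floor */
#define PAGE_SIZE 4096        /* ceiling: one page */

static double avg_pkt = MIN_BUF;   /* running per-queue estimate */

static void ewma_update(double sample)
{
	/* weight 1/8 on the new sample, in the spirit of the kernel's
	 * ewma helpers */
	avg_pkt += (sample - avg_pkt) / 8.0;
}

static unsigned int next_buf_len(void)
{
	double len = avg_pkt;
	if (len < MIN_BUF)   len = MIN_BUF;
	if (len > PAGE_SIZE) len = PAGE_SIZE;
	return (unsigned int)len;
}

int main(void)
{
	double pkts[] = { 64, 128, 1500, 1500, 9000, 9000 };
	for (int i = 0; i < 6; i++) {
		ewma_update(pkts[i]);
		printf("after %4.0f-byte pkt -> post %u-byte buffer\n",
		       pkts[i], next_buf_len());
	}
	return 0;
}

The point of the clamp is the truesize trade-off the cover letter names: small-packet workloads keep small buffers (and accurate truesize), while large-packet workloads grow toward page-sized buffers.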
2014 Jan 17
7
[PATCH net-next v6 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This
2014 Jan 16
6
[PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
2019 Apr 23
5
[PATCH v3 0/4] vmw_balloon: compaction and shrinker support
VMware balloon enhancements: adding support for memory compaction, a
memory shrinker (to prevent OOM), and splitting of refused pages to
prevent recurring inflations.
Patches 1-2: Support for compaction
Patch 3: Support for memory shrinker - disabled by default
Patch 4: Split refused pages to improve performance
v2->v3:
* Fixing wrong argument type (int->size_t) [Michael]
* Fixing a comment
2004 Feb 06
4
memory reduction
...>gid = gid;
 #if SUPPORT_HARD_LINKS
-	if (idev_len) {
-		file->link_u.idev = (struct idev *)bp;
-		bp += idev_len;
+	if (idev_len && flist->hlink_pool) {
+		file->link_u.idev = pool_talloc(flist->hlink_pool,
+			struct idev, 1, "inode_table");
 	}
 #endif
@@ -668,15 +668,19 @@ void receive_file_entry(struct file_stru
 #if SUPPORT_HARD_LINKS
 	if (idev_len) {
+		INO64_T inode;
 		if (protocol_version < 26) {
 			dev = read_int(f);
-			file->F_INODE = read_int(f);
+			inode = read_int(f);
 		} else {
 			if (!(flags & XMIT_SAME_DEV))
 				dev = re...
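For illustration, a simplified C sketch of the pool-allocation pattern the first hunk switches to (this bump-pointer pool is invented for the example; rsync's real pool_talloc() lives in lib/pool_alloc.c and grows by extents): each struct idev comes from a dedicated pool instead of being carved out of a shared scratch buffer, and a single free releases every entry at once.

#include <stdio.h>
#include <stdlib.h>

struct pool {
	char  *base;
	size_t used, cap;
};

static void *pool_alloc_sketch(struct pool *p, size_t n)
{
	if (!p->base || p->used + n > p->cap)
		return NULL;              /* a real pool would grow here */
	void *out = p->base + p->used;
	p->used += n;
	return out;
}

struct idev { unsigned long inode, dev; };   /* simplified */

int main(void)
{
	struct pool hlink_pool = { malloc(1 << 16), 0, 1 << 16 };
	struct idev *id = pool_alloc_sketch(&hlink_pool, sizeof *id);
	if (id) {
		id->dev = 1;
		id->inode = 42;
		printf("idev from pool: dev=%lu inode=%lu\n", id->dev, id->inode);
	}
	free(hlink_pool.base);   /* one free releases every pooled entry */
	return 0;
}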
2016 Apr 18
63
[PATCH v4 00/37] Volting/Clocking improvements for Fermi and newer
We are slowly getting there!
This is v4 of the series, with some really good improvements, so I am sure this is
like 95% done and only needs some proper polishing and proper reviews!
I also added the NvVoltOffsetmV module parameter, so that a user is able to
over- and, notably, under-volt the GPU. Overvolting makes sense when there are still
some reclocking issues left that might be solved by a higher voltage.
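For illustration, a kernel-style C sketch of how a signed millivolt offset such as NvVoltOffsetmV could be exposed as a module parameter (the clamp range and the point where the offset is applied are assumptions, not taken from the series):

#include <linux/module.h>
#include <linux/moduleparam.h>

static int NvVoltOffsetmV;   /* negative = under-volt, positive = over-volt */
module_param(NvVoltOffsetmV, int, 0644);
MODULE_PARM_DESC(NvVoltOffsetmV,
		 "voltage offset in mV applied on top of the target voltage");

/* hypothetical: called from the driver's voltage-setting path (not shown) */
static int apply_volt_offset(int target_uv)
{
	/* assumed clamp so a typo cannot request a damaging voltage */
	int off = NvVoltOffsetmV;

	if (off < -100)
		off = -100;
	if (off > 100)
		off = 100;
	return target_uv + off * 1000;   /* target is in microvolts */
}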