Displaying 20 results from an estimated 54 matches for "biovec".
2007 Feb 15
2
Re: [Linux-HA] OCFS2 - Memory hog?
...hmem_inode_cache    612    632    460    8    1 
posix_timers_cache      0      0    100   39    1 
uid_cache              7     59     64   59    1 
blkdev_ioc           103    127     28  127    1 
blkdev_queue          58     60    960    4    1 
blkdev_requests      354    418    176   22    1 
biovec-(256)         312    312   3072    2    2 
biovec-128           368    370   1536    5    2 
biovec-64            480    485    768    5    1 
biovec-16            480    495    256   15    1 
biovec-4             480    531     64   59    1 
biovec-1            1104   5481     16  203    1 
bio...
2019 Jul 30
1
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
...owever, what ODP can maybe do is represent a full multi-level page
table, so we could have 2M entries that map to a single DMA or to
another page table w/ 4k pages (have to check on this).
But the driver isn't set up to do that right now.
> The best API for mlx4 would of course be to pass a biovec-style
> variable length structure that hmm_fault could fill out, but that would
> be a major restructure.
It would work, but the driver has to expand that into a page list
right away anyhow.
We can't even dma map the biovec with today's dma API as it needs the
ability to remap on a page...
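
The "biovec-style variable length structure" under discussion is essentially the
block layer's struct bio_vec: a page pointer plus an offset and a byte length
that may cover many pages. A minimal userspace sketch of the point made in the
reply above, namely that a driver which only understands flat page lists has to
expand each variable-length entry into individual 4k pages right away, might
look like this. Only the struct bio_vec layout mirrors <linux/bvec.h>;
everything else, including expand_to_page_list(), is a hypothetical stand-in:

#include <stddef.h>
#include <stdio.h>

struct page { unsigned long flags; };          /* stand-in for struct page */

struct bio_vec {
        struct page  *bv_page;
        unsigned int  bv_len;                  /* bytes, may span many pages */
        unsigned int  bv_offset;
};

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Expand one variable-length entry into a flat list of page pointers. */
static size_t expand_to_page_list(const struct bio_vec *bv,
                                  struct page **pages, size_t max)
{
        size_t npages = (bv->bv_offset + bv->bv_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
        size_t i;

        for (i = 0; i < npages && i < max; i++)
                pages[i] = bv->bv_page + i;    /* assumes contiguous struct pages */
        return i;
}

int main(void)
{
        static struct page mem[512];
        struct page *list[512];
        struct bio_vec bv = { .bv_page = mem, .bv_len = 2 << 20, .bv_offset = 0 };

        /* A single 2M entry becomes 512 separate 4k page pointers. */
        printf("expanded into %zu pages\n", expand_to_page_list(&bv, list, 512));
        return 0;
}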
2006 Apr 09
0
Slab memory usage on dom0 increases by 128MB/day
...135     28  135    1 : tunables  120   60    8 : slabdata      1      1      0
blkdev_queue          32     40    400   10    1 : tunables   54   27    8 : slabdata      4      4      0
blkdev_requests      149    216    148   27    1 : tunables  120   60    8 : slabdata      8      8      0
biovec-(256)         260    260   3072    2    2 : tunables   24   12    8 : slabdata    130    130      0
biovec-128           264    265   1536    5    2 : tunables   24   12    8 : slabdata     53     53      0
biovec-64            272    275    768    5    1 : tunables   54   27    8 : slabdata...
2007 Aug 05
3
OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
...1515   1521    404      9
posix_timers_cache            1     40     96     40
uid_cache                     2     59     64     59
blkdev_ioc                  178    254     28    127
blkdev_queue                548    548    900      4
blkdev_requests            1205   1265    168     23
biovec-(256)                260    260   3072      2
biovec-128                  264    265   1536      5
biovec-64                   290    290    768      5
biovec-16                   440    560    192     20
biovec-4                    284    295     64     59
biovec-1                  32822  41006...
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi to all!
I have problems with concurrent filesystem actions on an ocfs2
filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6
For example: if I have an LV called testlv which is mounted on /mnt on both
servers, and I run a "dd if=/dev/zero of=/mnt/test.a bs=1024
count=1000000" on server 1 while at the same time running du -hs
/mnt/test.a, it takes about 5 seconds for du -hs to execute:
270M
2011 Sep 01
1
No buffer space available - loses network connectivity
...0.09K     15       40        60K journal_head
   590    298  50%    0.06K     10       59        40K delayacct_cache
   496    424  85%    0.50K     62        8       248K size-512
   413    156  37%    0.06K      7       59        28K fs_cache
   404     44  10%    0.02K      2      202         8K biovec-1
   390    293  75%    0.12K     13       30        52K bio
   327    327 100%    4.00K    327        1      1308K size-4096
   320    190  59%    0.38K     32       10       128K ip_dst_cache
   308    227  73%    0.50K     44        7       176K skbuff_fclone_cache
   258    247  95%    0.62K...
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys,
My users are reporting some issues with memory on our Lustre 1.8.1 clients.
It looks like when they submit a single job at a time, the run time is about
4.5 minutes.  However, when they run multiple jobs (10 or fewer) on a client
with 192GB of memory on a single node, the run time for each job
exceeds 3-4X the run time of the single process.  They also noticed that
the swap space
2019 Jul 30
2
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
On Tue, Jul 30, 2019 at 08:51:57AM +0300, Christoph Hellwig wrote:
> All users pass PAGE_SIZE here, and if we wanted to support single
> entries for huge pages we should really just add a HMM_FAULT_HUGEPAGE
> flag instead that uses the huge page size instead of having the
> caller calculate that size once, just for the hmm code to verify it.
I suspect this was added for the ODP
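
A rough sketch of the interface change being argued for in the quoted mail:
drop the per-range page_shift that every caller sets to PAGE_SHIFT, and make
huge-page entries an explicit opt-in flag instead. The struct layouts and
hmm_range_fault_sketch() below are simplified stand-ins, not the real
<linux/hmm.h> API, and HMM_FAULT_HUGEPAGE is only the flag proposed in the
mail:

#include <stdio.h>

#define PAGE_SHIFT 12

/* Before the patch: every caller carries a page_shift, always PAGE_SHIFT. */
struct hmm_range_with_shift {
        unsigned long start, end;
        unsigned long page_shift;              /* the member this patch removes */
};

/* After the patch: 4k granularity is implied, huge entries are opt-in. */
#define HMM_FAULT_HUGEPAGE (1u << 0)           /* proposed flag, hypothetical */

struct hmm_range_plain {
        unsigned long start, end;
};

/* Returns how many entries would cover the range at the chosen granularity. */
static long hmm_range_fault_sketch(struct hmm_range_plain *range,
                                   unsigned int flags)
{
        unsigned int shift = PAGE_SHIFT;

        if (flags & HMM_FAULT_HUGEPAGE)
                shift = 21;                    /* e.g. 2M huge pages on x86-64 */
        return (long)((range->end - range->start) >> shift);
}

int main(void)
{
        struct hmm_range_plain r = { .start = 0, .end = 1UL << 22 };    /* 4M */

        printf("4k entries: %ld\n", hmm_range_fault_sketch(&r, 0));
        printf("2M entries: %ld\n", hmm_range_fault_sketch(&r, HMM_FAULT_HUGEPAGE));
        return 0;
}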
2019 Jul 30
0
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
...eems really awkward in terms
of an API still.  AFAIK ODP is only used by mlx5, and mlx5 unlike other
IB HCAs can use scatterlist style MRs with variable length per entry,
so even if we pass multiple pages per entry from hmm it could coalesce
them.  The best API for mlx4 would of course be to pass a biovec-style
variable length structure that hmm_fault could fill out, but that would
be a major restructure.
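
The coalescing idea mentioned for mlx5 (scatterlist-style MRs with a variable
length per entry) can be illustrated with a small sketch that merges runs of
contiguous page frames coming out of hmm into variable-length entries. All
names and types below are invented for the illustration; this is neither mlx5
nor hmm code:

#include <stddef.h>
#include <stdio.h>

struct var_entry {
        unsigned long pfn;                     /* first page frame of the run */
        unsigned long npages;                  /* length of the run in pages */
};

/* Merge physically contiguous page frames into variable-length entries. */
static size_t coalesce_pfns(const unsigned long *pfns, size_t n,
                            struct var_entry *out, size_t max)
{
        size_t i, nout = 0;

        for (i = 0; i < n; i++) {
                if (nout && out[nout - 1].pfn + out[nout - 1].npages == pfns[i]) {
                        out[nout - 1].npages++;
                        continue;
                }
                if (nout == max)
                        break;
                out[nout].pfn = pfns[i];
                out[nout].npages = 1;
                nout++;
        }
        return nout;
}

int main(void)
{
        /* Two contiguous runs: {100,101,102} and {200,201}. */
        unsigned long pfns[] = { 100, 101, 102, 200, 201 };
        struct var_entry ent[5];
        size_t i, n = coalesce_pfns(pfns, 5, ent, 5);

        for (i = 0; i < n; i++)
                printf("entry: pfn %lu, %lu pages\n", ent[i].pfn, ent[i].npages);
        return 0;
}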
2013 Nov 11
0
[GIT PULL] (xen) stable/for-linus-3.13-rc0-tag
...lb-xen: use xen_alloc/free_coherent_pages
      xen: introduce xen_dma_map/unmap_page and xen_dma_sync_single_for_cpu/device
      swiotlb-xen: use xen_dma_map/unmap_page, xen_dma_sync_single_for_cpu/device
      swiotlb: print a warning when the swiotlb is full
      arm,arm64: do not always merge biovec if we are running on Xen
      grant-table: call set_phys_to_machine after mapping grant refs
      swiotlb-xen: static inline xen_phys_to_bus, xen_bus_to_phys, xen_virt_to_bus and range_straddles_page_boundary
      swiotlb-xen: fix error code returned by xen_swiotlb_map_sg_attrs
      arm: make S...
2007 Feb 23
2
OCFS 1.2.4 memory problems still?
I have a 2 node cluster of HP DL380G4s.  These machines are attached via
scsi to an external HP disk enclosure.  They run 32bit RH AS 4.0 and
OCFS 1.2.4, the latest release.  They were upgraded from 1.2.3 only a
few days after 1.2.4 was released.  I had reported on the mailing list
that my developers were happy, and things seemed faster.  However, twice
in that time, the cluster has gone down due
2013 Oct 17
42
[PATCH v8 0/19] enable swiotlb-xen on arm and arm64
...mapping. Free the entry in xen_swiotlb_free_coherent;
- rename xen_dma_seg to dma_info in xen_swiotlb_alloc/free_coherent to
avoid confusions;
- introduce and export xen_dma_ops;
- call xen_mm_init as an arch_initcall;
- call __get_dma_ops to get the native dma_ops pointer on arm;
- do not merge biovecs (see the sketch after this entry);
- add single page optimization: pin the page rather than bouncing.
Changes in v5:
- dropped the first two patches, already in the Xen tree;
- implement dma_mark_clean using dmac_flush_range on arm;
- add "arm64: define DMA_ERROR_CODE"
- better comment for XENMEM_exchange_and_pin return...
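
Both Xen entries above touch the same point: pages that are contiguous in the
guest's pseudo-physical address space may be backed by non-contiguous machine
frames, so two biovecs must not be merged based on pseudo-physical addresses
alone. The sketch below mirrors the idea behind the kernel's
xen_biovec_phys_mergeable(); the pfn_to_mfn() stub and the exact check are
illustrative rather than the real implementation:

#include <stdbool.h>

struct page { unsigned long pfn; };            /* stand-in */

struct bio_vec {
        struct page  *bv_page;
        unsigned int  bv_len;
        unsigned int  bv_offset;
};

#define PAGE_SHIFT 12

/* Stub: in a real guest this consults the P2M table; identity mapping here. */
static unsigned long pfn_to_mfn(unsigned long pfn)
{
        return pfn;
}

/* Merge only if the underlying machine frames line up back to back. */
bool xen_biovec_mergeable_sketch(const struct bio_vec *v1,
                                 const struct bio_vec *v2)
{
        unsigned long mfn1 = pfn_to_mfn(v1->bv_page->pfn);
        unsigned long mfn2 = pfn_to_mfn(v2->bv_page->pfn);

        return mfn1 + ((v1->bv_offset + v1->bv_len) >> PAGE_SHIFT) == mfn2;
}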
2013 Mar 27
0
[PATCH 04/22] block: Convert bio_for_each_segment() to bvec_iter
More prep work for immutable biovecs - with immutable bvecs drivers
won't be able to use the biovec directly; they'll need to use helpers
that take into account bio->bi_iter.bi_bvec_done.
This updates callers for the new usage without changing the
implementation yet.
Signed-off-by: Kent Overstreet <koverstreet at goog...
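
The helpers the commit message refers to can be sketched as a userspace mock:
instead of indexing bio->bi_io_vec directly, a driver walks the bio through a
bvec_iter, and the current segment is computed taking bi_iter.bi_bvec_done
(bytes already consumed of the current bvec) into account. Field names below
follow the kernel; the types and logic are heavily reduced and not the real
block-layer implementation:

#include <stdio.h>

struct page { int dummy; };

struct bio_vec {
        struct page  *bv_page;
        unsigned int  bv_len;
        unsigned int  bv_offset;
};

struct bvec_iter {
        unsigned int  bi_size;                 /* residual bytes in the bio */
        unsigned int  bi_idx;                  /* current index into bi_io_vec */
        unsigned int  bi_bvec_done;            /* bytes consumed of current bvec */
};

struct bio {
        struct bio_vec  *bi_io_vec;
        struct bvec_iter bi_iter;
};

/* Current segment, adjusted for how much of the bvec was already consumed. */
static struct bio_vec bio_iter_iovec(const struct bio *bio, struct bvec_iter iter)
{
        struct bio_vec bv = bio->bi_io_vec[iter.bi_idx];

        bv.bv_offset += iter.bi_bvec_done;
        bv.bv_len    -= iter.bi_bvec_done;
        if (bv.bv_len > iter.bi_size)
                bv.bv_len = iter.bi_size;
        return bv;
}

static void bio_advance_iter(const struct bio *bio, struct bvec_iter *iter,
                             unsigned int bytes)
{
        iter->bi_size -= bytes;
        iter->bi_bvec_done += bytes;
        if (iter->bi_bvec_done == bio->bi_io_vec[iter->bi_idx].bv_len) {
                iter->bi_bvec_done = 0;
                iter->bi_idx++;
        }
}

#define bio_for_each_segment(bvl, bio, iter)                            \
        for (iter = (bio)->bi_iter;                                     \
             iter.bi_size && ((bvl) = bio_iter_iovec((bio), iter), 1);  \
             bio_advance_iter((bio), &iter, (bvl).bv_len))

int main(void)
{
        struct page p[2];
        struct bio_vec vecs[2] = {
                { .bv_page = &p[0], .bv_len = 4096, .bv_offset = 0 },
                { .bv_page = &p[1], .bv_len = 2048, .bv_offset = 0 },
        };
        /* Start 512 bytes into the first bvec, as a split bio might. */
        struct bio bio = {
                .bi_io_vec = vecs,
                .bi_iter = { .bi_size = 4096 - 512 + 2048, .bi_bvec_done = 512 },
        };
        struct bio_vec bv;
        struct bvec_iter iter;

        bio_for_each_segment(bv, &bio, iter)
                printf("segment: offset=%u len=%u\n", bv.bv_offset, bv.bv_len);
        return 0;
}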
2013 Aug 07
0
[PATCH 07/22] block: Convert bio_for_each_segment() to bvec_iter
More prep work for immutable biovecs - with immutable bvecs drivers
won't be able to use the biovec directly; they'll need to use helpers
that take into account bio->bi_iter.bi_bvec_done.
This updates callers for the new usage without changing the
implementation yet.
Signed-off-by: Kent Overstreet <kmo at daterainc.co...
2013 Oct 29
0
[PATCH 07/23] block: Convert bio_for_each_segment() to bvec_iter
More prep work for immutable biovecs - with immutable bvecs drivers
won't be able to use the biovec directly; they'll need to use helpers
that take into account bio->bi_iter.bi_bvec_done.
This updates callers for the new usage without changing the
implementation yet.
Signed-off-by: Kent Overstreet <kmo at daterainc.co...