search for: chunk_siz

Displaying 20 results from an estimated 121 matches for "chunk_siz".

2012 Jan 15
0
[CENTOS6] mtrr_cleanup: can not find optimal value - during server startup
..., base: 3256MB, range: 8MB, type UC
reg 3, base: 3264MB, range: 64MB, type UC
reg 4, base: 3328MB, range: 256MB, type UC
reg 5, base: 3584MB, range: 512MB, type UC
reg 6, base: 17150MB, range: 2MB, type UC
reg 7, base: 17152MB, range: 256MB, type UC
total RAM covered: 16310M
gran_size: 64K  chunk_size: 64K   num_reg: 10  lose cover RAM: 126M
gran_size: 64K  chunk_size: 128K  num_reg: 10  lose cover RAM: 126M
gran_size: 64K  chunk_size: 256K  num_reg: 10  lose cover RAM: 126M
gran_size: 64K  chunk_size: 512K  num_reg: 10  lose cover...
2012 Nov 03
0
mtrr_gran_size and mtrr_chunk_size
Good Day All, Today I looked at the dmesg log and noticed the following messages regarding mtrr_gran_size/mtrr_chunk_size. I am currently running CentOS 6.3; I also installed CentOS 6.2 and 6.1 and saw the same errors. When I installed CentOS 5.8 on the same laptop I did not see these errors.
$ lsb_release -a
LSB Version: :core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd6...
2012 Apr 12
1
6.2 x86_64 "mtrr_cleanup: can not find optimal value"
...t has been running 5.x - 5.8 for a few years without issue and I decided to move it to a fresh install of 6.2. The first thing I noticed is that a good part of the log has these mtrr messages, finally ending with "mtrr_cleanup: can not find optimal value" and "please specify mtrr_gran_size/mtrr_chunk_size". I have been searching around and reading the kernel docs but am a bit lost on the impact. The system CPU is a Q6600, so it supports MTRR. It has 8 GB of RAM and an Intel G33 chipset (maximum memory is 8 GB), with 256 MB onboard Intel video. The problem exists with the DVD 6.2 kernel and the latest in yum updat...
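For reference, both knobs named in that message are accepted on the x86 kernel command line as nn[KMG] values, so one of the candidate combinations from the mtrr_cleanup table in dmesg can be supplied at boot. A hedged example appended to the kernel line in grub.conf; the 64K/128K pair is illustrative, so pick a combination your own dmesg reports as losing little or no RAM:

	kernel /vmlinuz-... ro root=... mtrr_gran_size=64K mtrr_chunk_size=128K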
2020 May 24
3
[PATCH] file_checksum() optimization
When a whole-file checksum is performed, hashing was done in 64-byte blocks, causing overhead and limiting performance. Testing showed the performance improvement rising quickly from 64 to 512 bytes, with diminishing returns above that; 4096 was where it seemed to plateau for me. I re-used CHUNK_SIZE (32 kB) as it already exists and should be fine to use here anyway. I noticed this because I'm playing with a parallel MD5 implementation, and it benchmarked about the same as xxhash on the CPUs I used for testing, which should not be the case. I discovered performance was limited by CSUM_CHUNK r...
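A minimal sketch of the change's idea, not rsync's actual file_checksum(): feed the digest CHUNK_SIZE blocks instead of 64-byte ones. OpenSSL's MD5 stands in for rsync's internal sum_update(), and the CHUNK_SIZE value here is an assumption mirroring rsync's 32 kB constant:

#include <openssl/md5.h>
#include <stdio.h>

#define CHUNK_SIZE (32 * 1024)	/* assumption: mirrors rsync's CHUNK_SIZE */

static int whole_file_md5(const char *path,
			  unsigned char digest[MD5_DIGEST_LENGTH])
{
	unsigned char buf[CHUNK_SIZE];
	MD5_CTX ctx;
	size_t n;
	FILE *f = fopen(path, "rb");

	if (!f)
		return -1;
	MD5_Init(&ctx);
	/* Large updates amortize the per-call overhead that dominates
	 * when the digest is fed 64 bytes at a time. */
	while ((n = fread(buf, 1, sizeof buf, f)) > 0)
		MD5_Update(&ctx, buf, n);
	fclose(f);
	MD5_Final(digest, &ctx);
	return 0;
}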
2013 Jun 13
0
*BAD*gran_size
...OS 6.4, current, Dell PE R720. Had an issue today with a bus error, and googling only found two-year-old references to problems with non-Dell drives (we just added two WD Reds and mdadm raided them). So, looking through dmesg and /var/log/messages, I ran into a *lot* of
G gran_size: 128K  chunk_size: 256K  num_reg: 10  lose cover RAM: 0G
gran_size: 128K  chunk_size: 512K  num_reg: 10  lose cover RAM: 0G
gran_size: 128K  chunk_size: 1M    num_reg: 10  lose cover RAM: 0G
gran_size: 128K  chunk_size: 2M    num_reg: 10  lose cover RAM: 0G
gran_size: 128K...
2004 Aug 02
4
reducing memmoves
...Aug 2004 02:31:02 -0000
@@ -23,6 +23,7 @@
 #include "rsync.h"
 extern int sparse_files;
+int total_bytes_memmoved=0;
 static char last_byte;
 static int last_sparse;
@@ -182,8 +183,7 @@
 	/* nope, we are going to have to do a read. Work out our desired window */
 	if (offset > 2*CHUNK_SIZE) {
-		window_start = offset - 2*CHUNK_SIZE;
-		window_start &= ~((OFF_T)(CHUNK_SIZE-1)); /* assumes power of 2 */
+		window_start = offset;
 	} else {
 		window_start = 0;
 	}
@@ -212,6 +212,7 @@
 	read_offset = read_start - window_start;
 	read_size = window_size - read_offset;
 	memmove(m...
2013 Jun 08
0
[PATCH] Btrfs-progs: elaborate error handling of mkfs
..._root *root, int mixed)
 			     BTRFS_BLOCK_GROUP_SYSTEM,
 			     BTRFS_FIRST_CHUNK_TREE_OBJECTID,
 			     0, BTRFS_MKFS_SYSTEM_GROUP_SIZE);
-	BUG_ON(ret);
+	if (ret)
+		goto err;
 	if (mixed) {
 		ret = btrfs_alloc_chunk(trans, root->fs_info->extent_root,
 					&chunk_start, &chunk_size,
 					BTRFS_BLOCK_GROUP_METADATA |
 					BTRFS_BLOCK_GROUP_DATA);
-		BUG_ON(ret);
+		if (ret)
+			goto err;
 		ret = btrfs_make_block_group(trans, root, 0,
 					     BTRFS_BLOCK_GROUP_METADATA |
 					     BTRFS_BLOCK_GROUP_DATA,
 					     BTRFS_FIRST_CHUNK_TREE_OBJECTID,
 					     chunk_start...
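The idiom the patch moves to, as a self-contained toy (the helper names are hypothetical stand-ins, not btrfs-progs functions): propagate failures to a single cleanup label instead of aborting the whole program with BUG_ON():

#include <stdio.h>

/* Hypothetical stand-ins for the btrfs-progs calls in the patch. */
static int alloc_chunk(void)      { return 0; }
static int make_block_group(void) { return 0; }

static int setup_groups(void)
{
	int ret;

	ret = alloc_chunk();
	if (ret)
		goto err;	/* was: BUG_ON(ret) */
	ret = make_block_group();
	if (ret)
		goto err;
	return 0;
err:
	fprintf(stderr, "mkfs setup failed: %d\n", ret);
	return ret;
}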
2019 May 27
3
[PATCH v2 2/8] s390/cio: introduce DMA pools to cio
...you did not get a pool here? I don't think that should happen unless things were really bad already?
> +}
> +
> +void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
> +			size_t size)
> +{
> +	dma_addr_t dma_addr;
> +	unsigned long addr;
> +	size_t chunk_size;
> +
> +	addr = gen_pool_alloc(gp_dma, size);
> +	while (!addr) {
> +		chunk_size = round_up(size, PAGE_SIZE);
> +		addr = (unsigned long) dma_alloc_coherent(dma_dev,
> +			chunk_size, &dma_addr, CIO_DMA_GFP);
> +		if (!addr)
> +			return NULL;
> +		gen_pool_add_v...
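For readability, here is the grow-on-demand pattern the quoted hunk implements, pieced together from this and the later excerpts (a sketch, not the submitted code; GFP_KERNEL stands in for the patch's CIO_DMA_GFP flags): allocations are satisfied from a gen_pool, and when that fails the pool is backed with a fresh DMA-coherent chunk before retrying.

#include <linux/dma-mapping.h>
#include <linux/genalloc.h>

static void *pool_zalloc(struct gen_pool *gp, struct device *dev, size_t size)
{
	unsigned long addr = gen_pool_alloc(gp, size);

	while (!addr) {
		size_t chunk_size = round_up(size, PAGE_SIZE);
		dma_addr_t dma_addr;
		void *cpu = dma_alloc_coherent(dev, chunk_size, &dma_addr,
					      GFP_KERNEL);

		if (!cpu)
			return NULL;
		/* Back the pool with the new chunk, then retry. */
		gen_pool_add_virt(gp, (unsigned long) cpu, dma_addr,
				  chunk_size, -1 /* any NUMA node */);
		addr = gen_pool_alloc(gp, size);
	}
	memset((void *) addr, 0, size);	/* the "zalloc" part */
	return (void *) addr;
}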
2020 Aug 20
2
[PATCH 05/28] media/v4l2: remove V4L2-FLAG-MEMORY-NON-CONSISTENT
..._kernel_vmap_range to properly handle virtually indexed caches. Or by remapping do you mean using the iommu to do de-scatter/gather? You can trivially implement that yourself for the iommu case:
{
	merge_boundary = dma_get_merge_boundary(dev);
	if (!merge_boundary || merge_boundary > chunk_size - 1) {
		/* can't coalesce */
		return -EINVAL;
	}
	nents = DIV_ROUND_UP(total_size, chunk_size);
	sg = sgl_alloc();
	for_each_sgl() {
		sg->page = __alloc_pages(get_order(chunk_size))
		sg->len = chunk_size;
	}
	dma_map_sg(sg, DMA_ATTR_SKIP_CPU_SYNC);
	// you are guaranteed to get a...
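A slightly more complete sketch of the same idea, under the assumption of the modern sg_table helpers (<linux/scatterlist.h>, <linux/dma-mapping.h>); the function name is illustrative, and freeing of already-allocated pages on the error paths is elided for brevity:

static int alloc_merged_sgtable(struct device *dev, struct sg_table *sgt,
				size_t total_size, size_t chunk_size)
{
	unsigned long merge_boundary = dma_get_merge_boundary(dev);
	unsigned int nents = DIV_ROUND_UP(total_size, chunk_size);
	struct scatterlist *sg;
	unsigned int i;

	/* If the device cannot merge at chunk_size granularity, the
	 * mapping would not collapse into one contiguous DMA range. */
	if (!merge_boundary || merge_boundary > chunk_size - 1)
		return -EINVAL;

	if (sg_alloc_table(sgt, nents, GFP_KERNEL))
		return -ENOMEM;

	for_each_sgtable_sg(sgt, sg, i) {
		struct page *page = alloc_pages(GFP_KERNEL,
						get_order(chunk_size));

		if (!page) {
			sg_free_table(sgt);	/* page unwinding elided */
			return -ENOMEM;
		}
		sg_set_page(sg, page, chunk_size, 0);
	}

	/* Merging is guaranteed above, so this yields one DMA segment. */
	if (dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL,
			    DMA_ATTR_SKIP_CPU_SYNC)) {
		sg_free_table(sgt);	/* page unwinding elided */
		return -ENOMEM;
	}
	return 0;
}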
2010 Aug 31
0
istream_read like zlib, but without zlib
.../* we're here because we seeked back within the read buffer. */
		ret = emxstream->high_pos - stream->pos;
		stream->pos = emxstream->high_pos;
		emxstream->high_pos = 0;
		return ret;
	}
	emxstream->high_pos = 0;

	if (stream->pos + CHUNK_SIZE > stream->buffer_size) {
		/* try to keep at least CHUNK_SIZE available */
		if (!emxstream->marked && stream->skip > 0) {
			/* don't try to keep anything cached if we
			   don't have a seek mark. */
			i_stream_compress(strea...
2010 Jul 14
1
[PATCH] gfxboot: fix buffer overrun when loading kernel/initramfs
...ax.h>
 #include <syslinux/loadfile.h>
 #include <syslinux/config.h>
@@ -749,7 +750,7 @@ void *load_one(char *file, ssize_t *file_size)
   if(size) {
     buf = malloc(size);
     for(i = 1, cur = 0 ; cur < size && i > 0; cur += i) {
-      i = save_read(fd, buf + cur, CHUNK_SIZE);
+      i = save_read(fd, buf + cur, min(CHUNK_SIZE, size - cur));
       if(i == -1) break;
       gfx_progress_update(i);
     }
--
1.7.1
2019 May 10
3
[PATCH 05/10] s390/cio: introduce DMA pools to cio
On Fri, 10 May 2019 00:11:12 +0200 Halil Pasic <pasic at linux.ibm.com> wrote:
> On Thu, 9 May 2019 12:11:06 +0200
> Cornelia Huck <cohuck at redhat.com> wrote:
> >
> > On Wed, 8 May 2019 23:22:10 +0200
> > Halil Pasic <pasic at linux.ibm.com> wrote:
> > >
> > > On Wed, 8 May 2019 15:18:10 +0200 (CEST)
> > > Sebastian Ott <sebott
2019 May 13
2
[PATCH 05/10] s390/cio: introduce DMA pools to cio
...@ -1063,7 +1063,10 @@ struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages)
>  static void __gp_dma_free_dma(struct gen_pool *pool,
>  			      struct gen_pool_chunk *chunk, void *data)
>  {
> -	dma_free_coherent((struct device *) data, PAGE_SIZE,
> +
> +	size_t chunk_size = chunk->end_addr - chunk->start_addr + 1;
> +
> +	dma_free_coherent((struct device *) data, chunk_size,
>  			  (void *) chunk->start_addr,
>  			  (dma_addr_t) chunk->phys_addr);
>  }
> @@ -1088,13 +1091,15 @@ void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct de...
2019 May 23
0
[PATCH v2 2/8] s390/cio: introduce DMA pools to cio
...ddr,
+					  CIO_DMA_GFP);
+		if (!cpu_addr)
+			return gp_dma;
+		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
+				  dma_addr, PAGE_SIZE, -1);
+	}
+	return gp_dma;
+}
+
+static void __gp_dma_free_dma(struct gen_pool *pool,
+			      struct gen_pool_chunk *chunk, void *data)
+{
+	size_t chunk_size = chunk->end_addr - chunk->start_addr + 1;
+
+	dma_free_coherent((struct device *) data, chunk_size,
+			  (void *) chunk->start_addr,
+			  (dma_addr_t) chunk->phys_addr);
+}
+
+void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev)
+{
+	if (!gp_dma)
+		return;
+	/* th...
2019 May 29
0
[PATCH v3 2/8] s390/cio: introduce DMA pools to cio
...ddr,
+					  CIO_DMA_GFP);
+		if (!cpu_addr)
+			return gp_dma;
+		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
+				  dma_addr, PAGE_SIZE, -1);
+	}
+	return gp_dma;
+}
+
+static void __gp_dma_free_dma(struct gen_pool *pool,
+			      struct gen_pool_chunk *chunk, void *data)
+{
+	size_t chunk_size = chunk->end_addr - chunk->start_addr + 1;
+
+	dma_free_coherent((struct device *) data, chunk_size,
+			  (void *) chunk->start_addr,
+			  (dma_addr_t) chunk->phys_addr);
+}
+
+void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev)
+{
+	if (!gp_dma)
+		return;
+	/* th...
2019 May 12
0
[PATCH 05/10] s390/cio: introduce DMA pools to cio
...+++ b/drivers/s390/cio/css.c
@@ -1063,7 +1063,10 @@ struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages)
 static void __gp_dma_free_dma(struct gen_pool *pool,
 			      struct gen_pool_chunk *chunk, void *data)
 {
-	dma_free_coherent((struct device *) data, PAGE_SIZE,
+
+	size_t chunk_size = chunk->end_addr - chunk->start_addr + 1;
+
+	dma_free_coherent((struct device *) data, chunk_size,
 			  (void *) chunk->start_addr,
 			  (dma_addr_t) chunk->phys_addr);
 }
@@ -1088,13 +1091,15 @@ void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
 {
 	dma_addr_t...
2019 Jun 06
0
[PATCH v4 2/8] s390/cio: introduce DMA pools to cio
...ddr,
+					  CIO_DMA_GFP);
+		if (!cpu_addr)
+			return gp_dma;
+		gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
+				  dma_addr, PAGE_SIZE, -1);
+	}
+	return gp_dma;
+}
+
+static void __gp_dma_free_dma(struct gen_pool *pool,
+			      struct gen_pool_chunk *chunk, void *data)
+{
+	size_t chunk_size = chunk->end_addr - chunk->start_addr + 1;
+
+	dma_free_coherent((struct device *) data, chunk_size,
+			  (void *) chunk->start_addr,
+			  (dma_addr_t) chunk->phys_addr);
+}
+
+void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev)
+{
+	if (!gp_dma)
+		return;
+	/* th...