search for: 22gb

Displaying 20 results from an estimated 29 matches for "22gb".

2001 Oct 17
9
large files
I'm reposting this problem (perhaps a bug) now that I've got more information on it. This is another point of view of the situation, and I hope someone has run into the same trouble before (and solved it :-)) This is it: * with ntbackup 2000 I create a 22Gb .bkf file on the Windows machine. * I can copy that file over a samba share and get correct info from the file in Windows Explorer. * ls -l also returns correct info, *WHILE* stat, mc, and other programs fail with an error about a value too high for the defined data type. * If I try to cr...
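The "value too high for defined data type" wording matches strerror(EOVERFLOW), which a 32-bit stat() returns when a file's size does not fit in a 32-bit off_t; tools built without large-file support trip over it on anything bigger than 2GB, while an LFS-aware ls does not. A minimal sketch of that distinction (not from the post; assumes a 32-bit glibc build and takes the file to check as an argument):

/* Build without the first line to reproduce the error on a >2GB file. */
#define _FILE_OFFSET_BITS 64        /* make off_t 64-bit on 32-bit systems */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &st) != 0) {
        /* Without 64-bit offsets this prints
         * "Value too large for defined data type" (EOVERFLOW). */
        fprintf(stderr, "stat: %s\n", strerror(errno));
        return 1;
    }
    printf("size: %lld bytes\n", (long long)st.st_size);
    return 0;
}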
2004 Jan 12
1
Copying files between 1.0.9-4 and 1.0.9-12
...e25, ocfs 1.0.9-12) servers. When attempting to copy files from prod to dev we're getting the following messages: cp: writing `./mscx01.dbf': Input/output error cp: writing `./msdd01.dbf': No space left on device cp: writing `./msdx01.dbf': Input/output error Note that there are 22GB available on the target system. Some files appear to copy OK; many others get these messages. Is this an ocfs issue, perhaps due to the difference in versions? Thanks a lot, Matt
2003 Mar 28
1
rsync speed
Hi, I poked around the website and did some digging in the list archives, but thought I'd better pose my question here. I'm using rsync to synchronize two directories on a Solaris 8 E450 server. rsync copies about 22GB per night. It seems to take 50 minutes for the copy to complete, which seems a little slow. I'm currently running # /opt/bin/rsync -v rsync version 2.3.1 Copyright Andrew Tridgell and Paul Mackerras If I upgrade to 2.5.6, can I expect to see a speed increase? Thanks Karl
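For rough scale, taking the figures above at face value: 22GB in 50 minutes is 22 * 1024 MB / 3000 s, about 7.5 MB/s, or roughly 60 Mbit/s sustained.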
2007 Mar 14
1
Newbie index infrastructure and location questions
...again). Index questions: I understand the indices should be placed somewhere not subject to filesystem quotas. I have in mind to create /var/dcndx 1) Roughly how much space do indices take up (so I can size space for a directory for them)? I have 4000 users, the average mbox inbox size is 5.5MB (totalling 22GB), and the number of messages per user mailbox I'd guess averages around 100 and goes as high as 5000. They have maybe as much as 3x the space and messages in folders, though I guess the average is about half that of INBOX 2) I'm clear that DC will automagically create the index files, but a)...
2001 Oct 12
1
Large backup files from ntbackup to a samba share
...tbackup stops serving data and the file on the samba side appears to shrink to 0 and then grow very slowly. Once you stop ntbackup by killing the process, the file reappears at its real size (more than 4Gb), but, since ntbackup was stopped, it is impossible to recover. I have also created a local 22Gb file on the W2000 Server, then moved it to the Linux box via Samba, and it worked OK, so there's something between ntbackup and samba shares. Any idea, any help? Thanks P.S. I also had the idea of using a FIFO file to pipe the info from ntbackup into a gzipped file, but unfortunately ntba...
2012 May 01
2
kvm & virtio performance
Hi, has anyone tested FreeBSD as a guest on KVM with virtio drivers? Any experience?
2014 Oct 09
3
dovecot replication (active-active) - server specs
Hello, I have some questions about the new dovecot replication and mdbox format. My company currently has 3 old dovecot 2.0.x fileservers/backends with ca. 120k mailboxes and ca. 6 TB of data used. They are synchronised via drbd/corosync. Each fileserver/backend has ca. 40k mailboxes in Maildir format. Our MX server is delivering ca. 30 GB of new mail per day. Two IMAP proxy servers get the
2009 Oct 27
3
Stack overflow in R 2.10.0 with sub()
Hi R developers: Congratulations on the new R 2.10.0 version. It is a huge effort! Thank you for your work and dedication. I just want to ask how to make this "strip blank" function work again (it works on R 2.9.2). alumnos$AL_NUME_ID <- sub("(^ +)|( +$)", "", alumnos$AL_NUME_ID) "alumnos" is a data set with 900,000 rows and 72 columns. and
2014 Jan 16
0
[PATCH net-next v4 3/6] virtio-net: auto-tune mergeable rx buffer size for improved performance
...stem will not be scheduled on the benchmark CPUs. Trunk includes SKB rx frag coalescing. net-next w/ virtio_net before 2613af0ed18a (PAGE_SIZE bufs): 14642.85Gb/s net-next (MTU-size bufs): 13170.01Gb/s net-next + auto-tune: 14555.94Gb/s Jason Wang also reported a throughput increase on mlx4 from 22Gb/s using MTU-sized buffers to about 26Gb/s using auto-tuning. Signed-off-by: Michael Dalton <mwdalton at google.com> --- v2->v3: Remove per-receive queue metadata ring. Encode packet buffer base address and truesize into an unsigned long by requiring a minimum packet size a...
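The "encode packet buffer base address and truesize into an unsigned long" idea relies on alignment: if every buffer starts at an address aligned to a power of two, the low bits of that address are always zero and can carry a scaled truesize instead. A user-space sketch of that packing (the 256-byte alignment, the size range, and all names are illustrative, not taken from the patch):

#include <assert.h>
#include <stdio.h>

#define BUF_ALIGN       256UL               /* assumed buffer alignment */
#define BUF_ALIGN_MASK  (BUF_ALIGN - 1)

/* Pack an aligned base address and a truesize into one word.
 * truesize must be a multiple of BUF_ALIGN, at most BUF_ALIGN*BUF_ALIGN,
 * so (truesize / BUF_ALIGN - 1) fits in the pointer's zero low bits. */
static unsigned long pack_buf(void *base, unsigned long truesize)
{
    assert(((unsigned long)base & BUF_ALIGN_MASK) == 0);
    assert(truesize >= BUF_ALIGN && truesize <= BUF_ALIGN * BUF_ALIGN);
    assert((truesize & BUF_ALIGN_MASK) == 0);
    return (unsigned long)base | (truesize / BUF_ALIGN - 1);
}

static void *unpack_base(unsigned long ctx)
{
    return (void *)(ctx & ~BUF_ALIGN_MASK);
}

static unsigned long unpack_truesize(unsigned long ctx)
{
    return ((ctx & BUF_ALIGN_MASK) + 1) * BUF_ALIGN;
}

int main(void)
{
    void *base = (void *)0x100000UL;        /* fake, suitably aligned address */
    unsigned long ctx = pack_buf(base, 1536);

    printf("base=%p truesize=%lu\n", unpack_base(ctx), unpack_truesize(ctx));
    return 0;
}

The payoff is that only one unsigned long needs to be remembered per merged buffer instead of a separate metadata entry, which appears to be what the v2->v3 note above describes.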
2014 Jan 07
0
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...stem will not be scheduled on the benchmark CPUs. Trunk includes SKB rx frag coalescing. net-next w/ virtio_net before 2613af0ed18a (PAGE_SIZE bufs): 14642.85Gb/s net-next (MTU-size bufs): 13170.01Gb/s net-next + auto-tune: 14555.94Gb/s Jason Wang also reported a throughput increase on mlx4 from 22Gb/s using MTU-sized buffers to about 26Gb/s using auto-tuning. Signed-off-by: Michael Dalton <mwdalton at google.com> --- v2: Add per-receive queue metadata ring to track precise truesize for mergeable receive buffers. Remove all truesize approximation. Never try to fill a full RX ring...
2013 Dec 26
3
[PATCH net-next 3/3] net: auto-tune mergeable rx buffer size for improved performance
...18a (PAGE_SIZE bufs): 14642.85Gb/s > net-next (MTU-size bufs): 13170.01Gb/s > net-next + auto-tune: 14555.94Gb/s > > Signed-off-by: Michael Dalton <mwdalton at google.com> The patch looks good to me, and I tested this patch with mlx4; it helps increase the rx performance from about 22Gb/s to about 26Gb/s.
2017 May 05
10
[Bug 12769] New: error allocating core memory buffers (code 22) depending on source file system
https://bugzilla.samba.org/show_bug.cgi?id=12769

            Bug ID: 12769
           Summary: error allocating core memory buffers (code 22) depending on source file system
           Product: rsync
           Version: 3.1.0
          Hardware: All
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P5
         Component: core
2014 Jan 07
10
[PATCH net-next v2 1/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation
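The fallback described here (try a higher-order, multi-page allocation first, then retry with successively smaller orders until an order-0, single-page request is made) is easiest to picture as a loop. A rough user-space analogue, with malloc() standing in for the kernel page allocator and all constants purely illustrative:

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE  4096UL
#define MAX_ORDER  3                 /* first try 8 pages (32KB) */

/* Returns a buffer of PAGE_SIZE << *order bytes, lowering *order as needed. */
static void *alloc_frag(unsigned int *order)
{
    int ord;

    for (ord = MAX_ORDER; ord >= 0; ord--) {
        void *buf = malloc(PAGE_SIZE << ord);    /* stand-in for alloc_pages() */
        if (buf) {
            *order = (unsigned int)ord;
            return buf;
        }
        /* Higher-order request failed; retry one order smaller. */
    }
    return NULL;                     /* even the order-0 request failed */
}

int main(void)
{
    unsigned int order;
    void *buf = alloc_frag(&order);

    if (buf) {
        printf("got %lu bytes (order %u)\n", PAGE_SIZE << order, order);
        free(buf);
    }
    return 0;
}

The kernel version additionally softens the GFP flags for the higher-order attempts (e.g. __GFP_NORETRY) so that a failed large request falls through quickly; the sketch omits that detail.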
2014 Jan 08
3
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...benchmark CPUs. Trunk includes > SKB rx frag coalescing. > > net-next w/ virtio_net before 2613af0ed18a (PAGE_SIZE bufs): 14642.85Gb/s > net-next (MTU-size bufs): 13170.01Gb/s > net-next + auto-tune: 14555.94Gb/s > > Jason Wang also reported a throughput increase on mlx4 from 22Gb/s > using MTU-sized buffers to about 26Gb/s using auto-tuning. > > Signed-off-by: Michael Dalton <mwdalton at google.com> > --- > v2: Add per-receive queue metadata ring to track precise truesize for > mergeable receive buffers. Remove all truesize approximation. Never ...
2014 Jan 09
3
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...enchmark CPUs. Trunk includes > SKB rx frag coalescing. > > net-next w/ virtio_net before 2613af0ed18a (PAGE_SIZE bufs): 14642.85Gb/s > net-next (MTU-size bufs): 13170.01Gb/s > net-next + auto-tune: 14555.94Gb/s > > Jason Wang also reported a throughput increase on mlx4 from 22Gb/s > using MTU-sized buffers to about 26Gb/s using auto-tuning. > > Signed-off-by: Michael Dalton <mwdalton at google.com> Sorry that I didn't notice earlier, but there seems to be a bug here. See below. Also, I think we can simplify the code; see suggestion below. > --- > v2:...
2014 Jan 16
13
[PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation