search for: bucketing

Displaying 20 results from an estimated 906 matches for "bucketing".

2008 Oct 28
14
[PATCH 0/13] ocfs2: xattr bucket API
When the extended attribute namespace grows to a b-tree, the leaf clusters are organized by means of 'buckets'. Each bucket is 4K in size, regardless of blocksize. Thus, a bucket may be made of more than one block. fs/ocfs2/xattr.c has a nice little abstraction to wrap this, struct ocfs2_xattr_bucket. It contains a list of buffer_heads representing these blocks, and there is even an
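For readers unfamiliar with the structure being described, here is a rough userspace sketch of the shape of such a bucket abstraction; the names and layout are illustrative only, not the actual struct ocfs2_xattr_bucket from fs/ocfs2/xattr.c.

    /* Userspace model of the idea: a bucket is a fixed 4K region that may
     * span several filesystem blocks, so it is held as an array of per-block
     * buffers rather than one contiguous pointer (like bu_bhs[] holds
     * buffer_heads in the kernel).  Illustrative code, not ocfs2. */
    #include <stdlib.h>

    #define XATTR_BUCKET_SIZE 4096

    struct xattr_bucket {
        size_t blocksize;   /* filesystem block size, e.g. 512..4096 */
        size_t nr_blocks;   /* XATTR_BUCKET_SIZE / blocksize */
        char **blocks;      /* one buffer per block making up the bucket */
    };

    static struct xattr_bucket *bucket_alloc(size_t blocksize)
    {
        struct xattr_bucket *b = calloc(1, sizeof(*b));
        if (!b)
            return NULL;
        b->blocksize = blocksize;
        b->nr_blocks = XATTR_BUCKET_SIZE / blocksize;
        b->blocks = calloc(b->nr_blocks, sizeof(*b->blocks));
        for (size_t i = 0; b->blocks && i < b->nr_blocks; i++)
            b->blocks[i] = calloc(1, blocksize);
        return b;
    }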
2010 Mar 23
1
:has_many and :controller specified in routes.rb
I have an app using an older version of Rails (2.3.2) that I need some routing assistance with, if anyone has a minute. The app was originally designed as a purely HTML view, and an XML and JSON API was hacked on. To keep it clean, we are now moving the first version of the API (v1) under its own directory (v1) under app/controllers, but still responding to the old paths. One of the parts of the new
2006 Oct 13
3
HTB has 2 buckets?
Does HTB use 2 buckets to manage 2 rates? The first bucket keeps tokens for sending at rate, and the second bucket keeps ctokens for sending at ceil rate. Is that true, or am I misunderstanding token/bucket theory?
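A minimal sketch of the two-bucket idea being asked about, assuming the usual HTB description (tokens refilled at the guaranteed rate, ctokens refilled at the ceil rate); this is illustrative C, not the kernel's HTB code.

    /* Illustration of the two buckets: a class sends in its own right while
     * "tokens" (guaranteed rate) remain; once they run out it may still send
     * by borrowing as long as "ctokens" (ceil rate) remain; with both empty
     * it must wait for the next refill. */
    enum htb_mode { HTB_CAN_SEND, HTB_MAY_BORROW, HTB_CANT_SEND };

    struct htb_sketch {
        long tokens;   /* credit, in bytes, at the guaranteed rate */
        long ctokens;  /* credit, in bytes, at the ceil rate */
    };

    static enum htb_mode htb_classify(const struct htb_sketch *c, long pkt_len)
    {
        if (c->tokens >= pkt_len)
            return HTB_CAN_SEND;    /* within the guaranteed rate */
        if (c->ctokens >= pkt_len)
            return HTB_MAY_BORROW;  /* over rate but still under ceil */
        return HTB_CANT_SEND;       /* over ceil: wait for a refill */
    }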
2005 Jun 20
6
Bucketing data
Hi, I'm sure this is a trivial question, but for some reason I haven't been able to figure it out. I want to bucket data in a vector and then iterate over the buckets. Say the data set is:
> cleandata[,4]
 [1] 26 26 26 26 26 26 26 26 26 26 26 26 61 61 61 61 61 61 61 61 61 61 61 89 89 89 89 89 89 89 180 180 180 180 362 544 544 544
[39] 544 544 544 544 544 544
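The question above is an R question; purely as an illustration of the concept in C (data and names below are made up, echoing the sample vector), here is one way to walk a sorted array and treat each run of equal values as a bucket.

    /* Illustration only: iterate over a sorted array bucket by bucket, where
     * a bucket is a run of equal values. */
    #include <stdio.h>

    int main(void)
    {
        int x[] = {26, 26, 26, 61, 61, 89, 180, 180, 362, 544, 544, 544};
        size_t n = sizeof(x) / sizeof(x[0]);

        for (size_t start = 0; start < n; ) {
            size_t end = start;
            while (end < n && x[end] == x[start])
                end++;                                /* end of this bucket */
            printf("bucket value %d: %zu elements\n", x[start], end - start);
            start = end;                              /* move to next bucket */
        }
        return 0;
    }

In R itself, something along the lines of split(cleandata[,4], cleandata[,4]) or table(cleandata[,4]) groups or counts a vector by value.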
2008 Oct 26
1
[PATCH 1/1] ocfs2/xattr: Proper hash collision handling in bucket division.v3
Modification from V2 to V3: use the more polished code suggested by Joel; thanks to Joel for it. In ocfs2/xattr, we must make sure that xattrs which have the same hash value live in the same bucket so that the search scheme can work. But in the old implementation, when we want to extend a bucket, we just move half of the xattrs to the new bucket. This works in most cases, but if we are lucky enough we
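A hedged sketch of the constraint described in the patch text: the split point of a hash-sorted bucket must fall on a boundary between different hash values, so that every entry sharing a hash stays in a single bucket. The function below is illustrative, not the ocfs2 implementation.

    /* Given a bucket's entry hashes sorted in ascending order, pick a split
     * point near the middle that does not cut through a run of equal hashes.
     * Returns the index where the new bucket should start, or 0 if every
     * entry shares one hash and the bucket cannot be split at all. */
    #include <stddef.h>

    static size_t pick_split_point(const unsigned int *hashes, size_t count)
    {
        size_t mid = count / 2;
        size_t split = mid;

        /* Move forward past entries that share the hash at the midpoint. */
        while (split < count && hashes[split] == hashes[mid])
            split++;
        if (split < count)
            return split;

        /* Everything from mid onward collides; try moving backward instead. */
        split = mid;
        while (split > 0 && hashes[split - 1] == hashes[mid])
            split--;
        return split;   /* 0 means all entries share one hash value */
    }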
2019 Mar 05
0
[PATCH nbdkit] Add new filter for rate-limiting connections.
---
 filters/delay/nbdkit-delay-filter.pod |   4 +-
 filters/rate/nbdkit-rate-filter.pod   |  84 +++++++++
 configure.ac                          |   2 +
 filters/rate/bucket.h                 |  62 +++++++
 filters/rate/bucket.c                 | 173 +++++++++++++++++++
 filters/rate/rate.c                   | 235 ++++++++++++++++++++++++++
 TODO                                  |   9 +
2008 Nov 19
2
Bucketing/Grouping Probabilities
...thing to do here? I'm too statistically naive to know one way or the other. I would appreciate any suggestions regarding the correct approach, and also (obviously) any tips on how one might go about this in R using canned functions. Many thanks!
2019 Mar 05
2
[PATCH nbdkit] Add new filter for rate-limiting connections.
For virt-v2v we have been discussing how to limit network bandwidth. The initial discussion has been around how to use cgroups to do this limiting, and that is still probably what we will go with in the end. However this patch gives us another possibility for certain virt-v2v inputs, especially VDDK. We could apply a filter on top of the nbdkit plugin which limits the rate at which it copies
2009 Mar 09
4
[PATCH] ocfs2: Use xs->bucket to set xattr value outside.
Tristan, could you please run your xattr test against it? xs->base used to be allocated with a 4K size, and all the contents of the bucket were copied into it, so in ocfs2_xattr_bucket_set_value_outside we were safe to use xs->base + offset. Now we use ocfs2_xattr_bucket to abstract the xattr bucket, and xs->base is initialized to the start of bu_bhs[0]. So xs->base + offset will overflow
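To make the overflow concrete, here is a small illustrative helper (hypothetical names, not the ocfs2 code) showing how a byte offset inside a multi-block bucket has to be split into a block index plus an offset within that block, rather than added directly to the first block's data pointer.

    /* Once a 4K bucket is held as several per-block buffers instead of one
     * flat 4K allocation, a byte offset into the bucket must be turned into
     * (block index, offset within block); simply adding the offset to the
     * first block's data can run past the end of block 0. */
    #include <stddef.h>

    static char *bucket_byte(char **block_data, size_t blocksize, size_t offset)
    {
        size_t block = offset / blocksize;   /* which per-block buffer */
        size_t off   = offset % blocksize;   /* position inside that buffer */

        return block_data[block] + off;
    }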
2009 Apr 22
1
[PATCH 1/1] OCFS2: speed up dlm_lock_resource hash_table lookups
#backporting the 3 patches at http://kernel.us.oracle.com/~smushran/srini/ to 1.2. Enlarge hash_table capacity to speed up hash_table lookups. Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com>
--
diff -up ./svnocfs2-1.2/fs/ocfs2/dlm/dlmdebug.c.orig ./svnocfs2-1.2/fs/ocfs2/dlm/dlmdebug.c
--- ./svnocfs2-1.2/fs/ocfs2/dlm/dlmdebug.c.orig 2009-04-22 11:00:37.000000000 +0800
+++
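As a rough illustration of why enlarging the table helps (a generic chained hash table, not the dlm code): the bucket index is computed from the hash, and a lookup only walks that one bucket's chain, so more buckets spread the same entries over more, shorter chains.

    /* Generic chained hash lookup; nbuckets is assumed to be a power of two
     * so the index is just hash & (nbuckets - 1). */
    #include <stddef.h>
    #include <string.h>

    struct res {
        unsigned int hash;
        const char  *name;
        struct res  *next;    /* chain within one bucket */
    };

    static struct res *lookup(struct res **table, unsigned int nbuckets,
                              unsigned int hash, const char *name)
    {
        struct res *r = table[hash & (nbuckets - 1)];

        for (; r; r = r->next)
            if (r->hash == hash && strcmp(r->name, name) == 0)
                return r;
        return NULL;          /* not found in this bucket's chain */
    }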
2007 May 08
3
Token Bucket Filter and Dropping
I am trying to create my own Token Bucket Filter. However, I have a problem with packet dropping. Scenario: I have two streams of 20KB/s each and one bucket with rate 20KB/s, and I put both streams into this bucket. When the buffer is full, packets need to be dropped. The problem is that only every other packet needs to be dropped in this scenario. The streams are the same, so the queue looks like that: S1 |
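A minimal sketch of the enqueue decision being described, assuming a classic token bucket with a bounded queue and tail drop; the names and structure are made up for illustration and this is not the tc TBF source.

    /* A packet that has enough token credit is sent; otherwise it is queued
     * until tokens accumulate; once the queue is full the arriving packet is
     * dropped (tail drop).  In the scenario above, with two identical
     * interleaved streams and a rate equal to one stream, roughly every other
     * arrival ends up being the one that is dropped. */
    #include <stddef.h>

    struct tbf_sketch {
        long   tokens;     /* bytes of credit currently in the bucket */
        size_t queued;     /* packets currently waiting */
        size_t queue_max;  /* buffer limit */
    };

    /* Returns 1 if the packet is sent, 0 if queued, -1 if dropped. */
    static int tbf_enqueue(struct tbf_sketch *t, long pkt_len)
    {
        if (t->tokens >= pkt_len) {
            t->tokens -= pkt_len;        /* enough credit: send immediately */
            return 1;
        }
        if (t->queued < t->queue_max) {
            t->queued++;                 /* wait for tokens to accumulate */
            return 0;
        }
        return -1;                       /* buffer full: drop the arrival */
    }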
2011 Jul 05
0
Problem in accessing bucket of my AWS S3 account
I tried to establish a connection to my AWS S3 account like this in my irb console -
AWS::S3::Base.establish_connection!(:access_key_id => 'my access key',
  :secret_access_key => 'my secret key',
  :server => "s3-ap-southeast-1.amazonaws.com")
And it works well and prompts this -
=> #<AWS::S3::Connection:0x8cd86d0
2009 May 01
0
[PATCH 1/3] OCFS2: speed up dlm_lock_resource hash_table lookups
Use multiple pages for the hash table. Mainline git commit: 03d864c02c3ea803b1718940ac6953a257182d7a
Authored-by: Daniel Phillips <phillips at google.com>
Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com>
--
Index: ocfs2-1.2/fs/ocfs2/dlm/dlmdomain.c
===================================================================
--- ocfs2-1.2/fs/ocfs2/dlm/dlmdomain.c (revision 1)
+++
2009 Jan 08
1
[PATCH] ocfs2: Access the xattr bucket only before modifying it.
Hi Mark, This is the fix for 2.6.29 (for a problem introduced by uniting the bucket journal access). Since I found no fixes-for-2.6.29 branch in your ocfs2.git, it is based on your original upstream-linus. In ocfs2_xattr_value_truncate, we may call b-tree code which will extend the journal transaction. This has a potential problem: it may let already-accessed-but-not-dirtied buffers go away. So we'd better
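A heavily hedged sketch of the ordering concern described above; every type and helper name below is a placeholder, not the ocfs2 API. The point is simply that journal access on the bucket is taken only after any call that might extend the transaction, and the modified buffers are dirtied immediately afterwards.

    /* Placeholder types and prototypes, for illustration only. */
    #include <stddef.h>

    struct txn;
    struct bucket;
    struct xattr;

    int  value_truncate_may_extend_txn(struct txn *t, struct xattr *xa, size_t len);
    int  journal_access_bucket(struct txn *t, struct bucket *b);
    void write_value_into_bucket(struct bucket *b, struct xattr *xa,
                                 const void *val, size_t len);
    void journal_dirty_bucket(struct txn *t, struct bucket *b);

    static int update_xattr_value(struct txn *t, struct bucket *b,
                                  struct xattr *xa, const void *val, size_t len)
    {
        int ret;

        /* This call may extend the transaction, so it is done while no bucket
         * block is sitting in the accessed-but-not-dirtied state. */
        ret = value_truncate_may_extend_txn(t, xa, len);
        if (ret)
            return ret;

        /* Only now take journal access on the bucket's blocks... */
        ret = journal_access_bucket(t, b);
        if (ret)
            return ret;

        /* ...then modify them and immediately mark them dirty. */
        write_value_into_bucket(b, xa, val, len);
        journal_dirty_bucket(t, b);
        return 0;
    }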
2009 Mar 12
0
[LLVMdev] a different hash for APInts
Stuart Hastings wrote:
> {
>   {0x00000000}, {0x33800000}, {0x34000000}, {0x34400000},
>   {0x34800000}, {0x34a00000}, {0x34c00000}, {0x34e00000},
>   {0x35000000}, {0x35100000}, {0x35200000}, {0x35300000},
>   {0x35400000}, {0x35500000}, {0x35600000}, {0x35700000},
> ...
>   {0xfffd8000}, {0xfffda000}, {0xfffdc000}, {0xfffde000},
>   {0xfffe0000},
2012 Nov 17
4
survfit & number of variables != number of variable names
This works ok:
> cox = coxph(surv ~ bucket*(today + accor + both) + activity, data = data)
> fit = survfit(cox, newdata=data[1:100,])
but using strata leads to problems:
> cox.s = coxph(surv ~ bucket*(today + accor + both) + strata(activity), data = data)
> fit.s = survfit(cox.s, newdata=data[1:100,])
Error in model.frame.default(data = data[1:100, ], formula = ~bucket + :
2015 Apr 01
2
[PATCH 8/9] qspinlock: Generic paravirt support
On Wed, Apr 01, 2015 at 03:58:58PM -0400, Waiman Long wrote:
> On 04/01/2015 02:48 PM, Peter Zijlstra wrote:
> I am sorry that I don't quite get what you mean here. My point is that in
> the hashing step, a cpu will need to scan for an empty bucket to put the lock
> in. In the interim, a previously used bucket before the empty one may get
> freed. In the lookup step for that lock,
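For context, a rough model of the hashing step under discussion (illustrative only, not the kernel's paravirt qspinlock hash): the lock pointer is hashed and the table is probed linearly for an empty bucket in which to record a (lock, cpu) pair; the thread is about what happens when a bucket seen as occupied during insert is freed before the matching lookup.

    #include <stdint.h>
    #include <stddef.h>

    #define PV_HASH_SIZE 256   /* power of two, illustrative size */

    struct pv_entry {
        void *lock;            /* NULL means the bucket is empty */
        int   cpu;
    };

    static struct pv_entry pv_hash_table[PV_HASH_SIZE];

    static size_t pv_hash(const void *lock)
    {
        return ((uintptr_t)lock >> 4) & (PV_HASH_SIZE - 1);
    }

    /* Scan linearly from the hashed slot for the first empty bucket. */
    static struct pv_entry *pv_hash_insert(void *lock, int cpu)
    {
        for (size_t i = 0, h = pv_hash(lock); i < PV_HASH_SIZE; i++) {
            struct pv_entry *e = &pv_hash_table[(h + i) & (PV_HASH_SIZE - 1)];
            if (!e->lock) {                /* first empty bucket wins */
                e->lock = lock;
                e->cpu = cpu;
                return e;
            }
        }
        return NULL;                       /* table full (not handled here) */
    }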
2006 Feb 09
1
[Bug 445] New: ipt_account reports: sleeping function called from invalid context at mm/slab.c:2063
https://bugzilla.netfilter.org/bugzilla/show_bug.cgi?id=445
Summary: ipt_account reports: sleeping function called from invalid context at mm/slab.c:2063
Product: netfilter/iptables
Version: patch-o-matic-ng
Platform: All
OS/Version: other
Status: NEW
Severity: normal
Priority: P2
2012 Oct 01
5
s3 as mysql directory
Hello list, I am soliciting opinions here, as opposed to technical help, on an idea I have. I've set up a bacula backup system on an AWS volume. Bacula stores a LOT of information in its mysql database (in my setup; you can also use postgres or sqlite if you choose). Since I've started doing this I've noticed that the mysql data directory has swelled to over 700GB! That's quite a lot and