search for: buckets

Displaying 20 results from an estimated 906 matches for "buckets".

2008 Oct 28
14
[PATCH 0/13] ocfs2: xattr bucket API
When the extended attribute namespace grows to a b-tree, the leaf clusters are organized by means of 'buckets'. Each bucket is 4K in size, regardless of blocksize. Thus, a bucket may be made of more than one block. fs/ocfs2/xattr.c has a nice little abstraction to wrap this, struct ocfs2_xattr_bucket. It contains a list of buffer_heads representing these blocks, and there is even an API to fill it...
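The abstraction is easy to picture with a small sketch. The following userspace C sketch (hypothetical names throughout, not the actual fs/ocfs2/xattr.c definitions) shows the one idea that matters: a 4K bucket spans 4096/blocksize blocks, so the structure carries one buffer per block rather than a single 4K buffer.

#include <stdio.h>
#include <stdlib.h>

#define BUCKET_SIZE 4096   /* a bucket is always 4K, regardless of blocksize */

struct xattr_bucket_sketch {
    size_t blocksize;        /* filesystem block size, e.g. 512..4096 */
    size_t nr_blocks;        /* BUCKET_SIZE / blocksize */
    unsigned char **blocks;  /* stands in for the list of buffer_heads */
};

static struct xattr_bucket_sketch *bucket_alloc(size_t blocksize)
{
    struct xattr_bucket_sketch *b = malloc(sizeof(*b));
    if (!b)
        return NULL;
    b->blocksize = blocksize;
    b->nr_blocks = BUCKET_SIZE / blocksize;
    b->blocks = calloc(b->nr_blocks, sizeof(*b->blocks));
    return b;
}

int main(void)
{
    struct xattr_bucket_sketch *b = bucket_alloc(512);
    if (!b)
        return 1;
    printf("a 4K bucket on a 512-byte-block fs spans %zu blocks\n",
           b->nr_blocks);
    return 0;
}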
2010 Mar 23
1
:has_many and :controller specified in routes.rb
...y designed to be purely html view, and an xml and json api was hacked on. To keep it clean, we are now moving the first version of the api (v1) under its own directory (v1) under app/controllers but still responding to the old paths. One of the parts of the new routes.rb specifies: map.resources :buckets, :has_many => :apples, :controller => 'v1/buckets' However, when I do rake routes, this produces: ... buckets GET /buckets(.:format) {:controller=>"v1/buckets", :action=>"index"} POST /buckets(.:format) {:controller=>...
2006 Oct 13
3
HTB has 2 bucket?
Does HTB use 2 buckets to manage 2 rates? The first bucket keeps tokens for sending at rate; the second bucket keeps ctokens for sending at ceil rate. Is that true, or am I misunderstanding token bucket theory?
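For what it's worth, the two-bucket scheme the poster describes can be sketched in a few lines of C. This illustrates the rate/ceil idea only (hypothetical names; it is not the kernel's sch_htb code): tokens refill at rate and gate guaranteed sending, ctokens refill at ceil and gate borrowing.

#include <stdbool.h>

struct htb_class_sketch {
    double tokens;   /* bytes sendable at the guaranteed rate */
    double ctokens;  /* bytes sendable up to the ceil rate */
    double rate;     /* guaranteed rate, bytes/sec */
    double ceil;     /* ceiling rate, bytes/sec */
    double burst;    /* bucket depth, bytes */
};

/* Both buckets refill with time, each at its own rate, capped at burst. */
static void refill(struct htb_class_sketch *c, double dt)
{
    c->tokens  += c->rate * dt;
    c->ctokens += c->ceil * dt;
    if (c->tokens  > c->burst) c->tokens  = c->burst;
    if (c->ctokens > c->burst) c->ctokens = c->burst;
}

/* Send at the guaranteed rate while tokens last... */
static bool can_send_at_rate(const struct htb_class_sketch *c, int len)
{
    return c->tokens >= len;
}

/* ...and keep sending (by borrowing) only while ctokens last. */
static bool can_borrow(const struct htb_class_sketch *c, int len)
{
    return c->ctokens >= len;
}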
2005 Jun 20
6
Bucketting data
Hi, I'm sure this is a trivial question, but for some reason I haven't been able to figure it out. I want to bucket data in a vector, and then iterate over the buckets. Say the data set is: > cleandata[,4] [1] 26 26 26 26 26 26 26 26 26 26 26 26 61 61 61 61 61 61 61 61 61 61 61 89 89 89 89 89 89 89 180 180 180 180 362 544 544 544 [39] 544 544 544 544 544 544 544 This has the buckets: 26 61 89 180 362 544 I'd like someth...
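The question was asked about R, but the underlying operation is easy to see in a small C sketch: once the data is sorted, each bucket is just a run of equal values, so iterating over the buckets means scanning for run boundaries (values abbreviated from the post above).

#include <stdio.h>

int main(void)
{
    int data[] = {26, 26, 26, 61, 61, 89, 89, 180, 362, 544, 544};
    int n = sizeof(data) / sizeof(data[0]);

    for (int i = 0; i < n; ) {
        int j = i;
        while (j < n && data[j] == data[i])
            j++;
        printf("bucket %d: %d element(s)\n", data[i], j - i);
        i = j;  /* jump to the start of the next bucket */
    }
    return 0;
}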
2008 Oct 26
1
[PATCH 1/1] ocfs2/xattr: Proper hash collision handle in bucket division.v3
...ich have the same hash value exist in the same bucket so that the search schema can work. But in the old implementation, when we want to extend a bucket, we just move half of the xattrs to the new bucket. This works in most cases, but if we are unlucky we may split 2 xattrs with the same hash into 2 different buckets, which causes a problem: an xattr existing in the previous bucket can no longer be found. This patch fixes the problem by finding the right position while extending the bucket, and extends an empty bucket if needed. Signed-off-by: Tao Ma <tao.ma at oracle.com> Cc: Joel Becker <joel.becker...
2019 Mar 05
0
[PATCH nbdkit] Add new filter for rate-limiting connections.
...T LIABILITY, + * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT + * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + */ + +/* This filter is implemented using a Token Bucket + * (https://en.wikipedia.org/wiki/Token_bucket). There are two + * buckets per connection (one each for reading and writing) and two + * global buckets (also for reading and writing). + * + * We add tokens at the desired rate (the per-connection rate for the + * connection buckets, and the global rate for the global buckets). + * Note that we don't actually keep the b...
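The classic token bucket the comment refers to is compact enough to sketch in full. This is an illustration only (hypothetical names, not the filter's actual API); the real filter keeps four such buckets, per-connection and global, for reading and writing.

#include <stdbool.h>

struct token_bucket {
    double tokens;   /* current tokens (e.g. bytes) */
    double rate;     /* tokens added per second */
    double capacity; /* maximum tokens the bucket can hold */
};

/* Credit the bucket for `dt` seconds of elapsed time. */
static void bucket_refill(struct token_bucket *b, double dt)
{
    b->tokens += b->rate * dt;
    if (b->tokens > b->capacity)
        b->tokens = b->capacity;
}

/* Take `n` tokens if available; the caller throttles when this fails. */
static bool bucket_take(struct token_bucket *b, double n)
{
    if (b->tokens < n)
        return false;
    b->tokens -= n;
    return true;
}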
2008 Nov 19
2
Bucketing/Grouping Probabilities
...), 0.072), ((2), 0.121), ((11), 0.185)] and (3 * 0.024) + (5 * 0.049) + (2 * 0.072) + (1 * 0.121) + (1 * 0.185) ~= 1. My question is: what is the most 'correct' way to cluster these probabilities? In my case the problem is not totally unconstrained. I would like to specify the number of buckets (I will probably always use either 5 or 6), so I do not need an algorithm that determines the most appropriate number of buckets given some cost function. I just need to know, for a given number of buckets, which entrants go in which buckets and what the representative probability is for each...
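One simple answer, purely illustrative and only one of several defensible choices (1-D k-means would minimise within-bucket variance instead): sort the probabilities, cut them into k roughly equal-count buckets, and use each bucket's mean as its representative probability. The sample data below uses the entrant probabilities quoted in the post.

#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

static void bucketize(double *p, int n, int k)
{
    qsort(p, n, sizeof(double), cmp_double);
    for (int b = 0; b < k; b++) {
        int lo = b * n / k, hi = (b + 1) * n / k;
        double sum = 0.0;
        for (int i = lo; i < hi; i++)
            sum += p[i];
        if (hi > lo)
            printf("bucket %d: %d entrants, representative %.3f\n",
                   b, hi - lo, sum / (hi - lo));
    }
}

int main(void)
{
    double p[] = {0.024, 0.024, 0.024, 0.049, 0.049, 0.049, 0.049, 0.049,
                  0.072, 0.072, 0.121, 0.185};
    bucketize(p, sizeof(p) / sizeof(p[0]), 5);
    return 0;
}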
2019 Mar 05
2
[PATCH nbdkit] Add new filter for rate-limiting connections.
For virt-v2v we have been discussing how to limit network bandwidth. The initial discussion has been around how to use cgroups to do this limiting, and that is still probably what we will go with in the end. However this patch gives us another possibility for certain virt-v2v inputs, especially VDDK. We could apply a filter on top of the nbdkit plugin which limits the rate at which it copies
2009 Mar 09
4
[PATCH] ocfs2: Use xs->bucket to set xattr value outside.
Tristan, could you please run your xattr test against it? xs->base used to be allocated 4K, and all the contents of the bucket were copied into it, so in ocfs2_xattr_bucket_set_value_outside we were safe to use xs->base + offset. Now we use ocfs2_xattr_bucket to abstract the xattr bucket, and xs->base is initialized to the start of bu_bhs[0]. So xs->base + offset will overflow
2009 Apr 22
1
[PATCH 1/1] OCFS2: fasten dlm_lock_resource hash_table lookups
.../dlmdebug.c --- ./svnocfs2-1.2/fs/ocfs2/dlm/dlmdebug.c.orig 2009-04-22 11:00:37.000000000 +0800 +++ ./svnocfs2-1.2/fs/ocfs2/dlm/dlmdebug.c 2009-04-22 11:08:27.000000000 +0800 @@ -547,7 +547,7 @@ void dlm_dump_lock_resources(struct dlm_ spin_lock(&dlm->spinlock); for (i=0; i<DLM_HASH_BUCKETS; i++) { - bucket = &(dlm->lockres_hash[i]); + bucket = dlm_lockres_hash(dlm, i); hlist_for_each_entry(res, iter, bucket, hash_node) dlm_print_one_lock_resource(res); } diff -up ./svnocfs2-1.2/fs/ocfs2/dlm/dlmrecovery.c.orig ./svnocfs2-1.2/fs/ocfs2/dlm/dlmrecovery.c --- ./svnocfs2-...
2007 May 08
3
Token Bucket Filter and Dropping
I am trying to create my own Token Bucket Filter. However, I have a problem with packet dropping. Scenario: I have two streams of 20KB/s each and one bucket with rate 20KB/s, and I put both streams into this bucket. When the buffer is full, packets need to be dropped. The problem is that in this scenario only every other packet should be dropped. The streams are identical, so the queue looks like this: S1 |
2011 Jul 05
0
Problem in accessing bucket of my AWS S3 account
..."my secret key", @http=#<Net::HTTP s3-ap-southeast-1.amazonaws.com:80 open=false>> I have a bucket based in the "Singapore Region", and for that the endpoint (i.e. server) is s3-ap-southeast-1.amazonaws.com. So when I try to access it using this command - AWS::S3::Service.buckets - it fetches all buckets in my account correctly: => [#<AWS::S3::Bucket:0x8d291fc @attributes={"name"=>"bucket1", "creation_date"=>2011-06-28 10:08:58 UTC}, @object_cache=[]>, #<AWS::S3::Bucket:0x8d291c0 @attributes={"name"=>"bucket...
2009 May 01
0
[PATCH 1/3] OCFS2: speed up dlm_lockr_resouce hash_table lookups
...+ +} + /* * * spinlock lock ordering: if multiple locks are needed, obey this ordering: @@ -127,7 +154,7 @@ void __dlm_insert_lockres(struct dlm_ctx q = &res->lockname; q->hash = full_name_hash(q->name, q->len); - bucket = &(dlm->lockres_hash[q->hash % DLM_HASH_BUCKETS]); + bucket = dlm_lockres_hash(dlm, q->hash); /* get a reference for our hashtable */ dlm_lockres_get(res); @@ -151,7 +178,7 @@ struct dlm_lock_resource * __dlm_lookup_ hash = full_name_hash(name, len); - bucket = &(dlm->lockres_hash[hash % DLM_HASH_BUCKETS]); + bucket = dlm_l...
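Reading the hunks in this patch and the dlmdebug.c one above together, the helper being introduced plausibly just hides the bucket computation behind one function. A hedged guess at its shape (illustrative types and constant, not the actual ocfs2 definition):

/* Minimal stand-ins for the kernel's hlist types. */
struct hlist_node { struct hlist_node *next, **pprev; };
struct hlist_head { struct hlist_node *first; };

#define DLM_HASH_BUCKETS 1024   /* illustrative value */

struct dlm_ctxt_sketch {
    struct hlist_head *lockres_hash;
};

/* One place that knows how a full hash maps to a chain head: callers such
 * as __dlm_insert_lockres can pass the raw full_name_hash() value, and the
 * dump loop can pass a plain bucket index (the modulo is then a no-op). */
static inline struct hlist_head *
dlm_lockres_hash(struct dlm_ctxt_sketch *dlm, unsigned int hash)
{
    return &dlm->lockres_hash[hash % DLM_HASH_BUCKETS];
}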
2009 Jan 08
1
[PATCH] ocfs2: Access the xattr bucket only before modifying it.
Hi Mark, This is the fix for 2.6.29 (introduced by uniting the bucket journal access). Since I found no fixes-for-2.6.29 in your ocfs2.git, it is based on your original upstream-linus. In ocfs2_xattr_value_truncate, we may call b-tree code which will extend the journal transaction. This has a potential problem: it may let already-accessed-but-not-dirtied buffers go away. So we'd better
2009 Mar 12
0
[LLVMdev] a different hash for APInts
...ntries (1<<16). > There is no executable code in this testcase. > > The cause seems to be the APInt::getHashValue() function (near line > 626 of .../lib/Support/APInt.cpp). Some investigation using the > debugger suggests that every value was hashing into about three > buckets, and the DenseMap code was looping excessively over the > extremely long chains in those three buckets. > From what I can see, the old hash function can be good if the number of buckets is not a power of two. The problem is that DenseMap uses 64 buckets initially and grows by doubling thi...
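A small demo of the failure mode described here: tables with power-of-two bucket counts select a bucket with `hash & (nbuckets - 1)`, so a hash whose low bits carry little information piles everything into a few chains. The names below are illustrative, not LLVM code.

#include <stdio.h>

/* A deliberately bad hash: only the high bits vary. */
static unsigned bad_hash(unsigned v) { return v << 16; }

int main(void)
{
    enum { NBUCKETS = 64 };  /* power of two, like DenseMap's initial size */
    unsigned counts[NBUCKETS] = {0};

    for (unsigned v = 0; v < 1000; v++)
        counts[bad_hash(v) & (NBUCKETS - 1)]++;

    for (unsigned b = 0; b < NBUCKETS; b++)
        if (counts[b])
            printf("bucket %2u: %u entries\n", b, counts[b]);
    /* All 1000 entries land in bucket 0; every other chain is empty.
     * With a prime bucket count, `hash % nbuckets` would mix in the
     * high bits and spread these keys out. */
    return 0;
}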
2012 Nov 17
4
survfit & number of variables != number of variable names
This works ok: > cox = coxph(surv ~ bucket*(today + accor + both) + activity, data = data) > fit = survfit(cox, newdata=data[1:100,]) but using strata leads to problems: > cox.s = coxph(surv ~ bucket*(today + accor + both) + strata(activity), > data = data) > fit.s = survfit(cox.s, newdata=data[1:100,]) Error in model.frame.default(data = data[1:100, ], formula = ~bucket + :
2015 Apr 01
2
[PATCH 8/9] qspinlock: Generic paravirt support
On Wed, Apr 01, 2015 at 03:58:58PM -0400, Waiman Long wrote: > On 04/01/2015 02:48 PM, Peter Zijlstra wrote: > I am sorry that I don't quite get what you mean here. My point is that in > the hashing step, a cpu will need to scan for an empty bucket to put the lock > in. In the interim, a previously used bucket before the empty one may get > freed. In the lookup step for that lock,
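To make the scenario concrete, here is an illustrative open-addressing sketch (not the actual qspinlock hash code): insert probes forward from the home bucket to the first empty slot, and a lookup that stops at the first empty slot is exactly what goes wrong if a bucket between the home slot and the entry is freed in the interim.

#include <stddef.h>

#define NBUCKETS 64

struct bucket { void *lock; /* NULL means empty */ };
static struct bucket table[NBUCKETS];

static size_t hash_ptr(void *p) { return ((size_t)p >> 4) % NBUCKETS; }

/* Probe forward to the first empty slot (assumes the table never fills). */
static void insert(void *lock)
{
    for (size_t i = hash_ptr(lock); ; i = (i + 1) % NBUCKETS)
        if (table[i].lock == NULL) { table[i].lock = lock; return; }
}

/* Naive lookup: stops at the first empty bucket. If a bucket between the
 * home slot and `lock`'s slot was freed in the meantime, this misses the
 * entry -- the situation the thread above is worried about. */
static int lookup_naive(void *lock)
{
    for (size_t i = hash_ptr(lock); table[i].lock != NULL;
         i = (i + 1) % NBUCKETS)
        if (table[i].lock == lock) return 1;
    return 0;
}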
2006 Feb 09
1
[Bug 445] New: ipt_account reports: sleeping function called from invalid context at mm/slab.c:2063
https://bugzilla.netfilter.org/bugzilla/show_bug.cgi?id=445 Summary: ipt_account reports: sleeping function called from invalid context at mm/slab.c:2063 Product: netfilter/iptables Version: patch-o-matic-ng Platform: All OS/Version: other Status: NEW Severity: normal Priority: P2
2012 Oct 01
5
s3 as mysql directory
Hello list, I am soliciting opinions here, as opposed to technical help, about an idea I have. I've set up a bacula backup system on an AWS volume. Bacula stores a LOT of information in its mysql database (in my setup; you can also use postgres or sqlite if you choose). Since I started doing this I have noticed that the mysql data directory has swelled to over 700GB! That's quite a lot and