search for: bucket

Displaying 20 results from an estimated 906 matches for "bucket".

2008 Oct 28
14
[PATCH 0/13] ocfs2: xattr bucket API
When the extended attribute namespace grows to a b-tree, the leaf clusters are organized by means of 'buckets'. Each bucket is 4K in size, regardless of blocksize. Thus, a bucket may be made of more than one block. fs/ocfs2/xattr.c has a nice little abstraction to wrap this, struct ocfs2_xattr_bucket. It contains a list of buffer_heads representing these blocks, and there is even an API to fill it...
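For readers unfamiliar with the pattern, the shape of such an abstraction can be sketched in plain C as below. This is a hypothetical simplification, not the actual fs/ocfs2/xattr.c definitions; the names are made up.

#include <stdlib.h>

#define BUCKET_SIZE 4096

/* Stand-in for a single disk block buffer (a buffer_head in the kernel). */
struct block_buf {
    char *data;                  /* blocksize bytes of block content */
};

/* A 4K bucket backed by one or more blocks, depending on blocksize. */
struct xattr_bucket {
    struct block_buf **blocks;
    int nr_blocks;               /* BUCKET_SIZE / blocksize */
};

static struct xattr_bucket *bucket_alloc(int blocksize)
{
    struct xattr_bucket *b = malloc(sizeof(*b));
    if (!b)
        return NULL;
    b->nr_blocks = BUCKET_SIZE / blocksize;
    b->blocks = calloc(b->nr_blocks, sizeof(*b->blocks));
    if (!b->blocks) {
        free(b);
        return NULL;
    }
    return b;
}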
2010 Mar 23
1
:has_many and :controller specified in routes.rb
...y designed to be a purely HTML view, and an XML and JSON API was hacked on. To keep it clean, we are now moving the first version of the API (v1) under its own directory (v1) under app/controllers, but still responding to the old paths. One of the parts of the new routes.rb specifies:

map.resources :buckets, :has_many => :apples, :controller => 'v1/buckets'

However, when I do rake routes, this produces:

... buckets GET /buckets(.:format) {:controller=>"v1/buckets", :action=>"index"}
POST /buckets(.:format) {:controller=&...
2006 Oct 13
3
HTB has 2 bucket?
Does HTB use two buckets to manage two rates? The first bucket keeps tokens for sending at the guaranteed rate; the second bucket keeps ctokens for sending at the ceil rate. Is that true, or am I misunderstanding token bucket theory?
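For what it's worth, that is the basic idea: HTB keeps two token counts per class, one charged against the guaranteed rate and one against the ceil. A rough user-space sketch of just the classification step (hypothetical names and units; the real kernel code is far more involved):

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Two-bucket state for one HTB-like class; all sizes in bytes. */
struct htb_class {
    double tokens;    /* credit against the guaranteed rate */
    double ctokens;   /* credit against the ceil rate */
    double rate;      /* guaranteed bytes per second */
    double ceil;      /* maximum bytes per second */
    double burst;     /* depth of the rate bucket */
    double cburst;    /* depth of the ceil bucket */
};

/* Refill both buckets for `elapsed` seconds, then classify a packet. */
static int htb_classify(struct htb_class *c, double elapsed, double pkt_len)
{
    c->tokens  = MIN(c->burst,  c->tokens  + c->rate * elapsed);
    c->ctokens = MIN(c->cburst, c->ctokens + c->ceil * elapsed);
    if (c->tokens >= pkt_len)
        return 1;   /* within the guaranteed rate */
    if (c->ctokens >= pkt_len)
        return 2;   /* may borrow bandwidth, up to ceil */
    return 0;       /* over ceil: must wait */
}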
2005 Jun 20
6
Bucketing data
Hi, I am sure this is a trivial question but for some reason I haven't been able to figure it out. I want to bucket data in a vector, and then iterate over the buckets. Say the data set is:

> cleandata[,4]
 [1]  26  26  26  26  26  26  26  26  26  26  26  26  61  61  61  61  61  61  61  61  61  61  61  89  89  89  89  89  89  89 180 180 180 180 362 544 544 544
[39] 544 544 544 544 544 544 544

This has the...
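Since the excerpt cuts off before any answer, here is one generic way to walk buckets of equal values in sorted data, sketched in C (in R itself one would more likely reach for split() or table()):

#include <stdio.h>

/* Report each run of equal values in a sorted array as one bucket. */
static void iterate_buckets(const int *v, int n)
{
    int i = 0;
    while (i < n) {
        int start = i, value = v[i];
        while (i < n && v[i] == value)
            i++;
        printf("bucket %d: %d element(s)\n", value, i - start);
    }
}

int main(void)
{
    int data[] = { 26, 26, 26, 61, 61, 89, 180, 180, 362, 544, 544 };
    iterate_buckets(data, (int)(sizeof(data) / sizeof(data[0])));
    return 0;
}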
2008 Oct 26
1
[PATCH 1/1] ocfs2/xattr: Proper hash collision handling in bucket division. v3
Modification from V2 to V3: use the improved code suggested by Joel; thanks to Joel for it. In ocfs2/xattr, we must make sure that xattrs which have the same hash value live in the same bucket so that the search scheme can work. But in the old implementation, when we want to extend a bucket, we just move half of the xattrs to the new bucket. This works in most cases, but if we are unlucky we will split 2 xattrs with the same hash into 2 different buckets. This causes a problem that the xattr existing...
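The fix being described boils down to choosing the split point so that a run of equal hash values never straddles the new bucket boundary. A hedged sketch of that invariant (hypothetical helper, not the actual ocfs2 code):

/* Entries are kept sorted by hash. Start from the midpoint and push the
 * split forward until it no longer separates entries sharing a hash.
 * Returns how many entries stay in the old bucket; a return value equal
 * to `count` means every entry shares one hash and no split is possible. */
static int pick_split_point(const unsigned int *hashes, int count)
{
    int split = count / 2;
    if (count < 2)
        return count;              /* nothing to split */
    while (split < count && hashes[split] == hashes[split - 1])
        split++;
    return split;
}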
2019 Mar 05
0
[PATCH nbdkit] Add new filter for rate-limiting connections.
---
 filters/delay/nbdkit-delay-filter.pod |   4 +-
 filters/rate/nbdkit-rate-filter.pod   |  84 +++++++++
 configure.ac                          |   2 +
 filters/rate/bucket.h                 |  62 +++++++
 filters/rate/bucket.c                 | 173 +++++++++++++++++++
 filters/rate/rate.c                   | 235 ++++++++++++++++++++++++++
 TODO                                  |   9 +
 filters/rate/Makefile.am              |  64 +++++++
 tests/Makefile.am...
2008 Nov 19
2
Bucketing/Grouping Probabilities
...nt computations more tractable, I wish to cluster entrant win probabilities like so: [(1, 0.049), (2, 0.121), (3, 0.049), (4, 0.024), (5, 0.024), (6, 0.049), (7, 0.072), (8, 0.049), (9, 0.185), (10, 0.024), (11, 0.185), (12, 0.049), (13, 0.072), (14, 0.049)] viz. in this case I have 'bucketed' the entrant numbers against 5 representative probabilities and in subsequent computations will deem (for example) the win probability of 3 to be 0.049, so another way of visualising the result is: [((4, 5, 10), 0.024), ((3, 6, 8, 12, 14), 0.049), ((7, 13), 0.072), ((2), 0.121), ((11), 0...
2019 Mar 05
2
[PATCH nbdkit] Add new filter for rate-limiting connections.
...v2v inputs, especially VDDK. We could apply a filter on top of the nbdkit plugin which limits the rate at which it copies data. For example, to limit the rate to 1 Mbps (megabit per second) we could now do:

nbdkit --filter=rate vddk [etc] rate=1M

The filter is implemented using a simple Token Bucket (https://en.wikipedia.org/wiki/Token_bucket) while at the same time supporting the fully parallel thread model. Rich.
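For reference, the heart of a token bucket limiter is only a few lines. A minimal sketch (hypothetical names; nbdkit's actual filters/rate/bucket.c is organized in its own way):

/* A token bucket: `rate` tokens (bytes) accrue per second, capped at
 * `capacity`. A request for n bytes proceeds only if n tokens are held. */
struct bucket {
    double capacity;   /* maximum tokens the bucket can hold */
    double tokens;     /* tokens currently held */
    double rate;       /* refill rate, tokens per second */
    double last;       /* time of last refill, in seconds */
};

/* Returns nonzero if n bytes may pass now; otherwise the caller delays. */
static int bucket_take(struct bucket *b, double now, double n)
{
    double t = b->tokens + (now - b->last) * b->rate;
    b->tokens = t > b->capacity ? b->capacity : t;
    b->last = now;
    if (b->tokens < n)
        return 0;
    b->tokens -= n;
    return 1;
}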
2009 Mar 09
4
[PATCH] ocfs2: Use xs->bucket to set xattr value outside.
Tristan, could you please run your xattr test against it? xs->base used to be allocated with a 4K size, and all the contents of the bucket were copied into it, so in ocfs2_xattr_bucket_set_value_outside we were safe to use xs->base + offset. Now we use ocfs2_xattr_bucket to abstract the xattr bucket, and xs->base is initialized to the start of bu_bhs[0]. So xs->base + offset will overflow when the value root is stored outside...
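The underlying pattern is generic: once a 4K bucket is backed by several separate block buffers instead of one flat allocation, base + offset is only valid inside the first block. A small illustration (hypothetical helper, not the ocfs2 code):

/* With a flat 4K buffer, base + offset works for any offset < 4096.
 * With per-block buffers, the offset must first be split into a block
 * index and an offset within that block: */
static char *bucket_ptr(char **block_data, int blocksize, int offset)
{
    int blk = offset / blocksize;   /* which block buffer */
    int off = offset % blocksize;   /* position inside that block */
    return block_data[blk] + off;
}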
2009 Apr 22
1
[PATCH 1/1] OCFS2: speed up dlm_lock_resource hash_table lookups
.../dlmdebug.c
--- ./svnocfs2-1.2/fs/ocfs2/dlm/dlmdebug.c.orig	2009-04-22 11:00:37.000000000 +0800
+++ ./svnocfs2-1.2/fs/ocfs2/dlm/dlmdebug.c	2009-04-22 11:08:27.000000000 +0800
@@ -547,7 +547,7 @@ void dlm_dump_lock_resources(struct dlm_
 	spin_lock(&dlm->spinlock);
 	for (i=0; i<DLM_HASH_BUCKETS; i++) {
-		bucket = &(dlm->lockres_hash[i]);
+		bucket = dlm_lockres_hash(dlm, i);
 		hlist_for_each_entry(res, iter, bucket, hash_node)
 			dlm_print_one_lock_resource(res);
 	}
diff -up ./svnocfs2-1.2/fs/ocfs2/dlm/dlmrecovery.c.orig ./svnocfs2-1.2/fs/ocfs2/dlm/dlmrecovery.c
--- ./svnocfs2...
2007 May 08
3
Token Bucket Filter and Dropping
I am trying to create my own Token Bucket Filter. However, I have a problem with packet dropping. Scenario: I have two streams of 20KB/s each and one bucket with a rate of 20KB/s, and I put both streams into this bucket. When the buffer is full, packets need to be dropped. The problem is that only every other packet needs to be dropped in this scenari...
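A shaping filter normally pairs the token logic with a bounded backlog, and drops only when that backlog overflows. A sketch of the enqueue decision, reusing struct bucket and bucket_take from the token-bucket sketch above (the queue limit is an arbitrary assumption):

#define QUEUE_LIMIT 64   /* arbitrary cap on packets allowed to wait */

/* Returns 1 = send now, 2 = queued to wait for tokens, 0 = dropped. */
static int tbf_enqueue(struct bucket *b, int *backlog, double now, double len)
{
    if (bucket_take(b, now, len))
        return 1;                 /* enough tokens: send immediately */
    if (*backlog < QUEUE_LIMIT) {
        (*backlog)++;             /* hold the packet until tokens accrue */
        return 2;
    }
    return 0;                     /* buffer full: drop */
}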
2011 Jul 05
0
Problem in accessing bucket of my AWS S3 account
...naws.com", :port=>80, :access_key_id=>"my access key", :secret_access_key=>"my secret key"}, @access_key_id="my access key", @secret_access_key="my secret key", @http=#<Net::HTTP s3-ap-southeast-1.amazonaws.com:80 open=false>> I have a bucket which is based on "Singapore Region" and for that endpoint i.e. server is: s3-ap-southeast-1.amazonaws.com So when I try to access it using this command - AWS::S3::Service.buckets it fetches all buckets in my account correctly - => [#<AWS::S3::Bucket:0x8d291fc @attributes={"...
2009 May 01
0
[PATCH 1/3] OCFS2: speed up dlm_lock_resource hash_table lookups
...vec;
+out_free:
+	dlm_free_pagevec(vec, i);
+	return NULL;
+
+}
+
 /*
  *
  * spinlock lock ordering: if multiple locks are needed, obey this ordering:
@@ -127,7 +154,7 @@ void __dlm_insert_lockres(struct dlm_ctx
 	q = &res->lockname;
 	q->hash = full_name_hash(q->name, q->len);
-	bucket = &(dlm->lockres_hash[q->hash % DLM_HASH_BUCKETS]);
+	bucket = dlm_lockres_hash(dlm, q->hash);
 	/* get a reference for our hashtable */
 	dlm_lockres_get(res);
@@ -151,7 +178,7 @@ struct dlm_lock_resource * __dlm_lookup_
 	hash = full_name_hash(name, len);
-	bucket = &(dlm->...
2009 Jan 08
1
[PATCH] ocfs2: Access the xattr bucket only before modifying it.
Hi Mark, This is the fix for 2.6.29 (introduced by unifying the bucket journal access). Since I found no fixes-for-2.6.29 in your ocfs2.git, it is based on your original upstream-linus. In ocfs2_xattr_value_truncate, we may call b-tree code which will extend the journal transaction. It has a potential problem that it may let the already-accessed-but-not-dirtied buff...
2009 Mar 12
0
[LLVMdev] a different hash for APInts
...ntries (1<<16).
> There is no executable code in this testcase.
>
> The cause seems to be the APInt::getHashValue() function (near line 626 of .../lib/Support/APInt.cpp). Some investigation using the debugger suggests that every value was hashing into about three buckets, and the DenseMap code was looping excessively over the extremely long chains in those three buckets.

From what I can see, the old hash function can be good if the number of buckets is not a power of two. The problem is that DenseMap uses 64 buckets initially and grows by doubling th...
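The power-of-two point matters because such tables index with a bit mask rather than a modulo, so only the low bits of the hash participate. A tiny illustration of the collapse (illustrative values only, not LLVM code):

#include <stdio.h>

int main(void)
{
    /* Hashes that differ only in their high bits... */
    unsigned hashes[] = { 0x10003, 0x20003, 0x30003, 0x40003 };
    unsigned nbuckets = 64;   /* a power of two, as in DenseMap */

    /* ...all land in the same bucket under hash & (nbuckets - 1),
     * whereas hash % 61 (not a power of two) would spread them out. */
    for (int i = 0; i < 4; i++)
        printf("%#x -> bucket %u\n", hashes[i], hashes[i] & (nbuckets - 1));
    return 0;
}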
2012 Nov 17
4
survfit & number of variables != number of variable names
This works ok:

> cox = coxph(surv ~ bucket*(today + accor + both) + activity, data = data)
> fit = survfit(cox, newdata=data[1:100,])

but using strata leads to problems:

> cox.s = coxph(surv ~ bucket*(today + accor + both) + strata(activity), data = data)
> fit.s = survfit(cox.s, newdata=data[1:100,])

Error in model.frame....
2015 Apr 01
2
[PATCH 8/9] qspinlock: Generic paravirt support
On Wed, Apr 01, 2015 at 03:58:58PM -0400, Waiman Long wrote:
> On 04/01/2015 02:48 PM, Peter Zijlstra wrote:
> I am sorry that I don't quite get what you mean here. My point is that in the hashing step, a cpu will need to scan for an empty bucket to put the lock in. In the interim, a previously used bucket before the empty one may get freed. In the lookup step for that lock, the scanning will stop because of an empty bucket in front of the target one.

Right, that's broken. So we need to do something else to limit the lo...
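The failure described is the classic open-addressing deletion problem: a linear probe stops at the first empty slot, so freeing a slot mid-chain hides entries behind it. One textbook remedy, sketched below, is a tombstone state that keeps probes walking; this is a general technique, not necessarily what the qspinlock series ultimately adopted.

#include <stddef.h>

enum slot_state { EMPTY, USED, TOMBSTONE };

struct slot {
    enum slot_state state;
    void *key;     /* e.g. the lock address */
    void *value;
};

/* Linear probe over a power-of-two-sized table: TOMBSTONE keeps the
 * search alive, while a genuine EMPTY slot ends it. */
static struct slot *ht_lookup(struct slot *tab, size_t n, void *key)
{
    size_t i = ((size_t)key >> 4) & (n - 1);   /* toy hash */
    for (size_t probes = 0; probes < n; probes++, i = (i + 1) & (n - 1)) {
        if (tab[i].state == EMPTY)
            return NULL;          /* true gap: key is not present */
        if (tab[i].state == USED && tab[i].key == key)
            return &tab[i];
        /* tombstone or other key: keep scanning */
    }
    return NULL;
}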
2006 Feb 09
1
[Bug 445] New: ipt_account reports: sleeping function called from invalid context at mm/slab.c:2063
...don't see that the lock does anything to protect the malloc/free. My changed versions of these functions are:

static void *account_seq_start(struct seq_file *s, loff_t *pos)
{
    struct proc_dir_entry *pde = s->private;
    struct t_ipt_account_table *table = pde->data;
    unsigned int *bucket;

    bucket = kmalloc(sizeof(unsigned int), GFP_KERNEL);
    if (!bucket)
        return ERR_PTR(-ENOMEM);

    spin_lock_bh(&table->ip_list_lock);
    if (*pos >= table->count)
        return NULL;

    *bucket = *pos;
    return bucket;
}

static void account_seq_stop(struct seq_file *s, vo...
2012 Oct 01
5
s3 as mysql directory
...my setup; you can also use postgres or sqlite if you choose). Since I've started doing this I notice that the mysql data directory has swelled to over 700GB! That's quite a lot, and it's eating up valuable disk space. So I had an idea: what about using the FUSE-based s3fs to mount an S3 bucket on the local filesystem and using that as your mysql data dir? In other words, mount your s3 bucket on /var/lib/mysql. I used this article to set up the s3fs file system: http://benjisimon.blogspot.com/2011/01/setting-up-s3-backup-solution-on-centos.html And everything went as planned. So my question...