Displaying 11 results from an estimated 14 matches for "suckage".
2014 Dec 02
0
puzzle, need magic incantation (man pages)
...ll despite the command
> line echo showing that they were when I did the sudo make install, but I
> am forced to go into the docs directory and run man ./name-of-man-page to
> read it, and mode is only mentioned briefly in the example line, which
> shows mode=none. That is a 10^-33 torr suckage.
I think the other link I sent to the sample nut.conf has all of the possible values - not sure what happened to those comments in your original file.
> This I think can be alleviated by setting up the env variable MANPATH,
> which is apparently not configured. Export that and it w...
2014 Dec 01
4
puzzle, need magic incantation
...manpages were installed despite the command
line echo showing that they were when I did the sudo make install, but I
am forced to go into the docs directory and run man ./name-of-man-page to
read it, and mode is only mentioned briefly in the example line, which
shows mode=none. That is a 10^-33 torr suckage.
This I think can be alleviated by setting up the env variable MANPATH,
which is apparently not configured. Export that and it works. So I put it
in my .bashrc.
But since every other manpage on the system works without that env setting
of $MANPATH, showing "/usr/local/ups/share/man:/usr/sh...
2014 Mar 14
4
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...mise to provide some lock unfairness without
> sacrificing the good cacheline behavior of the queue spinlock.
But but but,.. any kind of queueing gets you into a world of hurt with
virt.
The simple test-and-set lock (as per the above) still sucks due to lock
holder preemption, but at least the suckage doesn't queue. Because with
queueing you not only have to worry about the lock holder getting
preempted, but also the waiter(s).
Take the situation of 3 (v)CPUs where cpu0 holds the lock but is
preempted. cpu1 queues, cpu2 queues. Then cpu1 gets preempted, after
which cpu0 gets back online....
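A minimal sketch of such a test-and-set lock, in C11 atomics, may make the contrast concrete. This is an illustration only, not the kernel code under discussion: every waiter spins independently on a single flag, so a preempted waiter never blocks the others, whereas a queue imposes an order in which a preempted waiter stalls everyone behind it.

#include <stdatomic.h>

/* Illustrative test-and-set spinlock; an assumption-level sketch, not
 * the patch's qspinlock.  All waiters spin on one shared flag with no
 * ordering between them: if a waiting vCPU is preempted, the remaining
 * vCPUs can still take the lock (the suckage doesn't queue). */
typedef struct {
        atomic_flag locked;     /* initialize with ATOMIC_FLAG_INIT */
} tas_lock;

static void tas_lock_acquire(tas_lock *l)
{
        while (atomic_flag_test_and_set_explicit(&l->locked,
                                                 memory_order_acquire))
                ;               /* spin: any running vCPU may win next */
}

static void tas_lock_release(tas_lock *l)
{
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
}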
2014 Mar 17
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...sacrificing the good cacheline behavior of the queue spinlock.
> >But but but,.. any kind of queueing gets you into a world of hurt with
> >virt.
> >
> >The simple test-and-set lock (as per the above) still sucks due to lock
> >holder preemption, but at least the suckage doesn't queue. Because with
> >queueing you not only have to worry about the lock holder getting
> >preempted, but also the waiter(s).
> >
> >Take the situation of 3 (v)CPUs where cpu0 holds the lock but is
> >preempted. cpu1 queues, cpu2 queues. Then cpu1 gets pr...
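For the queueing side of the argument, here is a minimal MCS-style queued lock in the same C11-atomics style. It is an illustration of the hazard, not the qspinlock implementation: each waiter spins on its own node until its predecessor hands the lock over, so a preempted waiter stalls every vCPU queued behind it, exactly the cpu0/cpu1/cpu2 situation described above.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative MCS-style queued lock; an assumption-level sketch. */
struct mcs_node {
        _Atomic(struct mcs_node *) next;
        atomic_bool wait;                 /* true while we must spin */
};

struct mcs_lock {
        _Atomic(struct mcs_node *) tail;  /* NULL when queue is empty */
};

static void mcs_acquire(struct mcs_lock *l, struct mcs_node *me)
{
        struct mcs_node *prev;

        atomic_store(&me->next, NULL);
        atomic_store(&me->wait, true);
        prev = atomic_exchange(&l->tail, me);   /* join the queue */
        if (!prev)
                return;                         /* lock was free */
        atomic_store(&prev->next, me);          /* link behind predecessor */
        while (atomic_load(&me->wait))
                ;       /* spin; a preempted predecessor strands us here,
                         * and everyone queued behind us as well */
}

static void mcs_release(struct mcs_lock *l, struct mcs_node *me)
{
        struct mcs_node *next = atomic_load(&me->next);

        if (!next) {
                struct mcs_node *expected = me;
                if (atomic_compare_exchange_strong(&l->tail, &expected, NULL))
                        return;                 /* nobody queued behind us */
                while (!(next = atomic_load(&me->next)))
                        ;                       /* successor still linking */
        }
        atomic_store(&next->wait, false);       /* hand the lock over */
}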
2007 Feb 22
7
We can't 100% remove our unit tests from the database, can we?
I hope this isn't too rambly. This is sort of a brain dump of a
subject I've been thinking about for months as I've used RSpec.
Let's say we've got a simple query method that will find all the
users in the DB older than 18. Our model could look like
class User < ActiveRecord::Base
  def self.find_older_than(age)
    # completion of the truncated snippet; the condition is inferred
    find :all, :conditions => ['age > ?', age]
  end
end
2014 Mar 17
0
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...nfairness without
>> sacrificing the good cacheline behavior of the queue spinlock.
> But but but,.. any kind of queueing gets you into a world of hurt with
> virt.
>
> The simple test-and-set lock (as per the above) still sucks due to lock
> holder preemption, but at least the suckage doesn't queue. Because with
> queueing you not only have to worry about the lock holder getting
> preempted, but also the waiter(s).
>
> Take the situation of 3 (v)CPUs where cpu0 holds the lock but is
> preempted. cpu1 queues, cpu2 queues. Then cpu1 gets preempted, after
> w...
2014 Mar 19
0
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...he good cacheline behavior of the queue spinlock.
>>> But but but,.. any kind of queueing gets you into a world of hurt with
>>> virt.
>>>
>>> The simple test-and-set lock (as per the above) still sucks due to lock
>>> holder preemption, but at least the suckage doesn't queue. Because with
>>> queueing you not only have to worry about the lock holder getting
>>> preempted, but also the waiter(s).
>>>
>>> Take the situation of 3 (v)CPUs where cpu0 holds the lock but is
>>> preempted. cpu1 queues, cpu2 queues...
2014 Mar 13
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On Wed, Mar 12, 2014 at 02:54:52PM -0400, Waiman Long wrote:
> +static inline void arch_spin_lock(struct qspinlock *lock)
> +{
> +	if (static_key_false(&paravirt_unfairlocks_enabled))
> +		queue_spin_lock_unfair(lock);
> +	else
> +		queue_spin_lock(lock);
> +}
So I would have expected something like:
if (static_key_false(&paravirt_spinlock)) {
	while
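The snippet is cut off at the while; the continuation below is only a guess at the intended shape. The trylock spin, and the queue_spin_trylock() and cpu_relax() calls, are assumptions drawn from the same kernel series, not the text of the actual mail: under the paravirt key, spin attempting trylock rather than joining the queue.

/* Hypothetical reconstruction of the truncated suggestion above;
 * the exact body is an assumption, not a quote. */
static inline void arch_spin_lock(struct qspinlock *lock)
{
	if (static_key_false(&paravirt_spinlock)) {
		while (!queue_spin_trylock(lock))
			cpu_relax();
		return;
	}
	queue_spin_lock(lock);
}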
2008 Oct 11
1
[PATCH] fstype: Fix ext4/ext4dev probing
Enhance fstype so it properly takes into account whether or not the
ext4 and/or ext4dev filesystems are present, and properly handles the
test_fs flag. The old code also has some really buggy checks --- for
example, where it compared the set of supported ro_compat features
against the incompat feature bitmask:
(sb->s_feature_incompat & __cpu_to_le32(EXT3_FEATURE_RO_COMPAT_SUPP)
I
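Spelled out in C, the mismatch being described looks roughly like this. Field and macro names follow the ext2/ext3 headers; the "inferred intent" check is reconstructed from the prose, not quoted from the patch.

#include <linux/ext3_fs.h>      /* struct ext3_super_block, feature masks */
#include <asm/byteorder.h>      /* __cpu_to_le32 */

static int ext3_incompat_ok(const struct ext3_super_block *sb)
{
        /* Buggy form quoted above: incompat bits tested against the
         * RO_COMPAT support mask, i.e. the wrong feature namespace:
         *
         *   if (sb->s_feature_incompat &
         *       __cpu_to_le32(EXT3_FEATURE_RO_COMPAT_SUPP)) ...
         */

        /* Inferred intent: compare like with like, rejecting any
         * incompat feature bit that we do not support. */
        return !(sb->s_feature_incompat &
                 __cpu_to_le32(~EXT3_FEATURE_INCOMPAT_SUPP));
}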
2006 Sep 03
18
Recommendation: Sessions and PStore
Morning Folks,
As most of you know there were a few people who had the following three
bugs:
* CLOSE_WAIT: Lots of sockets in CLOSE_WAIT state.
* 99% CPU: Mongrel's getting "stuck" pegged at 99% CPU.
* LEAK: Memory leak.
I've successfully fixed these bugs or attributed them to one main cause:
pstore.
First, the memory leak was because of a bug in how the GC in Ruby
2007 Feb 22
33
Scaling Puppet 0.22.1 to hundreds of nodes.
Hi,
My environment is composed of ~250 workstations hitting a single
puppetmaster server, which has been working fairly well up until now.
The most recent change has been a migration of a lot of remote file copy
objects which were previously handled with cfengine.
Client-side puppetd calls to the puppetmaster.getconfig method are
taking unreasonably long, on the order of 2-3 minutes. It