Displaying 20 results from an estimated 4000 matches similar to: "rsync patch"
2007 Mar 21
2
store_class_name for Comatose::Page model
Hi,
I have three models: Comatose::Page, Article and Product.
In all of them, store_class_name is set to true.
Now, when I do:
results = Comatose::Page.multi_search("*", [Article,Product], options)
I get:
wrong constant name Comatose::Page
#{RAILS_ROOT}/vendor/plugins/acts_as_ferret/lib/class_methods.rb:438:in
`const_get'
2006 Aug 03
1
routeset mapper problem
hello,
I installed a Rails app on Dreamhost, which I'm building based on the
Comatose plugin, and it went smoothly for the first version.
Now I have uploaded a second version where I broke the Comatose code down
into a regular Rails app. It works fine locally, but I can't get
routing to work the same as before on the server. I believe I
double-checked all gotchas mentioned
2009 Mar 06
1
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount (revised)
During recovery, a node recovers orphans in its own slot and in the slots of the dead node(s). But
if the dead nodes were holding orphans in offline slots, they will be left
unrecovered.
If the dead node is the last one to die and is holding orphans in other slots
and is the first one to mount, then it only recovers its own slot, which
leaves orphans in offline slots.
This patch queues complete_recovery
2016 Feb 09
3
Utility to zero unused blocks on disk
On Mon, 2016-02-08 at 14:22 -0800, John R Pierce wrote:
> the only truly safe way to destroy data on magnetic media is to grind
> the media up into filings or melt it down in a furnace.
I unscrew the casing, extract the disk platter(s), slide a very strong
magnet over both sides of the platter surface then bend the platter in
half.
How secure is that?
I can't afford a machine that
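The software counterpart to the physical destruction discussed above, and the one the thread subject actually asks about, is simply to fill the filesystem's free space with zeros and then delete the fill file. A minimal sketch, assuming the target filesystem is mounted at /home and the fill filename is arbitrary:

    # Write zeros until the filesystem is full (dd exits non-zero at that point),
    # flush them to disk, then remove the fill file.
    dd if=/dev/zero of=/home/zero.fill bs=1M || true
    sync
    rm -f /home/zero.fill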
2002 Aug 20
5
unmountable ext3 root recovery
After a (hardware) crash yesterday, I was unable to boot up due to
unrecoverable ide errors (according to the printk()s) when accessing
the root filesystem's journal for recovery.
Unable to recover, I tried deleting the has_journal option, but that was
disallowed given that the needs_recovery flag was set. I saw no way
to unset that flag.
Unable to access the backups (they were on a fw
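For a situation like the one described, a last-resort workaround that is often suggested is to clear the needs_recovery flag with debugfs and then run a full e2fsck. This is only a hedged sketch: the device name is a placeholder, the debugfs request syntax can differ between e2fsprogs versions, and clearing the flag discards whatever was left in the unreplayed journal.

    # Clear the recovery flag so the journal option can be changed at all.
    debugfs -w -R "feature ^needs_recovery" /dev/hda2
    # Optionally drop the journal, then force a full consistency check.
    tune2fs -O ^has_journal /dev/hda2
    e2fsck -fy /dev/hda2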
2008 Mar 12
0
CMS act_as_tree problem and template_root
Hi,
I want to use the Comatose CMS plugin to create an application.
I have installed everything it needs.
When I run the application I get this error:
"template_root".
Please help me clear up this problem: how can I create an application using
Comatose? Please send a sample application if one is available.
Thank you,
srinivas rao.pala
--
Posted via http://www.ruby-forum.com/.
2009 Mar 04
2
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount
During recovery, a node recovers orphans in its own slot and in the slots of the dead node(s). But
if the dead nodes were holding orphans in offline slots, they will be left
unrecovered.
If the dead node is the last one to die and is holding orphans in other slots
and is the first one to mount, then it only recovers its own slot, which
leaves orphans in offline slots.
This patch queues complete_recovery
2016 Feb 09
1
Utility to zero unused blocks on disk
> -----Original Message-----
> From: centos-bounces at centos.org [mailto:centos-bounces at centos.org] On
> Behalf Of EGO-II.1
> Sent: 9 February 2016 09:00
> To: CentOS mailing list
> Subject: Re: [CentOS] Utility to zero unused blocks on disk
>
>
>
> >> the only truly safe way to destroy data on magnetic media is to grind
> >> the media up into
2009 Mar 06
0
[PATCH 1/1] ocfs2: recover orphans in offline slots during recovery and mount
During recovery, a node recovers orphans in its own slot and in the slots of the dead node(s). But
if the dead nodes were holding orphans in offline slots, they will be left
unrecovered.
If the dead node is the last one to die and is holding orphans in other slots
and is the first one to mount, then it only recovers its own slot, which
leaves orphans in offline slots.
This patch queues complete_recovery
2016 Feb 09
0
Utility to zero unused blocks on disk
On 02/08/2016 07:38 PM, Always Learning wrote:
> On Mon, 2016-02-08 at 14:22 -0800, John R Pierce wrote:
>
>> the only truly safe way to destroy data on magnetic media is to grind
>> the media up into filings or melt it down in a furnace.
> I unscrew the casing, extract the disk platter(s), slide a very strong
> magnet over both sides of the platter surface then bend the
2007 Jul 05
1
First install No Sound
Hi Gang.
I installed CentOS last night and the sound card detection failed to detect
the sound card. I have an Asrock K8NF6G-VSTA motherboard with an nVidia
NF6100-405 chipset. I have tried various live distros, and I ran Fedora 7 on
it for just over 2 weeks, and none of them detected the sound card. I'm a
newbie to Red Hat based distributions but have been using Mandriva for
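A few generic ALSA checks help narrow down whether a card is missing, unsupported, or just muted; these commands are not specific to the Asrock board mentioned above:

    # Is an audio controller visible on the PCI bus at all?
    lspci | grep -i audio
    # Does ALSA register any playback devices?
    aplay -l
    # If a device is listed but silent, look for muted channels.
    alsamixer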
2007 Jul 06
0
Sent from CentOS box First install no Sound
Hi Gang.
I installed CentOS last night and the sound card detection failed to detect
the sound card. I have an Asrock K8NF6G-VSTA motherboard with an nVidia
NF6100-405 chipset. I have tried various live distros, and I ran Fedora 7 on
it for just over 2 weeks, and none of them detected the sound card. I'm a
newbie to Red Hat based distributions but have been using Mandriva for
2016 Feb 08
3
KVM
> I'm guessing you're using standard 7,200rpm platter drives? You'll need
> to share more information about your environment in order for us to
> provide useful feedback. Usually though, the answer is 'caching' and/or
> 'faster disks'.
Yes, 7.2k rpm disks in a 2 TB software mirror. In fact, I chose them partly
because I preferred the slightly larger capacity.
Unfortunately very
2016 Feb 08
0
KVM
Slow disks will show up as higher I/O wait times.
If you're seeing 99% CPU usage then you're likely looking at some other problem.
If you run top what are you seeing on the %Cpu(s) line?
On 02/08/2016 02:20 PM, Gokan Atmaca wrote:
>> I'm guessing you're using standard 7,200rpm platter drives? You'll need
>> to share more information about your environment in order for us to
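The advice above comes down to reading the wa (I/O wait) field of top's %Cpu(s) line; a quick way to check it, plus a per-disk view, is sketched below (the numbers in the comment are only an example):

    # In batch mode top prints the summary once; "wa" is I/O wait, e.g.
    #   %Cpu(s):  3.1 us,  1.2 sy,  0.0 ni, 80.2 id, 15.0 wa,  0.2 hi,  0.3 si,  0.0 st
    top -b -n 1 | grep 'Cpu(s)'
    # Per-device utilisation from sysstat; high %util on the mirror members with
    # modest throughput points at the disks rather than the CPU.
    iostat -x 1 5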
2016 Feb 08
0
KVM
You need to provide more information.
20% is which of those numbers?
There are something like six numbers on that line.
On 02/08/2016 02:56 PM, Gokan Atmaca wrote:
>> If you run top what are you seeing on the %Cpu(s) line?
> %20
>
>
> On Mon, Feb 8, 2016 at 9:30 PM, Alvin Starr <alvin at netvel.net> wrote:
>> Slow disks will show up as higher I/O wait times.
>> If you're
2016 Feb 08
3
KVM
> If you run top what are you seeing on the %Cpu(s) line?
%20
On Mon, Feb 8, 2016 at 9:30 PM, Alvin Starr <alvin at netvel.net> wrote:
> Slow disks will show up as higher I/O wait times.
> If you're seeing 99% CPU usage then you're likely looking at some other problem.
>
> If you run top what are you seeing on the %Cpu(s) line?
>
>
> On 02/08/2016 02:20 PM, Gokan Atmaca
2016 Dec 18
3
PulseAudio is streaming with an excessive latency.
On Sat, 17 Dec 2016, John R Pierce wrote:
> On 12/17/2016 9:18 AM, Michael Hennebry wrote:
>> vlc.x86_64 2.0.10-1.el6 @rpmfusion-free-updates
>
> rpmfusion is comatose, that VLC rpm hasn't been updated in a couple years.
Does that imply that I should abandon VLC or just rpmfusion?
This site:
2008 Mar 13
12
7-disk raidz achieves 430 MB/s reads and 220 MB/s writes on a $1320 box
I figured the following ZFS 'success story' may interest some readers here.
I was interested to see how much sequential read/write performance it would be
possible to obtain from ZFS running on commodity hardware with modern features
such as PCI-E busses, SATA disks, well-designed SATA controllers (AHCI,
SiI3132/SiI3124). So I made this experiment of building a fileserver by
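For reference, a single 7-disk raidz vdev like the one benchmarked is created in one command; the pool name and device names below are placeholders:

    # One single-parity raidz vdev across seven disks (Solaris-style device names).
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0
    # Verify the layout and health of the new pool.
    zpool status tank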
2016 Feb 08
1
KVM
>>> If you run top what are you seeing on the %Cpu(s) line?
http://i.hizliresim.com/NrmV9Y.png
On Mon, Feb 8, 2016 at 10:53 PM, Alvin Starr <alvin at netvel.net> wrote:
> You need to provide more information.
> 20% is what number.
> There are something like 6 numbers on that line.
>
>
> On 02/08/2016 02:56 PM, Gokan Atmaca wrote:
>>>
>>> If you
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
On Jan 6, 2015, at 4:28 PM, Fran Garcia <franchu.garcia at gmail.com> wrote:
>
> On Tue, Jan 6, 2015 at 6:12 PM, Les Mikesell <> wrote:
>> I've had a few systems with a lot of RAM and very busy filesystems
>> come up with filesystem errors that took a manual 'fsck -y' after what
>> should have been a clean reboot. This is particularly annoying on
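The thread subject asks about a timeout on flushing; the kernel-side knobs that control how long dirty data may sit before writeback are sysctls, and a flush can be forced before rebooting. A hedged sketch (the tuned value is only an example, not a recommendation):

    # Defaults are typically 3000 (30 s before dirty data expires) and 500 (5 s
    # between writeback wakeups), both in centiseconds.
    sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs
    # Push everything out explicitly instead of relying on the timers.
    sync
    # Example only: expire dirty data after 10 s instead of 30 s.
    sysctl -w vm.dirty_expire_centisecs=1000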