Displaying 20 results from an estimated 20000 matches similar to: "Sessions And Active Record"
2017 Dec 12
1
active/active failover
Hi Alex,
Thank you for the quick reply!
Yes, I'm aware that using "plain" hardware with replication is more what GlusterFS is for. I cannot talk about prices here in detail, but for me, it evens more or less out. Moreover, I have SAN hardware that I'd rather re-use (because of Lustre) than buy new hardware. I'll test more to understand what precisely "replace-brick"
2017 Dec 11
2
active/active failover
Dear all,
I'm rather new to glusterfs but have some experience running larger Lustre and BeeGFS installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in glusterfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8)
So my question is: can I really use glusterfs to do failover in the way described
2018 Mar 09
1
wrong size displayed with df after upgrade to 3.12.6
Hi Stefan,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this. Please take a look at [1] and try out the workaround
provided in the comments.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
On 9 March 2018 at 13:37, Stefan Solbrig <stefan.solbrig at ur.de> wrote:
> Dear all,
>
> I have a problem with df after
2017 Dec 11
0
active/active failover
Hi Stefan,
I think what you propose will work, though you should test it thoroughly.
I think more generally, "the GlusterFS way" would be to use 2-way
replication instead of a distributed volume; then you can lose one of your
servers without outage. And re-synchronize when it comes back up.
Chances are, if you weren't using the SAN volumes, you could have purchased
two servers
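The 2-way replication suggested here can be sketched as follows (the volume name, hosts, and brick paths are invented for illustration; these commands only work on a configured gluster peer, so this is a sketch rather than something runnable here):

```shell
# Create a 2-way replicated volume across two servers so that one
# server can fail without an outage (hypothetical hosts and paths).
# Note: plain replica-2 volumes are prone to split-brain; an arbiter
# brick is often recommended alongside two data bricks.
gluster volume create myvol replica 2 \
    server1:/data/brick1/myvol server2:/data/brick1/myvol
gluster volume start myvol
```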
2018 Mar 09
2
wrong size displayed with df after upgrade to 3.12.6
Dear all,
I have a problem with df after I upgraded from 3.12.4 to 3.12.6
All four bricks are shown as online, and all bricks are being used.
gluster v status shows the correct sizes for all devices.
However, df does not show the correct glusterfs volume size.
It seems to me that it "forgets" one brick, although all bricks are used when I'm writing files.
best wishes,
Stefan
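One way to cross-check what df reports is to sum the per-brick capacities yourself. The sketch below does this over a captured excerpt of `gluster volume status <vol> detail` output; the sample text and the four 10 TB bricks are invented for illustration:

```shell
# Invented excerpt of `gluster volume status myvol detail` output.
status_sample='Brick                : Brick server1:/data/brick1
Total Disk Space     : 10.0TB
Brick                : Brick server2:/data/brick2
Total Disk Space     : 10.0TB
Brick                : Brick server3:/data/brick3
Total Disk Space     : 10.0TB
Brick                : Brick server4:/data/brick4
Total Disk Space     : 10.0TB'

# Sum the per-brick capacities; for a pure distributed volume this
# total should match the size df reports for the mount. If df shows
# less, a brick is likely missing from the computed total.
expected=$(printf '%s\n' "$status_sample" \
    | awk '/Total Disk Space/ {sum += $NF + 0} END {print sum "TB"}')
echo "$expected"   # 40TB
```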
2007 Dec 05
5
Active Record, Migration, and Translation
Hi,
I think the columns and tables in a migration should be able to have an
optional "display_name" set manually. Something like:
create_table :people, {:display_name => "Personne"} do |t|
  t.column :first_name, :string, :display_name => "Prénom"
end
Let me explain my point of view:
Rails is a framework made to write programs in English. You see it when
you
2018 Jan 21
1
mkdir -p, cp -R fails
Dear all,
I have problem with glusterfs 3.12.4
mkdir -p fails with "no data available" when umask is 0022, but works when umask is 0002.
Also recursive copy (cp -R or cp -r) fails with "no data available", independently of the umask.
See below for an example to reproduce the error. I already tried to change transport from rdma to tcp. (Changing the transport works, but
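The failure mode described above can be sketched as a small reproduction script. The mount point is hypothetical; on a healthy local filesystem (as in this sketch) both cases succeed, while on the affected glusterfs 3.12.4 mount the first reportedly fails with "no data available" (ENODATA):

```shell
# Stand-in for the gluster mount point; on the affected volume this
# would be something like /mnt/gluster instead of a temp directory.
workdir=$(mktemp -d)

# Reportedly fails with ENODATA on the affected mount when umask is 0022.
(umask 0022 && mkdir -p "$workdir/a/b/c")

# Reportedly works on the affected mount when umask is 0002.
(umask 0002 && mkdir -p "$workdir/x/y/z")

ls -d "$workdir/a/b/c" "$workdir/x/y/z"
```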
2006 Jul 15
13
Active Record: Can it auto-create database tables for you?
Hi,
Just getting started with Rails and I'm trying to read ahead and find out
whether Active Record supports auto-creation of database tables for you?
Is this supported, or is the concept that you write your own database
DDL to do this?
Thanks
--
Posted via http://www.ruby-forum.com/.
2007 Nov 28
1
Replacing the database in Active Record with RPC-based interface
Hi there,
We're about to move our persistent store from a MySQL database to an
RPC-based data store.
However, we really like programming against the AR API, and would like
to keep as much of it as possible, so that we can keep stuff like
callbacks, validations, relations, etc.
How would you do it?
Does it make sense to replace the mysql-adapter with an RPC-based
adapter, translating
2018 May 30
2
RDMA inline threshold?
Forgot to mention, sometimes I have to do a force start on other volumes as
well; it's hard to determine which brick process is locked up from the logs.
Status of volume: rhev_vms_primary
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary
2024 Jun 11
1
[EXT] Replace broken host, keeping the existing bricks
Hi,
The method depends a bit on whether you use a distributed-only system (like me) or a replicated setting.
I'm using a distributed-only setting (many bricks on different servers, but no replication). All my servers boot via network, i.e., on a start, it's like a new host.
To rescue the old bricks, just set up a new server with the same OS, the same IP and the same hostname (!very
2018 May 30
0
RDMA inline threshold?
Dear Dan,
thanks for the quick reply!
I actually tried restarting all processes (and even rebooting all servers), but the error persists. I can also confirm that all brick processes are running. My volume is a distribute-only volume (not dispersed, no sharding).
I also tried mounting with use_readdirp=no, because the error seems to be connected to readdirp, but this option does not change
2018 May 29
2
RDMA inline threshold?
Dear all,
I faced a problem with a glusterfs volume (pure distributed, _not_ dispersed) over RDMA transport. One user had a directory with a large number of files (50,000 files), and just doing an "ls" in this directory yields a "Transport endpoint not connected" error. The effect is that "ls" only shows some files, but not all.
The respective log file shows this
2006 Jun 27
4
Not Active Record Model Validation
I have a problem with ruby on rails validation
total_book_toy.rhtml
================
<%= text_field 'book1', 'title1' %>
<%= text_field 'book2', 'title2' %>
I want to validate these text fields so the user can't insert the same title.
However, I'm stuck on how to do it.
Or maybe you have another way to do it.
2007 Sep 02
5
hash_cache a bogus function that never worked?
Hi,
I've been investigating various caching methods provided by Rails. I
first looked at the hash_cache module and function. In testing it, I
noticed it wasn't actually caching anything. Then, looking at the source
code, I noticed that it attempted to hold its cache in a class variable,
which won't actually be saved from one page request to the next because of the
way that Rails
2006 Mar 03
4
DB data type enforcement in Active Record
I have a question about how ActiveRecord handles data types.
When I enter text in a text_field (meaning, a field in the GUI) which
belongs to a numeric field in the database, Active Record automatically
converts it to 0, because that's what the to_f/to_i method of a string
does. Is it also possible to have Active Record enforce the types, so
that when you enter text for a numeric
2017 Sep 13
3
glusterfs expose iSCSI
Hi all
I want to configure glusterfs to expose an iSCSI target. I followed this
article
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
but when I installed tcmu-runner, it didn't work.
I set it up on CentOS7 and installed tcmu-runner by rpm. When I run targetcli,
it does not show *user:glfs* and *user:gcow*
/> ls
o- /
2005 Apr 23
7
Validation question
Hi all,
Is there a way to invoke validations at times other than save, create
and update? I know that I can do this by writing my own validation
checks using errors.add_[blah], but I'd like to leverage the existing
validation code.
What I have is two sets of fields in a record, set A and set B. Both
sets must be validated on record create. However, the trouble is that
after
2018 May 30
0
RDMA inline threshold?
Stefan,
Sounds like a brick process is not running. I have noticed some strangeness
in my lab when using RDMA; I often have to forcibly restart the brick
process, often as in every single time I do a major operation: add a new
volume, remove a volume, stop a volume, etc.
gluster volume status <vol>
Do any of the self-heal daemons show N/A? If that's the case, try forcing
a restart on
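The check-and-restart sequence described here can be sketched as follows (the volume name comes from the status output above; these commands must run on a gluster node, so this is an admin sketch rather than something runnable here):

```shell
# Look for bricks or self-heal daemons whose Online column shows N/A.
gluster volume status rhev_vms_primary

# Force-start the volume; this restarts any brick processes that are
# down without disturbing the ones already running.
gluster volume start rhev_vms_primary force
```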
2017 Dec 18
2
interval or event to evaluate free disk space?
Hi all,
with the option "cluster.min-free-disk" set, glusterfs avoids placing files on bricks that are "too full".
I'd like to understand when the free space on the bricks is calculated. It seems to me that this does not happen for every write call (naturally) but at some interval or that some other event triggers this.
i.e., if I write two files quickly (that together
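For reference, the option under discussion is set per volume like this (the volume name and threshold are illustrative, and the commands must run on a gluster node, so this is a config sketch only):

```shell
# Reserve 10% free space per brick: new files are steered away from
# bricks whose free space drops below the threshold.
gluster volume set myvol cluster.min-free-disk 10%

# Show the current value of the option.
gluster volume get myvol cluster.min-free-disk
```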