2012 Jun 23
0
puppetlabs-corosync help using multiple primitive operations
I am setting up an HA iSCSI / NFS target using this document,
http://www.linbit.com/fileadmin/tech-guides/ha-iscsi.pdf, and I am unable
to find a way to use the puppetlabs-corosync module to emulate this command:
crm(live)configure# primitive p_drbd_coraid23 ocf:linbit:drbd \
params drbd_resource=coraid23 \
op monitor interval=29 role=Master \
op monitor interval=31 role=Slave
crm(live)configure#
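If the module cannot express two monitor operations on one primitive, a
hedged workaround is to feed the same configuration to the crm shell
directly (for example from a Puppet exec resource). A minimal sketch,
assuming crm accepts the configure subcommand non-interactively:

crm configure primitive p_drbd_coraid23 ocf:linbit:drbd \
    params drbd_resource=coraid23 \
    op monitor interval=29 role=Master \
    op monitor interval=31 role=Slave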
2008 Feb 07
0
drbd82 RPMS for testing
All,
There are drbd82 RPMS for CentOS-4 and CentOS-5 (i386 and x86_64) for
testing here:
http://people.centos.org/~hughesjr/drbd/
These are designed to be able to live in the same repository as the
current STABLE versions.
These RPMS are designed as CONFLICTS and not upgrades, as there may be
some manual actions required to upgrade, especially on CentOS-4.
Here is an article on the upgrade process:
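A hedged sketch of trying the test packages: since they are packaged as
CONFLICTS rather than upgrades, the stable packages have to be removed
first (package names follow the drbd82 naming above and are an assumption):

yum remove drbd kmod-drbd          # old stable packages conflict
yum install drbd82 kmod-drbd82     # pull in the 8.2 test packages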
2014 May 29
0
[DRBD-user] [Q] What would cause fsck running on a drbd device to just stop?
drbd-0.7.19 under kernel 2.6.17-rc4 is running on a primary node
standalone. There are 8 resources in the same group. fsck.ext3 -fv is
being run simultaneously on all of them. Each of the drbd devices is
running on an LV, all of which belong to a single PV. The actual "disk"
is a hardware RAID connected via SCSI (i.e., the mpt driver).
Five of the fscks finished their tasks
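A minimal sketch of the workload described, assuming the eight devices
are /dev/drbd0 through /dev/drbd7:

for dev in /dev/drbd{0..7}; do
    fsck.ext3 -fv "$dev" &     # one fsck per DRBD device, in parallel
done
wait                           # five reportedly finish, the rest stop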
2020 Oct 27
0
Understanding 'State change failed: (-2) Need access to UpToDate data'
Hi list,
I had to relocate the third node of a classic DRBD 8.4 three-node setup
to a new host. I am having difficulty making the stacked resource
primary. I am following this guide:
https://www.linbit.com/drbd-user-guide/users-guide-drbd-8-4/#s-three-nodes
Specifically this:
> 5.18.3. Enabling stacked resources
>
> To enable a stacked resource, you first
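For context, the guide's sequence is roughly the following (r0 and r0-U
are the guide's example names, not from this post); the "Need access to
UpToDate data" error typically means the node being promoted has no
UpToDate copy of the data yet:

drbdadm up r0                    # lower-level resource first
drbdadm primary r0               # it must be primary underneath
drbdadm --stacked up r0-U        # then bring up the stacked resource
drbdadm --stacked primary r0-U   # and promote it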
2004 Jan 23
1
AW: I got it (was: Cisco 7940 with asterisk)
Hi Siggi/Jan,
>If so, there's still a load version conflict (although I've
>never seen a
>7960 or 7940 care about the version communicated through SCCP):
>
>On the phone, press "Settings", then 4 for load information.
>Watch out for the "App-Load-ID". On my 7940, this is
>"P00305000300". Yours
>is most likely a smaller number...
>
2011 May 10
3
DRBD, Xen, HVM and live migration
Hi,
I want to combine all of the above-mentioned technologies.
The Linbit pages warn not to use the drbd: VBD with HVM DomUs.
This page however:
http://publications.jbfavre.org/virtualisation/cluster-xen-corosync-pacemaker-drbd-ocfs2.en
(thank you Jean), simply puts two DRBD devices in dual primary mode and
starts Xen DomUs while pointing to the DRBD devices with phy: in the
DomU config files.
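A sketch of the article's approach, with assumed resource and device
names (vm01, /dev/drbd1); the net section of the DRBD resource carries
allow-two-primaries, and both nodes promote the device:

# in the DRBD resource config:  net { allow-two-primaries; }
drbdadm adjust vm01      # pick up the config change
drbdadm primary vm01     # run on BOTH nodes for live migration
# The DomU config then references the device directly:
#   disk = [ 'phy:/dev/drbd1,xvda,w' ]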
2001 Feb 08
1
Ext3 & InterMezzo issue
Hi Stephen,
We had some starvation/locks happening to us under very heavy load
in two cases:
- InterMezzo asked ext3 to do a journaled file write (for 1 block),
  essentially using ext3_write
- similarly for truncate.
These lockups went away when we started the transaction in InterMezzo
and reserved somewhat more space than ext3 does.
Any clues as to what this might be? Are the ext3
2001 Aug 13
0
InterMezzo patch for ac?
Hi Alan,
Would you be opposed to including an InterMezzo patch for the ac
series soon? It doesn't touch anything outside of its fs directory.
It would probably help us get a few more users and fits nicely with
ext3 being in ac now. We are also quite far along with Reiser support
for InterMezzo.
Please let me know your thoughts on this.
Thanks!
- Peter -
2009 Jul 15
1
CentOS-5.3 + DRBD-8.2 + OCFS2-1.4
I've run into a problem mounting an OCFS2 filesystem on a DRBD device. I think it's the same one discussed at http://lists.linbit.com/pipermail/drbd-user/2007-April/006681.html
When I try to mount the filesystem I get an ocfs2_hb_ctl I/O error:
[root@node-6A ~]# mount -t ocfs2 /dev/drbd2 /cshare
ocfs2_hb_ctl: I/O error on channel while starting heartbeat
mount.ocfs2: Error when
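A hedged checklist for this error (resource name r0 is an assumption):
o2cb's heartbeat writes directly to the device, so DRBD must be Primary
and UpToDate on this node first, and allow-two-primaries must be set if
both nodes are to mount:

cat /proc/drbd                       # expect Primary and UpToDate
drbdadm primary r0                   # promote if it is Secondary
mount -t ocfs2 /dev/drbd2 /cshare    # then retry the mount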
2009 Aug 29
0
upcoming DRBD updates in CentOS Extras for CentOS 4 and 5
Ralph Angenendt, Fabian Arrotin, Akemi Yagi and I have been working on
new DRBD packages for CentOS Extras.
We have some packages that have been tested in the testing repository.
LinBit has discontinued their 8.2 tree, so drbd83 will replace drbd82 in
both CentOS 4 and CentOS 5 extras.
DRBD 0.7.25 will remain in CentOS 4; however, it gets no upstream support
and I would recommend you upgrade to
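A hedged sketch of the replacement path (package names follow the
announcement's drbd82 -> drbd83 naming and should be confirmed first):

yum remove drbd82 kmod-drbd82
yum install drbd83 kmod-drbd83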
2008 May 31
4
drbd strategy
I have an existing in-production LAMP server running CentOS 5.1. It uses
physical partitions on top of hardware RAID1, with /, /home, /var and
/boot on separate partitions.
We have a near-identical system I am thinking of bringing in as a
DRBD/Heartbeat companion. One solution may be to use csync2
[http://oss.linbit.com/csync2/] on /etc and /usr/local (the only areas that
will differ from
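A minimal csync2 sketch for the two trees mentioned, assuming
placeholder host names node1 and node2; /etc/csync2.cfg on both nodes:

#   group lamp {
#     host node1 node2;
#     key /etc/csync2.key;
#     include /etc;
#     include /usr/local;
#   }
csync2 -k /etc/csync2.key    # generate the shared key once, copy to both
csync2 -xv                   # check and synchronize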
2013 Mar 19
0
Remus DRBD frozen
Hi all,
I don't know if my question is related to Xen at all. I am trying to
use DRBD as my disk replication when I run Remus. However, when I run
Remus my Dom-U sometimes freezes. The log file suggests it is caused by
DRBD freezing:
[ 875.616068] block drbd1: Local backing block device frozen?
[ 887.648072] block drbd1: Local backing block device
2014 Aug 03
0
CentOS-docs Digest, Vol 93, Issue 3
On 08/03/2014 08:00 AM, centos-docs-request@centos.org wrote:
> it's 9999fournines9999
6 protons 6 electrons 6 neutrons... carbon. I'll trust science;
heretics may do as they will. When you're posting articles for data
replication, data integrity, data security you may want to post as "MAN"
or "ANONYMOUS." I'll just push article updates to the mailing list,
2005 Nov 01
2
xen, lvm, drbd, bad kernel messages
Regardless of the filesystem (I've used reiserfs, xfs, ext3),
whenever I mount a fresh DRBD partition I get some nasty kernel
messages.
This is under Debian Sarge, Xen kernel 2.6.11.12-xen0 (dom0) using
DRBD v0.7.11 (pulled from Debian "testing").
This is what I did to create the partition.
On both nodes I created a new LVM storage device and started DRBD:
# lvcreate
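The usual sequence, sketched with assumed names and DRBD 8.x syntax
(the 0.7 release in the post used different flags for the initial sync):

lvcreate -L 10G -n drbd0 vg0                      # backing LV, both nodes
drbdadm up r0                                     # attach and connect
drbdadm -- --overwrite-data-of-peer primary r0    # initial sync, one node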
2011 Mar 23
3
EXT4 Filesystem Mount Failed (bad geometry: block count)
Dear All,
We are currently using RHEL6 (kernel 2.6.32-71.el6.i686) with DRBD 8.3.10.
DRBD was built from source and configured for two-node testing in a
simplex setup:
Server 1: IP address 192.168.13.131, hostname primary
Server 2: IP address 192.168.13.132, hostname secondary
Finally we found that drbd0 and drbd1 fail to mount.
*Found some error messages
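A common cause, hedged: the ext4 filesystem was created on the backing
disk, so its block count exceeds the slightly smaller DRBD device
(internal metadata takes space at the end of the device). Creating the
filesystem on the DRBD device itself avoids this (device name assumed):

mkfs.ext4 /dev/drbd0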
2001 Sep 20
0
NFS/InterMezzo ext3 problem
Hi,
We have encountered another funny with ext3 on 2.4.
Like the kernel NFS server, we have a routine in InterMezzo that does
something like looking up an inode by inode number. (compare
intermezzo's presto_ilookup or knfsd's nfsfh_iget)
Effectively both of these routines do
ilookup()
{
        inode = iget(sb, ino);          /* may return an unlinked inode */
        if (inode->i_nlink == 0)
                iput(inode);            /* drop the reference again */
        ....
}
We find that
2005 May 06
3
CentOS Convert Question
Hi,
First, I want to say that I have fallen in love with CentOS4. I have been
using RedHat since the 5.x days. After RedHat dropped the stable system
to go to an unstable system and an Enterprise system, I felt like I was
being left out in the cold. I quickly found out about WhiteBox and used it for
quite a while. Then I learned about CentOS...and switched to it for my
server needs. I have
2014 Jul 26
2
CentOS-docs Digest, Vol 92, Issue 5
On 07/26/2014 08:00 AM, centos-docs-request@centos.org wrote:
> Send CentOS-docs mailing list submissions to
> centos-docs@centos.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.centos.org/mailman/listinfo/centos-docs
> or, via email, send a message with subject or body 'help' to
> centos-docs-request@centos.org
>
>
2017 Sep 05
0
Is it possible to transfer a large, dynamic file in a piecemeal fashion?
On Mon, Sep 04, 2017 at 10:45:26PM +0000, Don Kuenz via rsync wrote:
> Greetings,
>
> Is it possible to use rsync to transmit a large, dynamic 2TB file in a
> piecemeal fashion during daylight hours over the course of a dozen days?
> On a good day, about 200GB of data can be transferred before rsync times
> out, allowing a nightly local backup to complete. The local backup
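A hedged sketch of one way to do this: keep partial data and re-run
until the copy completes (host and paths are placeholders); --partial
keeps interrupted data, --inplace updates the destination file directly
so restarts can reuse what already transferred, and --timeout bounds a
stalled transfer:

while ! rsync -v --partial --inplace --timeout=600 \
        /data/bigfile.img backup@remote:/backup/; do
    sleep 300    # wait, then resume; matching blocks are skipped
done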