Displaying 20 results from an estimated 52 matches for "unsync".
2017 Jul 25
2
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...e brick 'gdnode01:/gluster/engine/brick' of volume
'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
network found in cluster '00000002-0002-0002-0002-00000000017a'
How to assign "glusternw (???)" to the correct interface?
Other errors about unsynced gluster elements still remain... This is a
production environment, so is there any chance to subscribe to RH support?
Thank you
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...; How to assign "glusternw (???)" to the correct interface?
>
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
"Storage network" section explains this. Please make sure that gdnode01 is
resolvable from engine.
>
> Other errors about unsynced gluster elements still remain... This is a
> production environment, so is there any chance to subscribe to RH support?
>
The unsynced entries - did you check for disconnect messages in the mount
log as suggested by Ravi?
For Red Hat support, the best option is to contact your local Red Hat
repres...
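For anyone following along: checking for the disconnect messages mentioned above usually means grepping the gluster fuse mount log on each host. A minimal sketch; the log path below is a typical default for a mount of a volume named "engine" and may differ on your systems:

  # look for client/brick disconnects in the gluster fuse mount log
  grep -iE 'disconnect|connection refused' /var/log/glusterfs/*engine*.log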
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...th hosted engine on 3 fully
> replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for hosted engine)
>
> We have this problem: the "engine" gluster volume always has unsynced
> elements and we can't fix the problem; on the command line we have tried to use
> the "heal" command but elements always remain unsynced ....
>
> Below is the heal command "status":
>
> [root@node01 ~]# gluster volume heal engine info
> Brick node01:/glu...
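For reference, a minimal sketch of the heal commands usually tried at this point; the volume name "engine" is taken from the thread, and exact command availability depends on the GlusterFS release:

  gluster volume heal engine info                    # entries pending heal, per brick
  gluster volume heal engine statistics heal-count   # per-brick count of pending heals
  gluster volume heal engine full                    # trigger a full self-heal crawl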
2020 Nov 05
0
doveadm sync usage ; root INBOX unsynced
On Sat, 31 Oct 2020 13:36:32 +0100,
François Poulain <fpoulain at metrodore.fr> wrote:
> I am trying to import an IMAP mail account using doveadm.
Am I wrong in trying to do so?
Best regards.
François
--
François Poulain <fpoulain at metrodore.fr>
2020 Oct 31
0
doveadm sync usage ; root INBOX unsynced
On Sat, 31 Oct 2020 13:36:32 +0100,
François Poulain <fpoulain at metrodore.fr> wrote:
> Does someone have a hint?
I tried to remove all dirs and restart using doveadm backup as done in
https://dovecot.org/pipermail/dovecot/2019-December/117963.html but it
didn't work any better.
François
--
François Poulain <fpoulain at metrodore.fr>
2010 Dec 10
1
Screen is unsynced with the window in Melty Blood: AC
I've got Melty Blood: Act Cadenza Ver B working somewhat through the latest OSX version of Wine (I have Snow Leopard). The game itself functions well, but it's off-center from the window, so there's some extra space left at the bottom of the window and the top area is obscured, like so:
[Image: http://img692.imageshack.us/img692/1411/screenshot20101210at254.png ]
Can this be fixed?
2017 Sep 08
0
ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-19 11:22 GMT+02:00 yayo (j) <jaganz at gmail.com>:
> running "gluster volume heal engine" doesn't solve the problem...
>
> Some extra info:
>
> We have recently changed the gluster setup from 2 (fully replicated) + 1
> arbiter to a 3-way fully replicated cluster, but I don't know if this is the problem...
>
>
Hi,
I'm sorry for the follow-up. I want
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/21/2017 02:55 PM, yayo (j) wrote:
> 2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
>
> But it does say something. All these gfids of completed heals in
> the log below are for the ones that you have given the
> getfattr output of. So what is likely happening is there is an
>
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...on 3
> fully replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for hosted engine)
>
> We have this problem: the "engine" gluster volume always has unsynced
> elements and we can't fix the problem; on the command line we have
> tried to use the "heal" command but elements always remain
> unsynced ....
>
> Below is the heal command "status":
>
> [root@node01 ~]# gluster volume heal engine in...
2020 Oct 31
4
doveadm sync usage ; root INBOX unsynced
Hi,
I am trying to import an IMAP mail account using doveadm.
Following
https://serverfault.com/questions/605342/migrating-from-any-imap-pop3-server-to-dovecot
I did it with:
doveadm -D -v \
-o imapc_host=mail.oldserver.tld \
-o imapc_user=contact@domain.org \
-o imapc_password=xxxxxxx \
-o imapc_features=rfc822.size \
-o imapc_ssl=starttls \
-o
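The invocation above is cut off by the excerpt. A minimal sketch of what a complete pull-migration over imapc typically looks like, per the guide linked above; the trailing "backup -R" part and the destination user are assumptions, not the original message:

  doveadm -D -v \
    -o imapc_host=mail.oldserver.tld \
    -o imapc_user=contact@domain.org \
    -o imapc_password=xxxxxxx \
    -o imapc_features=rfc822.size \
    -o imapc_ssl=starttls \
    backup -R -u contact@domain.org imapc: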
2017 Jul 21
1
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
>
2015 Oct 07
2
gpo failure
...om\Policies\{12AEE8C7-1711-4B26-B5AB-DC7BF1CC2143}
> \\dc4.samba.company.com\SysVol\samba.company.com\Policies\{12AEE8C7-1711-4B26-B5AB-DC7BF1CC2143}
But I can NOT open the UNC
> \\dc3.samba.company.com\SysVol\samba.company.com\Policies\{12AEE8C7-1711-4B26-B5AB-DC7BF1CC2143}
So my dc3 seems to be unsynced.
So I am now checking to make sure that my rsync replication script works
as it should. (I'm guessing it does NOT)
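For comparison, a minimal sketch of the kind of one-way SysVol rsync such a replication script usually performs; the hostnames and paths are assumptions, and ACLs/xattrs must be preserved or GPO permissions break:

  # push SysVol from the known-good DC to dc3, keeping ACLs and xattrs
  rsync -aAX --delete /usr/local/samba/var/locks/sysvol/ \
    dc3.samba.company.com:/usr/local/samba/var/locks/sysvol/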
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi,
Sorry for following up again, but checking the oVirt interface I've found
that oVirt reports the "engine" volume as an "arbiter" configuration and the
"data" volume as a fully replicated volume. Check these screenshots:
https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
But the "gluster volume info" command report that all 2
2007 Oct 08
2
safe zfs-level snapshots with a UFS-on-ZVOL filesystem?
...a zone).
It's a bit unwieldy, but everything worked reasonably well -
performance isn't much worse than straight ZFS (it gets much faster
with compression enabled, but that's another story).
The only fly in the ointment is that ZVOL level snapshots don't
capture unsynced data up at the FS level. There's a workaround at:
http://blogs.sun.com/pgdh/entry/taking_ufs_new_places_safely
but I wondered if there was anything else that could be done to avoid
having to take such measures?
I don't want to stop writes to get a snap, and I'd real...
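The workaround in that link amounts to quiescing UFS before taking the ZVOL snapshot. A minimal sketch, assuming a UFS filesystem mounted at /export/ufs on the ZVOL pool/ufsvol (both names hypothetical):

  lockfs -w /export/ufs            # write-lock UFS and flush pending data
  zfs snapshot pool/ufsvol@backup  # snapshot the underlying ZVOL
  lockfs -u /export/ufs            # release the write lock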
2015 Aug 19
4
Optimum Block Size to use
Hi All
We use CentOS 6.6 for our application. I have profiled the application
and found that we have a heavy requirement in terms of disk writes. On
average, when our application operates at a certain load, I can observe
that the disk write rate is around 2 Mbps (average).
The block size set is 4k
*******************
[root@localhost ~]# blockdev --getbsz /dev/sda3
4096
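A minimal sketch of how such a write profile is usually measured; the device name is taken from the prompt above and the 5-second interval is arbitrary:

  # extended per-device stats every 5 seconds: writes/s, write
  # throughput, and average request size
  iostat -dx sda 5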
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
>
2007 Mar 03
1
docs (was Re: Re: [nut-commits] svn commit r831 - in trunk: .)
On 3/3/07, Arnaud Quette <aquette.dev@gmail.com> wrote:
> Lastly, yep the doc is still unsynced, ugly and incomplete :(
> I've recruited 3 people, none of whom is active.
I still think we need something simpler than docbook.
Suggestions?
--
- Charles Lepple
2017 Jul 24
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi,
Regarding the UI showing incorrect information about the engine and data
volumes, can you please refresh the UI and see if the issue persists, plus
check for any errors in the engine.log files?
Thanks
kasturi
On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N <ravishankar at redhat.com>
wrote:
>
> On 07/21/2017 11:41 PM, yayo (j) wrote:
>
> Hi,
>
> Sorry for following up again,
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
These errors are because glusternw is not assigned to the correct
interface. Once you attach that, these errors should go away. This has
nothing to do with the problem you are seeing.
Sahina, any idea about engine not showing the correct volume info?
On Mon, Jul 24, 2017 at 7:30 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi,
>
> UI refreshed but the problem still remains ...
>
2015 Aug 19
2
Optimum Block Size to use
...> Initial thought is, do you really care? 2Mbps is peanuts, so personally I'd
> leave everything at the defaults. There's really no need to optimise
> everything.
>
> Obviously the exact type of writes is important (lots of small writes written
> and flushed vs fewer big unsynced writes), so you'd want to poke it with
> iostat to see what kind of writes you're talking about.
To address this we use (sysctl):
vm.dirty_expire_centisecs
vm.dirty_writeback_centisecs
Furthermore, check the FS alignment with
the underlying disk ...
--
LF
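A minimal sketch of inspecting and adjusting those writeback knobs; the values shown are placeholders, not recommendations:

  # how old dirty pages may get before writeback, and how often the
  # flusher threads wake up (both in centiseconds)
  sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs
  # example change: expire dirty data sooner (placeholder value)
  sysctl -w vm.dirty_expire_centisecs=1500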