search for: 41f8

Displaying 12 results from an estimated 12 matches for "41f8".

2011 Aug 17
2
no dentry for non-root inode
...cid1 on client (samba server) and the same but Natty on the nodes. It was upgraded from 3.2.1. What is this? If a client tries to access it, it freezes up. This is in the log:
[2011-08-17 12:29:43.108100] W [inode.c:1035:inode_path] 0-w-vol/inode: no dentry for non-root inode 1996985: b45eeb9d-5481-41f8-828a-2850c51e754c
[2011-08-17 12:29:43.108135] W [fuse-bridge.c:508:fuse_getattr] 0-glusterfs-fuse: 35186424: GETATTR 139724065350900 (fuse_loc_fill() failed)
[2011-08-17 12:29:45.149772] W [inode.c:1035:inode_path] 0-w-vol/inode: no dentry for non-root inode 1996985: b45eeb9d-5481-41f8-828a-2850...
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
...profile testvol info > /32mb_shard_and_2gb_dd.log
sr:~# gluster volume profile testvol stop
Stopping volume profile on testvol has been successful
Also here is volume info:
sr:~# gluster volume info testvol
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 3cc06d95-06e9-41f8-8b26-e997886d7ba1
Status: Started
Snapshot Count: 0
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: sr-09-loc-50-14-18:/bricks/brick1
Brick2: sr-10-loc-50-14-18:/bricks/brick1
Brick3: sr-09-loc-50-14-18:/bricks/brick2
Brick4: sr-10-loc-50-14-18:/bricks/brick2
Brick5: sr-...
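The snippet above shows GlusterFS's built-in profiling cycle: start profiling, run the workload, capture counters, stop. A minimal sketch of that cycle, printed dry-run style so the sequence is visible without a gluster installation (the volume name `testvol` comes from the thread; the mount point `/mnt/testvol` is an assumed stand-in):

```shell
# Sketch of the profile start -> workload -> info -> stop cycle from the thread.
# VOL is from the thread; MNT is a hypothetical mount point.
VOL=testvol
MNT=/mnt/testvol

# Echo the commands rather than executing them, so the cycle is inspectable
# on a machine without gluster installed.
cycle() {
  echo "gluster volume profile $VOL start"
  echo "dd if=/dev/zero of=$MNT/ddfile bs=1G count=1"
  echo "gluster volume profile $VOL info"
  echo "gluster volume profile $VOL stop"
}
cycle
```

`profile info` reports per-brick latency and FOP counts accumulated since `start`, which is what the attached `/32mb_shard_and_2gb_dd.log` in the thread was captured from.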
2017 Jul 10
0
Very slow performance on Sharded GlusterFS
...profile testvol info > /32mb_shard_and_2gb_dd.log
sr:~# gluster volume profile testvol stop
Stopping volume profile on testvol has been successful
Also here is volume info:
sr:~# gluster volume info testvol
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 3cc06d95-06e9-41f8-8b26-e997886d7ba1
Status: Started
Snapshot Count: 0
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: sr-09-loc-50-14-18:/bricks/brick1
Brick2: sr-10-loc-50-14-18:/bricks/brick1
Brick3: sr-09-loc-50-14-18:/bricks/brick2
Brick4: sr-10-loc-50-14-18:/bricks/brick2
Brick5: sr-...
2017 Jul 12
1
Very slow performance on Sharded GlusterFS
...profile testvol stop
> > Stopping volume profile on testvol has been successful
> >
> > Also here is volume info:
> >
> > sr:~# gluster volume info testvol
> >
> > Volume Name: testvol
> > Type: Distributed-Replicate
> > Volume ID: 3cc06d95-06e9-41f8-8b26-e997886d7ba1
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 10 x 2 = 20
> > Transport-type: tcp
> > Bricks:
> > Brick1: sr-09-loc-50-14-18:/bricks/brick1
> > Brick2: sr-10-loc-50-14-18:/bricks/brick1
> > Brick3: sr-09-loc-50-14-1...
2018 Dec 20
1
Samba AD DC replication error - 2, 'WERR_BADFILE'
...rst-Site-Name\LOCATION-000001 via RPC
DSA object GUID: 2fbf25e8-acff-485b-8dea-2bc116869f5c
Last attempt @ Thu Dec 20 13:49:46 2018 UTC failed, result 2 (WERR_BADFILE)
29 consecutive failure(s).
Last success @ NTTIME(0)
==== KCC CONNECTION OBJECTS ====
Connection --
Connection name: 6c51da6c-3fe9-41f8-a9ac-a99949a235e4
Enabled         : TRUE
Server DNS name : location-000001.example.corp
Server DN name  : CN=NTDS Settings,CN=LOCATION-000001,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=example,DC=corp
TransportType: RPC
options: 0x00000001
Warning: No NC replicated for Connec...
2018 Dec 20
4
Samba AD DC replication error - 2, 'WERR_BADFILE'
...t-Site-Name\LOCATION-000001 via RPC
DSA object GUID: 2fbf25e8-acff-485b-8dea-2bc116869f5c
Last attempt @ Thu Dec 20 13:49:46 2018 UTC failed, result 2 (WERR_BADFILE)
29 consecutive failure(s).
Last success @ NTTIME(0)
==== KCC CONNECTION OBJECTS ====
Connection --
Connection name: 6c51da6c-3fe9-41f8-a9ac-a99949a235e4
Enabled         : TRUE
Server DNS name : location-000001.example.corp
Server DN name  : CN=NTDS Settings,CN=LOCATION-000001,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=example,DC=corp
TransportType: RPC
options: 0x00000001
Warning: No NC replicated for Connec...
2018 Dec 20
0
Samba AD DC replication error - 2, 'WERR_BADFILE'
...t-Site-Name\LOCATION-000001 via RPC
DSA object GUID: 2fbf25e8-acff-485b-8dea-2bc116869f5c
Last attempt @ Thu Dec 20 13:49:46 2018 UTC failed, result 2 (WERR_BADFILE)
29 consecutive failure(s).
Last success @ NTTIME(0)
==== KCC CONNECTION OBJECTS ====
Connection --
Connection name: 6c51da6c-3fe9-41f8-a9ac-a99949a235e4
Enabled         : TRUE
Server DNS name : location-000001.example.corp
Server DN name  : CN=NTDS Settings,CN=LOCATION-000001,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=example,DC=corp
TransportType: RPC
options: 0x00000001
Warning: No NC replicated for Connec...
2018 Dec 20
5
Samba AD DC replication error - 2, 'WERR_BADFILE'
Hello everyone, I have setup two Samba AD DC's with BIND9_DLZ dns backend. faiserver.example.corp is one of them hosting all FSMO Roles. location-000001.example.corp is the second one. Both are in different subnets but can reach each other. Unfortunately replication only works from faiserver.example.corp -> location-000001.example.corp. In the other direction location-000001.example.corp
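The replication state quoted in the neighbouring results comes from Samba's DRS tooling. A sketch of the usual diagnostic pair, printed dry-run style (host names are taken from the thread; the partition DN `DC=example,DC=corp` is an assumption derived from the `example.corp` domain shown above):

```shell
# Sketch: inspect AD DC replication state, then manually pull the partition
# in the failing direction to surface the underlying WERR_BADFILE error.
# SRC/DST are the thread's DCs; PART is an assumed naming context DN.
SRC=faiserver.example.corp
DST=location-000001.example.corp
PART=DC=example,DC=corp

diagnose() {
  # Show inbound/outbound neighbours and KCC connection objects.
  echo "samba-tool drs showrepl"
  # samba-tool drs replicate <destination-DC> <source-DC> <NC>.
  echo "samba-tool drs replicate $DST $SRC $PART"
}
diagnose
```

Running the manual `replicate` on the destination DC typically reprints the same result code as `showrepl`, but immediately and with more context, which helps tie the failure to DNS or file-level problems.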
2018 Dec 21
1
Samba AD DC replication error - 2, 'WERR_BADFILE'
...t-Site-Name\LOCATION-000001 via RPC
DSA object GUID: 2fbf25e8-acff-485b-8dea-2bc116869f5c
Last attempt @ Thu Dec 20 13:49:46 2018 UTC failed, result 2 (WERR_BADFILE)
29 consecutive failure(s).
Last success @ NTTIME(0)
==== KCC CONNECTION OBJECTS ====
Connection --
Connection name: 6c51da6c-3fe9-41f8-a9ac-a99949a235e4
Enabled         : TRUE
Server DNS name : location-000001.example.corp
Server DN name  : CN=NTDS Settings,CN=LOCATION-000001,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=example,DC=corp
TransportType: RPC
options: 0x00000001
Warning: No NC replicated for Connec...
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
Krutika, I'm sorry I forgot to add logs. I attached them now. Thanks, Gencer.
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of gencer at gencgiyen.com
Sent: Thursday, July 6, 2017 10:27 AM
To: 'Krutika Dhananjay' <kdhananj at redhat.com>
Cc: 'gluster-user' <gluster-users at gluster.org>
Subject: Re:
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika, After that setting:
$ dd if=/dev/zero of=/mnt/ddfile bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.7351 s, 91.5 MB/s
$ dd if=/dev/zero of=/mnt/ddfile2 bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 23.7351 s, 90.5 MB/s
$ dd if=/dev/zero of=/mnt/ddfile3 bs=1G count=1
1+0 records
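The MB/s figures dd prints in that snippet can be sanity-checked by hand: dd reports decimal megabytes, so throughput is bytes copied divided by elapsed seconds divided by 10^6. Using the byte counts and timings from the two runs above:

```shell
# Recompute dd's reported throughput from its own byte/second figures.
# dd prints decimal MB/s, i.e. bytes / seconds / 1,000,000.
run1=$(awk 'BEGIN { printf "%.1f", 1073741824 / 11.7351 / 1000000 }')
run2=$(awk 'BEGIN { printf "%.1f", 2147479552 / 23.7351 / 1000000 }')
echo "run1: $run1 MB/s, run2: $run2 MB/s"   # matches the 91.5 and 90.5 MB/s above
```

Both values agree with dd's output, so the numbers in the thread are internally consistent; the complaint is about the absolute rate, not a reporting error.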
2013 Feb 27
4
GlusterFS performance
...; echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/zero of=gluster.test.bin bs=1G count=1
Switching direct-io on and off doesn't have any effect. Playing with glusterfs options doesn't either. What can I do about performance?
My volumes:
Volume Name: nginx
Type: Replicate
Volume ID: e3306431-e01d-41f8-8b2d-86a61837b0b2
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: control1:/storage/nginx
Brick2: control2:/storage/nginx

Volume Name: instances
Type: Distributed-Replicate
Volume ID: d32363fc-4b53-433c-87b7-ad51acfa4125
Status: Started
Number of Bricks: 2 x 2 = 4
T...
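The `echo 3 > /proc/sys/vm/drop_caches` step in that benchmark frees both the page cache and the reclaimable slab objects (dentries and inodes), so the following dd measures the volume rather than local caches. A dry-run sketch of that cold-cache cycle (the `sync` line is an addition not in the thread's command, and the target path is an assumed mount point; writing to `/proc` requires root):

```shell
# Sketch of a cold-cache write benchmark like the one quoted above.
# drop_caches: 1 = page cache, 2 = dentries/inodes, 3 = both.
# TARGET is a hypothetical GlusterFS mount; the thread used a relative path.
TARGET=/mnt/gluster/gluster.test.bin

bench() {
  # Flush dirty pages first so drop_caches can actually free them.
  echo "sync"
  echo "echo 3 > /proc/sys/vm/drop_caches"
  echo "dd if=/dev/zero of=$TARGET bs=1G count=1"
}
bench
```

Without the `sync`, dirty pages cannot be dropped and the run may still be partially served from cache, which is one reason repeated benchmarks of this kind can disagree.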