Displaying 10 results from an estimated 10 matches for "15347".
2008 Oct 05
2
Attn Ivo. Re patches 15347 and 15376
Ivo,
Your patch number 15347 uses _fseeki64 when _WIN32 is defined.
Unfortunately, MinGW (or at least the Linux -> Win32 cross compiler
I'm using) defines _WIN32 but isn't aware of _fseeki64.
I have therefore modified your solution a little and committed it
as rev 15376. The code now looks like this:
#ifdef __MINGW32__
return fseeko64(f,off,whence);
#elif defined
2016 Dec 28
1
Resolving Schema & Configuration Replication Issues
...-sync"?
CN=Schema,CN=Configuration,DC=micore,DC=us
Default-First-Site-Name\TEMP2008R2DC via RPC
DSA object GUID: c8d5c583-a097-4265-858a-cb67797ebb05
Last attempt @ Wed Dec 28 10:46:41 2016 EST failed,
result 58 (WERR_BAD_NET_RESP)
15347 consecutive failure(s).
Last success @ Fri Nov 4 14:59:16 2016 EDT
CN=Configuration,DC=micore,DC=us
Default-First-Site-Name\TEMP2008R2DC via RPC
DSA object GUID: c8d5c583-a097-4265-858a-cb67797ebb05
Last attempt @ Wed Dec 28 10:46:44 201...
2012 Sep 24
1
Logging question regarding delete actions
A user is logged in via imap from multiple devices.
The log has this:
Sep 21 11:46:32 postamt dovecot: imap(awxxxxer): delete: box=INBOX, uid=15347, msgid=<1341851741.4ffb085d2e2b7@swift.generated>, size=15675
Sep 21 11:46:32 postamt dovecot: imap(awxxxxer): delete: box=INBOX, uid=15739, msgid=<b23b2e42f6ae9ba1602690be42b7b5c7.squirrel@webmail.charite.de>, size=18134
Sep 21 11:46:32 postamt dovecot: imap(awxxxxer): delete: bo...
2020 Jan 27
3
Nut-upsuser Digest, Vol 175, Issue 32
...e my permissions and user info. Let me know if there is anything
else I should check
pi@nutpi:~ $ ls -alt /etc/nut
total 56
drwxr-xr-x 87 root root 4096 Jan 26 15:22 ..
drwxr-xr-x 2 root nut 4096 Jan 26 13:53 .
-rw-r----- 1 root nut 4719 Jan 26 13:15 upssched.conf
-rw-r----- 1 root nut 15347 Jan 26 11:25 upsmon.conf
-rw-r----- 1 root nut 1543 Jan 26 10:02 nut.conf
-rw-r----- 1 root nut 2191 Jan 26 09:40 upsd.users
-rw-r----- 1 root nut 4601 Jan 26 09:38 upsd.conf
-rw-r----- 1 root nut 5646 Jan 26 09:32 ups.conf
The user I created for nut is nutmon. Here is the directory pe...
2006 Jun 29
1
gem flaking out on me
# gem install rmagick
Bulk updating Gem source index for: http://gems.rubyforge.org
ERROR: While executing gem ... (OpenURI::HTTPError)
404 Not Found
# gem update sources
Updating installed gems...
Attempting remote update of sources
ERROR: While executing gem ... (Gem::GemNotFoundException)
Could not find sources (> 0) in the repository
# gem search rmagic --remote
*** REMOTE GEMS
2020 Jan 27
1
Timer doesn't appear to start
...know if there is anything
> else I should check
>
> pi@nutpi:~ $ ls -alt /etc/nut
> total 56
> drwxr-xr-x 87 root root 4096 Jan 26 15:22 ..
> drwxr-xr-x 2 root nut 4096 Jan 26 13:53 .
> -rw-r----- 1 root nut 4719 Jan 26 13:15 upssched.conf
> -rw-r----- 1 root nut 15347 Jan 26 11:25 upsmon.conf
> -rw-r----- 1 root nut 1543 Jan 26 10:02 nut.conf
> -rw-r----- 1 root nut 2191 Jan 26 09:40 upsd.users
> -rw-r----- 1 root nut 4601 Jan 26 09:38 upsd.conf
> -rw-r----- 1 root nut 5646 Jan 26 09:32 ups.conf
>
> the user I created for nut is nut...
2020 Jan 27
0
Timer doesn't appear to start
2013 Jan 23
1
VMs fail to start with NUMA configuration
...s: 0 4 8 12 16 20 24 28
node 0 size: 16374 MB
node 0 free: 11899 MB
node 1 cpus: 32 36 40 44 48 52 56 60
node 1 size: 16384 MB
node 1 free: 15318 MB
node 2 cpus: 2 6 10 14 18 22 26 30
node 2 size: 16384 MB
node 2 free: 15766 MB
node 3 cpus: 34 38 42 46 50 54 58 62
node 3 size: 16384 MB
node 3 free: 15347 MB
node 4 cpus: 3 7 11 15 19 23 27 31
node 4 size: 16384 MB
node 4 free: 15041 MB
node 5 cpus: 35 39 43 47 51 55 59 63
node 5 size: 16384 MB
node 5 free: 15202 MB
node 6 cpus: 1 5 9 13 17 21 25 29
node 6 size: 16384 MB
node 6 free: 15197 MB
node 7 cpus: 33 37 41 45 49 53 57 61
node 7 size: 16368 MB...
2017 Sep 05
0
Slow performance of gluster volume
OK, my understanding is that with preallocated disks the performance with
and without shard will be the same.
In any case, please attach the volume profile[1], so we can see what else
is slowing things down.
-Krutika
[1] -
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
On Tue, Sep 5, 2017 at 2:32 PM, Abi Askushi
2017 Sep 05
3
Slow performance of gluster volume
Hi Krutika,
I already have a preallocated disk on VM.
Now I am checking performance with dd on the hypervisors which have the
gluster volume configured.
I tried also several values of shard-block-size and I keep getting the same
low values on write performance.
Enabling client-io-threads also did not have any effect.
The version of gluster I am using is glusterfs 3.8.12 built on May 11 2017