Displaying 20 results from an estimated 25 matches for "38m".
2007 Jun 21
0
Network issue in RHCS/GFS environment
...35B 0 : 435k 19k: 60M 377k>
1 32 27 39 1 0| 176k 47M| 0 0 :1216k 25M: 51M 323k>
1 29 27 43 1 0| 192k 42M| 35B 35B:2042k 50M: 42M 249k>
0 29 38 32 1 0| 198k 41M| 936B 1293B:1748k 40M: 41M 233k>
1 26 34 38 0 0| 246k 38M| 0 35B:1804k 42M: 41M 231k>
1 27 33 38 1 0| 234k 41M| 35B 0 :1800k 40M: 40M 250k>
However, it is very strange on node2: eth1 recv and send are both very
high, while eth0 and eth2 have low I/O.
# dstat -N eth0,eth3,eth4 2
----total-cpu-usage---- -dsk/...
2013 May 24
0
Problem After adding Bricks
..., 0.0%ni, 1.0%id, 5.6%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16405712k total, 16310088k used, 95624k free, 12540824k buffers
Swap: 1999868k total, 9928k used, 1989940k free, 656604k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2460 root 20 0 391m 38m 1616 S 250 0.2 4160:51 glusterfsd
2436 root 20 0 392m 40m 1624 S 243 0.3 4280:26 glusterfsd
2442 root 20 0 391m 39m 1620 S 187 0.2 3933:46 glusterfsd
2454 root 20 0 391m 36m 1620 S 118 0.2 3870:23 glusterfsd
2448 root 20 0 391m 38m 1624 S 110...
2018 May 22
0
split brain? but where?
...3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/mapper/centos-data1 112G 33M 112G 1% /data1
/dev/mapper/centos-var 19G 219M 19G 2% /var
/dev/mapper/centos-home 47G 38M 47G 1% /home
/dev/mapper/centos-var_lib 9.4G 178M 9.2G 2% /var/lib
/dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2 932G 263G 669G 29% /bricks/brick1
/dev/sda1 950M 235M 715M 25% /boot
8><---
So...
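For readers reaching this thread from search: the df output above only shows capacity, not split-brain state. The usual way to ask Gluster directly which files it considers split-brain is the heal-info command; the volume name below is a placeholder (assumption), not one taken from this thread, and the command needs a running Gluster cluster:

```shell
# List entries Gluster has flagged as split-brain on volume "myvol"
# ("myvol" is a hypothetical volume name).
gluster volume heal myvol info split-brain
```

If entries are listed, they can then be resolved per file using the split-brain resolution policies in the Gluster heal documentation.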
2018 May 22
2
split brain? but where?
...ys/fs/cgroup
> > /dev/mapper/centos-tmp 3.8G 33M 3.7G 1% /tmp
> > /dev/mapper/centos-var 19G 213M 19G 2% /var
> > /dev/mapper/centos-home 47G 38M 47G 1% /home
> > /dev/mapper/centos-data1 112G 33M 112G 1% /data1
> > /dev/mapper/centos-var_lib 9.4G 178M 9.2G 2% /var/lib
> > /dev/mapper/vg--gluster--prod--1--2-gluster--...
2013 Aug 21
2
High Load Average on POP/IMAP.
...9824k total, 11913788k used, 4366036k free, 334308k buffers
Swap: 4192956k total, 0k used, 4192956k free, 10359492k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
408 mysql 18 0 384m 38m 4412 S 52.8 0.2 42221:44 mysqld
29326 nobody 15 0 22688 10m 1112 D 3.9 0.1 0:00.05 imap
29313 nobody 16 0 14892 4892 1000 S 3.1 0.0 0:00.07 im...
2018 May 22
1
split brain? but where?
...3.8G 0 3.8G 0% /sys/fs/cgroup
> /dev/mapper/centos-data1 112G 33M 112G 1% /data1
> /dev/mapper/centos-var 19G 219M 19G 2% /var
> /dev/mapper/centos-home 47G 38M 47G 1% /home
> /dev/mapper/centos-var_lib 9.4G 178M 9.2G 2% /var/lib
> /dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2 932G 263G 669G 29% /bricks/brick1
> /dev/sda1 950M 235M 715M...
2016 Apr 14
0
How to optimize for IBM db2?
...bigfile bs=8192
140384+0 records in
140384+0 records out
1150025728 bytes (1.2 GB) copied, 29.9992 s, 38.3 MB/s
I tried all buses, cache modes and IO modes; "Cache mode=none" is slower, others are almost equal.
I moved from image file to LVM. This raised the speed from 16M to 38M.
I also tried VirtualBox, but it's neither faster nor slower, and it eats CPU too.
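For anyone wanting to rerun the quoted measurement, the dd invocation (truncated above) can be reconstructed along these lines; only the block size and total byte count follow from the quoted output, while the input source and output path are assumptions:

```shell
# Sequential-write test matching the quoted numbers:
# 140384 blocks x 8192 bytes = 1,150,025,728 bytes (~1.1 GiB).
# /dev/zero and the output path are placeholders (assumptions).
dd if=/dev/zero of=/tmp/bigfile bs=8192 count=140384
```

dd prints a summary line ("1150025728 bytes ... copied, ... s, ... MB/s"); dividing the byte count by the elapsed time gives throughput figures like the 38.3 MB/s quoted.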
--
--------------------------------------------------------------------------------
Kind regards,
Ilya Basin
software engineer
Reksoft
Skype: basin_ilya
phone +7(812)324-24-40*553
2008 Jul 05
2
Question on number of processes engendered by BDRb
...S 0 10.1 0:45.18 mongrel_rails
* 7084 raghus 15 0 152m 73m 2036 S 11 7.2 116:31.68 packet_worker_r*
11129 raghus 15 0 134m 58m 3336 S 0 5.7 0:05.20 mongrel_rails
* 7085 raghus 15 0 131m 53m 2020 S 0 5.2 2:23.61 packet_worker_r*
5094 mysql 15 0 215m 38m 3272 S 0 3.7 44:13.99 mysqld
* 7083 raghus 15 0 97.9m 36m 1192 S 0 3.5 2:28.98 packet_worker_r*
7081 raghus 15 0 98.3m 34m 1036 S 0 3.4 3:21.40 ruby
10996 raghus 15 0 55820 12m 1340 S 0 1.2 0:06.16 god
11091 raghus 15 0 19748 3728 1384 S 0 0.4 0:...
2017 Jun 13
2
[Bug 12838] New: [PATCH] Log sent/received bytes even in case of error
https://bugzilla.samba.org/show_bug.cgi?id=12838
Bug ID: 12838
Summary: [PATCH] Log sent/received bytes even in case of error
Product: rsync
Version: 3.1.2
Hardware: All
OS: All
Status: NEW
Severity: normal
Priority: P5
Component: core
Assignee: wayned at samba.org
2006 Mar 12
0
See Xen in action
...s videos of Xen in action (works great with mplayer):
1. Several OS simultaneously (multipleOS.avi <http://mlc.homelinux.com:88/xenpr/Videos/multipleOS.avi> - 17M)
2. Installation of Debian and NetBSD on DomU (DebianNetBSD.avi <http://mlc.homelinux.com:88/xenpr/Videos/DebianNetBSD.avi> - 38M)
3. Live migration (relocation) (livemigration.avi <http://mlc.homelinux.com:88/xenpr/Videos/livemigration.avi> - 20M)
4. Dynamic allocation of memory in DomU (allocationdynmem.avi <http://mlc.homelinux.com:88/xenpr/Videos/allocationdynmem.avi> - 3.7M)
These videos are free; you can use them...
2018 May 21
2
split brain? but where?
...3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/mapper/centos-tmp 3.8G 33M 3.7G 1% /tmp
/dev/mapper/centos-var 19G 213M 19G 2% /var
/dev/mapper/centos-home 47G 38M 47G 1% /home
/dev/mapper/centos-data1 112G 33M 112G 1% /data1
/dev/mapper/centos-var_lib 9.4G 178M 9.2G 2% /var/lib
/dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2 932G 264G 668G 29% /bricks/brick1
/d...
2003 Aug 31
1
Filesystem problem
...4PM 0:00.00 /usr/bin/as -o comconsole.o -
last pid: 1252; load averages: 0.00, 0.00, 0.00 up 0+02:37:22 19:04:48
64 processes: 1 running, 63 sleeping
CPU states: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Mem: 34M Active, 23M Inact, 38M Wired, 204K Cache, 22M Buf, 906M Free
Swap: 2048M Total, 2048M Free
devel# vmstat
procs memory page disks faults cpu
r b w avm fre flt re pi po fr sr ad0 da0 in sy cs us sy id
1 7 0 144612 928056 16 0 0 0 9 0 0 0 331 0...
2018 May 21
0
split brain? but where?
...8G 0 3.8G 0% /sys/fs/cgroup
> /dev/mapper/centos-tmp 3.8G 33M 3.7G 1% /tmp
> /dev/mapper/centos-var 19G 213M 19G 2% /var
> /dev/mapper/centos-home 47G 38M 47G 1% /home
> /dev/mapper/centos-data1 112G 33M 112G 1% /data1
> /dev/mapper/centos-var_lib 9.4G 178M 9.2G 2% /var/lib
> /dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2 932G 264G 668...
2007 Nov 08
1
XEN HVMs on LVM over iSCSI - test results, (crashes) and questions
...e++ (Per Chr column write and read):
iSCSI dom0: w: 54M, r: 47M (no domU)
Local disk dom0: w: 59M, r: 51M (3 idle domU)
HVM single : w: 18M, r: 37M
I've then launched bonnie++ on two separate SLC4 HVMs (cloned):
HVM 1 : w: 8M, r: 33M
HVM 2 : w: 8.5M, r: 38M
The same on three HVM:
HVM 1 : w: 4.4M, r: 11M
HVM 2, crashed, lost ssh, error: "hda: lost interrupt", need "xm reboot"
HVM 3, crashed, lost ssh, error: "hda: lost interrupt", need "xm reboot"
So, where is the limit of my configuration?
How can...
2018 Mar 23
1
Aw: Re: rsync very very slow with multiple instances at the same time.
An HTML attachment was scrubbed...
URL: <http://lists.samba.org/pipermail/rsync/attachments/20180323/66c46d5a/attachment.html>
2006 Jan 25
8
[Bug 400] connection tracking does not work on VLANs if underlying interface is a bridge
https://bugzilla.netfilter.org/bugzilla/show_bug.cgi?id=400
------- Additional Comments From kaber@trash.net 2006-01-25 12:55 MET -------
Please add a LOG rule to PRE_ROUTING in the mangle table and post the output.
BTW, are you using hardware checksumming (check with ethtool) on the underlying
ethernet device?
--
Configure bugmail:
2005 Jan 17
3
Vegas Video 4 installer
...le, it didn't help. Does anyone have a good hint ?
Thanks,
Xav
PS: in case you can't find it anymore, I have the installer available -
it's the unregistered/demo version, so it should be more or less harmless
to redistribute. But as it's a bit too big to put on my own public ftp
(38M), I'm giving the URL by private mail.
[xav@bip:~/.wine/drive_c]$ WINEDEBUG=+loaddll,+tid wine vv
0009:trace:loaddll:load_dll Loaded module L"C:\\windows\\system\\msvcrt.dll" : native
0009:trace:loaddll:load_dll Loaded module L"c:\\windows\\system\\advapi32.dll" : builtin
0...
2007 Aug 21
0
Software RAID1 or Hardware RAID1 with Asterisk (Vidura Senadeera)
...ctive raid1 hdc5[1] hda5[0]
> 38081984 blocks [2/2] [UU]
>
> md6 : active raid1 hdc6[1] hda6[0]
> 38708480 blocks [2/2] [UU]
>
> unused devices: <none>
>
> $ df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/md1 236M 38M 186M 17% /
> tmpfs 249M 0 249M 0% /dev/shm
> /dev/md3 1.9G 1.2G 643M 65% /usr
> /dev/md5 36G 29G 5.3G 85% /var
> /dev/md6 37G 30G 4.7G 87% /archive
>
> $ cat /proc/swaps
> Filename...
2007 Aug 21
6
Software RAID1 or Hardware RAID1 with Asterisk
Dear All,
I would like to get community's feedback with regard to RAID1 ( Software or
Hardware) implementations with asterisk.
This is my setup
Motherboard with SATA RAID1 support
CentOS 4.4
Asterisk 1.2.19
Libpri/zaptel latest release
2.8 Ghz Intel processor
2 80 GB SATA Hard disks
256 MB RAM
digium PRI/E1 card
Following are the concerns I am having
I'm planning to put this Asterisk
2018 Mar 28
3
Announcing Gluster release 4.0.1 (Short Term Maintenance)
On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote:
> Hi,
>
> Thanks, yes, not very familiar with Centos and hence googling took a while
> to find a 4.0 version at,
>
> https://wiki.centos.org/SpecialInterestGroup/Storage
The announcement for Gluster 4.0 in CentOS should contain all the
details that you need as well: