
Displaying 20 results from an estimated 700 matches similar to: "Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl."
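For context, the tuning the kernel message suggests looks roughly like the following minimal sketch, assuming a FreeBSD i386 system; the values are illustrative, not recommendations. Both knobs are loader tunables, so they go in /boot/loader.conf and take effect on the next boot:

    # /boot/loader.conf -- illustrative values, size them to the workload
    vm.pmap.shpgperproc="400"        # shared pages per process, used to size the PV entry table
    vm.pmap.pv_entry_max="2000000"   # explicit cap on PV entries

    # the current values can be inspected at runtime:
    sysctl vm.pmap.shpgperproc vm.pmap.pv_entry_max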

2013 Feb 14
2
i386: vm.pmap kernel local race condition
Hi! I've got a FreeBSD 8.3-STABLE/i386 server that can be reliably panicked using just the 'squid -k rotatelog' command. It seems the system suffers from the problem described here: http://cxsecurity.com/issue/WLB-2010090156 I could not find any FreeBSD Security Advisory containing a fix. My server has 4G physical RAM (about 3.2G available) and runs squid (about 110M VSS) with 500
2003 Apr 06
1
load testing and tuning a 4GB RAM server
Hello everyone, First of all, great job on 4.8-R. We have been long-standing users of FreeBSD and are very happy with everything. Now my question: I am trying to stress test a new Dell PowerEdge server and find the limits of its hardware and my tuning. Here are the server stats: * 2x Xeon 2.8 with SMP compiled, hyperthreading NOT compiled into the kernel * 4 GB of RAM, 8 GB of swap on RAID 1
2007 Dec 07
6
4.x Collecting pv entries Suggest increasing PMAP_SHPGPERPROC,
Hello List, I know FreeBSD 4.x is old..., but we are using it on a production system with postgres and apache. The above message appears periodically. I googled for the message but found no recommendation for adjusting the setting. Any suggestions? Thanks, Steve -- "They that give up essential liberty to obtain temporary safety, deserve neither liberty nor safety." (Ben Franklin)
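On 4.x, this knob is typically raised at kernel build time rather than via a loader tunable; a minimal sketch, with an illustrative value:

    # in the kernel configuration file, followed by a config/build/install cycle
    options         PMAP_SHPGPERPROC=400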
2017 Jun 07
2
purrr::pmap does not work
Hi All, I am trying to do scatterplots for a bunch of variables: I plot a dependent variable against a bunch of independent variables: -- cut -- graphics::plot( v01_r01 ~ v08_01_up11, data = dataset, xlab = "Independent #1", ylab = "Dependent" ) -- cut -- It is tedious to repeat the statement for all independent variables. Found an alternative, i.e.: -- cut -- mu
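The resolution is not shown in this excerpt, but as one illustration, here is a minimal sketch of looping the plot over several predictors with purrr; the second column name is assumed, and 'dataset' must contain the columns listed:

    # assumes 'dataset' contains the columns named in 'ivs'
    ivs  <- c("v08_01_up11", "v08_02_up11")     # predictor columns (second name illustrative)
    labs <- c("Independent #1", "Independent #2")
    purrr::pwalk(list(ivs, labs), function(iv, lab) {
      graphics::plot(stats::reformulate(iv, response = "v01_r01"),
                     data = dataset, xlab = lab, ylab = "Dependent")
    })

Note that pwalk() is the side-effect variant: pmap() itself returns a list, which is easy to trip over when the goal is plotting.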
2018 Jan 15
2
Using the host name of the volume, its related commands can become very slow
When the volume is created using host names, the related gluster commands can become very slow, for example create, start, and stop volume, and the NFS-related commands. In some cases the command will return "Error: Request timed out". But if the volume is created using IP addresses, all gluster commands are normal. I have configured /etc/hosts correctly, because SSH can normally use the
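A minimal first diagnostic for this kind of symptom, assuming peers named gluster01 and gluster02 (names illustrative), is to confirm that every node resolves every peer name consistently and that the peers agree:

    # run on each node
    getent hosts gluster01 gluster02    # should match the addresses used at probe time
    gluster peer status                 # every peer should show State: Peer in Cluster (Connected)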
2018 Jan 16
0
Using the host name of the volume, its related commands can become very slow
On Mon, Jan 15, 2018 at 6:30 PM, ?? <chenxi at shudun.com> wrote: > When the volume is created using host names, the related gluster commands > can become very slow, for example create, start, and stop volume, and the > NFS-related commands. In some cases the command will return Error: Request timed > out > but if the volume is created using IP addresses, all gluster >
2018 Mar 21
0
Brick process not starting after reinstall
Could you share the following information: 1. gluster --version 2. output of gluster volume status 3. glusterd log and all brick log files from the node where bricks didn't come up. On Wed, Mar 21, 2018 at 12:35 PM, Richard Neuboeck <hawk at tbi.univie.ac.at> wrote: > Hi all, > > our systems have suffered a host failure in a replica three setup. > The host needed a
2018 Mar 21
2
Brick process not starting after reinstall
Hi all, our systems have suffered a host failure in a replica three setup. The host needed a complete reinstall. I followed the RH guide to 'replace a host with the same hostname' (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts). The machine has the same OS (CentOS 7). The new machine got a minor version number newer
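The core of the 'same hostname' procedure in that guide is restoring the failed node's identity before glusterd first starts; a minimal sketch, with the UUID left as a placeholder:

    # on a surviving peer, find the UUID the cluster recorded for the failed node
    grep uuid /var/lib/glusterd/peers/*
    # on the reinstalled node, write that UUID into /var/lib/glusterd/glusterd.info
    # before starting glusterd, e.g.:
    #   UUID=<uuid-recorded-by-the-surviving-peers>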
2017 Dec 18
0
Production Volume will not start
On Sat, Dec 16, 2017 at 12:45 AM, Matt Waymack <mwaymack at nsgdv.com> wrote: > Hi all, > > > > I have an issue where our volume will not start from any node. When > attempting to start the volume it will eventually return: > > Error: Request timed out > > > > For some time after that, the volume is locked and we either have to wait > or restart
2017 Dec 15
3
Production Volume will not start
Hi all, I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return: Error: Request timed out For some time after that, the volume is locked and we either have to wait or restart Gluster services. In the glusterd.log, it shows the following: [2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
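What the right fix is depends on what glusterd.log shows, but a hedged sketch of the usual first steps (the volume name is a placeholder):

    gluster volume status myvol        # which bricks and daemons are actually running
    systemctl restart glusterd         # on the affected node, clears a stuck management op
    gluster volume start myvol force   # retries starting only the bricks that are down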
2018 Apr 27
0
Turn off replication
Hi Jose, Why are all the bricks visible in volume info if the pre-validation for add-brick failed? I suspect that the remove-brick wasn't done properly. You can provide the cmd_history.log to verify this. It would be better to get the other log messages as well. I also need to know which bricks were actually removed, the command used, and its output. On Thu, Apr 26, 2018 at 3:47 AM, Jose Sanchez
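For reference, reducing the replica count is done with remove-brick; a minimal sketch, with placeholder volume and brick names, of what a properly executed removal looks like:

    # going from replica 2 down to a plain distributed volume (names illustrative)
    gluster volume remove-brick myvol replica 1 server2:/bricks/b1 force
    gluster volume info myvol          # the removed brick should no longer be listed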
2003 Aug 22
3
PAE removal patch for testing
If you're one of the people who has cvsup'd to 4.8-stable since August 8th and you've since begun to experience panics on a previously stable system, please apply the attached patch and see if your previous stability has been restored. Please tell me your results. Thanks, Mike "Silby" Silbersack -------------- next part -------------- diff -u -r
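For anyone unsure of the mechanics, a sketch of applying such a patch and rebuilding on 4.x; the paths and kernel config name are illustrative:

    cd /usr/src
    patch < /path/to/pae-removal.diff
    make buildkernel KERNCONF=MYKERNEL
    make installkernel KERNCONF=MYKERNEL && reboot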
2018 Apr 25
2
Turn off replication
Looking at the logs, it seems that it is trying to add the brick using the same port that was assigned to gluster01ib. Any ideas?? Jose [2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045)
2017 May 31
1
Snapshot auto-delete unmount problem
Hi, I am having a problem deleting snapshots: gluster is failing to unmount them. I am running CentOS 7.3 with gluster-3.10.2-1. Here is some log output: [2017-05-31 09:21:39.961371] W [MSGID: 106057] [glusterd-snapshot-utils.c:410:glusterd_snap_volinfo_find] 0-management: Snap volume 331ec972f90d494d8a86dd4f69d718b7.glust01-li.run-gluster-snaps-331ec972f90d494d8a86dd4f69d718b7-brick1-b not found
2013 Aug 08
2
not able to restart the brick for distributed volume
Hi All, I am facing issues restarting the gluster volume. When I start the volume after stopping it, gluster fails to start it. Below is the message that I get on the CLI: /root> gluster volume start _home volume start: _home: failed: Commit failed on localhost. Please check the log file for more details. The logs say that it was unable to start the brick [2013-08-08
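When a commit fails on start like this, the brick log named in glusterd's log is the place to look; if the brick process simply failed to come up, a retry with force is the common next step (volume name taken from the excerpt above):

    gluster volume start _home force   # retries the bricks that failed to start
    gluster volume status _home        # confirm the brick's PID and port afterwards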
2013 Mar 19
1
Panic : bad pte
Hello, all my computers running FreeBSD 9.1-RELEASE have panicked. I can only say there is a problem in 9.1-RELEASE, because I had no panics before. What worries me is that my production server also panicked a few days ago; fortunately it does not happen so often. This is a panic that happened on my desktop computer, with a graphics card. The crash usually appears when X starts. GNU
2018 Apr 30
2
Turn off replication
Hi All, we were able to get all 4 bricks distributed, and we can see the right amount of space. But we have been rebalancing 16TB for 4 days now and are still at only 8TB; is there a way to speed it up? There is also data we could remove to speed it up, but what is the best procedure for removing data: from the Gluster main export point, or by going onto each brick and removing it there? We would like
2018 May 02
0
Turn off replication
Hi, Removing data to speed up the rebalance is not something that is recommended. A rebalance can be stopped, but if started again it will start from the beginning (it will have to check and skip the files already moved). The rebalance will take a while; better to let it run. It doesn't have any downside. Unless you touch the backend, the data on the gluster volume will be available for use in spite of
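The stop/restart behavior described above maps onto these commands; the volume name is a placeholder:

    gluster volume rebalance myvol status   # files scanned and moved so far, per node
    gluster volume rebalance myvol stop     # safe, but a later restart rescans from the start
    gluster volume rebalance myvol start    # already-moved files are checked and skipped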
2019 Sep 27
5
[PATCH 1/2] drm/qxl: stop abusing TTM to call driver internal functions
The ttm_mem_io_* functions are actually internal to TTM and shouldn't be used in a driver. Instead call the qxl_ttm_io_mem_reserve() function directly. Signed-off-by: Christian König <christian.koenig at amd.com> --- drivers/gpu/drm/qxl/qxl_drv.h | 2 ++ drivers/gpu/drm/qxl/qxl_object.c | 11 +---------- drivers/gpu/drm/qxl/qxl_ttm.c | 4 ++-- 3 files changed, 5 insertions(+),
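Roughly, the change swaps a call through TTM's io-memory helper for a direct call to the driver's own callback; a sketch of the shape of the change, not the actual patch (the surrounding code and variable names are assumed):

    /* before: routing through the TTM-internal helper */
    ret = ttm_mem_io_reserve(bo->bdev, &bo->mem);

    /* after: calling the driver's own io_mem_reserve implementation directly */
    ret = qxl_ttm_io_mem_reserve(bo->bdev, &bo->mem);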
2019 Sep 30
2
[Spice-devel] [PATCH 1/2] drm/qxl: stop abusing TTM to call driver internal functions
On 27.09.19 at 18:31, Frediano Ziglio wrote: >> The ttm_mem_io_* functions are actually internal to TTM and shouldn't be >> used in a driver. >> > As far as I can see from your second patch, QXL is just using exported > (that is, not internal) functions. > Not that the idea of making them internal is bad, but this comment is > a wrong statement. See the history of