Displaying 20 results from an estimated 200 matches similar to: "Consolidating LVM volumes.."

2006 Aug 05
0
Memory Usage after upgrading to pre-release and removing sendfile
After the upgrade my memory usage is shown like this:

    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   4592 flipl     16   0  197m 150m 2360 S  0.0 14.9   6:17.28 mongrel_rails
   4585 mongrel   16   0  190m 140m 1756 S  0.0 13.9   0:52.86 mongrel_rails
   4579 mongrel   16   0  200m 157m 1752 S  0.0 15.5   0:56.31 mongrel_rails
   4582 mongrel   16   0  189m 139m 1752 S  0.0 13.8
2006 Aug 10
0
Consolidating error_messages_for for multiple objects
Hi there, I have a form that saves a user and an address, which are separate objects. A user has_many addresses. How do I display error messages for the address object, which gets saved via the same HTML form? I'm getting all my validation errors for the user object, but no detailed messages for the address object, despite putting error_messages_for :address in my view.
2013 Nov 11
1
[LLVMdev] [lld] consolidating the usage of saving references
Hi, It looks like each flavor chooses to save references in its own way. The GNU flavor (ELF) uses a single vector of references and uses referenceStartIndex/referenceEndIndex to point to the references for each DefinedAtom. The Darwin (Mach-O) and WinLink (PECOFF) flavors use a different way of storing references. Is there a plan to make the WinLink/Darwin (Mach-O) flavors use a single vector of references too
2009 Jan 13
1
consolidating the NUT documentation on permissions, hotplug and udev
Arnaud et al, I have been meaning to collect some of the documentation updates for permission-related errors, and I was wondering if you would mind if we moved the scripts/udev/README and scripts/hotplug/README files out of scripts/ and into the docs/ directory (probably docs/permissions.txt). We could also cover the *BSD /dev/usb* permission issues there. Any thoughts on this?
2011 Jan 12
6
ZFS slows down over a couple of days
Hi all, I have exchanged my Dell R610 in favor of a Sun Fire 4170 M2 which has 32 GB RAM installed. I am running Sol11Expr on this host and I use it primarily to serve Netatalk AFP shares. From day one, I have noticed that the amount of free RAM decreased, and along with that decrease the overall performance of ZFS decreased as well. Now, since I am still quite a Solaris newbie, I seem to
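A hedged sketch of the usual first checks in this situation, assuming (as the symptoms suggest but the excerpt does not confirm) that the ZFS ARC is what is consuming the free RAM:

  # current ARC size and target size, in bytes
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c

  # overall kernel / ZFS / anon memory breakdown (run as root)
  echo "::memstat" | mdb -k

Comparing the ARC size against the memstat breakdown over a few days would show whether the "missing" RAM is really cache or is leaking elsewhere.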
2006 Mar 04
0
ntop **ERROR** Queue of address '???' failed, code -1 [addr queue=4096/max=4097]
Hi all, Running CentOS 4.2 with the following conditions:

  [root at gatekeeper bin]# uname -srvmpi
  Linux 2.6.9-22.0.1.EL #1 Thu Oct 27 12:26:11 CDT 2005 i686 i686 i386
  [root at gatekeeper bin]# uptime
  07:54:41 up 31 days, 13:45, 6 users, load average: 0.00, 0.02, 0.00
  [root at gatekeeper bin]# rpm -q ntop
  ntop-3.1-1.2.el4.rf
  [root at gatekeeper bin]# df -h
  S.ficheros            Tamaño Usado
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.> I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part. The system reboots immediately. Here is the log in /var/adm/messages:

  Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
  Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40:
  Feb 8
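For reference, the receive form being described looks roughly like this; the pool and snapshot names are examples, since the real layout is not shown in the excerpt:

  # -R sends the dataset with all children and snapshots; -d tells receive
  # to drop the leading pool name and recreate the remaining path under 'backup'
  zfs send -R tank@migrate | zfs receive -d backup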
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
On Sun, Aug 6, 2017 at 4:42 AM, Jim Kusznir <jim at palousetech.com> wrote:
> Well, after a very stressful weekend, I think I have things largely
> working. Turns out that most of the above issues were caused by the linux
> permissions of the exports for all three volumes (they had been reset to
> 600; setting them to 774 or 770 fixed many of the issues). Of course, I
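A sketch of the sort of permission repair being described, assuming a GlusterFS-backed oVirt storage domain whose exports must be owned by vdsm:kvm (uid/gid 36); the path is an example, not one from the thread:

  # restore the ownership and mode oVirt/VDSM expects on the storage domain export
  chown -R 36:36 /gluster_bricks/data/data
  chmod -R 770 /gluster_bricks/data/data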
2013 Jul 24
2
Re: [libvirt-users] Resize errors with virt-resize/vgchange
Hi,
>> >> >> > # virt-resize -d --expand /dev/sda1 --LV-expand /dev/mapper/prop-home
>> >> >> > prop-1.img prop-expand.img
>> >> >> > command line: virt-resize -d --expand /dev/sda1 --LV-expand
>> >> >> > /dev/mapper/prop-home prop-1.img prop-expand.img
>> >> >> > Examining prop-1.img ...
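For context, the usual shape of that workflow, sketched with the image names from the quote; the new size is an example, and the output image has to be created larger than the source before virt-resize runs:

  # make an output image bigger than the source (size is an example)
  truncate -s 30G prop-expand.img

  # grow /dev/sda1 into the new space, then expand the prop-home LV to fill its VG
  virt-resize --expand /dev/sda1 --LV-expand /dev/mapper/prop-home \
      prop-1.img prop-expand.img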
2023 Jul 04
1
remove_me files building up
Hi Strahil, We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight. The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server:

  /dev/sdd1  15G  12G  3.3G  79%
2023 Jul 04
1
remove_me files building up
Hi Liam, I saw that your XFS uses 'imaxpct=25', which for an arbiter brick is a little bit low. If you have free space on the bricks, increase the maxpct to a bigger value, like:

  xfs_growfs -m 80 /path/to/brick

That allows up to 80% of the filesystem to be used for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future. Of course, always
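A minimal sketch of the check-and-grow sequence described above, using the brick path quoted elsewhere in this thread; the 80% target is the value suggested here:

  # show the current imaxpct and inode usage on the arbiter brick
  xfs_info /data/glusterfs/gv1/brick1/brick | grep imaxpct
  df -i /data/glusterfs/gv1/brick1/brick

  # raise the maximum percentage of space that may be used for inodes
  xfs_growfs -m 80 /data/glusterfs/gv1/brick1/brick

  # verify the change took effect
  df -i /data/glusterfs/gv1/brick1/brick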
2011 Jul 20
2
how to add file-based disk space to a guest
hi there, I'm following this documentation to add a file-based disk volume to a KVM guest under CentOS 6.0: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization/chap-Virtualization-Storage_Volumes.html As instructed, I created a "pool" then a "volume", file-based, e.g.:

  mkdir /mnt/raid/kvm_pool1
  virsh # pool-define-as pool1 dir - - - -
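For reference, a sketch of how that sequence usually continues once the pool directory exists; the volume size, format, guest name and target device are assumptions, not values from the original post:

  # define, build and start the directory-backed pool
  virsh pool-define-as pool1 dir - - - - /mnt/raid/kvm_pool1
  virsh pool-build pool1
  virsh pool-start pool1
  virsh pool-autostart pool1

  # create a volume in the pool and attach it to the guest as a second disk
  virsh vol-create-as pool1 guest1-data.img 10G --format raw
  virsh attach-disk guest1 /mnt/raid/kvm_pool1/guest1-data.img vdb --persistent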
2023 Jul 04
1
remove_me files building up
Thanks for the clarification. That behaviour is quite weird, as arbiter bricks should hold only metadata. What does the following show on host uk3-prod-gfs-arb-01:

  du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
  du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
  du -h -x -d 1 /data/glusterfs/gv1/brick2/brick

If indeed the shards are taking space, that is a really strange situation. From which version
2013 Jul 25
2
Re: [libvirt-users] Resize errors with virt-resize/vgchange
Hi,
>> Yes, here's the layout from the vm:
>>
>> # df -h
>> Filesystem      Size  Used Avail Use% Mounted on
>> devtmpfs        7.9G     0  7.9G   0% /dev
>> tmpfs           7.9G     0  7.9G   0% /dev/shm
>> tmpfs           7.9G  643M  7.3G   9% /run
>> tmpfs           7.9G     0  7.9G   0%
2017 May 12
3
strange system outage
On Fri, May 12, 2017 at 11:44 AM, Larry Martell <larry.martell at gmail.com> wrote:
> On Thu, May 11, 2017 at 7:58 PM, Alexander Dalloz <ad+lists at uni-x.org> wrote:
> > On 11.05.2017 at 20:30, Larry Martell wrote:
> >>
> >> On Wed, May 10, 2017 at 3:19 PM, Larry Martell <larry.martell at gmail.com>
> >> wrote:
> >>>
2023 Jul 05
1
remove_me files building up
Hi Strahil, This is the output from the commands:

  root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
  2.2G    /data/glusterfs/gv1/brick1/brick/.glusterfs
  24M     /data/glusterfs/gv1/brick1/brick/scalelite-recordings
  16K     /data/glusterfs/gv1/brick1/brick/mytute
  18M     /data/glusterfs/gv1/brick1/brick/.shard
  0
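A small sketch of the follow-up check implied here, to see how much of each arbiter brick is taken by shards versus the .glusterfs tree; the brick paths are the ones shown above, the loop itself is an assumption:

  for b in /data/glusterfs/gv1/brick1/brick \
           /data/glusterfs/gv1/brick2/brick \
           /data/glusterfs/gv1/brick3/brick; do
      echo "== $b =="
      du -sh "$b/.shard" "$b/.glusterfs" 2>/dev/null
      df -h "$b" | tail -1
  done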
2017 May 11
2
strange system outage
On 11.05.2017 at 20:30, Larry Martell wrote:
> On Wed, May 10, 2017 at 3:19 PM, Larry Martell <larry.martell at gmail.com> wrote:
>> On Wed, May 10, 2017 at 3:07 PM, Jonathan Billings <billings at negate.org> wrote:
>>> On Wed, May 10, 2017 at 02:40:04PM -0400, Larry Martell wrote:
>>>> I have a CentOS 7 system that I run a home grown python daemon on. I
2007 Dec 19
1
Xen pae and 32G memory
Hello, I have a problem with a new install of Xen. I work on a Dell PowerEdge 2950, dual quad-core Xeon and 32G of memory. The domain0 runs on a 32-bit Gentoo. I use Xen 3.1.2 with PAE, Xen kernel 2.6.20. When I boot a normal kernel, I see all of my memory, but when I boot the Xen kernel, only 15G of memory is available. I tried to force the memory size in the boot options in grub
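For context, a sketch of where such options normally go in a legacy grub entry for Xen; the file names, versions and the dom0_mem value are assumptions, not the poster's configuration. dom0_mem only caps the memory handed to domain0, it does not change how much physical memory the hypervisor itself detects:

  # /boot/grub/grub.conf (example entry)
  title Xen 3.1.2, kernel 2.6.20-xen
      root (hd0,0)
      kernel /boot/xen-3.1.2.gz dom0_mem=1024M
      module /boot/vmlinuz-2.6.20-xen root=/dev/sda3 ro
      module /boot/initrd-2.6.20-xen.img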
2009 Dec 03
1
Setting loglevel for specified clients?
Hi all, is there a possibility to set the loglevel for just a few specified clients? With more than 500 clients and "smbd: 10" our log partition (~ 15G) will be filled in a few hours. Thanks Alex
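One common approach (a sketch, not something taken from this thread) is to use smb.conf's include parameter with the %m substitution, so that only named clients pick up a higher debug level; the client name and file paths are examples:

  # /etc/samba/smb.conf, in the [global] section
  log level = 1
  include = /etc/samba/smb.conf.%m

  # /etc/samba/smb.conf.badclient  (only read for the client whose NetBIOS name is "badclient")
  log level = 10

Clients without a matching smb.conf.<name> file keep the default log level, so the log partition only grows for the hosts being debugged.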
2008 Jan 18
1
Mounting /var directory to a new HardDisk
Hi, I have a mail gateway running CentOS where Trend Micro (IMSS) is installed. It works perfectly. Now the problem is that it is running out of hard disk space. Please see below and pay attention to the / file system (/dev/sda6), where only 1.3 GB is available. These are not logical volumes (LVM).

  [root at gateway 17141]# df -h
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/sda6       9.7G  7.9G  1.3G
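A sketch of the usual way to move /var onto a newly added disk, which appears to be what the subject is asking; the device name is an assumption, and the copy should be done in single-user mode or with IMSS and the MTA stopped:

  # format the new disk and mount it temporarily (example device)
  mkfs.ext3 /dev/sdb1
  mkdir /mnt/newvar
  mount /dev/sdb1 /mnt/newvar

  # copy /var preserving ownership, permissions and hard links
  rsync -avxH /var/ /mnt/newvar/

  # switch over, keeping the old copy until the new mount is verified
  mv /var /var.old && mkdir /var
  echo '/dev/sdb1  /var  ext3  defaults  1 2' >> /etc/fstab
  mount /var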