Displaying 20 results from an estimated 80 matches similar to: "GlusterFS with NFS client hang up some times"
2013 Dec 09
1
[CentOS 6] Upgrade to the glusterfs version in base or in glusterfs-epel
Hi,
I'm using glusterfs version 3.4.0 from gluster-epel[1].
Recently, I found out that there's a glusterfs version in the base repo
(3.4.0.36rhs).
So, is it recommended to use that version instead of the gluster-epel version?
If yes, is there a guide to make the switch with no downtime?
When running yum update glusterfs, I got the following error[2].
I found a guide[3]:
> If you have replicated or
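One way to sidestep the conflicting update (an assumption on my part, not something stated in the thread) is to pin glusterfs to the gluster-epel build by excluding the base-repo packages, e.g. in the [base] section of /etc/yum.repos.d/CentOS-Base.repo:

```ini
[base]
# ...existing settings...
# Keep yum from offering the 3.4.0.36rhs build shipped in base,
# so updates keep coming from gluster-epel instead.
exclude=glusterfs*
```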
2013 Apr 20
5
configure error using Lustre 2.3 and OFED 3.5
Configure fails when testing for openib; does anyone have an idea?
Thanks
Michaell
configure:10034: checking whether to enable OpenIB gen2 support
configure:10138: cp conftest.c build && make -d modules CC=gcc -f /home/mhebenst/lustre-2.3.0/build/Makefile LUSTRE_LINUX_CONFIG=/admin/extra/linux-2.6.32-358.2.1.el6.x86_64.crt1/.config LINUXINCLUDE= -I/usr/local/ofed/3.5/src/compat-rdma-3.5/include
2010 Apr 28
1
How to create R package
Hi,
Can you tell me how to create an R package in Windows, and give me an
example that works? Thanks.
2012 Feb 03
6
Spectacularly disappointing disk throughput
Greetings!
I've got a FreeBSD-based (FreeNAS) appliance running as an HVM DomU.
Dom0 is Debian Squeeze on an AMD990 chipset system with IOMMU enabled.
The DomU sees six physical drives: one of them is a USB stick that I've
passed through in its entirety as a block device. The other five are SATA
drives attached to a controller that I've handed to the DomU with PCI
2014 Apr 17
0
KVM guests unusable after install
I'm finding that my KVM images are not bootable after I reboot my host, and
I've detailed my problem here:
http://serverfault.com/questions/589806/kvm-machines-unbootable-after-host-reboot
Any thoughts on why? Here is a repaste of the problem:
****************************************************************************************
I'm finding that my KVM guests are unusable after I
2007 Mar 01
1
whoops, corrupted my filesystem
Hi all-
I corrupted my filesystem by not doing an RTFM first... I got an automated
email that the process monitoring the SMART data from my hard drive had
detected a bad sector. Not thinking (or RTFMing), I ran fsck on my partition,
which is the main partition. Now it appears that I've ruined the
superblock.
I am running Fedora Core 6. I am booting off the Fedora Core 6 Rescue CD in
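The usual way out of a ruined primary superblock is to point e2fsck at one of the backup copies. A minimal sketch, assuming /dev/sda2 is the damaged partition (the device name here is hypothetical, not from the thread): for a 4 KiB block size, mke2fs places backups at 32768 times the powers of 3, 5 and 7, and a dry run of mke2fs lists the exact locations without writing anything.

```shell
# Candidate backup-superblock locations for a 4 KiB-block ext3 filesystem:
# 32768 * {1, 3, 5, 7, 3^2, 5^2, 3^3, 7^2}
for n in 1 3 5 7 9 25 27 49; do echo $((32768 * n)); done

# Then (assumption: /dev/sda2 is the damaged partition):
#   mke2fs -n /dev/sda2      # -n = dry run; prints the real backup locations
#   e2fsck -b 32768 /dev/sda2
```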
2015 Sep 17
0
Re: error: internal error: Failed to reserve port 5908
Try setting autoport to yes in the graphics section of your XML; then libvirt will search for a free port automatically. You can get the port number by running 'ps aux | grep [vmname]' and looking in that line for the port number.
I guess you have another service running that is using port 5908...
-----Original message-----
From: libvirt-users-bounces@redhat.com
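The change being suggested looks roughly like this (a sketch, assuming a VNC console; the element lives under &lt;devices&gt; in the domain XML, and port='-1' is the conventional companion to autoport):

```xml
<graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'/>
```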
2005 Sep 11
3
mkfs.ext3 on a 9TB volume
Hello,
I have:
CentOS4.1 x86_64
directly-attached Infortrend 9TB array QLogic HBA seen as sdb
GPT label created in parted
I want one single 9TB ext3 partition.
I am experiencing crazy behavior from mke2fs / mkfs.ext3 (tried both).
If I create partitions in parted up to approx 4,100,000 MB,
mkfs.ext3 works great. It lists the right number of blocks and creates
a filesystem that fills
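A plausible (though unconfirmed) explanation for the ~4,100,000 MB cutoff: 4 TiB is exactly 2^32 1 KiB blocks, the point where a 32-bit block counter in older e2fsprogs overflows. A quick sanity check of that boundary expressed in MB:

```shell
# 4 TiB in MB (MiB, strictly): 4 * 1024 (GiB) * 1024 (MiB) -- close to the
# ~4,100,000 MB threshold the poster observed in parted.
echo $((4 * 1024 * 1024))
```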
2015 Sep 17
2
error: internal error: Failed to reserve port 5908
After saving a particular VM running WinXP, any attempt to resume it
(even when no other VM's are running) generates the following error:
olympus ~ # virsh restore /var/lib/libvirt/qemu/save/WinXP.save
error: Failed to restore domain from /var/lib/libvirt/qemu/save/WinXP.save
error: internal error: Failed to reserve port 5908
This started sometime towards the end of last year with only the
2010 Dec 01
1
mem settings for dom0
Hi List,
I have just two basic questions:
1) Should I set, for example, dom0_mem=2048M in grub?
2) Should I run xm mem-max 0 4000?
virsh # dominfo 0
Id: 0
Name: Domain-0
UUID: 00000000-0000-0000-0000-000000000000
OS Type: linux
State: running
CPU(s): 8
CPU time: 115.3s
Max memory: 4096000 kB
Used memory: 2075136 kB
Autostart:
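The two numbers in the question line up with the dominfo output once the units are converted: dominfo reports kB, while dom0_mem and xm mem-max take MB. Checking the values shown above:

```shell
# Max memory 4096000 kB -> MB: matches the "xm mem-max 0 4000" in the question.
echo $((4096000 / 1024))
# dom0_mem=2048M -> kB: roughly the 2075136 kB "Used memory" shown by dominfo.
echo $((2048 * 1024))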
2019 Oct 12
1
qemu on centos 8 with nvme disk
Hi Alan,
Yes, I have partitioned similarly, with a swap, but as I mentioned it is slow!
What command line do you use ?
Device Boot Start End Blocks Id System
/dev/nvme0n1p1 2048 102402047 51200000 83 Linux
/dev/nvme0n1p2 102402048 110594047 4096000 82 Linux swap / Solaris
/dev/nvme0n1p3 110594048 112642047 1024000 6 FAT16
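The partition table above is internally consistent: fdisk's Blocks column is in 1 KiB units, so with 512-byte sectors it equals (End - Start + 1) / 2. Verifying the first two rows:

```shell
# p1: (102402047 - 2048 + 1) / 2 should reproduce the 51200000 Blocks value.
echo $(( (102402047 - 2048 + 1) / 2 ))
# p2 (swap): should reproduce 4096000.
echo $(( (110594047 - 102402048 + 1) / 2 ))
```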
2003 Oct 29
0
Different SAMBA reaction about file permissions
How come I am not able to delete this "aquila_test/toto2" file from our
"data" directory (see smb.conf below) but can delete it (a different copy,
but with the same permissions and owners/groups) from our "home" directory
(see smb.conf below) from a "Windows 2000" PC using samba? (We don't have
this problem when using unix commands.)
The file is on a SUN SunOS 5.8 server
2020 May 28
2
Recover from an fsck failure
This is CentOS-6x.
I have cloned the HDD of a CentOS-6 system. I booted a host with that drive
and received the following error:
checking filesystems
/dev/mapper/vg_voinet01-lv_root: clean, 128491/4096000 files, 1554114/16304000
blocks
/dev/sda1: clean, 47/120016 files, 80115/512000 blocks
/dev/mapper/vg_voinet01-lv_home: clean, 7429/204800 files, 90039/819200 blocks
2003 Aug 18
2
another seriously corrupt ext3 -- pesky journal
Hi Ted and all,
I have a couple of questions near the end of this message, but first I have
to describe my problem in some detail.
The power failure on Thursday did something evil to my ext3 file system (box
running RH9+patches, ext3, /dev/md0, raid5 driver, 400GB f/s using 3x200GB
IDE drives and one hot-spare). The f/s got badly corrupted and the symptoms
are very similar to what Eddy described
2003 Nov 08
2
malloc errors? out of memory with many files on HP-UX
Hi, folks.
I've started getting these errors from rsync, and any help would be
appreciated:
>ERROR: out of memory in string_area_new buffer
>rsync error: error allocating core memory buffers (code 22) at util.c(115)
>ERROR: out of memory in string_area_new buffer
>rsync error: error allocating core memory buffers (code 22) at util.c(115)
>ERROR: out of memory in
2016 Feb 08
1
[PATCH] tests: reduce sizes of scratch disks to 2 GB
1 GB should be enough to create a btrfs filesystem, even with 64K page
size; hence, make the /dev/sda and /dev/sdb test devices smaller so
there is less space taken during the test run.
Followup of commit 8ffad75e5b610274a664a00f1f1186070b602e18 and
commit 9e9b648770f9b8dbe8f280e4b5d1f80c4d689130.
---
docs/guestfs-hacking.pod | 4 ++--
generator/actions.ml | 10 +++++-----
2006 Apr 09
1
Table creation failed
Hello,
I come to you because I have something that I don't understand:
I'm using udev on a Debian sid with a 2.6.15.1 kernel.
I have created a deprecated RAID at /dev/md0.
When I tried doing mkfs.ext3 /dev/md0, I got:
mke2fs 1.39-WIP (29-Mar-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
4643968 inodes, 9277344 blocks
463867 blocks (5.00%)
2012 Mar 25
3
attempt to access beyond end of device and livelock
Hi Dongyang, Yan,
When testing BTRFS with RAID 0 metadata on linux-3.3, we see discard
ranges exceeding the end of the block device [1], potentially causing
data loss; when this occurs, filesystem writeback becomes catatonic due
to continual resubmission.
The reproducer is quite simple [2]. Hope this proves useful...
Thanks,
Daniel
--- [1]
attempt to access beyond end of device
ram0: rw=129,
2013 Aug 30
0
Re: Strange fsck.ext3 behavior - infinite loop
On 2013-08-29, at 7:48 PM, Richards, Paul Franklin wrote:
> Strange behavior with fsck.ext3: how to remove a long orphaned inode list?
>
> After copying data over from one old RAID to another new RAID with rsync, the dump command would not complete because of filesystem errors on the new RAID. So I ran fsck.ext3 with the -y option and it would just run in an infinite loop restarting
2001 Oct 05
0
"File size limit exceeded" when running /sbin/mke2fs -j /dev/sdb1
Hi!
I have problem making ext3 FS on new disk. When I run mke2fs, it stops
and gives me: "File size limit exceeded". Is this known issue?
I'm running linux-2.4.10 with ext3 patch, e2fsprogs-1.25 freshly compiled.
Cheers,
Vita
Appended are outputs of following programs:
bash /usr/src/linux/scripts/ver_linux
/sbin/mke2fs -m0 -v -j /dev/sdb1
fdisk -l /dev/sdb
strace