similar to: quota broken for large NFS mount

Displaying 20 results from an estimated 110 matches similar to: "quota broken for large NFS mount"

2017 Nov 13
1
Shared storage showing 100% used
Hello list, I recently enabled shared storage on a working cluster with nfs-ganesha and am just storing my ganesha.conf file there so that all 4 nodes can access it (baby steps). It was all working great for a couple of weeks until I was alerted that /run/gluster/shared_storage was full; see below. There was no warning; it went from fine to critical overnight.
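A first diagnostic sketch, assuming the default volume name gluster_shared_storage that enabling shared storage creates:

    # What is actually consuming the shared-storage mount?
    df -h /run/gluster/shared_storage
    du -sh /run/gluster/shared_storage/* | sort -h

    # Inspect the backing volume itself
    gluster volume status gluster_shared_storage
    gluster volume info gluster_shared_storage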
2010 Jul 14
1
GPT Partitions >2.2T with Centos 5.5
Dear all, unfortunately I observe drastic drops in read performance when I connect LUNs >2.2T to our CentOS 5.5 servers. I suspect issues with the GUID Partition Table, since it happens reproducibly only on the LUNs >2.2T, and performance returns to normal when using a LUN <2T with "normal" legacy MBR partitions. All machines are CentOS 5.5 (and RHEL 5.5) on IBM Blades
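If GPT partition alignment is the suspect, a minimal sketch of creating an aligned partition with parted (the device name is a placeholder; note the EL5-era parted may not accept -a optimal and would need explicit start offsets instead):

    # Destroys existing data: label the LUN GPT, then create one
    # partition spanning the whole device on aligned boundaries
    parted /dev/sdX mklabel gpt
    parted -a optimal /dev/sdX mkpart primary 0% 100%
    parted /dev/sdX print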
2017 Oct 03
0
multipath
I have inherited a system set up with multipath, which is not something I have seen before, so I could use some advice. The system is a Dell R420 with 2 LSI SAS2008 HBAs, 4 internal disks, and an MD3200 storage array attached via SAS cables. Oh, and CentOS 6. lsblk shows the following: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sdd 8:48 0 931.5G 0 disk └─sdd1
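To see how the MD3200 LUNs map onto the duplicated sdX devices, something like the following (assumes device-mapper-multipath is installed and running):

    # Multipath topology: each mpath device with its component SAS paths
    multipath -ll

    # Cross-reference the raw block devices against the multipath maps
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
    ls -l /dev/mapper/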
2009 Oct 08
12
resolv.conf rewritten every reboot. How to figure out who and why?
My machine has a static IP, with dhcp and IPv6 disabled. Every time I reboot, some process rewrites /etc/resolv.conf, including a comment about dhcpclient. The only package I have installed that shows up in "rpm -qa|grep -i dhcp" is dhcpv6-client-1.0.10-16.el5, and nothing in there is named dhcpclient. I'd like to figure out what software is rewriting this file and why. man 5
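One way to narrow down the writer, assuming EL5-style ifcfg files; an interface still running dhclient without PEERDNS=no is the usual culprit:

    # Any interface letting DHCP manage DNS?
    grep -i PEERDNS /etc/sysconfig/network-scripts/ifcfg-*
    # For a BOOTPROTO=dhcp interface, PEERDNS=no stops the rewrite

    # Blunt test: make the file immutable and see what complains at boot
    chattr +i /etc/resolv.conf
    # (undo with: chattr -i /etc/resolv.conf)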
2011 Oct 18
1
problem with project command in rgdal
Hi, I'm trying to analyse some data and need to set the geographic coordinate system before I can do the analysis. I've been trying to use the project command in rgdal but keep getting an error message saying: Error in project(locationsMatrix, PROJECTION.OUT) : latitude or longitude exceeded limits (PROJECTION.OUT <- "+proj=aea +lat_1=-18 +lat_2=-36 +lat_0=0 +lon_0=132
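That error usually means the input matrix has its columns in (lat, lon) order rather than the (lon, lat) order project() expects, or that the values are not decimal degrees. A hedged R sketch (lon and lat are placeholder vectors, and the proj4 string is truncated in the post):

    library(rgdal)
    PROJECTION.OUT <- "+proj=aea +lat_1=-18 +lat_2=-36 +lat_0=0 +lon_0=132"
    # project() wants a two-column matrix: longitude first, latitude
    # second, both in decimal degrees
    locationsMatrix <- cbind(lon, lat)
    xy <- project(locationsMatrix, PROJECTION.OUT)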
2005 Mar 23
2
Mac/Samba capacity detection problem
Hi, We are having a problem between Linux 2.6.9-prep #12 SMP Mon Dec 6 12:08:34 CST 2004 x86_64 x86_64 x86_64 GNU/Linux (Samba version 3.0.9-1.fc3) and Mac OS X 10.3.5 (Samba version 3.0.0rc2). Basically the issue is that the Mac reports the Linux share as being full and won't allow files to be copied. We are able to work around it using this method: Create a folder on the Linux
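A common workaround when a client misparses large free-space values is to point dfree command in smb.conf at a wrapper script that caps what gets reported; a sketch under that assumption (the cap value and script path are arbitrary examples):

    #!/bin/sh
    # Samba passes the queried directory as $1 and expects
    # "<total blocks> <available blocks>" (1K blocks) on stdout.
    df -k "$1" | awk 'NR==2 {
        cap = 1073741824        # report at most 1TB worth of 1K blocks
        print ($2 > cap ? cap : $2), ($4 > cap ? cap : $4)
    }'
    # and in smb.conf:  dfree command = /usr/local/bin/dfree.sh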
2012 Jan 19
1
converting a for loop into a foreach loop
Dear all, Just wondering if someone could help me out converting my code from a for() loop into a foreach() loop, or using one of the apply() functions. I have a very large dataset, so I'm hoping to make use of a parallel backend to speed up the processing time. I'm having trouble selecting three variables in the dataset to use in the foreach() loop. My for() loop code is:
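A minimal sketch of the for-to-foreach shape, assuming a doParallel backend; dat, the three variable names, and the loop body are placeholders, since the original loop is truncated:

    library(foreach)
    library(doParallel)

    cl <- makeCluster(4)    # number of workers; match your core count
    registerDoParallel(cl)

    # Each iteration returns one result; .combine stitches them together
    results <- foreach(i = seq_len(nrow(dat)), .combine = rbind) %dopar% {
      # body of the original for() loop goes here, e.g. operating on
      dat[i, c("var1", "var2", "var3")]
    }

    stopCluster(cl)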
2004 Jan 30
3
Call quality questions
Our basic system is as follows: P4 3.0 GHz w/ HT, 1GB PC3200 RAM, 120 GB HDD, RH 9.0 OS, * from CVS several weeks ago, working OK for routing, VM, and AA; calls come in on separate PSTN lines to an Adtran TSU 600, into the * server through a T100P card. The hardware is not taxed at all, with at most a little over 20% processor utilization, low memory use, etc. All phones are SNOM 200s with various firmware revisions
2010 Mar 19
3
zpool I/O error
Hi all, I'm trying to delete a zpool and when I do, I get this error: # zpool destroy oradata_fs1 cannot open 'oradata_fs1': I/O error # The pools I have on this box look like this: # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT oradata_fs1 532G 119K 532G 0% DEGRADED - rpool 136G 28.6G 107G 21% ONLINE - # Why
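A DEGRADED pool that cannot even be opened usually has a failed or missing device; a couple of things worth trying, sketched:

    # Which device is failing, and what errors has ZFS recorded?
    zpool status -v oradata_fs1

    # If the pool is beyond saving, a forced destroy sometimes succeeds
    # where the plain destroy returns an I/O error
    zpool destroy -f oradata_fs1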
2012 Feb 01
2
Doubts about dsync, mdbox, SIS
I've been running continuous dsync backups of our Maildirs for a few weeks now, with the destination dsync server using mdbox and SIS. The idea was that the destination server would act as a warm copy of all our active users' data. The active servers are using Maildir and have: $ df -h /usr/local/atmail/users/ Filesystem Size Used Avail Use% Mounted on /dev/atmailusers
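For reference, the one-way backup invocation such a setup would typically use (Dovecot 2.x syntax; user and host are placeholders, and the mdbox/SIS settings live in the destination's own dovecot.conf):

    # Push one user's mail to the warm standby over ssh
    dsync -u someuser@example.com backup ssh backuphost dsync -u someuser@example.com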
2010 Feb 12
13
SSD and ZFS
Hi all, just after sending a message to sunmanagers I realized that my question should rather have gone here, so sunmanagers please excuse the double post: I have inherited an X4140 (8 SAS slots) and have just set up the system with Solaris 10 09. I first set up the system on a mirrored pool over the first two disks pool: rpool state: ONLINE scrub: none requested config: NAME
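If the remaining slots are to take SSDs, the usual ZFS split is a mirrored log device plus cache devices; a hedged sketch (device names are placeholders, and some Solaris 10 releases restrict log devices on root pools):

    # Mirrored ZIL (slog): improves synchronous write latency
    zpool add rpool log mirror c1t6d0 c1t7d0

    # L2ARC read cache: a single device is fine; its contents are disposable
    zpool add rpool cache c1t5d0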
2005 Aug 22
2
64 bit hardware and filesystem size limit
We recently bought a 32-bit Xeon system with a 12-port 3Ware RAID card and a dozen 500GB drives. We wanted to create 4TB drive arrays; however, we soon discovered that there is about a 2.2TB drive array size limit on 32-bit hardware. Does that sound correct? Would replacing the 32-bit mobo/cpu with a 64-bit mobo/cpu allow us to use drive arrays larger than 2.2TB? Thanks.
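The ~2.2TB ceiling is usually the MBR partition table's 32-bit sector count (and, on older 32-bit kernels, large-block-device support via CONFIG_LBD), not the CPU word size, so swapping in a 64-bit board alone would not lift it; a GPT label would. The arithmetic:

    # 2^32 sectors x 512 bytes/sector
    echo $((2**32 * 512))    # 2199023255552 bytes, i.e. ~2.2TB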
2003 Mar 30
1
[RFC][patch] dynamic rolling block and sum sizes II
Mark II of the patch set. The first patch (dynsumlen2.patch) increments the protocol version to support per-file dynamic block checksum sizes. It is a prerequisite for varsumlen2.patch. varsumlen2.patch implements per-file dynamic block and checksum sizes. The current block size calculation only applies to files between 7MB and 160MB, setting the block size to 1/10,000 of the file length for a
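Worked out for the stated bounds (assuming 1/10,000, as corrected above), that calculation gives:

    # block size = file length / 10000, for files between 7MB and 160MB
    echo $((7000000   / 10000))    # 7MB file   -> 700-byte blocks
    echo $((160000000 / 10000))    # 160MB file -> 16000-byte blocks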
2007 Jan 23
1
ocfs2 kernel bug in Fedora Core 4 update kernel
OS: Fedora Core release 4 (Stentz) KERNEL: Linux rack1.ape 2.6.17-1.2142_FC4smp #1 SMP Tue Jul 11 22:57:02 EDT 2006 i686 i686 i386 GNU/Linux CLUSTER: 11 Linux kernels, mixed environment FC4, FC5, FC6 SAN: FC Infortrend storage, QLogic 16-port FC switch, LSI FC929X FC adapter (21224,1):ocfs2_truncate_file:242 ERROR: bug expression: le64_to_cpu(fe->i_size) != i_size_read(inode)
2011 Aug 10
9
zfs destroy snapshot takes hours
Hi, I am facing an issue with zfs destroy: it takes almost 3 hours to delete a snapshot of size 150G. Could you please help me resolve this issue and explain why zfs destroy takes this much time? Taking a snapshot completes within a few seconds. I have tried removing an older snapshot, but the problem is the same. =========================== I am using: Release: OpenSolaris
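Dedup is the classic cause of multi-hour snapshot destroys, since freeing every block requires a dedup-table lookup; a few checks, sketched (the pool name tank is a placeholder):

    # Is dedup enabled anywhere on the pool?
    zfs get -r dedup tank

    # How large is the dedup table (does it fit in RAM/ARC)?
    zdb -DD tank

    # Snapshot sizes, to confirm what is actually being freed
    zfs list -t snapshot -o name,used,refer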
2011 Nov 09
3
Data distribution not even between vdevs
Hi list, My ZFS write performance is poor and I need your help. I created a zpool with 2 raidz1 vdevs. When the space was about to be used up, I added another 2 raidz1 vdevs to extend the zpool. After some days the zpool was almost full, so I removed some old data. But now, as shown below, the first 2 raidz1 vdevs are about 78% used and the last 2 raidz1 vdevs are about 93% used. I have a line in /etc/system set
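ZFS allocates new writes in proportion to each vdev's free space but never rebalances what is already written, so uneven usage like this persists until the data is rewritten; the per-vdev picture can be confirmed with:

    # Capacity and allocation broken out per raidz1 vdev
    zpool iostat -v <poolname>

    # Only rewriting the data (zfs send/recv, or copying files) will
    # redistribute it across the newer vdevs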
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem, and I wanted to separate the MGS/MDS from the OSS of the previous setup), and then did this: For example: mount -t ldiskfs /dev/old /mnt/ost_old mount -t ldiskfs /dev/new /mnt/ost_new rsync -aSv /mnt/ost_old/ /mnt/ost_new # note trailing slash on ost_old/ If you are unable to connect both
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > I will try to explain how you can end up in split-brain even with cluster > wide quorum: Yep, the explanation made sense. I hadn't considered the possibility of alternating outages. Thanks! > > > It would be great if you can consider configuring an arbiter or > > > replica 3 volume. > >
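For the record, converting an existing replica 2 volume to an arbiter configuration looks roughly like this (volume name, host, and brick path are placeholders):

    # Adds one arbiter brick per replica set; the arbiter holds only
    # metadata, so it needs comparatively little space
    gluster volume add-brick <volname> replica 3 arbiter 1 arbiterhost:/bricks/arb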
2010 Oct 04
3
EXT4 mount issue
Hi All, When a couple of EXT4 filesystems are mounted on a server I get the messages Oct 1 18:49:42 sraid3 kernel: EXT4-fs (sdb): mounted filesystem without journal Oct 1 18:49:42 sraid3 kernel: EXT4-fs (sdc): mounted filesystem without journal in the system logs. My confusion is: why are they mounted without a journal? They were both created with mkfs -t ext4 /dev/sdb mkfs -t ext4
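Whether a journal actually exists is recorded in the superblock feature list, and one can be added after the fact; a sketch:

    # has_journal should appear among the filesystem features
    tune2fs -l /dev/sdb | grep -i features

    # If it is missing, add a journal (with the filesystem unmounted)
    tune2fs -O has_journal /dev/sdb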