Similar to: "quota for ocfs2 do not warn a exceed for block soft-limit sometimes"

Displaying 20 results from an estimated 500 matches similar to: "quota for ocfs2 do not warn a exceed for block soft-limit sometimes"

2016 Apr 22 (7 replies): [OT] disk utility showing message "the partition is misaligned by"
greetings. centos 6.7 [current]. 'disk utility' has started showing the message: WARNING: The partition is misaligned by 2560 bytes. This may result in very poor performance. Repartitioning is suggested. This is for sdc5, the /home partition:
/dev/sdc5 302243312 156348604 130534968 55% /home
/dev/sdc7 80854912 57088 76683952 1% /hdd/c/07
other than time involved to backup
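
A quick way to confirm alignment for one partition, assuming the disk and partition number from the thread (/dev/sdc, partition 5); parted reports whether the partition satisfies the disk's optimal alignment:

$ sudo parted /dev/sdc align-check optimal 5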

2016 Apr 22 (2 replies): [OT] disk utility showing message "the partition is misaligned by"
On 04/22/16 08:19, Leon Fauster wrote:
> check it with:
>
>   blockdev --getalignoff /dev/sd
>
> (if a '0' is returned, the partition is aligned)
Leon, thank you for the reply.
]$ sudo blockdev --getalignoff /dev/sdc1
0
]$ sudo blockdev --getalignoff /dev/sdc2
0
]$ sudo blockdev --getalignoff /dev/sdc5
2560
]$ sudo blockdev --report /dev/sdc1
RO

2012 Apr 23 (5 replies): 'filesystem resize max' tries to use devid 1
Back story: I started my pool with a 200GB partition at the end of my drive (sdc5), until I was able to clear out the data at the beginning of my drive. When I was ready, I ran `btrfs dev add /dev/sdc4 /` then `btrfs dev del /dev/sdc5 /`.
$ sudo btrfs fi resize max /
Resize '/' of 'max'
ERROR: unable to resize '/' - Invalid argument in
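
The "Invalid argument" here is consistent with `resize max` implicitly targeting devid 1, which was just deleted. A minimal sketch of the usual workaround, assuming the remaining device was assigned devid 2 (check the real id first):

$ sudo btrfs filesystem show /
# note the devid listed for /dev/sdc4 (assumed to be 2 below)
$ sudo btrfs filesystem resize 2:max /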

2015 Apr 09 (2 replies): install problem
I shot myself in the foot today. I had a CentOS 5.11 install running fine. Doing a backup, I overwrote the /bin directory by mistake. I couldn't get my machine to recognize a CentOS 6.5 or 6.6 install DVD, so I put in the original CentOS 5.10 install disc and re-installed. No problem. During the text installer, I told it to install grub on /dev/sdc1, which is /boot. My raid arrays with lots

2014 Jun 24 (3 replies): How to remove LVM Physical Volume from Volume Group?
Hi. I have a volume group, say vg_data. It consists of /dev/sdd5, sdd6 and sdd7. I added sdc5. Now I want to remove (free) sdd7 and use it for a RAID partition. What are the commands (in order) I need to perform? I failed to find a clear howto. vg_data has only one partition; total size is over 1TB, free space is about 500GB so
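
A sketch of the usual sequence, assuming there is enough free space in vg_data to absorb sdd7's allocated extents (names taken from the post):

$ sudo pvmove /dev/sdd7             # migrate allocated extents off the PV
$ sudo vgreduce vg_data /dev/sdd7   # remove the now-empty PV from the VG
$ sudo pvremove /dev/sdd7           # wipe the PV label so the partition is free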

2017 Sep 22 (2 replies): sparse files on EC volume
Hello. I'm running some tests to compare performance between a Gluster FUSE mount and formatted sparse files (located on the same Gluster FUSE mount). The Gluster volume is EC (same for both tests). I'm seeing a HUGE difference and trying to figure out why. Here is an example. GlusterFUSE mount:
# cd /mnt/glusterfs
# rm -f testfile1 ; dd if=/dev/zero of=testfile1 bs=1G count=1
1+0 records
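
For context, the "formatted sparse file" setup being compared can look roughly like this (a sketch; the file name, size, and loop mount point are assumptions):

# truncate -s 100G /mnt/glusterfs/disk1.img      # sparse backing file on the FUSE mount
# mkfs.xfs /mnt/glusterfs/disk1.img              # format the file itself
# mount -o loop /mnt/glusterfs/disk1.img /mnt/loop
# dd if=/dev/zero of=/mnt/loop/testfile1 bs=1G count=1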

2003 May 01 (1 reply): Batch Mode?
I realize batch mode is still experimental, but I was hoping there might be a workaround for a problem I am getting. I have been trying to run some tests, and I get the below error when I use the --read-batch option. I can successfully create an initial set of batch files, then a second set based upon a few modified test files from the first batch. When I first run the --read-batch option
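
A minimal batch-mode round trip for reference (paths are hypothetical):

$ rsync -av --write-batch=/tmp/batch1 src/ dest/   # sync and record the changes
$ rsync -av --read-batch=/tmp/batch1 mirror/       # replay them on another copy of dest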

2016 Apr 22 (0 replies): [OT] disk utility showing message "the partition is misaligned by"
On 22.04.2016 at 12:40, g <geleem at bellsouth.net> wrote:
> greetings.
>
> centos 6.7 [current]
>
> 'disk utility' has started showing message:
>
> WARNING: The partition is misaligned by 2560 bytes. This may
> result in very poor performance. Repartitioning is suggested.
>
> for sdc5 - /home partition.
>
> /dev/sdc5 302243312

2017 Sep 26 (2 replies): sparse files on EC volume
Hi Xavi. At this time I'm using 'plain' bricks with XFS. I'll be moving to LVM-cached bricks. There is no RAID for data bricks, but I'll be using hardware RAID10 for SSD cache disks (I can use 'writeback' cache in this case). 'Small file performance' is the main reason I'm looking at different options, i.e. using formatted sparse files. I spent considerable

2005 Mar 21 (3 replies): routes.rb question.
I have a simple program that reads all HTML files from a directory and returns parts of the content and the file names, which are HREFs to those files. I build the HREF string dynamically as PATH_TO_DOC_ROOT + "file_name". My PATH_TO_DOC_ROOT = http://127.0.0.1:3000/docman/public/docs. So, the final link that I am interested in might look like this:

2008 Aug 29 (4 replies): Best method for booting logical partitions via syslinux...
Hi. Which is the preferable method for booting multiple logical partitions via syslinux?
1. Create separate logical (FAT32) partitions for 'each' Live CD, extracted and prepared identically via syslinux/syslinux.cfg (sdc5, sdc6, sdc7 etc.)
2. Use Grub2 to attempt to boot separate logical partitions with the same extracted Live CD plan as above.
3. Use the chain.c32 module that
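
Option 3 usually ends up looking like this in syslinux.cfg, assuming sdc is the third BIOS disk (hd2) and the extracted Live CDs sit on logical partitions 5 and 6 (all assumptions):

LABEL live1
  COM32 chain.c32
  APPEND hd2 5

LABEL live2
  COM32 chain.c32
  APPEND hd2 6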

2006 Aug 10 (3 replies): MD raid tools ... did I miss something?
Hi. I have a degraded array /dev/md2:
$ mdadm -D /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Thu Oct  6 20:31:57 2005
     Raid Level : raid5
     Array Size : 221953536 (211.67 GiB 227.28 GB)
    Device Size : 110976768 (105.84 GiB 113.64 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 2
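
With 2 of 3 raid5 members present, the usual recovery is to add a replacement device and let md rebuild (the partition name is an assumption):

$ sudo mdadm /dev/md2 --add /dev/sdc5   # hypothetical replacement member
$ cat /proc/mdstat                      # watch the resync progress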

2017 Sep 26 (0 replies): sparse files on EC volume
Hi Dmitri,
On 22/09/17 17:07, Dmitri Chebotarov wrote:
> Hello
>
> I'm running some tests to compare performance between Gluster FUSE mount
> and formatted sparse files (located on the same Gluster FUSE mount).
>
> The Gluster volume is EC (same for both tests).
>
> I'm seeing HUGE difference and trying to figure out why.
Could you explain what hardware

2017 Jan 03 (2 replies): Inconsistent behavior using 3.1.2 from macOS 10.12.2 to an AFP mount
Hi, I've been attempting to use rsync 3.1.2 to copy files from a macOS 10.12.2 system to an AFP-mounted share. The command I'm using is:
rsync -avAX -M--fake-super ./testDir ./mnt/testDir/
This works fine and all the extended attributes are copied and readable. I then try to use rsync to copy these files back with:
rsync -avAX --fake-super -M--super ./mnt/testDir/ ./testDir2/
This
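
--fake-super stores ownership and mode in an rsync-private extended attribute on the destination instead of applying them. One way to inspect what actually landed on the share (macOS xattr tool; the file path is hypothetical, and the exact attribute name varies by platform):

$ xattr -l ./mnt/testDir/somefile    # look for an rsync.%stat attribute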

2016 Apr 22 (0 replies): [OT] disk utility showing message "the partition is misaligned by"
On 04/22/2016 09:43 AM, g wrote:
> ]$ sudo blockdev --getalignoff /dev/sdc1
> 0
> ]$ sudo blockdev --getalignoff /dev/sdc2
> 0
> ]$ sudo blockdev --getalignoff /dev/sdc5
> 2560
> ]$ sudo blockdev --report /dev/sdc1
> RO   RA   SSZ  BSZ  StartSec  Size       Device
> rw   256  512  4096 2048      838860800  /dev/sdc1
> ]$ sudo blockdev --report
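
The reported 2560-byte offset is just the partition's byte offset modulo the 4096-byte physical block size, which can be checked directly from the --report columns above (a sketch; SSZ is 512 here):

]$ START=$(sudo blockdev --report /dev/sdc5 | awk 'NR==2 {print $5}')
]$ echo $(( (START * 512) % 4096 ))   # 0 means aligned; 2560 matches the warning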

2013 Sep 10 (2 replies): large SCSI RAID, replacing server
I have a system running CentOS 6.3, with a SCSI attached RAID: http://www.raidweb.com/index.php/2012-10-24-12-40-09/janus-ii-scsi/2012-10-24-12-40-59.html For disaster recovery purposes, I want to build up a spare system which could take the place of the server hosting the RAID above. But here's what I see:
# fdisk -l /dev/sdc
WARNING: GPT (GUID Partition Table) detected on
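
fdisk on CentOS 6 does not understand GPT, so this warning is expected on a large array; a GPT-aware tool shows the real layout (a sketch):

# parted /dev/sdc print    # or: gdisk -l /dev/sdc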

2004 Nov 16 (7 replies): Problem on FC3
I'm getting a VFS kernel panic when trying to boot FC3 from Xen. It's the one that says I must supply a valid "root=" option. (Sorry, I don't have it verbatim.) I'm using the xen-2.0 binary installer, kernel 2.6.9. I am using ext3fs, but I am fairly certain that I have compiled support into the kernel. (I've tried compiling xen from the 2.0
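
With Xen 2.0-era tools the root device is normally passed through the domain config file; a hedged sketch (kernel path, backing device, and device names are all assumptions):

kernel = "/boot/vmlinuz-2.6.9-xenU"
disk   = ['phy:vg0/fc3root,sda1,w']
root   = "/dev/sda1 ro"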

2017 Sep 27 (0 replies): sparse files on EC volume
Have you done any testing with replica 2/3? IIRC my replica 2/3 tests outperformed EC on smallfile workloads; it may be worth looking into if you can't get EC up to where you need it to be. -b
----- Original Message -----
> From: "Dmitri Chebotarov" <4dimach at gmail.com>
> Cc: "gluster-users" <Gluster-users at gluster.org>
> Sent: Tuesday,
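
For comparison, a replica-3 test volume can be created roughly like this (host names and brick paths are assumptions):

# gluster volume create testvol replica 3 \
    host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1
# gluster volume start testvol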

2006 Mar 23 (17 replies): Poor performance on NFS-exported ZFS volumes
I'm seeing some pretty pitiful performance using ZFS on an NFS server, with a ZFS volume exported (only with rw=host.foo.com,root=host.foo.com opts) and mounted on a Linux host running kernel 2.4.31. The Linux kernel I'm working with is limited in that I can only do NFSv2 mounts... regardless of that aspect, I'm sure something's amiss. I mounted the zfs-based
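
The export options quoted map onto the dataset's sharenfs property; a sketch (the pool/dataset name is an assumption):

# zfs set sharenfs='rw=host.foo.com,root=host.foo.com' tank/export
# zfs get sharenfs tank/export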

2013 May 23 (11 replies): raid6: rmw writes all the time?
Hi all, we got a new test system here and I just also tested btrfs raid6 on it. Write performance is slightly lower than hw-raid (LSI megasas) and md-raid6, but it probably would be much better than either of these two if it wouldn't read all the time during the writes. Is this a known issue? This is with linux-3.9.2. Thanks, Bernd
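
Reads issued while a purely sequential write is running are what distinguish read-modify-write; they show up per device in iostat (device names are assumptions):

$ iostat -xm 1 sdb sdc sdd sde   # sustained non-zero read throughput during the write points at RMW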