Displaying 20 results from an estimated 33 matches for "512bytes".
2005 Sep 07
7
Asynchronous IO
Hi,
I have installed Xen on Linux 2.6.11.10 and I am trying to do asynchronous
direct IO on SAS drives. The application which does the asynchronous direct
IO on the SAS drive is running on Domain 0. The IOPs I get for a 512Bytes
IO size is 67, but if I do the same operation on a native Linux 2.6.11.10
kernel, I get 267 IOPs. Can anyone tell me why this huge difference? Am I
missing something? In the current setup on Xen, if I do synchronous IO,
then I am getting 265 IOPs, which is expected. So I am wo...
2015 Jun 01
2
Status of support for sector sizes >512b
Hello,
can someone give me a short summary on the status of support for sector
sizes >512bytes in SYSLINUX?
The installer obviously still doesn't support it, at least it complains
about "unsupported sector size" when used on a 4k-sector disk. I read
somewhere that this is only a problem of the installer, and that
SYSLINUX itself would work if it got installed "by other to...
2020 Mar 24
2
Building a NFS server with a mix of HDD and SSD (for caching)
Hi list,
I'm building a NFS server on top of CentOS 8.
It has 8 x 8 TB HDDs and 2 x 500GB SSDs.
The spinning drives are in a RAID-6 array. They are 4K sector size.
The SSDs are in RAID-1 array and with a 512bytes sector size.
I want to use the SSDs as a cache using dm-cache. So here's what I've done
so far:
/dev/sdb ==> SSD raid1 array
/dev/sdd ==> spinning raid6 array
I've added "allow_mixed_block_sizes = 1" to lvm.conf to be able to add
sdb and sdd in the same VG (because of the...
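One way to realize the dm-cache setup described above is via lvmcache; a minimal sketch, assuming hypothetical VG/LV names and a hypothetical cache-pool size (none of these names come from the thread):

```shell
# Hypothetical sketch: use the SSD raid1 (/dev/sdb) as a dm-cache for a
# data LV on the spinning raid6 (/dev/sdd). VG name "vgnfs", LV names,
# and the 400G cache-pool size are assumptions for illustration.
pvcreate /dev/sdb /dev/sdd
vgcreate vgnfs /dev/sdb /dev/sdd
lvcreate -n data -l 100%PVS vgnfs /dev/sdd            # bulk LV on the HDD array
lvcreate --type cache-pool -n cpool -L 400G vgnfs /dev/sdb
lvconvert --type cache --cachepool vgnfs/cpool vgnfs/data
```

After the lvconvert, vgnfs/data is a cached LV; it can be formatted and exported over NFS like any other LV.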
2019 Jan 05
0
Interaction with Windows bootloader
...nux). One way works, the other way does
> not...
For starters, what version? Official pre-compiled binaries?
> Use Case 1:
> In this scenario, Linux is installed to its *own* partition as is
> syslinux (syslinux -i /path/to/syslinux/files). Afterwards, I 'dd'
> the first 512bytes of that partition as the .bss file to chainload
> from the host OS bootloader. This method works for all tested host
> OSes, including Windows.
>
>
> Use Case 2:
> In this scenario, Linux is installed to the *same* partition as the
> host OS. If the host OS is Linux, I jus...
2019 Jan 03
3
Interaction with Windows bootloader
...installed either to its own partition, or to the partition
of a host OS (Windows or Linux). One way works, the other way does
not...
Use Case 1:
In this scenario, Linux is installed to its *own* partition as is
syslinux (syslinux -i /path/to/syslinux/files). Afterwards, I 'dd'
the first 512bytes of that partition as the .bss file to chainload
from the host OS bootloader. This method works for all tested host
OSes, including Windows.
Use Case 2:
In this scenario, Linux is installed to the *same* partition as the
host OS. If the host OS is Linux, I just add the option of booting
our s...
2008 Oct 14
4
Change the volblocksize of a ZFS volume
...Background:
I have a ZFS volume with the incorrect volume blocksize for the filesystem (NTFS) that it is supporting.
This volume contains important data that is proving impossible to copy using Windows XP Xen HVM that "owns" the data.
The disparity in volume blocksize (currently set to 512bytes!) is causing significant performance problems.
Question :
Is there a way to change the volume blocksize, say via 'zfs snapshot send/receive'?
As I see things, this isn't possible as the target volume (including property values) gets overwritten by 'zfs receive...
2020 Mar 24
0
Building a NFS server with a mix of HDD and SSD (for caching)
Hi,
> Hi list,
>
> I'm building a NFS server on top of CentOS 8.
> It has 8 x 8 TB HDDs and 2 x 500GB SSDs.
> The spinning drives are in a RAID-6 array. They are 4K sector size.
> The SSDs are in RAID-1 array and with a 512bytes sector size.
>
>
> I want to use the SSDs as a cache using dm-cache. So here what I've done
> so far:
> /dev/sdb ==> SSD raid1 array
> /dev/sdd ==> spinning raid6 array
Looks like you're using a hardware raid controller, right?
>
> I've added "allow_...
2007 Jul 10
1
Get journal position
Hi,
Is there any way to figure out where the journal physically resides on an
ext3 fs, and how big it is?
Thanks!
Jordi
2005 Mar 15
1
Excessive chaining or multiple comboot files
...write several small programs which execute before
the OS takes control.
Today I have a "bootloader" which copies the original boot sector to an
unused location on the hard drive, then puts itself into the existing
boot sector. This loader is of course limited, like any boot sector, to
<512bytes... and I have zero space to add needed features.
If this sounds like a virus; it was... I took the stoned virus example and
crippled it so it only infects one HD when executed as a standalone c++
program... after several user prompts and confirmations.
The problem I'm running into is tha...
2013 Jan 18
8
migrate from physical disk problems in xen
I've been trying to migrate a win nt 4 machine to a xen domu for the past few months with no success. However, on my current attempt, the original hardware no longer boots, so I'm trying to resolve the issues with xen properly, or else take a long holiday...
Anyway, the physical machine had a 9G drive (OS drive), a 147 G drive (not in use) and a 300G drive (all SCSI Ultra320 on
2011 Jan 05
52
Offline Deduplication for Btrfs
Here are patches to do offline deduplication for Btrfs. It works well for the
cases it's expected to; I'm looking for feedback on the ioctl interface and
such. I'm well aware there are missing features for the userspace app (like
being able to set a different blocksize). If this interface is acceptable I
will flesh out the userspace app a little more, but I believe the
2011 May 02
32
[PATCH] blkback: Fix block I/O latency issue
In the blkback driver, after I/O requests are submitted to the Dom-0 block I/O subsystem, blkback effectively goes to 'sleep' without letting blkfront know about it (req_event isn't set appropriately). Hence blkfront doesn't notify blkback when it submits a new I/O, thus delaying the 'dispatch' of the new I/O to the Dom-0 block I/O subsystem. The new I/O is
2012 Dec 22
7
9.1 minimal ram requirements
Guys, I've heard about some absurd RAM requirements
for 9.1; has anybody tested it?
e.g.
http://forums.freebsd.org/showthread.php?t=36314
--
View this message in context: http://freebsd.1045724.n5.nabble.com/9-1-minimal-ram-requirements-tp5771583.html
Sent from the freebsd-stable mailing list archive at Nabble.com.
2011 Jun 21
13
VM disk I/O limit patch
Hi all,
I've added a blkback QoS patch.
With it you can configure (dynamically or statically) different I/O speeds
for different VM disks.
----------------------------------------------------------------------------
diff -urNp blkback/blkback.c blkback-qos/blkback.c
--- blkback/blkback.c 2011-06-22 07:54:19.000000000 +0800
+++ blkback-qos/blkback.c 2011-06-22 07:53:18.000000000 +0800
@@ -44,6 +44,11 @@
2009 May 12
1
[PATCH 1/1] dm-ioband: I/O bandwidth controller
...uests on a given underlying block device use up their
+ tokens.
+
+ There are two policies for token consumption. One is that a token is
+ consumed for each I/O request. The other is that a token is consumed for
+ each I/O sector, for example, one I/O request which consists of
+ 4Kbytes (512bytes * 8 sectors) read consumes 8 tokens. A user can choose
+ either policy.
+
+ With this approach, a job running on an ioband group with large weight
+ is guaranteed a wide I/O bandwidth.
+
+ --------------------------------------------------------------------------
+
+Setup and Installation...
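The per-sector token accounting described above is plain integer arithmetic; a quick sketch using the numbers from the text:

```shell
# One token is consumed per 512-byte sector under the per-sector policy,
# so a 4 Kbytes request costs 4096 / 512 = 8 tokens.
request_bytes=4096
sector_bytes=512
tokens=$(( request_bytes / sector_bytes ))
echo "tokens consumed: $tokens"   # tokens consumed: 8
```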
2009 Jun 16
1
[PATCH 1/2] dm-ioband: I/O bandwidth controller v1.12.0: main part
...hat have requests on a given
+ underlying block device use up their tokens.
+
+ The weight policy lets dm-ioband consume one token per one I/O request.
+ The weight-iosize policy lets dm-ioband consume one token per one I/O
+ sector, for example, one I/O request which consists of 4Kbytes (512bytes *
+ 8 sectors) read consumes 8 tokens.
+
+ With this approach, a job running on the ioband group with large weight
+ is guaranteed a wide I/O bandwidth.
+
+ --------------------------------------------------------------------------
+
+ range-bw policy
+
+ range-bw means the predicabl...