Displaying 18 results from an estimated 18 matches for "drive1".
2010 May 02
8
zpool mirror (dumb question)
...d opinions on doing
this... it probably would be a bad idea, but hey, does it hurt to ask?
I have been thinking, and would it be a good idea, to have on the 2TB
drives, say 1TB or 500GB "files" and then mount them as mirrored? So
basically, have a 2TB hard drive, set up like:
(where drive1 and drive2 are the paths to the mount points)
mkfile 465g /drive1/drive1part1
mkfile 465g /drive1/drive1part2
mkfile 465g /drive1/drive1part3
mkfile 465g /drive1/drive1part4
mkfile 465g /drive2/drive2part1
mkfile 465g /drive2/drive2part2
mkfile 465g /drive2/drive2part3
mkfile 465g /drive2/...
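One way the idea could look in practice — a sketch, not a recommendation (file-backed vdevs carry a performance and reliability cost), with hypothetical pool name `tank`:

```shell
# Create one backing file per physical drive (Solaris mkfile syntax:
# lowercase command, single-letter size suffix).
mkfile 465g /drive1/drive1part1
mkfile 465g /drive2/drive2part1
# Pair files from *different* physical drives in each mirror, so a
# single drive failure leaves one side of every mirror intact.
# Mirroring two files on the same drive would gain nothing.
zpool create tank \
  mirror /drive1/drive1part1 /drive2/drive2part1 \
  mirror /drive1/drive1part2 /drive2/drive2part2
```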
2001 Sep 25
1
guest account/config file/encrypt passwd problem
...YPTED PASSWORDS FOR NORMAL USERS
encrypt passwords = yes
smb passwd file = /etc/smbpasswd
; logon script = %U.bat
[netlogon]
comment = The domain logon service
path = /etc/samba/logon
public = no
writeable = no
browseable = no
[share-drive1]
path = /share-drive1
comment = Network Drive
create mode = 644
writeable = yes
guest ok = yes
# cat smb.conf.guest
[global]
workgroup = WORKGROUP
server string = Samba %v on %L
domain logons = yes
security = user...
2007 Aug 20
1
system() fails with fc.exe (PR#9868)
...TRUE)
character(0)
When I do the same from python 2.3, I get
>>> import os
>>> os.system("c:\\WINDOWS\\system32\\fc /?")
Compares two files or sets of files and displays the differences between
them
FC [/A] [/C] [/L] [/LBn] [/N] [/OFF[LINE]] [/T] [/U] [/W] [/nnnn]
[drive1:][path1]filename1 [drive2:][path2]filename2
FC /B [drive1:][path1]filename1 [drive2:][path2]filename2
/A Displays only first and last lines for each set of differences.
/B Performs a binary comparison.
/C Disregards the case of letters.
/L Compares files as...
2005 Aug 27
1
Samba clients can't see partitions mounted via loop device from image files
...ting, it is shared with full write
permissions, which I know is terrible.)
On the Mandrake (er, Mandriva) Linux box, I have image
files from a few small hard disk drives. While logged
on as root, I've mounted partitions on two of those
image files, via the loop device:
# first partition in drive1.ima is an NTFS partition:
losetup -r -o 32256 /dev/loop1 drive1.ima
mount -t ntfs /dev/loop1 /mnt/img1
# first partition in drive2.ima is a FAT32 partition:
losetup -o 32256 /dev/loop2 drive2.ima
mount -t vfat /dev/loop2 /mnt/img2
The drive*.ima files and /mnt/img* mount points are all
o...
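The 32256-byte offset is the classic DOS partition start: sector 63 at 512 bytes per sector. A quick check, plus a note that newer util-linux losetup can locate partitions itself:

```shell
# 63 sectors x 512 bytes = 32256, the offset passed to losetup above.
echo $((63 * 512))   # prints 32256
# On newer systems, losetup -P (--partscan) reads the partition table
# and exposes /dev/loopNp1 etc., avoiding the manual offset entirely:
#   losetup -r -P /dev/loop1 drive1.ima
#   mount -t ntfs /dev/loop1p1 /mnt/img1
```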
2023 Mar 08
1
Mount removed raid disk back on same machine as original raid
...e drive, and have done so on another system to
confirm that the information I want is there.
My question is this:
What is going to happen when I try to mount a drive that the system
thinks is part of an existing array?
To put it another way: I had two drives in md127. I removed one (call
it drive1), and replaced it with a new drive. Some files were
accidentally deleted from md127, so now I want to connect drive1 back to
the same machine and mount it as a separate array from md127 so I can
copy the files from drive1 back to md127. What do I need to do to make
that happen?
Thanks,
Bowi...
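One possible approach, sketched with a hypothetical device name (/dev/sdc1) — note the removed member still carries md127's UUID in its superblock, so udev auto-assembly may interfere, and the array should be started read-only before trusting it:

```shell
# Inspect the old RAID metadata on the re-attached drive first:
mdadm --examine /dev/sdc1
# Assemble it as its own degraded array under a new device name;
# --run starts it even though its mirror partner is missing:
mdadm --assemble --run /dev/md1 /dev/sdc1
# Mount read-only so nothing on the old copy changes:
mount -o ro /dev/md1 /mnt/old
```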
2017 Jul 26
2
[PATCH] virtio_blk: fix incorrect message when disk is resized
The message printed on disk resize is incorrect. The following is
printed when resizing to 2 GiB:
$ truncate -s 1G test.img
$ qemu -device virtio-blk-pci,logical_block_size=4096,...
(qemu) block_resize drive1 2G
virtio_blk virtio0: new size: 4194304 4096-byte logical blocks (17.2 GB/16.0 GiB)
The virtio_blk capacity config field is in 512-byte sector units
regardless of logical_block_size as per the VIRTIO specification.
Therefore the message should read:
virtio_blk virtio0: new size: 524288 4096...
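The arithmetic behind the corrected message — the capacity field counts 512-byte sectors, which the driver then converts to its logical block size:

```shell
# 2 GiB in bytes:
gib2=$((2 * 1024 * 1024 * 1024))
echo $((gib2 / 512))    # 4194304: capacity in 512-byte sectors
echo $((gib2 / 4096))   # 524288: capacity in 4096-byte logical blocks,
                        # which is what the resize message should print
```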
2004 Aug 16
6
Mac OS X HFS+ metadata patch, take 2
Hi.
Several months ago, I posted my first pass at a patch to transfer
Mac OS X HFS+ metadata (resource forks and Finder metadata) to
non-HFS+ filesystems (Linux, Solaris, etc).
I finally got a chance to update the patch to reflect suggestions
offered on the list. Thanks for the ideas, this version should be
a big improvement.
The diff and a binary (and a fuller explanation) can be found at:
2008 Feb 14
2
btrfs v0.11 & btrfs v0.12 benchmark results
Hi,
I've recently benchmarked btrfs v0.11 & v0.12 against ext2, ext3, ext4,
jfs, reiserfs and xfs.
OS: Ubuntu Hardy
Kernel: 2.6.24(-5-server)
Hardware:
---------
Fu-Si Primergy RX330 S1
* AMD Opteron 2210 1.8 GHz
* 1 GB RAM
* 3 x 73 GB, 3Gb/s, hot plug, 10k rpm, 3.5" SAS HDD
* LSI RAID 128 MB
Fu-Si Econel 200
* Intel Xeon 5110
* 512 MB RAM
2010 Mar 26
23
RAID10
Hi All,
I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5tb drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How does one do this with ZFS?
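In ZFS terms the layout above is simply a pool of mirror vdevs: ZFS stripes across all top-level vdevs automatically, so there is no separate "stripe" step. A sketch with hypothetical Solaris-style disk names:

```shell
# Four two-way mirrors; writes are striped across the four vdevs,
# giving the RAID 10 behavior the poster describes:
zpool create tank \
  mirror c0t1d0 c0t5d0 \
  mirror c0t2d0 c0t6d0 \
  mirror c0t3d0 c0t7d0 \
  mirror c0t4d0 c0t8d0
```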
2017 Aug 04
0
[PATCH] virtio_blk: fix incorrect message when disk is resized
...d, Jul 26, 2017 at 03:32:23PM +0100, Stefan Hajnoczi wrote:
> The message printed on disk resize is incorrect. The following is
> printed when resizing to 2 GiB:
>
> $ truncate -s 1G test.img
> $ qemu -device virtio-blk-pci,logical_block_size=4096,...
> (qemu) block_resize drive1 2G
>
> virtio_blk virtio0: new size: 4194304 4096-byte logical blocks (17.2 GB/16.0 GiB)
>
> The virtio_blk capacity config field is in 512-byte sector units
> regardless of logical_block_size as per the VIRTIO specification.
> Therefore the message should read:
>
> v...
2013 Feb 20
20
Feature Request for zfs pool/filesystem protection?
Hi!
My name is Markus and I live in Germany. I'm new to this list and I
have a simple question related to zfs. My favorite operating system is
FreeBSD and I'm very happy to use zfs on it.
Is it possible to enhance the properties in the current source tree with
an entry like "protected"?
It seems not to be difficult, but I'm not an
2003 Apr 10
2
Crash dump in umount
Hello.
I have a 4.7Rp9 server which has been quite unstable for months, so I've compiled a debug kernel and got a crash dump.
I'm a programmer and I know a little how to use gdb, but I'm not expert on FreeBSD kernel internals, so I can't
get much from it. Any kind of help is appreciated.
This is the crash message:
IdlePTD at physical address 0x0032f000
initial pcb
2010 Mar 02
9
"Dos installer" from Win98se
On 02/27/2010 10:31 PM, swdamle at bsnl.in wrote:
> Hello,
> This has reference to "Dos installer" from Win98se in Syslinux-3.84.
>
> Sorry to say the problem CONTINUES with Syslinux-3.85. Following is
> for your reference please.
>
You may want to try:
http://www.zytor.com/~hpa/syslinux/syslinux.com
... which is compiled with debugging information turned
2019 Mar 12
4
virtio-blk: should num_vqs be limited by num_possible_cpus()?
I observed that there is one msix vector for config and one shared vector
for all queues in below qemu cmdline, when the num-queues for virtio-blk
is more than the number of possible cpus:
qemu: "-smp 4" while "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=6"
# cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
... ...
24: 0
2002 Jul 03
11
sync slowness. ext3 on VIA vt82c686b
...Prefetch Buffer: no no
Post Write Buffer: no no
Enabled: yes yes
Simplex only: no no
Cable Type: 80w 40w
-------------------drive0----drive1----drive2----drive3-----
Transfer Mode:     UDMA      PIO       PIO       PIO
Address Setup:     30ns      120ns     30ns      120ns
Cmd Active:        90ns      90ns      90ns      90ns
Cmd Recovery:      30ns      30ns      30ns      30ns
Data Active:       90ns      330ns     90ns...
2006 Feb 03
0
Warcraft III won't run
...7fd84ba0 00000000 7f9be14c 7c077488
0x7f9be0cc: 7e5ec96c 7f9be0e0 7e5cad40 7e5ef580
0x7f9be0dc: 7bff72d0 7f9be158 7bfdf838 7c077488
0200: sel=1007 base=7fe46000 limit=00001fff 32-bit rw-
Backtrace:
=>1 0x7e7ddb91 MSVCRT_sscanf+0x51 in msvcrt (0x7e7ddb91)
fixme:dbghelp:sffip_cb NIY on
'E:\Drive1\temp\buildwar3xloc\War3\bin\Game.pdb'
2 0x6f0ce7ff in game (+0xce7ff) (0x6f0ce7ff)
3 0x6f0caded in game (+0xcaded) (0x6f0caded)
4 0x7f83e83a WINPROC_wrapper+0x1a in user32 (0x7f83e83a)
5 0x7f83f49f in user32 (+0x9f49f) (0x7f83f49f)
6 0x7f8452cc CallWindowProcW+0x12c in user32 (0x7f845...