search for: r1w1e1

Displaying 8 results from an estimated 8 matches for "r1w1e1".

2012 Aug 16
2
Geom label lost after expanding partition
I have a GPT formatted disk where I recently expanded the size of a partition. I used "gpart resize -i 6 ada1" first to expand the partition to use the remaining free space, and then growfs to grow the FFS file system to use the full partition. This was all done in single-user mode, of course, but when I entered "exit" to bring the system up, it failed to mount /usr. This was
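The workflow the poster describes maps onto two commands. A minimal sketch, assuming the disk (ada1) and partition index (6) from the post, and that the resulting GPT partition is therefore named ada1p6:

    gpart resize -i 6 ada1      # grow partition 6 into the adjacent free space
    growfs /dev/ada1p6          # grow the FFS file system to fill the partition
    glabel status               # check whether the label survived

One plausible cause of the lost label (an assumption, not confirmed in the excerpt) is that glabel(8) keeps its metadata in the provider's last sector, which the grown file system can overwrite once it claims the full partition.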
2012 Apr 16
2
Any options on crypt+zfs ?
...andbook and performed the geli step on the disks: Geom name: label/zfs1.eli State: ACTIVE EncryptionAlgorithm: AES-XTS KeyLength: 128 Crypto: software UsedKey: 0 Flags: NONE KeysAllocated: 38 KeysTotal: 38 Providers: 1. Name: label/zfs1.eli Mediasize: 160041881600 (149G) Sectorsize: 4096 Mode: r1w1e1 Consumers: 1. Name: label/zfs1 Mediasize: 160041885184 (149G) Sectorsize: 512 Mode: r1w1e1 All disks are set up this way (4 disks in total are on geli+zfs). Would it be faster if I had geli over zfs, instead of the other way around (as it is now)? My performance is too low (I know the hardware is not that m...
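For reference, the handbook-style layout the poster describes (geli under ZFS) amounts to the following sketch. The first label name and the AES-XTS/128-bit/4K-sector parameters are taken from the output above; the pool name, vdev layout, and the other three label names are assumptions:

    geli init -e AES-XTS -l 128 -s 4096 /dev/label/zfs1   # repeat for each disk
    geli attach /dev/label/zfs1
    zpool create tank raidz label/zfs1.eli label/zfs2.eli label/zfs3.eli label/zfs4.eli

The alternative the poster asks about (geli over zfs, i.e. geli on a zvol) would put the encryption above the pool instead of below it; which is faster depends on the hardware, so the sketch only shows the current layout.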
2006 Apr 05
1
GEOM_RAID3: Device datos is broken, too few valid components
Hello list, Last night one disk of my desktop machine died, causing a hard lock of the computer. It was a component of a mirror volume, so it wasn't as serious as it initially looked. Unfortunately, the metadata structure of my data partition (a geom raid3 array with three components) seems to have been corrupted by this hard lock; the following message scrolls constantly on the screen:
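Before attempting repair, it helps to see which components graid3 still considers valid. A hedged sketch, using the device name from the subject (the component names ad4/ad6/ad8 are hypothetical):

    graid3 list datos             # shows State, GenID and SyncID per component
    # One risky last-resort path sometimes discussed on the lists: rewrite the
    # metadata in place with the original component order, with
    # autosynchronization turned off. Try this on dd(1) copies of the disks first.
    graid3 stop -f datos
    graid3 label -n datos ad4 ad6 ad8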
2008 Aug 27
1
Finding which GEOM provider is generating errors in a graid3
...ts: 5 Flags: VERIFY GenID: 0 SyncID: 1 ID: 3700500186 Zone64kFailed: 791239 Zone64kRequested: 49197268 Zone16kFailed: 40204 Zone16kRequested: 1283738 Zone4kFailed: 12005939 Zone4kRequested: 2445799003 Providers: 1. Name: raid3/data1 Mediasize: 1280291731456 (1.2T) Sectorsize: 2048 Mode: r1w1e1 ... $ atacontrol list ... ATA channel 6: Master: ad12 <ST3320620AS/3.AAK> Serial ATA v1.0 ATA channel 7: Master: ad14 <ST3320620AS/3.AAK> Serial ATA v1.0 ATA channel 8: Master: ad16 <ST3320620AS/3.AAK> Serial ATA v1.0 ATA channel 9: Master: ad18 <ST3320620A...
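The Zone*Failed counters above are kept per array, not per component, so they do not identify the failing disk. A hedged way to narrow it down is to exercise each component directly and see which one reports errors (smartctl is from the sysutils/smartmontools port; the disk names match the atacontrol output):

    for d in ad12 ad14 ad16 ad18; do
        echo "== $d =="
        smartctl -A /dev/$d | egrep 'Reallocated|Pending|CRC'
        dd if=/dev/$d of=/dev/null bs=1m conv=noerror 2>&1 | tail -2
    done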
2012 Apr 20
1
GEOM_PART: integrity check failed (mirror/gm0, MBR) on FreeBSD 8.3-RELEASE
...Sectorsize: 512 Mode: r2w2e5 Geom name: mirror/gm0s1 modified: false state: OK fwheads: 255 fwsectors: 63 last: 976773104 first: 0 entries: 8 scheme: BSD Providers: 1. Name: mirror/gm0s1a Mediasize: 498597888000 (464G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 32256 Mode: r1w1e1 rawtype: 7 length: 498597888000 offset: 0 type: freebsd-ufs index: 1 end: 973823999 start: 0 2. Name: mirror/gm0s1b Mediasize: 1509941760 (1.4G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 381713920 Mode: r1w1e0 rawtype: 1 length: 1509941760...
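FreeBSD 8.3 tightened partition-table validation, which is why a table that booted fine on earlier releases can now fail the check. One escape hatch is to relax the check from the loader and repair the table afterwards; a minimal sketch:

    # /boot/loader.conf
    kern.geom.part.check_integrity="0"

A likely cause here (an assumption based on the provider names) is that the MBR slice extends one sector past mirror/gm0, since gmirror keeps its metadata in the disk's last sector; once the box boots, the slice can be resized with gpart(8) so it fits inside the mirror provider.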
2008 Jun 26
1
gmirror+gjournal: unable to boot after crash
...R[2]: Access request for mirror/gm0: r1w0e0. ad10: LSI (v2) check1 failed GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. GEOM_JOURNAL: Journal 2550245011: mirror/gm0 contains data. GEOM_JOURNAL: Journal 2550245011: mirror/gm0 contains journal. GEOM_MIRROR[2]: Access request for mirror/gm0: r1w1e1. GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. ad10: FreeBSD check1 failed GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. ATA PseudoRAID loaded GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. GEOM_MIRROR[2]: Access req...
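For readers puzzled by the log: the rXwYeZ strings are GEOM access counts, X readers, Y writers, Z exclusive holds, and a negative delta such as r-1w0e0 is a release; the r1w1e1 requests above come from tasting and mounting the provider. The current counts for the mirror can be checked with:

    geom mirror list gm0 | grep Mode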
2008 Sep 30
5
GELI partition mount on boot fails after 7.0 -> 7.1-PRERELEASE upgrade
I was using a GELI partition for /usr/home on 7.0, set up so that it attaches and mounts on boot. The problem is that it stopped working after the system was upgraded to RELENG_7/7.1-PRERELEASE. Here's how it goes: I have the following /etc/fstab:

/dev/ad0s1b   none        swap   sw   0   0
/dev/ad0s1a   /           ufs    rw   1   1
/dev/ad0s1d
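The usual attach-at-boot mechanism on RELENG_7 is the geli rc.d script. A minimal sketch, assuming the device name from the post, a hypothetical key file, and an .eli fstab entry (the original fstab line is truncated, so the mount point is also an assumption):

    # /etc/rc.conf
    geli_devices="ad0s1d"
    geli_ad0s1d_flags="-k /root/ad0s1d.key"   # hypothetical key file

    # /etc/fstab
    /dev/ad0s1d.eli   /usr/home   ufs   rw   2   2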
2013 Jun 19
3
shutdown -r / shutdown -h / reboot all hang and don't cleanly dismount
Hello -STABLE@, I've seen this situation, seemingly at random, on a number of physical 9.1 boxes as well as VMs, for at least 6-9 months. I finally have a physical box here that reproduces it consistently and that I can reboot easily (i.e., not a production/client server). No matter what I do:

reboot
shutdown -p
shutdown -r

This specific server will stop at "All buffers
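To see where the kernel is stuck after the "All buffers..." message, one approach is to break into the debugger at the hang and take a trace. A sketch, assuming a kernel with KDB/DDB compiled in (GENERIC on 9.1 includes them):

    sysctl debug.kdb.break_to_debugger=1
    # reproduce the hang (e.g. shutdown -r now), then press Ctrl+Alt+Esc
    # on the console; at the db> prompt:
    #   ps
    #   trace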