similar to: Geom label lost after expanding partition

Displaying 20 results from an estimated 200 matches similar to: "Geom label lost after expanding partition"

2008 Sep 30
5
GELI partition mount on boot fails after 7.0 -> 7.1-PRERELEASE upgrade
I was using a GELI partition for /usr/home on 7.0, set up to attach and mount at boot. The problem is it stopped working after the system was upgraded to RELENG_7/7.1-PRERELEASE. Here's how it goes: I have the following /etc/fstab: /dev/ad0s1b none swap sw 0 0 /dev/ad0s1a / ufs rw 1 1 /dev/ad0s1d
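For reference, a minimal sketch of the usual boot-time GELI setup on FreeBSD, assuming the provider is ad0s1d and the stock rc.d/geli script is in use (names are illustrative):

# /etc/rc.conf
geli_devices="ad0s1d"
# /etc/fstab - mount the decrypted .eli provider, not the raw device
/dev/ad0s1d.eli  /usr/home  ufs  rw  2  2

With this in place, rc.d/geli prompts for the passphrase and attaches the provider before the fstab mounts run.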
2012 Apr 20
1
GEOM_PART: integrity check failed (mirror/gm0, MBR) on FreeBSD 8.3-RELEASE
I just did a source upgrade from 8.2 to 8.3. The system boots but shows this warning: GEOM_PART: integrity check failed (mirror/gm0, MBR) Google points to issues with FreeBSD 9 and the need to migrate to GPT, but I wasn't expecting this with 8.3! Are there any quick fixes to eliminate this warning, or is it safe to ignore? sudo gpart list: Geom name: mirror/gm0 modified: false state:
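This warning usually means an MBR slice extends past the provider's last sector, because gmirror steals the final sector of the disk for its metadata, so mirror/gm0 is one sector smaller than the raw disk. A commonly suggested workaround, assuming the release ships the kern.geom.part.check_integrity tunable and you have verified the table is actually harmless, is:

# /boot/loader.conf
kern.geom.part.check_integrity=0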
2013 Jun 19
3
shutdown -r / shutdown -h / reboot all hang and don't cleanly dismount
Hello -STABLE@, So I've seen this situation, seemingly at random, on a number of physical 9.1 boxes as well as VMs for at least 6-9 months. I finally have a physical box here that reproduces it consistently and that I can reboot easily (i.e. not a production/client server). No matter what I do: reboot, shutdown -p, shutdown -r. This specific server will stop at "All buffers
2012 Nov 27
6
How to clean up /
Hello. I recently upgraded to 9.1-RC3 and everything went fine, but the / partition is about to get full. I'm really new to FreeBSD, so I don't know what files can be deleted safely. # find -x / -size +10000 -exec du -h {} \; 16M /boot/kernel/kernel 60M /boot/kernel/kernel.symbols 6.7M /boot/kernel/if_ath.ko.symbols 6.4M /boot/kernel/vxge.ko.symbols 9.4M
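A quick way to see where the space went, one directory level at a time and without crossing into other filesystems (a sketch; flags per FreeBSD du(1)):

# du -hxd 1 /

The kernel debug symbols under /boot/kernel are a common culprit here and are generally safe to remove if you are not debugging crash dumps:

# rm /boot/kernel/*.symbols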
2006 Apr 05
1
GEOM_RAID3: Device datos is broken, too few valid components
Hello list, Last night one disk of my desktop machine died, causing a hard lock of the computer. It was a component of a mirror volume, so it wasn't as serious as it initially looked. Unfortunately, the metadata structure of my data partition (a geom raid3 array with three components) seems to have been corrupted by this hard lock; the following message scrolls constantly on the screen:
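Before attempting any repair, the per-component state can be inspected directly (a sketch, assuming the array is named datos as above):

# graid3 status
# graid3 list datos

graid3 list shows each component and its state, which helps confirm whether enough valid components remain for a rebuild.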
2012 Apr 16
2
Any options on crypt+zfs ?
hail, I have a soekris running an atom and 2GB RAM and ZFS using 7 drives (small capacity though), to test and study whether I can make this box my home server this way. It will be a simple server, three users tops. I followed the handbook and did the geli step on the disks: Geom name: label/zfs1.eli State: ACTIVE EncryptionAlgorithm: AES-XTS KeyLength: 128 Crypto: software UsedKey: 0 Flags:
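A minimal sketch of that geli step for one disk, matching the parameters shown above (label and pool names are illustrative):

# geli init -e AES-XTS -l 128 /dev/label/zfs1
# geli attach /dev/label/zfs1
# zpool create tank raidz label/zfs0.eli label/zfs1.eli label/zfs2.eli

geli init prompts for the passphrase; the pool is then built on the .eli providers rather than the raw labels.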
2007 Nov 29
1
lvresize --resizefs
Hi, There is a difference between the help output of lvresize and its man page: the man page says nothing about the -r or --resizefs option. CentOS 4. [root at serv01 ~]# lvresize Please specify either size or extents (not both) lvresize: Resize a logical volume lvresize [-A|--autobackup y|n] [--alloc AllocationPolicy] [-d|--debug] [-h|--help]
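Where the build includes it, -r/--resizefs makes lvresize call fsadm to grow the filesystem in the same step (a sketch; vg0/data is a hypothetical ext3 LV):

[root at serv01 ~]# lvresize -r -L +10G /dev/vg0/data

On builds without -r, the equivalent is two steps, and on older kernels the resize2fs step may require the filesystem to be unmounted:

[root at serv01 ~]# lvresize -L +10G /dev/vg0/data
[root at serv01 ~]# resize2fs /dev/vg0/data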
2013 Jan 24
2
RFC: Suggesting ZFS "best practices" in FreeBSD
>> #1. Map the physical drive slots to how they show up in FBSD so if a >> disk is removed and the machine is rebooted all the disks after that >> removed one do not have an 'off by one error'. i.e. if you have >> ada0-ada14 and remove ada8 then reboot - normally FBSD skips that >> missing ada8 drive and the next drive (that used to be ada9) is now
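One common way to get the stable mapping described above is to label each disk after its physical bay and build the pool on the labels instead of the adaX names (a sketch; bay names are illustrative):

# glabel label bay0 /dev/ada0
# glabel label bay1 /dev/ada1
# glabel label bay2 /dev/ada2
# zpool create tank raidz label/bay0 label/bay1 label/bay2

The label travels with the disk, so a removed ada8 no longer renumbers every drive behind it.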
2007 Nov 26
15
bad 1.6.3 striped write performance
Hi, I'm seeing what can only be described as dismal striped write performance from lustre 1.6.3 clients :-/ 1.6.2 and 1.6.1 clients are fine. 1.6.4rc3 clients (from cvs a couple of days ago) are also terrible. The below shows that the OS (centos4.5/5) or fabric (gigE/IB) or lustre version on the servers doesn't matter - the problem is with the 1.6.3 and 1.6.4rc3 client kernels
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ, which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5tb drives, wouldn't I: - mirror drive 1 and 5 - mirror drive 2 and 6 - mirror drive 3 and 7 - mirror drive 4 and 8 Then stripe 1,2,3,4 Then stripe 5,6,7,8 How does one do this with ZFS?
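ZFS expresses this layout directly: each mirror keyword starts a two-way mirror vdev, and the pool stripes across all vdevs automatically, which is exactly a RAID10 arrangement (a sketch; device names are illustrative):

# zpool create tank mirror da1 da5 mirror da2 da6 mirror da3 da7 mirror da4 da8

There is no separate stripe step; writes are distributed across the four mirrors by the pool itself.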
2012 May 30
3
Boot hangs on v9 system at CD device probe
I sent a note about this a couple of weeks ago, but have not heard anything. I'm really getting a bit desperate. I have a system that I am trying to upgrade from 8.2 to 9.0. I have built it and installed the kernel, but it fails to boot. The boot freezes after probing for my hard drives during the probe of the CDROM. It just sits there, seemingly forever, though I have never waited longer
2005 Dec 15
5
Avery Labels, PDF::Writer or LaTeX?
In a new app we are developing, we need to be able to dynamically create a PDF and send it to the browser (inline with send_data). I have been tinkering with PDF::Writer and love the simplicity and native ruby-ness of it all. However, one of the main uses for this functionality is to output a PDF of addresses to be printed on Avery 5161 labels. LaTeX seems well suited for this, but it
2008 Jun 26
1
gmirror+gjournal: unable to boot after crash
Hi, after one month with gmirror and gjournal running on a 7.0-RELEASE #p2 amd64 (built from latest CVS source), the box hung a couple of times under high disk load. Finally, after a hang while building some port, it won't boot, for no reason obvious to me. This is what I get with kernel.geom.mirror.debug=2: ata2-master: pio=PIO4 wdma=WDMA2 udma=UDMA133 cable=40 wire ad4: 476940MB <SAMSUNG HD501LJ
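Once the box is up in single-user mode, stale mirror components can usually be cleaned up and resynchronized (a sketch, assuming a mirror named gm0 with a component on ad6; be sure of the disk before inserting):

# gmirror forget gm0
# gmirror insert gm0 /dev/ad6

forget drops components the mirror remembers but cannot find; insert re-adds the disk and triggers a rebuild.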
2017 Nov 04
3
using LVM thin pool LVs as a storage for libvirt guest
Hello, as usual I'm a few years behind trends, so I have only recently learned about LVM thin volumes, and I especially like that volumes can be "sparse" - that you can have a 1TB thin volume on a 250GB VG/thin pool. Is it somehow possible to use that with libvirt? I have found this post from 2014: https://www.redhat.com/archives/libvirt-users/2014-August/msg00010.html which says
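The thin-pool side looks roughly like this (a sketch; names and sizes are illustrative):

# lvcreate -L 200G --thinpool tpool vg0
# lvcreate -V 1T --thin -n guest1 vg0/tpool

The resulting /dev/vg0/guest1 can then be handed to a guest as an ordinary block-device disk, which works even where libvirt's storage-pool XML has no first-class thin-pool support.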
2012 Mar 24
3
FreeBSD 9.0 - GPT boot problems?
Hi, I just installed FreeBSD 9.0-release / amd64 on a new machine (Acer Aspire X1470). I installed from a usb memory stick (the default amd64 image), which I booted by pressing "F12" and selecting it from the boot menu on the machine. I installed on a SSD (which replaced the hard drive originally in the machine), using the default scheme for 9.0 (GPT). The installation was painless (many
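The first thing worth verifying on a GPT disk that will not boot is the boot code itself (a sketch, assuming the SSD is ada0 and the freebsd-boot partition is index 1, as in the default 9.0 scheme):

# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0

If the boot code is fine, the problem is often the BIOS itself refusing to boot GPT-partitioned disks, which some desktop firmware of that era did.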
2008 Mar 07
2
Multihomed question: want Lustre over IB andEthernet
Chris, Perhaps you need to perform some write_conf-like command. I'm not sure if this is needed in 1.6 or not. Shane ----- Original Message ----- From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org> To: lustre-discuss <lustre-discuss at lists.lustre.org> Sent: Fri Mar 07 12:03:17 2008 Subject: Re: [Lustre-discuss] Multihomed
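The writeconf procedure is done with tunefs.lustre on each target while the filesystem is stopped (a sketch; the device name is illustrative):

# tunefs.lustre --writeconf /dev/sdb

This regenerates the configuration logs on next mount so that new NIDs, such as a second IB or Ethernet interface, are picked up.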
2013 Jun 13
1
zpool labelclear destroys GPT data
When I use zpool labelclear, it wipes the whole disk, including the GPT data, so the whole disk is empty and I need to create the GPT partitions again. Is this supposed to work like this? The man page suggests that it only wipes the ZFS metadata: zpool labelclear [-f] device Removes ZFS label information from the specified device. The device must not be part of an active pool
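A narrower alternative when labelclear takes out too much is to overwrite only the front ZFS label area of the former vdev by hand (a sketch; double-check the target device, as dd is unforgiving):

# dd if=/dev/zero of=/dev/da0p2 bs=256k count=2

ZFS keeps two 256 KB labels at the start of the vdev and two more in the last 512 KB, so fully clearing the labels also needs a write at the tail end.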
2013 May 23
11
raid6: rmw writes all the time?
Hi all, we got a new test system here and I just also tested btrfs raid6 on it. Write performance is slightly lower than hw-raid (LSI megasas) and md-raid6, but it would probably be much better than either of those two if it didn't read all the data during the writes. Is this a known issue? This is with linux-3.9.2. Thanks, Bernd -- To unsubscribe from this list: send the line
2008 Aug 27
1
Finding which GEOM provider is generating errors in a graid3
I have a FreeBSD 6.2-based server running a 1.2TB graid3 volume, which consists of 5x 320gb SATA hard drives. I've been getting errors in /var/log/messages from the graid3 volume, which I suspect means an underlying fault with one of the disks, but is there any way to decipher which one of these drives is throwing the errors? I've checked smartctl -a /dev/adXX but nothing shows up there...
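The component list and per-component state can be pulled from the kernel directly (a sketch, assuming the volume is named vol0; substitute your volume name):

# graid3 list vol0

Each consumer section names the underlying adXX provider and its state, which is usually enough to match the logged errors to a physical disk.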
2010 Oct 14
1
Upgrade a degraded pool
I know that this is not necessarily the right forum, but the FreeBSD forum hasn't been able to help me... I recently updated my FreeBSD 8.0 RC3 to 8.1 and after the update I can't import my zpool. My computer says that no such pool exists, even though it can be seen with the zpool status command. I assume that it's due to different zfs versions. That should be solved
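The usual sequence after an OS upgrade is to let the new tools re-import and then upgrade the pool (a sketch; tank is a placeholder for the pool name):

# zpool import
# zpool import -f tank
# zpool upgrade tank

The first command lists pools visible by their on-disk labels along with their IDs; importing by ID also works when the name is ambiguous. Note that zpool upgrade is one-way, so older systems cannot read the pool afterwards.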