
Displaying 20 results from an estimated 600 matches similar to: "Strangeness on btrfs balance.."

2013 Mar 28
1
question about replacing a drive in raid10
Hi all, I have a question about replacing a drive in raid10 (and Linux kernel 3.8.4). A bad disk was physically removed from the server. After this a new disk was added with "btrfs device add /dev/sdg /btrfs" to the raid10 btrfs FS. Then the server was rebooted and I mounted the filesystem in degraded mode. It seems that a previously started balance continued. At this point I want to
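A minimal sketch of the usual degraded-replacement sequence on a kernel of that era; /dev/sdg and /btrfs follow the post, while /dev/sdb as a surviving member is illustrative:

  # mount the pool without the failed member
  mount -o degraded /dev/sdb /btrfs
  # add the replacement disk
  btrfs device add /dev/sdg /btrfs
  # drop the missing member; its raid10 chunks are rebuilt
  # onto the remaining devices, including the new one
  btrfs device delete missing /btrfs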
2012 Sep 05
3
BTRFS thinks device is busy [kernel 3.5.3]
Hi, I'm running OpenSuse 12.2 with kernel 3.5.3 HBA= LSI 1068e using the MPTSAS driver (patched) (https://patchwork.kernel.org/patch/1379181/) SANOS1:/media # uname -a Linux SANOS1 3.5.3 #3 SMP Sun Sep 2 18:44:37 CEST 2012 x86_64 x86_64 x86_64 GNU/Linux I've tried to simulate a disk replacement but it seems that now /dev/sdg is stuck in the btrfs pool (RAID10) SANOS1:/media #
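If the kernel still registers the pulled disk as a pool member, a hedged sketch of clearing that state; /dev/sdg follows the post, the mount point /media/btrfs is assumed, and wipefs destroys on-disk signatures, so it should only touch the disk being recycled:

  # refresh the kernel's list of btrfs pool members
  btrfs device scan
  # with the pool mounted degraded, remove the gone member
  btrfs device delete missing /media/btrfs
  # clear stale btrfs signatures from the recycled disk
  wipefs -a /dev/sdg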
2009 Dec 04
2
USB sticks show on one set of devices in zpool, different devices in format
Hello, I had snv_111b running for a while on an HP DL160G5, with two 16GB USB sticks comprising the mirrored rpool for boot, and four 1TB drives comprising another pool, pool1, for data. That's been working just fine for a few months. Yesterday I got it into my mind to upgrade the OS to the latest, which then was snv_127. That worked, and all was well. Also did an upgrade to the
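One hedged way to make the data pool's device paths match reality again; the pool name pool1 is from the post, and the boot rpool cannot be exported from a running system:

  # drop cached paths and rediscover members from on-disk labels
  zpool export pool1
  zpool import -d /dev/dsk pool1
  # compare ZFS's view with the OS's device list
  zpool status pool1
  echo | format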
2012 May 05
5
Is it possible to reclaim block groups once they are allocated to data or metadata?
Hello list, I recently reformatted my home partition from XFS to RAID1 btrfs. I used the default options to mkfs.btrfs except for enabling raid1 for data as well as metadata. The filesystem is made up of two 1TB drives. mike@mercury (0) pts/3 ~ $ sudo btrfs filesystem show Label: none uuid: f08a8896-e03e-4064-9b94-9342fb547e47 Total devices 2 FS bytes used 888.06GB devid 1 size 931.51GB used
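Mostly empty block groups can be reclaimed with the balance usage filters (merged in kernel 3.3): block groups below the given fill level are rewritten into fuller ones and the emptied groups return to the unallocated pool. A sketch, assuming the filesystem is mounted at /home:

  # reclaim data block groups that are at most 5% full
  sudo btrfs balance start -dusage=5 /home
  # and near-empty metadata block groups
  sudo btrfs balance start -musage=5 /home
  # verify that allocated-but-unused space shrank
  sudo btrfs filesystem df /home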
2023 Mar 22
1
[libnbd PATCH v4 0/2] lib/utils: introduce async-signal-safe execvpe()
On 3/22/23 11:42, Laszlo Ersek wrote: > Now the "podman build -f ci/containers/alpine-edge.Dockerfile -t > libnbd-alpine-edge" command is failing with a different error message -- > the download completes, but the internal relinking etc fails due to > permission errors, which I don't understand. I've asked Martin for comments. > > Meanwhile, your other email (=
2023 Mar 22
1
[libnbd PATCH v4 0/2] lib/utils: introduce async-signal-safe execvpe()
On Wed, Mar 22, 2023 at 12:13:49PM +0100, Laszlo Ersek wrote: > On 3/22/23 11:42, Laszlo Ersek wrote: > > > Now the "podman build -f ci/containers/alpine-edge.Dockerfile -t > > libnbd-alpine-edge" command is failing with a different error message -- > > the download completes, but the internal relinking etc fails due to > > permission errors, which I
2010 Dec 29
1
list files on a device
Hello, After another power loss (I am fortunate, am I not?) I have the following situation: Label: none uuid: ac155851-0e31-4aed-9ba4-ee712506368a Total devices 3 FS bytes used 1.02TB devid 1 size 931.51GB used 70.00GB path /dev/sdd1 devid 3 size 1.79TB used 66.52GB path /dev/md2 devid 2 size 914.70GB used 914.50GB path /dev/sda4 btrfs device delete does not work on both md2 and
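If the delete fails because the nearly full /dev/sda4 cannot be evacuated onto the other members, one hedged approach from that era is to rebalance first; the mount point /mnt is illustrative:

  # spread existing chunks across all members first
  btrfs filesystem balance /mnt
  # then retry evacuating the device
  btrfs device delete /dev/sda4 /mnt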
2013 Oct 26
0
[LLVMdev] Interfacing llvm with a precise, relocating GC
> On Oct 26, 2013, at 12:37 AM, Michael Lewis <don.apoch at gmail.com> wrote: > > I'm also highly interested in relocating-GC support from LLVM. Up until now my GC implementation has been non-relocating which is obviously kind of a bummer given that it inhibits certain classes of memory allocation/deallocation tricks. You can implement a copying GC (what the kids these days
2013 Oct 26
3
[LLVMdev] Interfacing llvm with a precise, relocating GC
I'm also highly interested in relocating-GC support from LLVM. Up until now my GC implementation has been non-relocating which is obviously kind of a bummer given that it inhibits certain classes of memory allocation/deallocation tricks. I wrote up a bunch of my findings on the implementation of my GC here: https://code.google.com/p/epoch-language/wiki/GarbageCollectionScheme Frankly I
2003 Nov 19
2
One more clue, maybe...
The stack dump message is white text on black background. Right above the stack dump message, still in yellow-on-blue from the PXELINUX settings, is the text: Building the boot loader arguments Relocating the loader and the BTX Starting the BTX loader <change to white-on-black> BTX loader 1.00 BTX version is 1.01 <stack dump> Would that business of "arguments...
2018 Nov 19
2
Non-relocating GC with liveness tracking
Thanks for reviving this. I had completely forgotten the details, but I resolved this problem. Looking through the code, it seems I forked the RewriteStatepointsForGC pass and changed it to add a 'gc-livevars' bundle to the call/invoke inst after finding the livevars, instead of changing it to the StatepointCall intrinsic. On Wed, Nov 14, 2018 at 11:48 AM Philip Reames <listmail at philipreames.com>
2013 Oct 28
1
[LLVMdev] Interfacing llvm with a precise, relocating GC
On 10/26/13 7:40 AM, Filip Pizlo wrote: > You can implement a copying GC (what the kids these days call > relocating) without accurate roots. I use "relocating" to brush over the distinction between "copying" and "compacting" collectors. For the purposes of our discussions, the two are interchangeable though. > Why aren't you just using the well-known
2013 Oct 26
0
[LLVMdev] Interfacing llvm with a precise, relocating GC
On 10/25/13 1:10 PM, Ben Karel wrote: > > > > On Thu, Oct 24, 2013 at 6:42 PM, Sanjoy Das <sanjoy at azulsystems.com > <mailto:sanjoy at azulsystems.com>> wrote: > > Hi Rafael, Andrew, > > Thank you for the prompt reply. > > One approach we've been considering involves representing the > constraint "pointers to heap objects
2017 Dec 08
4
Non-relocating GC with liveness tracking
Hi Team, I'm working on a new pure functional language and I'm trying to add GC support for that. Because all vars are immutable, the IR that my frontend generates are all register based, i.e. no alloca, and no readmem, writemem unless accessing/constructing structs. If I use the traditional GC with gcroot intrinsic, it will need to emit more code for liveness tracking, storing the IR
2023 Mar 22
1
[libnbd PATCH v4 0/2] lib/utils: introduce async-signal-safe execvpe()
On 3/22/23 12:42, Daniel P. Berrangé wrote: > On Wed, Mar 22, 2023 at 12:13:49PM +0100, Laszlo Ersek wrote: >> On 3/22/23 11:42, Laszlo Ersek wrote: >> >>> Now the "podman build -f ci/containers/alpine-edge.Dockerfile -t >>> libnbd-alpine-edge" command is failing with a different error message -- >>> the download completes, but the internal
2013 Oct 26
1
[LLVMdev] Interfacing llvm with a precise, relocating GC
On Fri, Oct 25, 2013 at 8:35 PM, Philip Reames <listmail at philipreames.com>wrote: > On 10/25/13 1:10 PM, Ben Karel wrote: > > > > > On Thu, Oct 24, 2013 at 6:42 PM, Sanjoy Das <sanjoy at azulsystems.com>wrote: > >> Hi Rafael, Andrew, >> >> Thank you for the prompt reply. >> >> One approach we've been considering involves
2010 Oct 14
0
AMD/Supermicro machine - AS-2022G-URF
Sorry for the long post, but I know those trying to decide on hardware often want to see details about what people are using. I have the following AS-2022G-URF machine running OpenGaryIndiana[1] that I am starting to use. I successfully transferred a deduped zpool with 1.x TB of files and 60 or so zfs filesystems using mbuffer from an old 134 system with 6 drives - it ran at about 50MB/s or
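A hedged sketch of the mbuffer-based transfer the post describes; pool, snapshot, host, and port names are all illustrative:

  # on the new machine: listen on TCP 9090 and receive the stream
  mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

  # on the old box: stream a recursive snapshot through mbuffer
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | mbuffer -s 128k -m 1G -O newhost:9090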
2018 Aug 14
2
doveadm mailbox delete not working
Hi Aki, On 14.08.18 at 16:42, Aki Tuomi wrote: > Hi, > > the thing I'm actually looking for is whether the sync causes the folder to be restored, so it might be a better idea for you to try and spot this from the logs. I assume that as an SP you are using the mail_log plugin, so that might be useful to spot if this happens. You can also try looking at the UIDVALIDITY value of
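A hedged sketch of checking whether a deleted folder comes back re-created; the user and mailbox names are illustrative:

  # note the folder's UIDVALIDITY before deleting it
  doveadm mailbox status -u bob@example.com uidvalidity INBOX/oldfolder
  # delete it
  doveadm mailbox delete -u bob@example.com INBOX/oldfolder
  # after the next sync: if the folder is back with a new
  # UIDVALIDITY, something re-created it rather than restored it
  doveadm mailbox status -u bob@example.com uidvalidity INBOX/oldfolder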
2009 Jul 25
1
OpenSolaris 2009.06 - ZFS Install Issue
I've installed OpenSolaris 2009.06 on a machine with 5 identical 1TB WD Green drives to create a ZFS NAS. The intended install is one drive dedicated to the OS and the remaining 4 drives in a raidz1 configuration. The install is working fine, but creating the raidz1 pool and rebooting is causing the machine to report "Cannot find active partition" upon reboot. Below is command
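A hedged sketch of the intended layout; the controller/target numbers are illustrative, with the OS on the fifth disk:

  # four data drives in a single raidz1 vdev
  zpool create tank raidz1 c8t1d0 c8t2d0 c8t3d0 c8t4d0
  zpool status tank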
2009 Dec 12
0
Messed up zpool (double device label)
Hi! I tried to add another FireWire drive to my existing four devices, but it turned out that the OpenSolaris IEEE1394 support doesn't seem to be well-engineered. After the new device was not recognized, and after exporting and importing the existing zpool, I get this zpool status: pool: tank state: DEGRADED status: One or more devices could not be used because the label is missing or
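A hedged sketch of inspecting the conflicting labels directly instead of trusting the cached configuration; the device name is illustrative:

  # print the ZFS labels stored on a suspect device
  zdb -l /dev/dsk/c10t0d0s0
  # force a fresh scan of all labels
  zpool export tank
  zpool import -d /dev/dsk tank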