
Displaying 20 results from an estimated 300 matches similar to: "[osol-discuss] Re: bare metal ZFS ? How To ?"

2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now. Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to
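
A frequently suggested mitigation on OpenSolaris/illumos-era systems is to throttle the resilver so it yields to application I/O. A hedged sketch, assuming the tunables of that era and a pool named "tank" (both are assumptions; the values are only examples), applied live with mdb:

    # Delay resilver I/O while the pool is busy, and cap time spent per txg.
    echo zfs_resilver_delay/W0t4 | mdb -kw
    echo zfs_resilver_min_time_ms/W0t2000 | mdb -kw
    zpool status -v tank        # watch resilver progress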
2008 Jan 31
7
mounting a copy of a ZFS pool/file system while the original is still active
Hello Sun gurus, I do not know if this is supported. I have created a zpool consisting of SAN resources and created a ZFS file system. Using third-party software, I have taken snapshots of all LUNs in the ZFS pool. My question is: in a recovery situation, is there a way for me to mount the snapshots and import the pool while the original is still active? Right now all I am able to do is export
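
For reference, the usual pattern is to import the snapshot copy by its numeric GUID, under a new name and an alternate root, since a LUN-level copy carries the same pool name and GUID as the original. A hedged sketch (the GUID, pool names, and mount point are placeholders); note that some ZFS releases refuse to import a pool whose GUID matches one that is already imported:

    zpool import                       # lists exportable pools with their GUIDs
    zpool import -f -R /mnt/snapcopy 1234567890123456789 tankcopy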
2010 Jul 16
1
ZFS mirror to RAIDz?
Hi all, I currently have four drives in my OpenSolaris box. The drives are split into two mirrors, one mirror containing my rpool (disks 1 & 2) and one containing other data (disks 3 & 4). I'm running out of space on my data mirror and am thinking of upgrading it to two 2TB disks. I then considered replacing disk 2 with a 2TB disk and making a RAIDz from the three new drives.
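
Note that ZFS cannot reshape a mirror into a RAIDz in place; the usual route is to build the new RAIDz from the new disks and copy the data over with send/receive. A minimal sketch, with hypothetical pool and disk names:

    zpool create datapool2 raidz c1t4d0 c1t5d0 c1t6d0
    zfs snapshot -r datapool@migrate
    zfs send -R datapool@migrate | zfs recv -Fdu datapool2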
2015 Aug 25
0
OPUS on bare metal ARM
On 8/25/15, 12:25 AM, Treuillard, Benjamin wrote: > The aim of my project is to transmit voice over CAN bus. CAN? 8-byte transactions. CRC. Bit stuffing on 5-bit repeats. Automatic retransmits. No ordering. Really? I guess if you *have* to, but I would pick pretty much *any* interface standard *other* than CAN for audio. > The main issue I have is that opus fails to allocate memory,
2020 May 01
0
Bare Metal vs Containers/vms
Hi All, I vaguely remember someone at Astricon making the case for having multiple containers/VPSes, each running Asterisk, vs. using Asterisk directly on bare metal. Something about getting better performance. Does anyone have any insight on this? TIA and stay safe. Dovid PS: I know VPS != containers; I just don't recall if the argument was for VPSes, containers, or both, instead of installing directly on
2015 Jun 30
0
QEMU-KVM and bare metal performance impact
2019 Feb 11
0
bare-metal backup before update--options?
> Hi all! > > I'm a "nervous nellie", I have not yet updated my 7.5 desktop to 7.6 > because (1) it has an Nvidia card, and (2) I've heard of problems > upgrading on top of software RAID (using RAID1 with 2 drives). > > I need to upgrade it to stay secure, and I want to do a bare-metal backup > first (so I can put it all back as it now is, in case it
2015 Aug 25
0
OPUS on bare metal ARM
Andrew, Stephan, thanks for your help. Well, I really don't have a choice of interface; it has to be CAN. My malloc is working, but it may have a bug. I'm going to check the use of the CCRAM with an implementation of opus_alloc/opus_free. I will let you know how it works. Regards, Benjamin
2013 Aug 28
0
Investigating memory performance: bare metal vs. xen-pv vs. xen-hvm
I've been trying to compare memory access speed between bare metal, xen-pv and xen-pvhvm (HVM with PV drivers). In all 3 setups I'm running the same kernel (3.6.6), built with support for Xen, on a 64-core AMD Opteron 6378. The output of xm info (relevant parts):

machine          : x86_64
nr_cpus          : 64
nr_nodes         : 8
cores_per_socket : 16
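
On a box like this (8 NUMA nodes), pinning the benchmark to a single node keeps memory locality constant across the bare-metal and guest runs. A sketch assuming numactl is available and "./membench" stands in for the actual benchmark:

    numactl --hardware                            # show node/CPU/memory layout
    numactl --cpunodebind=0 --membind=0 ./membench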
2018 Oct 09
3
Serial ports: vm vs bare metal
I'm running libvirt under Fedora 28. I would like to attach a USB device to a VM, but when I select "Redirect USB Device" from the "Virtual Machine" menu in virt-manager and then select the device, I get the error: USB redirection error spice-client-error-quark: Could not redirect [device name] at 1-11: Error setting USB device node ACL: 'Not authorized' (0)
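
That ACL error usually means the session user lacks access to the USB device node. A hedged workaround sketch; the bus/device path below is illustrative only and must be matched to the device reported by lsusb:

    lsusb                                         # find the device's bus/device numbers
    setfacl -m u:$(whoami):rw /dev/bus/usb/001/011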
2019 Feb 12
1
bare-metal backup before update--options?
On Mon, Feb 11, 2019 at 04:16:38PM +0100, Simon Matter via CentOS wrote: > > Hi all! > > > > I'm a "nervous nellie", I have not yet updated my 7.5 desktop to 7.6 > > because (1) it has an Nvidia card, and (2) I've heard of problems > > upgrading on top of software RAID (using RAID1 with 2 drives). > > > > I need to upgrade it to stay
2010 Mar 30
3
bare metal xen hypervisor
Hi, I am new to Xen. Please guide me on which OS is best to use with Xen. Also, let me know if there is a bare-metal Xen hypervisor available, as I have read that I require an OS on which the Xen hypervisor will be installed. I want to use it on a 32-bit machine. Thanks for your help. -- With Best Wishes, Balwant
2019 Feb 11
3
bare-metal backup before update--options?
Hi all! I'm a "nervous nellie", I have not yet updated my 7.5 desktop to 7.6 because (1) it has an Nvidia card, and (2) I've heard of problems upgrading on top of software RAID (using RAID1 with 2 drives). I need to upgrade it to stay secure, and I want to do a bare-metal backup first (so I can put it all back as it now is, in case it explodes in my face), so I'm trying to
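
One common bare-metal option is a block-level image of each RAID1 member to external storage, which can be written straight back if the upgrade goes wrong. A sketch with placeholder device and mount names:

    dd if=/dev/sda of=/mnt/usb/sda-before-7.6.img bs=4M conv=sync,noerror
    dd if=/dev/sdb of=/mnt/usb/sdb-before-7.6.img bs=4M conv=sync,noerror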
2009 Oct 09
3
Bare Metal vs virtualization
Hello to all: I know this list is generally Linux-only, but I figured I'd try to gain wisdom from those with hard-core Windows needs, too. I was recently pricing out a high-end desktop system for a user who will be doing a lot of CAD, Matlab, SolidWorks, and other apps that involve a lot of number crunching and video. The quote for the desktop (64-bit Vista is likely), which included 12
2007 Feb 21
12
suggestion: directory promotion to filesystem
Not sure how technically feasible it is, but it's something I thought of while shuffling some files around my home server. My (poor) understanding of ZFS internals is that the entire pool is effectively a tree structure, with nodes being either data or metadata. Given that, couldn't ZFS just change a directory node into a filesystem with little effort, allowing me to do everything ZFS does with filesystems on
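
No such in-place promotion exists; the usual workaround is to create a new dataset and copy the directory into it, as in this sketch with placeholder names:

    zfs create tank/media_new
    rsync -a /tank/media/ /tank/media_new/
    rm -rf /tank/media && zfs rename tank/media_new tank/media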
2015 Aug 25
4
OPUS on bare metal ARM
Hi everyone, I'm currently trying to use Opus on an ST ARM (STM32F407) without any OS (bare metal). The aim of my project is to transmit voice over CAN bus. The main issue I have is that Opus fails to allocate memory; the ALLOC macro always returns a NULL pointer. I am sure that I have enough free space to allocate the buffers. Has anyone already tried this or met this issue? Thanks
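
For context, the strategy behind the ALLOC macro (C99 variable-length arrays, alloca, or a malloc-backed pseudo-stack) is chosen at build time. A hedged build sketch for a heap-constrained Cortex-M4; the CPU flags assume the STM32F407 above, and the configure options are standard Opus ones:

    ./configure --host=arm-none-eabi --disable-shared --enable-fixed-point \
        CFLAGS="-mcpu=cortex-m4 -mthumb -Os -DNONTHREADSAFE_PSEUDOSTACK"
    make
    # celt/os_support.h also honors OVERRIDE_OPUS_ALLOC / OVERRIDE_OPUS_FREE,
    # letting opus_alloc()/opus_free() draw from a static pool instead of malloc.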
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
Prefetching at both the file and device level has been disabled, yielding good results so far. We've lowered the number of concurrent I/Os from 35 to 1, causing the service times to drop even lower (8ms -> 1ms) but inflating actv (.4 -> 2ms). I've followed your recommendation in setting primarycache to metadata. I'll have to check with our tester in the morning if it made
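
For the archives, those settings map to OpenSolaris-era knobs roughly as follows (the pool name is a placeholder; the values match the message):

    echo zfs_prefetch_disable/W0t1 | mdb -kw     # disable file-level prefetch
    echo zfs_vdev_cache_size/W0t0 | mdb -kw      # disable device-level prefetch
    echo zfs_vdev_max_pending/W0t1 | mdb -kw     # concurrent I/Os per vdev: 35 -> 1
    zfs set primarycache=metadata tank           # keep only metadata in the ARC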
2020 Jul 03
2
Exceptions not getting caught on bare-metal target
Hi, We're working on adding exception-handling support for a downstream bare-metal target. I read through the LLVM exception handling docs [1] and went through some patches from other backends to understand what parts we need to implement. We're now at a point where it feels like it should work, but unfortunately exceptions are still not getting caught. Our target uses DWARF
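
As a sanity check, it can help to confirm the toolchain is emitting and keeping unwind info at all. A hedged sketch; the triple is an assumption for illustration, not the actual downstream target:

    clang++ --target=armv7m-none-eabi -fexceptions -funwind-tables -c throw.cpp
    # Link an unwinder and C++ runtime, keep .eh_frame/.gcc_except_table in the
    # linker script, and call __register_frame() at startup if nothing else does.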
2006 Oct 05
13
Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and 1 OS drive; and in doing so, I needed to pull out the 16 drives so that it would be light enough for me to lift. When I plugged the drives back in, initially, it went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache. When I try to import the pool using the zpool
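
Once zpool.cache is gone, the pool has to be rediscovered by scanning the devices and then imported. A hedged sketch, where -f overrides the stale hostid left over from the old cache and "tank" is a placeholder:

    zpool import -d /dev/dsk        # scan device nodes for importable pools
    zpool import -f tank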
2009 Dec 24
1
high read iops - more memory for arc?
I'm running into an issue where there seems to be a high number of read IOPS hitting the disks, and physical free memory is fluctuating between 200MB -> 450MB out of 16GB total. We have the L2ARC configured on a 32GB Intel X25-E SSD and the slog on another 32GB X25-E SSD. According to our tester, Oracle writes are extremely slow (high latency). Below is a snippet of iostat: r/s w/s
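
Before adding memory for the ARC, it is worth confirming that the reads really are ARC misses. A sketch of stock observation commands on OpenSolaris (the grep pattern is just an example):

    kstat -n arcstats | egrep 'size|hits|misses'
    echo ::arc | mdb -k             # ARC summary, including arc_c and arc_c_max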