
Displaying 20 results from an estimated 400 matches similar to: "Looking for a RAID1 box"

2023 Jan 09
2
RAID1 setup
Hi > Continuing this thread, and focusing on RAID1. > > I got an HPE Proliant gen10+ that has hardware RAID support. (can turn > it off if I want). What exact model of RAID controller is this? If it's a S100i SR Gen10 then it's not hardware RAID at all. > > I am planning two groupings of RAID1 (it has 4 bays). > > There is also an internal USB boot port. >
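One way to answer the "what exact model" question from the running system is to list the storage controller, roughly as sketched below. This is only a sketch: lspci is standard, but ssacli is HPE's Smart Storage Administrator CLI, which may need to be installed separately, and its output varies by model.

# lspci | grep -i -e raid -e storage
# ssacli ctrl all show config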
2023 Jan 06
2
Looking for a RAID1 box
Once upon a time, Simon Matter <simon.matter at invoca.ch> said: > Are you sure that's still true? I've done it that way in the past but it > seems at least with EL8 you can put /boot/efi on md raid1 with metadata > format 1.0. That way the EFI firmware will see it as two independent FAT > filesystems. Only thing you have to be sure is that nothing ever writes to >
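A minimal sketch of the setup described above, with /dev/sda1, /dev/sdb1 and /dev/md0 as placeholder names for the two EFI system partitions and the array. Metadata format 1.0 places the md superblock at the end of the partition, which is why the EFI firmware sees each member as an ordinary FAT filesystem:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
# mkfs.vfat /dev/md0
# mount /dev/md0 /boot/efi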
2019 Apr 05
2
Seek some help on operating system drivers. Thanks!
Hello, I am a user of CentOS 7.6. I came here for help. I recently bought a microserver called HPE Proliant MicroServer Gen10 (X3216 APU). CentOS 7.6 was installed for it. I found that the fan of the machine would keep turning and the noise was very loud. I searched everywhere but I couldn't find any way out. Pardon my ignorance showing. I don't know where to seek help. When I use
2023 Jan 09
1
Help with an HP Proliant gen10 plus?
Just starting and trying to boot off the SPP firmware update ISO image on a USB stick. I made the stick with: # mkfs.vfat /dev/sdb # dd bs=4M if=P52581_001_gen10spp-2022.09.01.00-SPP2022090100.2022_0930.1.iso of=/dev/sdb status=progress The USB drive is 16GB and the ISO is 9GB. It seemed to boot from it and go into the auto install of firmware, then died with: starting initrd... warning!!! Unable to
2023 Jan 09
1
Help with an HP Proliant gen10 plus?
> Just starting and trying to boot off the SPP firmware update ISO image > on a USB stick. > > I made the stick with: > > # mkfs.vfat /dev/sdb ^^^^^^^^^^^^^^^^^^^^ Why create an MS-DOS filesystem on the stick which gets immediately overwritten in the next step? > # dd bs=4M > if=P52581_001_gen10spp-2022.09.01.00-SPP2022090100.2022_0930.1.iso > of=/dev/sdb
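As the reply points out, dd writes the whole device raw, partition table and all, so the mkfs.vfat step is overwritten and can simply be dropped. A minimal sketch, assuming /dev/sdb really is the stick (verify with lsblk first, since dd to the wrong device is destructive):

# lsblk
# dd bs=4M if=P52581_001_gen10spp-2022.09.01.00-SPP2022090100.2022_0930.1.iso of=/dev/sdb status=progress conv=fsync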
2023 Jan 09
2
Help with an HP Proliant gen10 plus?
On 1/9/23 01:37, Simon Matter wrote: >> Just starting and trying to boot off the SPP firmware update ISO image >> on a USB stick. >> >> I made the stick with: >> >> # mkfs.vfat /dev/sdb > ^^^^^^^^^^^^^^^^^^^^ > Why create an MS-DOS filesystem on the stick which gets immediately > overwritten in the next step? I think the idea is to get that first
2023 Jan 08
1
RAID1 setup
Continuing this thread, and focusing on RAID1. I got an HPE Proliant gen10+ that has hardware RAID support. (can turn it off if I want). I am planning two groupings of RAID1 (it has 4 bays). There is also an internal USB boot port. So I am really a newbie in working with RAID. From this thread it sounds like I want /boot and /boot/efi on that USB boot device. Will it work to put / on
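A minimal sketch of the two planned RAID1 groupings with Linux software RAID, assuming the four bays appear as /dev/sda through /dev/sdd (placeholder names; in practice one would usually mirror partitions rather than whole disks):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# mdadm --detail --scan >> /etc/mdadm.conf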
2017 Jul 19
0
[ANNOUNCE] libdrm 2.4.82
Anusha Srivatsa (3): intel: PCI Ids for S SKU in CFL intel: PCI Ids for H SKU in CFL intel: PCI Ids for U SKU in CFL Ben Widawsky (1): intel/gen10: Add missed gen10 stuff Christian Gmeiner (1): etnaviv: submit full struct drm_etnaviv_gem_submit Dave Airlie (6): amdgpu: sync amdgpu_drm with kernel. drm: update drm.h to latest in drm-next. libdrm:
2018 Aug 03
2
rsync versioning problem
I seem to have an rsync versioning problem. The sender is an old ClearOS6 server with rsync 3.0.6. The receiver is a brand new CentOS7-armv7 server with rsync 3.1.2. I am running rsync over ssh. Got the error: rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6] And researching this it comes down to a versioning issue. But all I have found was to upgrade the
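For context: rsync 3.0.6 speaks protocol 30 and 3.1.2 speaks protocol 31, and the two ends normally negotiate the lower version automatically, so a "protocol data stream" error is often caused by something else (for example, a remote login script printing to stdout). Pinning the protocol is still an easy test. A sketch, with host and path names as placeholders:

$ rsync --version
$ rsync --protocol=30 -av -e ssh /src/ user@clearos6:/dest/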
2018 Oct 04
0
[ANNOUNCE] libdrm 2.4.95
This release adds a fallback for realpath() which was blocked by the web-browser sand-boxing. While the browsers are fixed-up they seem to have little incentive to roll bugfix releases :-\ -Emil Ayan Kumar Halder (1): libdrm: headers: Sync with drm-next Christian König (4): tests/amdgpu: add unaligned VM test amdgpu: remove invalid check in amdgpu_bo_alloc test/amdgpu:
2019 Jul 01
5
HPE ProLiant - support Linux Vendor Firmware Service ?
hi guys does anybody here run HPE ProLiant? I was hoping you can tell whether HPE supports the Linux Vendor Firmware Service and whether you actually get to upgrade ProLiants' BIOS/firmware via fwupdmgr? many thanks, L.
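Whether a given ProLiant's firmware is published on LVFS is exactly what fwupdmgr can report. These are the standard fwupd commands; they will simply list nothing updatable if HPE has not published firmware for the machine:

$ fwupdmgr refresh
$ fwupdmgr get-devices
$ fwupdmgr get-updates
$ fwupdmgr update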
2007 Aug 17
2
Help in starting spamassassin
I have installed spamassassin, per the instructions on Scalix's wiki, and it is working, with some important caveats. So I asked for help on the spamassassin user list, and got some, but I think I am butting up against some CentOS-specific issues... This is what I am seeing in the maillog: Aug 17 14:39:59 z9m9z sendmail[13082]: l7HIdvGf013082: Milter add: header: X-Spam-Checker-Version:
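A quick way to confirm SpamAssassin itself works, independent of the sendmail milter glue: --lint checks the configuration, and the GTUBE sample message shipped with the package should always score as spam. The sample-spam.txt location varies by distribution, so treat the redirect source as a placeholder:

$ spamassassin --lint
$ spamassassin -t < sample-spam.txt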
2017 Oct 04
0
[ANNOUNCE] intel-gpu-tools 1.20
A new intel-gpu-tools quarterly release is available with the following changes: Library changes: - Added helpers for launching external processes and capturing their outputs. (Abdiel Janulgue) - Increased max pipe count to 6 to support AMD GPUs. (Leo (Sunpeng) Li) - Various improvements for Chamelium support. (Paul Kocialkowski) - Added Coffeelake platform support. (Rodrigo Vivi, Anusha
2020 Feb 17
1
anybody runs HPE ProLiant DL385 Gen10 ?
hi guys, what I would like to ask is about DDR4 3200MHz in these servers - has anybody tried it? many thanks, L.
2018 Jan 02
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi All, any news about this issue? Can I ignore this kind of error message or do I have to do something to correct it? Thank you in advance and sorry for my insistence. Regards, Mauro > On 29 Dec 2017, at 11:45, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Hi Nithya, > > thank you very much for your support and sorry for the delay. > Below
2017 Dec 29
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi Nithya, thank you very much for your support and sorry for the delay. Below you can find the output of the 'gluster volume info tier2' command and the gluster software stack version: gluster volume info Volume Name: tier2 Type: Distributed-Disperse Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c Status: Started Snapshot Count: 0 Number of Bricks: 6 x (4 + 2) = 36 Transport-type: tcp Bricks:
2018 Jan 02
0
"file changed as we read it" message during tar file creation on GlusterFS
I think it is safe to ignore it. The problem exists due to the minor difference in file time stamps in the backend bricks of the same sub volume (for a given file), and during the course of tar, the timestamp can be served from different bricks, causing it to complain. The ctime xlator[1] feature, once ready, should fix this issue by storing time stamps as xattrs on the bricks. i.e. all bricks
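The mismatch described here can be observed directly on the bricks: the same file on two bricks of a subvolume can carry slightly different timestamps. A sketch for comparing them, using a brick path from this volume and a placeholder file name; stat and getfattr are standard tools, run on each brick server:

# stat /gluster/mnt1/brick/some/file
# getfattr -d -m . -e hex /gluster/mnt1/brick/some/file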
2017 Dec 29
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi Mauro, What version of Gluster are you running and what is your volume configuration? IIRC, this was seen because of mismatches in the ctime returned to the client. I don't think there were issues with the files but I will leave it to Ravi and Raghavendra to comment. Regards, Nithya On 29 December 2017 at 04:10, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > Hi All,
2018 Jan 02
1
"file changed as we read it" message during tar file creation on GlusterFS
Hi Ravi, thank you very much for your support and explanation. If I understand, the ctime xlator feature is not present in the current gluster package but it will be in a future release, right? Thank you again, Mauro > On 02 Jan 2018, at 12:53, Ravishankar N <ravishankar at redhat.com> wrote: > > I think it is safe to ignore it. The problem exists due to the
2017 Sep 25
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Dear Gluster Users, I implemented a distributed disperse 6x(4+2) gluster (v.3.10.5) volume with the following options: [root at s01 tier2]# gluster volume info Volume Name: tier2 Type: Distributed-Disperse Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c Status: Started Snapshot Count: 0 Number of Bricks: 6 x (4 + 2) = 36 Transport-type: tcp Bricks: Brick1: s01-stg:/gluster/mnt1/brick Brick2:
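When df reports a transport endpoint error like this, the usual first steps are to check whether the volume and its bricks are still up and, if the client process has died, to lazily unmount and remount. A sketch, assuming /tier2 is the client mount point (a placeholder inferred from the prompt above):

# gluster volume status tier2
# umount -l /tier2
# mount -t glusterfs s01-stg:/tier2 /tier2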