Displaying 20 results from an estimated 1000 matches similar to: "self-encrypting drives"
2007 Jan 03
2
[PATCH] [Bochs/32-Bit BIOS] [2/3] TCG Bios extensions
This patch adds TCG BIOS extensions to the high memory area along with
some often-used libc utility functions. The TCG extensions are described
here:
https://www.trustedcomputinggroup.org/specs/PCClient/TCG_PCClientImplementationforBIOS_1-20_1-00.pdf
I have tried to keep the patching with rombios.c to a minimum, but some
amount of code needs to be inserted at various locations.
The code is
2017 Nov 03
2
low end file server with h/w RAID - recommendations
John R Pierce wrote:
> On 11/2/2017 9:21 AM, hw wrote:
>> Richard Zimmerman wrote:
>>> hw wrote:
>>>> Next question: you want RAID, how much storage do you need? Will 4 or 8 3.5" drives be enough? (DO NOT GET crappy 2.5" drives - they're *much* more expensive than the 3.5" drives, and have less disk space. For the price of a 1TB 2.5", I can
2012 Apr 06
6
Seagate Constellation vs. Hitachi Ultrastar
Happy Friday, List!
I'm spec'ing out a Thumper-esque solution and having trouble finding my favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The Seagate Constellations seem pretty reasonable given the market circumstances but I don't have any experience with them. Anybody using these in their ZFS systems and have you had good luck?
Also, if
2010 Jan 05
9
OpenSSH daemon security bug?
A co-worker argues that we can log in using only a password to an "ssh-key
restricted host (PasswordAuthentication no)", without being asked for any
passphrase, just by putting a key (not necessarily the private key) on another
password-based host.
Is that true? I do not think so; if it were, I would call it an important
OpenSSH daemon security bug.
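For reference, the server-side behavior in question is governed by sshd_config. A minimal sketch, assuming a stock OpenSSH server:

```
# /etc/ssh/sshd_config (excerpt)
PasswordAuthentication no    # sshd refuses password logins outright
PubkeyAuthentication yes     # only keys whose public half is listed in
                             # the account's authorized_keys are accepted
```

With this in place, merely placing some key on another host does nothing: the connecting client must hold the private half of a key whose public half appears in the target account's authorized_keys file, or the login is refused.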
2015 Feb 03
3
Very slow disk I/O
On 2/2/2015 8:11 PM, Jatin Davey wrote:
> disk 252:1 | 0-0-0 | 9XG7TNQVST91000640NS CC03 | Online, Spun Up
> disk 252:2 | 0-0-1 | 9XG4M4X3ST91000640NS CC03 | Online, Spun Up
> disk 252:3 | 0-1-1 | 9XG4LY7JST91000640NS CC03 | Online, Spun Up
> disk 252:4 | 0-1-0 | 9XG51233ST91000640NS CC03 | Online, Spun Up
> End of Output **************************
>
> Let me know if I need to
2001 May 13
2
Change in behavior from 2.5p2 to 2.9p1
Under 2.5p2, if I ssh'd back to myself I would get a prompt asking for my
passphrase, and if that was incorrect it would then ask for my password.
Assuming I had an authorized_keys file with my identity.pub in it.
Under 2.9p1 it goes straight to the password prompt instead of asking for my
passphrase.
This wouldn't be a problem except that when I have "PasswordAuthentication
no" I
2006 Dec 07
7
[PATCH] [Firmware] TCG BIOS extensions for the Bochs BIOS
This patch adds an implementation of the TCG BIOS extensions to the
Bochs BIOS and enables logging of boot measurements using the previously
implemented support for TCPA ACPI tables. A low-level driver for a TPM
TIS device and an Atmel device is provided.
The implemented specification is described here:
2008 Sep 06
1
Time delaying (or time lagging)
Hello everyone,
I have a problem with clock drift on my CentOS-powered server. I have tried
setting the time in the BIOS and in the OS (with saving the time to the BIOS),
but the clock still drifts: after a month it is about five minutes behind. In
the past I used this box with Windows and there wasn't this time problem, but I
can't guarantee that there are no hardware issues. So I decided that the easiest
way to fix
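The usual remedy for steady clock drift on Linux is to discipline the clock with NTP. A sketch for a CentOS 5/6-era box; the package, service names, and pool server are the conventional ones but check your release:

```shell
# install and enable the NTP daemon (SysV-init style)
yum install ntp
chkconfig ntpd on
service ntpd start

# /etc/ntp.conf should list at least one reachable server, e.g.:
#   server pool.ntp.org

# verify that peers are being polled and the offset is converging
ntpq -p
```

Once ntpd has been running a while it also writes a drift file, so it compensates for the hardware clock's systematic error rather than just stepping the time.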
2015 Feb 27
4
OT: AF 4k sector drives with 512 emulation
Still have good-quality older SATA hardware RAID cards that require 512
bytes/sector. As far as I know, HD manufacturers are not making native 512
bytes/sector drives any more.
Some have better 512e emulation than others. Looking for some advice on
which to avoid and which are recommended. Thanks. PS this is for a CentOS6
server.
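Beyond firmware quality, the thing that most often bites with 512e AF drives is partition alignment: every partition should start on a 4096-byte boundary, or each 4 KiB physical write becomes a read-modify-write. A small sketch of the arithmetic (the helper name is mine, for illustration):

```python
def is_4k_aligned(start_sector, logical_sector_size=512):
    """True if a partition starting at this LBA sits on a 4 KiB boundary."""
    return (start_sector * logical_sector_size) % 4096 == 0

# sector 2048 (the modern fdisk/parted default) is aligned;
# sector 63 (the legacy DOS offset) is not
print(is_4k_aligned(2048), is_4k_aligned(63))
```

On a running Linux system the drive's real geometry can be read from /sys/block/<dev>/queue/logical_block_size and physical_block_size to feed a check like this.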
2014 Sep 23
1
vTPM manager for Xen
Hello everyone,
I am sorry for interrupting your work; I have been following the
correspondence in silence.
I am trying to build a vTPM implementation into Xen 6.2, but
I was not able to find all the means to do it.
What can be found is just abstract knowledge. Most of the
info always forwards me to this doc
2013 May 04
4
Scrub CPU usage ...
I just subscribed to this list so in case this subject has already been
discussed at length, my apologies. I have been waiting for btrfs
forever. I have been waiting for it to become reasonably stable. In
the wake of escalating problems with my old hardware RAID setup, I
decided now was the time to make the transition. At this point I have
been completely transitioned to btrfs for nearly
2008 Feb 25
2
Panic when ZFS pool goes down?
Is it still the case that there is a kernel panic if the device(s) with the ZFS pool die?
I was thinking to attach some cheap SATA disks to a system to use for nearline storage. Then I could use ZFS send/recv on the local system (without ssh) to keep backups of the stuff in the main pool. The main pool is replicated across 2 arrays on the SAN and we have multipathing and it's quite
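The local send/recv described here needs no ssh at all, just a pipe between the two pools on the same host. A sketch, assuming pools named tank (main) and nearline (the cheap SATA disks); the dataset and snapshot names are made up:

```shell
# snapshot the dataset to protect
zfs snapshot tank/data@backup1

# replicate it to the nearline pool on the same host (no ssh involved)
zfs send tank/data@backup1 | zfs recv nearline/data

# later, send only the delta between two snapshots
zfs snapshot tank/data@backup2
zfs send -i tank/data@backup1 tank/data@backup2 | zfs recv nearline/data
```

The incremental `-i` form keeps the nightly transfer proportional to what changed rather than to the size of the dataset.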
2011 May 10
5
Tuning disk failure detection?
We recently had a disk fail on one of our whitebox (SuperMicro) ZFS
arrays (Solaris 10 U9).
The disk began throwing errors like this:
May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING: /pci@0,0/pci8086,3410@9/pci15d9,400@0 (mpt_sas0):
May 5 04:33:44 dev-zfs4 mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31110610
And errors for the drive were
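On Solaris 10 the sd driver's retry behavior, which is what keeps a dying disk limping along instead of being faulted, can be tuned per drive model in sd.conf. A hedged sketch only; the vendor/product string and values are placeholders, and the exact VID/PID padding must match the drive's inquiry data:

```
# /kernel/drv/sd.conf (excerpt) -- reduce retries so a failing disk
# is faulted sooner instead of stalling the pool
sd-config-list = "SEAGATE ST91000640NS", "retries-timeout:2";
```

A reconfigure reboot is needed for sd.conf changes to take effect; verify the settings against Oracle's sd(7D) documentation for your update level before deploying.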
2008 Jan 25
2
Capturing credentials for imap sync
Hi List
All the imap sync apps I could find require the username/password
credentials to be known before a sync occurs. I have Dovecot using ldap
acting as a nearline backup mail server to MS Exchange. Every hour
imapsync syncs mail between Exchange and Dovecot. This all works fine
because the users' credentials are known, but when new users are added I
would like the process to work
2017 Nov 02
11
low end file server with h/w RAID - recommendations
Richard Zimmerman wrote:
> hw wrote:
>> Next question: you want RAID, how much storage do you need? Will 4 or 8 3.5" drives be enough? (DO NOT GET crappy 2.5" drives - they're *much* more expensive than the 3.5" drives, and have less disk space. For the price of a 1TB 2.5", I can get at least a 4TB WD Red.)
>
> I will second Mark's comments here. Yes,
2011 May 20
2
scsi3 persistent reservations in cluster storage fencing
I'm interested in the idea of sharing a bunch of SAS JBOD devices
between two CentOS servers in an active-standby HA cluster sort of
arrangement, and found something about using scsi3 persistent
reservations as a fencing method. I'm not finding a lot of specifics
about how this works, or how you configure two initiator systems on a
SAS chain. I don't have any suitable
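The mechanics of SCSI-3 persistent reservations can be exercised by hand with sg_persist from sg3_utils, which is essentially what SCSI fencing agents do underneath. A sketch of the register/reserve/preempt sequence; the device path and keys are made up:

```shell
# each node registers its own key with the shared LUN
sg_persist --out --register --param-sark=0xABC1 /dev/sdb   # run on node 1
sg_persist --out --register --param-sark=0xABC2 /dev/sdb   # run on node 2

# node 1 takes a "write exclusive, registrants only" reservation (type 5)
sg_persist --out --reserve --param-rk=0xABC1 --prout-type=5 /dev/sdb

# to fence node 2, node 1 preempts (ejects) node 2's registration;
# node 2 can then no longer write to the LUN
sg_persist --out --preempt --param-rk=0xABC1 --param-sark=0xABC2 /dev/sdb

# inspect current registrations and the active reservation
sg_persist --in --read-keys /dev/sdb
sg_persist --in --read-reservation /dev/sdb
```

Because the reservation lives in the target itself, a fenced node stays locked out even if it reboots, which is what makes this usable as a fencing method without a power switch.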
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology ...
I'm actually speaking of hardware :)
ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed it should be able to handle a lot of disks.
I want to
2014 May 05
1
SYSLINUX PXE LOCALBOOT Bitlockers
That's a great question, actually, I should have remembered to mention that! You can control what factors are used for the TPM's integrity check to release the BitLocker key on boot. Depending on whether you're on a BIOS or EFI machine, there are slight differences, but it is definitely controllable by Group Policy. http://technet.microsoft.com/en-us/library/ee706521(v=ws.10).aspx#BKMK_depopt3
I
2007 Jul 19
4
Multiple WAN link -- CentOS Suitability
My situation:
I have a cable modem (COMCAST 6Mbit d/l) and am about to also have DSL
(Verizon 3 Mbit d/l). I was thinking of using CentOS (4.4, 4.5, or 5??) as
a router/dhcp server/firewall for my home network consisting of 3 to 6
computers at any given time. I seek the wisdom of the members of this list
on the following issues:
-- Is CentOS a good direction to go? I do not mind manually
2014 May 28
3
The state of xfs on CentOS 6?
We're looking at getting an HBR (that's a technical term, honkin' big
RAID). What I'm considering is, rather than chopping it up into 14TB or
16TB filesystems, of using xfs for really big filesystems. The question
that's come up is: what's the state of xfs on CentOS 6? I've seen a number
of older threads reporting problems with it - has that mostly been resolved?
How does