Displaying 14 results from an estimated 14 matches for "honkin".
2014 Mar 25
3
NVidia, again
Got an HBS (y'know, Honkin' Big Server, one o' them technical terms), a
Dell 720 with two Tesla GPUs. I updated the o/s to 6.5, and I cannot get the
GPUs recognized. As a last resort, I d/l'd NVidia's proprietary
driver/installer, 325, and it builds fine... I've yum removed the
kmod-nvidia I had on the system, no...
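A sketch of that swap, assuming a stock CentOS 6.5 box; only the 325 series number comes from the post, and the exact installer file name is a guess:
yum remove kmod-nvidia
# keep the in-kernel nouveau driver from grabbing the GPUs at boot
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist-nouveau.conf
# run NVidia's .run installer from a text console (file name assumed)
sh NVIDIA-Linux-x86_64-325.15.run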
2014 May 28
3
The state of xfs on CentOS 6?
We're looking at getting an HBR (that's a technical term, honkin' big
RAID). What I'm considering, rather than chopping it up into 14TB or
16TB filesystems, is using xfs for really big filesystems. The question
that's come up is: what's the state of xfs on CentOS 6? I've seen a number
of older threads reporting problems with it - has that most...
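A minimal sketch of the one-big-filesystem route; the device name, label, and mount point below are assumptions:
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary xfs 0% 100%
# one XFS filesystem across the whole array instead of 14TB/16TB slices
mkfs.xfs -L bigraid /dev/sdb1
mount /dev/sdb1 /srv/bigraid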
2017 Jun 20
2
CentOS 6 and luksOpen
...e command that tells it to create
the device in /dev/mapper from the info in /etc/crypttab.
Clues for the poor? Yes, the server will, at some point in the future, go
to CentOS 7, but that needs my user to be off for a while, and his jobs
run literally for weeks, with loads upwards of 30 on an HBS (honkin' big
server)....
mark
2017 Jun 20
2
CentOS 6 and luksOpen
...he device in /dev/mapper from the info in /etc/crypttab.
>>
>> Clues for the poor? Yes, the server will, at some point in the future,
>> go to CentOS 7, but that needs my user to be off for a while, and his jobs
>> run literally for weeks, with loads upwards of 30 on an HBS (honkin' big
>> server)....
>
> MAPDEVICE=/dev/sdxy ; cryptsetup luksOpen ${MAPDEVICE} luks-$(cryptsetup
> luksUUID ${MAPDEVICE})
Something's not right. I did
cryptsetup luksOpen /dev/sdb luks-$(cryptsetup luksUUID $(/dev/sdb))
--key-file /etc/crypt.pw
It did want the password, so...
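The visible bug is the extra command substitution: $(/dev/sdb) makes the shell try to execute the device node itself. A corrected form, keeping the same device and key file:
cryptsetup luksOpen /dev/sdb luks-$(cryptsetup luksUUID /dev/sdb) \
    --key-file /etc/crypt.pw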
2013 Nov 04
5
[OT] Building a new backup server
Guys,
I was thrown a cheap OEM server with a 120 GB SSD and 10 x 4 TB SATA disks
to build a backup server for the data backups. It's built around an Asus Z87-A
that unfortunately seems to have problems with anything Linux.
Anyway, BackupPC is my preferred backup solution, so I went ahead and installed
another favourite, CentOS 6.4 - and failed.
The RAID controller is a Highpoint RocketRAID
2015 Feb 19
0
Anyone using torque/pbs/munge?
CentOS 6.6
I've got two servers, server1 and hbs (honkin' big server). Both are
running munge, and torque... *separately*. My problem is that I've got
users who want to be able to submit from server1 to hbs. I see that munged
can be pointed to an alternate keyfile... but is there any way to tell
qsub what to use?
(And yes, I got on the torque us...
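One piece of this is standard: Torque's qsub takes a queue@server spec. The keyfile half is sketched below with assumed paths; whether the torque client can be told to use the alternate munge socket is exactly the open question:
# second munged instance keyed for hbs (both paths are assumptions)
munged --key-file=/etc/munge/hbs.key --socket=/var/run/munge/hbs.socket
# submitting from server1 to a queue on hbs
qsub -q batch@hbs job.sh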
2017 Jun 20
0
CentOS 6 and luksOpen
...o create
> the device in /dev/mapper from the info in /etc/crypttab.
>
> Clues for the poor? Yes, the server will, at some point in the future, go
> to CentOS 7, but that needs my user to be off for a while, and his jobs
> run literally for weeks, with loads upwards of 30 on an HBS (honkin' big
> server)....
MAPDEVICE=/dev/sdxy ; cryptsetup luksOpen ${MAPDEVICE} luks-$(cryptsetup luksUUID ${MAPDEVICE})
MAPDEVICE=/dev/sdxy ; mount /dev/mapper/luks-$(cryptsetup luksUUID ${MAPDEVICE}) /mnt
--
LF
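Assuming the thread's /etc/crypt.pw key file, the matching /etc/crypttab line would look roughly like this; the UUID is a placeholder:
# /etc/crypttab format: <name>  <device>  <key file>  <options>
luks-<UUID>  UUID=<UUID>  /etc/crypt.pw  luks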
2017 Jun 20
0
CentOS 6 and luksOpen
.../mapper from the info in /etc/crypttab.
>>>
>>> Clues for the poor? Yes, the server will, at some point in the future,
>>> go to CentOS 7, but that needs my user to be off for a while, and his jobs
>>> run literally for weeks, with loads upwards of 30 on an HBS (honkin' big
>>> server)....
>>
>> MAPDEVICE=/dev/sdxy ; cryptsetup luksOpen ${MAPDEVICE} luks-$(cryptsetup
>> luksUUID ${MAPDEVICE})
>
> Something's not right. I did
> cryptsetup luksOpen /dev/sdb luks-$(cryptsetup luksUUID $(/dev/sdb))
> --key-file /etc/cr...
2011 Jul 01
0
Horrible mouse bug fixed?
Wow! Is that horrendous mouse warp bug finally fixed? Since updating the Wine version from the repo for my distro recently, all the games I have that are affected by this huge honkin' hairy humongous mouse bug seem to be working better.
If it is really officially fixed and not just some weird anomaly in my machine, I'd like to know just exactly what was causing it and how they fixed it.
2017 Oct 12
3
Kernel crash
Hi everyone,
I updated the kernel from 3.10.0-514.16.1.el7.x86_64
to 3.10.0-693.2.2.el7.x86_64. While I was following these steps
https://wiki.centos.org/HowTos/Laptops/Wireless/Broadcom (I knew that I
needed to compile everything again) in order to activate WIFI, the laptop
crashed doing
# depmod -a
# modprobe wl
Note that I (naively) replaced # depmod $(uname -r) from the guide
(stupid
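For reference, the sequence as the guide gives it, rebuilding the module map only for the running kernel before loading the Broadcom module:
# rebuild module dependencies for the running kernel
depmod $(uname -r)
# load the proprietary Broadcom wl driver
modprobe wl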
2006 Jul 03
13
Eager loading ActiveRecord objects
I have a complex graph of ActiveRecord entities that I want to load with
one big honkin' join query to avoid ripple loading. It's for a report.
However :include doesn't get me all the way there because I want to load
parents of parents (or children of children). I don't mind doing the
SQL by hand but I'm not quite sure what to do to ge...
2005 May 12
2
Smashing EXT3 for fun and profit (or: how to lose all your data)
Hello everyone,
I've just lost my whole EXT3 Linux partition to what was probably a
bug. For your reading pleasure, and in the hope there is enough
information to fix this problem in the future, here is the story of a
violent ending:
This tragic story actually starts on Windows: MS Word had wiped out
an important file on a floppy, and I got the task of retrieving what
was possible. Using
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
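The target layout could be stood up with something like the following; the pool name and Solaris device names are placeholders:
# mirrored pool plus a dataset for home directories
zpool create tank mirror c0t1d0 c0t2d0
zfs create tank/home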
2007 Dec 09
8
zpool kernel panics.
Hi Folks,
I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris
10 280r (SPARC) server.
The message I get on panic is this:
panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment
(offset=423713792 size=1024)
This seems to come about when the zpool is being used or being
scrubbed - about twice a day at the moment. After the reboot, the
scrub seems to have
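The usual commands for poking at that state, with an assumed pool name:
# show pool health and any corruption found so far
zpool status -v tank
# start a scrub by hand rather than waiting for the scheduled one
zpool scrub tank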