Displaying 20 results from an estimated 6000 matches similar to: "no priority on the console?"
2009 Jan 13
12
OpenSolaris better than Solaris 10u6 with regards to ARECA RAID card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]
Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor:
Seagate
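When chasing device errors like these on Solaris, the per-device error counters are a useful first check (a generic suggestion, not something from this thread):

# summarize soft/hard/transport error counts per disk
iostat -En
# watch per-device service times while the errors occur
iostat -xnz 5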
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller, the driver being arcmsr. Quad-core AMD with 16 GB of RAM, OpenSolaris upgraded to snv_134.
The zpool
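For reference, a pool of the shape described might be built along these lines (a sketch only; the device names are hypothetical, and the mirrored log is a common recommendation rather than the poster's actual layout):

# 24 data disks as three raidz2 vdevs plus a mirrored SSD log
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
    raidz2 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0 \
    log mirror c2t0d0 c2t1d0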
2013 Jun 19
1
Weird I/O hangs (9.1R, arcsas, interrupt spikes on uhci0)
Hi,
very periodically, we see I/O hangs for about 10 seconds, roughly once per minute.
Each time this happens, the I/O rate simply drops to zero and all disk access hangs; this is also very noticeable on the shell, for NFS clients, etc. Everything else (networking, kernel, ...) seems to continue normally.
Environment: FreeBSD 9.1R GENERIC on amd64, using ZFS, on a ARC1320 PCIe with 24x Seagate
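To correlate the hangs with the uhci0 interrupt spikes in the subject, one approach (standard FreeBSD tools, suggested here rather than taken from the thread) is to sample counters while a stall is in progress:

# per-device interrupt totals and rates
vmstat -i
# extended per-disk I/O statistics, refreshed every second
iostat -x 1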
2008 Jul 25
18
zfs, raidz, spare and jbod
Hi.
I installed Solaris Express Developer Edition (b79) on a Supermicro
quad-core Harpertown E5405 with 8 GB RAM and two internal SATA drives.
I installed Solaris onto one of the internal drives. I added an Areca
ARC-1680 SAS controller and configured it in JBOD mode. I attached an
external SAS cabinet with 16 SAS drives of 1 TB each (931 binary GB). I
created a raidz2 pool with ten disks and one spare.
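Such a layout could be created roughly as follows (a sketch of the configuration described; device names are hypothetical):

zpool create tank \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
           c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
    spare c2t10d0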
2009 May 13
2
With RAID-Z2 under load, machine stops responding to local or remote login
Hi world,
I have a 10-disk RAID-Z2 system with 4 GB of DDR2 RAM and a 3 GHz Core 2 Duo.
It's exporting ~280 filesystems over NFS to about half a dozen machines.
Under some loads (in particular, any attempts to rsync between another
machine and this one over SSH), the machine's load average sometimes
goes insane (27+), and it appears to all be in kernel-land (as nothing
in
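On (Open)Solaris, a quick way to see where kernel time like that is going (a generic suggestion, not from the thread) is a short kernel profile:

# sample kernel stacks for 5 seconds, show the 20 hottest entries
lockstat -kIW -D 20 sleep 5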
2010 Jul 26
1
areca 1100 kmod / kernel support
Hi,
is the Areca 1100 RAID controller supported by CentOS 5? Or is there a
kmod rpm available?
Thanks
Juergen
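One way to check, assuming the card is driven by arcmsr like other Areca HBAs (an assumption, not confirmed in the thread):

# is the card visible, and does the bundled driver exist?
lspci | grep -i areca
modinfo arcmsr | head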
2008 Mar 26
1
freebsd 7 and areca controller
Hi.
I'm looking at deploying a FreeBSD 7-RELEASE server with some storage
attached to an Areca ARC-1680 controller. But this card is not
mentioned in 'man 4 arcmsr'
(http://www.freebsd.org/cgi/man.cgi?query=arcmsr&sektion=4&manpath=FreeBSD+7.0-RELEASE).
Areca's website does mention FreeBSD as a supported OS
(http://www.areca.com.tw/products/pcietosas1680series.htm).
Has
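Whether the driver actually attaches is easy to verify on a running system (standard FreeBSD commands, offered as a suggestion):

# list PCI devices with their bound drivers
pciconf -lv | grep -B3 -i areca
# check for arcmsr attach messages at boot
dmesg | grep -i arcmsr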
2012 Apr 03
2
CentOS 6.2 + areca raid + xfs problems
Two weeks ago I (clean-)installed CentOS 6.2 on a server which had been running 5.7.
There is a 16-disk (~11 TB) data volume running on an Areca ARC-1280 RAID card with LVM + an xfs filesystem on it. The included arcmsr driver module is loaded.
At first it seemed OK, but within a few hours I started getting I/O error messages on directory listings, and then a bit later when I did a vgdisplay
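A first diagnostic pass in this situation might look like the following (generic commands, not from the post; the LV path is hypothetical):

# look for controller or filesystem errors in the kernel log
dmesg | egrep -i 'arcmsr|xfs|i/o error'
# dry-run filesystem check; -n makes no changes (unmount first)
umount /data
xfs_repair -n /dev/mapper/vg_data-lv_data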
2008 Jan 30
3
newfs locks entire machine for 20seconds
----- Original Message -----
From: "Ivan Voras" <ivoras@freebsd.org>
>> The machine is running with ULE on 7.0, as mentioned, using an Areca 1220
>> controller over 8 disks in RAID 6 + Hotspare.
>
> I'd suggest you first try to reproduce the stall without ULE, while
> keeping all other parameters exactly the same.
Ok tried with an updated 7 world / kernel as
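Swapping schedulers for that comparison means rebuilding the kernel with SCHED_4BSD in place of SCHED_ULE (a sketch of the usual procedure; MYKERNEL is a placeholder config name):

# in the kernel config file, replace
#   options SCHED_ULE
# with
#   options SCHED_4BSD
cd /usr/src
make buildkernel KERNCONF=MYKERNEL && make installkernel KERNCONF=MYKERNEL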
2011 Jan 17
2
Question on how to get Samba to use larger pread/write calls.
We are testing Samba 3 (and 4) on Fedora Core 13,
10Gbit connection with a Mac OS 10.6.4 system
as the client. We will be adding some Windows
machines sooner or later with 10Gbit interfaces.
We are seeing 100-150MBytes/sec read or write
performance between the Mac and the FC13 system
over 10Gbit interface but it should be capable of
400-500MBytes/sec. We have a local raid
on the FC13 system
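Typical smb.conf knobs for large sequential transfers over fast links include the following (real Samba parameters, though the values here are illustrative rather than tuned for this setup):

# /etc/samba/smb.conf, [global] section
use sendfile = yes
aio read size = 16384
aio write size = 16384
socket options = TCP_NODELAY SO_RCVBUF=262144 SO_SNDBUF=262144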
2010 Apr 04
15
Diagnosing Permanent Errors
I would like to get some help diagnosing permanent errors on my files. The machine in question has 12 1TB disks connected to an Areca raid card. I installed OpenSolaris build 134 and according to zpool history, created a pool with
zpool create bigraid raidz2 c4t0d0 c4t0d1 c4t0d2 c4t0d3 c4t0d4 c4t0d5 c4t0d6 c4t0d7 c4t1d0 c4t1d1 c4t1d2 c4t1d3
I then backed up 806G of files to the machine, and had
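The standard way to see exactly which files are affected, and to re-verify every block, is (using the pool name from the post):

# list files with unrecoverable (permanent) errors
zpool status -v bigraid
# re-read and checksum the whole pool
zpool scrub bigraid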
2015 Sep 24
1
Logrotate problems
It's interesting in your world, where "broken" is "functions exactly as it is documented to work".
If you want it to match subdirectories then you should add them to the logrotate config, or add another entry yourself for each subdirectory. It's not hard, and it's certainly not broken. It does what you tell it to do.
On Sep 24, 2015, at 6:33 AM, Andrew Holway <andrew.holway at gmail.com> wrote:
> Hmm,
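Following that advice, the stanza can simply list both globs (a sketch; the special/ subdirectory name is hypothetical, and the postrotate line mirrors the stock EPEL nginx config):

# /etc/logrotate.d/nginx
/var/log/nginx/*log /var/log/nginx/special/*log {
    daily
    rotate 10
    missingok
    sharedscripts
    postrotate
        /bin/kill -USR1 $(cat /run/nginx.pid 2>/dev/null) 2>/dev/null || true
    endscript
}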
2015 Sep 24
2
Logrotate problems
Actually, doing what logrotate suggests causes other problems. We don't
have this problem on any other system so I am keen to understand the root
of the issue rather than start messing around with the default permissions
of the log directories.
logrotate only matches /var/log/nginx/*log, i.e. /var/log/nginx/access.log and
/var/log/nginx/error.log
On the server where we have problems we have
2012 Jan 09
14
scaling projections for dashboard database?
So I got dashboard up and running on our production system on Thursday before I left. Within 48 hours it had completely filled the /var filesystem. The ibdata1 file is currently 8 GB in size.
1. What size should I expect for ~500 nodes reporting every 30 minutes?
2. Are there some database cleanup scripts which I have managed to overlook that need to be run?
--
Jo Rhett
Net Consonance :
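Dashboard does ship report-pruning rake tasks; run from the dashboard directory, something like the following keeps the reports table bounded (the 30-day window is illustrative):

cd /usr/share/puppet-dashboard
# delete reports older than 30 days
rake RAILS_ENV=production reports:prune upto=30 unit=day

Note that ibdata1 itself never shrinks after pruning; reclaiming that space requires a dump and reload of the database.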
2011 Dec 02
12
puppet master under passenger locks up completely
I came in this morning to find all the servers all locked up solid:
# passenger-status
----------- General information -----------
max = 20
count = 20
active = 20
inactive = 0
Waiting on global queue: 236
----------- Domains -----------
/etc/puppet/rack:
PID: 2720 Sessions: 1 Processed: 939 Uptime: 9h 22m 18s
PID: 1615 Sessions: 1 Processed: 947 Uptime: 9h 23m
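With count == max == active and 236 requests queued, the worker pool is simply exhausted; the usual first step is to raise it in the Apache config (real Passenger directives, illustrative values):

PassengerMaxPoolSize 30
# recycle workers periodically to bound memory growth
PassengerMaxRequests 1000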
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I've got to say, I really appreciate the Areca controller taking such good care of me.
For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk
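Once the dead drive is swapped, recovery is normally just (pool and device names hypothetical):

# identify the FAULTED vdev, then resilver onto the replacement
zpool status -x
zpool replace tank c3t8d0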
2013 Feb 08
11
Puppet dashboard stuck pending jobs
Hi Guys,
I am a new puppet user and wanted some type of monitoring for puppet, so I
deployed puppet-dashboard. It has been working very well for a few days
now, but all of a sudden I started getting pending tasks and they never
finish, even after restarting all processes. They keep accumulating and
never seem to finish even though the clients are running fine. I have the
puppet-dashboard
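Pending tasks in Dashboard are worked off by delayed_job workers, so the first thing to confirm is that a worker is actually running (a sketch; install paths vary):

cd /usr/share/puppet-dashboard
# is any worker alive?
ps aux | grep [d]elayed_job
# run a foreground worker to drain the queue
env RAILS_ENV=production rake jobs:work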
2015 Sep 24
2
Logrotate problems
Hi Y'all,
We have nginx set up and we are having problems with logrotate. The
permissions and users do not seem to be any different from other machines
that are working OK; however, /var/log/nginx does have a directory in
there that we are using to collect some special log stuff.
Could this subdirectory be interfering with the logrotate process?
ta
Andrew
[root@ ~]# logrotate -d
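For reference, the debug flag can be pointed at just the nginx stanza; -d implies a dry run, so nothing is rotated (standard logrotate usage):

# show what would be rotated without doing it
logrotate -d /etc/logrotate.d/nginx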
2011 Jul 21
2
fyi: RHEL 5.7 is out
Hi, FYI:
it seems Red Hat has just pushed RHEL 5.7 out.
Among others, I see:
kernel-2.6.18-274.el5.x86_64.rpm
redhat-release-5Server-5.7.0.3.x86_64.rpm
Rainer
2015 Sep 24
0
Logrotate problems
Hmm, so it seems that logrotate might be broken for nginx on CentOS 7. I
filed a bug with EPEL.
https://bugzilla.redhat.com/show_bug.cgi?id=1266105
On 24 September 2015 at 11:49, Andrew Holway <andrew.holway at gmail.com>
wrote:
> Actually, doing what logrotate suggests causes other problems. We don't
> have this problem on any other system so I am keen to understand the root