Displaying 8 results from an estimated 8 matches for "fcal".
2007 Mar 14
3
I/O bottleneck root cause identification with DTrace? (controller or I/O bus)
DTrace and Performance Teams,
I have the following I/O performance questions (I'm already
savvy with lockstat and the pre-DTrace
utilities for performance analysis, but I need details on
pinpointing I/O bottlenecks at the controller or I/O bus):
Q.A> Determining I/O saturation bottlenecks (beyond service
times and kernel contention).
I'm
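A minimal DTrace sketch of the per-device I/O accounting this question is after, assuming the stock io provider (dev_statname and b_bcount are standard members of its probe arguments):

    /* tally I/O count and bytes per device while the workload runs */
    io:::start
    {
        @iops[args[1]->dev_statname]  = count();
        @bytes[args[1]->dev_statname] = sum(args[0]->b_bcount);
    }

Saturation at the controller or bus tends to show up as these totals hitting a ceiling while service times climb.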
2011 Mar 08
0
Slices and reservations Was: Re: How long should an empty destroy take?
...roller and OS. This is what OEMs
do to ensure that drives from different manufacturers and revisions
all have the same capacity.
Different routes need to be followed for (S)ATA and SCSI/SAS drives
because of their different command sets. I've no experience with setting
the capacity on FCAL drives; I just bought the drives from the OEM when
using FCAL drives.
First, for (S)ATA the feature to use is the Device Configuration Overlay
(DCO). Various tools exist to set and reset/recover the capacity of
the drive. Some drive manufacturers have a (diagnostic) tool which can
set the DCO. Another...
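On Linux, one such tool is hdparm's DCO commands (a sketch; /dev/sdX is a placeholder, and the restore operation is destructive):

    # show the DCO state and the real factory capacity
    hdparm --dco-identify /dev/sdX
    # reset the DCO to factory defaults; hdparm demands the extra flag
    hdparm --dco-restore --yes-i-know-what-i-am-doing /dev/sdX
    # the related Host Protected Area limit is read (and set) with -N
    hdparm -N /dev/sdX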
2004 Jan 20
0
nlminb function
...written in S-PLUS, which I think I have converted successfully to R with the exception of part of the opt.param function.
In S-PLUS it is:
nlminb(start=x0, obj=negllgamma.f, scale=1, lower=c(0.01,0.0001),
upper=c(10,0.9999), gamma=gamma, maxlik=maxlik,
y=ldose, s=lse, max.iter = 1000, max.fcal = 1000)$par
and so far with R I've got to:
optim(par=x0, fn=negllgamma.f, method="L-BFGS-B", lower=c(0.01,0.0001),
upper=c(10,0.9999), gamma=gamma, maxlik=maxlik,
y=ldose, s=lse, control=list(maxit = 1000))$par
however I've failed to find an equivalent to "max....
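For what it's worth, R's stats package ships its own nlminb(), whose control list takes eval.max and iter.max; a sketch assuming those correspond to S-PLUS's max.fcal and max.iter:

    ## R's nlminb(); named extra arguments reach the objective via '...'
    ## eval.max/iter.max assumed equivalent to max.fcal/max.iter
    nlminb(start = x0, objective = negllgamma.f, scale = 1,
           lower = c(0.01, 0.0001), upper = c(10, 0.9999),
           gamma = gamma, maxlik = maxlik, y = ldose, s = lse,
           control = list(eval.max = 1000, iter.max = 1000))$par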
2007 Apr 18
1
Cheap Array Enclosure for ZFS pool?
We have 14 500GB PATA drives left over from another project. Given that ZFS seems to prefer working with JBODs, does anyone know of an inexpensive enclosure with an FCAL interface to host the disks?
2013 Jul 25
0
FNIC nested PVM
....2.2 to allow me to do PCI passthrough), but as soon as I start up a VM
with a phys device passed through, the first layer loses all connectivity to
the SAN.
Setup:
* 2 UCS B200M2 blades - configured with 9 vHBAs.
* Linux machines running LIO-ORG's and SCST's FCAL target mode
(trying both to decide on which to use going forwards)
The first vHBA is passed through to the "bare metal" OVM3.2.2, and the rest
are managed through xen-pciback. I then install another OVM3.2.4 instance
in an HVM with 1 vHBA passed through.
Within the nested OVS (wh...
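For reference, a sketch of how the non-passed-through functions end up under xen-pciback (the BDF is a made-up example, and the device must first be unbound from its native driver):

    # classic pciback sysfs interface
    echo 0000:0b:00.1 > /sys/bus/pci/drivers/pciback/new_slot
    echo 0000:0b:00.1 > /sys/bus/pci/drivers/pciback/bind
    # or, with the xl toolstack
    xl pci-assignable-add 0000:0b:00.1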
2007 Dec 09
8
zpool kernel panics.
Hi Folks,
I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris
10 280R (SPARC) server.
The message I get on panic is this:
panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment
(offset=423713792 size=1024)
This seems to come about when the zpool is being used or being
scrubbed - about twice a day at the moment. After the reboot, the
scrub seems to have
2007 Jan 02
0
[PATCH 1/4] add scsi-target and IO_CMD_EPOLL_WAIT patches
...kefile
++++ b/drivers/scsi/Makefile
+@@ -21,6 +21,7 @@ CFLAGS_seagate.o = -DARBITRATE -DPARIT
+ subdir-$(CONFIG_PCMCIA) += pcmcia
+
+ obj-$(CONFIG_SCSI) += scsi_mod.o
++obj-$(CONFIG_SCSI_TGT) += scsi_tgt.o
+
+ obj-$(CONFIG_RAID_ATTRS) += raid_class.o
+
+@@ -122,6 +123,7 @@ obj-$(CONFIG_SCSI_FCAL) += fcal.o
+ obj-$(CONFIG_SCSI_LASI700) += 53c700.o lasi700.o
+ obj-$(CONFIG_SCSI_NSP32) += nsp32.o
+ obj-$(CONFIG_SCSI_IPR) += ipr.o
++obj-$(CONFIG_SCSI_SRP) += libsrp.o
+ obj-$(CONFIG_SCSI_IBMVSCSI) += ibmvscsi/
+ obj-$(CONFIG_SCSI_SATA_AHCI) += libata.o ahci.o
+ obj-$(CONFIG_SCSI_SATA_SVW) +=...
2007 Oct 24
182
Yager on ZFS
Not sure if it's been posted yet, my email is currently down...
http://weblog.infoworld.com/yager/archives/2007/10/suns_zfs_is_clo.html
Interesting piece. This is the second post from Yager that shows
Solaris in a pretty good light. I particularly like his closing
comment:
"If you haven't checked out ZFS yet, do, because it will eventually
become ubiquitously implemented