similar to: RE: SAMBA 15TB Volume?

Displaying 20 results from an estimated 600 matches similar to: "RE: SAMBA 15TB Volume?"

2004 Jan 25
0
RE: SAMBA 15TB Volume?
Thanks Paul, I really appreciate your help. Regards, Thomas Massano Systems Engineer One World Financial Center New York, NY 212.416.0710 Office 212.416.0740 FAX Thomas_Massano@StorageTek.com INFORMATION made POWERFUL -----Original Message----- From: Green, Paul [mailto:Paul.Green@stratus.com] Sent: Saturday, January 24, 2004 7:11 PM To: Green, Paul; Massano, Thomas Cc: 'Samba
2005 Mar 03
1
OPENSSH Question
Hi, We are in the process of implementing 15 HP Integrity Itanium servers running HP-UX 11.23, and we would like to know if OpenSSH will work fine, like it is working now on all the HP 9000 servers running HP-UX 11.11. Thanks in advance for your help! Jose Luis Salas, M.E. EDS UNIX Team @ StorageTek Office: (303) 661-5774 Page: (877) 705-5928 Mail: jose.salas at eds.com jose_salas at
2012 Apr 06
4
Legend based on levels of a variable
I have a bivariate plot of axis2 against axis1 (data below). I would like to use a different size, type and color for the points in the plot depending on which region each point comes from. For some reason, I cannot get it done. Below is my code. col <- rep(c("blue", "red", "darkgreen"), c(16, 16, 16)) ## Choose different size of points cex <- rep(c(1, 1.2, 1), c(16, 16,
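The per-point vectors built with rep() in the excerpt above only line up if the rows are already sorted by region; a more robust pattern is to keep one short vector per aesthetic and index it by the region factor itself. A minimal sketch in R, assuming a hypothetical data frame dat with numeric columns axis1 and axis2 and a factor region with three levels of 16 points each (the actual data in the post is truncated):

## Minimal sketch: made-up data standing in for the truncated "data below"
set.seed(1)
dat <- data.frame(
  axis1  = rnorm(48),
  axis2  = rnorm(48),
  region = factor(rep(c("north", "central", "south"), each = 16),
                  levels = c("north", "central", "south"))
)

## One entry per level of region, indexed by the factor's integer codes
cols <- c("blue", "red", "darkgreen")
cexs <- c(1, 1.2, 1)
pchs <- c(16, 17, 15)
idx  <- as.integer(dat$region)

plot(dat$axis1, dat$axis2,
     col = cols[idx], cex = cexs[idx], pch = pchs[idx],
     xlab = "axis1", ylab = "axis2")

## Legend entries follow the order of levels(dat$region), so they match the points
legend("topright", legend = levels(dat$region),
       col = cols, pt.cex = cexs, pch = pchs)

Keying the aesthetics to the factor rather than to row order means the same three short vectors also drive the legend, so the points and the legend cannot drift out of sync.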
2009 Apr 26
1
1.6.1: "DNS error" but ping works
With 1.6.1 svn: [2009-04-26 15:01:00] NOTICE[1844]: chan_sip.c:9927 sip_reg_timeout: -- Registration for '17470121145 at proxy01.sipphone.com' timed out, trying again (Attempt #30) [2009-04-26 15:01:00] WARNING[1844]: acl.c:376 ast_get_ip_or_srv: Unable to lookup 'proxy01.sipphone.com' [2009-04-26 15:01:00] WARNING[1844]: chan_sip.c:10037 transmit_register: Probably a DNS
2013 May 29
3
How can I use "puppet apply" with hiera?
I'm running Puppet v2.7.14. I have a puppet master server with Hiera and it works great. I also want to be able to apply my manifests locally on a node. I have installed Hiera on my node and I can verify using the Hiera command line application that values can be looked up: user@tag5-4-qa-sjc:~$ hiera corp_puppet_server region=northamerica datacenter=sjc environment=qa --debug DEBUG:
2003 Mar 25
2
AS/400 - Unix Connectivity
Hi Folks, I've used Samba in the past for Windows NT - Unix connectivity situations. I wondered if Samba also supported AS/400 - Unix connectivity, specifically to allow AS/400 to read/write to a Unix file system...?? Regards Ian Gill Principal Consultant Professional Services (+1) 727.784.4475 Office (+1) 727.784.1278 Fax (+1) 727.560.6710 Cell. Ian_Gill@StorageTek.com
2008 Jun 20
1
zfs corruption...
Hi all, It would appear that I have a zpool corruption issue to deal with... the pool is exported, but upon trying to import it, the server panics. Are there any tools that can be used on a zpool that is in an exported state? I've got a separate test bed in which I'm trying to recreate the problem, but I keep getting messages to the effect that I need to import the pool first. Suggestions? thanks Jay
2014 Oct 04
2
Mounting LUNs from a SAN array - LUN mappings to devices in /dev/ - are they static?
Hi All :) I am currently involved in a project in which a SAN array (Sun StorageTek 2540) exports LUNs to some servers running CentOS 5.2 x86. I will be performing a migration to CentOS 5.9 x86_64 in some time and am gathering the needed info now :) I am trying to find where in the OS the information about LUN mappings to /dev/ devices is kept. For example, on the array level I
2007 Jan 19
0
Re: [osol-discuss] Possibility to change GUID zfs pool at import
Hello Nico, Friday, January 19, 2007, 7:53:06 PM, you wrote: NVdM> Hi, NVdM> Looking to see if someone has already found a solution, or workaround, to change the GUID of a zfs pool. NVdM> Let me explain some more in depth: by use of tools like ShadowImage NVdM> on storage arrays like the Sun StorageTek 99xx (but also for Sun NVdM> related storage arrays or IBM, EMC and other storage vendors)
2011 Jan 27
0
Move zpool to new virtual volume
Hello all, I want to reorganize the virtual disk / storage pool / volume layout on a StorageTek 6140 with two CSM200 expansion units attached (for example, stripe LUNs across trays, which is not the case at the moment). On a data server I have a zpool "pool1" over one of the volumes on the StorageTek. The zfs file systems in the pool are mounted locally and exported via NFS to clients. Now
2004 Apr 26
1
Problems registering with Sipphone
Has anyone else had problems registering with Sipphone over the last few weeks? Previously, this had worked fine. I contacted Sipphone technical support, but they're not much help. register => 17471234567:password@northamerica.sipphone.com/123
2003 Oct 09
1
[Bug 740] Sun's pam_ldap account management is not working
http://bugzilla.mindrot.org/show_bug.cgi?id=740 Summary: Sun's pam_ldap account management is not working Product: Portable OpenSSH Version: 3.7.1p1 Platform: UltraSparc OS/Version: Solaris Status: NEW Severity: major Priority: P2 Component: PAM support AssignedTo: openssh-bugs at mindrot.org
2010 Feb 24
0
disks in zpool gone at the same time
Hi, Yesterday all of the disks in two of my zpools got disconnected. They are not real disks - they are LUNs from a StorageTek 2530 array. What could that be - a failing LSI card or the mpt driver in 2009.06? After a reboot I got four disks in the FAILED state - zpool clear fixed things with resilvering. Here is how it started (/var/adm/messages) Feb 23 12:39:03 nexus scsi: [ID 365881 kern.info] /pci at 0,0/pci10de,5d at
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss] We are occasionally seeing massive time-to-completions for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using a SSD drive as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
2009 Apr 15
5
StorageTek 2540 performance radically changed
Today I updated the firmware on my StorageTek 2540 to the latest recommended version and am seeing radically different performance when testing with iozone than I did in February of 2008. I am using Solaris 10 U5 with all the latest patches. This is the performance achieved (on a 32GB file) in February last year: KB reclen write rewrite read reread 33554432
2011 Dec 08
4
Backup Redux
Hey folks, I just went through the archives to see what people are doing for backups, and here is what I found: - amanda - bacula - BackupPC - FreeNAS Here is my situation: we have pretty much all Sun hardware with a Sun StorageTek SL24 tape unit backing it all up. OSes are a combination of RHEL and CentOS. The software we are using is EMC NetWorker Management Console version
2008 Feb 01
2
Un/Expected ZFS performance?
I'm running Postgresql (v8.1.10) on Solaris 10 (Sparc) from within a non-global zone. I originally had the database "storage" in the non-global zone (e.g. /var/local/pgsql/data on a UFS filesystem) and was getting performance of "X" (e.g. from a TPC-like application: http://www.tpc.org). I then wanted to try relocating the database storage from the zone (UFS
2009 Oct 01
1
rsync file corruption when destination is a SAN LUN (Solaris 9 & 10)
I have run into a problem using 'rsync' to copy files from local disk to a SAN mounted LUN / file-system. The 'rsync' seems to run fine and it reports no errors, but some files are corrupted (check-sums don't match originals, and file data is changed). So far, I have found this problem on both Solaris 9 and Solaris 10 OSes and on several different models of Sparc systems
2011 Nov 23
3
P2Vs seem to require a very robust Ethernet
Now that we can gather diagnostic info, I think I know why our P2Vs kept failing last week. Another one just died right in front of my eyes. I think either the Ethernet or NFS server at this site occasionally "blips" offline when it gets busy and that messes up P2V migrations. The RHEV export domain is an NFS share offered by an old Storagetek NAS, connected over a 10/100 Ethernet.
2010 Mar 18
0
Extremely high iowait
Hello, We have a 5-node OCFS2 volume backed by a Sun (Oracle) StorageTek 2540. Each system is running OEL5.4 and OCFS2 1.4.2, using device-mapper-multipath to load balance over 2 active paths. We are using the default multipath configuration for our SAN. We are observing iowait time between 60% and 90%, sustaining at over 80% as I'm writing this, driving load averages to >25 during an rsync