Displaying 16 results from an estimated 16 matches for "stornext".
2023 Feb 01
1
dyn.load(now = FALSE) not actually lazy?
On Wed, 1 Feb 2023 14:16:54 +1100,
Michael Milton <ttmigueltt at gmail.com> writes:
> Is this a bug in the `dyn.load` implementation for R? If not, why is
> it behaving like this? What should I do about it?
On Unix-like systems, dyn.load forwards its arguments to dlopen(). It
should be possible to confirm with a debugger that R passes RTLD_NOW to
dlopen() when calling dyn.load(now =
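The distinction is visible from any language that wraps dlopen(). A minimal sketch via Python's ctypes (Linux/glibc assumed; the missing-library name is made up): dlopen() always maps a library's DT_NEEDED dependencies at load time, and RTLD_LAZY merely defers resolution of function symbols, so a library whose dependencies are absent fails to load however `now` is set.

```python
import ctypes, os

# Sketch only (Linux/glibc assumed; "libnosuchlib.so.0" is a made-up name).
# dlopen() loads all DT_NEEDED dependencies eagerly; RTLD_LAZY only defers
# binding of function symbols within the object itself.
libm = ctypes.CDLL("libm.so.6", mode=os.RTLD_NOW)  # eager binding, succeeds
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(9.0))

try:
    ctypes.CDLL("libnosuchlib.so.0", mode=os.RTLD_LAZY)
except OSError as exc:
    print("load failed despite RTLD_LAZY:", exc)
```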
2023 Feb 01
2
dyn.load(now = FALSE) not actually lazy?
...w=FALSE)` the first one, R seems to try to resolve the symbols
immediately, causing the load to fail.
For example, I have `libtorch` installed on my HPC. Note that it links to
various libs such as `libcudart.so` and `libmkl_intel_lp64.so.2` which
aren't currently in my library path:
$ ldd
/stornext/System/data/nvidia/libtorch-gpu/libtorch-gpu-1.12.1/lib/libtorch_cpu.so
linux-vdso.so.1 => (0x00007ffcab58c000)
libgomp.so.1 =>
/stornext/System/data/apps/gcc/gcc-11.2.0/lib64/libgomp.so.1
(0x00007f8cb22bf000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f8cb2...
2010 Nov 13
1
StorNext CVFS
Morning All!
Anyone ever tried exporting a StorNext CVFS filesystem from a Linux box???
I've got this Samba server (3.5.6) running on CentOS 5.4 and it's working
fine, exporting ext3, nfs and an IBM GPFS filesystem just fine. So I know
Samba is good and my configuration is working.
I tried to add the exportation of a StorNext CVFS volume and that...
2002 Oct 09
5
Value too large for defined data type
...achines. I recently started getting an error that is confusing and
I can't find info
documented on it. I searched the news group and found it mentioned but no
solution yet.
I get the error when sync'ing from a Solaris 8 machine to my Solaris 8
server.
stat space/sunpci/drives/F.drive/docs/StorNext/LinuxPort/devfs.README.txt :
Value too large for defined data type
stat space/sunpci/drives/F.drive/docs/StorNext/StorNextNotes.doc : Value too
large for defined data type
I also see the same error going from our IRIX 6.5.15 machine, and the error
is seen on a directory
vs a file:
stat apps1/fsdev...
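For context, that message is strerror(EOVERFLOW): a stat() compiled with a 32-bit off_t cannot represent a file size (or inode number) above 2^31 - 1, which a build with large-file support usually cures. A hedged sketch for locating offending files ahead of a transfer (the limit constant is the classic 32-bit one; adjust if inode numbers are the culprit instead):

```python
import os, tempfile

# "Value too large for defined data type" is strerror(EOVERFLOW): a stat()
# built with a 32-bit off_t cannot represent sizes above 2**31 - 1 bytes.
LIMIT = 2**31 - 1

def too_large(path):
    """True if a 32-bit stat() could not represent this file's size."""
    return os.stat(path).st_size > LIMIT

# demo with a sparse file just over the limit
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(LIMIT + 1)
    name = f.name
print(too_large(name))
os.remove(name)
```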
2010 Nov 05
2
xServes are dead ;-( / SAN Question
...ckup-site), with another SAN, same application etc. But this
announcement has caused a little delay. We do have several servers running
CentOS (about 10 or so), on an Intel server platform.
Now with this said, I am searching for documentation on operating a SAN
under linux. We are looking at the Quantum StorNext FS2 product for the SAN
itself.
And I am searching for info about accessing volumes on a fiber channel
network by label. I know I can label an individual ext3 partition, but how
do I do so on a RAID array via fiber channel?
Basically, I'm looking for a Linux starter guide to fiber channel storage.
Thanks...
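On the labelling question: a fibre channel LUN shows up as an ordinary block device, so the usual tools apply. `e2label /dev/sdX1 <label>` sets the label on an ext3 volume, and udev keeps a by-label view under /dev/disk/by-label/. An illustrative fstab line (the label and mount point are placeholders):

```
# /etc/fstab — mount the array by label rather than by /dev/sdX name,
# which can change between boots on an FC fabric
LABEL=sanvol01  /srv/sanvol01  ext3  defaults  0 0
```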
2017 Nov 02
4
samba 4.x slow ...
Hi,
we are running samba 4.4 on two machines as file servers.
Both are running a GFS (stornext). The storage is attached using 8G HBA.
You can get up to 800MB/s local speed. We are exporting the shares using
2x1GB
and 2x10G. However the clients are only getting 40-50MB/s. With samba3 I
think we had up to 80-90MB/s.
Using a 100MB/s link for the client we see 12-13MB/s (wire speed).
Usin...
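Not a diagnosis, but the usual first checks for this symptom: confirm which SMB dialect the clients negotiate (an SMB1 connection will never reach SMB2/3 speeds), then experiment with async I/O and sendfile. A hedged smb.conf starting point — defaults and best values vary by Samba version, so treat these as knobs to measure, not a fix:

```
# smb.conf [global] — illustrative tuning knobs; verify against your version
   use sendfile = yes
   aio read size = 16384
   aio write size = 16384
   socket options = TCP_NODELAY
```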
2011 Mar 07
2
connection speeds between nodes
Hi All,
I've been asked to set up a 3D render farm at our office. At the start it
will contain about 8 nodes, but it should be built to grow. The
setup I had in mind is as follows:
All the data is already stored on a StorNext SAN filesystem (Quantum).
This would be mounted on a CentOS server through fiber optics, which
in its turn shares the FS over NFS to all the render nodes (also CentOS).
Now we've estimated that the average file sent to each node will be
about 90MB, so that's what I'd like the average co...
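A quick sanity check on the numbers in the post (the practical 1 GbE NFS figure below is an assumption, not a measurement): with all eight nodes pulling at once, a single gigabit uplink on the NFS server becomes the bottleneck long before the SAN does.

```python
# back-of-envelope for the render farm above; 110 MB/s is an assumed
# practical NFS ceiling for one 1 GbE link
nodes = 8
file_mb = 90          # average file per node, from the post
link_mb_s = 110       # assumed practical 1 GbE throughput

per_node = link_mb_s / nodes          # fair share when all nodes pull at once
seconds_per_file = file_mb / per_node
print(round(per_node, 1), round(seconds_per_file, 1))
```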
2007 Jun 05
2
Performance tweaking for lots of files
...tructures with many thousands (up to 100k or more) of
relatively small files (around 10MB each) and whatever I try, it goes
pretty damn slow. I'm getting about 1 file every second, which comes
down to somewhere between 50 and 100Mbit (and we've got a gigabit network).
The files are on a StorNext filesystem that usually has no problem
delivering 300MB/sec or more, so that shouldn't be a problem. Netperf
also has no problems filling the lines, so that works as well.
Anybody have any suggestions on getting things faster, or at least to
check what's going wrong?
Kind regards,
Je...
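One common workaround when per-file latency, not bandwidth, is the limit: batch the small files into a single stream so each file does not pay its own round trip (the shell classic is `tar -cf - dir | ssh host tar -xf -`). A minimal sketch of the same idea with Python's tarfile, using made-up file names:

```python
import io, os, tarfile, tempfile

# pack many small files into one in-memory stream, then unpack — one
# continuous transfer instead of a setup/teardown per file
src = tempfile.mkdtemp()
for i in range(100):
    with open(os.path.join(src, f"frame{i:03d}.dat"), "wb") as f:
        f.write(b"x" * 1024)

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    tar.add(src, arcname=".")

dst = tempfile.mkdtemp()
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    tar.extractall(dst)

print(len(os.listdir(dst)))
```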
2008 Jun 02
2
RE: Largish filesystems [was Re: XFS install issue]
...configuring with 7+ TB of
storage is one of the smaller storage servers for our
systems. Using the same configuration with more drives we
are planning several 20TB+ systems.
For the work we do a single file system over 100TB is not
unreasonable. We will be replacing an 80TB SAN system based
on StorNext with an Isilon system with 10G network
connections.
If there was a way to create a Linux (CentOS) 100TB to
500TB or larger clustered file system with the nodes
connected via infiniband that was easily manageable with
throughput that can support multiple 10Gbps Ethernet
connections I would be ve...
2017 Nov 02
0
samba 4.x slow ...
...u as well:
https://lists.samba.org/archive/samba-technical/2017-October/123611.html
Hope this helps & best regards
Andreas
Am 02.11.2017 um 14:05 schrieb Dr. Peer-Joachim Koch via samba:
> Hi,
>
> we are running samba 4.4 on two machines as file servers.
> Both are running a GFS (stornext). The storage is attached using 8G HBA.
> You can get up to 800MB/s local speed. We are exporting the shares
> using 2x1GB
> and 2x10G. However the clients are only getting 40-50MB/s. With samba3
> I think we had up to 80-90MB/s.
> Using a 100MB/s link for the client we see 12-13MB...
2017 Nov 02
1
samba 4.x slow ...
...e/samba-technical/2017-October/123611.html
>
> Hope this helps & best regards
> Andreas
>
> Am 02.11.2017 um 14:05 schrieb Dr. Peer-Joachim Koch via samba:
> >Hi,
> >
> >we are running samba 4.4 on two machines as file servers.
> >Both are running a GFS (stornext). The storage is attached using 8G HBA.
> >You can get up to 800MB/s local speed. We are exporting the shares
> >using 2x1GB
> >and 2x10G. However the clients are only getting 40-50MB/s. With
> >samba3 I think we had up to 80-90MB/s.
> >Using a 100MB/s link for the cli...
2010 May 28
0
Samba reporting only ~4G on a much larger filesystem
I tried this on the IRC channel and got no response...
I have a 2.2TB filesystem. The filesystem itself is a stornext (Quantum) filesystem, which in the /etc/fstab is type cvfs.
When a client mounts the samba share, it only reports ~4GB (3.9 or 3.7 GB) total space. We have a third party app, which will/can not be updated which checks the disk space available and fails if it's not enough. Consequently, the t...
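The usual escape hatch when Samba's own free-space arithmetic goes wrong on an exotic filesystem is to take over the report with `dfree command`, a documented smb.conf option. An illustrative fragment — the script path, mount point, and share name are placeholders; the script must print total and available space in 1024-byte blocks:

```
# smb.conf, in the share or [global] section
   dfree command = /usr/local/bin/cvfs_dfree

# /usr/local/bin/cvfs_dfree (mode 0755):
#   #!/bin/sh
#   df -k /mnt/cvfs | awk 'NR==2 {print $2" "$4}'
```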
2009 Jul 28
1
LUN Aggregation
Greetings.
Is there an approved way to aggregate LUNs when using OCFS2? I have several 1TB LUNs I'd
like to make into a single filesystem. Would I use something like Linux software raid?
Brett
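On shared OCFS2 storage, plain md software RAID is generally the wrong tool, since it is not cluster-aware; the usual route is LVM2 with the cluster locking daemon (clvmd) so every node sees a consistent volume, though OCFS2 on LVM has historically carried support caveats worth checking with your distribution. An illustrative command sketch — device names are placeholders and everything here needs root:

```
# concatenate several LUNs into one clustered LV for OCFS2
pvcreate /dev/mapper/lun0 /dev/mapper/lun1 /dev/mapper/lun2
vgcreate --clustered y vg_ocfs2 /dev/mapper/lun0 /dev/mapper/lun1 /dev/mapper/lun2
lvcreate -l 100%FREE -n data vg_ocfs2
mkfs.ocfs2 -N 4 /dev/vg_ocfs2/data
```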
2010 Jun 16
6
clustered file system of choice
Hi all,
I am just trying to consider my options for storing a large mass of
data (tens of terabytes of files) and one idea is to build a
clustered FS of some kind. Has anybody had any experience with that?
Any recommendations?
Thanks in advance for any and all advice.
Boris.
2014 Jul 03
0
ctdb split brain nodes doesn't see each other
...2
public_addresses:
10.98.81.2/24 bond0
Ctdb:
CTDB_RECOVERY_LOCK=/mnt/media23/.ctdb_lock/lock.file
CTDB_DEBUGLEVEL=ERR
CTDB_MANAGES_SAMBA=yes
CTDB_PUBLIC_INTERFACE=bond0
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_SET_NoIpFailback=1
CTDB_SET_DeterministicIPs=0
The lock filesystem is a StorNext filesystem.
Any help would be appreciated.
Cheers
Axel
--
View this message in context: http://samba.2283325.n4.nabble.com/ctdb-split-brain-nodes-doesn-t-see-each-other-tp4668664.html
Sent from the Samba - General mailing list archive at Nabble.com.
2007 Oct 24
182
Yager on ZFS
Not sure if it's been posted yet, my email is currently down...
http://weblog.infoworld.com/yager/archives/2007/10/suns_zfs_is_clo.html
Interesting piece. This is the second post from Yager that shows
Solaris in a pretty good light. I particularly like his closing
comment:
"If you haven't checked out ZFS yet, do, because it will eventually
become ubiquitously implemented