Displaying 20 results from an estimated 1100 matches similar to: "RE: Largish filesystems [was Re: XFS install issue]"
2008 Jun 02
2
Largish filesystems [was Re: XFS install issue]
On Mon, Jun 2, 2008 at 2:03 PM, Johnny Hughes <johnny at centos.org> wrote:
> I would also not use XFS in production ... but that is just me.
Interesting, I thought that XFS was fairly safe for use. What would
you recommend for filesystems in the 50-500 terabyte range?
(And yes, we do actually run a 70 TB filesystem at the moment, so I'm not
asking just to annoy you; I'm genuinely
2008 May 30
3
XFS install issue
I am attempting to implement XFS on a new system.
System:
Supermicro SC846 TQ-R900B - rack-mountable
SUPERMICRO X7DWN+ - motherboard
3ware 9650SE-24M8 - storage controller
10 Hitachi DeskStar 7K1000 - hard drive - 1 TB
8GB RAM
2 Intel Quad-Core Xeon E5420 / 2.5 GHz processors
Installed CentOS 5.1 x86_64 from DVD. System on /dev/sda1 -
250GB ext3 (RAID 5). /home will be on /dev/sdb1 - over
7TB
2008 Jun 02
0
RE: Largish filesystems [was Re: XFS install issue]
Alain Terriault wrote:
> Just wondering if any one ever consider/use Coraid for massive storage
> under CentOS?
> http://www.coraid.com
> It seems like a very reasonable option.. comments ?
I have one of those installed on CentOS 4.6 with 1TB of storage. I'm
sharing it between three servers. I can't say how well it works for
multi-TB storage, but it works well enough for me
2008 Jun 06
2
Samba AD valid users issue
I have set up a new CentOS 5.1 server as a storage
server with over 7TB of storage. The server has been
integrated into a large Active Directory network: there are
5 primary AD servers and a large number of local AD servers
at each location (over 20). There are also over 15 trusted
domains, hundreds of groups, and thousands of users. It has
been quite a challenge to integrate the Linux
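For an AD member server in an environment like this, the relevant smb.conf pieces usually look something like the sketch below (the realm, workgroup and idmap ranges are placeholders for illustration, not taken from the thread):

```ini
# Sketch only: realm/workgroup names and idmap ranges are placeholders.
[global]
    security = ads
    realm = EXAMPLE.COM
    workgroup = EXAMPLE
    # needed so users from the 15+ trusted domains resolve
    allow trusted domains = yes
    winbind use default domain = no
    # catch-all idmap backend for domains without explicit config
    idmap config * : backend = tdb
    idmap config * : range = 100000-199999
```

Once the server is joined with `net ads join`, group-based `valid users` entries in share definitions can then be resolved against AD via winbind.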
2011 Apr 09
1
Compression of largish expression array files in the DAAGbio/inst/doc directory?
The inst/doc directory of the DAAG package has 6 files coral551.spot, ... that
are around 0.85 MB each. It would be useful to be able to zip them, but as
matters stand that interferes with the use of the Sweave file that uses them to
demonstrate input of expression array data that is in the "spot" format. They
do not automatically get unzipped when required. I have checked that
2008 Jun 21
5
recommendations for copying large filesystems
I need to copy over 100TB of data from one server to another via network.
What is the best option to do this? I am planning to use rsync but is there
a better tool or better way of doing this?
For example, I plan on doing
rsync -azv /largefs /targetfs
/targetfs is a NFS mounted filesystem.
Any thoughts?
TIA
2018 Apr 04
2
Expand distributed replicated volume with new set of smaller bricks
We currently have a 3 node gluster setup each has a 100TB brick (total
300TB, usable 100TB due to replica factor 3)
We would like to expand the existing volume by adding another 3 nodes, but
each will only have a 50TB brick. I think this is possible, but will it
affect gluster performance, and if so, by how much? Assuming we run a
rebalance with the force option, will this distribute the existing data
2018 Apr 04
0
Expand distributed replicated volume with new set of smaller bricks
Hi,
Yes this is possible. Make sure you have cluster.weighted-rebalance enabled
for the volume and run rebalance with the start force option.
Which version of gluster are you running (we fixed a bug around this a
while ago)?
Regards,
Nithya
On 4 April 2018 at 11:36, Anh Vo <vtqanh at gmail.com> wrote:
> We currently have a 3 node gluster setup each has a 100TB brick (total
> 300TB,
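The procedure described in the reply can be sketched as gluster CLI steps; the volume name and brick paths below are placeholders, not from the thread:

```shell
# Placeholder volume "gvol0" and brick paths, for illustration only.
# 1. Weight rebalance by brick size, so the new 50TB bricks receive
#    proportionally less data than the existing 100TB ones:
gluster volume set gvol0 cluster.weighted-rebalance on
# 2. Add the new replica set of three 50TB bricks:
gluster volume add-brick gvol0 replica 3 \
    node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1
# 3. Rebalance with force, then watch progress:
gluster volume rebalance gvol0 start force
gluster volume rebalance gvol0 status
```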
2014 May 28
3
The state of xfs on CentOS 6?
We're looking at getting an HBR (that's a technical term, honkin' big
RAID). What I'm considering is, rather than chopping it up into 14TB or
16TB filesystems, of using xfs for really big filesystems. The question
that's come up is: what's the state of xfs on CentOS 6? I've seen a number
of older threads describing problems with it - has that mostly been resolved?
How does
2023 Feb 01
1
dyn.load(now = FALSE) not actually lazy?
On Wed, 1 Feb 2023 14:16:54 +1100,
Michael Milton <ttmigueltt at gmail.com> wrote:
> Is this a bug in the `dyn.load` implementation for R? If not, why is
> it behaving like this? What should I do about it?
On Unix-like systems, dyn.load forwards its arguments to dlopen(). It
should be possible to confirm with a debugger that R passes RTLD_NOW to
dlopen() when calling dyn.load(now =
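A quick way to see which unresolved dependencies would trip an RTLD_NOW load is ldd, which resolves a binary's shared-library dependencies the way the dynamic loader would. A sketch (/bin/ls stands in as a universally available example; the export path is a placeholder):

```shell
# ldd lists a shared object's dependencies; any entry marked
# "not found" is one that makes dlopen() with RTLD_NOW fail up front.
ldd /bin/ls
# For the libtorch case in the thread, extending LD_LIBRARY_PATH with
# the directories holding libcudart / libmkl before starting R is the
# usual fix (placeholder path):
# export LD_LIBRARY_PATH=/opt/cuda/lib64:$LD_LIBRARY_PATH
```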
2018 May 26
2
glusterfs as vmware datastore in production
> Hi,
>
> Does anyone have glusterfs as vmware datastore working in production in a
> real world case? How to serve the glusterfs cluster? As iscsi, NFS?
>
>
Hi,
I am using glusterfs 3.10.x for VMware ESXi 5.5 NFS DataStore.
Our Environment is
- 4 node supermicro server (each 50TB, NL SAS 4TB used, LSI 9260-8i)
- Totally 100TB service volume
- 10G Storage Network and Service
2010 Dec 11
8
What NAS device(s) do you use? And why?
If you use any NAS (or a SAN) devices, what do you use? And I'm
referring more to larger scale network storage than your home PC or
home theater system.
We've had very good experiences with our NetGear ReadyNAS devices but
I'm in the market for something new. The NetGears aren't the cheapest
ones around, but they do what it says on the box. My only real gripe
with them is the
2023 Feb 01
2
dyn.load(now = FALSE) not actually lazy?
On Linux, if I have a .so file that has a dependency on another .so, and I
`dyn.load(now=FALSE)` the first one, R seems to try to resolve the symbols
immediately, causing the load to fail.
For example, I have `libtorch` installed on my HPC. Note that it links to
various libs such as `libcudart.so` and `libmkl_intel_lp64.so.2` which
aren't currently in my library path:
$ ldd
2023 Aug 02
1
How to tune rsync to speed up?
Hi there,
we're facing some fluctuating throughput when rsyncing about 70TB from one
server to a Dell Isilon.
Both systems are connected with 10G Fiber (not Fibre Channel).
So we started with one simple "rsync -a /src /dest" to the Dell by using
NFSv3.
It runs at around 3Gbit for some seconds, then drops to 30Kbit for some
seconds, then again "some" Gig are transferred, and then
2010 Nov 13
1
StorNext CVFS
Morning All!
Has anyone ever tried exporting a StorNext CVFS filesystem from a Linux box?
I've got this Samba server (3.5.6) running on CentOS 5.4 and it's working
fine, exporting ext3, NFS and an IBM GPFS filesystem just fine. So I know
Samba is good and my configuration is working.
I tried to add the export of a StorNext CVFS volume and that doesn't
work. All the other volumes still work
2011 Jun 07
2
Disk free space, quotas and GPFS
I am migrating the main file servers at work onto a new storage platform
based on GPFS. I am using RHEL 5.6 with the samba3x packages (aka 3.5.4)
recompiled to get the vfs_gpfs and tsmsm modules, with a couple of extra
patches to vfs_gpfs module to bring it 3.5.8 level. It is running with
ctdb against Windows AD 2008 R2 domain controllers with all the
idmapping been held in the AD.
In order to
2018 May 28
0
glusterfs as vmware datastore in production
Nice to read this. Any particular reason to *not* run the OS image in the
glusterfs cluster?
Thanks
On 05/26/2018 02:56 PM, ??? wrote:
>
> Hi,
>
> Does anyone have glusterfs as vmware datastore working in
> production in a real world case? How to serve the glusterfs
> cluster? As iscsi, NFS?
>
>
> Hi,
>
> I am using glusterfs 3.10.x for VMware ESXi
2004 Dec 06
1
Maximum ext3 file system size ??
Hi,
Has the ext3 file system maximum size been updated, or is it still 4TB for
the 2.6.* kernel? The site at
http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html says that it
is 4TB yet, but I would like to know if it is possible to create and use
stable & easy-to-fix (or at least as stable & easy-to-fix as ext3) file
systems as big as 100TB for the 32-bit Linux architecture?
Any experience
2002 Oct 09
5
Value too large for defined data type
Howdy,
I am just starting to use rsync and now have an rsync server set up
to back up our various machines. I recently started getting an error that is
confusing, and I can't find any documentation on it. I searched the newsgroup
and found it mentioned, but no solution yet.
I get the error when sync'ing from a Solaris 8 machine to my Solaris 8
server.
stat
2010 Nov 05
2
xServes are dead ;-( / SAN Question
Hi !
As some of you might know, Apple has discontinued its Xserve servers as of
January 31st, 2011.
We have a server rack with 12 Xserves ranging from dual G5s to dual
quad-core Xeon latest generation, 3 Xserve RAIDs and one ActiveRAID 16 TB
disk enclosure. We also use Xsan to access a shared file system among the
servers. Services are run from this shared filesystem, spread