Displaying 7 results from an estimated 7 matches for "netsuite".
2011 Apr 01
0
Netsuite data fetch
Hello Friends,
I have a requirement to fetch NetSuite Inventory Items. I am using the
"netsuite_client" gem, but I can only fetch a single record via the
client.find_by('ItemSearchBasic', ...) method. Is there any way I can
fetch all the inventory items?
Thanks for any help or suggestion.
Abhis
--
Y...
2012 Nov 14
1
Howto find out volume topology
Hello,
I would like to find out the topology of an existing volume. For example,
if I have a distributed-replicated volume, which bricks are the replication
partners?
Fred
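A minimal way to check this from the CLI, assuming the standard gluster command and a purely hypothetical volume name "myvol": gluster volume info prints the brick list, and for a distributed-replicated volume each consecutive group of <replica count> bricks in that list forms one replica set.

  # Hypothetical volume name; bricks are listed in replica-set order,
  # so with "replica 2" bricks 1+2, 3+4, ... are replication partners.
  gluster volume info myvol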
2012 Dec 17
2
Transport endpoint
Hi,
I've got a Gluster error: Transport endpoint not connected.
It came up twice after trying to rsync a 2 TB filesystem over; it reached
about 1.8 TB and then got the error.
Logs on the server side (in reverse time order):
[2012-12-15 00:53:24.747934] I [server-helpers.c:629:server_connection_destroy] 0-RedhawkShared-server: destroyed connection of
2012 Dec 18
2
Gluster and public/private LAN
I have an idea I'd like to run past everyone. Every gluster peer would
have two NICs - one "public" and the other "private" with different IP
subnets. What I am proposing is that every gluster peer would have all of
the private peer addresses in /etc/hosts, while the public addresses would
be in DNS. Clients would use DNS.
The goal is to have all peer-to-peer
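A minimal sketch of that layout, with purely hypothetical host names and subnets (the same names would resolve to public addresses in DNS for the clients):

  # /etc/hosts on every gluster peer -- private (storage) addresses only.
  # DNS carries the public addresses for the same names, which clients use.
  10.0.0.1   gluster1
  10.0.0.2   gluster2
  10.0.0.3   gluster3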
2011 Sep 06
1
Inconsistent md5sum of replicated file
I was wondering if anyone would be able to shed some light on how a file
could end up with inconsistent md5sums on Gluster backend storage.
Our configuration is running on Gluster v3.1.5 in a distribute-replicate
setup consisting of 8 bricks.
Our OS is Red Hat 5.6 x86_64. Backend storage is an ext3 RAID 5.
The 8 bricks are in round-robin DNS and are mounted for reading/writing via
NFS automounts.
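One way to narrow this down, assuming direct access to the backend bricks (the paths below are hypothetical), is to hash the file on each replica brick and dump its extended attributes, where the replicate translator records pending self-heal state:

  # Run on each server that holds a replica of the file; paths are hypothetical.
  md5sum /export/brick1/path/to/file
  getfattr -d -m . -e hex /export/brick1/path/to/file   # shows trusted.afr.* / trusted.gfid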
2012 Jul 26
2
kernel parameters for improving gluster writes on millions of small writes (long)
This is a continuation of my previous posts about improving write perf
when trapping millions of small writes to a gluster filesystem.
I was able to improve write perf by ~30x by running STDOUT through gzip
to consolidate and reduce the output stream.
Today, another similar problem, having to do with yet another
bioinformatics program (which these days typically handle the 'short
reads' that
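The gzip trick described above amounts to something like the following (the program and file names are hypothetical): piping STDOUT through gzip consolidates millions of tiny writes into fewer, larger writes on the gluster mount.

  # Hypothetical tool and paths; gzip buffers and compresses the stream,
  # so the gluster client sees far fewer, larger writes.
  some_bioinformatics_tool input.fastq | gzip > /mnt/gluster/results/output.txt.gz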
2012 Sep 25
2
GlusterFS performance
GlusterFS newbie (less than a week) here. Running GlusterFS 3.2.6 servers
on Dell PE2900 systems with four 3.16 GHz Xeon cores and 16 GB memory
under CentOS 5.8.
For this test, I have a distributed volume of one brick only, so no
replication. I have made performance measurements with both dd and
Bonnie++, and they confirm each other; here I report only the dd numbers
(using bs=1024k). File
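For reference, dd measurements of this kind are usually taken along the following lines; only bs=1024k is from the post, the mount point and sizes are assumptions.

  # Hypothetical mount point and file size; bs=1024k matches the post.
  dd if=/dev/zero of=/mnt/gluster/ddtest bs=1024k count=10240    # write test, ~10 GB
  dd if=/mnt/gluster/ddtest of=/dev/null bs=1024k                # read test
  # For a fairer write figure, add conv=fdatasync to include flush time.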