search for: metocean

Displaying 5 results from an estimated 5 matches for "metocean".

2012 Oct 13
3
3.2.6 or 3.3
...few distributed, replicated volumes on two servers with maybe a dozen or so clients. Secondary use will be a distributed volume spread over all machines. What are people's experiences with 3.2.6 vs 3.3? Would you recommend 3.3 for operational use? Thanks, Frank
Frank Sonntag, Meteorologist, MetOcean Solutions Ltd, PO Box 441, New Plymouth, New Zealand 4340, T: +64 7-825 0540, M: +64 21-0245 2275, f.sonntag .A.T. metocean.co.nz, http://www.metocean.co.nz
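For context, a two-server replicated volume of the kind described in this thread would typically be created with GlusterFS commands along these lines; the volume name, server names, and brick paths are examples, not details from the post:

    # create a replica-2 volume from one brick on each server
    gluster volume create mirrored replica 2 transport tcp server1:/export/brick1 server2:/export/brick1
    gluster volume start mirrored
    # clients mount it with the native FUSE client
    mount -t glusterfs server1:/mirrored /mnt/mirrored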
2012 Nov 14
3
Using local writes with gluster for temporary storage
Hi, We have a cluster with 130 compute nodes and a NAS-type central storage under gluster (3 bricks, ~50TB). When we run a large number of ocean models we can run into bottlenecks with many jobs trying to write to our central storage. It was suggested to us that we could also use gluster to unite the disks on the compute nodes into a single "disk" in which files would be written
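A plain distributed (non-replicated) volume built from one local brick per compute node, as suggested in this thread, would look roughly like the sketch below; node names, paths, and the volume name are placeholders:

    # one brick per compute node; files are distributed across bricks, not replicated
    gluster volume create scratch transport tcp node001:/data/brick node002:/data/brick node003:/data/brick
    gluster volume start scratch
    # each node mounts the aggregate volume for temporary model output
    mount -t glusterfs node001:/scratch /scratch
    # some releases also offer a NUFA option to prefer the local brick for new files; not shown here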
2008 Jul 22
11
Windows 2000 DomU
Hello, I am doing tests to convert old rotting servers into a big shiny new Xen platform. I have been able to migrate a Windows 2003 server without a scratch. I am trying to do the same thing with Windows 2000 Server but things aren't so great... I made an image from a disk that worked in a physical machine. Then I boot from this image using Xen. Windows 2000 starts to boot up, then gives me a
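For comparison, a Windows guest imaged from a physical disk is usually run as an HVM domain; a minimal Xen 3.x-style guest config might look like the sketch below, where the paths, memory size, and names are assumptions rather than details from the original message:

    # /etc/xen/win2k -- illustrative HVM guest configuration
    kernel = "/usr/lib/xen/boot/hvmloader"
    builder = "hvm"
    device_model = "/usr/lib/xen/bin/qemu-dm"
    name = "win2k"
    memory = 512
    vcpus = 1
    vif = [ 'type=ioemu, bridge=xenbr0' ]
    disk = [ 'file:/var/lib/xen/images/win2k.img,hda,w' ]
    boot = "c"
    vnc = 1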
2020 Nov 18
2
samba / debian 10 / security=ads
...bility servers
> IFR\exchange trusted subsystem
> IFR\exchange servers
> IFR\compliance management
> IFR\hygiene management
> root at vans-d10-cl:~# wbinfo --domain-groups | tail
> IFR\sgc
> IFR\hdfstest
> IFR\ofseair
> IFR\rhldcm
> IFR\gcelimer
> IFR\gpacl
> IFR\metocean
> IFR\drhdajf
> IFR\grotor
> IFR\workflowums
In /etc/nsswitch.conf I have added winbind as source for passwd and group
> root at vans-d10-cl:~# grep winbind /etc/nsswitch.conf
> passwd: files winbind nis compat
> group: files winbind nis compat
And the host see...
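For context, security = ads with winbind is typically paired with an smb.conf along these lines in addition to the nsswitch.conf entries quoted above; the realm and idmap ranges shown here are placeholders, not values from the thread:

    [global]
       security = ads
       workgroup = IFR
       realm = IFR.EXAMPLE.ORG          # placeholder realm
       winbind use default domain = no
       idmap config * : backend = tdb
       idmap config * : range = 3000-7999
       idmap config IFR : backend = rid
       idmap config IFR : range = 10000-999999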
2013 Jan 18
0
large memory usage on rebalance
Hello all, I had a gluster volume distributed over 7 machines with a single brick each. After adding two more bricks I started the rebalance command, but the memory usage of the glusterfs process handling it ate up all the available memory on the machine I started the rebalance on. I tried again on another peer with more memory, and there it also ate up 60 GB and showed no sign of getting
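For reference, the rebalance in question is driven with commands like the following; the volume and brick names are placeholders:

    # after adding the new bricks ...
    gluster volume add-brick myvol server8:/export/brick server9:/export/brick
    # ... redistribute existing data onto them
    gluster volume rebalance myvol start
    # watch progress (and the memory use of the rebalance process)
    gluster volume rebalance myvol status
    # abort if the node runs short of memory
    gluster volume rebalance myvol stop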