search for: generic_118855

Displaying 8 results from an estimated 8 matches for "generic_118855".

2007 Apr 09
5
CAD application not working with zfs
Hello, we use several CAD applications and with one of those we have problems using ZFS. OS and hardware is SunOS 5.10 Generic_118855-36 on a Fire X4200; the CAD application is Catia V4. There are several configuration and data files stored on the server and shared via NFS to Solaris and AIX clients. The application crashes on the AIX client unless the server shares those files from a UFS filesystem. Does anybody have information...
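For reference, the export side of a setup like this is normally driven by the ZFS sharenfs property rather than /etc/dfs/dfstab; a minimal sketch, where the pool/filesystem name and client hosts are placeholders and not taken from the post:

    # share a ZFS filesystem over NFS (names are examples only)
    zfs set sharenfs=on tank/cadconfig
    # or restrict access, passing standard share_nfs options through the property
    zfs set sharenfs='rw=aixhost:solhost' tank/cadconfig
    zfs get sharenfs tank/cadconfig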
2006 May 22
3
how to recover from anon dtrace hang at boot
Hi, I have a faulty DTrace script (most probably a bufsize that is too big). My machine hangs at boot (amd64 5.10 Version Generic_118855-10 64-bit). NOTICE: enabling probe 11 (dtrace:::ERROR) WARNING: /etc/svc/volatile: File system full, swap space limit exceeded WARNING: Sorry, no swap space to grow stack for pid 5 (autopush) WARNING: /etc/svc/volatile: File system full, swap space limit exceeded How do I recover from that? I have...
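One common recovery path for a hang caused by anonymous DTrace enablings is to boot without /etc/system and then clear the enablings. The steps below are only a sketch, assuming the hang really does come from the anonymous tracing settings written by dtrace -A:

    # interactive boot: add -a to the kernel boot arguments (GRUB on x86, "boot -a" at the SPARC ok prompt),
    # then answer the system-file prompt so the dtrace entries in /etc/system are skipped:
    Name of system file [etc/system]: /dev/null

    # once the system is up, remove the anonymous enablings and clean /etc/system
    dtrace -A        # with no probe descriptions this clears the anonymous enablings
    # then take the dtrace forceload/bufsize lines back out of /etc/system before rebooting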
2007 May 24
1
Samba 3.0.25 crash
Hello, I installed the new version 3.0.25 on SunOS name.rz.RWTH-Aachen.DE 5.10 Generic_118855-33 i86pc i386 i86pc, but it crashes if I try to write something to a share. In the Samba logs I can find the lines INTERNAL ERROR: Signal 11 in pid 13573 (3.0.25) Please read the Trouble-Shooting section of the Samba3-HOWTO and in the syslog I see genunix: [ID 603404 kern.notice] NOTICE: core_log...
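To turn a Signal 11 like this into a usable stack trace, one option (a sketch, assuming root access and the stock Solaris tools; the core path and pattern are arbitrary) is to enable global core dumps and inspect the resulting core with pstack:

    mkdir -p /var/cores
    coreadm -e global -g /var/cores/core.%f.%p    # %f = executable name, %p = process id
    # after the next smbd crash:
    pstack /var/cores/core.smbd.<pid>             # <pid> is whatever pid the Samba log reports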
2007 May 24
9
No zfs_nocacheflush in Solaris 10?
Hi, I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit and in /etc/system I put: set zfs:zfs_nocacheflush = 1 And after rebooting, I get the message: sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module So is this variable not available in the Solaris kernel? I...
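A quick way to check whether the running kernel defines the tunable at all (a sketch, assuming root access and the standard mdb that ships with Solaris 10):

    # print the variable if the kernel defines it; an "unknown symbol" error means this kernel does not
    echo zfs_nocacheflush/D | mdb -k
    # if it does exist, it can also be toggled live (mdb -kw writes into the running kernel; use with care)
    echo zfs_nocacheflush/W0t1 | mdb -kw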
2006 Oct 05
13
Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and 1 OS drive; in doing so, I needed to pull out the 16 drives so that it would be light enough for me to lift. When I plugged the drives back in, it initially went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache. When I try to import the pool using the zpool
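After /etc/zfs/zpool.cache has been deleted, the pool has to be re-imported by scanning the attached devices; a minimal sequence, with the pool name as a placeholder:

    zpool import                       # scan attached devices and list importable pools
    zpool import -f tank               # import by name, forcing if the pool was not exported cleanly
    zpool import -f -d /dev/dsk tank   # point at an explicit device directory if the scan misses disks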
2007 Apr 02
0
zpool scrub checksum error
Hello, I've already read many posts about checksum errors on zpools, but I'd like to have some more information, please. We use 2 Sun servers (amd x64, SunOS 5.10 Generic_118855-36, hopefully all patches) with two hardware RAIDs (RAID 10) connected through fibre channel. Disk space is about 3 TB split into 4 pools, each including several (about 10-15) ZFS filesystems. After 22 days of uptime I got a first checksum error entry after zpool scrub (running once a week on every pool). As I go...
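For checksum errors like these, the usual inspection loop (pool name is a placeholder) looks like:

    zpool status -v tank    # -v lists the files or devices affected by checksum errors
    zpool scrub tank        # walk and verify every block again
    zpool clear tank        # reset the error counters once the cause has been dealt with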
2007 Jan 11
4
Help understanding some benchmark results
G'day, all, So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
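To make a comparison like this a bit more controlled, one sketch (device path, pool name and test file are placeholders) is to pin the block size on both reads and watch the pool at the same time:

    # raw device read
    dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1024k count=1000
    # file on ZFS
    dd if=/tank/testfile of=/dev/null bs=1024k count=1000
    # in another terminal, watch per-device throughput
    zpool iostat -v 1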