Displaying results from an estimated 78 matches for "nexentastore".
2010 Aug 04
1
Blktap-control under 2.6.32.16-1.2.108.xendom0.fc13.x86_64
1. Attempt to load Nexenta under 2.6.32.16-1.2.108.xendom0.fc13.x86_64
Xen 4.0.1-rc6-pre & 2.6.32.16-1.2.108.xendom0.fc13.x86_64 on top F13
[root@fedora13 NexentaStor-Community-3.0.2]# uname -a
Linux fedora13 2.6.32.16-1.2.108.xendom0.fc13.x86_64 #1 SMP Fri Jul 23 17:09:30 MSD 2010 x86_64 x86_64 x86_64 GNU/Linux
[root@fedora13 NexentaStor-Community-3.0.2]# xm create -c
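For readers trying to reproduce this, a minimal HVM domU config for booting the NexentaStor installer under xm might look like the sketch below; the file names, sizes and config path are illustrative, not taken from the original report. Disks declared as tap:aio: depend on the blktap driver named in the subject, while file: falls back to loopback devices.
    # /etc/xen/nexentastor.cfg (hypothetical example)
    kernel  = "/usr/lib/xen/boot/hvmloader"
    builder = "hvm"
    name    = "NexentaStor-Community-3.0.2"
    memory  = 2048
    vcpus   = 2
    disk    = [ 'file:/var/lib/xen/images/nexenta.img,hda,w',
                'file:/isos/NexentaStor-Community-3.0.2.iso,hdc:cdrom,r' ]
    boot    = "dc"          # try CD-ROM first, then disk
    vnc     = 1
    serial  = "pty"

    # start the guest and attach to its console in one step
    xm create -c /etc/xen/nexentastor.cfg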
2010 Jun 21
7
Third release candidate for Xen 4.0.1
Folks,
The tag 4.0.1-rc3 has been added to
http://xenbits.xensource.com/xen-4.0-testing.hg
Please test!
-- Keir
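For anyone who wants to test the candidate, a rough sketch of fetching and building it (build prerequisites and install paths vary by distribution):
    hg clone http://xenbits.xensource.com/xen-4.0-testing.hg
    cd xen-4.0-testing.hg
    hg update 4.0.1-rc3                    # check out the tagged release candidate
    make xen tools stubdom                 # build hypervisor, toolstack and stub domains
    make install-xen install-tools install-stubdom
    # then update the bootloader entry and reboot into the new hypervisor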
2010 Jan 13
0
NexentaStor 2.2.1 Developer Edition Released
Hi All,
I'd like to announce the immediate availability of NexentaStor
Developer Edition v2.2.1.
Changes since v2.2 include many bug fixes. More information:
* This is a major stable release.
* Storage limit increased to 4TB.
* Built-in antivirus capability.
* Consistent snapshots of Oracle and MySQL databases.
* A Citrix StorageLink adapter
* Asynchronous reverse replication support
*
2010 Jul 02
14
NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention some backported patches in this release.
Aside from their management features / UI, what is the core OS difference if we move to Nexenta from OpenSolaris b134?
These dedup bugs are my main frustration - if a staff member does an rm * in a directory with dedup enabled, you can take down the whole storage server - all with
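For anyone hitting the same thing, a hedged sketch of how to see whether dedup is in play and stop it for new data; the pool and dataset names below are hypothetical, and blocks already written stay deduplicated until they are rewritten:
    zfs get dedup tank/data          # is dedup enabled on this dataset?
    zpool get dedupratio tank        # how much the pool is actually deduplicating
    zfs set dedup=off tank/data      # new writes bypass the dedup table (DDT)
The mass-delete problem comes from every freed block requiring a DDT lookup and update; if the DDT does not fit in RAM (or an L2ARC device), those lookups hit disk and the whole pool crawls.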
2010 Jul 16
12
Recommended RAM for ZFS on various platforms
I'm currently planning on running FreeBSD with ZFS, but I wanted to double-check
how much memory I'd need for it to be stable. The ZFS wiki currently says you
can go as low as 1 GB, but recommends 2 GB; however, elsewhere I've seen someone
claim that you need at least 4 GB. Does anyone here know how much RAM FreeBSD
would need in this case?
Likewise, how much RAM
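As one data point, FreeBSD of that era was usually stabilized on small machines by capping the ARC in /boot/loader.conf rather than by adding RAM; the values below are only placeholders, not a recommendation from this thread:
    # /boot/loader.conf
    vfs.zfs.arc_max="512M"     # cap the ZFS ARC so the rest of the system keeps some RAM
    vm.kmem_size="1024M"       # older releases also needed a larger kernel memory map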
2010 Aug 25
6
(preview) Whitepaper - ZFS Pools Explained - feedback welcome
Hello list,
while following this list for more than a year, I feel that this list was a great way to get insights into ZFS. Thank you all for contributing.
Over the last months I was writing a little "whitepaper" trying to consolidate the knowledge collected here. It has now reached a "beta" state and I would like to share the result with you. I call it
-
2011 Jan 28
3
OT: Recommendations for a virtual storage server
Hi all,
I need to install a virtual machine acting as a virtual storage server under
CentOS 5.x (using kvm, xen, virtualbox or vmware). This virtual storage machine
needs to serve storage to another ESXi server and at the same time to the host
where it is installed.
This is due to the limitations of the hardware I have available. Both hosts need to
serve several machines.
It is very
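One common way to serve storage from such a VM to both the ESXi host and the hypervisor it runs on is a plain NFS export; a minimal sketch for a CentOS 5 guest, with hypothetical paths and addresses:
    # /etc/exports on the storage VM
    /export/vmstore  192.168.1.0/24(rw,no_root_squash,sync)

    # activate the export
    service portmap start
    service nfs start
    exportfs -ra
ESXi mounts NFS datastores as root, so no_root_squash (or an equivalent mapping) is generally required.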
2011 Nov 07
1
Nexenta: "load_mib2nut: using pw mib" package / config / or firewall problem?
G'day
I'm currently using NUT on a Debian squeeze box in the same subnet with an IBM
3000 HV (branded Eaton 5125 with Web/SNMP card); now I wanted to get our
storage appliance running NexentaStor 3.1 set up to use NUT too. NexentaStor
is an OpenSolaris / illumos-based appliance OS currently using NUT packages
from Ubuntu hardy.
My question here is whether I'm running into a
-
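In case it helps others with the same card, the snmp-ups driver is configured roughly as below; the IP address and community string are placeholders. The "load_mib2nut: using pw mib" line only says the driver settled on the PowerWare MIB, so the remaining suspects are the package build and a firewall blocking SNMP (UDP port 161) between the appliance and the card:
    # ups.conf (path depends on how NUT was packaged for the appliance)
    [eaton5125]
        driver = snmp-ups
        port = 192.168.0.50        # IP address of the Web/SNMP card
        community = public         # SNMP v1 read community configured on the card
        mibs = pw                  # PowerWare MIB, matching the log message
        desc = "IBM 3000 HV / Eaton 5125"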
2011 Dec 02
14
LSI 3GB HBA SAS Errors (and other misc)
During the diagnostics of my SAN failure last week we thought we had seen a backplane failure due to high error counts with 'lsiutil'. However, even with a new backplane and ruling out failed cards (MPXIO or singular) or bad cables I'm still seeing my error count with LSIUTIL increment. I've got no disks attached to the array right now so I've also
2010 Apr 28
6
Compellent announces zNAS
Today, Compellent announced their zNAS addition to their unified storage
line. zNAS uses ZFS behind the scenes.
http://www.compellent.com/Community/Blog/Posts/2010/4/Compellent-zNAS.aspx
Congrats Compellent!
-- richard
ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010
2010 Jul 23
2
ZFS volume turned into a socket - any way to restore data?
I have recently upgraded from NexentaStor 2 to NexentaStor 3 and somehow one of my volumes got corrupted. It's showing up as a socket. Has anyone seen this before? Is there a way to get my data back? It seems like it's still there, but not recognized as a folder. I ran zpool scrub, but it came back clean.
Attached is the output of # zdb data/rt
2.0K sr-xr-xr-x 17 root root 17 Jul
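For digging further, zdb can dump the dataset's objects so you can see what the damaged directory entry actually points at; the dataset name is from the post, the object number is hypothetical:
    zdb -dddd data/rt          # list the dataset's objects with their types
    zdb -dddd data/rt 4        # dump a single object (e.g. the entry now shown as a socket) in detail
If the underlying object is intact and only the directory entry's type bits are wrong, the data may still be recoverable from a snapshot.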
2010 Sep 24
3
Kernel panic on ZFS import - how do I recover?
I posted this on the www.nexentastor.org forums, but no answer so far, so I apologize if you are seeing this twice. I am also engaged with nexenta support, but was hoping to get some additional insights here.
I am running nexenta 3.0.3 community edition, based on build 134. The box crashed yesterday and now goes into a reboot loop (kernel panic) when trying to import my data pool; screenshot attached.
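The usual first step for a panic-on-import loop is to stop the automatic import so the box boots at all, then retry the import by hand; the tunables below were commonly suggested for this class of problem at the time and should be treated as a last resort, not a routine setting:
    # from a rescue/single-user environment: disable the auto-import that panics at boot
    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad

    # /etc/system additions sometimes used to get past otherwise-fatal ZFS assertions
    set zfs:zfs_recover=1
    set aok=1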
2012 Jul 02
14
HP Proliant DL360 G7
Hello,
Has anyone out there been able to qualify the Proliant DL360 G7 for your Solaris/OI/Nexenta environments? Any pros/cons/gotchas (vs. previous generation HP servers) would be greatly appreciated.
Thanks in advance!
-Anh
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all
Sorry if it's kind of off-topic for the list but after talking
to lots of vendors I'm running out of ideas...
We are looking for JBOD systems which
(1) hold 20+ 3.5" SATA drives
(2) are rack mountable
(3) have all the nice hot-swap stuff
(4) allow 2 hosts to connect via SAS (4+ lanes per host) and see
all available drives as disks, no RAID volume.
In a
2010 May 04
1
xen-4.0.0: Could not connect to DomU console
Hi.
I use xen-4.0.0.
My domU started successfully, but when I tried to connect to the DomU console I
got this error:
root@debian-office:/opt/src/NexetaStor# xm console NexentaStor-EVAL-2.2.0
Unexpected error: <type 'exceptions.TypeError'>
Please report to xen-devel@lists.xensource.com
Traceback (most recent call last):
File "/usr/sbin/xm", line 7, in <module>
2010 Mar 29
2
pool won't import
root@cs6:~# zpool import
  pool: content3
    id: 14184872052409584084
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:
        content3
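Beyond the forced import that the status output suggests, OpenSolaris builds of that vintage also offer a transaction-group rewind; a hedged sketch, using the pool name from the output:
    zpool import -f content3      # the '-f' the message refers to: ignore the "active on another system" check
    zpool import -nF content3     # dry run: report whether discarding the last few txgs would make it importable
    zpool import -F content3      # actually attempt the rewind recovery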
2010 Feb 15
3
zfs questions wrt unused blocks
Gents,
We want to understand the mechanism of zfs a bit better.
Q: what is the design/algorithm of zfs in terms of reclaiming unused
blocks?
Q: what criteria are there for zfs to start reclaiming blocks?
Issue at hand is an LDOM or zone running in a virtual
(thin-provisioned) disk on an NFS server and a zpool inside that vdisk.
This vdisk tends to grow in size even if the user writes and deletes
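The short version of the mechanism: when files are deleted, ZFS only marks their blocks free in its own space maps; it neither zeroes them nor tells the backing store they are unused, so a thin-provisioned vdisk (or its backing file on the NFS server) can only grow. A quick way to see the gap, with hypothetical names:
    # inside the LDOM/zone: what ZFS considers allocated
    zpool list vdiskpool

    # on the NFS server holding the vdisk: how much the backing file really occupies
    du -h /export/ldoms/vdisk0.img
    ls -lh /export/ldoms/vdisk0.img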
2010 Mar 06
3
Monitoring my disk activity
Recently, I'm benchmarking all kinds of stuff on my systems. And one
question I can't intelligently answer is what blocksize I should use in
these tests.
I assume there is something which monitors present disk activity, that I
could run on my production servers, to give me some statistics of the block
sizes that the users are actually performing on the production server.
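On OpenSolaris/Nexenta, DTrace can answer exactly this question by histogramming the size of every physical I/O the disks actually see; iostat gives a coarser per-device view. Both commands below are standard tools, only the interval is arbitrary:
    # distribution of I/O sizes issued to disk (Ctrl-C to stop and print the histogram)
    dtrace -n 'io:::start { @["I/O size (bytes)"] = quantize(args[0]->b_bcount); }'

    # per-device rates and average transfer sizes, every 5 seconds
    iostat -xn 5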