similar to: space_map again nuked!!

Displaying 20 results from an estimated 600 matches similar to: "space_map again nuked!!"

2010 Nov 11
8
zpool import panics
Hi, I just had my Dell R610 reboot with a kernel panic when I threw a couple of zfs clone commands at it in the terminal. Now, after the system has rebooted, zfs will not import my pool any longer; instead the kernel panics again. I have had the same symptom on my other host, for which this one is basically the backup, so this one is my last line of defense. I tried to run zdb -e
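(For anyone hitting the same symptom: a minimal sketch of the read-only inspection steps the excerpt alludes to, assuming a build recent enough to support them; the pool name tank is a placeholder, not from the post.)

  # look at the exported pool's datasets without importing it
  zdb -e -d tank

  # if the release supports it, attempt a read-only import so nothing more is written
  zpool import -o readonly=on -f tank

On builds that have it, zpool import -F (transaction rewind) is another option, at the cost of discarding the last few transaction groups.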
2007 Jul 10
1
ZFS pool fragmentation
I have a huge problem with ZFS pool fragmentation. I started investigating the problem about 2 weeks ago http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0 I found a workaround for now - changing recordsize - but I want a better solution. The best solution would be a defragmentation tool, but I can see that it is not easy. When a ZFS pool is fragmented then: 1. the spa_sync function is
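(For context, the workaround mentioned above - changing recordsize - is a one-line property change; the dataset name and value below are placeholders, and it only affects blocks written after the change.)

  zfs get recordsize tank/data
  zfs set recordsize=32K tank/data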
2016 Mar 31
1
reduced set of alternatives in package mlogit
code? example data? We can only guess based on your vague post. "PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code." Moreover, this sounds like a statistical question, not a question about R programming, and so might be more appropriate for a statistical list like stats.stackexchange.com.
2007 Nov 14
0
space_map.c 'ss == NULL' panic strikes back.
Hi. Someone recently reported an 'ss == NULL' panic in space_map.c/space_map_add() on FreeBSD's version of ZFS. I found that this problem was previously reported on Solaris and is already fixed. I verified it, and FreeBSD's version has this fix in place...
2016 Apr 01
0
reduced set of alternatives in package mlogit
-----Original Message----- From: Bert Gunter [mailto:bgunter.4567 at gmail.com] Sent: Thursday, 31 March 2016 20:22 To: Jose Marcos Ferraro <jose.ferraro at LOGITeng.com> Cc: r-help at r-project.org Subject: Re: [R] reduced set of alternatives in package mlogit code? example data? We can only guess based on your vague post. "PLEASE do read the posting guide
2011 Feb 16
0
ZFS space_map
Hello all, I am trying to understand how the allocation of the space_map happens. What I am trying to figure out is how the recursive part is handled. From what I understand, a new allocation (say, appending to a file) will change the space map by appending more alloc records; those records themselves require extra space on disk and so will change the space map again. I understand that the space map is treated
2016 Mar 31
2
reduced set of alternatives in package mlogit
I'm trying to estimate a multinomial logit model, but in some choices only alternatives from a subset of all possible alternatives can be chosen. At the moment I get around it by creating "dummy" variables meaning the alternative is not available and letting the model estimate this coefficient as highly negative. Is there a better way to do it?
2010 Jul 24
2
Severe ZFS corruption, help needed.
I'm running FreeBSD 8.1 with ZFS v15. Recently, some time after moving my mirrored pool from one device to another, the system crashed. From that time on the zpool cannot be used/imported - any attempt fails with: solaris assert: sm->space + size <= sm->size, file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/space_map.c, line: 93 Debugging reveals that:
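(A hedged aside: FreeBSD of that era had a loader tunable commonly suggested in these space_map assertion threads to get a damaged pool imported long enough to copy data off; treat it as a salvage aid rather than a fix, and verify the exact name on your release.)

  # /boot/loader.conf (assumed FreeBSD 8.x-era tunable name)
  vfs.zfs.recover="1"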
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
2013 Jan 08
4
XCP Debian 7 - Routed mode
Hi everyone, I just got a brand new server from Hetzner.de. The thing is, when you have additional IPs (specifically an IP subnet; the usual story of exposed MAC addresses in their data center, etc.), you have to use routed mode: bridged mode is NOT possible for getting connectivity in the VMs. Traditionally, I'm more confident with a "standard" Xen setup, so it's easy to switch from bridged to
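(For the "standard" Xen setup the poster mentions - classic xend, not XAPI/XCP - switching from bridged to routed mode is usually two directives in xend-config.sxp; a sketch, with the path assumed from a typical Debian install, followed by a restart of xend.)

  # /etc/xen/xend-config.sxp
  # comment out: (network-script network-bridge) and (vif-script vif-bridge)
  (network-script network-route)
  (vif-script vif-route)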
2014 Feb 25
3
assigning a single IP to the guest with "typical" hosting provider
I have a server with a hosting company, Hetzner. The servers at this hosting company have a public IP, let's say, A.B.C.D/255.255.255.x. Additionally, one can order extra IPs like below: 1) additional subnet (let's say X.Y.Z.0 / 28) 2) single IP (let's say, E.F.G.H) With an additional subnet, assigning the IP to a libvirt guest is simple: - assign X.Y.Z.1 on the host - assign X.Y.Z.2
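(For the single-IP case E.F.G.H, the usual approach when the provider routes the extra IP to the host is to let the host act as a router; a sketch under that assumption, with br0/eth0 as placeholder interface names.)

  # on the host: enable forwarding and route the extra IP towards the guest's interface
  sysctl -w net.ipv4.ip_forward=1
  ip route add E.F.G.H/32 dev br0

  # in the guest: use the extra IP with the host address as an on-link gateway
  ip addr add E.F.G.H/32 dev eth0
  ip route add A.B.C.D dev eth0
  ip route add default via A.B.C.D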
2007 Oct 30
2
WebDav Support.
Hi guys, I read that mongrel supports the WebDAV protocol. I need to "create" the WebDAV files and allow a third-party app to access them for read purposes only. What do you recommend? -- Fernando Lujan
2010 Jan 05
4
xen domU ID and static routing
Hi, I use an EQ 4 dedicated root server from Hetzner with Ubuntu Jaunty. I installed Xen 3.3 with a Debian kernel and brought up 3 domUs. Now comes networking... Hetzner does not allow bridged networking, so I have to use routed mode in Xen. No problem so far, but the actual problem is: when I bring up a domU, the routing table entry is created by the Xen vif-route script. The network interface name
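(One way around routes that depend on the auto-generated vifN.M name, which changes with the domU ID, is to pin the vif name and IP in the domU config so vif-route always sees the same values; a sketch, with the file name and values hypothetical.)

  # /etc/xen/mydomu.cfg
  vif = [ 'ip=A.B.C.D, vifname=vif-mydomu, script=vif-route' ]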
2007 Sep 07
35
multi threaded theoretically useful?
So here's a random question: if a (Ruby) multi-threaded rails server could exist (bug free), would it be faster than using a mongrel cluster? (I.e. 10 mongrel processes versus 10 Ruby threads). I'm not sure if it would. RAM it might save, though. Any thoughts? -Roger
2011 Sep 08
10
computer stalls instead of reboot
Hi, I'm still seeing a very strange issue here. First, let's clarify that the issue has never occurred with the good old xen 3.x and the good old 2.6.18 kernel. So the issue is that with xen 4.x (including 4.1.1) and pretty much any kernel (including the kernel from [1] and vanilla 3.0.0; didn't test the 2.6.18), the machine freezes during a reboot. The machine won't
2007 Feb 05
2
recv vs. read in HTTPRequest#read_socket
Hello all, The following change to Mongrel::HttpRequest:
  def read_socket(len)
    if !@socket.closed?
      data = @socket.recv(len)   # <--- formerly @socket.read(len)
      if !data
        raise "Socket read return nil"
      elsif data.length != len
        raise "Socket read returned insufficient data: #{data.length}"
      else
        data
      end
    else
      raise "Socket
2008 Mar 08
2
Grey windows under wine
I seem to be having an issue with font rendering under wine. Whenever I open a window that relies on "standard" Windows layouts (fonts, dropdown boxes, scrollbars, confirmation buttons) I just get a grey window (screenshot). Anyone know how to fix this? Am I missing a library? screenshot: [Image: http://www.johannesklug.de/bildschirmfoto.png ]
2008 Jan 09
9
mongrel, monit, and the many, many messages
Monit 4.9, Mongrel 1.0.1, Rails 1.2.6, Mac OS X 10.4.11 (PPC) I don't know whether this is a mongrel issue or a monit issue. I'm trying to poke my way around a system set up by someone else. I have no more experience with mongrel than local Rails dev at this point, and a conceptual understanding of how monit is working. I have the Deploying Rails beta book, and I'm
2018 Feb 04
1
Troubleshooting glusterfs
Please help troubleshoot glusterfs with the following setup: distributed volume without replication, sharding enabled.
[root at master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
Status: Started
Snapshot Count: 0
Number of Bricks: 27
Transport-type: tcp
Bricks:
Brick1:
2018 Feb 04
1
Fwd: Troubleshooting glusterfs
Please help troubleshoot glusterfs with the following setup: distributed volume without replication, sharding enabled.
# cat /etc/centos-release
CentOS release 6.9 (Final)
# glusterfs --version
glusterfs 3.12.3
[root at master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
Status:
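(A few read-only commands that are usually safe for gathering more detail on a setup like this; gv0 is the volume name from the posts, and the option names assume a 3.12-era CLI.)

  gluster volume status gv0 detail
  gluster volume get gv0 features.shard
  gluster volume get gv0 features.shard-block-size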