search for: hadrwar

Displaying 10 results from an estimated 10 matches for "hadrwar".

Did you mean: hadrware
2018 Apr 15
0
Hadrware Donation GTS450
Hello, if needed I could donate a Gainward GTS450. Regards, Simon
2004 Jun 06
5
Zapata?
Whatever happened to the Zapata Telephony project? The last information there is from 2001, and a lot of new cards have been released since. Has GNU hardware development completely stopped?
2011 Apr 21
1
XCP question about similar hardware
Hi all, I am planning to improve virtualization in my company. I already have a pool, named e.g. Pool1, with 5 Dell PowerEdge 1950 servers in it. Now I also have 4 Dell PowerEdge 2850 servers. Can I add the 2850s to Pool1? I can't find anything about compatibility for different hadrware in the new release. -- Best regards, Den. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
2004 Jun 17
19
HTB is not fair when 'borrowing'? Can someone correct me or maybe Devik's HTB has a bug?
Hello there! Yesterday I started my experiments with HTB. I configured it this way:

    1:      root HTB qdisc
             |
    1:1     HTB class, rate 1000kbit
             |
       /-----+-----\
    1:40    1:50    1:60
    user1   user2   user3

rate 333 & ceil 1000 for everyone. User2 is disconnected, and user1 and user3 are downloading. For all the time (t1-t5) there are ONLY these two users downloading! HTB should give...
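For reference, the hierarchy sketched above could be set up with tc commands along these lines. This is a sketch under my own assumptions, not taken from the thread: eth0 is a placeholder interface, and the classids 1:40/1:50/1:60 follow the poster's diagram. It is traffic-control configuration and needs root and a real interface to apply.

```shell
# Root qdisc; unclassified traffic falls into 1:40 (an assumption).
tc qdisc add dev eth0 root handle 1: htb default 40

# Parent class capping the whole tree at 1000kbit, as in the diagram.
tc class add dev eth0 parent 1: classid 1:1 htb rate 1000kbit

# Three leaf classes: rate 333kbit, ceil 1000kbit for everyone,
# so an idle user's bandwidth can be borrowed by the others.
tc class add dev eth0 parent 1:1 classid 1:40 htb rate 333kbit ceil 1000kbit
tc class add dev eth0 parent 1:1 classid 1:50 htb rate 333kbit ceil 1000kbit
tc class add dev eth0 parent 1:1 classid 1:60 htb rate 333kbit ceil 1000kbit
```

With user2 idle, HTB's borrowing model should split the spare 333kbit between user1 and user3 in proportion to their rates, which is the fairness property the poster is questioning.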
2005 Jun 20
0
Is Xen stable for production use
How stable is Xen? I am using Xen with a Hadrware RAID controller, and under high disk activity the SCSI bus hangs with a timeout and sometimes some partitions go read-only. Is this a known Xen problem, or what could be wrong? How stable is Xen under high I/O or CPU load, or in low-memory situations? Martin
2004 Jul 06
3
Cannot load image
.... I'm new to syslinux; I've been reading all the documentation that I could, but had no luck. My problem is that I have a simple configuration, dhcp.conf (3.01rc12), with these options: group next-server servidor filename "pxelinux.0" hostname terminal-2 hadrware ethernet xxxxxxxx fixed address xxxxx option root-path "clients/ip". The pxelinux.cfg: label mylabel kernel mykernel. Under /tftpboot I have the following files: mykernel -> a custom compiled kernel, and pxelinux.cfg. I restart dhcpd, I turn on the terminal that boot...
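A hedged reconstruction of the quoted host entry in the form dhcpd actually expects (the flattened excerpt above, reformatted; the elided MAC and IP placeholders are kept as in the original post). Note the directive is spelled `hardware ethernet`: the "hadrware ethernet" in the quoted config is itself a syntax error that would make dhcpd reject the file, which alone could explain the failed PXE boot.

```
host terminal-2 {
    hardware ethernet xxxxxxxx;     # MAC elided in the original post
    fixed-address xxxxx;            # address elided in the original post
    next-server servidor;
    filename "pxelinux.0";
    option root-path "clients/ip";
}
```

After correcting the spelling, `dhcpd -t` (config test mode) would report any remaining syntax problems before restarting the daemon.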
2008 Mar 25
5
Assign Physical NIC to domU
Hello everyone! Well, I want to know if there is a way to assign a physical NIC (like eth0 or eth1) to a domU, and how I can do it. The server has two NICs; what I want to do is assign one physical NIC (eth0) to the dom0 and the other physical NIC (eth1) to the domU. I appreciate your help. Regards, Ivan
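The usual approach (an assumption on my part; the thread excerpt does not include an answer) is PCI passthrough via Xen's pciback driver: hide eth1's PCI device from dom0, then list it in the domU config. A sketch, run in dom0, where `0000:02:00.0` and `e1000` are placeholders for eth1's actual PCI address (see `lspci`) and its dom0 driver:

```
modprobe xen-pciback                                   # just "pciback" on older kernels
echo 0000:02:00.0 > /sys/bus/pci/drivers/e1000/unbind  # detach NIC from its dom0 driver
echo 0000:02:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:02:00.0 > /sys/bus/pci/drivers/pciback/bind  # hand the device to pciback
# Then in the domU config file:
#   pci = [ '0000:02:00.0' ]
```

The domU then drives the NIC directly with its own driver, instead of going through a dom0 bridge.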
2016 Apr 26
0
Re: /proc/meminfo
...root@tst-mxs2 ~]# ./a.out Alloc 100 Mb Alloc 200 Mb Alloc 300 Mb Alloc 400 Mb Alloc 500 Mb Alloc 600 Mb Alloc 700 Mb Alloc 800 Mb Alloc 900 Mb Alloc 1000 Mb Killed As you can see, the limit worked and "free" inside the container shows correct values. 3) Check the situation outside the container, from the top hadrware node: [root@node01]# cat /sys/fs/cgroup/memory/machine.slice/machine-lxc\x2d7445\x2dtst\x2dmxs2.test.scope/memory.limit_in_bytes 1073741824 4) Check the list of PIDs in the cgroups (it's the IMPORTANT moment): [root@node01]# cat /sys/fs/cgroup/memory/machine.slice/machine-lxc\x2d7445\x2dtst\x2dmxs2.t...
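The host-side checks quoted above can be restated as a short sketch (the machine-lxc... scope name is truncated in the post, so a placeholder is used). It also confirms that the limit printed there, 1073741824 bytes, is exactly 1 GiB.

```shell
# Placeholder for the truncated machine-lxc...scope path from the post.
SCOPE="/sys/fs/cgroup/memory/machine.slice/MACHINE_SCOPE"

# 1073741824 bytes, as printed above, is exactly 1 GiB:
echo $(( 1024 * 1024 * 1024 ))        # prints 1073741824

# Host-side checks (commented out: they need the real cgroup to exist):
# cat "$SCOPE/memory.limit_in_bytes"  # per-cgroup hard limit in bytes
# cat "$SCOPE/cgroup.procs"           # PIDs charged to this cgroup
```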
2016 Mar 23
7
/proc/meminfo
Has anyone seen this issue? We're running containers under CentOS 7.2, and some of these containers are reporting incorrect memory allocation in /proc/meminfo. The output below comes from a system with 32 GB of memory and 84 GB of swap. The values reported are completely wrong.

    # cat /proc/meminfo
    MemTotal:       9007199254740991 kB
    MemFree:        9007199224543267 kB
    MemAvailable:   12985680
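One hedged observation of my own, not from the thread: the bogus MemTotal is not a random number. 9007199254740991 kB is floor((2^63 - 1) / 1024), i.e. a 64-bit "unlimited" counter divided down to kB, which points at an unset or unreadable container memory limit being passed through rather than at corrupted data.

```shell
# LONG_MAX on 64-bit: 2^63 - 1
max=9223372036854775807
# Rendered in kB, this is exactly the bogus MemTotal from the post:
echo $(( max / 1024 ))   # prints 9007199254740991
```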
2016 Apr 26
2
Re: /proc/meminfo
...0 Mb > Alloc 400 Mb > Alloc 500 Mb > Alloc 600 Mb > Alloc 700 Mb > Alloc 800 Mb > Alloc 900 Mb > Alloc 1000 Mb > Killed > > As You can see, limit worked and "free" inside container show correct values > > 3) Check situation outside container, from top hadrware node: > [root@node01]# cat > /sys/fs/cgroup/memory/machine.slice/machine-lxc\\x2d7445\\x2dtst\\x2dmxs2.test.scope/memory.limit_in_bytes > 1073741824 > 4) Check list of pid in cgroups (it's IMPOTANT moment): > [root@node01]# cat > /sys/fs/cgroup/memory/machine.slice/machine-l...