Displaying 18 results from an estimated 18 matches for "10us".
2002 May 31
6
I will pay you $10US (via Paypal) out of my own pocket if you can solve this CUPS & Samba problem.
...to be available and read from the
linux server. The PPD files do contain one which is specific to my printer
and has worked on another machine.
-----Original Message-----
From: Blake Patton [mailto:pattonb@spots.ca]
Sent: May 31, 2002 11:32 AM
To: WEBSTER, Greg
Subject: RE: [Samba] I will pay you $10US (via Paypal) out of my own pocket
if you can solve this CUPS & Samba problem.
That looks OK; simply click OK and load the printer driver from whichever
OS version you are using, i.e. go get the latest drivers for the printer you
want to use
and the OS you have on the workstation. Simply those pr...
2002 May 31
2
I will pay you $10US (via Paypal) out of my own pocket if you can solve this CUPS & Samba problem.
Seriously. I can't afford to be down much longer or I'm going to be in
serious trouble.
Running Redhat 7.2.
Here's the scoop:
Cups is installed and running.
Samba is installed and running and is sharing files properly to 100+ people.
I can print a test page from the Cups web-interface.
I can print a test page by "cat foo | lpr" or "cat foo | lpr.cups" no
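A minimal sanity check for a setup like the one described above, assuming a CUPS queue exported through Samba (the queue name "laserjet" below is a placeholder):

    lpstat -p -d                                # does CUPS itself see the queue?
    echo test | lpr -P laserjet                 # does a raw job print locally?
    smbclient -L localhost -N | grep -i print   # is the queue exported by Samba?

If the first two work but the Samba share list shows no print queues, the problem is on the smb.conf side rather than in CUPS.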
2013 Sep 29
9
DomU vs Dom0 performance.
Hi,
I have been doing some disk I/O benchmarking of dom0 and domU (HVM). I ran
into an issue where domU performed better than dom0, so I ran a few
experiments to check whether it is just disk I/O performance.
I have Arch Linux (kernel 3.5.0) + Xen 4.2.2 installed on an Intel Core
i7 Q720 machine. I have also installed Arch Linux (kernel 3.5.0) in a domU
running on this machine. The domU runs with 8 vcpus.
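One rough way to make the dom0 and domU runs comparable is to force direct I/O so the page caches on both sides are taken out of the picture; a sketch (file path and size are placeholders):

    # write test, bypassing the page cache
    dd if=/dev/zero of=/root/ddtest bs=1M count=1024 oflag=direct conv=fsync
    # read test, dropping caches first
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/root/ddtest of=/dev/null bs=1M iflag=direct

Without oflag=direct/iflag=direct, a domU can easily look "faster" simply because its writes are still sitting in a cache somewhere in the stack.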
2018 Sep 17
2
[Bug 107959] New: System hangs up when loading nouveau for NVIDIA MX150 card
...Max snoop latency: 71680ns
Max no snoop latency: 71680ns
Capabilities: [258 v1] L1 PM Substates
L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+
L1_PM_Substates+
PortCommonModeRestoreTime=255us PortTPowerOnTime=10us
L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
T_CommonMode=0us LTR1.2_Threshold=0ns
L1SubCtl2: T_PwrOn=10us
Capabilities: [128 v1] Power Budgeting <?>
Capabilities: [420 v2] Advanced Error Reporting...
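The capability dump above can be re-read at any time with lspci, and if the ASPM L1 substates are suspected in a hang like this, one common experiment is to boot with ASPM disabled; the bus address below is a placeholder, and the kernel option is only a test, not a fix:

    lspci -vvv -s 01:00.0 | grep -A4 'L1 PM Substates'   # run as root for full decode
    # as an experiment, add to the kernel command line:
    #   pcie_aspm=off

This does not identify the root cause; it only indicates whether the PCIe power-management substates are involved.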
2018 Jun 29
5
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...not impact the performance even when we failed to mask the
notification. Anyway for consistency I fixed rx routine as well as tx.
Performance numbers:
- Bulk transfer from guest to external physical server.
[Guest]->vhost_net->tap--(XDP_REDIRECT)-->i40e --(wire)--> [Server]
- Set 10us busypoll.
- Guest disables checksum and TSO because of host XDP.
- Measured single flow Mbps by netperf, and kicks by perf kvm stat
(EPT_MISCONFIG event).
Before After
Mbps kicks/s Mbps kicks/s
UDP_STREAM 1472byte...
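The measurement setup described in the snippet above can be reproduced roughly as follows (server address and duration are placeholders); netperf generates the single UDP flow and perf kvm stat counts the exits that correspond to vring kicks:

    # in the guest: single UDP flow with 1472-byte payloads
    netperf -H 192.0.2.1 -t UDP_STREAM -l 30 -- -m 1472
    # on the host: record and report VM exits (EPT_MISCONFIG per kick)
    perf kvm stat record -a sleep 30
    perf kvm stat report

kicks/s is then the EPT_MISCONFIG count divided by the measurement interval.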
2003 Jun 14
1
permission to use drivers
Hi there,
If I may keep this short, I would like your permission
to use ISOLINUX (the ISO 9660/El Torito CD-ROM bootloader)
and MEMDISK to make a bootable FreeDOS CD-ROM.
I plan to distribute the CDs for troubleshooting
purposes, and to cover my costs I plan to sell them for
$10US each.
I noticed that most of the troubleshooting CDs are
breaking copyright licensing, so I wanted to keep it
all legal.
Regards,
Reece George
2009 Jun 22
0
Speex for TI MSP430 microcontroller - estimating CPU speed requirements?
2009 Jun 20
2
Speex for TI MSP430 microcontroller - estimating CPU speed requirements?
Interested in building a Speex codec (basically audio <-> Speex <-> data
stream) using TI's small MSP430 microcontroller. Is there any way to
estimate feasibility based on CPU requirements? Example: Speex is happily
encoding on an old Pentium-1 processor (166 MHz) using about half the CPU
(as reported under Linux); the TI microcontrollers are much slower still
(8-16-25 MHz) and
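A back-of-the-envelope scaling of the numbers already given, purely as an order-of-magnitude check (it ignores the difference between a 32-bit Pentium and a 16-bit MSP430, so the real requirement is higher):

    # ~50% of a 166 MHz Pentium-1 for real-time encode
    echo $((166 / 2))    # => ~83 "MHz" of Pentium-class work
    # the fastest part mentioned runs at 25 MHz
    echo $((83 / 25))    # => at least ~3x short, by this crude metric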
2018 Jul 02
2
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...-(wire)--> [Server]
>
> Just to confirm: in this case, since zerocopy is enabled, do we in fact
> use the generic XDP datapath?
For some reason zerocopy was not applied to most packets, so in most
cases driver XDP was used. I was going to dig into it but haven't yet.
>
>> - Set 10us busypoll.
>> - Guest disables checksum and TSO because of host XDP.
>> - Measured single flow Mbps by netperf, and kicks by perf kvm stat
>> (EPT_MISCONFIG event).
>>
>>              Before          After
>>              Mbps...
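One quick host-side check when zerocopy does not seem to be taking effect is the vhost_net module parameter that gates it (the path assumes the module is loaded):

    cat /sys/module/vhost_net/parameters/experimental_zcopytx

A value of 1 only means zero-copy TX is allowed; it does not guarantee that individual packets actually go zero-copy, which matches the behaviour described above.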
2018 Jun 29
0
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...en we failed to mask the
> notification. Anyway for consistency I fixed rx routine as well as tx.
>
> Performance numbers:
>
> - Bulk transfer from guest to external physical server.
> [Guest]->vhost_net->tap--(XDP_REDIRECT)-->i40e --(wire)--> [Server]
> - Set 10us busypoll.
> - Guest disables checksum and TSO because of host XDP.
> - Measured single flow Mbps by netperf, and kicks by perf kvm stat
> (EPT_MISCONFIG event).
>
> Before After
> Mbps kicks/s Mbps kicks/...
2018 Jul 02
0
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...ht, just to confirm this. This is expected.
In tuntap, we do native XDP only for small and non-zerocopy packets. See
tun_can_build_skb(). The reason is that XDP may adjust the packet header,
which is not supported by zerocopy. We can only use generic XDP for
zerocopy in this case.
>
>>> - Set 10us busypoll.
>>> - Guest disables checksum and TSO because of host XDP.
>>> - Measured single flow Mbps by netperf, and kicks by perf kvm stat
>>> (EPT_MISCONFIG event).
>>>
>>>              Before          After
>>> ...
2013 May 14
59
HVM Migration of domU on Qemu-upstream DM causes stuck system clock with ACPI
This is the first of three problems we are having with live migration and/or ACPI on Xen 4.3 and Xen 4.2.
Any help would be appreciated.
Detailed description of problem:
We are using Xen 4.3-rc1 with dom0 running Ubuntu Precise and a 3.5.0-23-generic kernel, and domU running Ubuntu Precise (12.04) cloud images on 3.2.0-39-virtual. We are using the xl.conf below with qemu-upstream-dm and HVM and
2018 Jun 29
0
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...
> Performance numbers:
>
> - Bulk transfer from guest to external physical server.
> [Guest]->vhost_net->tap--(XDP_REDIRECT)-->i40e --(wire)--> [Server]
Just to confirm: in this case, since zerocopy is enabled, do we in fact
use the generic XDP datapath?
> - Set 10us busypoll.
> - Guest disables checksum and TSO because of host XDP.
> - Measured single flow Mbps by netperf, and kicks by perf kvm stat
> (EPT_MISCONFIG event).
>
> Before After
> Mbps kicks/s Mbps kick...
2018 Jul 03
11
[PATCH v2 net-next 0/4] vhost_net: Avoid vq kicks during busyloop
...Tx performance is greatly improved by this change. I don't see a notable
performance change on rx with this series, though.
Performance numbers (tx):
- Bulk transfer from guest to external physical server.
[Guest]->vhost_net->tap--(XDP_REDIRECT)-->i40e --(wire)--> [Server]
- Set 10us busypoll.
- Guest disables checksum and TSO because of host XDP.
- Measured single flow Mbps by netperf, and kicks by perf kvm stat
(EPT_MISCONFIG event).
Before After
Mbps kicks/s Mbps kicks/s
UDP_STREAM 1472byte...
2023 Aug 31
3
[PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize dma_resv/extobj handling and GEM validation
Hi,
On 8/31/23 13:18, Danilo Krummrich wrote:
> On Thu, Aug 31, 2023 at 11:04:06AM +0200, Thomas Hellström (Intel) wrote:
>> Hi!
>>
>> On 8/30/23 17:00, Danilo Krummrich wrote:
>>> On Wed, Aug 30, 2023 at 03:42:08PM +0200, Thomas Hellstr?m (Intel) wrote:
>>>> On 8/30/23 14:49, Danilo Krummrich wrote:
>>>>> Hi Thomas,
>>>>>
2004 Nov 17
9
serious networking (em) performance (ggate and NFS) problem
Dear best guys,
I really love 5.3 in many ways, but here are some unbelievable transfer rates,
after I went out and bought a pair of Intel Gigabit Ethernet cards to solve
my performance problem (*laugh*):
(In short, see *** below)
Tests were done with two Intel Gigabit Ethernet cards (82547EI, 32-bit PCI
Desktop adapter MT) connected directly without a switch/hub and "device