Displaying 20 results from an estimated 81 matches for "tbyte".
2011 Jul 30
7
NexentaCore 3.1 - ZFS V. 28
apt-get update
apt-clone upgrade
Any first impressions?
--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
2010 Nov 09
2
time for "balance"
Hello, linux-btrfs,
I'm working with btrfs for a few days now.
btrfs-progs-20101101, kernel 2.6.35.8 (both self compiled).
First step:
mkfs.btrfs /dev/sdd1
mount /dev/sdd1 /srv/MM
for a 2 TByte partition, worked well.
Copying about 1.5 TByte of data to this partition worked well.
Second step:
btrfs device add /dev/sdc1 /srv/MM
btrfs filesystem balance /srv/MM
adds /dev/sdc1 with about 1.5 TByte (according to "df"), and the system
works on the second command ("balan...
2019 Nov 22
2
[PATCH net-next v2] drivers: net: virtio_net: Implement a dev_watchdog handler
...ting the virtio-net driver. */
+ struct work_struct reset_work;
+
/* Does the affinity hint is set for virtqueues? */
bool affinity_hint_set;
@@ -1721,7 +1726,7 @@ static void virtnet_stats(struct net_device *dev,
int i;
for (i = 0; i < vi->max_queue_pairs; i++) {
- u64 tpackets, tbytes, rpackets, rbytes, rdrops;
+ u64 tpackets, tbytes, terrors, rpackets, rbytes, rdrops;
struct receive_queue *rq = &vi->rq[i];
struct send_queue *sq = &vi->sq[i];
@@ -1729,6 +1734,7 @@ static void virtnet_stats(struct net_device *dev,
start = u64_stats_fetch_begin_irq(&...
2019 Oct 07
0
[PATCH RFC net-next 1/2] drivers: net: virtio_net: Add tx_timeout stats field
...>
> > static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
> > @@ -1721,7 +1723,7 @@ static void virtnet_stats(struct net_device *dev,
> > int i;
> >
> > for (i = 0; i < vi->max_queue_pairs; i++) {
> > - u64 tpackets, tbytes, rpackets, rbytes, rdrops;
> > + u64 tpackets, tbytes, terrors, rpackets, rbytes, rdrops;
> > struct receive_queue *rq = &vi->rq[i];
> > struct send_queue *sq = &vi->sq[i];
> >
> > @@ -1729,6 +1731,7 @@ static void...
2017 Jun 26
1
mirror block devices
Hi folks,
I have to migrate a set of iscsi backstores to a new target via network.
To reduce downtime I would like to mirror the active volumes first, then
stop the initiators, and then do a final incremental sync.
The backstores have a size between 256 GByte and 1 TByte each. In total
it's about 8 TByte.
Of course I have found the --copy-devices patch, but I wonder whether it
works as expected. Is it worth the effort?
Every helpful comment is highly appreciated.
Harri
2011 Jun 15
3
[PATCH] virtio-net: per cpu 64 bit stats
...tats64 *virtnet_stats(struct net_device *dev,
+ struct rtnl_link_stats64 *tot)
+{
+ struct virtnet_info *vi = netdev_priv(dev);
+ int cpu;
+ unsigned int start;
+
+ for_each_possible_cpu(cpu) {
+ struct virtnet_stats __percpu *stats
+ = per_cpu_ptr(vi->stats, cpu);
+ u64 tpackets, tbytes, rpackets, rbytes;
+
+ do {
+ start = u64_stats_fetch_begin(&stats->syncp);
+ tpackets = stats->tx_packets;
+ tbytes = stats->tx_bytes;
+ rpackets = stats->rx_packets;
+ rbytes = stats->rx_bytes;
+ } while (u64_stats_fetch_retry(&stats->syncp, start));
+
+...
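A self-contained, kernel-style sketch of the pattern this patch introduces: writers bump per-CPU counters between u64_stats_update_begin() and u64_stats_update_end(), while this reader retries until it sees a consistent snapshot and then sums across all possible CPUs, so 32-bit architectures never observe a torn 64-bit value. Structure and function names (demo_stats, demo_sum) are illustrative, not the driver's own.

/*
 * Illustrative sketch only; builds against kernel headers, not userspace.
 */
#include <linux/percpu.h>
#include <linux/u64_stats_sync.h>

struct demo_stats {
        u64 tx_packets;
        u64 tx_bytes;
        struct u64_stats_sync syncp;
};

static void demo_sum(struct demo_stats __percpu *pcpu_stats,
                     u64 *packets, u64 *bytes)
{
        int cpu;

        *packets = 0;
        *bytes = 0;

        for_each_possible_cpu(cpu) {
                struct demo_stats *stats = per_cpu_ptr(pcpu_stats, cpu);
                unsigned int start;
                u64 tpackets, tbytes;

                do {
                        /* Retry if a writer updated the counters meanwhile. */
                        start = u64_stats_fetch_begin(&stats->syncp);
                        tpackets = stats->tx_packets;
                        tbytes   = stats->tx_bytes;
                } while (u64_stats_fetch_retry(&stats->syncp, start));

                *packets += tpackets;
                *bytes   += tbytes;
        }
}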
2019 Oct 06
7
[PATCH RFC net-next 0/2] drivers: net: virtio_net: Implement
From: Julio Faracco <jcfaracco at gmail.com>
The virtio_net driver does not handle the TX error events provided by
dev_watchdog. This event is raised when the transmission queue is having
problems transmitting packets. To enable it, the driver should implement
.ndo_tx_timeout. This series has two commits:
In the past, we implemented a function to recover the driver state when this
kind of event
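For orientation, a rough sketch of the approach the series describes: a .ndo_tx_timeout callback that defers the actual recovery to a work item. All names below (demo_priv, demo_tx_timeout, demo_reset_work) are made up for illustration, and note that .ndo_tx_timeout gained an extra txqueue argument in later kernels.

#include <linux/netdevice.h>
#include <linux/workqueue.h>

/* Illustrative private data; a real driver keeps much more state here.
 * INIT_WORK(&priv->reset_work, demo_reset_work) would be done at probe. */
struct demo_priv {
        struct net_device *dev;
        struct work_struct reset_work;  /* deferred reset, process context */
};

/* Worker that actually recovers the device after a TX stall. */
static void demo_reset_work(struct work_struct *work)
{
        struct demo_priv *priv = container_of(work, struct demo_priv,
                                              reset_work);

        netif_tx_disable(priv->dev);
        /* ... device-specific teardown and reinitialisation ... */
        netif_tx_wake_all_queues(priv->dev);
}

/* Called by dev_watchdog when a TX queue stops making progress. */
static void demo_tx_timeout(struct net_device *dev)
{
        struct demo_priv *priv = netdev_priv(dev);

        schedule_work(&priv->reset_work);
}

static const struct net_device_ops demo_netdev_ops = {
        .ndo_tx_timeout = demo_tx_timeout,
        /* other callbacks omitted */
};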
2010 Jun 28
1
ACE does not work for me at all.
...event handlers
void __stdcall WFAudioPlayer::WaveOutProc(HWAVEOUT hWaveOut,UINT uMsg,DWORD
dwInstance,DWORD dwParam1,DWORD dwParam2)
{
if(uMsg != WOM_DONE)
return;
WFAudioPlayer* player = reinterpret_cast<WFAudioPlayer*>(dwInstance);
if (player)
{
// get the frame to be played back next.
tByte* nextframe = player->GetNextBlockFrame();
/**
* Whether the following two lines are commented out or not, the VoIP
performance is the same, with echo.
*/
// make speex AEC buffer it for echo cancellation for recorder
if (nextframe)
player->GetConverter()->SpeexEchoPlayback(nextframe);
}...
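For context, the SpeexDSP echo-cancellation API the code above is calling: speex_echo_playback() queues each frame sent to the speaker (far end), and speex_echo_capture() then subtracts the estimated echo from each recorded frame (near end). A minimal sketch follows; the frame and filter lengths are typical 8 kHz values chosen for illustration, not taken from the post.

#include <speex/speex_echo.h>

#define FRAME_SIZE 160                  /* 20 ms at 8 kHz, illustrative */
#define FILTER_LEN (FRAME_SIZE * 10)    /* ~200 ms echo tail, illustrative */

static SpeexEchoState *echo_state;

void aec_init(void)
{
        echo_state = speex_echo_state_init(FRAME_SIZE, FILTER_LEN);
}

/* Feed every frame written to the speaker to the canceller. */
void aec_on_playback(const spx_int16_t *frame)
{
        speex_echo_playback(echo_state, frame);
}

/* Run every recorded frame through the canceller to remove the echo. */
void aec_on_capture(const spx_int16_t *rec, spx_int16_t *clean)
{
        speex_echo_capture(echo_state, rec, clean);
}

void aec_destroy(void)
{
        speex_echo_state_destroy(echo_state);
}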
2012 Jun 06
9
[PATCH] virtio-net: fix a race on 32bit arches
...s->tx_packets++;
- u64_stats_update_end(&stats->syncp);
+ u64_stats_update_end(&stats->tx_syncp);
tot_sgs += skb_vnet_hdr(skb)->num_sg;
dev_kfree_skb_any(skb);
@@ -703,12 +704,16 @@ static struct rtnl_link_stats64 *virtnet_stats(struct net_device *dev,
u64 tpackets, tbytes, rpackets, rbytes;
do {
- start = u64_stats_fetch_begin(&stats->syncp);
+ start = u64_stats_fetch_begin(&stats->tx_syncp);
tpackets = stats->tx_packets;
tbytes = stats->tx_bytes;
+ } while (u64_stats_fetch_retry(&stats->tx_syncp, start));
+
+ do {
+...
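A kernel-style sketch of the reader side after this fix: the TX and RX counters each get their own u64_stats_sync because they are updated from different paths, and sharing a single sequence counter could give 32-bit readers inconsistent values. The names (demo_pcpu_stats, demo_read) are illustrative, not the driver's own.

#include <linux/u64_stats_sync.h>

struct demo_pcpu_stats {
        u64 tx_packets, tx_bytes;
        u64 rx_packets, rx_bytes;
        struct u64_stats_sync tx_syncp;
        struct u64_stats_sync rx_syncp;
};

static void demo_read(struct demo_pcpu_stats *stats,
                      u64 *tpackets, u64 *tbytes,
                      u64 *rpackets, u64 *rbytes)
{
        unsigned int start;

        do {    /* TX counters, guarded by tx_syncp only */
                start = u64_stats_fetch_begin(&stats->tx_syncp);
                *tpackets = stats->tx_packets;
                *tbytes   = stats->tx_bytes;
        } while (u64_stats_fetch_retry(&stats->tx_syncp, start));

        do {    /* RX counters, guarded by rx_syncp only */
                start = u64_stats_fetch_begin(&stats->rx_syncp);
                *rpackets = stats->rx_packets;
                *rbytes   = stats->rx_bytes;
        } while (u64_stats_fetch_retry(&stats->rx_syncp, start));
}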
2019 Nov 26
0
[net-next V3 2/2] drivers: net: virtio_net: Implement a dev_watchdog handler
...ting the virtio-net driver. */
+ struct work_struct reset_work;
+
/* Does the affinity hint is set for virtqueues? */
bool affinity_hint_set;
@@ -1721,7 +1726,7 @@ static void virtnet_stats(struct net_device *dev,
int i;
for (i = 0; i < vi->max_queue_pairs; i++) {
- u64 tpackets, tbytes, rpackets, rbytes, rdrops;
+ u64 tpackets, tbytes, terrors, rpackets, rbytes, rdrops;
struct receive_queue *rq = &vi->rq[i];
struct send_queue *sq = &vi->sq[i];
@@ -1729,6 +1734,7 @@ static void virtnet_stats(struct net_device *dev,
start = u64_stats_fetch_begin_irq(&...
2019 Nov 22
0
[PATCH] drivers: net: virtio_net: Implement a dev_watchdog handler
...ting the virtio-net driver. */
+ struct work_struct reset_work;
+
/* Does the affinity hint is set for virtqueues? */
bool affinity_hint_set;
@@ -1721,7 +1726,7 @@ static void virtnet_stats(struct net_device *dev,
int i;
for (i = 0; i < vi->max_queue_pairs; i++) {
- u64 tpackets, tbytes, rpackets, rbytes, rdrops;
+ u64 tpackets, tbytes, terrors, rpackets, rbytes, rdrops;
struct receive_queue *rq = &vi->rq[i];
struct send_queue *sq = &vi->sq[i];
@@ -1729,6 +1734,7 @@ static void virtnet_stats(struct net_device *dev,
start = u64_stats_fetch_begin_irq(&...
2019 Nov 22
0
[PATCH net-next v2] drivers: net: virtio_net: Implement a dev_watchdog handler
...struct reset_work;
> +
> /* Does the affinity hint is set for virtqueues? */
> bool affinity_hint_set;
>
> @@ -1721,7 +1726,7 @@ static void virtnet_stats(struct net_device *dev,
> int i;
>
> for (i = 0; i < vi->max_queue_pairs; i++) {
> - u64 tpackets, tbytes, rpackets, rbytes, rdrops;
> + u64 tpackets, tbytes, terrors, rpackets, rbytes, rdrops;
> struct receive_queue *rq = &vi->rq[i];
> struct send_queue *sq = &vi->sq[i];
>
> @@ -1729,6 +1734,7 @@ static void virtnet_stats(struct net_device *dev,
> start =...
2013 Dec 02
2
backup mdbox best strategy
Hello,
I have to back up (to a tape library) a mail system with about 300,000
mailboxes on 2 backends. The total size of all mailboxes is 2 TByte.
The mailstore is mdbox.
Is it safe to do a simple filesystem backup (full and incremental) with
backup software?
What is the preferred strategy for a backup for disaster recovery
(mail system crash) and for restoring single user mailboxes?
Regards,
Claus
2011 Sep 09
1
1 TByte (99.5%) data missing on rsync backup???
Hi,
I have a weird problem with RSync:
There are about 1.2 TB of data on a NAS. When I plug in a hard disk and make a
backup with rsync, "df -h" shows the backup disk filling up to nearly 1.2
TB. But after rsync has
finished, there are only 3.7 GB on the backup disk.
System:
Linux nas 2.6.37-gentoo-r4 #1 SMP Tue May 3 19:54:31 CEST 2011
x86_64 Intel(R) Atom(TM) CPU 330 @ 1.60GHz GenuineIntel
2012 Nov 21
5
mixing WD20EFRX and WD2002FYPS in one pool
Hi,
after a flaky 8-drive Linux RAID10 just shredded about 2 TByte worth
of my data at home (conveniently just before I could make
a backup) I've decided to go both full redundancy and
all-ZFS at home.
A couple of questions: is there a way to make the WD20EFRX (2 TByte, 4k
sectors) and WD2002FYPS (4k internally, reported as 512 bytes?)
work well together...
2011 Nov 08
2
Multiple Partitions with mdbox
With a > 10 TByte mailstore, filesystem checks take too much time.
At the moment we have four different partitions, but I don't like setting
symlinks or LDAP flags to sort customers and their domains onto their
individual mount points. I'd like to work with mdbox:/mail/%d/%n to calculate
the path automatical...