Displaying 12 results from an estimated 12 matches for "set_act".
2006 May 15
20
[PATCH 0/3] xenoprof fixes
These patches address issues in the kernel part of xenoprof:
* Ill-advised use of on_each_cpu() can lead to sleep with interrupts
disabled.
* Race conditions in active_domains code.
* Cleanup of active_domains code.
Comments welcome.
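For readers not steeped in kernel locking rules: on_each_cpu() runs its callback in atomic context, with interrupts disabled, so a callback that ends up calling anything that may sleep is a bug. The sketch below only illustrates that pattern; the lock and function names are made up, it is not the xenoprof code, and the 3-argument on_each_cpu() form shown is the modern one (older kernels took an extra retry flag).
==========================
/* Illustration only (not the xenoprof code): the callback passed to
 * on_each_cpu() runs with interrupts disabled, so it must not call
 * anything that can sleep. */
#include <linux/smp.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(setup_lock);             /* hypothetical lock */

static void per_cpu_setup(void *info)
{
    mutex_lock(&setup_lock);   /* BUG: may sleep while IRQs are off */
    /* ... per-CPU counter setup ... */
    mutex_unlock(&setup_lock);
}

static void setup_all_cpus(void)
{
    /* Runs per_cpu_setup() on every CPU in atomic context; any sleeping
     * work belongs outside the callback, e.g. before this call or
     * deferred to a workqueue. */
    on_each_cpu(per_cpu_setup, NULL, 1);
}
==========================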
2017 Sep 29
1
Gluster geo replication volume is faulty
...vol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/40efd54bad1d5828a1221dd560de376f
[2017-09-29 15:53:30.742360] I [resource(/gfs/arbiter/gv0):1654:service_loop] GLUSTER: Register time time=1506700410
[2017-09-29 15:53:30.754738] I [gsyncdstatus(/gfs/arbiter/gv0):275:set_active] GeorepStatus: Worker Status Change status=Active
[2017-09-29 15:53:30.756040] I [gsyncdstatus(/gfs/arbiter/gv0):247:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2017-09-29 15:53:30.756280] I [master(/gfs/arbiter/gv0):1429:crawl] _GMaster: starting history cra...
2017 Oct 06
0
Gluster geo replication volume is faulty
...r%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/40efd54bad1d5828a1221dd560de376f
> [2017-09-29 15:53:30.742360] I [resource(/gfs/arbiter/gv0):1654:service_loop] GLUSTER: Register time time=1506700410
> [2017-09-29 15:53:30.754738] I [gsyncdstatus(/gfs/arbiter/gv0):275:set_active] GeorepStatus: Worker Status Change status=Active
> [2017-09-29 15:53:30.756040] I [gsyncdstatus(/gfs/arbiter/gv0):247:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
> [2017-09-29 15:53:30.756280] I [master(/gfs/arbiter/gv0):1429:crawl]
>...
2006 Apr 28
8
[PATCH] Xenoprof passive domain support
Hi Renato,
This patch adds Xenoprof passive domain support in SMP environments.
Basically:
- It allocates per-vcpu buffers for the passive domain and maps them into
the primary domain's space.
- When the primary domain gets sampled and triggers a virq, its kernel module
will handle the passive domain's samples besides its own. There is a potential
buffer overflow if the passive domain is very busy while
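The excerpt cuts off there, but the overflow concern it raises can be pictured with a minimal sketch, assuming a fixed-size per-vcpu ring that simply counts and drops samples once the consumer in the primary domain falls behind. The structure, names, and sizes below are invented for illustration, not taken from the patch.
==========================
/* Illustration of the overflow concern, not the actual xenoprof buffers:
 * a busy passive domain can fill its per-vcpu ring faster than the
 * primary domain's module drains it, and further samples are dropped. */
#define SAMPLE_BUF_SIZE 1024                /* assumed size */

struct vcpu_sample_buf {
    unsigned long samples[SAMPLE_BUF_SIZE];
    unsigned int head;                      /* next slot the producer writes */
    unsigned int tail;                      /* next slot the consumer reads  */
    unsigned long lost;                     /* samples dropped on overflow   */
};

static int push_sample(struct vcpu_sample_buf *buf, unsigned long pc)
{
    unsigned int next = (buf->head + 1) % SAMPLE_BUF_SIZE;

    if (next == buf->tail) {                /* ring full: drop the sample */
        buf->lost++;
        return -1;
    }
    buf->samples[buf->head] = pc;
    buf->head = next;
    return 0;
}
==========================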
2010 Sep 18
1
find bug:syslinux.exe
Hi,
I am a syslinux user, not a developer, so I am not subscribed to the syslinux
mailing list. I am just reporting a bug; I have no further information.
bug file:
syslinux.exe
source:
win\syslinux.c
in function FixMBR:
==========================
BOOL FixMBR(int driveNum, int partitionNum, int write_mbr, int set_active)
{
    BOOL result = TRUE;
    HANDLE drive;
    char driveName[128];
    sprintf(driveName, "\\\\.\\PHYSICALDRIVE%d", driveNum);
==========================
It needs a drive number! It gets it from this function:
==========================
STORAGE_DEVICE_NUMBER sdn;
if (GetStorageDeviceNumberByHandle(d_han...
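For context, a physical drive number is normally obtained from an open handle with the IOCTL_STORAGE_GET_DEVICE_NUMBER ioctl. The sketch below shows that query in isolation; the helper name and error handling are my own, and it is not the body of syslinux's GetStorageDeviceNumberByHandle().
==========================
#include <windows.h>
#include <winioctl.h>

/* Illustrative helper (not syslinux code): map an open drive/volume
 * handle to the physical drive number backing it. */
static BOOL QueryDriveNumber(HANDLE drive, int *driveNum)
{
    STORAGE_DEVICE_NUMBER sdn;
    DWORD bytes;

    /* Ask the storage stack which physical device backs this handle. */
    if (!DeviceIoControl(drive, IOCTL_STORAGE_GET_DEVICE_NUMBER,
                         NULL, 0, &sdn, sizeof(sdn), &bytes, NULL))
        return FALSE;

    *driveNum = (int)sdn.DeviceNumber;      /* e.g. 0 for \\.\PHYSICALDRIVE0 */
    return TRUE;
}
==========================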
2024 Jan 24
1
Geo-replication status is getting Faulty after a few seconds
...b/misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick}]
[2024-01-24 19:51:29.139531] I [resource(worker /opt/tier1data2019/brick):1292:service_loop] GLUSTER: Register time [{time=1706125889}]
[2024-01-24 19:51:29.173877] I [gsyncdstatus(worker /opt/tier1data2019/brick):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2024-01-24 19:51:29.174407] I [gsyncdstatus(worker /opt/tier1data2019/brick):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]
[2024-01-24 19:51:29.174558] I [master(worker /opt/tier1data2019/brick):1576...
2024 Jan 27
1
Geo-replication status is getting Faulty after a few seconds
...misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick}]
[2024-01-24 19:51:29.139531] I [resource(worker /opt/tier1data2019/brick):1292:service_loop] GLUSTER: Register time [{time=1706125889}]
[2024-01-24 19:51:29.173877] I [gsyncdstatus(worker /opt/tier1data2019/brick):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2024-01-24 19:51:29.174407] I [gsyncdstatus(worker /opt/tier1data2019/brick):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]
[2024-01-24 19:51:29.174558] I [master(worker /opt/tier1data2019/brick):15...
2024 Jan 27
1
Geo-replication status is getting Faulty after a few seconds
...misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick}]
[2024-01-24 19:51:29.139531] I [resource(worker /opt/tier1data2019/brick):1292:service_loop] GLUSTER: Register time [{time=1706125889}]
[2024-01-24 19:51:29.173877] I [gsyncdstatus(worker /opt/tier1data2019/brick):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2024-01-24 19:51:29.174407] I [gsyncdstatus(worker /opt/tier1data2019/brick):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]
[2024-01-24 19:51:29.174558] I [master(worker /opt/tier1data2019/brick):15...
2024 Jan 22
1
Geo-replication status is getting Faulty after a few seconds
Hi There,
We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication.
# gluster volume info
Volume Name: tier1data
Type: Replicate
Volume ID: 93c45c14-f700-4d50-962b-7653be471e27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master1:/opt/tier1data2019/brick
Brick2: master2:/opt/tier1data2019/brick
2008 Mar 21
12
[Bug 1450] New: Support for ConsoleKit on Linux through dbus calls
https://bugzilla.mindrot.org/show_bug.cgi?id=1450
Summary: Support for ConsoleKit on Linux through dbus calls
Classification: Unclassified
Product: Portable OpenSSH
Version: 4.7p1
Platform: Other
2009 Nov 06
18
xenoprof: operation 9 failed for dom0 (status: -1)
Renato,
When I tried running "opcontrol --start" (after previously running
"opcontrol --start-daemon") in dom0, I got this error message:
/usr/local/bin/opcontrol: line 1639: echo: write error: Operation not
permitted
and this message in the Xen console:
(XEN) xenoprof: operation 9 failed for dom 0 (status : -1)
It looks like opcontrol is trying to do this: echo 1 >
2009 Nov 06
18
xenoprof: operation 9 failed for dom0 (status: -1)
Renato,
When I tried running "opcontrol --start" (after previously running
"opcontrol --start-daemon") in dom0, I got this error message:
/usr/local/bin/opcontrol: line 1639: echo: write error: Operation not
permitted
and this message in the Xen console:
(XEN) xenoprof: operation 9 failed for dom 0 (status : -1)
It looks like opcontrol is trying to do this: echo 1 >