Steve Linabery
2008-Aug-07 07:06 UTC
[Ovirt-devel] [PATCH] Adds max/min methods to StatsDataList. Limited cleanup in graph_controller.rb. First stab at stats data retrieval for new graphing approach.
Sorry for the attachment; haven't set up my esmtp yet.

I didn't clean up a great deal of the existing code in graph_controller because I suspect most of it will be removed. I did edit that loop on total_memory so that it wouldn't make mongrel time out.

Some whitespace cleanup in graph_controller.

Main thing for review is the new method I wrote in graph_controller to retrieve stats in either xml or json (the generation of which is yet to be implemented) for a single host or vm. I want to avoid these big long lists of stats requests to Stats.rb, because a) there's no performance hit AFAICT in breaking them up into smaller requests, and b) the graph (or whatever walks the tree and assembles the data lists for the graph) needs to make more precise requests for data.

Comments & suggestions welcome.

Good day,
Steve

-------------- next part --------------
From 0947ea5f141a8ec30d6f09693dee6db5c0854e93 Mon Sep 17 00:00:00 2001
From: Steve Linabery <slinabery at redhat.com>
Date: Thu, 7 Aug 2008 01:56:11 -0500
Subject: [PATCH] Adds max/min methods to StatsDataList. Limited cleanup in graph_controller.rb. First stab at stats data retrieval for new graphing approach.
---
 wui/src/app/controllers/graph_controller.rb |  194 ++++++++++++++++++++-------
 wui/src/app/util/stats/Stats.rb             |   77 ++++++++++--
 wui/src/app/util/stats/StatsDataList.rb     |   58 ++++++---
 wui/src/app/util/stats/statsTest.rb         |   54 +++++---
 4 files changed, 283 insertions(+), 100 deletions(-)

diff --git a/wui/src/app/controllers/graph_controller.rb b/wui/src/app/controllers/graph_controller.rb
index dbe2afc..87fc52d 100644
--- a/wui/src/app/controllers/graph_controller.rb
+++ b/wui/src/app/controllers/graph_controller.rb
@@ -3,6 +3,65 @@ require 'util/stats/Stats'
 class GraphController < ApplicationController
   layout nil
 
+  # returns data for one pool/host/vm, one target
+  def one_resource_graph_data
+
+    # the primary key for the resource
+    id = params[:id]
+
+    # times are in milliseconds since the epoch
+    startTime = params[:start]
+    endTime = params[:end]
+
+    # what statistic to retrieve data for? (cpu, memory, net i/o, etc)
+    target = params[:target]
+
+    # the desired resolution (may or may not be resolution of results)
+    resolutionIn = params[:resolution]
+
+    # host or vm
+    poolType = params[:poolType]
+
+    #TODO: add authorization check (is user allowed to view this data?)
+
+    devClass = DEV_KEY_CLASSES[target]
+    counter = DEV_KEY_COUNTERS[target]
+    duration = endTime - startTime
+    resolution = _validate_resolution(resolutionIn)
+    #TODO: adjust number of cpus properly
+    cpus = 0
+
+    resourceName = ""
+    if poolType == "host"
+      resourceName = Host.find(id).hostname
+    elsif poolType == "vm"
+      #FIXME: Will Stats allow querying by Vm UUID?
+      resourceName = Vm.find(id).uuid
+    end
+
+    requestList = [ ]
+    #TODO: need new method in Stats to average results for >1 cpu
+    requestList.push(StatsRequest.new(resourceName, devClass, cpus, counter,
+                                      startTime, duration, resolution,
+                                      DataFunction::Average),
+                     StatsRequest.new(resourceName, devClass, cpus, counter,
+                                      startTime, duration, resolution,
+                                      DataFunction::Peak),
+                     StatsRequest.new(resourceName, devClass, cpus, counter,
+                                      startTime, duration, resolution,
+                                      DataFunction::RollingPeak),
+                     StatsRequest.new(resourceName, devClass, cpus, counter,
+                                      startTime, duration, resolution,
+                                      DataFunction::RollingAverage))
+
+    @statsList = getStatsData?(requestList)
+    respond_to do |format|
+      format.xml
+      format.json #is this supported?
+    end
+  end
+
   # generate layout for avaialability bar graphs
   def availability_graph
     @id = params[:id]
@@ -41,7 +100,7 @@ class GraphController < ApplicationController
     unlimited = false
     total=0
     used= pools.inject(0) { |sum, pool| sum+pool.allocated_resources[:current][resource_key] }
-    pools.each do |pool| 
+    pools.each do |pool|
       resource = pool.total_resources[resource_key]
       if resource
         total +=resource
@@ -74,9 +133,9 @@ class GraphController < ApplicationController
     devclass = DEV_KEY_CLASSES[target]
     counter = DEV_KEY_COUNTERS[target]
     @pool = Pool.find(@id)
-    
+
     hosts = @pool.hosts
-    # temporary workaround for vm resource history 
+    # temporary workaround for vm resource history
     # graph until we have a more reqs / long term solution
     if poolType == "vm"
       hosts = []
@@ -89,18 +148,18 @@ class GraphController < ApplicationController
 
     startTime = 0
     duration, resolution = _get_snapshot_time_params(myDays.to_i)
-    
+
     requestList = [ ]
     @pool.hosts.each { |host|
       if target == "cpu"
         0.upto(host.num_cpus - 1){ |x|
-          requestList.push( StatsRequest.new(host.hostname, devclass, x, counter, startTime, duration, resolution, DataFunction::Average), 
+          requestList.push( StatsRequest.new(host.hostname, devclass, x, counter, startTime, duration, resolution, DataFunction::Average),
                             StatsRequest.new(host.hostname, devclass, x, counter, startTime, duration, resolution, DataFunction::Peak),
                             StatsRequest.new(host.hostname, devclass, x, counter, startTime, duration, resolution, DataFunction::RollingPeak),
                             StatsRequest.new(host.hostname, devclass, x, counter, startTime, duration, resolution, DataFunction::RollingAverage))
         }
       else
-        requestList.push( StatsRequest.new(host.hostname, devclass, 0, counter, startTime, duration, resolution, DataFunction::Average), 
+        requestList.push( StatsRequest.new(host.hostname, devclass, 0, counter, startTime, duration, resolution, DataFunction::Average),
                           StatsRequest.new(host.hostname, devclass, 0, counter, startTime, duration, resolution, DataFunction::Peak),
                           StatsRequest.new(host.hostname, devclass, 0, counter, startTime, duration, resolution, DataFunction::RollingPeak),
                           StatsRequest.new(host.hostname, devclass, 0, counter, startTime, duration, resolution, DataFunction::RollingAverage))
@@ -120,9 +179,11 @@ class GraphController < ApplicationController
           valueindex = (data.get_timestamp?.to_i - dat[0].get_timestamp?.to_i) / resolution
           times.size.upto(valueindex) { |x|
             time = Time.at(dat[0].get_timestamp?.to_i + valueindex * resolution)
-            ts = Date::ABBR_MONTHNAMES[time.month] + ' ' + time.day.to_s
-            ts += ' ' + time.hour.to_s + ':' + time.min.to_s if myDays.to_i == 1
-            times.push ts
+            if myDays.to_i == 1
+              times.push _time_long_format(time)
+            else
+              times.push _time_short_format(time)
+            end
           }
           [@avg_history, @peak_history, @roll_avg_history, @roll_peak_history].each { |valuearray|
             valuearray[:values].size.upto(valueindex) { |x|
@@ -157,10 +218,8 @@ class GraphController < ApplicationController
       }
     end
 
-    total_peak = 0
-    total_roll_peak = 0
-    0.upto(@peak_history[:values].size - 1){ |x| total_peak = @peak_history[:values][x] if @peak_history[:values][x] > total_peak }
-    0.upto(@roll_peak_history[:values].size - 1){ |x| total_roll_peak = @roll_peak_history[:values][x] if @roll_peak_history[:values][x] > total_roll_peak }
+    total_peak = @peak_history.get_max_value?
+    total_roll_peak = @roll_peak_history.get_max_value?
 
     scale = []
     if target == "cpu"
@@ -168,12 +227,23 @@ class GraphController < ApplicationController
         scale.push x.to_s
       }
     elsif target == "memory"
-      #increments = @pool.hosts.total_memory / 512
-      0.upto(@pool.hosts.total_memory) { |x|
-        if x % 1024 == 0
-          scale.push((x / 1024).to_s) # divide by 1024 to convert to MB
-        end
-      }
+      megabyte = 1024
+      totalMemory = @pool.hosts.total_memory
+      tick = megabyte
+      if totalMemory >= 10 * megabyte && totalMemory < 100 * megabyte
+        tick = 10 * megabyte
+      elsif totalMemory >= 100 * megabyte && totalMemory < 1024 * megabyte
+        tick = 100 * megabyte
+      else
+        tick = 1024 * megabyte
+      end
+
+      counter = 0
+      while counter * tick < totalMemory do
+        counter += 1 #this gives us one tick mark beyond totalMemory
+        scale.push((counter * tick / 1024).to_s) # divide by 1024 to convert to MB
+      end
+
     elsif target == "load"
      0.upto(total_peak){|x|
         scale.push x.to_s if x % 5 == 0
@@ -186,7 +256,7 @@ class GraphController < ApplicationController
 
     graph_object = {
       :timepoints => times,
       :scale => scale,
-      :dataset => 
+      :dataset =>
         [
           {
             :name => target + "roll_peak",
@@ -196,7 +266,7 @@ class GraphController < ApplicationController
           },
           {
             :name => target + "roll_average",
-            :values => @roll_avg_history[:values], 
+            :values => @roll_avg_history[:values],
             :stroke => @roll_avg_history[:color],
             :strokeWidth => 2
           },
@@ -208,7 +278,7 @@ class GraphController < ApplicationController
           },
           {
             :name => target + "average",
-            :values => @avg_history[:values], 
+            :values => @avg_history[:values],
             :stroke => @avg_history[:color],
             :strokeWidth => 1
           }
@@ -237,13 +307,13 @@ class GraphController < ApplicationController
       if load_value.nil?
         load_value = 0
       elsif load_value > 10 # hack to cap it as we have nothing to compare against
-        load_value = 10 
+        load_value = 10
       end
       load_remaining = 10 - load_value
-      
+
       graph_object = {
         :timepoints => [],
-        :dataset => 
+        :dataset =>
         [
           {
             :name => target,
@@ -264,7 +334,7 @@ class GraphController < ApplicationController
     render :json => graph_object
   end
 
-  
+
   # generate layout for snapshot graphs
   def snapshot_graph
     @id = params[:id]
@@ -275,7 +345,7 @@ class GraphController < ApplicationController
                    :scale => { 'load' => 10, 'cpu' => 100, 'memory' => 0, 'netin' => 1000, 'netout' => 1000}, # values which to scale graphs against
                    :peak => { 'load' => 0, 'cpu' => 0, 'netin' => 0, 'netout' => 0, 'memory' => 0 }}
     @data_points = { :avg => { 'load' => 0, 'cpu' => 0, 'netin' => 0, 'netout' => 0, 'memory' => 0 },
-                     :scale => { 'load' => 10, 'cpu' => 100, 'memory' => 0, 'netin' => 1000, 'netout' => 1000}, 
+                     :scale => { 'load' => 10, 'cpu' => 100, 'memory' => 0, 'netin' => 1000, 'netout' => 1000},
                      :peak => { 'load' => 0, 'cpu' => 0, 'netin' => 0, 'netout' => 0, 'memory' => 0 }}
 
     duration = 600
@@ -291,7 +361,7 @@ class GraphController < ApplicationController
         host.nics.each{ |nic|
           @snapshots[:scale]['netin'] += 1000
           @snapshots[:scale]['netout'] += 1000
-          # @snapshots[:scale]['netin'] += nic.bandwidth 
+          # @snapshots[:scale]['netin'] += nic.bandwidth
           # @snapshots[:scale]['netout'] += nic.bandwidth
         }
       elsif @poolType == 'vm'
@@ -319,7 +389,7 @@ class GraphController < ApplicationController
         }
       }
     end
-    
+
     statsList = getStatsData?( requestList )
     statsList.each { |stat|
       if stat.get_status? == StatsStatus::SUCCESS
@@ -395,32 +465,32 @@ class GraphController < ApplicationController
   DEV_CLASS_KEYS = DEV_KEY_CLASSES.invert
 
   # TODO this needs fixing / completing (cpu: more than user time? disk: ?, load: correct?, nics: correct?)
-  DEV_KEY_COUNTERS = { 'cpu' => CpuCounter::CalcUsed, 'memory' => MemCounter::Used, 'disk' => DiskCounter::Ops_read, 
+  DEV_KEY_COUNTERS = { 'cpu' => CpuCounter::CalcUsed, 'memory' => MemCounter::Used, 'disk' => DiskCounter::Ops_read,
                        'load' => LoadCounter::Load_1min, 'netin' => NicCounter::Octets_rx, 'netout' => NicCounter::Octets_tx }
   DEV_COUNTER_KEYS = DEV_KEY_COUNTERS.invert
 
   def _create_host_snapshot_requests(hostname, duration, resolution)
     requestList = []
     requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['memory'], 0, DEV_KEY_COUNTERS['memory'],
-                                    0, duration, resolution, DataFunction::Average) 
-    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['memory'], 0, DEV_KEY_COUNTERS['memory'], 
-                                    0, duration, resolution, DataFunction::Peak ) 
+                                    0, duration, resolution, DataFunction::Average)
+    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['memory'], 0, DEV_KEY_COUNTERS['memory'],
+                                    0, duration, resolution, DataFunction::Peak )
     requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['load'], 0, DEV_KEY_COUNTERS['load'],
                                     0, duration, resolution, DataFunction::Average)
-    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['load'], 0, DEV_KEY_COUNTERS['load'], 
+    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['load'], 0, DEV_KEY_COUNTERS['load'],
                                     0, duration, resolution, DataFunction::Peak )
-    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['cpu'], 0, DEV_KEY_COUNTERS['cpu'], 
+    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['cpu'], 0, DEV_KEY_COUNTERS['cpu'],
                                     0, duration, resolution, DataFunction::Average) # TODO more than 1 cpu
-    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['cpu'], 0, DEV_KEY_COUNTERS['cpu'], 
+    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['cpu'], 0, DEV_KEY_COUNTERS['cpu'],
                                     0, duration, resolution, DataFunction::Peak ) # TODO more than 1 cpu
-    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['netout'], 0, DEV_KEY_COUNTERS['netout'], 
-                                    0, duration, resolution, DataFunction::Average) 
-    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['netout'], 0, DEV_KEY_COUNTERS['netout'], 
-                                    0, duration, resolution, DataFunction::Peak ) 
+    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['netout'], 0, DEV_KEY_COUNTERS['netout'],
+                                    0, duration, resolution, DataFunction::Average)
+    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['netout'], 0, DEV_KEY_COUNTERS['netout'],
+                                    0, duration, resolution, DataFunction::Peak )
     requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['netin'], 0, DEV_KEY_COUNTERS['netin'],
-                                    0, duration, resolution, DataFunction::Average) 
-    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['netin'], 0, DEV_KEY_COUNTERS['netin'], 
-                                    0, duration, resolution, DataFunction::Peak ) 
+                                    0, duration, resolution, DataFunction::Average)
+    requestList << StatsRequest.new(hostname, DEV_KEY_CLASSES['netin'], 0, DEV_KEY_COUNTERS['netin'],
+                                    0, duration, resolution, DataFunction::Peak )
     return requestList
   end
 
@@ -438,11 +508,11 @@ class GraphController < ApplicationController
   end
 
   def _get_snapshot_value(value, devClass, function)
-    if ( ( devClass != DEV_KEY_CLASSES["cpu"]) && 
+    if ( ( devClass != DEV_KEY_CLASSES["cpu"]) &&
         ( function != DataFunction::RollingAverage) &&
         ( function != DataFunction::RollingPeak) &&
-        ( value.nan?) ) 
-      return 0 
+        ( value.nan?) )
+      return 0
     end
 
     # massage some of the data:
@@ -451,7 +521,7 @@ class GraphController < ApplicationController
     elsif devClass == DEV_KEY_CLASSES["netout"] && counter == DEV_KEY_COUNTER["netout"]
       return (value.to_i * 8 / 1024 / 1024).to_i #mbits
     elsif devClass == DEV_KEY_CLASSES["netin"] && counter == DEV_KEY_COUNTER["netin"]
-      return (value.to_i * 8 / 1024 / 1024).to_i # mbits 
+      return (value.to_i * 8 / 1024 / 1024).to_i # mbits
     elsif devClass == DEV_KEY_CLASSES["memory"]
       return (value.to_i / 1000000).to_i
     end
@@ -462,16 +532,38 @@ class GraphController < ApplicationController
     now = Time.now
     if myDays.to_i == 1
      0.upto(152){|x|
-        time = now - 568 * x # 568 = 24 * 60 * 60 / 152 = secs / interval
-        times.push Date::ABBR_MONTHNAMES[time.month] + ' ' + time.day.to_s + ' ' + time.hour.to_s + ':' + time.min.to_s
+        # 568 = 24 * 60 * 60 / 152 = secs / interval
+        time = now - 568 * x
+        times.push _time_long_format(time)
      }
    elsif
      1.upto(myDays.to_i * 3){|x|
-        time = now - x * 28800 # 24 * 60 * 60 / ~2
-        times.push Date::ABBR_MONTHNAMES[time.month] + ' ' + time.day.to_s
+        # 24 * 60 * 60 / ~2
+        time = now - x * 28800
+        times.push _time_short_format(time)
      }
    end
    times.reverse!
   end
 
+  def _time_short_format(time)
+    time.strftime("%b %d")
+  end
+
+  def _time_long_format(time)
+    time.strftime("%b %d %H:%M")
+  end
+
+  def _validate_resolution(resolution)
+    if resolution <= RRDResolution::Short
+      RRDResolution::Short
+    elsif resolution <= RRDResolution::Medium
+      RRDResolution::Medium
+    elsif resolution <= RRDResolution::Long
+      RRDResolution::Long
+    else
+      RRDResolution::Default
+    end
+  end
+
 end
diff --git a/wui/src/app/util/stats/Stats.rb b/wui/src/app/util/stats/Stats.rb
index f6ced4b..2741e4a 100644
--- a/wui/src/app/util/stats/Stats.rb
+++ b/wui/src/app/util/stats/Stats.rb
@@ -29,6 +29,8 @@ require 'util/stats/StatsRequest'
 
 def fetchRollingAve?(rrdPath, start, endTime, interval, myFunction, lIndex, returnList, aveLen=7)
   final = 0
+  my_min = 0
+  my_max = 0
 
   # OK, first thing we need to do is to move the start time back in order to
   # have data to average.
@@ -55,7 +57,6 @@ def fetchRollingAve?(rrdPath, start, endTime, interval, myFunction, lIndex, retu
     value = 0
     value = vdata[lIndex]
     value = 0 if value.nan?
-
     roll.push(value)
 
     if ( i >= aveLen)
@@ -65,19 +66,34 @@ def fetchRollingAve?(rrdPath, start, endTime, interval, myFunction, lIndex, retu
         final += rdata
       end
       final = (final / aveLen )
+
+      # Determine min / max to help with autoscale.
+      if ( final > my_max )
+        my_max = final
+      end
+      if ( final < my_min )
+        my_min = final
+      end
+
       returnList.append_data( StatsData.new(fstart + interval * ( i - indexOffset), final ))
 
       # Now shift the head off the array
       roll.shift
     end
   end
-  
+
+  # Now add the min / max to the lists
+  returnList.set_min_value(my_min)
+  returnList.set_max_value(my_max)
+
   return returnList
 end
 
 
 def fetchRollingCalcUsedData?(rrdPath, start, endTime, interval, myFunction, lIndex, returnList, aveLen=7)
+  my_min = 0
+  my_max = 0
+
   # OK, first thing we need to do is to move the start time back in order to have data to average.
   indexOffset = ( aveLen / 2 ).to_i
@@ -120,12 +136,24 @@ def fetchRollingCalcUsedData?(rrdPath, start, endTime, interval, myFunction, lIn
         final += rdata
       end
       final = (final / aveLen)
+
+      # Determine min / max to help with autoscale.
+      if ( final > my_max )
+        my_max = final
+      end
+      if ( final < my_min )
+        my_min = final
+      end
+
       returnList.append_data( StatsData.new(fstart + interval * ( i - indexOffset), final ))
 
       # Now shift the head off the array
       roll.shift
     end
   end
 
+  # Now add the min / max to the lists
+  returnList.set_min_value(my_min)
+  returnList.set_max_value(my_max)
+
   return returnList
 end
 
@@ -137,6 +165,9 @@ def fetchCalcUsedData?(rrdPath, start, endTime, interval, myFunction, lIndex, re
   # We also need to handle NaN differently
   # Finally, we need to switch Min and Max
 
+  my_min = 0
+  my_max = 0
+
   lFunc = "AVERAGE"
   case myFunction
     when "MAX"
@@ -155,13 +186,26 @@ def fetchCalcUsedData?(rrdPath, start, endTime, interval, myFunction, lIndex, re
   data.each do |vdata|
     i += 1
     value = vdata[lIndex]
-    value = 100 if value.nan? 
-    if ( value > 100 ) 
-      value = 100 
-    end 
-    value = 100 - value 
+    value = 100 if value.nan?
+    if ( value > 100 )
+      value = 100
+    end
+    value = 100 - value
+
+    # Determine min / max to help with autoscale.
+    if ( value > my_max )
+      my_max = value
+    end
+    if ( value < my_min )
+      my_min = value
+    end
+
     returnList.append_data( StatsData.new(fstart + interval * i, value ))
   end
+
+  # Now add the min / max to the lists
+  returnList.set_min_value(my_min)
+  returnList.set_max_value(my_max)
   return returnList
 end
 
@@ -169,6 +213,9 @@ end
 
 def fetchRegData?(rrdPath, start, endTime, interval, myFunction, lIndex, returnList)
 
+  my_min = 0
+  my_max = 0
+
   (fstart, fend, names, data, interval) = RRD.fetch(rrdPath, "--start", start.to_s, "--end", \
            endTime.to_s, myFunction, "-r", interval.to_s)
   i = 0
@@ -177,9 +224,21 @@ def fetchRegData?(rrdPath, start, endTime, interval, myFunction, lIndex, returnL
 
   # Now, lets walk the returned data and create the ojects, and put them in a list.
   data.each do |vdata|
+    value = vdata[lIndex]
     i += 1
-    returnList.append_data( StatsData.new(fstart + interval * i, vdata[lIndex] ))
+    if ( value > my_max )
+      my_max = value
+    end
+    if ( value < my_min )
+      my_min = value
+    end
+
+    returnList.append_data( StatsData.new(fstart + interval * i, value ))
   end
+
+  # Now add the min / max to the lists
+  returnList.set_min_value(my_min)
+  returnList.set_max_value(my_max)
 
   return returnList
 end
 
@@ -294,7 +353,7 @@ def getStatsData?(statRequestList)
     counter = request.get_counter?
     tmpList =fetchData?(request.get_node?, request.get_devClass?,request.get_instance?, request.get_counter?, \
                 request.get_starttime?, request.get_duration?,request.get_precision?, request.get_function?)
-    
+
     # Now copy the array returned into the main array
     myList << tmpList
   end
diff --git a/wui/src/app/util/stats/StatsDataList.rb b/wui/src/app/util/stats/StatsDataList.rb
index d6de29c..9f20a12 100644
--- a/wui/src/app/util/stats/StatsDataList.rb
+++ b/wui/src/app/util/stats/StatsDataList.rb
@@ -21,7 +21,7 @@
 #define class StatsData List
 class StatsDataList
   def initialize(node,devClass,instance, counter, status, function)
-    # Instance variables 
+    # Instance variables
     @node = node
     @devClass = devClass
     @instance = instance
@@ -29,41 +29,63 @@ class StatsDataList
     @data=[]
     @status = status
     @function = function
-  end 
+    @min_value = 0
+    @max_value = 0
+  end
 
-  def get_node?() 
+  def get_node?()
     return @node
-  end 
+  end
 
-  def get_devClass?() 
+  def get_node?()
+    return @node
+  end
+
+  def get_devClass?()
     return @devClass
-  end 
+  end
 
-  def get_instance?() 
+  def get_instance?()
     return @instance
-  end 
+  end
 
-  def get_counter?() 
+  def get_counter?()
     return @counter
-  end 
+  end
 
-  def get_data?() 
+  def get_data?()
     return @data
-  end 
+  end
 
-  def get_status?() 
+  def get_status?()
     return @status
-  end 
+  end
 
-  def get_function?() 
+  def get_function?()
     return @function
-  end 
+  end
 
-  def append_data(incoming) 
+  def append_data(incoming)
     @data << incoming
-  end 
+  end
 
   def length()
     return @data.length
   end
+
+  def set_min_value(min)
+    @min_value = min
+  end
+
+  def set_max_value(max)
+    @max_value = max
+  end
+
+  def get_min_value?()
+    return @min_value
+  end
+
+  def get_max_value?()
+    return @max_value
+  end
 end
diff --git a/wui/src/app/util/stats/statsTest.rb b/wui/src/app/util/stats/statsTest.rb
index baedbc0..1005b32 100644
--- a/wui/src/app/util/stats/statsTest.rb
+++ b/wui/src/app/util/stats/statsTest.rb
@@ -33,11 +33,20 @@ require 'util/stats/Stats'
 #  requestList << StatsRequest.new("node3.priv.ovirt.org", DevClass::Load, 0, LoadCounter::Load_15min, 0, 0, RRDResolution::Long )
 #  requestList << StatsRequest.new("node7.priv.ovirt.org", DevClass::NIC, 0, NicCounter::Octets_rx, 0, 0, RRDResolution::Long )
 #  requestList << StatsRequest.new("node3.priv.ovirt.org", DevClass::NIC, 1, NicCounter::Octets_rx, 0, 0, RRDResolution::Long )
-  requestList << StatsRequest.new("node3.priv.ovirt.org", DevClass::NIC, 0, NicCounter::Octets_tx, 0, 604800, RRDResolution::Medium )
+#  requestList << StatsRequest.new("node5.priv.ovirt.org", DevClass::NIC, 0, NicCounter::Octets_tx, 0, 604800, RRDResolution::Long, DataFunction::Average )
+#  requestList << StatsRequest.new("node5.priv.ovirt.org", DevClass::NIC, 0, NicCounter::Octets_tx, 0, 604800, RRDResolution::Long, DataFunction::Peak )
+#  requestList << StatsRequest.new("node5.priv.ovirt.org", DevClass::NIC, 0, NicCounter::Octets_tx, 0, 604800, RRDResolution::Long)
 #  requestList << StatsRequest.new("node3.priv.ovirt.org", DevClass::Disk, 0, DiskCounter::Octets_read, 0, 0, RRDResolution::Long )
 #  requestList << StatsRequest.new("node3.priv.ovirt.org", DevClass::Disk, 0, DiskCounter::Octets_write, 0, 0, RRDResolution::Long )
 #  requestList << StatsRequest.new("node3.priv.ovirt.org", "cpu", 0, "idle", 1211688000, 3600, 10 )
-#  requestList << StatsRequest.new("node4.priv.ovirt.org", DevClass::CPU, 0, CpuCounter::Idle, 0, 3600, RRDResolution::Short )
+
+  requestList << StatsRequest.new("node3.priv.ovirt.org", DevClass::CPU, 0, CpuCounter::CalcUsed, 0, 300, RRDResolution::Default, DataFunction::Average )
+  requestList << StatsRequest.new("node3.priv.ovirt.org", DevClass::NIC, 0, NicCounter::Octets_rx, 0, 0, RRDResolution::Default )
+#  requestList << StatsRequest.new("node3.priv.ovirt.org", DevClass::CPU, 0, CpuCounter::Idle, 0, 300, RRDResolution::Default, DataFunction::RollingAverage )
+#  requestList << StatsRequest.new("node3.priv.ovirt.org", DevClass::CPU, 0, CpuCounter::Idle, 0, 300, RRDResolution::Default, DataFunction::Average )
+  requestList << StatsRequest.new("node3.priv.ovirt.org", DevClass::CPU, 0, CpuCounter::CalcUsed, 0, 300, RRDResolution::Default, DataFunction::RollingAverage )
+#  requestList << StatsRequest.new("node4.priv.ovirt.org", DevClass::CPU, 0, CpuCounter::Idle, 0, 3600, RRDResolution::Short, DataFunction::Average )
+#  requestList << StatsRequest.new("node4.priv.ovirt.org", DevClass::CPU, 0, CpuCounter::CalcUsed, 0, 3600, RRDResolution::Short, DataFunction::Min )
 #  requestList << StatsRequest.new("node5.priv.ovirt.org", "cpu", 0, "idle", 1211688000, 3600, 500 )
 #  requestList << StatsRequest.new("node5.priv.ovirt.org", DevClass::Memory, 0, MemCounter::Used, 0, 3600, 10 )
@@ -52,27 +61,28 @@ require 'util/stats/Stats'
 #  puts statsListBig.length
 
 statsListBig.each do |statsList|
-    myNodeName = statsList.get_node?()
-    myDevClass = statsList.get_devClass?()
-    myInstance = statsList.get_instance?()
-    myCounter = statsList.get_counter?()
-    myStatus = statsList.get_status?()
-
-    case myStatus
-    when StatsStatus::E_NOSUCHNODE
-      puts "Can't find data for node " + myNodeName
-    when StatsStatus::E_UNKNOWN
-      puts "Can't find data for requested file path"
-    end
-    if tmp != myNodeName then
-      puts
+  myNodeName = statsList.get_node?()
+  myDevClass = statsList.get_devClass?()
+  myInstance = statsList.get_instance?()
+  myCounter = statsList.get_counter?()
+  myStatus = statsList.get_status?()
+
+  case myStatus
+  when StatsStatus::E_NOSUCHNODE
+    puts "Can't find data for node " + myNodeName
+  when StatsStatus::E_UNKNOWN
+    puts "Can't find data for requested file path"
   end
-    list = statsList.get_data?()
-    list.each do |d|
-      print("\t", myNodeName, "\t", myDevClass, "\t", myInstance, "\t", myCounter, "\t",d.get_value?, "\t",d.get_timestamp?)
+  if tmp != myNodeName then
+    puts
+  end
+  list = statsList.get_data?()
+  list.each do |d|
+    print("\t", myNodeName, "\t", myDevClass, "\t", myInstance, "\t", myCounter, "\t",d.get_value?, "\t",d.get_timestamp?)
+    puts
+  end
   puts
-  end
   tmp = myNodeName
+  print("\tmin_value is: ", statsList.get_min_value?(), "\tmax_value is: ", statsList.get_max_value?())
+  puts
 end
-
-
-- 
1.5.5.2
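[Editorial note: the `_validate_resolution` helper added in this patch can be exercised on its own. Below is a hedged standalone sketch; the `RRDResolution` constant values are stubbed with made-up numbers purely for illustration (the real values live in the stats code and may differ).]

```ruby
# Standalone sketch of the patch's _validate_resolution helper.
# The RRDResolution constants are stubbed here with assumed values;
# the real constants are defined elsewhere in wui/src/app/util/stats.
module RRDResolution
  Short   = 10   # assumed: finest RRA step, in seconds
  Medium  = 70   # assumed
  Long    = 500  # assumed
  Default = 700  # assumed
end

# Snap a requested resolution up to the nearest supported RRA step,
# falling back to the default for anything coarser than Long.
def validate_resolution(resolution)
  if resolution <= RRDResolution::Short
    RRDResolution::Short
  elsif resolution <= RRDResolution::Medium
    RRDResolution::Medium
  elsif resolution <= RRDResolution::Long
    RRDResolution::Long
  else
    RRDResolution::Default
  end
end
```

With the stubbed constants above, a request for a 5-second resolution snaps to Short, 50 snaps to Medium, and 600 falls through to Default.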
Jim Meyering
2008-Aug-07 15:31 UTC
[Ovirt-devel] [PATCH] Adds max/min methods to StatsDataList. Limited cleanup in graph_controller.rb. First stab at stats data retrieval for new graphing approach.
Steve Linabery <slinabery at redhat.com> wrote:
> Sorry for the attachment; haven't set up my esmtp yet.
>
> I didn't clean up a great deal of the existing code in graph_controller because I suspect most of it will be removed. I did edit that loop on total_memory so that it wouldn't make mongrel time out.
>
> Some whitespace cleanup in graph_controller.
>
> Main thing for review is the new method I wrote in graph_controller to retrieve stats in either xml or json (the generation of which is yet to be implemented) for a single host or vm. I want to avoid these big long lists of stats requests to Stats.rb, because a) there's no performance hit AFAICT in breaking them up into smaller requests, and b) the graph (or whatever walks the tree and assembles the data lists for the graph) needs to make more precise requests for data.

Hi Steve,

A fine net change, overall.

It's great to do whitespace clean-up, but please keep that sort of change separate from anything substantial. Otherwise, it's more work for the reviewer to separate the trivially-ignorable whitespace changes from the ones that are significant.

If you run this, it'll make git colorize diff output (it modifies settings in ~/.gitconfig):

    git config --global color.ui auto

Then do "git log -p -1" (but don't pipe it into anything) and you'll see the most recent commit. If it's this one, you should see dark red rectangles marking the offending white space on lines added by that commit.

When I applied that change set via "git am FILE", I saw these warnings:

    $ g am r
    Applying: Adds max/min methods to StatsDataList. Limited cleanup in graph_controller.rb. First stab at stats data retrieval for new graphing approach.
    /home/meyering/work/co/ovirt/.git/rebase-apply/patch:173: space before tab in indent.
            scale.push((counter * tick / 1024).to_s) # divide by 1024 to convert to MB
    /home/meyering/work/co/ovirt/.git/rebase-apply/patch:338: space before tab in indent.
            times.push _time_long_format(time)
    /home/meyering/work/co/ovirt/.git/rebase-apply/patch:345: space before tab in indent.
            time = now - x * 28800
    /home/meyering/work/co/ovirt/.git/rebase-apply/patch:346: space before tab in indent.
            times.push _time_short_format(time)
    /home/meyering/work/co/ovirt/.git/rebase-apply/patch:535: trailing whitespace.
    warning: squelched 8 whitespace errors
    warning: 13 lines add whitespace errors.

-----------------------

Another nit-picky, but important detail: note your long subject line. It seems rude of git to concatenate things like that, but that's the way it works. You can accommodate it by changing the log (git commit --amend) to have a single empty line after the first "summary" line:

    Add max/min methods to StatsDataList.

    Limited cleanup in graph_controller.rb.
    First stab at stats data retrieval for new graphing approach.

[with this next bit, we're getting really minor, but comments/logs matter, too]

Also, note that "Add ..." is recommended over "Adds ..." in all types of documentation (active voice vs passive, direct vs indirect). Another example: I'd change this comment:

    - # returns data for one pool/host/vm, one target
    + # return data for one pool/host/vm, one target

===================================================

On to the more substantial: in Stats.rb, there are four blocks nearly identical to this one:

    if ( final > my_max )
      my_max = final
    end
    if ( final < my_min )
      my_min = final
    end

This is more concise and equivalent:

    my_min = [my_min, final].min
    my_max = [my_max, final].max

Also, I noticed that the pre-existing style is to initialize variables at the top of a block or function. It's better to avoid that style, and instead to place any initialization as near as possible to the first use. For example, if you move the initializations of my_max and my_min down so they're nearer their first uses, readers don't have to worry about whether they're used uninitialized (now the initialization is just before the loop), or whether the values are modified between initialization and whatever use the reader is looking at.

==================

I like your peak_history and _time_short_format changes. Much more readable that way. I.e., when the resulting lines fit in 80 columns, side-by-side diffs are more useful.

==================

In graph_controller.rb:

    + megabyte = 1024
    + totalMemory = @pool.hosts.total_memory
    + tick = megabyte
    + if totalMemory >= 10 * megabyte && totalMemory < 100 * megabyte
    +   tick = 10 * megabyte
    + elsif totalMemory >= 100 * megabyte && totalMemory < 1024 * megabyte
    +   tick = 100 * megabyte
    + else
    +   tick = 1024 * megabyte
    + end
    +
    + counter = 0
    + while counter * tick < totalMemory do
    +   counter += 1 #this gives us one tick mark beyond totalMemory
    +   scale.push((counter * tick / 1024).to_s) # divide by 1024 to convert to MB
    + end

Maybe that while loop test should be "<=", in case totalMemory is exactly 100*1024 or 1024*1024? Otherwise, it looks like there will be no ticks for those two edge cases.
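[Editorial note: the two review suggestions above, the `[a, b].min`/`[a, b].max` idiom and the `<=` loop test, can be combined into a small standalone sketch. The method names `memory_scale_ticks` and `track_min_max` are invented here for illustration; the real patch computes the scale inline in `pool_graph` and tracks min/max inside the fetch loops in Stats.rb.]

```ruby
# Hypothetical standalone version of the patch's memory-scale logic,
# with both review suggestions applied. total_memory is in KB, as in
# the patch (hence megabyte = 1024).
def memory_scale_ticks(total_memory)
  megabyte = 1024

  # Pick a tick size based on how much memory the pool has.
  tick = if total_memory < 10 * megabyte
           megabyte
         elsif total_memory < 100 * megabyte
           10 * megabyte
         elsif total_memory < 1024 * megabyte
           100 * megabyte
         else
           1024 * megabyte
         end

  scale = []
  counter = 0
  # "<=" (rather than "<") keeps the "one tick mark beyond totalMemory"
  # property even when total_memory is an exact multiple of tick.
  while counter * tick <= total_memory
    counter += 1
    scale.push((counter * tick / 1024).to_s) # divide by 1024 to convert to MB
  end
  scale
end

# The min/max tracking from Stats.rb, rewritten with the array idiom
# suggested in the review (initialized to 0, as in the patch).
def track_min_max(values)
  my_min = 0
  my_max = 0
  values.each do |v|
    my_min = [my_min, v].min
    my_max = [my_max, v].max
  end
  [my_min, my_max]
end
```

For example, with exactly 100 MB (100 * 1024 KB) the tick size is 100 MB and the `<=` test yields ticks at 100 and 200 MB, whereas the original `<` test would stop at 100 MB with no tick beyond the total.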