
Suggestions and Recipes for Log Analysis

Please add your tips and tricks to this page. Extraction scripts, Excel macros, etc. are welcome!

Generating Appropriate JMeter Logs

The default data logged during JMeter testing is quite sparse: important request and response fields are stored, but the full response sent back by the server is not. This keeps the test client from being bogged down during load testing.

Passing the following JMeter parameters on the command line generates correspondingly more detailed log entries for the test run.

-Jjmeter.save.saveservice.data_type=true -Jjmeter.save.saveservice.label=true -Jjmeter.save.saveservice.response_code=true -Jjmeter.save.saveservice.response_data=true -Jjmeter.save.saveservice.response_message=true -Jjmeter.save.saveservice.successful=true -Jjmeter.save.saveservice.thread_name=true -Jjmeter.save.saveservice.time=true

The JMeter timestamp_format parameter can be used to specify the format of log entry timestamps. For example:

-Jjmeter.save.saveservice.timestamp_format="yyyy-MM-dd HH:mm:ss"
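
A complete non-GUI invocation might then look like the line below (testplan.jmx and results.jtl are placeholder names; add or drop -J flags as needed):

jmeter -n -t testplan.jmx -l results.jtl -Jjmeter.save.saveservice.response_data=true -Jjmeter.save.saveservice.timestamp_format="yyyy-MM-dd HH:mm:ss"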

As of April 2006, JMeter 2.1.1 must also be run with the following flag to ensure logfile uniformity:

-Dfile_format.testlog=2.0

This forces JMeter to use the old log format, which generates a <sampleResult> element for every log entry. The newer format instead generates <httpSample> elements for HTTP requests and <sampleResult> elements for JDBC sampler entries, which is inconvenient from an automated log processing point of view. Refer to this [http://mail-archives.apache.org/mod_mbox/jakarta-jmeter-user/200604.mbox/%3cE5D178BFB0356B48A48B8330B2903721ED5CA6@syd1exc04.CE.CORP%3e Jmeter-user thread] for more details.

The above settings can also be defined as global properties in the JMeter.properties configuration file.
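
For instance, the equivalent entries in jmeter.properties would look roughly like this; the property names mirror the -J flags above (a sketch of the relevant lines only, not a complete properties file):

jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.timestamp_format=yyyy-MM-dd HH:mm:ss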

The JMeter Log Format

The format of JMeter log entries generated when using the flags in the previous section is defined in the table below. (Editing note: 4 spaces are used to denote one level of XML 'indentation'.)

||XML Element||Explanation||
||{{{/testResults}}}||Root element for the XML test log||
||{{{    @version}}}||Version of the test results. Currently (JMeter 2.1.1), set to "1.1" irrespective of the testlog format flag.||
||{{{    /sampleResult/...}}}||All log data is stored under an array of 'sampleResult' elements.||
||{{{        @timeStamp}}}||Timestamp of the sample||
||{{{        @dataType}}}||Data type - typically "text"||
||{{{        @threadName}}}||Name set for the thread group, with " <iteration>-<thread_id>" appended. E.g. "Integration Tests Thread Group 1-1"||
||{{{        @label}}}||Label set for the sampler. E.g. "Login to Custom URL using test account credentials"||
||{{{        @time}}}||Time in milliseconds for the request to complete. E.g. "2515"||
||{{{        @responseMessage}}}||Response message. E.g. "OK"||
||{{{        @responseCode}}}||Response code. E.g. "200"||
||{{{        @success}}}||String indicating the status of the request. Can be "true" or "false"||
||{{{        /sampleResult/...}}}||HTTP redirects are represented as an array of nested 'sampleResult' elements. Only one level of nesting occurs (i.e. the nested subresults do not nest further).||
||{{{        /property}}}||A string containing POST data, query data and cookie data||
||{{{            @xml:space}}}||XML attribute indicating whether white space is significant. Set to "preserve"||
||{{{            @name}}}||Set to "samplerData"||
||{{{        /assertionResult/...}}}||Assertion information is stored in an array of 'assertionResult' elements||
||{{{            @failureMessage}}}||The failure message when the assertion fails||
||{{{            @error}}}||Set to "true" or "false" to indicate an error in the assertion (stays "false" on assertion failure)||
||{{{            @failure}}}||Set to "true" or "false" to indicate whether the assertion failed||
||{{{        /binary}}}||Data returned in the response||
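
To make the table concrete, a flat (non-redirect) entry assembled by hand from the attributes above would look something like this (illustrative only, not verbatim JMeter output):

<sampleResult timeStamp="1158477785863" dataType="text" threadName="Integration Tests Thread Group 1-1" label="Login to Custom URL using test account credentials" time="2515" responseMessage="OK" responseCode="200" success="true"/>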

Perl Based Method for Log Extraction

Here are some steps to import and process JMeter log data in Excel. Steps #3 and 4 are painstaking and can be automated by Excel macros - however, this is beyond my abilities at the moment:

  1. First, generate a delimited file for import into Excel.

I use Perl to parse the XML logs and generate a delimited file for import into Excel. The heart of the Perl script is the regular expression below; a minimal sketch of the full script simply wraps it in a loop, matching each line of the JMeter logfile against the expression and printing a tab-delimited line:

#!/usr/bin/perl -w
# Usage (hypothetical file names): perl jmeter2tsv.pl jmeter.jtl > jmeter.tsv
while (<>) {
    # Regular expression extracts relevant fields from the log entry.
    next unless /timeStamp="(\d+)".+?threadName="(.*?)".+?label="(.+?)" time="(\d+?)".+?success="(.+?)"/;

    # Data in the regex variables is used to build a suitably-delimited line
    my $timestamp = $1;  # unix timestamp
    my $threadname = $2; # thread label
    my $label = $3;      # operation label
    my $time = $4;       # operation time in milliseconds
    my $success = $5;    # boolean success indicator for this operation

    print join("\t", $timestamp, $threadname, $label, $time, $success), "\n";
}

The complexity of the regular expression is required to parse nested log entries that occur during HTTP redirects. A normal log entry line logs one HTTP 'sampler' operation. For example:

<sampleResult ... time="156" ... />

However, when the operation involved an HTTP redirect (HTTP status code 302), JMeter records the redirects as nested <sampleResult> elements, which still occur on the same line as the log entry:
{{{<sampleResult ... time="2750"> <sampleResult ... time="2468" ... /> <sampleResult time="141" .../> <sampleResult ... time="141" .../> </sampleResult>
}}}

The outermost <sampleResult> element has time = 2750 milliseconds. This is the sum of times of the nested redirect elements. We are only interested in the data contained in the outermost element. Hence the regular expression uses the non-greedy pattern match operator ( .*? or .+?) to ignore the latter <sampleResult> elements.

On the other hand, Excel 2002 can import JMeter XML format logs directly. However, it has problems with entries for HTTP 302 redirects: the nested log entry example above will generate three rows in Excel, with the total time repeated three times, i.e.:

Login	2750
Login	2750
Login	2750 

2. Convert Timestamps to Excel Format

Once the data is in Excel, I convert the timestamp column from JMeter's Unix timestamp format (milliseconds since 1 Jan 1970) to the Excel date serial format (days since the 1900 epoch) using the following formula, where x is the cell containing the timestamp; 25569 is the Excel serial date for 1 Jan 1970. The formula is applied to the entire timestamp column.

For GMT time

=(x/1000+(25569*86400))/86400

For local time (replace t with your current offset from GMT, in hours)

=(x/1000+(t*3600)+(25569*86400))/86400
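
To sanity-check the conversion outside Excel, the same arithmetic can be done with the Python standard library (a small sketch; 1899-12-30 is the effective Excel epoch once the fictitious 29 Feb 1900 is accounted for):

import datetime

ts_ms = 1158477785863                                        # JMeter timestamp in milliseconds
dt = datetime.datetime.utcfromtimestamp(ts_ms / 1000.0)      # GMT
excel_serial = (dt - datetime.datetime(1899, 12, 30)).total_seconds() / 86400.0
print("%s -> Excel serial %.6f" % (dt, excel_serial))        # format the cell as a date in Excel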

3. Now sort rows on the operation name (i.e. JMeter sampler name)

4. Generate suitable reports and graphs manually.

For instance, one can generate a graph of page load times vs. elapsed time for the different operations (e.g. login, add one line to the order, etc.). A separate series is needed in the graph for each operation type used; this can be quite painstaking to add when there is a lot of data.

The graphs generated can be quite useful in visualizing the performance of websites. See the graph below for an example from one such dummy website. The JMeter test that generated the data imposed a steady "normal load" component, but also imposed two short-term, high-volume "surge loads" spaced 10 minutes apart (this was done using a separate JMeter thread group).

attachment:DummyWebsitePerformance.JPG

XSL script to extract Jmeter logs

(Peter Thomas) The JMeter distribution ships with XSL scripts in the "extras" folder, "jmeter-results-report.xsl" and "jmeter-results-detail-report.xsl". However, the HTML they render is very complex, doesn't work well in Firefox and is a bit difficult to grab into Excel.

Here's a simple script that just converts the Jmeter XML into an HTML table and does nothing else. You can do a "select all" on the HTML, and paste into Excel and it works smoothly.

An advantage of the XSL approach is control over how nested "sampleResult" elements are handled when they are encountered. In the script below they are ignored. You can uncomment the XML comment that appears toward the end of the script to pick up the nested "sampleResult"s if you want, but they really get in the way. The primary-level "sampleResult" elements contain the sum of the nested "sampleResult" times, so ignoring them should be OK.

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="html" indent="yes" encoding="US-ASCII" doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN" />

<xsl:template match="/">
  <html>   
    <body>
       <xsl:apply-templates/>
    </body>
  </html>
</xsl:template>

<xsl:template match="testResults">
  <table border="1">
    <tr>
      <th>timeStamp</th>
      <th>dataType</th>
      <th>threadName</th>
      <th>label</th>
      <th>time</th>
      <th>responseMessage</th>
      <th>responseCode</th>
      <th>success</th>
    </tr>    
    <xsl:apply-templates/>
  </table>
</xsl:template>


<xsl:template match="sampleResult">		
  <tr>
    <td><xsl:value-of select="@timeStamp"/></td>
    <td><xsl:value-of select="@dataType"/></td>
    <td><xsl:value-of select="@threadName"/></td>
    <td><xsl:value-of select="@label"/></td>
    <td><xsl:value-of select="@time"/></td>
    <td><xsl:value-of select="@responseMessage"/></td>
    <td><xsl:value-of select="@responseCode"/></td>
    <td><xsl:value-of select="@success"/></td>
  </tr>
  <!--<xsl:apply-templates/>-->
</xsl:template>
    
</xsl:stylesheet>

It should be easy to apply this XSL stylesheet to your Jmeter file. I use a small Java program. Alternatively, you can add an XSL stylesheet processing instruction at the start of your XML file and open it in IE or Firefox directly; an example follows.
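
For the browser route, the standard xml-stylesheet processing instruction is enough. Save the stylesheet next to the log file (the filename jmeter-table.xsl below is just a placeholder) and add this line immediately after the XML declaration of the JTL file:

<?xml-stylesheet type="text/xsl" href="jmeter-table.xsl"?>

Opening the JTL file in the browser should then render the table directly.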

Once within Excel, pivot tables are the way to go.

5. You can also use the following package, which mixes an Excel macro, Java transformation and XSL transformation of the .jtl files. It automatically generates Excel graphs using a macro. The package is available here:
attachment:scripts_jmeter.zip

Testing a High Volume Webservice: Simple Data Writer, Bash scripts & Excel Graphs

JMeter was used to test a new high-availability webservice/integration layer, exposed to a few internal clients (see also [:../UserManual/BuildWSTest:BuildWSTest]). Several instances of JMeter represented the different clients and varying load profiles. The main problems were:
*The default JMeter XML output is too verbose and the files grew enormous, so the Simple Data Writer was used to produce much friendlier CSV results files.
*JMeter's inbuilt Graph function was not sufficient to process the volume of data. An example of the type of graph required follows. The reader will see that the JMeter test plan produced variations in load (normal, high & spike), and the response times were quite well behaved. If the integration layer had been performing poorly, the graph would have shown inconsistent throughput and fluctuating response times. Unix/Linux BASH shell scripts were used extensively for analysis and for preparing data for Excel graphs. [BR] attachment:throughput-graph.png [BR]

Simple Data Writer JTL files

JMeter's [http://jakarta.apache.org/jmeter/usermanual/build-monitor-test-plan.html Simple Data Writer] produces JTL output files which convey the same information as the default XML output, but the CSV format is much denser. Example:

queryBalance.jtl

timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,bytes,grpThreads,allThreads,URL
1158477785863,351,SL_queryBalance,200,,queryBalance 1-1,text,true,87,7,7,http://123.45.67.89:8080/WebService
1158477785953,291,SL_queryBalance,200,,queryBalance 1-4,text,true,87,7,7,http://123.45.67.89:8080/WebService
1158477785883,431,SL_queryBalance,200,,queryBalance 1-2,text,true,87,8,8,http://123.45.67.89:8080/WebService
1158477786013,301,SL_queryBalance,200,,queryBalance 1-5,text,true,87,8,8,http://123.45.67.89:8080/WebService
1158477785883,481,SL_queryBalance,200,,queryBalance 1-3,text,true,87,9,9,http://123.45.67.89:8080/WebService
1158477786113,331,SL_queryBalance,200,,queryBalance 1-6,text,true,87,10,10,http://123.45.67.89:8080/WebService
1158477786183,301,SL_queryBalance,200,,queryBalance 1-7,text,true,87,11,11,http://123.45.67.89:8080/WebService
1158477786404,120,SL_queryBalance,200,,queryBalance 1-10,text,true,87,11,11,http://123.45.67.89:8080/WebService
1158477786334,200,SL_queryBalance,200,,queryBalance 1-9,text,true,87,11,11,http://123.45.67.89:8080/WebService
1158477786254,290,SL_queryBalance,200,,queryBalance 1-8,text,true,87,11,11,http://123.45.67.89:8080/WebService
1158477786474,140,SL_queryBalance,200,,queryBalance 1-11,text,true,87,12,12,http://123.45.67.89:8080/WebService
...

Excel Throughput Graph

Script: attachment:jtlmin.sh.txt [BR] JMeter's output graph is too granular to depict throughput for extended test intervals (anything from 2 to 24 hours), and Excel is constrained to a maximum of 65536 rows, so JTL files of ~100k rows should be summarized into increments of 1 minute (or 2, 5, n minutes depending on requirements). [BR] For each minute: throughput = count of transactions in that minute; response time = average of the 'elapsed' values in that minute. [BR] The script jtlmin.sh summarizes large JTL files into 1-minute increments, producing an OUT file that can be imported into Excel and graphed. The core functionality in jtlmin.sh is this piece of awk code:

    # scan a JTL file for records in a specified interval
    # and return record count & average response time.
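    # lastmin and thismin (the boundaries of the current interval, assumed to be Unix-time
    # seconds to match the strftime call below) are expected to be supplied by the jtlmin.sh
    # wrapper, e.g. via awk -v lastmin=... -v thismin=...; $1 is the timestamp, $2 the elapsed time.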
    BEGIN { 
      avgresponse=0; sumresponse=0; trancount=0; 
    }
    { 
      if(($1 >= lastmin) && ($1 < thismin)) {
        trancount++
        sumresponse += $2
        avgresponse = sumresponse / trancount
      }
    }
    END { 
      printf("%d %d %d %d",lastmin,sumresponse,trancount,avgresponse);
      print " ",strftime("%Y.%b.%d %H:%M",lastmin)
      }

An example session, using jtlmin.sh to process a JTL file, follows. The file produced, queryBalance.jtl.OUT (tab-delimited), can then be used to produce a throughput graph. Response times can also be included on the secondary axis, as in the diagram above. These graphs were very good at showing when the integration layer was slow to respond and when throughput varied from the original JMeter plan.

$ jtlmin.sh
Usage: jtlmin.sh <filename> 
Summarizes JMeter JTL output into 1-minute blocks

$ jtlmin.sh queryBalance.jtl
Processing queryBalance.jtl

$ ls q*
queryBalance.jtl  queryBalance.jtl.OUT

$ head queryBalance.jtl.OUT
/c/jmeter/performance/Myserver/output/queryBalance.jtl
unixtime        date    time    thruput(tpm)    response(ms)
1160354940      2006.Oct.09     13:49   65      0
1160355000      2006.Oct.09     13:50   0       0
1160355060      2006.Oct.09     13:51   0       0
1160355120      2006.Oct.09     13:52   56      0
1160355180      2006.Oct.09     13:53   98      108
1160355240      2006.Oct.09     13:54   84      125
1160355300      2006.Oct.09     13:55   0       0
1160355360      2006.Oct.09     13:56   0       0

Overview of Several Output files

Script: attachment:jtltotals.sh.txt [BR] After a test run, all the JTL output files (20 or so) were gathered together in a bunch of subdirectories. The analysis was conducted on a Windows PC with MinGW/MSYS and a few other tools (msys-dtk, GNU bc, GNU paste, gVim). For an overview of total vs. projected throughput, I used the shell script jtltotals.sh (a bit kludgy, but hey, I'm a tester not a developer!). It collates [total throughput, start time, end time, time elapsed, average response time] for each output file and produces a comma-delimited file 'jtl-file-totals.txt'. A sample of the output is shown below.

jtl-file-totals.txt

JMeter-Output-file,total-throughput,start,end,elapsed-sec,elapsed-hms,response-av
WebGUI/output.1/queryFCNs.jtl,33,20061103.105342 local,20061103.105830 local,288,00:04:48,225.59
WebGUI/output.1/queryPackages.jtl,55,20061103.105342 local,20061103.105555 local,133,00:02:13,234.06
WebGUI/output.2/queryFCNs.jtl,42,20061103.113435 local,20061103.114155 local,440,00:07:20,212.12
WebGUI/output.2/queryPackages.jtl,59,20061103.113435 local,20061103.113737 local,182,00:03:02,238.78
WebGUI/output.3/queryPackages.jtl,272,20061103.121135 local,20061103.122042 local,547,00:09:07,260.03
Myserver/output/applyDebit.jtl,22219,20060912.154822 local,20060912.162945 local,2483,00:41:23,1265.12
Myserver/output/queryBalance.jtl,360,20061009.134916 local,20061009.150914 local,4798,01:19:58,96.31
total,23040,,,,,
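
For anyone without the Bash toolchain, roughly the same per-file totals can be pulled together with a few lines of Python (a sketch, not the attached jtltotals.sh; it assumes the Simple Data Writer CSV layout shown above, with timeStamp and elapsed as the first two columns, and it leaves out the elapsed-hms column for brevity):

import csv, sys, time

# Usage (hypothetical): python jtltotals.py WebGUI/output.1/*.jtl
print("JMeter-Output-file,total-throughput,start,end,elapsed-sec,response-av")
for path in sys.argv[1:]:
    stamps, elapsed = [], []
    for row in csv.reader(open(path)):
        if not row or not row[0].isdigit():
            continue                       # skip the header line
        stamps.append(int(row[0]) / 1000)  # timeStamp column, ms -> s
        elapsed.append(int(row[1]))        # elapsed column (ms)
    if not stamps:
        continue
    fmt = lambda t: time.strftime("%Y%m%d.%H%M%S", time.localtime(t))
    print("%s,%d,%s,%s,%d,%.2f" % (path, len(stamps), fmt(min(stamps)), fmt(max(stamps)),
          max(stamps) - min(stamps), sum(elapsed) / float(len(elapsed))))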

Conversion of JMeter timestamps

Script: attachment:utime2ymd.txt [BR] The first field of a JTL output file is a Unix timestamp extended to milliseconds. The above script jtltotals.sh calls another script, utime2ymd, to convert start & end times into year-month-day.hour-min-sec format (yyyymmdd.HHMMss). Usually the JTL timestamps need to be adjusted for your local timezone (e.g. GMT plus or minus a few hours). The utime2ymd script uses the local timezone by default, but can also provide GMT values, which is useful for converting x-thousand elapsed seconds into hhmmss. Example of usage:

$ utime2ymd
Usage: utime2ymd <timestamp> [local|gmt] 

Convert 10-digit Unix timestamp to yyyymmdd.hhmmss format 
 use local time zone (default) or UTC/GMT

$ utime2ymd  1158477785863 
20060917.192305 local

$ utime2ymd 3601 gmt
19700101.010001 gmt
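
The same conversion can be done in Python if the script isn't handy (standard library only; note the JTL value has to be cut back from milliseconds to seconds first):

import datetime
ts = 1158477785863 / 1000                                                 # JTL ms -> seconds
print(datetime.datetime.fromtimestamp(ts).strftime("%Y%m%d.%H%M%S"))      # local time zone
print(datetime.datetime.utcfromtimestamp(ts).strftime("%Y%m%d.%H%M%S"))   # GMT/UTC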

Extract from JMeter Test Plan (JMX file)

Script: attachment:jmxparse.sh.txt [BR] Another possibly useful tool, which gives a text-based summary of what's in your JMeter JMX test plan. It mainly uses grep and sed.

$ jmxparse.sh STEPTEST_Myserver.jmx
  TestPlan.enabled=true
  clientID   Myserver
  wsdl   http://12.34.56.78:8080/webservice.wsdl
  hostIP   12.34.56.78
  hostPort   8080
  serialFile   C:\jmeter\prioritisation\serial.txt
  rampupInterval   3600
  rampdownInterval   3600
  spikeInterval   300
testname= applyCredit  enabled=false
  num_threads   2
  rampup5_1   9
  rampup5_2   34
  rampdown5_1   34
  rampdown5_2   9
  spikeLoad5   33
testname= queryBalance  enabled=true
  num_threads   14
  rampup2_1   125
  rampup2_2   461
  rampdown2_1   461
  rampdown2_2   125
  spikeLoad2   497

I tried to extract this information using XPath but it's not really designed to report on numerous pieces of data. Anyway, here's the XPath expression, just FYI.

//@testname|
//elementProp/@name|
//elementProp/stringProp[@name="Argument.value"]/text()|
//ThreadGroup/@enabled|
//stringProp[@name="RunTime.seconds"]/text()|
//stringProp[@name="throughput"]/text()|
//stringProp[@name="filename"]/text()

About the Bash scripts

I used [http://www.mingw.org/ MinGW] with the familiar *nix text processing tools; here's a list
*[http://prdownloads.sourceforge.net/mingw/MinGW-3.1.0-1.exe?download MinGW], [http://prdownloads.sourceforge.net/mingw/MSYS-1.0.10.exe?download MSYS], [http://prdownloads.sourceforge.net/mingw/msysDTK-1.0.1.exe?download MSys-DTK], [http://sourceforge.net/project/showfiles.php?group_id=23617&package_id=26968 Gnu-bc], [http://prdownloads.sourceforge.net/mingw/join_paste_textutils-2.1-MSYS.tar.bz2?download paste], [http://www.bzip.org/1.0.3/bzip2-103-x86-linux26 bzip2], [ftp://ftp.vim.org/pub/vim/pc/gvim70.exe gVim 7.0]
*Windows alternatives [http://www.cygwin.com Cygwin], [http://www.microsoft.com/technet/interopmigration/unix/sfu/default.mspx Windows SFU], or linux on a Virtual PC
*Or use a Linux/Unix OS exclusively to run the above scripts.

Java Class to Quickly Summarize JMeter Results

I used the JMeter Ant task, [http://www.programmerplanet.org/pages/projects/jmeter-ant-task.php], to produce very large output files. I wrote this Java class to summarize that information. I hope you find it useful. - Andy G.

Java Class: attachment:JMeterSummary.java

Sample Output:

All Urls:
cnt: 333, avg t: 535 ms, max t: 30755 ms, min t: 10 ms, result codes: {200=291, 302=42}, failures: 0, cnt by time: [0.0 s - 0.5 s = 312, 0.5 s - 1.0 s = 16, 30.5 s - 31.0 s = 5]
avg conn: 17 ms, max conn: 120 ms, min conn: 0 ms, elapsed seconds: 71 s, cnt per second: 5

URL: /
cnt: 27, avg t: 44 ms, max t: 90 ms, min t: 20 ms, result codes: {200=27}, failures: 0, cnt by time: [0.0 s - 0.5 s = 27]

URL: /home.html
cnt: 30, avg t: 173 ms, max t: 581 ms, min t: 70 ms, result codes: {200=30}, failures: 0, cnt by time: [0.0 s - 0.5 s = 27, 0.5 s - 1.0 s = 3]

URL: /news/
cnt: 9, avg t: 93 ms, max t: 120 ms, min t: 70 ms, result codes: {200=9}, failures: 0, cnt by time: [0.0 s - 0.5 s = 9]

URL: /signout.jsp
cnt: 21, avg t: 30 ms, max t: 60 ms, min t: 20 ms, result codes: {302=21}, failures: 0, cnt by time: [0.0 s - 0.5 s = 21]
...

||cnt||number of requests||
||avg t||average time per request||
||max t||longest request||
||min t||shortest request||
||result codes||HTTP result code and the number of times it was received||
||failures||number of failures||
||cnt by time||a breakdown of how many requests returned in the specified time range||
||avg conn||average time doing connection overhead (time ms - latency ms = conn ms)||
||max conn||max connection overhead||
||min conn||min connection overhead||
||elapsed seconds||total elapsed time of the test (last time stamp - first time stamp)||
||cnt per second||throughput (number of requests / elapsed seconds)||

Based on this sort of input:

<?xml version="1.0" encoding="UTF-8"?>
<testResults version="1.2">
<httpSample t="511" lt="461" ts="1184177284608" s="true" lb="http://www.website.com/home.html" rc="200" rm="OK" tn="Thread Group 1-1" dt="text"/>
<httpSample t="581" lt="481" ts="1184177284718" s="true" lb="http://www.website.com/home.html" rc="200" rm="OK" tn="Thread Group 1-1" dt="text"/>
...
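
For a flavour of the arithmetic involved (this is a minimal Python sketch, not the attached JMeterSummary.java), the per-request numbers can be pulled straight from the attributes shown in the sample above, where t is the elapsed time in ms, lt the latency in ms and s the success flag:

import re, sys

samples = []
for line in open(sys.argv[1]):                      # e.g. python summary.py results.jtl
    m = re.search(r'<httpSample (.*?)/?>', line)
    if m:
        samples.append(dict(re.findall(r'(\w+)="(.*?)"', m.group(1))))

times = [int(s['t']) for s in samples]
conns = [int(s['t']) - int(s['lt']) for s in samples]   # connection overhead = t - lt
fails = sum(1 for s in samples if s['s'] != 'true')
print("cnt: %d, avg t: %d ms, max t: %d ms, min t: %d ms, failures: %d, avg conn: %d ms"
      % (len(times), sum(times) / len(times), max(times), min(times), fails,
         sum(conns) / len(conns)))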

Extracting JTL files to CSV with Python (JMeter 2.3.x)

This script does two things: first it filters the lines of the JTL file against a regular expression, then it extracts the attributes from each matching line and writes them out as a CSV file. It also converts the timestamp to a readable format.

The script only works with JTL files from JMeter 2.3.x (JTL version 2.1). Please see http://jakarta.apache.org/jmeter/usermanual/listeners.html#xmlformat2.1 for details.

Usage is:
program.py <JTL input file> <CSV output file> "<regular expression>"

#!/usr/bin/python

"""
Description : Split JTL file into a comma delimited CSV
by          : Oliver Erlewein (c)2008
Date        : 04.02.2008
Lang        : Python 2.4+

JMeter JTL field contents:

Attribute & Content
by      Bytes
de      Data encoding
dt      Data type
ec      Error count (0 or 1, unless multiple samples are aggregated)
hn      Hostname where the sample was generated
lb      Label
lt      Latency = time to initial response (milliseconds) - not all samplers support this
na      Number of active threads for all thread groups
ng      Number of active threads in this group
rc      Response Code (e.g. 200)
rm      Response Message (e.g. OK)
s       Success flag (true/false)
sc      Sample count (1, unless multiple samples are aggregated)
t       Elapsed time (milliseconds)
tn      Thread Name
ts      timeStamp (milliseconds since midnight Jan 1, 1970 UTC)
"""

import sys
import string
import re
import datetime
import time

startTime = time.time()
cnt                     = 0
cnt2                    = 0
failCnt                 = 0
reCompile = re.compile("\s([^\s]*?)=\"(.*?)\"")
delimiterCharacterOut = ","

def writeCSVLine(line):
    x = reCompile.findall(line)
    a = dict((row[0], row[1]) for row in x)
    try:
        a['ts'] = str(int(int(a['ts'])/1000))
        x = str(datetime.datetime.fromtimestamp(float(a['ts'])))[0:19]
        b = a['ts'] + ",\"" + x + "\"," + a['t'] + "," + a['lt'] + ",\"" + a['s'] + "\",\"" + a['lb'] + "\"," + a['rc'] + ",\"" + a['rm'] + "\",\"" + a['tn'] + "\",\"" + a['dt'] + "\"," + a['by'] + "\n"
    except:
        return -1
    o.write(b)
    return 1

print "Splitting JTL file"

try:
    runArgv             = sys.argv                      # Save the command line
    jtlInfile           = str(sys.argv[1])              # Name of JTL input file
    cvsOutfile          = str(sys.argv[2])              # Name of CSV output file
    reFilter            = str(sys.argv[3])              # Regular expression used to filter lines (e.g. by label)
except:
    print "Error: Input format: <input file> <output file> <Filter by regular expression>"
    raise

try:
    f = open(jtlInfile, "r")
    o = open(cvsOutfile, "w")
except:
    raise

print "Filtering on regular expression : " + reFilter 
cmpFilter = re.compile(reFilter)

for line in f:
    try:
        if cmpFilter.search(line):
           returnVal = writeCSVLine(line)
           if returnVal < 0:
              failCnt += 1
           else:
              cnt2 += 1
    except:
        print 'Error in line : ', cnt, line
        raise

    cnt += 1

endTime = time.time()
print "Time taken                    : ", str(endTime-startTime)
print "Lines processed              : ", cnt
print "Lines that passed the filter : ", cnt2
print "Lines skipped (error?)       : ", failCnt

f.close()
o.close()

Generating charts with Perl

(Christoph Meissner) When it comes to exploiting JMeter logfiles generated by many clients over a long testing period, putting all the data together into charts can become time-consuming. To reduce the effort I wrote a Perl script that parses any number of JMeter logs and aggregates the data into several different charts (examples follow below):

||chart type||comment||
||stacked chart abs||cusps for aggregated response times per measure point||
||stacked chart rel||cusps for aggregated response times per measure point, expressed as percentages||
||entire user||aggregated response times opposed to the number of active threads||
||entire throughput||aggregated response times opposed to throughput||
||<measure point> user||aggregated response times for this measure point opposed to the number of active threads||
||<measure point> throughput||aggregated response times for this measure point opposed to throughput||

Here are exemplary charts showing a test against an address validation tool stressed with 25 active threads (a very small test (wink) ):

Stacked chart showing cusps of response times

Measure points are shown at the x-axis.

attachment:cm_ChartStacked.png

Stacked chart showing response times in %

attachment:cm_ChartStackedPct.png

Chart opposing total response times to total threads

x-axis shows the timestamps of the test, left y-axis holds response times, right y-axis holds number of active threads (it is also possible to display a 1 standard deviation line for the response times (see script below)).

attachment:cm_entire_users.png

Chart opposing total response times to total throughput

x-axis shows the timestamps of the test, left y-axis holds response times, right y-axis holds throughput (it is also possible to display a 1 standard deviation line for the response times (see script below))

attachment:cm_entire_throughput.png

Chart opposing response times of a single measure point to active threads

attachment:cm_AdressService_users.png

Chart opposing response times of a single measure point to throughput

attachment:cm_AdressService_throughput.png

Configure JMeter's test plan

To make JMeter generate appropriate log files, you will have to configure your test plan properly.

One task is to name your requests (the samplers) in a way that allows the script to aggregate response times, active threads and throughput into measure points. You will also have to tell JMeter to log particular data.

For both requirements, please look at the example screenshot:

attachment:cm_jmeter.png

  1. Insert an 'Aggregate Report' Listener into your test plan (top level).
  2. Enter a filename where to save the results.
  3. Invoke 'Configure' and check the boxes as shown.
  4. Change the names of your requests.

For instance, in my example test plan all requests for images were named 'GIF' and all requests for JavaScript files were named 'JS'. Special points of interest also get their own names (e.g. 'Login', 'Address Search', ...).

This step is necessary because the script aggregates data under labels of the same name. This way you make sure you get a chart showing the response times for all images, all 'Login' requests or all 'Address Search' requests.

The script

The script attachment:jmetergraph.pl requires the Perl Chart module, version 2.4.1, to be installed (http://search.cpan.org/~chartgrp/Chart-2.4.1/).

You can call the script in this way:

perl jmetergraph.pl [-stddev] [-range] <jtl file 1> [ ... <jtl file n>]

If you pass -stddev, the graphs will also show the standard deviation of the response times.

attachment:cm_entire_users_stddev.png

If you pass -range, a bar chart will be generated showing the average range of the response times.

attachment:cm_entire_users_range.png
