HammerDB CLI 101

This post gives anyone starting out with HammerDB a guide to using the CLI, the command line interface for text-based environments. As the workflow in the CLI and the GUI is the same, we will show equivalent commands side by side to help you quickly get up to speed on using the CLI in both interactive and scripted scenarios.

Help and Navigation

To begin with, run the hammerdbcli command on Linux or hammerdbcli.bat on Windows and type help at the hammerdb prompt.

help command

This displays the available CLI commands, with “help command” providing detailed information about a command and the arguments it requires.

To navigate and edit at the CLI use the standard Ctrl commands as follows:

Ctrl Command   Action
Ctrl-P         Move to previous command
Ctrl-N         Move to next command
Ctrl-F         Move cursor forward
Ctrl-B         Move cursor backward
Ctrl-A         Move cursor to the start
Ctrl-E         Move cursor to the end
Ctrl-G         Clear line
Ctrl-K         Cut
Ctrl-Y         Paste
Ctrl-H         Backspace

Librarycheck

One of the first things you will want to do is make sure that HammerDB can access the 3rd party driver libraries for the database that you want to use. This is done with the librarycheck command. In this example we are using SQL Server, so the message shows that everything is in order and we can proceed with running tests. If the library fails to load, consult the HammerDB documentation on installing and configuring your libraries with the PATH environment variable on Windows or the LD_LIBRARY_PATH environment variable on Linux.

librarycheck

Selecting a database

Select a database

The next thing you will want to do is to select your preferred database. In the GUI we can select from the menu or double-click the database heading.

This will show the benchmark options dialog.

Benchmark Options

In the CLI this corresponds to the dbset command, with the database set using the db argument according to the prefix in the XML configuration: ora, mssqls, db2, mysql and pg for Oracle, Microsoft SQL Server, IBM Db2, MySQL and PostgreSQL respectively.

dbset db command

and the benchmark set with the bm argument. You can use either the TPROC or TPC terminology at the CLI.

dbset bm command
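For example, a minimal sketch of these two commands for SQL Server looks as follows (either form of the benchmark name is accepted):

dbset db mssqls
dbset bm TPROC-C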

Building the Schema

Expanding the GUI menu presents the workflow with our first task of building the schema.

GUI Workflow

Selecting schema build and options presents the schema build options dialog. In the example below we have modified the SQL Server name, the number of warehouses to build and the virtual users to build them.

Build Options

In the CLI the print dict command shows us the available options.

print dict

These can be modified with the diset command, specifying the option and the value to be changed. The example below shows the same settings made in the GUI. We have set the connection value of mssqls_server and then the tpcc values of the warehouse count and the number of virtual users to build them. Note that the mssqls_server value contains the backslash special character and therefore the entered value is wrapped with curly brackets {…} to preserve the special character.

diset command
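As a sketch, the equivalent commands look as follows; the parameter names follow the mssqls prefix shown by print dict, and the server name and counts are illustrative values only:

diset connection mssqls_server {(local)\SQLDEVELOP}
diset tpcc mssqls_count_ware 20
diset tpcc mssqls_num_vu 4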

In the GUI clicking on Build presents the build dialog. Clicking yes will start the schema build.

Create Schema Dialog

In the CLI buildschema shows the same prompt and accepts automatically.

buildschema command

A key aspect is being able to visualise the multithreaded nature of the Virtual Users. In the GUI the Virtual User output is shown in a grid and the status in a table, making it easy to see the multithreaded nature of the workload. In the CLI all output is printed to the console, preceded by the name of the Virtual User producing it. Nevertheless the CLI is multithreaded in the same way as the GUI. For both, the time it takes for the build to complete will depend on the HammerDB client CPU and the performance of the database server being loaded; during this time each action will be printed to the display. The build may need a number of minutes to complete.

Build in progress

When the build is complete Virtual User 1 will show TPCC SCHEMA COMPLETE.  The schema build is the same process whether built from the GUI or the CLI.

Build Complete

Using the vustatus command we can now see the status of the Virtual Users as having completed successfully. Note that as the CLI is running in interactive mode the vustatus command can also be run while a workload is running: press return for a prompt and then type the command needed. vudestroy performs the equivalent of pressing the stop button in the GUI to close the Virtual Users down; running it while a workload is in progress stops the workload just as the stop button does in the GUI.

vustatus
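As a sketch of the interactive sequence (output omitted):

vustatus    ;# report the status of each Virtual User
vudestroy   ;# close the Virtual Users down, as the GUI stop button does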

Loading the Driver and running the test

We have now built a schema. The next step in the workflow is to define the driver script options.  In the GUI we are presented with an options dialog to set the configuration.  In the example we have again set the server name, have chosen the timed workload and also selected the Use All Warehouses option.

GUI Driver Options

To do the same in the CLI we again use the diset command.

diset Driver Options

Once the options have been chosen the driver script is loaded automatically in the GUI or can be re-loaded with the Load command.

GUI Driver Script

The loadscript command does the same at the CLI, with the print script command showing the script loaded. Note that the driver script is identical in both the GUI and CLI (as long as you have chosen the same options), meaning that the workload that is run is also identical regardless of the interface chosen. You can also load a modified script using the customscript command, meaning that you can edit a script in the HammerDB GUI, save it and then load it to run in the CLI.

loadscript
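A sketch of the commands described above, where mytest.tcl is a hypothetical file name for a script saved from the GUI:

loadscript
print script
customscript mytest.tcl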

Referring back to the GUI for our workflow, the next step is the creation of Virtual Users to run the loaded driver script.

GUI Virtual User Options

In the CLI the vuset command sets the Virtual User options and the print vuconf command displays the settings.

CLI Virtual User Options
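For example, with illustrative values:

vuset vu 2
vuset logtotemp 1
print vuconf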

With the Virtual User configuration set the next stage is to create the Virtual Users.  Having chosen the timed workload we see a monitor Virtual User in addition to the active Virtual Users chosen.

GUI Create Virtual Users

In the CLI the vucreate command creates the Virtual Users and the vustatus command reports the status shown in the status column of the Virtual User table in the GUI.

vucreate

In the GUI we would then run the Virtual Users; in the CLI the workload is started with the vurun command.

vurun
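Putting these last steps together at the prompt looks as follows:

vucreate   ;# create the Virtual Users
vurun      ;# run the loaded driver script
vustatus   ;# check progress while the workload runs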

When the workload is complete we see the TEST RESULT output and the status of the Virtual Users. The vudestroy command will close down the Virtual Users in the same way as pressing the red stop button in the GUI.

TEST RESULT

Additional CLI Functionality

At this stage we have followed the GUI workflow to use the CLI to create the schema and run the TPROC-C workload with a number of Virtual Users. It is of note that much of the additional GUI functionality is also available through CLI commands; for example, primary and replica instances can be created and connected in the CLI, and as shown the CLI also provides a transaction counter.

CLI Transaction Counter

Scripting the workloads

We have seen how to run the HammerDB CLI interactively, typing commands in a similar manner to how we would use the GUI to build schemas and run workloads. However one major benefit of command line use is the ability to script workloads. There are two approaches to scripting HammerDB CLI commands. Firstly we can run a script from the interactive prompt using the source command. Secondly we can use the auto argument to run a script directly without the interactive prompt.

To run a script using the source command we can take a text editor and enter the commands into a file with a .tcl extension. In this example we are running a timed workload with 2 active virtual users, logging the output and also running the transaction counter with logged output. Note one additional command has been added to what has been seen previously when running interactively, namely runtimer. This is where having run the same workload in the GUI helps in understanding the concepts. HammerDB is multithreaded and the Virtual Users run independently as operating system threads. Consequently if vurun is followed by vudestroy in a script then the Virtual Users will be terminated by the main thread as soon as they are started, which is unlikely to be the desired effect. Therefore runtimer keeps the main HammerDB thread busy and will not continue to the next command until one of two things happens: either the vucomplete command returns true or the seconds value is reached. For this reason the runtimer seconds value should exceed both the rampup and duration time. Then, only when the Virtual Users have completed the workload, will vudestroy be run.

sqlrun.tcl
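As a sketch, a script along these lines implements the sequence described above. The server name, rampup, duration and runtimer values are illustrative, and tcset/tcstart/tcstop control the transaction counter:

dbset db mssqls
diset connection mssqls_server {(local)\SQLDEVELOP}
diset tpcc mssqls_driver timed
diset tpcc mssqls_rampup 2
diset tpcc mssqls_duration 5
vuset logtotemp 1
loadscript
vuset vu 2
vucreate
tcset logtotemp 1
tcstart
vurun
runtimer 500
vudestroy
tcstop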

Now we can run the source command giving our script as an argument and the commands will be run without further interaction.

source sqlrun.tcl

We can see how the script ran to completion, called vudestroy and returned us to the interactive prompt. If desired the quit command returns from the interactive prompt to the shell prompt.

ALL VIRTUAL USERS COMPLETE

The HammerDB CLI is not restricted to the commands shown in the help menu. The CLI supports the full syntax of the Tcl language, meaning you can build more complex workloads.

There is an introductory tutorial to Tcl here. We also recommend the following book as a comprehensive reference to Tcl: The Tcl Programming Language by Ashok P. Nadkarni.

A simple example is shown using the foreach command to implement the autopilot feature from the GUI.

foreach loop
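A sketch of such a loop, with an illustrative runtimer value, might look like this:

foreach z { 1 2 4 } {
    puts "$z VU TEST"
    vuset vu $z
    vucreate
    vurun
    runtimer 700
    vudestroy
    after 5000   ;# brief pause between iterations
}
puts "TEST SEQUENCE COMPLETE"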

Running this script we are now executing a loop of tests with 1 then 2 then 4 Active Virtual Users in an unattended manner.

Unattended CLI Test

When the final iteration in the loop is complete the CLI returns to the prompt.

TEST SEQUENCE COMPLETE

and the log file provides a summary of all of the workloads.

Vuser 1:1 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 33643 NOPM from 77186 SQL Server TPM
...
Vuser 1:2 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 66125 NOPM from 152214 SQL Server TPM
...
Vuser 1:4 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 106080 NOPM from 243964 SQL Server TPM
...

Running the CLI from the OS Shell

Of course you can use Tcl scripting to configure complex build and test scenarios including the execution of host commands. Another way is to run multiple CLI scripts from the OS shell, such as Bash on Linux and PowerShell on Windows. The following example shows a build and test script for SQL Server. Note the additional waittocomplete command in the build script. This command is a subset of runtimer and causes the CLI to wait indefinitely until all of the Virtual Users return a complete status, at which point the CLI will return. In this case it is followed by a quit command to exit the CLI. These scripts are called sqlbuild2.tcl and sqlrun2.tcl respectively.

sqlbuild2.tcl
sqlrun2.tcl
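A sketch of the build script shows where waittocomplete and quit fit (connection values and warehouse counts are illustrative; the run script is the earlier sqlrun.tcl example with quit appended):

dbset db mssqls
diset connection mssqls_server {(local)\SQLDEVELOP}
diset tpcc mssqls_count_ware 20
diset tpcc mssqls_num_vu 4
buildschema
waittocomplete
quit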

In Windows we can now write a PowerShell script called buildrun2.ps1 that calls the build and run scripts in turn; in Linux we would do the same with a Bash script. In this case we use the auto argument to run each script in non-interactive mode, for example hammerdbcli auto sqlbuild2.tcl. These commands can be interspersed with other operating system or database commands at the shell level. In this case we have only written to the output, however any additional database configuration commands can be used to build a complex test scenario.

buildrun2.ps1

Starting PowerShell, we change to the HammerDB directory and run the PowerShell script. (You should always run the scripts after changing to the HammerDB directory rather than running them from another directory). This starts the build script to build the schema.

Once the build is complete the build script exits and PowerShell follows it by running the driver script.

After the test is complete PowerShell exits and returns to the command prompt.

Summary

Once you have an overview of the HammerDB workflow by following the GUI menu driven system, using the CLI should be straightforward as it uses the same approach. The key concept is to understand that both the GUI and the CLI are multithreaded and that the Virtual Users run entirely independently as operating system threads; you interact with the Virtual Users by interacting with the main interface thread, which passes messages to them. This means that if you exit too early from the main interface in either the GUI or the CLI the entire workload will be stopped.

Once you are familiar with how the CLI works it is then not difficult to adapt this understanding to build complex automated workflows using Tcl scripting, shell scripting or a combination of both.

 

What programming languages does HammerDB use and why does it matter?

HammerDB is a load testing and benchmarking application for relational databases. All the databases that HammerDB tests implement a form of MVCC (multi-version concurrency control). This helps to minimise locking, allowing multiple sessions to access the same data at the same time. On high-performance multi-core systems all the supported databases can return performance in the many millions of transactions per minute. However, it is crucial that the benchmarking application does not have inherent bottlenecks that artificially limit the scalability of the database. This is why the choice of programming language is so important from the outset.

This post explains the language decisions HammerDB made to make it the best performing and most usable database benchmarking software.

Basic Benchmarking Concepts

As we have seen, databases are designed to handle multiple database sessions at the same time. To benchmark a database we introduce the concept of a Virtual User. The benchmarking software simulates the actions of multiple individual users, and these users must run in parallel to test the MVCC (Multiversion Concurrency Control) capabilities of the database. There is a key distinction here between parallelism and concurrency. It is important that the concurrency between sessions is handled at the database, not at the client, because that is how databases are accessed in the real world. When we have multiple CPU cores on both the benchmark client and the database server it is crucial that these database sessions run independently of each other at the same time, in parallel. For simplicity, we do not include networking or transaction management middleware in this discussion because, although important in the real world, they do not affect the key concepts.

Database benchmarking in parallel

SQL

Firstly, for a database benchmarking application it should not come as a huge surprise that the key language used for testing databases is Structured Query Language, known as SQL. For HammerDB both TPROC-C and TPROC-H run all of their workloads on the database being tested in SQL. The following is an example from TPROC-C for SQL Server.

SELECT @st_o_id = district.d_next_o_id
FROM dbo.district
WHERE district.d_w_id = @st_w_id AND district.d_id = @st_d_id

SELECT @stock_count = count_big(DISTINCT stock.s_i_id)
FROM dbo.order_line
, dbo.stock
WHERE order_line.ol_w_id = @st_w_id
AND order_line.ol_d_id = @st_d_id
AND (order_line.ol_o_id < @st_o_id) AND order_line.ol_o_id >= (@st_o_id - 20)
AND stock.s_w_id = @st_w_id
AND stock.s_i_id = order_line.ol_i_id
AND stock.s_quantity < @threshold
OPTION (LOOP JOIN, MAXDOP 1) 

and the following from TPROC-H

select top 100 
s_acctbal, s_name, n_name, p_partkey, p_mfgr, s_address, s_phone, s_comment 
from part, supplier, partsupp, nation, region 
where p_partkey = ps_partkey 
and s_suppkey = ps_suppkey 
and p_size = 47 
and p_type like '%COPPER' 
and s_nationkey = n_nationkey 
and n_regionkey = r_regionkey 
and r_name = 'EUROPE' 
and ps_supplycost = ( 
select min(ps_supplycost) 
from partsupp, supplier, nation, region 
where p_partkey = ps_partkey 
and s_suppkey = ps_suppkey 
and s_nationkey = n_nationkey 
and n_regionkey = r_regionkey 
and r_name = 'EUROPE'
) 
order by s_acctbal desc, n_name, s_name, p_partkey option (maxdop 2)

Application Logic in Stored Procedures

So the interaction with the database is in SQL. For the TPROC-H workload this is all we need: the queries are long-running analytics queries, so once executed on the database they do not need to wait for the benchmarking client. TPROC-C however is derived from the TPC-C specification and requires application logic around the SQL. HammerDB implements the TPROC-C application logic in the form of stored procedures for all the supported databases.

Database      Application Logic
Oracle        PL/SQL
SQL Server    T-SQL
Db2           SQL PL
PostgreSQL    PL/pgSQL
MySQL         stored program language

HammerDB Stored Procedures

So now our TPROC-C example from the Stock Level stored procedure on SQL Server begins as follows.

CREATE PROCEDURE [dbo].[slev]  
 @st_w_id int,
 @st_d_id int,
 @threshold int
 AS 
 BEGIN
 DECLARE
 @st_o_id int, 
 @stock_count int 
 BEGIN TRANSACTION
 BEGIN TRY
 SELECT @st_o_id = district.d_next_o_id 
 FROM dbo.district 
 WHERE district.d_w_id = @st_w_id AND district.d_id = @st_d_id
 ....

Why does it matter that we implement the application logic in the form of stored procedures? To prevent the roundtrip between the client and the database becoming the bottleneck. As an illustration, if we compare HammerDB with a sysbench workload, both running on the same system against the same MySQL database, we can observe that both workloads are driving 75-80% of the CPU on database throughput, however sysbench is utilising 20% system time compared to 3% for HammerDB. (The HammerDB workload shows that database locking prevents full CPU utilisation on the MySQL server).

Why is this happening? sysbench has the application logic in the client and is therefore spending 6-7X the time on socket communication, because every single SQL statement requires a separate round trip to the client (recvfrom and sendto).

Top 10 Processes
Top Files

However, having the application logic in the client is even worse because you are sacrificing the key database efficiencies of prepared statements. Of course, you can prepare individual statements, however you see major efficiency gains by using natively compiled stored procedures. With HammerDB the stored procedures are compiled (where supported by the database) at the time of schema creation by the HammerDB build. When the HammerDB driver is run we parse the statement to call the procedure once (and only once) per session. The example below is for the NEW ORDER stored procedure for the Oracle database.

"BEGIN neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,'YYYYMMDDHH24MISS')); END;"

We therefore don’t use CPU repeatedly parsing any of the SQL used for the workload. Instead, we are using bind variables: each time we call the stored procedure we bind the INPUT variables, execute the stored procedure and fetch the OUTPUT variables. The driver only needs to generate strings that correspond to the bind variables, and therefore not only is it doing much more work per roundtrip, it is sending and fetching a lot less data as well.

Database interfaces in C

So our application logic is in SQL and stored procedures and we are preparing once, then binding and executing statements multiple times. So let's take a look at an extract of how this is done in HammerDB for Db2 and the language used.

int Db2_bind_exec (ClientData cData, Tcl_Interp * interp, int argc,
CONST84 char *argv[])
{
int i = 1;
Tcl_Channel conn_channel;
Db2Connection *conn;
char id[MAX_ID_LENGTH + 1];
char buff[MAX_ID_LENGTH + 1];
SQLHANDLE hdbc, hstmt;
SQLLEN ival = SQL_NULL_DATA;
short num_params;
int nparam;
char **paramList;

...

conn->rc = SQLFreeStmt(hstmt, SQL_RESET_PARAMS);
    if (conn->rc != SQL_SUCCESS)
    {
        SQLError (henv, conn->hdbc, hstmt,
                (SQLCHAR *) & conn->sql_state,
                &conn->native_error,
                (SQLCHAR *) & conn->error_msg,
                sizeof (conn->error_msg), &conn->size_error_msg);
        Tcl_AppendResult (interp, conn->error_msg, (char *)NULL);
        if (paramList) ckfree((char *) paramList);
        return TCL_ERROR;
    }

for (i = 0; i < nparam; i++)
{
    if (strncmp (paramList[i], "NULL", 4) == 0)
    {
        /* null bind */
        conn->rc = SQLBindParameter (hstmt, i + 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, 0, 0, NULL, 0, &ival);
    }
    else
    {
        /* bind value */
        conn->rc = SQLBindParameter (hstmt, i + 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, 0, 0, paramList[i], 0, NULL);
    }

 if (conn->rc != SQL_SUCCESS)
    {
        SQLError (henv, conn->hdbc, hstmt,
                (SQLCHAR *) & conn->sql_state,
                &conn->native_error,
                (SQLCHAR *) & conn->error_msg,
                sizeof (conn->error_msg), &conn->size_error_msg);
        Tcl_AppendResult (interp, conn->error_msg, (char *)NULL);
        if (paramList) ckfree((char *) paramList);
        return TCL_ERROR;
    }
}

conn->rc = SQLExecute (hstmt);

...

If this looks a lot like the C programming language, that is because it is, or to be more precise it is a C program using the Db2 Call Level Interface (CLI), the 'C' and 'C++' programming interface for Db2 'to pass dynamic SQL statements as function arguments', which is exactly what we want to do. For all the databases supported we are using a compiled C interface on the client to use the lowest level and most efficient form of access possible.

Database      Programming Interface
Oracle        OCI
SQL Server    ODBC
Db2           CLI
PostgreSQL    Libpq
MySQL         MySQL Native Driver

HammerDB Database Interfaces

Glue language

So now we have all of our application logic in SQL and stored procedures, natively compiled, that we can bind and execute. We also have our low-level C interfaces that can run both SQL and stored procedures at high performance. The next thing we need is what is known as a glue language: one that can stick all the components together in an application that we can use to run database benchmarks.

Why not Python as a Glue Language

Surely we can just use any language that we are familiar with? Unfortunately it is not quite this simple. Let’s take a look at using Python as a language to build a driver for a workload derived from TPC-C. There are different language implementations of Python; as we have seen, for our high performance benchmarking application our interfaces are written in C, so CPython is our only choice. So let's take a look at how this would work in practice.

Referring back to our benchmarking basics we want to run database sessions in parallel. For this reason we need to implement our database sessions in the form of operating system threads. We could use processes, however given we may want to create hundreds or thousands of virtual users multithreading is the best approach to implement a Virtual User.

This is where we hit a roadblock with using Python as a glue language for a benchmarking application, the Python GIL. The GIL or Global Interpreter Lock is a mutex that stops multiple Python threads executing Python bytecodes at the same time.

The Python GIL

In other words our Virtual Users, instead of executing in parallel, are now effectively running their database sessions in serial, and the more sessions we have the more performance will degrade as each session tries to acquire the GIL. As it says on the Python Wiki, 'it is only in multithreaded programs that spend a lot of time inside the GIL, interpreting CPython bytecode, that the GIL becomes a bottleneck' – this describes exactly the scenario that we use for database benchmarking and is discussed further in the section Bytecode Execution.

So having an application client mutex lock that artificially stalls our Virtual Users does not sound good. Unfortunately, however, our glue language doesn’t know that we are testing a high performance database processing numerous database sessions concurrently. Our sessions will also be taking out locks on the database itself, meaning that blocked Virtual Users on the client could themselves be blocking running Virtual Users on the database. Conversely, if throughput is limited we could also be preventing the database from handling the database locking that results from running multiple sessions accessing the same data at the same time. In either scenario we are preventing the database from managing sessions concurrently when this is precisely the scenario we want to test.

Why Tcl as a Glue Language

HammerDB abandoned Python as a glue language at the design stage because of the lack of multithreading and parallel capabilities. The only language that met such specific requirements for high throughput database benchmarking was Tcl, or Tool Command Language. Not only was it designed from the ground up to interface with applications built in C, but it also supports true multithreading, enabling our Virtual Users to be implemented as independent operating system threads and genuinely run in parallel.

Tcl Multithreading in parallel

How does Tcl do this if Python can’t? Whereas in Python all threads run in a single interpreter (after acquiring the GIL), in Tcl each thread has its own copy of the interpreter. This is possible because the Tcl interpreter is exceptionally compact and lightweight (for this reason Tcl is also often used as an embedded language in hardware such as Cisco routers). By default there is no shared data between threads; instead each thread runs an event loop completely independently and communication is done by passing messages to run events in those threads.
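A minimal sketch with the Tcl Thread package illustrates the model: each created thread starts its own interpreter and event loop, and work arrives as messages.

package require Thread
# create a thread; it starts its own interpreter and enters its event loop
set tid [thread::create]
# communication is by message passing: send a script for the thread to evaluate
# (HammerDB-style posting of work would use the -async option rather than waiting)
thread::send $tid { puts "running in thread [thread::id]" }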

So what if we pre-created a number of operating system threads, loaded a C level interface to communicate with a database and then passed a script for the threads to evaluate in their event loop, a script that would create strings of data for parameters and then either run SQL statements or call database stored procedures? That is exactly what HammerDB does. In this scenario we have true multithreading and linear scalability. When our benchmarking client has multiple cores and threads we can take advantage of them and run entirely in parallel, ensuring a true concurrent workload on the database server. Of course, if the threads need to communicate they can, through messages, in a thread safe way; for example if you press the stop button in HammerDB you send a message to all Virtual Users to end the current running workload and exit the thread. When you create Virtual Users in HammerDB you can see that you have created threads as follows (run this command in the console or in the CLI).

puts [ thread::names ]
tid000000000000360C tid0000000000000944 tid00000000000032A8 tid0000000000002E10 tid00000000000019E4 tid0000000000001B38 tid0000000000002E80 tid00000000000004F4 tid00000000000006D0 tid0000000000003C44 tid0000000000002CF4

There is one small exception to this parallelism. Of course, you can only update a graphical user interface with the main application thread and therefore any output to be displayed must be sent via a message to this main application thread. Therefore the Test workload in HammerDB, which prints the output from all user sessions, requires every session to pass its output to the main display. For this reason the Timed workload suppresses output unless there is an error.

Bytecode Execution

So as we saw, in Python 'it is only in multithreaded programs that spend a lot of time inside the GIL, interpreting CPython bytecode, that the GIL becomes a bottleneck', so let's see what happens in HammerDB. Running a performance profiling tool such as perf in Linux we can see that the top event is TEBCresume, standing for Tcl Execute Byte Code.

Samples: 67K of event 'cycles:ppp', Event count (approx.): 33450114923
 Overhead  Shared Object                  Symbol
   33.56%  libtcl8.6.so                   [.] TEBCresume
    7.68%  libtcl8.6.so                   [.] Tcl_GetDoubleFromObj
    6.28%  libtcl8.6.so                   [.] EvalObjvCore
    6.14%  libtcl8.6.so                   [.] TclNRRunCallbacks

It is not a coincidence that, as a Tcl proc is compiled to bytecode, HammerDB implements the calling of the database stored procedures as procs. You can see the bytecode generated with a command such as the following.

puts "bytecode:[::tcl::unsupported::disassemble proc slev]"

with output such as the following for the SQL Server slev stored procedure we saw earlier.

bytecode:ByteCode 0x00000211562A4AA0, refCt 1, epoch 17, interp 0x00000211560F30D0 (epoch 17) 
 Source 
 set threshold [ RandomNumber 10 20 ]
 if  {[catch {se… Cmds 21, src 514, inst 314, litObjs 14, aux 0, stkDepth 8, code/src 0.00 Proc 0x0000021155F1E060, refCt 1, args 4, compiled locals 8 
 slot 0, scalar, arg, slev_st 
 slot 1, scalar, arg, w_id 
 slot 2, scalar, arg, stock_level_d_id 
 slot 3, scalar, arg, RAISEERROR 
 slot 4, scalar, threshold 
 slot 5, scalar, message 
 slot 6, scalar, resultset 
 slot 7, scalar, slrows 
...

So let's look at the slev proc before disassembly. We are calling the stored procedure with the warehouse id and stock level district id parameters, setting the threshold, executing the stored procedure on the database and fetching the results.

#STOCK LEVEL
 proc slev { slev_st w_id stock_level_d_id RAISEERROR } {
 set threshold [ RandomNumber 10 20 ]
 if  {[catch {set resultset [ $slev_st execute [ list st_w_id $w_id st_d_id $stock_level_d_id threshold $threshold ]]} message ]} {
     if { $RAISEERROR } {
 error "Stock Level : $message"
     } else {
 puts "Stock Level : $message"
     }
       } else {
 if {[catch {set slrows [ $resultset allrows ]} message ]} {
 catch {$resultset close}
 if { $RAISEERROR } {
 error "Stock Level Fetch : $message"
     } else {
 puts "Stock Level Fetch : $message"
     }} else {
 catch {$resultset close}
     }
     }
 }

As seen before, the application logic and our workload are on the database. Our Virtual User is generating and passing strings of data to call these stored procedures, meaning the client logic is exceptionally lightweight. Not only is it lightweight but, as we have seen, each Virtual User is an operating system thread running compiled bytecode generating strings, so it is also very fast. But let's quantify fast by running the same calculation in SQL Server (see the HammerDB documentation for the Oracle example).

USE [tpcc]
 GO
 /* Object:  StoredProcedure [dbo].[CPUSIMPLE]    Script Date: 25/02/2021 17:41:35 */
 SET ANSI_NULLS ON
 GO
 SET QUOTED_IDENTIFIER ON
 GO
 ALTER PROCEDURE [dbo].[CPUSIMPLE] 
 AS
    BEGIN
       DECLARE
          @n numeric(16,6) = 0,
          @a DATETIME,
          @b DATETIME
       DECLARE
          @f int
       SET @f = 1
       SET @a = CURRENT_TIMESTAMP
       WHILE @f <= 10000000 
          BEGIN
       SET @n = @n % 999999 + sqrt(@f)
             SET @f = @f + 1
          END
          SET @b = CURRENT_TIMESTAMP
 PRINT 'Timing = ' + ISNULL(CAST(DATEDIFF(MS, @a, @b)AS VARCHAR),'')
 PRINT 'Res = ' + ISNULL(CAST(@n AS VARCHAR),'')
    END

Timing = 7767
Res = 873729.721235
(1 row affected)
Completion time: 2021-02-25T17:40:16.1261747+00:00

and in Tcl Bytecode.

proc runcalc {} {
 set n 0
 for {set f 1} {$f <= 10000000} {incr f} {
 set n [ expr {[::tcl::mathfunc::fmod $n 999999] + sqrt($f)} ] 
 }
 return $n
 }
 puts "bytecode:[::tcl::unsupported::disassemble proc runcalc]"
 set start [clock milliseconds]
 set output [ runcalc ]
 set end [ clock milliseconds]
 set duration [expr {($end - $start)}]
 puts "Res = [ format %.02f $output ]"
 puts "Time elapsed : [ format %.03f [ expr $duration/1000.0 ] ]"

hammerdb>source runcalc.tcl
Res = 873729.72
Time elapsed : 3.553
hammerdb>

So on the same test system SQL Server completed the T-SQL calculation in 7.7 secs and Tcl completed the calculation in 3.5 secs. So for this example the client language is 2X faster than one of the fastest server languages. Yet remember, the client logic is only generating strings to call the server side stored procedures (or SQL statements for TPROC-H), and all the Virtual Users are running independently of each other, meaning that the client side of the workload is minimal compared to the database side.

As HammerDB tests multiple databases we also have the insight into client performance from comparing and contrasting the throughput from both commercial and open source databases. Consequently, we know that when we see a result for a highly tuned commercial database that is more than 10X higher throughput than a comparative database on the same hardware system we have 100% confidence that the limitation does not reside in the HammerDB client side of the test but instead in the database.

What about Coroutines?

We have seen that for database benchmarking it is important that our Virtual Users run in parallel meaning that each Virtual User should operate as an operating system thread. However, there is one scenario where you could raise an objection. What about when we want to run thousands of database sessions? In this scenario it would not be possible to run thousands of operating system threads due to the overhead on system resources and therefore couldn’t we run an implementation using coroutines instead? The answer is yes and when we want to run thousands of database sessions this is exactly what HammerDB does.

proc promise::async {name paramdefs body} {
     # Defines a procedure that will run a script asynchronously as a coroutine.

This option is chosen when you select the Asynchronous Scaling checkbox. You define the number of Virtual Users (operating system threads) and the number of client sessions per Virtual User managed with coroutines. However, when you select Asynchronous Scaling note that it also activates keying and thinking time for you; this is not coincidental.

A coroutine implementation is appropriate for managing many sessions concurrently (rather than in parallel) when there is a clear and defined time delay between transactions per session. This is why HammerDB uses threads for Virtual Users for maximum throughput, and threads plus coroutines for asynchronous scaling when sessions will sleep for keying and thinking time. Coroutines alone would not enable the parallelism required for maximum throughput.
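To illustrate the difference, here is a toy Tcl coroutine example (not HammerDB's actual implementation): two 'sessions' interleave cooperatively within a single thread, with each call resuming a session at its last yield.

# toy example: two cooperative 'sessions' interleaved in a single thread
proc session {name} {
    foreach txn {neworder payment} {
        puts "$name executing $txn"
        yield
    }
}
coroutine s1 session vu1   ;# runs until the first yield
coroutine s2 session vu2
s1                         ;# resume s1: vu1 executing payment
s2                         ;# resume s2: vu2 executing payment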

Building a GUI with Tk

HammerDB can run in command line mode but has always had a GUI that runs on both Linux and Windows platforms. For most Python applications the graphical interface used is Tkinter, which is the Python interface to Tcl/Tk. HammerDB bypasses this interface and uses Tk directly, meaning that all the features available to a Tkinter application are also available to HammerDB, and more besides: for example HammerDB could take advantage of SVG graphics for high definition displays before a Python Tkinter application could, creating a native display for both Linux and Windows.

HammerDB GUI

Summary

We have discussed why HammerDB is written in the programming languages it uses and why running the clients in parallel in operating system threads is so important when we want to test concurrency on the database being tested. We have seen that the workloads are written in SQL and stored procedures and that the client logic is compiled into bytecode for performance. All of this is wrapped in an application interface that is simple and intuitive, so all you need to do is point HammerDB at your database and start testing.

HammerDB v4.0 Signed Installer for Windows

There is now a signed installer available for HammerDB v4.0 for Windows from the HammerDB releases. The installation steps for HammerDB-4.0-Win-x64-Signed-Setup.exe are identical to HammerDB-4.0-Win-x64-Setup.exe and the contents of the installed directory are the same.

The key difference between the two installers is that the signed setup has been signed with a code signing certificate, whereas for the unsigned installer you can verify the installer itself manually with the provided checksums. When running the signed setup it will confirm the Verified publisher as the TPC Council as shown.

If you choose to Show more details you can view the Certificate Information from the TPC Council.

For Linux installations and the provided zip file for Windows, verification of the release files continues to be done through checksums.

HammerDB v4.0 New Features Pt4: Connect Pooling for Clusters

Prior to HammerDB v4.0 the TPROC-C test had the option to connect to one database instance only. If it was required to connect to multiple instances in a cluster then the Primary/Replica modes were used to create multiple HammerDB instances to connect to the separate database instances simultaneously. HammerDB v4.0 introduces a connect pool feature whereby a single instance of HammerDB can create a pool of multiple database instance connections, with policies defined at the stored procedure level to determine which connections the individual stored procedures run against. For example, in an environment where there is a primary read-write instance and a secondary read-only instance, it would be possible to define a policy whereby the neworder, payment and delivery stored procedures run against the read-write instance and stocklevel and orderstatus run against the read-only instance. Where there are multiple instances serving a similar purpose the policy can determine how an individual transaction is assigned. For example, if there are three read-write instances then the neworder stored procedure can be defined to execute a transaction against each in a round-robin fashion or instead select an instance at random.

Connect Pool

To define the connect pool there are new XML files in the config/connectpool directory. These provide a template for the multiple connections, with the same connection options as the standard interface defined in the connections section. The connections are named c, c2, c3 and so on, with no limit on the number of connections that you define. There is also an sprocs section where you define which connections each stored procedure should use and which policy to apply; the policy can be first_named, last_named, random or round_robin.

pgcpool.xml
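The structure follows this pattern. This is a sketch only: the element names within each connection mirror the standard connection options for the chosen database, the host names are illustrative, and the exact layout should be checked against the template files shipped in config/connectpool:

<connpool>
<connections>
<c>
<pg_host>primary.example.com</pg_host>
<pg_port>5432</pg_port>
</c>
<c2>
<pg_host>replica.example.com</pg_host>
<pg_port>5432</pg_port>
</c2>
</connections>
<sprocs>
<neworder>
<connections>c c2</connections>
<policy>round_robin</policy>
</neworder>
<stocklevel>
<connections>c2</connections>
<policy>first_named</policy>
</stocklevel>
</sprocs>
</connpool>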

When you have defined your configuration, select the XML Connect Pool option when loading the driver script. Your active Virtual Users will then use your XML defined connections, connecting to each defined instance and thereby holding a pool of connections open to distribute the transactions to. For all databases the connect pool connections use prepared statements, and once the connection is established statements will be prepared for all of the stored procedures against each connection.

PostgreSQL XML Connect Pool

Within the driver script there is a commented line that can be uncommented to report details on all of the connections and prepared statements that are made.

postgresql connection information

Finally it is important to note that the main monitor connection continues to connect to the standard defined connection and reports NOPM and TPM from that single instance. Where a clustered environment such as Oracle RAC reports performance data for the entire cluster, this will report cluster performance. If however you have defined connections to separate unrelated instances then this monitor connection will only report for the instance it is connected to. For this reason, where a database will not report performance data across the cluster, the XML connect pool driver script will also report client side transactions for each Virtual User and in total, to provide a guide to the workload directed to each instance.

The connect pool can be used in conjunction with other features such as use all warehouses and asynchronous connections. For more information on configuring the XML connect pool see the relevant section in the HammerDB documentation.

HammerDB v4.0 New Features Pt3: Refactored Stored Procedures

Another key feature introduced with HammerDB v4.0 is the refactoring of the stored procedures for some of the TPROC-C workloads. This means that the performance metrics reported in NOPM/TPM could be different from previous releases, as could the ratio between NOPM and TPM for these workloads. Therefore results from v4.0 may not be directly comparable with the results from previous releases for your database. The reason for these changes is that, over time, some databases have introduced more advanced features or drivers that improve performance beyond what was available when the original HammerDB transactions or interfaces were written. These changes were contributed and accepted into HammerDB v4.0.

NOPM vs TPM

Firstly it is important to understand the metrics NOPM and TPM and the difference between them. The TPROC-C workload is derived from the TPC-C workload; the primary metric for TPC-C is called tpmC, the number of new order transactions processed per minute. As HammerDB TPROC-C is a derived workload it is not permitted to use tpmC, and it therefore instead uses a metric called NOPM that records the number of new order transactions processed per minute. Although a similar metric to tpmC it is not considered to be directly comparable. From HammerDB v4.0 NOPM should be considered the primary metric and is the only one that should be used for a cross database comparison. For this reason from HammerDB v4.0 NOPM is printed first.

NOPM Primary Metric

However, for backward compatibility the generic.xml configuration file contains an option called first_result; setting this to TPM will print the results in the same format as v3.3 and earlier.

<benchmark>
...
<first_result>TPM</first_result>
...
</benchmark>

So why not just print NOPM and report a single metric for TPROC-C as per the official TPC-C workloads? The key reason is that HammerDB is intended to give us more insights into database performance: whereas NOPM is a cross-database comparable metric, TPM is a database specific metric that can give more detail on the workload and be related to other database specific metrics, but cannot be compared between different databases. For example, for the Oracle database the TPM value is calculated from the number of user commits + user rollbacks, and you can query these metrics in the v$sysstat view during the test. This metric is the same as that used in Oracle tools such as AWR reports, and as shown the TPM metric captured by HammerDB is exactly the same as Transactions and Rollbacks in the Load Profile section when these values are multiplied by 60. This can then be used to compare with other Oracle specific metrics such as redo size and logical reads.

Load Profile

                            Per Second       Per Transaction   Per Exec   Per Call
DB Time(s):                 151.2            0.0               0.00       0.00
DB CPU(s):                  124.9            0.0               0.00       0.00
Background CPU(s):          0.0              0.0               0.00       0.00
Redo size (bytes):          862,578,321.3    5,390.2
Logical read (blocks):      10,369,967.8     64.8
Block changes:              5,009,042.8      31.3
Physical read (blocks):     498.8            0.0
Physical write (blocks):    23.6             0.0
Read IO requests:           329.7            0.0
Write IO requests:          2.2              0.0
Read IO (MB):               6.8              0.0
Write IO (MB):              0.2              0.0
IM scan rows:               0.0              0.0
Session Logical Read IM:    0.0              0.0
User calls:                 122,685.1        0.8
Parses (SQL):               74,085.2         0.5
Hard parses (SQL):          0.0              0.0
SQL Work Area (MB):         155.9            0.0
Logons:                     0.0              0.0
User logons:                0.0              0.0
Executes (SQL):             3,303,622.1      20.6
Rollbacks:                  270.4            0.0
Transactions:               160,026.2

Oracle AWR Report Load Profile

Similarly for SQL Server TPM records Batches/sec, which is the same value seen in the Activity Monitor in SSMS. Using the database specific metric means that we can sample the transaction rate in real time for the HammerDB transaction counter, whereas doing so for the NOPM value would impact the test by reading from a table being modified during the test. It can be seen from the example below that the HammerDB Transaction Counter

Transaction Counter

reports the same data per minute as Activity Monitor reports per second and can therefore be related directly to the other data that Activity Monitor provides such as Database I/O.

Activity Monitor

Refactoring Differences

When using HammerDB v4.0 the notable differences in TPM and NOPM rates that can be observed are as follows:

SQL Server

For SQL Server note that a change in the TPM metric was introduced at v3.3 and continues into v4.0. Initially these releases were planned close together, however v4.0 was delayed to introduce UHD scalable graphics. The key difference is that in v3.2 the TPM value is reported as 2X the value of v4.0 (and v3.3) due to a change in the SQL Server driver interface used. The reason for the change was a bug raised for the previous ODBC v1 interface, tclodbc: the 3rd party library would crash if SQL Server was shut down during a test and the Virtual Users were subsequently closed after they had correctly reported the error. The fix was to move to a newer ODBC v3 interface called tdbc::odbc. The previous tclodbc interface required 2 commits per transaction, one in the stored procedure and one outside in the driver script. If the external one was removed or commented out then the driver would error, saying that the connection was already in use by a previous command. Moving to the newer interface this external commit was no longer needed and was removed; as a result the TPM value has reduced, however the NOPM value is exactly the same. (From tests it should be slightly higher with the newer ODBC v3 interface). It is important to note that as far as the HammerDB TPROC-C workload is concerned it is not doing any more work as such – just doing one less commit for the same work. If you compare the driver scripts for the different versions you can observe where the extra commit was removed. For comparisons with previous releases NOPM can be used unmodified as the primary metric, or the TPM value multiplied by 2X to account for the removed commit.

Oracle

For Oracle from v4.0 the stored procedures have been updated to use bulk processing features. As we are now doing bulk operations we are processing more changes per database commit, and it can be seen in an AWR report that the redo generated per transaction has also increased from v3.3 to v4.0. For this reason the NOPM value has increased whilst the TPM value has remained level, and therefore the TPM to NOPM ratio has changed for Oracle between these releases. This has been measured at 3.0X TPM to NOPM for v3.3 and earlier but has reduced to 2.11X for v4.0. Therefore HammerDB v4.0 improves Oracle performance by approximately 1.35-1.45X measured in NOPM for the same TPM. This improvement in NOPM is a recognition of actual improved TPROC-C transactional throughput by using advanced PL/SQL features.

PostgreSQL

The PostgreSQL stored procedures and functions have undergone similar changes to Oracle to take advantage of bulk processing features. These changes have improved NOPM and TPM by approximately 1.35X in HammerDB v4.0 compared to previous releases and are also a recognition of the use of more advanced features.

In conclusion, in v4.0 HammerDB has undergone a number of evolutionary changes for some of the database workloads that impact performance results when compared to earlier versions. These differences should be accounted for when comparing results. Although the aim is to keep performance as consistent as possible between releases, where databases had introduced new features or newer more advanced interfaces it was considered reasonable to modify the stored procedures or interfaces to use those features, even though reported results differ in the newer version.

It is also important to note that HammerDB is open source, and therefore if it is considered that there are underused features in any of the supported databases then a pull request can be made to ensure that those features are fairly represented in the results reported by HammerDB. The overall goal is to remain as consistent as possible whilst providing an accurate representation of the capabilities that each supported database provides as their features are updated and improved.

HammerDB v4.0 New Features Pt2: Scalable UHD Graphics

Up to version 3.3 the HammerDB GUI operated with a fixed scaling factor of 1.33 pixels per point. That means that regardless of monitor resolution HammerDB was set to use a fixed number of pixels, and therefore on a higher resolution screen, as shown, the HammerDB GUI would appear smaller than on a standard DPI display.

HammerDB v3.3 on UHD

For most purposes this did not present an issue and the HammerDB display rendered correctly up to 1920 x 1080 Full HD displays. However a GitHub issue was raised because, on moving to UHD displays with a pixel density beyond 1920 x 1080, the HammerDB interface became too small for use. Additionally these types of displays on devices such as PixelSense on the Microsoft Surface Book were popular for presentations and demos, and therefore the task was to update the HammerDB display to support scalable graphics. This meant that HammerDB would detect the display density and size the application interface accordingly. At HammerDB v4.0 the graphical interface has been extensively updated, as shown, to support UHD displays with pixel densities beyond 1920 x 1080 on both Windows and Linux.

HammerDB v4.0 on UHD

To achieve this scaling it was necessary to move from fixed PNG based graphics to SVG, requiring all graphics and icons to be regenerated in the updated format and the interface rewritten to use SVG. Similarly it was necessary to use scalable fonts to ensure that the text scaled alongside the graphics, and to move to updated themes so that all dialogs and options scaled alongside the main display. With the move to the updated nomenclature of TPROC-C and TPROC-H already needing extensive re-documentation, it was decided to delay the release of v4.0 in summer 2020 to invest the time in rewriting the interface to ensure usability on devices such as PixelSense displays.

Once the main interface was updated it was also necessary to update the transaction counter, otherwise the fixed display would have continued to render at a smaller size within the updated interface.

Transaction Counter

Similarly the CPU Metrics display was modified to scale in proportion to the main interface.

CPU Metrics

Finally it was necessary to update the graphical Oracle metrics with the same scalability settings. Although this feature is only available for Oracle at v4.0, the plan is to make it available for other databases in future releases.

Oracle Metrics

With these changes the entire HammerDB application supports scalable graphics, with the feature tuneable by the settings in the generic.xml file in the config directory.

<theme>
<scaling>auto</scaling>
<scaletheme>auto</scaletheme>
<pixelsperpoint>auto</pixelsperpoint>
</theme>

By default all of the theme settings are set to auto and this is the recommended configuration, however they can be modified if required. Firstly, the scaling setting can be changed to fixed to revert to the previous fixed non-scaling graphics. This could for example be used with a remote X Windows display for faster graphics rendering when a faster alternative such as VNC is not available. For the scaletheme you can use settings of “auto”, “awlight”, “arc” or “breeze”, with “awlight” the default on Linux and “breeze” the default on Windows, and the plan is to introduce more theme options over time.

awlight theme

Finally pixelsperpoint is for expert usage to fine tune the scaling of HammerDB according to different screen settings.

Now that HammerDB is available with scalable graphics from v4.0 there is no longer any barrier to running database performance presentations and demos from devices with UHD displays.

HammerDB v4.0 New Features Pt1: TPROC-C & TPROC-H

One of the key differences that stands out with HammerDB v4.0 compared to previous releases is that the workload names have changed from TPC-C and TPC-H to TPROC-C and TPROC-H respectively, and therefore a key question is how the v4.0 workloads differ from those of v3.3 and earlier, what has changed and how this impacts interpreting results. The simple answer is that nothing has changed: the workloads are exactly the same workloads derived from the TPC-C and TPC-H specifications, and HammerDB v4.0 can be seen as a continuation and enhancement of previous releases; only the name has changed, not the workload itself. From an engineering perspective this may be all that you need to know. However a reasonable follow-on question is why change the name at all? This post aims to give some of the background to the change and provide you with the information on where and how TPROC-C and TPROC-H differ from a ‘real’ TPC-C and TPC-H respectively.

First of all, it has always been clear in the HammerDB documentation that the TPROC-C/TPC-C and TPROC-H/TPC-H workloads have not been ‘real’ audited and published TPC results, instead providing a tool to run workloads based on these specifications. For example HammerDB has not used tpmC terminology to report TPC-C based metrics, instead using TPM and NOPM nomenclature. Initially it was not specified in TPC documentation whether using TPC-C and TPC-H terminology for derived workloads was permitted. Additionally, both commercial and open source tools based on the specifications continued to use TPC-C and TPC-H to describe these workloads. It has now been made explicitly clear that using TPC-C or TPC-H for a workload that does not comply with the full specification is a trademark violation. For this reason HammerDB has changed the workload names to be legally compliant. Any tools using TPC-C, TPC-H (or other trademarked TPC workload names) for tools or workloads, whether commercial or open source, should also consider renaming for trademark compliance purposes or take legal guidance.

But why is this important? Surely it was clear that, as a derived workload, HammerDB results were not actual fully audited TPC results, so that this non-compliance was implicit? In the vast majority of cases this was so, and even when using TPC terminology it was clear that using, for example, TPC-C meant “derived from TPC-C”. However there were cases where an intermediate understanding, based on some familiarity with the specifications but without this implicit understanding, questioned the validity of performance data published by users. This change makes the difference explicit and puts everyone on the same page: TPROC-C means derived from TPC-C.

So why derive a workload from TPC-C or TPC-H at all, instead of just relying on vendor published and fully audited results? This question is the very essence of what HammerDB is: a tool that takes the well designed and scalable TPC-C and TPC-H specifications and implements workloads derived from them that are accurate and repeatable yet test database capabilities to the full (compared to alternative simple one-table, no-contention workloads), so that running database performance benchmarks becomes fast, low cost and accessible to all, making database benchmarks open source and thereby allowing anyone to compare the performance of their databases.

A full understanding of why this is important requires some knowledge of the evolution of database hardware and software. The HammerDB TPROC-C workload is by design a CPU and memory intensive workload derived from TPC-C, so that we can benchmark at maximum CPU performance with a much smaller database footprint. In the days before highly performant SSDs and persistent memory, database benchmarks faced a significant challenge in comparing performance due to the available I/O performance. For TPC-C this meant enough available spindles to reduce I/O latency and for TPC-H enough bandwidth for data throughput. Fully audited configurations would require multiple racks of I/O capacity to reach maximum CPU performance. This was both expensive and time consuming to configure. Even with superior I/O performance today, the I/O configuration required on-premise or in the cloud remains costly.

Additionally, a fully audited benchmark requires multiple client servers to sustain the large volume of clients, as well as a TP Monitor (Transaction Processing Monitor). This TP monitor acts as middleware, transferring the transactions between the clients and the database server. The challenge was how to take the essence of the TPC specifications, which are made available for free, and implement the workloads in a method that maximises CPU performance and can be run on anything from laptops to servers without the significant expense of I/O capacity and multiple client servers. Therefore for HammerDB TPROC-C we eliminated keying and thinking time and the requirement for terminals. This meant we could dispense with the TP monitor, reduce the number of clients, reduce the storage and the schema size, but still run similar transactions to the TPC-C specification, giving a scalable and repeatable workload in a CPU and memory intensive manner.

Over time, since we first ran HammerDB workloads, we noticed that the generation to generation CPU performance ratios between systems in this CPU intensive default mode were the same as in the official published TPC-C benchmarks. That is, if system A generated 1.5X more transactions than system B in the fully audited benchmark, then the HammerDB result was also 1.5X better.  Consequently we were getting very similar insights both faster and cheaper, meaning we could test orders of magnitude more configurations in the same amount of time while generating the same relative performance ratios.

Why would this be the case?  Surely if the database schema is smaller, the workload more intensive, and the I/O capacity demands lower, then the results would be different. The key aspect is the presence of the TP Monitor, and this is arguably where the most confusion arose in stating that a HammerDB TPC-C workload was not an actual TPC-C workload. Typically in a fully audited TPC-C the client sessions are not connecting directly to the database with many thousands of sessions. Instead all of the client servers connect to the TP monitor.  This is not a new concept: as given in the description for the earlier TPC-B workload, “a transaction monitor can multiplex transaction streams to match the processing profile of the database subsystem. For example, 1000 user terminals would present transactions with human think times and delays, and the transaction monitor will concentrate them down to a steady stream of, say, 50 concurrent processes.”   Therefore the actual database server workload is both CPU and memory intensive, with actual connections typically numbered in the hundreds, processing transactions for the clients managed by the TP monitor.  HammerDB eliminates this stage, instead connecting the steady stream of transactions directly to the database.

Additionally, because the number of Virtual Users is lower in the HammerDB default mode (see below for alternatives), each Virtual User chooses a home warehouse at random, and once chosen most of the work takes place against that warehouse. If, for example, you hit maximum CPU at 200 Virtual Users, then most of the work is on 200 warehouses regardless of how many you have created.  As the home warehouse is chosen at random, however, you want enough warehouses to ensure an even spread of Virtual Users across them. Therefore the general advice is 250-500 warehouses per socket, making 1000 warehouses a good starting point for a 2 socket server regardless of the database. This is also the reason why the default maximum number of warehouses is 5000. You can change this if you wish, however the limit is set to prevent the typical error of over configuring the database size in the expectation that it will improve performance.
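
For example, following this advice for a 2 socket server at the CLI would look something like the sketch below (shown with the PostgreSQL prefix and an illustrative number of build virtual users; confirm the exact parameter names for your database with print dict):

dbset db pg
diset tpcc pg_count_ware 1000
diset tpcc pg_num_vu 20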

Nevertheless, more recently SSDs and persistent memory have lowered the price points for high performance I/O, increasing the desire to test more I/O capacity than when a dedicated storage array was needed to do so. For this reason, beyond the default mode there are three more advanced features:

  1. Use All Warehouses (Choose a new warehouse per transaction)
  2. Event driven scaling (asynchronous clients with keying and thinking time)
  3. XML Connect Pooling (test distributed clusters)

These can be used separately or together for scenarios where it is desired to increase the I/O load or the number of Virtual Users, or to test a distributed environment.  However, testing a large number of Virtual Users with event driven scaling will require increasing the number of HammerDB clients with primary and replica modes, and you will also need to implement a form of middleware to concentrate these connections. You are therefore moving away from a more agile and rapid form of testing towards more complex configurations.  You are entirely in control of your test environment, so the choice is yours depending on the scenarios you wish to test, rather than being bound by a more rigid specification.
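
As an illustrative sketch, the first of these features, together with keying and thinking time, can be enabled at the CLI with settings of the following form (SQL Server prefix shown; confirm the exact parameter names for your database and version with print dict):

diset tpcc mssqls_allwarehouse true
diset tpcc mssqls_keyandthink true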

In terms of analytics and the TPROC-H workload derived from TPC-H, this specification does not require middleware, so when running TPROC-H you are close to the specification; however differences remain.  For example, the TPC-H specification includes measuring database load times, which is beyond the scope of HammerDB, and you may wish to implement features such as in-memory column stores, partitioning and compression to improve performance. HammerDB cannot implement advanced features that may not be available in all test environments and therefore builds a base configuration that is available to all; the user can then modify the schema as they wish. For this reason TPROC-H is also a workload derived from TPC-H and has moved away from providing information on calculating the QphH figure, focusing instead on actual query times for power and throughput tests and a total geometric mean of query times.

In conclusion, TPROC-C and TPROC-H are the new names for the same HammerDB specific workloads, meaning “derived from TPC-C” and “derived from TPC-H” respectively, to make running workloads based on these specifications both faster and cheaper and available to all.  Official TPC-C and TPC-H compliant results can, as has always been the case, only be found on the official TPC website.

Automating CLI Tests on Windows

The information in this post duplicates this GitHub Issue: https://github.com/TPC-Council/HammerDB/issues/84. The issue concerns running build and driver scripts automatically on Windows.  As build and driver examples have not been given together in previous posts, the examples are copied here. This is intended as a template that you can take and modify for your own needs.

A few things to note. Firstly, these scripts are multithreaded, so once you run buildschema or vurun multiple threads are running. We need a different approach for each, because in the schema build we wait for all of the virtual users to finish (vwait forever), whereas for the driver we wait for a set period of time before terminating them. For the build I have created more warehouses, but with 5 virtual users, to show the multithreaded nature of the build. For the driver we have set a rampup and duration of 3 minutes in total (180 secs); this runs in the virtual users, while the main monitor virtual user runs a timer set to 200 seconds, more than the rampup and duration combined (if it is less it will terminate the test before it ends).

With both I copied the scripts to C:\Program Files\HammerDB-3.3. I then ran:

hammerdbcli.bat auto autorunbuild.tcl

and

hammerdbcli.bat auto autorundrive.tcl

When both are complete the shell will exit as intended, so it is a good idea to set logtotemp to ensure that you capture both the output and any errors (for example if the build script exits because the database already exists).
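
If you want to run both scripts back to back unattended, a simple batch wrapper along these lines (an untested sketch) can be used:

@echo off
cd /d "C:\Program Files\HammerDB-3.3"
call hammerdbcli.bat auto autorunbuild.tcl
call hammerdbcli.bat auto autorundrive.tcl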

Build Script – autorunbuild.tcl

puts "SETTING CONFIGURATION"
global complete
proc wait_to_complete {} {
global complete
set complete [vucomplete]
if {!$complete} { after 5000 wait_to_complete } else { exit }
}
puts "SETTING CONFIGURATION"
dbset db mssqls
diset connection mssqls_server {(local)\SQLDEVELOP}
diset tpcc mssqls_count_ware 10
diset tpcc mssqls_num_vu 5
vuset logtotemp 1
buildschema
wait_to_complete
vwait forever

Driver Script – autorundrive.tcl

#!/bin/tclsh
# timer run in the main monitor virtual user: returns when the virtual
# users are complete or the given number of seconds has elapsed
proc runtimer { seconds } {
    set x 0
    set timerstop 0
    while {!$timerstop} {
        incr x
        after 1000
        if { ![ expr {$x % 60} ] } {
            set y [ expr $x / 60 ]
            puts "Timer: $y minutes elapsed"
        }
        update
        if { [ vucomplete ] || $x eq $seconds } { set timerstop 1 }
    }
    return
}
puts "SETTING CONFIGURATION"
dbset db mssqls
diset connection mssqls_server {(local)\SQLDEVELOP}
diset tpcc mssqls_driver timed
diset tpcc mssqls_rampup 1
diset tpcc mssqls_duration 2
vuset logtotemp 1
loadscript
puts "SEQUENCE STARTED"
foreach z { 1 2 4 } {
    puts "$z VU TEST"
    vuset vu $z
    vucreate
    vurun
    #Runtimer in seconds must exceed rampup + duration
    runtimer 200
    vudestroy
    after 5000
}
puts "TEST SEQUENCE COMPLETE"

A recommended editor that recognises TCL syntax for editing HammerDB scripts is Vim.

How to Graph HammerDB Response Times

The HammerDB workload derived from TPC-C includes a feature to record in the logfile the response times for the stored procedures that are run. This post shows how you can easily convert this data into spreadsheet format to view the response times captured over a run.

Firstly, to activate the time profile feature select Time Profile in the GUI or set the equivalent parameter in the CLI to true.

Time Profile
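
At the CLI the time profile is a diset parameter. For example, for PostgreSQL (as used in the pgtp.log example later in this post) the following sketch should enable it; confirm the exact parameter name for your database with print dict:

diset tpcc pg_timeprofile true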

Note that when the driver script is loaded it is modified to report response times for the first active virtual user running the workload (only the first virtual user is tracked to minimise the impact on the workload). This data is reported every 10 seconds into the HammerDB log.  The output is in percentile format, reporting the minimum, 50th percentile, 95th percentile, 99th percentile and maximum for each of the procedures during that 10 second interval, with an extract shown below.

+-----------------+--------------+------+--------+--------------+
 |PERCENTILES 2019-08-23 04:54:21 to 2019-08-23 04:54:31
 |neword|MIN-812|P50%-2320|P95%-3614.5|P99%-4838|MAX-10610|SAMPLES-2221
 |payment|MIN-281|P50%-1136|P95%-1892|P99%-2344|MAX-3725|SAMPLES-2133
 |delivery|MIN-2931|P50%-4048|P95%-5247|P99%-5663|MAX-5989|SAMPLES-197
 |slev|MIN-1521|P50%-2108|P95%-2629|P99%-2859|MAX-3138|SAMPLES-248
 |ostat|MIN-212|P50%-398.5|P95%-610|P99%-770|MAX-1732|SAMPLES-205
 |gettimestamp|MIN-3|P50%-5|P95%-6|P99%-6|MAX-27|SAMPLES-4753
 +-----------------+--------------+------+--------+--------------+
 |PERCENTILES 2019-08-23 04:54:31 to 2019-08-23 04:54:41
 |neword|MIN-797|P50%-2301.5|P95%-3590|P99%-6458|MAX-10130|SAMPLES-2147
 |payment|MIN-299|P50%-1136|P95%-1840|P99%-2301|MAX-3470|SAMPLES-2124
 |delivery|MIN-2922|P50%-4164.5|P95%-5334|P99%-5802|MAX-6179|SAMPLES-247
 |slev|MIN-1342|P50%-2074|P95%-2700|P99%-2945|MAX-3038|SAMPLES-218
 |ostat|MIN-193|P50%-409|P95%-571|P99%-620|MAX-897|SAMPLES-220
 |gettimestamp|MIN-3|P50%-5|P95%-6|P99%-6|MAX-34|SAMPLES-4735
 +-----------------+--------------+------+--------+--------------+

In text form this data can be useful, however it is often better to view it graphically. The simple script below can be run at the command line and given a logfile containing the data for one run only. (Note that it is important to provide a logfile for only the run you wish to convert, otherwise all of the data will be combined.)  When run on a logfile with data such as shown above, this will output the data in tab delimited format that can be interpreted by a spreadsheet.

#!/bin/tclsh
# extract the time profile percentiles from a HammerDB logfile for one
# run and print them in tab delimited format for a spreadsheet
set filename [lindex $argv 0]
set fp [open "$filename" r]
set file_data [ read $fp ]
set data [split $file_data "\n"]
# build the header row from the start time of each 10 second interval
foreach line $data {
    if {[ string match *PERCENTILES* $line ]} {
        set timeval "[ lindex [ split $line ] 3 ]"
        append xaxis "$timeval\t"
    }
}
puts "TIME INTERVALS"
puts "\t$xaxis"
# for each stored procedure print one row per percentile value
foreach storedproc {neword payment delivery slev ostat} {
    puts [ string toupper $storedproc ]
    foreach line $data {
        if {[ string match *PROCNAME* $line ]} { break }
        if {[ string match *$storedproc* $line ]} {
            regexp {MIN-[0-9.]+} $line min
            regsub {MIN-} $min "" min
            append minlist "$min\t"
            regexp {P50%-[0-9.]+} $line p50
            regsub {P50%-} $p50 "" p50
            append p50list "$p50\t"
            regexp {P95%-[0-9.]+} $line p95
            regsub {P95%-} $p95 "" p95
            append p95list "$p95\t"
            regexp {P99%-[0-9.]+} $line p99
            regsub {P99%-} $p99 "" p99
            append p99list "$p99\t"
            regexp {MAX-[0-9.]+} $line max
            regsub {MAX-} $max "" max
            append maxlist "$max\t"
        }
    }
    puts -nonewline "MIN\t"
    puts $minlist
    unset -nocomplain minlist
    puts -nonewline "P50\t"
    puts $p50list
    unset -nocomplain p50list
    puts -nonewline "P95\t"
    puts $p95list
    unset -nocomplain p95list
    puts -nonewline "P99\t"
    puts $p99list
    unset -nocomplain p99list
    puts -nonewline "MAX\t"
    puts $maxlist
    unset -nocomplain maxlist
}
close $fp

In this example we run the script above, passing the name of the logfile for the run where response times were captured, and direct the output to a file with a spreadsheet extension name.  Note that it is important to output the data to a file rather than printing it to a terminal and cutting and pasting from there into a spreadsheet: a terminal may format the output by removing the tab characters that are essential to the formatting.

$ ./extracttp.tcl pgtp.log > pgtp.txt

If we look in this file we can see that the script has extracted the data and output it into rows of values.

$ more pgtp.txt
TIME INTERVALS
     04:49:51    04:50:01    04:50:11    04:50:21    04:50:31    04:50:41    04:50:51    04:51:01    04:51:11    04:51:21    04:51:31    04:51:41    04:51:51    04:52:01    04:52:11    04:52:21    04:52:31    04:52:41    04:52:51    04:53:01    04:53:11    04:53:21    04:53:31    04:53:41    04:53:51    04:54:01    04:54:11    04:54:21    04:54:31    04:54:41    04:54:51    04:55:01    04:55:11    04:55:21    04:55:31    04:55:41    04:55:51    04:56:01    04:56:11    04:56:21    04:56:31    04:56:41    
 NEWORD
 MIN    457 519 684 734 828 829 795 894 803 880 775 840 774 851 821 792 720 849 751 813 715 879 823 778 853 739 807 812 797 781 841 852 775 865 832 686 805 845 813 822 863 833 
 P50    866 1099    1455    1918.5  2318    2267    2271    2307    2315    2347    2299    2296    2301.5  2313    2314    2351    2324    2312    2284    2347    2344 ...

We now have a choice.  In testing with Excel 2013 we can simply give this file a .xls extension and open it. If we do, it will give the following warning; however if you click OK it will open with the correctly formatted data.

Excel extension

Alternatively, if we open the file with the .txt extension it will show the 3 steps of the Text Import Wizard.  Click through the Wizard until Finish.

Text Import 1
Text Import 2
Text Import 3

After clicking Finish the data has been imported into the spreadsheet without warnings.

Imported Data

As an example we will highlight the rows we want to graph by clicking on the row numbers.

Highlighted Spreadsheet

If we then click on Insert and Recommended Charts, the default graph produced by Excel is shown below with the addition of a vertical axis title and a chart header.

Finally, when saving the spreadsheet make sure it is saved in Excel format rather than the imported Text (Tab delimited) format.

With this approach it is straightforward to turn the captured response times into a visual representation, bringing further analysis to your workloads.

HammerDB v3.3 event driven scaling

HammerDB v3.3 includes a new feature called event driven scaling to enable the scaling of virtual users to thousands of sessions running with keying and thinking time enabled. This feature adds further testing scenarios, with the ability to handle large numbers of connections or to test with connection pooling. This post explains the benefits this feature brings and how to enable it.

When running transactional workloads with HammerDB the default mode is CPU intensive, meaning that one virtual user will run as many transactions as possible.  As an example we will run a simple test by creating a 10 warehouse TPC-C schema and configuring a single virtual user to run a test in this default mode.

Once we’ve created this Virtual User and run the test for 1 minute on a simple PC configuration we receive a result showing that the Virtual User ran many thousands of transactions a minute.

Using HammerDB metrics we can also see that this single Virtual User was using a fair amount of CPU.

Now let's enable keying and thinking time by checking the “Keying and Thinking Time” option. We can now see that the transaction rate has dropped considerably.

It is clear that we would need to create a large number of Virtual Users to even begin approaching the number of transactions that a single virtual user could process without keying and thinking time. Event driven scaling is a feature that enables each Virtual User to create multiple database sessions and manage the keying and thinking time for each asynchronously in an event-driven loop, enabling HammerDB to create a much larger session count.  It should be clear that this feature is only designed to work with keying and thinking time enabled, as it is only the keying and thinking time that is managed asynchronously.
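
Conceptually, the approach relies on the Tcl event loop: rather than each session sleeping in its own thread, each client's next action is scheduled as a timed event. The fragment below is an illustrative sketch of this idea only, and not HammerDB's actual implementation:

# illustrative sketch only, not HammerDB's implementation: each
# simulated client rearms itself after a random thinking time, so
# one thread can service many clients; runs until interrupted
proc next_txn { client } {
    puts "client $client: simulated transaction"
    after [ expr {int(rand()*5000)} ] [ list next_txn $client ]
}
for { set c 1 } { $c <= 3 } { incr c } { next_txn $c }
vwait forever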

To configure this feature we now select Asynchronous Scaling, noting that Keying and Thinking Time is automatically selected. We have also selected 1000 clients for our one Virtual User and a Login Delay of 60ms, meaning that each client will wait for 60ms after the previous client has logged in before logging in itself. Finally we have chosen verbose output.  Note that with this feature it is important to allow the clients enough time to log in fully before measuring performance; at the end it will also take additional time for the clients to complete their current keying and thinking time and exit before the virtual user reports all clients as complete.
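
At the CLI the equivalent configuration takes roughly this form (SQL Server prefix shown as an illustration; verify the exact parameter names for your version with print dict):

diset tpcc mssqls_driver timed
diset tpcc mssqls_keyandthink true
diset tpcc mssqls_async_scale true
diset tpcc mssqls_async_client 1000
diset tpcc mssqls_async_delay 60
diset tpcc mssqls_async_verbose true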

We now create a single virtual user as before. However, this time once all the clients have connected we can see the 1000 asynchronous clients connected to the test database.

SELECT DB_NAME(dbid) as DBName, COUNT(dbid) as NumberOfConnections FROM sys.sysprocesses WHERE dbid > 0 GROUP BY dbid;
DBName NumberOfConnections
master 43
tempdb 1
tpcc 1003
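
For comparison, on PostgreSQL a similar per-database connection count can be obtained with a query such as the following:

SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;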

When the workload has completed (noting as before that additional time must be given to allow all of the asynchronous clients to complete their work and log off), we can see that HammerDB reports the active sessions based on the asynchronous client count, and that the transaction rate for the single virtual user is considerably higher than before, even with keying and thinking time enabled.

The event driven scaling feature is not intended to replace the default CPU intensive mode of testing, and it is expected that this will continue to be the most popular methodology. Instead, being able to scale up client sessions with keying and thinking time adds additional test scenarios for highly scalable systems.