HammerDB v4.10 New Features: Purge and Write back for MariaDB TPROC-C

Many of the HammerDB TPROC-C workloads have included features to prevent the database from doing maintenance tasks for a previous run whilst another run is taking place. This is particularly important when running automated workloads back-to-back to generate a performance profile for a progressively increasing number of virtual users. An example of this is the “Checkpoint when complete” option for Oracle, which flushes all dirty/modified data blocks in the in-memory buffer cache to disk and then switches the logfiles, so that the flushing does not take place during a subsequent run with database writer activity impacting performance.

History List

With MySQL and MariaDB a key maintenance activity that can impact performance is purging, which can be monitored through the history list length with a SQL statement such as the following.

select variable_value from information_schema.global_status where variable_name = 'INNODB_HISTORY_LIST_LENGTH';

When a workload deletes data from MySQL or MariaDB, the row is not removed straight away; it is only removed once it is no longer needed for read consistency and the referring undo record for the operation has been deleted. The history list length that can be queried is the number of undo log pages that contain changes. If the history list length grows too large then the larger number of row versions can impact query performance; however, the purging that processes the undo log pages can itself also impact performance, as it requires mutex locking. For this reason we have a number of parameters, notably innodb_purge_threads, innodb_purge_batch_size, innodb_max_purge_lag and innodb_max_purge_lag_delay. The number of purge threads and the batch size determine how aggressively purging takes place, while the purge lag and delay allow the throughput of transactions to be throttled if the history list grows too large, allowing purging to catch up.
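To check the current values of these parameters on your own server, a query such as the following standard MySQL/MariaDB statement can be used:

SHOW GLOBAL VARIABLES LIKE 'innodb%purge%';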

It should be clear that if you want to stress a MySQL or MariaDB database then a workload that includes delete operations is essential to the overall picture. Fortunately, in the HammerDB workload the DELIVERY stored procedure includes a loop that processes a large number of delete operations and is therefore a great opportunity to test the effectiveness of your purge settings. However, until recently the purge parameters could only be changed after a server restart, and therefore we have not been able to accelerate a purge after a workload has completed. Fortunately, with MariaDB the change “Make number of purge threads variable dynamic” makes the configuration dynamic, as its title suggests, and we can take advantage of this to run a purge to clear the history list after a workload has completed, so that the purging for one test run does not unduly impact the following run.

In addition to clearing the history list, when adding this functionality we can also write back the dirty buffers in the buffer pool at the same time.

Setting Purge and Write Back

To run the purge and write back after a MariaDB workload you need at least version 10.7.0 of MariaDB, the release in which the variables were made dynamic. If you do not have this version, HammerDB will report during a run that the settings cannot be made. If you have version 10.7.0 or above, in the GUI choose the Purge when complete checkbox in your driver settings.

Or in the CLI set the option maria_purge to true.

tpcc {
maria_count_ware = 30
...
maria_purge = true
}

With this setting enabled, run your MariaDB TPROC-C workload as normal.

Once the run has completed, HammerDB will dynamically set the variables to run the purge and write back, and will restore your variables when complete.

Choosing optimal purge settings

Of course, for this notebook based example the history list length does not build up unduly over the course of the workload; however, in a larger server environment with high transaction rates it is possible to see history list lengths such as the following, depending on your purge settings.

"TEST RESULT : System achieved 916487 NOPM from 2128748 MariaDB TPM",
"Starting purge: history list length 13107078",
"Starting write back: dirty buffer pages 2289677",

Typically, for optimal performance during a HammerDB run you want the purge threads and batch size at their default settings, namely innodb_purge_threads=4 and innodb_purge_batch_size=1000, and then set the purge lag and purge lag delay to low values to minimize delays during the test, such as innodb_max_purge_lag=0 and innodb_max_purge_lag_delay=1. As described previously, even in this large server environment HammerDB will modify these settings to accelerate the purge and write back after the run has completed and then restore your settings when it has finished, ready for the next workload, giving us the best possible results each time.
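As an illustration, on MariaDB 10.7.0 or above these variables can be set dynamically without a server restart; a sketch of the settings described above, with the values to be adjusted for your own environment:

SET GLOBAL innodb_purge_threads = 4;
SET GLOBAL innodb_purge_batch_size = 1000;
SET GLOBAL innodb_max_purge_lag = 0;
SET GLOBAL innodb_max_purge_lag_delay = 1;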

Summary

Now that MariaDB from version 10.7.0 allows us to set the purge settings dynamically, we can take advantage of this to complete the purge after one run has finished and before the next one starts, so that our test runs are entirely repeatable without being impacted by purging.

Thank you to Marko Mäkelä of MariaDB for guidance on exactly which settings were needed for purge and write back to make this new HammerDB feature possible.

Deploy HammerDB as a Web Application with CloudTK

From HammerDB v4.9 it is possible to deploy HammerDB through a web browser, giving you access to a fully functional HammerDB GUI in Cloud environments.

The quickest way to run HammerDB with CloudTK is through Docker, and in this post we will demonstrate how to pull the HammerDB CloudTK Docker image and run it to start running HammerDB through your chosen browser.

Firstly, pull the Docker image:

$ sudo docker pull tpcorg/hammerdb:latest-cloudtk
latest-cloudtk: Pulling from tpcorg/hammerdb
96d54c3075c9: Pull complete
96261b8fec13: Pull complete
d2c2e5928320: Pull complete
4f4fb700ef54: Pull complete
0dcebfea4cd0: Pull complete
4fea0d72408b: Pull complete
4d35a83249fa: Pull complete
a20f97744e61: Pull complete
181fc9db3bb2: Pull complete
442899704ccd: Pull complete
Digest: sha256:a73903ba354f9a7f826a69e2e1f3f8b3523220927323d0ba913efe168917ec02
Status: Downloaded newer image for tpcorg/hammerdb:latest-cloudtk
docker.io/tpcorg/hammerdb:latest-cloudtk

And then run it. If you want access to functionality such as CPU metrics, you should use the --net=host option.

$ sudo docker run --net=host --name hammerdb-cloudtk tpcorg/hammerdb:latest-cloudtk
can't find package limit
Running with default file descriptor limit
/debug user "debug" password "sycu4xfc.gcw"
httpd started on port 8081
secure httpd started on SSL port 8082

Otherwise, you can expose only the ports needed, as follows.

  docker run -p 8081:8081 -p 8082:8082 -p 8080:8080 --name hammerdb-cloudtk tpcorg/hammerdb:latest-cloudtk

Once running, you can connect to port 8081 for an unencrypted connection. Once connected, click on the launcher link.

If preferred, you can connect to an encrypted connection on port 8082. By default, sample self-signed certificates are installed, and these should be replaced by your own certificates.

Whichever connection method is used, click Submit Query on the HammerDB-CloudTK application.

HammerDB will then run fully contained within the browser.

You can change viewing options with the settings on the left of the screen.

You can now use HammerDB exactly as you would do with the GUI displayed on a local display.

The GUI can also be used for functionality to monitor your databases such as the PostgreSQL metrics.

The drag and drop functionality of these windows can be activated by dragging the tab to the title bar of the main HammerDB window.

It is also possible to start the HammerDB Web service from within the container by using the Jobs Options dialog to view the statistics from the jobs you have run.

Confirm that the web service is running.
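If preferred, you can first check from the command line that the web service is responding; a quick sketch, assuming the default port of 8080 has not been changed:

$ curl http://localhost:8080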

Then connect to the web service on port 8080 using another tab in your browser. You can then view the charts related to the workloads you have run.

Finally, when you have finished with your HammerDB session use the tab on the left to disconnect.

HammerDB with CloudTK provides you with the option to deploy the HammerDB GUI as a web application. In this post we have demonstrated the easiest way to do this with Docker; however, you can also deploy CloudTK within any HammerDB for Linux installation, as described in the readme in the HammerDB CloudTK repository.

HammerDB v4.9 New Feature: Accelerate SQL Server builds with BCP

This post provides an introduction to the new feature added to HammerDB v4.9 by @krithikasatish and @JoshInnis to provide accelerated load performance for both SQL Server TPROC-C and TPROC-H schemas.

This accelerated load is implemented using the BCP utility and is turned on or off with the “Use BCP Option” checkbox, with the default for v4.9 being to have this feature enabled.

Insert based load

In previous releases, HammerDB loaded data using inserts (or supported a manual use of BCP by generating flat files with the datagen option), and you can still perform schema loads with inserts by deselecting the Use BCP Option.

This earlier functionality has the advantage that no intermediate staging files are required, and data is inserted into the database with multi-row inserts as soon as it is created. The disadvantage of this approach is that the database sees the insert as a regular database insert, maintaining consistency and recoverability, with a round trip to the HammerDB client per multi-row insert, and this is what we see in the recent expensive queries.

In this example on a development PC the regular build started at 10:48:35 and ended at 10:54:46 meaning it took 6 minutes 11 seconds to build our 20 warehouse schema.

BCP based load

Now when we select the “Use BCP Option” we can see that we are now using an insert bulk command (although the item table, being fixed at 100K rows and loaded by the monitor virtual user, continues to use regular inserts).
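For reference, a standalone BCP character-mode load of a single table has the following general shape; the table, file and connection details here are placeholders for illustration rather than what HammerDB generates internally:

bcp tpcc.dbo.customer in customer.dat -S localhost -U sa -P <password> -c -b 10000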

In our PC test the build started at 11:28:16 and ended at 11:30:27 meaning it took 2 minutes 11 seconds to build our 20 warehouse schema.

Using BCP our TPROC-C build now completed almost 3X faster and tests on TPROC-H showed similar results.

Note that for the BCP functionality, temporary data files will be created and deleted in the location given by the TMP environment variable; you can find this location by running the command

(HammerDB-4.9) % puts $::env(TMP)
C:\Users\Hammer\AppData\Local\Temp

in the HammerDB console, and then see the files being created and deleted as the build progresses.

Summary

The new Use BCP Option for SQL Server accelerates both TPROC-C and TPROC-H schema loads, so you can get to running your benchmarks faster. We thank @krithikasatish and @JoshInnis for this excellent contribution to HammerDB.

Why you should benchmark your database using stored procedures

HammerDB uses stored procedures to achieve maximum throughput when benchmarking your database. HammerDB has always used stored procedures as a design decision, because the original benchmark was implemented as closely as possible to the example workload in the TPC-C specification, which uses stored procedures. Additionally, reviewing official TPC-C full disclosure reports highlighted that all vendors also use stored procedures.

However, there can be a lack of understanding of the benefits that stored procedures bring or if you have a benchmarking tool or database that doesn’t support stored procedures, then you have nothing to compare against.

This blog post introduces the new “No stored procedures” option for MariaDB and MySQL introduced with HammerDB v4.9 and explains how to measure the difference between running with and without stored procedures.

What is a stored procedure?

A stored procedure as it sounds is a procedure stored inside your database that you call with parameters. The business logic is implemented inside the routine and it returns a result. As an example from the TPC-C specification, this is the Stock Level procedure.

int slev()
{
EXEC SQL WHENEVER NOT FOUND GOTO sqlerr;
EXEC SQL WHENEVER SQLERROR GOTO sqlerr;
EXEC SQL SELECT d_next_o_id INTO :o_id FROM district WHERE d_w_id = :w_id AND d_id = :d_id;
EXEC SQL SELECT COUNT(DISTINCT(s_i_id)) INTO :stock_count
FROM order_line, stock
WHERE ol_w_id = :w_id AND
ol_d_id = :d_id AND ol_o_id < :o_id AND
ol_o_id >= :o_id - 20 AND
s_w_id = :w_id AND
s_i_id = ol_i_id AND s_quantity < :threshold;
EXEC SQL COMMIT WORK;
return(0);
sqlerr:
error();
}

and in MySQL and MariaDB we can implement this procedure as follows. We call the stored procedure passing a warehouse id, district id and threshold, and receive a stock_count as a result.

Also note in passing that we include a COMMIT statement in this procedure because one is included in the example code in the TPC-C specification.

CREATE PROCEDURE `SLEV` (
st_w_id INTEGER,
st_d_id INTEGER,
threshold INTEGER,
OUT stock_count INTEGER
)
BEGIN 
DECLARE st_o_id INTEGER;
DECLARE `Constraint Violation` CONDITION FOR SQLSTATE '23000';
DECLARE EXIT HANDLER FOR `Constraint Violation` ROLLBACK;
DECLARE EXIT HANDLER FOR NOT FOUND ROLLBACK;
START TRANSACTION;
SELECT d_next_o_id INTO st_o_id
FROM district
WHERE d_w_id=st_w_id AND d_id=st_d_id;
SELECT COUNT(DISTINCT (s_i_id)) INTO stock_count
FROM order_line, stock
WHERE ol_w_id = st_w_id AND
ol_d_id = st_d_id AND (ol_o_id < st_o_id) AND
ol_o_id >= (st_o_id - 20) AND s_w_id = st_w_id AND
s_i_id = ol_i_id AND s_quantity < threshold;
COMMIT;
END

Next we can run the same SQL directly without a stored procedure.

proc slev { maria_handler w_id stock_level_d_id prepare RAISEERROR } {

global mariastatus
set threshold [ RandomNumber 10 20 ]
mariaexec $maria_handler "start transaction"
set d_next_o_id [ list [ maria::sel $maria_handler "SELECT d_next_o_id FROM district WHERE d_w_id=$w_id AND d_id=$stock_level_d_id" -list ]]
set stock_count [ list [ maria::sel $maria_handler "SELECT COUNT(DISTINCT (s_i_id)) FROM order_line, stock WHERE ol_w_id = $w_id AND ol_d_id = $stock_level_d_id AND (ol_o_id < $d_next_o_id) AND ol_o_id >= ($d_next_o_id - 20) AND s_w_id = $w_id AND s_i_id = ol_i_id AND s_quantity < $threshold" -list ]]
maria::commit $maria_handler
}

In this example, when we run the client SQL version, d_next_o_id is 3001 and the stock count is 6.

SELECT d_next_o_id FROM district WHERE d_w_id=1 AND d_id=10
SELECT COUNT(DISTINCT (s_i_id)) FROM order_line, stock WHERE ol_w_id = 1
AND ol_d_id = 10 AND (ol_o_id < 3001) AND ol_o_id >= (3001 - 20)
AND s_w_id = 1 AND s_i_id = ol_i_id AND s_quantity < 11

and when we call the stored procedure, we get the same result for the stock count.

call slev(1,10,11,@stock_count);
Query OK, 2 rows
select @stock_count;
> +--------------+
> | @stock_count |
> +--------------+
> | 6            |
> +--------------+
> 1 row in set (0.000 sec)

Although we get the same result and run exactly the same SQL, there are notable differences in how we get there. Firstly, the stored procedure is compiled in the database, but the slev procedure on the client is also compiled into bytecode. With client SQL we also do more parsing of SQL statements and use literal values, although we could also use prepared statements in this scenario. However, the major difference is the number of round trips: whereas with the client SQL approach we make 4 round trips between the client and the database to receive our stock count, with the stored procedure we make 2, one to call the stored procedure with the input parameters and one to select the variable where the output is stored. With a simple example such as this, the additional network traffic between the two approaches would not necessarily be expected to be significant. However, with more complex application logic this network round trip soon becomes a key focus area for improving performance.
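As an aside, the literal client SQL above could also be prepared once and executed with parameters; a minimal sketch in the MySQL/MariaDB client, with the statement name purely illustrative:

PREPARE district_qry FROM 'SELECT d_next_o_id FROM district WHERE d_w_id=? AND d_id=?';
SET @w_id = 1, @d_id = 10;
EXECUTE district_qry USING @w_id, @d_id;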

Setting the HammerDB No Stored Procedures Option

The no stored procedures option is a driver script option, and you should create the schema as normal, with stored procedures. Then, when selecting the driver options to use client SQL only, select the No Stored Procedures checkbox.

In the CLI you set maria_no_stored_procs or mysql_no_stored_procs to use a client SQL driver script.
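For example, for MariaDB, following the diset convention used in the sample scripts:

diset tpcc maria_no_stored_procs true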

Stored Procedures and Client SQL comparison

To test the stored procedure and client SQL implementations, we ran both workloads against a system equipped with the Intel Xeon 8280L. The data shows a scripted, automated workload running a number of back-to-back tests, each time with an increasing number of virtual users.

On MySQL, we saw a 1.5X performance advantage in favour of stored procedures and on MariaDB a 1.3X performance advantage.

Note that for all tests we used the local loopback address and port to provide the lowest possible network latency between client and server, and also so that we have no concerns about bandwidth limitations.

# iperf -c 127.0.0.1

------------------------------------------------------------

Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 19230 connected with 127.0.0.1 port 5001
[  4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 19230
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  37.3 GBytes  32.1 Gbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  37.3 GBytes  32.1 Gbits/sec

We also tested both local port and socket connections and found that a local port provided the best comparative data for both MySQL and MariaDB.

Database information and performance schemas

The best approach for database performance analysis is to take a top-down approach. Use the performance metrics available in the database first before looking at data further down in the stack.

Using MariaDB and analysing performance at a workload of 80 Virtual Users, the first place we can look is the information schema user_statistics table, to quantify the difference in database traffic. In this example, we can capture the bytes received and sent for the workload without stored procedures

mysql> select * from information_schema.user_Statistics where user='root'\G;
*************************** 1. row ***************************
          USER: root
          TOTAL_CONNECTIONS: 83
          BYTES_RECEIVED: 22847015761
          BYTES_SENT: 50668052468
...

And also for the workload with stored procedures

mysql> select * from information_schema.user_Statistics where user='root'\G;
*************************** 1. row ***************************
          USER: root
          TOTAL_CONNECTIONS: 83
          BYTES_RECEIVED: 3548506639
          BYTES_SENT: 6335812312
...
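Note that these statistics depend on the user statistics feature being collected; in MariaDB, assuming sufficient privileges, the userstat variable can be enabled dynamically before the run:

SET GLOBAL userstat=1;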

With this first step, we have identified a difference: without stored procedures the server sent 8X and received 6.4X the amount of data to achieve the same result. With the workload running for 2 minutes of rampup and 5 minutes of test time, the throughput for the workload without stored procedures is equivalent to BYTES_RECEIVED: 54 MB/s and BYTES_SENT: 120 MB/s. This in itself is not an issue, as it means we are well within the system’s bandwidth capabilities. It also makes sense that the database sends more data than it receives, as it receives SQL queries and returns results; for the client it will be the opposite, receiving more data than is sent.

Therefore, the most important question is not so much how much data was sent, but how long it took and where the wait time was spent. At the highest level we can look at an event called io/socket/sql/client_connection to see how much time, measured in picoseconds, was spent in network related events, firstly without stored procedures

mysql> select * from events_waits_summary_global_by_event_name where event_name like '%client_connection%';
+--------------------------------------+------------+------------------+----------------+----------------+----------------+
| EVENT_NAME                           | COUNT_STAR | SUM_TIMER_WAIT   | MIN_TIMER_WAIT | AVG_TIMER_WAIT | MAX_TIMER_WAIT |
+--------------------------------------+------------+------------------+----------------+----------------+----------------+
| wait/io/socket/sql/client_connection |  350769863 | 2350211822039704 |              0 |        6699889 |     8921327926 |
+--------------------------------------+------------+------------------+----------------+----------------+----------------+
1 row in set (0.01 sec)

and secondly with stored procedures

mysql> select * from events_waits_summary_global_by_event_name where event_name like '%client_connection%';
+--------------------------------------+------------+-----------------+----------------+----------------+----------------+
| EVENT_NAME                           | COUNT_STAR | SUM_TIMER_WAIT  | MIN_TIMER_WAIT | AVG_TIMER_WAIT | MAX_TIMER_WAIT |
+--------------------------------------+------------+-----------------+----------------+----------------+----------------+
| wait/io/socket/sql/client_connection |   43053845 | 310781514128952 |              0 |        7218288 |     7889980872 |
+--------------------------------------+------------+-----------------+----------------+----------------+----------------+
1 row in set (0.01 sec)

Now we can observe that when we used client SQL we spent 7.6X the time waiting for the network compared to when we used stored procedures; it is not so much the time of each individual event as the cumulative difference.

Note that in some places, such as cloud provider documentation, the wait event io/socket/sql/client_connection is described as being related to thread creation; however, it can be observed that this metric increments when SQL statements are being run against the database rather than when clients are connecting. We can also drill down into the socket_summary_by_instance table to look further at the io/socket/sql/client_connection event, again firstly without stored procedures
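The per-instance data shown below can be retrieved with a query of the following form (as with the earlier queries, performance_schema is assumed to be the current database, and the \G terminator gives the vertical output shown):

mysql> select * from socket_summary_by_instance where event_name like '%client_connection%'\G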

*************************** 3. row ***************************
               EVENT_NAME: wait/io/socket/sql/client_connection
               COUNT_STAR: 356410019
           SUM_TIMER_WAIT: 2415153960635112
           MIN_TIMER_WAIT: 0
           AVG_TIMER_WAIT: 6775980
           MAX_TIMER_WAIT: 9975342936
               COUNT_READ: 178204845
           SUM_TIMER_READ: 133364264753112
           MIN_TIMER_READ: 0
           AVG_TIMER_READ: 748092
           MAX_TIMER_READ: 8409523512
 SUM_NUMBER_OF_BYTES_READ: 20950542108
              COUNT_WRITE: 178204678
          SUM_TIMER_WRITE: 2281784113760856
          MIN_TIMER_WRITE: 0
          AVG_TIMER_WRITE: 12804240
          MAX_TIMER_WRITE: 9975342936
SUM_NUMBER_OF_BYTES_WRITE: 49197673794
               COUNT_MISC: 496
           SUM_TIMER_MISC: 5582121144
           MIN_TIMER_MISC: 0
           AVG_TIMER_MISC: 11254116
           MAX_TIMER_MISC: 67299264

and secondly with:

*************************** 3. row ***************************
               EVENT_NAME: wait/io/socket/sql/client_connection
               COUNT_STAR: 43029616
           SUM_TIMER_WAIT: 308592755260896
           MIN_TIMER_WAIT: 0
           AVG_TIMER_WAIT: 7171416
           MAX_TIMER_WAIT: 6103182288
               COUNT_READ: 21514643
           SUM_TIMER_READ: 15175699757784
           MIN_TIMER_READ: 0
           AVG_TIMER_READ: 705312
           MAX_TIMER_READ: 5876657352
 SUM_NUMBER_OF_BYTES_READ: 3219606535
              COUNT_WRITE: 21514479
          SUM_TIMER_WRITE: 293411103781368
          MIN_TIMER_WRITE: 0
          AVG_TIMER_WRITE: 13637520
          MAX_TIMER_WRITE: 6103182288
SUM_NUMBER_OF_BYTES_WRITE: 6082914012
               COUNT_MISC: 494
           SUM_TIMER_MISC: 5951721744
           MIN_TIMER_MISC: 0
           AVG_TIMER_MISC: 12047964
           MAX_TIMER_MISC: 87785304
3 rows in set (0.01 sec)

We observe that io/socket/sql/client_connection does indeed provide us with the same data as the events_waits_summary_global_by_event_name table. The documentation also tells us that the COUNT_READ, SUM_TIMER_READ, MIN_TIMER_READ, AVG_TIMER_READ, MAX_TIMER_READ and SUM_NUMBER_OF_BYTES_READ columns aggregate all receive operations (RECV, RECVFROM, and RECVMSG), and the COUNT_WRITE, SUM_TIMER_WRITE, MIN_TIMER_WRITE, AVG_TIMER_WRITE, MAX_TIMER_WRITE and SUM_NUMBER_OF_BYTES_WRITE columns aggregate all send operations (SEND, SENDTO, and SENDMSG). So now we can relate the database statistics to what we can observe on the system: without stored procedures we spend 8.8X the time on read operations and 7.8X the time on writes, meaning we are spending a lot more time on the round trip between client and server, and we can use io/socket/sql/client_connection to quantify the difference.

To put it differently, at least some of the time spent in the network accounts for the lower database throughput when we are not using stored procedures.

We could also expect that where operations start a transaction and then take locks, if we are spending more time in the network then we could also be spending more time in lock waits. In this example, measuring without stored procedures

mysql> select object_name, sum_timer_wait from table_lock_waits_summary_by_table where object_name = 'warehouse';
+-------------+----------------+
| object_name | sum_timer_wait |
+-------------+----------------+
| warehouse   |  3890167468704 |
+-------------+----------------+
1 row in set (0.00 sec)

mysql> select object_name, sum_timer_wait from table_lock_waits_summary_by_table where object_name = 'district';
+-------------+----------------+
| object_name | sum_timer_wait |
+-------------+----------------+
| district    |  5874029749344 |
+-------------+----------------+
1 row in set (0.00 sec)

compared to the same workload with stored procedures

mysql> select object_name, sum_timer_wait from table_lock_waits_summary_by_table where object_name = 'warehouse';
+-------------+----------------+
| object_name | sum_timer_wait |
+-------------+----------------+
| warehouse   |  4039611353088 |
+-------------+----------------+
1 row in set (0.00 sec)
mysql> select object_name, sum_timer_wait from table_lock_waits_summary_by_table where object_name = 'district';
+-------------+----------------+
| object_name | sum_timer_wait |
+-------------+----------------+
| district    |  5383666516296 |
+-------------+----------------+
1 row in set (0.00 sec)

The amount of time in lock waits goes up on the warehouse table but down on the district table, as on the district table we have a SELECT FOR UPDATE statement that now includes a round trip between the SELECT and the UPDATE.

To visualize the overall difference we can generate a flame graph, again firstly without stored procedures

and secondly with

When you generate your own flame graphs you will have an interactive SVG file allowing you to drill down into both client and server functions; however, the difference should be immediately clear at first glance.

The width of the bars shows the time spent in each function, and at the base, as expected, we are spending time in mariadbd, the tclsh8.6 client (i.e. HammerDB) and idle.

With stored procedures, the time spent in the network, and indeed in the HammerDB client at all, is minimal compared to the time spent in the database. This is intentional, as the application logic is in a stored procedure within the database, so we can see that the time spent in do_execute_sp and the functions it calls, in particular mysql_execute_command, is exactly the same as when we are not using stored procedures, i.e. the SQL is the same, but we are calling it faster.

When we run without stored procedures we can visualise what we saw from io/socket/sql/client_connection, in that we are now spending a lot more time in send and recv functions in both the client and the server. Note that within HammerDB this additional time is spent within the MariaDB client library.

We can also see additional time in MYSQLParse when we are not using stored procedures, because we are sending SQL statements with literal values that may be different each time and are therefore parsed each time, whereas stored procedures use parameters, meaning the identical SQL can be reused.

System Performance with LinuxKI

From the previous analysis we can observe that the client and database are spending a lot more time in the network without stored procedures; however, the previous timings only showed us active times. For a more complete picture, each Virtual User is a parallel thread that matches a client connection, meaning that we will now have both active and also idle time when both client and server are waiting to receive data.

mysql> select * from socket_instances;
+----------------------------------------+-----------------------+-----------+-----------+-----------+-------+--------+
| EVENT_NAME                             | OBJECT_INSTANCE_BEGIN | THREAD_ID | SOCKET_ID | IP        | PORT  | STATE  |
+----------------------------------------+-----------------------+-----------+-----------+-----------+-------+--------+
| wait/io/socket/sql/server_tcpip_socket |        94367427331136 |         1 |        15 | ::1       |  3306 | ACTIVE |
| wait/io/socket/sql/server_tcpip_socket |        94367427331456 |         1 |        16 | 127.0.0.1 |  3306 | ACTIVE |
| wait/io/socket/sql/server_unix_socket  |        94367427331776 |         1 |        17 |           |     0 | ACTIVE |
| wait/io/socket/sql/client_connection   |        94367427332096 |         9 |        38 |           |     0 | ACTIVE |
| wait/io/socket/sql/client_connection   |        94367427384896 |      2281 |        39 | 127.0.0.1 | 58684 | IDLE   |
| wait/io/socket/sql/client_connection   |        94367427385216 |      2282 |        40 | 127.0.0.1 | 58690 | IDLE   |
| wait/io/socket/sql/client_connection   |        94367427385536 |      2283 |        41 | 127.0.0.1 | 58696 | ACTIVE |
...
| wait/io/socket/sql/client_connection   |        94367427410176 |      2360 |       144 | 127.0.0.1 | 50442 | ACTIVE |
| wait/io/socket/sql/client_connection   |        94367427410496 |      2361 |       145 | 127.0.0.1 | 50444 | ACTIVE |
| wait/io/socket/sql/client_connection   |        94367427410816 |      2362 |       146 | 127.0.0.1 | 50458 | IDLE   |
+----------------------------------------+-----------------------+-----------+-----------+-----------+-------+--------+
86 rows in set (0.00 sec)

When using client SQL we are going to spend more time in the network, more time idle waiting to send and receive, and more time for the threads to context switch between states.

To illustrate this we will use the LinuxKI Toolset, which is designed to answer two key questions: if it’s running, what’s it doing? and if it’s waiting, what’s it waiting on? To do this we run LinuxKI for 20 seconds while running the workload with and without stored procedures and view the LinuxKI report.

When we observe the global CPU usage, one clear difference is the increase in softirqs or software interrupts. (If we were running over a network using the network card we would also see an increase in hardware interrupts.)

In the LinuxKI report we can see the difference without stored procedures

and with stored procedures

so without stored procedures we are spending 6.8X the time in the NET_RX soft IRQ, utilising 5% of the available CPU, where the main function is net_rx_action, which processes the data flow. In both client and server you can see the data being sent, received and read (in both the MariaDB client and server).

Note the function poll_schedule_timeout in both client and server, as this is what we see in the trace events of the top processes calling sleep; we are seeing timeouts and tasks being descheduled while we are polling inside MariaDB waiting for data, and then woken up when the data is ready.

and, for contrast, with stored procedures we now see futex for MariaDB mutex locking, as we are driving a greater level of database throughput, and futex_wait_queue_me, as we are waiting for mutex acquisition.

If our MariaDB threads are polling and then timing out, they are going to be descheduled and doing more context switches, which we can observe on a system-wide basis with vmstat or, in this example, on a per-thread basis with pidstat. Capturing this data without stored procedures:

pidstat -wt 3 10
04:01:02 AM  1001         -   3452701  12931.33      7.33  |__mariadbd
04:01:02 AM  1001         -   3452702  13691.00     13.33  |__mariadbd
04:01:02 AM  1001         -   3452703  11562.67     13.67  |__mariadbd
04:01:02 AM  1001         -   3452705  13753.67     14.33  |__mariadbd
04:01:02 AM  1001         -   3452716  12376.00     11.33  |__mariadbd
04:01:02 AM  1001         -   3452717  13874.00     10.00  |__mariadbd
04:01:02 AM  1001         -   3452720  11129.00      9.67  |__mariadbd
04:01:02 AM  1001         -   3452723  13543.67      9.33  |__mariadbd
04:01:02 AM  1001         -   3452726  11826.67     12.00  |__mariadbd
04:01:02 AM  1001         -   3452769   3810.67      3.00  |__mariadbd
04:01:02 AM  1001         -   3452770  12672.00      9.67  |__mariadbd
04:01:02 AM  1001         -   3452773  13539.00     11.67  |__mariadbd
04:01:02 AM  1001         -   3452774  11724.33     11.33  |__mariadbd
04:01:02 AM  1001         -   3452777  12707.67     12.00  |__mariadbd
...
04:01:02 AM     0         -   3456145   8858.33     11.33  |__tclsh8.6
04:01:02 AM     0         -   3456146   9661.00      8.33  |__tclsh8.6
04:01:02 AM     0         -   3456147   8439.67     10.00  |__tclsh8.6
04:01:02 AM     0         -   3456148   8969.33      9.00  |__tclsh8.6
04:01:02 AM     0         -   3456149   8453.00      8.00  |__tclsh8.6
04:01:02 AM     0         -   3456150   7752.67      8.00  |__tclsh8.6
04:01:02 AM     0         -   3456151   6378.33      8.67  |__tclsh8.6
04:01:02 AM     0         -   3456152   4978.00      6.33  |__tclsh8.6
04:01:02 AM     0         -   3456153   4054.67      1.67  |__tclsh8.6
04:01:02 AM     0         -   3456154   2726.67      3.00  |__tclsh8.6

and with stored procedures.

pidstat -wt 3 10
04:07:12 AM  1001         -   3452701   8474.67      7.00  |__mariadbd
04:07:12 AM  1001         -   3452702   8464.33      6.67  |__mariadbd
04:07:12 AM  1001         -   3452703   8403.00      6.33  |__mariadbd
04:07:12 AM  1001         -   3452704   8339.33      5.33  |__mariadbd
04:07:12 AM  1001         -   3452705   8712.00      6.00  |__mariadbd
04:07:12 AM  1001         -   3452706   8656.00      3.33  |__mariadbd
04:07:12 AM  1001         -   3452707   8671.67      5.67  |__mariadbd
04:07:12 AM  1001         -   3452708   8585.67      9.00  |__mariadbd
04:07:12 AM  1001         -   3452716   8361.33      6.67  |__mariadbd
04:07:12 AM  1001         -   3452717   8541.67      6.00  |__mariadbd
04:07:12 AM  1001         -   3452718   8664.33      8.00  |__mariadbd
04:07:12 AM  1001         -   3452719   8574.00      6.33  |__mariadbd
04:07:12 AM  1001         -   3452720   8748.00      3.33  |__mariadbd
...
04:07:12 AM     0         -   3457711    645.33      9.33  |__tclsh8.6
04:07:12 AM     0         -   3457712    646.00     15.00  |__tclsh8.6
04:07:12 AM     0         -   3457713    674.33     15.33  |__tclsh8.6
04:07:12 AM     0         -   3457714    649.33     14.00  |__tclsh8.6
04:07:12 AM     0         -   3457715    652.67     14.00  |__tclsh8.6
04:07:12 AM     0         -   3457716    687.67     15.67  |__tclsh8.6
04:07:12 AM     0         -   3457717    634.67     14.33  |__tclsh8.6
04:07:12 AM     0         -   3457718    638.33     11.67  |__tclsh8.6
04:07:12 AM     0         -   3457719    654.00     16.00  |__tclsh8.6
04:07:12 AM     0         -   3457720    703.33     15.33  |__tclsh8.6
04:07:12 AM     0         -   3457721    669.67      7.67  |__tclsh8.6
04:07:12 AM     0         -   3457722    656.33     12.00  |__tclsh8.6
04:07:12 AM     0         -   3457723    681.33     13.00  |__tclsh8.6

We can see that HammerDB is now doing almost 14X the number of context switches in the MariaDB client (tclsh8.6) without stored procedures. This is also going to cause run queue latency to go up, as our threads spend more time being switched off the CPU and back on again. As a thread can also be scheduled on another CPU, we also see the number of migrations between CPUs increase, as well as the node migrations between different NUMA nodes. Again, without stored procedures

and to contrast, with stored procedures

and thread migration can introduce a delay in the NET_RX soft irq performance.

Although the LinuxKI report has a wealth more information, we can relate this back to the database statistics by observing the sendto/recvfrom activity on the MariaDB ports, firstly without stored procedures.

For example, the data sent to port 3306 matches the BYTES_RECEIVED: 54 MB/s from the database statistics, and the data received from port 3306 matches the BYTES_SENT: 120 MB/s. The other ports show the HammerDB Virtual Users. And the same data with stored procedures.

LinuxKI enables us to have a system wide view of database performance.

Summary

The key question in this post is why you should benchmark your database using stored procedures. Even though the example uses the local loopback address, when we run the same workload without stored procedures we can observe how much more time we spend in the network and in context switches rather than in the database. When we use stored procedures we spend as much time as possible in the database, driving higher levels of throughput, meaning we are better able to observe the performance capabilities of the database and system under test.

Nevertheless, in some cases testing without stored procedures may be useful, particularly if we are observing the impact of changes at the network layer such as encryption or compression, and the HammerDB v4.9 no stored procedures option gives you the flexibility to benchmark such a configuration.

However, if you are looking to benchmark a database with client SQL, you should account for how much time is spent in each layer, especially if you are running the test over an external network and bring hardware interrupts into play.

How to run a fixed throughput workload with HammerDB

This post gives an introduction to understanding how to run a fixed throughput OLTP workload with HammerDB.  In this example, we will use the CLI to run TPROC-C on a MariaDB database to illustrate the concepts.

Setting the bar with the default workload

Firstly, it is important to understand that the majority of users wanting to run an OLTP workload will use the default approach. This is typically the fastest way to determine the maximum throughput of a database with minimal configuration, and over time it has been proven to show the same performance ratios between systems as a well configured fixed throughput setup. Therefore, the first thing we will do is run a default workload, using the sample scripts provided, as shown.

set tmpdir $::env(TMP)
puts "SETTING CONFIGURATION"
dbset db maria
dbset bm TPC-C

diset connection maria_host localhost
diset connection maria_port 3306
diset connection maria_socket /tmp/mariadb.sock

diset tpcc maria_user root
diset tpcc maria_pass maria
diset tpcc maria_dbase tpcc
diset tpcc maria_driver timed
diset tpcc maria_rampup 2
diset tpcc maria_duration 5
diset tpcc maria_allwarehouse false
diset tpcc maria_timeprofile false

loadscript
puts "TEST STARTED"
vuset vu 80
vucreate
tcstart
tcstatus
set jobid [ vurun ]
vudestroy
tcstop
puts "TEST COMPLETE"
set of [ open $tmpdir/maria_tprocc w ]
puts $of $jobid
close $of

In this test, our MariaDB 10.10.1 database, built with 1000 warehouses, returned just over 700,000 NOPM, illustrating the upper limit of the system and database combination we are testing.

Vuser 1:80 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 701702 NOPM from 1631827 MariaDB TPM

Fixing throughput with keying and thinking time

The key concept for understanding fixed throughput is that of keying and thinking time. This setting is exactly as it sounds and simulates the delay a real user would have in inputting data, i.e. keying time, and reading the results returned, i.e. thinking time. So, as a first step, we will go ahead and set keying and thinking time for one virtual user and run the test for 2 minutes of rampup and 2 minutes of testing time.

diset tpcc maria_keyandthink true

When we run the test, we can see that our virtual user ran transactions at 1 NOPM (new orders per minute), with most of the time now spent in keying and thinking time.

TEST STARTED
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
2 Virtual Users Created with Monitor VU
Transaction Counter Started
Transaction Counter thread running with threadid:tid0x7f141b7fe700
Vuser 1:RUNNING
Vuser 1:Ssl_cipher
0 MariaDB tpm
Vuser 1:Beginning rampup time of 2 minutes
Vuser 2:RUNNING
Vuser 2:Ssl_cipher
Vuser 2:Processing 10000000 transactions with output suppressed...
0 MariaDB tpm
0 MariaDB tpm
0 MariaDB tpm
0 MariaDB tpm
6 MariaDB tpm
Vuser 1:Rampup 1 minutes complete ...
6 MariaDB tpm
0 MariaDB tpm
0 MariaDB tpm
12 MariaDB tpm
6 MariaDB tpm
6 MariaDB tpm
Vuser 1:Rampup 2 minutes complete ...
Vuser 1:Rampup complete, Taking start Transaction Count.
Vuser 1:Timing test period of 2 in minutes
0 MariaDB tpm
6 MariaDB tpm
0 MariaDB tpm
0 MariaDB tpm
6 MariaDB tpm
12 MariaDB tpm
Vuser 1:1 ...,
6 MariaDB tpm
6 MariaDB tpm
0 MariaDB tpm
0 MariaDB tpm
0 MariaDB tpm
6 MariaDB tpm
Vuser 1:2 ...,
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:1 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 1 NOPM from 3 MariaDB TPM
Vuser 1:FINISHED SUCCESS

Now, let’s see what happens if we increase the number of virtual users to 10 and 100. Again, each virtual user we add runs at approximately 1 NOPM.

Vuser 1:10 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 14 NOPM from 29 MariaDB TPM

Vuser 1:100 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 117 NOPM from 284 MariaDB TPM

So what we have by adding keying and thinking time is a fixed throughput workload. Each virtual user we add is going to add approximately 1 NOPM, so we know exactly the configuration we need to test a particular level of throughput up to the limits of the system we found with the default workload.

Maximum Throughput

So far we have seen throughput of approximately 1 NOPM per virtual user. However, the HammerDB TPROC-C workload is derived from the TPC-C specification using the same keying and thinking times, with the specification determining maximum throughput as follows:

The maximum throughput is achieved with infinitely fast transactions resulting in a null response time and minimum required wait times. The intent of this clause is to prevent reporting a throughput that exceeds this maximum, which is computed to be 12.86 tpmC per warehouse.

In the TPC-C specification each warehouse has 10 terminal connections, and therefore with the HammerDB derived workload this gives 12.86 / 10, or approximately 1.28 NOPM per virtual user, as a guide to our fixed throughput.

Asynchronous Scaling

Now, this guide of 1.28 NOPM per virtual user presents a challenge if we want to scale numerous virtual users to drive meaningful throughput, in particular if these virtual users are spending most of their time waiting for keying or thinking time. For this reason, HammerDB implements asynchronous scaling, whereby each virtual user can make multiple connections to the database and run the transactions for these connections when they wake from their keying and thinking time. We have covered the concept of parallelism vs concurrency in many blog posts before, e.g.

Why Tcl is 700% faster than Python for database benchmarking

Therefore, it should be clear that the virtual users are operating in parallel and the asynchronous connections concurrently. This way, we can scale HammerDB fixed throughput workloads to thousands of connections by setting the async_scale and async_client options. Note that keyandthink must be enabled, as it is the keying and thinking time that is managed asynchronously.

diset tpcc maria_async_scale true
diset tpcc maria_async_client 10

So let’s re-run with 1 virtual user, but this time with 10 asynchronous clients. We can see that the virtual user now makes 10 connections and we get the 12 NOPM we are expecting.

TEST STARTED
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
2 Virtual Users Created with Monitor VU
Transaction Counter Started
Transaction Counter thread running with threadid:tid0x7f1ce2b97700
Vuser 1:RUNNING
Vuser 1:Ssl_cipher
0 MariaDB tpm
Vuser 1:Beginning rampup time of 2 minutes
Vuser 2:RUNNING
Vuser 2:Started asynchronous clients:vuser2:ac1 vuser2:ac2 vuser2:ac3 vuser2:ac4 vuser2:ac5 vuser2:ac6 vuser2:ac7 vuser2:ac8 vuser2:ac9 vuser2:ac10
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 2:Processing 10000000 transactions with output suppressed...
...
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:1 VU * 10 AC = 10 Active Sessions configured
Vuser 1:TEST RESULT : System achieved 12 NOPM from 31 MariaDB TPM
Vuser 1:FINISHED SUCCESS

Now if we ramp up the connections to 100 asynchronous clients, we get 128 NOPM.

Vuser 2:RUNNING
Vuser 2:Started asynchronous clients:vuser2:ac1 vuser2:ac2 vuser2:ac3 vuser2:ac4 vuser2:ac5 vuser2:ac6 vuser2:ac7 vuser2:ac8 vuser2:ac9 vuser2:ac10 vuser2:ac11 vuser2:ac12 vuser2:ac13 vuser2:ac14 vuser2:ac15 vuser2:ac16 vuser2:ac17 vuser2:ac18 vuser2:ac19 vuser2:ac20 vuser2:ac21 vuser2:ac22 vuser2:ac23 vuser2:ac24 vuser2:ac25 vuser2:ac26 vuser2:ac27 vuser2:ac28 vuser2:ac29 vuser2:ac30 vuser2:ac31 vuser2:ac32 vuser2:ac33 vuser2:ac34 vuser2:ac35 vuser2:ac36 vuser2:ac37 vuser2:ac38 vuser2:ac39 vuser2:ac40 vuser2:ac41 vuser2:ac42 vuser2:ac43 vuser2:ac44 vuser2:ac45 vuser2:ac46 vuser2:ac47 vuser2:ac48 vuser2:ac49 vuser2:ac50 vuser2:ac51 vuser2:ac52 vuser2:ac53 vuser2:ac54 vuser2:ac55 vuser2:ac56 vuser2:ac57 vuser2:ac58 vuser2:ac59 vuser2:ac60 vuser2:ac61 vuser2:ac62 vuser2:ac63 vuser2:ac64 vuser2:ac65 vuser2:ac66 vuser2:ac67 vuser2:ac68 vuser2:ac69 vuser2:ac70 vuser2:ac71 vuser2:ac72 vuser2:ac73 vuser2:ac74 vuser2:ac75 vuser2:ac76 vuser2:ac77 vuser2:ac78 vuser2:ac79 vuser2:ac80 vuser2:ac81 vuser2:ac82 vuser2:ac83 vuser2:ac84 vuser2:ac85 vuser2:ac86 vuser2:ac87 vuser2:ac88 vuser2:ac89 vuser2:ac90 vuser2:ac91 vuser2:ac92 vuser2:ac93 vuser2:ac94 vuser2:ac95 vuser2:ac96 vuser2:ac97 vuser2:ac98 vuser2:ac99 vuser2:ac100
...
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:1 VU * 100 AC = 100 Active Sessions configured
Vuser 1:TEST RESULT : System achieved 128 NOPM from 286 MariaDB TPM
Vuser 1:FINISHED SUCCESS

and 10,000 connections give us 12,612 NOPM, which is what we would expect from our fixed throughput configuration.

mysql> show processlist
...
10003 rows in set (0.01 sec)
...
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:100 VU * 100 AC = 10000 Active Sessions configured
Vuser 1:TEST RESULT : System achieved 12612 NOPM from 29201 MariaDB TPM

Remember that you should also increase the rampup time to allow enough time for all of the asynchronous clients to connect and reach our fixed throughput rate.
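For example, in the driver options; the value here is only a suggestion, as with async_delay at its default of 1000ms, 100 asynchronous clients per virtual user take 100 seconds just to log in:

diset tpcc maria_rampup 5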

Modifying the keying and thinking time

If you want to modify the keying and thinking time, you can do so by modifying the script you are running. Firstly, run the savescript command.

hammerdb>savescript fixed.tcl
Success ... wrote script to /home/HammerDB-4.8/TMP/fixed.tcl

and then edit the file. Towards the end of the file you can see the section where transaction selection takes place and the implementation of keying and thinking time.

        set choice [ RandomNumber 1 23 ]
                if {$choice <= 10} {
                    if { $async_verbose } { puts "$clientname:w_id:$w_id:neword" }
                    if { $KEYANDTHINK } { async_keytime 18  $clientname neword $async_verbose }
                    neword $maria_handler $w_id $w_id_input $prepare $RAISEERROR $clientname
                    if { $KEYANDTHINK } { async_thinktime 12 $clientname neword $async_verbose }
                } elseif {$choice <= 20} {
                    if { $async_verbose } { puts "$clientname:w_id:$w_id:payment" }
                    if { $KEYANDTHINK } { async_keytime 3 $clientname payment $async_verbose }
                    payment $maria_handler $w_id $w_id_input $prepare $RAISEERROR $clientname
                    if { $KEYANDTHINK } { async_thinktime 12 $clientname payment $async_verbose }
                } elseif {$choice <= 21} {
                    if { $async_verbose } { puts "$clientname:w_id:$w_id:delivery" }
                    if { $KEYANDTHINK } { async_keytime 2 $clientname delivery $async_verbose }
                    delivery $maria_handler $w_id $prepare $RAISEERROR $clientname
                    if { $KEYANDTHINK } { async_thinktime 10 $clientname delivery $async_verbose }
                } elseif {$choice <= 22} {
                    if { $async_verbose } { puts "$clientname:w_id:$w_id:slev" }
                    if { $KEYANDTHINK } { async_keytime 2 $clientname slev $async_verbose }
                    slev $maria_handler $w_id $stock_level_d_id $prepare $RAISEERROR $clientname
                    if { $KEYANDTHINK } { async_thinktime 5 $clientname slev $async_verbose }
                } elseif {$choice <= 23} {
                    if { $async_verbose } { puts "$clientname:w_id:$w_id:ostat" }
                    if { $KEYANDTHINK } { async_keytime 2 $clientname ostat $async_verbose }
                    ostat $maria_handler $w_id $prepare $RAISEERROR $clientname
                    if { $KEYANDTHINK } { async_thinktime 5 $clientname ostat $async_verbose }
                }

So, we will roughly halve the keying and thinking time (the values must be whole numbers) and save the file.

set choice [ RandomNumber 1 23 ]
                if {$choice <= 10} {
                    if { $async_verbose } { puts "$clientname:w_id:$w_id:neword" }
                    if { $KEYANDTHINK } { async_keytime 9  $clientname neword $async_verbose }
                    neword $maria_handler $w_id $w_id_input $prepare $RAISEERROR $clientname
                    if { $KEYANDTHINK } { async_thinktime 6 $clientname neword $async_verbose }
                } elseif {$choice <= 20} {
                    if { $async_verbose } { puts "$clientname:w_id:$w_id:payment" }
                    if { $KEYANDTHINK } { async_keytime 2 $clientname payment $async_verbose }
                    payment $maria_handler $w_id $w_id_input $prepare $RAISEERROR $clientname
                    if { $KEYANDTHINK } { async_thinktime 6 $clientname payment $async_verbose }
                } elseif {$choice <= 21} {
                    if { $async_verbose } { puts "$clientname:w_id:$w_id:delivery" }
                    if { $KEYANDTHINK } { async_keytime 1 $clientname delivery $async_verbose }
                    delivery $maria_handler $w_id $prepare $RAISEERROR $clientname
                    if { $KEYANDTHINK } { async_thinktime 5 $clientname delivery $async_verbose }
                } elseif {$choice <= 22} {
                    if { $async_verbose } { puts "$clientname:w_id:$w_id:slev" }
                    if { $KEYANDTHINK } { async_keytime 1 $clientname slev $async_verbose }
                    slev $maria_handler $w_id $stock_level_d_id $prepare $RAISEERROR $clientname
                    if { $KEYANDTHINK } { async_thinktime 3 $clientname slev $async_verbose }
                } elseif {$choice <= 23} {
                    if { $async_verbose } { puts "$clientname:w_id:$w_id:ostat" }
                    if { $KEYANDTHINK } { async_keytime 1 $clientname ostat $async_verbose }
                    ostat $maria_handler $w_id $prepare $RAISEERROR $clientname
                    if { $KEYANDTHINK } { async_thinktime 3 $clientname ostat $async_verbose }
                }

Now, we can use customscript to load the modified workload and custommonitor to configure the additional monitor virtual user, as shown in this excerpt.

#loadscript
customscript /home/HammerDB-4.8/TMP/fixed.tcl
custommonitor timed
puts "TEST STARTED"
vuset vu 100

Note that with a customscript the values you have configured for the asynchronous clients will now be set in this script and not loaded dynamically.

set async_client 100;# Number of asynchronous clients per Vuser
set async_verbose false;# Report activity of asynchronous clients
set async_delay 1000;# Delay in ms between logins of asynchronous clients

So, now that we have halved the keying and thinking time, we have doubled the throughput.

Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:100 VU * 100 AC = 10000 Active Sessions configured
Vuser 1:TEST RESULT : System achieved 24310 NOPM from 56630 MariaDB TPM
Vuser 1:FINISHED SUCCESS

You should not take the keying and thinking time down to minimal values; remember that a single virtual user is servicing the requests from all of its configured asynchronous connections.

Using middleware

We have illustrated the concept of a fixed throughput workload up to 10,000 connections, and if you have followed the examples you will already have needed to increase system limits such as open files and database limits such as max connections. Nevertheless, the maximum setting for the number of connections for MariaDB is 100,000, still short of the several hundred thousand connections that would be needed, at approximately 1.28 NOPM each, to approach the 700,000 NOPM of our default workload. For this reason, when configuring such a fixed throughput workload you will need to configure middleware such as MaxScale or ProxySQL in our MariaDB example. Typically you will have the actual database connections numbered in the low hundreds, with HammerDB connecting to the proxy instead. Additionally, you can configure HammerDB for remote primary and replica modes, connecting multiple instances of HammerDB using asynchronous clients to scale up to the hundreds of thousands of connections needed to drive your system to the maximum throughput achieved with the default workload.

HammerDB v4.8 New Features : Viewing your Benchmark results with the HammerDB Web Service

HammerDB v4.7 introduced the concept of Jobs to the CLI, a central repository to store workload output, configuration and results. v4.8 extends this functionality to the GUI and adds a Web Service to view and visualize this Job related data.

Turning Jobs Off and On

Of course, one of the most important aspects is being able to disable the functionality. To turn the functionality on or off in the CLI, use the jobs disable command.

hammerdb>jobs disable 1
Disabling jobs repository, restart HammerDB to take effect

and in the GUI use the Jobs Disabled/Enabled option.

Configuring the Jobs Database

The base configuration is set in the XML files in the config directory, with the jobs SQLite database being set in the commandline section of generic.xml by specifying the filename. So, for example, to change this to hammer.db in the DATA directory, set it as follows:

<commandline>
    <sqlite_db>/opt/HammerDB-4.8/DATA/hammer.db</sqlite_db> 
       <keepalive_margin>10</keepalive_margin>
       <jobsoutput>JSON</jobsoutput>
       <jobs_disable>0</jobs_disable>
</commandline>

Then HammerDB will initialize this database

$ ./hammerdbcli
HammerDB CLI v4.8
Copyright (C) 2003-2023 Steve Shaw
Type "help" for a list of commands
Initialized new Jobs on-disk database /opt/HammerDB-4.8/DATA/hammer.db

Note that this is the Jobs SQLite database. We also have the configuration SQLite databases that persist the configuration over restarts. This configuration also includes the jobs SQLite database location, so after the configuration SQLite databases are created, changes to the XML files are not re-read until the files are deleted (or HammerDB detects a different version and refreshes them automatically).

Therefore, the best approach is to leave the configuration as the default of TMP and set an environment variable to the preferred location. On Linux it will look for the environment variables TMP, TMPDIR, or TEMP and then finally look for /tmp. On Windows it will look for TEMP, TMP, TMPDIR or finally C:\. For example, on Linux:

$ export TMP=`pwd`/TMP
$ ./hammerdbcli 
HammerDB CLI v4.8
Copyright (C) 2003-2023 Steve Shaw
Type "help" for a list of commands
Initialized new Jobs on-disk database /opt/HammerDB-4.8/TMP/hammer.DB

This database will be used by both the CLI and GUI and can be moved or removed as preferred. When jobs are enabled, if the database file does not exist in the specified location, it will be recreated.

Running a workload

Jobs only modify how output is stored and not how the workload itself is run. In this example, we will run a PostgreSQL TPROC-C autopilot workload to generate some jobs to analyse. Note that if we want to capture a transaction count during a run, the transaction counter must be enabled, and similarly, if we want to capture timing data, this option must be enabled.

We can see that a Job id is created each time a new workload is run.
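For reference, a minimal CLI sketch that generates an equivalent Job, with both the transaction counter and time profiling enabled (the virtual user count and other values here are illustrative; set your driver options as preferred):

dbset db pg
diset tpcc pg_driver timed
diset tpcc pg_timeprofile true
loadscript
vuset vu 4
vucreate
tcstart
vurun
tcstop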

Starting and Stopping the Web Service

To start and stop the Web Service under the GUI select the Jobs Options under the Tree menu.

The options enable you to start/stop and query the status of the Web Service.

When running, the status will show the Web Service environment.

Note that on Windows, security features may prevent you from opening the port, which is 8080 by default, and therefore you may need to review the additional options for starting and stopping the Web Service. For this reason we will give the examples on Windows; however, the commands are the same on Linux.
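For example, one way to allow the port through Windows Defender Firewall (an assumption about your environment; the rule name is illustrative) is from an elevated command prompt:

netsh advfirewall firewall add rule name="HammerDB Web Service" dir=in action=allow protocol=TCP localport=8080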

Under the CLI you have the commands wsstart, wsstop and wsstatus for the same functionality as the GUI. Additionally, the wsport command can query and change the port the Web Service runs on.

HammerDB CLI v4.8
Copyright (C) 2003-2023 Steve Shaw
Type "help" for a list of commands
Initialized Jobs on-disk database C:/Users/hdb/AppData/Local/Temp/hammer.DB using existing tables (245,760 KB)
hammerdb>wsstart
HammerDB Web Service v4.8
Copyright (C) 2003-2023 Steve Shaw
Type "help" for a list of commands
Starting HammerDB Web Service on port 8080
hammerdb>wsstop
Stopping HammerDB Web Service on port 8080
hammerdb>wsstatus
Web Service not running: connect failed connection refused
hammerdb>wsport
Web Service Port set to 8080
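As a sketch, to move the Web Service to another port (8081 here is illustrative), pass the new port to wsport and then restart the Web Service for it to take effect:

hammerdb>wsport 8081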

You can also run the Web Service directly from a command prompt with the additional options of wait, nowait and gui: wait waits without returning a CLI prompt, nowait returns a prompt that enables querying the interface directly (with the help command providing a list of commands), and gui returns no output at all (as run by the GUI).

C:\Program Files\HammerDB-4.8>hammerdbws wait
HammerDB Web Service v4.8
Copyright (C) 2003-2023 Steve Shaw
Type "help" for a list of commands
Starting HammerDB Web Service on port 8080

Querying Jobs

With the Web Service running, we can start a browser from the GUI with the Browse option or run a browser directly against the port we have configured. We can now see the list of Jobs we generated with autopilot, confirm that they all ran successfully, and pinpoint the highest performance.

We can now drill down on an individual job to view its configuration and output.

Text-based data will be shown in JSON format, with the example showing the full workload configuration at the time it was run.

The result option will show a chart of the NOPM and TPM data with a link to the data it is generated from.

Charts are generated with Apache echarts and therefore the browser must be able to access the URL.

https://cdn.jsdelivr.net/npm/echarts@5.4.1/dist/echarts.min.js

If Apache echarts cannot be downloaded then the chart screens will be left blank.

The transaction counter link will be shown if the transaction counter was running during the workload (for both GUI and CLI). If the transaction counter was not running, then the link is omitted.

If the timing option was enabled, then a chart for response times is also generated.

Charts are also generated for TPROC-H results and timings.

Failed and Unknown Jobs

If a Job errors or is left in an unknown state, then that will be reported in the Job Index. Note the top result is shown for both TPROC-C and TPROC-H.

Job CLI Interface

Jobs can also continue to be queried directly from the CLI, which offers additional functionality such as querying the timings of individual Virtual Users.

hammerdb>job 6487506D5FDF03E263338333 result
[
  "6487506D5FDF03E263338333",
  "2023-06-12 18:05:49",
  "12 Active Virtual Users configured",
  "TEST RESULT : System achieved 46050 NOPM from 202715 Db2 TPM"
]


hammerdb>job 6487506D5FDF03E263338333 timing 2
{
  "NEWORD": {
    "elapsed_ms": "420335.0",
    "calls": "26129",
    "min_ms": "2.274",
    "avg_ms": "8.121",
    "max_ms": "194.175",
    "total_ms": "212183.912",
    "p99_ms": "24.189",
    "p95_ms": "14.837",
    "p50_ms": "7.21",
    "sd": "4729.629",
    "ratio_pct": "50.48"
  },
  "PAYMENT": {
    "elapsed_ms": "420335.0",
    "calls": "26219",
    "min_ms": "1.742",
    "avg_ms": "6.192",
    "max_ms": "154.997",
    "total_ms": "162349.531",
    "p99_ms": "18.641",
    "p95_ms": "11.624",
    "p50_ms": "5.47",
    "sd": "3479.359",
    "ratio_pct": "38.624"
  },
  "DELIVERY": {
    "elapsed_ms": "420335.0",
    "calls": "2691",
    "min_ms": "3.615",
    "avg_ms": "8.983",
    "max_ms": "66.023",
    "total_ms": "24172.593",
    "p99_ms": "27.364",
    "p95_ms": "15.336",
    "p50_ms": "8.032",
    "sd": "4385.244",
    "ratio_pct": "5.751"
  },
  "SLEV": {
    "elapsed_ms": "420335.0",
    "calls": "2714",
    "min_ms": "0.795",
    "avg_ms": "3.243",
    "max_ms": "78.237",
    "total_ms": "8800.962",
    "p99_ms": "11.414",
    "p95_ms": "7.068",
    "p50_ms": "2.586",
    "sd": "2776.821",
    "ratio_pct": "2.094"
  },
  "OSTAT": {
    "elapsed_ms": "420335.0",
    "calls": "2663",
    "min_ms": "0.705",
    "avg_ms": "3.072",
    "max_ms": "55.527",
    "total_ms": "8180.22",
    "p99_ms": "10.176",
    "p95_ms": "6.495",
    "p50_ms": "2.57",
    "sd": "2302.054",
    "ratio_pct": "1.946"
  }
}
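Other job subcommands follow the same pattern; as a sketch based on the jobs interface introduced in v4.7 (using the same illustrative job id), you can, for example, check a job's state or remove it from the repository:

hammerdb>job 6487506D5FDF03E263338333 status
hammerdb>job 6487506D5FDF03E263338333 delete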

Additional CLI functionality is also provided with the getchart command to return the HTML for a generated chart.

hammerdb>job 6487506D5FDF03E263338333 getchart result
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>6487506D5FDF03E263338333 Result</title>
    <script type="text/javascript" src="https://cdn.jsdelivr.net/npm/echarts@5.4.1/dist/echarts.min.js"></script>
  </head>
  <body>
    <div id="id_a3a1672ce6814324bd80d8b84cff1557" class="chart-container" style="width:900px; height:500px;"></div>
    <script>
        var chart_a3a1672ce6814324bd80d8b84cff1557 = echarts.init(document.getElementById('id_a3a1672ce6814324bd80d8b84cff1557'), null, {renderer: 'canvas'});
        var option_a3a1672ce6814324bd80d8b84cff1557 = {
  "backgroundColor": "rgba(0,0,0,0)",
  "color": [
    "#5470c6",
    "#91cc75",
    "#fac858",
    "#ee6666",
    "#73c0de",
    "#3ba272",
    "#fc8452",
    "#9a60b4",
    "#ea7ccc"
  ],
  "animation": true,
  "animationDuration": 1000,
  "animationDurationUpdate": 500,
  "animationEasing": "cubicInOut",
  "animationEasingUpdate": "cubicInOut",
  "animationThreshold": 2000,
  "progressiveThreshold": 3000,
  "title": [{
      "show": true,
      "text": "Db2 TPROC-C Result 6487506D5FDF03E263338333 @ 2023-06-12 18:05:49",
      "target": "blank",
      "subtarget": "blank",
      "textAlign": null,
      "textVerticalAlign": "auto",
      "padding": 5,
      "itemGap": 10,
      "z": 2,
      "left": "auto",
      "top": "auto",
      "right": "auto",
      "bottom": "auto",
      "backgroundColor": "transparent",
      "borderColor": "transparent",
      "borderWidth": 1,
      "borderRadius": 0
    }],
  "legend": [{
      "type": "plain",
      "show": true,
      "z": 2,
      "left": "45%",
      "top": "auto",
      "right": "auto",
      "bottom": "5%",
      "width": "auto",
      "height": "auto",
      "orient": "horizontal",
      "align": "auto",
      "padding": 5,
      "itemGap": 10,
      "itemWidth": 25,
      "itemHeight": 14,
      "symbolRotate": "inherit",
      "selectedMode": true,
      "inactiveColor": "rgb(204, 204, 204)",
      "inactiveBorderColor": "rgb(204, 204, 204)",
      "inactiveBorderWidth": "auto",
      "backgroundColor": "transparent",
      "borderWidth": 0,
      "borderRadius": 0,
      "pageButtonItemGap": 5,
      "pageIconColor": "rgb(47, 69, 84)",
      "pageIconInactiveColor": "rgb(170, 170, 170)",
      "pageIconSize": 15
    }],
  "tooltip": [{
      "show": true,
      "trigger": "item",
      "showContent": true,
      "alwaysShowContent": false,
      "triggerOn": "mousemove|click",
      "transitionDuration": 0.4,
      "padding": 5,
      "order": "seriesAsc"
    }],
  "xAxis": [{
      "show": true,
      "type": "category",
      "data": ["Db2 12 Active Virtual Users configured"],
      "gridIndex": 0,
      "position": "bottom",
      "offset": 0,
      "nameLocation": "end",
      "nameGap": 15,
      "nameRotate": 0,
      "inverse": false,
      "boundaryGap": true,
      "scale": false,
      "splitNumber": 5,
      "minInterval": 0,
      "silent": false,
      "triggerEvent": false,
      "axisLabel": {
        "show": true,
        "interval": "auto",
        "inside": false,
        "rotate": 0,
        "margin": 8,
        "showMinLabel": null,
        "showMaxLabel": null,
        "hideOverlap": false,
        "fontStyle": "normal",
        "fontWeight": "normal",
        "fontFamily": "sans-serif",
        "fontSize": 12,
        "lineHeight": 12,
        "backgroundColor": "transparent",
        "borderWidth": 0,
        "borderType": "solid",
        "borderDashOffset": 0,
        "borderRadius": 0,
        "padding": 0,
        "shadowColor": "transparent",
        "shadowBlur": 0,
        "shadowOffsetX": 0,
        "shadowOffsetY": 0,
        "textBorderColor": null,
        "textBorderWidth": 0,
        "textBorderType": "solid",
        "textBorderDashOffset": 0,
        "textShadowColor": "transparent",
        "textShadowBlur": 0,
        "textShadowOffsetX": 0,
        "textShadowOffsetY": 0,
        "overflow": "truncate",
        "ellipsis": "..."
      },
      "zlevel": 0,
      "z": 0
    }],
  "yAxis": [{
      "show": true,
      "gridIndex": 0,
      "position": "left",
      "offset": 0,
      "realtimeSort": true,
      "sortSeriesIndex": 0,
      "type": "value",
      "name": "Transactions",
      "nameLocation": "end",
      "nameGap": 15,
      "nameRotate": 0,
      "inverse": false,
      "boundaryGap": false,
      "scale": false,
      "splitNumber": 5,
      "minInterval": 0,
      "silent": false,
      "triggerEvent": false,
      "axisLabel": {
        "show": true,
        "interval": "auto",
        "inside": false,
        "rotate": 0,
        "margin": 8,
        "formatter": "{value}",
        "showMinLabel": null,
        "showMaxLabel": null,
        "hideOverlap": false,
        "fontStyle": "normal",
        "fontWeight": "normal",
        "fontFamily": "sans-serif",
        "fontSize": 12,
        "lineHeight": 12,
        "backgroundColor": "transparent",
        "borderWidth": 0,
        "borderType": "solid",
        "borderDashOffset": 0,
        "borderRadius": 0,
        "padding": 0,
        "shadowColor": "transparent",
        "shadowBlur": 0,
        "shadowOffsetX": 0,
        "shadowOffsetY": 0,
        "textBorderColor": null,
        "textBorderWidth": 0,
        "textBorderType": "solid",
        "textBorderDashOffset": 0,
        "textShadowColor": "transparent",
        "textShadowBlur": 0,
        "textShadowOffsetX": 0,
        "textShadowOffsetY": 0,
        "overflow": "truncate",
        "ellipsis": "..."
      },
      "zlevel": 0,
      "z": 0
    }],
  "series": [
    {
      "type": "bar",
      "name": "NOPM",
      "colorBy": "series",
      "legendHoverLink": true,
      "coordinateSystem": "cartesian2d",
      "roundCap": false,
      "showBackground": false,
      "itemStyle": {
        "color": "#00CC00",
        "borderColor": "rgb(0, 0, 0)",
        "borderType": "solid",
        "borderDashOffset": 0,
        "borderCap": "butt",
        "borderJoin": "bevel",
        "borderMiterLimit": 10,
        "opacity": 0.90
      },
      "sampling": "max",
      "cursor": "pointer",
      "barMinWidth": null,
      "barCategoryGap": "20%",
      "large": false,
      "largeThreshold": 400,
      "data": [46050],
      "zlevel": 0,
      "z": 2,
      "silent": false
    },
    {
      "type": "bar",
      "name": "TPM",
      "colorBy": "series",
      "legendHoverLink": true,
      "coordinateSystem": "cartesian2d",
      "roundCap": false,
      "showBackground": false,
      "itemStyle": {
        "color": "#66ff66",
        "borderColor": "rgb(0, 0, 0)",
        "borderType": "solid",
        "borderDashOffset": 0,
        "borderCap": "butt",
        "borderJoin": "bevel",
        "borderMiterLimit": 10,
        "opacity": 0.90
      },
      "sampling": "max",
      "cursor": "pointer",
      "barMinWidth": null,
      "barCategoryGap": "20%",
      "large": false,
      "largeThreshold": 400,
      "data": [202715],
      "zlevel": 0,
      "z": 2,
      "silent": false
    }
  ]
}
chart_a3a1672ce6814324bd80d8b84cff1557.setOption(option_a3a1672ce6814324bd80d8b84cff1557);
    </script>
  </body>
</html>

Summary

In this post we have introduced new HammerDB v4.8 functionality for viewing workload related results and configuration from a central location.

Find the right PostgreSQL, MySQL and MariaDB parameters first time with hammerpost

Hammerpost is a HammerDB contribution project from @dineshkumar02 that wraps HammerDB with an automated framework to rapidly prototype multiple combinations of database parameters to find the optimal configuration for any specific environment.

You can find the hammerpost project and documentation here. This post gives a brief example of using hammerpost to encourage you to investigate this project further.  In this example, we have a PostgreSQL database already running on a system and want to test different combinations of parameters.

Firstly, we will check that we can start our database.

hammerdb@REDPOLL:~$ /home/hammerdb/pgsql/bin/pg_ctl -D /home/hammerdb/pgsql/data start
waiting for server to start....2023-04-04 11:27:29.650 GMT [3555] LOG:  redirecting log output to logging collector process
2023-04-04 11:27:29.650 GMT [3555] HINT:  Future log output will appear in directory "log".
done
server started

Next, we will install the hammerpost-agent on the same system as the database; in this example the system is running Ubuntu 22.04.

hammerdb@REDPOLL:~$ git clone https://github.com/dineshkumar02/hammerpost-agent.git
Cloning into 'hammerpost-agent'...
remote: Enumerating objects: 38, done.
remote: Counting objects: 100% (38/38), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 38 (delta 15), reused 32 (delta 9), pack-reused 0
Receiving objects: 100% (38/38), 22.45 KiB | 1.25 MiB/s, done.
Resolving deltas: 100% (15/15), done.
hammerdb@REDPOLL:~$ cd hammerpost-agent
hammerdb@REDPOLL:~/hammerpost-agent$ make
GOOS=linux CGO_ENABLED=0 go build -o hammerpost-agent -v -ldflags="-X 'main.Version=0.1.0' -X 'main.GitCommit=45d7496' -X 'main.CommitDate=2023-04-02 19:14:52 +0530'"
hammerpost-agent

We can then start the agent on the same system as the database, giving it the ability to start and stop our database with the modified parameters.

hammerdb@REDPOLL:~/hammerpost-agent$ sudo ./hammerpost-agent --stop-cmd "/home/hammerdb/pgsql/bin/pg_ctl -D /home/hammerdb/pgsql/data stop -m f" --start-cmd "/home/hammerdb/pgsql/bin/pg_ctl -D /home/hammerdb/pgsql/data start" --pgdsn "postgres://postgres:postgres@localhost:5432/postgres" --db-type postgres
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /start                    --> main.main.func1 (3 handlers)
[GIN-debug] GET    /stop                     --> main.main.func2 (3 handlers)
[GIN-debug] GET    /info                     --> main.main.func3 (3 handlers)
[GIN-debug] GET    /metrics                  --> main.main.func4 (3 handlers)
[GIN-debug] POST   /set-param                --> main.main.func5 (3 handlers)
[GIN-debug] GET    /load                     --> main.main.func6 (3 handlers)
[GIN-debug] GET    /ping                     --> main.main.func7 (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :8989

Next we will grab the latest HammerDB PostgreSQL Docker image with the drivers included.

hammerdb@REDPOLL:~$ sudo docker pull tpcorg/hammerdb:postgres
[sudo] password for hammerdb: 
postgres: Pulling from tpcorg/hammerdb
...
Digest: sha256:59d901bfd14452ca3f345524384dd9d4b61effcc555e5133a71534440141c378
Status: Downloaded newer image for tpcorg/hammerdb:postgres
docker.io/tpcorg/hammerdb:postgres

and start the image

hammerdb@REDPOLL:~$ sudo docker run --network=host -it --name hammerdb-postgres tpcorg/hammerdb:postgres bash
root@REDPOLL:/home/hammerdb/HammerDB-4.7#

We now install Go inside the container and install hammerpost in the HammerDB directory. Note that a more recent Go version is needed than the default available in the HammerDB Docker container.

root@REDPOLL:/usr/local# wget https://go.dev/dl/go1.19.4.linux-amd64.tar.gz
root@REDPOLL:/usr/local# tar -xvf go1.19.4.linux-amd64.tar.gz
root@REDPOLL:/usr/local# export PATH=/usr/local/go/bin:$PATH
root@REDPOLL:/usr/local# apt-get install make

root@REDPOLL:/home/hammerdb/HammerDB-4.7# git clone https://github.com/dineshkumar02/hammerpost.git
root@REDPOLL:/home/hammerdb/HammerDB-4.7/hammerpost# go version
go version go1.19.4 linux/amd64
root@REDPOLL:/home/hammerdb/HammerDB-4.7/hammerpost# go get
root@REDPOLL:/home/hammerdb/HammerDB-4.7/hammerpost# make

Back in the main HammerDB directory we can link the hammerpost executable, templates and parameter file to local links, meaning we can run hammerpost straight from the local HammerDB directory. We have used hammerp as the name of the link to the hammerpost executable as the directory is already called hammerpost.

root@REDPOLL:/home/hammerdb/HammerDB-4.7# ln -s ./hammerpost/hammerpost ./hammerp
root@REDPOLL:/home/hammerdb/HammerDB-4.7# ln -s ./hammerpost/hammer-templates ./hammer-templates
root@REDPOLL:/home/hammerdb/HammerDB-4.7# ln -s ./hammerpost/params.json ./params.json

With hammerpost installed, the first step is to initialize the schema, with an example as follows.

root@REDPOLL:/home/hammerdb/HammerDB-4.7# ./hammerp --init --name test-bench-1 --pgdsn "postgres://postgres:postgres@localhost:5432/postgres" --users 4 --warehouses 10 --hammerpost-agent localhost:8989

╔ hammerpost - v0.1.0 ═════════════════════════════════════════╗
║                                                              ║
║                                                              ║
║                 OS linux                                     ║
║           Platform ubuntu-22.04                              ║
║             Kernel 5.15.0-56-generic                         ║
║             Uptime 12131                                     ║
║    Total Processes 233                                       ║
║           Load Avg 0.05                                      ║
║                CPU Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz  ║
║          CPU Count 4                                         ║
║          CPU Cores 1                                         ║
║            CPU Mhz 3491.918                                  ║
║   Total Memory(GB) 7                                         ║
║    Free Memory(GB) 1                                         ║
║    Used Memory(GB) 0                                         ║
║                                                              ║
║                                                              ║
╚══════════════════════════════════════════════════════════════╝

DB type  postgres
schema initialized

We can then update the params.json file with the combinations of parameters we want to test. In this example, two values for each of shared_buffers and wal_buffers give 4 test cases.

root@REDPOLL:/home/hammerdb/HammerDB-4.7# vi params.json
{
        "shared_buffers": ["512MB", "1GB"],
        "wal_buffers": ["32MB", "64MB"]
}

We can then run the test: hammerpost will update the parameters, stop and start the database, and run HammerDB.

root@REDPOLL:/home/hammerdb/HammerDB-4.7# ./hammerp --run --name test-bench --pgdsn "postgres://postgres:postgres@redpoll:5432/postgres" --users 4 --warehouses 10 --hammerpost-agent redpoll:8989 --param-file ./params.json  --logfile test-bench1.log

╔ hammerpost - v0.1.0 ═════════════════════════════════════════╗
║                                                              ║
║                                                              ║
║                 OS linux                                     ║
║           Platform ubuntu-22.04                              ║
║             Kernel 5.15.0-56-generic                         ║
║             Uptime 16947                                     ║
║    Total Processes 237                                       ║
║           Load Avg 0.42                                      ║
║                CPU Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz  ║
║          CPU Count 4                                         ║
║          CPU Cores 1                                         ║
║            CPU Mhz 3491.918                                  ║
║   Total Memory(GB) 7                                         ║
║    Free Memory(GB) 0                                         ║
║    Used Memory(GB) 0                                         ║
║                                                              ║
║                                                              ║
╚══════════════════════════════════════════════════════════════╝

DB type  postgres
Benchmark id  5
Parameter test cases  4

┌ Parameters ──────────┐
│                      │
│ shared_buffers:512MB │
│ wal_buffers:32MB     │
│                      │
│                      │
└──────────────────────┘

┌ Results ──────────────┐
│                       │
│ 66906 NOPM 151341 TPM │
│                       │
└───────────────────────┘

┌ Parameters ────────┐
│                    │
│ shared_buffers:1GB │
│ wal_buffers:32MB   │
│                    │
│                    │
└────────────────────┘

┌ Results ──────────────┐
│                       │
│ 70710 NOPM 161991 TPM │
│                       │
└───────────────────────┘

┌ Parameters ──────────┐
│                      │
│ shared_buffers:512MB │
│ wal_buffers:16MB     │
│                      │
│                      │
└──────────────────────┘

┌ Results ──────────────┐
│                       │
│ 67146 NOPM 153539 TPM │
│                       │
└───────────────────────┘

┌ Parameters ────────┐
│                    │
│ shared_buffers:1GB │
│ wal_buffers:16MB   │
│                    │
│                    │
└────────────────────┘

┌ Results ──────────────┐
│                       │
│ 66126 NOPM 151047 TPM │
│                       │
└───────────────────────┘

hammerdb run completed
root@REDPOLL:/home/hammerdb/HammerDB-4.7#

hammerpost will also gather system-level metrics that you can use to analyze your tests.

root@REDPOLL:/home/hammerdb/HammerDB-4.7# ./hammerp --test-details 9
+----------------------+----------------------+----------+----------------------+--------+-------+
|        START         |         END          | DURATION |      PARAMETERS      | OUTPUT | ERROR |
+----------------------+----------------------+----------+----------------------+--------+-------+
| 2023-04-04T15:11:38Z | 2023-04-04T15:14:34Z | 2m56s    | shared_buffers:512MB |        |       |
|                      |                      |          | wal_buffers:16MB     |        |       |
+----------------------+----------------------+----------+----------------------+--------+-------+
Test Details
root@REDPOLL:/home/hammerdb/HammerDB-4.7# ./hammerp --test-metrics 9
+-----------+--------------+-------------------------------+
| CPU USAGE | MEMORY USAGE |             TIME              |
+-----------+--------------+-------------------------------+
|     21.72 |        98.04 | 2023-04-04 15:13:27 +0000 UTC |
+-----------+--------------+-------------------------------+
|     95.36 |        98.35 | 2023-04-04 15:13:29 +0000 UTC |
+-----------+--------------+-------------------------------+
|     95.18 |        98.50 | 2023-04-04 15:13:31 +0000 UTC |
+-----------+--------------+-------------------------------+
|     98.75 |        98.57 | 2023-04-04 15:13:34 +0000 UTC |
+-----------+--------------+-------------------------------+
|    100.00 |        98.46 | 2023-04-04 15:13:36 +0000 UTC |
+-----------+--------------+-------------------------------+
|     97.96 |        98.50 | 2023-04-04 15:13:38 +0000 UTC |
+-----------+--------------+-------------------------------+
|     99.50 |        98.56 | 2023-04-04 15:13:40 +0000 UTC |
+-----------+--------------+-------------------------------+
|     98.99 |        98.35 | 2023-04-04 15:13:42 +0000 UTC |
+-----------+--------------+-------------------------------+
|    100.00 |        98.41 | 2023-04-04 15:13:45 +0000 UTC |
+-----------+--------------+-------------------------------+
|     94.72 |        98.49 | 2023-04-04 15:13:47 +0000 UTC |
+-----------+--------------+-------------------------------+
Test Metrics
root@REDPOLL:/home/hammerdb/HammerDB-4.7# ./hammerp --test-stats 9
+--------+--------+------------+----------+------------+----------+---------+
| AVGCPU | AVGMEM | AVGRPERSEC | AVGRMBPS | AVGWPERSEC | AVGWMBPS | AVGUTIL |
+--------+--------+------------+----------+------------+----------+---------+
|  94.69 |  98.40 |       0.00 |     0.00 |       0.00 |     0.00 |    0.00 |
+--------+--------+------------+----------+------------+----------+---------+

Hammerpost can be a great addition to your HammerDB benchmarking environment by enabling you to automate your tests to determine optimal parameter settings for your configuration.

HammerDB v4.7 New Features Pt 3: Lightweight Docker Images for rapid testing

HammerDB v4.5 introduced a Dockerfile and Docker Images to pull from Docker Hub for rapid deployment of HammerDB with builds already including third-party database libraries.

How to deploy HammerDB CLI fast with Docker

HammerDB v4.7 introduces more lightweight database-specific Docker Images, so if you only want to run benchmarks against a specific database, you can go from having no benchmarking environment to reviewing your results in as little as 3 commands.

In this example, we will use the PostgreSQL Docker image and have PostgreSQL running on our Linux system.  To install HammerDB for Docker, we pull the PostgreSQL-specific image.

1.

~/pgsql$ sudo docker pull tpcorg/hammerdb:postgres
[sudo] password for hammerdb: 
postgres: Pulling from tpcorg/hammerdb
06d39c85623a: Already exists 
a030842f98bc: Pull complete 
c5c513b4b24b: Pull complete 
6b7637531b01: Pull complete 
bdf997c08005: Pull complete 
f6cc176b4372: Pull complete 
bb48f6545dc6: Pull complete 
4f4fb700ef54: Pull complete 
Digest: sha256:59d901bfd14452ca3f345524384dd9d4b61effcc555e5133a71534440141c378
Status: Downloaded newer image for tpcorg/hammerdb:postgres
docker.io/tpcorg/hammerdb:postgres

and run it as follows:

2.

~/pgsql$ sudo docker run --network=host -it --name hammerdb-postgres tpcorg/hammerdb:postgres bash

As HammerDB has built-in example scripts, if our database has default connection parameters we can go straight ahead and run the PostgreSQL driver script. Alternatively, the following post explains how to update the scripts for your connection parameters; see the sketch after the linked post. Note that if you are sure that you are using the default connection parameters, you can run the script directly instead of the bash shell in step 2 above.

HammerDB v4.7 New Features Pt 2: Example CLI Scripts
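The connection settings for the example scripts live in the .tcl files under scripts/tcl/postgres/tprocc, so as a sketch (the host, port and credentials here are illustrative for your own environment) you would edit the diset lines in those files to match your database:

diset connection pg_host 192.168.1.100
diset connection pg_port 5432
diset tpcc pg_superuser postgres
diset tpcc pg_superuserpass postgres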

 

3.

/home/hammerdb/HammerDB-4.7# ./scripts/tcl/postgres/tprocc/pg_tprocc.sh

When the script has finished, we can gather our results, having installed, started and run a benchmark in our HammerDB Docker image in 3 steps.

...
BUILD HAMMERDB SCHEMA
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
HammerDB CLI v4.7
...
HAMMERDB RESULT
HammerDB CLI v4.7
Copyright (C) 2003-2023 Steve Shaw
Type "help" for a list of commands
Initialized SQLite on-disk database /home/hammerdb/HammerDB-4.7/TMP/hammer.DB using existing tables (188,416 KB)
TRANSACTION RESPONSE TIMES
{
  "NEWORD": {
    "elapsed_ms": "436026.5",
    "calls": "33387",
    "min_ms": "0.505",
    "avg_ms": "7.431",
    "max_ms": "8131.221",
    "total_ms": "248091.653",
    "p99_ms": "31.727",
    "p95_ms": "2.933",
    "p50_ms": "1.676",
    "sd": "147681.729",
    "ratio_pct": "56.998"
  },
  "PAYMENT": {
    "elapsed_ms": "436026.5",
    "calls": "33439",
    "min_ms": "0.28",
    "avg_ms": "3.821",
    "max_ms": "9115.8",
    "total_ms": "127767.668",
    "p99_ms": "5.088",
    "p95_ms": "1.645",
    "p50_ms": "0.852",
    "sd": "115840.578",
    "ratio_pct": "29.354"
  },
  "DELIVERY": {
    "elapsed_ms": "436026.5",
    "calls": "3300",
    "min_ms": "1.129",
    "avg_ms": "8.367",
    "max_ms": "7738.33",
    "total_ms": "27612.602",
    "p99_ms": "39.382",
    "p95_ms": "6.893",
    "p50_ms": "3.635",
    "sd": "144034.336",
    "ratio_pct": "6.344"
  },
  "SLEV": {
    "elapsed_ms": "436026.5",
    "calls": "3296",
    "min_ms": "0.804",
    "avg_ms": "4.103",
    "max_ms": "1834.672",
    "total_ms": "13524.524",
    "p99_ms": "8.315",
    "p95_ms": "4.386",
    "p50_ms": "2.685",
    "sd": "39241.027",
    "ratio_pct": "3.107"
  },
  "OSTAT": {
    "elapsed_ms": "436026.5",
    "calls": "3378",
    "min_ms": "0.203",
    "avg_ms": "3.592",
    "max_ms": "2699.458",
    "total_ms": "12135.334",
    "p99_ms": "22.288",
    "p95_ms": "1.47",
    "p50_ms": "0.64",
    "sd": "65613.436",
    "ratio_pct": "2.788"
  }
}

TRANSACTION COUNT
{"PostgreSQL tpm": {
    "0": "2023-03-27 15:22:48",
    "65952": "2023-03-27 15:22:58",
    "72756": "2023-03-27 15:23:08",
    "50502": "2023-03-27 15:23:18",
    "41172": "2023-03-27 15:23:28",
    "55320": "2023-03-27 15:23:38",
    "72258": "2023-03-27 15:23:49",
    "68442": "2023-03-27 15:23:58",
    "61098": "2023-03-27 15:24:08",
    "52602": "2023-03-27 15:24:18",
    "57360": "2023-03-27 15:24:28",
    "56484": "2023-03-27 15:24:38",
    "74220": "2023-03-27 15:24:48",
    "55518": "2023-03-27 15:24:58",
    "69714": "2023-03-27 15:25:09",
    "63444": "2023-03-27 15:25:19",
    "62340": "2023-03-27 15:25:29",
    "57720": "2023-03-27 15:25:39",
    "40698": "2023-03-27 15:25:49",
    "40446": "2023-03-27 15:25:59",
    "39720": "2023-03-27 15:26:09",
    "31398": "2023-03-27 15:26:19",
    "38124": "2023-03-27 15:26:29",
    "36048": "2023-03-27 15:26:39",
    "29814": "2023-03-27 15:26:49",
    "34938": "2023-03-27 15:26:59",
    "13980": "2023-03-27 15:27:09",
    "23304": "2023-03-27 15:27:19",
    "11022": "2023-03-27 15:27:29",
    "47718": "2023-03-27 15:27:39",
    "38202": "2023-03-27 15:27:49",
    "36006": "2023-03-27 15:27:59",
    "11832": "2023-03-27 15:28:09",
    "29982": "2023-03-27 15:28:19",
    "37308": "2023-03-27 15:28:29",
    "36054": "2023-03-27 15:28:39",
    "21456": "2023-03-27 15:28:49",
    "15510": "2023-03-27 15:28:59",
    "36534": "2023-03-27 15:29:10",
    "34746": "2023-03-27 15:29:20",
    "28920": "2023-03-27 15:29:30",
    "8976": "2023-03-27 15:29:40",
    "29280": "2023-03-27 15:29:50",
    "38514": "2023-03-27 15:30:03",
    "24288": "2023-03-27 15:30:10",
  }}

HAMMERDB RESULT
[
  "6421B4C85F7E03E293532393",
  "2023-03-27 15:22:48",
  "4 Active Virtual Users configured",
  "TEST RESULT : System achieved 15824 NOPM from 36400 PostgreSQL TPM"
]

If preferred, to access utilities such as the PostgreSQL metrics in this example, you can also run the HammerDB GUI and export the DISPLAY back to your host desktop, as sketched below.
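As a sketch of one common approach (assuming an X server running on the host and that you are willing to grant local X access), pass the host DISPLAY and X socket through to the container and launch the GUI executable:

~/pgsql$ xhost +local:
~/pgsql$ sudo docker run --network=host -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -it --name hammerdb-postgres tpcorg/hammerdb:postgres bash
root@REDPOLL:/home/hammerdb/HammerDB-4.7# ./hammerdb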