API Load Testing and Performance Analysis: Grafana k6 vs Apache JMeter
This article explains how to perform load testing on APIs using two open-source tools, Grafana k6 and Apache JMeter, neither of which requires extensive experience to get started.
What is an API? An API (Application Programming Interface) is an interface that facilitates communication between software applications.
What is Load Testing? Load Testing involves evaluating the performance of an application under a specific load. These tests aim to assess the resilience, speed, and overall performance of an application.
What is k6? k6 is an open-source load testing tool used to measure application performance. It uses JavaScript for test scripting and offers a developer-friendly, command-line workflow. With k6, you can simulate realistic API traffic and measure how your system responds under load. You don't need to be highly experienced to identify and fix performance issues before they affect your users.
What is JMeter? JMeter is an open-source load testing tool used to measure application performance. Based on Java, JMeter offers a user-friendly interface and can be used across a wide range of scenarios for creating and managing performance tests. JMeter provides a flexible framework for simulating realistic user behaviors and measuring how your system responds under load. You don't need extensive experience to detect and resolve performance issues, although experienced users can utilize JMeter for testing complex scenarios.
We'll conduct the same tests using both k6 and JMeter, comparing their advantages and disadvantages.
The table below provides a general comparison of these two tools based on metrics such as installation, scripting, scope, and support.
Grafana k6 vs Apache JMeter
Feature | JMeter | k6 |
---|---|---|
Language and Syntax | Java | JavaScript |
Installation and Setup | Requires JRE, GUI-based | Single binary (no Node.js or JVM required), Console-based |
Performance and Scalability | Suitable for large load tests, but with high resource consumption | Lightweight and efficient, scales well |
Scenario Definition | XML-based, GUI or manual XML | Scripting in JavaScript |
Distributed Test Support | Comprehensive support, distributed test management | Distributed test support in k6 Cloud or custom distributed environments |
Reporting and Analysis | GUI-based, graphical analysis tools | Console-based, k6 Cloud and integrations |
Community and Support | Large user community, documentation | Rapidly growing community |
JMeter's interface offers various data visualization options for users. These include backend listeners, graphical reports, JMeter plugins, Grafana, and InfluxDB integration.
While k6 is primarily console-based, it can also be used with various data visualization systems and integrations, as shown in the image below.
It is possible to quickly conduct these tests on a Windows machine.
For k6, we will create a test.js file and obtain the k6 test results as standard output in the terminal.
For JMeter, we will apply the tests both through the interface and via the terminal.
k6 Basic Test Scenarios
We'll examine simple JavaScript code snippets for some k6 load testing scenarios for API endpoints. These scenarios include load testing, endurance testing, stress testing, and performance optimization.
Load Testing
Load testing simulates a scenario in which a large number of users send HTTP requests to a web server simultaneously.
k6 can be used to evaluate how web applications perform under a specific load:
import http from 'k6/http';
import { sleep } from 'k6';
// The number of concurrent virtual users is set in the options;
// adjust it according to your scenario (the duration below is an example value)
export const options = {
  vus: 100,
  duration: '30s',
};
export default function () {
  // Each virtual user repeatedly sends a GET request, pausing 1 second between requests
  http.get('https://example.com/page');
  sleep(1);
}
Endurance Testing
Endurance Testing applies a sustained load for an extended duration to verify that an application can operate stably over time.
k6 can assess the system performance of web applications under a constant load for a specific duration:
import http from 'k6/http';
import { sleep } from 'k6';
// The duration of the test under a constant load is set in "minute" units via the options
export const options = {
  vus: 50,        // constant number of virtual users (example value)
  duration: '1m', // total test duration
};
export default function () {
  // Each virtual user keeps sending GET requests for the whole test duration
  http.get('https://example.com');
  sleep(1);
}
Stress Testing
Stress Testing simulates sudden traffic spikes or rapid changes in the number of users to measure the application's limits.
k6 can evaluate how web applications behave under an instantaneous load exceeding system capacity:
import http from 'k6/http';
import { sleep } from 'k6';
// Ramp up sharply to simulate a sudden traffic spike, then drop back down
// (example targets; tune them to exceed your system's expected capacity)
export const options = {
  stages: [
    { duration: '10s', target: 50 },  // normal load
    { duration: '30s', target: 500 }, // sudden spike
    { duration: '10s', target: 0 },   // recovery
  ],
};
export default function () {
  http.get('https://example.com/page');
  sleep(0.1);
}
Performance Optimization Testing
Performance Optimization Testing is suitable for simulating performance improvements after specific optimizations have been made in the application.
k6 can be used to evaluate the effects of code changes or infrastructure updates aimed at improving the performance of web applications:
import http from 'k6/http';
import { sleep } from 'k6';
// Run a fixed, repeatable load against the optimized page so the results
// can be compared with a baseline run against the unoptimized version
export const options = {
  vus: 100,       // example value; keep it identical for baseline and optimized runs
  duration: '1m', // example duration
};
export default function () {
  const optimizedPage = 'https://example.com/optimized-page';
  http.get(optimizedPage);
  sleep(0.1);
}
Let's see a practical example of a sample k6 test script that includes these scenarios for a more comprehensive test scenario.
Create a k6_load_test.js file for k6, which runs with JavaScript. Then, paste the following test code into the file:
import http from 'k6/http';
import { check, sleep } from 'k6';
export let options = {
stages: [
{ duration: '30s', target: 500 }, // ramp up to 500 virtual users over 30 seconds
{ duration: '1m', target: 50 }, // ramp down to 50 virtual users over 1 minute
{ duration: '30s', target: 0 }, // ramp down to 0 users over the final 30 seconds
],
};
export default function () {
// Select a random HTTP method
let methods = ['GET', 'POST', 'PUT', 'DELETE'];
let method = methods[Math.floor(Math.random() * methods.length)];
// Select a random API endpoint
let endpoints = [
'https://example.com/k6_load_test/get',
'https://example.com/k6_load_test/post',
'https://example.com/k6_load_test/put',
'https://example.com/k6_load_test/delete',
];
let endpoint = endpoints[Math.floor(Math.random() * endpoints.length)];
// Make an HTTP request for load testing
let res = http.request(method, endpoint);
// Check the HTTP response
check(res, {
'is status 200': (r) => r.status === 200,
});
// Uncomment the line below to add a 1-second pause between iterations and pace the load
// sleep(1);
}
The k6 test code above simulates a load test performed on a specific API.
The test ramps the number of virtual users making HTTP requests up and then back down over a series of timed stages.
In the first stage, the test ramps up to 500 virtual users over 30 seconds, then ramps down to 50 virtual users over one minute, and finally ramps down to 0 users over the last 30 seconds.
Each virtual user makes an HTTP request using a randomly selected HTTP method (GET, POST, PUT, DELETE) and API endpoint combination. The HTTP response of each request is checked, and if the response code is 200, it is considered successful.
A short wait (1 second) can be enabled after each request; it is commented out in the script above, but uncommenting it paces the requests to simulate a more constant load.
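If you also want k6 itself to enforce pass/fail criteria, the options block in k6_load_test.js can be extended with thresholds. The limits below are illustrative assumptions rather than values taken from this test:
export let options = {
  stages: [
    { duration: '30s', target: 500 },
    { duration: '1m', target: 50 },
    { duration: '30s', target: 0 },
  ],
  // Thresholds make the run finish with a non-zero exit code when a limit is exceeded
  thresholds: {
    http_req_failed: ['rate<0.01'], // less than 1% of requests may fail
    http_req_duration: ['p(95)<500'], // 95% of requests must complete in under 500 ms
  },
};
This is convenient in CI pipelines, where the exit code alone can gate a deployment.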
Let's run the code:
k6 run k6_load_test.js
After running the command, k6 prints output similar to the following. The measurements in the terminal output show how your API performs under load and can help you identify bottlenecks or issues.
k6 Result Output
What do the metric values in k6 test results mean?
Grafana k6 Result Metrics
Metric | Description | Value |
---|---|---|
checks | Shows the results of the checks defined in the script (for example, a specific HTTP status) | 180,077
data_received | Shows the total amount of data received during the scenario | 46 MB
data_sent | Shows the total amount of data sent during the scenario | 23 MB
http_req_blocked | Shows the time spent while the HTTP request was blocked | Average 9.42ms |
http_req_connecting | Shows the time taken for the HTTP request to establish a connection with the server | Average 5.04 ms |
http_req_duration | Shows the completion time of the HTTP request | Average 242.36 ms |
http_req_failed | Shows the number of failed HTTP requests | 0 |
http_req_receiving | Shows the time taken for the HTTP request to receive data from the server | 216.14 µs |
http_req_sending | Shows the time taken for the HTTP request to send data to the server | 52.92 µs |
http_req_tls_handshaking | Shows the time taken for the TLS handshake of the HTTP request | Average 4.37 ms |
http_req_waiting | Shows the time spent by the HTTP request while waiting for a response from the server | Average 242.09 ms |
http_reqs | Shows the total number of HTTP requests made during the scenario | 180,077 |
iteration_duration | Shows the completion time of an iteration within the scenario | 252.11 ms |
iterations | Shows the total number of iterations completed within the scenario | 180,077 |
vus | Shows the current number of virtual users (VUs) running | Min: 1, Max: 499 |
vus_max | Shows the maximum number of virtual users during the scenario | 500 |
p(90) | The value below which 90% of the measurements fall (90th percentile) | |
p(95) | The value below which 95% of the measurements fall (95th percentile) |
Based on these outputs, you can plan improvements for your API or API provider service.
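If you want to keep these results for later analysis or feed them into an external dashboard, k6's handleSummary hook can write the end-of-test summary to a file. A minimal sketch, assuming a JSON file is what your tooling expects:
import http from 'k6/http';
export default function () {
  http.get('https://example.com');
}
// handleSummary runs once after the test finishes; the returned object maps
// destinations (file names or stdout) to the content k6 should write there.
export function handleSummary(data) {
  return {
    'summary.json': JSON.stringify(data, null, 2), // full metrics as JSON for external tools
    stdout: 'Test finished, summary written to summary.json\n',
  };
}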
k6 makes it easy to build flexible test scenarios. Because scenarios are written as ordinary scripts, they are limited only by what you can express in code.
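As a small illustration of that flexibility, a script can define its own metrics, which then appear in the summary next to the built-in ones. A sketch using a hypothetical /login endpoint:
import http from 'k6/http';
import { Trend } from 'k6/metrics';
// Custom metric that tracks only the response time of the login endpoint
const loginDuration = new Trend('login_duration', true); // true = values are treated as times
export default function () {
  const res = http.get('https://example.com/login'); // hypothetical endpoint
  loginDuration.add(res.timings.duration);
}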
Data visualization is also possible during a k6 test: on systems running on Kubernetes, system performance metrics collected with Prometheus can be visualized in Grafana, as shown below:
Grafana System Metrics
To use k6 on Windows, Mac, and Linux, visit https://k6.io/docs/get-started/installation/ .
Basic Test Scenarios with JMeter
For general test scenarios in JMeter, values for Threads (Number of Threads), Loop (Number of Iterations), and Ramp-up Period should be entered in the interface.
JMeter Thread Properties
API Performance Testing:
Threads: Number of concurrent clients (virtual users) sending requests to the API. For example, there could be 500 or more client threads.
Loop: Specifies how many times each client will repeat a certain action. It's often set to 1 since each request can be different.
Ramp-up Period: Determines how long it will take to start all clients. For instance, it can be set to 360 to start 500 clients over 6 minutes (360 seconds).
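For comparison, roughly the same thread configuration can be expressed in k6 as a staged ramp-up. The mapping is approximate, and the hold and ramp-down stages below are assumptions:
import http from 'k6/http';
export const options = {
  stages: [
    { duration: '360s', target: 500 }, // ramp up to 500 virtual users over 6 minutes (JMeter's ramp-up period)
    { duration: '1m', target: 500 }, // hold the full load (assumed)
    { duration: '30s', target: 0 }, // ramp down (assumed)
  ],
};
export default function () {
  http.get('https://example.com'); // placeholder endpoint
}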
Additionally, for API Performance testing scenarios:
1. Increasing Request Count:
Simulating an increasing request load over a specific period by increasing the number of requests sent to the API, creating a gradually rising request rate per second.
2. Maximum Load Testing:
Simulating the maximum access load on the API by determining the maximum number of clients accessing the API simultaneously and observing how the API behaves under this load.
3. Long-Term Load Testing:
Continuously sending requests to the API over an extended period to determine how the API performs under long-term load.
4. Different Request Types Testing:
Testing the API with different types of requests. Scenarios can be created with different HTTP methods such as GET, POST, PUT, and DELETE, and with different request bodies.
5. Average Response Time Testing:
Evaluating the API's overall performance by measuring the average response time under a specific load.
6. Error Handling Testing:
Testing how the API handles error situations under a specific load, simulating scenarios such as malformed requests or authorization errors (a k6 sketch of this scenario follows after this list).
7. Scheduler Tests:
Using a scheduler to send requests to the API over a specific time interval and monitoring how the API performs throughout a day.
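For the error handling scenario mentioned above, the same idea can be sketched in k6 for comparison: send a deliberately malformed request under load and check that the API rejects it cleanly. The endpoint and the expected status code are assumptions:
import http from 'k6/http';
import { check } from 'k6';
export default function () {
  // Send a deliberately malformed JSON body to a hypothetical endpoint
  const res = http.post('https://example.com/api/orders', '{ this is not valid json', {
    headers: { 'Content-Type': 'application/json' },
  });
  // Under load, the API should still reject bad input with a client error
  // rather than timing out or returning a 5xx response
  check(res, {
    'malformed body rejected with 400': (r) => r.status === 400,
  });
}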
These tests can be started from the JMeter interface with the Run command or via the terminal. In addition, test results can be saved in CSV format using the following command.
jmeter -n -t \Users\YOUR_USER\path\apache-jmeter-5.6.3\apache-jmeter-5.6.3\backups\TEST_NAME.jmx -l \Users\YOUR_USER\path\apache-jmeter-5.6.3\apache-jmeter-5.6.3\backups\RECORD_NAME.csv
After running the command, you will obtain an output similar to the image below:
JMeter Result Output
What do the metric values of a JMeter test result mean?
Apache JMeter Result Metrics
Metric | Description | Value |
---|---|---|
summary | Total number of requests | 6247
total time | Total test duration | 167 sec
Avg req/minute | Average requests per minute | 381.3 req/min
Avg | Average response time | 749 ms
Min | Minimum response time | 101 ms
Max | Maximum response time | 4281 ms
Err | Error count | 0
Err Percentage | Error percentage | 0.00%
Active | Active thread count | 172
Started | Started thread count | 1365
Finished | Finished thread count | 1193
In light of these outputs, you can plan the improvements needed for your API or API provider service.
Additionally, in the JMeter interface, you can obtain detailed outputs similar to the results table shown in the image below.
JMeter Results Table
To use JMeter on Windows, Mac, and Linux, please visit the following link: https://www.simplilearn.com/tutorials/jmeter-tutorial/jmeter-installation .
Comparison of JMeter and k6 Performance Testing Results
Performance testing is a crucial process used to evaluate how a system or application performs under specific conditions. These tests can be conducted with different tools. Here we compare the results of the two performance testing tools, JMeter and k6. The comparison criteria include the test environment, test duration, target users, total requests, average response time, maximum response time, and speed.
Grafana k6 vs Apache JMeter Test Result
Feature | JMeter | k6 |
---|---|---|
Test Environment | Run as independent test. | Run in a local environment. |
Test Duration | Ran for a total of 2 minutes 41 seconds. | Ran for a total of 3 minutes. |
Target Users | Maximum of 500 concurrent users. | Ramp-up to 500 concurrent users. |
Total Requests | 25,000 requests were made. | 180,077 requests were made. |
Average Response Time | 11.441 milliseconds. | 242.36 milliseconds. |
Maximum Response Time | 21.153 milliseconds. | 33.68 seconds. |
Speed | 169.6 requests/second. | 857.45 requests/second. |
Results
When examining the performance test results of JMeter and k6, we can see that both tools have different strengths.
JMeter, despite a lower request rate in this test, supports high numbers of concurrent users, which makes it well suited to testing large-scale applications. k6, on the other hand, achieved a higher request rate and kept the request failure rate very low, which can make it preferable for applications that require fast responses.
The choice of tool to use will depend on the testing requirements, target metrics, and the structure of the application. For example, if testing with high concurrent user numbers, JMeter might be preferred; however, if speed and failure rate are critical, k6 might be more suitable. Both tools can be used according to different scenarios and provide valuable insights in performance testing.
In this article, we discussed how k6 and JMeter can be used as performance testing tools. Additionally, we performed a sample load test scenario with both tools and compared the results.
You can also conduct load tests with these tools and evaluate the performance of your applications.