Postman provides the capability to run collections at specific times each day or week. For example, you may want to run a collection that tests the functionality of your API every day. You can use the Collection Runner to schedule collection runs to execute automatically on specified days and at specified times.
Scheduled runs execute in the Postman Cloud.
Schedules share permissions with their collections. For example, if you have permissions to edit a collection, you’ll be able to edit that collection’s schedules.
Personal, private, and team workspaces support scheduling collection runs.
If you import or export a collection, its schedules don’t import or export with it. However, if you delete a collection, its schedules are deleted also.
Scheduled collection runs have the same usage limits as monitors.
When you schedule a collection run with the Collection Runner, the scheduled run is added to the collection’s Runs tab. You can view, pause, edit, and delete scheduled collection runs from the collection’s Runs tab.
Scheduling a Collection Run
Step 1 – Click “Run Collections”
Select Collections in the sidebar and select the collection or folder you want to schedule. Click "Run Collection".
Step 2 – Schedule the Run
On the Functional tab, select Schedule runs.
Step 3 – Select Configuration options
Choose any configuration options:
The schedule’s name – Postman Automatic Run
The run’s frequency – Hourly
An environment associated with the collection (optional) – No
How many times you want the collection to run (iterations) – 1
A JSON or CSV data file (optional)
Notification recipients (optional)
Advanced settings (optional)
Retry if run fails
Set request timeout
Set delay between requests
Follow redirects
Enable SSL validation
By default, requests are executed in the collection's list order. To reorder, select the request you want to move and drag it to its new position in the execution sequence. By unchecking the box next to a request's name, you can also exclude that request from the run.
Click the "Schedule Run" button.
Viewing a scheduled run
Step 1 – View the schedule in Postman console
Select Collections in the sidebar and select the collection with the scheduled run you want to view.
Select the Runs tab.
Select the Scheduled runs tab, hover over your scheduled run, and select View.
Double-click any green bar to see all the test results.
Step 2 – View the email
Postman sends an email as configured in the run options. Here, I have configured it to send an email after 1 consecutive run failure, so we can see the sample email below:
Sample Email
Congratulations! This tutorial has explained the steps to schedule a collection run in Postman. Happy Learning!!
An environment is a set of key-value pairs. We can use environments to group related sets of values together and, if you are working as part of a team, to manage access to shared Postman data.
Why do we need an environment?
Imagine we have 4 different environments – Dev, QA, UAT, and Pre-Prod – and each needs different credentials to log in to the application. Without environments, we would either create the same request multiple times with different credentials or keep changing the credentials of a single request every time. This is a messy approach. The ideal approach is to create environment variables for the changing attributes and then select the environment that supplies the values. This way, a single request works with different combinations of attributes.
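The substitution idea can be sketched in Python (a hypothetical illustration — the environment names and variables below are invented, and Postman performs this substitution internally):

```python
import re

# Hypothetical sketch of Postman-style {{variable}} substitution.
# Each environment supplies its own values for the same request template.
environments = {
    "Dev": {"base_url": "https://dev.example.com", "username": "dev_user"},
    "QA":  {"base_url": "https://qa.example.com",  "username": "qa_user"},
}

def resolve(template, env_name):
    """Replace every {{name}} in the template with the environment's value."""
    env = environments[env_name]
    return re.sub(r"\{\{(\w+)\}\}", lambda m: env[m.group(1)], template)

request_url = "{{base_url}}/login?user={{username}}"
print(resolve(request_url, "QA"))  # -> https://qa.example.com/login?user=qa_user
```

One request template, resolved differently per selected environment — which is exactly why a single request can serve Dev, QA, UAT, and Pre-Prod.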
How do we create an environment?
Select Environments on the left and select +.
Enter a name for your environment, and initialize it with any variables you need. You can also specify variables for the environment later.
Save the request. Click the "Send" button to verify that the API returns a successful response, as shown in the image below.
Step 2 – Create an environment and add key-value pairs (variables)
Create a new environment as explained in the above tutorial.
Add the values that are dynamic for different environments. Click the “Save” button to save the variables.
In the Key field, enter the name of the environment variable that will be used in the Postman collection. In the Value field, enter the value that will replace the variable when the call is made. For example:
Enter a name for your variable, and specify its Initial and Current values. By default, the current value copies the initial value.
The Initial Value is synced to your account using the Postman servers. It’s shared with any collaborators who have access to the environment.
The Current Value is used in your local instance of Postman and is never synced to your account or shared with your team unless you choose to persist it.
Step 3 – Select the environment from dropdown
Go to the new request and select the environment you just created. In this case, I have selected the "QA" environment.
Step 4 – Refer the newly created variables in the request
Here, we can see that it shows all 3 variables that we defined in the previous step. To use an environment variable value in a request, reference it by name, surrounded with double curly braces – for example, {{base_url}}.
Hover over a variable reference to get its current value.
Step 5 – Run the request
Click the "Send" button. The request will be sent, and the response is displayed as shown in the image below.
Congratulations on making it through this tutorial and hope you found it useful! Happy Learning!! Cheers!!
The Constant Timer is used to delay each request by a constant time, which you configure by setting the delay value.
Constant Timer has the following input fields:
Name: To provide the name of the timer. This is a non-mandatory field.
Comments: To provide arbitrary comments (if any). This is a non-mandatory field.
Thread Delay (in milliseconds): The pause time in milliseconds. Thread(s) will hold the execution of the sampler/request for the defined time and once the delay time is over then the sampler will be executed. This is a mandatory field.
The sample request and response used in this tutorial are shown below:
Add Thread Group
To add a Thread Group, right-click on the "Test Plan" and select: Add -> Threads (Users) -> Thread Group
In the Thread Group control panel, enter the Thread Properties as follows:
Number of Threads: 1 – the number of users connecting to the target website
Loop Count: 10 – the number of times to execute the test
Ramp-Up Period: 1 – tells JMeter how long to delay before starting the next user. For example, if we have 5 users and a 5-second ramp-up period, the delay between starting users would be 1 second (5 seconds / 5 users).
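The ramp-up arithmetic described above can be sketched in Python (an illustration of the scheduling rule, not JMeter's actual implementation):

```python
# Sketch of JMeter's ramp-up scheduling: the ramp-up period is spread
# evenly across the threads, so each thread starts after a fixed offset.
def thread_start_offsets(num_threads, ramp_up_seconds):
    """Return the delay (in seconds) before each thread starts."""
    if num_threads <= 1:
        return [0.0]
    step = ramp_up_seconds / num_threads
    return [i * step for i in range(num_threads)]

print(thread_start_offsets(5, 5))  # 5 users over 5 s -> one start per second
```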
Step 2 – Add HTTP Request Sampler
The JMeter element used here is the HTTP Request Sampler. In the HTTP Request control panel, the Path field indicates which URL request you want to send.
To add it, right-click on the Thread Group and select: Add -> Sampler -> HTTP Request
The following values are used in the HTTP Request for this test:
Name – HTTP POST Request Demo
Server Name or IP – reqres.in
Protocol – https
Method – POST
Path – /api/users
Step 3 – Add HTTP Header Manager
The Header Manager lets you add or override HTTP request headers, such as Accept-Encoding, Accept, and Cache-Control.
To add it, right-click on the Thread Group and select: Add -> Config Element -> HTTP Header Manager
The following header is used for this test: Content-Type = application/json
Below is the image once HTTP Header Manager is added to the Test Plan.
Step 4 – Add a Constant Timer
I want each thread to pause for the same amount of time (500 ms) between requests, so I configure a Constant Timer.
To add it, right-click on the Thread Group and select: Add -> Timer -> Constant Timer
Configure a Thread Delay of 500 milliseconds.
Step 5 – Adding Response Assertion to Test Plan
The response assertion control panel lets you add pattern strings to be compared against various fields of the request or response.
To add it, right-click on the HTTP Request and select: Add -> Assertions -> Response Assertion
Here, I have selected the options below:
Apply to: Main sample only
Field to Test: Response Code
Pattern Matching Rules: Substring
Patterns to Test: 201
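The Substring matching rule configured above can be sketched in Python (an illustration of the matching logic only, not JMeter's implementation):

```python
# Sketch of a "Substring" response-code assertion as configured above:
# the pattern "201" must appear somewhere in the response code.
def substring_assertion(response_code, pattern="201"):
    """Return True when the pattern occurs within the response code."""
    return pattern in str(response_code)

print(substring_assertion(201))  # True  -> assertion passes
print(substring_assertion(404))  # False -> assertion fails
```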
Step 6 – Adding Listeners to Test Plan
Listeners show the results of the test execution. They can present results in different formats such as a tree, table, graph, or log file. Here we are adding the View Results in Table listener.
View Results in Table – this listener displays information about each sample in the form of a table. To add it, right-click on the Test Plan and select: Add -> Listener -> View Results in Table
Note – Don’t use “View Results Tree” or “View Results in Table” listeners during the load test, use them only during the scripting phase to debug your scripts.
Step 7 – Save the Test Plan
To save, click File -> Save Test Plan As and give the Test Plan a name. It will be saved in .jmx format.
Step 8 – Run the Test Plan
Click the green triangle at the top to run the test.
Step 9 – View the Execution Status
Click on View Results in Table to see the status of the run. A successful request is shown with a green status icon.
For example, in the figure above, let's analyse Sample 2:
Start time of Sample 1 is 15:41:10.240
Sample Time of Sample 1 is 273 ms
Constant Timer: 500 ms (as configured)
End of the delay after Sample 1 = 15:41:10.240 + 273 ms + 500 ms = 15:41:11.013
So Sample 2 should start at 15:41:11.014 (the figure shows it started at approximately 15:41:11.016).
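The arithmetic above can be verified with a short Python sketch:

```python
from datetime import datetime, timedelta

# Reproduce the arithmetic above: the next sample starts after the previous
# sample's start time + its sample (response) time + the constant timer delay.
start = datetime.strptime("15:41:10.240", "%H:%M:%S.%f")
sample_time = timedelta(milliseconds=273)     # response time of Sample 1
constant_delay = timedelta(milliseconds=500)  # Constant Timer delay

next_start = start + sample_time + constant_delay
print(next_start.strftime("%H:%M:%S.%f")[:-3])  # -> 15:41:11.013
```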
We are done! Congratulations on making it through this tutorial and hope you found it useful! Happy Learning!!
This timer pauses each thread request for a random amount of time, with each time interval having the same probability of occurring. The total delay is the sum of the random value and the offset value.
Uniform Random Timer has the following input fields:
Name: To provide the name of the timer. This is a non-mandatory field.
Comments: To provide arbitrary comments (if any). This is a non-mandatory field.
Random Delay Maximum (in milliseconds): Maximum random number of milliseconds to pause. This is a mandatory field.
Constant Delay Offset (in milliseconds): The fixed portion of the pause, in milliseconds. The thread holds the execution of the sampler for this offset plus the random delay, and once the delay is over the sampler is executed. This is a mandatory field.
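The total-delay rule can be sketched in Python (an illustration, not JMeter's implementation):

```python
import random

# Sketch of the Uniform Random Timer's delay: a fixed offset plus a
# uniformly distributed random component between 0 and the maximum.
def uniform_random_delay(constant_offset_ms, random_maximum_ms, rng=random):
    """Total pause in milliseconds before the next sampler runs."""
    return constant_offset_ms + rng.uniform(0, random_maximum_ms)

delay = uniform_random_delay(500, 200)
print(f"{delay:.1f} ms")  # always between 500.0 and 700.0 ms
```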
The sample request and response used in this tutorial are shown below:
Add Thread Group
To add a Thread Group, right-click on the "Test Plan" and select: Add -> Threads (Users) -> Thread Group
In the Thread Group control panel, enter the Thread Properties as follows:
Number of Threads: 1 – the number of users connecting to the target website
Loop Count: 10 – the number of times to execute the test
Ramp-Up Period: 1 – tells JMeter how long to delay before starting the next user. For example, if we have 5 users and a 5-second ramp-up period, the delay between starting users would be 1 second (5 seconds / 5 users).
Step 2 – Add HTTP Request Sampler
The JMeter element used here is the HTTP Request Sampler. In the HTTP Request control panel, the Path field indicates which URL request you want to send.
To add it, right-click on the Thread Group and select: Add -> Sampler -> HTTP Request
The following values are used in the HTTP Request for this test:
Name – Parameterized HTTP Request
Server Name or IP – reqres.in
Protocol – https
Method – POST
Path – /api/users
Step 3 – Add HTTP Header Manager
The Header Manager lets you add or override HTTP request headers, such as Accept-Encoding, Accept, and Cache-Control.
To add it, right-click on the Thread Group and select: Add -> Config Element -> HTTP Header Manager
The following header is used for this test: Content-Type = application/json
Below is the image once HTTP Header Manager is added to the Test Plan.
Step 4 – Add a Uniform Random Timer
I want each thread to pause between requests for a base time of 500 ms plus a random component of up to 200 ms, so I configure the Uniform Random Timer.
To add it, right-click on the Thread Group and select: Add -> Timer -> Uniform Random Timer
Configure a Random Delay Maximum of 200 ms and a Constant Delay Offset of 500 milliseconds.
Step 5 – Adding Response Assertion to Test Plan
The response assertion control panel lets you add pattern strings to be compared against various fields of the request or response.
To add it, right-click on the HTTP Request and select: Add -> Assertions -> Response Assertion
Here, I have selected the options below:
Apply to: Main sample only
Field to Test: Response Code
Pattern Matching Rules: Substring
Patterns to Test: 201
Step 6 – Adding Listeners to the Test Plan
Listeners show the results of the test execution. They can present results in different formats such as a tree, table, graph, or log file. Here we are adding the View Results in Table listener.
View Results in Table – this listener displays information about each sample in the form of a table. To add it, right-click on the Test Plan and select: Add -> Listener -> View Results in Table
Note – Don’t use “View Results Tree” or “View Results in Table” listeners during the load test, use them only during the scripting phase to debug your scripts.
Step 7 – Save the Test Plan
To save, click File -> Save Test Plan As and give the Test Plan a name. It will be saved in .jmx format.
Step 8 – Run the Test Plan
Click the green triangle at the top to run the test.
Step 9 – View the Execution Status
Click on View Results in Table to see the status of the run. A successful request is shown with a green status icon.
For example, in the figure above, let's analyse Sample 2:
Start time of Sample 1 is 16:10:16.625
Sample Time of Sample 1 is 290 ms
Constant Delay Offset: 500 ms (as configured)
End of the fixed delay = 16:10:16.625 + 290 ms + 500 ms = 16:10:17.415; a random value of 77 ms is added, making it 16:10:17.492
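The arithmetic above can be verified with a short Python sketch (the 77 ms random component is the value JMeter happened to pick for this pause):

```python
from datetime import datetime, timedelta

# Reproduce the arithmetic above: next start = previous start + sample time
# + constant delay offset + the random component chosen for this pause.
start = datetime.strptime("16:10:16.625", "%H:%M:%S.%f")
total = timedelta(milliseconds=290 + 500 + 77)  # sample + offset + random part

next_start = start + total
print(next_start.strftime("%H:%M:%S.%f")[:-3])  # -> 16:10:17.492
```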
We are done! Congratulations on making it through this tutorial and hope you found it useful! Happy Learning!!
Postman contains a full-featured testing sandbox that enables you to write and execute JavaScript-based tests for your API. You can then integrate Postman with your CI/CD build system using Newman, the command-line collection runner for Postman. In this tutorial, we are going to learn how we can integrate Postman with Jenkins.
To run Newman from Jenkins, we need the NodeJS plugin. Please refer to this tutorial to install the plugin – How to install Plugins in Jenkins.
Go to Manage Jenkins > Manage Plugins and install the NodeJS plugin.
Step 2: Global Tool Configuration for NodeJS
Go to Manage Jenkins > Global Tool Configuration and under NodeJS, select Add NodeJS.
Enter a name for the Node.js installation.
Select the version of NodeJS installed on your machine from the Version dropdown.
In Global npm packages to install, enter newman.
Click on the Apply and Save buttons.
Step 3: Create a new FreeStyle project
Enter the name of the project – Postman_Demo.
Select the "Freestyle project" option.
Click on the OK button.
In the General section, enter the project description in the Description box.
Step 4: Source Code Management
Select Source Code Management as None if the project is locally present on the machine.
Step 5: Build Environment
Select the “Provide Node & npm bin/folder to PATH” option.
Step 6: Select Execute Windows batch command
In the Build Steps section, select Execute Windows batch command.
Use the command below to go to the path where the Postman collection is placed on your system.
cd C:\Users\Vibha\Desktop\Postman
newman run --disable-unicode API_Tests.json
Click on the Apply and Save buttons.
We have created a new project “Postman_Demo” with the configuration to run the Postman scripts.
Step 7: Execute the tests
Let’s execute it now by clicking on the “Build Now” button.
Click on the build number (in my case it is #1).
Click on Console Output to see the result.
Congratulations on making it through this tutorial and hope you found it useful! Happy Learning!! Cheers!!
newman-reporter-htmlextra is a Newman HTML reporter that has been extended to separate the iteration runs (so they are no longer aggregated together) and to add some extra Handlebars helpers that enable users to create better custom templates.
This reporter comes with a dashboard-style summary landing page and a set of different tabs that contain detailed request information. There are also a few optional configuration flags available, to tailor the final report in a number of different ways.
Prerequisite:
NodeJS is installed
Newman is installed
Implementation Steps
Step 1: Install newman-reporter-htmlextra package
Open the command prompt and type the command below:
npm install -g newman-reporter-htmlextra
Step 2: Export the already created Postman tests
Export your Postman API collection as JSON file. To know how to export Collection in Postman, please refer to this tutorial – How to Export Postman Collections?.
This screen provides the option to export the JSON file as Collection v2 or Collection v2.1; choose one and click the "Export" button.
Open the Command Line and go to that location of the collection JSON file.
Step 3: Run the API collection through the command line
Use the command below to execute the tests and generate an HTML report. It will create a new report in the ./newman directory; if the directory does not exist, it is created as part of the Newman run.
newman run API_Newman_Tests.json -r htmlextra
Go inside the newman folder and you will see that an .html report is present.
Step 4: View Newman HTML Report
Double-click the .html report to open it. On the summary page, we get a summary of the test results we have run: detailed information such as the number of iterations, assertions, skipped tests, and failed tests. From the Total Requests tab, we can get information like the base URL, the request body, the response headers, and the response format. For more details, we can go to the Failed Tests tab to check which scripts failed, the status code, and the assertion error message.
Go to the Total Requests tab to get all the information regarding the request method used, the request URL, the request body, the response body, the time taken to process the request, the mean size of the response, and so on.
Go to the Failed Tests tab and see all the details about the failed tests.
Step 5: Additional CLI Options
Specify a path for the HTML report, i.e. where the output HTML file will be written to disk.
Use the below command.
newman run API_Newman_Tests.json -r htmlextra --reporter-htmlextra-export ./results/report.html
If the path is not specified, the file will be written to the newman/ folder in the current working directory.
That’s it! Congratulations on making it through this tutorial and hope you found it useful! Happy Learning!!
Postman is a powerful tool that allows developers to test APIs efficiently. One of its key features is the ability to create and share collections, which are groups of API requests that can be organized together. In this tutorial, I will discuss how to import Postman collections.
Step 1: Open Postman and navigate to the Collections tab on the left-hand side. Click on the “Import” button located in the top-left corner of the screen.
Step 2: In the “Import File” modal, click on the “Choose Files” button and select the Postman collection file you wish to import.
Step 3: Once the file has been selected, click on the “Import” button to initiate the import process.
Step 4: After the import process is complete, you should see your newly imported collection listed in the Collections tab.
Step 5: Run the tests present in the Collection to see if the import is successful.
Congratulations on making it through this tutorial and hope you found it useful! Happy Learning!!
Newman enables you to run and test a Postman Collection directly from the command line. It’s built with extensibility in mind so that you can integrate it with your continuous integration (CI) servers and build systems.
Let’s learn about Newman and how to run the Postman collection using a command line.
Implementation Steps
Step 1: Check if node.js is already installed or not
Open your cmd from your machine
node -v
If you see the message 'node' is not recognized as an internal or external command, operable program or batch file, that means Node.js is not installed on your system yet.
Step 2: Check if npm is already installed or not
npm is the Node package manager, which is used to install packages for Node.
npm -v
An 'npm' is not recognized as an internal or external command, operable program or batch file message means that npm is not found on the system.
Step 3: Install node.js
Newman is a Node.js program, so to get Newman we first have to install Node.js.
Depending upon your system requirements, select and download the 32-bit or 64-bit Node.js installer for the Windows or macOS operating system.
Once the installer finishes downloading, launch it. Open the downloads link in your browser and click the file. Or, browse to the location where you have saved the file and double-click it to launch. Click the “Next” button.
Select the option “I accept the terms in the License Agreement” and click the “Next” button.
The installer will prompt you for the installation location. Leave the default location, unless you have a specific need to install it somewhere else. Click the "Next" button.
This screen shows that npm package manager is also getting installed. Click the “Next” button.
Select the “Next” button.
Click on the “Install” button.
This screen shows that installation is in progress.
Click the “Finish” button to end the installation process.
Step 4: Check if node.js and npm are successfully installed
Close and reopen the cmd prompt. Type the node -v command and see if the version is displayed. The image below shows that Node.js version 18.17 is installed on the machine.
Type the npm -v command and see if the version is displayed. The image below shows that npm version 9.6.7 is installed on the machine.
Step 5: Install Newman
npm install -g newman
This downloads Newman and installs it on your system. It takes a couple of seconds.
After the installation finishes, you can see that it has added all the packages from the contributors.
Step 6: Export the already created Postman tests
Export your Postman API collection as a JSON file.
This screen provides the option to export the JSON file as Collection v2 or Collection v2.1; choose one and click the "Export" button.
In the command prompt, go to the location of the collection JSON file:
cd C:\Users\Vibha\Desktop\Automation\Postman
Step 7: Run the API collection through command line
newman run API_Testing.json
The output of the above command is shown below. It logs an overall summary showing iterations, requests, test scripts, prerequest scripts, assertions, total run duration, total data received, and average response time; in case of any failure, we also get the failure details.
Congratulations!! We have run the Postman tests through the command line. I hope this tutorial is useful to you.
In this tutorial, we will go through the steps to export a Collection in Postman.
Step 1: Open Postman and navigate to the collection you want to export. Click on the ellipsis (…) icon next to the collection name.
Step 2: Select "Export" from the drop-down menu. Choose the format in which you want to export the collection – Collection v2 or Collection v2.1, both exported as JSON.
Step 3: Choose the location on your computer where you want to save the exported file.
Step 4: Click “Save” to export the collection.
Please keep in mind that if you export a huge collection in Postman, the export procedure may take a few minutes. Using the exported file, you can quickly share your Postman collections with others or back up your collections for future use.
We are done! Congratulations on making it through this tutorial and hope you found it useful! Happy Learning!!
This tutorial explains the process of running Selenium tests on multiple browsers in the GitLab pipeline. This is a very important step towards achieving CI/CD. Ideally, the tests need to run after any change (minor or major) before the latest change is merged to the master branch. This makes the life of a QA engineer much easier.
Once the proposed changes are built, push the commits to a feature branch in a remote repository hosted in GitLab. The push triggers the CI/CD pipeline for your project. GitLab CI/CD then runs automated scripts (sequentially or in parallel) to build and test the application. After a successful run of the test scripts, GitLab CI/CD deploys your changes automatically to any environment (DEV/QA/UAT/PROD). But if the test stage fails in the pipeline, the deployment is stopped.
After the implementation works as expected:
Get the code reviewed and approved.
Merge the feature branch into the default branch.
GitLab CI/CD deploys your changes automatically to a production environment.
To use GitLab CI/CD, we need to keep 2 things in mind:
a) Make sure a runner is available in GitLab to run the jobs. If there is no runner, install GitLab Runner and register a runner for your instance, project, or group.
b) Create a .gitlab-ci.yml file at the root of the repository. This file is where CI/CD jobs are defined.
The Selenium tests run on a headless browser in the pipeline.
How to check if GitLab Runner is configured?
Go to the project created in GitLab. In the left sidebar, click Settings, then go to the CI/CD option; a CI/CD settings page opens. Scroll down to the Runners section. By default, shared runners are selected for any new project.
What is a headless browser?
A headless browser is like any other browser, but without a head/GUI (Graphical User Interface). It is used to automate the browser without launching it. While the tests are running, we cannot see the browser, but we can see the test results appear on the console.
As explained in one of the previous tutorials, we need to add the maven-surefire-plugin to run the TestNG tests through the command line.
Step 3 – Create the Test Code
This is the BaseTests class, where the WebDriver is initialized in headless, maximized mode and closed at the end of each test.
package com.example.tests;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.edge.EdgeOptions;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;

import java.net.URL;
import java.time.Duration;

public class BaseTests {

    // One RemoteWebDriver per thread, so the browser tests can run in parallel
    protected static ThreadLocal<RemoteWebDriver> driver = new ThreadLocal<RemoteWebDriver>();
    public static String remote_url = "http://selenium-hub:4444";
    public final static int TIMEOUT = 5;

    @BeforeMethod
    @Parameters("browser")
    public void setUp(String browser) throws Exception {
        if (browser.equalsIgnoreCase("chrome")) {
            ChromeOptions options = new ChromeOptions();
            options.addArguments("--start-maximized");
            options.addArguments("--headless=new");
            options.addArguments("--remote-allow-origins=*");
            driver.set(new RemoteWebDriver(new URL(remote_url), options));
            System.out.println("Browser Started :" + browser);
        } else if (browser.equalsIgnoreCase("firefox")) {
            FirefoxOptions options = new FirefoxOptions();
            options.addArguments("--start-maximized");
            options.addArguments("-headless");
            driver.set(new RemoteWebDriver(new URL(remote_url), options));
            System.out.println("Browser Started :" + browser);
        } else if (browser.equalsIgnoreCase("edge")) {
            EdgeOptions options = new EdgeOptions();
            options.addArguments("--start-maximized");
            options.addArguments("--headless=new");
            driver.set(new RemoteWebDriver(new URL(remote_url), options));
            System.out.println("Browser Started :" + browser);
        } else {
            throw new Exception("Browser is not correct");
        }
        driver.get().get("https://opensource-demo.orangehrmlive.com/");
        driver.get().manage().timeouts().implicitlyWait(Duration.ofSeconds(TIMEOUT));
    }

    public WebDriver getDriver() {
        return driver.get();
    }

    @AfterMethod
    public void closeBrowser() {
        driver.get().quit();
        driver.remove();
    }
}
There is a Login page that needs to be tested. LoginPageTests contains the tests to log in to the application. After a successful login, the application moves to the next web page – the Home page. You can see that the BaseTests class is extended here.
package com.example.tests;

import org.openqa.selenium.By;
import org.testng.annotations.Test;

import static org.testng.Assert.assertEquals;

public class LoginPageTests extends BaseTests {

    By userName = By.name("username");
    By passWord = By.name("password");
    By loginBtn = By.xpath("//*[@id='app']/div[1]/div/div[1]/div/div[2]/div[2]/form/div[3]/button");
    By errorMessage = By.xpath("//*[@id='app']/div[1]/div/div[1]/div/div[2]/div[2]/div/div[1]/div[1]/p");
    By blankUsername = By.xpath("//*[@id='app']/div[1]/div/div[1]/div/div[2]/div[2]/form/div[1]/div/span");
    By dashboardPage = By.xpath("//*[@id='app']/div[1]/div[1]/header/div[1]/div[1]/span/h6");

    @Test
    public void invalidCredentials() {
        getDriver().findElement(userName).sendKeys("1234");
        getDriver().findElement(passWord).sendKeys("12342");
        getDriver().findElement(loginBtn).click();
        String actualErrorMessage = getDriver().findElement(errorMessage).getText();
        System.out.println("Actual ErrorMessage :" + actualErrorMessage);
        assertEquals(actualErrorMessage, "Invalid credentials");
    }

    @Test
    public void blankUsername() {
        getDriver().findElement(userName).sendKeys("");
        getDriver().findElement(passWord).sendKeys("12342");
        getDriver().findElement(loginBtn).click();
        String actualErrorMessage = getDriver().findElement(blankUsername).getText();
        System.out.println("Actual ErrorMessage :" + actualErrorMessage);
        assertEquals(actualErrorMessage, "Required");
    }

    @Test
    public void successfulLogin() {
        getDriver().findElement(userName).sendKeys("Admin");
        getDriver().findElement(passWord).sendKeys("admin123");
        getDriver().findElement(loginBtn).click();
        String actualMessage = getDriver().findElement(dashboardPage).getText();
        System.out.println("Message :" + actualMessage);
        assertEquals(actualMessage, "Dashboard");
    }
}
Step 4 – Create testng.xml to run the tests
Now, let’s create a testng.xml to run the TestNG tests. It is very easy to create testng.xml in the case of Eclipse. Right-click on the project, and select TestNG -> Convert to TestNG. It will create a basic testng.xml structure. In case of IntelliJ, create a new file with the name of testng.xml and copy the code from here.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite" parallel="tests" thread-count="3">
<test name="Chrome Test">
<parameter name="browser" value="chrome"></parameter>
<classes>
<class name="com.example.tests.LoginPageTests"/>
</classes>
</test> <!-- Test -->
<test name="Firefox Test">
<parameter name="browser" value="firefox"></parameter>
<classes>
<class name="com.example.tests.LoginPageTests"/>
</classes>
</test> <!-- Test -->
<test name="Edge Test">
<parameter name="browser" value="edge"></parameter>
<classes>
<class name="com.example.tests.LoginPageTests"/>
</classes>
</test> <!-- Test -->
</suite> <!-- Suite -->
version: 3 – the version of the docker-compose file format.
services (containers): the list of the images and their configurations.
image: defines which image will be used to spin up the container.
ports: published ports, in host:container format.
container_name: lets you give a name to your container.
depends_on: defines the required dependencies before spinning up a container. In our docker-compose.yml file, the Chrome and Firefox containers depend on the hub container to spin up.
SE_NODE_MAX_INSTANCES: defines how many instances of the same browser version can run on the remote system.
SE_NODE_MAX_SESSIONS: defines the maximum number of concurrent sessions that will be allowed.
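The docker-compose.yml itself is not reproduced above. A minimal sketch of such a file, assuming the official Selenium Grid images and the service names referenced by the pipeline (the extra ping and tests services used in .gitlab-ci.yml are omitted here, and the image tags are assumptions — pin the versions that match your setup):

```yaml
version: "3"
services:
  selenium-hub:
    image: selenium/hub:latest          # assumed tag — pin a real version
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:latest  # assumed tag
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
      - SE_NODE_MAX_INSTANCES=3
      - SE_NODE_MAX_SESSIONS=3
  firefox:
    image: selenium/node-firefox:latest # assumed tag
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
  edge:
    image: selenium/node-edge:latest    # assumed tag
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
```

The SE_EVENT_BUS_* variables point each browser node at the hub's event bus, which is how the nodes register with the Grid.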
Step 6 – Create a .gitlab-ci.yml
stages:
- test
variables:
SELENIUM_SERVER_NAME: selenium-hub
SELENIUM_SERVER_URL: http://${SELENIUM_SERVER_NAME}:4444
DOCKER_HOST: tcp://docker:2375
services:
- docker:20.10.16-dind
test:
stage: test
image: docker/compose
before_script:
- docker-compose up -d selenium-hub chrome edge firefox
- sleep 10
- docker-compose run ping curl ${SELENIUM_SERVER_URL}/status
script:
- docker-compose run tests mvn clean test
artifacts:
when: always
name: "report"
paths:
- target/surefire-reports/**
expire_in: 7 days
GitLab Section
Step 7 – Create a blank project in GitLab
To learn how to create a blank new project in GitLab, please refer to this tutorial.
Step 8 – Push the project from local repository to Gitlab Repository
To learn how to push changes to GitLab, please refer to this tutorial.
Step 9 – Run the tests in the GitLab pipeline
Now, when a new change is committed, a pipeline kicks off and it runs all the tests.
Step 10 – Check the status of the pipeline
Once the status of the pipeline changes to either failed or passed, the tests have been executed.
As you can see, the status is failed here, which means the execution is complete. Looking at the execution logs, out of 9 tests, all 9 passed. This shows that the tests ran successfully in the GitLab pipeline.
I have also added an artifact in the .gitlab-ci.yml, which is highlighted in the image. This artifact creates a folder named "report", and the reports in this folder come from the path /target/surefire-reports. The artifact gives us the option to download or browse the reports. The report will be available for 7 days only, as specified in the .gitlab-ci.yml.
Step 11 – Download the report
Once you click the download button, it downloads "report.zip". Unzip the folder; it looks something like the one shown below:
Example of Emailable-Report.html
Example of Index.html
Congratulations! This tutorial has explained the steps to run Selenium tests in GitLab CI/CD. Happy Learning!!