Run Selenium tests in headless mode with real Chrome and Firefox

Time has always been a key factor in measuring the efficiency and effectiveness of our automated test scripts. With CI/CD being a crucial need, it's very important that our tests run quickly. The obvious way to achieve this is to run your tests with headless browsers (e.g. GhostDriver, the PhantomJS driver), but we have always seen the issues below with these browsers-

  1. The same locators (XPaths, CSS selectors) do not always work when tests are executed on headless browsers
  2. Additional lines of code are needed to handle cookies and other quirks
  3. JavaScript alerts create problems

And there are many more problems apart from the ones mentioned above. These problems occur because the XPath engine and JavaScript engine implementations vary from browser to browser, and especially for headless browsers. But what if we could run our real, intended browsers in headless mode during test execution? That would obviously solve the problems above.

Selenium always surprises us with cool new features. This time it has simply removed the need for headless browsers like PhantomJS. From Selenium version 3.6.0 onwards you can run your real-browser (Chrome and Firefox) tests in headless mode.

Now let's look at how to make our browsers run in headless mode during test execution. To utilize this feature, please make sure you are using Selenium version 3.6.0 or above. Let's go through sample code for a Google search test, for Chrome and Firefox respectively.

System.setProperty("webdriver.chrome.driver","Path to chrome driver exe");
ChromeOptions options = new ChromeOptions();
options.setHeadless(true); //this line is actually enables the headless mode
WebDriver driver = new ChromeDriver(options);
driver.navigate().to("https://google.com");
driver.findElement(By.name("q")).sendKeys("hello");
driver.close();
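Under the hood, setHeadless(true) simply passes the --headless flag to Chrome. If you are stuck on a Selenium 3.x version older than 3.6.0, a sketch of the same effect using raw browser arguments (to the best of my knowledge) looks like this:

ChromeOptions options = new ChromeOptions();
options.addArguments("--headless");    // equivalent to setHeadless(true)
options.addArguments("--disable-gpu"); // historically recommended for headless Chrome on Windows
WebDriver driver = new ChromeDriver(options);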

Similarly, we can use FirefoxOptions to enable headless mode for the Firefox browser

System.setProperty("webdriver.gecko.driver","Path to gecko driver exe");
FirefoxOptions options = new FirefoxOptions();
options.setHeadless(true); //this line is actually enables the headless mode
WebDriver driver = new FirefoxDriver(options);
driver.navigate().to("https://google.com");
driver.findElement(By.name("q")).sendKeys("hello");
driver.close();

Also, it's very easy to do your Selenium Grid configuration for remote test execution. Please refer to the code snippet below.

import java.net.URL;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

FirefoxOptions ffoptions = new FirefoxOptions();
ffoptions.setHeadless(true);
RemoteWebDriver driver = new RemoteWebDriver(
        new URL("http://localhost:4444/wd/hub"),
        ffoptions);

I have also done a time comparison of the Chrome browser in headless and non-headless mode. Below is the time analysis for a simple test which navigates to Google and searches for some text (a minimal measurement sketch follows the numbers):

  • Execution time in non-headless mode – 12.193 seconds
  • Execution time in headless mode – 9.321 seconds
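For reference, a minimal sketch of how such a measurement can be taken (the test body itself is elided):

long start = System.currentTimeMillis();
// ... run the headless Google search test shown above ...
long elapsedMs = System.currentTimeMillis() - start;
System.out.println("Execution time: " + (elapsedMs / 1000.0) + " seconds");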

This can tremendously reduce execution time when you run your large test suites.

Now it's time to say goodbye to your third-party headless browsers. Execute your tests with real browsers in headless mode.

Cheers, Happy Automating 🙂


Docker Tutorial Part 4 -> Understanding components: docker-machine, Dockerfile, Images and Containers

In our previous blog post Docker setup and installation on Ubuntu, we completed the installation of Docker. Now, before diving deep into hands-on work with Docker, it's time to understand its components and terminology.

Following are some major components of Docker-

  1. Docker-Machine
  2. Docker Engine
  3. Dockerfile
  4. Docker Images
  5. Docker Containers

Docker-Machine – This can also be referred to as the Docker host. If you are working on a Linux machine, then your Linux system itself is the docker-machine / Docker host. But if you are working with Boot2Docker on Windows, then the Linux VM inside Oracle VirtualBox is the docker-machine. In the Windows (Boot2Docker) case, you can have multiple docker-machines and can create a new one using the command below-

docker-machine create --driver virtualbox <suitable machine name>
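For example, assuming the name dev for the new machine (any name works):

docker-machine create --driver virtualbox dev   # provisions a new VirtualBox-backed docker-machine
docker-machine ls                               # lists your machines along with their state and daemon URL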

Docker Engine – The Docker engine is the core of Docker; it is the main software/package which drives all docker commands and enables users to create images and containers. It is a lightweight and powerful open source containerization technology, combined with the capabilities to build and ship your applications.

Dockerfile – A Dockerfile is a plain text file with no extension. It holds a series of instructions which need to be performed to create a docker image. Instructions are docker-specific commands which, when executed in sequence, generate a docker image. If you have a Dockerfile, you can easily create an image from it with a custom name. Every instruction, when executed, is stored as a layer. A sample Dockerfile which installs Python looks like the below-

FROM ubuntu:14.04
RUN apt-get update && \
    apt-get install -y python python-dev python-pip python-virtualenv && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /data
CMD ["bash"]
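Once this Dockerfile is saved, you can build and run an image from it; the tag my-python-image below is just an illustrative name:

docker build -t my-python-image .   # builds the image from the Dockerfile in the current directory
docker run -it my-python-image      # starts a container; the CMD above drops you into bash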

Docker images – A docker image is a blueprint of what you need in a container. A docker image is built up from a series of layers, where each layer represents an instruction in the image's Dockerfile. All image layers are read-only; when a container is started, a thin writable layer is added on top. To understand the concept at a high level, you can consider an image as a class and a container as an object of that class. An object is a run-time entity, similar to a container, and just as a class can have many objects, an image can have multiple containers created from it.

Docker container – A docker container is a lightweight execution environment to build and ship your application. It is simply your docker image in action.

Let's understand the difference with an analogy. Consider your Dockerfile as some Abc.java file. When you build/compile Abc.java you get a .class file (the docker image). From the class Abc (the docker image) you can then create multiple objects, and these objects can be thought of as the docker containers. So in the end, the run-time entity you actually work on is the container.
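To see the class/object analogy in action, here is a quick sketch (the container names are arbitrary, and sleep is used just to keep the containers alive):

docker run -d --name container1 ubuntu:14.04 sleep 1000   # first "object" created from the image
docker run -d --name container2 ubuntu:14.04 sleep 1000   # second "object" of the same image
docker ps                                                 # shows both containers running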

In the next blog post, we will learn about running your local source code inside a container.

To know more in detail, for hands-on sessions and for personal / corporate training, please reach out to – gauravtiwari91@yahoo.com

Docker Tutorial Part 3 -> Setup and installation on Ubuntu

Installing Docker on Linux is as simple as installing any other Linux package; we don't require the whole Docker Toolbox for working with Docker on Linux.

In this blog post, I will be talking about installing the Community Edition (CE) of Docker.

To install Docker, you need the 64-bit version of one of the following Ubuntu releases-

  • Xenial 16.04 (LTS)
  • Trusty 14.04 (LTS)
  • Yakkety 16.10

Uninstall older versions of Docker – Older versions of Docker were called docker or docker-engine. If you have these installed, uninstall them; otherwise skip this part.

sudo apt-get remove docker docker-engine

Install Docker – You can install Docker in different ways, as per your needs-

  1. Set up the docker repositories and install from them – this makes installation and upgrades easy (recommended approach)
  2. Download the DEB package and install it manually, also managing upgrades manually (preferred when internet access is lacking)

Install using the repository – If you are doing the setup for the first time on a new host machine, you need to set up the docker repository. You can then use the same repository for installs and updates.

  • Install packages to allow apt to use a repository over HTTPS:
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
  • Add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  • Verify that the key fingerprint is 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
sudo apt-key fingerprint 0EBFCD88

pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22
  • Use the command below to set up the stable repository
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
  • Update the apt package index
sudo apt-get update
  • Install the latest or a specific version of Docker with the commands below
sudo apt-get install docker-ce             # for the latest version
sudo apt-get install docker-ce=<VERSION>   # for a specific version
  • Verify that Docker CE is installed correctly by running a sample hello-world docker image-
sudo docker run hello-world
  • You should see something like the output below if your installation is successful and complete

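A typical hello-world run prints, among other lines (abridged):

Hello from Docker!
This message shows that your installation appears to be working correctly.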

If you face any issues during installation, please mention them in the comments section.

In the next blog post, we will learn Docker terminology and the different Docker components.

To know more in detail, for hands-on sessions and for personal / corporate training, please reach out to – gauravtiwari91@yahoo.com

Docker Tutorial Part 2 -> Getting started with Docker: Setup and Installation on Windows

Now that we have a basic understanding of Docker technology, let's go ahead and do the installation. If you are still not aware of it, please go back and read my post Docker technology overview: How is it different from virtual machines and then come back here.

Please make sure virtualization is enabled on your Windows system and follow the steps below to install Docker Toolbox on Windows-

  • Click on the link and download docker toolbox from – Get Docker Toolbox for Windows
  • Docker Toolbox includes the following docker tools (don't worry, we will cover each of them in upcoming blog posts)-
  1. Docker CLI client for running Docker Engine to create images and containers
  2. Docker Machine for running Docker Engine commands from Windows terminal
  3. Docker Compose for running docker-compose command
  4. Docker Kitematic – (Docker GUI, for interactive docker operations)
  5. Oracle VM Virtual box
  6. Git MSYS-git UNIX tools
  • Docker Engine uses Linux-specific kernel features, so we can't run Docker Engine natively on Windows. (This means you will indirectly be creating containers inside a small Linux VM running in Oracle VirtualBox.) The newer Docker for Windows uses native virtualization and does not need VirtualBox to run docker. (Let's stick with the Toolbox for now, for learning purposes.)
  • Install the executable which you downloaded in the first step: double-click it and keep following the installation instructions. Once you are done with the installation, you will see new Docker icons on your desktop.


  • Click on the Docker Quickstart Terminal icon to launch the Toolbox terminal. If it asks for any permissions, press yes. Once it has started, you will see a terminal displaying a $ prompt
  • Now type the command docker and you will see all the help options for docker

Now you are good to go and can play around with docker images and containers. You can give the hello-world docker image a try; this image checks your installation and prints a success message if the installation is correct. Type "docker run hello-world" in the terminal and hit Enter.

In the next blog post, we will learn about doing the setup in a Linux environment 🙂

To know more in detail, for hands-on sessions and for personal / corporate training, please reach out to – gauravtiwari91@yahoo.com

Docker Tutorial Part 1 -> Docker technology overview: How is it different from virtual machines

Before we blindly follow the docker training program and start learning it, let's understand why we should learn it, why we need it, what this technology is and how it works.

What is Docker – All applications have their own dependencies, which include both software and hardware resources. Docker is an open source platform for developers, QA and others. It is a mechanism that helps in isolating the dependencies of each application by packaging them into a single unit called a container. Containers are safe to use and easily deployed compared to previous approaches.

How containers are different as a concept – Let's understand the difference with an analogy. Consider a virtual machine as a house and a container as an apartment.

Houses (virtual machines) are fully self-contained, with their own infrastructure – plumbing, electricity, water supply etc. The majority of houses have at least a bedroom, living area, bathroom and kitchen. Even if I only need a house with a single room, I would end up buying more than what I need.

Apartments (the containers) are built around shared infrastructure. The apartment building (the Docker host) shares plumbing, electricity, water supply etc. Apartments are also offered in different sizes, as per your need, and you pay only for those services which you want to use.

Also, the maintenance cost of a house will always be higher than that of an apartment.

So with containers, you share the underlying resources of the Docker host and use only the software which you need to run your application.

And with virtual machines it is just the opposite: you get a full operating system and the default programs that come with it.

Now that we have understood the concept, let's get a little technical. Consider the building as the docker host and the builder as the docker engine in the explanation below-

Docker containers versus virtual machines – Virtual machines have a full OS with its own memory management and the overhead of virtual device drivers. In a virtual machine, valuable resources are emulated for the guest OS and hypervisor, which makes it possible to run many instances of one or more operating systems in parallel on a single machine.

Docker containers, on the other hand, are executed by the Docker engine rather than a hypervisor; therefore containers are smaller than virtual machines and enable faster startup, better performance and great compatibility due to sharing of the host's kernel. The architecture-level difference is illustrated below.

[Diagram: Docker containers vs. virtual machines architecture]
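A quick way to observe the kernel sharing in practice (alpine is just a conveniently small image; any image works):

uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # the same kernel version, reported from inside a container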

So, to optimize our SDLC and reduce the time spent in test script execution and the overhead of maintaining execution/deployment environments, we should really go for container technology.

Now we know what Docker is and why we should use it. To know more in detail and for personal / corporate training, please reach out to – gauravtiwari91@yahoo.com


Triggering Remote Jenkins jobs from another Jenkins

Continuous integration and delivery is a crucial part of the software life cycle, covering automatic execution and deployment of code. Usually we have a single Jenkins for deployments, automation scripts etc. But many times different teams like Ops, Dev and QA create their own Jenkins instances for their own purposes.

Now, if we talk about integrating everything in the same place, it is not feasible to manage and re-create the jobs. So the solution is to make the two Jenkins servers communicate and trigger builds accordingly.

In this blog post, I will talk about how you can trigger a JOB-A (on remote-jenkins) from JOB-B (on local-jenkins). A real scenario for this would be triggering an automation script on Jenkins1 after the successful completion of a code deployment on Jenkins2.

To understand this, let's assume a few things-

  • We have a local job-  Job-B (local-jenkins) on server local-jenkins:8080
  • We have a remote job – Job-A (remote-jenkins) on server – remote-jenkins:8080

Now we want to trigger Job-A from Job-B. To achieve this we need to install the Parameterized Remote Trigger plugin in our local Jenkins (the one from which we want to trigger the job – local-jenkins in this case).

Go to Manage Jenkins -> Configure System -> Parameterized Remote Trigger Configuration, and do the configuration as shown below-

[Screenshot: Parameterized Remote Trigger Configuration]

You can add many remote servers. Now you have to make the following changes in your local Jenkins job, i.e. Job-B.

[Screenshot: build steps of Job-B]

[Screenshot: build info]

Now save the configuration of your job and build your local job, i.e. Job-B. The console output of the local job looks like below-

[Screenshot: local job console output]

The console output of the remote job looks like below-

[Screenshot: remote job console output]

You can see it says "started by local-jenkins". So the job on the remote Jenkins has indeed been triggered from the local Jenkins.
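As a side note, for simple cases Jenkins can also trigger a remote job without this plugin, over plain HTTP. This assumes the remote job has "Trigger builds remotely" enabled with an authentication token; the user, API token and token values below are placeholders:

curl -X POST "http://remote-jenkins:8080/job/Job-A/build?token=<TOKEN>" --user <USER>:<API_TOKEN>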

Similarly, you can link multiple jobs to be built across different Jenkins servers. I hope now you can easily integrate multiple Jenkins instances. Please add comments in case of any issues encountered. 🙂


Docker Training Program – [Build, Ship, and Run Any App, Anywhere]

In our automation-driven industry, we are way ahead in automating our test cases, deployments etc., but automating your infrastructure and the setup of environments is still a pain. We have all seen situations where something works on one machine but not on another. Sometimes a QA files an OS-specific defect which the developer is no longer able to reproduce. The solution to all of these problems is one single thing – DOCKER

I have recently started working with Docker and have utilized it very efficiently in automation testing and DevOps, especially in setting up execution environments. Following are the major benefits of docker-

  • Build, Ship, and Run Any App / automation script , Anywhere
  • Setup of execution environment for dev/testing is a matter of seconds
  • Docker Hub – a cloud of docker images which provides an image for every possible piece of software you are looking for; you can also push your own images and use them from anywhere
  • Continuous integration and fast deployment

Going forward, I will be going through the following topics-

  1. Introduction to Docker container technology, how is it different from virtual machines
  2. Installing and Setting up Docker on Windows
  3. Installing and Setting up Docker on Linux (Ubuntu)
  4. Understanding Docker components: docker-machine, Dockerfile, Images and Containers
  5. Hooking your local source code in to container
  6. Understanding major docker commands and shortcuts
  7. Executing your local selenium test inside the container
  8. How to use Selenium-Grid with docker
  9. Building custom images from dockerfile
  10. How to minimize the size of your docker images
  11. Managing your containers with docker compose
  12. How to scale your execution environment with docker and multi-threading
  13. Using docker containers as Jenkins slaves
  14. Docker on AWS

I will be writing about each topic mentioned above and will keep adding new topics to the list. Stay tuned to this post for all docker-related stuff.

You can reach out to – gauravtiwari91@yahoo.com for more details and personal training with live projects.