Enabling Continuous Deployment Through Jenkins
There are hundreds of articles out there on the Internet that speak about continuous deployment through a Jenkins pipeline, so you must be wondering: why one more? Because this is a ONE STOP SHOP. It might be a bit lengthy, but I can assure you, you won't need another article after this one.
I will go through the complete series of steps to set up the tools and configurations required to develop the application on your local machine and deploy it to cloud servers.
Technology Stack
OS: Ubuntu 18.04.03 LTS 64-bit
Language: Java
Java environment: openjdk version "1.8.0_242"
Application: Web application
Application Development: Eclipse IDE 2020
Build tool: Apache Maven 3.6.0
Testing tool: TestNG over JUnit and Selenium WebDriver
Deployment: Apache Tomcat 9.0.33
Source Code Versioning: Git and GitHub
Docker Registry: DockerHub
Cloud Server provider: Digital Ocean
Server operations: Rundeck
Dummy Domain provider: Ngrok
Setting Up Java environment
My Java environment was messed up because I had earlier used a different Java version from sdkman.io, which left me with cyclic package dependencies. I resolved those first and then installed a fresh Java. Along the way, I learnt a bit of the sdk command too. You may skip the sdk steps, but they are good to know.
SDKMAN! is a way to maintain several versions of a Software Development Kit in parallel on Unix-based environments. If you go to sdkman.io, you will find a JDKs menu listing distributions from different vendors as well.
# Installing sdkman
~$ curl -s "https://get.sdkman.io" | bash
~$ source "$HOME/.sdkman/bin/sdkman-init.sh"
~$ sdk version

# Commands on sdk
~$ sdk list : Shows all packages available
~$ sdk install <package_name> : Install a particular package
~$ sdk list java : Shows all packages for java
~$ sdk current java : Shows current installed java
~$ sdk uninstall java <version> : Uninstall particular version

# Installing JDK
~$ sudo apt-get update
~$ sudo apt install openjdk-8-jdk   (the full JDK is needed so that Maven can compile the project; the JRE alone is not enough)
~$ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
~$ export PATH=$PATH:$JAVA_HOME/bin
~$ java -version
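The two export lines above only last for the current shell session. To make them permanent, you can append them to your shell profile (a small optional step, assuming the default bash shell):

~$ echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> ~/.bashrc
~$ echo 'export PATH=$PATH:$JAVA_HOME/bin' >> ~/.bashrc
~$ source ~/.bashrc
~$ echo $JAVA_HOME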
Setting Up Eclipse IDE for development
Eclipse IDE is famous and widely used across the industry to develop enterprise applications. It is quite flexible and provides plugins that integrate with other technologies, so that we can use them right from the IDE.
- Go to https://www.eclipse.org/downloads/ and click download 64bit installer.
- Download the tar file eclipse-inst-linux64.tar.gz
- Unzip the tar file and run eclipse-inst file:
~$ cd /home/{username}
~$ mv Downloads/eclipse-inst-linux64.tar.gz .
~$ tar -xvf eclipse-inst-linux64.tar.gz
~$ cd eclipse-installer
~$ ./eclipse-inst
4. This will run the installer. Select Java EE. Accept the terms and it will be installed. It will also ask for the workspace to be created. Leave it default and finish.
5. Note that this will not create an icon of Eclipse in your launcher menu. We need to create it manually:
~$ cd /home/{username}/.local/share/applications/
~$ touch eclipse.desktop

# Write the following content in eclipse.desktop
[Desktop Entry]
Version=1.0
Name=Eclipse
Comment=Java IDE
Type=Application
Categories=Development;IDE;
Exec=/home/{username}/eclipse/jee-2020-03/eclipse/eclipse
Terminal=false
StartupNotify=true
Icon=/home/{username}/eclipse/jee-2020-03/eclipse/icon.xpm
Name[en_US]=Eclipse

~$ chmod a+x eclipse.desktop

# Reboot your system and you will find Eclipse in the launcher menu
# Add it to your favorites
Creating a web application using Maven as the dependency handler
Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project’s build, reporting and documentation from a central piece of information. We will use it to handle dependencies and plugins, so that the versions of the libraries which we are using are automatically taken care of and we don’t need to explicitly add jars.
1. Eclipse -> File -> New -> Maven Project -> Next -> From the catalog, select Internal -> maven-archetype-webapp.
2. Provide the Group Id (used as the package) -> Provide the Artifact Id (used as the project name).
3. Finish.
4. If this gives an error, "javax.servlet.http.HttpServlet was not found on the Java Build Path", it is due to a missing dependency in the pom.xml file. Add this dependency:

<!-- https://mvnrepository.com/artifact/javax.servlet/servlet-api -->
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
</dependency>
An archetype is a project template. It generates the default project structure, i.e. how files are organized and which dependencies are included.
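If you prefer the terminal to the Eclipse wizard, roughly the same skeleton can be generated with Maven directly. A minimal sketch, assuming the group id com.integration and project name IntegrationProject used later in this article:

~$ mvn archetype:generate \
      -DarchetypeGroupId=org.apache.maven.archetypes \
      -DarchetypeArtifactId=maven-archetype-webapp \
      -DgroupId=com.integration \
      -DartifactId=IntegrationProject \
      -DinteractiveMode=false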
Git and GitHub
Git is a distributed version control system, based on SHA-1 hashes, that allows us to move back and forth between different versions of the project source code. It can be local to our file system, but when it comes to collaboration among the members of the development team, we need a common repository that can be shared. GitHub provides an online repository hosting service, along with several additional features.
# Installing Git
~$ sudo apt-get update
~$ sudo apt-get install git
~$ git --version
Basic Git Commands
~$ git init : Initialize local git repository
~$ git add . : Adding files to staging area
~$ git status : Tells status of files in repo
~$ git commit -m "First Commit" : Commits the changes to repository
~$ git checkout <branch> : Checks out particular branch
~$ git push : Pushing changes to remote(GitHub)
But do you really think that every time you change the project's source code in Eclipse, you will run these commands on the terminal? The answer is a big NO. So, how do you do this?
Adding Git Plugin to Eclipse
Open Eclipse -> Help -> Install New Software -> Paste following URL in Work With: http://download.eclipse.org/egit/updates -> Enter -> Select Git Integration for Eclipse -> Finish.
After the Git plugin is installed, you will get a Team menu item when you right-click the project, with options for Git.
Converting our project as Git enabled repository
1. Eclipse -> Right click on project -> Team -> Share Project.
2. It provides two options for creating repository:
- In parent directory of project
/home/{username}/eclipse-workspace/{projectname}.git
- In the common git repository folder
/home/{username}/git/{projectname}.git

3. You can use either of them. The second option is a good idea because it keeps all projects in one place. But here I will show you both options.

Parent folder
- Select the checkbox, Use or create repository in parent folder
- Create repository -> Select checkbox -> Finish.

Common Git folder
- DO NOT select the checkbox.
- Press Create...
- Replace the word repository by {project-name}.
- Finish.
Creating a GitHub repository
Using GitHub, we can create both public and private repositories. It also provides various features like fork, pull requests and issues, which are very often used in Open Source contributions.
1. Create account on https://github.com/ and sign in.
2. On left repository pane, Click New and provide repository name.
3. DO NOT initialize it with README.md. We will manage this from Eclipse.
Linking Eclipse project repository with GitHub repository
We require this so that whenever we make changes to the project in Eclipse, we can push them to the remote repository, where they are shared with other team members and exposed so that other servers can access them.
1. To view the repository tab:
Eclipse -> Window -> Show View -> Git Repositories.

2. To add the remote GitHub repository, go to the repositories view. In this tab, we see all repositories configured in Eclipse. To link the remote repository to our local repository, we have two directions, based on the earlier choice of parent folder or common Git folder:

Parent folder:
Local Repository -> Remote -> Right click -> Copy URI -> Add GitHub Repository URL -> Username -> Password for authentication -> Finish.

Common Git folder:
Local Repository -> Remote -> Create Remote -> Remote Name -> Configure push -> URI -> Change -> Add GitHub Repository URL -> Username -> Password for authentication -> Finish.
Now, whenever you make changes, simply right-click on the project, go to the Team menu, and you will get various options for Git. Let's try out our first commit.
Before that, we maintain our .gitignore file, which can be found under the working tree in the repositories view. Add the following contents to the .gitignore file:

/target/
/.classpath
/.settings/
/.project
/home/

Right click on project -> Team -> Commit -> Type a commit message -> Select files to commit from bottom -> Commit and push -> Refresh GitHub repository.
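For reference, the equivalent flow from the terminal would look roughly like this (the repository URL is a placeholder; Eclipse's Team menu does the same under the hood):

~$ cd /home/{username}/eclipse-workspace/IntegrationProject
~$ git remote add origin https://github.com/{username}/{repository}.git
~$ git add .
~$ git commit -m "First Commit"
~$ git push -u origin master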
Cheers, Milestone-1 is achieved !
Apache Tomcat Setup
Apache Tomcat provides us with a web container which is an open source implementation of the Java Servlet, JavaServer Pages, Java Expression Language and Java WebSocket technologies, allowing Java applications to be deployed inside it. Let us set it up so that we can run the web application inside it.
1. Go to https://tomcat.apache.org/download-90.cgi
2. Download Binary Distribution: Core tar.gz file.
3. Copy the archive to /home/{username} and extract it there (see the commands after this list).
4. Inside conf folder, go to server.xml and change the Connector port to 8000:
<Connector port="8000" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />** We do this, because we will use Jenkins on port 8080.
Adding Tomcat Server in Eclipse
1. File -> New -> Other -> Filter Server -> Filter Tomcat -> Select Apache Tomcat 9.0
2. Provide the path: /home/{username}/apache-tomcat-9.0.33
3. Configure Apps to run on this, if you want.
4. Finish.
Running our application on local machine
For testing our local changes, we run the application on our local machine in Apache Tomcat Server.
Right Click on Project in Explorer -> Run As -> Run on server -> Choose an existing server -> Select Tomcat -> Finish.
Testing the Application using TestNG and WebDriver
TestNG is a test framework for Java applications inspired by JUnit. It adds a lot of functionality, including annotations, multi-threaded execution and data-driven testing. WebDriver is a web automation framework that allows you to execute your tests against different browsers such as Firefox and Chrome (unlike Selenium IDE). WebDriver also enables you to use a programming language to create your test scripts.
Download the geckodriver driver for Firefox:

Go to https://github.com/mozilla/geckodriver/releases/tag/v0.26.0
Scroll down and download the Linux tar.gz archive for geckodriver.

~$ tar -xvf geckodriver-v0.26.0-linux32.tar.gz
~$ chmod +x geckodriver

Copy this file and put it inside the project folder.
Let us see how we write tests and run them.
1. Create the directory structure src/test/java.

2. Inside this folder, create a class AppTester with a package name, say com.integration.AppTester. Basically, what we are doing is loading a web driver for the Firefox browser in headless mode (headless means the UI is not actually opened). Then, using this driver, we navigate to the app URL. Finally, the @Test annotation specifies that a particular method is considered a test case.

package com.integration;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class AppTester {
    private WebDriver driver;
    String appURL = "http://localhost:8000/IntegrationProject/";

    @BeforeClass
    public void testSetUp() {
        // Point Selenium at the geckodriver binary placed in the project folder
        System.setProperty("webdriver.gecko.driver", "geckodriver");
        FirefoxOptions firefoxOptions = new FirefoxOptions();
        firefoxOptions.addArguments("--headless");
        driver = new FirefoxDriver(firefoxOptions);
    }

    @Test
    public void verifyHomePageTitle() {
        // Navigate to the app and check that the page title matches
        driver.navigate().to(appURL);
        String getTitle = driver.getTitle();
        Assert.assertEquals(getTitle, "Home");
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}

3. I also created testng.xml in the project folder, which tells the TestNG framework which classes contain the test-case annotations.

<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd" >
<suite name="Test Run">
<test name="Simple Test">
<classes>
<class name="com.integration.AppTester"/>
</classes>
</test>
</suite>
4. Add the following dependencies to pom.xml: selenium-java, testng and junit. You can easily search for the dependency coordinates at https://mvnrepository.com/artifact/
5. Add the following plugins to pom.xml:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.1</version>
    <configuration>
        <suiteXmlFiles>
            <suiteXmlFile>testng.xml</suiteXmlFile>
        </suiteXmlFiles>
    </configuration>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.21.0</version>
    <configuration>
        <suiteXmlFiles>
            <suiteXmlFile>testng.xml</suiteXmlFile>
        </suiteXmlFiles>
    </configuration>
    <executions>
        <execution>
            <id>integration-test</id>
            <goals>
                <goal>integration-test</goal>
            </goals>
        </execution>
        <execution>
            <id>verify</id>
            <goals>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>

6. Run As -> Run on Server.
7. Once the application is up, Run As -> Maven test. (For the terminal equivalent, see the sketch below.)
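The same test run works from the terminal too, which is handy for previewing what Jenkins will execute later (assuming the application is already running on Tomcat at port 8000):

~$ cd /home/{username}/eclipse-workspace/IntegrationProject
~$ mvn clean test    # surefire picks up testng.xml and runs AppTester headlessly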
Keep up the ladder, Milestone-2 is achieved !
Jenkins
Jenkins is an open-source automation server that automates the repetitive technical tasks involved in the continuous integration and delivery of software. It is built on Java.
#Installing Jenkins
What we are doing is downloading the signing key for Jenkins and adding it to apt's keys. Next, we add the deb repository to the apt sources list. apt update then refreshes the package index from that .list file, and apt install fetches the package from the repository.

~$ wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
~$ sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
~$ sudo apt update
~$ sudo apt install jenkins
~$ sudo systemctl start jenkins
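To confirm the service came up (Jenkins listens on port 8080 by default), you can optionally check:

~$ sudo systemctl status jenkins
~$ sudo systemctl enable jenkins    # optional: start Jenkins automatically on boot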
Before building the pipeline, we will first set up its remaining elements: install Docker on our local machine, set up nodes on Digital Ocean with Docker installed on them, and install Rundeck and create jobs on it.
Setting up servers on Digital Ocean
Digital Ocean is a company that provides cloud servers in IaaS mode: you get a bare OS install and set up everything else yourself. It also provides database servers and more. You can think of it as offering a complete enterprise model in the cloud, with data-center facilities like load balancing, node migration, etc.
It is cool to use because it gives you $100 of free credit, and trust me, it gives you a real feel of a company-like setup, with your own public IPs, that motivates you to show your work to the world. Note that this credit expires in roughly 3 months with bare-minimum usage, but may run out sooner if you use too many facilities. Ohhh… it also asks for a one-time payment of Rs.70, just to verify your card details. For this demo, we will buy 3 servers and deploy our application on them as production. You will soon come to know why 3 servers!!
1. Go to https://cloud.digitalocean.com
2. Sign Up with your email/gmail/github account.
3. Prove that you are not a robot via the picture-based check.
4. It will ask you to set up billing; provide your card details and it will redirect you to a payment of Rs.70. Pay it and you will get into the control panel of your account.
5. You will see many resources that can be added: droplets (servers), databases, load balancers, clusters, disks, DNS, etc.
6. Select Create->Droplet
Specify following details:
OS: Ubuntu 18.04 (x64)
Choose Plan: Standard with 2GB RAM, 10$ per month. It's enough.
Data center regions: Nearest to your location.
Additional options: Private networking, User data, Monitoring
SSH Key: This can be used for authentication and remote login to our nodes. To generate, run this command on your machine: ~$ ssh-keygen
(Press Enter for both the filename and the passphrase. It will generate a pair of private and public keys.)
~$ cd /home/{username}/.ssh
~$ cat id_rsa.pub
Copy the content and add it to Digital Ocean, in New SSH Key.

7. Provide the hostname and the number of droplets to create.
8. Press Create and it will redirect you to the project, where you can see a list of all the resources you have. Now you can see 3 droplets with their IP addresses and hostnames.
Next, we will set up our local machine for SSH login by name, so that we don't need to type the IP address again and again.
~$ cd /home/{username}/.ssh
~$ touch config
~$ nano config

Write the following content in it (one Host block per droplet):

Host <node_name_on_digital_ocean>
    HostName <IP_on_digital_ocean>
    User root

...and so on for the other nodes.

Now, try logging in from the terminal on your local machine:

~$ ssh <node_name_on_digital_ocean>
The first time, it will ask to add the host to known_hosts; accept and move on. Success!
Docker
I cannot miss talking about Docker, as this technology fascinates me a lot. It is like a revolution in the industry's deployment model: lightweight containers are run instead of full-fledged VMs. The bare minimum needed for the application to run is packaged as an image that can be hosted in a registry like DockerHub; a runtime instance of it is called a container. This makes it small in size, fast to boot up and able to run anywhere. Docker emerged as a containerization tool, but later grew into a suite of products including docker-machine, docker-compose, Docker Swarm and many more. You can read more about it at https://www.docker.com/why-docker.
We will be using a step beyond basic Docker, called Docker Swarm. Before that, let's install Docker on our local machine, as well as on the nodes we created on Digital Ocean.
#Installing docker
~$ curl -fsSL https://get.docker.com -o get-docker.sh
~$ sh get-docker.sh
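A couple of optional follow-up checks after the install script finishes, on the local machine and on each droplet (the usermod step needs a re-login to take effect):

~$ sudo usermod -aG docker $USER    # run docker without sudo after re-login
~$ docker --version
~$ sudo docker run hello-world      # sanity check that the daemon is working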
Docker Swarm
Docker Swarm coordinates a distributed set of nodes (managers and workers) using an underlying Raft-based store, which ensures that the Docker service we deploy is always up and running. It allows us to specify replicas and scale them up and down. Thus, the important cloud concerns of scalability and availability are well taken care of by Docker Swarm. It also provides built-in layer-4 load balancing through its routing mesh, so any node in the swarm can handle incoming requests.
Bingo!! Now you get why I asked you to create 3 droplets: we will create multiple replicas of the service and thus have multiple containers running our application, in just one shot.
Let us move on to configuring the swarm across the three machines.
# Setting up swarm
# Note that we need to remote login to the nodes

node1~$ docker swarm init                 (Initializes swarm and Raft)
node1~$ docker swarm join-token manager   (Get the command to join this swarm as manager)

node2~$ docker swarm join --token <token> IP:2377
node3~$ docker swarm join --token <token> IP:2377
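You can verify the swarm membership from any manager node:

node1~$ docker node ls
# All three nodes should be listed, with one of them marked Leader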
Now, our target is to run a Docker service on these nodes when we actually deploy the application from Rundeck. Let us move on to configuring Rundeck.
How’s the josh? Milestone-3 is achieved !
Rundeck
Rundeck provides automated runbooks that work the way you want to work. It is built for both modern distributed ways of working and centralized legacy environments, automating the running of jobs on servers and scheduling them to reduce manual effort. Thus, it enables you to resolve incidents quicker and improve the productivity of your teams. It also allows assigning users and permissions for jobs.
#Installing rundeck
Download the Rundeck debian package from http://rundeck.org/download/deb

~$ sudo dpkg -i rundeck_3.2.4.20200318-1_all.deb
~$ sudo service rundeckd start

#Configuring authentication on rundeck
1. Open localhost:4440 in the browser to reach the Rundeck GUI.
2. Default login is admin/admin.
3. Properties can be found at:
~$ sudo cat /etc/rundeck/rundeck-config.properties
4. Before running a job, we need to set up an authentication mechanism for the node. Gear icon on top right -> Key Storage -> Add or Upload key -> Select private key -> Add the private key for node1 under keys/root -> Save.
5. SSH into node1 and append the id_rsa.pub content to the authorized_keys file inside the /root/.ssh directory.

#Adding nodes
Create project -> Specify name, desc, label -> Press create.
Sources for adding nodes -> From file -> Specify:

Format: resourcexml
File path: /var/lib/rundeck/projects/Linux/etc/resources.xml
Select the checkboxes for Generate, Writeable, Include server node.
Save.

The file gets generated -> Go to the Edit tab -> Insert a node tag with the details:

<node name="node1" description="node1" tags="node1" hostname="167.71.238.28" osArch="amd64" osFamily="unix" username="root" ssh-key-storage-path="keys/root"/>

Save.

#Testing whether nodes were added successfully
Commands tab on left -> Choose node -> Execute command: uptime -> Run.
--------------------------------------------------------------------
An alternative approach to adding nodes (taken from Tushar's response):

1. The SSH key pair of the Rundeck server machine should be copied to the /var/lib/rundeck/.ssh/ directory.
~rundeck_machine$ cp .ssh/* /var/lib/rundeck/.ssh/

2. Next, add the public key id_rsa.pub of the Rundeck machine to the .ssh/authorized_keys file on the node machine.

3. However, the copied keys are owned by the root user and not the rundeck user, which will still give permission denied. Thus, ownership needs to be transferred to the rundeck user and group to allow access to the files.
~rundeck_machine$ sudo chown -R rundeck:rundeck /var/lib/rundeck/.ssh/

**** Note: You don't need to specify the ssh-key-storage-path in the <node> tag. Also note that if you do specify ssh-key-storage-path, then the authentication mechanism of the <node> tag takes priority over the /var/lib/rundeck/.ssh keys. Using <node> tag authentication provides more flexibility through passphrases.
Let us create a job in Rundeck that we want to run on our Swarm node. This will require a workflow of commands to be run.
Project -> Jobs -> Job actions -> New Job -> Create steps -> As commands.

Name the steps as you want and add the following commands:

Step1: docker service rm integration_app
Step2: docker image pull vvin95/integration-app:latest
Step3: docker service create --name integration_app --replicas 3 --publish 8080:8080 vvin95/integration-app:latest

** Note the JobId of the job created. It will be required to trigger this job from Jenkins.
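Once this job has run at least once, you can check the result from any manager node in the swarm:

node1~$ docker service ls                   # integration_app should show 3/3 replicas
node1~$ docker service ps integration_app   # shows which node each replica landed on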
Integrating SCM, image build and deployment through Rundeck using Jenkins: till now, we have constructed bits and pieces of our automation, and now is the time to glue them together into the complete pipeline that will automate our task.
Remember, we installed Jenkins earlier; now we are going to set it up for our use. There are a large number of configurations inside Jenkins, so keep up with me. I will explain the meaning of each configuration that I am going to use in the pipeline, so it will be easy.
1. Go to localhost:8080 in browser, where Jenkins GUI is present.
2. It will ask for initialAdminPassword and also provide the path where this password is saved.
~$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword

3. Install the suggested plugins.
4. Create a new user and finish. Start using Jenkins.
Jenkins relies heavily on plugins, and many people question whether this defeats its very purpose. They say we automate to speed up innovation and spend less time on repetitive tasks, but it turns out teams often end up handling broken Jenkins plugins as well. So, always prefer stable and non-vulnerable plugins.
#If not vulnerable
Manage Jenkins -> Manage Plugins -> Available -> Filter -> Install without restart

#If vulnerable
Download the stable plugin file from https://updates.jenkins-ci.org/download/plugins/
Manage Jenkins -> Manage Plugins -> Advanced -> Upload HPI file -> Install without restart

Plugins to install:
1. Git
2. GitHub plugin
3. Unleash Maven plugin
4. Docker plugin
5. Pipeline
6. Rundeck
Configure System in Jenkins
This involves configuring URLs for the various components (for example, if we are using an external application server like Rundeck). Let's go step by step.
Manage Jenkins -> Configure System

After installing the above plugins, you will see the following configuration items in this menu:

#Rundeck
-> Add Rundeck
Instances: Provide name
Provide URL of Rundeck server (we have, localhost:4440)
Login ID
Password
Test connection
#Maven Project Configuration
For this, leave the default settings; we don't need to specify anything.

#Jenkins Location
Here, you can change the hostname and port number if you want.
Specify the System Admin e-mail address, which will be used to send emails.

#Extended Email Notification
Note that I am using Gmail for this; you can use your organization's SMTP server details.

SMTP Server : smtp.gmail.com
Default user E-mail suffix : @gmail.com
Use SMTP Authentication : Select Checkbox
Username : <Gmail account>
Password : <App password set in Gmail Privacy>
SMTP Port : 465
Default Recipients : cc: abc@gmail.com, bcc:xyz@gmail.com
Default Content : Modify this as per your wish

#Email Notification
Configure this the same as above.

** For the app password, you need to enable 2-step verification in your Gmail security settings. Otherwise, Gmail does not allow third-party apps to send emails directly.

#Cloud
This will say, "The cloud configuration has moved to a separate configuration page." Go to that configuration page -> Add Cloud -> Docker. Provide the cloud details:
Name: Docker
Docker Host URI: tcp://127.0.0.1:4243
Expose DOCKER_HOST: Select
In the Advanced tab, provide the Docker host IP.

Test Connection. It will fail the first time, saying permission denied. For this, we need to grant read and write permissions on the Docker socket:

~$ sudo chmod 666 /var/run/docker.sock
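The steps above assume the Docker daemon is already listening on TCP port 4243; a stock install only listens on the Unix socket. A sketch of one way to add the TCP listener, via a systemd override (this is my assumption of the missing step, adjust to your setup):

~$ sudo systemctl edit docker.service
# In the editor that opens, add:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:4243
~$ sudo systemctl daemon-reload
~$ sudo systemctl restart docker
~$ curl http://127.0.0.1:4243/version    # should return JSON if the listener is up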
Global Tool Configuration for Jenkins
This involves providing the paths to the various binaries used for Java, Maven builds, Git, etc.
Assuming you followed the instructions exactly, set the following values; if you installed into a different directory, adjust accordingly (the commands after this list help discover the paths on your machine).

Manage Jenkins -> Global Tool Configuration

Maven Configuration: default
JDK : /usr/lib/jvm/java-8-openjdk-amd64
Git : /usr/bin/git
Ant and Gradle : default
Maven : /usr/share/maven
Docker : default
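If you are unsure of the paths on your machine, these commands reveal them (the JDK path is the directory above jre/bin/java in the readlink output):

~$ readlink -f $(which java)    # resolves to something like /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
~$ which git
~$ mvn -v                       # prints a "Maven home:" line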
Creating DockerHub repository
DockerHub is the default registry for docker, where we can find both official images from companies and customized images created by different users. Let us create our own repository.
Go to https://hub.docker.com/ .
Sign Up and go into account.
Create Repository -> Provide Name and Description -> Create.
Adding DockerHub credentials in Jenkins
In the left menu, click Credentials -> System -> Global credentials -> Add Credentials -> Provide your DockerHub username and password -> Provide the ID for these credentials as docker-hub-credentials. We will be using this later, so make a note of it.
Huff… tired? Just the last brick, hold on. After all, you are doing this because you don't want to do anything but code your beautiful application! Milestone-4 is achieved !!
Creating Jenkins Pipeline
The aim of the pipeline: whenever we push changes to our GitHub repository, it should trigger, build using the dependencies defined in pom.xml, and run the tests automatically. If the tests succeed, it builds an image from the Dockerfile and pushes it to DockerHub. If that succeeds, the Rundeck job is triggered, which deploys the final containerized application to the swarm of nodes.
Configure the GitHub webhook: this is a hook that sends a POST message to Jenkins whenever the GitHub repository has new changes. GitHub requires a public URL for the Jenkins server, but we are running on localhost, so we use ngrok, which creates a tunnel for us so that the trigger can be forwarded. If you are running Jenkins on a public server, provide that URL instead.
#Generating public URL
Go to https://ngrok.com/
Create an account and log in.
Go to the Auth option in the left pane of your account and copy the command from there, along with the authentication token.

On the terminal, run:

~$ sudo snap install ngrok
~$ ngrok authtoken <auth_token>
~$ ngrok http 8080

This will generate the public URL for us in the terminal, something like:
http://6bbbaa4e.ngrok.io

** DO NOT close the terminal with ngrok running. If you use the paid version of ngrok, you can get URLs that last longer.

#GitHub Webhook
GitHub repository -> Settings -> Webhooks -> Add Webhook.
Payload URL: http://6bbbaa4e.ngrok.io/github-webhook/
Select just the push event.
Save.

Now, whenever you push changes to this repository, it will trigger Jenkins.
Creating Jenkins pipeline
I will be using Pipeline script rather than Pipeline script from SCM, because I feel it is easier. Let us see how we create the pipeline.
Jenkins -> New Item -> Pipeline -> Enter item name -> OK

General section: Leave it default.

Build Triggers: This specifies when our pipeline will trigger. It has various options:
- Build periodically: Run the pipeline every specified period.
- Poll SCM: Poll every specified period; if there are any changes, run the pipeline.
- GitHub hook trigger for GITScm polling: We use this one.

Advanced Project Options: Leave it default.

Pipeline: Select Pipeline script.

P.S. I didn't know the syntax, but Jenkins provides a Pipeline Syntax tab below, and that is what I used to write the script.

pipeline {
    environment {
        registry = "<DockerHubUsername>/<RepositoryName>"
        registryCredential = 'docker-hub-credentials'
        dockerImage = ''
        dockerImageLatest = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git '<GitHubRepository URL>'
            }
        }
        stage('Build Executable Jar') {
            steps {
                sh 'mvn clean test package'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                    dockerImageLatest = docker.build registry + ":latest"
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry( '', registryCredential ) {
                        dockerImage.push()
                        dockerImageLatest.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
        stage('Execute Rundeck job') {
            steps {
                script {
                    step([$class: "RundeckNotifier",
                          includeRundeckLogs: true,
                          jobId: "2125a7f4-cf3c-49b7-ac45-635da518f50b",
                          rundeckInstance: "Rundeck",
                          shouldFailTheBuild: true,
                          shouldWaitForRundeckJob: true,
                          tailLog: true])
                }
            }
        }
    }
    post {
        always {
            emailext body: "Dear Sir/Mam, <br/><br/> ${currentBuild.currentResult}: Job ${env.JOB_NAME} build ${env.BUILD_NUMBER} <br/> More info at: ${env.BUILD_URL} <br/><br/> THIS IS A SYSTEM GENERATED EMAIL. PLEASE DO NOT REPLY.",
                recipientProviders: [[$class: 'DevelopersRecipientProvider'], [$class: 'RequesterRecipientProvider']],
                subject: "Jenkins Build ${currentBuild.currentResult}: Job ${env.JOB_NAME}"
        }
    }
}
Just one more thing…
Add a Dockerfile to your project in Eclipse, which will act as the means to build the image. Without a Dockerfile, we cannot create an image. I used a simple one for demonstration:

FROM tomcat:9.0
WORKDIR /usr/local/tomcat
ADD target/IntegrationProject.war webapps/
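Before relying on the pipeline, you can sanity-check the Dockerfile locally. A quick sketch (the image tag is just a local placeholder, and 8888 is an arbitrary free host port so the local Tomcat on 8000 and Jenkins on 8080 are not disturbed):

~$ mvn clean package                               # produces target/IntegrationProject.war
~$ docker build -t integration-app:local .
~$ docker run --rm -p 8888:8080 integration-app:local
# Then browse to http://localhost:8888/IntegrationProject/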
Grab a cup of coffee. Make some beautiful UI and commit the changes to GitHub. Yeahhhh… beauty lies in the eyes of the beholder. The world sees your application, and what would you see? This…
In the end, it does matter ! Cheers ! :)