CI (4)

Create a Jenkins Support Channel

When creating a large Jenkins setup, it is often a good idea to have a team governing, documenting, and managing the tooling. Such a team almost always requires a clear support channel; otherwise everything becomes tribal knowledge, and that does not scale well.

The signature of a large Jenkins setup is that it has

  • Multiple workers with varying operating systems
  • Multiple integrations to systems for static analysis, artifact repositories, and code-review platform triggers such as pull requests and patch sets
  • Support for an advanced authorization role strategy or single sign-on from other platforms
  • Support for multiple build-systems such as MSBuild, Maven, Gradle, Ant, Make and so forth

When a large number of teams with a similarly large number of software development stacks starts to use it, it becomes important to be able to support the system by providing a clear and obvious support channel.

A common channel for issue management is JIRA, which ironically started as an ITSM platform. A common place to store documentation is whatever platform you have available, such as GitHub Pages, MediaWiki or Confluence.

So I would like to show how to add custom integrations in the Jenkins header that link directly into those tools.

Starting point

Configure JIRA Issue Collector

  • In JIRA, create a target project
  • Go to settings for the project
  • Go to Issue Collectors
  • Click + Add issue collector
  • Configure it as you wish, but make sure you select the custom trigger style
  • Submit the collector
  • Extract the link from the markup JIRA wants you to embed.

The collector-link will look something like this

Configure Jenkins to embed collector and run some JavaScript

  • Go to Manage Jenkins
  • Go to System Configuration
  • Fill in Issue Collector URL: with the collector link from the previous step.
  • Fill in the Theme JavaScript field with an accessible location where you store this JavaScript. A Sonatype Nexus raw repository works nicely for this purpose.
  • Get the following HTML
  • Put it into the system message
  • Previewing the HTML should now show two buttons in the header
  • If you want, you can add custom styling for the buttons in the Simple Theme CSS box. I used the following to achieve the look below.
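As a sketch of what the theme JavaScript might contain — the element ids, labels and URLs below are examples of my own, not from any particular setup. The triggerFunction hook on ATL_JQ_PAGE_PROPS is Atlassian's documented way to wire a custom trigger to an issue collector:

```javascript
// Hypothetical theme JavaScript for the Jenkins header.
// Ids, labels and the docs URL are examples; adjust to your own setup.
function supportButtonHtml(id, label, href) {
  // Build the markup for one header button; href may be '#' for the
  // collector button, which is wired up via the trigger function below.
  return '<a id="' + id + '" class="support-button" href="' + href + '">' + label + '</a>';
}

// Only touch the DOM when actually running in a browser.
if (typeof document !== 'undefined') {
  var anchor = document.querySelector('#header .login'); // a spot in the Jenkins header
  if (anchor) {
    anchor.insertAdjacentHTML('beforebegin',
      supportButtonHtml('docs-button', 'Documentation', 'https://wiki.example.com/jenkins') +
      supportButtonHtml('report-issue-button', 'Report an issue!', '#'));
  }

  // Atlassian issue collectors with a custom trigger call this function
  // and hand us a callback that opens the JIRA modal.
  window.ATL_JQ_PAGE_PROPS = {
    triggerFunction: function (showCollectorDialog) {
      var button = document.querySelector('#report-issue-button');
      if (button) {
        button.addEventListener('click', function (e) {
          e.preventDefault();
          showCollectorDialog(); // opens the JIRA issue modal
        });
      }
    }
  };
}
```

The collector script itself is loaded via the Issue Collector URL configured above; this snippet only adds the buttons and hooks the trigger.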

And voila! You have opened a support channel directly from your Jenkins header into your documentation, and added support for filing JIRA issues directly into your JIRA project! Clicking Report an issue! will open a JIRA modal and allow your users to fill in a support issue as their own JIRA users.

Issue collectors can be customized to pre-populate fields quite extensively; read more in Atlassian's own documentation.


Essential Jenkins CI

The great enablers of any successful DevOps effort are Continuous Integration (CI) and Continuous Delivery and Deployment (CD). CI is the workflow at the core, while delivery and deployment are extensions on top of that workflow. Described plainly, it is the automation of manual work within software development. I have written about this before.

But in essence it is about letting something neutral build and compile all new and existing code, and run unit and integration tests continuously as new changes are pushed. This can, and should, of course also be done by the developers themselves, who should also create more F.I.R.S.T.-class tests. It would be wrong to say that CI is about distrusting developers to be professionals; it is about creating safeguards against accidents. By failing the build as early as possible when something is wrong, and notifying stakeholders and the developers who committed since the last build, we have a better chance to fix it early.

This also has the side effect of encouraging a workflow where developers rely on a clean, compiling mainline with green tests. Sometimes this leads to a state where the mainline is always releasable or, better yet, always deployed to production. It becomes the point of truth for the current state of the code. This also encourages creating more, and better, tests at all levels of the test pyramid.

The way most teams accomplish this is by using a build or CI server. These days there are quite a few on the market, but the most popular one by far is Jenkins, likely because it is open source under the permissive MIT license, giving a lot of flexibility to a very vibrant plugin community, which likely contributes to its ever-growing popularity. That is why this article will describe the essentials of working with Jenkins.

Meet Jenkins

At its core, Jenkins is not much more than a bunch of cron jobs with a web frontend. It is highly extensible with plugins that affect the system as a whole or that add additional build steps and triggers to a job.


You can make a job run practically any build system for any language and collect the reports and results on the job dashboard. It can be integrated with other platforms such as Sonar to statically analyze code and look for obvious bugs. Jobs can also be made part of the code-review process even before code is merged to a mainline. This flexibility is what makes it powerful.

In the dashboard you can also find the workspace link, which takes you to the folder where the project has been built. This is useful when debugging or setting up your job, as you can inspect the files there to understand what is happening behind the console log.


Every job consists of a general section that allows you to describe the job, provide links to source control, and define any parameters that must be provided when triggering (starting) the job. Parameterized jobs are a great way to provide a simple interface for less technical users in Jenkins. For example, running system tests against different environments using a subset of all test cases, where a choice parameter becomes a dropdown of different test servers to deploy to and a CSV string parameter specifies the test cases to run.
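As a sketch of the consuming side — the parameter names TEST_SERVER and TESTCASES are hypothetical examples; Jenkins exposes job parameters to build steps as environment variables:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of a test runner consuming two hypothetical Jenkins job parameters:
// TEST_SERVER (a choice parameter) and TESTCASES (a CSV string parameter).
public class ParameterExample {

    // Split a CSV parameter such as "LoginTest, CheckoutTest" into test case names,
    // tolerating whitespace around the commas.
    static List<String> parseTestcases(String csv) {
        return Arrays.asList(csv.trim().split("\\s*,\\s*"));
    }

    public static void main(String[] args) {
        // Fall back to defaults when running outside Jenkins.
        String server = System.getenv().getOrDefault("TEST_SERVER", "test-server-1");
        String csv = System.getenv().getOrDefault("TESTCASES", "LoginTest,CheckoutTest");
        System.out.println("Running " + parseTestcases(csv) + " against " + server);
    }
}
```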


Source Code Management

This is the part where you tell Jenkins where to check out code from. By default it supports Git and SVN for source control. It can also support other SCMs, such as Mercurial and CVS, through plugins; why you would ever need those when Git exists is a fair question, but they exist.

Build Triggers

Jobs can be configured to trigger at specific times, after other jobs have executed, manually, or on SCM changes, i.e. when something new has been pushed to a source code repository. There are also triggers driven by webhooks from pull requests (GitHub, BitBucket), merge requests (GitLab), patch sets (Gerrit) and so on.
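For the time-based case, the Schedule field takes a cron-like expression; for example, a nightly weekday build could look like this (the H token lets Jenkins spread the load instead of starting every job at the exact same minute):

```
# Build periodically, Schedule field:
# run once every weeknight around 02:00, at a stable
# but load-balanced minute thanks to the H token
H 2 * * 1-5
```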


Build Environment

Some plugins can inject variables that can be used throughout the job, use secrets, and manage property files. These variables can usually be used in all fields that are configurable in the Jenkins job using the $VARIABLE notation. Jenkins also has a bunch of default environment variables that you can use at your leisure.

Build Steps

These steps are the bread and butter of a job. You can do anything from writing pure sh and bat scripts inline for Jenkins to execute, to running build systems such as Ant, Maven, Gradle, MSBuild or dotnet, to triggering SonarScanner in the OS. If you find yourself having more than five build steps, you should probably consider breaking the job up into several jobs instead of one super-job.
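A minimal sketch of an inline shell build step — the function and variable defaults are examples of my own; JOB_NAME and BUILD_NUMBER are standard Jenkins environment variables, with fallbacks so the script also runs outside Jenkins:

```shell
# Hypothetical inline "Execute shell" build step.
# Fail the build on the first error and treat unset variables as errors.
set -eu

# Build an artifact tag from the job name and build number.
version_tag() {
  printf '%s-%s' "$1" "$2"
}

# Jenkins injects JOB_NAME and BUILD_NUMBER; default them for local runs.
TAG=$(version_tag "${JOB_NAME:-myapp}" "${BUILD_NUMBER:-0}")
echo "Building $TAG"
```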

Post steps

These steps are mostly used to collect test and analysis reports from the workspace to show on the dashboard. They are also used to send notifications, by email with the email notification step or via Slack, or even to call remote APIs to notify code reviews that the build succeeded.

Standard Jobs in most projects

Here is a common set of Jenkins jobs that can usually be configured no matter which stack or systems you are building, ranked in order of priority.

  • CI Build: Compile, test and build artifacts, triggered on SCM changes against the mainline; artifacts are deployed to artifact storage to possibly be deployed later.
  • Sonar CI Build: Static code analysis that triggers daily, weekly, monthly or after a release.
  • Pre-merge Build: Compile and test, triggered as part of code review.
  • Sonar pre-merge Build: Static code analysis that triggers as part of code review and reviews the change.
  • Deploy to test job: Deploy the latest artifact from the CI Build or a specific branch.
  • Release Job: Perform a release, deploy the release to artifact storage and deploy it to production.

Good luck with your Jenkins endeavours adopting CI/CD!

Comment below if you found this useful or if I said something stupid.

Stable Selenium Tests

Let’s face it: Selenium tests are slow, brittle and expensive to maintain. But then again, there is no real replacement for browser-based end-to-end tests. And as long as you keep the system test suite small, isolated and concise, there shouldn’t be much to maintain.

On code testing code

Let’s keep in mind that any good system should include the following types of testing:

  • many unit tests to verify that the code honors its contract; how else can you know?
  • some integration tests for important flows through integrated components, tested functionally
  • few system tests for important end-to-end flows, verifying that the system stands up
  • a little manual testing using the human eye for things that are hard to detect, like usability

For this post I’m going to focus on the few system tests and how to make them as stable, readable and simple as possible. If you have heard of the page object pattern, data tags and Selenium grids, then this post is redundant for you. But if they are news to you, please read on.

These are the three most important patterns to follow if you’re going to write Selenium tests. Here’s what they do and why they help.

Page Object Pattern and data tags

Page objects are single-purpose classes that act as an API for a test and are responsible for resolving WebElements using CSS selectors, ids or classes. This makes for readable tests. By using a tag called data-test that holds a unique identifier that never changes, we can lock the test onto that tag no matter which type of element or which classes it has. This is preferable to relying on XPath, ids or classes, which are subject to change when the page changes. It alleviates tests breaking because someone on the frontend refactors CSS classes, and leaves the breakages that mean something is missing, invisible or immovable.
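As a small illustration (the helper name is my own), the data-test convention translates into a CSS attribute selector string, which in a real page object would be passed to Selenium's By.cssSelector:

```java
// Minimal sketch: build CSS selectors from data-test attributes so tests
// lock onto stable identifiers instead of tag names, ids or CSS classes.
public class Selectors {

    // "[data-test='submit-order']" matches any element tagged
    // data-test="submit-order", regardless of its tag or classes.
    static String byTestId(String testId) {
        return "[data-test='" + testId + "']";
    }

    public static void main(String[] args) {
        // In a real test: driver.findElement(By.cssSelector(byTestId("submit-order")))
        System.out.println(byTestId("submit-order"));
    }
}
```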


This kind of page object contains the page-specific element names and maps them to variables that make sense for the content. The superclass takes care of initializing @FindBy-annotated variables lazily, meaning each one is evaluated upon accessing the element. Something like
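A library-free sketch of that lazy initialization idea — the class and names below are my own, not Selenium's API; with real Selenium, this is roughly what PageFactory's proxies give you for free for @FindBy fields:

```java
import java.util.function.Supplier;

// Library-free sketch of the idea behind lazy @FindBy fields: the element
// is only resolved when first accessed, mirroring what Selenium's
// PageFactory proxies do for page objects.
public class LazyElement<T> {
    private final Supplier<T> lookup;
    private T cached;

    LazyElement(Supplier<T> lookup) {
        this.lookup = lookup;
    }

    // Resolve on first access, then reuse the result.
    T get() {
        if (cached == null) {
            cached = lookup.get();
        }
        return cached;
    }

    public static void main(String[] args) {
        // Hypothetical page object field: nothing is looked up until get().
        LazyElement<String> loginButton =
                new LazyElement<>(() -> "resolved [data-test='login']");
        System.out.println(loginButton.get());
    }
}
```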

A simple grid

A Selenium grid is a server that runs Selenium tests as a service. There are naturally many cloud providers for this use case, but there are some FOSS disruptors in that space as well. I have especially taken a liking to Selenoid, which spawns Docker containers with individual isolated browsers in a headless way that is very suitable for running quick system tests upon a new deploy.

Set up a Selenoid grid locally by using this Vagrantfile if you like; it will speed up and automate the process. I’m going to assume it is available from your localhost moving on.

A Java Project

Include the following dependencies in your pom.xml, build.xml or build.gradle to run selenium-java remotely using JUnit 4. We’re going to stay away from large frameworks and simply run Selenium tests as regular unit tests. Use whatever logging library you like and replace my println; logging is not the focus of this article. 😉
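Assuming Maven, a minimal dependency section could look like this (the version numbers are examples; pick current releases):

```xml
<dependencies>
  <!-- Selenium client including RemoteWebDriver for talking to a grid -->
  <dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>3.141.59</version>
    <scope>test</scope>
  </dependency>
  <!-- JUnit 4 so the Selenium tests run as regular unit tests -->
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```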

Here’s a tiny base class that initializes the page objects and creates the remote WebDriver.

Any JUnit test must extend this class and will, in this case, initialize a remote WebDriver for each test. This can also be used to run as many test methods in parallel as there are concurrent containers in the grid. This is when Selenoid shines the brightest, because it will create a lightweight container for each test method, making for wonderfully idempotent and isolated tests. The TestBase will detect if a test has failed and put a recording of it into the target folder if you’re running Maven.

This results in small, stable and readable tests such as this one.

I hope that helps you run better, faster and more stable Selenium-based tests at all levels!


Sonar Comments on BitBucket Pull-Requests

At work we have a scramble to use static code analyzers to improve the quality of code in general, both from a security perspective and from a standardization perspective. I have worked with Sonar before, but it has almost always been in the background, alone and forgotten by everyone pushing features. Those who know me are aware that I prefer early feedback, preferably pre-merge. I like to think of the patch, pull or merge request as the real guard against flinchy developers like myself, who don’t have time to run the tests or check Sonar for issues that should be fixed while I’m covering that particular code. This article is about resolving that and getting Sonar comments directly on pull requests.


The stack in this case:

  • TeamCity as a build server
  • C# classic as software platform
  • MSBuild as a build system
  • BitBucket Cloud as a source repository

High level design

This is what it looks like from a high level. A pull request in BitBucket triggers a TeamCity job that, in turn, runs the same pull-request builder build process as a regular pre-merge job would, but with a Sonar analysis in preview mode and a specific Sonar plugin that is able to post comments.


Things you should probably do before delving into all the configuration:


  • Create a specific user, named something like Sonar-Reviewer, and add it to your team


Make sure you build the pull-request trigger from the master branch if the latest release is still pullrequest-20172603195632, since it needs the fix in this pull request by yours truly to be able to post the pull-request id to Sonar (mvn package with Maven should create the zip you need).



There aren’t actually that many things to set up for this to work.

Configuration in BitBucket

Configuration in Sonar

  • If analysis is protected, create a system user for TeamCity to log in to Sonar

Configure TeamCity

  • Set the JAVA_HOME variable to where your JRE 8 is on each agent
  • Make sure any proxy the agent needs in order to post is also specified in the SONAR_SCANNER_OPTS environment variable, either as an agent property or as a build parameter. In my case I had to set env.SONAR_SCANNER_OPTS=-Dhttp.proxyHost=myproxy.tld -Dhttp.proxyPort=1234 in the AGENT_HOME/conf/
  • Configure a pull-request trigger to look like this
  • Make sure your VCS root has the following branch specification: +:refs/heads/*
  • Go to parameters

    • Add the following parameters; it’s of course possible to skip the ones you don’t want configurable
    • Make sure you added the OAuth Client Key and Client Secret from your BitBucket user created earlier
  • Go to build steps

    • Add Sonar Analysis Begin step
    • Set a project key, version and branch as you see fit; they must not be empty, but they are not important here either
    • Give the Sonar Analysis Begin step the following (huge) list of Additional Command Line Args

/d:sonar.analysis.mode=preview /d:sonar.bitbucket.repoSlug=YOUR_REPOSITORY /d:sonar.bitbucket.accountName=YOUR_ORGANIZATION_OR_USER /d:sonar.bitbucket.oauthClientKey=%sonar.bitbucket.oauthClientKey% /d:sonar.bitbucket.oauthClientSecret=%sonar.bitbucket.oauthClientSecret% /d:sonar.bitbucket.pullRequestId=%trigger.pullRequestId% /d:sonar.bitbucket.minSeverity=%sonar.bitbucket.minSeverity% /d:sonar.bitbucket.approvalFeatureEnabled=%sonar.bitbucket.approvalFeatureEnabled% /d:sonar.bitbucket.maxSeverityApprovalLevel=%sonar.bitbucket.maxSeverityApprovalLevel% /d:sonar.bitbucket.buildStatusEnabled=%sonar.bitbucket.buildStatusEnabled%

Make sure it corresponds to the parameters you added before. Save the build step.

  • Add an MSBuild step with whatever targets you want; Sonar for MSBuild suggests MSBuild.exe /t:Rebuild
  • Add a Sonar Analysis End step with default settings

That’s it!

At this point you should be able to create a pull request, see the job trigger in TeamCity, and have the Sonar plugin work its magic and post any issues introduced by the PR as comments like this.

I’m especially happy I was able to put this integration in place, seeing as I had no prior C#, Sonar Analysis for MSBuild, or TeamCity experience. But it all gets easier with time, and most integrations look similar and require the same kind of tinkering.