
Continuous Integration as a shared place of truth

  • 2018-04-17
  • CI
  • #Dev

I sometimes get to advocate the practice of Continuous Integration and Delivery as part of a DevOps culture, mostly to developers but sometimes to managers and testers. If you ended up here you most likely have some idea of what it is about, but to make sure even the business people are in on it, let's rehash the components that make up Continuous Integration, Delivery and Deployment, as described by everyone's favourite self-declared loud-mouth on the design of enterprise software, Martin Fowler. The process can be split into these key components.

Continuous Integration

  • Keep a single source code repository
  • Have one long-living mainline and keep the number of branches small
  • Automate a self-contained build towards the mainline
  • Automate tests as part of the build to make it self-testing; avoid end-to-end tests
  • Everyone commits to mainline every day
  • Every commit should build the mainline on a machine separate from the team
  • Broken builds are fixed immediately
  • Keep the Build Fast
  • Test in a clone of the production environment
  • The latest executable or artifact is easy to find and start or deploy
  • Everyone can see what is happening

Continuous Delivery

Adds a final step to the continuous integration cycle that ensures the mainline is deployed to a test system whenever something is merged into it. Alternatively, it is very easy to make an automated deployment from a particular feature branch or released version.

Continuous Deployment

Everything that is merged into mainline eventually goes into production, provided it passes all the tests and quality checks, with all its bells and whistles.

What’s the big deal?

CI/CD is intended not just to speed up development, but to help coordinate it. Having a shared mainline from which all new feature branches are created gives an obvious point of integration and keeps branches short-lived. If you always build what is going into the mainline, and always build what is on the TRUNK or HEAD, then you have an automatic system that builds trust and supports the developers. This allows merging code into the mainline several times a day, because you dare to do it. You dare to do it because of the built-up trust that the system will be unforgiving if you do something stupid. I'd like to suggest that it then acts as a shared truth between all developers that use it.

Shared truth

This shared truth is a powerful tool when it is stable and proven within a team. It allows the team to focus on what is important, features, because it can trust that the build will stop them from doing something stupid that breaks the tests or introduces unnecessary technical debt. But only as long as the build is fast, doesn't produce false positives, and every commit is built. A flaky build, with tests that flip or only work half of the time, undermines the shared truth: if it can no longer be trusted then it is not really truth. Exactly like people, if they sometimes speak the truth and sometimes lie, then you cannot trust them. But if you have an unimpeachable person in your team who you can really trust to always rightfully stop you from doing something stupid, then that will truly empower a team.

Have you heard of our lord and savior Selenoid?

Over the last few weeks, more and more colleagues at work have been asking how to work with Selenium. Now, I'm of the opinion that system tests (which Selenium tests often are) are expensive, fragile, take time and are mostly a waste of that time. This is because people tend to write WAY more system tests than they need and really overdo it.

Unit and integration tests are by far more important before we put all of our test coverage into system tests. With that said, system tests absolutely have their place in a real DevOps environment, as long as you don't see them as a replacement for manual testing, but more as an indication of the health of your deployment. And what better way to get that in place than using the most popular framework for automating the browser: Selenium.


Selenium has bindings in most languages out there, which is really neat. The tricky thing with it is that the framework is decoupled from the drivers that interact with a particular browser. This makes a lot of sense, because it is easier to provide an interface or protocol (Selenese) for the browsers, letting browser vendors bind their own functionality to match it. So Google Chrome, Mozilla Firefox, Opera and IE have their own drivers, and some mobile platforms like Android and iOS also have driver implementations that can speak Selenese for their systems.

The consequence for us as developers is that we need to mix and match multiple browser versions with driver versions and operating systems. This makes Selenium tests tricky to set up in a way that isn't platform dependent, and as a consequence the CI environment will not match the dev environment, making this kind of testing unreliable.

There's light at the end of the tunnel though, as there's another kind of driver, the Remote WebDriver. It decouples the handling of drivers to a server. The driver connects to a service or Selenium grid that advertises a set of capabilities describing which browsers and browser versions it supports. That way the test does not need to consider setting up the infrastructure for running a Selenium test, because that's the concern of the grid.

So, at work I started looking into setting one of these Selenium grids up to help out my colleagues. During that research I stumbled onto this video from the 2017 SeleniumConf in Berlin about a project called Selenoid.

You can watch it by clicking here.

Hello Selenoid

Selenoid is a project that uses Docker images to spin up a standalone environment for each running remote Selenium session. It is written in Go and also comes with a neat, optional UI that displays the currently running sessions, shows their server logs, and lets the user interact with the browser over VNC. It is a disruptor in the same market as Sauce Labs and BrowserStack, making it possible to get the same scalability and flexibility as these expensive platforms, but for free and deployed on your own Docker environment.

Suddenly it has become really easy to spin up a server that takes care of all that is cumbersome with Selenium, namely the problem of matching driver to browser per environment that came with the original Java-based Selenium server. It also uses a quarter of the RAM of the Java version, as claimed in the video.

I really believe Selenoid is the future of running Selenium clusters. I've tried it out for some time now and I'm really blown away. So if you are thinking about running a Selenium server, you should definitely check it out.

Vagrant and Selenoid

I'm not a big fan of having a bunch of Docker images on my workstation, as they usually end up cluttering the machine when I stop using them. So I put sub-projects with Docker or other Linux-based experiments in Vagrant dev environments. Vagrant is a wrapper on top of VirtualBox that makes it really easy to have a completely virtualized environment that can be automatically set up, provisioned and torn down every day. So I wrote a Vagrantfile that downloads an ubuntu/trusty image, provisions Docker, and starts Selenoid and Selenoid UI.

I also threw in an example of how to run remote Selenium tests. You can find it, the Vagrantfile and instructions on how to run it on my GitHub: Selenoid-Vagrant

private WebDriver browser;

public void setUp() throws Exception {
    // The original capability initializer was truncated; Chrome is assumed here.
    DesiredCapabilities abilities = new DesiredCapabilities();
    abilities.setCapability("browserName", "chrome");
    abilities.setCapability("version", "61.0");
    this.browser = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), abilities);
    this.browser.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
    this.browser.manage().window().setPosition(new Point(220, 10));
    this.browser.manage().window().setSize(new Dimension(1000, 650));
}

public void testSimple() throws Exception {
    this.browser.get("https://www.google.com"); // navigate before asserting the title
    assertEquals("Google", this.browser.getTitle());
}

public void tearDown() throws Exception {
    this.browser.quit(); // end the remote session so the grid can reclaim it
}
On Code Review

“I'm a professional, I will code as I wish.”

I like code review; it's generally a great idea to incorporate it early whenever code is being written to create business value. Mostly because it is cheaper to fix bugs early, before they break something in production and start sending nagging users your way. Having a few extra critical eyes on your code can stop many of those embarrassments. But it not only finds bugs, it also spreads knowledge within the team, increases the bus factor, and gradually increases the competence of the developers in that team. This has been talked and written about lots of times before, so don't take my word for it. Many companies, SmartBear and Atlassian for example, have their own motivational articles about why it should be used. Even Coding Horror has written a piece about it. The effect of good code review is not unheard of!

So why another piece on Code Review?

And yet I still encounter people who do not see the point! Code review is sometimes seen as a cumbersome process that only steals time and has no return on investment, essentially slowing down feature development. Hold your horses, slowing down? Yes, slowing down feature development, because of ${BOGUS_REASON}. I have heard many of them; if it is not this, then it is that. It becomes reasonably clear that the attitude towards code review is clouded by team stress, management, or the idea that manual labour is okay just this once.

The Great List of Bullshit

I think we should change every poor attitude we meet about it. So I have collected some of the arguments I have heard against code review.

“I'm the only senior developer so nobody has any real input for me”

First of all, the only reason you are a senior developer is that educators, lecturers, professors and developers senior to you have taught and coached you into becoming the grand developer you are today. So it's time to pass the baton: use code review as a means to get juniors on par with your level of expertise, oh chosen one. Secondly, everyone can have some input on any code; you are not an omnipotent deity. Perhaps they will come in with a perspective that is more modern, or unfamiliar to you. Learn to appreciate feedback; the worst thing that can happen is that your code improves.

“I know what I'm doing, I don't need code review, that's for junior devs”

Then show off your great examples of how it should be done, by letting people review your code and understand your greatness.

“My code doesn't have bugs, I have a great workflow”

That's simply not true; nobody writes flawless code all the time. But if you do, great, why aren't you famous? Most of us aren't awesome elite developers, and the sooner we realize that, the sooner we can use a workflow that allows feedback and constructive criticism of our work.

“It is expensive, we shouldn't spend time on it”

First of all, do you realize how expensive it is to have untrained junior developers writing code without supervision? Have a look at raxos502's Terraria clone and find out just how horrible code can become in the hands of someone new to software development.

“It takes time to build and verify that the code actually fixed the thing/built the feature”

Use proper Continuous Integration on patches going into your code, and review them only when static code analysis, unit and integration tests have gone green. It should never be up to the reviewer to acceptance-test the code; that is what testers and product owners are for.

“It doesn't replace testing”

Of course it doesn't. Testing is another beast to tame, and code review doesn't replace it or even strive to solve the problems that testing solves. If someone in the business tries to claim that it does, they shouldn't be allowed to make decisions about it.

“I don't like it”

You don't have to. But code review is about reading code, which is what you do all day. So if you don't like it, why are you in this industry?

Why code review is awesome

  • School new developers in the quirks of a project. And let's face it, every project has them.
  • Coach: code review is likely the best way to coach junior developers into their new role. Investing in them early is probably the best way to get them to deliver value to the team.
  • Spread the knowledge: by having multiple people review your code, you make more and more people aware of how the project grows and in what direction it expands.
  • Enable remote work, when you have no idea of the remote developer's skill set, or when you are working with offshore or open source projects.

How to make it so

Informality is quintessential when it comes to code review; anyone should be able to do it and give their seal of approval. Gerrit has something called +1 and +2, and it is awful in the sense that it enforces a divide between lead/senior and junior developers and gives a select few the ability to approve all the code. In my experience this has never worked out well. Use Continuous Integration and a workflow resembling Git Flow. Connect as much as you can to the code-review platform, which should be close to the repository. I personally like how GitHub, Gerrit, GitLab and Bitbucket do it: they make the pull request or patch trigger webhooks that in turn trigger Jenkins or some other CI server. With that connection in place the possibilities are endless. Take some time and let the team get the full power of CI in addition to the code review. Break the culture of single omnipotent developers: it never lasts, because they eventually leave, and when they do you can be sure that the product they had their poisonous hand over is dead in the water.

Tutorial: Flow SoapUI Plugin

What’s this?

This is a plugin I decided to write this week for SoapUI, and consequently Ready! API, both SmartBear products. Both have a very nice plugin architecture these days, very eloquently described here by Ole, the original SoapUI developer.

Personally I'm quite familiar with the codebase, having worked at SmartBear for a time a few years ago, but I've never really had the time or an idea worth the effort of trying it out before. At my newest assignment there are a few SOAP services with the characteristic that they are REALLY unstable. Every second request fails at some point, making the test cases fail intermittently in SoapUI. We're in the process of automating a number of test cases using the soapui-maven-plugin. So what the team has done is wrap the really unstable test steps with Groovy scripts and conditional gotos to reset and run the test step again. Essentially we ended up with three test steps just to run one unstable step.

I know what you are thinking: why in the world would you want a test step that fails every other time? And I agree with you. If I were in charge I would dig into what is making it fail and fix it. The problem here is that the service depends on a third-party service that we don't control and that the owner is too scared to replace or modify, and we really need the data from that service. But we can't have four times as many test steps just because of this problem either; that would get hard for us to maintain. So let's write a plugin that repeats a number of steps if they have failed, for a few attempts! And here it is!


First follow the guidelines on GitHub that explain where to put the files and how to jailbreak the plugin if you need that; then here's a guide on how to use the plugin.


Okay, so the plugin is installed and you are ready for testing, now what? SoapUI should have picked up the plugin automatically and you can find it in your list of test steps.

Just add the test-step and make sure to point out which test-step you want to iterate from.

If all assertions and test steps are successful, the repeat test step simply succeeds and the run continues to the next step. If not, it will begin another iteration, until it has iterated as many times as specified in the max attempts dropdown.
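Stripped of all the SoapUI machinery, the control flow described above is just a bounded retry loop. Here is a minimal, self-contained sketch of that idea; the class and method names are mine for illustration, not the plugin's actual API:

```java
import java.util.concurrent.Callable;

public class Repeat {
    // Run the step; on failure, start another attempt until maxAttempts is reached.
    public static <T> T attempt(int maxAttempts, Callable<T> step) throws Exception {
        Exception last = null;
        for (int i = 1; i <= maxAttempts; i++) {
            try {
                return step.call(); // success on any attempt ends the loop
            } catch (Exception e) {
                last = e; // remember the failure and try again
            }
        }
        throw last; // every attempt failed
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulates an /everyThirdTime-style endpoint: fails twice, succeeds on the third call.
        String result = attempt(3, () -> {
            calls[0]++;
            if (calls[0] < 3) throw new IllegalStateException("503 Service Unavailable");
            return "200 OK";
        });
        System.out.println(result + " after " + calls[0] + " attempts"); // 200 OK after 3 attempts
    }
}
```

The key property, mirrored by the plugin, is that earlier failures are forgotten as soon as one attempt succeeds; only when the attempt budget is exhausted does the overall step fail.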

When the repeat test step is configured you can give it a go by running the test case. Here I use the sample project which is provided with the plugin source code. It's very simple: it tests by sending GET requests to a mock service I set up with two resource URLs, /everyThirdTime and /everyTime. It's fairly obvious how often they return a 200 OK HTTP response, right? Great. The mock service is started and stopped in the setupScript and teardownScript of the test case.

So the SoapUI test case consists of three test steps: works_every_time, works_every_third_time and Repeat Test Step, which is configured to repeat two times. The unstable test step is the second one, and remember that I configured the mock service to respond 200 OK to the third request, so two attempts should be enough to make the test pass. Which it does, very reproducibly. So much so that I included it as a regression test for the plugin in Travis CI.

The test runner will show the historic failed results in the log, but the test case will be successful as long as at least one iteration succeeds. If it is successful on the first iteration, the repeat test step will not trigger any iterations at all.

Plugin Development

I won't go into detail about the development itself; that's a topic for another post, which I might write someday. It was a very nice experience, to be honest: the plugin architecture of SoapUI these days is formidable and wraps things like the XMLBeans configuration nicely by parsing annotations instead, so we don't have to think about it as plugin developers. I more or less followed Ole's tutorial until I had the test step reachable inside SoapUI. From there it was just a matter of manipulating the components in the test-step plugin's run method and using the API, which already had support for manipulating the control flow around the test steps. The tricky part was to get it to un-fail the test case when a previous iteration had failed test steps but a following iteration had passed. I solved that by removing the test results of the previous iterations' test steps, and upon success of an iteration I manipulate the testCaseRunContext property, see below.

if (allRelevantTestsPassed) {
    testCaseRunContext.setProperty(TestCaseRunner.Status.class.getName(), TestCaseRunner.Status.RUNNING);
}

This was actually quite hidden, as there is no method on the testCaseRunContext to set the status of the test run. But since SoapUI is fairly open source, I could just check out the source and look at how it handles the status internally so I could do the same myself. As I discovered, the status is a property.

That's it! I hope you'll find the plugin useful. Do give me a shout-out if you find any bugs, and if you have any feature requests for other test steps or SoapUI components, go to GitHub and report them as issues on the project.