

Stable Selenium Tests

Let’s face it: Selenium tests are slow, brittle and expensive to maintain. But then again, there is no real replacement for browser-based end-to-end tests. And as long as you keep the system test suite small, isolated and concise, there shouldn’t be much to maintain.

On code testing code

Let’s keep in mind that any good system should include the following types of testing:

  • many unit tests to verify that the code honors its contract; how else can you know?
  • some integration tests for important flows through integrated components
  • a few system tests for important end-to-end flows; these verify that the system stands up
  • a little manual testing, using the human eye for things that are hard to detect automatically, like usability

For this post I’m going to focus on the few system tests and how to make them as stable, readable and simple as possible. If you have heard of the page object pattern, data-tags and Selenium grids then this post is redundant for you. But if it is news to you, please read on.

These are the three most important patterns to follow if you’re going to write Selenium tests. Here’s what they do and why they help.

Page Object Pattern and data tags

A page object is a single-purpose class that acts as an API for a test and is responsible for resolving WebElements using CSS selectors, ids or classes. This makes for readable tests. By using a tag called data-test that holds a unique identifier that never changes, we can lock the test onto that tag no matter which type of element it is or which classes it has. This is preferable to relying on XPath, ids or classes, which are subject to change when the page changes. It prevents tests from breaking because someone on the frontend refactors CSS classes, and leaves only the breakages that mean something is missing, invisible or immovable.
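
For example, a button carrying such a tag could look like this (a hypothetical snippet; the attribute value is made up):

<button class="btn btn-primary" data-test="login-button">Log in</button>

A CSS selector like [data-test='login-button'] will keep matching no matter how the classes change.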

Example:

This kind of page object contains the page-specific element names and maps them to variables that make sense for the content. The superclass takes care of initializing the @FindBy-annotated variables lazily, meaning each one is resolved upon accessing the element. Something like the sketch below.
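
Here is a minimal sketch of what that could look like; the LoginPage, its fields and the data-test values are all hypothetical:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

// Hypothetical superclass: wires the @FindBy fields as lazy proxies,
// resolved on first access rather than at construction time.
abstract class PageObject {
    protected PageObject(WebDriver driver) {
        PageFactory.initElements(driver, this);
    }
}

// Hypothetical page object for a login page.
public class LoginPage extends PageObject {

    // Locked onto the stable data-test attributes instead of CSS classes.
    @FindBy(css = "[data-test='username']")
    private WebElement username;

    @FindBy(css = "[data-test='password']")
    private WebElement password;

    @FindBy(css = "[data-test='login-button']")
    private WebElement loginButton;

    public LoginPage(WebDriver driver) {
        super(driver);
    }

    public void loginAs(String user, String pass) {
        username.sendKeys(user);
        password.sendKeys(pass);
        loginButton.click();
    }
}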

A simple grid

A Selenium grid is a server that runs Selenium sessions as a service. There are naturally many cloud providers for this use case, but there are some FOSS disruptors in that space as well. I have especially taken a liking to Selenoid, which spawns Docker containers holding individual, isolated, headless browsers, which is very suitable for running quick system tests upon a new deploy.

Set up a Selenoid grid locally using this Vagrantfile if you like; it will speed up and automate the process. I’m going to assume it is available on your localhost from here on.

A Java Project

Include the following dependencies in your pom.xml, build.xml or build.gradle to run selenium-java remotely using JUnit 4. We’re going to stay away from large frameworks and simply run Selenium tests as regular unit tests. Use whatever logging library you like and replace my println; logging is not the focus of this article. 😉
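
For Maven, a minimal sketch of the dependencies could look like this; the versions are illustrative, pick current ones:

<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>3.11.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>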

Here’s a tiny SeleniumTestBase.java that initializes the page objects and creates the remote webdriver.
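
A minimal sketch of it, assuming the Selenoid grid from before on localhost and the hypothetical LoginPage page object; the failure-recording part described below is omitted here:

import java.net.URL;
import java.util.concurrent.TimeUnit;

import org.junit.After;
import org.junit.Before;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public abstract class SeleniumTestBase {

    protected WebDriver browser;
    protected LoginPage loginPage; // hypothetical page object from earlier

    @Before
    public void setUp() throws Exception {
        // One fresh remote browser per test method keeps tests isolated.
        this.browser = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"),
                DesiredCapabilities.chrome());
        this.browser.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        this.loginPage = new LoginPage(browser);
    }

    @After
    public void tearDown() {
        // Quit to release the browser container back to the grid.
        browser.quit();
    }
}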

Any JUnit test must extend this class and will get a fresh remote webdriver per test method in this case. This can also be used to run as many test methods in parallel as there are concurrent containers in the grid. This is when Selenoid shines the brightest, because it will create a lightweight container for each test method, making for wonderfully idempotent and isolated tests. The TestBase will detect whether a test has failed and put a recording of it into the target folder if you’re running Maven.
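
To exploit that parallelism with plain JUnit, a maven-surefire-plugin configuration along these lines could work; this is a sketch, and the thread count is an assumption that should match the grid’s concurrency limit:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.21.0</version>
    <configuration>
        <!-- Run test methods in parallel, one grid container per method -->
        <parallel>methods</parallel>
        <threadCount>4</threadCount>
    </configuration>
</plugin>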

This results in small, stable and readable tests, such as this one.
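
A hypothetical example built on the base class and page object above; the URL and expected title are made up:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LoginTest extends SeleniumTestBase {

    @Test
    public void userCanLogIn() {
        browser.get("http://localhost:8080/login"); // assumed application URL
        loginPage.loginAs("demo", "secret");
        assertEquals("Dashboard", browser.getTitle());
    }
}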

I hope that helps you run better, faster and more stable Selenium-based tests at all levels!





Sonar Comments on BitBucket Pull-Requests

At work we have a scramble to use static code analyzers to improve the quality of code in general, both from a security perspective and from a standardization perspective. I have worked with Sonar before, but it has almost always been in the background, alone and forgotten by everyone who is pushing features. Now, those who know me are aware that I prefer early feedback, preferably pre-merge. I like to think of the patch, pull or merge request as the real guard against flinchy developers like myself who don’t have time to run the tests, or to check Sonar for issues that should be fixed while I’m covering that particular code. This article is about resolving that and getting Sonar comments directly on pull requests.

Requirements

  • TeamCity as a build server
  • C# classic as software platform
  • MSBuild as a build system
  • BitBucket Cloud for a source repository

High level design

This is what it looks like from a high level. A pull request in BitBucket triggers a TeamCity job that, in turn, runs the same build process as a regular pre-merge job, but with a Sonar analysis in preview mode and a specific Sonar plugin that is able to post comments.

Prerequisites

Things you should probably do before delving into all the configuration.

BitBucket

  • A specific user that can be named Sonar-Reviewer and added to your team

TeamCity

Make sure you build the pull-request trigger plugin from the master branch if the latest release is still pullrequest-20172603195632, since it needs the fix in this PullRequest by yours truly to be able to post the pull-request id to Sonar. (Running mvn package should create the zip you need.)

SonarQube

Configuration

There aren’t that many things to set up for this to work, actually.

Configuration in BitBucket

  • Create an OAuth consumer for the Sonar-Reviewer user and note its client key and secret; TeamCity needs them later

Configuration in Sonar

  • If analysis is protected, create a system user for TeamCity to log in to Sonar

Configure TeamCity

  • Set the JAVA_HOME variable to where your JRE 8 is for each agent
  • Make sure any proxies the agent should use to post to api.bitbucket.org are also specified in the SONAR_SCANNER_OPTS environment variable, either as an agent property or as a build parameter. In my case I had to set env.SONAR_SCANNER_OPTS=-Dhttp.proxyHost=myproxy.tld -Dhttp.proxyPort=1234 in AGENT_HOME/conf/buildAgent.properties.
  • Configure a pull-request trigger to look like this
  • Make sure your VCS root has the following branch specification: +:refs/heads/*
  • Go to parameters

    • Add the following parameters; it’s of course possible to skip the ones you don’t want configurable
    • Make sure you added the OAuth Client Key and Client Secret from your BitBucket user created earlier
  • Go to build steps

    • Add Sonar Analysis Begin step
    • Set a project key, version and branch as you see fit; they may not be empty, but they are not important for this either
    • Give the Sonar Analysis Begin step the following huge list of Additional Command Line Args

/d:sonar.analysis.mode=preview /d:sonar.bitbucket.repoSlug=YOUR_REPOSITORY /d:sonar.bitbucket.accountName=YOUR_ORGANIZATION_OR_USER /d:sonar.bitbucket.oauthClientKey=%sonar.bitbucket.oauthClientKey% /d:sonar.bitbucket.oauthClientSecret=%sonar.bitbucket.oauthClientSecret% /d:sonar.bitbucket.pullRequestId=%trigger.pullRequestId% /d:sonar.bitbucket.minSeverity=%sonar.bitbucket.minSeverity% /d:sonar.bitbucket.approvalFeatureEnabled=%sonar.bitbucket.approvalFeatureEnabled% /d:sonar.bitbucket.maxSeverityApprovalLevel=%sonar.bitbucket.maxSeverityApprovalLevel% /d:sonar.bitbucket.buildStatusEnabled=%sonar.bitbucket.buildStatusEnabled%

Make sure it corresponds to the parameters you added before. Save the build step.

  • Add a MSBuild step with whatever targets you want. Sonar for MSBuild suggests MSBuild.exe /t:Rebuild
  • Add a Sonar Analysis End step with default settings

That’s it!

At this point you should be able to create a pull request, see the job trigger in TeamCity, and have the Sonar plugin work its magic and post any issues introduced by the PR as comments, like this.

I’m especially happy I was able to put this integration in place, seeing as I had no prior C#, Sonar Analysis for MSBuild or TeamCity experience. But it all gets easier with time, and most integrations look similar and require the same kind of tinkering.




Continuous Integration as a shared place of truth


I sometimes get to advocate the practice of Continuous Integration and Delivery as part of a DevOps culture, mostly to developers but sometimes to managers and testers. If you ended up here then you most likely have some idea of what it is about, but to make sure even the business people are in on it, let’s rehash the components that make up Continuous Integration, Delivery and Deployment, as can be read from the words of everyone’s favourite self-declared loud-mouth on the design of enterprise software, Martin Fowler. The process can be split into these key components.

Continuous Integration

  • Keep a single source code repository
  • Have one long-living mainline and keep branches few and short-lived
  • Automate a self-contained build towards the mainline
  • Automate tests as part of the build, and thus make it self-testing; avoid end-to-end tests
  • Everyone commits to mainline every day
  • Every commit should build the mainline on a machine separate from the team
  • Broken builds are fixed immediately
  • Keep the Build Fast
  • Test in a clone of the production environment
  • The latest executable or artifact is easy to find and start or deploy
  • Everyone can see what is happening

Continuous Delivery

Adds a final step to the continuous integration cycle that makes sure the mainline is deployed to a test system whenever something is merged into it. Alternatively, it is very easy to make an automated deployment from a particular feature branch or released version.

Continuous Deployment

Everything that is merged into mainline eventually goes into production if it passes all the tests and quality checks, with all their bells and whistles.

What’s the big deal?

CI/CD is intended not just to speed up development, but to help coordinate it. Having a shared mainline from which all new feature branches are created allows for an obvious point of integration and short-lived branches. If you always build what is going into the mainline and always build what is on TRUNK or HEAD, then you have an automatic system that builds trust and supports the developers. This allows merging code into the mainline several times a day, because you dare to do it. You dare to do it because there is a built-up trust that the system will be unforgiving if you do something stupid. I’d like to suggest that it then acts as a shared truth between all developers that use it.

Shared truth

This shared truth is a powerful tool when it is stable and proven within a team. It allows the team to focus on what is important, features, because it can trust that the build will stop them from doing something stupid that breaks the tests or introduces unnecessary technical debt. But only as long as the build is fast, doesn’t produce false positives, and every commit is built. A flaky build, with tests that toggle or work half of the time, undermines the shared truth, because if it can no longer be trusted then it is not really truth. Exactly like people: if they sometimes speak the truth and sometimes lie, then you cannot trust them. But if you have an unimpeachable person in your team whom you can really trust to always rightfully stop you from doing something stupid, then that will truly empower a team.




Have GitHub or BitBucket drive JIRA Workflows

At work we have all the infrastructure you can imagine. Today I’m going to talk about three pieces of it.

  • Atlassian JIRA Cloud
  • GitHub Enterprise
  • Atlassian BitBucket Cloud

I was approached by a team that requested we look into having BitBucket (in their case) drive the workflows of JIRA. If you are unfamiliar with JIRA, it is a huge issue tracker that you can customize beyond recognition. It is powerful, but as a consequence quite complex. In this case the workflow consists of five states: TODO, IN PROGRESS, REVIEW, TEST and DONE.

The requirement was as follows:

TODO => IN PROGRESS

  • Given a JIRA issue is in TODO
  • When a branch is created, from JIRA or directly in Git, that contains a JIRA issue token
  • Then the JIRA issue is moved from TODO to IN PROGRESS

IN PROGRESS => REVIEW

  • Given a JIRA issue is in IN PROGRESS
  • When a Pull Request is created on BitBucket or GitHub that contains a JIRA issue token
  • Then the JIRA issue is moved from IN PROGRESS to REVIEW

REVIEW => IN PROGRESS

  • Given a JIRA issue is in REVIEW
  • When a Pull Request is declined on BitBucket or GitHub that contains a JIRA issue token
  • Then the JIRA issue is moved from REVIEW to IN PROGRESS

REVIEW => TEST

  • Given a JIRA issue is in REVIEW
  • When a Pull Request is approved on BitBucket or GitHub that contains a JIRA issue token
  • Then the JIRA issue is moved from REVIEW to TEST
  • And the issue assignment is set to unassigned

At first I thought that having such triggers was rather excessive. What I didn’t see is that this is exactly how the team already worked, in BitBucket; JIRA was just additional overhead to them. Sure, they created the branch from their issue, but after that they didn’t seem to want to bother with JIRA more than to move the cards just before their SCRUM standups. So it made all the sense in the world to add it. And it gave me a reason to have a closer look at our JIRA integration with GitHub Enterprise, since it didn’t seem to parse any issues from it.

So here’s how I made it work, in case someone else out there has a similar use case.

Steps to achieve it

  1. Become a JIRA admin, because modifying workflows requires administrator permissions (luckily I have a close colleague who is the JIRA admin)
  2. Set up the integration to BitBucket and GitHub Enterprise from your JIRA
  3. Go to JIRA, Select Settings
  4. Select Projects
  5. Select the name of the project you want to add source triggers for
  6. Select Workflows
  7. Edit the workflow that is in use, as a list or diagram; the diagram format is easiest
  8. Drag transition lines from each workflow state according to the requirements and name them accordingly. I get five of them, one for each requirement. We should end up with something resembling the image below.
  9. Select a transition, a pop-up should appear describing triggers, conditions and post-functions of the transition. Click triggers (0) to edit.
  10. Select the relevant source trigger for that transition
  11. Repeat steps 9 and 10 until all transitions have a source trigger.
  12. For the last requirement, REVIEW => TEST, we should add a post function that sets the JIRA issue’s assignee field to Unassigned; it is added in the same way as source triggers, but instead select post functions.
  13. Publish the workflow in order for the triggers to become activated

And you are all done!

You can check whether the triggers work by creating branches, commits and pull requests that contain the JIRA issue token, and watching the activity tab on the JIRA issue to see that the source trigger activated!




Have you heard of our lord and savior Selenoid?

The last few weeks, more and more colleagues at work have been asking how to work with Selenium. Now, I’m of the opinion that system tests (which Selenium tests often are) are expensive, fragile, take time and are mostly a waste of that time. This is because people tend to write WAY more system tests than they need and really overdo it.

Unit and integration tests are by far more important; we shouldn’t put all of our test coverage into system tests. With that said, system tests absolutely have their place in a real DevOps environment, as long as you don’t see them as a replacement for manual testing, but more as an indication of the health of your deployment. And what better way to get that in place than using the most popular framework for automating the browser: Selenium.

Selenium

Selenium has bindings in most languages out there, which is really neat. The tricky thing with it is that the framework is decoupled from the drivers that interact with a particular browser. This makes a lot of sense, because it is easier to provide an interface or protocol (selenese) for the browsers so that browser vendors can bind their own functionality to match it. So Google Chrome, Mozilla Firefox, Opera and IE have their own drivers, and some mobile platforms like Android and iOS also have driver implementations that can speak selenese for their systems.

The consequence for us as developers is that we need to mix and match multiple browser versions with driver versions and operating systems. This makes Selenium tests tricky to set up in a way that isn’t platform-dependent, and as a consequence the CI environment will not match the dev environment, making this kind of testing unreliable.

There’s light at the end of the tunnel though, as there’s another kind of driver: the Remote WebDriver. What this does is delegate the handling of drivers to a server. The driver connects to a service, or Selenium grid, that advertises a set of capabilities for which browsers and browser versions it supports. That way the test does not need to consider setting up the infrastructure for running a Selenium test, because that’s the concern of the grid.

So, at work I started looking into setting one of these Selenium grids up to help out my colleagues. During that research I stumbled onto this video from the 2017 SeleniumConf in Berlin about a project called Selenoid.

You can watch it by clicking here.

Hello Selenoid

Selenoid is a project that utilizes Docker images to spin up standalone environments for each running remote Selenium session. It is written in Go and also comes with a neat, optional, UI. The UI is able to display the currently running sessions with their server logs, and allows the user to interact with the browser over VNC. This is a disruptor in the same market as Saucelabs and Browserstack, and makes it possible to get the same scalability and flexibility as these expensive platforms, but for free and deployed on your own Docker environment.
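
Getting a bare-bones Selenoid going can be as simple as something like this; a sketch that assumes Docker is installed and that a browsers.json configuration exists in $HOME/selenoid/config:

# Mounting the Docker socket lets Selenoid spawn one browser container per session
docker run -d --name selenoid -p 4444:4444 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $HOME/selenoid/config:/etc/selenoid:ro \
    aerokube/selenoid:latest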

Suddenly it has become really easy to spin up a server that takes care of all that is cumbersome with Selenium, meaning the matching-driver-to-browser-per-environment problem that came with the original Java-based Selenium server. It also uses a fourth of the RAM of the Java version, as claimed in the video.

I really believe Selenoid is the future of running Selenium clusters. I’ve tried it out for some time now and I’m really blown away. So if you are thinking about running a Selenium server then you should definitely check it out.

Vagrant and Selenoid

I’m not a big fan of having a bunch of Docker images on my workstation, as they usually end up cluttering the machine when I stop using them. So I put sub-projects with Docker or other Linux-based experiments in Vagrant dev environments. Vagrant is a wrapper on top of VirtualBox and makes it really easy to have a completely virtualized environment that can be automatically set up, provisioned and torn down every day. So I wrote a Vagrantfile that downloads an ubuntu/trusty image and provisions Docker, starts Selenoid and starts Selenoid UI.

I also threw in an example of how to run remote Selenium tests. You can find it, the Vagrantfile and how to run it on my GitHub: Selenoid-Vagrant

private WebDriver browser;

@Before
public void setUp() throws Exception {
    // Ask the grid for a Chrome 61 session; the version must match a browser the grid provides.
    DesiredCapabilities abilities = DesiredCapabilities.chrome();
    abilities.setCapability("version", "61.0");
    // Connect to the remote hub instead of spawning a local driver.
    this.browser = new RemoteWebDriver( new URL("http://localhost:4444/wd/hub"), abilities);
    this.browser.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
    this.browser.manage().window().setPosition(new Point(220, 10));
    this.browser.manage().window().setSize(new Dimension(1000,650));
}

@Test
public void testSimple() throws Exception {
    this.browser.get("http://www.google.com");
    assertEquals("Google", this.browser.getTitle());
}

@After
public void tearDown() throws Exception {
    // Always quit, so the session's container is released back to the grid.
    this.browser.quit();
}



Just headless system things

I have a RHEL 6.4 Jenkins master that has had problems starting an in-build selenium-server (a Selenium server started by the build itself) using Maven. The relevant part of the pom looks something like the sketch below, and the error I’ve been tracking for some time looks something like the log that follows it.
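
A minimal sketch of that pom section, reconstructed under assumptions: only the processLogFile configuration (which turns up again further down) is taken from the real setup, while the plugin version and phase are illustrative.

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>selenium-maven-plugin</artifactId>
    <version>2.3</version>
    <executions>
        <execution>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>start-server</goal>
            </goals>
            <configuration>
                <background>true</background>
                <!-- Server output is piped away to this file -->
                <processLogFile>${project.build.directory}/selenium-server/server.log</processLogFile>
            </configuration>
        </execution>
    </executions>
</plugin>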

[11:43:57] I/launcher - Running 1 instances of WebDriver
[11:43:57] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
[11:44:17] E/launcher - null
[11:44:17] E/launcher - WebDriverError: null
at Object.checkLegacyResponse (/full/path/to/protractor/node_modules/selenium-webdriver/lib/error.js:505:15)

VERY descriptive, right?

At first I thought this had something to do with not being able to access the port on localhost; turns out it wasn’t that.

At second glance I thought it had something to do with the webdriver being faulty, so I tried providing something random as a webdriver to see if I got the same problem. Then I got this.

[14:02:34] I/launcher - Running 1 instances of WebDriver
[14:02:34] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
[14:02:35] E/launcher - The driver executable does not exist: C:\bullshit
[14:02:35] E/launcher - WebDriverError: The driver executable does not exist: C:\bullshit

Which is far more descriptive, to be honest. But then I remembered that I had piped away the logging from the server somewhere.

<processLogFile>${project.build.directory}/selenium-server/server.log</processLogFile>

Aha!

And that’s when i found this lovely line in the logs

Creating a new session for Capabilities [{count=1, browserName=chrome, chromeOptions={args=[--headless, --disable-gpu, --window-size=800,600]}, version=, platform=ANY}]
chromedriver: error while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory

Now this I can work with. This is very common with CentOS, RHEL and Ubuntu servers: since they have no X window system, all libraries that only create value when displaying stuff on a screen are left out. And apparently chromedriver requires X11. That feels kind of stupid, as my capabilities explicitly specify running in headless mode; the sane thing would be that this kind of system dependency isn’t required. But apparently it is.

Oh well, just get on with installing the X11 stuff on your system. This is how you fix it on RHEL/CentOS:

yum install libX11

But still it didn’t work!

In the end, it turned out I hadn’t even installed google-chrome-stable in the first place. So I did that, and it worked. However, then I got even worse problems: first libpulse wasn’t installed, and was apparently a requirement for headless environments (?!).

yum install pulseaudio-libs

This eventually led to core dumps when trying to make a simple GET towards any site and dump the DOM.

[jenkins@my-worker01~]$ /opt/google/chrome/google-chrome --headless --disable-gpu --dump-dom --enable-crash-reporter --enable-logging --v=99 --no-sandbox http://dl.google.com
[1109/092534.785845:VERBOSE1:zygote_main_linux.cc(539)] ZygoteMain: initializing 0 fork delegates
[1109/092534.786417:INFO:cpu_info.cc(50)] Available number of cores: 4
shared memfd open() failed: Invalid argument
[1109/092534.794703:VERBOSE1:pulse_util.cc(138)] Failed to connect to the context. Error: Connection refused
[1109/092534.795643:VERBOSE1:webrtc_internals.cc(109)] Could not get the download directory.
[1109/092534.798643:VERBOSE1:breakpad_linux.cc(2007)] Non Browser crash dumping enabled for: renderer
Failed to generate minidump.Illegal instruction (core dumped)
[0100/000000.826404:ERROR:broker_posix.cc(43)] Invalid node channel message
[jenkins@my-worker01 ~]$

I found at least one other person with the same problem, on the Google Developers Blog post where they presented Headless Chrome, and that was also on a Red Hat Enterprise Linux system. This is where I came to a complete halt and decided to file a bug report with the Chromium project. You can view it below if you want.

https://bugs.chromium.org/p/chromium/issues/detail?id=782161

I’m creating a Selenium cluster with a Windows server instead; at this point it seems easier than configuring headless Chrome for RHEL.




On Code Review

I’m a professional, I will code as I wish

I like code review; it’s generally a great idea to have it incorporated early whenever code is being written to create business value, mostly because it is cheaper to fix bugs early, before they break something in production and start sending nagging users your way. By having a few extra critical eyes on your code you can stop many of those embarrassments. But it not only finds bugs, it also spreads knowledge within the team, increases the bus factor, and gradually increases the competence of the developers in that team. This has been talked and written about lots and lots of times before, so don’t take my word for it. Many companies, SmartBear and Atlassian for example, have their own motivational articles about why it should be used, and even Coding Horror has written a piece about it. The effect of good code review is not unheard of!

So why another piece on Code Review?

Because I still encounter people who do not see the point! Code review is sometimes seen as a cumbersome process that only steals time and has no return on investment, essentially slowing down feature development. Hold your horses, slowing down? Yes, slowing down feature development, because of ${BOGUS_REASON}. I have heard many of them; if it is not this then it is that. It is reasonably clear that the attitude towards code review is being clouded by team stress, management, and the idea that manual labour is okay just this once.

The Great List of Bullshit

I think we should change every poor attitude we meet about it, so I have collected some of the arguments I have heard against code review.

I’m the only senior developer so nobody has any real input for me

First of all, the only reason you are a senior developer is that educators, lecturers, professors and developers senior to you have taught and coached you into becoming the grand developer you are today. So it’s time to pass the baton: use code review as a means to get the juniors on par with your level of expertise, oh chosen one. Secondly, everyone can have some input on any code; you are not an omnipotent deity. Perhaps they will come in with a perspective that is more modern than, or unfamiliar to, yours. Learn to appreciate feedback; the worst thing that can happen is that your code improves.

I know what I’m doing, I don’t need code review, that’s for junior devs

Then show off your great examples of how it should be done by letting people review your code and understand your greatness.

My code doesn’t have bugs, I have a great workflow

That’s simply not true; nobody writes flawless code all the time. But if you do, great, why aren’t you famous? Because most of us aren’t awesome elite developers, and the sooner we realize that, the sooner we can use a workflow that allows feedback and constructive criticism on our work.

It is expensive, we shouldn’t spend time on it

First of all, do you realize how expensive it is to have untrained junior developers writing code without supervision? Have a look at raxos502’s Terraria clone and find out just how horrible code can become in the hands of someone new to software development.

It takes time to build and verify that the code actually fixed the thing/built the feature

Use proper Continuous Integration on patches going into your code, and review only when static code analysis, unit and integration tests have gone green. It should never be up to the reviewer to acceptance-test the code; that is what testers and product owners are for.

It doesn’t replace testing

Of course it doesn’t. Testing is another beast to tame, and code review doesn’t replace it or even strive to solve the problems that testing solves. If someone in the business tries to claim that it does, then they shouldn’t be allowed to make decisions about it.

I don’t like it

You don’t have to. But code review is about reading code, which is what you do all day. So if you don’t like it, then why are you in this industry?

Why code review is awesome

  • School new developers in the quirks of a project. And let’s face it, every project has them.
  • Coach, code review is likely the best way to coach junior developers into their new role. To invest in them early is probably the best way to get them to deliver value to the team.
  • Spread the knowledge, by having multiple people code-review your code you make more and more people aware of how the project grows and in what direction it expands.
  • Enable remote work, when you have no idea of the remote developer’s skillset, or when you are working with offshore or open source projects.

How to make it so

Informality is quintessential when it comes to code review; anyone should be able to do it and give their seal of approval. Gerrit has something called +1 and +2, and it is awful in the sense that it enforces a split between lead/senior and junior developers and gives a select few the ability to approve all the code. In my experience this has never worked out well.

Use Continuous Integration and a workflow resembling Git Flow. Connect as much as you can to the code-review platform, which should be close to the repository. I personally like how GitHub, Gerrit, GitLab and BitBucket do it: they make the pull request or patch trigger web-hooks that in turn trigger Jenkins or some other CI server. With that connection in place the possibilities are endless. Take some time and let the team get the full power of CI in addition to the code review.

Break the culture of single omnipotent developers. It never lasts, because they eventually leave, and when they do you can be sure that the product they had their poisonous hand over is dead in the water.




Tutorial: Flow SoapUI Plugin

What’s this?

This is a plugin I decided to write this week for SoapUI, and consequently Ready API, both SmartBear products. Both have a very nice plugin architecture these days, very eloquently described here by Ole himself, the original SoapUI developer.

Personally I’m quite familiar with the codebase, having worked at SmartBear for a time a few years ago, but I’ve never really had the time or idea to put in the effort of trying it out before.

So, at my newest assignment there are a few SOAP services that have the characteristic that they are REALLY unstable. Every second request fails at some point, making the test cases fail intermittently in SoapUI. We’re in the process of automating a number of test cases using the soapui-maven-plugin. So what the team has done is to wrap the test steps that are really unstable with groovy scripts and conditional gotos, to make them reset and run the test step again. Essentially we ended up with three test steps just to run one unstable step.

I know what you are thinking: why in the world would you want a test step failing every other time? And I agree with you. If I were in charge I would dig into what is making it fail and fix it. The problem here is that the service depends on a third-party service that we don’t have control over and that the owner is too scared to replace or modify, and we really need the data from that service. But we can’t have four times as many test steps just because of this problem either; that’ll get hard to maintain for us. So let’s write a plugin that repeats a number of steps if they have failed, for a few attempts! And here it is!

Installation

First follow the guidelines on GitHub, which explain where to put the files and how to jailbreak the plugin if you need that. Then here’s a guide on how to use the plugin.

Usage

Okay, so the plugin is installed and you are ready for testing. Now what? SoapUI should have picked up the plugin automatically, and you can find it in your list of test steps.

Just add the test-step and make sure to point out which test-step you want to iterate from.

If all assertions and test steps are successful, then the test step simply succeeds and continues to the next one. If not, it will begin another iteration, until it has iterated as many times as specified in the max attempts dropdown.

When the repeat test step is configured you can give it a go by running the test case. Here I use the sample project which is provided with the plugin source code. It’s very simple: it tests by sending GET requests to a mock service I set up with two resource URLs, /everyThirdTime and /everyTime. It’s fairly obvious how often they return a 200 OK HTTP response, right? Great. The mock service is started and stopped in the setupScript and teardownScript of the test case.

So the SoapUI test case consists of three test steps: works_every_time, works_every_third_time and Repeat Test Step, which is configured to repeat two times. The unstable test step is the second one, and let’s remember that I configured the mock service to respond 200 OK to the third request, so two attempts should be enough to make the service pass. Which it does, very reproducibly. So much so that I included it as a regression test for the plugin in TravisCI.

The test runner will show the historic failed results in the log, but the test case will be successful as long as at least one iteration is successful. If it is successful on the first iteration, then the repeat test step will not trigger any iterations at all.

Plugin Development

I won’t go into detail about the development itself; that’s a topic for another post, which I might write someday. It was a very nice experience, to be honest. The plugin architecture of SoapUI these days is very formidable and wraps things like the xmlbeans configuration nicely by parsing annotations instead, so we don’t have to think about it as plugin developers. I more or less followed Ole’s tutorial on that part until I had the test step reachable inside of SoapUI. From there it was just a matter of manipulating the components in the TestStep plugin’s run method and utilizing the API, which already had support for manipulating the control flow around the test steps. The tricky part was to get it to un-fail the test case when test steps of a previous iteration had failed but a following iteration had passed. I solved that by removing the test results of the previous iterations’ test steps and, upon success of an iteration, manipulating the testCaseRunContext property, see below.

if (allRelevantTestsPassed) {
    // Flip the runner's status property back to RUNNING so earlier failed iterations no longer fail the test case.
    testCaseRunContext.setProperty(TestCaseRunner.Status.class.getName(), TestCaseRunner.Status.RUNNING);
}

This was actually quite hidden, as there is no method on the testCaseRunContext to set the status of the test run. But since SoapUI is fairly open source, I could just check out the source and have a look at how it is handled internally so I could do the same myself. As I discovered, the status is a property.

That’s it! I hope you’ll find the plugin useful. Do give me a shout-out if you find any bugs, and if you have any feature requests for other test steps or SoapUI components, go to GitHub and report them as issues on the project.




New Blog

Hello!

Not that I really had an old one, but it was time to make myself one, especially considering that fiber was finally installed in our house. Free access to the internet highway means ye olde server is actually doing something. So why not a blog, about code and life, at 03:30 in the morning. What could possibly go wrong? I most likely misconfigured something along the way.

Anyway. The point of this blog is to keep my code, stories, scripts, failed bread-recipes, some kind of curriculum and misconfigured servers in the same place.

Short introduction.

I’m a father, husband, baker, cyclist, snowboarder, geek and, last but not necessarily least, a software engineer/developer/architect. I consider titles to be irrelevant; whoever writes the code really gets the job done. I’m going to stay brief on most things private, shielding my family somewhat from the fierce domain of the intertubes. But professionally I can keep talking.

My profession is first and foremost about writing code. I like functional programming, Linux, the JVM, continuous integration, hosting my own servers and stuff that categorizes as hometech.

In that area there are more than a few Raspberry Pis, one in particular powering a (not yet put inside a box) MagicMirror, which I’m very fond of at the moment.


I’m going to make a habit of sharing code snippets I’m particularly fond of or proud of, so I’ll get started right away.

Here’s a piece of bash that shows the current git branch (if any) you are on, directly in your prompt. Give it a spin!


Show branch name in terminal

https://gist.github.com/O5ten/1d5f5dda2b3f7abd6194
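
In essence it does something like the sketch below; the gist above is the authoritative version, and the function name here is mine:

# Print the current branch, or nothing when outside a git repository
parse_git_branch() {
    git branch 2>/dev/null | sed -n 's/^\* //p'
}
# Evaluated each time the prompt is drawn
export PS1='\w $(parse_git_branch)\$ '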


Here’s my GitHub, by the way. It seems GitHub stopped answering when the site bombarded them with requests while I was configuring it.