Behaviour Driven Development for Mobile Apps

Geert Van Der Cruijsen gave a Xamarin University guest lecture, which I attended, on behaviour-driven development (BDD) for mobile applications.

His website is http://mobilefirstcloudfirst.net/ and he has some great Xamarin blog posts (not all about automated testing). Geert is a technical expert and architect on mobile and cloud working as a lead consultant in the Netherlands. He is also a Xamarin University training partner.

The lecture started by going over what BDD is and why we use it. Geert explained that BDD has come about because of the following problem:

The product owner speaks in plain text; the developer speaks in code and the tester speaks in test plans.

The business guy has a great idea that will change the world and make a load of money. He needs a way to express this as requirements and communicate it to the developers without misunderstanding.

With each party speaking their own language there is a cost to the translation: the requirements blur, and the end result is not necessarily what the customer wants. As such:

We need one ubiquitous language between all the parties that helps build a clear specification. This is where BDD comes in: if the PO can write a requirement and that wording exists in both the specification and the solution itself, there is less room for miscommunication. The code itself should also reflect the business terminology, through domain-driven design.

BDD tries to create a communication flow between the business and the developers (and testers). Requirements are split into small, specific scenarios that the business can write in a well-known form, which the developer understands and which translates directly into software features – this leads to improved understanding.
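As an illustration (my own, not one from the lecture), the well-known form is the Given/When/Then wording of Gherkin, which a product owner can read and write without touching code:

    Feature: Shopping basket

    Scenario: Adding a product updates the basket
        Given a registered customer with an empty basket
        When the customer adds a product costing £10 to the basket
        Then the basket total is £10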

Enter: SpecFlow, NUnit, Gherkin, Xamarin.UITest, Xamarin Test Cloud

Geert introduced the above technologies and then focused on some of the specifics of implementing this process into a Xamarin mobile application and why. He gave the statistic that 26% of application downloads are abandoned after the first use and as such:

Mobile application quality is very important and the applications can be very complex. Getting the application right the first time is essential.

He then walked through a demo Xamarin application, added some SpecFlow tests around its functionality, and demonstrated running the tests on Xamarin Test Cloud.

All in all, the lecture was mainly about the benefits of BDD for the business and the customer, and less about the implementation, but it was an interesting talk and it highlighted two things to me:

  • Without input from the business on the content of the acceptance tests, they may not satisfy the customer requirements.
  • Domain-driven design is very important for acceptance tests to be easily understood and implemented by developers, as there is less translation required. If a feature is called ‘Updates basket’ on the UI and by the business, then it should be called that in the code.

Question Time
During the mobile application development I have done, we hit a couple of testing challenges, and I posed one of the bigger ones to Geert to see what his advice would be for tackling it:

Question: Given that Xamarin Test Cloud can take a huge amount of time to run tests, how would you manage running tests for an application with a large number of them?
Answer: He suggested running a few ‘core’ tests on the Test Cloud on a regular basis and running the full suite perhaps weekly, as well as running the tests often locally. (This is how we managed it on our project, which is good to know.)

Microsoft C# 70-483 Exam

I have recently taken and passed the C# 70-483 exam, so this is my obligatory post on what I found useful while preparing for the exam, along with a couple of pointers on the exam itself.

Although I have only been using C# professionally for a little over a year, most of the topics covered in the exam were not new to me. However, the depth to which they were covered meant that there was a lot of knowledge I was missing.

I found that the questions in the exam itself were fairly straightforward, but the way they were asked had me questioning the fundamentals of the English language. They were phrased in a very pedantic way, so my one piece of advice would be to take your time and not be afraid to mark a question for review so you can come back to it.

The following are the resources I used before the exam:

  • Programming in C# Exam Ref book from Microsoft
  • MeasureUp practice exams
  • https://mva.microsoft.com/en-US/training-courses/programming-in-c-jump-start-14254
  • Pluralsight for specific topics

My advice would be not to start the practice exams until you are fairly close to the exam date. The practice tests use a question bank which you undoubtedly begin to memorize after a while. Passing on some advice that Leon gave me – you don’t just need to know the answer, but why that is the answer.

For future exams, the process I would take is to watch the jump start video on the Microsoft Virtual Academy to identify the topic areas where I am lacking knowledge. Once they are identified, I would use Pluralsight alongside the exam reference book to fill those gaps. Finally, starting no more than a couple of weeks before the exam, I would begin using the practice exams.

As well as the knowledge factor, the exam attempts to test your ‘experience’, so running code samples and trying out the different things you are learning to better understand them would certainly be beneficial.

Xamarin UI Tests Page Object Model

This blog post is an overview of how we have used SpecFlow, the Xamarin.UITest framework and the page object pattern to create an automated test suite for a mobile application running on Android and iOS.

The application has been developed with Xamarin.iOS and Xamarin.Android using MVVM. The models and view models are all shared between the platforms and the views are native. As this was a capability project we did not develop the apps side by side; instead we started with Android, and the iOS development started halfway through the project.

This manner of development threw up a fair number of problems with the differences between the platforms, and naturally the tests had to cater for them without sacrificing the ‘clean code’ aspect. The main challenges (in terms of testing) were in the controls, such as Android’s ‘Spinner’ and the iOS ‘Picker’, as well as date/time selectors, checkboxes and radio buttons. There were many development challenges too (mainly life-cycle stuff), but those can be covered elsewhere.

Walkthrough

I believe the easiest way to show this is with a walkthrough from the SpecFlow feature down to the page object, discussing how it has been implemented and some of the key points. I have not included the full code, just the interesting bits, and have redacted the project-specific info.

We start with a test case written in Gherkin:
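The original feature file is not reproduced here, but a sketch of the sort of scenario it contained looks like the following (the exact step wording and field names are illustrative rather than the project’s real ones):

    Feature: Snake form validation

    Scenario: Submitting a half-completed snake form shows validation errors
        Given I am logged in as a standard user
        And I have navigated to the snake form page
        When I fill in the fields "Species:Adder, Length:30cm, Reporting:Phone"
        And I submit the form
        Then I should see the validation errors "Latitude, Longitude, Observation"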

Simple enough: we want to go to the snake form page, fill in half the details and then check that we get the correct validation messages back. In this case we want latitude, longitude and observation errors to display.

In this example the differences between Android and iOS were:

  • Android displays errors as a toast that disappears after 4 seconds whereas iOS displays a dialog which requires user input.
  • The reporting field is a Spinner on Android and a Picker on iOS.

These steps are then contained in a steps file; I haven’t included the first two (user setup and navigation) as they are shared steps.
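The full steps file is not shown; a minimal sketch of its shape is below (the class, the AppManager helper and the step wording are illustrative, assuming standard SpecFlow bindings over the shared SnakePage page object):

    using System.Linq;
    using TechTalk.SpecFlow;

    [Binding]
    public class SnakeFormSteps
    {
        // A single page object instance is shared by every step in the scenario.
        // AppManager is a hypothetical helper holding the IApp started in the shared setup steps.
        private readonly SnakePage _snakePage = new SnakePage(AppManager.App, AppManager.OnAndroid);

        [When(@"I fill in the fields ""(.*)""")]
        public void WhenIFillInTheFields(string fields)
        {
            // The only logic in the step: split the shorthand "Name:Value" pairs,
            // then delegate straight to the page object.
            foreach (var pair in fields.Split(','))
            {
                var parts = pair.Split(':');
                _snakePage.SetControlValue(parts[0].Trim(), parts[1].Trim());
            }
        }

        [When(@"I submit the form")]
        public void WhenISubmitTheForm()
        {
            _snakePage.Submit();
        }

        [Then(@"I should see the validation errors ""(.*)""")]
        public void ThenIShouldSeeTheValidationErrors(string errors)
        {
            _snakePage.AssertValidationErrors(errors.Split(',').Select(e => e.Trim()));
        }
    }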

You can see that the snake page instance is being used by all of the steps and that they perform as little logic as possible. The most they do is split some strings before calling the method on the page object.

So what does the page object model look like?

For this blog post I have merged the Snake Page and its base class into a single class to display the full code. However, methods such as SetTextValue and SetSpinnerValue are shared between multiple pages and as such can live in a base class; the same can be said for some of the other methods.
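The merged class is not reproduced in full; the sketch below gives a rough idea of its shape as described in the following paragraphs (the control names, IDs and messages are illustrative, assuming the Xamarin.UITest IApp query API):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Xamarin.UITest;

    public class SnakePage
    {
        private readonly IApp _app;
        private readonly bool _onAndroid;

        // Android only: validation toasts disappear after four seconds, so they are captured into state.
        private List<string> _capturedErrors = new List<string>();

        // Shorthand names used at the SpecFlow level -> the IDs of the controls on the form.
        private readonly Dictionary<string, string> _controls = new Dictionary<string, string>
        {
            { "Species",   "SpeciesText" },
            { "Length",    "LengthText" },
            { "Reporting", "ReportingSpinner" },
        };

        // Shorthand error names -> the full validation messages the form can return.
        private readonly Dictionary<string, string> _validationErrors = new Dictionary<string, string>
        {
            { "Latitude",    "A latitude must be supplied" },
            { "Longitude",   "A longitude must be supplied" },
            { "Observation", "An observation date must be supplied" },
        };

        public SnakePage(IApp app, bool onAndroid)
        {
            _app = app;
            _onAndroid = onAndroid;
        }

        public void SetControlValue(string name, string value)
        {
            // Naming convention: IDs ending in "Text" are free-text fields,
            // IDs ending in "Spinner" are a Spinner on Android / Picker on iOS.
            var id = _controls[name];
            if (id.EndsWith("Spinner"))
                SetSpinnerValue(id, value);
            else
                SetTextValue(id, value);
        }

        private void SetTextValue(string id, string value)
        {
            _app.WaitForElement(c => c.Marked(id));
            _app.EnterText(c => c.Marked(id), value);
            _app.DismissKeyboard();
        }

        private void SetSpinnerValue(string id, string value)
        {
            _app.Tap(c => c.Marked(id));
            _app.Tap(c => c.Text(value));
            if (!_onAndroid)
                _app.Tap(c => c.Text("Done")); // the iOS Picker needs confirming, the Android Spinner does not
        }

        public void Submit()
        {
            _app.Tap(c => c.Marked("SubmitButton"));
            if (_onAndroid)
                CaptureValidationErrors(); // the toast is only on screen for four seconds
        }

        // Android only: grab the validation text out of the toast before it disappears.
        private void CaptureValidationErrors()
        {
            _capturedErrors = _app.Query(c => c.Marked("message")) // the toast's text view
                                  .Select(r => r.Text)
                                  .ToList();
        }

        public void AssertValidationErrors(IEnumerable<string> expected)
        {
            // On iOS the errors stay on screen in a dialog, so they can be queried directly.
            var actual = _onAndroid
                ? _capturedErrors
                : _app.Query(c => c.Marked("ValidationDialog")).Select(r => r.Text).ToList();

            foreach (var name in expected)
            {
                if (!actual.Any(a => a != null && a.Contains(_validationErrors[name])))
                    throw new Exception($"Expected validation error '{name}' was not displayed.");
            }
        }
    }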

There are two dictionaries for the form – one for the controls and one for the validation errors that the form can return. This has been done to allow shorthand at the SpecFlow level, but it could be seen as introducing one more thing to maintain.

We have used a naming convention for the text and spinner IDs that has allowed us to use a generic method for setting the control text and then split it down to the different input methods. You can see in the snake page class where we are handling the different controls by simply checking whether the platform is Android or iOS and then acting accordingly.

CaptureValidationErrors is not required on iOS, as the validation errors are displayed in a dialog box and therefore stay on the screen indefinitely. On Android we have a set time of four seconds to retrieve the validation messages and then check through them, which means they have to be stored in a variable. This does make the page object stateful; however, we only check the validation messages once, so that is OK.


I hope this has shown how we have used SpecFlow, the Xamarin.UITest framework and the page object pattern to test the mobile application whilst keeping the test framework flexible but simple. The page object pattern has allowed us to handle the different platform types and build a maintainable set of page classes.

Learning Management

These are some learning management methods that I have been using to make the most of my time rather than learning things on a ‘first come, first served’ basis. This has been especially useful as topics to learn have come from multiple mentors and from work on projects – it cannot all be done at the same time and I have had to prioritize. I also tend to find myself reading around topics that are generally interesting but not relevant to what I set out to do – this approach lets me stay on track whilst bringing those topics into my backlog.

Learning Journal

This one was set as a requirement for my trainee scheme; the idea was to note down topics in a reflective manner, following the theory, application and reflection cycle. I found that I tended to use this as a retrospective journal more than a reflective one, as it was taking a considerable amount of time to maintain. It was easier to look ahead at learning topics by adding them to a learning kanban (see below), which was easy to maintain.

Accepting that the journal would be more retrospective than reflective allowed me to use it in other ways. I created more detailed entries for technologies, processes and techniques that I had used: where they had come from, and where and how I had used them. Keeping track of resources has also helped me see which types of resource I find most beneficial.

In all I think this method worked well for me but I can see why it would not be so popular with others as it’s a lengthy learning technique that requires some dedication to keep it updated.

Learning Kanban

I set myself up with a board on Trello with backlog, in progress, backburner and done columns. Just like a standard board, learning topics can be added to the backlog to be looked at. I have had a variety of backlog items, from researching whole technologies to reading a single blog post.

Each time someone has mentioned a topic for me to look at I have added it here, so that someday I will get to it. What I have struggled with most is putting these in a suitable order, as it requires some restraint not to put the most interesting at the top. Equally, it requires some restraint not to just cherry-pick the topics.

When I have felt comfortable that I have completed an item it has been moved over to the done column.

I think this method has given me the most benefit of the ones I have tried, as it has a very low cost in time and effort. It is a very efficient way of managing your learning.

Blogging

The final learning method I have used is blogging; more time-consuming than both of the others, it is better suited to the few topics that have been particularly challenging or detailed.

I set myself up with a simple WordPress blog and have written a variety of posts about anything I have found especially beneficial. It is a great way of reinforcing understanding, as writing out an article tests your knowledge.

You could probably get similar benefits from creating a wiki of learning rather than a blog if you preferred a more technical style of writing. I think that blogging has been a good resource for learning.

Creating a versioning script for a vNext build definition

OK, so this is actually a little more than a versioning script. I was asked to create a PowerShell step in a vNext build definition that would set the version, company, copyright and product information in all of the AssemblyInfo.cs files in a solution.

I started by looking for existing solutions, which led me to an excellent article from InCycle here. This in turn led me to Microsoft’s example here. This was doing essentially what I wanted and could be expanded upon; here is what I did and the code I used.

The main challenge I had to handle is that this solution is fairly old and has well over a hundred projects of varying versions. This means there have been multiple management methods over the years, and as such I had to account for assembly information that was missing or had been modified from the defaults.

Example: 
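The original example is not included; the sort of file being dealt with looks something like this (the contents are illustrative) – an AssemblyInfo.cs where some of the default attributes have been removed and the version has been bumped by hand:

    using System.Reflection;

    // AssemblyCompany and AssemblyCopyright were deleted in a past tidy-up,
    // and the version has been set manually rather than by the build.
    [assembly: AssemblyTitle("Some.Old.Project")]
    [assembly: AssemblyProduct("Some.Old.Project")]
    [assembly: AssemblyVersion("2.3.0.0")]
    [assembly: AssemblyFileVersion("2.3.0.0")]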

So my solution was to use the following:
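The script itself is not reproduced here; below is a simplified sketch of the approach, modelled on Microsoft’s sample script. It pulls the version out of the build number and rewrites the attributes in every AssemblyInfo.cs (the company and product values are placeholders, and unlike the real script this sketch only replaces attributes that are already present):

    # Expects the build number set on the 'General' tab to end in a four-part version, e.g. 1.0.16123.1
    $VersionRegex = "\d+\.\d+\.\d+\.\d+"

    if (-not $Env:BUILD_SOURCESDIRECTORY) { Write-Error "BUILD_SOURCESDIRECTORY is not set"; exit 1 }
    if (-not $Env:BUILD_BUILDNUMBER)      { Write-Error "BUILD_BUILDNUMBER is not set"; exit 1 }

    # Pull the version out of the build number
    $VersionData = [regex]::Matches($Env:BUILD_BUILDNUMBER, $VersionRegex)
    if ($VersionData.Count -ne 1) { Write-Error "Could not find a version in BUILD_BUILDNUMBER"; exit 1 }
    $NewVersion = $VersionData[0].Value

    # Default company information applied to every assembly (placeholder values)
    $Company   = "My Company"
    $Product   = "My Product"
    $Copyright = "Copyright (c) $((Get-Date).Year) My Company"

    $files = Get-ChildItem $Env:BUILD_SOURCESDIRECTORY -Recurse -Include "AssemblyInfo.cs"
    foreach ($file in $files) {
        attrib -r $file.FullName    # clear the read-only flag so the file can be rewritten
        $content = Get-Content $file.FullName
        $content = $content -replace 'AssemblyVersion\(".*"\)',     "AssemblyVersion(""$NewVersion"")"
        $content = $content -replace 'AssemblyFileVersion\(".*"\)', "AssemblyFileVersion(""$NewVersion"")"
        $content = $content -replace 'AssemblyCompany\(".*"\)',     "AssemblyCompany(""$Company"")"
        $content = $content -replace 'AssemblyProduct\(".*"\)',     "AssemblyProduct(""$Product"")"
        $content = $content -replace 'AssemblyCopyright\(".*"\)',   "AssemblyCopyright(""$Copyright"")"
        $content | Set-Content $file.FullName
        Write-Host "Applied $NewVersion and the default company info to $($file.FullName)"
    }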

Simple enough, and only modified a little from the Microsoft version to account for the additional fields. This was the first time I had written PowerShell, so it was certainly a new experience for me.

I then included this file in the source control of the solution, in a folder called BuildScripts. Although this is covered in the InCycle article, these are the complete steps I used to incorporate the script into my build process.

  1. Include the file in source control (/BuildScripts/ApplyVersionAndDefaultsToAssemblies.ps1)
  2. Create a PowerShell step in the vNext definition and point it at the script.
  3. On the ‘Variables’ settings create a ProjectVersion variable and set the version in the format x.x
  4. On the ‘General’ settings set the build number format to:

    $(BuildDefinitionName)_$(ProjectVersion).$(date:yy)$(DayOfYear)$(rev:.r)

  5. Then save everything and queue a new build. It will apply your company information and all assemblies will have the same version. I am now using this same script in multiple solutions and all builds.

How badly setup Auto Tests can impact a development team

I have just finished the final sprint with a team and for the past few days have been trying to fix some of the failing auto tests, of which there have been many.

The tests are not failing because of the program code but because of a performance impact when running on the test server. On our fast dev machines all the tests run easily and quickly – run them on what I imagine is a VM and the program goes at a snail’s pace. This means the tests try to access resources before they have been created or before they are visible, and so they begin to fail.
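The post does not go into the fix itself, but the usual remedy for this kind of timing failure – assuming a Xamarin.UITest-style suite like the one in the page object post above, where app is the IApp instance – is to wait and poll for an element rather than assume it is already there, with a timeout generous enough for a slow test server:

    // Fragile: assumes the save button has already been created and is visible.
    app.Tap(c => c.Marked("SaveButton"));

    // More robust: wait (and poll) for the element before interacting with it.
    app.WaitForElement(c => c.Marked("SaveButton"),
                       "Timed out waiting for the save button",
                       TimeSpan.FromSeconds(60));
    app.Tap(c => c.Marked("SaveButton"));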

The immediate impact on the development team that I have observed is that a rota exists for running the auto tests, re-running them and, if needed, re-running them again; and after that, running the still-failing tests locally to see whether each failure is an actual bug or not. With the tests taking over three hours to run, it’s costing a developer at least one hour a day – and really it’s closer to two or three if they have to run the auto tests locally.

Over six sprints of two weeks each – one to two hours a day across roughly sixty working days – this has easily cost the development team 60 to 120 hours.

That’s quite a lot of time for something that shouldn’t have been a problem. The fault lies in many places – I believe that the writers of the tests should have taken more care, especially when it became apparent that there were performance issues. I also believe that the PO/management should have allocated more time to actually fixing the tests rather than creating a rota.

The majority of these tests were inherited from the previous sprints on this project, so it’s not really the current developers’ fault that they were so bad. There should have been a task in sprint 0 to actually set up the auto tests and ensure they were all running properly and stably.

Throughout the development it became clear that we could not trust the auto tests, and therefore they became a hindrance rather than a useful tool.

From the 30–40 tests that failed every day, another developer and I have got that down to about six, and we are hopeful that some final changes will make them more robust. We have certainly not spent over 120 hours, or even 60 – perhaps around 12 hours to get to this point.

Know Your Domain

I have been reading ‘The Clean Coder’ by “Uncle Bob” Martin and a section on knowing your domain has recently struck home for me.

I have started working with a development team on a proper project (rather than one of my previous trainee ones) and have had a very steep learning curve on the products that this program handles. More than that, I had no knowledge of what the customers are using the products for, or even how they are using them.

It is the responsibility of every software professional to understand the domain of the solutions they are programming. If you are writing an accounting system, you should know the accounting field. You don’t have to be a domain expert, but there is a reasonable amount of due diligence that you ought to engage in.

(Martin, R., The Clean Coder, p. 21)

This became very apparent to me when I was given my first PBI to work on. It involved modifying existing functionality to work with an additional product type. One of the functions was outputting a report on the user’s valid holdings and the other on the user’s cancelled/replaced holdings.

Simple enough, until I compared what products were actually in these reports and saw that some were missing from the cancelled/replaced one. Due to my lack of knowledge of the domain I had no clue what these products even were, or why they were not included. As soon as I asked someone and found out, it became obvious.

I should have researched beforehand and understood what products this program was dealing with; it would have saved me wasting my time. I now have a list of acronyms covering the products and client tools, and a much greater understanding of the products and services being provided, although I will continue growing my knowledge base.