Musings of a PC

Thoughts about Windows, TV and technology in general

Improving web site quality through tighter GitHub/Bamboo integration

Before I get into the nitty-gritty, a brief recap of how things are working before any of the changes described in this article …

A Linaro static website consists of one or more git repositories, with potentially one being hosted as a private repository on Linaro’s BitBucket server and the others being hosted on GitHub as public repositories. Bamboo, the CI/CD tool chosen by Linaro’s IT Services to build the sites, monitors these repositories for changes and, when a change is identified, it runs the build plan for the web site associated with the changed repositories. If the build plan is successful, the staging or production web site gets updated, depending on which branch of the repository has been updated (develop or master, respectively).

All well and good, but it does mean that if someone commits a breaking change to a repository (e.g. a broken link or some malformed YAML), no other updates can be made to that website until that specific problem has been resolved.

Solving this required several changes that, together, helped to ensure that breaking changes couldn’t end up in the develop or master branches unless someone broke the rules by bypassing the protection. The changes we made were:

  • Using pull requests to carry out peer reviews of changes before they got committed into the develop or master branch.
  • Getting GitHub to trigger a custom build in Bamboo so that the proposed changes were used to drive a “test” build in Bamboo, thereby assisting the peer review by showing whether or not the test build would actually be successful.
  • Using branch protection rules in GitHub to enforce requirements such as needing the tests to succeed and needing code reviews.

Pull requests are not a native part of the git toolset but they have been implemented by a number of git hosting platforms, including GitHub, GitLab and BitBucket. The platforms vary in the approach taken but, essentially, one or more people are asked to look at the differences between the incoming changes and the existing files to see if anything wrong can be identified.

That, in itself, can be a laborious process that is not always successful at spotting problems, which is why there is an increasing use of automation to assist. GitHub’s approach is to have webhooks or apps trigger an external activity that might perform some testing and then report back on the results.

We opted to use webhooks to get GitHub to trigger the custom builds in Bamboo. They are called custom builds because one or more Bamboo variables are explicitly defined in order to change the behaviour of the build plan. I’ll talk more about them in a subsequent article.
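
In the meantime, to give a flavour of what triggering one looks like, here is a minimal sketch of queuing a custom build through Bamboo’s REST API. This is illustrative rather than our production code: the server URL, plan key, variable name and credentials are all hypothetical.

import requests

BAMBOO_URL = "https://bamboo.example.com"  # hypothetical server URL
PLAN_KEY = "WEB-SITE"                      # hypothetical build plan key

def trigger_custom_build(pr_sha):
    # Bamboo interprets query parameters of the form
    # bamboo.variable.<name>=<value> as plan variable overrides,
    # which is what makes the queued build a "custom" build.
    response = requests.post(
        BAMBOO_URL + "/rest/api/latest/queue/" + PLAN_KEY,
        params={"bamboo.variable.github_pr_sha": pr_sha},
        auth=("build-bot", "app-password"),  # hypothetical credentials
    )
    response.raise_for_status()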

The final piece of the puzzle was implementing branch protection rules. I’ve linked to the GitHub documentation above but I’ll pick out the key rules we’ve used:

  • Require pull request reviews before merging.
    When enabled, all commits must be made to a non-protected branch and submitted via a pull request with the required number of approving reviews.
  • Require status checks to pass before merging.
    Choose which status checks must pass before branches can be merged into a branch that matches this rule.
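
Both of those rules can also be applied programmatically through GitHub’s branch protection REST API. Here is a hedged sketch of what that might look like; the status check context name and the token are assumptions for illustration, not our actual configuration.

import requests

def protect_branch(owner, repo, branch, token):
    payload = {
        # Require the Bamboo test build to pass. The context string is
        # whatever name the status check reports itself under; this one
        # is a hypothetical example.
        "required_status_checks": {
            "strict": True,
            "contexts": ["bamboo/test-build"],
        },
        # Require one approving review on every pull request.
        "required_pull_request_reviews": {
            "required_approving_review_count": 1,
        },
        # Corresponds to the "Include administrators" option
        # discussed below.
        "enforce_admins": False,
        "restrictions": None,
    }
    response = requests.put(
        f"https://api.github.com/repos/{owner}/{repo}/branches/{branch}/protection",
        json=payload,
        headers={
            "Authorization": "token " + token,
            "Accept": "application/vnd.github+json",
        },
    )
    response.raise_for_status()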

There is a further option that has been tried in the past which is “Include administrators”. This enforces all configured restrictions for administrators. Unfortunately, too many of the administrators have pushed back against this (normally because of the pull request review requirement) so we tend to leave it turned off now. That isn’t to say, though, that administrators get a “free ride”. If a pull request requires a review, an administrator can merge the pull request but GitHub doesn’t make it too easy:

Clicking on Merge pull request, highlighted in “warning red”, results in the expected merge dialog but with extra red bits:

So an administrator does have to tick the box to say they are aware they are using their admin privilege, after which they can complete the merge:

If an administrator pushes through a pull request that doesn’t build then they are in what I describe as the “you broke it, you fix it” scenario. After all, the protections are there for a good reason 😊.

Index page: Tips, tricks and notes on building Jekyll-based websites

Link-checking static websites

In migrating the first Linaro site from WordPress to Jekyll, it quickly became apparent that part of the process of building the site needed to be a “check for broken links” phase. The intention was that the build plan would stop if any broken links were detected so that a “faulty” website would not be published.

Link-checking a website that is still being built brings a potential problem: if a page references a new page, the new page won’t have been published yet, so relying on checking http(s) URLs alone will fail to find it and an erroneous broken link is reported.

You want to be able to scan the pages that have been built by Jekyll, on the understanding that a relative link (e.g. /welcome/index.html instead of https://mysite.com/welcome/index.html) can be checked by looking for a file called index.html within a directory called welcome, and that anything that is an absolute link (i.e. it starts with http or https) is checked against the external site.
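
As a rough illustration of that split, here is a simplified sketch, assuming Jekyll’s output lands in a _site directory. The real tool does considerably more than this.

import os

def check_link(link, site_root="_site"):
    if link.startswith(("http://", "https://")):
        return None  # absolute link: check it against the external site instead
    # Relative link: look for the corresponding file in the built site,
    # treating a trailing slash as an implied index.html.
    path = link.lstrip("/")
    if path == "" or path.endswith("/"):
        path = os.path.join(path, "index.html")
    return os.path.isfile(os.path.join(site_root, path))

# check_link("/welcome/index.html") is True if _site/welcome/index.html exists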

I cannot remember which tool we started using to try to solve this. I do remember that it had command-line flags for “internal” and “external” link checking but testing showed that it didn’t do what we wanted it to do.

So an in-house solution was created. It was probably (at the time) the most complex bit of Python code I’d written, and it involved learning about things like how to run multiple threads in parallel so that the external link checking doesn’t take too long. Some of our websites have a lot of external links!
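
The threading side of it looks something like this hedged sketch; again, this is illustrative rather than the tool’s actual code.

from concurrent.futures import ThreadPoolExecutor

import requests

def check_external(url):
    # A HEAD request keeps the traffic down; a 4xx/5xx response or a
    # connection failure counts as a broken link.
    try:
        response = requests.head(url, allow_redirects=True, timeout=10)
        return url, response.status_code < 400
    except requests.RequestException:
        return url, False

def find_broken(urls):
    # Fan the checks out across a thread pool so that one slow external
    # server doesn't serialise the whole run.
    with ThreadPoolExecutor(max_workers=20) as pool:
        return [url for url, ok in pool.map(check_external, urls) if not ok]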

Over time, the tool has gained various additional options to control the checking behaviour, like producing warnings instead of errors for broken external links, which allows the 96Boards team to submit changes/new pages to their website without having to spend time fixing broken external links first.

The tool is run as part of the Bamboo plan for all of the sites we build and it ensures that the link quality is as high as possible.

Triggering a test build on Bamboo now ensures that a GitHub Pull Request is checked for broken links before the changes are merged into the branch. We’ve also published the script as a standalone Docker container to make it easier for site contributors to run the same tool on their computer without needing to worry about which Python libraries are needed.

The script itself can be found in the git repo for the Docker container, so you can see for yourself how it works and contribute to its development if you want to.

Index page: Tips, tricks and notes on building Jekyll-based websites

Automating Site Building

As I mentioned in Building a Website That Costs Pennies to Operate, the initial technical design of the infrastructure had the website layout defined in a private git repository and the content in a public git repository.

The private git server used was Atlassian BitBucket – the self-hosted version, not the cloud version. Although Linaro’s IT Services department is very much an AWS customer, we had already deployed BitBucket as an in-house private git service, so it made more sense to use that rather than pay an additional fee for an alternative means of hosting private repositories, such as CodeCommit or GitHub.

So what to do about the build automation? An option would have been to look at CodeBuild but, as Linaro manages a number of Open Source projects, we benefit from Atlassian’s very kind support of the Open Source community, which meant we could use Atlassian Bamboo on the same server hosting BitBucket and it wouldn’t cost us any more money.

For each of the websites we build, there is a build plan. The plans are largely identical to each other and go through the following steps, essentially emulating what a human would do:

  • Check out the source code repositories
  • Merge the content into a single directory
  • Ensure that Jekyll and any required gems are installed
  • Build the site
  • Upload the site to the appropriate S3 bucket
  • Invalidate the CloudFront cache

Each of these is a separate task within the build plan and Bamboo halts the build process whenever a task fails.
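
To make that concrete, here is a hedged sketch of the overall shape, compressed into a single script. The repository URLs, bucket name and distribution ID are hypothetical, and the real plan runs each step as its own Bamboo task rather than as one script.

import subprocess

def run(*cmd):
    # check=True aborts on the first failing step, mirroring Bamboo
    # halting the plan when a task fails.
    subprocess.run(cmd, check=True)

run("git", "clone", "git@github.com:example/website-content.git", "content")
run("git", "clone", "git@bitbucket.example.com:example/website-theme.git", "theme")
run("rsync", "-a", "theme/", "merged/")    # merge the repositories ...
run("rsync", "-a", "content/", "merged/")  # ... into a single directory
run("bundle", "install")                   # ensure Jekyll and required gems are present
run("bundle", "exec", "jekyll", "build", "--source", "merged", "--destination", "_site")
run("aws", "s3", "sync", "_site/", "s3://example-site-bucket/", "--delete")
run("aws", "cloudfront", "create-invalidation",
    "--distribution-id", "E1EXAMPLE", "--paths", "/*")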

There isn’t anything particularly magical about any of the above – it is what CI/CD systems are all about. I’m just sharing the basic details of the approach that was taken.

Most of the tasks in the build plan are what Bamboo calls a script task, where it executes a script. The script can either be written inline within the task or you can point Bamboo at a file on the server and it runs that. In order to keep the build plans as identical as possible to each other, most of the script tasks run files rather than using inline scripting. This minimises the duplication of scripting across the plans and greatly reduces the administrative overhead of changing the scripts when new functionality is needed or a bug is encountered.

To help those scripts work across different build plans, we rely on Bamboo’s plan variables, where you define a variable name and an associated value. Those are then accessible by the scripts as environment variables.

We then extended the build plans to work on both the develop and master branches. Here, Bamboo allows you to override the value of specified variables. For example, the build plan might default to specifying that jekyll_conf_file has a value of “_config.yml,_config-staging.yml”. The master branch variant would then override that value to be “_config.yml,_config-production.yml”.
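
A minimal sketch of what such a shared script might look like (Bamboo surfaces the jekyll_conf_file plan variable to script tasks as the environment variable bamboo_jekyll_conf_file):

import os
import subprocess

# The same script serves both branches: the develop and master plan
# variants supply different values for jekyll_conf_file.
conf_files = os.environ["bamboo_jekyll_conf_file"]
subprocess.run(
    ["bundle", "exec", "jekyll", "build", "--config", conf_files],
    check=True,
)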

The method used to trigger the builds automatically has changed over time: we’ve changed the repository structure, GitHub has changed its service offerings, and we’ve been integrating Bamboo more tightly with GitHub, so I’m not going to go into the details just yet.

Index page: Tips, tricks and notes on building Jekyll-based websites


Building a Website That Costs Pennies to Operate

Back in 2014, the company I work for – Linaro – was using WordPress to host its websites. WordPress is a very powerful and flexible piece of software, but it did present some challenges for us:

  • Both WordPress and MySQL needed regular patching to minimise vulnerabilities.
  • It could be quite a resource hog if you were trying to get an optimal end-user experience from it.
  • It was difficult to make a WordPress site run across multiple servers to avoid a single point of failure that could leave the site inaccessible.

Towards the end of 2014, I attended the AWS re:Invent conference and happened to attend a session that would ultimately change how Linaro delivers its websites:

The basis of the idea presented in this session is to use a static site generator which takes your content, turns it into HTML pages and stores them in an S3 bucket, from where they can be served to your customers.

By doing so, it eliminates the “retrieve the data from a database and convert it to a web page on the fly” process, and with it the need for both a database platform (e.g. MySQL) and the conversion software (e.g. WordPress). The up-front conversion is a one-off time hit, compared to the per-page time hit that a system like WordPress endures.

It is worth emphasising that although the session was at an Amazon conference, the underlying premise and the tools being discussed can be used on any cloud provider.

Earlier, I said that this session would ultimately change how Linaro delivers its websites; “ultimately” because it took a bit of persuading … In fact, the following year, I shared this article with the staff who managed the content of the websites:

Why Static Site Generators Are The Next Big Thing

The challenge was that everyone was used to using WordPress and switching to a static site generator was going to be quite an upheaval in terms of workflow, content creation and management.

We got there, though.

We ended up choosing Jekyll as our static site generator. One of the reasons is that it is the technology used to drive GitHub Pages and, as such, gets a lot of use. For the rest of the infrastructure, we did use S3 and CloudFront to provide the hosting and, as expected, this turned out to be a lot cheaper and a lot faster than using WordPress.

To migrate the websites to Jekyll, the Marketing team started by building out a Jekyll theme to manage the look and feel of the sites. Initially, this was kept in a private git repository on one of Linaro’s private git servers. The content was always managed as public git repositories on GitHub.

That split of repositories actually caused a couple of headaches for us:

  1. Building the site required both repositories to be retrieved from the git servers and the content merged.
  2. If we wanted to automate the building of the website, we’d need tools that could work with our private git server.

… but that will keep for another article 😊.

Index page: Tips, tricks and notes on building Jekyll-based websites

Tips, tricks and notes on building Jekyll-based websites

This is a collection of articles about how Linaro uses Jekyll and other tools to build its websites. This particular post will be the main index page and will link out to the other posts.

It should be noted that I will be focusing on the tools and technology, rather than tips on Jekyll itself (like how to build a theme). There are better qualified people than myself to write about such topics 😊

Building a Website That Costs Pennies to Operate

Linaro sites and repositories

Automating Site Building

Link-checking static websites

Improving web site quality through tighter GitHub/Bamboo integration

Future topics (partly so I remember what I want to write about):

  • Triggering GitHub tests when a Pull Request is opened
  • Moving to a Docker container for building the site
  • Edge redirects

An open letter to Leo Laporte, Paul Thurrott and Mary Jo Foley

I know that I haven’t written anything here for a long time now … I’ve been sorta busy :). I needed to get something off my chest, though, and this seemed as good a platform as any on which to do it.

So this is addressed to Leo Laporte, Paul Thurrott and Mary Jo Foley, the hosts of TWiT TV’s Windows Weekly. TWiT has the tagline of “netcasts you love from people you trust” and Windows Weekly has the tagline of “talk about Windows and all things Microsoft”. Sadly, for me at least, neither of these statements has been true for a while now.

I want to make it clear that this is an opinion piece. As such, you may disagree with what I write, and that’s fine – you are entitled to your own opinion – but I am allowed to have my own opinion even if you do disagree with it.

With that said …

Windows Weekly really doesn’t seem to be sticking to talk about Windows and all things Microsoft. Episode 461, for example, spent the opening 30 minutes talking about Facebook and their bot announcement; I’ve even re-listened to that part of the show and there was barely any comparison with the bot announcements made at Microsoft’s recent BUILD developer conference. Leo even went so far as to say that Facebook had the inside track! There was then an unannounced advertisement for Amazon Echo before going on to talk about Android handsets again (see below) and how Mary Jo is now using a Nexus instead of a Lumia Icon.

Remind me what this show is called?

Leo, you come across as a very affable person; easy to listen to and generally a good host. However, there are three things that really grate with me about you on Windows Weekly:

  1. Sometimes you just don’t listen to whoever else is talking, with the result that you ask a question about something that was literally said seconds earlier.
  2. There doesn’t seem to be a show that goes by without you promoting an Android handset. This is Windows Weekly. If I were interested in Android stuff, I’d be listening to This Week in Google. Anyone would think it was an unannounced advertisement the way you go on about it.
  3. Associated with #2, you really do have a tendency to derail the topic of conversation. You even admitted as much in episode 461 as you went to the first ad after talking about nothing really related to Microsoft.

Paul, you are a very depressing person to listen to. I don’t know if your articles have always been so tabloid or if this started when you left Penton to form thurrott.com, but I do get very disappointed/frustrated when headlines are just clickbait. Take the headline “Windows Phone is Irrelevant Today, But It Still Has a Future”. This is a very provocative headline … particularly since the use of the word irrelevant actually pertains to the statistical relevance of the number of Windows Phone/Mobile handsets in use. Like Leo, you have started pushing Android really hard lately instead of trying to find even the smallest positive about Windows Mobile.

You made a fair point about how Microsoft could have used Windows Mobile handsets on stage during the BUILD keynotes but, apart from that, your criticism of the lack of anything phone-related at BUILD was very unfair. Windows 10 Mobile is Windows 10. Any developer-related news or information applied across the whole of Windows 10, unless it was HoloLens-specific – and nobody knows how to develop for that yet, hence the dedicated sessions.

By and large, Mary Jo (with her Enterprise hat on) doesn’t get sucked into the anti-Microsoft rhetoric coming from Leo and Paul but recently she hasn’t been immune. There was one episode where she asked why data protection hadn’t been mentioned in BUILD. Errr … wasn’t that a developer event? Wouldn’t you expect data protection to be covered at Ignite (what used to be Tech-Ed)?

It has got to the point where I just don’t enjoy listening to the podcast any longer. I said at the start of this post that I needed to get something off my chest but I think that a comment on a recent Mary Jo article puts it more eloquently than me:

Since Mary Jo and Paul Thourrott don’t believe in Microsoft products, I unsubscribed to the ZDnet email, and to both their podcasts. They forgot that the ones that listen are Microsoft fans, and we don’t appreciate being laughed at. Maybe they should join an android show. I no longer listen to Windows Weekly or What the Tech.

I don’t consider myself to be a fanboy, but I do prefer the Microsoft ecosystem over Android or Apple. As such, I want to listen to people who are like me and I’ve come to the conclusion that Leo, Paul and Mary Jo simply don’t believe in Microsoft products and so I am no longer listening to Windows Weekly or following TWiT, Paul or Mary Jo on Twitter.

To use that word from Paul’s article, I may be (statistically) insignificant, but I still count.

Quick tip: file differences in Visual Studio

Visual Studio has a pretty good file differencing tool … but it only seems to be available from the GUI if the files you are comparing are under source control.

If you want to compare other files, e.g. the output from an app, a common suggestion is this:

devenv.exe /diff list1.txt list2.txt

which will start a separate instance of Visual Studio and then run the diff tool.

If you want to use diff within a running instance, though, this can be done from the Visual Studio command window (CTRL+W, A) then:

Tools.DiffFiles list1.txt list2.txt

This doesn’t require any plugins or extensions to be installed.

Migrating a WP Silverlight app to WP XAML – supporting settings

Earlier this year, Microsoft announced Universal Apps – an enhancement in VS2013 that took advantage of improved API compatibility between Windows 8.1 and Windows Phone 8.1 to allow a much greater amount of code sharing when building an app for both platforms.

This article is not going to look at that side of things in any detail. Instead, whilst working on a Universal App version of Relative History, it occurred to me that there might be … challenges … when it comes to dealing with how a user has been using an existing Windows Phone app. The primary reason is that a Windows Phone Universal App doesn’t use the Silverlight infrastructure, so access to things like application settings is handled in a different way.

Upgrading a published Windows Phone app

There is some guidance published by Microsoft on how to upgrade an app. It mostly focusses on upgrading to Silverlight 8.1, which is not of much help to me, but there is some key information about the upgrade process, namely:

  • If testing, it is necessary to copy the ProductID and PublisherID attributes from the published app into the PhoneProductID and PhonePublisherID attributes in the Package.appxmanifest file. This covers the scenario where the earlier version of the app is already deployed onto a device or emulator but not installed from the Store. It is not possible to upgrade an app installed from the Store by deploying a new version of the app from your development computer.
  • If publishing, you simply submit the app to the Store. Microsoft takes care of replacing the product ID and publisher ID with the values from the app you are updating.

I’ll go into some of the above in more detail in a moment, but the key point here is that you cannot upgrade a Store-installed app in testing. If you want to test the process through the Store, here are the steps (thanks to Romasz for his answer on StackOverflow):

  • Publish a Beta version of the existing app – WP7.0/7.1/8.0 Silverlight.
  • Install that from the Store onto your phone.
  • Submit an update to the Beta version in the Store. Add a new package. Do not replace the old package.
  • Your phone will eventually spot the new version and update it. When you run the updated app, it should then execute any code you’ve written to deal with the upgrade process.

Can settings and files be “upgraded”?

My first thought was: when a user upgrades from a Silverlight-based WP app to a XAML-based WP app, what happens to the settings and what happens to any locally-stored files? Let’s see what the community says …:

I got some great responses to that question:

So, the answer is a qualified yes … so the next question is how do we go about it?

Upgrading an app

If you haven’t already done so, you need to install Windows Phone Power Tools (WPPT). This will help because it allows you to browse and access the contents of the app’s private storage. This is useful, for example, if you want to see what you need to be accessing, updating or (in my case) deleting when you migrate from Silverlight to XAML.

As was touched upon earlier and, as Ginny pointed out, you need to make sure that the App ID remains the same. This is the ID string that the operating system uses to identify your app and can be found in WMAppManifest.xml.


Copy that string into the clipboard then open the file Package.appxmanifest in VS2013 using the XML(Text) Editor. You need to paste the ID you’ve just copied into the PhoneProductId string in that file and save it.


The PublisherID can be found towards the end of the same XML section in WMAppManifest.xml and this then gets copied into the PhonePublisherId section of Package.appxmanifest.

Now build your app. If it was building successfully previously, it should still build successfully. You can even deploy it to a device or emulator if you want to double-check :-).

Next step is to install the WP Silverlight version of your app to a device or emulator. Use WPPT to install the XAP file. Once installed, run the app and do whatever you need to do in order to get it set up, or get some data installed … whatever is necessary to properly test the upgrade process.

Updating the settings

Once you’ve installed the Silverlight version and used it a bit, you can use WPPT to look at the Isolated Storage for your app:

[Screenshot: WPPT showing the app’s Isolated Storage, with the Local directory expanded]

Notice how I’ve expanded the Local directory. The main file we’re interested in now is __ApplicationSettings. Everything else will continue to exist after the app is updated by the WP XAML version of the app, and it then becomes up to you whether or not your new version can use those files, update them or delete them. In the case of my app, for example, the Silverlight version of the app uses the local SQL implementation and the data is stored in .sdf files. With the XAML version, I’ve moved to using SQLite so the .sdf files are essentially useless to me. My app will need to delete them in order to free up the storage space, but I’ll also use the contents of __ApplicationSettings in order to ascertain more information about those databases so that I can re-import the data, this time into a SQLite file.

So … onto __ApplicationSettings. If you use WPPT to get the file onto your computer, you can open it in WordPad and see something like this:

[Screenshot: the __ApplicationSettings file opened in WordPad]

It is important to note that the application settings stored by a WP Silverlight app cannot be accessed through ApplicationData.LocalSettings. The latter is for Windows Store apps.

To retrieve the settings stored by a WP Silverlight app, we basically have to read and parse the __ApplicationSettings file. A simplistic way to do this would be something like this:

// Required namespaces: System, System.IO, System.Linq, System.Xml.Linq,
// Windows.Storage (OpenStreamForReadAsync comes from System.IO's
// WindowsRuntimeStorageExtensions).
StorageFolder localFolder = Windows.Storage.ApplicationData.Current.LocalFolder;
if (localFolder != null)
{
  // NB: GetFileAsync throws FileNotFoundException if the file is missing,
  // so production code should wrap this in a try/catch.
  StorageFile appSettings = await localFolder.GetFileAsync("__ApplicationSettings");
  if (appSettings != null)
  {
    // Got the file - now try to read it
    XDocument oldSettings = new XDocument();
    using (Stream readStream = await appSettings.OpenStreamForReadAsync())
    {
      using (StreamReader sr = new StreamReader(readStream))
      {
        // Dump the first line - it isn't XML (it lists the serialised types)
        sr.ReadLine();
        // Then get the rest in
        oldSettings = XDocument.Load(sr);
      }
    }

    // The settings are stored as KeyValueOfstringanyType elements in the
    // .NET serialisation namespace.
    XNamespace xns = "http://schemas.microsoft.com/2003/10/Serialization/Arrays";
    XElement f1 = oldSettings.Root.Element(xns + "KeyValueOfstringanyType");
    var f2 = oldSettings.Descendants(xns + "KeyValueOfstringanyType").ToList();
  }
}

That succeeds in reading in the XML. You’ve then got to iterate through the list, looking for Keys that you are interested in and extracting the Values from the corresponding XML node. For example, here is one node retrieved from my settings file:

<KeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays">
  <Key>FirstRun</Key>
  <Value xmlns:d3p1="http://www.w3.org/2001/XMLSchema" i:type="d3p1:boolean" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">false</Value>
</KeyValueOfstringanyType>

Things get a bit more challenging if you used a class or other complex type to save settings away. Thankfully, Pedro Lamas has written a great bit of code that essentially mimics what the Silverlight code does when it reads in the application settings file. So, now, all you need to do is call GetValuesAsync() and you’ll end up with a set of key/value pairs, including any non-standard classes. In my case, that is an ObservableCollection<DatabaseList>.

Now, I did encounter some stumbling blocks while trying to get Pedro’s code to work in my app. The main challenge was the handling of the first line of the settings file. That line of text (which I was ignoring in my simplistic approach) actually lists any “non-standard” types so that the DataContractSerializer knows how to parse the XML appropriately and rebuild the objects.

So why was I having problems? Here is the first line from my settings file:

System.Collections.ObjectModel.ObservableCollection`1[[Relative_History.DatabaseList, Relative History, Version=1.4.8.0, Culture=neutral, PublicKeyToken=null]], System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e

The problem I was having was that the call to Type.GetType (in Pedro’s code) was returning null because the system couldn’t match the object. There were a couple of reasons why, in my case:

  1. The WP Silverlight app had an assembly name of “Relative History”. In my Universal App project, I had the Win8.1 project set to an assembly name of “RelHist2.Windows” and the WP8.1 project set to an assembly name of “RelHist2.WindowsPhone”.
  2. The version number expected was 1.4.8.0 but my Universal App has a version number of 1.0.0.0.

One work-around I came up with was to replace the section of Pedro’s code where knownTypes is declared with this:

System.Type[] knownTypes = new Type[] { typeof(System.Collections.ObjectModel.ObservableCollection<Relative_History.DatabaseList>) };

This worked but did mean that the code wasn’t then as flexible as it should be. I finally solved the problem by changing the assembly names in my Universal App to match that of the Silverlight app, and increased the version number to be higher than that of the Silverlight app.

One final tweak, if you are copying/pasting Pedro’s code rather than using Cimbalino, is to change:

public async Task<IEnumerable<KeyValuePair<string, object>>> GetValuesAsync()

to

public async Task<IDictionary<string, object>> GetValuesAsync()

With all of that done, Pedro’s code then makes it much easier to load in the settings and access the ones you want to retain:

var oldSettings = await GetValuesAsync();
// Now see which settings we want to retain
if (oldSettings.ContainsKey("fred"))
{
    var value = oldSettings["fred"];
}

In the next article, I’m going to be looking at handling licencing when moving from WP Silverlight to WP XAML.


Screen flexibility in Windows

For a while, I’ve been using my Surface Pro 2 with an external monitor, with the Surface beneath the monitor. In Windows, I’ve had the Surface’s screen to the left of the monitor and I’ve “trained” myself that if I want to move something from the monitor “down” to the Surface, I have to move it to the left.

Today, I added a 2nd external monitor, daisy-chained via DisplayPort, and then moved the SP2 so that it sits underneath both of them, like this:

[Photo: the Surface Pro 2 sitting beneath the two external monitors]

I then opened the Screen resolution dialog and started dragging the SP’s screen across so that it would sit between the two external monitors. As I did, I realised you could alter the vertical position of the screen relative to the two monitors. In doing so, I discovered that you can literally match how the monitors are laid out:

[Screenshot: the Screen resolution dialog matching the physical monitor layout]

I’ve now got to retrain my muscle memory so that it moves content in the physical direction of the screens rather than how I remembered the logical layout, but I am really impressed that this is possible!

(Probably obvious to most, but sometimes it is the little things that can make a big difference!)