Musings of a PC

Thoughts about Windows, TV and technology in general


Improving web site quality through tighter GitHub/Bamboo integration

Before I get into the nitty-gritty, a brief recap of how things are working before any of the changes described in this article …

A Linaro static website consists of one or more git repositories, with potentially one being hosted as a private repository on Linaro’s BitBucket server and the others being hosted on GitHub as public repositories. Bamboo, the CI/CD tool chosen by Linaro’s IT Services to build the sites, monitors these repositories for changes and, when a change is identified, it runs the build plan for the web site associated with the changed repositories. If the build plan is successful, the staging or production web site gets updated, depending on which branch of the repository has been updated (develop or master, respectively).

All well and good, but it does mean that if someone commits a breaking change (e.g. a broken link or some malformed YAML) to a repository, no other updates can be made to that website until that specific problem has been resolved.

Solving this required several changes that, together, helped to ensure that breaking changes couldn’t end up in the develop or master branches unless someone broke the rules by bypassing the protection. The changes we made were:

  • Using pull requests to carry out peer reviews of changes before they got committed into the develop or master branch.
  • Getting GitHub to trigger a custom build in Bamboo so that the proposed changes were used to drive a “test” build in Bamboo, thereby assisting the peer review by showing whether or not the test build would actually be successful.
  • Using branch protection rules in GitHub to enforce requirements such as needing the tests to succeed and needing code reviews.

Pull requests are not a native part of the git toolset but they have been implemented by a number of git hosting platforms, such as GitHub, GitLab and BitBucket. The platforms vary in their approach but, essentially, one or more people are asked to look at the differences between the incoming changes and the existing files to see if anything wrong can be spotted.

That, in itself, can be a laborious process and not always a successful one at spotting problems, which is why automation is increasingly used to assist. GitHub’s approach is to have webhooks or apps trigger an external activity that might perform some testing and then report back on the results.
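
When the external system has finished its testing, one way for it to “report back” is to post a commit status against the pull request’s head commit using GitHub’s statuses API. Here is a minimal sketch in Python – the repository, commit SHA, token and Bamboo URL are all placeholders, not Linaro’s actual values:

import requests

# Hypothetical values – substitute your own repository, commit and token.
REPO = "example-org/example-website"
SHA = "0123456789abcdef0123456789abcdef01234567"
TOKEN = "ghp_example_token"

# Report the outcome of the external test build back to GitHub so that it
# shows up as a status check on the pull request.
response = requests.post(
    f"https://api.github.com/repos/{REPO}/statuses/{SHA}",
    headers={"Authorization": f"token {TOKEN}"},
    json={
        "state": "success",                     # or "pending" / "failure" / "error"
        "context": "bamboo/test-build",
        "description": "Test build completed",
        "target_url": "https://bamboo.example.com/browse/WEB-TEST-123",
    },
)
response.raise_for_status()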

We opted to use webhooks to get GitHub to trigger the custom builds in Bamboo. They are called custom builds because one or more Bamboo variables are explicitly defined in order to change the behaviour of the build plan. I’ll talk more about them in a subsequent article.
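
To give a flavour of what triggering one of those custom builds looks like, Bamboo’s REST API lets you queue a plan and pass Bamboo variables as bamboo.variable.* parameters. This is just a sketch – the plan key, variable name and credentials are made up for illustration:

import requests

BAMBOO_URL = "https://bamboo.example.com"
PLAN_KEY = "WEB-TEST"                      # hypothetical project-plan key
AUTH = ("build-bot", "app-password")       # placeholder credentials

# Queue a custom build of the plan, passing a variable that tells the
# build plan which branch of the pull request to fetch and test.
response = requests.post(
    f"{BAMBOO_URL}/rest/api/latest/queue/{PLAN_KEY}",
    params={"bamboo.variable.github_pr_branch": "feature/new-page"},
    auth=AUTH,
    headers={"Accept": "application/json"},
)
response.raise_for_status()
print("Queued build:", response.json().get("buildResultKey"))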

The final piece of the puzzle was implementing branch protection rules. I’ve linked to the GitHub documentation above but I’ll pick out the key rules we’ve used:

  • Require pull request reviews before merging.
    When enabled, all commits must be made to a non-protected branch and submitted via a pull request with the required number of approving reviews.
  • Require status checks to pass before merging.
    Choose which status checks must pass before branches can be merged into a branch that matches this rule.
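
For reference, the same rules can be applied programmatically rather than through the web UI by using GitHub’s branch protection API. A rough sketch – the repository name, token and status check context are placeholders:

import requests

REPO = "example-org/example-website"       # placeholder repository
BRANCH = "develop"
TOKEN = "ghp_example_token"                # token with admin rights on the repo

# Require the Bamboo test build to pass and one approving review before
# a pull request can be merged into the protected branch.
response = requests.put(
    f"https://api.github.com/repos/{REPO}/branches/{BRANCH}/protection",
    headers={"Authorization": f"token {TOKEN}"},
    json={
        "required_status_checks": {"strict": True, "contexts": ["bamboo/test-build"]},
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "enforce_admins": False,           # the “Include administrators” option discussed below
        "restrictions": None,
    },
)
response.raise_for_status()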

There is a further option that has been tried in the past: “Include administrators”, which enforces all of the configured restrictions for administrators as well. Unfortunately, too many of the administrators pushed back against this (normally because of the pull request review requirement) so we tend to leave it turned off now. That isn’t to say, though, that administrators get a “free ride”. If a pull request requires a review, an administrator can merge the pull request anyway, but GitHub doesn’t make it too easy:

Clicking on Merge pull request, highlighted in “warning red”, results in the expected merge dialog but with extra red bits:

So an administrator does have to tick the box to say they are aware they are using their admin privilege, after which step they can then complete the merge:

If an administrator pushes through a pull request that doesn’t build then they are in what I describe as the “you broke it, you fix it” scenario. After all, the protections are there for a good reason 😊.

Index page: Tips, tricks and notes on building Jekyll-based websites

Link-checking static websites

In migrating the first Linaro site from WordPress to Jekyll, it quickly became apparent that part of the process of building the site needed to be a “check for broken links” phase. The intention was that the build plan would stop if any broken links were detected so that a “faulty” website would not be published.

Link-checking a website that is in the middle of being built brings a potential problem: if you add a link to a new page, that page won’t have been published yet, so if you rely on checking http(s) URLs alone, you won’t find the new page and a broken link is erroneously reported.

You want to be able to scan the pages that have been built by Jekyll, on the understanding that a relative link (e.g. /welcome/index.html instead of https://mysite.com/welcome/index.html) can be checked by looking for a file called index.html within a directory called welcome, while anything that is an absolute link (i.e. it does start with http or https) is checked against the external site.
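
As a rough illustration of that idea (assuming Jekyll’s default _site output directory; the helper names are my own), an internal link can be resolved against the files Jekyll has just built, while anything starting with http or https is queued up for an external check:

from pathlib import Path
from urllib.parse import urlparse

SITE_ROOT = Path("_site")   # where Jekyll writes the built pages

def is_external(link):
    """True if the link is absolute, i.e. starts with http:// or https://."""
    return urlparse(link).scheme in ("http", "https")

def internal_link_ok(link):
    """Resolve a relative link against the built site on disk."""
    path = SITE_ROOT / link.lstrip("/")
    # A link to a directory implies its index.html, e.g. /welcome/ -> /welcome/index.html
    if link.endswith("/") or path.is_dir():
        path = path / "index.html"
    return path.is_file()

# /welcome/index.html is checked on disk; https://example.com goes to the external checker.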

I cannot remember which tool we started using to try to solve this. I do remember that it had command-line flags for “internal” and “external” link checking but testing showed that it didn’t do what we wanted it to do.

So an in-house solution was created. It was probably (at the time) the most complex bit of Python code I’d written and it involved learning about things like how to run multiple threads in parallel so that the external link checking doesn’t take too long. Some of our websites have a lot of external links!
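
The thread-parallel part of checking external links is the kind of thing that Python’s concurrent.futures makes fairly painless. A minimal sketch, not the actual tool – the URL list, worker count and timeout are illustrative:

import concurrent.futures
import requests

def check_external(url):
    """Request an external URL and report whether it looks alive."""
    try:
        # HEAD keeps the traffic down; a real checker might fall back to GET.
        response = requests.head(url, allow_redirects=True, timeout=10)
        return url, response.status_code < 400
    except requests.RequestException:
        return url, False

external_links = ["https://www.linaro.org/", "https://example.com/missing-page"]

# Check the external links in parallel so that a site with a lot of them
# doesn't make the build take too long.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    for url, ok in pool.map(check_external, external_links):
        if not ok:
            print(f"Broken external link: {url}")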

Over time, the tool has gained various additional options to control the checking behaviour, like producing warnings instead of errors for broken external links, which allows the 96Boards team to submit changes/new pages to their website without having to spend time fixing broken external links first.

The tool is run as part of the Bamboo plan for all of the sites we build and it ensures that the link quality is as high as possible.

Triggering a test build on Bamboo now ensures that a GitHub Pull Request is checked for broken links before the changes are merged into the branch. We’ve also published the script as a standalone Docker container to make it easier for site contributors to run the same tool on their computer without needing to worry about which Python libraries are needed.

The script itself can be found in the git repo for the Docker container, so you can see for yourself how it works and contribute to its development if you want to.

Index page: Tips, tricks and notes on building Jekyll-based websites

Automating Site Building

As I mentioned in Building a Website That Costs Pennies to Operate, the initial technical design of the infrastructure had the website layout defined in a private git repository and the content in a public git repository.

The private git server used was Atlassian BitBucket – the self-hosted version, not the cloud version. Although Linaro’s IT Services department is very much an AWS customer, we had already deployed BitBucket as an in-house private git service, so it seemed to make more sense to use that rather than pay an additional fee for an alternative means of hosting private repositories, such as CodeCommit or GitHub.

So what to do about the build automation? One option would have been to look at CodeBuild but, as Linaro manages a number of Open Source projects, we benefit from Atlassian’s very kind support of the Open Source community. That meant we could run Atlassian Bamboo on the same server hosting BitBucket and it wouldn’t cost us any more money.

For each of the websites we build, there is a build plan. The plans are largely identical to each other and go through the following steps, essentially emulating what a human would do:

  • Check out the source code repositories
  • Merge the content into a single directory
  • Ensure that Jekyll and any required gems are installed
  • Build the site
  • Upload the site to the appropriate S3 bucket
  • Invalidate the CloudFront cache

Each of these is a separate task within the build plan and Bamboo halts the build process whenever a task fails.
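
To give a flavour of what those tasks boil down to, here is a very rough sketch of the later steps as a single script. The bucket name, distribution ID and config files are placeholders, and in the real plans each step is a separate Bamboo task rather than one script:

import subprocess

def run(*cmd):
    """Run a command and stop (raise) if it fails, much as Bamboo halts on a failed task."""
    subprocess.run(cmd, check=True)

# Build the site (the checkout and merge steps have already happened).
run("bundle", "install")
run("bundle", "exec", "jekyll", "build", "--config", "_config.yml,_config-staging.yml")

# Upload the built site and invalidate the CloudFront cache so that the
# new content is served straight away.
run("aws", "s3", "sync", "_site/", "s3://example-staging-bucket", "--delete")
run("aws", "cloudfront", "create-invalidation",
    "--distribution-id", "EXAMPLEDISTID", "--paths", "/*")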

There isn’t anything particularly magical about any of the above – it is what CI/CD systems are all about. I’m just sharing the basic details of the approach that was taken.

Most of the tasks in the build plan are what Bamboo calls a script task, where it executes a script. The script can either be written inline within the task or you can point Bamboo at a file on the server and it runs that. In order to keep the build plans as identical as possible to each other, most of the script tasks run files rather than using inline scripting. This minimises the duplication of scripting across the plans and greatly reduces the administrative overhead of changing the scripts when new functionality is needed or a bug is encountered.

To help those scripts work across different build plans, we rely on Bamboo’s plan variables, where you define a variable name and an associated value. Those are then accessible by the scripts as environment variables.

We then extended the build plans to work on both the develop and master branches. Here, Bamboo allows you to override the value of specified variables. For example, the build plan might default to specifying that jekyll_conf_file has a value of “_config.yml,_config-staging.yml”. The master branch variant would then override that value to be “_config.yml,_config-production.yml”.
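
As an illustration of how that plays out in a script (the fallback value here is just for the sketch), Bamboo exposes a plan variable such as jekyll_conf_file to scripts as an environment variable with a bamboo_ prefix, so the same build script works unchanged for both branches:

import os
import subprocess

# The plan variable jekyll_conf_file arrives as the environment variable
# bamboo_jekyll_conf_file; the master branch variant overrides its value.
config_files = os.environ.get("bamboo_jekyll_conf_file", "_config.yml,_config-staging.yml")

subprocess.run(
    ["bundle", "exec", "jekyll", "build", "--config", config_files],
    check=True,
)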

The method used to trigger the builds automatically has changed over time because we’ve changed the repository structure, GitHub has changed the service offerings and we’ve started doing more to tightly integrate Bamboo with GitHub so I’m not going to go into the details on that just yet.

Index page: Tips, tricks and notes on building Jekyll-based websites

Linaro sites and repositories

Building a Website That Costs Pennies to Operate

Back in 2014, the company I work for – Linaro – was using WordPress to host its websites. WordPress is a very powerful piece of software and very flexible but it did present some challenges to us:

  • Both WordPress and MySQL needed regular patching to minimise vulnerabilities.
  • It could be quite a resource hog if you were trying to get an optimal end-user experience from it.
  • It was difficult to make a WordPress site run across multiple servers (to avoid single points of failure that would result in an inaccessible site).

Towards the end of 2014, I attended the AWS re:Invent conference and happened to attend a session that would ultimately change how Linaro delivers its websites:

The basis of the idea presented in this session is to use a static site generator which takes your content, turns it into HTML pages and stores them in an S3 bucket from where they can be hosted/accessed by your customers.

By doing so, it eliminates the “retrieve the data from a database and convert it to a web page on the fly” process, which removes the need for a database platform (e.g. MySQL) and the conversion software (e.g. WordPress). The up-front conversion is a one-off time hit, compared to the per-page time hit that a system like WordPress endures.

It is worth emphasising that although the session was at an Amazon conference, the underlying premise and the tools being discussed can be used on any cloud provider.

Earlier, I said that this session would ultimately change how Linaro delivers its websites – “ultimately” because it took a bit of persuading … In fact, the following year, I shared this article with the staff who managed the content of the websites:

Why Static Site Generators Are The Next Big Thing

The challenge was that everyone was used to using WordPress and switching to a static site generator was going to be quite an upheaval in terms of workflow, content creation and management.

We got there, though.

We ended up choosing Jekyll as our static site generator. One of the reasons is that it is the technology used to drive GitHub Pages and, as such, gets a lot of use. For the rest of the infrastructure, we did use S3 and CloudFront to provide the hosting and, as expected, this turned out to be a lot cheaper and a lot faster than using WordPress.

To migrate the websites to Jekyll, the Marketing team started by building out a Jekyll theme to manage the look and feel of the sites. Initially, this was kept in a private git repository on one of Linaro’s private git servers. The content was always managed as public git repositories on GitHub.

That split of repositories actually caused a couple of headaches for us:

  1. Building the site required both repositories to be retrieved from the git servers and the content merged.
  2. If we wanted to automate the building of the website, we’d need tools that could work with our private git server.

… but that will keep for another article 😊.

Index page: Tips, tricks and notes on building Jekyll-based websites

Tips, tricks and notes on building Jekyll-based websites

This is a collection of articles about how Linaro uses Jekyll and other tools to build its websites. This particular post will be the main index page and will link out to the other posts.

It should be noted that I will be focusing on the tools and technology, rather than tips on Jekyll itself (like how to build a theme). There are better qualified people than myself to write about such topics 😊

Building a Website That Costs Pennies to Operate

Linaro sites and repositories

Automating Site Building

Link-checking static websites

Improving web site quality through tighter GitHub/Bamboo integration

Future topics (partly so I remember what I want to write about):

  • Triggering GitHub tests when a Pull Request is opened
  • Moving to a Docker container for building the site
  • Edge redirects

An open letter to Leo Laporte, Paul Thurrott and Mary Jo Foley

I know that I haven’t written anything here for a long time now … I’ve been sorta busy :). I needed to get something off my chest, though, and this seemed as good a platform as any on which to do it.

So this is addressed to Leo Laporte, Paul Thurrott and Mary Jo Foley, the hosts of TWiT TV’s Windows Weekly. TWiT has the tagline of “netcasts you love from people you trust” and Windows Weekly has the tagline of “talk about Windows and all things Microsoft”. Sadly, for me at least, neither of these statements has been true for a while now.

I want to make it clear that this is an opinion piece. As such, you may disagree with what I write, and that’s fine – you are entitled to your own opinion – but I am allowed to have my own opinion even if you do disagree with it.

With that said …

Windows Weekly really doesn’t seem to be sticking to “talk about Windows and all things Microsoft”. Episode 461, for example, spent the opening 30 minutes talking about Facebook and their bot announcement; I’ve even re-listened to that part of the show and there was barely any comparison with the bot announcements made at Microsoft’s recent BUILD developer conference. Leo even went so far as to say that Facebook had the inside track! There was then an unannounced advertisement for Amazon Echo before the show went on to talk about Android handsets again (see below) and how Mary Jo is now using a Nexus instead of a Lumia Icon.

Remind me what this show is called?

Leo, you come across as a very affable person; easy to listen to and generally a good host. However, there are three things that really grate with me about you on Windows Weekly:

  1. Sometimes you just don’t listen to whoever else is talking, with the outcome being that you ask a question about something that was literally said seconds earlier.
  2. There doesn’t seem to be a show that goes by without you promoting an Android handset. This is Windows Weekly. If I were interested in Android stuff, I’d be listening to This Week in Google. Anyone would think it was an unannounced advertisement, the way you go on about it.
  3. Associated with #2, you really do have a tendency to derail the topic of conversation. You even admitted as much in episode 461 as you went to the first ad after talking about nothing really related to Microsoft.

Paul, you are a very depressing person to listen to. I don’t know if your articles have always been so tabloid or if this started when you left Penton to form thurrott.com, but I do get very disappointed/frustrated when headlines are just clickbait. Take the headline “Windows Phone is Irrelevant Today, But It Still Has a Future”. This is a very provocative headline … particularly since the use of the word irrelevant actually pertains to the statistical relevance of the number of Windows Phone/Mobile handsets in use. Like Leo, you have started pushing Android really hard lately instead of trying to find even the smallest positive about Windows Mobile.

You made a fair point about how Microsoft could have used Windows Mobile handsets on stage during the BUILD keynotes but, apart from that, your criticism of the lack of anything phone-related at BUILD was very unfair. Windows 10 Mobile is Windows 10. Any developer-related news or information applied across the whole of Windows 10, unless it was about HoloLens – which nobody yet knows how to develop for, hence the dedicated sessions.

By and large, Mary Jo (with her Enterprise hat on) doesn’t get sucked into the anti-Microsoft rhetoric coming from Leo and Paul but recently she hasn’t been immune. There was one episode where she asked why data protection hadn’t been mentioned at BUILD. Errr … wasn’t that a developer event? Wouldn’t you expect data protection to be covered at Ignite (what used to be TechEd)?

It has got to the point where I just don’t enjoy listening to the podcast any longer. I said at the start of this post that I needed to get something off my chest but I think that a comment on a recent Mary Jo article puts it more eloquently than me:

Since Mary Jo and Paul Thourrott don’t believe in Microsoft products, I unsubscribed to the ZDnet email, and to both their podcasts. They forgot that the ones that listen are Microsoft fans, and we don’t appreciate being laughed at. Maybe they should join an android show. I no longer listen to Windows Weekly or What the Tech.

I don’t consider myself to be a fanboy, but I do prefer the Microsoft ecosystem over Android or Apple. As such, I want to listen to people who are like me and I’ve come to the conclusion that Leo, Paul and Mary Jo simply don’t believe in Microsoft products and so I am no longer listening to Windows Weekly or following TWiT, Paul or Mary Jo on Twitter.

To use that word from Paul’s article, I may be (statistically) insignificant, but I still count.

Security and Email groups in LDAP

After working with Active Directory since 2000, I’ve clearly become a bit spoiled with the ability to create a group in AD that serves the purpose of both a security group and a distribution group with just one checkbox. Why do I say that? Because in LDAP, there isn’t a commonly used way of achieving that same goal.

LDAP, or Lightweight Directory Access Protocol, is an Internet protocol that services can use to look up information from a server. Active Directory makes use of LDAP which is why you’ll come across terms common to both such as schema.

For the UNIX world, the commonly used schemas are defined in RFCs. For example, the most common objectClass used to define a person – inetOrgPerson – is defined in RFC 2798. This is a really great class to use for storing personal information in an LDAP directory because it has attributes for all of the important stuff you might want to know about someone.

When it comes to groups, though, things get a bit tougher. There is posixGroup which is a good class to use for security needs because it stores a group ID, a description and the members of the group. Rather surprisingly, there isn’t an accepted standard for defining a distribution or email group. There are classes for defining groups of users, such as groupOfNames, groupOfUniqueNames and groupOfMembers. They each have their slight differences and which one(s) you use typically comes down to either personal preference or the tools you are going to be using to manage those groups.

Another curious aspect about groups in LDAP is that there are differences in how the members are represented. For example, posixGroup uses an attribute called memberUid and the value is just that – the uid of the member of the group. groupOfUniqueNames, by comparison, uses an attribute called uniqueMember and the value is the distinguishedName of the member. One of the benefits of using the distinguishedName is that it allows groupOfUniqueNames to contain other groups as members, which posixGroup does not.
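
To make the difference concrete, here is a hypothetical pair of entries shown as Python dictionaries (the group name and uids are made up; the OU for the members matches the one assumed by the script at the end of this post):

# A security group: members are bare uid values.
posix_group = {
    "objectClass": ["posixGroup"],
    "cn": ["devs"],
    "gidNumber": ["10001"],
    "memberUid": ["alice", "bob"],
}

# The same membership expressed the way groupOfUniqueNames wants it:
# full distinguished names, which is also what allows groups to nest.
mail_group = {
    "objectClass": ["groupOfUniqueNames"],
    "cn": ["devs"],
    "uniqueMember": [
        f"uid={uid},ou=staff,ou=accounts,dc=example,dc=com"
        for uid in posix_group["memberUid"]
    ],
}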

So when it comes to trying to maximise the value of a group’s membership by using the group for dual-purposes, i.e. security and email, what can you do? One option is to define your own objectClass and add it to the schema on the LDAP server. That is essentially what Microsoft did but the problem then is that your tools probably won’t know how to work with that class unless you can modify the tools.

Given that the objectClasses represent members in different ways, any attempt to “merge” or “overlay” multiple objectClasses to get the desired result is also likely to fail.

For myself, in the end, I decided that I would use posixGroup as the definitive representation of a group and then have a script that reads the various posixGroup groups and creates groupOfUniqueNames groups to match those groups. I wrote the script in Perl and it is at the end of this blog.

Here is a bit more detail about how the script works and how I’ve got my LDAP server set up …

I have an organizationalUnit (OU) called groups, below which I have an OU called mailing and an OU called security. The idea should be clear: all of my posixGroup groups go into ou=security and the script creates the corresponding mailing groups in ou=mailing. The script can be used in two ways:

  1. Full scan of the security OU. It looks at each of the groups in turn and processes it.
  2. Processing of one security group by specifying the cn value on the command line. This functionality is primarily there for use with LDAP Account Manager Pro. This great web-based product allows the administrator to define custom scripts that get run at specified trigger points. In my case, I get it to run the script, passing the cn value, when a group is created or modified, thus ensuring the email group is kept up-to-date.

The logic behind processing a security group is as follows:

  • Get the attributes we need from the security group: modifyTimestamp, description, displayName, mail, owner and memberUid.
  • If there aren’t any members, we delete the corresponding email group. This is because groupOfUniqueNames has to have at least one member. If you want to use groupOfMembers instead, that restriction goes away and the script could be modified accordingly.
  • If the email group already exists and its modifyTimestamp is newer than that of the security group, we don’t do anything else because the implication is that it was created by the script after the security group was created/modified.
  • The next step is to delete the email group. This is done rather than trying to figure out the differences in group membership. If you want to get fancy with the code, go ahead but this works for me 😊.
  • The final step is to create a new email group, specifying the attributes and members retrieved from the security group.

A few notes about the attributes: the posixGroup class doesn’t allow you to specify a display name (displayName), email address (mail) or group owner (owner). To permit that, I use the objectClass extensibleObject which allows you to add any attribute defined in the schema. LDAP purists tend to frown on this because it could lead to errant attributes being added. If you are concerned, you could define your own objectClass as an auxiliary class in order to allow just those three attributes to be used. Alternatively, the script will work without them since displayName and owner aren’t strictly necessary and the script can auto-create an email address by adding the email domain to the end of the existing cn value.

For the email groups, I again use extensibleObject because groupOfUniqueNames doesn’t allow a display name or email address. The email address is clearly required if you want this to work as an email group and the display name may be required if you are, for example, syncing with Google (which was my requirement) and you want the group to have a “nicer” name than just the cn value. Again, if you don’t like the idea of allowing all attributes to be added, you could define your own objectClass and amend the script accordingly.

Final comments:

  • this is my first Perl script and I have been quite lazy in that I have hard-coded the various domain bits into the script. Feel free to improve and, if you want, share back!
  • I’ve not used SSL in the connection because the script runs directly on the LDAP server. It is quite straightforward to amend the script to use LDAPS and there are examples on the web on how to do that.
  • The script assumes, when converting from memberUid to uniqueMember, that all of the UIDs exist in the same OU, namely ou=staff,ou=accounts,dc=example,dc=com. It should be fairly straightforward to extend the script so that it searches for the UID and finds the DN that way.

use strict;

use Net::LDAP;
use Net::LDAP::Constant qw(LDAP_NO_SUCH_OBJECT);

# See if a group name has been passed on the command line, e.g. from
# LDAP Account Manager
my $groupMatch = "";

# $#ARGV is -1 if no parameters, 0 if 1 parameter, etc. We only
# look for one group name.
if ($#ARGV == 0)
{
    $groupMatch = $ARGV[0];
}

# Create a connection to the LDAP server
my $ldap = Net::LDAP->new("<LDAP server>") or die $@;
my $mesg = $ldap->bind("account with appropriate write privs",
    password => "account's password",
    version  => 3);

my $result = $ldap->search(
    base   => "ou=security,ou=groups,dc=example,dc=com",
    filter => "cn=*",
    attrs  => ['cn', 'description', 'displayName', 'mail', 'memberUid', 'owner', 'modifyTimestamp']);

print "Got ", $result->count, " entries from the search.\n";

# Walk through the entries
my @entries = $result->entries;
foreach my $entr (@entries) {
    my $thisCN = $entr->get_value("cn");
    # Only process the group if either we are processing them
    # all, or the group name matches.
    if (($groupMatch eq "") || ($groupMatch eq $thisCN))
    {
        print "DN: ", $entr->dn, "\n";
        my $deleteEmailGroup = 1;
        my $buildEmailGroup  = 1;
        my $thisModify       = $entr->get_value("modifyTimestamp");
        my $thisDescription  = $entr->get_value("description");
        my $thisDisplayName  = $entr->get_value("displayName");
        my $thisMail         = $entr->get_value("mail");
        my $thisOwner        = $entr->get_value("owner");
        my $memberRef        = $entr->get_value("memberUid", asref => 1);
        if (!defined $memberRef)
        {
            # No members means we don't build a new email group, regardless
            # of timestamps, etc. We do still try to delete an existing email
            # group though.
            $buildEmailGroup = 0;
        }
        else
        {
            # We have members in the security group so now we check timestamps
            # so that we only create a new email group if the security group has
            # been modified more recently.
            #
            # See if the email group exists already and, if it does, when was it
            # modified? There is no point in creating the new email group if it
            # was modified after the security group.
            my $emailGroup = $ldap->search(
                base   => "ou=mailing,ou=groups,dc=example,dc=com",
                filter => "cn=$thisCN",
                attrs  => ['modifyTimestamp']);
            if ($emailGroup->count == 1)
            {
                my @emailEntries = $emailGroup->entries;
                my $emailEntry   = $emailEntries[0];
                my $emailModify  = $emailEntry->get_value("modifyTimestamp");
                # generalizedTime values are fixed-width strings, so a string
                # comparison is enough to tell which group is newer.
                if ($thisModify gt $emailModify)
                {
                    print "... security group is newer\n";
                }
                else
                {
                    print "... email group is newer\n";
                    $deleteEmailGroup = 0;
                    $buildEmailGroup  = 0;
                }
            }
            else
            {
                print "... email group doesn't exist.\n";
            }
        }
        if ($deleteEmailGroup)
        {
            print "  ... deleting old email group\n";
            $mesg = $ldap->delete("cn=$thisCN,ou=mailing,ou=groups,dc=example,dc=com");
            # If we got an error from that, print the error and don't try to
            # create the replacement group
            if ($mesg->code() != 0 && $mesg->code() != LDAP_NO_SUCH_OBJECT)
            {
                print "  ... error while deleting group: ", $mesg->error(), " (code ", $mesg->code(), ")\n";
                $buildEmailGroup = 0;
            }
        }
        if ($buildEmailGroup)
        {
            # If we have members in the group, create a new email group
            my $entry = Net::LDAP::Entry->new();
            $entry->dn("cn=$thisCN,ou=mailing,ou=groups,dc=example,dc=com");
            $entry->add('cn' => $thisCN,
                'objectClass' => ['groupOfUniqueNames', 'extensibleObject']);
            # If we have an email address set that, otherwise make one up
            if ($thisMail)
            {
                $entry->add('mail' => $thisMail);
            }
            else
            {
                $entry->add('mail' => "$thisCN\@example.com");
            }
            # If we have a description, display name or owner, set them
            if ($thisDescription)
            {
                $entry->add('description' => $thisDescription);
            }
            if ($thisDisplayName)
            {
                $entry->add('displayName' => $thisDisplayName);
            }
            if ($thisOwner)
            {
                $entry->add('owner' => $thisOwner);
            }
            # For each of the memberUid entries, add a uniqueMember attribute
            # $memberRef is a reference to the array, so dereference it
            my @members = @{ $memberRef };
            foreach (@members) {
                print "  ... adding $_\n";
                $entry->add('uniqueMember' => "uid=$_,ou=staff,ou=accounts,dc=example,dc=com");
            }
            # $entry->dump();
            print "  ... creating group\n";
            $mesg = $entry->update($ldap);
            if ($mesg->code() != 0)
            {
                print "  ... error while creating group: ", $mesg->error(), "\n";
            }
        }
        print "\n";
    }
}

$ldap->unbind;

Two IPv6 addresses defined? Try this

If you have an IPv6 network set up, the likelihood is that you are making use of Router Advertisements to allow your systems to automatically grab an IP address.

However, if you then statically assign an IPv6 address to a server, for example, you end up with two IPv6 addresses … which seems to me to be very messy. Deleting the dynamically assigned address seems to be pretty difficult … you can delete the DNS entry to try to stop other systems using it, but the DNS entry will come back.

The answer lies with netsh. All you need to do is run this command:

netsh interface ipv6 set interface <x> routerdiscovery=disabled

where <x> is the index for the interface. This can easily be found with the command:

netsh interface ipv6 show interface

This will stop the server from listening for those Router Advertisements and automatically remove the dynamic address.

Pinning a WordPress blog with IE9

If you’ve got IE9 installed, you’ll know that one of the features it introduces is the ability to pin a website to the Windows 7 taskbar and, depending on how the site has been defined, gain useful shortcuts from the jump list.

The Windows Team recently shared that if you use this feature with a WordPress blog, you really get some great benefits – and since there are over 20 million WordPress sites out there, that’s a lot of sites that immediately gain them.

So what does it give you? Well, here is the jump list for my blog, pinned to the task bar:


So, at the top, you get the 5 most recent posts followed by a set of tasks that are most useful to the blog owner.

If you enjoy reading this blog, please consider pinning it to your taskbar. Alternatively, you can now subscribe to the blog by email – there is a Sign me up! button on the right hand side of the page, at the top. You’ll then get notifications of new posts by email. Or there is the good old RSS Feed, which is on the page just a bit higher than the Email Subscription feature.

Thanks for reading.