31 Oct

Fix 503 Unavailable Error in JIRA

Assuming you’ve set up JIRA and it has been working fine until recently, but you’re now seeing intermittent 503 Service Unavailable errors, try the following:

Restart the server

sudo service httpd stop
sudo service httpd start

Sometimes Apache just needs a kick.

Restart JIRA

sudo /etc/init.d/jira stop
sudo /etc/init.d/jira start

Sometimes you’ll get an error complaining about a catalina.pid file whose associated process can’t be found – this means Tomcat has failed, leaving a stale PID file behind.

You may need to run this, then try restarting JIRA again:

sudo mv /opt/atlassian/jira/work/catalina.pid ~/catalina-backup.pid

Restart Tomcat

sudo /opt/atlassian/jira/bin/shutdown.sh
sudo /opt/atlassian/jira/bin/startup.sh

Reboot VPS

If you’re running this on a VPS (such as an EC2 instance), one of your last options is to reboot the server. Log into your VPS provider and reboot the instance, then run the JIRA start commands above again.


All of these options might give JIRA the kick it needs. Sometimes things fall over and they just need a prompt.

But you also have to ask: why did the 503 error start appearing in the first place? The answer is most likely that your system ran out of memory.

I was trying to do things on the cheap, running JIRA on a single AWS EC2 nano instance and adding swap space on the instance to compensate for the limited RAM.
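For reference, the swap-space commands I mean are along these lines (a typical recipe for a small Linux instance – the 1 GB size and file path are examples, not necessarily the exact values I used):

sudo dd if=/dev/zero of=/swapfile bs=1M count=1024   # create a 1 GB swap file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # persist across reboots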

But every now and again, JIRA will fall over. And, in this case, despite all the rebooting, I couldn’t get JIRA back to life. I got rid of the 503 error but I would start seeing other errors on JIRA startup, all to do with a lack of memory. I’d already maxed out the swap space, so I was out of options.

My final fix was to ‘stop’ the instance in AWS, change the instance size to ‘micro’ rather than ‘nano’ – which doubles its memory capacity – and then start the instance again.

I ran my JIRA start commands and it all started working. Worth spending a few extra pounds per month to keep JIRA happy!

14 Dec

GitHooks.io

I came up with the idea for GitHooks.io back in January 2016. It was a complex technical project I persevered with on the side, and I only felt it was ready in December 2016, almost a year later, when I presented the idea to the BBC’s internal monthly Web Developer Gathering (see slide deck) – the first presentation I have ever volunteered to do!

GitHooks.io is three things:

  1. A framework in which to write reusable webhooks
  2. A hosted environment to serve your webhook endpoints
  3. A platform to share your webhooks with other people

In short, GitHooks.io can be described as “webhooks as a service”.

Architecture

GitHooks.io was designed to scale, so it runs on AWS infrastructure. The main website runs on an EC2 micro instance, and the computation triggered by a webhook endpoint is all executed within AWS Lambda functions.

The database was originally a MySQL database running on a private network via AWS’s RDS infrastructure, but to reduce costs I have temporarily terminated this and use an SQLite database instead. Switching to the lighter database was trivial, as all database communications are proxied through PDO and the database configuration is entirely YAML-config-driven (as is much of the site’s functionality, taking inspiration from BBC News).
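As a rough illustration of what that switch involves (the config keys, table and file paths below are invented for the example, not the real GitHooks.io code), the YAML-driven config ultimately boils down to a DSN handed to PDO, so swapping MySQL for SQLite is essentially a config change:

<?php
// Illustrative only: the YAML config reduces to a DSN plus credentials.
$config = [
    // 'dsn' => 'mysql:host=my-instance.rds.amazonaws.com;dbname=githooks',
    'dsn'      => 'sqlite:' . __DIR__ . '/data/githooks.sqlite',
    'username' => null,
    'password' => null,
];

$db = new PDO($config['dsn'], $config['username'], $config['password']);
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// The rest of the codebase only ever talks to PDO, so it is unaware of the swap.
$stmt = $db->prepare('SELECT name FROM githooks WHERE author = :author');
$stmt->execute([':author' => 'octocat']);
$hooks = $stmt->fetchAll(PDO::FETCH_COLUMN);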

I have documented in detail how GitHooks.io works here: http://githooks.io/how-it-works

Security

GitHooks.io allows developers to use the GitHub access tokens of the people who install their GitHooks (for communication with the GitHub API), without those people having to explicitly hand the access token over to the GitHook itself and trust its author with it. It does this via an in-house authRequest Node module which automatically appends the access token only to requests made to the GitHub API.
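The real authRequest module isn’t reproduced here, but the idea is roughly the following (a minimal sketch – the function signature, User-Agent and token handling are my assumptions, not the actual implementation):

// Sketch of the authRequest idea: GitHook code calls authRequest(), and the
// installer's token is only ever attached when the request targets the GitHub API.
const https = require('https');
const url = require('url');

function makeAuthRequest(installerToken) {
    return function authRequest(targetUrl, callback) {
        const parsed = url.parse(targetUrl);
        const headers = { 'User-Agent': 'githooks.io' };

        // Only attach the installer's token for GitHub API requests.
        if (parsed.hostname === 'api.github.com') {
            headers.Authorization = 'token ' + installerToken;
        }

        https.get({ hostname: parsed.hostname, path: parsed.path, headers: headers }, function (res) {
            let body = '';
            res.on('data', function (chunk) { body += chunk; });
            res.on('end', function () { callback(null, res.statusCode, body); });
        }).on('error', callback);
    };
}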

GitHooks.io runs GitHook code inside Node virtual machines on AWS Lambda infrastructure, to sandbox arbitrary code away from the main site as much as possible.
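Node’s built-in vm module is what makes this kind of isolation possible. A simplified sketch of the pattern (the sandbox contents and timeout are illustrative, not the production values):

// Minimal sketch: run untrusted GitHook code in a separate V8 context.
const vm = require('vm');

function runGitHook(code, payload) {
    // Expose only what the GitHook is allowed to see – no require, no process.
    const sandbox = { payload: payload, console: console, result: null };
    const context = vm.createContext(sandbox);

    // The timeout guards against code that never returns.
    new vm.Script(code).runInContext(context, { timeout: 5000 });

    return sandbox.result;
}

// Example usage: this "GitHook" just inspects the webhook payload.
const hookSource = 'result = "Push to " + payload.repository;';
console.log(runGitHook(hookSource, { repository: 'githooks/example' }));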

It also has built-in infinite loop protection: it tracks how many Lambda invocations each installation requests over a given period of time, and automatically disables the installation if it exceeds a reasonable limit.
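A sketch of how such a guard might look (the window size, limit and in-memory store are illustrative, not the production values):

// Illustrative loop guard: count invocations per installation in a sliding
// window and disable the installation once it exceeds a sane limit.
const WINDOW_MS = 60 * 1000;   // hypothetical one-minute window
const MAX_INVOCATIONS = 30;    // hypothetical limit

const invocationLog = new Map(); // installationId -> array of timestamps

function shouldRun(installationId) {
    const now = Date.now();
    const recent = (invocationLog.get(installationId) || [])
        .filter(function (t) { return now - t < WINDOW_MS; });

    recent.push(now);
    invocationLog.set(installationId, recent);

    if (recent.length > MAX_INVOCATIONS) {
        disableInstallation(installationId); // e.g. flag it in the database
        return false;
    }
    return true;
}

function disableInstallation(installationId) {
    console.warn('Disabling installation ' + installationId + ' – possible infinite loop');
}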

14 Dec

Olympic Body Match

 

I was the lead developer for the BBC Olympic Body Match, which takes your physical attributes and matches you to your three closest Olympian counterparts.

It went out in 21 different languages including Persian and Russian, and was consumed across BBC Sport, BBC News, the BBC Sport and News Apps, and the bbc.com ad-enabled equivalents, which each had their own requirements.

Delivering these to a tight, immovable deadline whilst supporting all browsers required careful planning, stakeholder management, effective agile processes, and considered pair programming and work delegation.

19 Jun

Use RequireJS with WordPress (plugins) & jQuery (UI)

I recently found myself really wanting to use RequireJS with WordPress, to manage the various JavaScript dependencies a client site had. Unfortunately, this was easier said than done.

WordPress is historically not very compatible with RequireJS, as it provides jQuery and a myriad of other JavaScript files out of the box, which are difficult to shoehorn into your RequireJS configuration. There are loads of WordPress plugins out there which rely on jQuery being a global variable in the page.

Also, any attempt at loading RequireJS into the page will often result in errors because many JavaScript modules first check if require is defined before they execute. This means that the very act of including RequireJS in the page will change how some code is executed, and cause you lots of headaches!

We need require to be defined so that we can make lots of lovely require calls throughout our webpage, but we can’t actually define require until all of WordPress’ native dependencies and plugin JS files have sorted themselves out.

I managed to hack together a solution, and it seems to work rather well.

Edit your header

First of all, edit header.php (I put this just above the wp_head() call):

<script>
window.queueForRequire = [];
 
window.r = function (deps, callback) {
    window.queueForRequire.push({
        deps:     deps,
        callback: callback
    });
};
</script>

We’re going to store any require() calls in a queue: queueForRequire.

Then require your modules as normal. Well, as semi-normal: as mentioned previously, many JavaScript files will check if require is defined and misbehave if it is, so rather than mocking require, I’ve made a custom function r which we’ll use instead.

<script>
r(['subscribe-cta'], function (subscribe) {
    console.log('This is my callback');
});
 
<?php if (is_home() && apply_filters('require_slider', $shouldLoadSlider)) : ?>
r(['slider']);
<?php endif; ?>
 
<?php if (is_archive() || is_home()) : ?>
r(['infinite-scroll']);
<?php endif; ?>
</script>

At this stage, all we’re doing is adding require calls to a queue – nothing is being downloaded just yet.

Edit your footer

Now edit footer.php (put this right before the closing </body> tag):

<script src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.2.0/require.min.js"></script>
<script>
    require.config({
        baseUrl: '/wordpress/wp-content/themes/tc-magazine-core/js',
        paths: {
            'readmore': 'https://cdnjs.cloudflare.com/ajax/libs/Readmore.js/2.1.0/readmore.min'
        },
        waitSeconds: 15
    });
 
    if (typeof jQuery === 'function') {
        define('jquery', function () { return jQuery; });
    }
 
    for (var i = 0; i < window.queueForRequire.length; i++) {
        require(window.queueForRequire[i].deps, window.queueForRequire[i].callback);
    }
</script>

Only at the very end of the document do we go “now I’m ready to load RequireJS. WordPress has done its thing, so it should be safe for us to do ours now”.

So, this is pretty standard stuff: we’re downloading the RequireJS library from a CDN and initialising it with a config. jQuery is a strange special case, handled by the if (typeof jQuery === 'function') block: we want our modules to be able to pull in jQuery as a module, but we know jQuery is probably a global already, so here we’re creating a jQuery AMD module on the fly.

The magic happens in the final for loop: now that we have RequireJS, we can go ahead and download everything we’ve queued already (and call the callbacks, where they were passed as arguments).

RequireJS with WordPress

There you have it – RequireJS working nicely with WordPress and all its plugins!

06 Oct

The discipline of ‘good enough’

To a perfectionist, by definition, something is only ‘good enough’ if it is perfect.

I really struggle to live with untested code, or shoddy code, or duplicated code. I come from a world where code can be beautiful, and code is the thing I have complete control over; something I understand and can make better through my own actions.

I seek to make the world a better place; one line of code at a time, even where the ‘world’ is often no more than a small bubble of half a dozen colleagues who will see the benefits.

I seek to optimise my world as much as possible. If there’s an element of manual labour to any of my work, I’ll try to automate it. Once I automate it to the point of just having to run a command, I’ll try to find a way of automatically triggering the command too.

I see technical solutions as beautiful things, especially when they make the right use of inheritance, of micro-service architecture, of scalability and of reusability. When modules are perfectly named, composed of small, discrete functions, throwing testable, well-defined and expected errors.

Any sub-par solution in this ecosystem sticks out like a sore thumb. My first reaction, my overbearing, immediate, overwhelming instinct, is to go in and fix it; whether the fix takes 20 minutes or 20 days to implement.

Good enough

Lately, I’ve been trying to distance myself from this idealistic view of the world, and put my business hat on instead.

I need to remind myself that code does not exist in and of itself, and is not, in isolation, of any use. Code is written purely to fulfil a business requirement. Without the business, there would be no requirement and there would be no code.

I need to remind myself that the business has aims and objectives. It cares what gets delivered on the surface, and not necessarily what is happening beneath.

Obviously, the business cares that any technical solution is robust, performant and maintainable. There is an increasing awareness from non-technical stakeholders that these non-functional requirements are, indeed, requirements, and not merely some utopian whims from egotistical, precious, pretentious programmers.

But ultimately, the business will have a thing, or it may have lots of things. The thing is written in code. The thing works. People are using the thing. The thing is achieving its business purpose.

It may have the odd bug, in which case a fix will be prioritised. It may need the odd enhancement, which can be iteratively delivered. But ultimately, it’s a thing, it’s providing value to the business, and it’s good enough.

“But the controller has business logic in it! This needs a refactor!”

“I don’t like all these if-else statements – can’t we use the dictionary pattern?”

“In our new service X, we moved this authentication logic to its own module – shouldn’t we now update our old service Y to use the same module?”

Stop!

Can the change be justified?

We hardly ever touch ‘service Y’ anymore. It’s still being used, it does its job, it’s not fallen over yet. It’s a legacy piece of software – we have new features we need to build. Do we really want to go ahead and start refactoring this old code which might be getting retired soon anyway?

Can we justify to the client that we should spend a sprint refactoring this stuff, addressing all the technical niggles, when really, on balance, taking a step back and looking at what we have… it’s not terrible.

I’m lucky enough to work in a place which gives more freedom than most to make refactor decisions which aren’t business critical. However, in my freelance work I am charging an hourly rate, and have to be constantly mindful of whether I’m developing for my client or for my ego.

In 2014 I built a WordPress site for a client, from scratch. It was my first WordPress site, and it grew to become quite complex, amounting to quite a bit of code.

In 2015 I was commissioned to create four more sites for the same client. These four sites had quite a lot of shared functionality, so I architected them in such a way that I could write the shared functionality once and all the sites would benefit from this shared ‘common’ theme, before developing four small child themes which override where necessary.

My dilemma: the perfectionist in me is crying out to refactor the original 2014 site, converting it into a child theme and inheriting from the common theme in the same way as the other sites. It feels like the right thing to do, saving me from duplicating fixes across two codebases and making for less of a learning curve for any new developer joining the project.

But undertaking this refactor would probably be a couple of weeks’ worth of work. By the end of the refactor, I may even have broken parts of the design or functionality, sacrificed for the sake of simplicity and consistency with the sister sites.

The truth is, this refactor would be of a benefit to me, but not the client. I can’t in this case properly justify what I want to do, even though it feels like the right solution. It’s really not so bad to duplicate the odd bit of code.

I need to keep telling myself: what I have is good enough.

27 Dec

When do I need a non-JavaScript solution?

The short answer to “when do I need a non-JavaScript solution?” is: always. The long answer? Keep on reading…

When I test a new feature built by another developer in my team, one of the first things I do is to turn off JavaScript and see what happens.

Developers often act surprised at this, and look at me disdainfully for inevitably breaking their application when I choose to access it in this way. “This is a client-side calculator! Of course it’s going to break without JavaScript!”

Similarly, when I sit in on meetings with developers and stakeholders discussing what they are hoping to build, “fallback images” and non-JavaScript solutions are often treated as a bit of an afterthought. When I put forward the question of fallbacks, it doesn’t surprise me when the answer is a smirk and a “well, I suppose we’d better give IE8 something“.

We’ve all become so used to modern browsers and increasingly powerful mobile devices that the concept of a non-JavaScript solution seems an unnecessary extra effort; an additional burden on designer and developer alike. After all, the actual proportion of users who access the website without JavaScript enabled is never more than a couple of percent.

However, ignoring fallbacks due to the low percentage of users most affected is short-sighted and is missing the point.

We’re all non-JavaScript users

You may have looked at the proportion of users who access your website with JavaScript turned off through no fault of their own (e.g. corporate users of IE8) or deliberately (self-aware tech-savvies who are sick of being tracked with cookies) or who don’t even have JavaScript available in the first place (anyone using increasingly popular proxy browsers for reduced data consumption). The combined total of these users might be a tiny proportion of your overall users. Heck, it might even be zero.

This is no excuse for ignoring your non-JavaScript implementation. Why? To paraphrase somebody on Twitter:

“Every user is a non-JavaScript user until the JavaScript loads.”

We’re not all on fibre-optic broadband connections. If you’re on a mobile phone on the train and you’re about to enter a tunnel, you’ll be stuck with whatever content arrives in the few kilobytes you managed to download before the signal cut out. Which would you rather see: a sea of whitespace and badly formatted text accompanied by nothing but the prospect of a seven-minute data vacuum, or the core article text with some basic but sufficient styling?

Varying degrees of non-JavaScript solutions

Non-interactive features (such as datapics) have no interactive element, so their non-JavaScript version should simply deliver the core content, untainted by “Turn your JavaScript on!” messages. The content of the feature should be accessible and readable without jumping through hoops. Perhaps it’s not possible for the content to look quite as polished as the JavaScript version (though if we’re disciplined about using CSS for presentation and JavaScript only for interaction, this should not be the case).

Some features, such as quizzes, require client-side interaction to be of any real benefit to the reader. Is a non-JavaScript solution necessary, or even viable?

When it comes to content where advanced functionality is the core content, I still expect a few things for non-JavaScript browsers:

  • Though I’m not expecting anything particularly exciting or useful, I would expect the page to not look broken.
  • I’d expect a message to the effect of “You must turn on JavaScript in order to view this content.”
  • And I wouldn’t expect lots of unnecessary markup containing questions I can’t answer and buttons I can’t click.

To paraphrase another anonymous tweet:

“Use JavaScript to inject the markup your interactive application requires.”
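A minimal sketch of that principle (the element ID and the buildQuizMarkup helper are hypothetical): the server-rendered page contains only the core content plus a fallback message, and the interactive markup is injected only once JavaScript actually runs.

// The no-JS page ships just a fallback message inside #quiz;
// JavaScript replaces it with the real interactive markup.
document.addEventListener('DOMContentLoaded', function () {
    var container = document.getElementById('quiz');
    if (!container) {
        return;
    }

    container.innerHTML = '';              // remove the "turn on JavaScript" message
    container.appendChild(buildQuizMarkup());
});

function buildQuizMarkup() {
    var form = document.createElement('form');
    var button = document.createElement('button');
    button.type = 'button';
    button.textContent = 'Start the quiz';
    form.appendChild(button);
    return form;
}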

So, the answer to “when do I need a non-JavaScript solution?” is… always.

Other benefits of providing a non-JavaScript solution

A better experience for corporate users of IE8 and for mobile users in temporary data blackspots isn’t the only advantage of implementing a defaults-first solution. Reasonable core-content experiences provide benefits in a number of other situations:

  • Reasonable experience if another script on the page breaks your JavaScript.
    • Bugs can and do creep into live pages. Maybe the ID of the element your script hooks into has been changed, or another script on the page is re-defining jQuery at runtime, or you’ve accidentally deleted a JavaScript dependency from the server. Instead of the page spewing a load of JSON or unused markup to the user, the user will be presented with a simpler version of the same content.
  • Accessibility for screen readers
    • Canvas-based interactives are not usable by screen readers. But if screen-reader users get a description of, say, the fish game they could be playing in a more modern browser, they at least get an understanding of what content is on the page.
  • Search engine optimisation
    • Canvas-based interactives are not crawlable by search-engine spiders. However, a simple description of the canvas content will give search engines an idea of what content exists on the page.
  • Support for harsh browser environments and future compatibility
    • Who knows what lies ahead technologically? We may one day browse the web on our toasters, pub urinals, coffee cups and Boris Bikes. We have no way of knowing what level of sophistication such browsers would support. By providing core content to all, we’re future-proofing our content as much as possible.
  • Clearer separation of concerns – content, presentation, interaction
    • It’s well known that HTML is for content, CSS is for presentation and JavaScript is for interaction. Keeping the three areas separate is good architecture, and supports a defaults-first development style. By delivering our non-JavaScript solution through our markup and CSS alone, we’re fitting into this programming ideal.
    • Clearer separation of concerns means more maintainable code, meaning fewer bugs, bugs which are fixed more easily, the improved ability to work in parallel with other developers (e.g. one on presentation, one on interaction), and so on.
    • Cohesive codebases and fewer bugs mean hitting deadlines, and if you’re lucky, bonuses and pay rises. As developers, we’ll have gained a stronger handle on the advantages of keeping each area separate (the Single Responsibility Principle), and will strive for similar ideals in the rest of our codebase, leading to better use of object orientation and the like.
  • Last but not least, making websites accessible is the law. Don’t risk being sued for having an inaccessible website. Part and parcel of this is ensuring that every solution has a non-JavaScript default.

Continue reading: Coding defaults, not fallbacks (coming soon)

21 Nov

Magazine Parent/Child Themes

After the success of the new responsive VoiceCouncil theme, my company was commissioned to develop four new websites, on a tight budget and deadline.

Given these limitations, I decided to architect this in such a way as to minimise the duplication of effort (the DRY principle) across the four sites. Certain elements, such as social media icons, footer, search form, and so on, were largely similar across the different themes. I pulled these together in a parent theme, then developed child themes which defined and overrode the templates and CSS that were unique to any given site.

This was definitely a good decision – style tweaks and bug fixes only needed applying in one central place. Every child theme only contains the absolutely essential overriding components. Even more advanced features, such as a slideshow on the homepage, were built into the core theme, switched on via a config option in the child theme so that the JavaScript assets are only loaded if the slideshow is required.
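As a rough sketch of that pattern (the feature name and script handle here are illustrative, not the actual theme code), the parent theme only enqueues the slider assets when a child theme has opted in:

<?php
// Child theme's functions.php: opt in to the parent theme's slideshow.
add_theme_support('core-theme-slider');

// Parent theme's functions.php: only enqueue the slider assets when a
// child theme has asked for them.
function core_theme_enqueue_slider() {
    if (current_theme_supports('core-theme-slider')) {
        wp_enqueue_script(
            'core-theme-slider',
            get_template_directory_uri() . '/js/slider.js',
            array('jquery'),
            '1.0',
            true
        );
    }
}
add_action('wp_enqueue_scripts', 'core_theme_enqueue_slider');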

The four sites:

  • Looping Live
  • The Mobile Musician
  • The Vintage Musician
  • Music Maker Apps

17 May

SmartResolution

SmartResolution is open-source, extensible Online Dispute Resolution software. It was developed as part of my Major Project (dissertation) and won the Best Major Project award in my year.

Skills demonstrated: BDD/TDD, Continuous Integration, Ruby & Cucumber regression tests, modular development, state pattern, publish-subscribe pattern, detailed documentation, continuous and automated deployment, AWS.

Read more about my Major Project here.

06 Mar

High-Impact, Minimal-Effort Cross-Browser Testing

Last month, I got an article published in the web industry’s Smashing Magazine:

https://www.smashingmagazine.com/2016/02/high-impact-minimal-effort-cross-browser-testing/


26 Dec

Commonwealth Games Quiz

http://www.bbc.co.uk/news/uk-28062001

A calculator that matches you to your top (and bottom) Commonwealth sports, based on your personal attributes. It was featured on the bbc.co.uk homepage for two days and promoted on BBC Breakfast. “Shout” singer Lulu completed the test for BBC Breakfast and got badminton as her top sport.

It was viewed 2 million times in its first 24 hours, with 80% of people completing the test and reaching the results page. It was simultaneously the most read and most shared story on the day it was published, and proved popular on Reddit. Rigorous time management meant I was ahead of schedule, so I took the initiative to translate the calculator into Welsh; the Welsh version was promoted on the BBC Cymru homepage.

The quiz has been shared on social media almost 40,000 times to date.

You can view the code at https://github.com/BBCVisualJournalism/newsspec_7954
