Assuming you’ve set up JIRA and it has been working fine until recently, but you’re now seeing intermittent 503 errors, try the following:
Restart the server
sudo service httpd stop
sudo service httpd start
Sometimes Apache just needs a kick.
Next, restart JIRA itself:
sudo /etc/init.d/jira stop
sudo /etc/init.d/jira start
Sometimes you’ll get an error about a catalina.pid file whose associated process can’t be found – this means Tomcat (the application server JIRA runs on) crashed without cleaning up its PID file.
You may need to run this, then try restarting JIRA again:
sudo mv /opt/atlassian/jira/work/catalina.pid ~/catalina-backup.pid
sudo /opt/atlassian/jira/bin/shutdown.sh
sudo /opt/atlassian/jira/bin/startup.sh
If you’re running this on a VPS (such as an EC2 instance), one of your last options is to reboot the server. Log into your VPS provider, reboot the instance, and then run the JIRA start commands above again.
All of these options might give JIRA the kick it needs. Sometimes things fall over and they just need a prompt.
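After any of these restarts, it’s worth confirming JIRA is actually answering before declaring victory. This check isn’t from the original steps – 8080 is simply JIRA’s default HTTP port, so adjust it if yours differs:

```shell
# Probe JIRA's HTTP port and report the status code.
# A 200 (or a redirect) means JIRA is back; 000 means nothing answered.
STATUS="000"
if command -v curl >/dev/null 2>&1; then
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 http://localhost:8080/ || true)
fi
echo "JIRA answered with HTTP $STATUS"
```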
But you also have to ask: why did the 503 error start appearing in the first place? The answer is likely that your system ran out of memory.
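A couple of quick checks (my suggestions, not part of the original steps) will confirm whether memory pressure is the culprit:

```shell
# Show current RAM and swap usage in megabytes.
free -m
# Look for OOM-killer activity in the kernel log (may need root to read dmesg).
dmesg 2>/dev/null | grep -iE 'out of memory|oom' || echo "no OOM messages visible"
# MemAvailable (reported in kB) is the kernel's estimate of usable memory.
MEM_AVAILABLE_KB=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
echo "Available memory: ${MEM_AVAILABLE_KB} kB"
```

If the OOM killer has been reaping Java processes, that is almost certainly why Tomcat died and left the stale catalina.pid behind.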
I was trying to do things on the cheap: running JIRA on a single AWS EC2 nano instance and increasing the swap space to compensate for the limited RAM.
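The exact swap commands aren’t shown above, but the usual approach looks something like this – the /swapfile path and the 512 MB size are illustrative choices, not what I necessarily used, and it needs root:

```shell
# Create and enable a swap file (guarded so it does nothing unprivileged).
SWAPFILE=/swapfile
if [ "$(id -u)" -eq 0 ] && [ ! -e "$SWAPFILE" ]; then
  dd if=/dev/zero of="$SWAPFILE" bs=1M count=512   # allocate 512 MB of zeros
  chmod 600 "$SWAPFILE"                            # swap must be readable only by root
  mkswap "$SWAPFILE"                               # format the file as swap space
  swapon "$SWAPFILE" || echo "swapon not permitted in this environment"
fi
```

To make the swap survive a reboot, add a line like `/swapfile none swap sw 0 0` to /etc/fstab.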
But every now and again, JIRA will fall over. And in this case, despite all the rebooting, I couldn’t bring JIRA back to life. I got rid of the 503 error, but other errors appeared on JIRA startup, all related to a lack of memory. I’d already maxed out the swap space, so I was out of options.
My final fix was to ‘stop’ the instance in AWS, change the instance type from ‘nano’ to ‘micro’ – which doubles its memory – and then start the instance again.
Then I ran the JIRA start commands again and everything started working. Worth spending a few extra pounds per month to keep JIRA happy!
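The same resize can be scripted with the AWS CLI instead of the console. This is a sketch: the instance ID is a placeholder, `RESIZE=1` is a made-up safety switch so it doesn’t run by accident, and it assumes you have AWS credentials configured:

```shell
# Stop the instance, change its type, and start it again via the AWS CLI.
INSTANCE_ID="i-0123456789abcdef0"   # hypothetical -- substitute your own
NEW_TYPE="t2.micro"
if command -v aws >/dev/null 2>&1 && [ "${RESIZE:-0}" = "1" ]; then
  aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
  aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
  aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" --instance-type "Value=$NEW_TYPE"
  aws ec2 start-instances --instance-ids "$INSTANCE_ID"
fi
```

The stop/wait/start sequence is needed because EC2 won’t let you change the type of a running instance.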