12 July 2016

Limiting how often you run seeds and migrations in Laravel 5.2 testing

Image: Pixabay
If integration tests take too long to run then we stop running them because they become an annoyance.  That's obviously not ideal, so getting my tests to run quickly is important.

I'm busy writing a test suite for Laravel and want more flexibility than the built-in options that Laravel offers for working with the database between tests.

Using the DatabaseMigrations trait meant that my seed data would be lost after every test.  Migrating and seeding my entire database for every test in the suite is very time-consuming.

Even with an in-memory SQLite database, a complete migration and seed was taking several seconds on my dev box.
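
For reference, an in-memory SQLite connection in Laravel's config/database.php looks something like this (the connection name is my own choice; you'd point the default connection at it from phpunit.xml via the DB_CONNECTION environment variable):

 // config/database.php, under 'connections'
 'sqlite_testing' => [
     'driver'   => 'sqlite',
     'database' => ':memory:',   // the database lives only for the duration of the process
     'prefix'   => '',
 ],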

If I used the DatabaseTransactions trait then my migrations and seeds would never run, and my tests would fail because the fixture data would be missing.

My solution was to use a static variable in the base TestCase class that Laravel supplies.  It has to be static so that it retains its value between tests.  The variable is a boolean flag that tracks whether we still need to run the migrations and seeds.  We initialise it to true, so the migrations and seeds run the first time any test runs.

Now, the fact that I'm using Laravel's DatabaseTransactions trait should spare me from affecting the database between tests, but if I wanted to be 100% certain I could set the flag and have my database refreshed when the next test runs.

This also means that if the DatabaseTransactions trait is not sufficient (for example, when I'm using multiple database connections) then I can manually refresh whenever I want to.
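
Here's a rough sketch of the idea against the stock Laravel 5.2 base TestCase; the flag and helper names ($refreshDatabase, requireFreshDatabase) are my own:

 <?php

 class TestCase extends Illuminate\Foundation\Testing\TestCase
 {
     protected $baseUrl = 'http://localhost';

     // Static so the flag survives between individual tests in the same run.
     protected static $refreshDatabase = true;

     public function createApplication()
     {
         $app = require __DIR__.'/../bootstrap/app.php';

         $app->make(Illuminate\Contracts\Console\Kernel::class)->bootstrap();

         return $app;
     }

     public function setUp()
     {
         parent::setUp();

         // Only migrate and seed when the flag is set: on the very first test,
         // or after a test has asked for a fresh database.
         if (static::$refreshDatabase) {
             $this->artisan('migrate');
             $this->artisan('db:seed');
             static::$refreshDatabase = false;
         }
     }

     // Call this when DatabaseTransactions isn't enough (multiple connections,
     // for example) and the next test should start from a clean database.
     public static function requireFreshDatabase()
     {
         static::$refreshDatabase = true;
     }
 }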


Tip

06 July 2016

Implementing a very simple "all of the above" checkbox

Image: Pixabay

Implementing a checkbox that lets the user "select all of the above" is trivial with jQuery.

The idea is that when the user toggles the "all" checkbox, all of the other checkboxes are set to the same value. Similarly, if they deselect any of the other options, we need to turn off the "all" checkbox.
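
A minimal sketch of that wiring, assuming an #select-all checkbox and option checkboxes marked with a .option class (both names are made up for the example):

 $(function () {
     // When "all of the above" is toggled, copy its state onto every option.
     $('#select-all').on('change', function () {
         $('.option').prop('checked', this.checked);
     });

     // Untick "all of the above" as soon as any option is unticked
     // (and tick it again if every option ends up ticked).
     $('.option').on('change', function () {
         $('#select-all').prop('checked', $('.option:not(:checked)').length === 0);
     });
 });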

Tip

28 June 2016

Why am I so late on the Bitcoin train?

Image: Pixabay
I've been something of a Bitcoin sceptic for quite some time.  When it first became a thing I was worried that governments would legislate it out of existence.

It has had a pretty bad rap for being associated with the dark web, and it is definitely the currency of choice for malware authors.

In normal usage Bitcoin is more transparent than cash.  If I hand you a banknote there is no permanent record of the transaction, and the taxman can't get a sniff of our business.

Governments hate transactions they can't tax or police and so in the beginning there was a concern that Bitcoin would be outlawed.

In contrast to cash, if I transfer you Bitcoin then there is a record of the transaction that anybody in the world can inspect.  It's possible to trace the coins in your Bitcoin wallet back through the various people who owned them.  Anybody in the world can watch the contents of your wallet and see where you spend your money.

This is exactly the sort of thing that governments love.

Of course not everybody wants to share their transactions with the world and so there are Bitcoin laundering services that attempt to anonymise the coins in your wallet.  This puts us back to square one with Bitcoin being very convenient for criminals to use in order to evade financial intelligence controls.

I suspect that the process of banning Bitcoin transactions would impinge too much on the freedom of citizens.  Some governments are talking about banning cryptography in order to maintain surveillance on their citizens so it's not a stretch to imagine them being displeased with Bitcoin laundering services.  *sigh*.

Anyway, back to my killer app for Bitcoin... I've recently emigrated and am still paying off debt in South Africa.  Sending money back to South Africa costs about £30 and takes 2-4 days if I use the banking system.  If I use Bitcoin the process costs around £3 and I can have the money in my South African bank account on the very same day.

Bitcoin costs one-tenth of what the banks charge and is at least twice as fast.

My transaction is not at all anonymous and a government can trace the funds to me on either end of the transaction where copies of my passport are stored with the exchanges.  If I wanted to hide this from thieves, the government, spear-phishers, and other people who want to take my money without giving anything in return then I would use a Bitcoin laundry service.
Tip

10 June 2016

Restarting BOINC automatically

Image: https://boinc.berkeley.edu, fair use
BOINC is a program maintained by the University of California, Berkeley that allows people around the world to contribute to science projects.

It works by using spare cycles on your computer to perform calculations that help with things like folding proteins to find candidates for cancer treatments, mapping the Milky Way, searching for pulsars, and improving our understanding of climate change and its effects.

It runs as a background process and is easily configured to run only under certain conditions - when you haven't used your computer for 10 minutes, for example.

It comes with a nifty GUI manager, so for most people running it on their desktop this post won't be relevant at all.  It deals with the case where you're running BOINC on a server without the GUI manager.

Anyway, the easiest solution I found for restarting BOINC on a headless server was to use supervisord.  It's pretty much the go-to tool for simple process management, and adding the BOINC program was as easy as you'd expect.

Here's the program definition from my /etc/supervisord.conf file:

 [program:BOINC]
 ; Wrap the startup in a script so we can rejoin the account manager on every (re)start
 command=sh /root/boinc/startup.sh
 process_name=boinc
 ; These exit codes count as "expected" and will not trigger a restart
 exitcodes=0,1,2
 autostart=true
 ; Restart only when the process dies with an unexpected exit code
 autorestart=unexpected

I use a script to restart BOINC because I want to make sure that I get reconnected to my account manager in case something goes wrong.

Here's what the /root/boinc/startup.sh script looks like:

 #!/bin/sh
 # Start the BOINC client, give it a moment to come up, then rejoin the account manager
 /etc/init.d/boinc-client start
 sleep 10
 boinccmd --join_acct_mgr http://bam.boincstats.com <user> <pass>

If BOINC crashes it will automatically get restarted and reconnected to my account manager.  This means I don't need to monitor that process on all the servers I install it on.
Tip

01 June 2016

Associating Vagrant 1.7.2 with an existing VM

My Vagrant 1.7.2 machine bugged out and when I tried to `vagrant up` it spawned a new box instead of bringing up my existing machine.

Naturally this was a problem, because I had made some manual changes to the config that I hadn't yet had a chance to persist to my Puppet config files.

To fix the problem I used the command `VBoxManage list vms` from the directory where my Vagrantfile is.  This gave me a list of the machine images VirtualBox could find.

I then went and edited the file at .vagrant/machines/default/virtualbox/id and replaced the UUID that was in there with the one that the VBoxManage command had output.
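
The whole fix looks something like this (the machine name and UUID are made up for illustration):

 $ cd /path/to/project        # the directory containing the Vagrantfile
 $ VBoxManage list vms
 "project_default_1455000000000_12345" {1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d}
 $ echo -n "1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d" > .vagrant/machines/default/virtualbox/id
 $ vagrant up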

Now when I run `vagrant up` it spins up the correct VM.  Happy days.
Tip

27 May 2016

Redirecting non-www urls to www and http to https in Nginx web server

Image: Pixabay
Although I'm playing with Elixir and its HTTP servers like Cowboy at the moment, Nginx is still my go-to server for production PHP.

If you haven't already swapped your web server away from Apache then you really should consider installing Nginx on a test server and running some stress tests against it.  I wrote about stress testing in my book on scaling PHP.

Redirecting non-www traffic to www in Nginx is best accomplished with the "return" directive.  You could use a rewrite, but the Nginx documentation suggests that a return is better in its section on "Taxing Rewrites".

Server blocks are cheap in Nginx, and I find it simplest to let the person who arrives on the non-secure, non-canonical form of a link be redirected twice.  I wouldn't expect many people to arrive there anyway, since every link I create is properly formatted, so the double redirect only affects a small minority of visitors.

Anyway, here's the general shape of the config - a sketch, with example.com standing in for the real domain and certificate paths:
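
 # http, www or not, goes to https on whichever host was asked for
 server {
     listen 80;
     server_name example.com www.example.com;
     return 301 https://$host$request_uri;
 }

 # https on the bare domain goes to https on www
 server {
     listen 443 ssl;
     server_name example.com;
     ssl_certificate     /etc/nginx/ssl/example.com.crt;
     ssl_certificate_key /etc/nginx/ssl/example.com.key;
     return 301 https://www.example.com$request_uri;
 }

 # the canonical site
 server {
     listen 443 ssl;
     server_name www.example.com;
     ssl_certificate     /etc/nginx/ssl/example.com.crt;
     ssl_certificate_key /etc/nginx/ssl/example.com.key;

     # ... the actual site configuration (root, PHP-FPM, and so on) goes here ...
 }

A visitor on http://example.com bounces through https://example.com before landing on https://www.example.com, which is the double redirect mentioned above; everyone else gets at most one redirect.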

Tip

11 May 2016

Logging as a debugging tool

Image: https://www.pexels.com
Logging is such an important part of my approach to debugging that I sometimes struggle to understand how programmers get by without including logging in their applications.

Having sufficiently detailed logs means I don't have to make assumptions about variable values and the flow of program logic.

For example, when a customer wants to know why their credit card was charged twice I want to be able to answer with certainty that we processed the transaction only once and be able to produce the data that I sent to the payment provider.

I have three very simple rules for logging that I follow whenever I feel like being nice to future me.  If I hate future me and want him to spend more time answering queries than necessary, I forget these rules:

  1. The first command in any function I write is a debug statement confirming entry into the function.
  2. Any time the script terminates with an error, the error condition is logged, along with the exception message and variable values if applicable.
  3. When I catch an exception I log it as a debug message in the place where I catch it, rather than letting it bubble up the stack.  If I'm the one throwing the exception then I log in the place where I throw it (rules 1 and 3 are sketched below).
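
As a rough illustration of rules 1 and 3 in a Laravel flavour (PaymentGateway and PaymentException are hypothetical stand-ins for a real payment SDK):

 use Illuminate\Support\Facades\Log;

 function chargeCustomer($customerId, $amountInCents)
 {
     // Rule 1: confirm entry into the function, with the inputs that matter.
     Log::debug('chargeCustomer: entry', compact('customerId', 'amountInCents'));

     try {
         return PaymentGateway::charge($customerId, $amountInCents);
     } catch (PaymentException $e) {
         // Rule 3: log the exception where it is caught, then rethrow something
         // friendlier without losing the technical detail.
         Log::debug('chargeCustomer: gateway rejected the charge', [
             'customerId' => $customerId,
             'exception'  => $e->getMessage(),
         ]);

         throw new RuntimeException('We could not process your payment. Please try again later.');
     }
 }
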
These rules have grown out of the experience of debugging code and dealing with an assortment of customer queries that have been escalated to the development team.

By logging entry into functions I can go back through my logs and see the path a request took through the code.  Instead of wondering how a particular state of execution came about, I have a clear trail of the functions that led to that point.

To me, errors that fail silently are the worst possible errors.  I don't expect the user to be alerted to every error or its details, but if I am unable to send a transactional email then I expect there to be a log of that fact. That might sound self-evident, but I recently worked on a project where we sent a mail and didn't check whether it succeeded.  Failures only happened occasionally, and we only noticed something was amiss when a customer complained.  I was not able to determine when the problem arose, how often it happened, or to whom I should resend transactional mails along with an apology.

Logging an exception in the place I catch it has consistently proven to be helpful.  Having a log entry as close as possible to the source of the error condition helps to narrow down the stack it occurred in.  This is especially valuable when I rethrow the exception with a user-friendly message, because I don't lose the technical details of the program state.

Because my logs can get very spammy in production I use the "Fingers Crossed" feature of Monolog.  I prefer this to the alternative of raising the logging bar to "info" and above, because when an error does occur I still have a verbose trail of my program state.  I've created a gist showing the setup in Laravel 5.1 and 5.2, but the approach will work anywhere that you use Monolog.
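
The wiring looks roughly like this; in Laravel 5.1/5.2 the usual place for it is a configureMonologUsing() callback in bootstrap/app.php, and the log path here is just an example:

 use Monolog\Logger;
 use Monolog\Handler\StreamHandler;
 use Monolog\Handler\FingersCrossedHandler;
 use Monolog\Handler\FingersCrossed\ErrorLevelActivationStrategy;

 $app->configureMonologUsing(function (Logger $monolog) {
     // Buffer everything down to DEBUG, but only flush the buffer to the log
     // file once a record at ERROR or above comes through.
     $monolog->pushHandler(new FingersCrossedHandler(
         new StreamHandler(storage_path('logs/laravel.log'), Logger::DEBUG),
         new ErrorLevelActivationStrategy(Logger::ERROR)
     ));
 });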

Another useful trick I've learned is to integrate my application errors into my log aggregation platform.  I use Loggly to aggregate my logs and push application error messages to it.  This lets me view my application errors in the context of my various server logs, so spotting an Nginx error or something in syslog that could be contributing to the application problem is a lot easier.  The gist that I linked above shows my Loggly setup, but you can also read their documentation.
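
Pushing errors to Loggly from Monolog is a one-handler job.  A sketch, with a placeholder token and tag:

 use Illuminate\Support\Facades\Log;
 use Monolog\Logger;
 use Monolog\Handler\LogglyHandler;

 // Send anything at ERROR or above to Loggly alongside the local handlers.
 $loggly = new LogglyHandler('your-loggly-customer-token', Logger::ERROR);
 $loggly->setTag('my-application');

 Log::getMonolog()->pushHandler($loggly);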

Useful and appropriate logging is an indispensable tool for debugging, so if you're not working on developing your own logging style to support your approach to debugging then hop on it!

Tip