

Showing posts from April, 2015

Monitoring Varnish for random crashes

I'm using Varnish to cache the frontend of a site that a client is busy promoting. It does a great job of reducing requests to my backend, but it is prone to random crashes. I normally get about two weeks of uptime on this particular server, which is significantly lower than other places where I've deployed Varnish, and I just don't have enough information to work out why the random crashes are occurring. The system log shows that a child process stops responding to the CLI and is therefore killed, and the child never seems to come back up again. My /var/log/messages file looks like this:

    08:31:45 varnishd[7888]: Child (16669) not responding to CLI, killing it.
    08:31:45 varnishd[7888]: Child (16669) died signal=3
    08:31:45 varnishd[7888]: child (25675) Started
    08:31:45 varnishd[7888]: Child (25675) said Child starts
    08:31:45 varnishd[7888]: Child (25675) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824
    08:32:19 varnishd[7888]: Child (2…
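Until the root cause turns up, a small watchdog run from cron can at least capture some context and bring Varnish back when the child dies. This is only a minimal sketch, not something from the post: the listen port, log locations and restart command are assumptions about a fairly standard setup and will need adjusting.

    #!/bin/sh
    # varnish-watchdog.sh - run from cron every minute.
    # Checks that Varnish answers on its listen port (assumed to be 80 here);
    # if not, it records the last panic and recent syslog lines, then restarts it.

    LOG=/var/log/varnish-watchdog.log

    if ! curl -sf -o /dev/null --max-time 5 http://127.0.0.1:80/; then
        echo "$(date '+%F %T') varnish not answering, collecting state and restarting" >> "$LOG"
        # Last panic message from the management process, if there is one.
        varnishadm panic.show >> "$LOG" 2>&1
        # Recent varnishd lines from syslog for context.
        grep varnishd /var/log/messages | tail -n 20 >> "$LOG"
        service varnish restart >> "$LOG" 2>&1
    fi

A crontab entry like * * * * * /usr/local/bin/varnish-watchdog.sh is enough to run it every minute.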

Fixing when queue workers keep popping the same job off a queue

(Image from the Amazon Queue documentation.)

My (Laravel) project uses a queue system for importing because these jobs can take a fair amount of time (up to an hour) and I want them to run asynchronously so that my users don't have to sit and watch a spinning ball. I created a cron job which runs my Laravel queue work command every 5 minutes. PHP is not really the best language for long-running processes, which is why I elected to run a task periodically rather than listening all the time. This introduced some latency (which I could cut down), but that is acceptable in my use case (imports happen once a month and are only run manually if an automated import fails). The problem I faced was that my queue listener kept popping the same job off the queue. I didn't try running multiple listeners, but I'm pretty confident weirdness would have resulted in that case as well. Fixing the problem turned out to be a matter of configuring the visibility time of my queu…
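The setting the post is heading towards is the queue's visibility timeout: it needs to be longer than the longest job, so that a job one worker has reserved isn't handed out again before that worker finishes. As a hedged sketch, the same setting can be applied to an SQS queue from the shell with the AWS CLI; the queue name and the two-hour value here are made up for illustration.

    # Look up the queue URL by name (example queue name).
    QUEUE_URL=$(aws sqs get-queue-url --queue-name imports --output text --query QueueUrl)

    # Set the visibility timeout to 2 hours (7200 seconds), comfortably longer than
    # the roughly one hour an import can take, so a job that a worker has picked up
    # is not re-delivered while it is still being processed.
    aws sqs set-queue-attributes \
        --queue-url "$QUEUE_URL" \
        --attributes VisibilityTimeout=7200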

Setting up the admin server in HHVM 3.6.0 and Nginx

HHVM has a built-in admin server that has a lot of useful functions. I found out about it in an old post on the HipHop blog. Since then HHVM has moved towards using an ini file instead of a config.hdf file. On a standard prebuilt HHVM on Ubuntu you should find the ini file at /etc/hhvm/php.ini. Facebook maintains a list of all the ini settings on Github. It is into this file that we add two lines to enable the admin server:

    hhvm.admin_server.password = SecretPassword
    hhvm.admin_server.port = 8888

I then added a server to Nginx by creating the file /etc/nginx/conf.d/admin.conf (by default Nginx includes all conf files in that directory):

    server {
        # hhvm admin
        listen 8889;
        location ~ {
            fastcgi_pass 127.0.0.1:8888;
            include fastcgi_params;
        }
    }

Now I can run curl 'http://localhost:8889' from my shell on the box to get a list of commands. Because I host this project with Amazon and have not set up a s…
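Individual admin endpoints are then called over HTTP, passing the configured password as the auth parameter. A small sketch, with the caveat that the exact endpoint names vary by HHVM build - the bare request below prints the list that applies to yours:

    # List the available admin commands.
    curl 'http://localhost:8889'

    # Call specific endpoints, passing the admin password as the auth parameter.
    # check-health and status.json are two commonly listed endpoints; adjust to
    # whatever your build's command list shows.
    curl 'http://localhost:8889/check-health?auth=SecretPassword'
    curl 'http://localhost:8889/status.json?auth=SecretPassword'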

Searching in a radius with Postgres

Postgres has two very useful extensions - earthdistance and postgis. PostGIS is much more accurate, but I found earthdistance very easy to use and accurate enough for my purpose (finding UK postcodes within a radius of a point). To install it, first find your Postgres version and then install the appropriate package. On my Debian-based Mint dev box that looks like the snippet below; my production machine is an Amazon RDS instance, and you can skip this step in that environment.

    psql -V
    sudo apt-get install postgresql-contrib postgresql-contrib-9.3
    sudo service postgresql restart

Having done that, launch psql and run these two commands. Make sure that you install cube first because it is a requirement of earthdistance.

    CREATE EXTENSION cube;
    CREATE EXTENSION earthdistance;

Now that the extensions are installed you have access to all of the functions they provide. If you want to check that they're working you can run SELECT earth(); as a quick way to test…
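For the radius search itself, earthdistance's ll_to_earth, earth_box and earth_distance functions do the heavy lifting. A rough sketch follows - the postcodes table and its latitude/longitude columns are assumed for illustration, not taken from the post - run here through psql from the shell:

    # Find postcodes within 5 km of a point (example coordinates for central London).
    # earth_box gives a fast, slightly oversized bounding-box match, and
    # earth_distance then trims the results to a true great-circle radius in metres.
    psql mydatabase -c '
    SELECT postcode,
           earth_distance(ll_to_earth(51.5074, -0.1278),
                          ll_to_earth(latitude, longitude)) AS metres
    FROM   postcodes
    WHERE  earth_box(ll_to_earth(51.5074, -0.1278), 5000)
           @> ll_to_earth(latitude, longitude)
      AND  earth_distance(ll_to_earth(51.5074, -0.1278),
                          ll_to_earth(latitude, longitude)) < 5000
    ORDER  BY metres;
    '

If the table is large, a GiST index on the ll_to_earth(latitude, longitude) expression keeps the earth_box filter fast.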