03 February 2016

Working with classic ASP years after it died

I searched for "dead clown" but all the pictures were too
disturbing.  I suppose that's kind of like the experience
of trying to get classic ASP up and running with todays libraries
I'm having to work on a legacy site that runs on classic ASP.  The real challenge is trying to get the old code to run on my Ubuntu virtual machine.

There is a lot of old advice on the web, most of it based on much older software versions, but I persevered and finally managed to get classic ASP running on Apache 2.4 on Ubuntu.

The process will allow you to have a shot at getting your code running, but my best advice is to use a small Windows VM.  There's no guarantee that your code will actually compile and run using this solution, and the effort required is hardly worthwhile.

The Apache module you're looking for is Apache::ASP.  You will need to build it manually and be prepared to copy pieces of it to your perl include directories.  You will also need to manually edit one of the module files.

The best instructions I found for getting Apache::ASP installed were on the cpan site.  You'll find the source tarball on the cpan modules download page.

I'm assuming that you're able to install the pre-requisites and build the package by following those instructions.  I was able to use standard Ubuntu packages and didn't have to build everything from source:

 sudo apt-get install libapreq2-3 libapache2-request-perl  

Once you've built and installed Apache::ASP you need to edit your Apache configuration (apache2.conf on Ubuntu) to make sure the module is loaded:

 PerlModule Apache2::ASP  
  # All *.asp files are handled by Apache2::ASP  
  <Files ~ (\.asp$)>  
   SetHandler perl-script  
   PerlHandler Apache::ASP  
  </Files>  

If you try to start Apache at this point you will get an error something like Can't locate Apache2/ASP.pm in @INC (you may need to install the Apache2::ASP module)

Unfortunately the automated installs don't place the modules correctly.  I'm not a perl developer and didn't find an easy standard way to add an external path to the include path, so I just copied the modules into my existing perl include path.  You'll find the requested files in the directory where you built Apache2::ASP.
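Roughly, that copy step looks something like the commands below; the source directory and the target @INC entry are only examples, so check your own machine's @INC and build location first:

 # list the directories perl will search for modules  
 perl -e 'print join("\n", @INC), "\n"'  

 # copy the module files from the unpacked source into one of them  
 # (~/Apache2-ASP is an example path to wherever you built the module)  
 sudo cp -r ~/Apache2-ASP/lib/* /usr/local/lib/site_perl/  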

The next problem that I encountered was that Apache 2.4 has a different function name for retrieving the IP address of the connecting client.  You'll spot an error in your log like this: Can't locate object method "remote_ip" via package "Apache2::Connection".

The bug fix is pretty simple and is documented at cpan.  You'll need to change line 85 of StateManager.pm.  The file lives in the perl include directory where you copied the modules, and its exact location appears in your error log.

 # See https://rt.cpan.org/Public/Bug/Display.html?id=107118  
 Change line 85:  
     $self->{remote_ip}     = $r->connection()->remote_ip();  
 To:  
     if (defined $r->useragent_ip()) {  
         $self->{remote_ip} = $r->useragent_ip();  
     } else {  
         $self->{remote_ip} = $r->connection->remote_ip();  
     }  

Finally, after all that, my own code still doesn't run because of compile issues, but known good test code does work.

This is in no way satisfactory for production purposes, but does help in getting a development environment up and running.

25 January 2016

Laravel - Using route parameters in middleware

I'm busy writing an application which is heavily dependent on personalized URLs (PURLs).  Each visitor to the site will have a PURL which I need to communicate to the frontend so that my analytics tags can be associated with the user.

Before I go any further I should note that I'm using Piwik as my analytics package, and it respects "Do Not Track" requests.  We're not using this to track people, but we are tying it to our client's existing database of user interests.

I want the process of identifying the user to be as magical as possible so that my controllers can stay nice and skinny.  Nobody likes a fat controller, right?

I decided to use middleware to trap all my web requests and assign a "responder" to each request.  Then I'll use a view composer to make sure that all of the output views have this information readily available.

The only snag in this plan was that the Laravel documentation was a little sketchy on how to get the value of the request parameter in middleware.  It turns out that the syntax I was looking for was $request->route()->parameters(), which neatly returns the route parameters in my middleware.

The result is that every web request to my application is associated with a visitor in my database and this unique id is sent magically to my frontend analytics.

So, here are enough of the working pieces to explain what my approach was:
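A minimal sketch of the middleware half follows; the IdentifyResponder class name, the purl route parameter, and the Visitor Eloquent model are all illustrative assumptions, and the only call taken directly from the notes above is $request->route()->parameters().

<?php

namespace App\Http\Middleware;

use Closure;
use App\Visitor;

// Illustrative middleware: resolve the visitor behind the "purl"
// route parameter and attach it to the request as the "responder".
class IdentifyResponder
{
    public function handle($request, Closure $next)
    {
        // All parameters for the matched route, e.g. ['purl' => 'abc123']
        $parameters = $request->route()->parameters();

        if (isset($parameters['purl'])) {
            // Visitor is a hypothetical Eloquent model keyed by PURL
            $responder = Visitor::where('purl', $parameters['purl'])->first();
            $request->attributes->set('responder', $responder);
        }

        return $next($request);
    }
}

A view composer registered in a service provider can then read the responder off the request and hand it to every rendered view, which keeps the controllers nice and skinny.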

19 January 2016

Using OpenSSH to setup an SFTP server on Ubuntu 14.04

I'm busy migrating an existing server to the cloud and need to replicate the SFTP setup.  They're using a password to authenticate a user and then uploading data files for a web service to consume.

YMMV - My use case is pretty specific to this legacy application so you'll need to give consideration to the directories you use.

It took a surprising amount of reading to find a consistent set of instructions so I thought I should document the setup from start to finish.

Firstly, I set up the group and user that I will be needing:

 groupadd sftponly  
 useradd -G sftponly username  
 passwd username  

Then I made a backup copy of /etc/ssh/sshd_config and edited the original.

Make sure that you change the Subsystem.  I've left the original as a comment in here.  We force the user into the /uploads directory by default when they log in.

 #Subsystem sftp /usr/lib/openssh/sftp-server  
 Subsystem sftp internal-sftp  
 Match group sftponly  
      ChrootDirectory /usr/share/nginx/html/website_directory/chroot  
      X11Forwarding no  
      AllowTcpForwarding no  
      ForceCommand internal-sftp -d /uploads  

I elected to place the base chroot folder inside the website directory for a few reasons.  Firstly, this is the only website or service running on this VM so it doesn't need to play nicely with other use cases.  Secondly, I want the next sysadmin who is trying to work out how this all works to be able to immediately spot what is happening when she looks in the directory.

Then because my use case demanded it I enabled password logins for the sftp user by finding and changing the line in /etc/ssh/sshd_config like this:

 # Change to no to disable tunnelled clear text passwords  
 PasswordAuthentication yes  
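Note that sshd only picks up changes to sshd_config after a restart; on Ubuntu 14.04 that's something like:

 sudo service ssh restart  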

The base chroot directory must be owned by root and not be writeable by any other groups.

 cd /usr/share/nginx/html/website_directory  
 mkdir chroot  
 chown root:root chroot/  
 chmod 755 chroot/  

If you skip this step then your connection will be dropped with a "broken pipe" message as soon as you connect.  Looking in your /var/log/auth.log file will reveal errors like this: fatal: bad ownership or modes for chroot directory

The next step is to make a directory that the user has write privileges to.  The base chroot folder is not writeable by your sftp user, so make an uploads directory and give them "writes" (ha!) to it:

 mkdir uploads  
 chown username:username uploads  
 chmod 755 uploads  

If you skip that step then when you connect you won't have any write privileges.  This is why we had to create a chroot base directory and then place the uploads folder off it.  I chose to stick the base in the web directory to make it obvious to spot, but in more general cases you would place it somewhere more sensible.

Finally, I link the uploads directory inside the chroot jail to the uploads directory where the web service expects to find files.

 cd /usr/share/nginx/html/website_directory  
 ln -s chroot/uploads uploads  
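As a quick sanity check (the host and file names below are just examples), connect as the new user and try an upload; you should land in /uploads and the file should then be visible in the website's uploads directory:

 sftp username@your.server.address  
 sftp> put datafile.csv  
 sftp> quit  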

I feel a bit uneasy about a password login being used to write files to a directory being used by a webservice, but in my particular use case my firewall whitelists our office IP address on port 22.  So nobody outside of our office can connect.  I'm also using fail2ban just in case somebody manages to get access to our VPN.

01 December 2015

Lowering your AWS bill by moving to reserved instances

The cost saving from reserving an EC2 instance is quite dramatic.  This morning I moved two web servers to reserved instances and am dropping my hosting cost by 64% for those servers.

There isn't actually any effort required in moving your on-demand EC2 instance to a reserved instance.  The only change is to your billing; you don't need to touch your instance configuration at all.

The only important thing to remember is that you're reserving an instance in a particular availability zone.  The billing effect will only apply to instances launched in the same availability zone.

Amazon will apply the discount for having a reserved instance to your invoice automatically.  They provide quite extensive documentation on reserved instances on their site.

23 November 2015

Fixing the php5 maxlifetime cronjob mailing the root user about module mcrypt already loaded

I'm running an nginx server with php5-fpm and was always getting mail in /var/mail/root telling me that the cronjob running /usr/lib/php5/maxlifetime was throwing warnings.

The warnings looked like this:

PHP Warning:  Module 'mcrypt' already loaded in Unknown on line 0

To fix this I had a look at the file and noticed that it loops through the various SAPIs and runs a command for each.  The relevant lines in the shell script look like this:

for sapi in apache2 apache2filter cgi fpm; do
    if [ -e /etc/php5/${sapi}/php.ini ]; then

So I removed the mcrypt extension from my apache2 php.ini (/etc/php5/apache2/php.ini) and now the maxlifetime shell script runs without throwing warnings.
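For reference, the duplicate directive to look for in /etc/php5/apache2/php.ini typically looks like the line below; the same extension is usually also enabled via /etc/php5/mods-available/mcrypt.ini, which is what causes the double load:

; comment out or delete the duplicate line in php.ini
extension=mcrypt.so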

12 November 2015

Updating database when migrating a Wordpress site between domains

If you're using a staging server to test your Wordpress changes then you'll be deploying Wordpress to a new domain once your test team gives the go-ahead.

Unfortunately this can break Wordpress quite badly.  All the links in your content are essentially hard-coded into the database content table.  There are also settings in the options table that Wordpress uses when deciding on redirects.

Here are three useful SQL statements that will make your life a little easier when migrating.  You can include them as part of your scripted deploy or just run them manually if you don't deploy Wordpress often.

Edit them to suit your domain configuration, but they'll help you to change the links and settings in your database to point to the new domain.
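A sketch of what those statements look like, assuming the default wp_ table prefix and using example domains (swap in your actual staging and production URLs):

-- Update the URLs Wordpress uses when deciding on redirects
UPDATE wp_options
SET option_value = REPLACE(option_value, 'http://staging.example.com', 'http://www.example.com')
WHERE option_name IN ('siteurl', 'home');

-- Update hard-coded links inside post and page content
UPDATE wp_posts
SET post_content = REPLACE(post_content, 'http://staging.example.com', 'http://www.example.com');

-- Update the GUIDs stored for each post
UPDATE wp_posts
SET guid = REPLACE(guid, 'http://staging.example.com', 'http://www.example.com');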

26 August 2015

Setting up a new user in Ubuntu from scratch

Adding new users to Ubuntu is easy because of the convenience tools that exist.

Start with the command

sudo useradd -d /home/testuser -m testuser

This creates the user and sets up a default home directory.  The user doesn't have a password, but you could add one with passwd if you wanted to.

Then create a .ssh directory in their home directory.  Create a file called authorized_keys in that directory and copy the contents of the user's public key into it.

Chown the .ssh directory (and file) to the user and chmod the file to 600.
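As a sketch, assuming the new user is testuser and their public key has been copied to /tmp/testuser.pub (both are just example names):

sudo mkdir /home/testuser/.ssh
sudo cp /tmp/testuser.pub /home/testuser/.ssh/authorized_keys
sudo chown -R testuser:testuser /home/testuser/.ssh
sudo chmod 700 /home/testuser/.ssh
sudo chmod 600 /home/testuser/.ssh/authorized_keys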

Make sure that /etc/ssh/sshd_config is set up to deny logging in by password.
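The directive that controls this is:

# /etc/ssh/sshd_config
PasswordAuthentication no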

The user should be able to log in using their public key by setting up their .ssh/config on their home machine.
Host foo
HostName server.ip.address
User testuser
IdentityFile ~/.ssh/id_rsa