Performance enhancements for a bespoke PHP ecommerce store
Not so long ago, a customer of ours was running a bespoke online store selling clothing. For 99% of the year the site performed perfectly – fast page load times and so on – but at the start of a sales promotion it would collapse under a huge spike in requests from potential customers responding to a mailshot.
This post details some of the techniques we used to improve the performance of the site – allowing the customer to increase their sales, rather than lose them to visitors who were unable to access the site at all.
As mentioned above, we didn’t write the original site – we were called in to undertake maintenance after the original developer disappeared. The site was built with PHP 5, Propel and a PostgreSQL database.
To start with, one year they had so many orders that their back-end order management system became unusable. It appeared to exhibit O(N^2) behaviour – as the number of outstanding orders increased, the processing time required grew quadratically – so once there were more than around 60 orders outstanding, page load times stretched to 2-3 minutes or more.
So first off, we fixed the back-end order processing code so they could quickly deal with the backlog of orders. This was easily accomplished by introducing paging – showing 30 orders at a time rather than all of them – and adding ‘bulk’ operations, saving staff from repetitive and tedious navigation.
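The paging change can be sketched as below. The function and class names here are illustrative, not from the customer’s code base – the key point is that each request touches only one fixed-size page of orders rather than the whole table.

```php
<?php
// Sketch of the paging fix: fetch a fixed-size page of orders instead
// of loading every outstanding order at once. Names are illustrative.

define('ORDERS_PER_PAGE', 30);

// Translate a 1-based page number into a SQL OFFSET.
function pageOffset($page, $perPage = ORDERS_PER_PAGE)
{
    return max(0, ($page - 1) * $perPage);
}

// With Propel 1.x Criteria, the query would then look roughly like:
//   $c = new Criteria();
//   $c->setLimit(ORDERS_PER_PAGE);
//   $c->setOffset(pageOffset($page));
//   $orders = OrderPeer::doSelect($c);  // 'OrderPeer' is a placeholder name
```

Combined with bulk actions applied to a whole page of selected orders, this keeps each request’s work roughly constant regardless of how large the backlog grows.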
Next, we tackled the more interesting parts – fixing the performance issues with the public facing code.
To establish a baseline, we took some crude performance measurements of the site and found it averaged around 8-10 requests/second, depending on the page being viewed. This was without any caching of data or other improvements on our part.
Next, we inspected the code base and identified three areas where we could make easy improvements. After introducing each one, we repeated our performance testing on a local server; each change brought a measurable improvement, and the end result was that the live site handled around 130-180 requests/second.
- Firstly, we introduced an autoloader, which dramatically reduced the number of files opened on a single web request; performance improved by more than 100%
- Next, we introduced caching within the data access (Propel peer) classes to avoid database hits where possible – using the Zend Framework’s Zend_Cache_Frontend_Class component. Through appropriate use of tagging, we could ensure stale data wasn’t served.
- Finally, we introduced page-level caching for customers without orders in their shopping basket – this gave another impressive speed boost, and allowed visitors with an empty shopping cart to cause minimal server load (removing database queries and minimising PHP processing logic and so on).
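The first improvement – the autoloader – can be sketched as below. The directory layout is an assumption, not the site’s actual structure; the point is that a class file is read from disk only when the class is first used, rather than every file being included up front on every request. It is written as a named function rather than a closure for PHP 5.2 compatibility.

```php
<?php
// Minimal autoloader sketch (directory layout is an assumption):
// load a class file on first use instead of require_once'ing
// everything at the top of every request.

if (!defined('STORE_CLASS_DIR')) {
    define('STORE_CLASS_DIR', dirname(__FILE__) . '/classes');
}

function storeAutoload($className)
{
    // e.g. "OrderPeer" -> STORE_CLASS_DIR . "/OrderPeer.php"
    $file = STORE_CLASS_DIR . '/' . $className . '.php';
    if (is_file($file)) {
        require $file;
    }
}

spl_autoload_register('storeAutoload');
```

On a site with hundreds of model and peer classes, most requests only ever touch a handful of them, so the saving in `stat`/`open` calls per request is substantial.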
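The second improvement relies on tag-based invalidation, which is what lets cached query results be dropped the moment the underlying rows change. The toy class below illustrates the idea in a self-contained way – it is not Zend_Cache itself, just a sketch of the mechanism: entries carry tags, and saving an order cleans every entry tagged ‘orders’ so stale data is never served.

```php
<?php
// Illustrative sketch of tag-based cache invalidation (not Zend_Cache
// itself). With Zend Framework 1 the real setup would be roughly:
//   $cache = Zend_Cache::factory('Class', 'File',
//       array('cached_entity' => 'ProductPeer'), array());
// and cleaning by tag via Zend_Cache::CLEANING_MODE_MATCHING_TAG.

class TaggedCache
{
    private $entries = array(); // cache id => value
    private $tags    = array(); // tag => list of cache ids

    public function save($id, $value, $tagList)
    {
        $this->entries[$id] = $value;
        foreach ($tagList as $tag) {
            $this->tags[$tag][] = $id;
        }
    }

    public function load($id)
    {
        // Mirror Zend_Cache's convention: false on a cache miss.
        return isset($this->entries[$id]) ? $this->entries[$id] : false;
    }

    public function cleanTag($tag)
    {
        if (!isset($this->tags[$tag])) {
            return;
        }
        foreach ($this->tags[$tag] as $id) {
            unset($this->entries[$id]);
        }
        unset($this->tags[$tag]);
    }
}
```

In the peer classes, a write path (new order, stock change) simply cleans the relevant tag, and the next read repopulates the cache from the database.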
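The third improvement – page-level caching for empty baskets – boils down to a gate very early in the request, sketched below. The basket check and file layout are assumptions; the essential trick is that a visitor with nothing in their basket sees a page identical to everyone else’s, so a previously rendered copy can be streamed straight out.

```php
<?php
// Sketch of the page-level cache gate (basket check and file layout
// are assumptions): serve a previously rendered copy of the page to
// visitors with an empty basket, skipping the database entirely.

function servePageFromCache($cacheFile, $basketIsEmpty, $maxAgeSeconds = 300)
{
    if (!$basketIsEmpty) {
        return false; // personalised page, must be rendered normally
    }
    if (!is_file($cacheFile) || (time() - filemtime($cacheFile)) > $maxAgeSeconds) {
        return false; // no sufficiently fresh cached copy
    }
    readfile($cacheFile); // stream the cached HTML straight out
    return true;
}
```

Called at the top of the front controller, a `true` return means the request can exit immediately; on `false` the page is rendered as normal and its output written back to the cache file for the next visitor.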
As a closing note, they could obviously have upgraded the server hardware and/or scaled out in the traditional manner. However, the site tended to be relatively quiet for a large part of the year (a steady stream of customers, not all at once), so throwing hardware at the problem wouldn’t have addressed the underlying inefficiencies. Given the performance improvements we made, it’s reasonable to estimate they would have needed to more than quadruple their hosting costs to achieve the same result we delivered with 2-3 days of development and testing.
We deployed our updates on Friday 9th of July 2009; you’ll note the immediate drop-off in resource usage on that day.
CPU Usage (Weekly)
The hourly spikes are caused by a cron job; ignoring these you can see the load has decreased significantly.