You’ve got a VPS server, now what?

When you get a new VPS, everything feels great. You’ve got your own server, without the dedicated server price tag. However, everything may not be as it seems.

A key thing to remember: a low-end VPS does NOT have the same amount of resources available as a dedicated server.

The first thing you will want to do with your server is set everything up. Assuming you’re using it for web hosting, this may include:

  • Control Panel
  • Web server & PHP / Python etc
  • E-mail with Anti-virus/Spam detection
  • DNS
  • Database Server
  • Log file processing

Be careful when doing this.  You are likely getting a VPS because your website is now getting enough traffic, or you are hosting enough sites, that shared hosting just doesn’t cut it.  By adding all of those services, you could be eating into valuable resources.

If your VPS is for your website, it’s for your website, and ideally not for the extras. Consider the following:

Control Panel

After your initial setup there probably isn’t much need for a control panel. Where possible, disable it. Virtualmin/Webmin/Usermin, for example, runs as its own Perl process, and while that isn’t much, it still uses memory that most of the time you don’t need. You can always log in to the machine via SSH and enable it again when you do need it.

E-mail

Consider using Google Apps (or any of a number of good-quality email hosting providers). You can offload all of your e-mail, virus checking and spam checking to someone else. Gmail is a huge platform and most of you know the features it offers; Google Apps gives you all of that on your own domain name. There is 7 GB of storage per account, for free. If you need more, or feel like giving Google some money, it’s £33 per account, per year (at the time of writing).

DNS

Many server providers, like Linode, have DNS servers that are free for you to use. Utilise them. BIND, on this very server, was using over 100 MB of memory.

Log file processing

Webalizer and AWStats are both common log file processing tools. Do you really need them, though? In some instances, yes; in most… no. Use Google Analytics (or something similar). If you get a lot of traffic, processing millions of lines of logs takes minutes to hours, during which time your website could be sluggish. A slow website really kills the user experience. If you really want/need something like AWStats, install it on your own computer and process the log files manually, or create another VPS to process them periodically.

Web Server & Database Server

As this is your web server, you can’t do much about the overhead of a web server and perhaps a database server (unless you’re running multiple virtual machines). What you can do, however, is limit what is running. The default installation of Apache on Debian includes a lot of modules you will likely never need or use. Disable the modules you don’t need (on Debian, a2dismod makes this easy).

Or… you can get a bigger VPS, or multiple VPSes, one for each of your services. Instant (or at least quick) scaling is one of the many benefits of a virtual server.

Remember: the faster your server runs, the faster your website runs, the happier your users are, the happier Google is, the happier you are… well, hopefully.

Creating a Custom Authentication Backend using Django

Today I have posted a new tutorial on djangorocks.com, ‘Creating a Custom Authentication Backend’.

The example allows you to log in to your Django application using your IMAP mail server to check your username & password.  Perfect if you are creating a webmail client or address book – and more, I’m sure.

The tutorial is not as long as I would have hoped, but it’s rather difficult to add to when it really is quite a simple thing to implement.  Hope it helps.

Threading / Background tasks in C# using BackgroundWorker


A simple way to thread some slower tasks, with a callback method.

It uses the BackgroundWorker class.  I found this to be the simplest way to achieve what I needed. The two tasks I used it for were image resizing & sending e-mails – doing things over the web can be slow.

// Requires: using System; using System.ComponentModel;

// Define a new BackgroundWorker
BackgroundWorker bw = new BackgroundWorker();

// This will run on a background thread
bw.DoWork += delegate(object sender, DoWorkEventArgs e)
{
    BackgroundWorkerConfig cfg = (BackgroundWorkerConfig) e.Argument;
    e.Result = cfg.myVariable.ToUpper();
};

// When the background work is complete, this runs
bw.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
{
    // e.Result is whatever DoWork assigned to its e.Result (a string here)
    string result = (string) e.Result;
    Console.WriteLine("Completed: " + result);
};

// Start the process, passing in the configuration object
bw.RunWorkerAsync(new BackgroundWorkerConfig("my string goes here"));

// A class used for sending configuration variables into the Worker
class BackgroundWorkerConfig
{
    public string myVariable;
    
    public BackgroundWorkerConfig(string myVariable) {
        this.myVariable = myVariable;
    }
}

If you don’t need to send variables in, or care about getting them out, you can remove everything related to BackgroundWorkerConfig – e.g. batch resizing images from a pre-defined folder.
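
For example, a minimal sketch of that no-argument version might look like this (ResizeImagesInFolder is a hypothetical stand-in for whatever slow task you want to run):

// Minimal sketch: no arguments in, no result out
BackgroundWorker bw = new BackgroundWorker();

bw.DoWork += delegate(object sender, DoWorkEventArgs e)
{
    // Hypothetical slow task, e.g. batch resizing a folder of images
    ResizeImagesInFolder();
};

bw.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
{
    Console.WriteLine("Batch resize complete");
};

// No configuration object needed
bw.RunWorkerAsync();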

C# – Very Basic XML Parsing



In my app I have a VERY basic XML file with a list of countries & country codes. It looks like this:

<xml>
<item><code>ABW</code><country>Aruba</country></item>
<item><code>AFG</code><country>Afghanistan</country></item>
</xml>

Parsing this is very simple.

using System;
using System.Xml;

XmlDocument xml = new XmlDocument();
xml.Load("countries.xml");

// Select every <item> node and print its code & country name
XmlNodeList nodes = xml.SelectNodes("/xml/item");
foreach (XmlNode node in nodes) {
    Console.WriteLine(node["code"].InnerText + " = " + node["country"].InnerText);
}

Maybe you can use this as the basis for an RSS reader (although I’m sure there are better alternatives).  I use it just to populate a drop-down box & convert from the country code to the name.
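
For the code-to-name conversion, a small sketch along these lines (assuming the same countries.xml file as above) loads everything into a dictionary once, then looks codes up as needed:

using System;
using System.Collections.Generic;
using System.Xml;

// Build a code -> country name lookup from countries.xml
Dictionary<string, string> countries = new Dictionary<string, string>();

XmlDocument xml = new XmlDocument();
xml.Load("countries.xml");

foreach (XmlNode node in xml.SelectNodes("/xml/item")) {
    countries[node["code"].InnerText] = node["country"].InnerText;
}

// Convert a country code to its name
Console.WriteLine(countries["AFG"]); // prints "Afghanistan"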

Is the Googlebot getting lazy?



After finishing off a new site, installing & configuring servers, etc., the SEO ‘expert’ comes along with a list of things that need to be done.

Create a robots.txt

Even though the crawler can already see everything, we should create one?

Add ‘ping’ support

This seems useful and is utilised by most WordPress blogs.  The idea is, you add a new article/news post etc. and notify a few sources, letting them know to crawl for new content.  I can see the benefit of this if you want to see your pages appearing instantly, however… surely the search engine will come and find it on its own. The more you update, the more frequently a crawler visits.

Create a sitemap.xml file

What now?  I’ve known about sitemap files for a while, and from the beginning this has seemed really pointless.

Create a big list of URLs on your website – so Google doesn’t have to?  I am aware that sitemap files, apparently, do not affect a search engine’s ability to crawl your site… but how long before they do?

According to Wikipedia:

Sitemaps are particularly beneficial on websites where:

  • some areas of the website are not available through the browsable interface, or
  • webmasters use rich Ajax, Silverlight, or Flash content that is not normally processed by search engines.

If content isn’t browsable on your website, surely you don’t want it publicly visible or indexed anyway… and if you do, why is it not somewhere within your navigation?  If it can’t be found by a person browsing your site, Google shouldn’t see it either.

Ajax/Flash/Silverlight… these are all easily dealt with as well; I personally don’t see what the problem is.

Anyway, that’s my rant over.
