Added Bytes


Many of my clients have previously worked with consultants and SEOs who inundated them with jargon, especially in proposals and on sales calls. I sometimes find myself using too much jargon too - easily done when you spend so much time working in a field. This jargon guide explains the industry's terms in simple language.

Read the rest of this post »

Why sites usually drop in the SERPs and what to do if it happens to you.

Read the rest of this post »

del.icio.us describes itself as a social bookmarking site: it allows all users of the net to share their bookmarks with others. Unlike similar enterprises that went before it, users' shared bookmarks are not listed only under their name. Each bookmark also has a set of "tags" associated with it. These tags are words that identify what that page is about. An article on, say, security in PHP would have the tags "php" and "security" - maybe "webdev" and "programming" as well.

Each user can pick their own tags for their own bookmarks. You can also browse all available bookmarks by tag - meaning that if you wanted to see all bookmarks about PHP, you would simply browse to the PHP tag, and voila - there you would find all bookmarked pages about PHP.

What's Wrong with Directories?

Tags are an intelligent way of organising data. Regular directories work on a similar basis to a filing cabinet - items are stored within folders within drawers, and often only to be found in one place. Of course, that fails to make use of the power of databases and computers. Tags, on the other hand, allow a single item to be found in all the places it should be, rather than just the one place that is the single best fit.

This site, for example, is listed at DMOZ under "Web Design and Development: FAQs, Help, and Tutorials" - a good fit, but it also contains writing on internet marketing, browsers, usability and accessibility, and there are resources available as well as a blog. DMOZ cannot reflect this with its rigid and antiquated structure. At del.icio.us, however, it is associated with the following tags: css, php, design, web, programming, blog, webdev, blogs, development, webdesign, reference, cheatsheet, resources, code, mysql, tips, apache, web-design, tools, tutorial, tech, tutorials, computer, web-dev, html, database. And that's just the front page - specific articles are all listed with their own tags. This means that sites and pages listed by users of del.icio.us are classified and organised in a much more effective and user-friendly way.

There are more serious flaws in the directory model though. The most significant problem with web directories is the editors themselves. A web directory requires editors in order to function, and these can either be paid employees or volunteers. If your directory has paid editors working for it, you are left with no serious choice but to charge a fee for submission, in order to cover your editors' wages. That system scales pretty well - if the directory succeeds and becomes popular, the fees for the extra submissions should be enough to cover the wages of the extra editors required to process those submissions.

A volunteer system does have advantages over the paid-editor model. Because volunteers are not paid, a listing in a directory with volunteer editors can be free. This means that non-profit information sites and low-traffic sites can be listed in the directory (a fee for submission will usually prevent that), and means that editors can go out and find sites themselves to be listed.

Both of these systems have their problems. Volunteer editors are volunteers - making it much harder to hold them accountable for laziness or incompetence. DMOZ - a directory with around five to ten thousand volunteer editors - is a great example of this: submissions often go unprocessed for many months, if they are processed at all. Also, because it is a volunteer position, unscrupulous folk are far more likely to accept bribes to list or de-list sites - they stand to lose very little if discovered - and there are plenty of people who claim that a great many editors do just that. The system can also scale badly - if tens of thousands of submissions suddenly require processing, it can be very difficult to source the hundreds or thousands of editors needed to manage that influx.

The paid system leads to an exclusive directory, which by definition misses out on a huge amount of quality content. Yahoo's fees of hundreds of dollars per submission have always seemed to many completely disproportionate to the benefit of a listing; for many people they exceed the cost of hosting a site, or the income from it. As a result, for a long time (and this is still true to a great extent) Yahoo has been an incomplete directory, lacking the in-depth listings required by today's discerning web surfer.

Why is del.icio.us Better?

del.icio.us is different to both of these systems. It is similar, in fact, to a peer-review system. One person bookmarking one page counts as a vote for that page. As the user will have added tags as well, their vote tells the system that one person believes the page in question is related to each of the tags they listed. After a few hundred people have bookmarked the same link, you'll begin to see some tags used more than others, giving you an idea of how closely the target page relates to each of those tags.

What this has created is a kind of directory with a distributed editing system. The editors are volunteers, but because of their sheer numbers it is much harder for any one editor to affect listings. If one editor is lazy, it does not matter - there are thousands more covering the same topic. If an editor makes a mistake, and lists a page or site under the wrong tag, it doesn't matter either - the huge number of other editors will make up for it.
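The voting mechanic can be sketched in a few lines of JavaScript. This is a hypothetical illustration only - del.icio.us's actual implementation is not public - showing how each bookmark contributes one vote per tag, and how the totals reveal how strongly a page relates to each tag:

```javascript
// Hypothetical sketch of tag voting: each user's bookmark of a URL
// counts as one vote for each tag they attached to it.
function tallyTags(bookmarks) {
  var votes = {};
  for (var i = 0; i < bookmarks.length; i++) {
    var tags = bookmarks[i].tags;
    for (var j = 0; j < tags.length; j++) {
      votes[tags[j]] = (votes[tags[j]] || 0) + 1;
    }
  }
  return votes;
}

// Three users bookmark the same article with overlapping tags.
var votes = tallyTags([
  { user: "alice", tags: ["php", "security"] },
  { user: "bob",   tags: ["php", "webdev"] },
  { user: "carol", tags: ["php", "security", "programming"] }
]);
// votes.php is 3, votes.security is 2 - "php" is the strongest fit.
```

With a few hundred such votes per page, the outliers (a mistaken or malicious tag used once) simply disappear into the noise.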

Spam is likely to become a huge problem for del.icio.us. To a degree, there is already spam within the index, and some work has already been put into preventing it. The site's robots.txt file prevents search engines from indexing the whole site, so spamming the index directly brings no link popularity benefit. However, del.icio.us can still send plenty of traffic, and the RSS feeds it generates mean it can still produce link popularity via other sites.

Luckily, there are plenty of signals they could look for to weed out spam. Their database can already tell them which tags are related, and which sites. If a user starts to list unrelated sites with tags unrelated to those sites, they may well be a spammer. If lots of new users suddenly join and all bookmark the same page instantly, using the same tags, again that may well be spam. IP tracking and the registration system (which requires a valid email address and features a Turing test) should make automated spam far harder. Ultimately, it may be del.icio.us's own success that makes spamming virtually impossible. With enough users on the site, a spammer may need to create hundreds, even thousands, of fake users to have a site listed in the "popular" section, or listed highly for a specific tag.
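To make one of those signals concrete, here is a hypothetical sketch (invented for illustration, not how del.icio.us actually works) that flags a page when a burst of brand-new accounts all bookmark it with identical tag sets:

```javascript
// Hypothetical spam signal: many brand-new accounts bookmarking the
// same URL with identical tag sets is suspicious.
function looksLikeSpamBurst(bookmarks, maxAccountAgeDays, threshold) {
  var newAccounts = 0;
  var firstTags = bookmarks[0].tags.join(",");
  var identicalTags = true;
  for (var i = 0; i < bookmarks.length; i++) {
    if (bookmarks[i].accountAgeDays <= maxAccountAgeDays) newAccounts++;
    if (bookmarks[i].tags.join(",") !== firstTags) identicalTags = false;
  }
  return newAccounts >= threshold && identicalTags;
}

// Three day-old accounts, identical tags: flagged as a spam burst.
var burst = looksLikeSpamBurst([
  { accountAgeDays: 0, tags: ["pills", "cheap"] },
  { accountAgeDays: 1, tags: ["pills", "cheap"] },
  { accountAgeDays: 0, tags: ["pills", "cheap"] }
], 2, 3);
```

A real system would combine many such heuristics (IP ranges, tag-to-site coherence, timing) rather than rely on any single one.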

What next?

The intention of del.icio.us is not (at the moment) to become or create a directory. It is by pure fluke that they have created a site and a system so able to perform the same function as a directory, without the associated problems. It may well be that other similar sites will spring up whose aim is to build a directory - especially sites involved in search, with a large user base to draw on. I would be greatly surprised if Google, MSN and Yahoo were not already watching very closely and with great interest, and equally surprised if none of them bought or created a social bookmarking product in the next few months, as the power of distributed editing becomes apparent.

Update (14th December 2005)

It appears that I was rather close to the mark with my guess at what would happen next for del.icio.us - they have just been bought by Yahoo. It will be interesting to see how Yahoo integrates del.icio.us with its other services. I only hope they don't mess it up like so many promising sites before!

A list of country-level domain names, ordered by country or domain name.

Read the rest of this post »

Screen Readers Suck!

15 September 2005   |   Comments   |   accessibility

Accessibility has now become a major issue in web design. One benchmark of an accessible site is that it works in common screen reading programs. However, screen readers are making the job of conscientious web designers harder than it should be.

Read the rest of this post »

Referrer spam is becoming increasingly common. At best, it will only render your log files useless. At worst, it can cause your site to be dropped by search engines and your running costs to skyrocket. Here's how to block spurious referrers.

Read the rest of this post »

Ignore Directories in mod_rewrite

8 September 2005   |   Comments   |   apache

A quick piece of code for you. If you are using mod_rewrite and creating RewriteRules for a website that emulate a directory structure, you might happen across the same problem I've had: if you have actual, real folders on the site as well, you don't want requests for items in those folders to be rewritten. You need a way to prevent the RewriteRule(s) matching the real folders. The easiest way to do this is (I think) by adding a RewriteRule for each of the real folders, like the one below. This rule will match any request to those folders and prevent it from being rewritten later in the set of rules.

RewriteRule ^folder_name/.*$ - [PT]
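For context, here is how that exclusion might sit in a complete rule set - a sketch only, with the folder names and the final rewrite invented for illustration. The pass-through rules must come before the rewriting rule they are protecting against:

```apache
RewriteEngine On

# Real folders: match the request and pass it through untouched
RewriteRule ^images/.*$ - [PT]
RewriteRule ^css/.*$ - [PT]

# Everything else falls through to the emulated directory structure
RewriteRule ^([a-z-]+)/?$ index.php?page=$1 [L]
```

An alternative worth knowing about is placing `RewriteCond %{REQUEST_FILENAME} !-d` before the rewriting rule, which skips the rewrite for any request that maps to a real directory without you having to list each folder by name.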

The problem that led to this snippet of code: when posting from a form to a PHP script, you may sometimes want several fields with the same name and different values. For example, you might want people to be able to tick boxes to indicate which cities they have been to from a list. You would normally add "[]" to the name of the field inputs, like so:

<input type="checkbox" name="cities[]" value="London"> London
<input type="checkbox" name="cities[]" value="Paris"> Paris
<input type="checkbox" name="cities[]" value="Berlin"> Berlin
<input type="checkbox" name="cities[]" value="Madrid"> Madrid
<input type="checkbox" name="cities[]" value="Rome"> Rome

When the form is received by PHP, whichever items are ticked in the cities list above are accessible in the array $_POST['cities']. This is very handy.
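For illustration, here is roughly what PHP does with those repeated fields behind the scenes, sketched in JavaScript (the function and variable names are mine, not PHP internals): repeated "name[]" pairs in the submitted query string are gathered into one array instead of overwriting each other.

```javascript
// Illustration of how repeated "name[]" fields become a single array,
// mimicking PHP's handling of the form above.
function parseArrayFields(query) {
  var result = {};
  var pairs = query.split("&");
  for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split("=");
    var name = decodeURIComponent(parts[0]);
    var value = decodeURIComponent(parts[1]);
    if (name.slice(-2) === "[]") {
      var key = name.slice(0, -2);   // strip the "[]" suffix
      if (!result[key]) result[key] = [];
      result[key].push(value);       // append rather than overwrite
    } else {
      result[name] = value;
    }
  }
  return result;
}

// A query string like the one the cities form would submit:
var parsed = parseArrayFields("cities[]=London&cities[]=Paris&cities[]=Rome");
// parsed.cities is ["London", "Paris", "Rome"], much like $_POST['cities']
```

Without the "[]" suffix, each repeated field would simply replace the previous one, and only "Rome" would survive.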

Unfortunately, the addition of square brackets causes trouble with JavaScript, especially with a "Select All" function - which allows you to check all boxes at once by clicking a single one. This script works around that using regular expressions.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>Checkbox Fun</title>
<script type="text/javascript"><!--
var formblock;
var forminputs;

function prepare() {
  formblock = document.getElementById('form_id');
  forminputs = formblock.getElementsByTagName('input');
}

function select_all(name, value) {
  for (var i = 0; i < forminputs.length; i++) {
    // regex here to check name attribute
    var regex = new RegExp(name, "i");
    if (regex.test(forminputs[i].getAttribute('name'))) {
      if (value == '1') {
        forminputs[i].checked = true;
      } else {
        forminputs[i].checked = false;
      }
    }
  }
}

if (window.addEventListener) {
  window.addEventListener("load", prepare, false);
} else if (window.attachEvent) {
  window.attachEvent("onload", prepare);
} else if (document.getElementById) {
  window.onload = prepare;
}
//--></script>
</head>
<body>
<form id="form_id" name="myform" method="get" action="search.php">
  <a href="#" onClick="select_all('area', '1');">Check All Fruit</a> | <a href="#" onClick="select_all('area', '0');">Uncheck All Fruit</a><br><br>
  <input type="checkbox" name="area[]" value="1" />Apples<br />
  <input type="checkbox" name="area[]" value="2" />Bananas<br />
  <input type="checkbox" name="area[]" value="3" />Chickens<br />
  <input type="checkbox" name="area[]" value="4" />Stoats
  <br><br><a href="#" onClick="select_all('location', '1');">Check All Locations</a> | <a href="#" onClick="select_all('location', '0');">Uncheck All Locations</a><br><br>
  <input type="checkbox" name="location[]" value="1" />Brighton<br />
  <input type="checkbox" name="location[]" value="2" />Hove<br />
</form>
</body>
</html>
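Why does the regular expression workaround succeed? Because `new RegExp(name, "i")` performs an unanchored substring match, the pattern "area" matches the full attribute value "area[]" without the brackets ever needing to be escaped:

```javascript
// The name passed to select_all ("area") is used as an unanchored,
// case-insensitive pattern, so it matches the real attribute "area[]".
var regex = new RegExp("area", "i");
var matchesBracketed = regex.test("area[]");     // true
var matchesOther = regex.test("location[]");     // false
```

One caveat: being a substring match, a field named "myarea[]" would also be caught; anchoring the pattern as `new RegExp("^area", "i")` avoids that.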

The third part of the Writing Secure PHP series, covering weak passwords, clients and more advanced topics.

Read the rest of this post »

Mozilla's and Google's prefetching functions are a nice addition to browser technology in many ways. They are not without flaws, however. The two main problems with the prefetching idea are that it messes with log files and that every link on a page could potentially be followed automatically, regardless of the consequences (dangerous in a site administration context).

It appears from the FAQ that Google only intends their accelerator to prefetch specific pages, that have been specified with the <link> tag. However, many people are claiming that normal links have been prefetched.

To prevent prefetching of a page is simple: add the following PHP to the page you do not want prefetched:

if ((isset($_SERVER['HTTP_X_MOZ'])) && ($_SERVER['HTTP_X_MOZ'] == 'prefetch')) {
    // This is a prefetch request. Block it.
    header('HTTP/1.0 403 Forbidden');
    echo '403: Forbidden<br><br>Prefetching not allowed here.';
    exit;
}

This will serve a "forbidden" header to the prefetcher. Normal browsing should be unaffected.

Hi! I'm Dave, a fanatical entrepreneur and developer from Brighton, UK. I've been making websites since Netscape 4 was a thing.

I built, ApolloPad and Cheatography.