Shortly after the snapshot of the 2004 version you see above was taken, I started work with Juno. My brief was fairly simple: to bring the site up to date, improving search results and, ideally, converting a few more visitors into customers. Though the text-based look had served them very well over the years, its simplicity and lack of visible branding were costing a large number of potential sales.
The previous system was also technologically past its best. The local product management system generated and uploaded tens of thousands of HTML files to the site every day, and some pages had become exceptionally large and unwieldy - one in particular was over 19 megabytes in size.
So the aim of the project was to upgrade the front end of the site while keeping the same backend, as the systems in place in-house were serving their purpose well. With this in mind, we opted for a site based on PHP and Microsoft SQL Server. The two are a powerful combination, and also gave us the option of database replication - meaning the site could run without any changes to how things worked locally.
The data itself needs processing for the web - the search engine, especially, has a lot of data to sort through. So after the data is replicated to the database server, a series of stored procedures modify and simplify what's there, organising it so that the search engine can run quickly.
The site itself is run from a custom templating system, allowing people to edit the look of the site and add to it without needing to delve into the PHP behind it. There is also a series of page caching scripts in place - most pages are cached for at least a short while, some for many weeks, depending on how often they are updated. Some parts of pages are not cached, and are placed within the cached pages at runtime.
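The general shape of that caching scheme can be sketched as follows. This is an illustrative Python sketch, not the site's actual PHP; the function names, the `{{basket}}` placeholder syntax, and the in-memory cache standing in for the real cache store are all assumptions.

```python
import time

# Hypothetical sketch: whole pages are cached with a per-page TTL, while
# uncached fragments are injected into the cached HTML at request time.
_cache = {}  # url -> (expiry_timestamp, rendered_html)

def render_page(url, ttl, render_fn):
    """Return cached HTML for `url`, regenerating it once `ttl` seconds pass."""
    now = time.time()
    entry = _cache.get(url)
    if entry is None or entry[0] < now:
        _cache[url] = (now + ttl, render_fn())
    return _cache[url][1]

def inject_fragments(html, fragments):
    """Replace placeholders like {{basket}} with freshly rendered fragments."""
    for name, render_fragment in fragments.items():
        html = html.replace("{{%s}}" % name, render_fragment())
    return html
```

The point of the split is that expensive page generation happens rarely, while the small per-user parts (a shopping basket, say) stay live on every request.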
One of the most interesting features to write was the "Play All" feature, which allows you to listen to all the samples listed on any page; it appears on almost every page of the site. To ensure that any link to a playlist always works and always produces the same playlist, each page tracks the samples it contains and generates a custom playlist for that specific series of titles, taking into account the user's display preferences (e.g. ordering by release date rather than artist name gives a different playlist). That data is saved and given a unique ID, so it can be accessed again later, even if the page itself has changed since then.
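One way to implement that stable-playlist idea is to derive the ID from the exact contents of the page. The sketch below is a hypothetical Python illustration (the real site used PHP and a database table rather than a dict), hashing the ordered sample list plus the display preferences so the same page state always maps to the same saved playlist.

```python
import hashlib
import json

# Stand-in for a database table of saved playlists: playlist_id -> sample IDs.
_saved_playlists = {}

def save_playlist(sample_ids, preferences):
    """Derive a stable ID from the ordered samples and display preferences,
    then persist the playlist under that ID for later retrieval."""
    key = json.dumps({"samples": sample_ids, "prefs": preferences}, sort_keys=True)
    playlist_id = hashlib.sha1(key.encode("utf-8")).hexdigest()[:12]
    _saved_playlists[playlist_id] = list(sample_ids)
    return playlist_id

def load_playlist(playlist_id):
    """Fetch a previously saved playlist, even if the source page has changed."""
    return _saved_playlists.get(playlist_id)
```

Because the preferences feed into the hash, the same titles ordered by release date and by artist name yield two distinct playlists, matching the behaviour described above.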
Another feature we added that went down well was the ability for users to see what other people who bought one title had also bought. Combined with artist and label release listings, this gave every customer a chance to explore the site in different ways and to discover new music they might not already have known about.
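The usual way to build an "also bought" feature is to count how often pairs of titles appear together in past orders. The sketch below is an illustrative Python version under that assumption; the function names and in-memory data structures are mine, not Juno's.

```python
from collections import Counter
from itertools import combinations

def build_copurchase_counts(orders):
    """Count, for each title, how often every other title appears in the
    same order. `orders` is a list of lists of title IDs."""
    counts = {}
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            counts.setdefault(a, Counter())[b] += 1
            counts.setdefault(b, Counter())[a] += 1
    return counts

def also_bought(counts, title, limit=5):
    """Return the titles most often bought alongside `title`."""
    return [t for t, _ in counts.get(title, Counter()).most_common(limit)]
```

In practice a batch job would precompute these counts from the order history so the per-page lookup stays cheap.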
Normalisation of data was an interesting dilemma too. One problem all online record stores face is how best to list artists and titles. With at least 30,000 records in stock at any one time, Juno's data entry becomes tricky - do you give the user the option of selecting an artist from a drop-down list (which might not contain the artist, or might contain them under an unusual spelling or ordering of names), or just give them a text box and let them enter the name manually? Manual entry is the quickest way, naturally, but it creates a problem of its own - it becomes quite tricky to list everything by a single artist, as each entry might be slightly different (even "DJ" as opposed to "D.J." becomes an issue).
The site therefore runs the data through a normalisation process, splitting compound artist names apart and detecting similar spellings of the same name.
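A much-simplified sketch of that kind of normalisation is below, again in illustrative Python rather than the site's PHP and stored procedures. The separator list and the canonical form are assumptions; the idea is just that compound credits are split into individual names, and each name is reduced to a form where variants like "DJ" and "D.J." compare equal.

```python
import re

# Separators commonly seen in manually entered artist credits:
# "A feat. B", "A vs B", "A & B", "A, B", "A / B".
_SEPARATORS = re.compile(r"\s+(?:feat\.?|ft\.?|vs\.?)\s+|\s*[,&/]\s*", re.I)

def split_artists(raw):
    """Split a manually entered credit string into individual artist names."""
    return [part.strip() for part in _SEPARATORS.split(raw) if part.strip()]

def canonical(name):
    """Reduce a name to a canonical key: lowercase, punctuation stripped,
    so minor spelling variants map to the same artist."""
    return re.sub(r"[^a-z0-9 ]", "", name.lower()).strip()
```

Listing everything by a single artist then becomes a matter of grouping titles by the canonical key rather than by the raw entered string.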
I am no longer with Juno - Dan Burzynski has taken over and has added some excellent new features - and I'm now with Propellernet, working closer to home. Working for Juno was a pleasure, though, and judging by the feedback from users, the new version of the site has gone down very well indeed.