Discovering an object in a collection of millions

A product design case study
7 minute read

The problem with digital archives

With over 4.5 million objects in its permanent collection, the V&A is the world’s largest museum of decorative arts and design. The museum covers 12.5 acres, houses 145 galleries and a collection spanning 5,000 years of art across Europe, North America, Asia and North Africa, and holds one of the largest collections of Islamic art in the Western world.

Although there's an overwhelming number of objects, it's relatively easy to explore and discover items in the physical space, simply by wandering around, following signs or gravitating towards places where other people have clustered.

The same is not true of the digital space, however. With such a huge archive to explore, where do you start? If you don't have a specific search term or topic in mind, how do you discover new items, the most uncommon and the most unexpected?

The collection as a digital experience

As part of its digital transformation, the V&A has already digitised over 1.5 million objects and continues to add more every day. Its online archive lets you search by object, but you need to know what you are looking for. Could there be a way to wander around the archive and introduce that element of surprise and discovery that browsing the physical space brings?

Building on this idea, and working with product designer Gala Jover, I developed a design for a tab-based Chrome extension that retrieves a random object from the V&A collection and displays it each time you open a new tab.

From clunky code to MV[L]P

As a proof of concept, we built a basic prototype using HTML and JavaScript. Very simply, it connected to the V&A museum’s Application Programming Interface (API), sent a request to the database and pulled out an image and a title that we could then display on the screen. With a few tweaks and some reading of the API docs, we were soon able to bring back an artist name and date too.

Our first API-connected prototype
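
For readers curious what that request looks like, here is a minimal sketch in the spirit of that first prototype. It assumes the V&A's current v2 collections API and the field names its documentation describes at the time of writing, so treat the endpoint, parameters and field names as assumptions to check against the API docs rather than a definitive implementation.

  // A sketch of the first prototype's data fetch. Endpoint, parameters and
  // field names follow the V&A's v2 API docs at the time of writing; treat
  // them as assumptions and check the API documentation for the current
  // response shape.
  async function fetchRandomObject() {
    // Ask for a single result from a random page so each call is a surprise.
    const page = Math.floor(Math.random() * 100) + 1;
    const response = await fetch(
      `https://api.vam.ac.uk/v2/objects/search?page_size=1&page=${page}`
    );
    const data = await response.json();
    const record = data.records[0];

    return {
      id: record.systemNumber,                       // stable object identifier
      title: record._primaryTitle || "Untitled",
      maker: record._primaryMaker && record._primaryMaker.name,
      date: record._primaryDate,
      // The API exposes images over IIIF; build a medium-sized image URL.
      image: `https://framemark.vam.ac.uk/collections/${record._primaryImageId}/full/!600,600/0/default.jpg`,
    };
  }

  // Usage: fetch one object and show its details.
  fetchRandomObject().then(obj => console.log(obj.title, obj.maker, obj.date));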

Learning and iterating

It turned out to be quite easy to package our code as a basic Chrome extension that could be installed in Developer mode, and so we ran this early, clunky version on our own desktops for the next few weeks.
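
For context, packaging a page as a new-tab extension takes little more than a manifest that points Chrome at your HTML file. Below is a stripped-down sketch using today's Manifest V3 (ours began life on an earlier manifest format); the name and file names are placeholders, and real JSON takes no comments, so the notes live here instead.

  {
    "manifest_version": 3,
    "name": "V&A new tab",
    "version": "0.1",
    "description": "Shows a random object from the V&A collection in every new tab.",
    "chrome_url_overrides": {
      "newtab": "newtab.html"
    },
    "permissions": ["storage"]
  }

With a folder containing that manifest and the newtab.html page, enabling Developer mode on chrome://extensions and choosing "Load unpacked" is enough to run it locally, which is exactly how we tested this early version.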

Seeing a new image with each new tab was exciting yet not intrusive, because it didn't interrupt the regular flow of using the browser: it didn't slow the opening of a tab, and whatever came up was dismissed as soon as we entered a URL into the address bar, as usual. Most importantly, as we’d hoped, we were consistently discovering new and unexpected items from the collection. There were a few things lacking, though:

  • We’d often see the same object more than once
  • We wanted to know more about the objects that came up
  • We realised it was important to be able to share the really interesting ones, and to save them for later

Designing the interface

In our second version we addressed these issues through code, and at the same time began to develop the visual design:

  • We introduced a user setting for search terms, so users would be able to limit the randomisation to their own parameters (see the sketch after this list)
  • A flexible layout would allow for different title lengths and varying types of image
  • A full object description and item details should be available on-demand
  • A Pinterest button would provide a way to save objects for later
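
To give a flavour of how we handled the search-term setting and the repeat-object problem from the earlier list, here is a hedged sketch using chrome.storage: a stored search term to steer the randomisation, plus a short "recently seen" list to stop the same object coming straight back. The key names, list length and re-roll count are illustrative rather than our exact implementation.

  // A sketch of the settings and de-duplication logic, using chrome.storage
  // (promise form, available in Manifest V3).
  const SEEN_LIMIT = 200;

  async function getSettings() {
    const { searchTerm = "", seen = [] } = await chrome.storage.local.get([
      "searchTerm",
      "seen",
    ]);
    return { searchTerm, seen };
  }

  async function rememberObject(id) {
    const { seen } = await getSettings();
    await chrome.storage.local.set({ seen: [id, ...seen].slice(0, SEEN_LIMIT) });
  }

  async function pickFreshObject() {
    const { searchTerm, seen } = await getSettings();
    // Re-roll a few times until we find something we haven't shown recently.
    for (let attempt = 0; attempt < 5; attempt++) {
      // fetchRandomObject is the earlier sketch, assumed extended to pass the
      // user's term as the API's q search parameter.
      const obj = await fetchRandomObject(searchTerm);
      if (!seen.includes(obj.id)) {
        await rememberObject(obj.id);
        return obj;
      }
    }
    // Give up gracefully rather than loop forever on a narrow search term.
    return fetchRandomObject(searchTerm);
  }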

Designing around random text and images was challenging: we had to deal with titles ranging from one word to thirty, and images ranging from high-quality studio shots to badly-cropped snaps. To get around this we created rules in the code that checked for certain parameters and switched sizing accordingly, and developed an aesthetic that would be tolerant of all the different photographic styles coming back.
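
In practice those rules amounted to a handful of checks like the ones below; the thresholds and class names here are illustrative stand-ins rather than our production values.

  // A sketch of the layout rules: pick CSS classes based on what actually
  // came back from the API.
  function layoutFor(obj, imageEl) {
    const classes = [];

    // Very long titles get a smaller type size and a wider text column.
    const wordCount = (obj.title || "").split(/\s+/).filter(Boolean).length;
    classes.push(wordCount > 12 ? "title--long" : "title--short");

    // Treat low-resolution, landscape and portrait images differently.
    if (imageEl.naturalWidth < 600 || imageEl.naturalHeight < 600) {
      classes.push("image--lowres");     // don't blow up badly-cropped snaps
    } else if (imageEl.naturalWidth >= imageEl.naturalHeight) {
      classes.push("image--landscape");
    } else {
      classes.push("image--portrait");
    }

    return classes;
  }

  // Usage, once the image has loaded:
  // imageEl.addEventListener("load", () =>
  //   container.classList.add(...layoutFor(obj, imageEl)));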

Sketches and early high-fidelity designs

Ready for launch

The new version was delivering everything we’d hoped for our MVP:

  • The extension brings back a huge range of surprising and curious objects
  • It frequently returns objects that are in storage at the V&A, therefore going beyond the public collection and uncovering things that wouldn't otherwise be seen
  • The Pinterest button makes it easy to share objects with a link back to the main collections archive (a sketch of the link follows this list), and lots of what we pinned during testing has already been re-pinned and is making its way around the web
  • The analytics we’re getting from the extension are helping us to find out which objects grab people’s interest the most, and which get shared to Pinterest
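
For the Pinterest button mentioned above, the simplest wiring is a link built from Pinterest's pin-create URL, pointing back at the object's page in the V&A archive. A sketch of that idea follows; the collections page pattern is an assumption, so check a real object page for the exact format.

  // A sketch of the Pinterest save link. The pin-create URL and its url /
  // media / description parameters are Pinterest's documented share format;
  // the V&A collections page pattern below is an assumption.
  function pinterestShareUrl(obj) {
    const objectPage = `https://collections.vam.ac.uk/item/${obj.id}`;
    const params = new URLSearchParams({
      url: objectPage,
      media: obj.image,
      description: `${obj.title}, ${obj.maker || "unknown maker"} (V&A collection)`,
    });
    return `https://www.pinterest.com/pin/create/button/?${params}`;
  }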

Reaching out to the V&A

After a good deal of in-browser refinement, we shared the new prototype with friends and family. The feedback was really positive, so, feeling emboldened, we reached out to the digital team at the V&A, who were excited by the product and invited us in to talk them through it.

We met with the team a couple more times over the next few months, tweaking the plug-in based on their feedback and that of others, and eventually launched the extension on the Chrome Web Store 🚀

The final working extension and some of the curiosities it brings up

What's next?

We've got lots of new ideas for Cole: we plan to introduce user settings for date range and other restrictions to the randomisation, additional ways of sharing objects, and an updated design. Please watch for updates on the Web Store.

Alex Charlton is a designer specialising in service, user experience and interaction design
