At the CLA Emerging Technologies Interest Group Pre-Conference, Mark Leggott gave a presentation titled “Library 2.0: Threads in the Tapestry.” If you have ever seen or heard Mark talk, you know that he enjoys using metaphors to organize his presentations. This time it was the “Lady and the Unicorn” tapestry, which depicts the six senses.
The theme of the day was the next phase of Library 2.0, namely what is being called the “Semantic Web” or “Web 3.0.” The Wikipedia article on the Semantic Web is fairly convoluted, but the main premise is a web that not only contains great content, but also structures that content so it can be understood and processed by machines in ways that are meaningful to humans.
More specifically, semantic web products mine the data of existing social software and use that data to draw links and connections to other resources. Take, for instance, Freebase, which is looking to provide rich information experiences by mashing up the Wikipedia database with detailed metadata and a variety of other services. The result is that if you search for, say, James Cameron, you may be able to capture the links through which that person is known: the movies he made, the people he is related to, restaurants he is said to favor, people who have criticized his works, and so on. You get a rich data experience where the web basically predicts the other things that may interest or entice you.
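The idea underneath all of this is that facts get stored as machine-readable subject-predicate-object triples (the model behind RDF), which software can then traverse to surface related items. Here is a minimal toy sketch of that idea in Python; the entities and facts are hypothetical stand-ins, not actual Freebase data:

```python
# Toy triple store: each fact is a (subject, predicate, object) tuple.
# The specific facts below are made up for illustration.
triples = [
    ("James Cameron", "directed", "Titanic"),
    ("James Cameron", "directed", "The Abyss"),
    ("Titanic", "starred", "Kate Winslet"),
    ("Kate Winslet", "starred_in", "Eternal Sunshine of the Spotless Mind"),
]

def related(entity):
    """Collect everything directly linked to an entity, in either direction."""
    links = {}
    for s, p, o in triples:
        if s == entity:
            links.setdefault(p, []).append(o)
        if o == entity:
            # Mark reverse links so "Titanic starred Kate Winslet"
            # shows up when you look up Kate Winslet.
            links.setdefault(p + " (of)", []).append(s)
    return links

print(related("James Cameron"))
# → {'directed': ['Titanic', 'The Abyss']}
```

A real system would hold millions of such triples and rank or cluster the results, but even this tiny version shows how a search for one entity can fan out into movies, people, and anything else the graph connects.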
To see how far the semantic web could take libraries, consider the following three (relatively) new technologies:
- Micropaper — visual output devices (i.e. monitors) that have the size and flexibility of paper.
- The Surface Computer — a multi-touch interface that could basically turn the mouse into a moose. I have discussed other possibilities for this technology before.
- Photosynth — software that stitches collections of photos into navigable 3D scenes; there’s more to be found in the TED talks presentation/demo.
So imagine this: a Micropaper monitor that uses surface computer technology for interfacing. Right there, you have paper that can be handled in ways very similar to a book — and then some, because you could manipulate the text, zoom in and out, rotate items, and so on.
Then add Photosynth. You could conceivably have a new “book” that stores entire volumes by any author. You could have it read aloud and highlight the words as they are spoken.
But let’s go further. You could have a scientific article with a footnote that is actually the entire cited article, with the quoted text highlighted. That means you could check for context in ways never possible before!
Or how about reading The Hunchback of Notre-Dame with detailed information about the life of Victor Hugo and a complete tour of the cathedral sitting right there in your little paper-like monitor!
There’s a lot about this technology that is both exciting and scary for libraries. It’s like I get my mind blown just about every day!