Forget Web 2.0, I Bring You… the Semantic Web!

At the CLA Emerging Technologies Interest Group Pre-Conference, Mark Leggott presented a talk he titled “Library 2.0: Threads in the Tapestry.” If you have ever seen or heard Mark talk, you know that he enjoys using metaphors to organize his presentations. This time it was the “Lady and the Unicorn” tapestry, which covers the six senses.

The theme of the day was the next phase of Library 2.0, namely what is being called “the Semantic Web” or “Web 3.0.” The Wikipedia article on the semantic web is quite convoluted, but the main premise is a web that not only contains great content, but also stores that content in forms machines can understand and process in ways that are meaningful to humans.

More specifically, semantic web products mine the data of existing social software and use that data to draw links and connections to other articles. Take, for instance, Freebase, which is looking to provide rich information experiences by mashing up the Wikipedia database with detailed metadata and a variety of other services. The result is that if you search for, say, James Cameron, you can capture the links through which that person is known: the movies he has made, the people he is related to, restaurants he is said to favor, people who have criticized his work, and so on. You get a rich data experience where the web basically predicts the other things that may interest or entice you.
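
To make the idea concrete, here is a minimal sketch in TypeScript of what that kind of machine-readable, linked data might look like. The shapes and property names below are my own invention, loosely inspired by RDF-style triples; they are not Freebase’s actual schema.

```typescript
// Hypothetical linked-data shapes: every relationship is an explicit,
// machine-readable link between two entities.
interface Link {
  predicate: string; // the relationship, e.g. "directed"
  object: string;    // the entity on the other end of the link
}

interface Entity {
  id: string;
  name: string;
  links: Link[];
}

const jamesCameron: Entity = {
  id: "person/james_cameron",
  name: "James Cameron",
  links: [
    { predicate: "directed", object: "film/titanic" },
    { predicate: "directed", object: "film/the_terminator" },
  ],
};

// Because the relationships are explicit, a machine can traverse the graph
// and surface related entities without a human curating the results page.
function relatedEntities(entity: Entity, predicate: string): string[] {
  return entity.links
    .filter((link) => link.predicate === predicate)
    .map((link) => link.object);
}

console.log(relatedEntities(jamesCameron, "directed"));
// -> ["film/titanic", "film/the_terminator"]
```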

To see how far the semantic web could go for libraries, consider the following three (relatively) new technologies:

  1. Micropaper — visual output devices (i.e., monitors) that have the size and flexibility of paper.
  2. The Surface Computer — a multi-touch interface that could basically turn the mouse into a moose. I discussed other possibilities for this technology before.
  3. Photosynth — software that stitches large collections of photographs into a single, navigable three-dimensional scene; there’s more to be found in the TED talks presentation/demo.

So imagine this: a Micropaper monitor that uses surface computer technology for its interface. Right there, you have paper that can be handled in ways very similar to a book — and then some, because you could manipulate the text, zoom in and out, rotate items, and so on.

Then add Photosynth. You could conceivably have a new “book” that stores entire volumes belonging to any author. You could have it go audio and highlight the words as they are being spoken.

But let’s go further. You could have a scientific article with a footnote that is actually the entire cited article, with the quoted text highlighted. That means you could check for context in ways never heard of before!
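
As a toy illustration, here is what such a “live” footnote might look like as data: a hypothetical TypeScript shape, entirely my own invention, in which the note carries the full cited text plus the character range of the quote.

```typescript
// A hypothetical data model for a "live" footnote: instead of a page
// reference, the note embeds the entire cited article and marks the span
// that was quoted, so the reader can check context in place.
interface LiveFootnote {
  citedTitle: string;
  citedFullText: string;
  quoteStart: number; // character offset where the quoted passage begins
  quoteEnd: number;   // character offset where it ends
}

// Return the quote plus a window of surrounding text for context-checking.
function renderContext(note: LiveFootnote, window = 200): string {
  const from = Math.max(0, note.quoteStart - window);
  const to = Math.min(note.citedFullText.length, note.quoteEnd + window);
  return note.citedFullText.slice(from, to);
}
```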

Or how about reading The Hunchback of Notre-Dame with detailed information about the history of Victor Hugo and a complete tour of the cathedral sitting right there in your little paper-like monitor!

There’s a lot about this technology that is both exciting and scary for libraries. It’s like I get my mind blown just about every day!

SurfaRSS? What is web design in a Surface-based Computer World?

Now that Microsoft has released a Surface-based computer, I thought it was time to consider how people would actually use the thing. Sure, the examples of moving pictures around and finger painting are kind of neat, but how does it get people to information?

Some of the things that immediately came to mind include:

  • AJAX or some derivative will rule because people will want to be able to move things around.
  • One improvement will have to be the ability to [easily] rotate and resize objects and pan around a page (see the sketch after this list).
  • Thoughts about multiple-user access — how do you create web/internet spaces that multiple people will use?
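
As a rough sketch of the rotate-and-resize point, here is what a two-finger gesture handler might look like in a modern browser, using the standard Pointer Events API. The element id and the gesture math are my own assumptions, not anything tied to Microsoft’s Surface.

```typescript
// A minimal two-finger rotate-and-resize sketch using Pointer Events.
// (A real implementation would also set CSS "touch-action: none".)
const item = document.getElementById("news-item")!; // hypothetical element
const pointers = new Map<number, { x: number; y: number }>();
let angle = 0; // accumulated rotation in radians
let scale = 1; // accumulated zoom factor

item.addEventListener("pointerdown", (e: PointerEvent) => {
  pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
});

item.addEventListener("pointermove", (e: PointerEvent) => {
  if (!pointers.has(e.pointerId) || pointers.size !== 2) return;
  const [a, b] = [...pointers.values()];
  const before = { dx: b.x - a.x, dy: b.y - a.y };
  pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
  const [a2, b2] = [...pointers.values()];
  const after = { dx: b2.x - a2.x, dy: b2.y - a2.y };

  // The change in the angle between the two fingers rotates the object;
  // the change in the distance between them resizes it.
  angle += Math.atan2(after.dy, after.dx) - Math.atan2(before.dy, before.dx);
  scale *= Math.hypot(after.dx, after.dy) / Math.hypot(before.dx, before.dy);
  item.style.transform = `rotate(${angle}rad) scale(${scale})`;
});

item.addEventListener("pointerup", (e: PointerEvent) => {
  pointers.delete(e.pointerId);
});
```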

The actual implications of a surface-based design are interesting, but here is my addition to the conversation: what about a kind of RSS-feed “lazy Susan”? Here is a quick draft of a possible design.
SurfaRSS

So: you have that coffee table going and you are reading the news with your friends. You all want to share the same news, and here is a way to do it with the new interface. Maybe you would want to combine this with my Ajax-based federated search tool: you search on one topic, it pulls in a whole lot of news, and a bunch of folks around the table can browse, search, and discover together.
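
For what it’s worth, the “lazy Susan” itself is mostly geometry. Here is a minimal sketch (item shape, radius, and element handling all assumed for illustration) of laying feed items out in a circle so readers seated around the table each face some of the news:

```typescript
// Lay RSS items out in a ring so each headline faces outward, toward
// the nearest reader at the table. All names here are hypothetical.
interface FeedItem {
  title: string;
  element: HTMLElement;
}

function layoutLazySusan(items: FeedItem[], radius = 300): void {
  items.forEach((item, i) => {
    const theta = (2 * Math.PI * i) / items.length; // even spacing
    const x = radius * Math.cos(theta);
    const y = radius * Math.sin(theta);
    // Rotate each headline so its top edge points away from the center.
    item.element.style.transform =
      `translate(${x}px, ${y}px) rotate(${theta + Math.PI / 2}rad)`;
  });
}
```

Spinning the whole lazy Susan would just mean adding a shared offset to every angle, something the two-finger gesture sketched earlier could drive.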

This sounds like a great opportunity for libraries if there ever was one. John Blyberg seems to agree. I think libraries should start imagining right now what public internet access will look like over the next five years.

A Conversation with a Friend

The best conversations I have always seem to happen in North End Halifax when I go to visit my old roommate, best man, and great friend, Greg. Greg is an artist in many ways: he’s a musician, an architect, a visual artist, and a singer; he has an amazing backyard garden, is a great cook, and always hosts the best parties in town. This time he was describing what he saw as a weakness in the current world climate.

“There are no saints anymore,” he said. “There’s no one out there who makes me think he or she is going to capture the world’s imagination for the next 40 years. There are no Madonnas, Elton Johns, or Beatles out there right now. There is a lot of talent, but no one with the attention span to go out there and bring something new to the world.” There was more, alluding to the democratization of art and the primacy of the amateur in the Web 2.0 world.

I should also say that Greg was not complaining or lamenting; he was merely making an observation. And while Greg is older than I am, and I am no spring chicken, he is onto the music world in ways that most 16-year-olds are not. Greg is the first to notice the latest, hippest pop artist coming out. He is a Maven in that world, and his view should be taken as more than a curmudgeon’s wish for the good ole days. He also wasn’t saying that YouTube and other art-sharing sites suck. His view is probably not that the products on these sites suck, but that people are going online because the products in the mainstream media suck.

It seems to me that the future may be pretty uninspiring for artists if we continue down the track we are on. While the internet world is full of people who are willing to do crazy things, the desire to get your project up first is really killing a sense of, well, religion about our culture.

It was a good conversation and an interesting view coming from a very smart person.

My Entry in Meredith’s “Alternate Bookcover” contest

Emphasis_building (originally uploaded by Greebie)

I’ve decided to join in on the alternative book cover contest for Meredith Farkas’ Social Software in Libraries.

I’m no artist, that’s for sure. But Paint.net did plenty to help me get over that.

This is a major coup for me, because I work alongside a bunch of designer-folks who always make fun of my design skills. This isn’t going to make them eat crow exactly — maybe a mosquito; a june bug, max.

Anyway, get Paint.net and try your own book cover. But don’t be too good, because I want to win at least one of the signed copies. 🙂

Ideas: Federated Search à la Gadget?

I have complained about federated search tools before. From my perspective, what libraries [and their customers] want is simple tool integration. You have (say) 10 tools and you want them all easily accessible from your website. Why not? You pay for them; why shouldn’t they be easy to access?

Well, it came to me that federated search could be even better. Why not behave like Google and offer the ability to choose and remove searches from your federated search? If you are a sociologist, are you apt to care a whole lot about what’s in Biological Abstracts?

Well, here is my idea — the wireframe is below. The customer gets to choose the searches he/she wants and can keep them for future reference. Of course, if the search suggests that another database might be appropriate, then the recommendation appears, but the customer is in control the whole time.

Federated Search Wireframe

The main differences are these: 1) you can add and remove searches from the federated search; 2) you can edit or filter each search as desired; 3) the system presents results and then recommends other sources, but the recommendations never get in the way of the results. Of course, you could probably also set up a few RSS feeds and other extras to go along with this product. The details are pretty moot in my view; the point is that federated search ought to be at the stage where it offers this level of customization.
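
To show how little machinery the idea actually needs, here is a minimal sketch of a customer-controlled federated search. Everything here (the source model, the endpoint URLs, the assumption that each database returns a JSON array of titles) is invented for illustration; no vendor’s real API looks like this.

```typescript
// A hypothetical model of a search source the customer can toggle.
interface SearchSource {
  id: string;       // e.g. "sociological-abstracts"
  label: string;
  endpoint: string; // imaginary search URL
  enabled: boolean; // the customer adds or removes the source
}

interface SearchResult {
  sourceId: string;
  title: string;
}

// Fan the query out, in parallel, to every source the customer has kept.
async function runFederatedSearch(
  sources: SearchSource[],
  query: string,
): Promise<SearchResult[]> {
  const active = sources.filter((s) => s.enabled);
  const perSource = await Promise.all(
    active.map(async (s) => {
      try {
        const res = await fetch(`${s.endpoint}?q=${encodeURIComponent(query)}`);
        const titles: string[] = await res.json(); // assumed response shape
        return titles.map((title) => ({ sourceId: s.id, title }));
      } catch {
        return []; // one broken database shouldn't sink the whole search
      }
    }),
  );
  return perSource.flat();
}

// Recommendations sit beside the results, never in front of them:
// sources the customer has turned off are merely suggested.
function recommendSources(sources: SearchSource[]): SearchSource[] {
  return sources.filter((s) => !s.enabled);
}
```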

There should be an ability to edit the searches for broader or narrower results. Perhaps you could also search just certain formats (this is particularly relevant to public libraries, whose clients may want only “DVDs” or “CDs” when they search a fiction title). Other filters could be good too. For example, how about “just what is in a particular branch”?

If the customer wants a Google search to go with this, that ought to be possible too! The cool thing about this product is that you could measure what people choose to add to their page. That would say lots about what to order for your collection (versus price, of course).

I really think this is feasible, partly because OpenID is beginning to rumble in the tech world. If there were an OpenID system for your databases, then accessing them ought to be much easier. This product might favor Open Source journals as well, since the free ones would be the easiest to hook up to such a system. But that’s a good thing. If the free stuff works for the project, why shouldn’t people approach that stuff first? If the paid-for stuff is so important, it should be worth the extra time to access it.

Anyway, this is just an idea right now. I don’t know what sort of patents, permissions, coding knowledge, etc. you would need to do this sort of thing, but I do contend that this kind of search would do better than what federated search does now.

Web Architecture Consensus in 3 meetings

Well, three meetings may have sufficed only because I have a great development team, but I am so proud of the team that I thought I’d blog the process we went through in point form. It reads, sort of, like Confucius’s Analects or The Art of War.

I think the key was to stop the meeting periodically to storyboard a customer’s experience. A breakthrough is inevitable if your team can see the website through the customer’s eyes.

And if that method does not help, you might want to check out Dorothea Salo’s excellent “The Dreaded Redesign” post as another approach.

Web Architecture trumps Web 2.0

I am redesigning a website where I work and need to keep my technolust in check. For one, we are going to use Joomla, and that product has just about everything you could ask for and more — RSS feeds, wikis, blog capability, community space, discussion groups, multimedia sharing, and lots of extras.

Well, my bosses sat me down and insisted I focus on architecture and I did.

I am really close to presenting an architecture design and, thanks to a great website team and a lot of “consensus-building” (aka heated arguments), I think we have it down pat.

Now, having gone through this process and knowing that Michael Stephens’s Web 2.0 & Libraries: Best Practices for Social Software is out, I just wanted to offer my advice on the hot and the helpful.

I know that it is important for libraries to think outside the box, and I think Web/Library 2.0 is very important. I also know that Michael Stephens and other Library 2.0 advocates are not proposing that you sacrifice all information architecture in order to provide Library 2.0 services. So, really, this is a contribution to the Web/Library 2.0 discussion, not an anti-2.0 post.

I think that Architecture trumps Web 2.0. What I mean is that, if you have a website with cruddy architecture and no Web 2.0 capability, you should always put your resources toward architecture first, and then the rest toward 2.0. Here are the reasons why:

  • Web 2.0 power with no product is like pedalling hard with your bike chain unhitched

The following are easy to imagine:

  • An RSS feed or podcast containing information no one cares about.
  • A wiki that becomes a link farm for spammers.
  • A blog without regular updates or clear headlines, and with unreadable content.
  • A site so pointless that no one will bother to tag it.

I would be surprised to hear anyone argue that any of the above are useful to library customers.

Architecture insists that you look at your website in terms of its core products. Sometimes the core products are just simply library hours and locations. If that’s all you have to offer, then a single page with that information is enough. Web 2.0 ought to make the path to core products easier. It will do that only if you have a solid architecture for your site.

  • “Push” requires at least one “pull” transaction

OK, fine. Once a person has your feed, they may never need to visit your site again. But they still need to visit the site that first time. What if…

  • You were sure there was an RSS feed on the site somewhere; you just didn’t know where.
  • It took 20 clicks, a login, and extra software just to find a site’s wiki.
  • You were invited to social-tag a page, but didn’t know what the heck it was about.

If you have an un-navigable site, Web 2.0 will probably only make it worse.

If you have an un-navigable site, people probably won’t bother trying anyway.

You can’t “push” anything to your customers that doesn’t already have an inherent “pull.”

  • Sometimes Web 2.0 is a hardware advancement in the guise of a web-based enhancement

Content sharing is a good example. I barely use Flickr to full capacity because I don’t own a digital camera. I only watch YouTube (rather than post to it) because I don’t own a DV cam, and even my dedication to watching is pretty lame. Some Web 2.0 doesn’t reach its full power when the hardware is missing, and I daresay that many key users of library websites cannot afford the technology.

In many ways, Flickr and YouTube are more about the ease of content creation than about the online service. Can’t fit a DV cam, editing software, and maybe an external hard drive into your budget? Then maybe video sharing via YouTube is not for you. You’d be much better off making it easier for people to find out when your programs are on.

Either way, YouTube and Flickr are already full of wonderful stuff. Can your product be as compelling or edgy as Mr. Pregnant? If not, then your marketing strategy might be lost, except on librarians like myself who are interested in how libraries are using YouTube. That’s not customer-focused to me. It’ll get my attention, but I am much more dorky than your customers.

  • Most people don’t know they are using Web 2.0

I did a staff survey about the current website and asked whether they used RSS feeds. Most said “no.” Then I asked them to comment on their favorite websites, and they said things like “My Yahoo” or “Google’s personalized home page.”

Most people don’t care if you have Web 2.0 capability. They want easy access to the information and fun stuff they desire. Web 2.0 should almost be invisible to your customers.

  • Web 2.0 can appear confusing and/or unpolished

In my view, libraries are about experience. Look at the Seattle Central Library. You walk into a cool library and there is awe. There is “learning by osmosis.” There is “I am free to sit down and read and no one will bug me about it.” There is “man, these librarians are friendly.”

The website should do the same thing. Social software is cool for a variety of reasons, but the “library” experience can be lost on some people if the architecture is not there.

  • Web 2.0 is not always as easy as some say

This kind of goes back to the Flickr/YouTube statement above. If you barely understand the internet, you will have an even harder time understanding the syntax of a wiki. If you do not have a fast computer with broadband Internet, YouTube is pretty much lost on you. If you are a person who is apt to tap the monitor with the mouse when told to “double-click the My Computer icon,” then Web 2.0 is going to be lost on you.

Has anyone tried reading an RSS feed using JAWS? I haven’t. Maybe I should take a look before I add one.

Again, I am not saying that Web 2.0 offers nothing of value to a website. On the contrary, I believe there is unlimited potential for Web 2.0 in library websites, and there are lots of great ideas for keeping people up to date on the latest and greatest. I just think that sitting down and thinking about how your site is designed, and how it could be designed better, is much, much more important.

Happy Ugly, Relaxed Grammar and My Greatest Pet Peeve: Over-Zealous Pet Peevers

An interesting opinion piece from Ze Frank about ugly MySpace pages (via Deborah Schultz’s blog).

I’ve always thought some design circles were too hard on amateurs, with their rules about how design x or y works. I am a proponent of the “you ought to know the rules before you break them” school of design (well, of everything artistic), but I also find a complete dismissal of the ugly a little short-sighted.

The same applies to editing and language. Back in my early Internet days, I enjoyed composing (bad) poetry on the newsgroup rec.arts.poems (aka rap). These days, like most newsgroups, rap is a cesspool of spammers and net.kooks, but back in the day people would experiment with poetry there, and many turned into very good poets.

I remember that sometimes, after submitting a poem, I’d be harshly criticized for minor grammar errors. I was even criticized on the CanLit newsgroup by George Bowering for using “reference” as a verb. I was therefore somewhat vindicated by this article of common “errors” in English grammar that are not really errors.

Despite all the hype around Eats, Shoots & Leaves (link to my favorite review, in the New Yorker), I am very uptight about uptight aesthetes. Invoking the “Eats, Shoots and Leaves” principle, a grammarian went so far one day as to suggest that an error in punctuation “changes meaning.” I suggested that this was preposterous — “meaning” is only affected if and when someone misinterprets a statement due to the slip in grammar, and that, in my view, is an extremely rare occurrence. In most cases, punctuation errors are overcome by the context of an article. No one thinks a panda “shoots.” The effect on the reader who recognizes the error is a temporary break from the suspended reality of the composition: instead of thinking that they are receiving “information,” they immediately become aware of the text. Most people would pass by the error altogether, because context would help them assume the correct phrase.

Sure, in the world of military communication where decisions have to be made split-second, and the consequences of those decisions are dire, you may need tight controls. But on MySpace? Or a poetry newsgroup?

And sometimes I think some of the “rules” would be better off left to history anyway. For example, take Latinized plurals like “cacti” for cactus or “fungi” for fungus. Why oh why do we need to preserve these foolish rules? Why not “cactuses”? It sounds funny at first, but we’d get used to it. Why make the language more complicated for others who want to learn it, just to preserve the fact that English derived in part from an old, now-defunct language?

Now consider what it means to allow people to break the rules. First, breaking the rules draws attention to the rule itself. When human habit deviates from the “rules,” one ought to think about why the rules exist in the first place. A first thought is: for whom do these rules exist? Proper grammar is a cultural norm that enables those with so-called “proper grammar” to lower the status of others. But if you appear in a bar among non-grammarians and refuse to end a sentence with a preposition, the tables turn very quickly — that’s because the norms of grammar change.

Another consideration is the function of the rule — what is it trying to achieve? For instance, commas can help separate items in a list to avoid confusion. Maybe there is a better way to achieve the same function without using the comma? We’d never know unless there were people out there making grammar “mistakes” in the first place.

The same goes for ugly. Maybe we need people who ignore the rules (or maybe we should call them conventions), if only to get us to “step back” and understand why the rules were made in the first place. At first we get “ugly,” but later on we get “innovative,” “re-invented,” and “provocative.”

And sometimes technological or social change makes conventions redundant. Like what XML, folksonomies, and Google appear to be doing to MARC coding, subject authorities, and the OPAC.

I am not saying that conventions should not exist at some level or another. Tradition is a good way to solve problems. We should not try to reinvent the wheel all the time, and we certainly don’t want everyone spending their time reinventing language. But accidents have a beauty of their own, and we should relax our tendency to stifle people’s bad writing. In all of us there is a bad artist/poet/writer just waiting to come out, and maybe we ought to let it out once in a while.