Sunday, March 29, 2009
Michael Stephens in Vancouver, BC
Michael Stephens is one of my favourite librarians. One of the most enjoyable things about this talk is hearing how libraries shape a person's memories and life. This is a very honest, intimate discussion of Stephens' love of libraries. He's coming to Vancouver for the upcoming British Columbia Library Association 2009 conference. I'm looking forward to it.
Monday, March 23, 2009
A Time To Be An Information Professional

And thus is the profession of librarianship. Perhaps we will be known by another title, another name, as some of us already are known as metadata managers, taxonomists, information architects, and knowledge managers. Library schools have evolved into I-Schools. Who knows, LIS might evolve to the point where it is no longer recognizable to us -- as the apothecary is no longer recognizable to the pharmacist. But the art of searching, sharing knowledge, collecting, organizing, and disseminating information, in whatever shape and form it may take, will never change. And hence, whatever we may become, we will never change.
Saturday, March 14, 2009
The Search Continues . . . .

Search algorithms today are largely based on a common paradigm: link analysis. But they've ignored a mother lode of data: The network.
Nicely said. Although there are a multitude of variations in search algorithms, architectures, and tweaks, search technology has been based largely on three canonical approaches. In a nutshell, here they are:
1) Human-powered directories - Hierarchically organized into taxonomies (e.g. Yahoo!)
2) Crawler-based index - Generates results largely prioritized by link analysis. (e.g. Google; see the sketch below the list)
3) Collaborative tagging - Users tag pages with keywords so that future searchers can find them.
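To make the second approach concrete, here is a minimal sketch of PageRank-style link analysis in Python. The three-page link graph is invented, and real engines layer many more signals on top of this basic idea:

```python
# Minimal PageRank sketch: scores pages by incoming links,
# weighted by the rank of the pages that link to them.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            # Each page passes its rank, damped, in equal shares to its targets.
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: pages that attract more links rank higher.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))  # "c" scores highest: both "a" and "b" link to it
```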
However, these three options still fail to prevent click fraud, and they also leave content in the Deep Web unreachable. Weinman proposes network service providers as a fourth option: using data and metadata associated with the actual network transport of Web content (HTML pages, documents, spreadsheets, almost anything) to replace and/or augment traditional Web crawlers, improve the relevance and currency of search results ranking, and reduce click fraud. A network service provider could better determine aggregate surfing behaviour and hold times at sites or pages, in a way that is sensitive to the peculiarities of browser preferences and works regardless of whether a search engine is used.
Weinman's proposal is an interesting deviation from the thinking of Semantic Web enthusiasts. It throws a quirk into the speculation about the future of Web search technology. And so the search continues . . .
Monday, March 09, 2009
Searching Search Like a Yandex

What is interesting is that Yandex's search algorithm is rooted in the highly inflected and very peculiar Russian language. Words can take on some 20 different endings to indicate their relationship to one another. As with many other non-English languages, this inflection makes Russian precise, but it makes search extremely difficult. Google fetches the exact word combination you enter into the search bar, leaving out the slightly different forms that mean similar things. Yandex, however, is unique in that it does catch the inflection. Fortune has written an interesting article on Yandex, and my favourite part is its examination of the unique features of this Russian search giant:
While some of its services are similar to offerings available in the U.S. (blog rankings, online banking), it also has developed some applications that only Russians can enjoy, such as an image search engine that eliminates repeated images, a portrait filter that ferrets out faces in an image search, and a real-time traffic report that taps into users' roving cellphone signals to monitor how quickly people are moving through crowded roads in more than a dozen Russian cities.
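The inflection problem is easy to see in miniature. Here is a toy Python sketch; the suffix list and documents are invented stand-ins, and the real morphological analysis Yandex performs is far more sophisticated:

```python
# Toy illustration of inflection in search. A naive exact-match engine
# treats each Russian case ending as a different word; stripping a few
# common endings lets inflected forms match on a shared stem.
# (A deliberately crude stand-in for real morphological analysis.)

SUFFIXES = ["ами", "ах", "ам", "ов", "ой", "а", "е", "и", "у", "ы"]  # tiny sample

def crude_stem(word):
    # Try longer endings first so "ами" wins over "и".
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

documents = ["книга", "книги", "книгу", "журнал"]  # forms of "book", plus "magazine"

query = "книгами"  # "books" in the instrumental case
exact_hits = [d for d in documents if d == query]
stem_hits = [d for d in documents if crude_stem(d) == crude_stem(query)]

print(exact_hits)  # [] -- exact match misses every inflected form
print(stem_hits)   # ['книга', 'книги', 'книгу'] -- stemming catches them
```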
Thursday, March 05, 2009
BBC's Semantic Web

considering how best to build websites we’d recommend you throw out the Photoshop and embrace Domain Driven Design and the Linked Data approach every time. Even if you never intend to publish RDF it just works. The longer term aim of this work is to not only expose BBC data but to ensure that it is contextually linked to the wider web.
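For anyone wondering what publishing Linked Data actually looks like, here is a minimal sketch using Python's rdflib library. The namespace, programme identifier, and properties are made up for illustration; the BBC's real programme ontology differs:

```python
# Minimal Linked Data sketch with rdflib: describe a programme as RDF
# triples so that machines, not just browsers, can follow the links.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace; the BBC's actual programme ontology differs.
EX = Namespace("http://example.org/programmes/")

g = Graph()
show = URIRef(EX["b0074g5n"])          # made-up programme identifier
g.add((show, RDF.type, EX.Programme))
g.add((show, RDFS.label, Literal("Nature's Great Events")))
g.add((show, EX.broadcastOn, EX["bbc_one"]))  # a link, not just a string

print(g.serialize(format="turtle"))
```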
Monday, February 23, 2009
Shame on You Wall Street Journal

It is regrettable. Our reporters do have access to multiple databases including Factiva, and this migration to digital databases, as you know, has been happening for many years.
Sure. Good luck having your reporters spend up to ten times as long finding information that a trained information professional could obtain for you in a fraction of the time. A librarian is like the glue that holds the house together. You can only go so far and so long without a librarian's information retrieval skills before the infrastructure cracks and crumbles. Particularly in our emerging Web 2.0 world of social media and open access resources, can a company survive alone without expert information and knowledge management? Best of luck, Wall Street Journal.
Saturday, February 21, 2009
Video Sharing for Librarians
I recently presented at TOTS. What is video sharing? Why should we care? How can it be of use for information professionals? What are some issues for us to consider? Let's take a look together.
Monday, February 16, 2009
Who Video Shares? Barack Obama Does!
Who uses Web 2.0 to its fullest capacity? Barack Obama does. The President posts regularly to Vimeo. Vimeo is different in that it offers high-definition content. On October 17, 2007, Vimeo announced support for high-definition playback in 1280x720 (720p), becoming the first video-sharing site to support consumer HD.
Wednesday, February 11, 2009
Mashups at PSP 2009
Thursday, January 29, 2009
Is YouTube The New Search?

“I found some videos that gave me pretty good information about how it mates, how it survives, what it eats,” Tyler said. Similarly, when Tyler gets stuck on one of his favorite games on the Wii, he searches YouTube for tips on how to move forward. And when he wants to explore the ins and outs of collecting Bakugan Battle Brawlers cards, which are linked to a Japanese anime television series, he goes to YouTube again. . .
“When they don’t have really good results on YouTube, then I use Google.”
What does this mean? Are Facebook, YouTube, and Twitter going to take down the venerable goliath Google? Not really. I argued in an article that this is the phenomenon of social search. Are things finally catching up?
Monday, January 26, 2009
Ushahidi as a Mashup
I'm going to be talking soon about mashups. (And getting nervous about it, too.) One mashup that I will be discussing is Ushahidi. It's an excellent example of how Web 2.0 is saving lives: using technology to harness peace. More to come. Here is an excellent slide show of Ushahidi.
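For a taste of what a mashup is mechanically, here is a minimal Python sketch with the folium mapping library: crowd-sourced reports overlaid on a map, the basic pattern Ushahidi applies at far greater scale. The reports and coordinates are invented:

```python
# Minimal mashup sketch: overlay crowd-sourced incident reports
# (hypothetical data) on an interactive map, roughly the pattern
# Ushahidi uses at much larger scale.
import folium

# Invented reports: (description, latitude, longitude)
reports = [
    ("Road blocked", -1.286, 36.817),
    ("Clinic needs supplies", -1.292, 36.822),
]

m = folium.Map(location=[-1.286, 36.817], zoom_start=13)
for description, lat, lon in reports:
    folium.Marker([lat, lon], popup=description).add_to(m)

m.save("reports_map.html")  # open in a browser to view the mashup
```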
Wednesday, January 21, 2009
Nova on the Future of the Web

(2) The Browser is Going to Swallow Up the Desktop
(3) The focus of the desktop will shift from information to attention
(4) Users are going to shift from acting as librarians to acting as daytraders
(5) The Webtop will be more social and will leverage and integrate collective intelligence
(6) The desktop of the future is going to have powerful semantic search and social search capabilities built-in
(7) Interactive shared spaces will replace folders
(8) The Portable Desktop
(9) The Smart Desktop
(10) Federated, open policies and permissions
(11) The personal cloud
(12) The WebOS (Web operating system)
(13) Who is most likely to own the future desktop?
Saturday, January 17, 2009
Topic Maps and the SemWeb

In the same posting, Steve Pepper, an independent researcher, writer and lecturer who has worked with open standards for structured information for over two decades, made a very interesting comment. He argues that:
Indeed, the Topic Maps 2008 Conference in Oslo, Norway, April 2-4 has just concluded. So what are topic maps, and why are they relevant for libraries and information organizations? The basic idea is simple: the organizing principle of information should not be where it lives or how it was created, but what it is about. Organize information by subject and it will be easier to integrate, reuse and share – and (not least) easier for users to find. The increased awareness of the importance of metadata and ontologies, the popularity of tagging, and a growing interest in semantic interoperability are part and parcel of the new trend towards subject-centric computing.
What topic maps are really spearheading is nothing short of a paradigm shift in computing -- the notion of subject-centric computing -- which will affect far more than just the Web.
We've let programs, applications, and even documents occupy centre-stage for far too long. This is topsy-turvy: users are primarily interested in subjects (what the information is about), not how it was created or where it lives. We need to recognize this, and effect the same kind of change in information management that object-orientation effected in programming; hence the need for a subject-centric revolution.
This conference brings together these disparate threads by focusing on an open international standard that is subject-centric to its very core: ISO 13250 Topic Maps, which is interestingly what Katherine Adams had pointed out eight years ago. We're getting closer. The pieces are in place. We just need a good frame to put the picture together.
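To make subject-centric organization concrete, here is a minimal Python sketch of the three ISO 13250 building blocks: topics, associations, and occurrences. The data is invented (Puccini and Tosca being the classic topic maps teaching example):

```python
# Minimal topic map sketch: organize information by subject (topic),
# not by where it lives. Topics, associations between them, and
# occurrences (links to actual resources) are the core constructs
# of ISO 13250; the data below is invented for illustration.

topics = {
    "puccini": {"name": "Giacomo Puccini", "type": "composer"},
    "tosca": {"name": "Tosca", "type": "opera"},
}

# An association says how two subjects relate, independent of any document.
associations = [
    {"type": "composed_by", "roles": {"work": "tosca", "composer": "puccini"}},
]

# Occurrences attach actual resources to a subject.
occurrences = {
    "tosca": ["http://example.org/tosca-libretto.html"],  # hypothetical URL
}

def resources_about(topic_id):
    """Find everything about a subject, wherever it lives."""
    related = [a for a in associations if topic_id in a["roles"].values()]
    return occurrences.get(topic_id, []), related

print(resources_about("tosca"))
```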
Monday, January 12, 2009
hakia and Librarians' Race to End the Search Wars

However, besides QDEX (Quality Detection and Extraction) technology, which indexes the Web using the SemanticRank algorithm, a solution mix drawn from the disciplines of ontological semantics, fuzzy logic, computational linguistics, and mathematics, hakia also relies on the subject knowledge expertise of professionals. By combining technology and human expertise, it attempts to completely redefine the search process and experience. Take a look at my hakia, Search Engines, and Librarians: How Expert Searchers Are Building the Next Generation Web for a deeper analysis of what hakia is trying to do with librarians. Hopefully, it offers more food for thought.
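SemanticRank itself is proprietary, so the following is a purely hypothetical sketch of the general idea of blending an algorithmic relevance score with human expertise: results from librarian-vetted sources get a boost. Every name and number here is invented:

```python
# Hypothetical illustration only: hakia's SemanticRank is proprietary.
# This sketch shows the general idea of mixing an algorithmic relevance
# score with a boost for sources vetted by human experts (librarians).

VETTED_SOURCES = {"pubmed.gov", "loc.gov"}  # invented vetted list

def blended_score(result, vetted_boost=0.3):
    """result: dict with 'domain' and an algorithmic 'relevance' in [0, 1]."""
    score = result["relevance"]
    if result["domain"] in VETTED_SOURCES:
        score += vetted_boost  # human curation lifts vetted sources
    return score

results = [
    {"domain": "randomblog.example", "relevance": 0.80},
    {"domain": "pubmed.gov", "relevance": 0.70},
]
for r in sorted(results, key=blended_score, reverse=True):
    print(r["domain"], round(blended_score(r), 2))
# pubmed.gov ranks first: the vetted source rises above the higher raw score
```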
Thursday, January 08, 2009
A New Web 2.0 Journal

Thanks, Dean, for recommending this journal to me. It's an excellent read so far.

Admittedly, Web 2.0 is a hard concept to get one’s arms totally around, as it means anything involving “user content”. This broad definition covers everything from social networks, such as Facebook, to 3D virtual-reality worlds, such as Second Life and World of Warcraft, with many, many stops in between. The unifying feature in all of the Web 2.0 systems and tools is that they differ fundamentally from Web 1.0, which is a one-way connection in which information sources, vendors, advertisers, etc. present information for the reader to consume and/or respond to (the fact that a user may choose to buy online from Amazon or Sears does not make those sites something other than Web 1.0, since the user was not the one to initiate the content).