Sunday, March 29, 2009

Michael Stephens in Vancouver, BC



Michael Stephens is one of my favourite librarians. One of the most enjoyable things about his writing is hearing how libraries shape a person's memories and a person's life. This is a very honest, intimate discussion of Stephens' love of libraries. He's coming to Vancouver for the upcoming British Columbia Library Association 2009 conference. I'm looking forward to it.

Monday, March 23, 2009

A Time To Be An Information Professional

An apothecary was a medical professional who formulated and dispensed medicines to physicians, surgeons, and patients: the forerunner of the modern-day pharmacist. The health professions are in hot demand today, and pharmaceutical sciences is one of the most sought-after fields for college graduates.

But it wasn't always this way. Industrialization had an impact on every aspect of the apothecary's work. New advances in medical technology led to the creation of new drugs that the individual pharmacist's own resources could not produce, while many drugs the individual pharmacist could still make were being manufactured by industry more economically and in superior quality.

Not only did proprietary medicines take over the role that apothecaries had been responsible for, they forced the pharmacist to become a vendor of questionable merchandise. This opened the way to much broader competition from merchants, grocers, and pitchmen than the pharmacist had previously encountered, thus marginalizing the profession. Eventually, the "art of compounding" gave way to the pharmacist's increasingly important role as a health care provider, in which the science of pharmacy turned to tailoring patients' medications to meet their specific needs. The pharmacists who do continue compounding do so out of love for the science and interest in their patients' well-being. And just like the changing nature of the librarian's work, the essential love for our users and the art of searching will not change.

Librarians aren't going anywhere, even though the name might change. Librarians will adapt, change, and modify, just like the apothecary, but the profession won't disappear. Librarianship is undergoing a change, and nowhere is this more apparent than in the Special Libraries Association, which is celebrating its centennial year. The SLA is a reflection of the profession, as it has often had to question its own place within it. In 2003, the SLA came to a standstill and nearly renamed itself Information Professionals International, but decided otherwise because SLA represents a century-old tradition and brand name too cherished to change.

And so it is with the profession of librarianship. Perhaps we will be known by another title, another name, as some of us already are: metadata managers, taxonomists, information architects, and knowledge managers. Library schools have evolved into I-Schools. Who knows, LIS might evolve to the point where it is no longer recognizable to us, just as the apothecary is no longer recognizable to the pharmacist. But the art of searching, sharing knowledge, collecting, organizing, and disseminating information, in whatever shape and form it may take, will never change. And hence, whatever we may become, that will never change.

Saturday, March 14, 2009

The Search Continues . . . .

"A New Approach to Search" is a must-read for those interested in search technology. Joe Weinman goes into the nitty-gritty of search algorithms but boils it down into easily understandable (and fun) analogies for the layperson. As Weinman argues,

Search algorithms today are largely based on a common paradigm: link
analysis. But they've ignored a mother lode of data: The network.

Nicely said. Although there are a multitude of variations of search algorithms, architectures and tweaks, search technology has been based largely on three canonical approaches. In a nutshell, here they are:

1) Human-powered directories -
Hierarchically organized into taxonomies (e.g. Yahoo!)

2) Crawler-based index -
Generates results largely prioritized by link analysis. (e.g. Google)

3) Collaborative tagging -
Users tag pages with keywords so that future searchers can find
those pages by entering those tags (e.g. Technorati and Del.icio.us)

However, these three options still fail to prevent click fraud and leave content in the Deep Web unreachable. Weinman proposes network service providers as a fourth option: using the data and metadata associated with the actual network transport of Web content, including HTML pages, documents, spreadsheets, almost anything, to replace and/or augment traditional Web crawlers, improve the relevance and currency of search results ranking, and reduce click fraud. A network service provider could better determine aggregate surfing behaviour and hold times at sites or pages, in a way sensitive to the peculiarities of browser preferences and regardless of whether a search engine is used.
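To make the contrast concrete, here is a toy sketch in Python. It is not Weinman's actual method; the link graph, the dwell times, and the blending formula are all invented for illustration, but it shows how a dwell-time signal observed at the network level might be folded into an ordinary link-analysis ranking.

```python
# Toy illustration (not Weinman's method): blend a link-analysis score
# with a hypothetical dwell-time signal that only a network service
# provider could observe.

def pagerank(links, damping=0.85, iterations=50):
    """Tiny power-iteration PageRank over a dict {page: [outlinks]}."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for out in outlinks:
                    new_rank[out] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Hypothetical network-level observations: average seconds a visitor
# stays on each page, regardless of which search engine (if any) sent them.
dwell_seconds = {"a.com": 95, "b.com": 4, "c.com": 40}

links = {"a.com": ["b.com", "c.com"], "b.com": ["a.com"], "c.com": ["a.com"]}
link_score = pagerank(links)

# Blend: pages people actually linger on get a boost; a page that attracts
# clicks but is abandoned within seconds (a click-fraud smell) sinks.
blended = {p: link_score[p] * (1 + dwell_seconds[p] / 60) for p in links}
print(sorted(blended, key=blended.get, reverse=True))
```

The point of the sketch is that a heavily linked page abandoned within seconds sinks in the blended ranking, which is exactly the kind of signal a crawler alone never sees.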

Weinman's proposal is an interesting deviation from the thinking of Semantic Web enthusiasts. It throws a quirk into speculation about the future of Web search technology. And so the search continues . . .

Monday, March 09, 2009

Searching Search Like a Yandex

Let me introduce Yandex. It's an interesting search engine because it precedes Google: its roots go back to the late 1980s, before the advent of the Web. What is interesting is that Yandex is a classic case study showing that Google is not the be-all and end-all of search. Google may be good in English, but how does it fare in multilingual searching? (Remember: English is only a fraction of the Internet's languages.)

What is interesting is that Yandex's search algorithm is rooted in the highly inflected and very peculiar Russian language. Words can take on some 20 different endings to indicate their relationship to one another. As with many other non-English languages, this inflection makes Russian precise, but it makes search extremely difficult. Google fetches the exact word combination you enter into the search bar, leaving out the slightly different forms that mean similar things. Yandex is unique in that it does catch the inflection. Fortune has written an interesting article on Yandex, and my favourite part is its examination of the unique features of this Russian search giant:

While some of its services are similar to offerings available in the U.S. (blog rankings, online banking), it also has developed some applications that only Russians can enjoy, such as an image search engine that eliminates repeated images, a portrait filter that ferrets out faces in an image search, and a real-time traffic report that taps into users' roving cellphone signals to monitor how quickly people are moving through crowded roads in more than a dozen Russian cities.
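Coming back to the inflection point above: Yandex hasn't published its morphology engine, but a toy sketch makes the idea clear. The tiny lemma table below is invented for the example; a real Russian morphological analyzer covers thousands of paradigms, not three words.

```python
# Toy illustration of inflection-aware matching. The lemma table is
# invented for this example, not Yandex's actual morphology.
LEMMAS = {
    "книга": "книга", "книги": "книга", "книгу": "книга", "книге": "книга",
    "библиотека": "библиотека", "библиотеке": "библиотека",
    "библиотеку": "библиотека",
}

def normalize(text):
    """Map each word to its dictionary form (lemma) where we know it."""
    return [LEMMAS.get(word, word) for word in text.lower().split()]

document = "книги в библиотеке"   # "books in the library"
query = "книга библиотека"        # dictionary forms of "book", "library"

# Exact-string matching misses the document; lemma matching finds it.
exact_hit = all(w in document.split() for w in query.split())
lemma_hit = all(w in normalize(document) for w in normalize(query))
print(exact_hit, lemma_hit)       # False True
```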



Thursday, March 05, 2009

BBC's Semantic Web

BBC gets it. In the latest issue of Nodalities magazine (one of my favourite reads), the BBC reveals how it is applying a bottom-up approach to its contribution to realizing the SemWeb. To make this happen, its web programmers broke with BBC tradition by designing from the domain model up rather than the interface down. As the developers put it, the domain model "provided us with a set of objects (brands, series, episodes, versions, ondemands, broadcasts, etc.) and their sometimes tangled interrelationships."

This is exciting stuff. As the developers note, without ever explicitly talking RDF they had built a site that complied with Tim Berners-Lee's four principles of Linked Data:

(1) Use URIs as names for things.

(2) Use HTTP URIs so that people can look up those names.

(3) When someone looks up a URI, provide useful information.

(4) Include links to other URIs.

In fact, as the BBC web developers argue, when considering how best to build websites, "we'd recommend you throw out the Photoshop and embrace Domain Driven Design and the Linked Data approach every time. Even if you never intend to publish RDF it just works." The longer-term aim of this work is not only to expose BBC data but to ensure that it is contextually linked to the wider web. The idea is to set the web of data free.
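To see what those four principles look like in practice, here is a minimal sketch using Python's rdflib. The URIs, namespace, and property names are invented for illustration; they are not the BBC's actual Programmes ontology.

```python
# Minimal Linked Data sketch with rdflib. The programme URIs and the
# example.org namespace are invented; the BBC's real ontology is richer.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/ontology/")
episode = URIRef("http://example.org/programmes/b00abcde")  # (1) a URI names the thing
series = URIRef("http://example.org/programmes/b00series1")

g = Graph()
g.bind("ex", EX)
g.add((episode, RDF.type, EX.Episode))
g.add((episode, RDFS.label, Literal("Episode 1")))
g.add((episode, EX.partOf, series))                         # (4) link to other URIs

# (2)-(3) Served over HTTP, looking up the episode URI would return
# this description; here we just print it as Turtle.
print(g.serialize(format="turtle"))
```

Dereferencing that episode URI over HTTP and getting this description (or an HTML view of it) back is what turns an ordinary website into Linked Data.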

BBC Gets It.

Monday, February 23, 2009

Shame on You Wall Street Journal

It is regrettable indeed. I was deeply saddened and somewhat enraged by the Wall Street Journal's closing of its library. In an information age that depends so much on knowledge workers, the Journal has decided it can cut back by taking away a vital piece of its news operation: the gathering, organizing, and dissemination of up-to-the-minute information. Can news reporters be expected to do all that work themselves? Can they properly search for relevant and pertinent information? Is that even their job?

Could we insert librarians and information professionals into the jobs of news journalists? Of course not. Wall Street Journal, give your head a shake. A knowledge centre, particularly at a top-notch media giant such as the Journal, requires expert searchers. When asked, a spokesperson responded:

It is regrettable. Our reporters do have access to multiple databases including Factiva and this migration to digital databases as you has been happening for many years.

Sure. Good luck having your reporters spend up to ten times as long finding information that a trained information professional could obtain in a fraction of the time. A librarian is the glue that holds the house together. You can only go so far and so long without a librarian's information retrieval skills before the infrastructure cracks and crumbles. Particularly in our emerging Web 2.0 world of social media and open access resources, can a company survive without expert information and knowledge management? Best of luck, Wall Street Journal.

Saturday, February 21, 2009

Video Sharing for Librarians




I recently presented at TOTS. What is video sharing? Why should we care? How can it be of use to information professionals? What are some issues for us to consider? Let's take a look together.

Monday, February 16, 2009

Who Video Shares? Barack Obama Does!




Who uses Web 2.0 to its fullest capacity? Barack Obama does. The President posts regularly to Vimeo, which differs from other services in that it offers high-definition content. On October 17, 2007, Vimeo announced support for high-definition playback at 1280x720 (720p), becoming the first video-sharing site to support consumer HD.

Wednesday, February 11, 2009

Mashups at PSP 2009


I recently gave a presentation as part of a panel at the Association of American Publishers' Professional Scholarly Publishing (PSP) 2009 joint pre-conference with the National Library of Medicine, titled "MashUp at the Library: Managing Colliding User Needs, Technologies, and the Ability to Deliver." Here are the slides I used; any comments are most appreciated.

Thursday, January 29, 2009

Is Youtube The New Search?

Information professionals everywhere, take note: Google is uncomfortably sliding. Gone are the days when we only 'google' for information. YouTube, conceived as a video hosting and sharing site, has become a bona fide search tool. Searches on it in the United States recently edged out those on Yahoo, which had long been the No. 2 search engine behind Google. Interesting that Google owns YouTube, isn't it? In November, Americans conducted nearly 2.8 billion searches on YouTube, about 200 million more than on Yahoo, according to comScore. Here is what one nine-year-old reveals about his information search behaviour in a New York Times article:

“I found some videos that gave me pretty good information about how it mates, how it survives, what it eats,” Tyler said. Similarly, when Tyler gets stuck on one of his favorite games on the Wii, he searches YouTube for tips on how to move forward. And when he wants to explore the ins and outs of collecting Bakugan Battle Brawlers cards, which are linked to a Japanese anime television series, he goes to YouTube again. . .

“When they don’t have really good results on YouTube, then I use Google."

What does this mean? Are Facebook, YouTube, and Twitter going to take down the venerable Goliath Google? Not really. I argued in an article that this is the phenomenon of social search. Are things finally catching up?

Monday, January 26, 2009

Ushahidi as a Mashup



I'm going to be talking soon about mashups. (And getting nervous about it, too). One mashup that I will be discussing is Ushahidi. It's an excellent example of how Web 2.0 is saving lives. Using technology to harness peace. More to come. Here is an excellent slide show of Ushahidi.

Wednesday, January 21, 2009

Nova on the Future of the Web

I heart Nova Spivack. The grandson of management professor Peter Drucker, Spivack is an intellectual in his own right. Not only is he a semantic web pioneer and technology visionary, he also founded Twine, one of the first semantic web services out there. I think he's one of the brightest minds today when it comes to ideas about the future of the Web. He's a visionary. Here's a synopsis of Spivack's treatise "Future of the Desktop."

(1) The desktop of the future is going to be a hosted web service

(2) The Browser is Going to Swallow Up the Desktop

(3) The focus of the desktop will shift from information to attention

(4) Users are going to shift from acting as librarians to acting as daytraders

(5) The Webtop will be more social and will leverage and integrate collective intelligence

(6) The desktop of the future is going to have powerful semantic search and social search capabilities built-in

(7) Interactive shared spaces will replace folders

(8) The Portable Desktop

(9) The Smart Desktop

(10) Federated, open policies and permissions

(11) The personal cloud

(12) The WebOS (Web operating system)

(13) Who is most likely to own the future desktop?

Saturday, January 17, 2009

Topic Maps and the SemWeb

Half a year ago, I wrote a posting discussing Katherine Adams' seminal article about librarians and the SemWeb. Adams made a point about Topic Maps, which she believes will ultimately point the way to the next stage of the Web's development. They represent an international standard (ISO 13250). In fact, even OCLC is looking to topic maps in its Dublin Core work to organize the Web by subject.

In the same posting, Steve Pepper, an independent researcher, writer and lecturer who has worked with open standards for structured information for over two decades, made a very interesting comment. He argues that:

What Topic Maps is really spearheading is nothing short of a paradigm shift in computing -- the notion of subject-centric computing -- which will affect far more than just the Web.

We've let programs, applications, and even documents occupy centre-stage for far too long. This is topsy-turvy: users are primarily interested in subjects (what the information is about), not how it was created or where it lives. We need to recognize this, and effect the same kind of change in information management that object-orientation effected in programming; hence the need for a subject-centric revolution.

Indeed, the Topic Maps 2008 conference in Oslo, Norway (April 2-4) has since concluded. So what are topic maps, and why are they relevant for libraries and information organizations? The basic idea is simple: the organizing principle of information should not be where it lives or how it was created, but what it is about. Organize information by subject and it will be easier to integrate, reuse, and share, and (not least) easier for users to find. The increased awareness of the importance of metadata and ontologies, the popularity of tagging, and a growing interest in semantic interoperability are all part of the new trend towards subject-centric computing.
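For the curious, here is a toy sketch of what "subject-centric" looks like as a data structure. Topics, associations, and occurrences are the core constructs of ISO 13250, but the subjects and URLs below are invented for the example.

```python
# Toy subject-centric index in the spirit of Topic Maps (ISO 13250).
# The specific subjects and URLs are invented for illustration.
from collections import defaultdict

topics = {"puccini": "Giacomo Puccini", "tosca": "Tosca", "opera": "Opera"}

# Associations: typed relationships between subjects, not between documents.
associations = [
    ("tosca", "composed-by", "puccini"),
    ("tosca", "instance-of", "opera"),
]

# Occurrences: where information about each subject actually lives.
occurrences = defaultdict(list)
occurrences["puccini"].append("http://example.org/bio/puccini.html")
occurrences["tosca"].append("http://example.org/scores/tosca.pdf")

def about(topic_id):
    """Everything we know about a subject: its name, relations, resources."""
    related = [(t, r, o) for (t, r, o) in associations if topic_id in (t, o)]
    return topics[topic_id], related, occurrences[topic_id]

print(about("tosca"))
```

Notice that nothing above is organized by document; the documents only appear as occurrences hanging off the subjects.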

This conference brings together these disparate threads by focusing on an open international standard that is subject-centric to its very core: ISO 13250 Topic Maps, which is, interestingly, exactly what Katherine Adams pointed out eight years ago. We're getting closer. The pieces are in place. We just need a good frame to bring the picture together.

Monday, January 12, 2009

hakia and Librarians' Race to End the Search Wars

I've always been intrigued by hakia, considered the first Semantic Web search engine of its kind. It is said that for the next-generation web to exist, there needs to be a more concise way for users to find information and to search the web. hakia is working with librarians to help make its results even more credible in its attempt to oust Google and win the current search engine wars.

However, besides its QDEX (Quality Detection and Extraction) technology, which indexes the Web using the SemanticRank algorithm, a solution mix drawn from ontological semantics, fuzzy logic, computational linguistics, and mathematics, hakia also relies on the subject knowledge expertise of professionals. By combining technology and human expertise, it attempts to completely redefine the search process and experience. Take a look at my article "hakia, Search Engines, and Librarians: How Expert Searchers Are Building the Next Generation Web" for a deeper analysis of what hakia is trying to do with librarians. Hopefully it offers more food for thought.
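hakia has not disclosed how the librarian input actually enters its ranking, so the following is purely a hypothetical illustration of the general idea: blend an algorithmic relevance score with a librarian-curated credibility list. Every name and number here is made up.

```python
# Purely hypothetical sketch of blending machine relevance with
# librarian-vetted credibility. Nothing here reflects hakia's actual
# QDEX or SemanticRank internals, which are proprietary.
semantic_relevance = {          # pretend output of a semantic ranking step
    "medline-article": 0.72,
    "random-blog-post": 0.81,
    "hospital-factsheet": 0.65,
}

librarian_vetted = {"medline-article", "hospital-factsheet"}  # curated list

def credibility_boost(doc, boost=0.3):
    """Give vetted sources a fixed bonus; unvetted sources keep their score."""
    return semantic_relevance[doc] + (boost if doc in librarian_vetted else 0.0)

ranked = sorted(semantic_relevance, key=credibility_boost, reverse=True)
print(ranked)   # vetted sources rise above the merely popular blog post
```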

Thursday, January 08, 2009

A New Web 2.0 Journal

Web 2.0 The Magazine: A Journal for Exploring New Internet Frontiers is an important new journal that librarians and information professionals should take a serious look at. It attempts to fill the information gap in the area of Web 2.0 by focusing on new developments, the most used tools, trends, and reviews of books, articles, sites, and systems themselves so as to make Web 2.0 a useful part of the reader’s technology experience. Here is what Web 2.0 The Magazine attempts to do:

Admittedly, Web 2.0 is a hard concept to get one’s arms totally around as it means anything involving “user content”. This broad definition covers everything from social networks, such as Facebook, to 3D Virtual Reality Worlds, such as Second Life and World of Warcraft, with many, many stops in between. The unifying feature in all of the Web 2.0 systems and tools is that they differ fundamentally from Web 1.0, which is a one-way connection, in which information sources, vendors, advertisers, etc. present information for the reader to consume and / or respond to (the fact that a user may choose to buy on-line from Amazon or Sears does not make those sites something other than Web 1.0 since the user was not the one to initiate the content).

Thanks, Dean, for recommending this journal to me. It's an excellent read so far.