Friday, April 25, 2008

Library 2.0

Michael Casey and Laura Savastinuk's article in the Library Journal not only changed the way libraries are perceived, but also how librarians run them. In a way, Library 2.0 principles are nothing new. Interlibrary loan is very much a "long tail" concept. In fact, would it be possible to view Library 2.0 as change management in its most extreme form? Nonetheless, the book that followed was a brilliant read when it was published. Here's what I got out of it about Library 2.0 concepts.

(1) Plan, Implement, and Forget - Avoid the plan-implement-forget cycle: changes must be constant and purposeful, and services need to be continually evaluated.

(2) Mission Statement - A library without a clear mission is like a boat without a captain. The mission drives the organization, serving as a guide when selecting services for users and setting a clear course for Library 2.0.

(3) Community Analysis - Know your users. Talk to them and get a feel for who they are and whom you're serving.

(4) Surveys & Feedback - Get feedback from both users and staff. It's important to know what works and what doesn't.

(5) Team up with competitors - Don't think of the library as being in a "box," and don't think of bookstores, cafes, or the Internet that way either. Look at what users are doing elsewhere that they could be doing through the library, and create win-win relationships with local businesses that benefit everyone.

(6) Real input from staff - Soliciting feedback means implementing ideas, not just collecting them for show. Otherwise, staff will eventually see through the exercise, and morale will suffer.

(7) Evaluating services - Sacred cows do not necessarily need to be eliminated; however, nothing should be protected from review.

(8) Three Branches of Change model - This allows all staff - from frontline workers to the director - to understand the changes made. The three teams are investigative, planning, and review.

(9) Long tail - Web 2.0 concepts should be incorporated into the Library 2.0 model as much as possible. For example, the Netflix model does something few services can do: get materials into the hands of people who do not come into libraries. Think virtually as well as physically.

(10) Constant change & user participation - These two concepts form the crux of Library 2.0.

(11) Web 2.0 technologies - They give users access to a wide variety of applications that are neither installed nor approved by IT. The flexibility is there for libraries to experiment unlike ever before. It is important to have conversation where none existed before. Online applications help fill this gap.

(12) Flattened organizational structure - Directors should not make all the decisions. Instead, front line staff input should be included. Committees that include both managers and lower level staff help 'flatten' the hierarchical structure, creating a more horizontal organization that leads to more realistic decision-making.

Tuesday, April 22, 2008

7 Opportunities for the Semantic Web

Dan Zambonini’s 7 f(laws) of the Semantic Web is a terrific read, and offers a refreshing perspective on the challenges of realizing the SemWeb. Too often we hear a dichotomy of arguments, but Zambonini calmly lays out what he believes are the hurdles for the SemWeb. Instead of regurgitating his points, I’m going to complement them with my own comments:

(1) Not all SemWeb data are created equal - There are a lot of RDF files on the web, in various formats. But that doesn’t equate to the SemWeb. This is a bit of a strawman, though. In fact, it emphasizes the point that the components of the SemWeb are here. The challenge is finding the mechanism or application that can glue everything together.

(2) A Technology is only as good as developers think it is - Search analysis reveals that people are actually more interested in AJAX than RDF Schema, despite the fact that RDF has a longer history. Zambonini believes that this is because the SemWeb is so incredibly exclusive in an ivory-towerish way. I agree. However, what is to say that the SemWeb won’t be able to accommodate a broader audience in the future? We’ll just need to wait and see.

(3) Complex systems must be built from successively simpler systems - I agree with this point. Google is successful in the search engine wars because it learnt how to build up slowly, creating a simple system that grew more complex as it needed to. People love Web 2.0 applications because they’re easy to use and understand. But whereas Web 2.0 was about searching, the SemWeb should be about finding. Nobody said C++ and Java were easy, but complexity pays off in the long run.

(4) A new solution should stop an obvious pain - The SemWeb needs to prove what problems it can solve, and prove its purpose. Right now, Web 2.0 and 1.0 do a good job, so why would we need any more? Fair enough. But information is still in silos. Until we open up the data web, we’re still in many ways living in the dark.

(5) People aren’t perfect - Creating metadata and classifications is difficult. People are sloppy. Will adding SemWeb rules add to the mess that is the Web? I seriously can’t answer this one. We can only predict. But perhaps it’s too cynical to prematurely write off people’s metadata creating skills. HTML wasn’t easy, but we managed.

(6) You don’t need an ontology of everything. But it would help - Zambonini argues for a top-down ontology that would be a one-size-fits-all solution for the entire Web, rather than building from a bottom-up approach based on the folksonomies of the social web. I would argue that for this to work, we need to look at it from different angles. Perhaps we can meet halfway?

(7) Philanthropy isn’t commercially viable - Why would any sane organization buy into the SemWeb and expose their data? We need that killer application in order for this to work. Agreed. eBay did wonders. Let’s hope there’s a follow-up on the way.

Saturday, April 19, 2008

Four Ways to Library 2.0

Library 2.0 has stirred controversy since the day Michael Casey and Laura Savastinuk’s Library 2.0: Service for the next-generation library hit online newsstands. A loosely defined model for a modernized form of library service, one that reflects a transition in the way the library world delivers services to users, Library 2.0 borrows from Business 2.0 and Web 2.0 and follows some of the same underlying philosophies. Its relevance to the profession is still being debated in the library community. (Haven’t we always had to serve our users in the first place? What’s new about that?)

Michael Stephens and Maria Collins’ Web 2.0, Library 2.0, and the Hyperlinked Library is a fascinating read for those interested in learning more about these concepts. Certainly, at the core of Library 2.0 are blogs, RSS, podcasting, wikis, IM, and social networking sites. But it’s much more than that, and Stephens and Collins boil it down nicely to four main themes of Library 2.0:

(1) Conversations – The library shares plans and procedures, invites feedback, and then responds. Transparency is real and personal.

(2) Community and Participation – Users are involved in planning library services, evaluating those services, and suggesting improvements.

(3) Experience – Satisfying to the user, Library 2.0 is about learning, discovery, and entertainment. Bans on technology and the stereotypical “shushing” are replaced by a collaborative and flexible space for new initiatives and creativity.

(4) Sharing – Providing ways for users to share as much or as little of themselves as they like, Library 2.0 encourages users to participate via online communities and connect virtually with the library.

Thursday, April 17, 2008

The Year Is 2009...

We're not that far off. In 2002, Paul Ford wrote an amazing piece predicting what the world would look like in 2009. Well, we're almost there. Ford imagined a "Semantic Web scenario" in the form of a short feature from a business magazine published in 2009. While Amazon and eBay both worked as virtual marketplaces (they outsourced as much inventory as possible) by bringing together buyers and sellers while taking a cut of every transaction, Google focused on the emerging Semantic Web.

This is how Ford explains the SemWeb; it is one of the most concise explanations I've seen to date.

So what's the Semantic Web? At its heart, it's just a way to describe things in a way that a computer can “understand.” Of course, what's going on is not understanding, but logic, like you learn in high school:

If A is a friend of B, then B is a friend of A.

Jim has a friend named Paul.

Therefore, Paul has a friend named Jim.

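Ford's syllogism is easy to mechanize. Here is a minimal sketch (my own, not from Ford's article) of applying the symmetric-friendship rule the way a reasoner would:

```python
# The rule: if A is a friend of B, then B is a friend of A.

def close_symmetric(facts):
    """Given (person, friend) pairs, add the inferred reverse pairs."""
    inferred = set(facts)
    for a, b in facts:
        inferred.add((b, a))  # apply the symmetry rule
    return inferred

facts = {("Jim", "Paul")}        # Jim has a friend named Paul.
closure = close_symmetric(facts)
print(closure)                   # the closure now contains ("Paul", "Jim") too
```

The SemWeb's RDF and OWL machinery generalizes exactly this kind of rule to arbitrary properties and entities.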
Of course, it's much more than just A's and B's. But the idea that Google will eventually integrate the SemWeb into its applications is exciting. And for an article that was written back in 2002 with such clarity, it's a highly engaging read.

Saturday, April 12, 2008

Google and Web 3.0?

Maybe Google gets it after all. Google has made its foray into the Semweb with its new Social Graph API. What's that? And why should you care? The Social Graph API makes information about the public connections between people on the Web, expressed in XFN and FOAF markup and other publicly declared connections, easily available and useful for developers. The public web is made up of linked pages that represent both documents and people. Google Search helps make this information more accessible and useful.

In other words, if you take away the documents, you're left with the connections between people. Information about these public connections is really useful. A user might want to see who else you're connected to, and as a developer of social applications, you can provide better features for your users if you know who their public friends are. Until now, there hasn't been a good way to access this information.

The Social Graph API looks for two types of publicly declared connections:

  1. It looks for all public URLs that belong to you and are interconnected. This could be a blog, a Facebook profile, and a Twitter account.
  2. It looks for publicly declared connections between people. For example, your blog may link to someone else's blog while your Facebook and Twitter are linked to each other.

This index of connections enables developers to build many applications including the ability to help users connect to their public friends more easily. Google is taking the resulting data and making it available to third parties, who can build this into their applications (including their Google Open Social applications). Of course, the problem is that few people use FOAF and XFN to declare their relationships, but Google's new API could make them more visible and social applications could use them. Ultimately, Google could also index the relationships from social networks if people are comfortable with that.

What does this mean for information professionals? Stay tuned. With Google on board the Semweb train (or ship), it could pave the way for more bricks to be laid on the road to realizing the goal of differentiating Paris from Paris.

Wednesday, April 09, 2008

7 Things You Need to Know about the Semantic Web

Over at Read/Write Web, Alex Iskold has come up with what I consider a seminal piece in the Semantic Web literature. In Semantic Web Patterns: A Guide to Semantic Technologies, Iskold synthesizes the main concepts of the Semantic Web, asserting that it offers improved information discoverability, automation of complex searches, and innovative web browsing. Here’re the main themes:

(1) Bottom-Up vs. Top-Down – Do we focus on annotating information in pages (using RDF) so that it is machine-readable, in top-down fashion? Or do we focus on leveraging information in existing web pages so that their meaning can be derived automatically (folksonomies), in a bottom-up approach? Time will tell.

(2) Annotation Technologies – RDF, Microformats, and Meta Headers. The more annotations there are in web pages, the more standards are implemented, and the more discoverable and powerful information becomes.

(3) Consumer and Enterprise – People currently don’t care much for the Semantic Web because all they look for is utility and usefulness. Until an application can be deemed a “killer application,” we continue to wait.

(4) Semantic APIs – Unlike Web 2.0 APIs, which are used to mash up existing services, Semantic APIs take unstructured information as input and find the entities and relationships within it. Think of them as mini natural language processing tools.

(5) Search Technologies – The sobering fact is the growing realization that understanding semantics won't be sufficient to build a better search engine. Google does a fairly good job at finding us the capital city of Canada, so why do we need to go any further?

(6) Contextual Technologies - Contextual navigation does not improve search, but rather short cuts it. It takes more guessing out of the equation. That's where the Semweb will overtake Google.

(7) Semantic Databases – The challenge of keeping up with the world is common to all database approaches, which are effectively information silos. That’s where semantic databases come in, as they focus on annotating web information to make it more structured. Take a look at Freebase.

As librarians and information professionals, we gather, organize, and disseminate. The challenge will be to keep doing this as information explodes at a rate unprecedented in human history, all the while trying to stay afloat and explain the technology to our users. Feels like walking on water, don’t you agree?

Tuesday, April 08, 2008

Semantic Librarianship

If I had my stocks for Web 3.0, where would I put them?

How about a neat web service called Freebase? It’s a semanticized version of Wikipedia, but with a bigger potential. Much bigger. Freebase is said to be an open, shared database of the world's knowledge: a massive, collaboratively-edited database of cross-linked data. Until recently accessible by invitation only, this application is now open to the public as a semi-trial service.

What does this have to do with librarians? As Freebase argues, “Wikipedia and Freebase both appeal to people who love to use and organize information.” Hold that thought. That’s enough to whet our information organizational appetites.

In our article, Dean and I argued that the essence of the Semantic Web is the ability to differentiate entities that the current Web is unable to do. For example, how can we currently parse Paris from Paris? Although still in its initial stages with improvements to come, Freebase does a nice job to a certain extent. Freebase covers millions of topics in hundreds of categories. Drawing from large open data sets like Wikipedia, MusicBrainz, and the SEC, it contains structured information on many popular topics, like movies, music, people and locations—all reconciled and freely available via an open API.

As a result, Freebase builds on the Social Web 2.0 layer, while providing the Semantic Web infrastructure through RDF technology. For example, Paris Hilton would appear in a movie database as an actress, a music database as a singer and a model database as a model. In Freebase, there is only one topic for Paris Hilton, with all three facets of her public persona brought together. The unified topic acts as an information hub, making it easy to find and contribute information about her.

While information in Freebase appears to be structured much like a conventional database, it’s actually built on a system that allows any user to contribute to the schemas—or frameworks—that hold the data - RDF, as I had mentioned. This wiki-like approach to structuring information lets many people organize the database without formal, centralized planning. And it lets subject experts who don’t have database expertise find one another, and then build and maintain the data in their domain of interest. As librarians, we have a place in all of this. It's out there. Waiting for us.
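The "unified topic" idea can be illustrated with a toy sketch. The records, field names, and merge logic below are my own invention for illustration, not Freebase's actual schema system:

```python
# Several source records about the same entity are merged into one hub topic.

def merge_topic(records):
    """Merge facet records for one entity into a single topic dict."""
    topic = {"facets": []}
    for rec in records:
        topic.setdefault("name", rec["name"])
        topic["facets"].append(rec["facet"])
        # fold any facet-specific properties into the shared hub
        for key, value in rec.get("properties", {}).items():
            topic[key] = value
    return topic

records = [
    {"name": "Paris Hilton", "facet": "actress"},
    {"name": "Paris Hilton", "facet": "singer"},
    {"name": "Paris Hilton", "facet": "model"},
]
topic = merge_topic(records)
print(topic["facets"])  # ['actress', 'singer', 'model']
```

Three databases, one topic: the hub carries all three facets, which is exactly the reconciliation Freebase performs at scale.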

Wednesday, April 02, 2008

Moving Out & Moving On

Everyone needs a change every now and again. On May 1st, 2008, I will be moving to the Irving K. Barber Learning Centre as Program Services Librarian. Having worked with some very talented and supportive colleagues, I feel supremely fortunate, because without them I would not be where I am at this point in my career.

Over the past few years, I have enjoyed working in a variety of jobs, from public libraries, to hospital libraries, to research centres, to academic libraries. (I also dabbled in publishing, archives, and teaching.) The integration of these experiences has been wonderful, as it has helped build the skills most essential to my upcoming endeavours.

What will this new position entail? To a certain extent, everything that I'm not doing now as an academic librarian. The Irving K. Barber Learning Centre itself is not a "traditional" library. It's a new building, a space for collaborative learning and ideas. A learning commons. A new way of learning. It also represents a new direction for librarianship. If there is one thing that typifies this position, it would be digital outreach. Web 2.0, Semantic Web, and Web 3.0? Stay tuned.

The possibilities are exciting.

I'd like to thank everyone who helped me along the way, particularly Dean Giustini, Eugene Barsky, Eleanor Yuen, Tricia Yu, May Yan, Henry Yu, Hayne Wai, Chris Lee, Rob Ho, Peter James & friends at HSSD, Rex Turgano, Rob Stibravy, Susie Stephenson, Matthew Queree, and Angelina Dawes, among the many. And of course, Hoyu. Thank you to all.

Thursday, March 27, 2008

The Social Web Into the Semantic Web

"What can happen if we combine the best ideas from the Social Web and the Semantic Web?" - Tom Gruber

In other words, can we channel folksonomies, tagging, and user-created knowledge into one coherent, structured Web? A Semantic Web? Tom Gruber seems to think so. In Collective Knowledge Systems, he proposes that the Semantic Web vision points to a representation of the entity itself - for example, a city - rather than its surface manifestation. This speaks to one of the problems we've always had accessing the Web's content: the difficulty of differentiating the city of Paris from the celebrity Paris Hilton when using a search engine.

In many ways, there has been a great deal of speculation about harnessing Web 2.0 technologies and refining them for the Semantic Web. How do we move from collected intelligence to collective intelligence? Gruber outlines three approaches to realizing the Semantic Web:

(1) Expose structured data that already underlies unstructured web pages - Site builders would generate their web pages from a database and expose the underlying data using standard formats (think FOAF).

(2) Extract structured data from unstructured user contributions - Manually identify people, companies, and other entities with proper names, products, and instances of relations.

(3) Capture structured data on the way into the system - A "snap to grid" system that adds structure as users enter data and helps users enter data within that structure. (Think of automatic spell check.)
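As a concrete illustration of approach (1), here is a hedged sketch, with invented profile data, of how a site might render a database record as FOAF-style Turtle. FOAF's real namespace and its `foaf:Person`, `foaf:name`, and `foaf:knows` terms are used, but the rendering is deliberately simplified:

```python
def profile_to_foaf_turtle(profile):
    """Render a profile record as FOAF-style Turtle triples."""
    lines = [
        "@prefix foaf: <http://xmlns.com/foaf/0.1/> .",
        "",
        f'<{profile["homepage"]}#me> a foaf:Person ;',
        f'    foaf:name "{profile["name"]}" ;',
    ]
    for friend in profile["knows"]:
        lines.append(f"    foaf:knows <{friend}#me> ;")
    # Turtle terminates the last property with "." rather than ";"
    lines[-1] = lines[-1].rstrip(";").rstrip() + " ."
    return "\n".join(lines)

profile = {
    "name": "Jim",
    "homepage": "http://example.com/jim",
    "knows": ["http://example.com/paul"],
}
print(profile_to_foaf_turtle(profile))
```

The point is that the structure already exists in the site's database; exposing it in a standard vocabulary is a small, mechanical step.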

Where do librarians come in? We have always used our training to structure content, package it, and disseminate it to our users. In our article, Dean and I argue that the catalogue is very much an analogy for how the Semantic Web can organize information in a way that the current Web cannot. Recent developments in RDA on the library side offer a promising glimpse into the possibilities for Web 3.0. True, we are only surmising. But let's not let that prevent us from creating.

Tuesday, March 25, 2008

Quantum Information Science?

Have you heard of quantum information science? Eventually, it might solve the problems of information mess and access. Although quantum physics, information theory, and computer science were among the apex intellectual achievements of the 20th century, they were often framed as separate entities. Currently, a new synthesis of these themes is quietly emerging. The emerging field of quantum information science offers important insights into fundamental issues at the interface of computation and physical science, and may guide the way to revolutionary technological advances.

John Preskill, Director of the Institute for Quantum Information, proposes in his lecture that quantum bits (“qubits”), the indivisible units of quantum information, will be central to “quantum cryptography,” wherein the privacy of secret information can be founded on principles of fundamental physics. The quantum laws that govern atoms and other tiny objects differ radically from the classical laws that govern our ordinary experience. Physicists are beginning to recognize that we can put the weirdness to work. That is, there are tasks involving the acquisition, transmission, and processing of information that are achievable in principle because Nature is quantum mechanical, but that would be impossible in a "less weird" classical world.

What does this ultimately mean? A “quantum computer” operating on just a few hundred qubits could perform tasks that ordinary digital computers could not possibly emulate. Although constructing practical quantum computers will be tremendously challenging, particularly because quantum computers are far more susceptible to making errors than conventional digital computers, newly developed principles of fault-tolerant quantum computation may enable a properly designed quantum computer with imperfect components to achieve reliability.
How long will it take before we achieve quantum computing? Please be patient. These folks are working on it.

Friday, March 21, 2008

Free on CBC

The Canadian Broadcasting Corporation, long known for its traditional family-style programs (Road to Avonlea and Coronation Street) and NHL hockey, is actually making a splash in technology. A huge one at that. It has decided to apply the 1% principle and open up its content for anyone to freely download. That's right. Free.

In doing so, CBC becomes the first major broadcaster in North America to release a high quality, DRM-free copy of a primetime show using BitTorrent technology. On top of that, CBC will also be distributing a version that can be put on iPods. The show, Canada’s Next Great Prime Minister, will be completely free (and legal) for anyone to download, share, and burn to their heart’s content. For many, BitTorrent has meant illegal, downright dirty business. In the future, however, it might actually be a better means of access to information and entertainment. CBC is attempting to prove that there are other means beyond the "box." It's trying to move past physical barriers and into the virtual. Shouldn't libraries be doing the same?

Sunday, March 16, 2008

5 Essences to Librarianship 3.0

What will the future of librarianship look like? Traditional cataloging, collection development, and reference will look very different, even five years from now. Changes are in motion. Don't you get the feeling that things are going to be fast and furious? There seems to be a lot of anxiety and uncertainty among librarians about what the future holds. But change is inevitable in life. From the card catalog to OPACs to the Internet, librarians and information professionals have had to adjust and adapt to new technologies. But unlike other professions that rely on technology, librarianship has always had to catch up rather than take the lead. We might not have a choice in the new Web. Here are 5 opportunities to look ahead to.

(1) Resource Description and Access - With the Anglo-American Cataloguing Rules 2 (AACR2) making way for its successor, RDA will play an essential role in how information is classified and held in libraries and information organizations. However, RDA will move beyond just the physical and include Web resources as well. You may ask: how can we catalog something that changes constantly? That's where the Semantic Web comes in.

(2) Information Architecture - Librarians have always had to organize information. It's their job. As the Web becomes more integrated into their work (as if it weren't already), librarians will rely ever more on the Web to serve their patrons. Digital outreach is the key to survival, and building accessible, user-centred websites will be essential to achieving it.

(3) Virtual Worlds - Gate counts are going down in libraries everywhere. Patrons are frequenting libraries less and less for information seeking, and more for products and spaces. This means that reference librarianship is changing, too. To a certain extent, we've experimented with virtual reference. In the future, we will need to embrace the possibilities of bringing our expertise to the user through other means, whether it's Facebook, MySpace, Second Life, or Meebo. Think beyond the walls.

(4) Open Access - Traditional publishing is on its last legs. Things fall apart; the centre cannot hold. Textbook publishers are churning out new editions of the same text in order to prevent re-selling; journal publishers are forcing print copies to be sold as a package with their electronic versions. Why? Fear. Publishers are scrambling to stay in business. Open access will open up new opportunities for how students and users buy books. Why not build your own textbook?

(5) "Free-conomics" - Everything that users want will be "free." To understand this principle, just look at the things you already use without paying. It's based on the 1% principle, where 99% of users get access to the basics of a product while the other 1% pay for the full premium. The spirit of librarianship has always been about the public good and collaboration, so it's only natural that we find ways to integrate the 1% principle to its full extent.

Sunday, March 09, 2008

Bill Gates Retires from Microsoft

Recently, Forbes revealed that Bill Gates has slipped to number three on the list of the world's wealthiest people. On top of that, Bill Gates is also stepping back from Microsoft to devote more time to the Bill & Melinda Gates Foundation. But that doesn't mean that Bill left with a whimper. Take a look at this video, particularly his going-away comedy skit. Nice job, Bill. Good-bye, but not farewell.

Friday, March 07, 2008

Librarians and Web 3.0

For better or worse, Web 3.0 is around the corner. Okay, maybe the technology is lagging; but we must admit that the third generation (third decade) Web is coming. Paul Miller of Talis made an insightful response to a post I wrote back in September, one which is still relevant to today's discussion:
Although I'm slightly surprised at the sector's lack of overt engagement with this obviously synergistic area too, there are certainly examples in which librarians are grasping the Semantic Web and in which Semantic Web developers are recognising the rich potential offered by libraries' structured data...

Ed Summers over at Library of Congress would be one person I'd pick out to mention. Also, the work OCLC and Zepheira are doing on PURL, and our own focus on the Talis Platform within Talis; that's Semantic Web through and through, and we have significant products in the final stages of beta that put semantic technologies such as RDF and SKOS to work in delivering richer, better, more flexible applications to libraries and their users. Things really begin to get interesting, though, when you take the next step from enabling existing product areas with semantic technologies to actually beginning to leverage the resulting connections by joining data up, and reusing those links, inferences and contexts to cross boundaries between libraries, systems, and application areas.

There's also library-directed research at institutes such as DERI here in Europe, and even conferences like the International Conference on Semantic Web and Digital Libraries, which was in India this year.

Finally - for now - there's also a special issue of Library Review in preparation; Digital Libraries and the Semantic Web: context, applications and research, and I'll be speaking on The Semantic Web and libraries - a perfect fit? at the Talis Insight conference in November. It's funny that you mention Jane in your post, because I'll also be doing something for her later in November that encompasses some of these themes...

Sometimes moving forward doesn't necessarily mean progress; sometimes we need to take one step back before we can move two steps in the right direction. But it appears the infrastructure is there for us to move in the direction of Web 3.0. What does this mean for librarians? I suspect it means we should stop the bickering about Web versions, and start reflecting on the reasons why patrons rely on physical library collections and come to the library for information. The Googlization of information has raised fears for the future of librarianship. But what are we to do? Standing idly by and playing the trumpets as the ship sinks isn't the answer. Let's try to move in the right direction.

Saturday, March 01, 2008

The Business of Free-conomics

He's done it again. Fresh off the press is Chris Anderson's "Free" in Wired Magazine. In 2004, Anderson changed the way business was done on the Web with his visionary Long Tail. Two years later, Anderson is back with the idea of "free." While the long tail proved a staple of Web 2.0, put "free" into your lexicon for the upcoming Web 3.0.

Giving away things for free has been around for a long time. Think Gillette. In fact, the open source software movement is not unlike the shareware movement a decade earlier. (Remember that first game of Wolfenstein?) Like the long tail, Anderson synthesizes "Free" according to six principles:

(1) "Freemium" - Another percent principle: the 1% rule. For every user who pays for the premium version of the site, 99 others get the basic free version.

(2) Advertising - What's free? How about content, services, and software, just to name a few. Who's it free to? How about everyone.

(3) Cross-subsidies - It's not piracy, even though it looks like piracy. A cross-subsidy is any product that entices you to pay for something else. In the end, everyone willing to pay will eventually pay, one way or another.

(4) Zero Marginal Cost - Anything that can be distributed without an appreciable cost to anyone.

(5) Labour Exchange - The act of using sites and services actually creates something of value, either improving the service itself or creating information that can be useful somewhere else.

(6) Gift Economy - Money isn't everything in the new Web. In the monetary economy, this free-ness looks like madness; but that's only shortsightedness in how we measure the worth of what's created.

Tuesday, February 26, 2008

Collection Management 2.0

Librarianship sometimes feels (and sounds) as if it's in disarray. The library discourse is often fractured and fragmented, with so many different viewpoints. Perhaps this is a result of being in our postmodern information age. Bodi and Maier-O'Shea's The Library of Babel: Making Sense of Collection Management in a Postmodern World asserts that libraries have to invest in and prepare for a digital future while maintaining collections and services based on a predominantly print world.

How is it that we're in a postmodern world of academic library collection management? Collections are no longer limited to a physical collection in one location; rather, they are a mixture of local and remote, paper and electronic. Hence, in their experiments with collection development at two research and liberal arts college libraries, the authors arrive at three principles. We aren't reinventing the wheel here; but sometimes, amidst our heavy workdays and busy lives, we forget to step back and reassess how things could be done better. The authors offer an interesting viewpoint in this light:

(1) Break down assessment by subject, or into smaller sub-topics when necessary

(2) Blend a variety of assessment tools appropriate to the discipline

(3) Match print and electronic collections to departmental learning outcomes through communication with faculty members

Wednesday, February 20, 2008

Top 25 Web 2.0 Tools

Jessica Hupp from College Degree has written some insightful articles about information technology. 25 Useful Social Networking Tools for Librarians might be one of the best. She profiles 25 of the best Web 2.0 tools available that librarians should consider using for their professional work. I'm just going to introduce the list. I encourage you to read her actual entry.

1. Communication - Keep in touch with staff, patrons, and more with these tools

MySpace

Facebook

Ning

Blog

Meebo

LinkedIn

Twitter

2. Distribution - Tools that make it easy to share information from anywhere

Flickr

YouTube

TeacherTube

Second Life

Wikipedia

PBwiki

Footnote

Community Walk

SlideShare

Digg

StumbleUpon

Daft Doggy

3. Organization - Keep all of your information handy and accessible with these tools

aNobii

Del.icio.us

Netvibes

Connotea

LibraryThing

lib.rario.us

Thursday, February 14, 2008

The Googling Librarian

An article from the Chronicle of Higher Education popped up which once again highlighted the information needs (or lack thereof) of college students. It has been a recent phenomenon -- this argument and counter-argument over the necessity of libraries and librarians in the face of Google-ization. For every viewpoint that the Internet has replaced the information services of libraries, there is the stance that users are even more confused by information overload and the mess that is the Web.

I tend to agree with what Dennis Dillon says in a new article, Google, Libraries, and Knowledge Management: From the Navajo to the National Security Agency. Libraries and the 'Net are different entities: libraries play the library game, not the information game. Google is the same for everyone. It is not tailored for different user groups, and it does not change as local users' needs shift. Google's very nature is different from that of libraries.

Here's the kicker, folks: we could wake up tomorrow to the news that a banking conglomerate has purchased Google, intends to turn it into a private corporate information tool, and wants to convert the content to French. Although this is just a silly hypothetical, Dillon makes a good point: people and organizations such as Google are not playing the same game as libraries.

Perhaps this is what libraries with foresight, such as McMaster University Libraries, are doing. They're integrating new technologies to supplement and complement existing facilities -- before it's too late. I personally talk a great deal about emergent technologies, particularly Web 3.0 and the Semantic Web, but in the end, I believe these are mere tools that serve the growing organism of the library. In the end, interior design is every bit as relevant to how users perceive the physical spaces of the library as Facebook is to increasing outreach to students. But put the two together, and we pack a powerful punch. Dillon leaves us with a fresh yet somewhat disconcerting comment:
Libraries have become so enamoured of technology that we sometimes cannot see what is in front of our faces, which is that there are still people in our buildings and they are there for a reason.

Wednesday, February 06, 2008

The Future of Digital Librarians

My colleague and mentor The Google Scholar discussed a bit about the Semantic Web and Web 2.0. Is it relevant to the profession of librarianship? Absolutely. How do we achieve it? Edie Rasmussen and Youngok Choi released a study in 2006, What is Needed to Educate Future Digital Librarians, that surveys the skills practitioners lack. In this study, the two authors found that while many librarians are young and fresh out of graduate LIS school, they often lack the skills necessary to thrive in the increasingly digital world of libraries. LIS curricula are often limited to introductory classification and rudimentary information technology courses. There appears to be a real disjunct between the job descriptions for newer positions and the skills that librarians acquire in LIS school. Rasmussen and Choi's study finds that respondents are often frustrated by "training gaps" in their studies in the following areas:

(1) Overall understanding of the complex interplay of software

(2) Lack of vocabulary to communicate to technical staff

(3) Knowledge of Web-related languages and technologies

(4) Web design

(5) Digital imaging and formatting

(6) Digital technology

(7) Programming and scripting languages

(8) XML standards and technologies

(9) Basic systems administration

In my own experience as an information professional, I find that these skills were sorely lacking in my own education. It increasingly falls to my own initiative to catch up on the literature and the technologies. Who really has time to learn OAI-PMH metadata standards, XML, EAD, and TEI? Many librarians keep abreast of their field -- but on top of their current duties. The problem remains that LIS schools are not set up to train technicians, even though that is what the jobs demand; their mandate is to nurture scholars. Which I can understand. Yet we can't fit a square peg into a round hole. There lies the conundrum: something's got to give. But what? That has remained the central tension in the field of LIS since its inception. With the advent of the Web and newer technologies, this gap will only widen.

Thursday, January 31, 2008

Web 3.0 as in Automation?

I often wonder what kind of automation will make the Semantic Web possible. I know there needs to be an automated web browser (or something similar), but what would it look like? The solution could look something like Automatic Character Switch (ACtS), which is a strategy and a philosophy rather than a standard, meaning community moderators can independently implement their own ACtS methods. Similar to AJAX, ACtS is invoked only when necessary; that is, only when a web space is connected to a community.

So what is ACtS? According to Yihong Ding, ACtS allows different communities to recognize only whatever they can identify from a web space. A web user sets up a local web space that stores his web resources. When he subscribes to a new web community, he uploads his local web space to the site, and the site customizes its resources based on the community specifications. ACtS begins with a user subscribing a web space to a community. The community server then performs a community-sensitive resource identification procedure to categorize (information retrieval) and annotate (semantic annotation) the public resources stored in the web space. The local web space thus gains a community-specific view over its resources, which composes a community-specific sub-space. But ACtS is only a theory. For it to be realized, two premises must hold:

(1) A uniform representation - Web spaces similar to what exists on Web 1.0. This requires advancement in HTML encoding; in particular, independent HTML encoding of individual web resources.

(2) Character recognition and casting technology - A combination of information retrieval and semantic annotation methods.
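To make the procedure concrete, here is a hypothetical toy sketch of ACtS-style community-sensitive identification. Ding describes a strategy rather than an implementation, so the function names, the keyword-matching rule, and the data shapes below are all my own invention:

```python
# Hypothetical sketch: a community "recognizes" only the resources in a user's
# web space that match its own vocabulary, then annotates them into a sub-space.
def subscribe(web_space, community):
    """Return a community-specific view (sub-space) of a user's web space."""
    view = {}
    for resource, keywords in web_space.items():
        matched = keywords & community["vocabulary"]  # what this community can identify
        if matched:
            view[resource] = {"categories": sorted(matched),
                              "annotated_by": community["name"]}
    return view

space = {"post-1": {"rdf", "owl", "cats"}, "photo-9": {"cats"}}
semweb = {"name": "semweb", "vocabulary": {"rdf", "owl", "sparql"}}
print(subscribe(space, semweb))
# Only "post-1" is recognized; "photo-9" falls outside the community's vocabulary.
```

The same web space subscribed to a different community would yield a different sub-space, which is the community-sensitive part of the idea.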

Wednesday, January 30, 2008

Public Library 2.0?

Much has been discussed about the role of public libraries as they face budget cuts alongside growing needs for technological innovation. Some have argued that this is natural, as we have entered Library 2.0, which is all about rethinking library services in light of re-evaluated user needs and the opportunities produced by new technologies. Although great resources have been written about Library 2.0, none has been as thorough in its analysis of public libraries as Public Library 2.0: Towards a new mission for public libraries as a "network of community knowledge"? Chowdhury, Poulter, and McMenemy propose Public Library 2.0, inspired by Ranganathan's famous five principles. They make great fodder for further discussion, don't they?

(1) Community knowledge is for use - Since the value of a community is the knowledge it possesses, people who leave a community take its memories with them. Yet little has been done in public libraries to digitize local resources.

(2) Every user should have access to his or her community knowledge - Knowledge is for sharing; community knowledge becomes valuable only when it can be accessed and used by others. Facilitating the creation and wider use of this knowledge should be the new role of public libraries.

(3) All community knowledge should be made available to its users - No community knowledge should be allowed to be wasted. Rather, public libraries should facilitate the creation of such knowledge so that it is recorded and preserved. Nothing should be lost.

(4) Save the time of the user in creating and finding community knowledge - Just like the paper records of past lives, the digital records of current lives are accumulating in an ad hoc manner but in a much greater quantity and variety. Hence, public library staff should fill the role of advisors on local content creation, management, and implementation of controlled description, as well as access schemes.

(5) Local community knowledge grows continually - Because community knowledge creation is a continual process, public libraries acting as local knowledge hubs must use existing standards and technology for digitization, as well as metadata for the management of, and access to, the digitized resources.

Sunday, January 27, 2008

The Semantic Catalogue

It's important that librarians keep at the back of their minds how to integrate the Semantic Web into the catalogue, which is ultimately the bridge that users cross to access the library's resources. But it's easy to forget about it, particularly since many libraries have difficulty keeping up with Web 2.0 technologies. But regardless of how far we've come along, it's necessary to peer into the future and see what kinds of changes we'll need to embrace. It could be ten years down the road before we hit the Semantic Web . . . or five . . . or even less. Take a look at Campbell and Fast's Academic Libraries and the Semantic Web: What the Future May Hold for Research Supporting Library Catalogues. They make an excellent case for integrating existing web resources into a dynamic, information-rich, and user-centred catalogue.

Meshing services such as IMDB, Amazon, and AFI's Catalogue, the authors suggest that academic libraries could use the Semantic Web as a source of rich metadata that can be retrieved and inserted into bibliographic records, enhancing users' information searches and expanding the role of the library catalogue from a mere locating device into a research tool. (Something along the lines of the Pipl search engine technology.) In doing so, the cataloguer acts as an information intermediary, using a combination of subject knowledge and information expertise to facilitate the growth of semantically encoded metadata. In a Web 3.0 world, the cataloguer's new responsibilities would include the following:

(1) Locate - RDF-encoded information on specific subjects, scrutinizing its reliability, and assessing its usefulness in meeting cataloguing objectives

(2) Select - RDF resources for the specific item being catalogued

(3) Participate - In markup projects within a specific knowledge domain, thus promoting the growth of open-access domain-specific metadata
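As a toy illustration of the locate and select steps above: once RDF-style metadata has been located, selection amounts to picking the triples worth merging into a bibliographic record. The sample triples and the `select` helper are invented for this sketch, not taken from Campbell and Fast:

```python
# Invented sample triples standing in for RDF harvested from sources like IMDB or Amazon.
triples = [
    ("film:Casablanca", "director", "Michael Curtiz"),
    ("film:Casablanca", "year", "1942"),
    ("film:Casablanca", "genre", "Drama"),
]

def select(subject, wanted_predicates):
    """Keep only the located triples a cataloguer judges useful for one record."""
    return {p: o for s, p, o in triples if s == subject and p in wanted_predicates}

# Enrich a minimal bibliographic record with the selected metadata.
record = {"title": "Casablanca", "call_no": "PN1997.C37"}
record.update(select("film:Casablanca", {"director", "year"}))
print(record)
```

A real system would parse genuine RDF and scrutinize the source's reliability first, which is exactly the judgment work the authors assign to the cataloguer.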

Thursday, January 24, 2008

Google Scholar, Windows Live Academic Search, and LIS 2.0

The School of Information and Library Science at the University of North Carolina at Chapel Hill sure churns out some great theses. The latest, Josiah Drewey's Google Scholar, Windows Live Academic Search, and Beyond: A Study of New Tools and Changing Habits in ARL Libraries, offers remarkable insight into these two academic search engines. Little has been written about Windows Live Academic Search, so much so that it appears most people have forgotten about it (including its own creators). Drewey's paper reveals that such is not the case, and it's worth a read. Here are my favourite points that Drewey makes about GS and WLAS:

(1) Citation Ranking - Search results are largely influenced by citation counts generated by Google's link-analysis, which means that users see the most highly cited (and therefore, the most influential) articles

(2) Citation Linking - GS rivals Web of Science and Scopus with its ability to link to each article through a "cited by" feature that allows users to see which other authors have cited that particular article. GS is superior in this aspect as it stretches into the Humanities as well.

(3) Versioning - GS compiles the different versions of a particular article or other work in one place. Different versions can come from publishers' databases, preprint repositories, or even faculty homepages.

(4) Open Access - GS increasingly brings previously unknown or unpublicized content to users.

(5) Ability to link to libraries - GS has the ability to link to content already paid for by libraries. Thus, search results from GS can lead directly to the libraries' databases.

(6) Federated Search Engine - Instead of searching many databases at the moment a query is made, GS compiles its resources prior to the search, so results return very quickly.

In contrast, Drewey makes some great insights into Windows Live Academic Search. Here are the main strengths of WLAS:


(1) Better interface - WLAS uses a "preview pane" to display initial search results; the user can mouse over a citation to show its abstract in another pane to the right, whereas GS's display is inflexible

(2) Names of authors are hyperlinked - Search results take the user to other works by each author

(3) Citations Export - Although GS allows this, WLAS makes exporting to BibTeX, RefWorks, and EndNote much more visible

(4) User-friendly - In many ways, WLAS offers more features tailored for users. Not only does it offer RSS feeds, it enables users to store their preferences and save search parameters. GS surprisingly does not have such features.

Tuesday, January 22, 2008

The Long Tail and Libraries

To date, Lorcan Dempsey's Libraries and the Long Tail has offered the most insightful analysis of the Long Tail's importance in libraries. As I've written before, the Long Tail is an effective strategy to utilize when implementing Library 2.0 for the modern library. The question is: could it be implemented without a huge overhaul of most existing libraries? These are some points that Dempsey argues:

(1) Transaction Costs - The better connected libraries are, the lower the transaction costs

(2) Data about choice and behaviour - Transactional behavioural data is used to adapt and improve systems. Examples of such data are holdings data, circulation and ILL data, and database usage data.

(3) Inventory - As more materials are available electronically, we will see more interest in managing the print collection in a less costly way. Although historical library models have been based on physical distribution of materials, resources are decreasingly needed to be distributed in advance of need; they can be held in consolidated stores

(4) Navigation - There are better ways to exploit large bibliographic resources. Ranking, recommendations, and relation help connect users to relevant material and also help connect the more heavily used materials to potentially useful, but less used, materials

(5) Aggregation of Demand - The library resource is fragmented. In the new network environment, this fragmentation reduces gravitational pull, which means that resources are prospected by the persistent or knowledgeable user, but they may not be reached by others to whom the resources are potentially useful. What OCLC is doing is making metadata about those books available to the major search engines and routing users back to library services

Saturday, January 19, 2008

Google = God?

Maybe Google got it right all along. But is it God? That often appears to be the way most people do their searching online nowadays, expecting to find the answer to just about anything. Yihong Ding calls this kind of searching "oracle-based" web searching, in which search engines such as Google are assumed to know everything. This worked relatively well in the early days of the Web because it was a pragmatic and affordable strategy; at that time, the quantity of web resources was comparatively small, and we rarely searched for meaning. On this premise, to build a semantic oracle (i.e. a Semantic Google) is equivalent to creating a real God (who knows everything) for human beings.

Perhaps, according to Ding, a better alternative is collaborative searching. Whereas the current oracle-based strategy is motivated by questions, collaborative search is motivated by answers. In the current model, the ones who answer questions may not have passion for (or enough knowledge of) the questions. An inanimate search engine such as Google doesn't know this -- nor does it care.

However, Web 2.0 is slowly changing this course of searching. Already, search engines such as Cha Cha are harvesting collective intelligence and the wisdom of the crowds to retrieve more "relevant" results. Ding goes one step further: Web 3.0 will be based on community-sensitive link resources. It will reverse the relation between horizontal search engines and vertical search engines. The current model, in which vertical search engines are built upon generic search engines, is not working well because the generic engines are too immature to provide community-specific search by themselves. (Just look at the limitations of Rollyo.) What will the Semantic Web search engine look like? Maybe something like this.

Friday, January 18, 2008

The Future of I.S.

Meet Ramesh Srinivasan, professor of Information Studies at UCLA. During my trip to Los Angeles, I met with the IS faculty and visited some of the libraries at UCLA. My conversation with this up-and-coming academic star was fascinating, to say the least. Ramesh's interests include exploring connections between diasporic/indigenous communities and new media, and how information technologies shape, transform, and differentially impact nations, cultures, and societies along educational, political, health-related, social, and infrastructural dimensions.

Among his more interesting projects is Emerging Databases, Emerging Diversity (ED2), a National Science Foundation-funded initiative to study methods by which digital collections can be shared via systems that maintain diverse tags, ontologies, and interfaces. In collaboration with Cambridge University's Museum of Anthropology and Archaeology and the Zuni community of New Mexico, the $300,000-funded project asks how digital access to ancestral objects affects diverse communities. Ramesh's work involves extensive field work in places like Kyrgyzstan and India. (Exciting!)

The faculty at UCLA represents Library and Information Science's gradual shift towards the iSchool movement. Academics such as Ramesh Srinivasan represent the new face of LIS. This has important implications for librarians, who will ultimately be bred and nurtured by these new scholars' nontraditional perspectives on LIS. Rather than basing their studies on users of libraries, newer scholars such as Srinivasan, whose background is as diverse as his research (his PhD is in Design), go beyond the traditional domain of LIS. Inevitably, librarianship will change because of this new approach. New ways of thinking and research will be injected into the profession -- perhaps this is where the source of innovation in libraries will come from as well. From the classroom.

Wednesday, January 16, 2008

Metcalfe's Law

As I have opined in previous posts, the next stage of the Web will be built on the existing infrastructure of Web 2.0. One of the foremost thinkers of the Semantic Web offers an insightful analysis of the progression from Web 2.0 to the Semantic Web. Along with Jennifer Golbeck, James Hendler invokes Metcalfe's Law, which holds that a network's value increases as the number of users increases: every new person who joins adds potential links for every existing user. Not surprisingly, Metcalfe's Law is the essence of Web 2.0.

As the number of people in the network grows, the connectivity increases, and if people can link to each other's content, the value grows at an enormous rate. The Web, if it were simply a collection of pages of content, would not have the value it has today. Without linking, the Web would be a blob of disconnected pages.
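The arithmetic behind Metcalfe's Law is simple enough to sketch: among n users there are n(n-1)/2 possible pairwise links, so potential value grows roughly with the square of the user count.

```python
def potential_links(n):
    """Number of possible pairwise links among n users: n(n-1)/2."""
    return n * (n - 1) // 2

# Each tenfold increase in users yields roughly a hundredfold increase in links.
for n in (10, 100, 1000):
    print(n, potential_links(n))
```

This is why adding users to an already-linked network is so much more valuable than adding pages to a disconnected one.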

As information professionals and librarians, we shouldn't miss out on the obvious links between Web 2.0 and the Semantic Web. Social networking is critical to the success of Web 2.0; but by combining the social networks of Web 2.0 with the semantic networks of the Semantic Web, a tremendous value is possible. Here's a scenario from Tom Gruber which I find very compelling:

Real Travel "seeds" a Web 2.0 travel site with the terms from a gazetteer ontology. This allows the coupling of place names and locations, linked together in an ontology structure, with the dynamic content and tagging of a Web 2.0 travel site. The primary user experience is of a site where travel logs (essentially blogs about trips), photos, travel tools and other travel-related materials are all linked together. Behind this, however, is the simple ontology that knows that Warsaw is a city in Poland, that Poland is a country in Europe, etc. Thus a photo taken in Warsaw is known to be a photo from Poland in a search, browsing can traverse links in the geolocation ontology, and other "fortuitous" links can be found. The social construct of the travel site, and communities of travelers with like interests, can be exploited by Web 2.0 technology, but it is given extra value by the simple semantics encoded in the travel ontology.
Genius.
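Gruber's gazetteer scenario boils down to a containment hierarchy that search can walk. A minimal sketch follows; the ontology here is a made-up toy, not Real Travel's actual data or API:

```python
# Toy gazetteer ontology: each place maps to the place that contains it.
ontology = {"Warsaw": "Poland", "Krakow": "Poland", "Poland": "Europe"}

def places(tag):
    """Expand a photo's place tag up the containment hierarchy."""
    chain = [tag]
    while chain[-1] in ontology:
        chain.append(ontology[chain[-1]])
    return chain

# A photo tagged "Warsaw" now also matches searches for Poland and Europe.
print(places("Warsaw"))  # ['Warsaw', 'Poland', 'Europe']
```

A few lines of semantics are enough to let the Web 2.0 layer (tags, blogs, photos) gain the "fortuitous" links Gruber describes.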

Monday, January 07, 2008

Pragmatic Web as HD TV

The Pragmatic Web: A Manifesto makes a return to simplification. For all the hype about Web 3.0, we've still seen very little substantial evidence that it exists. Schoop, De Moor, and Dietz propose a "Pragmatic Web" as a solution that does not replace the current web but rather extends the Semantic Web.

Rather than waiting for everyone to come together and collaborate -- that could take forever or, worse yet, never happen -- the best hope might be to encourage the emergence of communities of interest and practice that develop their own consensus knowledge, on the basis of which they will standardize their representations. The vision of the Pragmatic Web is thus to augment human collaboration effectively by appropriate technologies. In this way, the Pragmatic Web complements the Semantic Web by improving the quality and legitimacy of collaborative, goal-oriented discourses in communities.

I liken this scenario to high-definition television. By 2010, the majority of programming in North America will have moved to HDTV specifications, effectively retiring older formats such as analog broadcasts. In the meantime, consumers are free to continue using their existing TV sets. The Web could very well employ this model, as it's logical and follows the path of least damage. Under the HDTV scenario, web users can continue using their current browsers and existing ways of surfing, while those who want to maximize the full potential of the Web will use Semantic Web browsers (e.g. Piggy Bank) designed specifically to utilize the portion of the Web that is "Semantic Web-compliant."

Meanwhile, in the background, semantic annotation will be slowly integrated into Web pages, programs, and services. As time progresses, users will eventually catch onto the "rave" that is the Semantic Web . . .

Saturday, January 05, 2008

E-Commerce 2.0

Web 2.0 has been quite the hype over the past few years, perhaps too much. Much of it pertains to best practices using blogs, wikis, RSS feeds, and mashups. But not very much has been discussed -- well, not enough in my opinion -- about practical commercial applications other than the ubiquitous eBay and Amazon. Not anymore. Meet Zopa, the world's first social finance company. In 2005, Zopa pioneered a way for people to lend and borrow directly with each other online, as part of its stated mission to give people around the world the power to help themselves financially while helping others. According to Kupp and Anderson's Zopa: Web 2.0 Meets Retail Banking, here's how Zopa works:

(1) Zopa looks at the credit scores of people looking to borrow and determines whether they're an A*, A, B, or C-rated borrower. If they're none of those, then Zopa's not for them

(2) Lenders make lending offers such as "I'd like to lend this much to A-rated borrowers for this long and at this rate"

(3) Borrowers review the rates offered to them and accept the ones they like. If they are dissatisfied with the offered rates on any particular day, they can come back on subsequent days to see if rates have changed

(4) To reduce risk, Zopa spreads lender capital widely. A lender putting forth, for instance, 500 pounds or more would have his or her money spread across at least 50 borrowers

(5) Borrowers enter into legally binding contracts with their lenders

(6) Borrowers repay monthly by direct debit. If repayments are defaulted, a collections agency uses the same recovery process that the High Street banks use
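Step (4), spreading a lender's capital across at least 50 borrowers, can be sketched as follows. This is a hedged toy illustration: the even split to the penny and the hard 50-borrower floor are my simplifications, not Zopa's actual allocation algorithm:

```python
def spread(amount_pounds, borrower_ids, min_borrowers=50):
    """Divide a lender's money across many borrowers to dilute default risk."""
    if len(borrower_ids) < min_borrowers:
        raise ValueError("not enough borrowers to diversify across")
    pence = round(amount_pounds * 100)          # work in pence to avoid float drift
    base, extra = divmod(pence, len(borrower_ids))
    # The first `extra` borrowers absorb the leftover pennies.
    return {b: (base + (i < extra)) / 100 for i, b in enumerate(borrower_ids)}

allocation = spread(500, [f"b{i}" for i in range(50)])
print(len(allocation), sum(allocation.values()))  # 50 borrowers, 500.0 pounds total
```

The point of the diversification rule is that any single borrower's default touches only a small slice of the lender's money.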

Thursday, January 03, 2008

Mashups for '09

It's already been two years since I published an article on library web mashups. There have been developments, but still no breakthrough killer application that could popularize mashups for the masses. The main challenge with mashups is that they are still a programmer's world. By merging two or more web programs together, web mashups represent the next stage of Web 2.0 and are changing the way the web is used. There are already several mashup editors that help users create or edit mashups: Yahoo Pipes, Google Mashup Editor, Microsoft Popfly, and Mozilla Ubiquity. But they still require some programming skill. I believe mashups point toward the next stage of the web, the Semantic Web. Why? Because mashups open up data, breaking down information silos.

I've updated my last article with Mashups, Social Software, and Web 2.0: How Remixing Programming Code Has Changed The Web. In taking a look at mashups, I think libraries need to pay attention, as they open up virtual information services to a much larger audience.
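At its core, a mashup is just a join across services. Here is a minimal sketch, with invented in-memory data standing in for two web APIs that a real mashup would call over HTTP (say, a catalogue feed and a review service):

```python
import json

# Invented stand-ins for two remote services; a real mashup would fetch these.
catalogue = [{"isbn": "0131103628", "title": "The C Programming Language"}]
reviews = {"0131103628": {"stars": 4.7, "count": 1203}}

# Join the two sources on ISBN -- the essence of remixing data across silos.
mashup = [dict(item, **reviews.get(item["isbn"], {})) for item in catalogue]
print(json.dumps(mashup, indent=2))
```

Everything hard about real mashups (authentication, rate limits, mismatched identifiers) lives outside this sketch, which is exactly why they remain a programmer's world for now.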

When Times Are Tough . . .

I love libraries, everything from the smell of books, to the warmth of staff, the comfy carpets, to the great DVD collections that are all free to borrow with just a library card and nothing more. But times are tough lately, and the downturn of the economy has proven just how useful libraries are to society. As the Los Angeles Times has reported, although retail stores may be quiet these days, libraries are hopping as people look for ways to save money. The Los Angeles Public Library is "experiencing record use," said spokesman Peter Persic, with 12% more visitors during fiscal 2008 than the previous year. At the San Francisco Public Library, about 12% more items were checked out in October than a year earlier. The Chicago Public Library system experienced a 35% increase in circulation. The New York Public Library saw 11% more print items checked out (a spokesman said that could be partly explained by extended hours) . . .

And I've begun to experience this myself. Patrons are starting to use collections more as they feel the financial pinch the economy has given us. Fear not: the library isn't going anywhere anytime soon.

Wednesday, January 02, 2008

Mashups for '09

It's almost two years since I first researched web mashups. I still remember having a working draft of an article I was writing for the Journal of Canadian Health Libraries on New Year's Eve. (Hey, it was a slow day.) Lo and behold, two years later, there have still been only a handful of articles on mashups. My idol, Michelle "The Krafty Librarian" Kraft, has written an excellent chapter in Medical Librarian 2.0, which is perhaps the most concise to date.

I've recently written another entry on mashups, Mashups, Social Software, and Web 2.0: How Remixing Programming Code Has Changed The Web. The challenge with mashups is that they're still unfortunately a web programmer's tool. However, the next stage of the Web will be mashups. It's about opening up data to others and breaking down information silos.

11 Ways to the Library of 2012

Don't blink. It's only five years away. Being inundated with day-to-day duties in a large academic library has sometimes removed me from the "larger" picture of what libraries look like, not only to users today, but in the future. I've written a great deal about the Semantic Web and Web 2.0; but how do they fit libraries, physically and conceptually? Visions: The Academic Library in 2012 offers a meta-glimpse of how libraries might look in 2012. As you'll notice, some of the features are suspiciously Web 2.0 and Library 2.0. Let's take a look, shall we?

(1) Integrated Library System - the system will recognize the patron and quickly adapt and respond to the patron's new questions and needs (A Semantic Web portal?)

(2) Information Available - collections will undergo dramatic transformations, as they will be largely patron-selected, featuring multimedia resources and databases, many provided collaboratively through extensive consortial arrangements with other libraries and information providers (Think long tail?)

(3) Access to Information - print-on-demand schemes will be developed utilizing the dissertation production experience of UMI but providing mechanisms by which the user can return the fresh, undamaged manuscript for credit, and for binding and future use (Kindle?)

(4) Study Space - Space for work and study will be adaptable, with easily reconfigured physical and virtual spaces (Information Commons? Learning commons?)

(5) Information Instruction - Training and learning support, delivered both in person and through appliance-based videoconferencing (desktop, hand-held, and small-group), will characterize instruction

(6) Information Printouts - Articles, videos, audio, and on-demand printing of various formats will not only be commonplace; displays of titles will be coordinated with publishers and booksellers to enhance information currency, to market small-run monographs, and to generate revenues

(7) Organizational Aspects - Library staff will be engaged, networked, matrix-structured, and largely "transparent" unless the patron is standing inside the facility facing the individual

(8) Orientation - The library's perspective will be "global" - ubiquitous automatic translators will facilitate truly global information-accessing programs

(9) Computer Access - From OPACS to wireless access for collapsible laptops and personal appliances

(10) Financial - the viable library will have developed dependable revenue streams to facilitate ongoing innovation and advancement (Library as Bookstore model?)

(11) Consortia - Collaborating to create and publish academic journals and resources, particularly e-journals, e-books, and collections of visual resources in various media (Open Access?)