Thursday, June 19, 2008

Stephen Abram at SLA in Seattle

Day #2 of SLA was full of fascinating discussions. Stephen Abram's session, "Reality 2.0 - Transforming Ourselves and our Associations," offered the most thought-provoking ideas - definitely the highlight of my experience at this conference.

For those who don't already know, Stephen Abram is the 2008 President of SLA and a past President of the Canadian Library Association. He is Vice President of Innovation for SirsiDynix and Chief Strategist for the SirsiDynix Institute.

Here's a flavour of the key points that gave me food for thought:

(1) What's wrong with Google and Wikipedia? - It's okay for librarians to refer to Google or Wikipedia. Britannica has a 4% error rate; Wikipedia has a 4% error rate, plus tens of thousands more entries. It's not wrong to start with Wikipedia and Google, but it is wrong when we stop there.

(2) Don't dread change - This is perhaps the whiniest generation this century. The generation that dealt with two world wars and a depression did fine learning new tools like refrigerators, televisions, radios, and typewriters. And they survived. Why can't we? Is it so hard to learn to use a wiki?

(3) Focus! - We need to focus on the social rather than the technology. Wikis, blogs, and podcasts will come and go. But connecting with users won't. We must not use technology just for the sake of catching up. There has to be a reason to use them.

(4) Don't Be Anonymous - Do we give our taxes to a nameless accountant? Our teeth to a nameless dentist? Our hearts to a surgeon with no name? If those professionals don't hide, why are information professionals hiding behind their screens? Go online! Use social networking tools to reach out to users!

(5) Millennials - This is perhaps the first generation in human history in which the young teach their elders. However, though there is much to learn from youth about technology, there is also much need for mentoring and training if this profession is to prosper and flourish.

(6) Change is to come! - Expect the world to be even more connected than it already is. With HDTV, more cables are freed up for telecommunications. Google's endgame is to provide wireless access through electricity. There are already laser-projected keyboards that let you type on any surface. The world is changing. So must information professionals.

(7) Build paths, not barriers - When pedestrians wear informal paths across the grass, libraries commonly erect fences to prevent walking. Why not pave a path where one already exists, so that the library becomes more accessible? Librarians must go to the user, not the other way around. If patrons are using Facebook, then librarians need to use it as a channel for communication.

Stephen's PowerPoint presentation is here for your viewing pleasure as well.

Tuesday, June 17, 2008

SLA Day #1

Just when one thought that bibliographic control had changed, it might change some more. On Day 1 of SLA in Seattle, I went to a session given by Jose-Marie Griffiths, On the Record: Report of the Library of Congress Working Group on the Future of Bibliographic Control, which offered a fascinating, multifaceted glimpse into the current situation of bibliographic control and cataloguing. What is intriguing about this working group is that it comprises both the library world and the private sector. Led by a tri-membership of Google, the American Library Association, and the Library of Congress, the working group created a recommendation document which proposed five general recommendations: 1) increasing efficiency; 2) enhancing access; 3) positioning technology; 4) positioning the community for the future; and 5) strengthening the profession.

What is controversial about the proposal is the suspension of Resource Description and Access (RDA). Not only does the working group believe that RDA is too confusing and difficult to implement, it also believes RDA requires much more testing. The report also calls for more continuing education in bibliographic control for professionals and students alike. Only by designing an LIS curriculum and building an evidence base for LIS research can the profession be strengthened for the future.

Although the session had a fairly sparse audience, I found it highly engaging and perhaps even ominous for the future of librarianship. Because the Library of Congress accepted the report with support (although unofficially), this could mean a schism in the progress of RDA, which is viewed as the successor to AACR2. Also, because this working group included the non-library world (i.e., Google and Microsoft), the future of bibliographic control won't be limited to librarians. Rather, it will involve input from the private sector, including publishers, search firms, and the corporate world. Is this a good thing? Time will tell. For better or for worse.

Thursday, June 12, 2008

B2B in a World of Controlled Vocabularies and Taxonomies

The e-readiness rankings have been released, and they reveal that the US and Hong Kong are the leaders in e-readiness. How is it measured? According to the Economist Intelligence Unit, connectivity is one measure: digital channels that are plentiful, fast, and reliable enough for a country's people and organizations to make the most of the Internet are the basic infrastructure for e-readiness. But if individuals and businesses do not find the available channels useful for completing transactions, then the number of PCs or mobile phones in a country is a worthless measure.

Hence, the EIU based its findings on the opportunities that a country provides to businesses and consumers to complete transactions. Market analyst Forrester estimates that online retail sales in the US grew by 15% in 2006; US$44 billion was spent online in the third quarter, and the firm estimates that 2006 online sales in the Christmas holiday season alone reached US$27 billion. Another research firm, IDC, estimates that business-to-business (B2B) transaction volume in the US will reach US$650 billion by 2008, which amounts to two-thirds of the world's US$1 trillion B2B market by that time.

Even though there is concern that the great weight of the US in online activity takes away from the rest of the world, its online adoption also benefits other countries. China is one beneficiary of the growth of B2B volumes in the US, so much so that it has spawned some sizeable and sophisticated B2B transaction service providers, including one of the world's largest online B2B marketplaces, Alibaba.

Over 15 million business and consumer customers in China use Alibaba's online platform. While most do not pay to use the basic services, more than 100,000 businesses do. In fact, Yahoo! bought a 40% stake in Alibaba for US$1 billion in 2005. The Chinese firm is evolving into a comprehensive supplier of online business development resources for Chinese customers, many of whom would not be doing business online at all if not for Alibaba.

What does this mean for information professionals? A great deal. Look at the financial implications of B2B in the current telecommunications infrastructure. We're essentially running the online and digital economy on the bricks and mortar of outdated networks. We're in a good position to take advantage of this upcoming economy.

Thursday, June 05, 2008

Talis on Web 2.0, Semantic Web, and Web 3.0

I was honoured to have been interviewed by Richard Wallis of Talis. I was also quite humbled by the whole experience, as I learned just how far I've come in my understanding of the SemWeb and how much more I have to go. We had a good chat about Web 2.0, the Semantic Web, and Web 3.0. Have a listen to the podcast. Any comments are welcome. For those who want a synopsis of what we discussed, here is my distilled version:

1. Why librarians? - Librarians have an important role to play in the SemWeb. Information organization is a set of traits and skills librarians have that maps directly onto the SemWeb architecture. Cataloguing, classification, indexing, metadata, taxonomies & ontologies -- these are the building blocks of LIS.

2. What will the SemWeb look like? - Think HDTV. I believe the SemWeb will be a seamless transition, one that will be led by innovators - companies and individuals who will pave the way with the infrastructure for it to happen, yet at the same time will not alienate those who don't want to encode their applications and pages with SemWeb standards. But like HDTV, those who fall behind will realize that they'll eventually need to convert...

3. Is this important right now? - Not immediately. The SemWeb might have minimal effect on the day-to-day work of librarians, but the same could be said for computer programmers and software engineers. Right now, we are all waiting for the killer application that will drive home the potential of the SemWeb. So until that transpires, there is much speculation and skepticism.

4. What do librarians need to do? -
Learn XML, join the blogosphere's discussion of the SemWeb, discuss it with colleagues, pay attention to RDA, and continue questioning the limitations of Web 2.0. Just because we can't see it yet doesn't mean we should stay out of the discourse. Think string theory.

Wednesday, June 04, 2008

Easterlin Paradox of Information Overload

According to Wikipedia, the Easterlin Paradox is a key concept in happiness economics. Theorized by economist Richard Easterlin in the 1974 paper "Does Economic Growth Improve the Human Lot? Some Empirical Evidence," it proposes that, contrary to expectation, economic growth doesn't lead to greater happiness. The idea quickly caught fire: Easterlin became famous, and the paradox became a social science classic, cited in academic journals and the popular media. As the New York Times says, the Easterlin Paradox tapped into a near-spiritual human instinct to believe that money can’t buy happiness. Although there have been attempts to debunk the Easterlin Paradox, I believe the concept applies quite well to Web 2.0 and the information overload it has brought to the current state of the Web.

As one information expert has put it, Web 2.0 is about searching; Web 3.0 will be about finding. Well said. That is exactly the problem with Web 2.0. There is a plethora of excellent free and very useful tools out there - blogs, wikis, RSS feeds, mashups - but at what point does it become too much? Recently, I noticed that my Google Reader has gotten out of hand. I just can't keep up anymore. I skim and I skim and I skim. I'm pulling in a lot of information, but am I really processing it? Am I really happy with the overabundance of rich content in Web 2.0? Not really. Are you?

Tuesday, June 03, 2008

Semantic Web and Librarians At Talis

I've always believed that librarians should and will play a part in the rise of the Semantic Web and Web 3.0. I've gone into the theory and conceptual components, but really haven't discussed too much about the practical elements of how librarians will realize this. Meet Talis. Besides its contribution to the blogosphere, Talis has recently dipped into publishing with its inaugural issue of Nodalities: The Magazine of the Semantic Web. It's a wonderful read - take a look.

How did Talis come about? It's been in the works for quite a while now, and it's worth noting how it came to be. In 1969 a number of libraries founded a small co-operative project, based in Birmingham, to provide services that would help the libraries become more efficient. The project was known as the Birmingham Libraries Cooperative Mechanisation Project, or BLCMP. At the time, the concept of automation was so new that the term mechanisation was often used in its place.

BLCMP built a co-operative catalogue of bibliographic data at the start of its work, a database that now contains many millions of records. In the mid-seventies BLCMP moved into microfiche and later IBM mainframes with dedicated terminals at libraries, and it was one of the first library automation vendors to provide a GUI on top of Microsoft Windows for a better end-user interface. The integrated library system was first called Talis. Talis became the name of the company during restructuring, and the ILS became known as Alto. In 1995 Talis was the first library systems vendor to produce a web-enabled public access catalogue. Much of Talis' work now focuses on the transition of information to the web, specifically the Semantic Web, and Talis has led much of the debate about how Web 2.0 attitudes affect traditional libraries.

How does this include librarians? This ambitious Birmingham-based software company began life in the 1970s as a university spin-off. For many years it was a co-operative owned by its customers (a network of libraries), but in 1996 it was restructured as a commercial entity. It has a well-established pedigree of supplying large-scale information management systems to public and academic libraries in the UK: in fact, more than 60% of UK public libraries now use the company's software, which benefits some 9m library users. In 2002, the company embarked on Talis 2.0, a change programme to take advantage of "the next wave of technology" (Web 2.0 and the semantic web). In the year ending March 2004, turnover was £7.5m with profits of £226,000. Who says librarians can't make a buck, right?

Saturday, May 31, 2008

Introducing AppAppeal

There are some good Web 2.0 applications and websites. Then there is AppAppeal. The web service is based on the principle of 'Software as a Service' (SaaS), which is rapidly gaining popularity. The rise of innovative online applications makes traditional, expensive software unnecessary. Examples of successful web applications are the video service YouTube and the free music service Last.fm. To bring some structure and insight to these ever-growing technologies, http://www.appappeal.com/ informs consumers as comprehensively as possible about all the possibilities SaaS web applications have to offer.

Although we're in the age of Web 2.0, one of the main challenges remains information overload. Too much information does not necessarily mean knowledge. That's why I find AppAppeal to be a convincing website which provides insightful reviews of applications and indexes them according to utility. On this website, all applications are organized in categories such as "Blogging", "Personal Finance" and "Wiki Hosting". The website is still being developed. Soon, tools will be added to create an interactive community around web-based applications.

There are already Web 2.0 review sites such as Mashable, All Things Web 2.0, and Bob Stumpel's Everything 2.0. But AppAppeal goes one step further: it analyzes the advantages and disadvantages of particular applications and provides demo videos. I really like this website. It's a good complement to a project that Rex Turgano and I are collaborating on: Library Development Camp, which not only reviews Web 2.0 applications but offers trial accounts for users to try out different applications. Together we pack a great punch. Stay tuned. More to come. . .

Thursday, May 29, 2008

Day 4 of TEI/XML Bootcamp

Day 4 has come and gone. What did I learn? XML is not easy. Programming is tough business, not for the faint of heart or mind. The main challenge I had, the one that made my head spin, was learning the complexities behind XHTML and XSLT. XHTML is a powerful tool for the construction of the Semantic Web. Most people are acquainted with the "meta" tags that can be used to embed metadata about the document as a whole. Yet there are more powerful, granular techniques available too. Although largely unused by web authors, XHTML and XSLT offer numerous facilities for introducing semantic hints into markup, allowing machines to infer more about a web page's content than just the text. These tools include the "class" attribute, used most often with CSS stylesheets. A strict application of these can allow data to be extracted by a machine from a document intended for human consumption.
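To make the "class" attribute technique concrete, here is a minimal sketch in Python of a machine extracting data from class-marked XHTML. The fragment and the class names (book, title, author) are hypothetical, invented for illustration only; they are not any published vocabulary.

```python
from html.parser import HTMLParser

# A toy XHTML fragment: "class" attributes carry semantic hints.
# (Hypothetical class names, not a real standard.)
PAGE = """
<div class="book">
  <span class="title">On the Record</span>
  <span class="author">LC Working Group</span>
</div>
"""

class SemanticHintParser(HTMLParser):
    """Collect text content keyed by each element's 'class' attribute."""
    def __init__(self):
        super().__init__()
        self.stack = []   # class names of currently open elements
        self.hints = {}   # class name -> extracted text

    def handle_starttag(self, tag, attrs):
        self.stack.append(dict(attrs).get("class"))

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        # Only record text that sits inside a class-marked element.
        if self.stack and self.stack[-1] and data.strip():
            self.hints[self.stack[-1]] = data.strip()

parser = SemanticHintParser()
parser.feed(PAGE)
print(parser.hints)  # {'title': 'On the Record', 'author': 'LC Working Group'}
```

This is the same intuition behind microformats and GRDDL-style extraction: the page remains ordinary XHTML for human readers, while a small amount of disciplined markup lets a machine recover structured data.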

Although there have been several proposals for embedding RDF inside HTML pages, the technique of using XSLT transformations has much broader appeal, because not everyone is keen to learn RDF, which presents a barrier to the creation of semantically rich web pages. Using XSLT gives web developers a way to add semantic information with minimal extra effort. Dan Connolly of the W3C has conducted a number of experiments in this area, including HyperRDF, which extracts RDF statements from suitably marked-up XHTML pages. What can librarians do?
Resource Description and Access (RDA) is just around the corner, and there is much buzz (good and bad) that it's going to change the way librarians and cataloguers think about information science and librarianship. I encourage information professionals to be aware of the changes to come. Although most will not be involved directly with the Semantic Web, they can keep abreast of developments, particularly exciting developments in information organization and classification. Workshops and presentations about RDA are out in droves. Pay attention. Stay tuned. There could be relevance in these new developments that spills over into the SemWeb.

Tuesday, May 27, 2008

The Digital Humanities

I am at Day 2 of the Digital Humanities Summer Institute. Prior to this workshop, I had no inkling of what the digital humanities were. Not anymore. The digital humanities, also known as humanities computing, is a field of study, research, teaching, and invention concerned with the intersection of computing and the disciplines of the humanities. It is methodological by nature and interdisciplinary in scope. It involves investigation, analysis, synthesis and presentation of knowledge using computational media. The DHSI provides an ideal environment to discuss, to learn about, and to advance skills in the new computing technologies influencing the work of those in the arts, humanities and library communities.

I'm currently taking Text Encoding Fundamentals and their Application at the University of Victoria from May 26–30, 2008, taught by Julia Flanders and Syd Bauman, experts in the Text Encoding Initiative (TEI), an XML language whose consortium collectively develops and maintains a standard for the representation of texts in digital form, specifying encoding methods for machine-readable texts. And it has been a blast. This is the seventh year of the DHSI's existence, and already it has gained the attention of academics and librarians across the world.

The DHSI takes place across a week of intensive coursework, seminar participation, and lectures. It brings together faculty, staff, and graduate student theorists, experimentalists, technologists, and administrators from different areas of the Arts, Humanities, Library and Archives communities and beyond to share ideas and methods, and to develop expertise in applying advanced technologies to activities that impact teaching, research, dissemination and preservation. What have I learned so far? Lots. But most of all, just how much XML plays in the Semantic Web. But more on that in the next posting . . . stay tuned.

Friday, May 23, 2008

One Million Dollar Semantics Challenge and API

The SemanticHacker $1 Million Innovators’ Challenge and a new open API for semantic discovery have recently been launched by TextWise, LLC. The Challenge lets developers showcase the power of TextWise’s patented Semantic Signature® technology and accelerates the development of breakthrough applications.

The Challenge provides incentives to encourage the creation of software prototypes and/or business plans that demonstrate commercial viability in specific industries. Are you up to the Challenge? Go to Semantichacker.com to experience the technology first-hand in the demo and learn more about how to enter the $1 million challenge.

But what are Semantic Signatures®? They identify concepts and assign them weights; in other words, they're the ‘DNA’ of documents, highly effective at describing what the documents are ‘about.’ Semantic Signatures® enable web publishers and application developers to automatically embed consistent, semantically meaningful tags within their content for use in classification, organization, navigation and search.
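TextWise's actual algorithm is proprietary, but the underlying idea, representing a document as a set of weighted concepts, can be sketched with nothing more than term frequencies. This is a toy stand-in for illustration, not the real Semantic Signature® technology:

```python
from collections import Counter
import re

def toy_signature(text, top_n=2):
    """Toy 'signature': the document's most frequent content words,
    weighted by their share of all content words."""
    stopwords = {"the", "a", "of", "and", "to", "in", "is", "are"}
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in stopwords)
    total = sum(counts.values())
    return {word: round(n / total, 2) for word, n in counts.most_common(top_n)}

doc = ("Librarians organize metadata. Metadata and taxonomies "
       "help librarians classify documents. Metadata matters.")
print(toy_signature(doc))  # {'metadata': 0.27, 'librarians': 0.18}
```

A real system would map words to concepts and weight them far more cleverly, but the output shape is the same: a compact, weighted description of what a document is 'about.'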

In many ways, that's what librarians can offer in terms of information structuring and organization. Interestingly, TextWise's technology will have a spot at the Semantic Technology Conference in San Jose on May 21, 2008. I won't be able to attend. But if you are, could you give a write-up? I would be forever in your debt.

Thursday, May 22, 2008

Dublin Core is Dead, Long Live MODS

Jeff Beall wrote an article called Dublin Core: An Obituary, in which he asserts that the Dublin Core Metadata Initiative is a failed experiment and that MODS is the way to go. And this was back in 2004! What is MODS? The Library of Congress' Network Development and MARC Standards Office, with interested experts, is developing a schema for a bibliographic element set that may be used for a variety of purposes, particularly library applications. As an XML schema, it is intended to carry selected data from existing MARC 21 records as well as to enable the creation of original resource description records.

It includes a subset of MARC fields and uses language-based tags rather than numeric ones, in some cases regrouping elements from the MARC 21 bibliographic format. This schema is currently in draft status and is being referred to as the "Metadata Object Description Schema (MODS)". MODS is expressed using the XML schema language of the World Wide Web Consortium. The standard is maintained by the Network Development and MARC Standards Office of the Library of Congress with input from users.

Here's what MODS can do that the Dublin Core can't:
1. The element set is richer than Dublin Core
2. The element set is more compatible with library data than ONIX
3. The schema is more end user oriented than the full MARCXML schema
4. The element set is simpler than the full MARC format
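The language-based tags in point 3 and 4 are easy to picture in a fragment. Below is a sketch, using Python's standard library, of a minimal MODS-like record; the element names (titleInfo, name, typeOfResource) follow the draft schema, but the record itself is an invented illustration, not a complete or validated MODS document:

```python
import xml.etree.ElementTree as ET

# MARC 21 identifies fields numerically (245 = title, 100 = personal name);
# MODS replaces the numbers with language-based element names.
mods = ET.Element("mods")
title_info = ET.SubElement(mods, "titleInfo")        # roughly MARC 245
ET.SubElement(title_info, "title").text = "On the Record"
name = ET.SubElement(mods, "name", type="personal")  # roughly MARC 100
ET.SubElement(name, "namePart").text = "Working Group on Bibliographic Control"
ET.SubElement(mods, "typeOfResource").text = "text"

record = ET.tostring(mods, encoding="unicode")
print(record)
```

Even this tiny fragment shows why MODS is friendlier to end users and web developers than numeric MARC tags, while remaining richer than a flat Dublin Core element list.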

In my article in the Semantic Report, I argue that the DCMI is potentially relevant to the SemWeb because implementations of Dublin Core not only use XML but are also based on the Resource Description Framework (RDF) standard. The Dublin Core is an all-encompassing project maintained by an international, cross-disciplinary group of professionals from librarianship, computer science, text encoding, the museum community, and other related fields of scholarship and practice. As part of its Metadata Element Set, the Dublin Core implements metadata tags such as title, creator, subject, access rights, and bibliographic citation, using RDF and RDF Schema.

So will the Dublin Core’s role in knowledge management activity representation be significant in the emergence of the SemWeb? So far, MODS hasn't done the job, even though it claims it can. Is this similar to the situation during the ancient Chinese period of the Hundred Schools of Thought? Who will win in the end? Or which ones? Perhaps the opportunities and possibilities are greater than narrowly looking for one path to absolute knowledge. So we march on . . .

Tuesday, May 20, 2008

Post-modern business in the Free World - Open Access & Librarians

I came across this interesting article from the Vancouver Sun, Post-modern business model: It's free. Videogame company Nexon has been giving away its online games for free and making its revenue from selling digital items that gamers use for their characters. Garden says his business is as much about psychology as it is about game design: it’s no good to sell a bunch of cool designer threads to a character who is isolated in a game, because no one will see how good he looks.
Free games can have a dozen different revenue models, from Nexon’s microtransactions to advertising, product placement within a game, power and level upgrades, or downloadable songs. On the question of videogames (or any other digital product) being offered to consumers for free, much of Nexon's approach is based on Chris Anderson's "free" concept.

“No one says you can’t make money from free." What does this mean for libraries, especially since the mandates and goals of most libraries are not to make money? The possibilities are there. A great number of libraries are already dipping into open access initiatives, particularly at a time when database vendors and publishers are charging arms, legs, and first-borns. With Web 2.0 technologies forming an important foundation for digital and virtual outreach opportunities, and the SemWeb on the horizon, I encourage librarians and information professionals to put on their thinking caps and think together in a collaborative environment to break down the silos of information gathering, and move towards information sharing.

Sunday, May 18, 2008

Librarian 2.0

Sometimes you just read an article and go, I get it. A lightbulb shines brightly above you. Then you quickly turn it off to save energy. And quickly run to the computer to blog about it. Professionalizing knowledge sharing and communications is worthy of praise.

There are a lot of articles that deal with the Library 2.0 mantra. But John Cullen goes beyond that and proposes that Library 2.0 should extend to the librarian. It should be Librarian 2.0. And what does that mean?

The key is developing communicative orientation: one that turns the old, tiring stereotype of library work being quiet, reflective and procedural, to one that is primarily focused on listening, engaging and developing understanding of the unique position of every individual.

In other words, just as technology is important to the library, we must also be alert to the changing nature of information and the profession. No longer are librarians doing the same duties repetitively and mindlessly. Web 2.0 technologies are merely the surface manifestation of L2. The opportunity is there to use this paradigm shift to teach other professions how to actively engage with their service consumers. All aboard!

Friday, May 16, 2008

Search Monkey and the SemWeb

We're getting closer. Yahoo is incubating a project code-named SearchMonkey, a set of open-source tools that allow users and publishers to annotate and enhance search results associated with specific web sites. Using SearchMonkey, developers and site owners can use structured data to make Yahoo! Search results more useful and visually appealing, and drive more relevant traffic to their sites.

The new enhancements differ from Yahoo's "Shortcuts" that sometimes appear at the top of search result pages. Shortcuts are served by Yahoo whenever the search engine is confident that the shortcut links are more relevant than the other web search results on the page. Often, shortcuts highlight content from Yahoo's own network of sites.

The new enhancements can be applied to any web site. Publishers can add additional information that will be displayed with the web search result. For example, retailers can include product information, restaurants can include links to menus and reviews, local merchants can display operating hours, address, and phone information, and so on—far more information than a title, URL, and description that make up current generation search results.

Here's the exciting thing. As Search Engine Land reports:
Anyone can create an app for a web site. Yahoo is collecting the most useful apps into a gallery that you as a searcher can enable for your own Yahoo search results. For example, if you like the app that was created for LinkedIn, which shows a mini-profile of a person, you can include that app so that the mini-profiles display whenever you search on a person's name.

It's true. The SearchMonkey developer tool helps users find and construct data services that you can use to build apps. Once you've built your app, you can use it yourself and share it with others. Take a look at this :)


Wednesday, May 14, 2008

From Dublin Core to the Semantic Web

I've just published a piece in the Semantic Report titled The Semantics of the Dublin Core – Metadata for Knowledge Management. It's an experimental piece about the potential for applying principles from the Dublin Core Metadata Initiative to the SemWeb. In a previous article about half a year ago, Dean and I proposed that the library catalogue could be used as a blueprint for the Semantic Web. Though theoretical and conceptual, the arguments fleshed out the ideas but not the practical applications. In this latest article, I wanted to outline in greater detail how developments in library and information science are playing out, not only in the SemWeb but for knowledge management in general.

Can the DCMI provide the infrastructure for the SemWeb? It could. Or it could not. Some have gone as far as saying that the Dublin Core is dead. But I'm not going to add more to that discourse. What I wanted to do was take apparently disparate entities - B2B, the Dublin Core, and the SemWeb - and tie them together using principles of knowledge organization in the form of the DCMI. Blasphemous? Perhaps.

My point in the article isn't to create something out of nothing. The purpose is to extend the idea that knowledge management is nothing new for librarians and information science. In 2002, two years before Tim O'Reilly coined the term "Web 2.0," librarian Katherine Adams had already argued that librarians would be an essential piece of the SemWeb equation. In her seminal piece, The Semantic Web: Differentiating between Taxonomies and Ontologies, Adams argues that ontologies and taxonomies are synonymous: computer scientists refer to hierarchies of structured vocabularies as "ontologies" while librarians call them "taxonomies." What the Dublin Core offers is an opportunity to bridge different topics and extend across disciplines to navigate the complexities of the SemWeb. Fodder for discussion. But good fodder nonetheless, I hope.
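As a small illustration of that bridge, here is a minimal Dublin Core description expressed in RDF/XML, built with Python's standard library. The resource URL is a made-up placeholder; the dc: element names (title, creator, subject) come from the DCMI element set:

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("rdf", RDF)
ET.register_namespace("dc", DC)

# One resource (placeholder URL), described with three Dublin Core elements.
root = ET.Element(f"{{{RDF}}}RDF")
desc = ET.SubElement(root, f"{{{RDF}}}Description",
                     {f"{{{RDF}}}about": "http://example.org/catalogue/record1"})
ET.SubElement(desc, f"{{{DC}}}title").text = "On the Record"
ET.SubElement(desc, f"{{{DC}}}creator").text = "Library of Congress Working Group"
ET.SubElement(desc, f"{{{DC}}}subject").text = "Bibliographic control"

out = ET.tostring(root, encoding="unicode")
print(out)
```

Each dc: element here is exactly the kind of hierarchy-aware label, taxonomy to a librarian, ontology fragment to a computer scientist, that Adams argues the two professions already share.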

Monday, May 05, 2008

Library Development Camp

I'm excited to announce the formation of Library Development Camp. Our initiative is to help fellow librarians and information professionals in Canada explore and learn about the latest web tools and technologies from colleagues who actually use them. This web community is open to anyone working in the library or information management field in Canada.

How does this work? Most of the magic happens "offline" as we try to meet up in person to discuss these tools as well as give demos and training, hold discussions and debates, and share ideas and tips on how to use these tools effectively in a workplace or even on a personal level. It's all about sharing. We hope to spawn other LibraryDevCamp groups across Canada. If you would like to start one up in your city, let us know and we'll set up a section on our web site.

Any library or information professional who already uses any of these web tools and services is welcome to join and become a LibraryDevCamp.ca contributor or moderator. So far, we have an all-star cast of experts, such as Dean Giustini, Eugene Barsky, and Rex Turgano. We hope to have you join us, too. In the spirit of Web 2.0, our virtual meeting place is hosted on Movable Type, a weblog publishing system developed by the company Six Apart. Please stay tuned as we expect our community to grow, not only in members but also in exciting ventures.

Thursday, May 01, 2008

Economics 2.0

Although I enjoyed Economics 100 (micro- and macroeconomics) and learned a great deal, I have to admit it wasn't the most exciting course at times. The textbook we used was Gregory Mankiw's Principles of Economics. (I still have copies of the textbooks.) He has written two popular college-level textbooks: one in intermediate macroeconomics and the more famous Principles of Economics, which is popular among high-school Advanced Placement economics teachers. More than one million copies of the books have been sold in seventeen languages.

Mankiw has also been an important figure in American politics: President George W. Bush appointed him Chairman of the Council of Economic Advisers in 2003. He has since resumed teaching at Harvard, taking over the introductory economics course Social Analysis 10 (which he affectionately calls "Ec. 10"). However, Mankiw also believes in using Web 2.0.

This is Mankiw's purpose for the blog:
I am a professor of economics at Harvard University, where I teach introductory economics (ec 10) among other courses. I use this blog to keep in touch with my current and former students. Teachers and students at other schools, as well as others interested in economic issues, are welcome to use this resource.

What's exciting about Mankiw's blog is how it dips into the Web 2.0 blogosphere. The blog is much more than just a website: it's an intellectual and virtual space for keeping in touch with colleagues and students, and for marketing his profession and work to the non-expert. It's fantastic outreach. Librarians everywhere should take notice.

Friday, April 25, 2008

Library 2.0

Michael Casey and Laura Savastinuk's article in Library Journal not only changed the way libraries are perceived, but also how librarians run them. In a way, Library 2.0 principles are nothing new. Interlibrary loan is very much a "long tail" concept. In fact, could Library 2.0 be viewed as change management in its most extreme form? Nonetheless, their book was a brilliant read when it was published. Here's what I got out of it about Library 2.0 concepts.

(1) Plan, Implement, and Forget - This is the trap to avoid: changes must be constant and purposeful, and services need to be continually evaluated.

(2) Mission Statement - A library without a clear mission is like a boat without a captain. The mission drives the organization, serving as a guide when selecting services for users and letting you set a clear course for Library 2.0.

(3) Community Analysis - Know your users. Talk to them, have a feel for who you're serving, and who they are.

(4) Surveys & Feedback - Get both users and staff feedback. It's important to know what works and what doesn't.

(5) Team up with competitors - Don't think of the library as being in a "box," and don't treat bookstores, cafes, or the Internet as enemies. Look at what users are doing elsewhere that they could be doing through the library, and create win-win relationships with local businesses that benefit everyone.

(6) Real input from staff - Soliciting feedback means implementing ideas, not just collecting them for show. Staff will eventually see through token consultation, and morale will suffer.

(7) Evaluating services - Sacred cows do not necessarily need to be eliminated; however, nothing should be protected from review.

(8) Three Branches of Change model - This allows all staff - from frontline workers to the director - to understand the changes made. The three teams are: investigative, planning, and review team.

(9) Long tail - Web 2.0 concepts should be incorporated into the Library 2.0 model as much as possible. For example, the Netflix model does something few services can do: get materials into the hands of people who do not come into libraries. Think virtually as well as physically.

(10) Constant change & user participation - These two concepts form the crux of Library 2.0.

(11) Web 2.0 technologies - They give users access to a wide variety of applications that are neither installed nor approved by IT. The flexibility is there for libraries to experiment like never before. It is important to create conversations where none existed before, and online applications help fill this gap.

(12) Flattened organizational structure - Directors should not make all the decisions; front-line staff input should be included. Committees that include both managers and lower-level staff help 'flatten' the hierarchy, creating a more horizontal structure that leads to more realistic decision-making.

Tuesday, April 22, 2008

7 Opportunities for the Semantic Web

Dan Zambonini’s 7 f(laws) of the Semantic Web is a terrific read, and offers a refreshing perspective on the challenges of realizing the SemWeb. Too often we hear a dichotomy of arguments, but Zambonini calmly lays out what he believes are the hurdles for the SemWeb. Instead of regurgitating his points, I’m going to complement them with my own comments:

(1) Not all SemWeb data are created equal - There are a lot of RDF files on the web, in various formats. But that doesn’t equate to the SemWeb. This is a bit of a strawman, though. In fact, it emphasizes the point that the components of the SemWeb are already here. The challenge is finding the mechanism or application that can glue everything together.

(2) A Technology is only as good as developers think it is - Search analysis reveals that people are actually more interested in AJAX than RDF Schema, despite the fact that RDF has a longer history. Zambonini believes that this is because the SemWeb is so incredibly exclusive in an ivory-towerish way. I agree. However, what is to say that the SemWeb won’t be able to accommodate a broader audience in the future? We’ll just need to wait and see.

(3) Complex systems must be built from successively simpler systems - I agree with this point. Google succeeded in the search engine wars because it built up slowly, creating a simple system that grew more complex as it needed to. People love Web 2.0 because its tools are easy to use and understand. But whereas Web 2.0 was about searching, the SemWeb should be about finding. Nobody said C++ and Java were easy, but complexity pays off in the long run.

(4) A new solution should stop an obvious pain - The SemWeb needs to show what problems it can solve and prove its purpose. Right now, Web 2.0 and 1.0 do a good job, so why would we need any more? Fair enough. But information is still in silos. Until we open up the data web, we’re still in many ways living in the dark.

(5) People aren’t perfect - Creating metadata and classifications is difficult, and people are sloppy. Will adding SemWeb rules add to the mess that is the Web? I honestly can’t answer this one; we can only predict. But perhaps it’s too cynical to write off people’s metadata-creating skills prematurely. HTML wasn’t easy, but we managed.

(6) You don’t need an ontology of everything. But it would help - Zambonini argues for a top-down ontology, a one-size-fits-all solution for the entire Web, rather than building from the bottom up on the folksonomies of the social web. I would argue that for this to work, we need to look at it from different angles. Perhaps we can meet halfway?

(7) Philanthropy isn’t commercially viable - Why would any sane organization buy into the SemWeb and expose its data? We need that killer application in order for this to work. Agreed. eBay did wonders. Let’s hope there’s a follow-up on the way.

Saturday, April 19, 2008

Four Ways to Library 2.0

Library 2.0 has stirred controversy since the day Michael Casey and Laura Savastinuk’s Library 2.0: Service for the next-generation library hit online newsstands. A loosely defined model for a modernized form of library service, reflecting a transition in the way the library world delivers services to users, Library 2.0 borrows from Business 2.0 and Web 2.0 and follows some of the same underlying philosophies. The library community is still debating its relevance to the profession. (Haven’t we always had to serve our users in the first place? What’s new about that?)

Michael Stephens and Maria Collins’ Web 2.0, Library 2.0, and the Hyperlinked Library is a fascinating read for those interested in learning more about these concepts. Certainly, at the core of Library 2.0 are blogs, RSS, podcasting, wikis, IM, and social networking sites. But it’s much more than that, and Stephens and Collins boil it down nicely to four main themes of Library 2.0:

(1) Conversations – The library shares plans and procedures for feedback and then responds. Transparency is real and personal.

(2) Community and Participation – Users are involved in planning library services, evaluating those services, and suggesting improvements.

(3) Experience – Satisfying to the user, Library 2.0 is about learning, discovery, and entertainment. Bans on technology and the stereotypical “shushing” are replaced by a collaborative and flexible space for new initiatives and creativity.

(4) Sharing – Providing ways for users to share as much or as little of themselves as they like, Library 2.0 encourages users to participate via online communities and connect virtually with the library.

Thursday, April 17, 2008

The Year Is 2009...

We're not that far off... In 2002, Paul Ford wrote an amazing piece predicting what the world would look like in 2009. Well, we're almost there. Ford imagined a "Semantic Web scenario": a short feature from a business magazine published in 2009. While Amazon and eBay both worked as virtual marketplaces (outsourcing as much inventory as possible) by bringing together buyers and sellers and taking a cut of every transaction, Google focused on the emerging Semantic Web.

This is how Ford explains the SemWeb; it's one of the most concise explanations I've seen to date.

So what's the Semantic Web? At its heart, it's just a way to describe things in a way that a computer can “understand.” Of course, what's going on is not understanding, but logic, like you learn in high school:

If A is a friend of B, then B is a friend of A.

Jim has a friend named Paul.

Therefore, Paul has a friend named Jim.

Of course, it's much more than just A's and B's. But the idea that Google will eventually integrate the SemWeb into its applications is exciting. And for an article written back in 2002, it reads with remarkable clarity; it's highly engaging.
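The rule Ford uses, friendship as a symmetric relation, is exactly the kind of inference Semantic Web reasoners mechanize. Here's a minimal, purely illustrative sketch in Python (real systems would use RDF data and an OWL reasoner, not a hand-rolled loop):

```python
# Toy forward-chaining inference: if a relation is declared symmetric,
# asserting rel(A, B) lets us derive rel(B, A), just like Ford's example.

SYMMETRIC = {"friend"}  # relations we declare to be symmetric

def infer(facts):
    """Expand a set of (relation, a, b) facts with symmetric inferences."""
    derived = set(facts)
    for rel, a, b in facts:
        if rel in SYMMETRIC:
            derived.add((rel, b, a))
    return derived

facts = {("friend", "Jim", "Paul")}
print(infer(facts))  # now also contains ('friend', 'Paul', 'Jim')
```

The point is not the five lines of code but that the machine, given the rule, reaches the same conclusion a high-school logic student would.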

Saturday, April 12, 2008

Google and Web 3.0?

Maybe Google gets it after all. Google has made its foray into the SemWeb with its new Social Graph API. What's that, and why should you care? The Social Graph API makes information about the public connections between people on the Web, expressed in XFN and FOAF markup and other publicly declared connections, easily available and useful for developers. The public web is made up of linked pages that represent both documents and people. Google Search helps make this information more accessible and useful.

In other words, if you take away the documents, you're left with the connections between people, and information about those public connections is really useful. A user might want to see who else you're connected to, and as a developer of social applications, you can provide better features for your users if you know who their public friends are. Until now, there hasn't been a good way to access this information.

The Social Graph API looks for two types of publicly declared connections:

  1. It looks for all public URLs that belong to you and are interconnected. These could be a blog, a Facebook profile, and a Twitter account.
  2. It looks for publicly declared connections between people. For example, your blog may link to someone else's blog while your Facebook and Twitter are linked to each other.
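To make those two kinds of connections concrete, here's a toy sketch (not Google's actual API; the URLs and relation labels below are invented) of how publicly declared links can be treated as a graph and queried for someone's public friends:

```python
# Illustrative sketch only: publicly declared connections (XFN-style
# rel="me" and rel="friend" links) form a directed graph we can query.
# All URLs and data here are made up for the example.

declared_links = {
    "http://alice.example/blog": [("me", "http://twitter.example/alice"),
                                  ("friend", "http://bob.example/blog")],
    "http://bob.example/blog": [("friend", "http://alice.example/blog")],
    "http://twitter.example/alice": [("me", "http://alice.example/blog")],
}

def public_friends(url):
    """Return the URLs this page publicly declares as friends."""
    return [target for rel, target in declared_links.get(url, []) if rel == "friend"]

print(public_friends("http://alice.example/blog"))
# -> ['http://bob.example/blog']
```

A developer's application would do the same kind of lookup against Google's index instead of a hand-built dictionary.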

This index of connections enables developers to build many applications, including ones that help users connect to their public friends more easily. Google is taking the resulting data and making it available to third parties, who can build it into their applications (including their Google OpenSocial applications). Of course, the problem is that few people use FOAF and XFN to declare their relationships, but Google's new API could make those declarations more visible, and social applications could use them. Ultimately, Google could also index the relationships from social networks, if people are comfortable with that.

What does this mean for information professionals? Stay tuned. With Google on board the SemWeb train (or ship), more bricks could be laid on the road to realizing the goal of differentiating Paris, the city, from Paris, the celebrity.

Wednesday, April 09, 2008

7 Things You Need to Know about the Semantic Web

Over at Read/Write Web, Alex Iskold has written what I consider a seminal piece in the Semantic Web literature. In Semantic Web Patterns: A Guide to Semantic Technologies, Iskold synthesizes the main concepts of the Semantic Web, asserting that it offers improved information discoverability, automation of complex searches, and innovative web browsing. Here are the main themes:

(1) Bottom-Up vs. Top-Down – Do we focus on annotating information in pages (using RDF) so that it is machine-readable, in top-down fashion? Or do we focus on leveraging information in existing web pages so that their meaning can be derived automatically (folksonomies), in a bottom-up approach? Time will tell.

(2) Annotation Technologies – RDF, Microformats, and Meta Headers. The more annotations there are in web pages, the more standards are implemented, and the more discoverable and powerful information becomes.

(3) Consumer and Enterprise – People currently don’t care much for the Semantic Web because all they look for is utility and usefulness. Until an application can be deemed a “killer application,” we continue to wait.

(4) Semantic APIs – Unlike Web 2.0 APIs, which mash up existing services, Semantic APIs take unstructured information as input and find the entities and relationships within it. Think of them as mini natural-language-processing tools. Take a look.

(5) Search Technologies – The sobering fact is the growing realization that understanding semantics won't be sufficient to build a better search engine. Google does a fairly good job at finding us the capital city of Canada, so why do we need to go any further?

(6) Contextual Technologies - Contextual navigation does not improve search so much as shortcut it: it takes more guessing out of the equation. That's where the SemWeb will overtake Google.

(7) Semantic Databases – The challenge of keeping up with the world is common to all database approaches, which are effectively information silos. That’s where semantic databases come in, as they focus on annotating web information to make it more structured. Take a look at Freebase.
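As a small illustration of point (2) above, even plain meta headers are machine-readable annotations, and a few lines of Python's standard library can extract them. (The sample page below is invented for the example.)

```python
# Extracting <meta name="..." content="..."> annotations from a page
# using only the standard library. The HTML here is made-up sample data.
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        # Collect name/content pairs from <meta> tags as they stream by.
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"]] = d["content"]

page = ('<html><head>'
        '<meta name="keywords" content="semantic web, RDF">'
        '<meta name="author" content="Alex Iskold">'
        '</head><body></body></html>')

p = MetaExtractor()
p.feed(page)
print(p.meta)  # {'keywords': 'semantic web, RDF', 'author': 'Alex Iskold'}
```

The more pages carry annotations like these, and the more standards (RDF, microformats) they follow, the more such simple extraction turns into genuine discoverability.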

As librarians and information professionals, we gather, organize, and disseminate. The challenge will be to keep doing this while information explodes at a rate unprecedented in human history, all the while staying afloat and explaining the technology to our users. Feels like walking on water, don’t you agree?

Tuesday, April 08, 2008

Semantic Librarianship

If I had my stocks for Web 3.0, where would I put them?

How about a neat web service called Freebase? It’s a semanticized version of Wikipedia, but with a bigger potential. Much bigger. Freebase is billed as an open, shared database of the world's knowledge: a massive, collaboratively edited database of cross-linked data. Until recently accessible by invitation only, it is now open to the public as a semi-trial service.

What does this have to do with librarians? As Freebase argues, “Wikipedia and Freebase both appeal to people who love to use and organize information.” Hold that thought. That’s enough to whet our information-organizing appetites.

In our article, Dean and I argued that the essence of the Semantic Web is the ability to differentiate entities in a way the current Web cannot. For example, how can we currently parse Paris, the city, from Paris, the celebrity? Although still in its initial stages, with improvements to come, Freebase does a nice job to a certain extent. Freebase covers millions of topics in hundreds of categories. Drawing from large open data sets like Wikipedia, MusicBrainz, and the SEC, it contains structured information on many popular topics, like movies, music, people, and locations, all reconciled and freely available via an open API.

As a result, Freebase builds on the Social Web 2.0 layer, while providing the Semantic Web infrastructure through RDF technology. For example, Paris Hilton would appear in a movie database as an actress, a music database as a singer and a model database as a model. In Freebase, there is only one topic for Paris Hilton, with all three facets of her public persona brought together. The unified topic acts as an information hub, making it easy to find and contribute information about her.
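That "one topic, many facets" idea boils down to record reconciliation. Here's a toy sketch (invented data and field names, not Freebase's actual schema or API) of what merging three source databases under one topic looks like:

```python
# Illustrative reconciliation: three source "databases" each describe one
# facet of the same person; merging them under a shared topic key yields
# a single information hub. All data and field names are invented.

movie_db = {"paris_hilton": {"roles": ["actress"]}}
music_db = {"paris_hilton": {"roles": ["singer"]}}
model_db = {"paris_hilton": {"roles": ["model"]}}

def reconcile(topic, *sources):
    """Merge every facet of a topic from multiple sources into one record."""
    hub = {"topic": topic, "roles": []}
    for src in sources:
        hub["roles"].extend(src.get(topic, {}).get("roles", []))
    return hub

print(reconcile("paris_hilton", movie_db, music_db, model_db))
# -> {'topic': 'paris_hilton', 'roles': ['actress', 'singer', 'model']}
```

The hard part in practice is deciding that the three records really do refer to the same entity; the merge itself is the easy step.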

While information in Freebase appears to be structured much like a conventional database, it’s actually built on a system that allows any user to contribute to the schemas, or frameworks, that hold the data (the RDF I mentioned earlier). This wiki-like approach to structuring information lets many people organize the database without formal, centralized planning. And it lets subject experts who don’t have database expertise find one another, and then build and maintain the data in their domain of interest. As librarians, we have a place in all of this. It's out there. Waiting for us.

Wednesday, April 02, 2008

Moving Out & Moving On

Everyone needs a change every now and again. On May 1st, 2008, I will be moving to the Irving K. Barber Learning Centre as Program Services Librarian. Having worked with some very talented and supportive colleagues, I feel supremely fortunate; without them, I would not be where I am at this point in my career.

Over the past few years, I have enjoyed working in a variety of jobs, from public libraries to hospital libraries to research centres to academic libraries. (I also dabbled in publishing, archives, and teaching.) Integrating these experiences has been wonderful, as it has helped me build the skills most essential to my upcoming endeavours.

What will this new position entail? To a certain extent, everything that I'm not doing now as an academic librarian. The Irving K. Barber Learning Centre itself is not a "traditional" library. It's a new building, a space for collaborative learning and ideas. A learning commons. A new way of learning. It also represents a new direction for librarianship. If there is one thing that typifies this position, it would be digital outreach. Web 2.0, Semantic Web, and Web 3.0? Stay tuned.

The possibilities are exciting.

I'd like to thank everyone who helped me along the way, particularly Dean Giustini, Eugene Barsky, Eleanor Yuen, Tricia Yu, May Yan, Henry Yu, Hayne Wai, Chris Lee, Rob Ho, Peter James & friends at HSSD, Rex Turgano, Rob Stibravy, Susie Stephenson, Matthew Queree, and Angelina Dawes, among the many. And of course, Hoyu. Thank you to all.

Thursday, March 27, 2008

The Social Web Into the Semantic Web

"What can happen if we combine the best ideas from the Social Web and the Semantic Web?" - Tom Gruber

In other words, can we channel folksonomies, tagging, and user-created knowledge into one coherent, structured Web? A Semantic Web? Tom Gruber seems to think so. In Collective Knowledge Systems, he proposes that the Semantic Web vision points to a representation of the entity itself (for example, a city) rather than its surface manifestation. One of the problems we've always had accessing the Web's content is the difficulty of differentiating the city of Paris from the celebrity Paris Hilton when using a search engine.

Harnessing Web 2.0 technologies and refining them for the Semantic Web has been the subject of much speculation. How do we move from collected intelligence to collective intelligence? There are three approaches to realizing the Semantic Web. Here they are:

(1) Expose structured data that already underlies unstructured web pages - Site builders would generate unstructured web pages from a database and expose the underlying data using standard formats (think FOAF).

(2) Extract structured data from unstructured user contributions - Manually identify people, companies, and other entities with proper names, as well as products and instances of relations.

(3) Capture structured data on the way into the system - A "snap to grid" system suggests structure as users enter data and helps them enter data within that structure. (Think of automatic spell check.)
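The "snap to grid" approach can be sketched as matching free-text input against a controlled vocabulary, much like spell check snaps a typo to a dictionary word. A toy example (the vocabulary below is invented for illustration):

```python
# Illustrative "snap to grid": as a user types, their entry is snapped to
# the nearest term in a controlled vocabulary. The vocabulary is made up.
import difflib

VOCABULARY = ["Paris, France", "Paris Hilton", "Paris, Texas"]

def snap_to_grid(entry, vocabulary=VOCABULARY):
    """Return the closest controlled term, or the raw entry if none is close."""
    matches = difflib.get_close_matches(entry, vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else entry

print(snap_to_grid("paris france"))  # -> 'Paris, France'
```

The payoff is that structure gets captured at the moment of entry, instead of being reconstructed afterwards by extraction.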

Where do librarians come in? We have always used our training to structure content, package it, and disseminate it to our users. In our article, Dean and I argue that the catalogue is very much an analogy for how the Semantic Web can organize information in a way that the current Web cannot. Recent developments in RDA on the library side offer a promising glimpse into the possibilities for Web 3.0. True, we are only surmising. But let that not prevent us from creating.

Tuesday, March 25, 2008

Quantum Information Science?

Have you heard of quantum information science? Eventually, it might solve the problems of information mess and access. Although quantum physics, information theory, and computer science were among the crowning intellectual achievements of the 20th century, they were often framed as separate entities. A new synthesis of these themes is now quietly emerging. The emerging field of quantum information science is offering important insights into fundamental issues at the interface of computation and physical science, and may guide the way to revolutionary technological advances.

John Preskill, Director of the Institute for Quantum Information, proposes in his lecture that quantum bits (“qubits”), the indivisible units of quantum information, will be central to “quantum cryptography,” wherein the privacy of secret information can be founded on principles of fundamental physics. The quantum laws that govern atoms and other tiny objects differ radically from the classical laws that govern our ordinary experience. Physicists are beginning to recognize that we can put the weirdness to work. That is, there are tasks involving the acquisition, transmission, and processing of information that are achievable in principle because Nature is quantum mechanical, but that would be impossible in a "less weird" classical world.

What does this ultimately mean? A “quantum computer” operating on just a few hundred qubits could perform tasks that ordinary digital computers could not possibly emulate. Although constructing practical quantum computers will be tremendously challenging, particularly because quantum computers are far more susceptible to errors than conventional digital computers, newly developed principles of fault-tolerant quantum computation may enable a properly designed quantum computer with imperfect components to achieve reliability.
How long will it take before we achieve quantum computing? Please be patient. These folks are working on it.

Friday, March 21, 2008

Free on CBC

The Canadian Broadcasting Corporation, long known for its traditional family-style programs (Road to Avonlea and Coronation Street) and NHL hockey, is actually making a splash in technology. A huge one at that. It has decided to apply the 1% principle and open up its content for anyone to download freely. That's right. Free.

In doing so, CBC becomes the first major broadcaster in North America to release a high-quality, DRM-free copy of a primetime show using BitTorrent technology. On top of that, CBC will also distribute a version that can be played on iPods. The show, Canada’s Next Great Prime Minister, will be completely free (and legal) for anyone to download, share, and burn to their heart’s content. For many, BitTorrent has meant illegal, downright dirty business. In the future, however, it might actually be a better means of access to information and entertainment. CBC is attempting to prove that there are means beyond the "box," moving past physical barriers and into the virtual. Shouldn't libraries be doing the same?

Sunday, March 16, 2008

5 Essences to Librarianship 3.0

What will the future of librarianship look like? Traditional cataloging, collection development, and reference will look very different, even five years from now. Changes are in motion. Don't you get the feeling that things are going to be fast and furious? There seems to be a lot of anxiety and uncertainty among librarians about what the future holds. But change is inevitable in life. From the card catalog to OPACs to the Internet, librarians and information professionals have had to adjust and adapt to new technologies. Unlike other professions that rely on technology, though, ours has always had to catch up rather than take the lead. We might not have a choice in the new Web. Here are 5 opportunities to look ahead to.

(1) Resource Description and Access - With the Anglo-American Cataloguing Rules, 2nd edition (AACR2), making way for its successor, RDA will play an essential role in how information is classified and held in libraries and information organizations. RDA will also move beyond the physical to include Web resources. You may ask: how can we catalog something that changes constantly? That's where the Semantic Web comes in.

(2) Information Architecture - Librarians have always organized information; it's their job. As the Web becomes more integrated into their work (as if it weren't already), librarians will rely ever more on the Web to serve their patrons. Digital outreach is the key to survival, and building accessible, user-centred websites will be essential to achieving it.

(3) Virtual Worlds - Gate counts are going down in libraries everywhere. Patrons are frequenting libraries less and less for information seeking, and more for products and spaces. This means that reference librarianship is changing, too. To a certain extent, we've experimented with virtual reference. In the future, we will need to embrace the possibilities of bringing our expertise to users through other means, whether it's Facebook, MySpace, Second Life, or Meebo. Think beyond the walls.

(4) Open Access - Traditional publishing is on its last legs. Things fall apart; the centre cannot hold. Textbook publishers churn out new editions of the same text to prevent re-selling; journal publishers force print copies to be sold as a package with their electronic versions. Why? Fear. Publishers are scrambling to stay in business. Open access will open up new opportunities for how students and users acquire books. Why not build your own textbook?

(5) "Free-conomics" - Everything that users want will be "free." To understand this principle, just look at the things you already use without paying. It's based on the 1% principle: 99% of users get access to the basics of a product while the 1% who remain pay for the full premium version. The spirit of librarianship has always been about public good and collaboration, so it's only natural that we find ways to put the 1% principle to full use.

Sunday, March 09, 2008

Bill Gates Retires from Microsoft

Recently, Forbes revealed that Bill Gates has slipped to number three on the list of the world's wealthiest people. On top of that, Bill Gates is also stepping back from Microsoft to devote more time to the Bill & Melinda Gates Foundation. But that doesn't mean that Bill left with a whimper. Take a look at this video, particularly his going-away comedy skit. Nice job, Bill. Good-bye, but not farewell.

Friday, March 07, 2008

Librarians and Web 3.0

For better or worse, Web 3.0 is around the corner. Okay, maybe the technology is lagging, but we must admit that the third-generation (third-decade) Web is coming. In a post I made back in September, Paul Miller of Talis left an insightful response, one that is still relevant today:
Although I'm slightly surprised at the sector's lack of overt engagement with this obviously synergistic area too, there are certainly examples in which librarians are grasping the Semantic Web and in which Semantic Web developers are recognising the rich potential offered by libraries' structured data...

Ed Summers over at Library of Congress would be one person I'd pick out to mention. Also, the work OCLC and Zepheira are doing on PURL, and our own focus on the Talis Platform within Talis; that's Semantic Web through and through, and we have significant products in the final stages of beta that put semantic technologies such as RDF and SKOS to work in delivering richer, better, more flexible applications to libraries and their users. Things really begin to get interesting, though, when you take the next step from enabling existing product areas with semantic technologies to actually beginning to leverage the resulting connections by joining data up, and reusing those links, inferences and contexts to cross boundaries between libraries, systems, and application areas.

There's also library-directed research at institutes such as DERI here in Europe, and even conferences like the International Conference on Semantic Web and Digital Libraries, which was in India this year.

Finally - for now - there's also a special issue of Library Review in preparation; Digital Libraries and the Semantic Web: context, applications and research, and I'll be speaking on The Semantic Web and libraries - a perfect fit? at the Talis Insight conference in November. It's funny that you mention Jane in your post, because I'll also be doing something for her later in November that encompasses some of these themes...

Sometimes moving forward doesn't necessarily mean progress; sometimes we need to take one step back before we can move two steps in the right direction. But it appears the infrastructure is there for us to move toward Web 3.0. What does this mean for librarians? I suspect it means we should stop bickering about Web versions and start reflecting on the reasons patrons rely on library collections and come to libraries for information. The Googlization of information has stoked fears for the future of librarianship. But what are we to do? Standing idly by, playing the trumpets as the ship sinks, isn't the answer. Let's try to move in the right direction.

Saturday, March 01, 2008

The Business of Free-conomics

He's done it again. Fresh off the press is Chris Anderson's "Free" in Wired Magazine. In 2004, Anderson changed the way business on the Web was conducted with his visionary Long Tail. Four years later, Anderson is back with the idea of "free." While the long tail proved a staple of Web 2.0, put "free" into your lexicon for the upcoming Web 3.0.

Giving away things for free has been around for a long time. Think Gillette. In fact, the open source software movement is not unlike the shareware movement a decade earlier. (Remember that first game of Wolfenstein?) Like the long tail, Anderson synthesizes "Free" according to six principles:

(1) "Freemium" - Another percentage principle: the 1% rule. For every user who pays for the premium version of a site, 99 others get the basic free version.

(2) Advertising - What's free? How about content, services, and software, just to name a few. Who's it free to? How about everyone.

(3) Cross-subsidies - It's not piracy, even though it may look like it. A free product entices you to pay for something else, and in the end, everyone willing to pay eventually pays, one way or another.

(4) Zero Marginal Cost - Anything that can be distributed without an appreciable cost to anyone.

(5) Labour Exchange - The act of using sites and services actually creates something of value, either improving the service itself or creating information that can be useful somewhere else.

(6) Gift Economy - Money isn't everything in the new Web. In the monetary economy, this free-ness looks like madness, but that's only shortsightedness in measuring the worth of what's created.

Tuesday, February 26, 2008

Collection Management 2.0

Librarianship sometimes feels (and sounds) as if it's in disarray. The library discourse is often fractured and fragmented, with so many different viewpoints. Perhaps this is a result of our postmodern information age. Bodi and Maier-O'Shea's The Library of Babel: Making Sense of Collection Management in a Postmodern World asserts that libraries have to invest in and prepare for a digital future while maintaining collections and services based on a predominantly print world.

How is it that we're in a postmodern world of academic library collection management? Collections are no longer limited to a physical collection in one location; rather, they are a mixture of local and remote, paper and electronic. Hence, in their experiments with collection development at two research and liberal arts college libraries, the authors arrive at three principles. We aren't reinventing the wheel here; but amidst heavy workdays and busy lives, we often forget to step back and reassess how things can be done better. The authors offer an interesting viewpoint in this light:

(1) Break down assessment by subject or smaller sub-topics when necessary

(2) Blend a variety of assessment tools appropriate to the discipline

(3) Match print and electronic collections to departmental learning outcomes through communication with faculty members