Monday, September 01, 2008

The Third Digital (Dis)order


Just finished reading David Weinberger's Everything is Miscellaneous: The Power of the New Digital Disorder. A terrific ideas-driven text, which proposes that we have to relinquish the notion that there is only one way of organizing information hierarchies. From the Dewey Decimal System to the way we organize our CD collections, Weinberger takes a shot at everything along the way. And he makes an excellent argument: in the digital world, the laws of physics no longer apply. Just take a look at your computer files, and you realize you can organize your music by any number of criteria -- artist, genre, song name, length, or price -- you name it, you've got it. Because the Web is a hyperlinked web of information that grows organically, it's really a mess out there. And Web 2.0 doesn't help at all with the glut that has emerged.

Weinberger proposes that in this new digital world, there are three planes of (dis)order:

(1) Physical Disorder - The natural state: when things are left as they are, disorder inevitably arises.

(2) Metadata Disorder - In response to that disorder, we build metadata -- lists, classification systems, hierarchies, taxonomies, ontologies, catalogues, ledgers, anything -- that brings order to the physical realm.

(3) Digital Disorder - In the digital world, bringing order becomes that much more difficult, yet also that much more interesting and convenient: there is more than one way to bring order to the chaos. Just look at Wikipedia.

Friday, August 29, 2008

Open Access: The Beginning of the End?

I jotted down a few ideas about open access, and wouldn't you know it, turned them into an article. OA is an interesting phenomenon: it's here, but not really. There is still so much skepticism that we just don't know whether it will make it. Already there are textbooks mashed up from bits and pieces of many other textbooks, for students to access digitally rather than buying the whole expensive mess at the beginning of every semester. Journal purchases by libraries, especially academic ones, are starting to slip. With the rise of the Semantic Web, open access and open source must go hand in hand if they are to contribute collectively to the new way of searching and organizing online information. Librarians, take heed. Peter Suber, are you listening?

Tuesday, August 26, 2008

A LEAP of Faith

One of the main tasks in my position is to evaluate digital technologies and how they fit into the Library model. I am always looking at how other organizations integrate emergent technologies into their webpages. One organization that has done a superb job is the Learning Enhancement Academic Partnership (LEAP) program at UBC. They really have some outstanding concepts. Libraries are increasingly moving towards the Library 2.0 (L2) model; catalogues and homepages play only a part of the whole picture, but an important one. Here's why LEAP surpasses most library homepages by leaps and bounds. Here's hoping it catches on. And quick.

(1) User-generated content – As opposed to content posted solely by the site author(s), LEAP encourages user contributions through online surveys, polls, and student blogs.

(2) Treats users as co-developers of the site – The more people using the service, the better it becomes. LEAP takes this fundamental principle to heart, encouraging students’ reviews, comments, and rants. Collective intelligence in its purest form.

(3) Customizable content and interface – LEAP allows students (and faculty) to merge their own blog content into the LEAP site.

(4) Core application of the website runs through the browser and web server – Rather than on a desktop platform. We don’t need Dreamweaver. All we need is freely downloadable open-source software. LEAP uses WordPress, a beautiful piece of work.

(5) Social software – The LEAP homepage makes the most of this: blogs, tagging, video and image sharing. You name it, they’ve got it. The whole Web 2.0 suite.

(6) Integration of emerging web technologies – LEAP builds on AJAX and RSS, and uses APIs for mashups (a quick sketch follows below).
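
To make that last point concrete, here is a minimal sketch of the kind of RSS feed a site like LEAP might expose for mashing up elsewhere. The feed address and entries are invented for illustration; only the RSS 2.0 elements themselves are standard.

    <?xml version="1.0" encoding="UTF-8"?>
    <rss version="2.0">
      <channel>
        <title>LEAP Student Blogs</title>
        <link>http://leap.example.edu/blogs/</link>
        <description>Recent posts from LEAP student bloggers</description>
        <item>
          <title>Surviving midterms</title>
          <link>http://leap.example.edu/blogs/surviving-midterms</link>
          <pubDate>Mon, 18 Aug 2008 09:00:00 GMT</pubDate>
        </item>
      </channel>
    </rss>

Any other site -- the library homepage, say -- can poll a feed like this and splice the items into its own pages. That is the whole trick behind most mashups.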

Tuesday, August 19, 2008

7 Ways to Better Teaching

Paul Axelrod’s Student Perspectives on Good Teaching: What History Reveals offers perceptive insight into what makes good teaching. As academic librarians, we teach almost as much as faculty do. Many don't know about this hidden side of the profession. Axelrod puts things into perspective. Librarians need to take charge of instruction - it's an integral part of the profession. What good is technology if there's no one to translate it for users? Here are the top seven qualities a good teacher should have:

(1) Accessibility and Approachability

(2) Fairness

(3) Open-Mindedness

(4) Mastery and Delivery

(5) Enthusiasm

(6) Humour

(7) Knowledge and Inspiration Imparted

Wednesday, August 13, 2008

Information Anarchy

I've just written a short piece about the Semantic Web. What is it? I know what it isn't. The current Web is, in many ways, an information anarchy: the multitude of user accounts and passwords, coupled with the vast number of similarly operating web programs, has made online searching not only difficult at times, but confusing and frustrating most of the time. In my short article, I explain what the SemWeb proposes to do, and offer the famous seven-layer cake as my model of grand understanding. As usual, comments are most welcome.

Tuesday, August 05, 2008

Five Weeks to a Semantic Web Class

Over at the Semantic Library, which I admire and follow religiously, Melissa is developing a Semantic Library course, very much in line with Meredith Farkas's Five Weeks to a Social Library class. What would I teach if I were involved in this very exciting initiative? Well, why don’t I just say it right here?

(1) Standards – What is RDF? What kind of metadata is it? What does it have to do with librarians? (See the sketch after this list.)

(2) Classification and Metadata – What do the Dublin Core Metadata Initiative, Resource Description and Access, and MARC 21 have to do with the SemWeb?

(3) From HTML to AJAX to SPARQL – The evolution of programming has led to different versions of the same thing. Is SPARQL the key to unlocking the mystery of the SemWeb? Or are there alternatives?

(4) Realizing the two Tims – O’Reilly’s and Berners-Lee’s visions of the Web. Where are we, and where are we heading? Is Nova Spivack the answer?
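
If I were sketching the very first lesson, it might start with something like this: a minimal RDF/XML description of one catalogue record, using Dublin Core elements. The record URI is invented for illustration; the namespaces are the real ones. A SPARQL query could then ask this data questions such as "give me every title with Weinberger as creator."

    <?xml version="1.0" encoding="UTF-8"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <!-- hypothetical record URI, for illustration only -->
      <rdf:Description rdf:about="http://catalogue.example.org/record/12345">
        <dc:title>Everything is Miscellaneous</dc:title>
        <dc:creator>Weinberger, David</dc:creator>
        <dc:subject>Information organization</dc:subject>
        <dc:date>2007</dc:date>
      </rdf:Description>
    </rdf:RDF>

The point for librarians: each line is a statement -- resource, property, value -- which is exactly what a catalogue record already is.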

Saturday, August 02, 2008

Making Academic Web Sites Better

Shu Liu's Engaging Users: The Future of Academic Library Web Sites is an insightful analysis of the present state of academic library homepages. Academic library websites are libraries' virtual presentation to the world, and Liu argues for applying Web 2.0 concepts to them. I enjoyed this article tremendously. It lays out a vision that many library websites could readily adopt in the current landscape of the Web. Take a look; it's worth a read.

(1) User Focus - Focus on library users by presenting library resources in a targeted and customized manner

(2) Personalization - Recognize library users as individuals by giving them opportunities to configure their own library interfaces and to select tools and content based on personal needs

(3) User engagement - Provide sufficient tools to allow and encourage library users to create and exchange content

(4) Online communities - Nurture the development of online communities by connecting individuals through online publishing and the sharing of Web 2.0 tools

(5) Remixability - Employ a mashup approach to aggregate current and emerging information technologies to provide library users with opportunities to explore new possibilities of information resources.

Tuesday, July 29, 2008

WHATWG?

I've written about the potential of Resource Description & Access playing a role in the Semantic Web, and the importance of librarians in this development. Not only that, but the Resource Description Framework would be the crux of this new Web. Brett Bonfield, a graduate student in the LIS program at Drexel University, intern at the Lippincott Library at the University of Pennsylvania, and an aspiring academic librarian, has pointed out that the WHATWG -- the "Web Hypertext Application Technology Working Group," a growing community of people interested in evolving the Web, focused primarily on the development of HTML and APIs needed for Web applications -- might have some influence in how things play out.


The WHATWG was founded by individuals of Apple, the Mozilla Foundation, and Opera Software in 2004, after a W3C workshop. Apple, Mozilla and Opera were becoming increasingly concerned about the W3C’s direction with XHTML, lack of interest in HTML and apparent disregard for the needs of real-world authors. So, in response, these organisations set out with a mission to address these concerns and the Web Hypertext Application Technology Working Group was born.

There was a time when RDF’s adoption would have been a given, when the W3C was seen as nearly infallible. Its standards had imperfections, but their openness, elegance, and ubiquity made it seem as though the Semantic Web was just around the corner. Unfortunately, that future has yet to arrive: we’re still waiting on the next iteration of basic specs like CSS; W3C bureaucracy persuaded the developers of Atom to publish their gorgeous syndication spec with IETF instead of W3C; and, perhaps most alarmingly, the perception that W3C’s HTML Working Group was dysfunctional encouraged Apple, Mozilla, and Opera to team with independent developers in establishing WHATWG to create HTML’s successor spec independently from the W3C. As more non-W3C protocols took on greater prominence, W3C itself seemed to be suffering a Microsoft-like death of a thousand cuts.

This is interesting indeed. As Bonfield reveals, on April 9, WHATWG’s founders proposed to W3C that it build its HTML successor on WHATWG’s draft specification. On May 9, W3C agreed. W3C may never again be the standard bearer it once was, but this is compelling evidence that it is again listening to developers and that developers are responding. The payoff in immediate gratification—the increased likelihood of a new and better HTML spec—is important, but just as important is the possibility of renewed faith in W3C and its flagship project, the Semantic Web. Things are moving along just fine, I think.

Fascinating. Two roads lead to the same destination. But the question remains: are we any closer to the SemWeb?

Tuesday, July 22, 2008

Web 3.0 in 600 words

I've just penned an article on Web 3.0 from a librarian's standpoint. In my article, What is Web 3.0? The Next Generation Web: Search Context for Online Information, I lay out what I believe are the essential ingredients of Web 3.0. (Note that I don't believe the SemWeb and Web 3.0 are synonymous, even though some believe them to be so - and I explain why.) Writing it challenged me tremendously in coming to grips with what exactly constitutes Web 3.0. It forced me to think more clearly and succinctly about the different elements that bring it together.

It's conceptual; therefore, it's murky. And as a result, we overlook the main elements which are already in place. One of the main points I make is that whereas Web 2.0 is about information overload, Web 3.0 will be about regaining control. So, without further ado, please take a look at the article, and let me know your thoughts. I should not leave out the excellent help of the legendary librarian, the Google Scholar, Dean. He helped me out a great deal in fleshing out these ideas. Thanks, DG.

Sunday, July 20, 2008

Web 3.0 and Web Parsing

Ever wondered how Web 3.0 and the SemWeb could read webpages in an automated, intelligent fashion? Take a look at how the Website Parse Template (WPT) works. WPT is an XML-based open format which describes the HTML structure of website pages, allowing web crawlers to generate Semantic Web RDF for those pages.

Website Parse Template consists of three main entities:

1) Ontologies - The content creator defines concepts and relations which are used on the website.

2) Templates - The creator provides templates for groups of web pages which are similar in content category and structure, supplying the HTML elements’ XPaths or tag IDs and linking them to the website’s ontology concepts.

3) URLs - The creator provides URL patterns which collect the group of web pages and link them to a parse template. In the URLs section, the publisher can mark part of a URL as a concept and link it to the website ontology. (The sketch below pulls these three pieces together.)
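
I have not worked with the actual WPT schema, so treat the following as a rough sketch of the idea rather than the real format -- the element names are my own guesses based on the three entities above:

    <!-- Illustrative only: invented element names, not the official WPT schema -->
    <websiteParseTemplate>
      <ontology>
        <concept name="BookTitle"/>
        <concept name="Price"/>
      </ontology>
      <template name="productPage">
        <!-- bind HTML elements (by XPath) to ontology concepts -->
        <bind xpath="//h1[@class='title']" concept="BookTitle"/>
        <bind xpath="//span[@id='price']" concept="Price"/>
      </template>
      <urls>
        <urlPattern template="productPage">http://shop.example.com/item/*</urlPattern>
      </urls>
    </websiteParseTemplate>

With something like this in hand, a crawler hitting any page that matches the URL pattern knows which fragments of the HTML mean what, and can emit RDF accordingly.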

Friday, July 18, 2008

Kevin Kelly on Web 3.0




At Web & Where 2.0+, presented by Northern California Grantmakers and The William and Flora Hewlett Foundation on February 14, 2008, Kevin Kelly talks about Web 3.0. Have a good weekend, everyone. Enjoy.

Thursday, July 17, 2008

EBSCO in a 2.0 World

EBSCOhost 2.0 is here. It's got a brand new look and feel, based on extensive user testing and feedback, and provides users with a powerful, clean, and intuitive interface. This is the first redesign of the EBSCOhost interface since 2002, and its functionality incorporates the latest technological advances.

1) Take a look at the EBSCOhost 2.0 Flash demonstration here.

2) A spiffy marketing web site also features new EBSCOhost 2.0 pages, where you can learn more about its key features (http://www.ebscohost.com/2.0).

EBSCO has really moved into the 2.0 world: simple, clean, and Googleized. But perhaps that's the way information services need to go. We simply must keep up. I went to a presentation at SLA '08 in Seattle, where EBSCO gave an excellent presentation (not to mention a lunch) showing off the 2.0 features of the new interface. In essence, it's customizable for users: you can have it as simple as a search box or as complex as it is currently. The retrieval aspects have not changed that much. Yet perception is everything, don't you think?

Wednesday, July 09, 2008

Why Be a Librarian?

There seems to be a real fear among some of being called 'librarians.' There's a mysterious aura around what a librarian does. In fact, some have cloaked their librarian status as 'metadata specialist' or 'information specialist' or even 'taxonomist.' Why be a librarian? That's a good question. I like some of the answers offered by the Singapore Library Association's Be A Librarian:

As technology allows the storage and uploading of information at ever greater speeds and quantities, people are becoming overwhelmed by the “information overload”. The information professional is a much needed guide to aid people in their search for knowledge.

The librarian learns to seek, organize and locate information from a wide variety of sources, from print materials such as books and magazines to electronic databases. This knowledge is needed by all industries and fields, allowing librarians flexibility in choosing their working environments and in developing their areas of expertise.

The librarian keeps apace with the latest technological advances in the course of their work. They are web authors, bloggers, active in Second Life. They release podcasts, produce online videos and instant message their users. The librarian rides at the forefront of the technology wave, always looking out for new and better ways to organize and retrieve information for their users.

At the same time, librarians remember their roots, in traditional print and physical libraries, and continue to acquire and preserve books, journals and other physical media for their current users and for future generations.

Well said. I like it!

Tuesday, July 08, 2008

Expert Searching in a SemWeb World

If we are to move into a Web 3.0, SemWeb-based world, taking a closer look at initiatives such as Expert System makes sense. This company provides semantic software which discovers, classifies, and interprets text information. I like the approach it's taking, offering a free online seminar to make its pitch: the webinar, "Making Search Work for a Living," shows users how to improve searching. Here's the description:

As an analyst or knowledge worker you are busy every day searching for information, often in onerous and time-consuming ways. The goal of course is to locate the strategic nuggets of information and insight that answer questions, contribute to reports and inform all levels of management. Yet current search technology proves to be a blunt tool for this task. What you are looking for is trapped in the overwhelming amount of information available to you in an endless parade of formats and forced user interfaces. Immediate access to strategic information is the key to support monitoring, search, analysis and automatic correlation of information.

Join this presentation and roundtable discussion with Expert System on semantic technology that solves this every day, every business problem.

This is a free webinar brought to you by Expert System.
To register send an e-mail to webinar@expertsystem.net

  • You are looking for a semantic indexing, search and analysis innovative tool to manage your strategic internal and external information.
  • You want to overcome the limits of traditional search systems to manage the contents of large quantities of text.
  • You have wondered how you can improve the effectiveness of the decision-making process in your company.

DATE/TIME: July 10th 2008, 9:00 am PT, 12:00 pm ET USA; 5:00 pm UK.
Duration: 60 Minutes
Focus On: semantics as a leading technology to understand, search, retrieve, and analyze strategic contents.

The webinar will teach how to:

  • Conceptualize search and analysis on multilingual knowledge bases;
  • Investigate the documents in an interactive way through an intuitive web interface;
  • Highlight all the relations, often unexpected, that link the elements across the documents.
  • Monitor specific phenomena constantly and then easily generate and distribute ways for others to understand them.

It's worth a look-see, I think.

Sunday, July 06, 2008

End of Science? End of Theory?

Chris Anderson has done it again, this time with an article about the end of theory. How? In short: raw data. In The End of Theory, Anderson argues that with massive data, the centuries-old scientific model of hypothesize, model, test is becoming obsolete.


Consider physics: Newtonian models were crude approximations of the truth (wrong at the atomic level, but still useful). A hundred years ago, statistically based quantum mechanics offered a better picture — but quantum mechanics is yet another model, and as such it, too, is flawed, no doubt a caricature of a more complex underlying reality. The reason physics has drifted into theoretical speculation about n dimensional grand unified models over the past few decades (the "beautiful story" phase of a discipline starved of data) is that we don't know how to run the experiments that would falsify the hypotheses — the energies are too high, the accelerators too expensive, and so on.

And according to Anderson, biology is heading in the same direction. What does this say about science and humanity? In February, the National Science Foundation announced the Cluster Exploratory, a program that funds research designed to run on a large-scale distributed computing platform developed by Google and IBM in conjunction with six pilot universities. The cluster will consist of 1,600 processors, several terabytes of memory, and hundreds of terabytes of storage, along with the software, including IBM's Tivoli and open source versions of Google File System and MapReduce.

Anderson's been right before; see The Long Tail and Free. But this one's just speculation, of course. Perhaps one commentator hit the mark when he said, "Yeah, whatever. We still can't get a smart phone with all the bells and whistles to be able to use any where in the world with over 2 hours worth of use and talk time...so get back to me when you've perfected all of that." Well said. Let's wait and see some more.

Tuesday, July 01, 2008

Catalogue 2.0

It's blogs like Web 2.0 Catalog that keep me going. Catalogues have been the crux of librarianship, from the card catalogue to the OPAC. But for libraries, the catalogue has always seemed to be a separate entity. It's as if there is a dichotomy between the Social Web and the catalogue -- and never the twain shall meet. What would a dream catalogue look like to me? I have 8 things I’d like to see. Notice that none of them is a stretch of the imagination. Here they are:

(1) Wikipedia – What better way to get the most updated information for a resource than the collective intelligence of the Web? Can we integrate this into the OPAC records? We should try.

(2) Blog – “Blog-noting” as I call it. To a certain extent, some catalogues already allow users to scribble comments on records. But blog-noting allows users to actually write down reflections of what they think of the resource. The catalogue should be a “conversation” among users.

(3) Amazon.ca - Wouldn’t it be nice to have an idea what a book costs out on the open market? And wouldn’t it make sense to throw in an idea of how much the used cost would be?

(4) Worldcat - Now that you know the price, wouldn’t it be useful to have an idea of what other libraries carry the book?

(5) Google-ability – OPAC resources are often online, but “hidden” in the deep web. Opening them up to search engines makes them that much more accessible.

(6) Social bookmarking – If the record is opened to the Web, then it naturally makes sense to link it to Delicious, RefShare, or CiteULike (or a similar bibliographic management service).

(7) Cataloguer’s paradise – Technical servicemen and women are often hidden in the pipelines of the library system, their work often unrecognized. These brave men and women should have their profiles right on the catalogue, for everyone to see, to enjoy. Makes for good outreach, too. (Photo is optional).

(8) Application Programming Interface - APIs are sets of declarations of the functions (or procedures) that an operating system, library, or service provides to support requests made by computer programs. They're the interoperable sauce that adds taste to web services. APIs are the crux of Web 2.0, and will be important for the Semantic Web when the Open Web finally arrives. As a result, APIs need to be explored in detail for OPACs, as ways to integrate different programs and provide open data for others to reuse. (A hypothetical sketch follows below.)
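
To give point (8) some flesh, here is what a response from a hypothetical OPAC lookup API might look like. The endpoint, element names, and numbers are all invented, but a response along these lines would feed items (3), (4), and (5) above directly:

    <!-- Hypothetical: GET http://opac.example.org/api/record?isbn=9780805080438 -->
    <record>
      <title>Everything is Miscellaneous</title>
      <creator>Weinberger, David</creator>
      <isbn>9780805080438</isbn>
      <holdings available="3" total="5"/>
      <marketPrice currency="CAD" source="amazon.ca">24.95</marketPrice>
      <heldByLibraries source="worldcat">112</heldByLibraries>
    </record>

Once a record is exposed like this, anyone -- a course page, a subject blog, a mashup -- can reuse the data without scraping the OPAC's screens.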

Are these ideas out of the realms of possibility? Your thoughts?

Monday, June 23, 2008

Seth Godin at SLA in Seattle

Seth Godin is a best-selling author, entrepreneur, and agent of change. He is the author of Permission Marketing, a New York Times best seller that revolutionized the way corporations approach consumers. Fortune named it one of its Best Business Books, Promo magazine called Godin "The Prime Minister of Permission Marketing," and Business Week called him the "ultimate entrepreneur for the information age."

Best known as the author of books such as Unleashing the Ideavirus, Purple Cow, and Permission Marketing, Godin writes one of the most popular blogs in the world. He also helped create the popular website Squidoo, a network of user-generated lenses -- single pages that highlight one person's point of view, recommendations, or expertise. According to Godin, the way marketing works now is not by interrupting large numbers of people; rather, it is by soliciting a small segment of rabid fans who will eagerly spread the word about one's idea. The challenge is how to engage each person to go and bring five friends. What tools do we give them so that they can reach out to colleagues? A website like Zappos is so successful not because it sells shoes, but because it connects consumers to products, and then encourages consumers to spread the word to their friends and colleagues -- and hence, more consumers.

In this new era of permission marketing, spamming no longer works. The new models of success are services such as PayPal and Sonos, which connect users to products, engage users as customers by turning data into knowledge, and produce a conversation using the web as the platform. "Be remarkable," Godin argues, and "tell a story to your sneezers" so that they spread the word and "get permission" from consumers for their attention to the product. Godin concluded with a controversial assertion. "Books are souvenirs," he said, to a hushed audience. Most people find everyday facts and information in digital documents. "When was the last time you got your information from a book?" Although Godin might have made a gross generalization, his assertion of the divergence between the digital and the physical is a reality. In the Web 2.0 world, our enemy is obscurity, not piracy.

Together, Abram and Godin's sessions at SLA 2008 in Seattle were both rewarding experiences. They ultimately propose that information professionals need to shift their mentality from one of passivity to one of actively promoting themselves, of engaging information services in new ways, and of accepting change with an open mind.

Thursday, June 19, 2008

Stephen Abram at SLA in Seattle

Day #2 of SLA was full of fascinating discussions. Stephen Abram's session, "Reality 2.0 - Transforming ourselves and our Associations" offered the most thought provoking ideas - definitely the highlight of my experience at this conference.

For those who don't already know, Stephen Abram is President 2008 of SLA and was past-President of the Canadian Library Association. He is Vice President Innovation for SirsiDynix and Chief Strategist for the SirsiDynix Institute.

Here's a flavour of what I thought were key points that really gave me food for thought:

(1) What's wrong with Google and Wikipedia? - It's okay for librarians to refer to Google or Wikipedia. Britannica has 4% error; Wikipedia has 4% error, plus tens of thousands of more entries. It's not wrong to start with Wikipedia & Google, but it is wrong when we stop there.

(2) Don't dread change - This is perhaps the whiniest generation this century. The generation that dealt with two world wars and a depression did fine learning new tools like refrigerators, televisions, radios, and typewriters. And they survived. Why can't we? Is it so hard to learn to use a wiki?

(3) Focus! - We need to focus on the social rather than the technology. Wikis, blogs, and podcasts will come and go. But connecting with users won't. We must not use technology just for the sake of catching up. There has to be a reason to use them.

(4) Don't Be Anonymous - Do we give our taxes to a nameless accountant? Our teeth to a nameless dentist? Our hearts to a surgeon with no name? If those professions don't hide, why are information professionals hiding behind their screens? Go online! Use social networking as a tool to reach out to users!

(5) Millennials - This is perhaps the first generation in human history in which the young teach their elders. But though there is much to learn from youth about technology, there is also much need for mentoring and training if this profession is to prosper and flourish.

(6) Change is to come! - Expect the world to be even more connected than it already is. With HDTV, more cables are freed up for telecommunications. Google's endgame is to provide wireless access through electricity. There are already laser keyboards that let you type on any surface. The world is changing. So must information professionals.

(7) Build paths, not barriers - When pedestrians wear paths into the grass, libraries commonly erect fences to prevent the walking. Why not pave the path that already exists, so that the library becomes more accessible? Librarians must go to the user, not the other way around. If patrons are using Facebook, then librarians need to use it as a channel for communication.

Stephen's power presentation is here for your viewing pleasure as well.

Tuesday, June 17, 2008

SLA Day #1

Just when one thought that bibliographic control had finished changing, it might change some more. On Day 1 of SLA in Seattle, I went to a fascinating session given by José-Marie Griffiths, On the Record: Report of the Library of Congress Working Group on the Future of Bibliographic Control, which offered a multifaceted glimpse into the current situation of bibliographic control and cataloguing. What is intriguing about this working group is that it comprises both the library world and the private sector. Led by a tri-membership of Google, the American Library Association, and the Library of Congress, the working group created a document with five general recommendations: 1) increasing efficiency; 2) enhancing access; 3) positioning technology; 4) positioning the community for the future; and 5) strengthening the profession.

What is controversial about the proposal is the suspension of Resource Description and Access (RDA). The working group believes not only that RDA is too confusing and difficult to implement, but also that it requires much more testing. The report also proposes more continuing education in bibliographic control for professionals and students alike. Only by designing an LIS curriculum and building an evidence base for LIS research can the profession be strengthened for the future.

Although the session had a fairly sparse audience, I found it highly engaging and perhaps even ominous for the future of librarianship. Because the Library of Congress accepted the report with support (although unofficially), this could mean a schism in the progress of RDA, which is viewed as the successor to AACR2. Also, because this working group included the non-library world (i.e., Google and Microsoft), the future of bibliographic control won't be limited to librarians. Rather, it will involve input from the private sector, including publishers, search firms, and the corporate world. Is this a good thing? Time will tell. For better or for worse.

Thursday, June 12, 2008

B2B in a World of Controlled Vocabularies and Taxonomies

The e-readiness rankings have been released, and they reveal that the US and Hong Kong are the leaders in e-readiness. How do you measure it? According to the Economist Intelligence Unit, connectivity is one measure of e-readiness: digital channels that are plentiful, fast, and reliable enough for a country's people and organizations to make the most of the Internet are the basic infrastructure for it. But if individuals and businesses do not find the available channels useful in completing transactions, then the number of PCs or mobile phones in a country is a worthless measure.

Hence, the EIU framed its findings by looking at the opportunities that a country provides to businesses and consumers to complete transactions. Market analyst Forrester estimates that online retail sales in the US grew by 15% in 2006; US$44 billion was spent online in the third quarter, and the firm estimates that 2006 online sales in the Christmas holiday season alone reached US$27 billion. Another research firm, IDC, estimates that business-to-business (B2B) transaction volume in the US will reach US$650 billion by 2008, which amounts to two-thirds of the world's US$1 trillion B2B market by that time.

Even though there is concern that the great weight of the US in online activity takes away from the rest of the world, the fact is that its online adoption also benefits other countries. China is one beneficiary of the growth of B2B volumes in the US, so much so that it has spawned some sizeable and sophisticated B2B transaction service providers, including one of the world's largest online B2B marketplaces, Alibaba.

Over 15 million business and consumer customers in China use Alibaba's online platform. While most do not pay to use the basic services, more than 100,000 businesses do. (In fact, Yahoo! bought a 40% stake in Alibaba for US$1 billion in 2005.) The Chinese firm is evolving into a comprehensive supplier of online business development resources for Chinese customers, many of whom would not be doing business online at all if not for Alibaba.

What does this mean for information professionals? A great deal. Look at the financial implications of B2B on the current telecommunications infrastructure. We're essentially running the online and digital economy on the bricks and mortar of outdated networks. We're in a good position to take advantage of this upcoming economy.

Thursday, June 05, 2008

Talis on Web 2.0, Semantic Web, and Web 3.0

I was honoured to have been interviewed by Richard Wallis of Talis. I was also quite humbled by the whole experience, as I learned just how far I've come in my understanding of the SemWeb and how much more I have to go. We had a good chat about Web 2.0, Semantic Web, and Web 3.0. Have a listen to the podcast. Any comments are welcomed. For those who want a synopsis of what we had discussed, here is my distilled version:

1. Why librarians? - Librarians have an important role to play in the SemWeb. Information organization is a skill librarians have that is directly relevant to the SemWeb architecture. Cataloguing, classification, indexing, metadata, taxonomies & ontologies -- these are the building blocks of LIS.

2. What will the SemWeb look like? - Think HDTV. I believe the SemWeb will be a seamless transition, one that will be led by innovators - companies and individuals who will pave the way with the infrastructure for it to happen, yet at the same time will not alienate those who don't want to encode their applications and pages with SemWeb standards. But like HDTV, those who fall behind will realize that they'll eventually need to convert...

3. Is this important right now? - Not immediately. The SemWeb might have minimal effect on the day-to-day work of librarians, but the same could be said for computer programmers and software engineers. Right now, we are all waiting for that killer application that will drive home the potentials of the SemWeb. So until that transpires, there is much speculation and skepticism.

4. What do librarians need to do? - Learn XML, join the blogosphere's discussion of the SemWeb, discuss with colleagues, pay attention to RDA, and continue questioning the limitations of Web 2.0. Just because we don't see it yet doesn't mean we should stay out of the discourse. Think string theory.

Wednesday, June 04, 2008

Easterlin Paradox of Information Overload

According to Wikipedia, the Easterlin Paradox is a key concept in happiness economics. Theorized by economist Richard Easterlin in the 1974 paper "Does Economic Growth Improve the Human Lot? Some Empirical Evidence," it proposes that, contrary to expectation, economic growth doesn't lead to greater happiness. It quickly caught fire: Easterlin became famous beyond famous, and the paradox became a social science classic, cited in academic journals and the popular media. As the New York Times says, the Easterlin Paradox tapped into a near-spiritual human instinct to believe that money can’t buy happiness. Although there have been attempts to debunk it, I believe the concept applies quite well to Web 2.0 and the information overload it has brought to the current state of the Web.

As one information expert has put it, Web 2.0 is about searching; Web 3.0 will be about finding. Well said. That is exactly the problem with Web 2.0. There is a plethora of excellent free and very useful tools out there - blogs, wikis, RSS feeds, mashups - but at what point does it become too much? Recently, I noticed that my Google Reader has gotten out of hand. I just can't keep up anymore. I skim and I skim and I skim. I'm pulling in a lot of information, but am I really processing it? Am I really happy with the overabundance of rich content in Web 2.0? Not really. Are you?

Tuesday, June 03, 2008

Semantic Web and Librarians At Talis

I've always believed that librarians should and will play a part in the rise of the Semantic Web and Web 3.0. I've gone into the theory and conceptual components, but really haven't discussed too much about the practical elements of how librarians will realize this. Meet Talis. Besides its contribution to the blogosphere, Talis has recently dipped into publishing with its inaugural issue of Nodalities: The Magazine of the Semantic Web. It's a wonderful read - take a look.

How did Talis come about? It's been in the works for quite a while now, and it's worth noting how it came to be. In 1969 a number of libraries founded a small co-operative project, based in Birmingham to provide services that would help the libraries become more efficient. The project was known as the Birmingham Libraries Cooperative Mechanisation Project, or BLCMP. At this time the concept of automation was so new that the term mechanisation was often used in its place.

BLCMP built a co-operative catalogue of bibliographic data at the start of its work, a database that now contains many millions of records. BLCMP moved into microfiche and later IBM mainframes with dedicated terminals at libraries in the mid-seventies, and was one of the first library automation vendors to provide a GUI on top of Microsoft Windows for a better end-user interface. The integrated library system was first called Talis; Talis became the name of the company during restructuring, and the ILS became known as Alto. In 1995 Talis was the first library systems vendor to produce a web-enabled public access catalogue. Much of Talis' work now focuses on the transition of information to the web, specifically the Semantic Web, and Talis has led much of the debate about how Web 2.0 attitudes affect traditional libraries.

How does this include librarians? This ambitious Birmingham-based software company began life as a university spin-off. For many years it was a co-operative owned by its customers (a network of libraries), but in 1996 it was restructured as a commercial entity. It has a well-established pedigree of supplying large-scale information management systems to public and academic libraries in the UK: in fact, more than 60% of UK public libraries now use the company's software, which benefits some 9m library users. In 2002, the company embarked on Talis 2.0, a change programme to take advantage of "the next wave of technology" (Web 2.0 and the Semantic Web). In the year ending March 2004, turnover was £7.5m with profits of £226,000. Who says librarians can't make a buck, right?

Saturday, May 31, 2008

Introducing WebAppeal

There are some good Web 2.0 applications and websites. Then there is WebAppeal. The web service is based on the principle of 'Software as a Service' (SaaS), which is rapidly gaining popularity. The rise of innovative online applications makes traditional, expensive software unnecessary. Examples of successful web applications are the video service YouTube and the free music service Last.fm. To bring some structure and insight into these ever-growing technologies, http://www.appappeal.com/ informs consumers as comprehensively as possible about all the possibilities SaaS web applications have to offer.

Although we're in the age of Web 2.0, one of the main challenges remains information overload. Too much information does not necessarily mean knowledge. That's why I find AppAppeal to be a convincing website which provides insightful reviews of applications and indexes them according to utility. On this website, all applications are organized in categories such as "Blogging", "Personal Finance" and "Wiki Hosting". The website is still being developed. Soon, tools will be added to create an interactive community around web-based applications.

There are already Web 2.0 review sites such as Mashable, All Things Web 2.0, and Bob Stumpel's Everything 2.0. But WebAppeal goes one step further: it analyzes the advantages and disadvantages of particular applications and provides demo videos. I really like this website. It's a good complement to a project that Rex Turgano and I are collaborating on: Library Development Camp, which not only reviews Web 2.0 applications, but offers trial accounts for users to try out different applications. Together we pack a great punch. Stay tuned. More to come. . .

Thursday, May 29, 2008

Day 4 of TEI/XML Bootcamp

Day 4 has come and gone. What did I learn? XML is not easy. Programming is tough business, not for the faint of heart or mind. The main challenge, the one that made my head spin, was learning the complexities behind XHTML and XSLT. XHTML is a powerful tool for the construction of the Semantic Web. Most people are acquainted with the "meta" tags that embed metadata about a document as a whole, yet there are more powerful, granular techniques available too. Although largely unused by web authors, XHTML and XSLT offer numerous facilities for introducing semantic hints into markup, allowing machines to infer more about web page content than just the text. These tools include the "class" attribute, used most often with CSS stylesheets. A strict application of these can allow data to be extracted by machine from a document intended for human consumption.
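
For instance, a web author might tag a book citation like this -- the class names here are my own invention, standing in for whatever vocabulary a community agrees on:

    <div class="book">
      <span class="title">Everything is Miscellaneous</span> by
      <span class="author">David Weinberger</span>
      (<span class="published">2007</span>)
    </div>

A human reader sees an ordinary citation; a machine that knows the vocabulary sees a title, an author, and a date.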

Although there have been several proposals for embedding RDF inside HTML pages, the technique of using XSLT transformations has a much broader appeal. Not everyone is keen to learn RDF, which presents a barrier to the creation of semantically rich web pages; XSLT gives web developers a way to add semantic information with minimal extra effort. Dan Connolly of the W3C has conducted quite a number of experiments in this area, including HyperRDF, which extracts RDF statements from suitably marked-up XHTML pages.
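
As a rough sketch of the technique (not Connolly's actual HyperRDF, which I have not studied line by line), an XSLT transformation could harvest the class-tagged spans from the snippet above into RDF statements, assuming the page is served as properly namespaced XHTML:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
        xmlns:dc="http://purl.org/dc/elements/1.1/"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
      <xsl:template match="/">
        <rdf:RDF>
          <rdf:Description>
            <!-- lift the class-tagged spans into Dublin Core properties -->
            <dc:title><xsl:value-of select="//xhtml:span[@class='title']"/></dc:title>
            <dc:creator><xsl:value-of select="//xhtml:span[@class='author']"/></dc:creator>
          </rdf:Description>
        </rdf:RDF>
      </xsl:template>
    </xsl:stylesheet>

Run over the citation markup shown earlier, this yields an RDF description with the title and author as Dublin Core properties -- semantics extracted from a page written for people.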
What can librarians do? Resource Description and Access is just around the corner, and there is much buzz (good and bad) that it's going to change the way librarians and cataloguers think about information science and librarianship. I encourage information professionals to be aware of the changes to come. Although most will not be involved directly with the Semantic Web, they can keep abreast of developments, particularly the exciting ones in information organization and classification. Workshops and presentations about RDA are out in droves. Pay attention. Stay tuned. There could be relevancy in these new developments that spills over into the SemWeb.

Tuesday, May 27, 2008

The Digital Humanities

I am at Day 2 of the Digital Humanities Summer Institute. Prior to this workshop, I had no inkling of what the digital humanities were. Not anymore. The Digital Humanities, also known as Humanities Computing, is a field of study, research, teaching, and invention concerned with the intersection of computing and the disciplines of the humanities. It is methodological by nature and interdisciplinary in scope, involving the investigation, analysis, synthesis, and presentation of knowledge using computational media. The DHSI provides an ideal environment to discuss, learn about, and advance skills in the new computing technologies influencing the work of those in the Arts, Humanities, and Library communities.

I'm currently taking Text Encoding Fundamentals and their Application at the University of Victoria from May 26–30, 2008, taught by Julia Flanders and Syd Bauman, experts in the Text Encoding Initiative (TEI), a collectively developed and maintained XML standard for representing texts in digital form and specifying encoding methods for machine-readable texts. And it has been a blast. This is the seventh year of the institute's existence, and already it has gained the attention of academics and librarians across the world.
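
To give a sense of what text encoding actually looks like, here is a tiny TEI fragment. The elements are genuine TEI (lg marks a line group, l a verse line), though the sample is simplified from what a real edition would carry:

    <lg type="quatrain">
      <l n="1">Shall I compare thee to a summer's day?</l>
      <l n="2">Thou art more lovely and more temperate:</l>
    </lg>

The payoff is that line numbers, stanza structure, and the like become machine-readable, so the same encoded text can drive display, search, and analysis.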

The DHSI takes place across a week of intensive coursework, seminar participation, and lectures. It brings together faculty, staff, and graduate student theorists, experimentalists, technologists, and administrators from different areas of the Arts, Humanities, Library and Archives communities and beyond to share ideas and methods, and to develop expertise in applying advanced technologies to activities that impact teaching, research, dissemination, and preservation. What have I learned so far? Lots. But most of all, just how big a part XML plays in the Semantic Web. But more on that in the next posting . . . stay tuned.

Friday, May 23, 2008

One Million Dollar Semantics Challenge and API

TextWise, LLC has launched the SemanticHacker $1 Million Innovators’ Challenge and a new open API for Semantic Discovery. The Challenge lets developers showcase the power of TextWise’s patented Semantic Signature® technology and accelerates the development of breakthrough applications.

The Challenge provides incentives to encourage the creation of software prototypes and/or business plans that demonstrate commercial viability in specific industries. Are you up to the Challenge? Go to Semantichacker.com to experience the technology first-hand in the demo and learn more about how to enter the $1 million challenge.

But what are Semantic Signatures®? They identify concepts and assign them weights; in other words, they're the ‘DNA’ of documents, highly effective at describing what the documents are ‘about.’ Semantic Signatures® enable Web publishers and application developers to automatically embed consistent, semantically meaningful tags within their content for use in classification, organization, navigation, and search.

In many ways, that's what librarians can offer in terms of information structuring and organization. Interestingly, TextWise technology will have a spot at the Semantic Technology Conference in San Jose on May 21, 2008. I won't be able to attend. But if you are going, could you give a write-up? I would be forever in your debt.

Thursday, May 22, 2008

Dublin Core is Dead, Long Live MODS

Jeff Beall wrote an article called Dublin Core: An Obituary. In it Beall asserts that the Dublin Core Metadata Initiative is a failed experiment. Instead, MODS is the way to go. And this was back in 2004! What is MODS? The Library of Congress' Network Development and MARC Standards Office, with interested experts, is developing a schema for a bibliographic element set that may be used for a variety of purposes, and particularly for library applications. As an XML schema it is intended to be able to carry selected data from existing MARC 21 records as well as to enable the creation of original resource description records.

It includes a subset of MARC fields and uses language-based tags rather than numeric ones, in some cases regrouping elements from the MARC 21 bibliographic format. This schema is currently in draft status and is being referred to as the "Metadata Object Description Schema (MODS)". MODS is expressed using the XML schema language of the World Wide Web Consortium. The standard is maintained by the Network Development and MARC Standards Office of the Library of Congress with input from users.

Here's what MODS can do that the Dublin Core can't:
1. The element set is richer than Dublin Core (see the sketch after this list)
2. The element set is more compatible with library data than ONIX
3. The schema is more end user oriented than the full MARCXML schema
4. The element set is simpler than the full MARC format
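
To see point 1 in action, compare bare-bones sketches of the same book in both schemes. The MODS elements and namespace below are real; the record itself is my own simplified example:

    <!-- Dublin Core: flat, one element per property -->
    <dc:title>Everything is Miscellaneous</dc:title>
    <dc:creator>Weinberger, David</dc:creator>

    <!-- MODS: nested, typed elements -->
    <mods xmlns="http://www.loc.gov/mods/v3">
      <titleInfo>
        <title>Everything is Miscellaneous</title>
      </titleInfo>
      <name type="personal">
        <namePart>Weinberger, David</namePart>
        <role><roleTerm type="text">author</roleTerm></role>
      </name>
      <originInfo><dateIssued>2007</dateIssued></originInfo>
    </mods>

Where Dublin Core gives you "creator" and nothing more, MODS can say the name is personal and the role is author -- that is the richness Beall is pointing at.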

In my article in the Semantic Report, I argue that the DCMI is potentially relevant to the SemWeb because implementations of Dublin Core not only use XML but are also based on the Resource Description Framework (RDF) standard. The Dublin Core is an all-encompassing project maintained by an international, cross-disciplinary group of professionals from librarianship, computer science, text encoding, the museum community, and other related fields of scholarship and practice. As part of its Metadata Element Set, the Dublin Core implements metadata tags such as title, creator, subject, access rights, and bibliographic citation, using RDF and RDF Schema.

So will the Dublin Core’s role in knowledge management representation be significant in the emergence of the SemWeb? So far, MODS hasn't done the job, even though its proponents claim it can. Is this situation similar to the ancient Chinese period of the Hundred Schools of Thought? Who will win in the end? Or which ones? Perhaps the opportunities and possibilities are far greater than a narrow search for one path to absolute knowledge. So we march on . . .

Tuesday, May 20, 2008

Post-modern business in the Free World - Open Access & Librarians

I came across this interesting article from the Vancouver Sun, Post-modern business model: It's free. Videogame company Nexon has been giving away its online games for free, and making its revenue from selling digital items that gamers use for their characters. Garden says his business is as much about psychology as it is about game design. It’s no good to sell a bunch of cool designer threads to a character who is isolated in a game, because no one will see how good he looks.
Free games can have a dozen different revenue models, from Nexon’s microtransactions to advertising, product placement within a game, power and level upgrades, or downloadable songs. The question, however, extends beyond videogames to any digital product offered to consumers for free. Much of Nexon's approach is based on Chris Anderson's "free" concept.

“No one says you can’t make money from free." What does this mean for libraries, especially since the mandates and goals of libraries are not about making money? The possibilities are there. A great number of libraries are already dipping into open access initiatives, particularly at a time when database vendors and publishers are charging arms, legs, and first-borns. With Web 2.0 technologies forming an important foundation for digital and virtual outreach opportunities, and the SemWeb on the horizon, I encourage librarians and information professionals to put on their thinking caps and think together in a collaborative environment to break down the silos of information gathering, and move towards information sharing.

Sunday, May 18, 2008

Librarian 2.0

Sometimes you just read an article and go, I get it. A lightbulb shines brightly above you. Then you quickly turn it off to save energy. And quickly run to the computer to blog about it. Professionalizing knowledge sharing and communications is worthy of praise.

There’re a lot of articles that deal with the Library 2.0 mantra. But John Cullen goes beyond that, and proposes the idea that Library 2.0 should extend to the librarian. It should be Librarian 2.0. And what does that mean?

The key is developing a communicative orientation: one that turns the old, tiring stereotype of library work as quiet, reflective, and procedural into a practice primarily focused on listening, engaging, and developing an understanding of the unique position of every individual.

In other words, just as technology is important to the library, we must also be alert to the changing nature of information and the profession. No longer are librarians doing the same duties repetitively and mindlessly. Web 2.0 technologies are merely the surface manifestation of L2. The opportunity is there to use this paradigm shift to teach other professions how to actively engage with their service consumers. All aboard!

Friday, May 16, 2008

Search Monkey and the SemWeb

We're getting closer. Yahoo is incubating a project code-named "Search Monkey," a set of open-source tools that allow users and publishers to annotate and enhance search results associated with specific web sites. Using SearchMonkey, developers and site owners can use structured data to make Yahoo! Search results more useful and visually appealing, and drive more relevant traffic to their sites.

The new enhancements differ from Yahoo's "Shortcuts" that sometimes appear at the top of search result pages. Shortcuts are served by Yahoo whenever the search engine is confident that the shortcut links are more relevant than the other web search results on the page. Often, shortcuts highlight content from Yahoo's own network of sites.

The new enhancements can be applied to any web site. Publishers can add additional information that will be displayed with the web search result. For example, retailers can include product information, restaurants can include links to menus and reviews, local merchants can display operating hours, address, and phone information, and so on—far more information than a title, URL, and description that make up current generation search results.
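
How would a publisher expose that structured data? I'm not certain exactly which formats SearchMonkey consumes, but microformats were one common route at the time. A staff-directory page, for example, could mark a profile up as an hCard (vcard, fn, title, and org are the genuine hCard class names; the person is invented):

    <div class="vcard">
      <span class="fn">Jane Q. Librarian</span>,
      <span class="title">Systems Librarian</span> at
      <span class="org">Example University Library</span>
    </div>

A crawler that understands hCard can turn this into a mini-profile of the kind described below.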

Here's the exciting thing. As Search Engine Land reports:
Anyone can create an app for a web site. Yahoo is collecting the most useful apps into a gallery that you as a searcher can enable for your own Yahoo search results. For example, if you like the app that was created for LinkedIn, which shows a mini-profile of a person, you can include that app so that the mini-profiles display whenever you search on a person's name.

It's true. The SearchMonkey developer tool helps users find and construct data services that you can use to build apps. Once you've built your app, you can use it yourself and share it with others. Take a look at this :)


Wednesday, May 14, 2008

From Dublin Core to the Semantic Web

I've just published a piece in the Semantic Report titled The Semantics of the Dublin Core – Metadata for Knowledge Management. It's an experimental piece about the potential for applying principles from the Dublin Core Metadata Initiative to the SemWeb. In a previous article about half a year ago, Dean and I proposed that the library catalogue could be used as a blueprint for the Semantic Web. Perhaps theoretical and conceptual, the arguments fleshed out the ideas, but not the practical applications. In this latest article, I wanted to outline in greater detail how exactly developments in library and information science are playing out, not only in the SemWeb, but in knowledge management in general.

Can the DCMI provide the infrastructure for the SemWeb? It could. Or it could not. Some have gone as far as saying that the Dublin Core is dead. But I'm not going to add more to that discourse. What I wanted to do was find apparently disparate entities: B2B, the Dublin Core, and the SemWeb, and tie them together using principles of knowledge organization in the form of the DCMI. Blasphemous? Perhaps.

My point in the article isn't to create something out of nothing. The purpose is to extend the idea that knowledge management for librarians and information science is nothing new. In 2002, two years before Tim O'Reilly coined the term "Web 2.0," librarian Katherine Adams had already argued that librarians would be an essential piece of the SemWeb equation. In her seminal piece, The Semantic Web: Differentiating between Taxonomies and Ontologies, Adams argues that ontologies and taxonomies are synonymous - computer scientists refer to hierarchies of structured vocabularies as "ontology" while librarians call them "taxonomy." What the Dublin Core offers is an opportunity to bridge different topics and extend across disciplines to navigate the complexities of the SemWeb. Fodder for discussion. But good fodder nonetheless, I hope.

Monday, May 05, 2008

Library Development Camp

I'm excited to announce the formation of Library Development Camp. Our initiative is to help fellow librarians and information professionals in Canada to explore and learn about the latest web tools and technologies from colleagues who actually use them. This web community is open to any one working in the library or information management field in Canada.

How does this work? Most of the magic happens "offline" as we try to meet up in person to discuss these tools as well as give demos and training, hold discussions and debates, and share ideas and tips on how to use these tools effectively in the workplace or even on a personal level. It's all about sharing. We hope to spawn other LibraryDevCamp groups across Canada. If you would like to start one up in your city, let us know and we'll set up a section on our web site.

Any library or information professional who already uses any of these web tools and services is welcome to join and be a LibraryDevCamp.ca contributor or moderator. So far, we have an all-star cast of experts, such as Dean Giustini, Eugene Barsky, and Rex Turgano. We hope to have you join us, too. In the spirit of Web 2.0, our virtual meeting place is hosted on Movable Type, a weblog publishing system developed by the company Six Apart. Please stay tuned as we expect our community to grow, not only in members but also in exciting ventures.

Thursday, May 01, 2008

Economics 2.0

Although I enjoyed Economics 100 (Micro and Macroeconomics) and learned a great deal, I have to admit it wasn't the most exciting course at times. The textbook we used was Gregory Mankiw's Principles of Economics. (I still have copies of the textbooks.) He has written two popular college-level textbooks: one in intermediate macroeconomics and the more famous Principles of Economics, which is popular among high-school Advanced Placement Economics teachers. More than one million copies of the books have been sold in seventeen languages.

Mankiw has also been an important figure in American politics: President George W. Bush appointed him Chairman of the Council of Economic Advisers in 2003. He has since resumed teaching at Harvard, taking over the introductory economics course Social Analysis 10 (which he affectionately refers to as "Ec. 10"). However, Mankiw also believes in using Web 2.0.

This is Mankiw's purpose for the blog:
I am a professor of economics at Harvard University, where I teach introductory economics (ec 10) among other courses. I use this blog to keep in touch with my current and former students. Teachers and students at other schools, as well as others interested in economic issues, are welcome to use this resource.

What's exciting about Mankiw's blog is the fact that it dips into the Web 2.0 blogosphere. The blog is much more than just a website: it's an intellectual and virtual space for him to keep in touch with colleagues and students, and for marketing his profession and work to the non-expert. It's fantastic outreach. Librarians everywhere should take notice.

Friday, April 25, 2008

Library 2.0

Michael Casey and Laura Savastinuk's article in Library Journal not only changed the way libraries are perceived, but also how librarians run them. In a way, Library 2.0 principles are nothing new: interlibrary loan is very much a "long tail" concept. In fact, would it be possible to view Library 2.0 as change management in its most extreme form? Nonetheless, the book that followed was a brilliant read. Here's what I got out of it about Library 2.0 concepts.

(1) Plan, Implement, and Forget - This is the trap to avoid. Changes must be constant and purposeful, and services need to be continually evaluated.

(2) Mission Statement - A library without a clear mission is like a boat without a captain. The mission drives the organization, serving as a guide when selecting services for users and letting you set a clear course for Library 2.0.

(3) Community Analysis - Know your users. Talk to them, and get a feel for who you're serving and what they need.

(4) Surveys & Feedback - Get feedback from both users and staff. It's important to know what works and what doesn't.

(5) Team up with competitors - Don't think of the library as being in a "box," competing against bookstores, cafes, or the Internet. Look at what users are doing elsewhere that they could be doing through the library, and create win-win relationships with local businesses that benefit everyone.

(6) Real input from staff - Asking for feedback means implementing ideas, not just collecting them for show. Otherwise, staff will eventually see through the charade, and morale will suffer.

(7) Evaluating services - Sacred cows do not necessarily need to be eliminated; however, nothing should be protected from review.

(8) Three Branches of Change model - This allows all staff, from frontline workers to the director, to understand the changes made. The three teams are the investigative team, the planning team, and the review team.

(9) Long tail - Web 2.0 concepts should be incorporated into the Library 2.0 model as much as possible. For example, the Netflix model does something few services can do: get materials into the hands of people who do not come into libraries. Think virtually as well as physically.

(10) Constant change & user participation - These two concepts form the crux of Library 2.0.

(11) Web 2.0 technologies - They give users access to a wide variety of applications that are neither installed nor approved by IT, so libraries have the flexibility to experiment like never before. It is also important to create conversation where none existed before; online applications help fill this gap.

(12) Flattened organizational structure - Directors should not make all the decisions; front-line staff input should be included. Committees that include both managers and lower-level staff help "flatten" the hierarchy, creating a more horizontal structure that leads to more realistic decision-making.

Tuesday, April 22, 2008

7 Opportunities for the Semantic Web

Dan Zambonini’s 7 f(laws) of the Semantic Web is a terrific read, and offers a refreshing perspective on the challenges of realizing the SemWeb. Too often we hear a dichotomy of arguments, but Zambonini calmly lays out what he believes are the hurdles for the SemWeb. Instead of regurgitating his points, I’m going to complement them with my own comments:

(1) Not all SemWeb data are created equal - There are a lot of RDF files on the web, in various formats. But that doesn’t equate to the SemWeb. This is a bit of a strawman, though. In fact, it emphasizes the point that the components of the SemWeb are here; the challenge is finding the mechanism or application that can glue everything together (see the sketch after this list).

(2) A Technology is only as good as developers think it is - Search analysis reveals that people are actually more interested in AJAX than in RDF Schema, despite the fact that RDF has the longer history. Zambonini believes this is because the SemWeb is so incredibly exclusive, in an ivory-towerish way. I agree. However, who is to say the SemWeb won’t be able to accommodate a broader audience in the future? We’ll just have to wait and see.

(3) Complex systems must be built from successively simpler systems - I agree with this point. Google succeeded in the search engine wars because it learned how to build up slowly, creating a simple system that got more complex as it needed to. People love Web 2.0 services because they’re easy to use and understand. But whereas Web 2.0 was about searching, the SemWeb should be about finding. Nobody said C++ and Java were easy, but complexity pays off in the long run.

(4) A new solution should stop an obvious pain - The SemWeb needs to prove what problems it can solve, and prove its purpose. Right now, Web 2.0 and 1.0 do a good job, so why would we need any more? Fair enough. But information is still in silos. Until we open up the data web, we’re still in many ways living in the dark.

(5) People aren’t perfect - Creating metadata and classifications is difficult, and people are sloppy. Will adding SemWeb rules add to the mess that is the Web? I honestly can’t answer this one; we can only predict. But perhaps it’s too cynical to write off people’s metadata-creating skills prematurely. HTML wasn’t easy, but we managed.

(6) You don’t need an ontology of everything. But it would help - Zambonini argues for a top-down ontology, which would be a one-size-fits-all solution for the entire Web, rather than building from a bottom-up approach based on the folksonomies of the social web. I would argue that for this to work, we need to look at it from different angles. Perhaps we can meet halfway?

(7) Philanthropy isn’t commercially viable - Why would any sane organization buy into the SemWeb and expose its data? We need that killer application in order for this to work. Agreed. eBay did wonders. Let’s hope there’s a follow-up on the way.
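
As promised under point (1), here's a small sketch of my own (again assuming Python and the rdflib package, which Zambonini's piece does not use; the resource URI is a made-up placeholder) of why "lots of RDF files in various formats" doesn't automatically add up to a SemWeb. Even a single, trivial statement travels in several wire formats, and any application hoping to glue the web together has to read all of them first:

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

g = Graph()

# One hypothetical resource and a single statement about it.
page = URIRef("http://example.org/page")
g.add((page, DC.title, Literal("An ordinary web page")))

# The same triple serialized three ways: RDF/XML, Turtle, and N-Triples.
# An aggregator must parse all of these (and more) before any
# "gluing together" can begin.
for fmt in ("xml", "turtle", "nt"):
    print("--- %s ---" % fmt)
    print(g.serialize(format=fmt))

The triples are out there, and the parsers are the easy part. It's the application on top, the glue, that's still missing.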