Sunday, September 07, 2008

Web 2.0 + Semantic Web = Web 3.0

Finally, the latest issue of Talis' Nodalities is out. Alex Iskold, one of the brightest minds in the SemWeb industry, has written an article, Semantic Search: Myth and Reality, which is really worth the wait and the read. He argues that the SemWeb shouldn't be about competing with Google, since Google's algorithm has so successfully ruled the web for over a decade. Why fix something that's not broken?

Try typing in the query "What is the capital of China?" and Google automatically spits out the answer. But ask it a question such as "What is the best vacation for me right now?" and the answers a search engine provides might not be so clear after all; in fact, an answer is probably impossible. That is where the SemWeb comes in.

In analyzing SemWeb search engines such as Search Monkey, Freebase, Powerset, and Hakia, Iskold proposes that the SemWeb should be about solving problems that can't be solved by Google today. In fact, the search box must go, in order for the SemWeb to work.

Friday, September 05, 2008

Quantum Computer Reviewed

Back in the 80's, quantum computing was viewed as something of a futuristic scenario, something out of a sci-fi flick like Minority Report. However, in 1994, interest surged immediately after Peter Shor, then at Bell Laboratories (now at MIT), published his famous quantum factoring algorithm, capable of undermining widely used cryptosystems that rely on the difficulty of factoring large numbers.

Currently, physicists, computer scientists, and engineers in more than 100 groups at universities, institutes, and companies around the world are exploring the frontiers of quantum information, encompassing quantum computing as well as the recently commercialized quantum cryptography and quantum teleportation communication techniques.

Ross and Oskin's Quantum Computing is definitely worth a read. It promises exponentially scalable computing power that could solve problems beyond the capabilities of conventional computers. The key is exploiting the superposition of quantum-entangled information units, or qubits. But the research challenges are daunting: how do we create and reliably compute with qubits, which require the seemingly mutually exclusive conditions of exquisite classical control and isolation from any external influence that could destroy the entanglement?
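The superposition idea can be sketched numerically without any quantum hardware. Below is a purely illustrative toy in Python (not an actual quantum computation): a qubit as a two-component complex vector, put into superposition by the Hadamard gate.

```python
import math

# A qubit is a 2-component complex state vector; |0> = (1, 0).
ket0 = [1 + 0j, 0 + 0j]

def hadamard(state):
    """Apply the Hadamard gate, which puts a basis state into superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: the measurement probability is the squared amplitude."""
    return [abs(amp) ** 2 for amp in state]

superposed = hadamard(ket0)
print(probabilities(superposed))  # approximately [0.5, 0.5]: both outcomes equally likely
```

The exponential power comes from the fact that n entangled qubits need 2^n such amplitudes to describe, which is exactly what makes them so hard to simulate classically.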

What does this mean for information professionals? A lot. With Web 3.0 around the corner, information processing at high levels will be necessary. It's still cloudy how it will all look. But with quantum computing, we're on the right track.

Monday, September 01, 2008

The Third Digital (Dis)order


Just finished reading David Weinberger's Everything is Miscellaneous: The Power of the New Digital Disorder. A terrific ideas-driven text, which proposes that we have to relinquish the notion that there is only one way of organizing information hierarchies. From the Dewey Decimal System to the way we organize our CD collections, Weinberger critiques and takes a shot at everything along the way. And he makes an excellent argument: in the digital world, the laws of physics no longer apply. Just take a look at your computer files, and you realize you can organize your music by any number of criteria -- artist, genre, song name, length, or price -- you name it, you've got it. Because the Web is a hyperlinked web of information that grows organically, it's really a mess out there. And Web 2.0 doesn't help at all with the glut that has emerged.
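Weinberger's point about the digital shelf is easy to demonstrate: the same collection can be re-ordered instantly by any criterion, whereas a physical record can only sit in one spot. A toy sketch in Python, with made-up track data:

```python
# A tiny invented music collection; each track is a dict of metadata.
tracks = [
    {"artist": "Miles Davis", "genre": "Jazz", "title": "So What", "length": 545},
    {"artist": "The Beatles", "genre": "Rock", "title": "Yesterday", "length": 125},
    {"artist": "Glenn Gould", "genre": "Classical", "title": "Aria", "length": 186},
]

# Digitally, every ordering coexists; none of them is "the" shelf order.
by_artist = sorted(tracks, key=lambda t: t["artist"])
by_length = sorted(tracks, key=lambda t: t["length"])
by_genre = sorted(tracks, key=lambda t: t["genre"])

print([t["title"] for t in by_length])  # ['Yesterday', 'Aria', 'So What']
```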

Weinberger proposes that in this new digital world, there are three planes to disorder:

(1) Physical Disorder - The natural state: when things are left as they are, disorder inevitably arises.

(2) Metadata Disorder - In response to this physical disorder, we create metadata -- lists, classification systems, hierarchies, taxonomies, ontologies, catalogues, ledgers, anything -- to bring order to the physical realm.

(3) Digital Disorder - The digital world makes bringing order that much more difficult, yet also that much more interesting and convenient. There are more ways than one to bring order to the chaos. Just look at Wikipedia.

Friday, August 29, 2008

Open Access: The Beginning of the End?

I jotted down a few ideas about open access, and wouldn't you know it, they turned into an article. OA's an interesting phenomenon. It's here, but not really. There is still so much skepticism about whether it'll work out that we just don't know whether it will make it. There are already textbooks mashed up from bits and pieces of many other textbooks for students to access digitally, rather than buying the whole expensive mess at the beginning of every semester. Journal purchases by libraries, especially academic ones, are starting to slip. With the rise of the Semantic Web, open access and open source must go hand in hand in order for them to collectively contribute to the new way of searching and organizing online information. Librarians, take heed. Peter Suber, are you listening?

Tuesday, August 26, 2008

A LEAP of Faith

One of the main tasks in my position is to evaluate digital technologies and how they fit into the Library model. I'm always looking at how other organizations integrate emergent technologies into their webpages. One organization that has done a superb job is the Learning Enhancement Academic Partnership (LEAP) program at UBC. They really have some outstanding concepts. Libraries are increasingly moving towards the Library 2.0 (L2) model. Catalogues and homepages play only a part of the whole picture, but an important one. Here's why LEAP surpasses most library homepages by leaps and bounds. Here's hoping it catches on. And quick.

(1) User-generated content – As opposed to content posted solely by the site author(s), LEAP encourages user feedback, with things such as online surveys, polls, and student blogs.

(2) Treats users as co-developers of the site – The more people using the service, the better it becomes. LEAP takes this fundamental tenet to heart, encouraging students' reviews, comments, and rants. Collective intelligence in its purest form.

(3) Customizable content and interface – LEAP allows students (and faculty) to merge their blog content to the

(4) Core application of the website runs through the browser and web server – Rather than on a desktop platform. We don’t need Dreamweaver. All we need is freely downloadable open-source software. LEAP uses WordPress, a beautiful piece of work.

(5) Social software – The LEAP homepage maximizes this. Blogs, tagging, video and image sharing. You name it, they’ve got it. The whole Web 2.0 suite.

(6) Integration of emerging web technologies – LEAP builds on AJAX and RSS, and uses APIs for mashups.
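Point (6) is the easiest to try at home: RSS is just XML, so a few lines of code can pull headlines out of a feed for a mashup. A minimal sketch in Python using the standard library, with an invented inline sample feed (the feed title and URLs are made up; a real mashup would fetch a live feed over HTTP):

```python
import xml.etree.ElementTree as ET

# A tiny sample RSS 2.0 feed, inlined so the sketch is self-contained.
sample_feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>LEAP Student Blogs</title>
    <item><title>Exam tips</title><link>http://example.org/1</link></item>
    <item><title>Library hours</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

root = ET.fromstring(sample_feed)
headlines = [item.findtext("title") for item in root.iter("item")]
print(headlines)  # ['Exam tips', 'Library hours']
```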

Tuesday, August 19, 2008

7 Ways to Better Teaching

Paul Axelrod’s Student Perspectives on Good Teaching: What History Reveals offers perceptive insight into what makes good teaching. As academic librarians, we teach almost as much as faculty do. Many don't know about this lesser-known side of the profession. Axelrod puts things into perspective. Librarians need to take charge of instruction - it's an integral part of the profession. What good is technology if there's no one to translate it for users? Here are the top seven things a good teacher should have:

(1) Accessibility and Approachability

(2) Fairness

(3) Open-Mindedness

(4) Mastery and Delivery

(5) Enthusiasm

(6) Humour

(7) Knowledge and Inspiration Imparted

Wednesday, August 13, 2008

Information Anarchy

I've just written a short piece about the Semantic Web. What is it? I know what it isn't. The current web is, in many ways, an information anarchy, where the multitude of user accounts and passwords, coupled with the vast number of similar web programs, has made online searching not only a difficult task at times, but confusing and frustrating most of the time. In my short article, I explain what the SemWeb proposes to do, and offer the famous seven-layer cake as my model of grand understanding. As usual, comments are most welcome.

Tuesday, August 05, 2008

Five Weeks to a Semantic Web Class

Over at the Semantic Library, which I admire and follow religiously, Melissa is developing a Semantic Library course, very much in line with the 6 Weeks to a Social Library class by Meredith Farkas. What would I teach if I were involved in this very exciting initiative? Well, why don’t I just say right here?

(1) Standards – What is RDF? What kind of metadata is it? What does it have to do with librarians?

(2) Classification and Metadata – What do the Dublin Core Metadata Initiative, Resource Description and Access, and MARC 21 have to do with the SemWeb?

(3) From HTML to AJAX to SPARQL – The evolution of programming has led to different versions of the same thing. Is SPARQL the key to unlocking the mystery of the SemWeb? Or are there alternatives?

(4) Realizing the two Tims – O’Reilly’s and Berners-Lee’s visions of the Web. Where are we, and where are we heading? Is Nova Spivack the answer?
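To make topics (1) and (3) concrete for a class: an RDF statement is just a subject-predicate-object triple, and a SPARQL basic graph pattern is, at heart, pattern matching over a set of triples. A toy sketch in plain Python (the book data is invented, and real work would use an RDF library and real SPARQL, not this stand-in):

```python
# RDF boils down to subject-predicate-object triples.
triples = {
    ("book:MobyDick", "dc:creator", "Herman Melville"),
    ("book:MobyDick", "dc:date", "1851"),
    ("book:Walden", "dc:creator", "Henry David Thoreau"),
}

def match(pattern, store):
    """Tiny stand-in for a SPARQL basic graph pattern:
    None plays the role of a ?variable and matches anything."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?o WHERE { book:MobyDick dc:creator ?o }
print(match(("book:MobyDick", "dc:creator", None), triples))
```

Five minutes with a sketch like this, and RDF stops being an acronym and starts being a data model librarians already understand: a catalogue record exploded into statements.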

Saturday, August 02, 2008

Making Academic Web Sites Better

Shu Liu's Engaging Users: The Future of Academic Library Web Sites is an insightful analysis of the present state of academic library homepages. Academic library websites are libraries' virtual presentation to the world, and Liu argues for applying Web 2.0 concepts to them. I enjoyed this article tremendously. It lays out a vision that many websites can handily and readily use in the current landscape of the Web. Take a look; it's worth a read.

(1) User Focus - Focus on library users by presenting library resources in a targeted and customized manner

(2) Personalization - Recognize library users as individuals by giving them opportunities to configure their own library interfaces and to select tools and content based on personal needs

(3) User engagement - Provide sufficient tools to allow and encourage library users in content creation and exchange

(4) Online communities - Nurture the development of online communities by connecting individuals through online publishing, and sharing Web 2.0 tools

(5) Remixability - Employ a mashup approach to aggregate current and emerging information technologies to provide library users with opportunities to explore new possibilities of information resources.

Tuesday, July 29, 2008

WHATWG?

I've written about the potential of Resource Description & Access playing a role in the Semantic Web, and the importance of librarians in this development. Not only that, but the Resource Description Framework would be the crux of this new Web. Brett Bonfield, a graduate student in the LIS program at Drexel University, an intern at the Lippincott Library at the University of Pennsylvania, and an aspiring academic librarian, has pointed out that the WHATWG -- the "Web Hypertext Application Technology Working Group," a growing community of people interested in evolving the Web, focused primarily on the development of HTML and the APIs needed for Web applications -- might have some influence on how things will play out.


The WHATWG was founded by individuals from Apple, the Mozilla Foundation, and Opera Software in 2004, after a W3C workshop. Apple, Mozilla and Opera were becoming increasingly concerned about the W3C’s direction with XHTML, its lack of interest in HTML, and its apparent disregard for the needs of real-world authors. So, in response, these organisations set out with a mission to address these concerns, and the Web Hypertext Application Technology Working Group was born.

There was a time when RDF’s adoption would have been a given, when the W3C was seen as nearly infallible. Its standards had imperfections, but their openness, elegance, and ubiquity made it seem as though the Semantic Web was just around the corner. Unfortunately, that future has yet to arrive: we’re still waiting on the next iteration of basic specs like CSS; W3C bureaucracy persuaded the developers of Atom to publish their gorgeous syndication spec with IETF instead of W3C; and, perhaps most alarmingly, the perception that W3C’s HTML Working Group was dysfunctional encouraged Apple, Mozilla, and Opera to team with independent developers in establishing WHATWG to create HTML’s successor spec independently from the W3C. As more non-W3C protocols took on greater prominence, W3C itself seemed to be suffering a Microsoft-like death of a thousand cuts.

This is interesting indeed. As Bonfield reveals, on April 9, WHATWG’s founders proposed to W3C that it build its HTML successor on WHATWG’s draft specification. On May 9, W3C agreed. W3C may never again be the standard bearer it once was, but this is compelling evidence that it is again listening to developers and that developers are responding. The payoff in immediate gratification—the increased likelihood of a new and better HTML spec—is important, but just as important is the possibility of renewed faith in W3C and its flagship project, the Semantic Web. Things are moving along just fine, I think.

Fascinating. Two roads that lead to the same destination. But the question remains: are we any closer to the SemWeb?

Tuesday, July 22, 2008

Web 3.0 in 600 words

I've just penned an article on Web 3.0 from a librarian's standpoint. In my article, What is Web 3.0? The Next Generation Web: Search Context for Online Information, I lay out what I believe are the essential ingredients of Web 3.0. (Note I don't believe the SemWeb and Web 3.0 are synonymous even though some may believe them to be so - and I explain why). Writing it challenged me tremendously in coming to grips with what exactly constitutes Web 3.0. It forced me to think more concisely and succinctly about the different elements that bring it together.

It's conceptual; therefore, it's murky. And as a result, we overlook the main elements which are already in place. One of the main points I make is that whereas Web 2.0 is about information overload, Web 3.0 will be about regaining control. So, without further ado, please take a look at the article, and let me know your thoughts. The article would not have been possible without the excellent help of the legendary librarian, the Google Scholar, Dean. He helped me out a great deal in fleshing out these ideas. Thanks DG.

Sunday, July 20, 2008

Web 3.0 and Web Parsing

Ever wondered how Web 3.0 and the SemWeb can read webpages in an automated, intelligent fashion? Take a look at how the Website Parse Template (WPT) works. WPT is an XML-based open format that provides an HTML structure description of website pages. The WPT format allows web crawlers to generate Semantic Web RDF for web pages.

Website Parse Template consists of three main entities:

1) Ontologies - The content creator defines the concepts and relations that are used on the website.

2) Templates - The creator provides templates for groups of web pages that are similar in content category and structure. The publisher provides the HTML elements' XPaths or tag IDs and links them to the website ontology's concepts.

3) URLs - The creator provides URL patterns that collect the group of web pages linking them to a parse template. In the URLs section, the publisher can separate part of the URL as a concept and link it to the website ontology.
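The idea behind a parse template can be sketched in a few lines: the publisher declares which element of a page maps to which ontology concept, and a crawler applies that mapping to emit RDF-style triples. Everything below (the concept names, the fake page fields and URL) is invented for illustration and is not WPT's actual syntax:

```python
# A hypothetical parse template: page selector -> ontology concept.
template = {
    "h1.title": "book:title",
    "span.author": "book:author",
}

# What a crawler might extract from one page matching the template.
page_url = "http://example.org/moby-dick"
extracted = {"h1.title": "Moby-Dick", "span.author": "Herman Melville"}

def to_triples(url, fields, tmpl):
    """Turn extracted page fields into (subject, predicate, object) triples,
    keeping only fields the template knows how to map."""
    return [(url, tmpl[selector], value)
            for selector, value in fields.items()
            if selector in tmpl]

for triple in to_triples(page_url, extracted, template):
    print(triple)
```

The point is the division of labour: the publisher writes the template once, and every page in the group becomes machine-readable data.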

Friday, July 18, 2008

Kevin Kelly on Web 3.0

At the Northern California Grantmakers & The William and Flora Hewlett Foundation Present: Web & Where 2.0+ on Feb. 14th, 2008, Kevin Kelly talks about Web 3.0. Have a good weekend everyone. Enjoy.

Thursday, July 17, 2008

EBSCO in a 2.0 World

EBSCOhost 2.0 is here. It's got a brand new look and feel, based on extensive user testing and feedback, and provides users with a powerful, clean, and intuitive interface. This is the first redesign of the EBSCOhost interface since 2002, and its functionality incorporates the latest technological advances.

1) Take a look at the EBSCOhost 2.0 Flash demonstration here.

2) There's also a spiffy new marketing site featuring EBSCOhost 2.0 web pages, where you can learn more about its key features (http://www.ebscohost.com/2.0).

EBSCO has really moved into the 2.0 world: simple, clean, and Googleized. But perhaps that's the way information services need to go. We simply must keep up. I went to a presentation at Seattle SLA '08, where EBSCO gave an excellent presentation (not to mention a lunch) showing off the 2.0 features of the new interface. In essence, it's customizable for users: you can have it as simple as a search box or as complex as it is currently. The retrieval aspects have not changed that much. Yet perception is everything, don't you think?