Friday, November 02, 2007
University of Guelph Chief Librarian Michael Ridley similarly sees a future where the university library serves as an “academic town square,” a place that "brings people and ideas together in an ever-bigger and more diffuse campus. Services in the future will include concerts, lectures, art shows – anything that trumpets the joy of learning."
Is this the future of libraries? Yes -- it's only a matter of time before that's where we end up. Change is difficult, particularly in larger academic institutions where bureaucracy and politics play an essential role in all aspects of operations. There is great skepticism towards Jeff Trzeciak's drastic changes at McMaster Library -- he'll be remembered as a pioneer if he succeeds, or an opportunist if he fails. A lot is riding on Jeff's shoulders.
Tuesday, October 30, 2007
(1) Ontological Semantics (OntoSem) - A formal and comprehensive linguistic theory of meaning in natural language. As such, it bears significantly on philosophy of language, mathematical logic, and cognitive science
(2) Query Detection and Extraction (QDEX) - A system invented to bypass the limitations of the inverted index approach when dealing with semantically rich data
(3) SemanticRank algorithm - Deploys a collection of methods to score and rank paragraphs that are retrieved from the QDEX system for a given query. The process includes query analysis, best sentence analysis, and other pertinent operations
(4) Dialogue - In order to establish a human-like dialogue with the user, the dialogue algorithm aims to convert the search engine into a computerized assistant with advanced communication skills, drawing on the largest pool of information resources in the world.
(5) Search mission - Google's mission was to organize the world's information and make it universally accessible and useful. hakia's mission is to search for better search.
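hakia's SemanticRank pipeline is proprietary, but the "best sentence analysis" step it describes can be pictured with a toy stand-in: score candidate sentences against a query and keep the best one. The word-overlap scoring below is a deliberately crude placeholder for real semantic analysis, and the sentences are invented.

```python
def best_sentence(query, sentences):
    """Return the sentence sharing the most words with the query.
    (Simple word overlap stands in for genuine semantic scoring.)"""
    query_words = set(query.lower().split())

    def score(sentence):
        return len(query_words & set(sentence.lower().split()))

    return max(sentences, key=score)

sentences = [
    "The library opens at nine every morning.",
    "Semantic search engines analyze meaning, not just keywords.",
    "Apples are a popular fruit in autumn.",
]
print(best_sentence("how do semantic search engines work", sentences))
```

A real system would, of course, rank whole paragraphs retrieved from QDEX and weigh meaning rather than surface word matches.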
Monday, October 22, 2007
Today's web pages are designed for human use, and human interpretation is required to understand the content. Because the content is not machine-interpretable, any type of automation is difficult. The Semantic Web augments today's web to eliminate the need for human reasoning in determining the meaning of web-based data. The Semantic Web is based on the concept that documents can be annotated in such a way that their semantic content will be optimally accessible and comprehensible to automated software agents and other computerized tools that function without human guidance. Thus, the Semantic Web might have a more significant impact in integrating resources that are not in a traditional catalog system than in changing bibliographic databases.
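The core idea -- annotating content so software can act on it without human reasoning -- can be sketched in a few lines. The URIs and properties below are invented for illustration; they loosely mimic Dublin Core-style statements expressed as subject-predicate-object triples.

```python
# Machine-readable annotations as subject-predicate-object triples.
# (All URIs and property names here are hypothetical examples.)
triples = [
    ("http://example.org/page1", "dc:creator", "Michael Ridley"),
    ("http://example.org/page1", "dc:subject", "academic libraries"),
    ("http://example.org/page2", "dc:subject", "semantic web"),
]

def pages_about(subject):
    """A program can answer 'what is this page about?' with no human help."""
    return [s for (s, p, o) in triples
            if p == "dc:subject" and o == subject]

print(pages_about("semantic web"))
```

Once statements take this form, any automated agent can query them, which is precisely what today's human-oriented HTML pages do not allow.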
Thursday, October 11, 2007
(1) A Universal Library - Readily accessed and used by humans in a variety of information uses and contexts. This perspective arose as a reaction to the disorder of the Web, which had no ordering or categorization until search engines came along. Metadata, cataloguing, and schemas were seen as the answer.
(2) Computational Agents - Completing sophisticated activities on behalf of their human counterparts. Tim Berners-Lee envisioned an infrastructure for knowledge acquisition, representation, and utilization across diverse use contexts. This global knowledge base will be used by personal agents to collect and reason about information, assisting people with tasks common to everyday life.
(3) Federated Data and Knowledge Base - In this vision, federated components are developed with some knowledge of one another, or at least with a shared anticipation of the types of applications that will use the data. In essence, this Web encompasses common languages for sharing data, rather than requiring specialized converters to be written for each pair of languages.
Wednesday, October 10, 2007
Stage 1 - Internet of Intellectual Capital - this initial stage of KM was driven primarily by IT. Organizations realized that their stock in trade was information and knowledge -- yet the left hand rarely knew what the right hand was doing. When the Internet emerged, KM became a matter of deploying the new technology to share that knowledge across the organization.
Stage 2 - Human & Cultural dimensions - the hallmark phrase is communities of practice. KM during this stage was about knowledge creation as well as knowledge sharing and communication.
Stage 3 - Content & Retrievability - consists of structuring content and assigning descriptors (index terms). In content management and taxonomies, KM is about the arrangement, description, and structure of that content. Interestingly, taxonomies are perceived by the KM community as emanating from the natural sciences, when in fact they are the domain of librarians and information scientists. To take this one step further, the Semantic Web is also built on taxonomies and ontologies. Anyone see a trend? Perhaps a convergence?
Monday, October 08, 2007
I argue that we can go one step further because with the advent of Web 2.0, social search is actually the closest that we have to gathering input from all of the world’s users. How? Why? Let me explain with an analogy.
Web 2.0 is very much like an apple. An apple can be food, a paperweight, a target, or even a weapon if needed. It can be whatever you want it to be, whenever you want it to be. The same goes for social search: it is not confined to search engines.
Del.icio.us is a social bookmarking web service, but it can be a powerful search tool if used properly; essentially, it taps into the social preferences of other users. The same goes for YouTube: it's a video-sharing website, but who's to say it can't be used to search videos for relevant topics, or that you can't find related videos based on what others have bookmarked? Social search is not based on a program; it is a mindset -- a metaphorical sweet fruit, if you will.
In many ways, social searching is not unlike what librarians did (and still do) in the print-based world, where an elegant craft of creativity and perseverance was required to find the right materials and put them into the hands of the patron; the only difference is that the search has become digital.
Friday, October 05, 2007
Wednesday, October 03, 2007
(1) Taxonomies: An Important Part of the Semantic Web - The new Web entails adding an extra layer of infrastructure to the current HTML Web - metadata in the form of vocabularies and the relationships that exist between selected terms will make this possible for machines to understand conceptual relationships as humans do.
(2) Defining Ontologies and Taxonomies - Ontologies and taxonomies are often used synonymously -- computer scientists refer to hierarchies of structured vocabularies as "ontologies," while librarians call them "taxonomies."
(3) Standardized Language and Conceptual Relationships - Both taxonomies and ontologies consist of a structured vocabulary that identifies a single key term to represent a concept that could be described using several words.
(4) Different Points of Emphasis - Computer Science is concerned with how software and associated machines interact with ontologies; librarians are concerned with how patrons retrieve information with the aid of taxonomies. However, they're essentially different sides of the same coin.
(5) Topic Maps As New Web Infrastructure - Topic maps will ultimately point the way to the next stage of the Web's development. They represent a new international standard (ISO 13250). In fact, even the OCLC is looking to topic maps in its Dublin Core Initiative to organize the Web by subject.
Monday, October 01, 2007
It's not unlike the library before Melvil Dewey introduced the idea of organizing and cataloguing books in a classification system. In many ways, we see the parallels here 130 years later. It's not surprising at all to see the OCLC at the forefront of developing Semantic Web technologies. Many of the same techniques of bibliographic control apply to the possibilities of the Semantic Web. It was computer scientists and computer engineers who created Web 1.0 and 2.0, but it will ultimately be individuals from library science and information science who play a prominent role in organizing the messiness into a coherent whole for users. Are we saying that Web 2.0 is irrelevant? Of course not. Web 2.0 is an intermediary stage. Folksonomies, social tagging, wikis, blogs, podcasts, mashups, etc. -- all of these things are essential building blocks of the Semantic Web.
Thursday, September 27, 2007
Monday, September 24, 2007
(2) The 3D Web - A Web you can walk through. Without leaving your desk, you can go house hunting across town or take a tour of Europe. Or you can walk through a Second Life–style virtual world, surfing for data and interacting with others in 3D.
(3) The Media-Centric Web - A Web where you can find media using other media—not just keywords. You supply, say, a photo of your favorite painting and your search engines turn up hundreds of similar paintings.
(4) The Pervasive Web - A Web that's everywhere. On your PC. On your cell phone. On your clothes and jewelry. Spread throughout your home and office. Even your bedroom windows are online, checking the weather, so they know when to open and close.
Tuesday, September 18, 2007
(1) Expressing Meaning - Bring structure to the meaningful content of Web pages, creating an environment where software agents roaming from page to page can readily carry out sophisticated tasks for users. Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation.
(2) Knowledge Representation - For Web 3.0 to function, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning: this is where XML and RDF come in -- but are they only preliminary languages?
(3) Ontologies - For a program that wants to compare or combine information across two databases, it has to know when two terms are being used to mean the same thing. This means the program must have a way to discover common meanings for whatever databases it encounters. Hence, an ontology has a taxonomy and a set of inference rules.
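The pairing of a taxonomy with inference rules can be sketched concretely. In the toy example below (the class names are invented), the taxonomy only records direct subClassOf links, and a single rule -- subClassOf is transitive -- lets a program derive facts nobody stated explicitly.

```python
# Taxonomy: direct subClassOf links only (hypothetical classes).
subclass_of = {
    "Monograph": "Book",
    "Book": "Publication",
    "Journal": "Publication",
}

def is_a(cls, ancestor):
    """Inference rule: subClassOf is transitive, so walk the links upward."""
    while cls in subclass_of:
        cls = subclass_of[cls]
        if cls == ancestor:
            return True
    return False

# Never stated directly, but derivable by the rule:
print(is_a("Monograph", "Publication"))  # True
```

This is exactly the kind of automated conclusion-drawing that lets two databases using different terms be reconciled by a program rather than a person.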
(4) Agents - The real power of the Semantic Web will be the programs that actually collect Web content from diverse sources, process the information and exchange the results with other programs. Thus, whereas Web 2.0 is about applications, the Semantic Web will be about services.
(5) Evolution of Knowledge - The Semantic Web is not merely a tool for conducting individual tasks; rather, its ultimate goal is to advance the evolution of human knowledge as a whole. Whereas human endeavour is caught between the eternal struggle of small groups acting independently and the need to mesh with the greater community, the Semantic Web is a process of joining together subcultures when a wider common language is needed.
Saturday, September 15, 2007
Ora Lassila and James Hendler, who co-authored with Tim Berners-Lee the 2001 article that predicted what the Semantic Web would look like, argue in their most recent article, Embracing "Web 3.0", that the technologies that make the Semantic Web possible are slowly but surely maturing. In particular,
As RDF acceptance has grown, the need has become clear for a standard query language to be for RDF what SQL is for relational data. The SPARQL Protocol and RDF Query Language (SPARQL), now under standardization at the W3C, is designed to be that language.
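To give a flavour of the SQL analogy, here is a minimal (hypothetical) SPARQL query: it matches triple patterns much the way SQL matches rows, here finding the titles of resources tagged with a given subject.

```sparql
# Hypothetical data: find titles of resources about the semantic web.
PREFIX dc: <http://purl.org/dc/elements/1.1/>
SELECT ?title
WHERE {
  ?resource dc:subject "semantic web" .
  ?resource dc:title   ?title .
}
```

Variables like ?resource bind across patterns, so the query joins statements the same way SQL joins tables.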
But that doesn't mean that Web 2.0 technologies are obsolete. Rather, they are a transitional stage in the evolution to Web 3.0. In particular, it is interesting that the authors note:
(1) Folksonomies - tagging provides an organic, community-driven means of creating structure and classification vocabularies.
(2) Microformats - the use of HTML markup to encode structured data is a step toward "semantic data." Although not in Semantic Web formats, microformatted data is easy to transform into something like RDF or OWL.
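A microformat example makes the point concrete: the hCard format reuses ordinary HTML class attributes to carry structured contact data, so the same markup that renders for humans is also parseable by machines.

```html
<!-- hCard microformat: the class names "vcard", "fn", and "org"
     carry machine-readable meaning inside ordinary HTML. -->
<div class="vcard">
  <span class="fn">Tim Berners-Lee</span>,
  <span class="org">W3C</span>
</div>
```

A converter can read such markup and emit equivalent RDF triples, which is the transformation the authors have in mind.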
As you can see, we're moving along. Take a look at this: on the surface, Yahoo Food looks just like any other Web service; underneath, it is built with SPARQL, which really does "sparkle."
Monday, September 10, 2007
Saturday, September 01, 2007
(1) Social Networks - The content of a site should comprise user-provided information that attracts members of an ever-expanding network. (example: Facebook)
(2) Wisdom of Crowds - Group judgments are surprisingly accurate, and the aggregation of input is facilitated by the ready availability of social networking sites. (example: Wikipedia)
(3) Loosely Coupled APIs - Short for "Application Programming Interface," an API provides a set of instructions (messages) that a programmer can use to communicate between applications, allowing one piece of software to directly manipulate another. (example: Google Maps)
(4) Mashups - They are combinations of APIs and data that result in new information resources and services. (example: Calgary Mapped)
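A mashup in miniature: the sketch below joins two independent data sources -- hypothetical in-memory stand-ins for a mapping API and an events listings API (all places, coordinates, and events are invented) -- into a new combined service, which is all a mashup really is.

```python
# Stand-in for a mapping API: place -> (lat, lon).
map_api = {
    "Calgary Tower": (51.044, -114.063),
    "Central Library": (51.045, -114.052),
}

# Stand-in for a second, independent service returning event listings.
events_api = [
    {"venue": "Central Library", "event": "Author reading"},
    {"venue": "Calgary Tower", "event": "New Year fireworks"},
]

def mashup():
    """Overlay events onto map coordinates: new value from existing data."""
    return [{"event": e["event"], "coords": map_api[e["venue"]]}
            for e in events_api if e["venue"] in map_api]

for item in mashup():
    print(item)
```

Neither source alone answers "what's happening, and where on the map?" -- the combination does.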
(5) Permanent Betas - The idea is that no software is ever truly complete so long as the user community is still commenting upon it, and thus, improving it. (example: Google Labs)
(6) Software Gets Better the More People Use It - Because all social networking sites seek to capitalize on user input, the true value of each site is defined by the number of people it can bring together. (example: Windows Live Messenger)
(7) Folksonomies - It's a classification system created in a bottom-up fashion and with no central coordination. Entirely different from traditional classification schemes such as the Dewey Decimal or Library of Congress Classifications, folksonomies allow any user to "social tag" an object with whatever phrase they deem appropriate. (example: Flickr and YouTube)
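The bottom-up nature of a folksonomy can be sketched in a few lines: no one assigns the vocabulary in advance; it simply emerges from aggregating whatever tags individual users choose. (The users and tags below are invented.)

```python
from collections import Counter

# Each user tags resources however they like -- no central authority.
user_tags = {
    "alice": ["semanticweb", "rdf", "libraries"],
    "bob":   ["rdf", "sparql"],
    "carol": ["libraries", "rdf"],
}

# The community's vocabulary emerges from the aggregate of all tags.
folksonomy = Counter(tag for tags in user_tags.values() for tag in tags)
print(folksonomy.most_common(2))  # the crowd's top descriptors
```

Contrast this with Dewey: there, the categories exist before the items; here, the categories are a by-product of use.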
(8) Individual Production and User Generated Content - Free social software tools such as blogs and wikis have lowered the barrier to entry, following the same footsteps as the 1980s self-publishing revolution sparked by the advent of the office laser printer and desktop publishing software. In the world of Web 2.0, with a few clicks of the mouse, a user can upload videos or photos from their digital cameras and into their own media space, tag it with keywords and make the content available for everyone in the world.
(9) Harness the Power of the Crowd - Harnessing not the "intellectual" power, but the power of the "wisdom of the crowds," "crowd-sourcing" and "folksonomies."
(10) Data on an Epic Scale - Google has a total database measured in hundreds of petabytes (a petabyte is a million billion bytes), which swells each day by terabytes of new information. Much of this is collected indirectly from users and aggregated as a side effect of the ordinary use of major Internet services and applications such as Google, Amazon, and eBay. In a sense, these services are 'learning' every time they are used, mining and sifting data to deliver better services.
(11) Architecture of Participation - Through the use of the application or service, the service itself gets better. Simply put, the more you use it -- and the more other people use it -- the better it gets. Web 2.0 technologies are designed to take user interactions and use them to improve the service. (e.g. Google search)
(12) Network Effects - A general economic term often used to describe the increase in value of a service to its existing users as more and more people start to use it, wherever there is some form of interaction with others. As the Internet is, at heart, a telecommunications network, it is therefore subject to the network effect. In Web 2.0, new software services are being made available which, due to their social nature, rely a great deal on the network effect for their adoption. eBay is one example of how the application of this concept works so successfully.
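The network effect is often illustrated with Metcalfe's law: the number of possible pairwise connections among n users grows as n*(n-1)/2, so each new user adds value for everyone already on the network. A two-line sketch:

```python
def possible_connections(n):
    """Pairwise connections among n users: n choose 2 = n*(n-1)/2."""
    return n * (n - 1) // 2

for users in (10, 100, 1000):
    print(users, possible_connections(users))  # 10 users -> 45 connections
```

Going from 10 users to 1000 multiplies users by 100 but connections by over 10,000 -- which is why social services live or die by adoption.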
(13) Openness - Web 2.0 places an emphasis on making use of the information in vast databases that the services help to populate. This means Web 2.0 is about working with open standards, using open source software, making use of free data, re-using data, and working in a spirit of open innovation.
(14) The Read/Write Web - A term given to describe the main differences between Old Media (newspaper, radio, and TV) and New Media (e.g. blogs, wikis, RSS feeds), the new Web is dynamic in that it allows consumers of the web to alter and add to the pages they visit - information flows in all directions.
(15) The Web as a Platform - Better known as "perpetual beta," the idea behind Web 2.0 services is that they need to be constantly updated. Thus, this includes experimenting with new features in a live environment to see how customers react.
(16) The Long Tail - The new Web lowers the barriers for publishing anything (including media) related to a specific interest because it empowers writers to connect directly with international audiences interested in extremely narrow topics, whereas originally it was difficult to publish a book related to a very specific interest because its audience would be too limited to justify the publisher's investment.
(17) Harnessing Collective Intelligence - Google, Amazon, and Wikipedia are good examples of how successful Web 2.0-centric companies use the collective intelligence of users in order to continually improve services based on user contributions. Google's PageRank examines how many links point to a page, and from what sites those links come, in order to determine its relevancy, instead of evaluating the relevance of websites based solely on their content.
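The PageRank idea mentioned above can be shown with a toy power-iteration sketch: a page's score depends on the scores of the pages linking to it. This is a simplified illustration on an invented three-page link graph, not Google's actual implementation.

```python
links = {  # page -> pages it links to (hypothetical link graph)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: repeatedly redistribute each page's rank to its targets."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new
    return rank

ranks = pagerank(links)
# C is linked to by both A and B, so it outranks them:
print(max(ranks, key=ranks.get))  # 'C'
```

The collective intelligence here is the link graph itself: every webmaster's decision to link is a vote, and the algorithm merely aggregates the votes.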
(18) Science of Networks - To truly understand Web 2.0, one must not only understand web networks, but also human and scientific networks. Ever heard of six degrees of separation and the small world phenomenon? Knowing how to open up a Facebook account isn't good enough; we must know what goes on behind the scenes in the interconnectedness of networks -- socially and scientifically.
(19) Core Datasets from User Contributions - One way Web 2.0 companies collect unique datasets is through user contributions. However, collecting is only half the picture; using the datasets is the key. These contributions are organized into databases and analyzed to extract the collective intelligence hidden in the data, which can then be applied to the direct improvement of the website or web service.
(20) Lightweight Programming Models - The move toward database driven web services has been accompanied by new software development models that often lead to greater flexibility. In sharing and processing datasets between partners, this enables mashups and remixes of data. Google Maps is a common example as it allows people to combine its data and application with other geographic datasets and applications.
(21) The Wisdom of the Crowds - Not only has it blurred the boundary between amateur and professional status; in a connected world, ordinary people often have access to better information than officials do. As an example, the collective intelligence of the evacuees of the towers saved numerous lives when they disobeyed the authorities who told them to stay put.
(22) Digital Natives - Because a generation (mostly the under 25's) have grown up surrounded by developing technologies, those fully at home in a digital environment aren't worried about information overload; rather, they crave it.
(23) Internet Economics - Small is the new big. Unlike the past when publishing was controlled by publishers, Web 2.0's read/write web has opened up markets to a far bigger range of supply and demand. The amateur who writes one book has access to the same shelf space as the professional author.
(24) "Wirelessness" - Digital natives are less attached to computers and are more interested in accessing information through mobile devices, when and where they need it. Hence, traditional client applications designed to run on a specific platform will struggle, if not disappear, in the long run.
(25) Who Will Rule? - This will be the ultimate question (and prize). As Sharon Richardson argues, whoever rules "may not even exist yet."