Tuesday, January 22, 2008
The Long Tail and Libraries
(1) Transaction Costs - The better connected libraries are, the lower the transaction costs
(2) Data about choice and behaviour - Transactional behavioural data is used to adapt and improve systems. Examples of such data are holdings data, circulation and ILL data, and database usage data.
(3) Inventory - As more materials become available electronically, we will see more interest in managing the print collection in a less costly way. Although historical library models have been based on the physical distribution of materials, resources increasingly need not be distributed in advance of need; they can be held in consolidated stores.
(4) Navigation - There are better ways to exploit large bibliographic resources. Ranking, recommendations, and relationships help connect users to relevant material, and also help connect the more heavily used materials to potentially useful, but less used, materials.
(5) Aggregation of Demand - The library resource is fragmented. In the new network environment, this fragmentation reduces gravitational pull: resources are prospected by the persistent or knowledgeable user, but they may never be reached by others to whom they are potentially useful. What OCLC is doing is making metadata about those books available to the major search engines and routing users back to library services.
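Point (4) can be made concrete with a small sketch: mining co-circulation data so that heavily used titles lead readers to less-used but related ones. This is a minimal illustration, not OCLC's actual method, and every title and record below is invented:

```python
from collections import Counter
from itertools import combinations

# Hypothetical circulation records: each entry is the set of titles
# one patron borrowed together (all titles invented for illustration).
checkouts = [
    {"Anansi Boys", "American Gods", "The Long Tail"},
    {"American Gods", "Neverwhere"},
    {"The Long Tail", "Wikinomics", "American Gods"},
    {"Wikinomics", "The Long Tail"},
]

# Count how often each pair of titles circulates together.
co_circulation = Counter()
for basket in checkouts:
    for a, b in combinations(sorted(basket), 2):
        co_circulation[(a, b)] += 1

def recommend(title, k=3):
    """Rank other titles by how often they co-circulate with `title`."""
    scores = Counter()
    for (a, b), n in co_circulation.items():
        if a == title:
            scores[b] += n
        elif b == title:
            scores[a] += n
    return [t for t, _ in scores.most_common(k)]

print(recommend("The Long Tail"))
```

Even this toy version shows the long-tail effect: a popular title surfaces a lightly circulated one it is often borrowed with.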
Saturday, January 19, 2008
Google = God?
Perhaps, according to Ding, a better alternative is collaborative searching. Whereas the current search strategy is motivated by questions, collaborative search is motivated by answers. In our question-based search model, the people who answer questions may not have passion for (or enough knowledge of) the questions. But an inanimate search engine such as Google doesn't know this -- nor does it care.
However, Web 2.0 is slowly changing this course of searching. Already, search engines such as ChaCha are harvesting collective intelligence and the wisdom of the crowds to retrieve more "relevant" results. Ding goes one step further: Web 3.0 will be based on community-sensitive link resources. It will reverse the relation between horizontal search engines and vertical search engines. The current model of vertical search engines built upon generic search engines is not working well because they are too immature to provide community-specific search by themselves. (Just look at the limitations of Rollyo.) What will the Semantic Web search engine look like? Maybe something like this.
Friday, January 18, 2008
The Future of I.S.
Among his more interesting projects is Emerging Databases, Emerging Diversity (ED2), a National Science Foundation-funded initiative to study methods by which digital collections can be shared via systems that maintain diverse tags, ontologies, and interfaces. In collaboration with Cambridge University's Museum of Archaeology and Anthropology, and the Zuni community of New Mexico, the $300,000 project investigates how digital access to ancestral objects affects diverse communities. Ramesh's work involves extensive fieldwork in places like Kyrgyzstan and India. (Exciting!)
The faculty at UCLA represents Library and Information Science's gradual shift towards the iSchool movement. Academics such as Ramesh Srinivasan represent the new face of LIS. This has important implications for librarians, who will ultimately be bred and nurtured by these new scholars' nontraditional perspectives on LIS. Rather than basing their studies on users of libraries, newer scholars such as Srinivasan, whose background is as diverse as his research (his PhD is in Design), go beyond the traditional domain of LIS. Inevitably, librarianship will change because of this new approach. New ways of thinking and research will be injected into the profession -- perhaps this is where the source of innovation in libraries will come from as well. From the classroom.
Wednesday, January 16, 2008
Metcalfe's Law
As the number of people in the network grows, the connectivity increases, and if people can link to each other's content, the value grows at an enormous rate. The Web, if it were simply a collection of pages of content, would not have the value it has today. Without linking, the Web would be a blob of disconnected pages.
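Metcalfe's law can be stated concretely: a network of n participants has n(n-1)/2 possible links, so value (by this measure) grows roughly with the square of the number of participants. A quick sketch:

```python
def potential_links(n):
    """Metcalfe's law: a network of n nodes has n*(n-1)/2 possible links."""
    return n * (n - 1) // 2

# Tenfold more participants yields roughly a hundredfold more links.
for n in (10, 100, 1000):
    print(n, potential_links(n))
```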
As information professionals and librarians, we shouldn't miss out on the obvious links between Web 2.0 and the Semantic Web. Social networking is critical to the success of Web 2.0; but by combining the social networks of Web 2.0 with the semantic networks of the Semantic Web, a tremendous value is possible. Here's a scenario from Tom Gruber which I find very compelling:
Genius.
Real Travel "seeds" a Web 2.0 travel site with the terms from a gazetteer ontology. This allows the coupling of place names and locations, linked together in an ontology structure, with the dynamic content and tagging of a Web 2.0 travel site. The primary user experience is of a site where travel logs (essentially blogs about trips), photos, travel tools and other travel-related materials are all linked together. Behind this, however, is the simple ontology that knows that Warsaw is a city in Poland, that Poland is a country in Europe, etc. Thus a photo taken in Warsaw is known to be a photo from Poland in a search, browsing can traverse links in the geolocation ontology, and other "fortuitous" links can be found. The social construct of the travel site, and communities of travelers with like interests, can be exploited by Web 2.0 technology, but it is given extra value by the simple semantics encoded in the travel ontology.
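Gruber's scenario can be sketched in a few lines: a tiny part-of hierarchy lets a search for Poland find a photo tagged only Warsaw. The gazetteer entries and photo data below are purely illustrative:

```python
# A toy gazetteer ontology (place -> broader place), entries illustrative.
broader = {"Warsaw": "Poland", "Krakow": "Poland", "Poland": "Europe"}

def ancestors(place):
    """Walk up the part-of hierarchy: Warsaw -> Poland -> Europe."""
    chain = []
    while place in broader:
        place = broader[place]
        chain.append(place)
    return chain

# Photos tagged with a single place name (hypothetical data).
photos = {"old_town.jpg": "Warsaw", "wawel.jpg": "Krakow", "eiffel.jpg": "Paris"}

def search(region):
    """Return photos whose tag matches `region` directly or via the ontology."""
    return sorted(p for p, place in photos.items()
                  if place == region or region in ancestors(place))

print(search("Poland"))  # a Warsaw photo is known to be a Poland photo
```

The Web 2.0 content (photos, tags) stays free-form; the small ontology quietly adds the "fortuitous" links.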
Monday, January 07, 2008
Pragmatic Web as HD TV
Rather than waiting for everyone to come together and collaborate -- that could take forever or worse yet . . . never -- the best hope might be to encourage the emergence of communities of interest and practice that develop their own consensus knowledge, on the basis of which they will standardize their representations. Thus, the vision of the Pragmatic Web is to augment human collaboration effectively by appropriate technologies. In this way, the Pragmatic Web complements the Semantic Web by improving the quality and legitimacy of collaborative, goal-oriented discourses in communities.
I liken this scenario to high-definition television. By 2010, the majority of programming in North America will move to HDTV specifications, effectively pushing analog standard-definition formats out of competition. In the meantime, consumers are free to continue using their existing TV sets. The Web could very well employ this model, as it's logical and follows the path of least damage. Using the HDTV scenario, Web users can continue using their current browsers and existing ways of surfing, while those who want to maximize the full potential of the Web will use Semantic Web browsers (e.g. Piggy Bank) that are designed specifically to utilize the portion of the Web that is "Semantic Web-compliant."
Meanwhile, in the background, semantic annotation will be slowly integrated into Web pages, programs, and services. As time progresses, users will eventually catch onto the "rave" that is the Semantic Web . . .
Saturday, January 05, 2008
E-Commerce 2.0
(1) Zopa looks at the credit scores of people looking to borrow and determines whether they're an A*, A, B, or C-rated borrower. If they're none of those, then Zopa's not for them.
(2) Lenders make lending offers such as "I'd like to lend this much to A-rated borrowers for this long and at this rate."
(3) Borrowers review the rates offered to them and accept the ones they like. If they are dissatisfied with the offered rates on any particular day, they can come back on subsequent days to see if rates have changed.
(4) To reduce risk, Zopa spreads lender capital widely. A lender putting forth, for instance, 500 pounds or more would have his or her money spread across at least 50 borrowers.
(5) Borrowers enter into legally binding contracts with their lenders
(6) Borrowers repay monthly by direct debit. If a borrower defaults, a collections agency uses the same recovery process that the High Street banks use.
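Step (4)'s arithmetic is easy to sketch: 500 pounds spread across at least 50 borrowers means no single borrower holds more than 10 pounds of any one lender's capital. A minimal illustration (the 50-borrower threshold is taken from the post; everything else is assumed):

```python
def spread(amount_pounds, min_borrowers=50):
    """Split a lender's capital evenly across at least `min_borrowers`
    borrowers; any leftover pennies go to the first few borrowers so the
    total is preserved exactly."""
    pennies = round(amount_pounds * 100)
    base, extra = divmod(pennies, min_borrowers)
    return [(base + (1 if i < extra else 0)) / 100
            for i in range(min_borrowers)]

shares = spread(500)
print(len(shares), max(shares), sum(shares))
```

The point of the design: a single default costs each exposed lender at most a small slice, not the whole loan.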
Thursday, January 03, 2008
Mashups for '09
I've updated my last article, Mashups, Social Software, and Web 2.0: How Remixing Programming Code Has Changed The Web. In taking a look at mashups, I think libraries need to pay attention, as they open up virtual information services to a much larger audience.
When Times Are Tough . . .
And I've begun to experience this myself. Patrons are starting to use collections more as they feel the financial pinch the economy has given us. Fear not. The library isn't going anywhere anytime soon.
Wednesday, January 02, 2008
Mashups for '09
I've recently written another entry on mashups, Mashups, Social Software, and Web 2.0: How Remixing Programming Code Has Changed The Web. The challenge with mashups is that they remain, unfortunately, a web programmer's tool. However, the next stage of the Web will be mashups. It's about opening data to others, and breaking down information silos.
11 Ways to the Library of 2012
Tuesday, December 25, 2007
Happy Holidays and Seasons Greetings
I (as a librarian) found the article and the whole topic very important. I especially enjoyed the conclusion. You wrote that "Web 3.0 is about bringing the miscellaneous back together meaningfully after it's been fragmented into a billion pieces." I was wondering if in your opinion this means that the semantic web may turn a folksonomy into some kind of structured taxonomy. We all know the advantages and disadvantages of a folksonomy. Is it possible for web 3.0 to minimize those disadvantages and maybe even make good use out of them?
(3) Such a use of folksonomies could help overcome some of the inherent difficulties in ontology construction, thus potentially bridging Web 2.0 and the Semantic Web. By using folksonomies' collective categorization scheme as an initial knowledge base for constructing ontologies, the ontology author could then use the tagging distribution's most common tags as concepts, relations, or instances. Folksonomies do not a Semantic Web make -- but it's a good start.
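The idea in (3) can be sketched as a frequency threshold over a tag distribution: common tags become candidate ontology concepts, while idiosyncratic ones are filtered out. The tags and the 20% threshold below are assumptions for illustration:

```python
from collections import Counter

# Hypothetical folksonomy: tags applied by many users to one resource.
tags = ["semweb", "semantic_web", "rdf", "web3.0", "semweb", "rdf",
        "semweb", "ontology", "rdf", "misc"]

counts = Counter(tags)

def candidate_concepts(counts, min_share=0.2):
    """Promote tags above a frequency threshold to candidate ontology
    concepts; rare, idiosyncratic tags are left out (threshold assumed)."""
    total = sum(counts.values())
    return {t for t, n in counts.items() if n / total >= min_share}

print(sorted(candidate_concepts(counts)))
```

An ontology author would still have to decide whether each surviving tag is a concept, a relation, or an instance; the distribution only supplies the shortlist.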
Thursday, December 20, 2007
Information Science As Web 3.0?
In his article Information Science, Tefko Saracevic makes a bold prediction: fame awaits the researcher(s) who devise a formal theoretical framework, bolstered by experimental evidence, that connects the two largely separated clusters -- basic phenomena (information-seeking behaviour) and the retrieval world (information retrieval). A best seller awaits the author who produces an integrative text in information science. Information Science will not become a full-fledged discipline until the two ends are connected successfully.
As Saracevic puts it, IR is one of the most widespread applications of any information system worldwide. So how come Information Science has yet to produce a Nobel Prize winner?
As I've opined before, LIS will play a prominent role in the next stage of the Web. So who's it gonna be?
Tuesday, December 18, 2007
The Semantic Solution - A Browser?
Semantic Web browser—an end user application that automatically locates metadata and assembles point-and-click interfaces from a combination of relevant information, ontological specifications, and presentation knowledge, all described in RDF and retrieved dynamically from the Semantic Web. With such a tool, naïve users can begin to discover, explore, and utilize Semantic Web data and services. Because data and services are accessed directly through a standalone client and not through a central point of access . . . . new content and services can be consumed as soon as they become available. In this way we take advantage of an important sociological force that encourages the production of new Semantic Web content by remaining faithful to the decentralized nature of the Web
I like this idea of a Semantic Web browser. Getting everyone to agree on how to implement W3C standards -- RDF, SPARQL, OWL -- is unrealistic. Not everyone will accept the extra work for no real sustainable incentive. That is perhaps why companies and private investors currently have no real vested interest in channelling funding to Semantic Web research. However, the Semantic Web browser is one way to combat the malaise. In many ways, it resembles the birth of Web 1.0, before Yahoo!'s remarkable directory and the search engines. All we need is one Jim Clark and one Marc Andreessen, I guess.
(Maybe a librarian and an information scientist, or two?)
Friday, December 14, 2007
"Web 3.0" AND OR the "Semantic Web"
In medicine, there is virtually no discussion about web 3.0 (see this PubMed search for web 3.0: zero results), and most of the discussion on the semantic web (see this PubMed search: ~100 results) is from the perspective of biology/bioinformatics.
The dichotomy in the literature is both perplexing and unsurprising. On the one hand, semanticists are looking at a new intelligent web that 'adds meaning' to documents and enables machine interoperability. On the other, web 3.0 advocates use '3.0' to be trendy and hip, or to market themselves or their websites. That said, I prefer the web 3.0 label to the semantic web because it follows web 2.0 and suggests continuity.
It is important that medical librarians -- all librarians for that matter -- join in (and even lead) the discourse, particularly since the Semantic Web & Web 3.0 will be based heavily on the principles of knowledge and information organization. Whereas Web 1.0 and 2.0 could not distinguish among Acetaminophen, Paracetamol, and Tylenol -- Web 3.0 will.
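The drug-name example can be sketched as simple vocabulary control: map every surface form to a preferred concept before matching, in the spirit of (though far simpler than) a controlled vocabulary like MeSH. All mappings and documents below are illustrative:

```python
# A tiny synonym ring mapping surface names to one preferred concept
# (mappings illustrative).
preferred = {
    "acetaminophen": "acetaminophen",
    "paracetamol": "acetaminophen",   # international nonproprietary name
    "tylenol": "acetaminophen",       # brand name
}

documents = {
    "doc1": "Paracetamol dosing in children",
    "doc2": "Tylenol overdose case report",
    "doc3": "Ibuprofen versus placebo",
}

def search(query):
    """Match documents on the concept, not the literal string."""
    concept = preferred.get(query.lower(), query.lower())
    hits = []
    for doc_id, text in documents.items():
        words = [preferred.get(w.strip(".,").lower(), w.lower())
                 for w in text.split()]
        if concept in words:
            hits.append(doc_id)
    return hits

print(search("Acetaminophen"))  # finds both the Paracetamol and Tylenol docs
```

A Web 1.0/2.0 keyword match on "Acetaminophen" would return nothing here; resolving names to concepts first is exactly the distinction the post claims Web 3.0 will make.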
Tuesday, December 11, 2007
Google and End of Web 2.0
What Google Scholar has done is bring scholars and academics onto the web for their work in a way that Google alone did not. This has led to a greater use of social software and the rise of Web 2.0. For all its benefits, Web 2.0 has given us extreme info-glut which, in turn, will make Web 3.0 (and the semantic web) necessary.
I agree. Google Scholar (and Google) are very much Web 2.0 products. As I elaborated in my previous entry, AJAX (which is Web 2.0-based) made possible many remarkable applications such as Gmail and Google Earth.
Was this destiny? Not really. As Yihong Ding proposes, Web 2.0 did not choose Google; rather, it was Google that had decided to follow Web 2.0. If Yahoo had only known about the politics of the Web a little earlier, it might have precluded Google. (But that's for historians to analyze). Yahoo! realized the potential of Web 2.0 too late; it purchased Flickr without really understanding how to fit it into Yahoo!'s Web 1.0 universe.
Back to Dean's point. Google's strength might ultimately lead to its own demise. The PageRank algorithm might have a drawback similar to Yahoo!'s once dominant directory. Just as Yahoo! failed to catch up with the explosion of the Web, Google's PageRank will slowly lose its dominance due to the explosion caused by Web 2.0. With richer semantics, Google might not be willing to drastically alter its algorithm since it is Google's bread-and-butter. So that is why Google and Web 2.0 might be feeling the weight of the future fall too heavily on their shoulders.