Monday, October 01, 2007
My colleague Dean Giustini and I have collaborated on an article, The Semantic Web as a large, searchable catalogue: a librarian's perspective. In it, we argue that librarians will play a prominent role in Web 3.0. The current Web is disjointed and disorganized, and searching it is much like looking for a needle in a haystack.
It's not unlike the library before Melvil Dewey introduced the idea of organizing and cataloguing books in a classification system, and in many ways we see the parallels 130 years later. It's not surprising at all to see OCLC at the forefront of developing Semantic Web technologies: many of the same techniques of bibliographic control apply to the Semantic Web. Computer scientists and engineers created Web 1.0 and 2.0, but it will ultimately be people from library and information science who play a prominent role in organizing the messiness into a coherent whole for users. Are we saying that Web 2.0 is irrelevant? Of course not. Web 2.0 is an intermediary stage: folksonomies, social tagging, wikis, blogs, podcasts, mashups, and the rest are all essential building blocks of the Semantic Web.
Thursday, September 27, 2007
Libraries and the Semantic Web
Interestingly, little has been said about librarianship and Semantic Web technologies. It's as if there's a gap that can never be bridged between the rustic gatekeeper of books and the high-end, cutting-edge programmer. Quite recently, Jane Greenberg, professor of Library and Information Science at the University of North Carolina at Chapel Hill, pointed out in Advancing the Semantic Web via Library Functions that there are many similarities between the library and the Semantic Web. Here are some:
(1) Each has developed as a response to an abundance of information
(2) Both have mission statements grounded in service, information access, and knowledge discovery
(3) Both have advanced as a result of international and national standards
(4) Both have grown due to a collaborative spirit
(5) Both have become a part of society's fabric (although not so much yet for the Semantic Web)
Monday, September 24, 2007
Four Ways to Look at the Web
The Semantic Web is far from a monolithic, artificially intelligent machine that can seemingly process a user's every whim. Cade Metz's Web 3.0: Tomorrow's Web, Today offers an excellent and concise glimpse into the multitude of possibilities of this new Web. Although Web 3.0 is still in its conceptual stages, Metz envisions four directions it could take:
(1) The Semantic Web - A Web where machines can read sites as easily as humans read them. You ask your machine to check your schedule against the schedules of all the dentists and doctors within a 10-mile radius, and it obeys (a toy sketch of this kind of reasoning follows the list).
(2) The 3D Web - A Web you can walk through. Without leaving your desk, you can go house hunting across town or take a tour of Europe. Or you can walk through a Second Life–style virtual world, surfing for data and interacting with others in 3D.
(3) The Media-Centric Web - A Web where you can find media using other media—not just keywords. You supply, say, a photo of your favorite painting and your search engines turn up hundreds of similar paintings.
(4) The Pervasive Web - A Web that's everywhere. On your PC. On your cell phone. On your clothes and jewelry. Spread throughout your home and office. Even your bedroom windows are online, checking the weather, so they know when to open and close.
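To make the first scenario concrete, here is a minimal Python sketch of the kind of reasoning such an agent would need: filter providers by distance, then intersect free time slots. All of the names, coordinates, and schedules here are invented; a real agent would harvest this data from machine-readable markup on providers' sites.

```python
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))

# Hypothetical structured data the agent has harvested from providers' sites.
dentists = [
    {"name": "Dr. Lee", "lat": 49.26, "lon": -123.11, "open": {"Mon 10:00", "Tue 14:00"}},
    {"name": "Dr. Roy", "lat": 49.80, "lon": -123.90, "open": {"Mon 10:00"}},
]
my_lat, my_lon = 49.28, -123.12          # the user's location (invented)
my_free = {"Mon 10:00", "Wed 09:00"}     # the user's open calendar slots

# "Check my schedule against every dentist within a 10-mile radius."
for d in dentists:
    if miles_between(my_lat, my_lon, d["lat"], d["lon"]) <= 10:
        for slot in sorted(my_free & d["open"]):
            print(f"{d['name']}: {slot}")
```

The hard part, of course, is not this loop but getting every dentist's site to publish its schedule in a form a machine can read, which is exactly what the Semantic Web promises.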
Tuesday, September 18, 2007
The Seminal on The Semantic
Before Tim O'Reilly, there was Sir Tim Berners-Lee, who is often credited as the creator of the World Wide Web. What many do not know is that Berners-Lee also preceded many so-called Web 2.0 experts in envisioning the Semantic Web (which many refer to synonymously as "Web 3.0"). While O'Reilly came along in 2004 to coin Web 2.0, Berners-Lee had long before laid the conceptual foundations in an article co-authored with James Hendler and Ora Lassila, titled The Semantic Web, in Scientific American in 2001. Although librarians and information professionals don't need to know the specifics of the coding technology behind the Semantic Web (that would be asking too much, for much of it is still in development), it is important to have a good grasp of the concepts and a strong understanding of the history and evolution of the Web. Thus, it is important to know that the Semantic Web will be defined by five concepts:
(1) Expressing Meaning - Bring structure to the meaningful content of Web pages, creating an environment where software agents roaming from page to page can readily carry out sophisticated tasks for users. Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation.
(2) Knowledge Representation - For Web 3.0 to function, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning: this is where XML and RDF come in (see the sketch after this list), but are they only preliminary languages?
(3) Ontologies - For a program that wants to compare or combine information across two databases, it has to know when two terms are being used to mean the same thing. This means the program must have a way to discover common meanings for whatever database it encounters. Hence, an ontology has a taxonomy and a set of inference rules.
(4) Agents - The real power of the Semantic Web will be the programs that actually collect Web content from diverse sources, process the information and exchange the results with other programs. Thus, whereas Web 2.0 is about applications, the Semantic Web will be about services.
(5) Evolution of Knowledge - The Semantic Web is not merely a tool for conducting individual tasks; rather, its ultimate goal is to advance the evolution of human knowledge as a whole. Whereas human endeavour is caught between the eternal struggle of small groups acting independently and the need to mesh with the greater community, the Semantic Web is a process of joining together subcultures when a wider common language is needed.
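To ground concepts (2) and (3), here is a minimal sketch using rdflib, a real Python library for working with RDF. It states a few facts as triples and applies, by hand, one ontology-style inference rule. The vocabulary (http://example.org/) and all the facts are invented for illustration.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")    # invented vocabulary
g = Graph()

# Statements ("triples"): subject, predicate, object.
g.add((EX.Dentist, RDFS.subClassOf, EX.HealthProvider))
g.add((EX.drLee, RDF.type, EX.Dentist))
g.add((EX.drLee, EX.name, Literal("Dr. Lee")))

# One ontology-style inference rule, applied by hand:
# membership in a subclass implies membership in the superclass.
inferred = [
    (s, RDF.type, parent)
    for s, _, cls in g.triples((None, RDF.type, None))
    for _, _, parent in g.triples((cls, RDFS.subClassOf, None))
]
for triple in inferred:
    g.add(triple)

# A fact nobody stated directly, but the machine now "knows":
print((EX.drLee, RDF.type, EX.HealthProvider) in g)   # True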
Saturday, September 15, 2007
Web 3.0 & the Semantic Web
Ready or not, like it or not, Web 3.0 is around the corner. It's coming, so it's best to understand the technologies. Librarians in particular need to understand the technologies behind the semantic web: what it will look like, how it runs, and what to expect from its much-anticipated (though still largely theoretical) features.
Ora Lassila and James Hendler, who co-authored with Tim Berners-Lee the 2001 article that predicted what the semantic web would look like, argue in their most recent article, Embracing "Web 3.0", that the technologies that make the semantic web possible are slowly but surely maturing. In particular:
As RDF acceptance has grown, the need has become clear for a standard query language to be for RDF what SQL is for relational data. The SPARQL Protocol and RDF Query Language (SPARQL), now under standardization at the W3C, is designed to be that language.
But that doesn't mean that Web 2.0 technologies are obsolete. Rather, they are an intermediate stage in the evolution toward Web 3.0. In particular, it is interesting that the authors note:
(1) Folksonomies - tagging provides an organic, community-driven means of creating structure and classification vocabularies.
(2) Microformats - the use of HTML markup to encode structured data is a step toward "semantic data." Although not in Semantic Web formats, microformatted data is easy to transform into something like RDF or OWL.
As you can see, we're moving along. Take a look at this: on the surface, Yahoo Food looks just like any Web service; underneath, it is built on SPARQL, which really does "sparkle." A minimal sketch of what a SPARQL query looks like follows.
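For readers curious about the syntax, here is a small sketch using Python's rdflib library: a handful of invented recipe triples and a SPARQL query over them. This is not Yahoo Food's actual data or schema, just an illustration of the SQL-for-RDF idea from the quotation above.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")    # invented recipe vocabulary
g = Graph()
g.add((EX.pho, RDF.type, EX.Recipe))
g.add((EX.pho, EX.cuisine, Literal("Vietnamese")))
g.add((EX.paella, RDF.type, EX.Recipe))
g.add((EX.paella, EX.cuisine, Literal("Spanish")))

# SPARQL is to RDF what SQL is to relational data.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?recipe WHERE {
        ?recipe a ex:Recipe ;
                ex:cuisine "Vietnamese" .
    }
""")
for row in results:
    print(row.recipe)    # http://example.org/pho
```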
Monday, September 10, 2007
Six Kinds of (Social) Searching
Librarians need to be aware of social searching: it's important and it's here to stay. What makes social searching so integral to librarians' information-retrieval skills is that it requires knowledge of Web 2.0 (mashups, wisdom of crowds, long tail, etc.). That doesn't mean "traditional" search skills are obsolete. Far from it. Rather, social searching adds another layer to the librarian's toolkit. Here are some of my favourites.
6. Collaborative harvesters - iRazoo, Digg, Flickr, YouTube, Netscape, Reddit, Tailrank, popurls.com
Saturday, September 01, 2007
Top 25 Definitions for Web 2.0
Summer has gone by so quickly. What happened to June? I've been culling readings from all over, aggregating the best definitions of Web 2.0. There are a lot: twenty-five in all. I tried making sense of everything, even trying to arrange and shuffle them into a catchy acronym (think ROY G. BIV). I challenge all librarians and other information professionals interested in Web 2.0 to do the same: find a catchy acronym and share it with us all. I will share my own in one month's time.
(1) Social Networks - The content of a site should comprise user-provided information that attracts members of an ever-expanding network. (example: Facebook)
(2) Wisdom of Crowds - Group judgments are surprisingly accurate, and the aggregation of input is facilitated by the ready availability of social networking sites. (example: Wikipedia)
(3) Loosely Coupled APIs - Short for "Application Programming Interface," an API provides a set of instructions (messages) that a programmer can use to communicate between applications, allowing programmers to incorporate one piece of software directly into another. (example: Google Maps)
(4) Mashups - They are combinations of APIs and data that result in new information resources and services. (example: Calgary Mapped)
(5) Permanent Betas - The idea is that no software is ever truly complete so long as the user community is still commenting upon it, and thus, improving it. (example: Google Labs)
(6) Software Gets Better the More People Use It - Because all social networking sites seek to capitalize on user input, the true value of each site is defined by the number of people it can bring together. (example: Windows Live Messenger)
(7) Folksonomies - A classification system created in a bottom-up fashion with no central coordination. Entirely unlike traditional classification schemes such as the Dewey Decimal or Library of Congress classifications, folksonomies allow any user to "social tag" an object with whatever phrase they deem fitting. (example: Flickr and YouTube)
(8) Individual Production and User-Generated Content - Free social software tools such as blogs and wikis have lowered the barrier to entry, following in the footsteps of the 1980s self-publishing revolution sparked by the office laser printer and desktop publishing software. In the world of Web 2.0, with a few clicks of the mouse, a user can upload videos or photos from a digital camera into their own media space, tag them with keywords, and make the content available to everyone in the world.
(9) Harness the Power of the Crowd - Harnessing not "intellectual" power but the power of the "wisdom of crowds," "crowd-sourcing," and "folksonomies."
(10) Data on an Epic Scale - Google has a total database measured in hundreds of petabytes (a petabyte is a million billion bytes), swelled each day by terabytes of new information. Much of this is collected indirectly from users and aggregated as a side effect of the ordinary use of major Internet services and applications such as Google, Amazon, and eBay. In a sense these services are 'learning' every time they are used, mining and sifting data for better services.
(11) Architecture of Participation - Through the use of the application or service, the service itself gets better. Simply put, the more you use it - and the more other people use it - the better it gets. Web 2.0 technologies are designed to take user interactions and use them to improve the service itself. (example: Google search)
(12) Network Effects - A general economic term describing the increase in value to the existing users of a service, in which there is some form of interaction with others, as more and more people start to use it. As the Internet is, at heart, a telecommunications network, it is subject to the network effect. In Web 2.0, new software services are being made available which, due to their social nature, rely a great deal on the network effect for their adoption. eBay is one example of how this concept works so successfully.
(13) Openness - Web 2.0 places an emphasis on making use of the information in the vast databases that the services help to populate. This means Web 2.0 is about working with open standards, using open-source software, making use of free data, re-using data, and working in a spirit of open innovation.
(14) The Read/Write Web - A term describing the main difference between Old Media (newspapers, radio, and TV) and New Media (e.g. blogs, wikis, RSS feeds): the new Web is dynamic in that it allows consumers of the Web to alter and add to the pages they visit - information flows in all directions.
(15) The Web as a Platform - Closely tied to the "perpetual beta," the idea behind Web 2.0 services is that they need to be constantly updated. This includes experimenting with new features in a live environment to see how customers react.
(16) The Long Tail - It was once difficult to publish a book on a very specific interest because its audience was too limited to justify the publisher's investment. The new Web lowers the barriers to publishing anything (including media) on a specific interest because it lets writers connect directly with international audiences interested in extremely narrow topics.
(17) Harnessing Collective Intelligence - Google, Amazon, and Wikipedia are good examples of how successful Web 2.0-centric companies use the collective intelligence of users to continually improve services based on user contributions. Google's PageRank examines how many links point to a page, and from what sites those links come, to determine its relevance, instead of evaluating the relevance of websites based solely on their content (a toy sketch follows the list).
(18) Science of Networks - To truly understand Web 2.0, one must understand not only web networks but also human and scientific networks. Ever heard of six degrees of separation and the small-world phenomenon? Knowing how to open a Facebook account isn't good enough; we must know what goes on behind the scenes in the interconnectedness of networks - socially and scientifically.
(19) Core Datasets from User Contributions - One way Web 2.0 companies collect unique datasets is through user contributions. But collecting is only half the picture; using the datasets is the key. These contributions are organized into databases and analyzed to extract the collective intelligence hidden in the data, which can then be applied to the direct improvement of the website or web service.
(20) Lightweight Programming Models - The move toward database-driven web services has been accompanied by new software development models that often lead to greater flexibility. Sharing and processing datasets between partners enables mashups and remixes of data. Google Maps is a common example, as it allows people to combine its data and application with other geographic datasets and applications.
(21) The Wisdom of the Crowds - Not only has it blurred the boundary between amateur and professional; in a connected world, ordinary people often have access to better information than officials do. As an example, the collective intelligence of the evacuees of the towers saved numerous lives when they disobeyed the authorities who told them to stay put.
(22) Digital Natives - Because a generation (mostly the under-25s) has grown up surrounded by developing technologies, those fully at home in a digital environment aren't worried about information overload; rather, they crave it.
(23) Internet Economics - Small is the new big. Unlike the past when publishing was controlled by publishers, Web 2.0's read/write web has opened up markets to a far bigger range of supply and demand. The amateur who writes one book has access to the same shelf space as the professional author.
(24) "Wirelessness" - Digital natives are less attached to computers and are more interested in accessing information through mobile devices, when and where they need it. Hence, traditional client applications designed to run on a specific platform, will struggle if not disappear in the long run.
(25) Who Will Rule? - This will be the ultimate question (and prize). As Sharon Richardson argues, whoever rules "may not even exist yet."
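As a concrete illustration of definition (17), here is a toy PageRank in Python. The idea: rank flows along links, so a page matters when pages that matter point to it. The four-page link graph is invented, and production PageRank involves far more engineering; this shows only the core power-iteration idea.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Tiny power-iteration PageRank over a dict: page -> list of outbound links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if not outs:                        # dangling page: spread rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
            else:                               # pass rank along each outbound link
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# An invented four-page web: everyone ultimately points at "c".
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

Notice that no one looked at the pages' content: the ranking is harvested entirely from the link structure that users and authors collectively built, which is the "collective intelligence" point.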
Wednesday, August 22, 2007
Librarian 3.0
I was recently asked in a job interview how Web 3.0 would work for a law firm. It made me think on the fly: how would the Web of the future work in such a scenario? We're barely even half-way into Web 2.0... I had to think back to an article Michael V. Copeland of Business 2.0 Magazine had written, What's next for the Internet, to envision a glimpse of the "future":
The semantic Web in the Berners-Lee vision acts more like a series of connected databases, where all information resides in a structured form. Within that structure is a layer of description that adds meaning that the computer can understand.
Since we're on the topic of visions and dreams, here would be my answer. Imagine the lawyer, Mr. X, flipping open his laptop (which by then would be priced similarly to a cell phone) and typing in "2 o'clock meeting with Angela at Starbucks." All of a sudden, his online calendar pops open, a series of client names appears, the correct "Angela Smith" is sent an email, and the meeting agenda is sent to the printer. Starbucks receives an electronic notification with the usual order of Venti Chai Latte (two cups) and a newspaper - the Globe and Mail (his favourite) to boot. Because Mr. X's car is in the shop after a recent accident and a replacement isn't ready yet, a taxi is ordered automatically and will be waiting for him at the entrance at 1:30. The ride is estimated at 15 minutes, but his preference has always been for early arrival.
Finally, it's the library's turn. Mr. X sends an email to the librarian (after all, she is the one responsible for the library's more intricate databases) with simply the message "Wang v. Granville LLP" (a pseudonym, of course), and immediately the librarian works her magic and types in the necessary key terms. All of the acts, statutes, regulations, and updated case files relating to the case are electronically retrieved and stored in a file that is automatically sent to the lawyer's dossier. (The librarian's job is behind the scenes: she is the one who carefully collates the materials and gives them tags that the semantic databases translate into their own machine-readable language - see the sketch below.)
The lawyer walks out of the firm nonchalantly and begins his afternoon with everything he needs, having spent only one-tenth of the time and effort he would have needed back in the days of Web 2.0. That, in my hypothetical world of user history, preferences, and interlocking databases, is what the future of Web 3.0 might look like.
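Here is a minimal sketch, again using Python's rdflib, of the librarian's behind-the-scenes tagging in this scenario: materials are described as machine-readable statements so that a one-line request can pull back everything relevant. The vocabulary and document identifiers are invented, and the case name mirrors the hypothetical above.

```python
from rdflib import Graph, Namespace, Literal

LAW = Namespace("http://example.org/law/")   # invented law-library vocabulary
g = Graph()

# The librarian's behind-the-scenes work: tagging materials so
# machines can retrieve them by meaning, not just by keyword.
g.add((LAW.doc42, LAW.caseName, Literal("Wang v. Granville LLP")))
g.add((LAW.doc42, LAW.materialType, Literal("statute")))
g.add((LAW.doc17, LAW.caseName, Literal("Wang v. Granville LLP")))
g.add((LAW.doc17, LAW.materialType, Literal("case file")))

# The lawyer's one-line request, answered automatically.
for doc in g.subjects(LAW.caseName, Literal("Wang v. Granville LLP")):
    print(doc, "->", g.value(doc, LAW.materialType))
```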
Monday, August 13, 2007
The Paradox of Choice
As information professionals, we face a plethora of choices each and every day of our working lives, from which brand of coffee to buy in the morning to which database to search. So many choices, so little time to choose. Barry Schwartz, Professor of Social Theory and Social Action, offers strategies in The Paradox of Choice that can make our decision-making more effective. His book is worth a read. Here are some major points:
(1) Choose When to Choose - If choice makes you feel worse about what you've chosen, you really haven't gained anything from the opportunity to choose. By restricting our options, we will be able to choose less and feel better.
(2) Be a Chooser, Not a Picker - Choosers make the time to modify their goals; pickers do not. Good decisions take time and attention, and the only way we can find the needed time and attention is by choosing our spots.
(3) Satisfice More and Maximize Less - Maximizers suffer most in a culture that provides too many choices. Learn to accept "good enough," since it will simplify decision-making and increase satisfaction. Results are sometimes subjective; yet satisficers will almost always feel better about their decisions (a toy illustration follows this list).
(4) Think About the Opportunity Costs of Opportunity Costs - The more we think about opportunity costs, the less satisfaction we'll derive from whatever we choose.
(5) Make Your Decisions Nonreversible - The very option of being allowed to change our minds seems to increase the chances that we will change our minds. When we can change our minds about decisions, we are less satisfied with them.
(6) Practice an "Attitude of Gratitude" - Our evaluation of our choices is profoundly affected by what we compare them with, including comparisons with alternatives that exist only in our imaginations. The experience can be either disappointing or delightful. We can improve our subjective experience by consciously striving to be grateful more often for what is good about a choice and to be disappointed less by what is bad about it.
(7) Regret Less - The sting of regret (actual or potential) colours many decisions and sometimes influences us to avoid making decisions at all. Although regret is often appropriate and instructive, when it becomes so pronounced that it poisons or even prevents decisions, we should make an effort to minimize it.
(8) Anticipate Adaptation - Learning to be satisfied as pleasures turn into mere comforts will reduce disappointment with adaptation when it occurs.
(9) Control Expectations - The easiest route to increasing satisfaction with the results of decisions is to remove excessively high expectations about them.
(10) Curtail Social Comparison - We evaluate the quality of our experiences by comparing ourselves to others, so by comparing ourselves to others less, we will be satisfied more.
(11) Learn to Love Constraints - As the number of choices we face increases, freedom of choice eventually becomes a tyranny of choice. Choice within constraints, freedom within limits, is what opens up marvelous possibilities.
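As a toy illustration of strategy (3), here is a short Python sketch contrasting the two styles: the maximizer inspects every option before committing, while the satisficer stops at the first option that clears a "good enough" bar. The options, scoring function, and threshold are all invented.

```python
import random

def maximizer(options, score):
    """Examines every option and returns the global best."""
    return max(options, key=score)

def satisficer(options, score, good_enough):
    """Returns the first option that clears the bar, checking far fewer."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return max(options, key=score)   # nothing cleared the bar: fall back

options = [random.random() for _ in range(1000)]
print("maximizer :", maximizer(options, score=lambda x: x))
print("satisficer:", satisficer(options, score=lambda x: x, good_enough=0.9))
```

The satisficer typically examines a handful of options instead of a thousand and, per Schwartz, tends to feel better about the result.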
Sunday, August 12, 2007
Long Tail, Searching, and Libraries
The Long Tail is the essence of Web 2.0. Understanding how the Long Tail works not only helps in examining how social software such as blogs and wikis impacts users and libraries, but ultimately in evaluating how future products (i.e. ones not invented yet) can be used more creatively and maximized to their full potential. Chris Anderson's concept of the Long Tail analyzes how the media and entertainment industries can succeed not only by pushing mass-market hits that are popular among many but also by mining the collective interest of the few in less-popular books, songs, movies, and more.
In other words, although thousands may want to buy a hit song, if you add up all those who want to buy lesser-known titles, they might generate as much or more revenue than the hits themselves (a back-of-the-envelope sketch follows the rules below). Working in a library or information centre, it is important to tap into both the "head" of interest and the "long tail" that follows behind. Here are the major concepts as applied to libraries:
Rule #2 - Let Customers Do the Work - Have user-submitted reviews, which are often well-informed, articulate, and, most important, trusted by other users.
Rule #3 - One Distribution Method Doesn't Fit All - Some want to go to stores, some want to shop online. Some want to research online, others buy in stores. Some want them now, some can wait. Let the customer choose.
Rule #4 - One Product Doesn't Fit All - Allow for different formats of the same thing. A CD album can be "microchunked" into music videos, remixes, all in a number of formats and sampling rates. One size fits one; many sizes fit many.
Rule #5 - One Price Doesn't Fit All - Although this doesn't apply to most libraries, it's important to keep in mind that different people are willing to pay different prices for any number of reasons, from how much money they have to how much time they have. Whatever the library charges should allow room for flexibility.
Rule #6 - Share Information - More information is better only if it's presented in a way that helps order choice, not confuse it further. Thus, information about buying patterns, when transformed into recommendations, can be a powerful marketing tool.
Rule #7 - Think "and" not "or" - In markets with infinite capacity (virtual ones), the right strategy is almost always to offer it all.
Rule #8 - Trust the Market To Do Your Job - Online markets are nothing if not highly efficient measures of wisdom of crowds. Collaborative filters, popularity rankings, and ratings are all tools that reach this goal: don't predict; measure and respond.
Rule #9 - Understand the Power of Free - A powerful feature of digital markets is that they put free within reach; since marginal costs are near zero, prices can be, too. Services such as Skype and Gmail attract users with a free service and convince some of them to upgrade to a subscription-based premium tier with higher-quality features. Libraries need to use digital economics to their advantage: perhaps use free as a starting point for profits?
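To see why the tail can rival the head, here is a back-of-the-envelope Python sketch. It assumes Zipf-like demand, where the k-th most popular title sells in proportion to 1/k; the catalogue size and the head/tail cut-off are my own assumptions, not Anderson's numbers.

```python
# Assumed Zipf-like demand: the k-th most popular title sells
# in proportion to 1/k. Catalogue size and cut-off are invented.
N = 100_000                       # titles in the catalogue
sales = [1.0 / k for k in range(1, N + 1)]

head = sum(sales[:100])           # the 100 "hits"
tail = sum(sales[100:])           # every lesser-known title
print(f"head {head:.1f} vs tail {tail:.1f} "
      f"({tail / (head + tail):.0%} of demand is in the tail)")
```

Under these assumptions the 99,900 obscure titles out-draw the 100 hits, roughly 57% to 43%, which is the Long Tail argument in miniature.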
Friday, August 10, 2007
Web 2.0-ness
Tim O'Reilly offers an intriguing hierarchy of Web 2.0-ness. In this hierarchy, the highest level is to embrace the network, to understand what creates network effects, and then to harness them in everything you do. It's not just about social software; it's much, much more conceptual. It looks something like this:
Level 3 - The application can only exist on the net and draws its essential power from the network and the connections it makes possible between people or applications.
Level 2 - The application could exist offline, but it is uniquely advantaged by being online.
Level 1 - The application can and does exist successfully offline.
Level 0 - The application has primarily taken hold online, but it would work just as well offline.
Thursday, August 09, 2007
A Definition of Web 3.0
Google CEO Eric Schmidt gives a fairly succinct definition of what exactly Web 3.0 is.
Friday, August 03, 2007
Happy Long Weekend
It's BC Day here in British Columbia, Canada. Have a restful, happy, and sunny long weekend, everyone. Here's a fireside chat between Stephen Colbert and Jimmy Wales to keep us in good company.
Monday, July 30, 2007
The Constant Inconstancy...
Lance Ulanoff is not telling me anything new with his article in PC Magazine; I've been saying this for quite a while now. The internet technology of today is the wasteland of tomorrow. That is nothing to cry over, though - change is a byproduct of Web 2.0.
Nothing is meant to be stable; everything is wobbly and incoherent. Here are technologies that have changed dramatically over the past decade. Think of the changes to come! So many more passwords to remember!
(1) ICQ (90s) -> MSN Messenger
(2) Yahoo! (90s) -> Google
(3) Friendster (90s) -> Facebook/MySpace
(4) Geocities Personal Homepages (90s) -> Blogs
As librarians and information specialists, I think it's unproductive to lament the constant inconstancy. Rather, we should channel our energies into anticipating new technologies and tools and integrating them into the workplace. Be comfortable with change. Think of it like this: just as in collection management, books come and go. We weed by replacing and displacing. The same goes for internet technologies. If it is our job to keep up with the latest titles, then why can't we do the same with the latest technologies?
I’d like to end off with a haiku of my own:
Hi Web 2.0
I admire your brevity
We will meet again
Friday, July 27, 2007
Academic Library 2.0
Ellyssa Kroski, Reference Librarian at Columbia University, has just come out with a fantastic article on using Web 2.0 in academic libraries. The Social Tools of Web 2.0: Opportunities for Academic Libraries explores these tools and their potential applications, focusing on four main types:
(1) Content Collaboration (wikis and online office applications)
(2) Social Bookmarking (Connotea, Del.icio.us)
(3) Media Sharing (Youtube, Yahoo! Video)
(4) Social Networking (Facebook, MySpace)
Take a look! It's well worth the read.