Friday, April 25, 2008
(1) Don't Plan, Implement, and Forget - Change must be constant and purposeful. Services need to be continually evaluated.
(2) Mission Statement - A library without a clear mission is like a boat without a captain. The mission drives the organization, serving as a guide when selecting services for users and letting you set a clear course for Library 2.0.
(3) Community Analysis - Know your users. Talk to them, have a feel for who you're serving, and who they are.
(4) Surveys & Feedback - Get both users and staff feedback. It's important to know what works and what doesn't.
(5) Team up with competitors - Don't think of the library as being in a "box," and don't treat bookstores, cafes, or the Internet as rivals. Look at what users are doing elsewhere that they could be doing through the library. Create a win-win relationship with local businesses that benefits everyone.
(6) Real input from staff - Soliciting feedback means implementing ideas, not just collecting them for show. Otherwise, staff will eventually see through the pretense, and morale will suffer.
(7) Evaluating services - Sacred cows do not necessarily need to be eliminated; however, nothing should be protected from review.
(8) Three Branches of Change model - This allows all staff - from frontline workers to the director - to understand the changes made. The three teams are the investigative team, the planning team, and the review team.
(9) Long tail - Web 2.0 concepts should be incorporated into the Library 2.0 model as much as possible. For example, the Netflix model does something few services can do: get materials into the hands of people who do not come into libraries. Think virtually as well as physically.
(10) Constant change & user participation - These two concepts form the crux of Library 2.0.
(11) Web 2.0 technologies - They give users access to a wide variety of applications that are neither installed nor approved by IT. The flexibility is there for libraries to experiment like never before. It is important to start conversations where none existed before. Online applications help fill this gap.
(12) Flattened organizational structure - Directors should not make all the decisions. Instead, frontline staff input should be included. Committees that include both managers and lower-level staff help 'flatten' the hierarchy, creating a more horizontal structure that leads to more realistic decision-making.
Tuesday, April 22, 2008
(1) Not all SemWeb data are created equal - There are a lot of RDF files on the web, in various formats, but that doesn’t equate to the SemWeb. This is a bit of a strawman, though. In fact, it emphasizes the point that the components of the SemWeb are here. The challenge is finding the mechanism or application that can glue everything together.
(2) A Technology is only as good as developers think it is - Search analysis reveals that people are actually more interested in AJAX than RDF Schema, despite the fact that RDF has a longer history. Zambonini believes that this is because the SemWeb is so incredibly exclusive in an ivory-towerish way. I agree. However, what is to say that the SemWeb won’t be able to accommodate a broader audience in the future? We’ll just need to wait and see.
(3) Complex systems must be built from successively simpler systems - I agree with this point. Google succeeded in the search engine wars because it learned to build up slowly, creating a simple system that grew more complex as it needed to. People love Web 2.0 applications because they’re easy to use and understand. But whereas Web 2.0 was about searching, the SemWeb should be about finding. Nobody said C++ and Java were easy, but complexity pays off in the long run.
(4) A new solution should stop an obvious pain - The SemWeb needs to prove what problems it can solve, and prove its purpose. Right now, Web 2.0 and 1.0 do a good job, so why would we need any more? Fair enough. But information is still in silos. Until we open up the data web, we’re still in many ways living in the dark.
(5) People aren’t perfect - Creating metadata and classifications is difficult. People are sloppy. Will adding SemWeb rules add to the mess that is the Web? I seriously can’t answer this one. We can only predict. But perhaps it’s too cynical to prematurely write off people’s metadata creating skills. HTML wasn’t easy, but we managed.
(6) You don’t need an ontology of everything. But it would help - Zambonini argues for a top-down ontology, which would be a one-size-fits-all solution for the entire Web, rather than building from a bottom-up approach based on the folksonomies of the social web. I would argue that for this to work, we need to look at it from different angles. Perhaps we can meet halfway?
(7) Philanthropy isn’t commercially viable - Why would any sane organization buy into the SemWeb and expose their data? We need that killer application in order for this to work. Agreed. eBay did wonders. Let’s hope there’s a follow-up on the way.
Saturday, April 19, 2008
Michael Stephens and Maria Collins’ Web 2.0, Library 2.0, and the Hyperlinked Library is a fascinating read for those interested in learning more about these concepts. Certainly, at the core of Library 2.0 are blogs, RSS, podcasting, wikis, IM, and social networking sites. But it’s much more than that, and Stephens and Collins boil it down nicely to four main themes of Library 2.0:
(1) Conversations – The library shares plans and procedures for feedback and then responds. Transparency is real and personal.
(2) Community and Participation – Users are involved in planning library services, evaluating those services, and suggesting improvements.
(3) Experience – Satisfying to the user, Library 2.0 is about learning, discovery, and entertainment. Bans on technology and the stereotypical “shushing” are replaced by a collaborative and flexible space for new initiatives and creativity.
(4) Sharing – Providing ways for users to share as much or as little of themselves as they like, users are encouraged to participate via online communities and connect virtually with the library.
Thursday, April 17, 2008
This is how Ford explains the SemWeb; it's one of the most concise explanations I've seen to date.
Of course, it's much more than just A's and B's. But the idea that Google will eventually integrate the SemWeb into its applications is exciting. And for an article that was written back in 2002 with such clarity, it's a highly engaging read.
So what's the Semantic Web? At its heart, it's just a way to describe things in a way that a computer can “understand.” Of course, what's going on is not understanding, but logic, like you learn in high school:
If A is a friend of B, then B is a friend of A.
Jim has a friend named Paul.
Therefore, Paul has a friend named Jim.
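Ford's syllogism maps neatly onto how a rules engine works. As a minimal sketch in plain Python (not an actual RDF library), here is how a reasoner could derive the second fact from the first, treating "friendOf" as a symmetric property, much as OWL handles a relation declared as owl:SymmetricProperty:

```python
# Toy illustration of symmetric-property inference over triples.
# Names and the "friendOf" predicate are made up for the example.

def infer_symmetric(triples, prop):
    """Return the triples plus (b, prop, a) for every asserted (a, prop, b)."""
    closed = set(triples)
    for subj, pred, obj in triples:
        if pred == prop:
            closed.add((obj, prop, subj))
    return closed

facts = {("Jim", "friendOf", "Paul")}
closed = infer_symmetric(facts, "friendOf")
print(("Paul", "friendOf", "Jim") in closed)  # True: the derived fact
```

The computer never "understands" friendship; it just applies the rule mechanically, which is exactly Ford's point.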
Saturday, April 12, 2008
In other words, if you take away the documents, you're left with the connections between people. Information about the public connections between people is really useful. A user might want to see who else you're connected to, and as a developer of social applications, you can provide better features for your users if you know who their public friends are. There hasn't been a good way to access this information.
The Social Graph API looks for two types of publicly declared connections:
- It looks for all public URLs that belong to you and are interconnected. These could include a blog, a Facebook profile, and a Twitter account.
- It looks for publicly declared connections between people. For example, your blog may link to someone else's blog while your Facebook and Twitter are linked to each other.
This index of connections enables developers to build many applications, including ones that help users connect with their public friends more easily. Google is taking the resulting data and making it available to third parties, who can build it into their applications (including their Google OpenSocial applications). Of course, the problem is that few people use FOAF and XFN to declare their relationships, but Google's new API could make them more visible, and social applications could use them. Ultimately, Google could also index the relationships from social networks, if people are comfortable with that.
What does this mean for information professionals? Stay tuned. With Google on board the SemWeb train (or ship), it could pave the way for more bricks to be laid on the road to realizing the goal of differentiating Paris from Paris.
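The "publicly declared connections" here are mostly ordinary links carrying XFN rel attributes. As a toy sketch of what such markup looks like and how it can be spotted (this uses Python's standard html.parser and made-up example URLs; it is my own illustration, not Google's crawler):

```python
from html.parser import HTMLParser

class XFNLinkCollector(HTMLParser):
    """Collects XFN-style rel="me" and rel="friend" links from a page,
    the kind of publicly declared connections the Social Graph API indexes."""
    def __init__(self):
        super().__init__()
        self.me = []       # other URLs claimed as "this is also me"
        self.friends = []  # URLs declared as belonging to friends
    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        href = attrs.get("href")
        if href and "me" in rels:
            self.me.append(href)
        if href and "friend" in rels:
            self.friends.append(href)

page = ('<a rel="me" href="http://twitter.com/example">my Twitter</a>'
        '<a rel="friend met" href="http://friend.example.org/">a friend</a>')
parser = XFNLinkCollector()
parser.feed(page)
print(parser.me, parser.friends)
```

Two one-word attributes are all it takes to turn a plain hyperlink into a machine-readable social claim, which is why Google can build the graph without anyone filling out a new profile.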
Wednesday, April 09, 2008
(1) Bottom-Up vs. Top-Down – Do we focus on annotating information in pages (using RDF) so that it is machine-readable, in top-down fashion? Or do we focus on leveraging information in existing web pages so that their meaning can be derived automatically (folksonomies), in a bottom-up approach? Time will tell.
(2) Annotation Technologies – RDF, Microformats, and Meta Headers. The more annotations there are in web pages, the more standards are implemented, and the more discoverable and powerful information becomes.
(3) Consumer and Enterprise – People currently don’t care much for the Semantic Web because all they look for is utility and usefulness. Until an application can be deemed a “killer application,” we continue to wait.
(4) Semantic APIs – Unlike Web 2.0 APIs, which are code used to mash up existing services, Semantic APIs take unstructured information as input and find the entities and relationships within it. Think of them as mini natural language processing tools. Take a look.
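To make "mini natural language processing tool" concrete, here is a deliberately naive stand-in: a capitalized-phrase spotter. A real Semantic API does far more (typing, linking, disambiguating entities), so treat this only as a sketch of the input/output shape:

```python
import re

def extract_entities(text):
    """Naive entity spotter: returns runs of capitalized words.
    A real semantic service would also type and disambiguate them."""
    return re.findall(r"[A-Z][a-z]+(?:\s[A-Z][a-z]+)*", text)

print(extract_entities("Paris Hilton visited Paris last May."))
# ['Paris Hilton', 'Paris', 'May']
```

Note what the toy cannot do: it has no idea that the second "Paris" is a city and the first is part of a person's name. Supplying exactly that distinction is the job of a Semantic API.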
(5) Search Technologies – The sobering fact is the growing realization that understanding semantics won’t be sufficient to build a better search engine. Google does a fairly good job of finding us the capital city of Canada, so why do we need to go any further?
(6) Contextual Technologies - Contextual navigation does not improve search so much as shortcut it: it takes more of the guessing out of the equation. That's where the SemWeb will overtake Google.
(7) Semantic Databases – The challenge of keeping up with the world is common to all database approaches, which are effectively information silos. That’s where semantic databases come in, as they focus on annotating web information to make it more structured. Take a look at Freebase.
As librarians and information professionals, we gather, organize, and disseminate. The challenge will be to keep doing so while information explodes at a rate unprecedented in human history, all the while staying afloat and explaining the technology to our users. Feels like walking on water, don’t you agree?
Tuesday, April 08, 2008
How about a neat web service called Freebase? It’s a semanticized version of Wikipedia, but with a bigger potential. Much bigger. Freebase is billed as an open, shared database of the world's knowledge: a massive, collaboratively edited database of cross-linked data. Until recently accessible by invitation only, the application is now open to the public as a semi-trial service.
What does this have to do with librarians? As Freebase argues, “Wikipedia and Freebase both appeal to people who love to use and organize information.” Hold that thought. That’s enough to whet our information-organizing appetites.
In our article, Dean and I argued that the essence of the Semantic Web is the ability to differentiate entities in a way the current Web cannot. For example, how can we currently parse Paris from Paris? Although still in its initial stages with improvements to come, Freebase does a nice job to a certain extent. Freebase covers millions of topics in hundreds of categories. Drawing from large open data sets like Wikipedia, MusicBrainz, and the SEC, it contains structured information on many popular topics, like movies, music, people, and locations, all reconciled and freely available via an open API.
As a result, Freebase builds on the Social Web 2.0 layer, while providing the Semantic Web infrastructure through RDF technology. For example, Paris Hilton would appear in a movie database as an actress, a music database as a singer and a model database as a model. In Freebase, there is only one topic for Paris Hilton, with all three facets of her public persona brought together. The unified topic acts as an information hub, making it easy to find and contribute information about her.
While information in Freebase appears to be structured much like a conventional database, it’s actually built on a system that allows any user to contribute to the schemas—or frameworks—that hold the data - RDF, as I had mentioned. This wiki-like approach to structuring information lets many people organize the database without formal, centralized planning. And it lets subject experts who don’t have database expertise find one another, and then build and maintain the data in their domain of interest. As librarians, we have a place in all of this. It's out there. Waiting for us.
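For a flavour of that open API: Freebase queries are written in MQL, a query-by-example dialect of JSON in which None and [] mark the slots you want the service to fill in. The sketch below only builds the query document; the envelope shape is from memory, so treat the details as assumptions rather than gospel:

```python
import json

# MQL query-by-example: ask Freebase to fill in the blanks for one topic.
# None requests a single value; [] requests every matching value.
query = {
    "query": {
        "name": "Paris Hilton",
        "id": None,    # the topic's unique Freebase id
        "type": [],    # every type the topic carries (actress, singer, model...)
    }
}
payload = json.dumps(query)
print(payload)
```

The single unified topic is what makes this work: one query returns the actress, the singer, and the model, because they are all facets of one record rather than rows in three databases.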
Wednesday, April 02, 2008
Over the past few years, I have enjoyed working in a variety of jobs, from public libraries, to hospital libraries, to research centres, to academic libraries. (I also dabbled in publishing, archival work, and teaching ventures.) The integration of these experiences has been wonderful, as it has helped build the skills most essential to my upcoming endeavours.
What will this new position entail? To a certain extent, everything that I'm not doing now as an academic librarian. The Irving K. Barber Learning Centre itself is not a "traditional" library. It's a new building, a space for collaborative learning and ideas. A learning commons. A new way of learning. It also represents a new direction for librarianship. If there is one thing that typifies this position, it would be digital outreach. Web 2.0, Semantic Web, and Web 3.0? Stay tuned.
The possibilities are exciting.
I'd like to thank everyone who helped me along the way, particularly Dean Giustini, Eugene Barsky, Eleanor Yuen, Tricia Yu, May Yan, Henry Yu, Hayne Wai, Chris Lee, Rob Ho, Peter James & friends at HSSD, Rex Turgano, Rob Stibravy, Susie Stephenson, Matthew Queree, and Angelina Dawes, among the many. And of course, Hoyu. Thank you to all.