Monday, June 08, 2009
By now, this has become world headline news. Laura Ling and Euna Lee were arrested by the North Korean state and sentenced to twelve years of hard labour. What is most distressing is that the capture of these two American journalists could be a politically motivated strategic move by an authoritarian regime on its last legs. I've long been a fan of Current TV, and although it shocks and saddens me to see journalists used as bargaining chips, I truly believe grassroots journalism in a social media-savvy world will bring down political barriers in the end.
Wednesday, June 03, 2009
Monday, June 01, 2009
(1) Linked Data Initiative - For the Web to move beyond a messy, siloed, and unregulated frontier, the SemWeb will require a standards-based approach, one in which data on the Web is published in interchangeable formats. By linking data together, one could find and take pieces of data sets from different places, aggregate them, and use them freely. Because of this linking, the Web won't be limited to web-based information but will ultimately extend to the non-Web-based world. To a certain extent, we are already experiencing this with smart technologies. Semantic technologies will help us extend this to the next version of the Web, often ambiguously dubbed Web 3.0.
(2) Resource Description Framework - RDF is key to the SemWeb as it allows for the federation of Web data and standards, using XML to solve problems a two-dimensional relational database world cannot. RDF provides a global and persistent way to link data together. RDF isn't a programming language but a method (a metaphorical "container") for organizing the mass of data on the Web, while paving the way for a fluid exchange of different standards. In this model, data does not live in cubes or tables; rather, it lives in triples - subject-predicate-object combinations that provide a multidimensional representation and linking of the Web, connecting nodes in an otherwise disparate silo of networks.
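To make the triple idea concrete, here is a minimal sketch in plain Python. The URIs and vocabulary prefixes are hypothetical examples, not real published terms; the point is only that every fact shares the same three-part shape.

```python
# A toy illustration of RDF's triple model: each fact is a
# (subject, predicate, object) tuple. All identifiers below are made up.
triples = [
    ("http://example.org/book/moby-dick", "dc:creator", "Herman Melville"),
    ("http://example.org/book/moby-dick", "dc:title", "Moby-Dick"),
    ("http://example.org/person/melville", "foaf:name", "Herman Melville"),
]

def facts_about(subject, graph):
    """Return every (predicate, object) pair asserted about a subject."""
    return [(p, o) for s, p, o in graph if s == subject]

print(facts_about("http://example.org/book/moby-dick", triples))
```

Because every statement has the same shape, triple sets from different sources can simply be concatenated, with no schema migration - which is exactly the federation property the paragraph above describes.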
(3) Ontologies and Taxonomies - LIS and cataloguing professionals are familiar with these concepts, as they often form the core of their work. The SemWeb moves from a taxonomic to an ontological world. While ontologies describe relationships in an n-dimensional manner, easily accommodating information from multiple perspectives, taxonomies are limited to hierarchical relationships. In an RDF environment, ontologies provide a capability that extends the utility of taxonomies. The beauty of ontologies is that one can be linked to another to take advantage of its data in conjunction with your own. Taxonomies lack this linkability: they are classification schemes that primarily describe part-whole relationships between terms. Ontologies are the organizing, sense-making complement to graphs and metadata, and mapping among ontologies is how domain-level data become interconnected over the data Web.
(4) SPARQL and SQL - SPARQL overcomes the limits of SQL because graphs can receive, and be converted into, a number of different data formats. In contrast, the rigidity of SQL limits it to table structures. To construct a SQL query, one has to know the database schema; SPARQL's abstraction solves this problem, as developers can move from one resource to another. As long as the data SPARQL queries is expressed in RDF, tapping into any number of data sources becomes possible. De-siloing data was once impossible without a huge investment of time and resources; with semantic technologies, it comes within reach.
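The schema-independence point can be sketched with a toy triple-pattern matcher in Python. The data is hypothetical and this is nowhere near a real SPARQL engine, but it shows why a query needs no table schema: every RDF source exposes the same three-column shape, and variables (here prefixed with "?") bind against it directly.

```python
# Hypothetical triple store.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "acme"),
]

def match(pattern, graph):
    """Return variable bindings for one (s, p, o) pattern, SPARQL-style."""
    results = []
    for triple in graph:
        binding = {}
        ok = True
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val       # a variable binds to anything
            elif pat != val:
                ok = False               # a constant must match exactly
                break
        if ok:
            results.append(binding)
    return results

# Rough analogue of: SELECT ?who WHERE { alice knows ?who }
print(match(("alice", "knows", "?who"), triples))  # [{'?who': 'bob'}]
```

The same `match` function works unchanged on any triple source you point it at, which is the de-siloing advantage over a schema-bound SQL query.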
(5) De-siloing the Web - What this means is that we would need to give up some degree of control over our own data if we wish to have a global SemWeb. This new iteration of the Web takes the page-to-page relationships of the linked document Web and augments them with linked relationships between and among individual data elements. By using ontologies, we can link to data we never included in the data set before, thus really "opening" up the Web as one large global database.
Thursday, May 28, 2009
In the Journal of Social Computing, Peter Sweeney argues that whatever we call Web 3.0, it is going to be the automation of tasks that displaces human work. Our information economy is in the midst of an Industrial Revolution. He makes another excellent point:
Billions are being spent worldwide on semantic technologies to create the factories and specialized machinery for manufacturing content. Railways of linked data and standards are being laid to allow these factories to trade and co-operate. And the most productive information services in the world are those that leverage Web 3.0 industrial processes and technologies. Web 3.0 is a controversial term, as it confuses both those who are just beginning to feel comfortable with the concept of Web 2.0 and those who are embracing the Semantic Web. Web 3.0 disrupts these traditional, safe thoughts. It not only blurs the terminology, it also offers business advocates an opportunity to cash in.
But I see Sweeney's argument as a multidimensional one that transcends nickels and dimes. He makes an excellent point when he argues that many dismiss Web 3.0 as a fad; however, when we think of the Web as a manufacturing process -- a disruptive technology, very much like the Industrial Revolution -- then we can begin to understand what Web 3.0 represents.
Monday, May 25, 2009
Bing is a combination of Microsoft's Live Search engine and semantic Web technology (which Microsoft quietly acquired with its purchase of Powerset in July 2008). It is said that Kumo was designed with a "Google killer" in mind. This, however, has not come without a cost.
It's been reported that the amount of resources Microsoft has spent on Kumo has caused deep divisions within the vendor's management. Many within the hierarchical monolith argue for staying put with the company's money-making ways rather than spreading resources on a fruitless quest for the holy grail of search.
These are important new developments for information professionals - especially librarians - to take note of. While the Semantic Web adds structure to Web searches in the back-end technology, what users will see on the front end is increased structure, such as search results in the centre of the page and a hierarchical organization of concepts or attributes in the left- (or right-) hand column. This could be what Bing ultimately looks like.
What this implies is that with so much of the spotlight currently on "practical" social media and Web 2.0 applications, much is happening underneath the surface among the information giants. Google itself is quietly conducting much research into the SemWeb. Who will be the first to achieve Web sainthood? Until last week, we thought it was these guys.
Wednesday, May 20, 2009
In turbulent economic times, it is critically important to understand what opportunities exist to make our businesses run better. The emergence of a new era of technologies, collectively known as Web 3.0, provides this kind of strategically significant opportunity.
The core idea behind web 3.0 is to extract much more meaningful, actionable insight from information. At the conference, we will explore how companies are using these technologies today, and should be using them tomorrow, for significant bottom line impact in areas like marketing, corporate information management, customer service, and personal productivity.
I would be hesitant to accept this definition of Web 3.0, particularly when it opens with the words "in turbulent economic times." It's awfully reminiscent of how Web 2.0 started: the burst of the dot-com economy in 2001, which led to programmers convening at the first Web 2.0 conference. For better or worse, Web 2.0 was born; but it was never endorsed by academia. The creators of the internet never envisioned Web 2.0 technologies; the World Wide Web Consortium (W3C) never had Web 2.0 standards. The Semantic Web, by contrast, has had its roots in the Web from the very beginning.
Unfortunately, I fear the same is happening with Web 3.0. Much is being slapped together by corporate and technology interests and labelled "Web 3.0." With the economy in a downturn, information professionals beware.
Wednesday, April 29, 2009
Tuesday, April 21, 2009
The Library not only offers an array of books, maps, manuscripts and films from around the world, in seven different languages; it ultimately aims to bridge a cultural divide, not only by offering people in poorer countries the same access to knowledge as those in richer ones, but also by making available the cultural heritage of Asian, African, Middle Eastern, and Latin American cultures.
Friday, April 10, 2009
Sunday, March 29, 2009
Michael Stephens is one of my favourite librarians. One of the most enjoyable things here is his recollection of how libraries shape a person's memories and life. This is a very honest, intimate discussion of Stephens' love of libraries. He's coming to Vancouver for the upcoming British Columbia Library Association 2009 conference. I'm looking forward to it.
Monday, March 23, 2009
And thus is the profession of librarianship. Perhaps we will be known by another title, another name, as some of us already are known as metadata managers, taxonomists, information architects, and knowledge managers. Library schools have evolved into I-Schools. Who knows, LIS might evolve to the point where it is no longer recognizable to us -- as the apothecary is no longer recognizable to the pharmacist. But the art of searching, sharing knowledge, collecting, organizing, and disseminating information, in whatever shape and form it may take, will never change. And hence, whatever we may become, we will never change.
Saturday, March 14, 2009
Search algorithms today are largely based on a common paradigm: link analysis. But they've ignored a mother lode of data: the network.
Nicely said. Although there are a multitude of variations of search algorithms, architectures and tweaks, search technology has been based largely on three canonical approaches. In a nutshell, here they are:
1) Human-powered directories - Hierarchically organized into taxonomies (e.g. Yahoo!)
2) Crawler-based index - Generates results largely prioritized by link analysis. (e.g. Google)
3) Collaborative tagging - Users tag pages with keywords so that future searchers can find them.
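The link analysis behind approach (2) can be sketched with a simplified PageRank-style iteration. The tiny link graph and damping factor below are illustrative assumptions, not Google's actual algorithm; the idea is only that a page's rank flows to the pages it links to.

```python
# Toy link graph: page -> pages it links to (purely hypothetical).
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute rank along links until it settles."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline share...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...and passes the rest, split evenly, to the pages it links to.
        for page, outs in links.items():
            share = rank[page] / len(outs)
            for out in outs:
                new[out] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
print(ranks)  # "c" is linked to by both "a" and "b", so it ranks highest
```

This is exactly the "mother lode" Weinman says the canonical approaches mine: the hyperlink structure, rather than the network traffic underneath it.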
However, these three options still fail to prevent click fraud, and they still leave content in the Deep Web unreachable. Weinman proposes network service providers as a fourth option: using data and metadata associated with the actual network transport of Web content -- HTML pages, documents, spreadsheets, almost anything -- to replace and/or augment traditional Web crawlers, improve the relevance and currency of search result rankings, and reduce click fraud. A network service provider could better determine aggregate surfing behaviour and hold times at sites or pages, in a way sensitive to the peculiarities of browser preferences and regardless of whether a search engine is used.
Weinman's proposal is an interesting deviation from the thinking of Semantic Web enthusiasts. It throws a quirk into speculation about the future of Web search technology. And so the search continues . . .
Monday, March 09, 2009
What is interesting is that Yandex's search algorithm is rooted in the highly inflected and very peculiar Russian language, where words can take on some 20 different endings to indicate their relationships to one another. As in many other non-English languages, this inflection makes Russian precise, but it makes search extremely difficult. Google fetches the exact word combination you enter into the search bar, leaving out the slightly different forms that mean similar things. Yandex is unique in that it does catch the inflection. Fortune has written an interesting article on Yandex, and my favourite part is its examination of the unique features of this Russian search giant:
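Why inflection matters for recall can be shown with a toy stemming sketch in Python. The word forms and suffix list below are made up for illustration (a real Russian morphological analyzer handles vastly more); the point is that reducing inflected forms to a shared stem lets a query match documents an exact-string engine would miss.

```python
# Illustrative suffixes only -- not a real Russian morphology table.
ENDINGS = ["ami", "ov", "am", "a", "y", "u", "e"]

def stem(word):
    """Strip the longest known ending so inflected forms collapse together."""
    for ending in sorted(ENDINGS, key=len, reverse=True):
        if word.endswith(ending) and len(word) > len(ending) + 2:
            return word[: -len(ending)]
    return word

# Hypothetical documents: three inflected forms of one noun, plus a stranger.
documents = ["kniga", "knigu", "knigami", "tablet"]

def search(query, docs):
    """Match on stems rather than exact strings."""
    target = stem(query)
    return [d for d in docs if stem(d) == target]

print(search("kniga", documents))  # ['kniga', 'knigu', 'knigami']
```

An exact-match search for "kniga" would return one document; the stem-aware search returns all three inflected forms, which is the gap the article says Yandex closes.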
While some of its services are similar to offerings available in the U.S. (blog rankings, online banking), it also has developed some applications that only Russians can enjoy, such as an image search engine that eliminates repeated images, a portrait filter that ferrets out faces in an image search, and a real-time traffic report that taps into users' roving cellphone signals to monitor how quickly people are moving through crowded roads in more than a dozen Russian cities.
Thursday, March 05, 2009
considering how best to build websites we’d recommend you throw out the Photoshop and embrace Domain Driven Design and the Linked Data approach every time. Even if you never intend to publish RDF it just works. The longer term aim of this work is to not only expose BBC data but to ensure that it is contextually linked to the wider web.