Sunday, December 29, 2013

Smart Technology, Context-Awareness, and the Internet of Things



Popular media has focused much attention on context-aware media technology. The feature article of Wired's May 2013 issue, "Welcome to the Programmable World," discusses how context-aware technologies will soon "choreograph" themselves to respond to our needs, solve our problems, and even save our lives. Wired's August issue features another article on the topic, "The Age of Invisible Design Has Arrived." Technology bloggers Robert Scoble and Shel Israel recently authored Age of Context: Mobile, Sensors, Data and the Future of Privacy, a book that highlights what the future will look like using context-aware technologies and, in many ways, what today already does.

Technologist Peter Semmelhack (http://www.gadgetocracy.com/) argues that the aim of connecting a device to a network is to exchange and share information with other devices on that same network. Semmelhack proposes seven key attributes, called the "social seven," which I believe form an excellent framework for examining how context-aware technologies can enable educational technology designers to better design their products and machines with the end user in mind.

The 2012 Globe and Mail article "The Smartphone Knows What You're Thinking" featured context-aware computing, predicting that smartphones would have sensors that could detect a person's location, the time of day, and the presence of others to tell whether that person is in a meeting or listening to a presentation, and even dismiss incoming calls or revert to silent mode.
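The kind of rule-based context sensing the article describes can be sketched in a few lines. The helper below is purely illustrative; the context fields and the decision rules are my own assumptions, not any real phone API.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A snapshot of the signals a context-aware phone might gather."""
    location: str            # e.g. "office", "home"
    hour: int                # 0-23, local time
    people_nearby: int       # hypothetically sensed via Bluetooth or audio
    in_calendar_event: bool  # a meeting is on the user's calendar right now

def ringer_mode(ctx: Context) -> str:
    """Decide how the phone should behave, given its context."""
    # In a meeting or presentation with others present: stay silent.
    if ctx.in_calendar_event and ctx.people_nearby > 1:
        return "silent"
    # Late at night at home, default to vibrate.
    if ctx.location == "home" and (ctx.hour >= 22 or ctx.hour < 7):
        return "vibrate"
    return "ring"

print(ringer_mode(Context("office", 14, 5, True)))   # silent
print(ringer_mode(Context("home", 23, 1, False)))    # vibrate
```

The point is not the specific rules but that the phone, rather than the user, consults the context before acting.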

The technology research firm Gartner identifies context-aware computing as one of its Top 10 Strategic Technologies for 2011. Gartner predicts that by 2013 more than half of Fortune 500 companies will have context-aware computing initiatives, and that by 2016 one-third of worldwide mobile consumer marketing will be context-awareness-based. Connected World Magazine has already become the leading business and technology publication providing the intelligence industry titans need and the guidance consumers crave. Do I need to say more about our upcoming connected, smart-ready world?

Technology columnist Brian Proffitt makes three insightful predictions about the Internet of Things for 2014, which help explain some of the critical technologies that need to be in place for the IoT to be realized. If at least one of these predictions comes true, we could see a very different world of technology in the year ahead.

Prediction #1: More commercial deals like AllSeen will get vendors working towards a common communications platform through which devices can readily pass information along to each other. Just look at how the AllSeen Alliance was formed.

Prediction #2: Consumers will start to see more examples of device-to-device communication as more hardware vendors incorporate smarter communication devices within their products.

Prediction #3: Payment systems, whether existing credit and debit cards, new systems like Coin or all-online systems like PayPal and Google Wallet, will become more integrated with the Internet of Things, smoothing the friction for transactions.


Wednesday, December 18, 2013

Context-Awareness in a "Smart" World

Having recently completed a self-directed studies research project on context-aware media in education with Dr. David Vogt, I have accumulated a substantial amount of knowledge about the cutting-edge development of "smart" technologies. Smart technology using context-awareness is really about how machines can "talk" to each other.

What do self-driving automated cars have to do with context-awareness technology? While California recently passed the self-driving car bill, the concept of the self-driving automated car has fascinated engineers since the early 1930s, as revealed in this 1934 issue of Popular Science Magazine.

Simone Fuchs, Stefan Rass, Bernhard Lamprecht, and Kyandoghere Kyamakya from the University of Klagenfurt have done extensive research on context-awareness and driver assistance systems (DAS). In their 2006 paper, "Context-Awareness and Collaborative Driving for Intelligent Vehicles and Smart Roads," the authors assert that a context-aware system is one in which there is a constant exchange of information that is generated by and for other vehicles, or "inter-vehicle collaboration" (Fuchs et al., 2006). http://youtu.be/b_m8DqTlOLE In a way, this is already happening on a small scale with current "self-parking" cars.
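A minimal sketch of the inter-vehicle collaboration Fuchs et al. describe: each vehicle broadcasts its state, and nearby vehicles react, for instance by slowing when a car ahead brakes hard. The message format, distances, and thresholds here are invented for illustration, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class VehicleMessage:
    """A state broadcast a collaborating vehicle might send."""
    vehicle_id: str
    position_m: float    # distance along the road, in metres
    speed_kmh: float
    hard_braking: bool

def advisory_for(my_position_m: float, messages: list[VehicleMessage]) -> str:
    """React to broadcasts from vehicles ahead on the same road."""
    for msg in messages:
        ahead_by = msg.position_m - my_position_m
        # A hard-braking vehicle less than 100 m ahead is a hazard.
        if msg.hard_braking and 0 < ahead_by < 100:
            return f"slow down: {msg.vehicle_id} braking {ahead_by:.0f} m ahead"
    return "maintain speed"

traffic = [
    VehicleMessage("car-7", 250.0, 60.0, True),
    VehicleMessage("car-9", 900.0, 80.0, False),
]
print(advisory_for(200.0, traffic))  # slow down: car-7 braking 50 m ahead
```

Real systems would of course fuse many more signals, but the core idea is the same: context generated by one vehicle becomes actionable information for another.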

 In 2006, when the Lexus LS 460 was unveiled at Detroit's North American International Auto Show, the vehicle's ability to parallel park itself was a very novel concept, and it generated instant media buzz. A number of car manufacturers have since rolled out their own self-parking systems, which guide cars into parking spaces with little help from the driver. 
  • A number of companies and research organizations have developed working prototype autonomous vehicles, including Mercedes-Benz, General Motors, Continental Automotive Systems, Autoliv Inc., Bosch, Nissan, Toyota, Audi, and Google 
How Self-Parking works 
The self-parking system accesses the car park's management system in order to find and allocate a free parking space and transmit the route to the car. The system uses context-awareness technology in order for a "driver-less" car to function. 
  • The Advanced Parking Guidance System (APGS) for Lexus models in the United States is the first production automatic parking system 
  • Since most modern car parks have more than one level or are underground, GPS-based positioning is not really an option, so instead the management system uses Wi-Fi to transmit the route. 
  • Computer processors are tied to the vehicle's sonar warning system, backup camera, and two additional forward sensors on the front side fenders. 
  • Sonar park sensors, mounted on the forward and rear bumpers, detect obstacles, allowing the vehicle to sound warnings and calculate optimum steering angles during regular parking. 
  • These sensors, along with the two additional parking sensors, are tied to a central computer processor, which in turn is integrated with the backup camera system to provide the driver with parking information. 
  • A representative box on the screen identifies the parking space; if the space is large enough to park in, the box is green; if the box is incorrectly placed, or lined in red, the arrow buttons move the box until it turns green.
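The green/red box behaviour boils down to a simple fit test: compare the measured gap against the car's length plus a manoeuvring margin. The numbers and function below are a hypothetical sketch, not Lexus's actual APGS logic.

```python
CAR_LENGTH_M = 5.0   # assumed length of the vehicle
MARGIN_M = 1.2       # assumed clearance needed to manoeuvre into the space

def box_colour(gap_m: float) -> str:
    """Return the colour of the on-screen parking box for a measured gap.

    The sonar sensors measure the free space between parked cars; the
    box turns green only if the car plus its margin fits in that gap.
    """
    return "green" if gap_m >= CAR_LENGTH_M + MARGIN_M else "red"

print(box_colour(7.0))  # green
print(box_colour(5.5))  # red
```

Everything else in the system, the Wi-Fi route from the car park's management system, the steering-angle calculation, exists to feed and act on decisions as simple as this one.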

The Social Seven Criteria 
But how do we define context-awareness, as opposed to something like location-based media?   To better understand context-awareness through a framework, I chose Peter Semmelhack's "Social Seven" criteria to help us examine the multiple layers that construct a context-aware environment.  Like a taxonomy, the social seven helps us define the characteristics of "smartness."  
Level 1 - Identity - Each driver is unique and has his or her own mobile phone; each car is assigned a unique identifier.
 Level 2 - Discoverability - Each car that enters the parking lot has sensors that automatically connect it to the system.
Level 3 - Presence - Visual and audio cues coming from the mobile will alert users that they are connected to the system.
Level 4 - Activity - There is constant communication flowing between the mobile phone app, the sensors in the car, and the parking lot's main processor.
Level 5 - Status - As the car shifts gears into driving mode, the central computer dashboard indicates the activity happening during the sequence of events.
Level 6 - Access - Drivers must first use their mobiles to log in to the system in order for the car to access the system (and vice versa).

Level 7 - Privileges - There is a level of control that mobile transit users can set using their mobiles so that they can let the system know how much "context" about the traveler is necessary.
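As a quick way of auditing a design against this taxonomy, the seven levels can be treated as a checklist. The encoding below is my own illustration, not anything from Semmelhack's book.

```python
# The "social seven" levels, in Semmelhack's order.
SOCIAL_SEVEN = [
    "identity",        # every device and user is uniquely identifiable
    "discoverability", # devices can find each other on the network
    "presence",        # users can tell when they are connected
    "activity",        # information flows continuously between parts
    "status",          # the system reports what it is doing
    "access",          # devices authenticate before using the system
    "privileges",      # users control how much context is shared
]

def smartness_report(supported: set[str]) -> str:
    """Summarise which of the social seven a system satisfies."""
    missing = [level for level in SOCIAL_SEVEN if level not in supported]
    if not missing:
        return "fully 'social' by the social-seven criteria"
    return "missing: " + ", ".join(missing)

# The self-parking system above arguably covers all seven levels:
print(smartness_report(set(SOCIAL_SEVEN)))
# A bare location-based app might only manage the first three:
print(smartness_report({"identity", "discoverability", "presence"}))
```

Framed this way, the difference between location-based media and genuine context-awareness is visible at a glance: the former typically stops at the first few levels.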

The Future? The Google Car

The self-driving car's autonomous mechanisms were, for the most part, developed for Google by Stanford's Sebastian Thrun. The Google Car's underlying technologies consist of Doppler radar and the remote-sensing laser LIDAR, used in conjunction with optical sensors and general-purpose computing on graphics processing units (GPGPU) to feed data into machine learning systems that are programmed to identify threats.

For example, if a live object leaps out into the road, the Google Car's ABS brakes are automatically applied to help the driver steer around danger – or at the very least, reduce the risk of harm to the driver and passengers by pre-arming airbags and other safety systems.
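The decision chain in that example, fuse sensor readings, judge how imminent the threat is, then brake and pre-arm safety systems, can be sketched roughly as follows. Everything here (the detection fields, thresholds, and actions) is a simplification invented for illustration, not Google's actual stack.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """An object reported by a fused LIDAR/radar/camera pipeline."""
    kind: str                # e.g. "pedestrian", "vehicle", "debris"
    distance_m: float
    closing_speed_ms: float  # positive means moving toward the car

def respond(detections: list[Detection], speed_ms: float) -> list[str]:
    """Return the safety actions to take for the current detections."""
    actions = []
    for d in detections:
        closing = speed_ms + d.closing_speed_ms
        if closing <= 0:
            continue  # the object is not actually getting closer
        time_to_impact_s = d.distance_m / closing
        if time_to_impact_s < 2.0:
            # Imminent collision: brake and pre-arm safety systems.
            actions.append("apply ABS braking")
            actions.append("pre-arm airbags")
        elif time_to_impact_s < 5.0:
            actions.append("warn driver")
    return actions

# A pedestrian 15 m ahead while travelling at 10 m/s: brake now.
print(respond([Detection("pedestrian", 15.0, 1.0)], speed_ms=10.0))
```

The hard part in practice is not this decision logic but producing trustworthy detections fast enough, which is where the LIDAR, radar, and GPGPU processing come in.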

The autonomous self-driving car has much potential, and its early self-parking prototypes are immensely exciting. Yet the self-parking car that finds its own vacant stall is by no means a social machine. The context-awareness technology it employs, using sensors so that machines can communicate with each other and navigate, is part of a transformative educational process in which drivers will have the opportunity to rethink and "re-learn" how to communicate with other drivers socially while on the road.

The eminent cognitive scientist and HCI researcher Donald Norman has long argued that designers tend to focus on technology, attempting to automate whatever is possible for safety and convenience (Norman, 2009, p. 5). However, such "intelligence" is limited, as no machine can have sufficient knowledge of the factors that go into human decision-making - the "intelligence" is in the mind of the designer. As Norman puts forth, learning to "read" machines is in fact a critical part of creating smart machines. The "smart" is really at times a misnomer.

Friday, December 13, 2013

Augmented Reality in the Library


Although still in its embryonic stages of use in libraries, museums, and art galleries, augmented reality has really taken off in the entertainment industries. For example, the British multinational grocery and general merchandise retailer Tesco is using mobile technology to enable customers to scan quick response codes and look through a virtual catalog to view some of the food range it has to offer, in addition to its collection of decorations and gifts. All of the items available in that catalog and in the store are also available online at the store's official website. 

It gives consumers the chance to click and purchase the items they want and pick them up, without a shipping charge, at their local Metro store by the next day. Of course, the Christmas window display isn't designed just as a shopping experience for mobile users. This use of QR codes and augmented reality technology is becoming increasingly popular and has drawn a great deal of attention to retailer displays in the U.K. and many other places around the world.

I've had an opportunity to test out Layar, an augmented reality (AR) app, and found it a useful tool for highlighting the Irving K. Barber Learning Centre's eighty-eight-year history as the Main Library using UBC Library's digital collections. Patrons using their smartphones or iPads can view the current Wall of Recognition and see the wall "come alive" with archival images and videos of students and alumni talking about their experiences in the building - past and present.

In the 2010 Horizon Report, AR is forecast as an important technology within two to three years' time. While the capability to deliver augmented reality experiences has been around for decades, it is only very recently that those experiences have become easy and portable. Advances in mobile devices, as well as in the different technologies that combine the real world with virtual information, have led to augmented reality applications that are as near to hand as any other application on a laptop or smartphone. This is an exciting development, but it's still taking its time in libraries; as of yet, it remains an "emerging" technology that has yet to reach its tipping point.