What do self-driving cars have to do with context-awareness technology? While California recently passed its self-driving car bill, the self-driving automobile has fascinated engineers since the early 1930s, as this 1934 issue of Popular Science reveals.
Simone Fuchs, Stefan Rass, Bernhard Lamprecht, and Kyandoghere Kyamakya of the University of Klagenfurt have done extensive research on context awareness and driver assistance systems (DAS). In their 2006 paper, "Context-Awareness and Collaborative Driving for Intelligent Vehicles and Smart Roads," the authors assert that a context-aware system is one with a constant exchange of information generated by and for other vehicles, or "inter-vehicle collaboration" (Fuchs et al., 2006). http://youtu.be/b_m8DqTlOLE In a way, this is already happening on a small scale with current "self-parking" cars.
In 2006, when the Lexus LS 460 was unveiled at Detroit's North American International Auto Show, the vehicle's ability to parallel park itself was a novel concept that generated instant media buzz. A number of car manufacturers have since rolled out their own self-parking systems, which guide cars into parking spaces with little help from the driver.
- A number of companies and research organizations have developed working prototype autonomous vehicles, including Mercedes-Benz, General Motors, Continental Automotive Systems, Autoliv Inc., Bosch, Nissan, Toyota, Audi, and Google.
- Audi's self-parking system is called Piloted Parking.
The self-parking system accesses the car park's management system to find and allocate a free parking space and transmit the route to the car. This is context-awareness technology at work: it is what allows a "driver-less" car to function.
- The Advanced Parking Guidance System (APGS) for Lexus models in the United States is the first production automatic parking system.
- Since most modern car parks have more than one level or are underground, GPS-based positioning is not really an option, so instead the management system uses Wi-Fi to transmit the route.
- APGS relies on computer processors tied to the vehicle's sonar warning system, backup camera, and two additional forward sensors on the front side fenders.
Sonar parking sensors on the front and rear bumpers detect obstacles, allowing the vehicle to sound warnings and calculate optimum steering angles during regular parking.
These sensors, together with the two additional parking sensors, are tied to a central computer processor, which in turn is integrated with the backup camera system to provide the driver with parking information.
A representative box on the screen identifies the parking space: if the space is large enough to park in, the box is green; if the box is incorrectly placed, or lined in red, the driver uses the arrow buttons to move the box until it turns green.
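The green/red box logic described above can be sketched as a simple gap check. This is only an illustration, not the APGS implementation; the car length and maneuvering margin below are hypothetical values:

```python
# Hypothetical sketch of the on-screen parking-box logic: sonar sensors
# report the gap between obstacles, and the box turns green only when
# the gap exceeds the car's length plus a maneuvering margin.

CAR_LENGTH_M = 5.0       # assumed vehicle length in meters
REQUIRED_MARGIN_M = 1.2  # assumed extra room needed to maneuver

def box_color(gap_length_m: float) -> str:
    """Return the color of the on-screen parking box for a detected gap."""
    if gap_length_m >= CAR_LENGTH_M + REQUIRED_MARGIN_M:
        return "green"  # space is large enough to park in
    return "red"        # driver must reposition the box or find another spot

print(box_color(6.5))  # green
print(box_color(5.5))  # red
```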
The Social Seven Criteria
But how do we define context-awareness, as opposed to something like location-based media? To better understand context-awareness, I chose Peter Semmelhack's "Social Seven" criteria as a framework for examining the multiple layers that construct a context-aware environment. Like a taxonomy, the Social Seven helps us define the characteristics of "smartness."
Level 1 - Identity - Each driver is unique and has his or her own mobile phone; each car is assigned a unique identifier.
Level 2 - Discoverability - Each car that enters the parking lot has sensors that automatically connect it to the system.
Level 3 - Presence - Visual and audio cues coming from the mobile will alert users that they are connected to the system.
Level 4 - Activity - There is constant communication flowing between the mobile phone app, the sensors in the car, and the parking lot's main processor.
Level 5 - Status - As the car shifts gears into driving mode, the central computer dashboard indicates the activity happening during the sequence of events.
Level 6 - Access - Drivers must first use their mobiles to log in to the system in order for the car to access the system (and vice versa).
Level 7 - Privileges - There is a level of control that mobile transit users can set using their mobiles so that they can let the system know how much "context" about the traveler is necessary.
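To keep the taxonomy concrete, the seven levels above can be modeled as a small checklist. This is a sketch only; the field names and scoring function are my own, not Semmelhack's:

```python
from dataclasses import dataclass, fields

@dataclass
class SocialSeven:
    """The Social Seven criteria as boolean capabilities of a system."""
    identity: bool         # unique driver and car identifiers
    discoverability: bool  # car auto-connects on entering the lot
    presence: bool         # mobile cues confirm the connection
    activity: bool         # live traffic between app, car, and lot
    status: bool           # dashboard reflects the current step
    access: bool           # log-in gates the car/system handshake
    privileges: bool       # traveler controls how much context is shared

def smartness_score(s: SocialSeven) -> int:
    """Count how many of the seven criteria the system satisfies."""
    return sum(getattr(s, f.name) for f in fields(s))

self_parking = SocialSeven(True, True, True, True, True, True, True)
print(smartness_score(self_parking))  # 7
```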
The Future? The Google Car
The self-driving car's autonomous mechanisms were in large part developed for Google by Stanford's Sebastian Thrun. The Google Car's underlying technologies consist of Doppler radar and remote-sensing laser LIDAR used in conjunction with optical sensors and general-purpose computing on graphics processing units (GPGPU) to feed data into machine learning systems programmed to identify threats.
For example, if a live object leaps out into the road, the Google Car automatically applies its anti-lock brakes to help the driver steer around the danger – or, at the very least, reduces the risk of harm to the driver and passengers by pre-arming airbags and other safety systems.
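A heavily simplified sketch of the pipeline just described: range readings from radar and LIDAR are fused, checked against a time-to-impact threshold, and used to trigger braking and pre-armed safety systems. All sensor weights, thresholds, and values here are hypothetical; the real system feeds such inputs into trained machine learning models rather than a fixed rule:

```python
# Hypothetical sketch: fuse range estimates from two sensors, then flag
# a threat when an object is close and closing fast. A fixed 2-second
# time-to-impact threshold stands in for a trained threat classifier.

def fuse_range(radar_m: float, lidar_m: float) -> float:
    """Combine two range estimates, weighting the higher-resolution LIDAR more."""
    return 0.3 * radar_m + 0.7 * lidar_m

def safety_actions(range_m: float, closing_speed_mps: float) -> list[str]:
    """Decide which safety systems to trigger for a detected object."""
    actions = []
    if closing_speed_mps > 0 and range_m / closing_speed_mps < 2.0:
        actions.append("apply_abs_brakes")  # under 2 s to impact
        actions.append("pre_arm_airbags")
    return actions

# An object 10-12 m ahead, closing at 8 m/s, triggers both responses:
print(safety_actions(fuse_range(12.0, 10.0), 8.0))
# ['apply_abs_brakes', 'pre_arm_airbags']
```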
The autonomous self-driving car has much potential, and its early self-parking prototypes are immensely exciting. Yet a self-parking car that finds its own vacant stall is by no means a social machine. The context-awareness technology it employs – sensors that let machines communicate with one another so that navigation can happen – is part of a transformative educational process in which drivers will have the opportunity to rethink and "re-learn" how to communicate socially with other drivers on the road.
The eminent cognitive scientist and HCI researcher Donald Norman has long argued that designers tend to focus on technology, attempting to automate whatever is possible for safety and convenience (Norman, 2009, 5). However, such "intelligence" is limited, since no machine can have sufficient knowledge of the factors that go into human decision-making: the "intelligence" is in the mind of the designer. As Norman puts it, learning to "read" machines is in fact a critical part of creating smart machines. "Smart" is, at times, a misnomer.