Toyota Research Institute Demos Automated Driving Progress
Sept. 29, 2017—Toyota Research Institute (TRI) is demonstrating its progress in the development of automated driving technology and other project work to the investor community this week.
"In the last few months, we have rapidly accelerated our pace in advancing Toyota's automated driving capabilities with a vision of saving lives, expanding access to mobility, and making driving more fun and convenient," said Dr. Gill Pratt, CEO of TRI. "Our research teams have also been evolving machine intelligence that can support further development of robots for in-home support of people."
As Dave Hobbs noted in his interview with Ratchet+Wrench, training will be required to properly repair vehicles with advanced driver-assistance systems (ADAS). He says that if technicians and service advisors don't obtain training on this advancing vehicle technology, shops are not just going to lose customers to dealerships and progressive shops; they're going to put drivers in danger.
Here are the updates from Toyota:
Since unveiling its Platform 2.0 research vehicle in March 2017, TRI has quickly updated its automated driving technology. The next iteration, dubbed Platform 2.1, is being shown for the first time on a closed course. In parallel with the creation of this test platform, TRI has made strong advances in deep learning computer perception models that allow the automated vehicle system to understand its surroundings more accurately, detect objects and roadways, and better predict a safe driving route. These new architectures are faster, more efficient and more accurate. In addition to object detection, the models' prediction capabilities can also provide data about road elements, such as road signs and lane markings, to support the development of maps, which are a key component of automated driving functionality.
Platform 2.1 also expands TRI's portfolio of suppliers, incorporating a new high-fidelity LIDAR system provided by Luminar. This new LIDAR provides a longer sensing range, a much denser point cloud to better detect positions of three-dimensional objects, and a field of view that is the first to be dynamically configurable, which means that measurement points can be concentrated where sensing is needed most. The new LIDAR is married to the existing sensing system for 360-degree coverage. TRI expects to source additional suppliers as disruptive technology becomes available in the future.
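The idea of a dynamically configurable field of view can be illustrated with a simple budget calculation: a scan frame has a fixed number of measurement points, and a larger share of them is concentrated where sensing matters most. The function name and every number below are illustrative assumptions, not Luminar's actual API or specifications.

```python
# Hypothetical sketch of a dynamically configurable LIDAR field of view:
# a fixed budget of measurement points is redistributed so that a region
# of interest (e.g., straight ahead at highway speed) gets denser coverage.
# All names and values are illustrative, not the supplier's real interface.

def allocate_scan_points(total_points, fov_deg, roi_deg, roi_fraction):
    """Split a scan budget between a region of interest and the rest of the FOV.

    total_points  -- measurement points available per scan frame
    fov_deg       -- full horizontal field of view in degrees
    roi_deg       -- width of the region of interest in degrees
    roi_fraction  -- share of the budget concentrated in the ROI
    Returns (points per degree inside the ROI, points per degree elsewhere).
    """
    roi_points = int(total_points * roi_fraction)
    rest_points = total_points - roi_points
    roi_density = roi_points / roi_deg
    rest_density = rest_points / (fov_deg - roi_deg)
    return roi_density, rest_density

# Concentrating 70% of a 200,000-point frame in a 30-degree window
# yields a much denser point cloud in that window than elsewhere:
roi, rest = allocate_scan_points(200_000, 120, 30, 0.7)
```

The payoff of this kind of allocation is that distant objects in the driving path are hit by many more laser returns, which is what makes their three-dimensional positions easier to resolve.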
On Platform 2.1, TRI created a second vehicle control cockpit on the front passenger side with a fully operational drive-by-wire steering wheel and pedals for acceleration and braking. This setup allows the research team to probe effective methods of transferring vehicle control between the human driver and the autonomous system in a range of challenging scenarios. It also helps with development of machine learning algorithms that can learn from expert human drivers and provide coaching to novice drivers.
TRI has also designed a unified approach to showing the various states of autonomy in the vehicle, using a consistent UI across screens, colored lights and a tonal language that is tied into Guardian and Chauffeur. The institute is also experimenting with increasing a driver's situational awareness by showing a point cloud representation of everything the car "sees" on the multimedia screen in the center stack.
With its broad-based advances in hardware and software, Platform 2.1 is a research tool for concurrent testing of TRI's dual approaches to vehicle autonomy using a single technology stack. Under Guardian, the human driver maintains vehicle control and the automated driving system operates in parallel, monitoring for potential crash situations and intervening to protect vehicle occupants when needed. Chauffeur is Toyota's version of SAE Level 4/5 autonomy where all vehicle occupants are passengers. Both approaches use the same technology stack of sensors and cameras. This week marks the first time the Guardian and Chauffeur systems have been demonstrated on the same platform, which includes multiple test scenarios to demonstrate TRI's advances in both applications.
These scenarios include the ability of the Guardian system to detect distracted or drowsy driving in certain situations, and to take action if the driver does not react to turns in the road. In such a situation, the system first warns and then will intervene with braking and steering to safely follow the road's curvature. Chauffeur test scenarios demonstrate the vehicle's ability to drive itself on a closed course, navigate around road obstacles, and make a safe lane change around an impediment in its path with another vehicle traveling at the same speed in the lane next to it.
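The warn-then-intervene behavior described for Guardian can be sketched as a small state machine. The threshold, signal names and return values here are assumptions for illustration; the production system fuses many sensors and runs far richer perception models than a single steering comparison.

```python
# Minimal sketch of Guardian's escalation as a curve approaches: monitor
# while the driver tracks the road, warn at the first sign of inattention,
# and intervene with braking/steering if the warning goes unheeded.
# Threshold and names are illustrative assumptions.

def guardian_step(driver_steering, required_steering, warned):
    """Decide whether to monitor, warn, or intervene.

    driver_steering   -- driver's current steering input (normalized)
    required_steering -- steering needed to follow the road's curvature
    warned            -- whether a warning has already been issued
    Returns (action, warned).
    """
    error = abs(required_steering - driver_steering)
    if error < 0.1:                 # driver is tracking the curve
        return "monitor", False
    if not warned:                  # first sign of inattention: warn
        return "warn", True
    # driver did not react to the warning: brake and steer for the curve
    return "intervene", warned

action, warned = guardian_step(0.0, 0.5, warned=False)   # -> "warn"
action, warned = guardian_step(0.0, 0.5, warned=warned)  # -> "intervene"
```

The key property, matching the description above, is that intervention only follows an ignored warning, so control stays with the attentive human driver.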
In addition to real-world testing, TRI is using simulation to accurately and safely test engineering assumptions, and investors can experience automated driving test scenarios in a virtual simulator.
Robotics and AI
TRI is also making advancements in robotics and artificial intelligence.
As part of its research into human support robots that can assist with tasks in the home, such as item retrieval, TRI has pioneered new tools to give future robots enhanced, human-like dexterity in order to grasp and manipulate objects so that they are not dropped or damaged. TRI is also applying computer vision and artificial intelligence to robot development, allowing robots to detect the physical presence of humans and objects, note their locations and retrieve objects for humans when prompted. The robots can detect when objects have been relocated, updating the item's location in the robot's database, and even detect faces of known people and differentiate individuals.
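The object-tracking behavior described above, noticing that a known item has moved and updating its stored location, amounts to maintaining a simple memory of last-seen locations. The class and field names below are illustrative assumptions, not TRI's actual software.

```python
# Hedged sketch of the relocation-detection behavior: when the robot
# re-observes a known item at a new location, it flags the move and
# updates its database. Names are illustrative assumptions.

class ObjectMemory:
    """Track the last known location of household items."""

    def __init__(self):
        self.locations = {}

    def observe(self, item, location):
        """Record a sighting; report whether the item was relocated."""
        moved = item in self.locations and self.locations[item] != location
        self.locations[item] = location
        return moved

memory = ObjectMemory()
memory.observe("keys", "kitchen counter")       # first sighting -> False
moved = memory.observe("keys", "coffee table")  # relocated -> True
```

In a real robot the locations would be 3-D poses produced by the vision system rather than strings, but the bookkeeping pattern is the same.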
TRI's progress in robotics has been made possible by its ability to increase the value and accuracy of simulation to augment physical testing. Since it is impossible to physically test the wide variety of situations robots may encounter in the real world, the institute uses simulated environments, constantly adapting them with data collected in real-world testing for greater precision.
Additionally, TRI is pursuing new concepts for applying artificial intelligence inside a vehicle cabin to keep occupants comfortable, safe and satisfied. The institute has created a simulator showing an in-car AI agent that can detect a driver's skeletal pose, head and gaze position and emotion to anticipate needs or potential driving impairments. For example, when the system detects the driver taking a drink and facial expressions that might indicate discomfort, the agent hypothesizes that the driver might be feeling warm and can adjust the air conditioning or roll down the windows. If the agent detects drowsiness, it might provide a verbal prompt in the cabin suggesting that the driver pull over for coffee or navigate the car to a coffee shop.
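The hypothesize-then-act loop in these examples can be sketched as a small rule set mapping detected driver states to cabin actions. The observation labels and responses mirror the examples in the text, but the rule structure and names are assumptions, not TRI's implementation.

```python
# Illustrative sketch of the in-cabin agent's hypothesize-then-act rules.
# Observation labels and actions follow the article's examples; everything
# else is an assumption for illustration.

def cabin_agent(observations):
    """Map a set of detected driver states to a suggested cabin action."""
    if "drowsy" in observations:
        # safety takes priority over comfort
        return "suggest pulling over for coffee"
    if "drinking" in observations and "discomfort" in observations:
        # hypothesis: the driver is feeling warm
        return "lower cabin temperature"
    return "no action"

cabin_agent({"drinking", "discomfort"})  # -> "lower cabin temperature"
cabin_agent({"drowsy"})                  # -> "suggest pulling over for coffee"
```

Ordering the rules so that impairment cues outrank comfort cues is one plausible design choice; the article itself does not specify how the agent prioritizes.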
Automated Driving White Paper
In addition to its technology demonstrations, Toyota also released a comprehensive overview of its work on automated driving, including the philosophy that guides its approach to the technology, its ongoing research programs, and its near-term product plans. The white paper reflects Toyota's understanding of the potential for automated driving to dramatically expand mobility options for people around the world, helping to create a society where mobility is safe, convenient, enjoyable, and available to everyone. It summarizes the dual concepts of Guardian and Chauffeur that guide Toyota's research and the Mobility Teammate Concept that guides its product development.
"Vehicles with automated driving technology will bring many benefits to society, but one of the top priorities at Toyota is to help make the traffic environment safer," said Kiyotaka Ise, Chief Safety Technology Officer and Senior Managing Officer of Toyota Motor Corporation. "By having our vehicle technologies seamlessly anticipate and interact with human beings and the traffic environment, we will get closer to realizing a future without traffic injuries or fatalities."