Google likes to play, and the company's innovative ATAP group has been toying with a 3D-mapping project that would give hardware visual acuity approximating that of the human eye. The technology already exists in autonomous robots and military research labs, but Google's Advanced Technology and Projects division wants to make it available to everyone. Project lead Johnny Lee and his team intend to break the boundaries of mobile devices, which are currently limited to their own screens, and extend to them a human-scale understanding of space and motion. The mission is to build mobile devices that use depth sensors and high-spec cameras to craft three-dimensional maps more cheaply and easily than existing efforts. In collaboration with universities, research labs, and industrial partners, the team has built prototypes and shared them with developers, who can imagine a wide range of possibilities and work on bringing those ideas into reality.

So far, ATAP has released two pieces of hardware: a prototype smartphone equipped with Kinect-like 3D sensors and other components, and a more powerful seven-inch tablet. The tablet runs Android 4.4 KitKat on a 1080p display, powered by NVIDIA's quad-core Tegra K1 chip alongside 4GB of RAM and 128GB of internal storage, though without a microSD slot. It also features USB 3.0, micro-HDMI, Bluetooth LE, and LTE.

The Tango tablet was built with a depth sensor on the back and two cameras: one has a 4MP sensor offering high light sensitivity and fast speeds, while the other tracks motion more broadly with a 170-degree wide-angle fisheye lens. Designed with developers in mind, the tablet doesn't focus on aesthetics, but doesn't disregard them completely. The cameras are mounted at a 13-degree angle to provide the view needed for gathering accurate data without having...