Scaling up vision and AI performance

Demand is growing for faster processor architectures to support embedded vision and artificial intelligence.

With demand for image sensors growing rapidly and new opportunities emerging in the mobile, virtual reality (VR), automotive and surveillance markets, interest in applications that combine vision and artificial intelligence (AI) is surging.

"We are seeing work on a range of future applications from phones that automatically identify the user, to autonomous cars that are able to recognise an individual’s driving style. But whatever the application, all of them are looking at vision sensors that use AI to make decisions," says Pulin Desai, Product Marketing Director for Cadence’s Tensilica Vision DSP Product Line.

"Each of them brings with them challenges for the design engineer. Crucially, they’ll have to be able to process at higher resolutions, use algorithms that are capable of processing more frames and, while achieving higher performance levels, will need to do so by using less power."

Looking at one specific market – mobile phones – changing consumer requirements will see end users creating more video content and applying a much broader range of effects in the process. All of this will require greater computational capacity. Likewise, as more augmented reality (AR) and VR applications are developed for mobile devices, these, too, will require vision-based simultaneous localisation and mapping (SLAM) using AI.

Improving the user’s experience of AR/VR, for example, will require more processing capability, lower latency and headsets with on-device AI for object detection, recognition and eye tracking.
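As a rough illustration of the kind of per-frame computation that vision-based SLAM implies, the Python sketch below estimates camera motion between two consecutive frames using OpenCV’s ORB features and essential-matrix recovery – a visual-odometry front end, which is one building block of a full SLAM pipeline, not a complete system. The OpenCV calls are standard, but the input frames and the camera intrinsic matrix K are placeholder assumptions.

```python
# Minimal visual-odometry sketch (one building block of vision-based SLAM).
# Assumes OpenCV is available and that frame_prev / frame_next are greyscale
# images from the device camera. Illustrative only, not a full SLAM system.
import numpy as np
import cv2

def estimate_motion(frame_prev, frame_next, K):
    """Estimate relative camera rotation/translation between two frames.

    K is the 3x3 camera intrinsic matrix (an assumed calibration input).
    """
    orb = cv2.ORB_create(nfeatures=1000)             # detect ORB keypoints
    kp1, des1 = orb.detectAndCompute(frame_prev, None)
    kp2, des2 = orb.detectAndCompute(frame_next, None)

    # Match descriptors between the two frames (Hamming distance suits ORB)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover relative pose from the essential matrix (RANSAC rejects outliers)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # rotation matrix and unit-scale translation vector
```

Running feature detection, matching and robust pose estimation like this on every frame, at headset resolutions and frame rates, is exactly the workload that drives the demand for higher performance at lower power.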

Recently, the UK chip designer ARM sought to ‘prime the AI pump’ with the launch of two new processor designs that are intended to address the growing need for machine-learning devices.

The ARM Machine Learning (ML) processor is intended to speed up general AI applications, from facial recognition to machine translation, while the Object Detection (OD) processor targets the processing of visual data for object detection.

The ML processor will primarily address the needs of tablets and smartphones, while the OD processor is expected to deliver smarter vision capabilities to the wider market, including camera surveillance and drones.

Speaking earlier this year, Jem Davies, ARM’s vice president of machine learning, said that while AI processors tended to appear in high-end devices, there was a growing move towards putting the technology into entry-level smartphones, suggesting that devices using this technology could appear as early as next year.

As processing speeds increase, it is becoming apparent that application requirements are putting pressure on neural networks and that, until recently, according to Desai, much of that processing has been conducted in the Cloud.

"That is problematic," he contends," when we’re seeing such rapid growth in edge applications that require lower latency. At Cadence we have noticed a growing move towards on-device AI, and DSPs are becoming an increasingly important solution."

Edge computing, where processing is done on the device, has a number of advantages over Cloud computing. It’s said to be more secure, since data cannot be intercepted in transit, and it’s also much quicker and more reliable. Importantly, it’s also seen as significantly cheaper for both the user and the service provider.
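To make the latency argument concrete, the following self-contained sketch times a small on-device inference step – a single matrix multiply standing in for a real network layer – against the same computation preceded by a simulated cloud round trip. The 50ms round-trip time is an illustrative assumption, not a measured figure.

```python
# Illustrative comparison of on-device inference latency vs a cloud round
# trip. The 'model' is one matrix multiply standing in for a real network,
# and the 50ms round-trip time is an assumed, not measured, figure.
import time
import numpy as np

RTT_SECONDS = 0.050                      # assumed network round-trip time
weights = np.random.rand(256, 256).astype(np.float32)
frame_features = np.random.rand(256).astype(np.float32)

def infer(features):
    return weights @ features            # stand-in for one network layer

# On-device: the computation runs locally, with no network hop
t0 = time.perf_counter()
infer(frame_features)
on_device = time.perf_counter() - t0

# Cloud: the same computation, plus the simulated round trip
t0 = time.perf_counter()
time.sleep(RTT_SECONDS)                  # upload + download latency
infer(frame_features)
via_cloud = time.perf_counter() - t0

print(f"on-device: {on_device * 1e3:.2f} ms, "
      f"via cloud: {via_cloud * 1e3:.2f} ms")
```

Even with this generous stand-in for the model, the cloud path is dominated by the round trip itself, which is the core of the case for on-device AI in latency-sensitive edge applications.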

ARM’s announcements came as a growing number of companies look to optimise their silicon to address the needs of AI. Qualcomm is developing its own AI platform, while Intel unveiled a new line of AI-specialised chips in 2017.

Cadence, too, has responded, unveiling the Tensilica Vision Q6 DSP earlier this month.