Thursday, October 18, 2018

Robotics, computer vision is the future of Artificial Intelligence

The bigger challenge for companies is getting access to the right data to make the predictions and drive the intelligence that one is seeking


Artificial Intelligence and machine learning are taking over our lives, with the technologies at the heart of everything from data analytics to the simplest of things, like the smart assistants on our phones. Going forward, say 10 years down the line, we will see a lot more computer vision and robotics deployed at home, Atif Kureishy, VP of Emerging Practices at Teradata, told indianexpress.com.

“So, not just your Roombas, but you will actually have robot assistants that can probably do larger things. The flipside to that is as you get more access and digitisation in your homes, you got to ensure that it is met with security and trust. That is the only impedance to any of this advancing,” he said.

The technology has also made its way into smartphones with neural engine processors and smart assistants. But what is interesting is the use of AI in cameras, which we are seeing more of with each passing day. When it comes to leveraging AI in the consumer space, Kureishy believes it is going to be focused on things that are more socially oriented.

According to Kureishy, the bigger challenge for companies is getting access to the right data to make predictions and drive the intelligence they are seeking. “Another challenge is around the bigger risk management and model safety. It is very straightforward in the sense that the more you push into machine intelligence, the more opportunity to do something wrong. So you need certain guardrails in place,” he explained.

In the past, we have seen machine learning and AI models go wrong, the biggest example being Microsoft’s teenage-girl chatbot ‘Tay’, which was introduced in 2016 and taken down almost immediately for racist, sexist and other offensive remarks. In 2015, Google came under fire when its Photos app automatically labelled an African American user and his friends as gorillas. The search giant apologised for the glitch in the feature.

Kureishy is of the opinion that to control bias in ML models, one needs to stress test them and understand all the data variables. “So, if I am building increasingly complex models and I am doing it faster, experimenting more and iterating faster, then I can start to understand and see the behaviours and be able to adapt and change,” he explained.

However, such models going wrong in the enterprise sector could pose an even larger threat and cost companies huge amounts of money. For example, a malfunctioning robot on a manufacturing line that produces high-end chips could mean millions of dollars in losses for the company. “So the point is model risk management is going to be very important if you are going to start to have more intelligent applications going wrong,” Kureishy said.