Heavy Reading's Steve Bell on the Intersection of AI & IoT

Artificial intelligence and machine learning show great potential for advancing the Internet of Things (IoT) by turning a wide variety of data and problems into new applications and optimized network operations. As cognitive AI evolves beyond recognizing objects to transforming enterprise applications, it can be increasingly leveraged to accelerate the growth of IoT applications.

In part one of this Q&A, Telco Transformation spoke to Steve Bell, IoT Senior Analyst at Heavy Reading, about the challenges and opportunities of utilizing advancements in AI for the development of IoT applications.

Telco Transformation: Data processing and analytics capabilities are now aided by more powerful tools such as computer vision, GPUs, programmable chips and parallel data processing frameworks such as Spark. How do they expand the possibilities for IoT applications such as smart cities, smart buildings, autonomous cars, robots, or residential applications for the home and connected cars?

Steve Bell: Computer vision, vital to autonomous cars, is an example of the step-function changes brought about by artificial intelligence (AI) and big data analytics. It has been the subject of intense research for the last 50 years, but a major breakthrough came about five years ago at Stanford University, driven by Professor Fei-Fei Li, director of the Artificial Intelligence Lab and the Stanford Vision Lab (now at Google).

In the past, researchers tried to write specific algorithms that would dissect and describe the contents of an image viewed by a computer, such as a cat on a sofa. If another image showed the same cat on the same sofa but resting on its back with its legs in the air, it would require another specific algorithm. Dr. Li took another tack, taking a cue from how a child learns: by continuous exposure to millions of images, without any language, framework or preexisting references. She wondered whether a general learning algorithm, fed huge numbers of images continuously, would enable a computer to identify objects and learn. This was the first time two different capabilities, machine learning and computer vision, were married to achieve object identification.

Machine learning algorithms need vast amounts of data to learn, and this data was available from the millions of images captured by smartphones and stored in the cloud. These images, 120 million of them, were categorized and catalogued to train machine learning algorithms, and graphics processing units (GPUs) provided the processing power to parse the data. It is this convergence of multiple technologies, with their combined capabilities, that is producing these step-function advances.

Computer vision is a foundational technology for autonomous cars, but it is not complete without two other ingredients: the computing power to interpret 3D sensor fusion data from LIDAR and depth-sensing cameras, which drives 3D Simultaneous Localization and Mapping (SLAM) modeling of objects, and deep-learning algorithms that use data from these sensors to generate 360-degree views of the world.

TT: What are the other related step-function changes in this context?

SB: Video recognition obviously benefits from the processing power of GPUs. But GPUs consume enormous amounts of power, which puts the financial viability of their applications at risk.
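As an aside, the "general learning algorithm" approach Bell describes, training on large volumes of labeled images rather than hand-coding rules for each scene, maps directly onto how image classifiers are built today, and a training loop also shows where the GPU appetite he mentions comes from. The following is a minimal, hypothetical sketch assuming PyTorch with torchvision 0.13 or later and a local folder of labeled images; the model choice, paths and hyperparameters are illustrative only, not anything from the interview.

```python
# Minimal image-classifier training sketch (PyTorch). Illustrative only:
# assumes labeled images on disk under data/train/<class_name>/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Resize and normalize images to what the pretrained backbone expects.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Small pretrained backbone; swap the final layer for our own classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

# The GPU is what makes looping over millions of labeled images
# tractable, which is also why these workloads are so power hungry.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```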
Moreover, computer vision alone does not enable a larger application such as autonomous cars. Machine learning goes with it, and the GPUs required to enable it could be too expensive and power-inefficient for car manufacturers. So we are starting to see IoT architectures that use hybrid systems-on-a-chip (SoCs) for parallel processing of data. These SoCs combine application-specific integrated circuits (ASICs), which are cheaper and suited to specific types of data processing, with CPUs for general-purpose processing and GPUs for video data processing.

TT: In the same vein, how would you characterize the step-function changes in AI?

SB: The most notable change is the introduction of AI platforms by Amazon, Microsoft and Google that are made available to developers. There are not enough people who understand AI and what can be achieved with it. The platforms allow developers to use natural language capabilities, or other types of cognitive AI, and incorporate them into applications without needing to develop their own machine learning capabilities. So developers have been finding ways to use chatbots, for example, to improve unified communications or connected home applications, and even industrial IoT, where equipment can be instructed to execute functions with voice commands.

A great deal of startup activity is also concentrated in cybersecurity. Real-time analytics on streaming data find anomalous patterns that point to malicious activity, patterns that humans cannot possibly spot by manually inspecting the data.

The cloud provides a means to aggregate IoT data, external data and enterprise data, and to format the data series in multiple ways to examine numerous scenarios. The food industry, for example, can use external data to predict the span of the growing season and match it with IoT supply-chain data and enterprise sales data to identify patterns that help match demand with supply and plan distribution.

Additionally, it is now possible to analyze data from videos, images, speech and voice. You could, for example, look for references to celebrities and know when they appear in social media feeds such as Periscope. This was the application that Dextro developed. It also has applications in the public safety and security industry, which is why Axon, a division of Taser, acquired Dextro in February 2017. Video from security cameras and body cams can be used for facial recognition, but the system can also redact images from the video so that it can be released to the public for law enforcement procedures, such as alerts and court hearings.

— Kishore Jethanandani, Contributing Writer, Telco Transformation
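Bell's point about prebuilt AI platforms, that developers can consume cognitive capabilities without building their own machine learning, is easy to see in code. The sketch below is one illustrative example using Amazon Comprehend via the boto3 SDK; it assumes configured AWS credentials and an installed boto3, and it is not drawn from the interview itself.

```python
# Illustrative use of a prebuilt cloud NLP capability (Amazon Comprehend
# via boto3). Assumes AWS credentials are configured; sketch only.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# The developer sends raw text and gets model output back, with no need
# to train or host a machine learning model of their own.
resp = comprehend.detect_sentiment(
    Text="The conveyor motor is vibrating more than usual.",
    LanguageCode="en",
)
print(resp["Sentiment"], resp["SentimentScore"])
```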
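The streaming-data anomaly detection Bell describes can be approximated, at toy scale, with a rolling statistical baseline over an event stream. The sketch below uses a simple rolling z-score, a deliberate simplification of the machine-learning systems he refers to; the window size, threshold and sample data are all invented for illustration.

```python
# Toy streaming anomaly detector: flag events that deviate sharply from
# a rolling baseline (a stand-in for the ML-driven systems described).
from collections import deque
import math

class RollingAnomalyDetector:
    def __init__(self, window=100, threshold=4.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold  # z-score above which we flag

    def observe(self, value):
        """Return True if value is anomalous versus the rolling window."""
        anomalous = False
        if len(self.values) >= 10:  # need some history first
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.values.append(value)
        return anomalous

# Example: login attempts per second reported by a network sensor.
detector = RollingAnomalyDetector()
for rate in [3, 4, 3, 5, 4, 3, 4, 5, 3, 4, 4, 3, 95]:
    if detector.observe(rate):
        print(f"anomalous rate: {rate}")
```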
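Finally, the face detection and redaction workflow mentioned in connection with Axon and Dextro can be sketched with off-the-shelf tools. The example below uses OpenCV's bundled Haar cascade to blur detected faces in a single frame; it assumes opencv-python is installed and a local frame.jpg, and it stands in for, rather than reproduces, any production system.

```python
# Minimal face-redaction sketch using OpenCV's bundled Haar cascade.
# Assumes opencv-python is installed and frame.jpg exists; sketch only.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect face bounding boxes, then blur each region beyond recognition.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
    face = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)

cv2.imwrite("frame_redacted.jpg", frame)
```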