
Technologies for drones, robots and autonomous vehicles


High-performance computing, vision systems and sensors, interconnect, memory, networking technologies and, increasingly, advanced techniques such as Artificial Intelligence (AI) are working in concert to give autonomous machines ever-higher levels of autonomy.

When we talk about autonomous applications, devices such as drones, robots and driverless vehicles come to mind. Typically, what they all have in common is the ability to gather information on their surroundings, process this data, interpret it and act on it. The key difference between a simple remote-controlled or programmable device and an autonomous machine is, of course, its intelligent ability to do things on its own. This almost intuitive response to the data presented is called artificial intelligence, or AI.

The push toward industrial digital transformation and automation is generating a growing demand for greater autonomy in robotics throughout the industrial sector. The level of autonomy in a robot or drone varies depending on the application, and some, but not all, machines will retain an element of operator intervention or remote control. Greater autonomy comes from the use of AI and machine learning: algorithms are trained, applied to new data and the results acted upon. Called inference, this process needs significantly more computing power than earlier programmable automated devices required. Because AI typically requires huge amounts of computing power, data has usually been sent to the cloud for processing. But times are changing.

Processor technology has advanced dramatically, giving more computing power in a smaller package without excessive power consumption, with better heat dissipation, and at a price point appropriate for many autonomous applications. Often, AI processing can now be done locally – a technology dubbed edge computing, edge intelligence or simply AI edge. The jury is still out on the precise definition of the “edge.” It could certainly be a drone, for example, or a robot, though some call these the endpoint. In a smart factory, the edge could be defined as a line-side control box running multiple robots, or an on-site control center for several processes or warehouse logistics operations.

So, what makes a smart robot?

Robots range from the incredibly small, such as those used by surgeons, up to vehicles the size of the Mars Rover. In our mind’s eye, we might see complex machines, static on production lines or moving about in a warehouse, freeing people from tedious or dangerous tasks. In operation, it is the repetition of the sense-process-act cycle, creating a feedback loop, that makes a machine smart. That smartness involves taking complex data and processing it with AI and machine learning, and it is what differentiates a truly autonomous robot from a simpler, pre-programmed piece of automation.
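To make that feedback loop concrete, here is a minimal sketch in plain Python of how a sense-process-act cycle might be structured. The function names, the fake proximity sensor and the 0.5 m safety threshold are illustrative assumptions, not taken from any particular robot stack; a real controller would replace the simple rule with a trained model and real actuator drivers.

```python
import random
import time

def read_sensors():
    """Sense: gather raw measurements (here, a fake proximity reading in metres)."""
    return {"proximity_m": random.uniform(0.1, 5.0)}

def process(measurement, safety_margin_m=0.5):
    """Process: turn raw data into a decision. A production robot would run a
    learned model here; this sketch uses a simple rule to keep the loop visible."""
    return "stop" if measurement["proximity_m"] < safety_margin_m else "advance"

def act(command):
    """Act: drive the actuators, closing the feedback loop."""
    print(f"actuator command: {command}")

if __name__ == "__main__":
    for _ in range(5):       # each pass is one sense-process-act cycle
        act(process(read_sensors()))
        time.sleep(0.1)      # loop rate is set by the control requirements
```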

The latest technology is having a dramatic effect on robot development. Machine vision, machine learning, and accessible computing power mean that robots can sense and process much more data much more efficiently. They are becoming smarter, more autonomous.

Industrial robots block diagram


Figure 1: High-level block diagram of a typical robot showing central controller, sensor hub and manipulator section (Source: Infineon)


The autonomous mobile robot (AMR) is the latest must-have. Seen as part of Industry 4.0, AMRs are ideal for applications such as automated material handling and in-house transportation, particularly in logistics operations with densely packed warehouses, but also in industrial and automotive manufacturing. Fitted with computer vision to avoid obstacles, AMRs can also perform route planning and work scheduling, while onboard systems communicate with other AMRs and transmit data to a central system or an operator. Applications for AMRs are also expanding outside the industrial sector, into domestic tasks such as household vacuuming and pool cleaning, as well as education and research.
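Route planning in a densely packed warehouse is, at its simplest, a graph search over an occupancy grid. The sketch below is a hedged illustration of that idea in Python using breadth-first search; the grid layout, start and goal cells are invented, and a production AMR would typically use a more sophisticated planner (A*, D* Lite or similar) fed by live sensor maps.

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = shelf/obstacle).
    Returns a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                      # reconstruct the path back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

warehouse = [                                  # invented 3 x 5 layout
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(plan_route(warehouse, start=(0, 0), goal=(2, 4)))
```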

Can drones be smart?

Originally developed for military use at least 50 years ago for surveillance and more sinister applications, drones range in size from miniature machines that can be launched from the palm of your hand, to seriously large fixed-wing unmanned aircraft that need short runways for takeoff.

Again, high-performance, miniaturized and affordable digital technology has revolutionized drone development and opened up commercial and private use. Fitted with onboard sensors and GPS, drones are deployed in a huge and still-growing number of applications. Some, but not all, drones are controlled remotely by ground pilots. They all need to be able to transmit data somewhere.

Diagram of typical drone


Figure 2: Block diagram of a typical drone, showing processing units, electronic speed control, integrated sensors and gimbal with dedicated controller and power/battery management. (Source: Infineon)


Drones are often associated with dark and devious activity, but today they are used increasingly in humanitarian applications as well as for fun. They can save lives, survey disaster areas, deliver medication or equipment to people in trouble in remote locations, detect poisonous gases or inspect potentially dangerous structures.

Also known as unmanned aerial vehicles (UAVs), drones are being used in agriculture to assess crop needs such as irrigation and nutrients, as well as for crop spraying and livestock tracking. They have applications in archaeology, meteorology, mining and construction, environmental monitoring and conservation, and can even locate poachers. On the lighter side, drones are useful in filmmaking, consumer entertainment, tourism and the world of sport.

But what makes a drone truly autonomous, and not simply a remote-controlled device like a model aircraft? In a word: intelligence. An advanced drone can be sent to a specific GPS location, it can track a moving subject, it can avoid collisions with fixed or moving objects, and it can fly or land safely if communication with the pilot is lost.
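The last of those behaviors, failing safe when the link drops, is essentially a small decision policy running onboard. The sketch below shows one hedged way it might look; the thresholds, state names and battery figure are invented for illustration and are not taken from any flight-controller specification.

```python
def failsafe_action(link_ok, seconds_since_last_packet, battery_pct):
    """Decide what an autonomous drone should do as the pilot link degrades.
    All thresholds here are illustrative assumptions."""
    if link_ok:
        return "continue_mission"
    if battery_pct < 20:
        return "land_now"            # not enough energy left to fly home safely
    if seconds_since_last_packet > 5:
        return "return_to_home"      # autonomous return to the recorded launch point
    return "hold_position"           # brief dropout: hover and wait for the link

# Example: link lost for 8 s with a healthy battery -> fly home autonomously.
print(failsafe_action(link_ok=False, seconds_since_last_packet=8, battery_pct=60))
```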

Machine insight

Machine vision is arguably the most important factor enabling autonomous applications to sense their environment. The technology has advanced in leaps and bounds in recent years. It is not just about the camera, but also the sensor and how the images are captured, stored, interpreted and communicated. But let's start with the sensors that enable machines to see.

There is a huge range of cameras available, and selection depends on performance as well as size, weight and cost constraints. Drone aerial photography and mapping, for example, might require the latest 4K video as well as 12-megapixel still shots. Inspecting the outside of structures might require zoom capability.

It is the advent of 3D sensing that enables greater autonomy, allowing the machine to analyze images and make more complex decisions. This is a fast-growing market comprising a range of advanced and still-developing technologies, including lidar, stereo vision and structured light. Stereo vision is popular in drones and some automotive applications, giving long-range sensing above 10m. Conversely, structured light has been adopted for short-range sensing, less than 1m.
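Stereo vision recovers depth from the disparity between the same point seen by two cameras a known distance apart. A minimal sketch of the standard rectified-pair relationship, Z = f x B / d, is shown below; the focal length, baseline and disparity values are invented purely to illustrate why stereo suits the longer ranges mentioned above.

```python
def stereo_depth_m(focal_length_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d.
    Larger disparity means the object is closer; disparity near zero means far away."""
    if disparity_px <= 0:
        return float("inf")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline, 8 px disparity -> roughly 10.5 m.
print(round(stereo_depth_m(700, 0.12, 8), 1))
```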

Light detection and ranging (lidar) technology is already widely used in drones and unmanned vehicles as well as in driver assistance systems. Whether using mechanical, flash or solid-state technology, it measures the distance from an object using laser light. A typical lidar includes lasers, photodetectors and beam steering control circuitry.
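At its core, lidar ranging is a timing measurement: the pulse travels out and back at the speed of light, so the range is half the round-trip distance. A minimal sketch of that arithmetic follows; the 200 ns figure is simply an illustrative round-trip time.

```python
SPEED_OF_LIGHT_M_S = 299_792_458

def lidar_range_m(round_trip_time_s):
    """Distance from a time-of-flight measurement: half the round-trip path length."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# A 200 ns round trip corresponds to roughly 30 m of range.
print(round(lidar_range_m(200e-9), 1))
```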

The advantage of mechanical lidar is its use of high-grade optics and a rotating assembly to create a wide field of view, up to 360 degrees, with a superior signal-to-noise ratio. In operation, scanning lidar uses a focused, pulsed laser beam for long-range sensing, directed by a mechanical or MEMS mirror. MEMS sensor innovation is aiding the miniaturization of lidar mirrors. Because it can detect very small objects at longer ranges, mechanical lidar is popular in applications including AMRs and autonomous vehicles.

Flash lidar, meanwhile, enables Time-of-Flight (ToF) sensing, using multiple beams of laser light simultaneously to measure distances to objects in a complete scene in a single shot. ToF depth-ranging camera sensors can be used for obstacle avoidance, object tracking, object scanning, 3D photography, and more, both indoors and out. The field of view can be extended by mounting multiple side-looking sensors.

Solid-state lidar, typically based on CMOS and/or VCSEL image sensing technology, has the advantages of being scalable in performance and small in size, with no moving parts and low power consumption. It is critical in consumer products and is finding favor in automotive and industrial systems. Even the smallest devices offer ToF technology, high performance and VGA resolution.

Laser and beam scanning technology avoids moving mirrors and includes optical phased arrays, laser diodes, edge-emitting lasers and VCSELs. The latter are the new kids on the block, combining high power density and simple packaging, and offering the simplicity of an infrared LED with the spectral width and speed of a laser. Although taking more space than edge-emitting lasers, VCSELs are targeting robotics in industrial applications and flash lidar.

Solid-state image sensors, often based on CMOS technology, are becoming much smarter, incorporating on-chip AI processor units for compressing or pre-processing image data before shifting it to the CPU or image signal processor. This can significantly increase the speed of image recognition, for example, and simplify system design.
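The payoff of on-chip pre-processing is simply that far less data has to cross the interface to the host. The sketch below illustrates the idea in plain Python with two of the most common reductions, downsampling and region-of-interest cropping; the fake 12x16 frame is an assumption for illustration, and real sensors implement this (or smarter compression) in dedicated silicon rather than software.

```python
def downsample(frame, factor=2):
    """Keep every Nth pixel in both axes, shrinking the data sent to the host
    by roughly factor squared."""
    return [row[::factor] for row in frame[::factor]]

def crop_roi(frame, top, left, height, width):
    """Ship only the region of interest, e.g. around a detected object."""
    return [row[left:left + width] for row in frame[top:top + height]]

frame = [[(r * 16 + c) % 256 for c in range(16)] for r in range(12)]  # fake 12 x 16 image
small = downsample(frame)
print(len(frame) * len(frame[0]), "pixels before,", len(small) * len(small[0]), "after 2x downsampling")
print("ROI size:", sum(len(row) for row in crop_roi(frame, 2, 3, 4, 5)), "pixels")
```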

A host of sensors

A host of other sensors can be mounted on drones or robotic systems. These include radar, ultrasonic, infrared, photogrammetry and multispectral cameras. Sensors to detect CO2, smoke and other gases or particulates are useful in both indoor and outdoor applications, for checking air quality, for example, or a potentially dangerous environment in a disaster area. Sound detection and voice recognition can be provided by audio sensors or MEMS microphones. For a noisy environment, active noise cancellation is vital.

The application of passive RFID devices incorporating smart sensors is an emerging technology with wide potential in the industrial, machinery and construction sectors. Parameters such as temperature, humidity, motion, ambient light, electrical continuity and material characteristics can be measured passively. Drones equipped with RFID readers can gather relevant data at a range of 5m to 10m from buildings, bridges, open fields, greenhouses, and even livestock fitted with these UHF RFID tags.

Along with 3D, sensor fusion is revolutionizing data gathering for autonomous applications. The ability to link and interpret data from different sensors is making drones and robots even smarter, giving them the ability to better understand and act on their environment. Combining the data from a camera and a MEMS motion detector, for example, gives a more complete picture. Further, all sensors have a tolerance error; sensor fusion improves integrity, reliability and robustness, and can to some extent mitigate malfunctions or tolerance errors.
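A textbook small-scale example of this idea is fusing a gyroscope (fast to respond but prone to drift) with an accelerometer (drift-free but noisy) to estimate tilt. The complementary filter sketched below is a hedged illustration of sensor fusion in a few lines of Python; the sample values and the 0.98 weighting are invented, and a real system might use a Kalman filter instead.

```python
def complementary_filter(prev_angle_deg, gyro_rate_dps, accel_angle_deg, dt_s, alpha=0.98):
    """Fuse two imperfect sensors into one better estimate: the integrated gyro rate
    tracks fast motion but drifts, while the accelerometer angle is noisy but stable."""
    gyro_estimate = prev_angle_deg + gyro_rate_dps * dt_s          # short-term, drifting
    return alpha * gyro_estimate + (1 - alpha) * accel_angle_deg   # pulled back by the stable source

angle = 0.0
samples = [(1.5, 0.2), (1.4, 0.5), (1.6, 0.9)]   # invented (gyro deg/s, accel angle deg) pairs
for gyro_dps, accel_deg in samples:
    angle = complementary_filter(angle, gyro_dps, accel_deg, dt_s=0.01)
print(round(angle, 3))
```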

The key is to combine smart sensors with the necessary processing hardware, software and deep-learning algorithms to analyze and interpret the data efficiently at the endpoint. Although initially under development for driverless vehicles, sensor fusion is already finding its way into industrial autonomous applications.

Combining smart sensors


Figure 3: Robotic vehicles are often battery-powered, so reducing the total sensor count using smarter devices, plus the ability to merge the computation of multiple sensor data within a single processing unit, simplifies design, reduces power consumption and lowers cost. (Source: ARC Processors, Synopsys)


Data processing

With so much more sensor data captured, it’s not surprising that demand for higher-performance computing has increased dramatically, and more of it is needed at the edge and/or the endpoints. Dedicated image signal processors with greater computing density for edge AI and deep learning applications are emerging, and AI engines and accelerators can be added to optimize processing units for machine vision applications.

A host of other digital signal processor (DSP) or graphics processing unit (GPU) architectures are also available for computer vision and neural network/AI applications. General purpose, image processing, multi-core FPGA-based SoCs, adaptive SoCs, and GPUs are meeting the requirements of many autonomous applications, delivering affordable performance for neural network processing and hardware acceleration.

Xilinx Versal AI Edge platform



Figure 4: A whole host of high-performance SoC devices, IC cores and modules are available for high-performance embedded AI computing, including various combinations of MCUs, FPGAs, DSPs, image processors, accelerators and dedicated AI engines. (Source: Xilinx)


But if embedded computing using AI techniques at the edge sounds way too out there, it is in fact heading toward the mainstream. Fortunately, product developers no longer need to be AI experts. There is a growing range of integrated modules and subsystems available. Some camera modules, for example, incorporate image signal processors as well as some of the basic software, and some image signal processors are tailored specifically for easy integration into vision systems. Development systems and prototyping kits are also becoming available, making it much easier for designers to incorporate embedded vision technology.

Thanks for the memory

Together with high-performance embedded processing and AI compute engines comes the need for high-performance, high-density memory. A lot of dynamic random-access memory (DRAM) is needed to feed the processors with the data gathered from multiple smart sensors, and that data needs to be moved fast. Memory bottlenecks may be another factor determining whether, and how much, sensor data can be processed at the sensor, on the CPU or in the module, or whether it must be sent to a local server or the cloud for processing.

Several potential solutions are in the pipeline. One is for accelerator RAM with an enhanced memory hierarchy to accommodate evolving AI algorithms. Another, at the top end, is using high bandwidth 3D stacks of DRAM which can deliver 16GB capacity and 460GB/s bandwidth. However, for less compute-intensive applications, commodity memory, including low-power DDR4 or DDR5 DRAM, can offer a cost-effective option. Non-volatile and flash devices, and local static random-access memory (SRAM) also have their place, depending on power, speed and space constraints.
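A back-of-the-envelope calculation makes the bottleneck concrete. The sketch below estimates the raw bandwidth of a handful of uncompressed camera streams and compares it with a memory budget; the resolutions, frame rates and stream count are invented for illustration, and a real pipeline reads and writes each frame several times, multiplying the demand on DRAM.

```python
def stream_bandwidth_gb_s(width_px, height_px, bytes_per_px, frames_per_s):
    """Raw bandwidth of one uncompressed image stream, in GB/s."""
    return width_px * height_px * bytes_per_px * frames_per_s / 1e9

# Illustrative load: four 1080p streams at 30 fps, 2 bytes per pixel.
per_stream = stream_bandwidth_gb_s(1920, 1080, 2, 30)
total = 4 * per_stream
print(f"{total:.2f} GB/s of raw sensor data")   # about 0.5 GB/s in this example
# Multiply by the number of passes the processing pipeline makes over each frame
# to see why headline figures such as 460 GB/s matter at the compute-intensive end.
```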

Powerful connections

There are many other technologies critical to autonomous applications that are outside the scope of this article, such as the aeronautical and navigation systems of drones, and the power electronics required to drive motors and actuators for static and mobile robots. However, there are two that should get a passing mention: interconnect and power subsystems for AC/DC and DC/DC conversion and battery management.

Interconnect is often overlooked, but high-performance, complex autonomous systems require efficient and reliable solutions to integrate the various substrates, modules, and subsystems. Not only are size, density and weight important, but effective performance can only be assured with reliable signal transmission and robust connections. Specialized flex assemblies and custom-designed modules might fit the form factor, but they will still need to be interconnected, as will plug-and-play modules.

Drones, for example, often need high-density, fine pitch connectors, in the 0.2mm to 2mm range. Industrial robots need to meet demanding temperature, shock and vibration conditions while maintaining high signal integrity and security. Floating board-to-board connectors help tolerate vibration. Other interconnect approaches, such as miniature wire-to-board, board-to-board micro-USB, compression, and memory card connectors are available to meet specific demands. More onboard sensors also mean more wired connections.

Power electronics are important, too, as industrial robots are being upgraded with more efficient motors and motorized drives. Greater autonomy also comes with improved battery life and new initiatives such as wireless charging. Power semiconductors are key components. Modern applications often use 48V power, and the power subsystem will include AC/DC conversion, battery management, DC/DC conversion, multiphase converters, point-of-load conversion, linear regulation, and motor drivers.

Point-of-load conversion is critical, perhaps taking the 48V supply down to an intermediate 12V bus level, and then to the IC supply voltage, typically below 5V. MOSFETs are another common power solution for multi-phase motor driving, for example, and the quest for higher efficiency is driving growing demand for silicon carbide (SiC) power devices. Associated devices might include high voltage gate drivers to facilitate processor control, high and low-side drivers, power factor correction ICs, electronic fuses, rectifiers, and current sensors.
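The conversion chain itself follows simple arithmetic: for an ideal step-down (buck) converter the duty cycle is approximately Vout/Vin, rising slightly in practice to cover losses. The sketch below works through the 48V-to-12V-to-point-of-load chain described above; the 3.3V rail and 95% efficiency figure are illustrative assumptions.

```python
def buck_duty_cycle(v_in, v_out, efficiency=1.0):
    """Approximate duty cycle of a step-down (buck) converter.
    Ideally D = Vout / Vin; a real converter runs slightly higher to cover losses,
    which the efficiency term roughly accounts for."""
    return v_out / (v_in * efficiency)

# Illustrative two-stage chain: 48 V bus -> 12 V intermediate bus -> 3.3 V point of load.
for v_in, v_out in ((48.0, 12.0), (12.0, 3.3)):
    print(f"{v_in:4.0f} V -> {v_out:4.1f} V : duty cycle ~ {buck_duty_cycle(v_in, v_out, 0.95):.0%}")
```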

Data transmission

So, what happens once an autonomous application, such as a drone or a factory robot, has captured data? Well, either it processes it (and acts on it) on the spot, or transmits it somewhere, such as a local processing facility or the cloud. Some devices, of course, will do all the above.

But whether wireless or wired, how much data needs to be transmitted, to where and at what speed, determines the data communication technology required in any application. Edge AI is moving processing capability closer to the data sources, but that still requires fast, effective, high-volume data transmission if and when decisions need to be made at speed.

Not all applications need instant feedback on all newly captured data. A drone sent out and controlled by a pilot might need to send live video and receive back instructions immediately. However, large amounts of captured data might be better analyzed in the cloud: 3D mapping, for example, or developing new algorithms for later use, or determining the condition of buildings or machines for predictive maintenance.
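The trade-off usually comes down to simple arithmetic: how long the payload takes to move at the available link rate, against how quickly a decision is actually needed. The sketch below runs that calculation for one invented payload over a few illustrative link speeds; the figures are assumptions, not measured rates, and protocol overhead and retries are ignored.

```python
def transfer_time_s(payload_megabytes, link_megabits_per_s):
    """Time to move a payload over a link, ignoring protocol overhead and retries."""
    return payload_megabytes * 8 / link_megabits_per_s

# Roughly one minute of compressed 1080p video (~60 MB) over illustrative uplinks.
payload_mb = 60
for name, rate_mbps in (("LTE uplink", 20), ("5G uplink", 100), ("wired GbE", 1000)):
    print(f"{name:>10}: {transfer_time_s(payload_mb, rate_mbps):5.1f} s")
```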

Streaming of big data to servers in the cloud for processing is one of the key applications for 5G networks. Indeed, 5G is heralded as the driver and accelerator of Industry 4.0 and digital transformation. Its use is not only linked to the cloud but is touted for close-to-the-edge autonomous applications too. It may not completely replace factory networks, but it will operate with and augment traditional connectivity systems. Many installations will necessarily continue to run and acquire data from legacy equipment as well as from edge AI devices, so there will be a need to interface with well-established protocols and communications standards, perhaps using a mix of both wired and wireless connectivity. Meanwhile, issues such as privacy, security and latency are further driving edge AI.

Wireless communications are the norm for drones, although tethered drones are emerging for certain applications. Wireless data transmission is also increasingly used on the factory floor. Conventional wireless protocols, of which there are many (standard, low-power, wide-area and application-specific), offer the advantages of proven technology, availability and affordability. Aside from 5G, private LTE technology in the 600 MHz range is emerging to meet demand in certain sectors, such as the use of drones in industry and utilities. The choice, as ever, depends on requirements for speed, range, capacity and cost, among other parameters.

Wired networking, whether fiber optic or copper-based, may still be the best solution in some applications. Standards such as 10G and even 100G Ethernet offer high-speed, high-data-rate transmission and can operate over short and long distances.

What next?

We can see a complex interdependency here. Industrial digital transformation is accelerating, driven not only by the promise of increased productivity and profit but also by necessity, as companies have had to increase their use of automation in difficult times to keep employees safe and meet global logistics challenges. The trend in the “connected factory” is a shift toward greater autonomy in automation. Sensor fusion is enabling more data to be collected, embedded computing and edge AI are capable of processing that data and generating smarter operation, while 5G networks transmit the data to and from where it is needed.

However, at the edge, greater autonomy requires more data, and the sensor data flow is always going to be limited by the amount of embedded computing power and memory available. For sure, AI techniques will continue to improve, along with higher-performance onboard computing, but not fast enough for some. 5G has the potential to shift data from the endpoint for nearby processing, at or close to the edge, but it has roll-out issues to overcome before it becomes ubiquitous. And the edge? Well, the edge will keep shifting closer to the endpoint.

Sensors are the very foundation of autonomous devices. Avnet stocks over 14,000 different sensors, including all the types described above, and offers the application support to help you apply them effectively.

