Image Processing On Embedded Platforms

Suhas Kadu
5 min read · Dec 13, 2022


Modern image processing is increasingly carried out directly on small, affordable embedded systems rather than only on large, expensive computers. Equipment and machines have eyes! In the embedded field, however, a few routines and reflexes that would normally hold true need to be reconsidered.

Because microprocessors and mini PC modules have become so powerful, even digital image processing has moved into the embedded field. This is remarkable given the complexity of vision algorithms and the vast amount of data required for the numerical processing of camera images in real time. Thankfully, the new computing units’ power consumption and cost have not risen in step with their computing power. In contrast to desktop PCs, modern dual- or quad-core PCs in credit-card format consume only a few watts and cost only 100 to 200 francs.

A similar evolution has occurred in cameras: cheap, compact CMOS sensors have largely replaced the more complex CCD technology. However, the highest-resolution camera is not necessarily the right choice; rather, one that captures the relevant object adequately should be used. This saves a lot of unnecessary computing power.

O-3000 Embedded Camera

Issues the Developer Must Face

Bigger is more attractive, but more doesn’t always equal better. Instead, the developer of an embedded image processing system faces a series of simple questions:

1. How is image processing implemented (using standard software)?

2. Which standard form factor PC is to be used?

3. Which standard interface connects the camera to the computer?

4. How do the images reach the standard driver or custom software?

The short answer is that standards, at least in the embedded field, are in poor shape. And this is what we want to demonstrate here: it works fairly well and makes sense this way, because the diversity of embedded systems matches the diversity of their applications. In the end, only factors such as cost per unit, build size, and the connecting cable matter.

First, we demonstrate how a Raspberry Pi 3 can be configured to quickly and easily capture images with an O-3000 camera and save them in TIFF format. The Raspberry Pi is connected to the O-3000 camera via USB. Only one cable is required, because the camera is also powered over USB.

The command sequence is entered into the Raspberry Pi’s shell once it has been opened. Please be aware that an internet connection is required. The lines marked with 1> ensure that the system is up to date and that all software packages required by the application are installed. The O-3000 software is downloaded from GitHub at position 2>. The commands at 3> build and install the O-3000 driver. The example application is compiled at 4>. The command at 5> runs the example program. The Raspberry Pi now reads camera images and stores them as TIFF files. CTRL-C ends the process. The recorded images can be viewed with any suitable program.
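The original command listing (1> through 5>) is not reproduced in this article. As a rough sketch, such a sequence might look like the following; the repository path, directory names, and program name are assumptions for illustration, not the actual O-3000 listing:

```shell
# 1> Bring the system up to date and install build prerequisites
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install -y git build-essential libusb-1.0-0-dev libtiff-dev

# 2> Download the O-3000 software from GitHub
#    (hypothetical repository path -- substitute the vendor's actual repo)
git clone https://github.com/<vendor>/o3000.git
cd o3000

# 3> Build and install the O-3000 driver
make
sudo make install

# 4> Build the example application
cd example
make

# 5> Run the example; images are written as TIFF files until CTRL-C
./o3000_example
```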

Board Design for Image Processing Embedded Systems

Image Processing Applications in Embedded Systems

Embedded systems for advanced image processing applications, especially those involving machine learning or AI models, require significant computing power and memory in addition to high-definition video. Ideally, these functions should be integrated into a single package with a small form factor and enough on-board memory to store data. Add network or wireless connectivity and you have all the elements needed for a powerful machine-learning-based image processing system. Many designers looking to develop new products in this area can certainly design a custom board, but there are other options available. Most development boards (e.g., Arduino) limit your form factor and functionality to modules that can be connected via standard headers or USB. You will also be limited to running rather simple machine learning models with low latency. This is fine for still image processing, but such simpler systems are unsuitable for video processing.

Embedded Image Processing Systems for Automatic Recognition of Cracks using UAVs

In this article, cracks on building walls are detected using embedded image processing. Multiple approaches are used to solve this problem.

Algorithm:

The approach employs Sobel filter-based edge detection. After being captured in RGB format, the image is converted to grayscale. The Sobel edge operator then carries out the edge detection, applying a threshold reference factor of 20% to 40% of the gradient magnitude to form the edge boundaries of a crack. The gradient magnitude is calculated from the partial derivatives along the x and y coordinates.
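The steps above can be sketched in a short, self-contained example. This is a minimal illustration of Sobel-based edge detection, not the paper’s actual code; the function name and the 30% threshold (within the 20–40% range mentioned above) are chosen for illustration:

```python
import numpy as np

def sobel_edge_map(gray, threshold_ratio=0.3):
    """Binary edge map: pixels whose gradient magnitude exceeds
    threshold_ratio * max(gradient magnitude)."""
    # Sobel kernels: partial derivative along x and along y
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T
    g = gray.astype(float)
    h, w = g.shape
    mag = np.zeros((h, w))
    # Naive convolution over interior pixels (clear, not fast)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = g[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            mag[y, x] = np.hypot(gx, gy)  # gradient magnitude
    return mag > threshold_ratio * mag.max()

# A synthetic grayscale image with one vertical step edge
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edge_map(img)  # True along the step, False elsewhere
```

For real deployments one would use an optimized implementation (e.g., a library convolution) rather than the explicit loops shown here, but the thresholding logic stays the same.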

Scenario A:

A desktop PC in the ground station handles image processing; the UAV is simply utilized for data collection and storage. There are numerous sub-scenarios that can be created from this main scenario. Two scenarios exist: one in which the UAV saves all captured images before returning to the ground station to download them for further processing (scenario A1), and the other in which the UAV could send the captured images to the ground station wirelessly (scenario A2). The desktop at the ground station can either run the image processing algorithms as C++ code on top of a Linux-based OS or via a Matlab runtime.

Scenario B:

In Scenario B the UAV is equipped with an onboard Raspberry-PI processor with the embedded image processing algorithms running on it and in this case the crack detection can be performed during the flight.

Scenario C:

In Scenario C, the UAV is equipped with an FPGA board to which the generated C/C++ program is ported, running on a MicroBlaze soft processor with OS support.

Conclusion:

To enable the prompt identification of cracks during a UAV flight and to let the system interface directly with the UAV, the Sobel filter can be implemented on the Raspberry Pi platform. Choosing images of about 1 megapixel is acceptable, since this preserves detail when photographing objects that are quite close to a facade. We find that scenario 2 is the best option when utilising the Sobel filter, although it results in increased system utilisation because of data transfer. However, in order to gather information about a crack’s size, angle, and width, segmentation by edge detection is crucial. Based on the results, the idea of combining the suggested scenarios appears highly promising. This is part of ongoing work, whose results will be presented at the conference if the paper is accepted.

Group Details:

Tanaya Gogawale — 07

Suhas Kadu — 35

Saket Kolpe — 61

Abhita Lakkabathini — 67
