Evaluation of weighted fusion for scalar images in multi-sensor network

Received Sep 18, 2020; Revised Dec 16, 2020; Accepted Jan 4, 2021

The regular scalar-based image fusion method faces the problem of how to prioritize and proportionally enrich image details in a multi-sensor network. Fusing and manipulating computer-vision patterns from multiple sensors is practical. A fusion (integration) rule, bit-depth conversion, and truncation (due to size conflicts) of the image information are studied. Across multi-sensor images, a fusion rule based on weighted priority is employed to reconstruct prescribed details of the fused image. Experimental results confirm that related details across multiple images can be fused, the prescription is executed, and finally the features are improved. Visualization in both the spatial and frequency domains to support the image analysis is also presented.


INTRODUCTION
Fusion of digital images [1] is the practice of combining corresponding details from several images into a single image. Image fusion is extensively employed in current applications such as bio-medical technology, satellite mapping, and remote sensing. It is critical because it can improve remote object/pattern recognition by fusing various sources from ground-, space-, or air-based objects with prescribed attributes for better decision-making. Fusion also smooths digital images [2], enhances the quality of geographic data [3], recovers particular details that are otherwise invisible, and compensates for substandard content, while retaining insightful details from all source images. Similarly, in a multi-sensor network [4], lossless fusion develops visual detail from the different input images. Multi-sensor image fusion can be beneficial in terms of temporal and spatial features [5], reduced ambiguity, improved network performance, and higher reliability. The main benefit of fusing is to reconstruct an image that encloses richer features of the scene than any single source image. In particular, fusion is needed for high-definition real-world and multi-spectral images [6].
There are a number of image fusion algorithms, for example DCT [7], IHS [8], and DWT [9], which have been used in several applications. These algorithms fuse the details from multi-sensor images to achieve an output image that is valuable for computer vision and has adequate detail for human perception. Regarding the process, image fusion [10] is categorized into feature, pixel, and decision methods. The feature method [11] fuses the characteristics of multiple sources. The pixel method [12] is fundamental and commonly used: it fuses the pixels (scalars) of the sources and preserves most of the original details, and the fused image from this method yields a more rigorous final result. The decision method [13] depends on details either from the

PROPOSED FUSION MODEL
The basic image fusion model executes pixel operations on the intensity values of multiple source images. Pixel-by-pixel fusion helps enhance details related to the integration of multiple sources. The process develops in-depth detail in each pixel of the 2D intensity images to produce a prescriptive final image. The source images originate from a multi-sensor network. An application based on pixel-by-pixel execution to reform medical images is discussed in [14][15][16]. The enhancement in quality depends on the performance of the fusion process [17][18][19], assessed by comparing the utility of the fused image to that of the original images without fusing. Applications of basic fusion include object recognition, since a specific form such as a curve, edge, or line appearing in an image from one sensor normally appears again in an image from a different sensor. However, the source content from each individual sensor has to be identical in the spatial domain (size and bit-depth) and resolution. Although it is feasible to perform the common multi-sensor fusion with pixel operations, we first cover some straightforward fusion models [20]. Then, we propose a more sophisticated approach based on priority levels for scalar (pixel) operations, which is more practical for executing the fusion in the spatial domain.
The straightforward operations (addition, subtraction, multiplication, and division) are now explained [21][22][23]. Addition is the simplest operation, and works by summing the intensity values of the N source images on a pixel-by-pixel basis. The addition of N images is executed directly in a single iteration. The fused pixel values F(i, j) are described by F(i, j) = Σ_{k=1}^{N} P_k(i, j), where P_k(i, j) represents the pixel value of the k-th image. It requires an identical spatial domain and assumes a common semantic orientation. It can also cover up any noise that exists in the input images by adding a constant value C to an individual image, F_k(i, j) = P_k(i, j) + C. The subtraction operation takes N input images and calculates a fused image whose pixel values are those of the 1st image minus the corresponding pixel values of the remaining N−1 images: F(i, j) = P_1(i, j) − Σ_{k=2}^{N} P_k(i, j). It can also subtract a constant value C from all the pixels of an individual image, F_k(i, j) = P_k(i, j) − C. Multiplication and division are not commonly used as fusion operations. However, the multiplication of N images is executed by F(i, j) = Π_{k=1}^{N} P_k(i, j). We can also multiply all the pixels by a constant value C, F_k(i, j) = P_k(i, j) × C. Note that when C is a decimal number lower than one, it will suppress the image intensities; it can even be a negative constant if the system allows. Division by a constant C is computed using F_k(i, j) = P_k(i, j) / C, implemented as either integer or decimal division. If only an integer result is required, the result is rounded down for the fused value.
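The straightforward operations above can be sketched as follows. The paper's experiments are written in MATLAB; this is a minimal Python/NumPy equivalent (function names are illustrative, not from the original), assuming 8-bit grayscale sources of identical size, with the sums and products accumulated in a wide signed type and then truncated to the 8-bit range as the paper's truncation step requires.

```python
import numpy as np

def fuse_add(images, c=0):
    """Additive fusion: sum N same-size images pixel by pixel,
    optionally offset by a constant c, then truncate to 8-bit range."""
    f = np.sum(np.stack(images).astype(np.int64), axis=0) + c
    return np.clip(f, 0, 255).astype(np.uint8)

def fuse_subtract(images):
    """Subtractive fusion: the 1st image minus the sum of the remaining N-1."""
    rest = np.sum(np.stack(images[1:]).astype(np.int64), axis=0)
    f = images[0].astype(np.int64) - rest
    return np.clip(f, 0, 255).astype(np.uint8)

def fuse_multiply(images):
    """Multiplicative fusion: pixel-wise product of all N images."""
    f = np.ones_like(images[0], dtype=np.int64)
    for p in images:
        f *= p.astype(np.int64)
    return np.clip(f, 0, 255).astype(np.uint8)
```

For example, adding two images whose intensities sum past 255 saturates at 255, and subtracting a brighter image from a darker one truncates at 0, which mirrors the bit-depth conversion and truncation described above.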
Development of up-to-date technologies helps escalate the efficiency of farming machines, whose priority levels give a substantial compression of the soil [24]. In this article, however, a prioritized technique based on scalar values is proposed for image fusion in a multi-sensor system. The proposed technique develops a fused image using the prioritized calculation F(i, j) = Σ_{k=1}^{N} w_k P_k(i, j) + Π_{k=1}^{N} P_k(i, j), where P_k(i, j) represents the scalar (pixel) value of the k-th image and w_k is the given priority (weight), which can be either a positive or negative integer.
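A minimal sketch of the proposed weighted rule, again in Python/NumPy rather than the paper's MATLAB (the function name and the `with_product` switch are illustrative assumptions): the weighted sum and the cross-image product are accumulated in a signed 64-bit type and then truncated to the 8-bit output range.

```python
import numpy as np

def weighted_fuse(images, weights, with_product=True):
    """Priority fusion F = sum_k w_k * P_k + prod_k P_k,
    computed in a wide signed type, then truncated to 8-bit."""
    acc = np.zeros(images[0].shape, dtype=np.int64)
    for w, p in zip(weights, images):
        acc += w * p.astype(np.int64)      # weighted (priority) sum
    if with_product:
        prod = np.ones(images[0].shape, dtype=np.int64)
        for p in images:
            prod *= p.astype(np.int64)     # cross-image product term
        acc += prod
    return np.clip(acc, 0, 255).astype(np.uint8)
```

Note that for bright sources the product term easily exceeds 255, so the truncation step dominates there; negative weights can likewise drive pixels to 0.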

RESULTS AND ANALYSIS
Image fusion is a method of aggregating many images into one. The objective of this integration is to combine images from several sources in a multi-sensor network, and to investigate and comprehend them while preserving their intrinsic value. This concept is widely utilized to mark the edges of the multiple objects found within the images. The approach designates the pixels according to their appearance and hue; these parts reflect the whole images and represent their borders, such as similarity and hue, upon fusion. The fusion concept is employed to develop a 2D contour of the fused object. It is also used in object edge detection, machine recognition, anomaly detection, augmented reality, and biological analysis. Image fusion is broken down into two types: (i) fair fusion and (ii) weighted (priority) fusion. Fair fusion performs principally a simple analysis of the multiple images without emphasis, so it requires a more straightforward calculation over the pixels than the latter kind. Priority fusion processes all images with different emphasis and involves more numbers in the calculation.
The experiment in image fusion for a multi-sensor network has been coded in MATLAB to verify the fused image. The original images (A, B, and C) from the network and their corresponding frequency-domain representations based on the fast Fourier transform (FFT) technique [25] are summarized in Figure 1. The result of priority fusion of the aforementioned three images for 2A+B−C+ABC is presented in Figure 2. The FFT graphic of the fused image is also analyzed in Figure 2.
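The experiment's pipeline can be reproduced in outline as follows. This is a Python/NumPy sketch of the MATLAB experiment, using synthetic random images in place of the paper's sensor images A, B, and C (an assumption, since the originals are only shown in Figure 1), evaluating 2A+B−C+ABC with truncation, and computing the centered log-magnitude FFT spectrum used for the frequency-domain inspection.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three synthetic 8-bit stand-ins for the sensor images A, B, and C.
A, B, C = (rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(3))

# Priority fusion 2A + B - C + ABC, evaluated in int64 then truncated to 8-bit.
a, b, c = (x.astype(np.int64) for x in (A, B, C))
fused = np.clip(2 * a + b - c + a * b * c, 0, 255).astype(np.uint8)

def log_spectrum(img):
    """Centered log-magnitude FFT spectrum for frequency-domain inspection."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    return np.log1p(np.abs(f))

spec = log_spectrum(fused)
```

Because the fused image is non-negative, the DC bin (the center of the shifted spectrum) carries the largest magnitude, which is the bright central spot typically seen in such FFT visualizations.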

CONCLUSION
A priority-based image fusion approach has been presented. The fusion is attained using mathematical operations such as addition, subtraction, multiplication, and division. The computation and weight levels for fusing multiple sources are programmed in MATLAB, and the fused image of the approach is confirmed. To verify the fused image, it is also visualized with the FFT algorithm in the frequency domain. We found that the performance depends on how the system prioritizes the individual original sources prior to fusing. The results also show that the fused image is dimmed but combines edge structures from each image source. Weighted fusion can thus be employed to reconstruct object borders.