Bulletin of Electrical Engineering and Informatics

Tavarov Saijon Shiralievich, Sidorov Alexander Ivanovich, Shonazarova Shakhnoza Mamanazarovna, Sultonov Olamafruz Olimovich, Parviz Yunusov
Department of Life Safety, Polytechnic Institute, South Ural State University (National Research University), Russia
Department of Theoretical Fundamentals of Electrotechnology, Polytechnic Institute, South Ural State University (National Research University), Russia
Department of Electric Stations, Grids and Systems, Polytechnic Institute, South Ural State University (National Research University), Russia


INTRODUCTION
Fog computing is a novel architecture that migrates some data center capabilities to the network edge. It is a distinct architecture based on edge servers that distributes limited computing, hardware, and networking capabilities between end devices and traditional cloud computing data centers. The primary goal of fog computing is to ensure low and predictable latency for latency-sensitive internet of things (IoT) applications such as healthcare services [1].
Fog computing offers many advantages for real-time applications and is often used in IoT applications that handle real-time data. It functions as a more advanced form of cloud computing [2] and serves as a link between the cloud and end users, sitting closer to the latter. It can be used in both machine-to-machine and human-to-machine scenarios [3].
Fog computing is based on bringing networking resources closer to the nodes that generate data at the lowest perceptual layer. Fog is a highly virtualized platform that connects end nodes in an IoT context to traditional cloud computing, storage, and networking services [4]. In addition, resource management techniques are often divided into six groups, covering application development, resource scheduling and provisioning, offloading, resource allocation, and load balancing in the services they provide. To evaluate these techniques, many researchers rely on metrics such as central processing unit utilization, reaction time, throughput, energy consumption, data movement, the rate of service migration, and the number of active nodes [5].
To enable a fog computing assessment, the required processes and the network model must be determined for further evaluation [6]. Depending on how the fog computing setting is posed, different sets of resources shape the computing architecture. Furthermore, the differences between fog and cloud computing data centers in terms of energy usage and data management have been investigated. Gonçalves et al. [23] improved the MobFogSim simulator by adding dynamic network slicing to service management for mobile fog networks, together with an empirical evaluation of how network slicing in the fog environment affects the admission of mobile devices, which may increase resource consumption, and the dispatching of requests generated by the nodes. In this paper, a fog computing simulation in OMNeT++ is proposed. The paper is organized as follows: i) introduction, ii) method, iii) results and discussion, and iv) conclusion.

METHOD
The architecture of the proposed system involves three layers. The first layer contains the hosts that transmit data. The second layer is the fog node, which controls the network components and allocates resources across network transmissions. The third, top-most layer (the cloud) is where the cloud components are located for historical data storage and big data processing. Because the fog node has the main view of the network connections and of the data traffic passing through it, it builds the network settings during the initialization state. The proposed system aims to achieve high network performance for all types of computing nodes in the fog environment. After the network is executed, the transmission costs of the links are calculated from the traffic the host devices send to the fog node, which redirects the traffic to the cloud network element either to obtain a response over a specific link or to use the data storage service in the cloud. The main block of the proposed system is shown in Figure 1, which describes the simulation stages in OMNeT++ used to create the proposed fog network topology with its three main elements, the host devices, the fog node, and the cloud, providing HTTP and file upload/download services.
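The forwarding role of the fog node described above can be illustrated with a minimal C++ sketch. This is not the authors' OMNeT++ model code; all type and member names here are hypothetical, and the transmission cost of a link is assumed to be the serialization time of a packet on that link:

```cpp
#include <cassert>
#include <cmath>
#include <string>
#include <vector>

// Hypothetical illustration: a fog node receives host packets, computes a
// per-link transmission cost, and forwards the traffic to the cloud.
struct Packet {
    std::string service;   // e.g. "HTTP", "upload", "download"
    double sizeBits;       // payload size in bits
};

struct Link {
    double dataRateBps;    // link capacity in bits per second
    // Transmission cost of a packet = its serialization time on this link.
    double transmissionCost(const Packet& p) const {
        return p.sizeBits / dataRateBps;
    }
};

class FogNode {
public:
    explicit FogNode(Link uplink) : uplinkToCloud(uplink) {}
    // Redirect host traffic to the cloud and return the total link cost.
    double forward(const std::vector<Packet>& hostTraffic) {
        double totalCost = 0.0;
        for (const Packet& p : hostTraffic)
            totalCost += uplinkToCloud.transmissionCost(p);
        return totalCost;
    }
private:
    Link uplinkToCloud;
};
```

In the actual simulation this bookkeeping would live inside an OMNeT++ module; the sketch only shows how per-link transmission costs accumulate as the fog node relays host traffic.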

Fog implementation environment
The proposed system was tested in an environment that met the parameters listed in Table 1. The system's code was written in C++ within the OMNeT++ simulation tool. Table 2 presents the specifications of each network element that affects the resource allocation process, along with the network configuration: IP address, network interface, and port number.

The proposed method
The major goal of the suggested technique, in which the fog controls the data flow to the cloud, is to maximize system throughput by maximizing the use of each host's processing capabilities and reducing the average task response time. The fog node receives all task requests and forwards them to the cloud for processing. The fog environment architecture has three layers: the first layer is the hosts, the second layer is the fog, and the last layer is the cloud resources, as shown in Figure 2.
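Under this forwarding policy, the average task response time can be modeled as each task's fog-to-cloud transmission time plus the cloud's processing time. The following C++ sketch encodes that assumption; the function name and parameters are illustrative, not taken from the paper:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical model of the policy above: every task arriving at the fog
// node is relayed to the cloud, so a task's response time is its
// fog-to-cloud transmission time plus the cloud processing time.
double averageResponseTime(const std::vector<double>& taskBits,
                           double fogCloudRateBps,
                           double cloudProcSeconds) {
    if (taskBits.empty()) return 0.0;
    double total = 0.0;
    for (double bits : taskBits)
        total += bits / fogCloudRateBps + cloudProcSeconds;
    return total / taskBits.size();
}
```

Reducing this average, while keeping the hosts' processing capabilities fully used, is the optimization target stated above.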

Evaluation metrics
The suggested system is based on the following three assessment criteria:
− Throughput: the number of packets received divided by the total number of packets transmitted [24]:

Throughput = packets received / packets transmitted (1)

− Latency: the time a packet requires to travel from sender to receiver; it is used to calculate the delay-time simulation parameter for full packets:

d = L / R (2)

where d denotes the delay time in seconds, L denotes the packet length in bits, and R denotes the data rate in bits per time unit [25].
− Channel resource allocation: the maximum utilization of the resources available in a fog system [26].
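The three metrics above translate directly into code. The following C++ sketch encodes (1), (2), and a simple utilization ratio for the third criterion; the function names are ours, and channel utilization is assumed to be used capacity over available capacity:

```cpp
#include <cassert>
#include <cmath>

// (1) Throughput: packets received divided by packets transmitted.
double throughputRatio(long received, long transmitted) {
    return transmitted == 0 ? 0.0
                            : static_cast<double>(received) / transmitted;
}

// (2) Latency: d = L / R, with L the packet length in bits and R the data
// rate in bits per second, giving the delay d in seconds.
double delaySeconds(double L_bits, double R_bps) {
    return L_bits / R_bps;
}

// (3) Channel resource allocation, modeled here as the fraction of the
// available channel capacity actually in use (an assumed definition).
double channelUtilization(double usedBps, double capacityBps) {
    return usedBps / capacityBps;
}
```

For example, a 12,000-bit packet on a 1 Mbps link yields d = 12000 / 10^6 = 0.012 s under (2).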

RESULTS AND DISCUSSION
The proposed system results are based on two case studies. The 1st is the evaluation of the network system with 5 IoT devices. The 2nd evaluates the case of 10 IoT devices to observe the effect on the network of an increasing number of nodes.

The 1st case study
The proposed system in the 1st case study is based on 5 IoT devices. Figure 3 shows the network topology, and Table 3 shows the system results.

The 2nd case study
It is based on 10 IoT devices with the main evaluation metrics. Figure 4 shows the HTTP requests sent by the nodes, Figure 5 shows the replies to those requests by the fog node, and Table 4 shows the evaluation metrics. The proposed system's results show that as the number of devices in the network grows, the impact on the evaluation metrics grows, as shown in Figure 6, which plots the average payload throughput, average delay, and average channel allocation of the proposed system.

CONCLUSION
Fog computing refers to a distributed processing system in which computation, storage, and software are located between the data source and the cloud. Running an IoT application in a fog setting is difficult because of the resource allocation problem; as in cloud computing, the resource allocation strategy used to regulate the degree of QoS efficiency is the most important factor. This study combines IoT with fog computing (FC) to provide an IoT-based fog computing paradigm. The quality checks were done with OMNeT++ and the fogNetSim++ add-on. The proposed FC scenario is particularly helpful because packets from comparable directions are added to the network's root fog node manager using learning automata, and data exchange rates are employed to regulate the delay time, channel allocation, and throughput.