Bulletin of Electrical Engineering and Informatics

Received Aug 19, 2022 Revised Nov 2, 2022 Accepted Nov 18, 2022

This article aims to compare two different scanning devices (a 360° camera and a digital single lens reflex (DSLR) camera) and their properties in the three-dimensional (3D) reconstruction of an object by the photogrammetry method. The article first describes the stages of the process of 3D modeling and reconstruction of the object. In the following steps, a generated point cloud is converted into a 3D model of the object, including textures. The scanning devices are compared under the same conditions and over the same time span, from capturing the image of a real object to its 3D reconstruction. The attributes of the scanned image of the reconstructed 3D model, which is a mandarin tree in a citrus greenhouse in a daylight environment, are also compared. Both created models are compared visually as well. That visual comparison reveals the possible applications of both scanning devices in the process of 3D reconstruction of an object by photogrammetry. The results of this research can be applied in the field of 3D modeling of real objects, using 3D models in virtual reality, 3D printing, 3D visualization, image analysis, and 3D online presentation.


INTRODUCTION
Today, there are many high-quality image capture devices. The goal is often to accurately reproduce an image in high digital quality for further outputs. These outputs can take the form not only of images in digital and online environments such as web presentations [1] but also of three-dimensional (3D) models [2]. These models can serve as a basis for 3D printing [3], [4] or be used in augmented reality and virtual reality (AR/VR) [5]. The goal for the captured image is not only high quality but also a short processing time and easy sharing or streaming [6]. There are several methods for creating 3D models. This article focuses on using image data to model 3D point clouds of plant structure by photogrammetry.
The structure from motion (SfM) photogrammetry method for obtaining 3D structure from 2D image sequences has been used for quite some time [7]. Modern digital image data processing has followed the analogous methods of analog photogrammetry. Today, the method can be divided according to several criteria. Photogrammetry is primarily divided into aerial and terrestrial and, according to the number of image configurations, into single-frame and multi-frame [8]. At present, much research focuses on aerial photogrammetry, where image data is captured from above by aerial or satellite photography and, increasingly, by drones and light detection and ranging (LiDAR) technologies [9]-[11]. The photogrammetry method is considered low-cost and finds practical application in many commercial and scientific fields, for example, in architecture, construction, materials engineering, microtechnology, medicine, waste management, culture, conservation, archeology, anthropology, design, forestry, agriculture, forensic sciences, and many others, as current studies have shown [8], [12]-[19]. Many devices for capturing image data are available on today's digital technology market, and the choice of equipment depends on the required quality and output. Photogrammetry works on the principle of using 2D image data, most often in the form of photographs. Therefore, classic compact cameras, mobile cameras, 3D or 360° cameras, laser 3D scanners, drones, or tablets with LiDAR technologies, which are now available to the general public, are used for this method [6], [9], [10], [20]. The quality of the scanned image depends on the resolution, color gamut, and lighting conditions, and these attributes affect the demands of additional outputs [12], [20], [21]. Post-processing relies on graphics software and methods designed to improve the quality and analysis of image data [22], [23].
This article aims not only at the 3D modeling of an object into a point cloud and textured model using the SfM method but also at comparing two scanning devices and their suitability for image data processing by photogrammetry. This experiment uses a 360° camera with six lenses and a classic mid-range DSLR camera to capture the image. The acquisition of image data with both devices, the methods of subsequent image processing, and the achieved results are presented in the following sections of this article. The resulting principles can form a basis for working with natural objects and their structure, and for creating 3D models of real plants with applications not only in agriculture. The possibilities of working with point cloud models and polygonal meshes can be applied across disciplines, especially in Industry 4.0 and virtual reality.
The sensing devices selected for this experiment differ in their image capture options and in the characteristics of the obtained image. This experiment explores the possibility of using these sensing devices in the 3D modeling of point clouds of a particular object and their subsequent analysis. The experiment assumes that two different 3D models will be created, depending on the type of sensing device chosen. For this purpose, a 360° camera with six "fisheye" lenses and a digital single lens reflex (DSLR) camera with an 18-55 mm lens were selected as experimental devices. A mandarin tree was chosen as the object for the 3D reconstruction. The manufacturer envisages the design and use of the 360° camera primarily for direct application and transformation into a 3D image of a space. However, this work reconstructs a 3D model of an object located in the space in which the image was scanned.
In this work, the 360° camera is used to reconstruct a 3D model of an object in the space in which the image was scanned. The purpose is not to create a 3D or VR environment; in this experiment, the emphasis is primarily on the possibility of using the device for the digital reconstruction of a particular object that is part of this space. For this, the photogrammetry method was chosen. The prerequisite is to obtain the object model through model reconstruction from the source images, potentially to segment the model from the environment for further processing and use of the 3D model, and furthermore to analyze the image quality and the information that the object model provides.
In this work, the 360° camera is confronted with a device by which the selected object can be reconstructed into a 3D model using the photogrammetry method relatively quickly: a classic digital single lens reflex (DSLR) camera with a regular lens. Such sensing devices have long been commonly used for 3D reconstruction by photogrammetry. They are also characterized by a relatively low price compared to the other types of sensing devices mentioned above and to the 360° camera used in this experiment. Last but not least, this article also considers the possibility of combining both devices to obtain a natural 3D environment with models of real objects transformed into this environment, with the aim of the most authentic reproduction of a real image in the digital environment.

METHOD OF MULTI-IMAGE 3D PHOTOGRAMMETRY
The direct linear transformation (DLT) method was proposed as early as 1971 as a basic mathematical photogrammetry model [24]. It is a contactless method of recording an object in space. The basic principle of the photogrammetry method is a definable point located in space, which is shown in at least two images. Photogrammetry generates 3D point positions by projecting lines from the camera position via the charge-coupled device (CCD) into space. When using two photographs, the intersection of the lines indicates the position of the point. Therefore, the camera angle plays an essential role in the accuracy of this method. An incorrect camera position or orientation will generate an incorrect 3D point position. See Figures 1 and 2 in reference [7] for graphs of the algorithm: Figure 1 there shows the basic theory of photogrammetry and Figure 2 depicts the transformation between two reference systems [7]. The DLT equations are:

I′ + (l₁X + l₂Y + l₃Z + l₄) / (l₉X + l₁₀Y + l₁₁Z + 1) = 0
J′ + (l₅X + l₆Y + l₇Z + l₈) / (l₉X + l₁₀Y + l₁₁Z + 1) = 0

where I′ = I − I₀, J′ = J − J₀, and l₁ to l₁₁ are the direct linear transformation parameters (DLTP). The coefficients l₁ to l₁₁ are functions of the exterior and interior orientation elements. The initial values of the external and internal orientation elements are not needed in the calculation. The DLT equations can be used in the photogrammetry of consumer-class digital cameras [7].
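Because the DLT equations above become linear in the parameters l₁ to l₁₁ after multiplying through by the denominator, the parameters can be estimated from at least six well-distributed 3D-to-2D point correspondences by linear least squares. The following sketch (not part of the original experiment; synthetic data and NumPy only) illustrates this:

```python
import numpy as np

def dlt_calibrate(pts3d, pts2d):
    """Estimate the 11 DLT parameters l1..l11 from >= 6 correspondences
    (X, Y, Z) <-> (I', J') by linear least squares. Each correspondence
    contributes two linear equations after clearing the denominator."""
    A, b = [], []
    for (X, Y, Z), (I, J) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, I * X, I * Y, I * Z])
        b.append(-I)
        A.append([0, 0, 0, 0, X, Y, Z, 1, J * X, J * Y, J * Z])
        b.append(-J)
    l, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return l  # l[0]..l[10] correspond to l1..l11

def dlt_project(l, pt3d):
    """Project a 3D point with given DLT parameters (forward check)."""
    X, Y, Z = pt3d
    d = l[8] * X + l[9] * Y + l[10] * Z + 1.0
    I = -(l[0] * X + l[1] * Y + l[2] * Z + l[3]) / d
    J = -(l[4] * X + l[5] * Y + l[6] * Z + l[7]) / d
    return I, J
```

With exact, noise-free correspondences the recovered parameters match the true ones; with real image measurements, the least-squares solution minimizes the algebraic residual instead.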
More than just two images are usually needed to reconstruct a specific object into a 3D model; tens to hundreds of images can be used for 3D reconstruction [25]. A triangulation algorithm finds common points in these images. Subsequently, another algorithm calculates the camera positions in space during object acquisition. Then the values x, y, z are assigned to each point according to the Cartesian coordinate system [8]. That determines the basic information about the object's position, size, color, and geometric shape in space [25].
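The two-view intersection step described above can be sketched as a linear triangulation: each image observation contributes two rows of a homogeneous system, and the 3D point is the null vector of that system. A minimal illustration with hypothetical camera matrices (NumPy; not from the article):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT-style) triangulation: recover the 3D point whose
    projections through 3x4 camera matrices P1 and P2 are x1 and x2.
    The solution of A X = 0 is the right singular vector belonging
    to the smallest singular value."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # back from homogeneous coordinates

def project(P, X):
    """Pinhole projection of a 3D point X by camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

In a real SfM pipeline the same idea is applied to every matched feature track across many cameras, with outlier rejection and nonlinear refinement on top.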

Representation of 3D data in a point cloud
In its basic form, 3D data is represented as a point cloud, which can also provide secondary information about color or intensity [25]. The information from a 2D image is contained in mathematically structured fields, and this data already explicitly contains relationships between neighboring points. However, in a general 3D area, point clouds contain neighboring relationships only implicitly. Imported internal and external calibration data, the orientation of the scanning device, and the accuracy of its settings can be loaded together with the images and used as additional information for the 3D reconstruction of the object. A point cloud is shown in Figure 1.
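As a toy illustration of this implicit neighborhood structure, a point cloud can be held as an N×3 coordinate array with a parallel per-point color array; unlike a pixel grid, neighbor relations must be recovered by distance queries. The names and data below are illustrative, not from the experiment:

```python
import numpy as np

def k_nearest(points, query, k=3):
    """Brute-force k-nearest-neighbour search in a point cloud.
    In a 2D image, neighbours are implicit in the pixel grid; in a
    point cloud they must be recovered from 3D distances like this
    (production code would use a spatial index such as a k-d tree)."""
    d2 = np.sum((points - query) ** 2, axis=1)  # squared distances
    idx = np.argsort(d2)[:k]
    return idx, np.sqrt(d2[idx])

# a minimal point cloud: XYZ positions plus per-point RGB colour
points = np.array([[0.0, 0.0, 0.0],
                   [0.1, 0.0, 0.0],
                   [0.0, 0.2, 0.0],
                   [1.0, 1.0, 1.0]])
colors = np.array([[34, 139, 34]] * 4, dtype=np.uint8)  # foliage green

idx, dist = k_nearest(points, np.array([0.05, 0.0, 0.0]), k=2)
```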

METHOD OF IMAGE CAPTURE AND POINT CLOUD GENERATION
The object for capturing image data is a young tangerine tree, a live plant with many green leaves, which will be converted into a 3D model. The tangerine tree grows in a citrus greenhouse alongside other types of citrus. The young tree is 1 m tall, with branches spreading up to 0.5 m from the trunk. The temperature in the greenhouse was 24 °C at the time of image capture. The subject was captured in bright daylight without additional lighting devices. This experiment aims to create a faithful 3D model of the plant's tree structure in point cloud and texture form for other possible desired outputs. Furthermore, this work aims to compare the scanning devices and their 3D object modeling by photogrammetry.

A scanning device for obtaining image data
Two scanning devices were used to obtain the input image data in the experiment: a 360° camera and a DSLR camera with a lens. These devices differ in construction, lens characteristics, use, and type of image captured, as shown in Figure 2(a) (DSLR camera Canon EOS Kiss X7, left, and 360° Insta360 Pro camera, right), Figure 2(b) (image taken by the Insta360 Pro with a 200° F2 "fish-eye" lens), and Figure 2(c) (image taken by the Canon EOS Kiss X7 camera with an EF-S 18-55 DC III lens). The scanning devices and their properties are described in more detail in the following text.
The Insta360 Pro is a panoramic 360° camera with six 200° F2 lenses. These "fish-eye" lenses are located around the circumference of the spherical body of the camera. The camera allows taking quality 3D images and has several modes for capturing images at a resolution of 4,000×3,000 px and for 8K image creation. The camera takes six still images in normal mode, which can then be used for stitching and subsequent processing of 8K monoscopic and stereoscopic spherical images. The camera uses its own Insta360 Stitcher application to stitch images. The camera also works in a primary mode that captures the image in RAW format for subsequent image processing, calibration, or creating its own color scale in digital negative (DNG) format. Brightness differences can be eliminated using the high dynamic range (HDR) mode, which takes three series of six shots. These images can be used to stitch a highly dynamic spherical image. Burst mode takes ten series of six shots. The camera allows remote control and image control directly in its application and in VR. The Canon Kiss X7 is a DSLR camera that works with a maximum resolution of 5,184×3,456 pixels and takes 18 MP photos. The device also works well in low light: it operates with a maximum sensitivity of ISO 12 800, which can be expanded to ISO 25 600. It includes a complementary metal oxide semiconductor (CMOS) sensor, an optical viewfinder, and several modes selected according to the type of output and the conditions of the environment where the image is captured. As mentioned above, the photogrammetry method can also be applied to this type of device.

Type and quality of the captured image
Both types of devices were used in this experiment under the same conditions, in the same space, and at the same time. Since these were two different scanning devices with different properties, it is evident that the output images will differ, especially during post-processing for 3D display in graphics software and subsequent modeling. This section presents and compares the fundamental differences in the attributes of the acquired images and in their subsequent processing in graphics software.
The Insta360 Pro 360° camera works with six fixed fish-eye lenses around the circumference of its spherical body. The primary RAW mode was used to capture the image in the greenhouse, and six images were created for subsequent 3D modeling. Figure 2(a) shows both sensing devices used. The reference image is shown in Figure 2(b). A reference image from the series of 53 photographs needed to model the 3D point clouds taken by the DSLR camera is shown in Figure 2(c). The attributes of both of these figures are described in Table 1. Table 1 shows some differences in the properties of the obtained images, mainly in image size: the reference image from the single lens reflex (SLR) camera is 25.4 MB, and the reference image from the 360° camera is 18.3 MB. It is therefore necessary to mention that 3D modeling by photogrammetry, especially using DSLR cameras, will require a larger memory capacity of the computing device.
As shown in Figure 2, the resulting images obtained from the two scanning devices are very different, due not only to the different types of lenses used. In the case of the image obtained from the 360° camera using the basic RAW format, we can obtain a stitched image from individual images taken by the camera simultaneously. From the DSLR camera with a classic kit lens, as this experiment shows, we get a classic photo for further editing and output. When capturing the image with a DSLR camera for 3D modeling using the photogrammetry method, we must take a series of individual photographs for connection into point clouds; a total of 53 photos were taken in this experiment. In contrast, six individual images from the fundamental mode of the 360° camera are used for 3D modeling by photogrammetry in this experiment. All images from both devices were compressed into JPG format before subsequent image processing.

Image processing for creating 3D model
As mentioned in previous sections, different types of scanning devices are designed for different image processing. Also, 3D modeling of objects or environments requires different solutions depending on the image processing and the type of output required. This experiment is focused on the reproduction of a real object in a 3D environment and compares the possibilities of processing and using acquired image data from two different scanning devices. This section and Figure 3 summarize the whole process from the device and image transformation into these digital 3D environments, which can then be used for visualization in virtual reality. Figure 3 shows the whole image processing process: from the initial capture of a real object to the output display in virtual reality, where the final image analysis can be performed. As mentioned, the 360° camera allows the captured image to be processed into a 3D image almost instantly, and VR display functions can be used in real time. By automatically stitching individual images directly in the camera, or by using the output panoramic image, image quality analysis can be performed quickly. Adjusting and increasing the quality of the captured image can be done in classic 2D graphics software for photo editing and processing. The primary image adjustments are usually the brightness and contrast of the image; these can often lead to a considerable improvement in image quality. In this experiment, this image analysis process was used to check the quality of the images. Because the images were captured to model point clouds, the six basic images were processed by the same process as the 53 images taken by the DSLR camera. The captured images from the two scanning devices were processed separately.
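The brightness/contrast correction mentioned above can be sketched as a simple per-pixel operation on an 8-bit image array. The function below is an illustrative stand-in for what photo-editing software does, not the exact algorithm used in the experiment:

```python
import numpy as np

def adjust_brightness_contrast(img, brightness=0.0, contrast=1.0):
    """Simple brightness/contrast adjustment of an 8-bit image array:
    scale pixel values around mid-grey (128), add an offset, and clip
    back to the valid 0..255 range."""
    out = (img.astype(np.float64) - 128.0) * contrast + 128.0 + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

# a tiny synthetic 2x2 grey image for demonstration
img = np.array([[100, 150], [50, 200]], dtype=np.uint8)
brighter = adjust_brightness_contrast(img, brightness=20)   # lift all pixels
punchier = adjust_brightness_contrast(img, contrast=1.5)    # spread around mid-grey
```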
The 2D graphics software Adobe Photoshop (Adobe Systems Inc., San Jose) was used for the primary image quality control. This graphics software is designed primarily for working with bitmap graphics and is one of many graphics programs that complement and support each other throughout Adobe's Creative Suite graphics package. In this program, the image quality can be improved well with image attribute editing tools such as brightness, contrast, and more. The images were then uploaded to the 3D modeling software Agisoft Metashape Professional (Agisoft LLC, St. Petersburg), in which a secondary quality control of the recorded images took place. This graphic software for working with 3D graphics and image data is a suitable solution for the photogrammetric processing of digital images. The resulting 3D data can be used in many areas of human activity, from the documentation of cultural heritage to the creation of visual effects or work in geographic information system (GIS) applications. The software works as an automated data processing system, allowing the user to set parameters for specific tasks and different image data types. A dense-cloud-based mesh generation algorithm produces mesh models even from dense point clouds, which can contain hundreds of millions of points. The out-of-core solution implemented in the Agisoft software allows the generation of polygonal models. The recorded images formed the basis for obtaining the basic point cloud, which defined the basis of the 3D model of the scanned object, its position, color, and other attributes. A dense cloud was then generated from the subsequently created depth maps, which precisely defined the 3D object in the point cloud model in the attributes of shape, size, and color. Correcting the point clouds, defining the area and extent of the 3D object, and reconstructing the polygon mesh often make it necessary to modify the object for the final image display.
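The geometric core of the depth-map-to-dense-cloud step can be illustrated by back-projecting each depth pixel through a pinhole camera model. This is a simplified sketch with assumed intrinsics (fx, fy, cx, cy), not the Metashape implementation, which additionally filters depth maps and fuses them across many images:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud using pinhole
    intrinsics (fx, fy: focal lengths in pixels; cx, cy: principal
    point). Pixels with no valid depth (<= 0) are dropped."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel row/column grids
    z = depth
    x = (u - cx) / fx * z              # lateral offset scales with depth
    y = (v - cy) / fy * z
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

depth = np.full((4, 4), 2.0)           # flat surface 2 m from the camera
depth[0, 0] = 0.0                       # one invalid pixel
cloud = depth_to_points(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```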
However, point clouds can also be projected into a virtual environment, which serves only for visual inspection of the image and its analysis. Depending on the required output, another model is then generated from the point clouds. In this work, the transformation of the 3D model into a wireframe and then into a texture model was used. The following section compares the attributes and possibilities of the entire image processing pipeline into a 3D model and its visualization in individual steps.

Creating a primary point cloud from photos
All images in the series must be aligned to determine the primary position of the point cloud, and the quality of the individual images in the series must be controlled. Figure 4 visually describes the alignment process and the subsequent modeling of the object into point clouds. Figure 4(a) shows a reference frame from the series of 6 photographs taken by the 360° camera. Figure 4(b) shows the positions of the 6 photos obtained by the 360° camera, from which the primary 3D point cloud model is then defined. Figure 4(c) shows a reference image from the series of modeling images acquired from the DSLR camera; this series contains a total of 53 images. Figure 4(d) shows the locations of the 53 images and the point cloud obtained from this series. As Figure 4 and the text above show, the resulting point clouds from the 3D software are different. The images from the 360° camera are arranged around the perimeter, and the obtained point cloud follows this perimeter. This yields a 3D model close to the automatic or manual stitching of images in 360 camera applications; however, a point cloud suitable for further analysis of the 3D image structure cannot be obtained this way, only basic information about the interior environment. In contrast, the 53 images taken by the DSLR camera created a cloud of points concentrated in the space enclosed by the positions of all the images, which yielded a 3D image of the desired object.

CREATING A 3D TEXTURE MODEL FROM A POINT CLOUD
From the basic processing of images into a point cloud, further procedures in reconstructing the 3D image can follow. The following figures visually show these procedures for processing the image data from both scanning devices. Figure 5 shows a vertical view of the developed 3D model, and Figure 6 shows a horizontal front view of the same 3D model. These figures compare the same image processing process and the differences in the generated 3D model when using two different scanning devices and different numbers of input photos. Six images were used for the model created from the 360° camera; fifty-three photos were used to create the 3D model from the DSLR camera. Figure 5 shows a vertical view of the image data processing steps for the 3D model. Figure 5(a) shows the basic point cloud obtained from the 6 photos taken by the 360° camera, and Figure 5(b) shows the next step in image processing, the dense cloud generated from the basic point cloud. Figure 5(c) shows the polygon mesh created from the dense cloud, and Figure 5(d) visualizes the final form of the 3D object texture. The vertical view of the 3D object visualization process using the DSLR camera begins with Figure 5(e), which presents the basic point cloud created from the series of 53 photographs. Figure 5(f) shows the generated 3D dense cloud model, Figure 5(g) the polygon mesh, and Figure 5(h) the texture of the generated 3D object using the DSLR camera. Figures 6(a)-(h) show the same 3D models from a horizontal perspective in all steps of the image processing. The above visualizations of 3D model creation demonstrate the process of further image processing from point clouds: the next step is to create a dense cloud from the point cloud, which specifies the object's properties; then a polygonal mesh is created that connects all the points in the dense cloud; and, in this experiment, the final step is to create a 3D texture model from the polygon mesh.
A comparison of the structures of the created models in Figures 5 and 6 shows that the models differ depending on the devices used and the properties of the scanned image. Table 2 compares the individual attributes of the 3D models from the modeling process for both scanning devices. There are differences in the image processing and its properties; in particular, the different number of photographs entering the modeling process is crucial. This difference is mainly reflected in the number of points in the point cloud/dense cloud, which in turn leads to different numbers of faces and vertices in the 3D model. It is evident that these facts also affect the time and performance requirements of the computing units in modeling and the size of individual files. From the above, it can be assumed that it is more appropriate to use a DSLR camera to reproduce a real object in 3D, although this approach to modeling by photogrammetry is more time-consuming and capacity-intensive. The 3D model from the 360° camera in this experiment copies the character of a 3D perimeter model and is not fully modeled in its interior, as shown in Figure 5. The detail in Figure 7 represents one of the tree branches in the reconstructed 3D model. As can be seen from the image, the 3D point cloud models are also very different. The model obtained from the 360° camera copies the perimeter of the entire 3D model, and the positions of the points in the point cloud of the tree model are generated accordingly. Figure 7(b) also shows this: the polygonal mesh generated from the point cloud has a 2D character, and part of the polygonal mesh, in this case, is also the background. Thus, the 3D model of the tangerine tree is not generated separately but is part of the entire environment.
Compared to the above models, the 3D character of the point cloud and polygonal mesh models of the tree obtained from the source image data of the DSLR camera is noticeable, as shown in Figures 7(d) and (e). The 3D model of the point cloud detail of a branch indicates the positions of individual points in space, and a polygonal mesh with a 3D character is created from these points. Another difference between the two compared models is the color information represented in their point clouds. These attributes can best be compared visually in Figures 7(c) and (f), which show details of the leaves in the texture models. The model obtained from the 360° camera is visually very close to a photograph and fully blends with the background. The detailed view of the leaf texture from the DSLR camera model shows the 3D attributes of the model generated from the polygonal mesh, as shown in Figures 7(e) and (f).

APPLICATION OF THE CAPTURED IMAGE TO THE VR ENVIRONMENT
As stated in the introduction and the previous text, the use of 360 cameras is expected primarily in the direct transformation of images into a 3D and virtual environment; Figure 8 outlines this procedure. Their use can also be assumed in combining several methods of modeling and transforming a natural environment or objects into one whole. The following describes this method of using the 360° camera directly to visualize images in VR, while a 3D model of the tangerine tree created by photogrammetry is added to the created virtual 3D environment of the greenhouse. Figure 8 shows the process by which the 3D model is integrated into a VR environment created by the 360° camera, covering the image processing from both sensing devices. The first step is to check the image quality in the 3D imaging software. A panoramic image created by the 360° camera was used in this part of the experiment; this image is created automatically by the camera in almost all of its modes, which are described in the section on scanning devices. Figure 9(a) shows a panoramic image created by the 360° camera and Figure 9(b) shows its visualization in the FSPViewer 3D imaging software. This 3D imaging software is available free of charge.
The 3D image was also checked in this software. The panoramic photograph of the greenhouse environment was subsequently imported into virtual reality creation software. In this experiment, the freely available Unity graphics software was used. The software is designed for a wide range of uses and can create 2D and 3D graphics for applications, especially for VR technology. A virtual model of the greenhouse environment was created from the source panoramic photo. In Unity, the photo is treated as a material created for a specific object; this object is a sphere that corresponds to the character of the photograph. This procedure produced the same environment as in the FSPViewer software. Attention in the virtual environment was again focused on the tangerine tree object.
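The sphere-with-panorama-material approach works because an equirectangular panorama parameterizes viewing directions by longitude and latitude. A small sketch of that mapping (illustrative only; Unity performs the equivalent lookup in its material/shader):

```python
import math

def equirect_to_direction(u, v):
    """Map normalized equirectangular panorama coordinates
    (u, v in [0, 1]) to a unit viewing direction: u covers longitude
    (-pi..pi), v covers latitude (+pi/2 at the top of the image down
    to -pi/2 at the bottom)."""
    lon = (u - 0.5) * 2.0 * math.pi
    lat = (0.5 - v) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z
```

Inverting this mapping per screen pixel is what lets a flat panoramic photo behave as an immersive 360° backdrop around the viewer.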
Figure 9. Image processing by a 360° camera: (a) panoramic photograph of the greenhouse environment created by the 360° camera and (b) visualization of the 3D environment in FSPViewer software

The next step in the experiment was to transfer the created 3D model of the tree from the DSLR camera. The 3D object of the tree created by photogrammetry was exported from the Agisoft software as two models. The first 3D model was exported from the 3D polygonal mesh model created from the points of the dense cloud; this model does not contain a texture. The second model was exported as a 3D texture model. Figures 10(a) and (b) show both exported 3D models for insertion into the virtual greenhouse environment. Both 3D models shown in Figure 10 were exported in .obj format. The 3D model without texture was imported into the 3D greenhouse environment created in the Unity software. The 3D texture model was then used to create the material for visualizing the texture of the 3D model in Unity, and the material created in this way was applied to the 3D model. That created a model of the tangerine tree, including the surface and color of its structure. This model thus became a new object in the created virtual environment of the citrus greenhouse, which is visually presented in Figure 11. The created virtual reality greenhouse environment is shown in Figure 11(a); a panoramic photo captured by the 360° camera was used for the environment. Figure 11(b) shows the basic 3D model created with the DSLR camera that will be added to the virtual environment. This model is in elemental form without texture, which will be added to the model in the Unity software as a separately created material. The 3D model added in this way to the created VR environment is shown in Figure 12. The quality of the created 3D model, especially its color reproduction, can thus be visually compared. Figures 12(a) and (b) visualize the natural form of the created 3D object in the VR environment.
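For illustration, .obj files of the kind exchanged between the 3D modeling software and Unity are plain text, so a minimal untextured mesh can be written and read back with a few lines. This is a simplified sketch handling only vertex and triangular-face records, not the full OBJ specification:

```python
import os
import tempfile

def write_obj(path, vertices, faces):
    """Write a minimal Wavefront OBJ file: 'v x y z' lines for vertices
    and 'f i j k' lines for triangular faces (OBJ indices are 1-based)."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

def read_obj(path):
    """Parse vertices and faces back, ignoring any other OBJ records."""
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                vertices.append(tuple(float(p) for p in parts[1:4]))
            elif parts[0] == "f":
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
    return vertices, faces

# round-trip a one-triangle mesh through a temporary file
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2)]
path = os.path.join(tempfile.mkdtemp(), "tree.obj")
write_obj(path, verts, faces)
verts_back, faces_back = read_obj(path)
```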
This visualization in the VR environment allows visual quality control of the created model and comparison with the objects that are part of the virtual environment. Figures 11 and 12 visually represent a combination of two different approaches to using these sensing devices. The 360° camera is a sensing device with which the image can be easily converted into a VR environment; this environment can be used almost immediately to immerse the user in VR. This experiment used panoramic photography taken with the 360° camera. A 3D model of an object created from a series of photographs taken by the DSLR camera using the photogrammetry method was inserted into this environment as a separate object. It was necessary to use two different 3D models for the final visual output: combining a 3D model consisting of a polygonal mesh of point clouds and a 3D texture model resulted in a visual output of the tree approaching the real object. The model, however, does not reach the reproduction quality of the tree in the panoramic photo. These and other attributes are discussed below.

DISCUSSION OF RESULTS
The above experimental visualizations show that both devices can be used for image reconstruction by photogrammetry, but with different intentions of use. The 360° camera can quickly and easily capture an image for 3D display and quick application in a virtual environment. It is therefore unnecessary to convert the scanned image into a point cloud unless another intention is considered, such as a deeper analysis of the image or point cloud or their transformation into a polygonal mesh. However, this experiment showed that the object could be reconstructed from the scanned image; the selected object in the modeling process copied the perimeter of the environment. The device can thus be used to model images into point clouds and into wireframe or texture models. The 360° camera has many modes and options for working with the captured image; in the experiment, the RAW mode was evaluated as suitable for modeling by photogrammetry.
That raises the question of whether the result of the reconstruction would be better if a combination of modes were used; this may be the subject of further research. The advantages of this device are the low number of required images, the higher modeling speed, and the lower demands on the capacity of the computing device. However, it is necessary to consider the lower number of generated point cloud points, which is directly dependent on the number of images used and the image information obtained from them. The DSLR camera achieved significantly better results in the reconstruction of the object than the 360° camera. It is suitable for terrestrial multi-image photogrammetry methods and the 3D reconstruction of objects, especially when no other equipment, such as a high-quality and powerful handheld 3D laser scanner, is available for scanning the object. Unlike such devices, DSLR cameras can be used to model a real object at a low cost. A total of fifty-three images were used in this experiment, and they were sufficient to reconstruct the 3D model of the real object. The number of images used plays an essential role in this: compared to the six images from the 360° camera, a much higher number of points in the point cloud was generated from the fifty-three images in this case. That is essential for the next steps in 3D object modeling and for its quality. The spatial arrangement and other information about the object in the environment are better specified in the case of the DSLR camera, which also defines the final form of the model created by photogrammetry.
Another way to use the two scanning devices is to combine their properties and possibilities. In the case of the DSLR camera, its usefulness for creating a 3D model of a real object by photogrammetry was confirmed. From the image taken by the 360 camera, only a 3D perimeter of the reconstructed environment was created by the same method; a 3D model of the tangerine-bush object itself was not created. That tree model copied the shape of the overall 3D model and merged with the greenhouse environment as a background, and the polygonal mesh of these elements was likewise created as a whole. Details are shown in Figures 8(a)-(f), which also show details of the model created by the DSLR camera. The three-dimensionality of the created tree model is clearly noticeable, and differences can also be observed in the detailed rendering of the leaves of both models. In the case of the 360 camera, it can be theoretically assumed that the desired result, a 3D reconstruction of the real tangerine bush, could be achieved by using different camera positions and other image-capture modes.
In another scenario, the 360 camera's ability to transform the captured image into a VR environment was used. The camera's panoramic-image option was employed, since this type of image can be quickly converted into a 3D image and into VR. In this way, a digital 3D image of the natural environment was created, which allows such scanning devices to be widely used in many areas. The user can immerse themselves in the VR environment very quickly, especially if the created environment is shared over a greater distance. A 3D model created from the DSLR camera can also be imported into this environment. In this experiment, two versions of the created model had to be exported from the 3D modeling software: one without a texture and one with a 3D texture. These models were then combined in the Unity software, creating a 3D model of a tangerine bush in a virtual citrus-greenhouse environment. The object thus became part of the created virtual environment. The advantage is the ability to manipulate and modify some attributes of the model in the environment, such as its position, size, or display options. In the VR environment, for example, the structure, properties, and shortcomings of the created polygonal mesh can be visually analyzed. The 3D model can thus become a complementary visual tool for identifying errors in the model. Compared with the tree visualized from the panoramic photograph, the 3D tree model is imperfect, making it possible to analyze the shortcomings of the created model.
The most noticeable drawback of the 3D model produced by photogrammetry is the diffusion effect of the environment. The blending of colors is visually noticeable, especially at the foot of the model, where many undesirable points are created by shadow and the dense concentration of leaves. These points are not only carriers of color information: by connecting them, the polygonal mesh and the subsequent models, including the textured one, are created, so these errors propagate into all further steps of object modeling. However, as this experiment shows, visualization in a virtual natural environment allows all attributes to be compared against a real-world view. It can also visually help identify the bottlenecks and properties of the 3D model in comparison with the visualized image of the object from the created environment. This kind of additional control in a VR environment can be applied in many fields, especially to the application of materials and their properties.
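Undesirable points of the kind described above are commonly pruned with a statistical outlier-removal step before meshing, as implemented in standard point-cloud tools such as PCL or Open3D. A minimal numpy sketch of the idea follows; the synthetic cloud, neighbour count, and threshold are illustrative assumptions, not values from this experiment:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than std_ratio standard deviations above the global mean; this
    is the idea behind statistical outlier removal in PCL/Open3D."""
    # Brute-force pairwise distances (fine for small clouds; real tools
    # use a KD-tree for large ones).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]

# A dense cluster (the object) plus two stray "shadow" points far away.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.05, (200, 3)),
                   np.array([[5.0, 5.0, 5.0], [-4.0, 6.0, 2.0]])])
cleaned = remove_outliers(cloud)
print(cloud.shape[0], "->", cleaned.shape[0])  # the two strays are dropped
```

Filtering such points before the mesh is generated prevents the shadow-induced errors from propagating into the polygonal and textured models, although it cannot recover geometry that the images never captured.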

CONCLUSION
This experiment aimed to explore the possibilities of two scanning devices for creating a 3D model of a real object by photogrammetry. Two different image-capture devices were used: a 360 camera with six fish-eye lenses around the perimeter of its body, and a DSLR camera with a kit lens. Because the devices differ in construction and in the properties of the acquired image, it could be assumed that their suitability for 3D modeling would also differ. The experiment focused on reconstructing the object into digital 3D form. A mandarin tree in a citrus greenhouse was chosen for the 3D reconstruction so that the 3D model could be reproduced as accurately as possible; given the nature and structure of the object, a more difficult reconstruction was also expected. Comparing the image-processing workflow of both devices thus allows their more efficient use in future experiments on the 3D reconstruction of an environment or object. The 360 camera is suited above all to capturing the environment and transforming it easily into a virtual reality environment. The DSLR camera remains a low-cost device for applying the terrestrial multi-image photogrammetry method to the 3D reconstruction of a real object. However, using many source images for 3D reconstruction also places higher demands on computing hardware, while enabling more complex transformations of the object into a virtual reality environment.