
Mapping and 3D modelling using quadrotor drone and GIS software

Abstract

Background

The main obstacles to local, daily or weekly time-series mapping with very high-resolution satellite imagery are the high price and limited availability of the data. These constraints are now being addressed by the development of improved UAV drone technology with a wider operating range and a growing variety of usable imaging sensors.

Findings

This research was conducted using an Inspire 2 quadcopter drone with an RGB camera, developing 3D models by photogrammetry and producing situation maps with geographic information systems. The drone used has the advantage of a wider operating range with adequate power support. It also carries a high-quality camera with a stabilizing gimbal for image stability, making it well suited for mapping activities.

Conclusions

Using Google Earth data at two separate locations as a benchmark for the accuracy of area measurement at three flying heights, the results obtained were 98.53% (98.68%), 95.2% (96.1%), and 94.4% (94.7%) for altitudes of 40, 80, and 100 m, respectively. Future research will assess area results for more land-cover objects and for more varied polygon shapes, so that the reliability of the method can be established in general.

Introduction

Mapping techniques with remote sensing and three-dimensional (3D) earth modeling have now achieved significant progress, both in terms of vehicles and sensors and in the techniques and software used. On the vehicle side, thematic mapping with very high-resolution satellite imagery (0.3–0.5 m) can be done easily and with good results. The main obstacle to mapping with very high-resolution satellite imagery is data acquisition, which is still quite expensive, especially for mapping at a local scale that requires frequent (daily or weekly) time-series repetition. One alternative to these problems is to use unmanned aerial vehicles (UAVs), which are low cost, provide very high resolution, and can be flown at almost any time with few restrictions over local-scale areas [1]. According to [2], a UAV is a tool and, as such, should be used for the right application: mapping/monitoring of small areas, i.e. less than 10,000 m² (1 hectare). Mapping specifically for land use and land cover (LULC) using drone data has been shown to improve mapping accuracy from 78.1% with Synthetic Aperture Radar (SAR) to 92.3% using UAV drone data [3]. Figure 1 shows the Inspire 2 quadcopter drone used in this study.

Fig. 1

Inspire 2 quadcopter drone

UAV sensors and platforms are nowadays used in almost every mapping application, e.g. agriculture, forestry, archeology and architecture, environment, emergency management, and traffic monitoring [4], wherever information observed from top or oblique views is needed [5]. UAVs can produce several different types of maps, such as geographically accurate orthorectified two-dimensional maps, elevation models, thermal maps, and 3D maps or models. According to Iizuka et al. [3], drone information can be utilized for local-scale area analysis: orthophotos generated from multiple photos demonstrate the potential for obtaining detailed landscape information, with a ground resolution of approximately 0.05 m, much higher than the resolution of any current satellite imagery. According to Pajares [6], UAV technology has allowed the development of numerous methods, procedures, and strategies specifically adapted to these systems; together with the economic advantages of their relatively low cost, this is enhancing their use and extending their range of performance and applications. According to Saadatseresht et al. [7], the advantages of UAV photogrammetry compared with field surveying include higher quality and reliability of spatial products, a greater diversity of spatial products, greater user-friendliness, a faster mapping process, legal validity and consistency checks, lower cost, fewer interruptions in operations, and better accessibility to rough areas.

One type of UAV that is popular today is the multi-copter, which can be divided into different types depending on the number of rotor arms. According to Hassler and Baysal-Gurel [8], most applications are dominated by rotorcraft because of their ease of use (no runway is required), lower cost, and high spatial resolution imaging enabled by the ability to hover, as seen in Fig. 1. Multi-copters come in many forms with different numbers of propellers; the most common configurations include traditional helicopters, hexacopters (6 propellers), octocopters (8 propellers), and of course quadcopters with 4 propellers. Quadcopters have been used for mapping, for example by Navia et al. [9], who used a multispectral camera for agricultural mapping with a three-dimensional mosaic. The weakness of Navia's work was accuracy: it could not yet produce better orthomosaics because the georeferencing process did not employ differential GPS (DGPS), or because the users lacked basic knowledge of surveying and photogrammetry [2]. In 2017, Puri et al. [10] used a variety of UAVs, including quadcopters, to assess their potential for mapping and smart farming and their particular benefits for farmers. One of the most common uses of UAVs, in addition to capturing landscapes, is 3D mapping, with applications in topographic surveys, photogrammetric solutions, progress monitoring, disaster management, and agriculture and forestry [2]. According to Pytharouli et al. [2], 3D mapping with drones, especially quadcopters, is in many cases replacing terrestrial surveying equipment; the ability to cover a local-scale working area in very little time is highly desirable in an era where quick and effective intervention has become the norm. Iizuka et al. [3] produced a 3D orthophoto and digital surface model (DSM) to generate LULC maps.

To overcome the problems mentioned earlier, this paper discusses a methodology and mapping technique that develops 3D models from photogrammetric data taken with a quadcopter drone. Processing and modeling use geographic information system software. The research offers a 3D mapping and modeling approach that produces faster and more accurate outputs, as demonstrated in the evaluation process.

The structure of the paper is as follows: (1) an introduction describing the use of drones for mapping and 3D modeling, the general research objectives, and the structure of the paper; (2) a mapping and 3D modeling section discussing examples of the use of drone remote sensing data for various mapping and 3D modeling purposes; (3) the research methodology, consisting of descriptions of the tools, data, and research methods used; (4) results and discussion, presenting the research outputs and comparing them with similar studies; and (5) the conclusion, which contains the main findings of the research and prospects for further research development.

Mapping and 3D modelling

A UAV, or un-crewed aerial vehicle, commonly known as a drone, is an aircraft without a human pilot onboard and a type of unmanned vehicle [11]. Civilian use of UAV drones, especially for mapping, began around 2006 [12] and includes thematic mapping for agriculture, forestry, archeology and architecture, environment, emergency management, and traffic monitoring [4], spanning projects, regulations, classifications, and UAV applications in the mapping domain [13]. On the application side, the use of UAV drones for thematic mapping has continued to develop since 2017, with applications in ecology [14,15,16,17], crop productivity [18,19,20], LULC mapping [1, 3, 5, 21], plant disease detection [22,23,24], and plant/tree detection and counting [25, 26].

In the field of ecology, Cruzan et al. [15] used micro-UAVs to create a composite image (orthomosaic) and a 3D DSM. Vegetation classification was conducted both manually and with an automated routine; a comparison between manual and automated habitat classification confirmed that the mapping methods were accurate, and a species with high contrast to the background matrix allowed an adequate estimate of its coverage. Villanueva et al. [17] investigated the contribution of LiDAR elevation data to DEM generation based on fixed-wing UAV imaging for flood estimates, including volume and area. Earlier, Ruwaimana et al. [16] compared the utility of drone images and satellite images for mangrove mapping, while Casella et al. [14] reviewed the state of the art in drones and photogrammetry for beach surveys and the measurement quality achieved.

Sola-Guirado et al. [20] calculated the actual "on year" yield (AY) of a crop by linking it to the geometric parameters of the tree canopy using orthophotos acquired with a UAV drone. Actual yield was forecast using both manual canopy volume and individual tree crowns as the main factors for olive productivity. In conclusion, Sola-Guirado et al. [20] note that a thematic map describing spatial AY variability may be a powerful tool for farmers, insurance systems, and market forecasts in detecting agronomic problems. Huang et al. [18] compared satellite imagery, piloted aircraft remote sensing, and UAV data for capturing high-spatial-resolution imagery to generate an accurate weed cover map over a rice field. With a broadly similar goal, Nuijten et al. [19] used high-resolution optical imagery from a UAV drone to provide insight into the potential of drone data for determining crop productivity on a large scale.

Mapping LULC with drone vehicles after 2016 was initiated by Kalantar et al. [1], who presented a novel method integrating the fuzzy unordered rule induction algorithm (FURIA) into OBIA to achieve accurate land-cover extraction from UAV images, with a study area located on the Universiti Putra Malaysia (UPM) campus in the state of Selangor. Iizuka et al. [3] used drone image data to collect ground survey information and microscale information, implementing a structure-from-motion (SfM) technique to develop mosaicked orthorectified images of the sites; the orthophoto and DSM derived from the drone-based data had resolutions of 0.05 and 0.1 m, respectively. In the same year, Natesan et al. [21] investigated the use of a UAV equipped with an RGB camera and a compact spectrometer for land cover classification, and Yao et al. [5] compared LULC mapping and change detection applications using ultra-high-resolution UAV drone data.

Plant disease detection applications have been developed by [22,23,24]. Sandino et al. [23] proposed a methodology for effective detection and mapping of indicators of poor health in forest and plantation trees, integrating drone technology and artificial intelligence approaches; the approach is illustrated with an experimental case of myrtle rust (Austropuccinia psidii) on paperbark tea trees (Melaleuca quinquenervia) in New South Wales (NSW). Di Girolamo-Neto et al. [22] explored the potential of texture features derived from images acquired by an optical sensor onboard a UAV to detect Bermudagrass in sugarcane, and with the same type of vehicle, Xavier et al. [24] evaluated the potential of three-band multispectral imagery from a multi-rotor UAV platform for detecting Ramularia leaf blight at different flight heights in an experimental field. That study found that increasing infection caused progressive degradation of spectral vegetation signals, but not enough to distinguish disease severity at a finer scale.

For tree detection and counting, Koh et al. [25] and Grznárová et al. [26] proposed different methods for the same goal. Koh et al. [25] estimated plant density (or counts) in field experiments; the challenges posed to digital plant counting models are heterogeneous germination and the mixed growth stages present in field experiments with diverse genotypes. They developed and validated a method for safflower seedling counting at early mixed growth stages using UAV data; the model performed well across heterogeneous growth stages and has the potential to be used for plant density estimation across various crop species. Grznárová et al. [26] identified the influence of tree species on the accuracy of crown diameter estimation using imagery obtained from a drone.

In terms of the technology used in previous studies, most researchers use data from multi-rotor drones in combination with data from other remote sensing vehicles, such as satellite data and aerial photographs. Most researchers process 3D and orthomosaic information from drone data to obtain better analysis results, e.g. Cruzan et al. [15], Kalantar et al. [1], Iizuka et al. [3], Nuijten et al. [19], Yao et al. [5], and Casella et al. [14]. Cruzan et al. [15] created a vegetation map for an entire region from the orthomosaic and DSM and mapped species density using a set of overlapping images with corresponding referencing information; the next year, [1] created a DEM and orthomosaics to achieve accurate land-cover extraction from UAV images using the fuzzy unordered rule induction algorithm (FURIA). Iizuka et al. [3] produced an orthophoto and DSM from drone-based data at resolutions of 0.05 and 0.1 m for a case study of post-mining sites in Indonesia, generating LULC maps with high accuracy; the drone-based LULC map with orthophoto and DSM showed an average accuracy of 92.3%, compared with an overall accuracy of 78.1% for the SAR-based LULC map. Improved accuracy from the use of orthophoto and DSM in data processing was also shown by [1], with an overall accuracy of up to 91.23%, and by [19], who reconstructed a DSM and terrain model for determining crop productivity at scale and reported accuracy of up to 99.8% in detecting individual crops using an image segmentation approach. Yao et al. [5] fused 3D information such as height, geometric, and oblique information for remote sensing analysis in two project goals, LULC mapping and change detection; contextual information and deep learning methods were essential for accuracy improvement. Casella et al. [14] produced DEMs of a beach using a GNSS-RTK system; DEMs taken at different times were used to calculate the before/after sediment budget following a storm that hit a sandy coast on Sylt Island at the German North Sea coast. In the end, the average RMSE of the digital elevation models was ~5 cm, with a survey efficiency of about 3 m² min⁻¹ of coverage.

In processing high-resolution data from drones, most researchers apply artificial intelligence approaches, such as [1], which implements fuzzy FURIA, and [18], which proposes a fully convolutional network (FCN) for weed mapping of the collected imagery. A maximum-likelihood algorithm was used by Ruwaimana et al. [16] and Sandino et al. [23] to process hyperspectral data. ALOS-2 Phased Array L-band Synthetic Aperture Radar-2 (PALSAR-2) L-band backscattering data were processed with a multilayer perceptron (MLP) supervised classification to generate a categorical map by Iizuka et al. [3]. Nuijten et al. [19] used multi-resolution image segmentation (MIRS) and template-matching algorithms in an integrated workflow to detect individual crops and delineate C. endivia crop-covered areas, and [22] used random forest algorithms to perform automatic classification. Figure 2 illustrates the principle by which drones perform mapping.

Fig. 2

The principle of drone-based mapping. Mapping with UAVs/drones is designed to ensure adequate coverage by overlapping contiguous images [30]

Methodology

The main hardware used in this study comprises a DJI Inspire 2 drone with a Zenmuse X5S camera and a personal computer (PC) for data processing. The DJI Inspire 2 has the advantage of a wider operating range with adequate power support. It is also equipped with a high-quality camera on a stabilizing gimbal, making it suitable for mapping activities. The Inspire 2 is the successor of the Inspire 1, the world's first film-making drone to integrate an HD video transmission system, a 360° rotating gimbal, and simple application control. The launch of the Zenmuse X5S camera further cemented the Inspire as an important tool for filmmakers around the world. The Zenmuse X5S is the first aerial camera in the world capable of recording lossless 4K RAW video at frame rates of up to 30 fps, with an average bitrate of 1.7 Gbps (maximum 2.4 Gbps), making it possible to produce dazzling professional-level recordings. Packaged with a powerful MFT sensor, the camera also captures 20 MP images with stunning detail, stabilized by an integrated 3-axis gimbal [27]. The minimum requirements for the PC are at least 12 GB of RAM, an 8th-generation Intel i5/i7 CPU or equivalent AMD Ryzen processor or higher, 20 GB of internal storage, graphics with 2–4 GB of dedicated VRAM, and a 15-inch or larger monitor at 1920 × 1080 (FHD).

The software used is Agisoft Metashape from Agisoft LLC and ArcMap ver. 10.3 from Esri Inc. Agisoft Metashape is a stand-alone software product that performs photogrammetric processing of digital images and generates 3D spatial data for use in GIS applications. Agisoft was used to produce situation maps of an outdoor tennis court in Bogor city and an open field in Maguwo, Yogyakarta, Indonesia.

The stages of the research, shown in the flow diagram in Fig. 3, consist of flight path planning and ground control point (GCP) determination, drone image collection, orthomosaic generation (including an accuracy test), DEM creation, 3D modeling, contour generation, and situation map creation. According to Juniati and Harintaka [28], orthophoto generation is a process that eliminates the effect of image perspective and corrects relief displacement caused by terrain conditions, producing images or photos in an orthogonal projection, i.e. making the photo upright.

Fig. 3

Research methodology flowchart

The orthomosaic model accuracy test is performed with attribute/semantic accuracy, a value that describes how well an object's attributes on the map match the actual attributes in the field. At this accuracy-correction stage, the calculation uses omission and commission equations. Commission occurs when the interpreted result is longer/broader than the field data, while omission occurs when the interpreted result is shorter/narrower than the field data [29]. Accuracy is calculated using the following equation:

$$\text{Accuracy} = \left( 1- \frac{\Delta}{\text{field} \, \text{data}} \right) \times 100\%$$
(1)

with Δ = the absolute difference between the field data and the interpreted data.
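Equation (1) can be sketched in a few lines of Python. The interpreted area value in the example below is hypothetical, chosen only to illustrate the calculation, and is not taken from the study's raw measurements:

```python
def area_accuracy(interpreted_m2: float, field_m2: float) -> float:
    """Eq. (1): accuracy (%) where delta is the absolute difference
    between the field-measured area and the interpreted area
    (covering both omission and commission cases)."""
    delta = abs(field_m2 - interpreted_m2)
    return (1.0 - delta / field_m2) * 100.0

# Field data: outdoor tennis court, 24 m x 10.97 m = 263.28 m2.
field_area = 24.0 * 10.97

# Hypothetical interpreted area from the orthomosaic (omission case).
print(round(area_accuracy(259.4, field_area), 2))  # -> 98.53
```

A perfect interpretation (Δ = 0) yields 100%, and accuracy falls linearly as the interpreted area deviates from the field measurement in either direction.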

Results and discussion

The research aims to map and build 3D models from drone data. The resulting map can be classified thematically as a land-cover map (of one land-cover type). LULC mapping has long been done both terrestrially and using satellite or aerial photography; applications for the same purpose using drones emerged in the 2000s along with the development of drone technology and usable sensors. Similar work has been done by Kalantar et al. [1], Iizuka et al. [3], Natesan et al. [21], and Yao et al. [5].

Regarding the LULC objects covered, Kalantar et al. [1] extracted accurate land cover from UAV images with a study area located on the Universiti Putra Malaysia (UPM) campus in the state of Selangor. The same was done in this study, but focused on a single land-cover object, mainly to measure the accuracy of images taken from the drone. Natesan et al. [21] did likewise, adding a compact spectrometer to an RGB camera for land cover classification, with both sensors capturing images of the earth's surface simultaneously. Figure 4 shows an example of images taken from DJI Pilot.

Fig. 4

Images from DJI Pilot showing the tennis court area in Bogor, Indonesia; three datasets were taken at altitudes of 40, 80, and 100 m

After the multi-altitude data were obtained, the next step was to build a mesh and 3D model using Agisoft and ArcMap 10.3. The same RGB camera and method were used by Kalantar et al. [1], including developing orthorectified mosaic images of the site. Orthophotos and digital surface models (DSM) derived from drone-based data at 0.05 and 0.1 m resolution, respectively, achieve accurate land-cover extraction [3]. In contrast, the authors use orthomosaics and a DEM, as do Natesan et al. [21] and Yao et al. [5]. Figure 5 shows the DEM data produced by the data processing in this study.

Fig. 5

Result of building DEM (Digital Elevation Model)

An orthomosaic photo and situation maps were created using ArcGIS, as shown in Fig. 6. The purpose of this process is to improve the accuracy of the area calculation. The same method was applied by Yao et al. [5], who specifically used 3D information such as height, geometric, and oblique information to improve accuracy.

Fig. 6

Orthomosaic photo and area calculation for the tennis court in Bogor (a) and the open field in Maguwo, Yogyakarta, Indonesia (b)

The next step is to compare the area calculations performed with drone data against area calculations using data from Google Earth. The first site is an outdoor tennis court 24 m long and 10.97 m wide, for a total area of 263.28 m². The second is an open field in Maguwoharjo, Yogyakarta, 147 m long (at the longest part) and 82 m wide (at the widest part), for a total area of 11,586 m². The accuracy of drone-based mapping at different altitudes, compared with Google Earth, is shown below:

Based on Table 1, mapping at low altitude (40 m) gives better accuracy than at 80 and 100 m. This is because accuracy is determined by resolution: a lower altitude produces higher-resolution data, so the results are more accurate, but coverage is more limited than at higher altitudes.

Table 1 Experimental results for drone mapping and area calculation of an outdoor tennis court and the field in Maguwo, compared with Google Earth data
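The altitude-resolution trade-off behind Table 1 follows from the ground sample distance (GSD) of the camera, which scales linearly with flying height. A minimal sketch in Python, assuming an illustrative pixel pitch of 3.3 µm and a 15 mm lens (plausible values for an MFT setup such as the Zenmuse X5S, but not specified in the paper):

```python
def ground_sample_distance(pixel_pitch_um: float, focal_mm: float,
                           altitude_m: float) -> float:
    """Ground sample distance in metres per pixel:
    GSD = pixel size * (altitude / focal length)."""
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_mm * 1e-3)

# GSD at the three flying heights used in the study.
for h in (40, 80, 100):
    gsd_cm = ground_sample_distance(3.3, 15.0, h) * 100
    print(f"{h} m -> {gsd_cm:.2f} cm/px")
```

Doubling the altitude doubles the GSD (each pixel covers twice the ground distance), which is consistent with the lower area accuracy observed at 80 and 100 m: object boundaries are delineated more coarsely as the footprint of each pixel grows.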

A significant increase in accuracy owing to the use of orthophotos and a DEM/DSM together with very high-resolution drone imagery has also been reported previously by Kalantar et al. [1], at 91.2%. Iizuka et al. [3] likewise reported high accuracy, around 92%, for a LULC map built with an orthophoto and DSM, even though the comparison sensor was different (SAR data).

First, when mapping with a drone one should know the Universal Transverse Mercator (UTM) zone for the area of interest. The UTM system is a projection defined on an ellipsoid and divided into bands of meridians 6° wide, called zones. Indonesia, which lies approximately between 90°E–144°E and 11°S–6°N, spans 9 UTM zones, namely zones 46–54. For Bogor City, Indonesia, zone 48S is used (Fig. 7). If the UTM zone is misconfigured in the software, the map cannot be created correctly.
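The 6°-wide zone numbering described above can be computed directly from longitude. A small Python sketch (the hemisphere suffix follows the informal N/S convention used in the text, not the formal UTM latitude band letters):

```python
def utm_zone(lon_deg: float, lat_deg: float) -> str:
    """Standard UTM zone number: 6-degree-wide zones counted
    eastward from 180°W, with a simple N/S hemisphere suffix."""
    zone = int((lon_deg + 180) // 6) + 1
    return f"{zone}{'S' if lat_deg < 0 else 'N'}"

print(utm_zone(106.8, -6.6))  # Bogor, Indonesia -> 48S
print(utm_zone(95.3, 5.55))   # western tip of Sumatra -> 46N
print(utm_zone(141.0, -2.5))  # eastern Papua -> 54S
```

The western and eastern extremes of Indonesia fall in zones 46 and 54 respectively, matching the nine-zone span stated above.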

Fig. 7

Universal Transverse Mercator (UTM) Zone for Indonesia section

Conclusion

This work demonstrates that drones provide promising opportunities for creating very high-resolution, highly accurate maps, especially for limited coverage areas that require time-series data. Mapping techniques with remote sensing and 3D earth modeling have achieved significant progress both in vehicles and sensors and in the techniques and software used. Mapping with a drone is very useful when the goal is a map of a limited area at good resolution. Using drone image data taken at an outdoor tennis court at the agricultural research center in Bogor regency, Indonesia, and at an open field in Maguwo, Yogyakarta, with Google Earth data as the benchmark and omission/commission equations to assess the accuracy of area measurement at three altitude variations, the evaluations at the two places showed accuracy values of 98.53, 95.2, and 94.4% for the tennis court in Bogor and 98.68, 96.1, and 94.7% for the open field in Maguwo, Yogyakarta, at flying heights of 40, 80, and 100 m, respectively. Future research will assess area results for more detailed geographic land-cover objects and for more varied polygons, so that the reliability of the method can be established in general.

Availability of data and materials

Not applicable.

Abbreviations

RGB:

Red, Green and Blue

3D Model:

Three-dimensional Model

UAV:

Unmanned Aerial Vehicles

LULC:

Land Use Land Cover

DEM:

Digital Elevation Model

GIS:

Geographical Information Systems

UTM:

Universal Transverse Mercator

References

  1. Kalantar B, Mansor SB, Sameen MI, Pradhan B, Shafri HZ. Drone-based land-cover mapping using a fuzzy unordered rule induction algorithm integrated into object-based image analysis. Int J Remote Sens. 2017;38(8–10):2535–56.

  2. Pytharouli S, Souter J, Tziavou O. Unmanned aerial vehicle (UAV) based mapping in engineering surveys: technical considerations for optimum results. In: 4th Joint International Symposium on Deformation Monitoring; 2019.

  3. Iizuka K, Itoh M, Shiodera S, Matsubara T, Dohar M, Watanabe K. Advantages of unmanned aerial vehicle (UAV) photogrammetry for landscape analysis compared with satellite data: a case study of post-mining sites in Indonesia. Cogent Geosci. 2018;4(1):1498180.

  4. Nex F, Remondino F. UAV for 3D mapping applications: a review. Appl Geomat. 2014;6(1):1–15.

  5. Yao H, Qin R, Chen X. Unmanned aerial vehicle for remote sensing applications—a review. Remote Sens. 2019;11(12):1443.

  6. Pajares G. Overview and current status of remote sensing applications based on unmanned aerial vehicles (UAVs). Photogramm Eng Remote Sens. 2015;81(4):281–330.

  7. Saadatseresht M, Hashempour AH, Hasanlou M. UAV photogrammetry: a practical solution for challenging mapping projects. Int Arch Photogramm Remote Sens Spatial Inf Sci. 2015;40(1):619.

  8. Hassler SC, Baysal-Gurel F. Unmanned aircraft system (UAS) technology and applications in agriculture. Agronomy. 2019;9(10):618.

  9. Navia J, Mondragon I, Patino D, Colorado J. Multispectral mapping in agriculture: terrain mosaic using an autonomous quadcopter UAV. In: 2016 International Conference on Unmanned Aircraft Systems (ICUAS). New York: IEEE; 2016. p. 1351–8.

  10. Puri V, Nayyar A, Raja L. Agriculture drones: a modern breakthrough in precision agriculture. J Stat Manag Syst. 2017;20(4):507–18.

  11. ICAO Circular 328 AN/190: unmanned aircraft systems. ICAO. Retrieved 3 February 2016.

  12. Niranjan S, Gupta G, Sharma N, Mangal M, Singh V. Initial efforts toward mission-specific imaging surveys from aerial exploring platforms: UAV. In: Map World Forum, Hyderabad, India; 2007.

  13. Everaerts J. The use of unmanned aerial vehicles (UAVs) for remote sensing and mapping. Int Arch Photogramm Remote Sens Spatial Inf Sci. 2008;37(B1):1187–92.

  14. Casella E, Drechsel J, Winter C, Benninghoff M, Rovere A. Accuracy of sand beach topography surveying by drones and photogrammetry. Geo-Mar Lett. 2020:1–14.

  15. Cruzan MB, Weinstein BG, Grasty MR, Kohrn BF, Hendrickson EC, Arredondo TM, Thompson PG. Small unmanned aerial vehicles (micro-UAVs, drones) in plant ecology. Appl Plant Sci. 2016;4(9):1600041.

  16. Ruwaimana M, Satyanarayana B, Otero V, Muslim AM. The advantages of using drones over space-borne imagery in the mapping of mangrove forests. PLoS ONE. 2018;13(7):e0200288.

  17. Villanueva Escobar JR, Iglesias Martínez L, Pérez Montiel JI. DEM generation from fixed-wing UAV imaging and LiDAR-derived ground control points for flood estimations. Sensors. 2019;19(14):3205.

  18. Huang H, Deng J, Lan Y, Yang A, Deng X, Zhang L. A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery. PLoS ONE. 2018;13(4):e019630.

  19. Nuijten RJ, Kooistra L, De Deyn GB. Using unmanned aerial systems (UAS) and object-based image analysis (OBIA) for measuring plant-soil feedback effects on crop productivity. Drones. 2019;3(3):54.

  20. Sola-Guirado RR, Castillo-Ruiz FJ, Jiménez-Jiménez F, Blanco-Roldan GL, Castro-Garcia S, Gil-Ribes JA. Olive actual "on year" yield forecast tool based on the tree canopy geometry using UAS imagery. Sensors. 2017;17(8):1743.

  21. Natesan S, Armenakis C, Benari G, Lee R. Use of UAV-borne spectrometer for land cover classification. Drones. 2018;2(2):16.

  22. Girolamo-Neto CD, Sanches IDA, Neves AK, Prudente VHR, Körting TS, Picoli MCA. Assessment of texture features for bermudagrass (Cynodon dactylon) detection in sugarcane plantations. Drones. 2019;3(2):36.

  23. Sandino J, Pegg G, Gonzalez F, Smith G. Aerial mapping of forests affected by pathogens using UAVs, hyperspectral sensors, and artificial intelligence. Sensors. 2018;18(4):944.

  24. Xavier TW, Souto RN, Statella T, Galbieri R, Santos ES, Suli GS, Zeilhofer P. Identification of Ramularia leaf blight cotton disease infection levels by multispectral, multiscale UAV imagery. Drones. 2019;3(2):33.

  25. Koh JC, Hayden M, Daetwyler H, Kant S. Estimation of crop plant density at early mixed growth stages using UAV imagery. Plant Methods. 2019;15(1):64.

  26. Grznárová A, Mokroš M, Surový P, Slavík M, Pondelík M, Merganič J. The crown diameter estimation from fixed wing type of UAV imagery. Int Arch Photogramm Remote Sens Spatial Inf Sci; 2019.

  27. DJI.com. Inspire 2 Beyond Imagination—Inspire 2 specs. 2020. https://www.dji.com/id/inspire-2/info. Accessed 8 May 2020.

  28. Juniati E, Harintaka. Perbandingan Ragam Input Model Ketinggian untuk Pembentukan True Orthophoto di Wilayah Urban [Comparison of elevation-model inputs for true-orthophoto generation in urban areas; in Bahasa Indonesia]. Geomatika. 2018;24(2):49–60.

  29. Ibrahim F, Suharyadi R. Teknik Klasifikasi Berbasis Objek Citra Penginderaan Jauh untuk Pemetaan Tutupan Lahan Sebagian Kecamatan Mlati Kabupaten Sleman [Object-based classification of remote sensing imagery for land-cover mapping of part of Mlati District, Sleman Regency; in Bahasa Indonesia]. Yogyakarta: Sekolah Vokasi Universitas Gadjah Mada; 2014.

  30. Greenwood F. Drone and aerial observations; 2011.


Acknowledgements

This work is supported by Bina Nusantara University.

Funding

This research was supported by a Bina Nusantara University Research Grant for the year 2019.

Author information


Contributions

Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Widodo Budiharto.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Budiharto, W., Irwansyah, E., Suroso, J.S. et al. Mapping and 3D modelling using quadrotor drone and GIS software. J Big Data 8, 48 (2021). https://doi.org/10.1186/s40537-021-00436-8


Keywords

  • Drone
  • Mapping
  • ArcMap
  • AgiSoft
  • 3D model
  • Situation map