WORK EXECUTION AND WORK RESULT
January 19, 2008 by airianto
Filed under Photogrammetry, Uncategorized
In general, the Airborne Laser Scanning (ALS) and aerial photo mapping work for the construction of the Buttu Batu 2 x 100 MW hydropower plant is divided into five stages: preparation, LiDAR and aerial photo data acquisition, LiDAR and aerial photo data processing, topographical map generation, and quality control. The installation and measurement of control points was not carried out by PT APG because it had already been completed by the PT Ajisaka team. PT APG used only a few of the control points installed in the field for the kinematic measurements during LiDAR acquisition.
This chapter explains each stage of the work in detail.
Figure 2. Workflow of Airborne Laser Scanning (ALS) and aerial photo mapping in the Enrekang area
2.1.1. Administrative Preparation
Administrative preparation involves submitting working permits to the authorized organizations to ensure the smooth execution of data acquisition.
· Security Clearance (SC) and Security Officer (SO)
The Security Clearance is issued by the Directorate General of Defense Strategy of the Ministry of Defense of the Republic of Indonesia after filling in Form A and the Activity Plan Form, attached to the security clearance cover letter, while the Security Officer permit is issued by the Indonesian Air Force (TNI AU).
The applications for the SC and SO licenses were submitted immediately after the contract signing between PT ASI Pudjiastuti Geosurvey and PT Ajisaka Destar Utama. The SC was issued on October 30, 2017 with letter number SC/59/P/X/2017/DJSTRA, valid for three months. A copy of the Security Clearance can be seen in Appendix 3.
Because aircraft and personnel were added during the acquisition, an additional license had to be requested from the Directorate General of Defense Strategy of the Ministry of Defense of the Republic of Indonesia. The License for the Addition of Vehicle and Personnel was issued on November 13, 2017 with letter number B/5704/XI/2017.
The application for the Security Clearance was made simultaneously with the application for the Security Officer (SO) to TNI-AU headquarters. An SO license is valid for two weeks; if the work takes longer than two weeks, it must be extended and the assigned personnel may change.
The SO was issued on November 2, 2017 with letter number SPRIN/2663/XI/2017, valid from November 3, 2017 until November 17, 2017. Because the data acquisition had not been completed by November 17, 2017, PT APG had to extend the SO permit. The second SO was issued on November 18, 2017 with letter number SPRIN/2791/XI/2017, valid until December 2, 2017.
Copies of the Security Officer letters can be seen in Appendix 4.
The submission process for the SC and SO is shown in the workflow below:
Figure 3. Workflow of the submission process for the Security Clearance and Security Officer
· Aviation Insurance
Aviation insurance covers the aircraft, the flight crew, and the assigned SO personnel. The insurance arrangements for the flight crew were made after the SO license had been issued, so that the names of the assigned SO personnel were known. Once the insurance was issued, data acquisition could proceed with the aircraft.
· Letter of Assignment
A letter of assignment is required to mobilize the work equipment as well as to obtain permission to enter the job site.
· Preparation of Logistics – Mobilization – Basecamp – Airport
Since the airport used as the aircraft base was a major airport where avtur was available directly from Pertamina, it was not necessary to transport fuel to the airport base. Avtur was provided by Susi Air as part of the aircraft service.
The field team was divided into two: the GPS measurement team and the data acquisition team. The GPS measurement team was on site using basecamps available in the field, while the acquisition team used the basecamp closest to the airport with sufficient electricity to download the data.
2.1.2. Technical Preparation
Technical preparation involved collecting secondary data to support the field work, such as the Rupa Bumi Indonesia map, SRTM data, and the control points that already exist at the job site, as well as designing a safe and efficient flight path.
The field survey was conducted before the acquisition team arrived at the job site. The task of the field survey team was to prepare a reference point to be used as the LiDAR and aerial photo data reference and to prepare ground check points for the job. The team from APG did not install or measure the control points; all GPS measurement work in the field was done by Ajisaka's team, including the field kinematic GPS measurements.
Five GPS ground control points (GCP) and seven ICPs were installed in the work area. For the kinematic GPS observations during the acquisition work, two GCPs were used: GCP001 and GCP002.
Table 2. List of coordinates of the GCPs and ICPs installed in the area
Several preparations had to be completed before the acquisition team left for the field, among them preparing the flight plan (flight paths) and performing the boresight calibration.
2.4.1. Flight Plan
The flight plan was made as a guide for the flight direction so that the acquisition results meet the specification (GSD, LiDAR point spacing) with an effective, efficient, and economical selection of flight paths. The larger the scale of the resulting map, the more carefully the flight plan must be prepared in order to achieve the required data accuracy. The flight plan was prepared using the Leica Mission Pro software.
Figure 4. Equipment used in LiDAR and aerial photo data acquisition
Since the LiDAR sensor and aerial camera were embedded in one mounting, a single acquisition obtains both aerial photographs and LiDAR data simultaneously, with a coverage configuration such as in Figure 5.
Figure 5. Coverage area of LiDAR and aerial photo (FOV 50° at 1000 m altitude)
Figure 6. Flight planning using the Mission Pro software
a) The ideal flight altitude for aerial mapping in tropical areas is ± 800 m to 1000 m above ground level to reduce cloud cover constraints. At this altitude, cloud cover constraints can be minimized while still providing an optimum LiDAR return. This work used a flight plan with an altitude of 1000 m above ground level.
b) Ground Sample Distance (GSD) is the distance between two consecutive pixel centers measured on the ground. The bigger the GSD value, the lower the spatial resolution of the image and the less visible the details. The GSD is related to the flight height: the higher the flight altitude, the bigger the GSD. With a CCD pixel size of 6 µm and a sensor size of 8956 x 6708 pixels, a GSD of about ±12 cm can be obtained (a worked sketch of these figures follows this list).
c) Aerial photographs were acquired using a Leica RCD30 digital camera with a 6 µm pixel size; with a GSD of 12 cm per pixel, the resulting coverage area on the ground is 1074 m x 804 m.
d) Airborne Laser Scanning (ALS) was acquired using Leica ALS70 equipment with a pulse rate capability of 500 kHz: in one second, the ALS70 sensor can emit 500,000 laser pulses toward the Earth's surface. Each return is captured by the LiDAR sensor and provides the position (XYZ) of objects above ground level, so the higher the pulse rate, the more information is obtained.
e) The flight path direction was selected for the most effective flight plan in relation to the shape of the location, i.e. flight paths that do not require a lot of manoeuvring.
f) The direction of the flight path was also related to the terrain conditions. The navigator searched for the safest flight path with respect to the hills and mountains; because of this terrain factor, the flight path sometimes had to climb gradually and go around a mountain or hill to avoid its summit. This work used only a single line (one flight path) because the work area is a corridor with a minimum coverage width of 100 m. Scanning was done lane by lane, with the pilot flying the aircraft directly on the flight plan loaded into the aircraft system.
g) Forward overlap was 60% ± 5% and sidelap was 30% ± 5%.
h) Cloud coverage must not exceed 5% of each photo, and objects covered by clouds must not be buildings or transportation features.
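As a rough cross-check of the figures in items b), c), d) and g) above, the sketch below recomputes the GSD, the frame footprint, the photo base and line spacing implied by the overlaps, and the LiDAR swath. The 50 mm lens cone and the mapping of the long sensor axis to the across-track direction are assumptions made for illustration; they are not stated in this report.

```python
import math

pixel_size_m   = 6e-6      # 6 µm CCD pixel
focal_length_m = 0.050     # assumed 50 mm lens cone (not stated in this report)
altitude_m     = 1000.0    # flight height above ground level
sensor_px      = (8956, 6708)  # assumed: long axis across-track, short axis along-track

# Ground Sample Distance: pixel size projected onto the ground
gsd_m = pixel_size_m * altitude_m / focal_length_m            # ~0.12 m

# Footprint of one frame on the ground
footprint_across_m = sensor_px[0] * gsd_m                     # ~1074 m
footprint_along_m  = sensor_px[1] * gsd_m                     # ~805 m

# Photo base and flight-line spacing from the overlap/sidelap requirements
forward_overlap, sidelap = 0.60, 0.30
photo_base_m   = footprint_along_m  * (1 - forward_overlap)   # spacing between exposures
line_spacing_m = footprint_across_m * (1 - sidelap)           # spacing between flight lines

# LiDAR swath width from the 50 degree field of view
fov_deg = 50.0
swath_m = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))

print(f"GSD ~ {gsd_m * 100:.0f} cm, frame ~ {footprint_across_m:.0f} m x {footprint_along_m:.0f} m")
print(f"photo base ~ {photo_base_m:.0f} m, line spacing ~ {line_spacing_m:.0f} m")
print(f"LiDAR swath ~ {swath_m:.0f} m at {altitude_m:.0f} m AGL")
```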
Figure 7. Flight plan for the LiDAR and aerial photo data acquisition in the Enrekang area
2.4.2. Air Base for Acquisition
The airport was selected considering proximity to the location and fuel availability for the aircraft. The aircraft used avtur as aviation fuel. The closest airport to the survey area was Pongtiku Airport, Tana Toraja; the distance from the work location to Pongtiku Airport was between 27 km and 50 km.
At Pongtiku Airport there was no aviation fuel available for the aircraft, so avtur was obtained from Sultan Hasanuddin Airport, Makassar, and sent by road to Pongtiku Airport.
Figure 8. Survey location and the nearest airport
2.4.3. Boresight Calibration
The calibration procedure was as follows. Calibration is performed by flying the aircraft above the airport along four different flight lanes. Two flight lanes are flown in the same direction as the runway, at exactly the same elevation as the data acquisition flights. The two other flight lanes are flown across the runway direction at half the data acquisition flight elevation. The calibration result is used for misalignment correction; misalignment occurs because of differences in the mounting of the LiDAR equipment, GPS, and IMU. The misalignment correction values are used to correct all LiDAR data acquisition results.
If systematic errors occur, the scan results of an object will not coincide between scan paths. The calibration result is applied as a correction to the LiDAR data until the object from each scan path coincides. The calibration is repeated whenever the system configuration changes, for example after take-off and landing. The calibration adjustment is processed using Leica software.
2.4.4. GPS Differential Kinematic
From the BM references that had been established, two BMs were selected as differential kinematic GPS base stations: GCP001 and GCP002. Differential kinematic GPS observations were performed during the data acquisition.
The kinematic differential observation was done to tie the acquisition coordinates to the ground coordinates and to correct the GPS results. The GPS observation update rate was 1 second, while the IMU update rate was 1/200 second. The differential kinematic GPS must be running from half an hour before the acquisition begins, during the data acquisition, and until half an hour after the data acquisition is completed, in order to resolve the ambiguities.
2.4.5. LiDAR and Aerial Photo Data Acquisition
The tools used for LiDAR data acquisition and aerial photography for the Enrekang project were a Leica ALS70 LiDAR system and an RCD30 aerial camera. The installation of the equipment was conducted in Pontianak on November 10, 2017.
Figure 9. Documentation of equipment installation into the Pilatus Porter PC-6 aircraft
The developed flight plan was loaded into the pilot's navigation display and the navigator's systems. The navigator controlled the flight path, taking into account the cloud cover, wind speed, and terrain conditions. During the survey, the navigator ensured that the LiDAR system, cameras, recorders, and computers were running well. Both navigator and pilot maintained the flight path correctly and kept the aircraft steady.
Figure 10. LiDAR and aerial camera system installed in the Pilatus Porter PC-6 aircraft
Figure 11. Documentation during LiDAR and aerial photo data acquisition
If the aircraft exceeds the tolerance limits specified in the flight plan, the system does not record the data. During data acquisition, the navigator and pilot monitored the flight system using Leica's Flight and Sensor Control Management System (FCMS).
LiDAR data acquisition and aerial photography took place from November 16, 2017 to November 19, 2017. The details of the LiDAR and photo acquisition schedule can be seen in Table 3.
Table 3. Flight log of Airborne Laser Scanning and aerial photography in the Enrekang area
2.4.6. Data QC
LiDAR data and digital aerial photos from the acquisition were downloaded and extracted every day for quality checking as required by the quality control (QC) procedures. The Consultant's Quality Control Team stood by to receive raw data from the acquisition team. Three persons handled quality control of the acquisition results: one person handled the LiDAR data, one person controlled the raw aerial photos, and one person was dedicated to monitoring and backing up the raw LiDAR data and raw aerial photos.
The checks on the acquisition results included:
· LiDAR void/gap check
There must be no LiDAR gaps between the flight lines, for example where thick clouds prevent the laser from penetrating. There should be no voids (empty data) in the acquisition results; if there are, a reflight must be done.
· Photo gap check
There must be no photo gaps between the flight lines (a small spacing-based check is sketched after these items).
· Cloud check
Each frame of the acquired aerial photo data is checked for clouds. If there is cloud, it is checked whether it is still within the tolerance given by the TOR; if it exceeds the given tolerance, the frame must be reflown. The tolerance limit is not more than 10% cloud in a single photo frame, and cloud-covered objects must not be buildings or transportation features.
Data that did not yet fulfill the quality control standard had to be reflown on the next day.
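A minimal sketch of the kind of daily gap check described above: it compares the spacing between consecutive photo centres along one flight line with the expected photo base and flags spans that suggest dropped or cloud-rejected frames. The positions, expected base, and tolerance below are placeholders, not project values.

```python
import numpy as np

centres_m = np.array([0, 322, 645, 966, 1612, 1934])  # along-track frame positions (placeholders)
expected_base_m = 322.0                                # from 60% forward overlap of an ~805 m frame
tolerance = 1.5                                        # allow 50% slack before calling it a gap

spacing = np.diff(centres_m)
for i in np.where(spacing > tolerance * expected_base_m)[0]:
    print(f"possible gap between frame {i} and {i + 1}: {spacing[i]:.0f} m spacing")
```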
2.5. LiDAR and Aerial Photo Data Processing
After the data acquisition was finished, all the data were checked and brought back to the Jakarta office for data processing. The data brought back included:
1. Raw RGB digital aerial photographs with a GSD of ± 12 cm
2. Raw laser data and the extracted laser data
3. GNSS differential kinematic rover data with a 1-second update rate
4. GNSS differential kinematic ground station data with a 1-second update rate
5. Inertial (IMU) data with an update rate of 200 records per second
6. LiDAR trajectory data and aerial photograph trajectory data
2.5.1. Pre-Processing of LiDAR and Aerial Photo Data
The raw aerial photo data were extracted using the Leica FramePro software. The downloaded results were: the event/exposure time of each frame (*.evt), camera files (*.cam), bundle adjustment camera parameters, the installation settings, and the photo IDs. These files are used in the orthophoto processing. The quality of the downloaded data was controlled by the Data Processing Team to ensure no errors remained.
The extracted LiDAR data were QCed again by the data processing team to ensure no errors remained. The misalignment error was re-checked and corrected using the TerraMatch software.
a) Trajectory. The trajectory is calculated from the kinematic GNSS data and the inertial data: the kinematic GNSS provides the position information (XYZ) and the inertial unit provides the orientation information (ω, φ, κ). With this trajectory information, both the position and the orientation of the sensors in the aircraft are known. The combination of the trajectory data and the mid-pulse exposure times provides the position and orientation of each image frame; these data are used as the Exterior Orientation (EO) in the orthophoto processing (a small interpolation sketch is given below). The GNSS position is recorded every 1 second and the IMU orientation every 1/200 second.
The Leica IPAS TC software was used for tightly coupled GNSS-IMU processing, data blending, and trajectory processing. Bundle adjustment was done for every line. In the trajectory processing, the data must be free from ambiguity (ambiguity resolved) for both forward and backward directions. Each flight line was processed in strip adjustment and block adjustment.
The trajectory precision tolerance is less than 2 cm, and the trajectory should be free from ambiguity.
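A minimal sketch of the interpolation step described in item a): per-frame Exterior Orientation is taken from the blended GNSS/IMU trajectory at each mid-pulse exposure time. The file names and column layout are illustrative and do not reflect the actual IPAS TC or FramePro output formats.

```python
import numpy as np

# trajectory columns: t, X, Y, Z, omega, phi, kappa (after GNSS/IMU blending)
traj = np.loadtxt("trajectory.txt")      # illustrative file name
# event columns: photo_id, t_exposure (one row per frame, from the *.evt export)
events = np.loadtxt("events.txt")        # illustrative file name

t_traj = traj[:, 0]
eo = {}
for photo_id, t_exp in events:
    # linear interpolation of position and attitude at the exposure epoch;
    # assumes the attitude angles do not wrap across +/-180 deg between samples
    eo[int(photo_id)] = [np.interp(t_exp, t_traj, traj[:, k]) for k in range(1, 7)]

# eo now holds X, Y, Z, omega, phi, kappa per frame: the "original EO" used for
# direct georeferencing before aerial triangulation refines it
```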
Figure 12. Trajectory acquisition on November 16, 2017 (session I)
Figure 13. Trajectory acquisition on November 16, 2017 (session II)
Figure 14. Trajectory acquisition on November 16, 2017 (session III)
Figure 15. Trajectory acquisition on November 17, 2017
Figure 16. Trajectory acquisition on November 19, 2017
The trajectory processing with the IPAS TC software yields the horizontal and vertical accuracy of each trajectory per day. The trajectory accuracy graphs per day of acquisition can be seen in the figures below.
Figure 17. Plot of trajectory position accuracy on November 16, 2017 (session I)
Figure 18. Plot of trajectory position accuracy on November 16, 2017 (session II)
Figure 19. Plot of trajectory position accuracy on November 16, 2017 (session III)
Figure 20. Plot of trajectory position accuracy on November 17, 2017
Figure 21. Plot of trajectory position accuracy on November 19, 2017
The standard deviations of the horizontal position (latitude and longitude) and the vertical position (height) are summarized in the trajectory accuracy table below.
Table 4. Accuracy of trajectory
b) LiDAR signal processing. LiDAR uses near-infrared light with a wavelength of 1064 nm fired at the Earth's surface. With a flight altitude of ± 1000 m above ground level and a beam divergence of 0.1 mrad, the footprint of the laser beam on the Earth's surface is about 0.1 m. Laser pulses hitting objects above the ground level are reflected and re-recorded by the recorder on the aircraft. The echo waves are recorded in Full Waveform (FWF) format; this echo format is then converted through signal processing into a binary system, and from the binary system into 3D points called the point cloud. At the signal processing stage the point cloud coordinate system is still the laser coordinate system (a back-of-envelope check follows Figure 22).
Figure 22. LiDAR signal processing converting the light wave echo signal into a binary system
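As a back-of-envelope check of the numbers quoted above, the sketch below recomputes the laser footprint from the 0.1 mrad divergence and a rough point spacing for the ALS70 settings. The scan rate and ground speed are assumptions made for illustration; they are not stated in this report.

```python
import math

altitude_m = 1000.0
divergence_rad = 0.1e-3                                   # 0.1 mrad beam divergence
footprint_m = altitude_m * divergence_rad                 # ~0.1 m spot on the ground

prf_hz = 500_000                                          # ALS70 pulse repetition frequency
fov_deg = 50.0
swath_m = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))

scan_rate_hz = 100.0                                      # assumed scan lines per second
ground_speed_ms = 70.0                                    # assumed aircraft ground speed
across_spacing_m = swath_m / (prf_hz / scan_rate_hz)      # spacing along one scan line
along_spacing_m = ground_speed_ms / scan_rate_hz          # spacing between scan lines

print(f"footprint ~ {footprint_m:.2f} m, swath ~ {swath_m:.0f} m")
print(f"point spacing ~ {across_spacing_m:.2f} m across x {along_spacing_m:.2f} m along track")
```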
The LiDAR point cloud is originally still in laser coordinates; georeferencing was done using the trajectory. As a result, every LiDAR point has XYZ ground coordinates, where Z is still an ellipsoidal height. The Z coordinates are subsequently referenced to the coordinate system required by PT Ajisaka Destar Utama.
Figure 23. Example of cross-section profiling of LiDAR point clouds over vegetation and houses
c) Georeferencing of the LiDAR point cloud. The horizontal reference uses the WGS84 datum with the Universal Transverse Mercator (UTM) projection system, while the vertical reference uses EGM2008.
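A minimal sketch of the vertical reduction implied by this choice of references: the orthometric (EGM2008) height H is the WGS84 ellipsoidal height h minus the geoid undulation N at that location. In practice N is interpolated from the published EGM2008 grid; the sample numbers below are placeholders.

```python
def ellipsoidal_to_orthometric(h_ellipsoidal_m: float, geoid_undulation_n_m: float) -> float:
    """Reduce a WGS84 ellipsoidal height to an EGM2008 orthometric height (H = h - N)."""
    return h_ellipsoidal_m - geoid_undulation_n_m

# placeholder values: a point at h = 812.43 m with N = 18.70 m gives H ~ 793.73 m
print(ellipsoidal_to_orthometric(812.43, 18.70))
```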
d) LiDAR data matching between scan lines. Because of misalignment errors, the results of two or more scan lines flown in different directions will show discrepancies. This error must be calibrated and corrected; the values resulting from the calibration are applied at this stage. LiDAR data matching was processed using the Leica ALSPP software and TerraMatch from the TerraSolid suite.
Iterative adjustment was conducted to obtain the corrected values for dx, dy, dz, dω, dφ, dκ, and the mirror scale. The corrections were applied to the entire LiDAR point cloud (the algebra is sketched after Figure 24).
Using the TerraScan software, the LiDAR point cloud can be displayed per flight path/flight line to check for position differences between flight paths. The point cloud of each flight line needed a strip adjustment. The image below shows the LiDAR point cloud colored by flight path; in the cross section it can be seen that point clouds from different flight paths coincide and there are no discrepancies.
Figure 24. Example of misalignment correction in the LiDAR data matching stage over a building area. The point clouds in the overlap area coincide.
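A minimal sketch of how boresight-type corrections (dω, dφ, dκ plus a shift) can be applied to a point array. In production the correction is applied in the sensor frame during re-georeferencing in ALSPP/TerraMatch, so this only illustrates the algebra; the correction values and point coordinates are placeholders.

```python
import numpy as np

def rotation_opk(omega, phi, kappa):
    """Rotation matrix built from omega/phi/kappa angles in radians (Rz @ Ry @ Rx)."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return rz @ ry @ rx

def apply_correction(points_xyz, d_angles_rad, d_shift_m):
    """Rotate an (N, 3) point array by the correction angles, then shift it."""
    r = rotation_opk(*d_angles_rad)
    return points_xyz @ r.T + np.asarray(d_shift_m)

# placeholder corrections: a few thousandths of a degree and centimetre-level shifts,
# applied to a point expressed in the sensor frame (metres)
pts = np.array([[12.5, -3.2, 998.7]])
corrected = apply_correction(pts, np.radians([0.005, -0.003, 0.002]), [0.02, -0.01, 0.03])
```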
e) LiDAR data classification. The raw LiDAR data from the digital signal processing stage are in unclassified point cloud format. The point cloud needs to be classified into ground, non-ground, vegetation, or building classes, and the error points must be identified. Error points such as low points, air points, and cloud points are detected first and removed from the point cloud.
The LiDAR point cloud classification follows the American Society for Photogrammetry and Remote Sensing (ASPRS) standard, using the TerraScan and Microstation software, and is done by the LiDAR Data Supervisor. ASPRS provides an internationally applied standard for the LiDAR classification classes.
Table 5. LiDAR point cloud classes based on the ASPRS standard
Automatic classification was done using the TerraScan software from TerraSolid. The LiDAR classification uses distance and angle as parameters. Every point cloud class has specific distance and angle parameters, and different terrain types need different parameters as well: a flat area needs different parameters than a hilly or mountainous area. The parameters are saved in a macro, and the macro is run iteratively until the best classification is obtained (a small class-inspection sketch follows Figure 25).
The main classes were ground data (ground level) and non-ground data (objects above ground level). Prior to the ground and non-ground classification, the low points and error points had to be classified first.
A low point is a LiDAR point whose position is below the mean elevation of the surrounding points. Such points normally occur in watery areas: in water, the laser beam is not reflected back properly, so the elevation indicates a wrong value. Low points would produce wrong terrain conditions, so they need to be classified into the low point class so they do not interfere. After the low points were classified, the other classes were processed.
Point cloud classification into ground-level object classes was done using the TerraScan software with macro algorithms based on distance and angle. Each class has a certain distance and angle calculation, where the values also depend on the general terrain conditions.
Figure 25. Example of point cloud classification. Orange is the ground class, green is the vegetation class, and red is the roofs of buildings.
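A minimal sketch, assuming the laspy library (2.x style API) is available, of how a delivered LAS tile could be inspected against the ASPRS class codes of Table 5; class 2 is ground in the ASPRS scheme. The file name is illustrative.

```python
import numpy as np
import laspy  # assumed available; laspy 2.x style API

las = laspy.read("tile_example.las")               # illustrative file name
cls = np.asarray(las.classification)
for code, count in zip(*np.unique(cls, return_counts=True)):
    print(f"ASPRS class {code}: {count} points")

ground = cls == 2                                  # class 2 = ground in the ASPRS scheme
ground_xyz = np.column_stack((np.asarray(las.x)[ground],
                              np.asarray(las.y)[ground],
                              np.asarray(las.z)[ground]))
print("ground points:", len(ground_xyz))
```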
f) Model Key Points (MKP) were used as input data for the relief displacement correction in the orthophoto processing. MKP is the ground point cloud class thinned to every 10th point (a minimal sketch follows).
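A minimal sketch of the "one point in ten" idea behind the MKP class; real MKP generation in TerraScan uses a tolerance-based selection, so plain decimation is shown only to illustrate the ratio.

```python
import numpy as np

def model_key_points(ground_xyz: np.ndarray, keep_every: int = 10) -> np.ndarray:
    """Return every keep_every-th point of an (N, 3) ground point array."""
    return ground_xyz[::keep_every]

# e.g. 1.2 million ground points thin down to ~120,000 model key points
```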
2.5.2. LiDAR Data Processing
The point cloud data that had been pre-processed and checked for matching between lines were then manually edited by an operator. The results of the automatic classification were manually classified and QCed by the operator and the LiDAR data processing supervisor, who have experience in editing LiDAR point clouds.
Figure 26. Workflow of LiDAR data processing
a) Digital Surface Model (DSM). The Digital Surface Model is a model of the full height of the classified LiDAR point cloud. The point cloud must be ensured free from any error points. The DSM gives an overview of the height of all objects above the ground, both natural and man-made, such as buildings, bridges, electric poles, etc.
Figure 27. The differences between the Digital Surface Model (DSM) and the Digital Terrain Model (DTM)
The LiDAR data are saved in LAS format, the international standard from ASPRS. The DSM data are not stored in a seamless format because of the large data size, which would make them heavy to open on a computer. The DSM was processed and produced using the TerraScan software from TerraSolid (a minimal gridding sketch follows Figure 29).
The LiDAR and aerial photograph Data Processing Coordinator is responsible for the DSM production.
Figure 28. Example of a 3D point cloud
Figure 29. Example of Digital Surface Model (DSM)
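A minimal gridding sketch of the DSM idea described above: keep the highest return in each cell so that buildings and canopy stay on top of the surface. The cell size is illustrative; the production DSM in this work was built in TerraScan.

```python
import numpy as np

def grid_dsm(points_xyz: np.ndarray, cell: float = 1.0) -> np.ndarray:
    """Return a 2D array of per-cell maximum heights (NaN where no point falls)."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    cols = np.floor((x - x.min()) / cell).astype(int)
    rows = np.floor((y - y.min()) / cell).astype(int)
    dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, h in zip(rows, cols, z):
        if np.isnan(dsm[r, c]) or h > dsm[r, c]:
            dsm[r, c] = h     # keep the highest return per cell
    return dsm
```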
b) LiDAR intensity imagery. LiDAR intensity is a value that describes the strength of the laser reflection from an object on the Earth's surface back to the recorder. LiDAR intensity has a range of values, or Digital Number (DN), between 0 and 255: a value of 0 (zero) describes a weak laser reflection, and a value of 255 describes a strong laser reflection from the surface object. Values from 0 to 255 are represented from black to white, so a whiter LiDAR intensity means a stronger laser reflection.
Commonly, low intensity values come from water or from objects with high humidity, since infrared light is absorbed by water. Objects with high intensity values are usually open dry land. The intensity values can be formed into an image in GeoTIFF or ECW format. The LiDAR intensity is taken from the point cloud that has already been classified and corrected and contains no error points. The intensity imagery was produced using TerraScan from TerraSolid.
Figure 30. Example of LiDAR intensity imagery
c) Digital Elevation Model (DEM). The Digital Elevation Model is a representation of the terrain model. The DEM is generated from the ground class of the point cloud, which was classified according to the ASPRS standard. The ground point cloud collected from the automatic classification is QCed and edited by an operator: the automatic classification can classify only about 70% of the point cloud correctly, and some points still have incorrect classes, for example points that should be ground are classified as vegetation and vice versa. The operator's editing re-classifies the point cloud using the TerraScan and TerraModel software. The DEM is saved in LAS format, BIL format, and ASCII XYZ format.
To facilitate the editing of the point cloud data, the elevation points were displayed as 3D surface models with shading (shaded terrain relief). The color degrades according to the height; surface models with points that are too high or too low are displayed with sharp color degradation, making it easier for the operators to find and select locations to edit.
Figure 31. Example of a Digital Elevation Model (DEM) in a hilly area
Figure 32. The difference between the DSM and DEM at the same location
Aerial photos and the intensity imagery were used as a guide for the operator: aerial photographs were used to determine the type of land use, whereas the intensity was used to identify watery locations. Editing was also done by looking at point cloud cross sections, where the point cloud is given different colors for each class so it is easier to identify which points should be re-classified.
Figure 33. LiDAR DEM editing using TerraScan and Microstation
The point cloud of each tile was saved in LAS 1.2 format and DEM format. All tiles in each block were unified in a seamless BIL format. The point cloud editing used the TerraScan, TerraModel, and Microstation software, and Global Mapper was used to check the resulting data.
Figure 34. Digital Elevation Model (DEM) of the hydropower plant area in Enrekang
2.5.3. Aerial Photo Data Processing
The aerial photo data were processed using the INPHO software. The aerial photo data processing was carried out by means of an aerial triangulation process to obtain corrected EO results. The data and equipment to be prepared included:
a. Aerial photographs from the extraction, in 8-bit TIF format
b. Map of flight line realization and re-flight information
c. Camera calibration data
d. Coordinate list of the main control points and check points, with descriptions and location sketches
e. Time image/event data
f. Data acquisition trajectory
g. Aerial triangulation software
a) Exterior Orientation parameters. The initial basis for processing the aerial photographs was the creation of the time image, or event list, showing the exposure time of each photo. The time image is a list of the digital photo files completed with time stamps. The time stamps, together with the trajectory and photo IDs, were then processed to obtain the original EO file.
The original EO data were used for georeferencing all the aerial photo data; this is called direct georeferencing or sky control. Each photo frame center has coordinates and a flight elevation, plus the rotation angle about the X axis, called pitch, about the Y axis, called roll, and about the Z axis, called yaw. From the georeferencing, images were obtained with coordinates consistent with the ground coordinates from the GNSS reference measurements. Each aerial photo frame is automatically arranged according to the order of acquisition.
The arranged photo frames were then selected by the operator so that only good-quality photographs were processed. If a photo frame was cloudy or darkened due to cloud shadows or blur, the operator chose another photo overlapping it or a re-flight photo. Photos with poor quality were sorted out, and the reflight photos were added to the set used in processing.
b) Aerial Triangulation. The EO parameters resulting from the GNSS/INS trajectory are initial, or original, data that need to be corrected. The original EO data were used as input for the aerial triangulation processing; the final result of the aerial triangulation is the corrected EO parameters.
The original EO still contains systematic errors that are fairly uniform, so its quality needs to be controlled and improved. This was done by measuring common points, or tie points, on each photo: the same detail object in the overlapping images. The standard was 9 tie points per overlap area. Tie point measurement was done automatically in the Orima software using the Automatic Point Matching model. The tie point results were then checked to determine the standard deviation between measurements. The operator was in charge of the tie point QC and processing. The differences in position and orientation from the tie points were applied as corrections to the photos.
Figure 35. Example of automatic tie point measurement using the Inpho software
The pre-marked control points were used as processing control points to bring the photo coordinates to the ground coordinates and as inputs in the bundle adjustment process. The end result of the bundle adjustment is the post-AT Exterior Orientation value, or corrected EO.
Figure 36. Comparison between the original EO and the post-AT EO
c) Correction of Relief Displacement. The central projection of the aerial photographs needs to be converted into an orthogonal projection by performing relief displacement correction. The relief displacement was corrected using the Model Key Points (MKP) from the LiDAR results processed in the pre-processing stage. MKP represent the ground point cloud, where every 10 ground points are represented by 1 point.
d) Orthophoto Mosaic. The orthophoto processing was done as a block mosaic of the whole area. The aerial photo processing operator performed the mosaic cutting (the seamlines between photos) semi-automatically: first automatically using the software, and then re-corrected by the operator. This allows the operator to choose the best seamline between photos with the best tone contrast.
e) Color Balancing. The mosaicked aerial photographs were further processed with color balancing and feathering so that the mosaic has a uniform, blended color. During data acquisition, the main problem for the aerial photographs was thick cloud: because the acquisition was done under the clouds, cloud shadows made some photos darker. However, the darker tone does not obscure the objects beneath it.
The orthophoto, with a Ground Sample Distance (GSD) of 15 cm, was saved in GeoTIFF and ECW format.
Figure 37. Example of an orthophoto mosaic
Figure 38. Orthophoto mosaic of the hydropower plant area in Enrekang
2.5.4. Topographical Map Digitization
The contents of the topographic maps are elevation elements and planimetric elements. The elevation elements, contours and spot heights, were obtained from the DEM, with contours generated at a 1 meter interval. The planimetric elements were obtained by on-screen digitization of the orthophoto in CAD format. The topographic data are stored in CAD formats that have been topologised.
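A minimal sketch of deriving 1 m interval contours from a gridded DEM, using matplotlib's contouring as a stand-in for the production workflow; the grid file name is illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

dem = np.load("dem_grid.npy")                      # illustrative 2D grid of ground heights (metres)
levels = np.arange(np.floor(np.nanmin(dem)), np.ceil(np.nanmax(dem)) + 1, 1.0)  # 1 m interval

fig, ax = plt.subplots()
contours = ax.contour(dem, levels=levels, linewidths=0.3)
ax.clabel(contours, levels[::10], fmt="%.0f m")    # label every tenth contour (every 10 m)
fig.savefig("contours_1m.png", dpi=300)
```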
The hydrographic layer in the topographical map has 3-dimensional XYZ information at every vertex; with this information, the flow direction of the rivers is known. The irrigation channels were digitized down to the water channels entering the farmers' fields, not only the primary and secondary irrigation channels.
The vegetation or land use classes were digitized according to the land cover elements present at the work site. Some examples of land use layers include settlements, fields, oil palm plantations, rubber gardens, vacant lots, etc.
The administrative boundaries used the administrative boundary map of the Central Bureau of Statistics (BPS), 2013 update, down to the village level.
Figure 39. Example of a topographical map in CAD DWG format
2.6. LiDAR Data Ground Check
Several ground check measurements were made to determine the accuracy of the LiDAR data and aerial photography. Prior to the acquisition, a LiDAR system calibration was done at the airport; along with the calibration, GPS measurements were also made at the airport. The GPS measurement results at the airport were then compared with the LiDAR acquisition results, and the comparison was applied to the LiDAR data processing as a systematic error correction due to misalignment.
The vertical accuracy of the Airborne LiDAR work was computed against the GPS BM data that had been installed and measured in the field.
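A minimal sketch of this vertical accuracy computation: difference the LiDAR-derived heights against the GPS BM/ground-check heights and report the RMSE and a 95% figure (1.96 x RMSE). The sample values are placeholders, not the project results.

```python
import numpy as np

# placeholder heights, not the project results
lidar_z = np.array([793.72, 801.15, 812.40, 778.05])  # heights sampled from the LiDAR DEM at the BMs
gps_z   = np.array([793.65, 801.22, 812.33, 778.18])  # GPS BM / ground-check heights

dz = lidar_z - gps_z
rmse_z = np.sqrt(np.mean(dz ** 2))
print(f"mean dz = {dz.mean():+.3f} m, RMSE(z) = {rmse_z:.3f} m, accuracy(95%) ~ {1.96 * rmse_z:.3f} m")
```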
Figure 40. The vertical accuracy between the LiDAR data and the GPS ground check
Figure 41. The horizontal accuracy between the aerial photo and the GPS ground check