
Open and free education in geomatics, land surveying, and geodesy

Become a better expert every day. 0% fluff. 100% content.

Subscribe and receive updates, lessons, courses and more. No spam!


Application for academy to industry lectures and presentations

We want to bridge the gap between research, development and industry applications.

Are you a professor, scientist, researcher, or R&D engineer doing exciting research?

Have you just published a paper or given a presentation at a scientific conference?

Let us know!

We want to offer you an opportunity to disseminate your research, get publicity and teach people what you do!

Why present and publish on the Geospatial Engineering Academy platform?

  • Free visibility on a geospatial education platform
  • Global audience from industry & applied engineering
  • Opportunity to connect with industry and companies

Presenters

  • University professors
  • Postdoctoral assistants
  • PhD students
  • Independent researchers
  • MSc students with a high-quality thesis (e.g. peer-reviewed paper, industrial partner, award)

Domains

  • Land surveying
  • Geomatics
  • Geodesy
  • Photogrammetry
  • Remote sensing

Formats

  • Elevator pitch presentation (1 minute, pre-recorded)
  • Pitch presentation (5 minutes, pre-recorded)
  • Lecture (25 minutes, live or pre-recorded)
  • Live lecture + Q&A from audience

Preparation of the presentation

  • Language: English (or specify alternatives if allowed)
  • Expected audience level
    • Experienced engineers
    • Industry professionals
  • Focus
    • Practical relevance
    • Research insights

Technical delivery

  • Formats:
    • Live online recording
    • Pre-recorded
  • Platform: Google Meet
  • Screen sharing: Slides
  • Recording: Session will be recorded and published on GEA

Slide & Media Requirements

  • Slides format: PPTX and PDF
  • Aspect ratio: 16:9 (Full HD)
  • Fonts: Embedded or standard fonts only
  • Figures & videos: Clearly readable at small screen sizes

Audio & video quality

  • Microphone: High-quality headset
  • Camera: 720p or higher
  • Environment: Quiet room, no echo
  • Internet connection: stable
  • Landscape orientation

Licensing and permissions

  • Presenter confirms:

List of LLM benchmark tests


LLM testing websites

Alignment & Truthfulness

  • MASK — Measures how frequently an LLM lies when incentivized to do so. Responses are categorized as True, Evasive, or Lie, and models are ranked by 1 − p(Lie).

Knowledge & Reasoning

10 lessons for better UAV mapping using GPS/GNSS techniques

Direct image georeferencing with GNSS reduces the importance of, and dependence on, GCPs

Integrating RTK and PPK GNSS receivers into UAV platforms can reduce the number of required ground control points (GCPs). At the same time, it makes surveying in hard-to-access or inaccessible locations possible.

But: if you are a surveyor, you will almost always want to have at least 4 GCPs.

Guidelines for mobile mapping system

GNSS signal considerations

  • A sufficient number of satellites must remain visible throughout the survey to continuously support precise localization of the vehicle. The conditions that benefit a GNSS survey therefore also benefit a survey with an MMS instrument: in particular, the time of day must be chosen carefully, and it must be considered that urban canyons limit signal acquisition.

Survey route planning

  • The survey route should be planned to minimize abrupt maneuvers and to avoid driving the same stretch in both forward and reverse directions within a single mission.

Comparison of image formats for photogrammetry- part 1

This post is intended for practitioners who know the basics of photogrammetry and are now focused on optimizing image quality and processing speed.

The foundation of photogrammetry is finding common points in multiple images. Every error that occurs with images—including the choice of file format—propagates through the entire reconstruction pipeline, affecting the accuracy of the point cloud. To minimize image errors and noise, it is important to understand the trade-offs between the three primary image formats used in photogrammetry: JPEG, DNG, and RAW. As a best practice, you should generally choose the RAW format. However, this only makes sense if you plan to edit the photos afterwards.

RAW files are the ‘digital negatives.’ This means you cannot use them directly as images for processing, but they hold all the data needed to develop an image. These files also do not degrade over time, as they are the original data for an image.
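The bit-depth difference behind this trade-off is simple arithmetic; a minimal sketch (the numbers per format are typical values, not properties of any specific camera):

```python
# Tonal levels per channel by bit depth: JPEG stores 8 bits per channel,
# while typical RAW files store 12 or 14 bits.
levels = {bits: 2 ** bits for bits in (8, 12, 14)}
print(levels)  # {8: 256, 12: 4096, 14: 16384}

# A 14-bit RAW distinguishes 16384 / 256 = 64 times more tonal steps per
# channel than an 8-bit JPEG, which is what makes shadow and highlight
# recovery possible when editing afterwards.
print(levels[14] // levels[8])  # 64
```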

A comparison of SLAM methods

Method Year Loosely coupled Tightly coupled filter IMU ICP NDT Features Loop closure (pose-graph optimization) Single session Multi-session Georeferencing Open source Open source LiDAR interface Real-time
Zhang et al. 2014 Yes Yes Yes Yes Yes Yes Yes
LOAM 2014 Yes Yes planar, edge Yes Yes Yes Yes Yes
LeGO-LOAM 2018 Yes Yes planar, edge Yes Yes Yes Yes Yes
LIO-SAM 2020 Yes EKF Yes planar, edge Yes Yes Yes Yes Yes
FAST-LIO 2020 ESKF Yes Yes Yes Yes Yes Yes
FAST-LIO2 2021 ESKF Yes planar, edge Yes Yes Yes Yes Yes
FAST-LIO3 2022 ESKF Yes planar, edge, planes Yes Yes Yes Yes Yes Yes Yes
LINS 2021 Midpoint Yes Yes Yes Yes Yes
R-LIO 2022 ESKF Yes Yes Yes Yes Yes Yes Yes

Notes

  • Loosely coupled vs. tightly coupled filter:
    • The distinction between loosely and tightly coupled systems is important. Loosely coupled systems process LiDAR and IMU data separately, while tightly coupled systems fuse them in a single estimation framework (e.g., EKF, ESKF, Gauss-Newton).
  • ICP (Iterative Closest Point):
    • ICP is rarely used in modern real-time LiDAR SLAM due to its computational cost. Most systems rely on feature-based or direct methods.
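For intuition about what the ICP column refers to, a single point-to-point ICP iteration can be sketched in a few lines of numpy (nearest-neighbor matching plus an SVD-based rigid alignment); this is an illustrative toy, not the implementation of any system in the table:

```python
import numpy as np

def icp_iteration(src, dst):
    """One point-to-point ICP step: match every source point to its
    nearest destination point, then solve the best-fit rotation R and
    translation t with the SVD-based (Kabsch) method."""
    # 1. nearest-neighbor correspondences (brute force)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # 2. centroids and centered coordinates
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    S, D = src - mu_s, matched - mu_d
    # 3. rigid alignment from the cross-covariance SVD
    U, _, Vt = np.linalg.svd(S.T @ D)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return (R @ src.T).T + t, R, t

# toy example: a cloud shifted by 0.1 m along x is recovered in one step
dst = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                [0, 0, 1], [1, 1, 0], [1, 0, 1]], dtype=float)
src = dst - np.array([0.1, 0.0, 0.0])
aligned, R, t = icp_iteration(src, dst)
print(np.allclose(aligned, dst))  # True
```

The brute-force nearest-neighbor search is the cost the note above refers to: it is quadratic in the number of points, which is why real-time systems prefer feature-based or direct residuals.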

References

  • Modified table from Redovniković, L., Jakopec, A., Będkowski J., Jagetić, J., 2024. The affordable DIY Mandeye LiDAR system for surveying caves, and how to convert 3D clouds into traditional cave ground plans and extended profiles. International Journal of Speleology, 53(3), ijs2535. https://doi.org/10.5038/1827-806X.53.3.2535

First-help: photography simulators and exposure triangle

Camera simulator

Exposure triangle

Photo pills calculator

  • Timelapse Calculator
  • 500 Rule Photography Calculator for Milky Way Exposure
  • Depth of Field (DoF) Calculator
  • Advanced Depth of Field (DoF) Calculator
  • Macro Depth of Field (DoF) Calculator
  • Depth of Field (DoF) Table
  • Hyperfocal Distance Table
  • Circle of Confusion (CoC) Calculator
  • Diffraction Calculator
  • Macro Diffraction Calculator

Dof simulator

Camera Assistant

Articles

Evaluating point cloud similarity

Comparing point clouds is very important for tasks such as change detection, 3D reconstruction validation, sensor fusion, and registration accuracy assessment. Since point clouds are unordered and often large, the choice of similarity metric directly affects robustness, computational cost, and interpretability. Several mathematical metrics are adopted in engineering workflows. Here are some of them.


Chamfer distance (CD)

  • Definition: For two point sets $P$ and $Q$, the (squared) Chamfer distance is:

$$\mathrm{CD}(P, Q) = \frac{1}{|P|} \sum_{p \in P} \min_{q \in Q} \lVert p - q \rVert_2^2 + \frac{1}{|Q|} \sum_{q \in Q} \min_{p \in P} \lVert q - p \rVert_2^2$$

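The Chamfer distance can be sketched in a few lines of numpy (the function name and the toy arrays below are ours):

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric squared Chamfer distance between point sets P (n, 3)
    and Q (m, 3): for each point in P take the squared distance to its
    nearest neighbor in Q, average, and add the term with P, Q swapped."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    # pairwise squared distances, shape (n, m)
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# identical clouds have distance 0
cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(cloud, cloud))  # 0.0
```

The pairwise-distance matrix makes this O(n·m) in memory; for large clouds a k-d tree nearest-neighbor query is the usual substitute.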
List of tools for point cloud to BIM conversion

Study Multi-storeyᵃ Slabsᵇ Non-Manhattan wallsᶜ Wall surfacesᵈ Volumetric wallsᵉ Volumetric spacesᶠ Opening detectionᵍ IFC outputʰ Open-sourceⁱ
Wang et al. [9]
Macher et al. [27]
Jung et al. [20]
Shi et al. [38]
Ochmann et al. [18]
Bassier and Vergauwen [19]
Romero-Jarén and Arranz [8]
Gen and Gentes [52]
Martens and Blankenbach [10]
Mahmoud et al. [11]
Mehranfar et al. [53]
Cloud2BIM

Notes ᵃ Automatic processing of multi-story buildings, where floors are detected and structured hierarchically (e.g., as IfcBuildingStorey) without requiring manual intervention.
ᵇ Detection of slab position, thickness, and horizontal dimensions.
ᶜ Support for walls that are not strictly aligned to an orthogonal (Manhattan) grid, allowing for arbitrary angles.
ᵈ Identification and semantic segmentation of wall surfaces from point clouds.
ᵉ Detection of walls as solid objects, including their topology (connections between walls) and representation as parametric elements with thickness, start and end points, and height.
ᶠ Creation of volumetric room representations, which are derived separately from wall representations and are not necessarily included in solutions that detect volumetric walls.
ᵍ Geometric representation and placement of openings within wall elements, including width, height, position, and type (door, window).
ʰ Automatic conversion of extracted building geometry into the IFC format without reliance on proprietary software.
ⁱ The tool’s code is publicly available, allowing free use and modification.

A comparison table of free CAD tools: LibreCAD and FreeCAD

| Feature | LibreCAD | FreeCAD |
|---|---|---|
| Primary focus | 2D CAD drafting | 3D parametric modeling |
| File formats | DXF (import/export), DWG (read, partial), PNG, SVG | STEP, IGES, STL, DXF, OBJ, IFC, SVG, native .FCStd |
| Ease of use | Easy, AutoCAD-like interface | Steeper learning curve |
| Best for | Technical drawings, 2D plans | 3D design, engineering, BIM |
| 3D support | None | Robust, parametric |
| 2D drafting | Strong, precise | Limited, less intuitive |
| Extensibility | Limited (basic scripting in progress) | Python scripting, plugins, modular workbenches |
| Platforms | Windows, macOS, Linux (Unix-like) | Windows, macOS, Linux, FreeBSD |
| Cost | Free, open-source (GPL) | Free, open-source (LGPL) |

List of drone mission planning software

Free (freemium), open-source

Commercial

Utilities

List of the main products created from LiDAR data

# LiDAR product Description
1 Point cloud (las/laz) Raw or classified 3D points with x, y, z, intensity, return number, and classification.
2 Digital elevation model (DEM) Raster representing bare-earth elevation, with vegetation and buildings removed.
3 Digital surface model (DSM) Raster of first-return surfaces, showing buildings, treetops, and other elevated features.
4 Canopy height model (CHM) Difference between DSM and DEM, used to measure tree height or other surface objects.
5 Intensity image Greyscale image created from LiDAR return intensity, useful for feature detection.
6 Hillshade / shaded relief 3D-like visualization of terrain using elevation data and artificial lighting.
7 Contour lines Vector elevation lines derived from DEM at specified intervals (e.g., 1m, 5m).
8 Slope map Raster map showing slope gradient calculated from DEM.
9 Aspect map Raster map showing slope direction (azimuth) derived from terrain surface.
10 Vegetation classification / metrics Derived from classified point clouds: canopy cover, leaf-on/off, vegetation height.
11 Building footprints / models (3D) Features representing buildings in 2D or 3D, from classified returns or DSM analysis.
12 Transect profiles Cross-sectional elevation profiles extracted from point clouds or DEMs.
13 Hydrologic-enforced DEMs Modified DEMs for accurate flow modeling, stream networks, and watersheds.
14 Volumetric change models Detect terrain changes over time (e.g., erosion, landslides, construction).
15 Floodplain / hazard mapping model Derived from LiDAR + hydrologic models to predict flood zones or landslide risk.
16 Powerline / corridor models Feature extraction along linear infrastructure (e.g., wires, towers, clearances).
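Several of the raster products above are simple grid arithmetic on the DEM and DSM; a minimal numpy sketch with made-up toy grids (products 4 and 8 from the list):

```python
import numpy as np

# toy 4x4 grids in metres; in practice these come from rasterized LiDAR
dem = np.zeros((4, 4))                        # flat bare earth at 0 m
dsm = dem + np.array([[0, 0, 0, 0],
                      [0, 5, 5, 0],
                      [0, 5, 5, 0],
                      [0, 0, 0, 0]], float)   # a 5 m "tree block"

# canopy height model (product 4): per-cell DSM minus DEM
chm = dsm - dem

# slope map (product 8): central-difference gradients of the DEM
cell = 1.0                                    # grid spacing in metres
dz_dy, dz_dx = np.gradient(dem, cell)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

print(chm.max())        # tallest object height: 5.0 m
print(slope_deg.max())  # flat terrain: 0.0 degrees
```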

Analysis of contrast, sharpness of photos for photogrammetry

Photogrammetry Quality Analyzer


Comprehensive analysis of photos for optimal photogrammetry results

Guide to applications of laser scanners

The “best” tool depends on specific project requirements like accuracy, speed, safety, and budget. Often, a combination of technologies yields the best results.

Domain Static Scanner (TLS) Mobile MMS (vehicle) Handheld (MMS/portable) UAV LiDAR (low-altitude drone) Airborne LiDAR (manned/large UAV)
Built environment
Geodetic & topographic survey 🟨 🟩 🟧 🟩 🟩
Cadastral survey 🟩 🟨 🟨 🟨 🟥
Interior surveying 🟩 🟧 🟩 🟧 🟥
Construction progress monitoring 🟨 🟩 🟨 🟩 🟥
Energy & industrial facilities 🟩 🟧 🟨 🟩 🟥
Infrastructure objects (bridges, dams) 🟩 🟨 🟧 🟩 🟥
Road & railway networks 🟧 🟩 🟧 🟩 🟨
Natural environment
Forests 🟥 🟧 🟧 🟩 🟩
Mining sites 🟨 🟨 🟧 🟩 🟨
Specialized
Cultural heritage 🟩 🟨 🟨 🟩 🟨
Archaeological documentation 🟩 🟨 🟨 🟩 🟩
Tunnels & caves 🟩 🟩 🟨 🟧 🟥

Usage type:

7 tips for better drone photography

Find out how to get the best out of your drone's camera with settings, techniques, and real-world examples.


1. Camera settings

  • Shoot RAW (DNG): Always shoot in RAW rather than JPEG. RAW files capture more pixel information, since they are not lossily compressed, which means:

    • You can recover details in shadows.
    • You can bring back blown-out highlights.
    • You can adjust colors more easily in post-production. If your drone supports RAW, use it.
  • ISO 100: Stick to ISO 100 for the best image quality. Only raise it if you’re shooting video in low light. Thanks to gimbals, drones are stable, like having a tripod in the sky, so you can use slower shutter speeds without blur.
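The trade-off behind these settings is the exposure triangle, which fits in one formula; a hedged sketch (the function name is ours, the EV formula is the standard ISO-adjusted exposure value):

```python
import math

def exposure_value(aperture_n, shutter_s, iso):
    """ISO-adjusted exposure value: EV = log2(N^2 / t) - log2(ISO / 100).
    Settings with the same EV admit the same total light, which is the
    exposure triangle trade-off in a single number."""
    return math.log2(aperture_n ** 2 / shutter_s) - math.log2(iso / 100)

# f/2.8 at 1/100 s, ISO 100 ...
ev_a = exposure_value(2.8, 1 / 100, 100)
# ... equals f/2.8 at 1/200 s with ISO doubled to 200: halving the
# shutter time exactly compensates one stop of extra ISO.
ev_b = exposure_value(2.8, 1 / 200, 200)
print(abs(ev_a - ev_b) < 1e-9)  # True
```

This is why the gimbal matters: a slower shutter at ISO 100 can keep the same EV as a fast shutter at high ISO, but with less sensor noise.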

A curated list of GPS/GNSS software

Scientific/research

  • Bernese Link
  • BKG NtripClient Link
  • goGPS Link
  • GAMP Link
  • EasyRTK Link
  • gLab Link
  • GAMIT/GLOBK Link
  • GIPSYX Link
  • GNUT Link
  • GPStk Link
  • GPStools GPS/GNSS Precise Analysis Software Link
  • GPSd Link
  • Laika an open-source GNSS processing library Link
  • MG-APP Link
  • GNATSS Link
  • GROOPS Link
  • NetDiff Link
  • New Zealand PositioNZ-PP - GNSS Post-processing service Link
  • PANDA
  • PPPLib Link
  • CNES PPP Wizard Link
  • PPPH Link
  • PPPH-VA A MATLAB-based Software for the Real-time Multi-GNSS Variometric Process Link
  • raPPPid (TU Vienna) Link
  • RTKlib Link
  • GSILIB Link
  • CLASLIB Link
  • GEORUST Rinex

Commercial

  • EZSurv Link
  • CHCNAV, CGO 2.0 Baseline Processing Link
  • Carlson SurveyGNSS Link
  • Carlson SurvCE/SurvPC
  • GrafNav Link
  • Hi-Target Geomatic Office (HGO)
  • Leica Geoffice Link
  • MAGNET Office Tools
  • RTNet Link
  • GNSS Solutions Spectra Precision Survey Link
  • Tersus Geomatics Office Link
  • Topcon Tools
  • Trimble Business Center Link
  • WASoft Link

Software published in GPS Solutions journal

This list was originally compiled by NOAA Geodesy (List Link). The papers were published in the GPS Toolbox before 1 September 2024. Source code is no longer available through the NGS website; please contact the individual authors for their code, or contact the Editor-in-Chief of the GPS Solutions journal.

Comparison of TLS, photogrammetry, total station, GNSS, levelling, and mobile mapping

Area vs errors graph for different geomatics techniques (Sestras et al. 2025)

Method/instrument Advantages Disadvantages Accuracy Measured parameters
GNSS High precision in static measurements, real-time coordinate tracking, automated measurements, capable of long-distance measurements Depends on satellite visibility. Multipath effects reduce accuracy. Less accurate in vertical than horizontal plane. Sub-cm (static), cm (kinematic) Coordinates (vertical, horizontal)
RTS Sub-millimeter accuracy, automated measurements, real-time coordinate tracking, cost-effective for multiple static measurement points Requires contact with the structure. Requires line-of-sight. Sensitive to weather (fog, rain, wind). Higher cost compared to simpler instruments Sub-millimeter accuracy Coordinates (vertical, horizontal)
Level High precision, lower cost compared to advanced instruments Only measures vertical coordinates. Not suitable for kinematic measurements. Sensitive to weather (fog, rain, wind). No automated measurement process 0.1 mm Coordinates (vertical)
TLS High data density, fast and accurate 3D surveying over large areas, non-contact method Less accurate than traditional methods. Requires line-of-sight. Expensive and require expert data processing. Sensitive to weather (fog, rain, wind). 1–2 mm 3D coordinates, object geometry
Photogrammetry Low cost, fast and simple data collection, non-contact method, long-range capability Lower accuracy than high-precision methods. Requires line-of-sight. Affected by lighting and weather (fog, night). Requires post-processing and data analysis. Sub-millimeter accuracy Coordinates (in sensor plane or 3D)
LVDT High precision, real-time tracking, automated measurements, unaffected by weather conditions Requires contact with the structure. Setting a reference point can be challenging. Limited application for large coordinate changes. Up to 0.01 mm Coordinate differences (1D)

Comparison of different geospatial data acquisition techniques (after Klepárník, R., & Sedlácek, J. (2021))

Data acquisition: Method Precision (m) Acquisition speed Price Area size
Direct methods
Total station 0.01-0.05 ** €€€ O
RTK GPS 0.02-0.1 ** €€€ OO
Laser scanning 0.01-0.1 *** €€€€ O
UAV Photogrammetry 0.03-0.1 **** €€€ OOO
Direct methods / indirect methods
Airborne laser scanning 0.1-1 *** €€€€€ OOOO
Airborne photogrammetry 0.1-1 ***** €€€€€€ OOOO
Spaceborne (Remote sensing) 1-20 ** 0-€ Any size
Indirect methods
Digital cadastre 0.1-0.2 * 0 Any size
DTM/DSM 0.2-1 * Any size
Map source By Map Scale By source 0-€ Any size

Comparison of different laser scanning methods

Airborne laser scanner
  • Ideal usage: Exterior mapping, long/linear projects, large-scale mapping
  • Accuracy: +/- 10 cm, depending on conditions
  • Range: 3,000 feet

Stationary terrestrial laser scanner
  • Ideal usage: Interior high-density, high-accuracy scans (MEP, architectural, structural, facilities management, and forensics)
  • Accuracy: +/- 2 mm
  • Range: 60 to 120 meters, depending on conditions

Mobile terrestrial laser scanner
  • Ideal usage: Exterior high-accuracy, longer-range scans (architectural reconstruction, surveying, engineering, planning, forensics)
  • Accuracy: +/- 2 mm
  • Range: 150 to 330 meters, depending on conditions

Handheld (industrial)
  • Ideal usage: Top-quality, high-precision scans; suitable for indoor scans
  • Accuracy: +/- 0.5 mm
  • Range: up to 110 meters, depending on conditions

References

  • Sestras, P., Badea, G., Badea, A. C., Salagean, T., Roșca, S., Kader, S., & Remondino, F. (2025). Land surveying with UAV photogrammetry and LiDAR for optimal building planning. Automation in Construction, 173, 106092.
  • Klepárník, R., & Sedlácek, J. (2021). Uav photogrammetry, lidar or webgl? A comparison of spatial data sources for landscape architecture. J. Digit. Landsc. Archit, 6, 220-229.

List of books and papers in land surveying, geomatics, geodesy, and geospatial engineering

Books

  • Principles of photogrammetry and remote sensing link1 link2

  • Principles of Remote Sensing: An Introductory Textbook link1, link2

Book sections

  • Hafizur Rahaman Photogrammetry: What, How, and Where link1

White-papers and reports

  • 3D model processing guidelines for photogrammetry link1

List of repositories that are hosting geospatial datasets

Practical tips on how to stabilise the tripod with the instrument

Sometimes the floor is slippery or it is windy, and you have a 30k, 40k, or 50k instrument on the tripod.

What you want is a setup that:

  • Prevents costly instrument falls
  • Provides a stable platform for accurate layout and monitoring
  • Offers a long-term monitoring solution

Here are a few tips on how to stabilise the tripod.

  • Use a high-quality tripod, such as a Nedo or Seco Trimax
  • Use a tripod stabilizer with O-rings for the tripod feet
  • Use a heavy-duty tripod sand bag Link
  • Put a big carpet down

For monitoring or high-precision work: if the ground is clay, stone, or soil, you can dig a small hole, cast in two bags of postcrete, and bolt down a Survipod Boltfix, which is designed for heavy-duty fixing in concrete, brick, and concrete block. This solution has been used many times on railway, bridge, mining, and other infrastructure projects. Link

Photogrammetry datasets: 100% free to download

Often it is useful to have access to freely available photogrammetry sample datasets, whether for testing new workflows, learning and experimenting with tools, or benchmarking software performance.

Here are a couple of datasets that you can use right away! No tricks.

  • All the datasets are published under “CC Attribution” license.
  • You can use the datasets for private/evaluation/training purposes.
  • Reselling of the datasets or any other commercial activity is not allowed.

Understanding classification of UAVs (drones)

Unmanned Aircraft Systems (UAS)—sometimes called Unmanned Aerial Systems—refer to any aircraft without a human pilot onboard, along with all the components needed to operate it. A UAS includes the unmanned aircraft (UA) or unmanned aerial vehicle (UAV) itself, as well as the support equipment such as the control station, data links, telemetry, communications, and navigation tools. The UA or UAV is the flying element of the system, operated either remotely by a pilot through a ground control station or autonomously via onboard computers and communication links. While the term “drone” was once used primarily for military systems, it has now become a common synonym for UAS in both civilian and commercial contexts.

In recent years, falling costs have fueled a rapid increase in UAS availability for non-military purposes. These systems make it possible to capture high-resolution imagery and data at a fraction of the cost compared to traditional surveying/measurement methods. In the past, remote sensing data was expensive to obtain and largely the domain of governments or specialized commercial programs. Now, affordable UAS paired with GPS and camera technology have made the collection of remote sensing data more accessible and cost-effective than ever before.

Quotes about business

  • The best engineers are brutally honest with their clients, no matter what the impact is upon the engineer’s bottom line.

  • It is not what you know. It is not who you know. It is what you know and who knows what you know.

  • If you are too good at your job… they will not promote you…because you are too useful where you are…and they would have to invest time and effort…to replace you.

  • Project management is 80% people, 20% process.

  • Engineers shouldn’t get paid for their time, they should get paid for their value.

  • No one cares what software you use. They care how fast you solve a problem.

  • The best engineers are storytellers.

  • Every meeting is a pitch, even if you don’t realise it.

  • Clients want trusted advisors, not task junkies.

  • You won’t get promoted for doing your job. You’ll get promoted for making others better at theirs.

  • Clients in the industry are very simple, if someone else can provide the same results, or better, for a cheaper price, they have won 90% of the client’s deciding factor.

Open point cloud formats

There is currently no general-purpose, open standard for storing data produced by three dimensional (3D) imaging systems, such as laser scanners. As a result, producers and consumers of such data rely on proprietary or ad-hoc formats to store and exchange data. There is a critical need in the imaging industry for open standards that promote data interoperability among imaging hardware and software systems.

Summary

The website https://openpointcloudformats.org/ is the home of the Open Point Cloud Formats (OPC) initiative, a community-driven effort advocating for open, standardized formats for 3D point cloud data.

List of standards, guidelines, manuals, and best-practice guides in geomatics, land surveying, geodesy and geospatial engineering

Standards

A standard is a rule or requirement that is determined by a consensus opinion of users. It prescribes the accepted and (theoretically) the best criteria for a product, process, test, or procedure. The benefits of a standard are safety, quality, interchangeability of parts or systems, and consistency across international borders.

ISO (International Organization for Standardization) is the world’s leading developer of International Standards. It is a global network that identifies and delivers international standards required by business, government, and society. Several ISO standards are applicable to the Geomatics profession (the ISO 191XX family, ISO 12858, ISO 17123, and ISO 9000).

A list of (awesome) lists related to geodesy, remote sensing, and the geospatial sciences and industry

Convert raw GPS/GNSS observation from Android smartphones to standard RINEX observation files

A list of scripts and tools that can convert multi-frequency and multi-constellation GNSS raw data collected by Android smartphones into RINEX files.

  • Gnss Logger App by Google Link

  • BUAA Rinex convertor Link

  • Python toolbox for android GNSS raw data to RINEX conversion Link

  • Android GNSS Logger to RINEX converter by Rokubun Link

  • GEA GNSS Link

  • Geo++ Rinex Logger Link

  • Android Raw 2 Rinex by Morcki Link

  • GNSS Analysis app (Google)

  • Android2Rinex by Zqmever Link

Selecting independent baselines for GNSS network adjustment

Baseline selection can affect the quality of a GPS/GNSS geodetic control network. Choosing independent baselines – the minimal set of baseline vectors – is crucial for a robust network adjustment.

Below we describe what independent baselines are, why they matter, and how to select them.

What are independent GNSS baselines?

In a GNSS survey session with multiple receivers, you can form baseline vectors between every pair of simultaneously occupied receivers. However, many of those baselines are redundant (linearly dependent).
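With r receivers in a session there are r·(r−1)/2 possible baselines, but only r−1 of them are independent: any spanning tree of the session graph is a valid independent set. A minimal sketch that greedily keeps the shortest baselines (Kruskal's algorithm; the station names and coordinates are made up):

```python
from itertools import combinations
import math

def independent_baselines(stations):
    """Pick r - 1 independent baselines from one session of r receivers.
    All r*(r-1)/2 pairwise baselines are candidates; here we keep the
    shortest ones that do not close a loop (Kruskal's spanning tree),
    a common practical choice.  `stations` maps name -> (x, y)."""
    pairs = sorted(
        combinations(stations, 2),
        key=lambda ab: math.dist(stations[ab[0]], stations[ab[1]]),
    )
    parent = {s: s for s in stations}

    def find(s):                      # union-find root lookup
        while parent[s] != s:
            s = parent[s]
        return s

    chosen = []
    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb:                  # adding (a, b) closes no loop
            parent[ra] = rb
            chosen.append((a, b))
    return chosen

session = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (5, 5)}
print(independent_baselines(session))  # 3 baselines for 4 receivers
```

Shortest-baseline selection is only one criterion; session geometry and observation quality matter too, which is where the adjustment residuals come in.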

List of educational blogs and knowledge-bases in land surveying, geomatics, geodesy, and geospatial engineering domains

List of 360-degree cameras

Cameras are fundamental in creating virtual and augmented reality environments.

While standard cameras require time-consuming image collection and stitching, 360-degree cameras capture a full spherical view in a single shot. These cameras use multiple lenses to capture images from different angles simultaneously. The footage is then fused by software—either in-camera or on a connected device—to create an immersive 360-degree world that can be explored in real time with a VR headset. The quality of the camera directly impacts the realism of the experience. Below is a comparison of some popular 360-degree cameras.

No Camera Features
1 Insta360 One X2
• 2x 5.7K lenses
• 6080x3040 360-degree recording
• 6-axis gyroscope
• 360 directional focus audio
• Bluetooth, Wi-Fi, USB connectivity
• MicroSD card storage
• Less Expensive
2 GoPro Max
• 1x 5.6K lens
• 16.6mp 360-degree recording
• GPS
• 4 channel microphones audio
• Bluetooth, Wi-Fi, USB connectivity
• MicroSD card storage
• Less Expensive
3 Ricoh Theta Z1
• 2x 4K lenses
• 23mp 360-degree recording
• GPS
• 6x microphones audio
• Bluetooth, Wi-Fi (web server), USB connectivity
• 50 GB internal storage
• Expensive
4 Samsung Gear 360
• 2x 15mp lenses
• 3840x1920 360-degree recording
• Accelerometer, Gyroscope
• Bluetooth, Wi-Fi, USB, NFC connectivity
• Compatible with High-end Samsung Mobile phones
• 1 GB internal storage with 128 GB using MicroSD
• Cheap
5 Vuze XR
• 2 x Sony 12MP IMX-378 fisheye lenses
• 3840x1920 360-degree recording through Ambarella H2 video processor
• Accelerometer, Gyroscope
• Wi-Fi, USB connectivity
• 4x microphones
• Removable MicroSD card
• Cheap
6 Nokia OZO
• 8x 2Kx2K 190-degree lenses
• Capture up to 12K30 x 12K30 Video/Stills
• 500 GB SSD Module for Recording
• Automatic stitching
• HDMI output
• Omnidirectional microphones
• Extremely Expensive
7 Gopro Odyssey
• 16 x 2.7K Lenses
• MicroSD card storage
• Automatic stitching
• USB cable for connectivity
• 16-mono microphones
• Expensive
8 Kandao Obsidian GO 360° 3D VR Camera
• 8 x 6K f/2.8 195° Fisheye Lenses
• Capture up to 12K30 x 12K30 Video/Stills
• Internal 8TB SSD Module for Recording
• Automatic stitching
• Wi-Fi 6, Bluetooth 5, Gigabit Ethernet
• 4-directional microphones
• Extremely Expensive
9 Panono 360° 108MP Camera
• 6 x 3MP Cameras with 360° 108MP Still Image recording
• Image Requires Stitching
• Wi-Fi Connectivity
• Expensive
10 Z CAM V1 Spherical VR 360 Camera
• 10 fisheye lenses with a 190-degree view each
• 7K 360-degree video recording
• Automatic stitching
• Live video streaming
• 4 built-in microphones
• Very Expensive
11 Z CAM V1 Pro Cinematic VR Camera
• 9 MFT Lenses with 190-degree view
• Automatic stitching
• Live video streaming
• 4-directional microphones
• Extremely Expensive

Other cameras

Source

  • Siddiqui, M. S., Syed, T. A., Nadeem, A., Nawaz, W., & Alkhodre, A. (2022). Virtual tourism and digital heritage: an analysis of VR/AR technologies and applications. International Journal of Advanced Computer Science and Applications, 13(7).

Questions and answers about collecting and processing measurements with total station

Resection with 4 points

If you have four control points in the field, will you do a resection including all 4 points, or use just 3 to set up the resection and take a check shot on the fourth?

Let’s try to answer it!


The goal of resection is to determine the total station’s position and orientation by observing known control points (backsights). The configuration and number of these points directly impact the accuracy of the solution. The more accurate and geometrically well-distributed your control points, the better the result.
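One way to see the value of the fourth point numerically: a least-squares position fix from measured distances to control points (a simplified, distance-only resection sketch; the function name, coordinates, and Gauss-Newton setup are illustrative, not a specific instrument workflow):

```python
import numpy as np

def resect_position(ctrl, dists, x0, iters=10):
    """Gauss-Newton estimate of the station position from distances to
    known control points.  With 3 points in 2D the fix is unique; a 4th
    point adds redundancy, so the residuals can expose a bad observation
    or a disturbed control point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - ctrl                        # (n, 2) station-to-point
        ranges = np.linalg.norm(diff, axis=1)  # computed distances
        A = diff / ranges[:, None]             # Jacobian d(range)/d(x, y)
        v = dists - ranges                     # observation residuals
        dx, *_ = np.linalg.lstsq(A, v, rcond=None)
        x = x + dx
    return x, dists - np.linalg.norm(x - ctrl, axis=1)

# four made-up control points on a 100 m square, error-free distances
ctrl = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([40.0, 30.0])
dists = np.linalg.norm(true_pos - ctrl, axis=1)
pos, res = resect_position(ctrl, dists, x0=[50.0, 50.0])
print(np.round(pos, 3))   # recovers the true station position
```

Including the fourth point in the adjustment uses its redundancy in the solution; the "3 + check shot" approach instead reserves it as an independent test. Either way, well-distributed geometry dominates the result.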

A comprehensive comparison of 3D data formats and their support for various 3D data types

Comparison of 3D data formats by supported data types

3D data type LAS/LAZ E57 PLY OBJ STL FBX GLB/GLTF STEP IGES IFC USD 3DS DAE (Collada)
Point cloud
Mesh (polygonal) ✅ Limited
TIN (triangulated surface) ⚠️ ⚠️
NURBS / parametric surface ✅ Limited
CAD solid (B-rep) ⚠️ ⚠️ Limited
Voxel grid ⚠️
SDF / implicit surface ⚠️
Skeleton / rigging ⚠️ Limited
Animation data
Texture / material ⚠️
Photogrammetry dataset
LiDAR scan dataset
Displacement maps
Scene graph hierarchy ⚠️ ⚠️
  • ✅ = Native support
  • ⚠️ = Partial support or requires workarounds
  • ❌ = Not supported

List of content creators in domains of land surveying, geodesy, geomatics, photogrammetry and remote sensing

List of free tools for 2D and 3D model visualisation

Handling noise on reflective surfaces in laser scanning

Highly reflective surfaces such as stainless steel pipes, tanks, and polished metals are a common challenge in terrestrial laser scanning. These materials often cause multipath reflections, ghost points, or noisy clusters within the point cloud. If left unaddressed, such noise can distort measurements or lead to incorrect modeling.

This article explains why reflective surfaces create noise, and how to minimize and clean up these effects, both during data capture and post-processing.


Why do reflective surfaces cause noise?

Laser scanners rely on light reflections to measure distances. When a surface is highly polished, the laser beam may scatter or bounce multiple times, leading to:

  • multipath reflections, where the beam bounces off several surfaces before returning,
  • ghost points that do not correspond to any real surface, and
  • noisy clusters around or behind the reflective object.

Historical timeline of photogrammetry and laser scanning

Photogrammetry

Ancient work

  • 5th century BCE: Chinese writings (the Zhoubi Suanjing, compiled during the Zhou dynasty, 1046–256 BCE) describe pinhole projections of the sun through perforated gnomons (camera obscura principles). Some ancient sightings of gods and spirits, especially in temple worship, are thought to possibly have been conjured up by means of camera obscura projections.
  • 4th century BCE: Euclid formulates geometric principles of perspective and the pinhole camera model; Aristotle also describes camera obscura observations.
  • The camera obscura is later studied by Ibn al-Haytham (c. 1000) and Leonardo da Vinci (c. 1500).

Early period

  • 1435 Leon Battista Alberti publishes De Pictura, explaining linear perspective; around 1430–1440 he also devises early surveying and map-making techniques (e.g. measuring Rome’s city plan), laying the groundwork for topographic mapping.
  • 1611 Camera lucida conceptualized (patented in 1806 by William Hyde Wollaston).
  • 1685 Optical projection illustration. Johann Zahn’s treatise Oculus Artificialis includes detailed drawings of camera obscura setups (like the dragon projection) that foreshadow how projected images could be used to amaze or deceive.

19th century

  • 1839 Daguerreotype, “gift to the world” from French Academy. First publicly available photographic process.
  • 1851 The “Father of Photogrammetry”, French officer Aimé Laussedat, develops the first photogrammetric devices and methods, using calibrated images and drawing apparatus.
  • 1858 The German architect A. Meydenbauer develops photogrammetric techniques for the documentation of buildings.
  • 1859 Aimé Laussedat demonstrates a photo-based topographic survey of Paris.
  • 1866 The Viennese physicist Ernst Mach publishes the idea of using the stereoscope to estimate volumetric measures.
  • 1885 The ancient ruins of Persepolis were the first archaeological object recorded photogrammetrically.
  • 1889 The first German manual of photogrammetry was published by C. Koppe.
  • 1893 Meydenbauer coined the word “Photogrammetry”.
  • 1896 Édouard Gaston Daniel Deville presents the first stereoscopic instrument for vectorized mapping.

20th century

  • 1909 The invention and use of stereoscopic plotting instruments (e.g., the Zeiss Stereoplanigraph).
  • 1910 The ISP (International Society for Photogrammetry), now ISPRS, was founded by E. Dolezal in Austria.
  • 1911 Aerial photogrammetry with rectified photographs by Theodor Scheimpflug.
  • 1924 Relative orientation determined by 6 points in overlapping images — von Gruber points.
  • 1930s–1950s Development of mathematical methods for block adjustment and bundle adjustment.
  • 1957 Analytical plotter (Helava): image-to-map coordinate transformation by electronic computation and servo control.
  • 1964 First architectural tests with the new stereometric camera-system, which had been invented by Carl Zeiss, Oberkochen and Hans Foramitti, Vienna.
  • 1980s Improvements in computer hardware and software pave the way for digital photogrammetry.

21st century

  • 2005 Introduction of Structure from Motion (SfM) algorithms enables 3D reconstruction from unordered image sets, expanding usability in archaeology, architecture, and geology.
  • 2009 Commercial drones with integrated high-resolution cameras begin to appear.
  • 2010s Emergence of open-source tools (e.g., OpenMVG, MicMac, COLMAP).
  • 2010 Growth of consumer-grade UAVs (drones) with integrated cameras
  • 2012 Launch of Agisoft PhotoScan (later Metashape), bringing accessible 3D modeling to professionals and hobbyists.
  • 2013 DJI releases the Phantom 1, popularizing affordable camera drones.
  • 2014 Pix4D releases Pix4Dmapper, pushing automated drone image processing for mapping and 3D modeling further into commercial and academic use.
  • 2015 Photogrammetry integrated with BIM (Building Information Modeling) workflows becomes standard for renovation and documentation.
  • 2016 The rise of smartphone-based photogrammetry apps and cloud processing
  • 2017 Use of real-time SLAM (Simultaneous Localization and Mapping) in mobile and drone platforms enables real-time spatial data capture alongside photogrammetry.
  • 2018 Introduction of AI-enhanced feature matching and point cloud generation increases automation and accuracy in image processing.
  • 2020 Neural Radiance Fields (NeRF) are introduced (Mildenhall et al., ECCV 2020).
  • 2020 Integration of LiDAR and photogrammetry from mobile devices (e.g., iPad Pro, iPhone 12 Pro and newer) enhances combined modeling.
  • 2021 Reality capture pipelines using photogrammetry, LiDAR, and ML become essential tools in game development, digital twins, and VR/AR applications.
  • 2023 3D Gaussian Splatting (Kerbl et al., SIGGRAPH 2023) introduces new methods for efficient, photo-realistic scene reconstruction.
  • 2024 Cloud-native platforms offer end-to-end photogrammetry processing, AI-driven editing, and integration with GIS, 3D printing, and metaverse development.

Laser scanning

20th century

  • 1960s First laser distance measurements (coinciding with laser invention in 1960).
  • 1980s Terrestrial laser scanners (TLS) emerge for industrial metrology.
  • 1990s Airborne LiDAR (e.g., Optech ALTM) revolutionizes topographic mapping.

21st century

  • Early 2000s Commercial TLS systems (e.g., the Cyrax 2400) enter wide use.
  • 2010s Mobile LiDAR systems (e.g., Trimble MX9) enable rapid urban surveys.
  • 2014–2017 Google’s Project Tango demonstrates real-time indoor 3D scanning using depth sensors and SLAM.
  • 2017 Real-time SLAM systems enable on-the-fly mapping.
  • 2020 Consumer LiDAR (Apple iPad Pro) brings scanning to mass markets.
  • 2023 Single-photon LiDAR (SPL) enables long-range, high-resolution mapping.

List of AR/VR displays and headsets

The quality of immersion in virtual reality (VR) is largely defined by the display device. A regular monitor can simulate a virtual environment, but the experience remains non-immersive. Mobile phones, when paired with simple headsets like Google Cardboard or VR Box, can offer a semi-immersive experience, since they use motion sensors to adjust orientation and viewing angles.

For a fully immersive VR experience, advanced head-mounted displays (HMDs) are required. These devices integrate sensors, cameras, controllers, and sometimes even their own computing hardware to deliver seamless interaction and presence.


Semi-immersive VR devices

Google Cardboard and VR Box are affordable solutions that use smartphones as displays.

  • Provide basic stereoscopic viewing by splitting the smartphone screen.
  • Rely on elastic bands or handheld mounting, which is ergonomically limiting.
  • Some models support Bluetooth controllers for added interactivity.
  • Pros: cheap and accessible.
  • Cons: limited interaction and comfort; not truly immersive.

Fully immersive VR devices

VR headsets deliver fully immersive experiences by combining integrated sensors and cameras, dedicated motion controllers, and, in standalone models, their own computing hardware.

List of sensors required for AR/VR Experiences

Virtual and augmented reality (AR/VR) systems rely on a range of sensors to create immersive and interactive experiences. These sensors capture motion, direction, visuals, sound, and environmental factors to bridge between the real and virtual worlds.

Conventional AR/VR devices often use accelerometers, gyroscopes, magnetometers, and GPS for motion and location tracking. However, modern systems are adopting advanced technologies such as time-of-flight sensors, structured light, depth sensing, and thermal imaging to improve accuracy and realism.

Feedback sensors for touch, smell, and heat-based interactions are also being introduced, expanding the sensory dimension of VR/AR.

Together, these technologies help systems understand user movement, environment context, and interactions, delivering seamless immersion.

Below is a list of common sensors used in AR/VR systems and their typical usage.


Common sensors in AR/VR Systems

| No. | Sensor | Usage |
|-----|--------|-------|
| 1 | Accelerometer | Tracks movement in X, Y, and Z dimensions |
| 2 | Gyroscope | Senses angular velocity or rotational motion |
| 3 | G-sensor | Measures force of movement in gravitational units |
| 4 | Magnetometer | Tracks direction |
| 5 | Proximity sensor | Measures distance of nearby objects |
| 6 | Light sensor | Measures ambient light intensity |
| 7 | IR sensor | Detects infrared light for proximity and motion |
| 8 | Depth sensor | Captures depth to distinguish near and far objects |
| 9 | Eye-tracking sensor | Tracks the user’s eye movement to identify gaze direction |
| 10 | Directional microphone | Identifies sound direction using the Doppler effect |
| 11 | Inertial Measurement Unit (IMU) | Combines accelerometer, gyroscope, and magnetometer for motion data |
| 12 | Time-of-Flight (ToF) sensor | Uses laser/IR to measure distance for object and navigation tracking |
| 13 | Object & gesture tracking | Camera-based tracking of gestures and object recognition |
| 14 | Ultrasound sensor | Detects proximity and distance using sound waves |
| 15 | Thermal sensor | Detects heat signatures for differentiating objects/users |
| 16 | Ambient light sensor | Measures environmental lighting conditions |
| 17 | GPS | Provides outdoor, satellite-based location tracking |
| 18 | Indoor GPS | Uses Bluetooth/Wi-Fi triangulation for indoor location |
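To illustrate how the raw motion sensors listed above (accelerometer, gyroscope, IMU) are combined into a usable head pose, here is a minimal complementary-filter sketch in Python. All names and numbers are illustrative and not taken from any headset SDK: the filter blends the gyroscope's integrated rate (smooth but drifting) with the accelerometer's gravity-derived pitch (noisy but drift-free).

```python
def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend integrated gyroscope rate with the accelerometer's
    gravity-based pitch estimate. alpha near 1 trusts the gyro more."""
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# Simulated readings: the headset is actually held still at 10 degrees pitch.
true_pitch = 10.0
pitch = 0.0   # filter state, deliberately uninitialised
dt = 0.01     # 100 Hz IMU update rate
for step in range(2000):
    gyro_rate = 0.0 + 0.5  # deg/s: stationary, but with a constant gyro bias
    accel_pitch = true_pitch + (1 if step % 2 else -1) * 2.0  # noisy gravity estimate
    pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt)

print(round(pitch, 1))  # settles near the true 10 degrees despite bias and noise
```

The blend factor alpha trades responsiveness against noise; real headsets apply the same idea inside more elaborate estimators such as Kalman filters.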

Source

  • Siddiqui, M. S., Syed, T. A., Nadeem, A., Nawaz, W., & Alkhodre, A. (2022). Virtual tourism and digital heritage: an analysis of VR/AR technologies and applications. International Journal of Advanced Computer Science and Applications, 13(7).