
3D Documentation

Structured Light Scanning (SLS)

Structured light scanning is a method favoured by the automotive and engineering industries for its high accuracy. Unlike photogrammetry, which can be performed with almost any camera, structured light scanning relies on highly engineered and calibrated equipment that performs numerous intrinsic and extrinsic checks during the capture process.

This method uses similar principles to photogrammetry in that it observes an area of a scene from two points of view. However, rather than relying on computer vision algorithms to match similar areas of the scene, it projects a pattern of light (usually lines) and uses the cameras to calculate shape from how these lines deform across the surface.
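As a simplified illustration of that principle (not the geometry of any particular scanner), consider a camera looking straight down on a surface while a projector casts a stripe at a known angle: a raised feature shifts the stripe sideways in the image, and the shift converts directly to height. The function name and setup here are hypothetical:

```python
import math

def height_from_stripe_shift(shift_mm: float, projector_angle_deg: float) -> float:
    """Recover surface height from the sideways shift of a projected stripe.

    Assumed geometry: camera looks straight down; projector illuminates at
    projector_angle_deg from the vertical. A point raised by h displaces the
    stripe by h * tan(angle), so we invert that relation.
    """
    return shift_mm / math.tan(math.radians(projector_angle_deg))

# A stripe displaced by 1 mm under 45-degree illumination implies a 1 mm rise.
print(height_from_stripe_shift(1.0, 45.0))
```

A real scanner repeats this measurement for thousands of stripe positions per frame, with full calibration replacing the idealised geometry above.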

Under the right conditions photogrammetry can come close to the same fine detail offered by structured light scanning, but without the same degree of certainty, and with a far greater computational burden in the post-processing stages.

This method is especially valuable for monitoring the condition of artefacts, and has been an integral part of the documentation strategy for rare and delicate artefacts, such as those in the Viking Age assemblages, for nearly two decades.

Photogrammetry and Structure from Motion (SfM)

Photogrammetry is a capture process in common use across all areas of heritage. The underlying principles of photogrammetry, obtaining reliable real-world information from images, date back over a century. Various methods are referred to as photogrammetry; today it is generally SfM photogrammetry that is used. The approach is similar to stereo-pair photogrammetry, which uses the known distances and angles between two camera positions to triangulate the real-world location of items in those images. This can be, and historically was, done manually, and it has been an important part of map-making for decades. With modern computing power it is possible to automate this process, producing far more detailed information by combining hundreds or thousands of images taken from different positions.
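The triangulation at the heart of this approach can be sketched in a few lines. This is a hedged, minimal illustration (the function and midpoint method are my own, not the museum's pipeline): given two camera centres and the viewing rays towards the same feature, the feature's 3D position is where the rays (nearly) meet.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Locate a 3D point sighted from two known camera positions.

    c1, c2 are camera centres; d1, d2 are viewing-ray directions towards the
    same feature. Real rays rarely intersect exactly, so we return the
    midpoint of their closest approach (a least-squares solution).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve t1*d1 - t2*d2 = c2 - c1 for the distances t1, t2 along each ray.
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    return (c1 + t[0] * d1 + c2 + t[1] * d2) / 2

# Two cameras one metre apart both sight a feature at (1, 2, 10).
point = np.array([1.0, 2.0, 10.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
print(triangulate(c1, point - c1, c2, point - c2))
```

SfM software performs this for millions of matched features at once, while simultaneously estimating the camera positions themselves.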

The museum has been employing this technique in one form or another for over a decade. In the field we use it with drone imagery to document excavation sites and their surrounding landscapes, as well as for recording individual features. In recent years the museum has focused its efforts on preparing high-resolution, reliable 3D data for objects in its collection.

As computing power increases this technique is becoming much more accessible, though with the parallel advances in imaging technology it can still be a costly process. Nevertheless, many apps for recent mobile phones now bring the possibility of creating 3D models to almost anyone.

Focus Stacking Contour Reconstruction

This is a quick and relatively easy-to-process method of capturing 3D information, best suited to micro-photography. Computer software works through a collection (or stack) of images taken from the same position but with different points of focus and isolates the areas that (the computer thinks) are in focus.

This stacking approach is more often used to create images with an extended depth of field (more of the image in focus), which is increasingly important given the very high resolution of current digital cameras. By tracing the edges of these in-focus areas it is possible to build up a contour map of what was photographed. Subject-to-camera distance is critical in this method, and it works best very close up, where the depth of field is very shallow and the focus step between each image (each contour) can be as small as 20 microns.
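The contour-map idea can be sketched as a "depth from focus" computation. This is a minimal illustration under assumed conditions (a simple Laplacian sharpness measure and a uniform focus step); production stacking software is considerably more sophisticated:

```python
import numpy as np

def focus_measure(img):
    """Per-pixel sharpness: squared response of a discrete Laplacian."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap ** 2

def depth_from_focus(stack, step_um):
    """Build a relative depth (contour) map from a focus stack.

    stack: array of shape (n_slices, H, W), each slice focused step_um
    further from the camera. Each pixel is assigned the depth of the slice
    in which it appears sharpest.
    """
    scores = np.stack([focus_measure(s) for s in stack])
    return scores.argmax(axis=0) * step_um
```

With a 20-micron focus step, a pixel sharpest in slice 2 of a stack would be placed 40 microns deeper than one sharpest in slice 0.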

There is not a strong need for this method at the museum, but it occasionally fills a gap where other methods fall down or are too time-costly.

Reflectance Transformation Imaging (RTI) and Stereo Photometry

These are both methods that use the changing position of light to capture information about a scene. Either a fixed lighting setup or a reference sphere is used while the camera stays completely still. The resulting images are then processed, and the angle of incident light is used to calculate information about the subject in view. They can be used to interpret the shape of a subject in the form of a normal map: a record of the apparent angle of each pixel relative to the camera. Unlike the methods above this is not a true 3D record, but it can still be valuable for documenting very fine details below the accuracy of other methods, or on objects whose material properties are not suitable for other forms of reconstruction.
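The normal-map calculation can be sketched for a single pixel, assuming a Lambertian (matte) surface and calibrated light directions; both assumptions, and the names below, are mine rather than a description of the museum's software:

```python
import numpy as np

# Known, calibrated light directions (unit vectors), one per photograph.
L = np.array([
    [ 0.0,  0.0, 1.0],
    [ 0.6,  0.0, 0.8],
    [ 0.0,  0.6, 0.8],
    [-0.6,  0.0, 0.8],
])

def normal_from_intensities(I):
    """Recover a surface normal and albedo for one pixel.

    Lambertian model: each observed brightness is albedo * (light . normal),
    so the stacked observations give I = albedo * L @ n, solved here by
    least squares.
    """
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Simulate one pixel whose true normal tilts slightly off-vertical.
n_true = np.array([0.2, -0.1, 0.97])
n_true = n_true / np.linalg.norm(n_true)
I = 0.8 * L @ n_true        # rendered brightness under each light
n, albedo = normal_from_intensities(I)
```

Repeating this for every pixel yields the normal map; glossy materials break the Lambertian assumption, which is partly why these methods double as a record of surface properties like sheen.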

At present these methods are only a small part of the museum's digitization strategy but can be a valuable part of the 3D pipeline for documenting surface properties such as glossiness (think sickle gloss on stone artefacts or very fine changes in polishing across the edge of a blade) that give important character and meaning to an artefact.


LiDAR, Neural Radiance Fields and Other Techniques

With advances in artificial intelligence and machine learning and the miniaturisation of technology, there are always new methods to explore. LiDAR (Light Detection And Ranging) is in fact a well-established documentation method, and one the museum holds some data for, though it is not offered as a service. It is fast becoming a ubiquitous tool, however, with inclusion in everything from drones to cars and mobile phones. Whatever comes next will mark the next big step in our documentation journey.

CT Scanning

The museum is increasing its focus on CT scanning and how this technique can find use within archaeological documentation. During the Gjellestad project, the archaeologists faced a very specific challenge: how to document the ship's highly weathered and fragmented iron rivets as gently and efficiently as possible. They chose to extract blocks of the surrounding soil to keep the fragile objects stable. The rivet-containing soil blocks were brought to the lab and CT scanned in order to model the rivets without dislodging loose fragments or altering their original position and orientation. The imagery produced during CT scanning made it possible to segment the different material groups in each sample and convert them into 3D models. These models can later be integrated into a 3D GIS, or viewed and analysed using other 3D tools.
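The segmentation step relies on iron attenuating X-rays far more strongly than soil. A minimal sketch of that idea, with an assumed threshold value and voxel size (real CT workflows use calibrated attenuation scales and interactive segmentation tools):

```python
import numpy as np

def segment_metal(volume, threshold, voxel_mm=0.1):
    """Separate dense (metal) voxels from the soil matrix in a CT volume.

    volume: 3D array of attenuation values; corroded iron attenuates far
    more strongly than soil, so a simple threshold isolates the rivet.
    Returns the boolean mask and the segmented volume in cubic millimetres.
    """
    mask = volume > threshold
    return mask, mask.sum() * voxel_mm ** 3
```

The resulting mask is what gets converted to a surface mesh (for example via marching cubes) for use in a 3D GIS or for 3D printing.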

This case has demonstrated the potential of CT scanning for objects excavated within soil samples. The method produces high-quality 3D documentation of the objects, which may later be 3D printed as physical copies for use in research and outreach activities. The added information gained from studying the objects within their soil context further benefits the conservators, facilitating reflexive planning of the optimal excavation and conservation strategies during a field project.

Published Jan. 5, 2023 3:32 PM - Last modified Jan. 5, 2023 4:09 PM