The Imaging team first looked at photogrammetry back in 2016, but at that time we decided to concentrate our efforts on establishing our multispectral imaging workflows and delivery. Just before Covid-19 we began to look at photogrammetry again, especially as the new Manchester Digital Collections (MDC) can embed 3D models into the content description of a record.
Photogrammetry, in its simplest form, is the process of taking many overlapping photographs of an item or scene from different angles and combining them in specialist software to produce a 3D model. For those who would like a more in-depth description, here’s a link to Wikipedia. You may also be interested in taking a look at Structure from Motion.
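At the heart of that combination step is triangulation: once the software knows where two cameras were, the same feature seen in both photographs pins down a single point in 3D. Here is a minimal NumPy sketch under idealised assumptions (two known pinhole cameras, a perfectly matched point); all numbers and names are illustrative, and real packages such as Metashape and Reality Capture also have to estimate the camera positions themselves.

```python
import numpy as np

# A shared intrinsic matrix for two simple pinhole cameras (illustrative values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])

# Camera 1 at the origin; camera 2 stepped 1 unit sideways along X,
# like moving the tripod between shots.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0, 1.0])  # true 3D point (homogeneous)

def project(P, X):
    """Project a homogeneous 3D point to 2D pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

x1, x2 = project(P1, X_true), project(P2, X_true)

def triangulate(P1, P2, x1, x2):
    """Direct Linear Transform: each 2D observation gives two linear
    constraints on the 3D point; solve the stacked system via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

print(triangulate(P1, P2, x1, x2))  # recovers ~[0.2, -0.1, 4.0]
```

A full pipeline repeats this for thousands of matched features across hundreds of photographs, which is why capture coverage matters so much.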
As with any new imaging technique, the learning curve can be steep. The image capture itself is fairly straightforward to understand, but processing that data can be lengthy and complicated, especially when grappling with new software and terminology.
My first couple of weeks working from home involved a great deal of reading and experimentation with various software packages. I eventually decided to concentrate on Metashape, Reality Capture and Blender, although each has its own strengths and weaknesses depending on the job in hand. There are a large number of online tutorials on YouTube, blogs and software websites; although very detailed, they can often be contradictory and confusing, especially when left out of date by software updates. So I have had to interpret them carefully, picking and choosing relevant sections from a number of tutorials.
As mentioned, MDC can now display 3D models. These are embedded into the viewer from Sketchfab, a display platform in its own right with a strong suite of editing tools, including lighting, material textures, annotations, audio and VR/AR options.
Sketchfab were kind enough to grant me a Pro account for the first couple of months, which provides a platform to upload and demo the models I have been creating. Some are more successful than others.
We created a John Rylands Library Sketchfab Pro account here for the test models and environments created over the past few weeks. Please take a look and let me know what you think.
Here’s a model of a John Wesley effigy made from a horse’s cervical vertebra. I had previously imaged this item and had the JPEGs on a portable hard drive ready for working from home. The model was created with Reality Capture from around 300 images.
I think I had taken this model as far as I could without the original raw image data to reprocess into higher-quality images. I was very curious to see how the model would look when printed. Luckily, I have a very talented friend, Sam Cornwell, who offered to help me out.
Sam kindly printed out the Wesley model in red UV-cured resin. The size of the first model came as a bit of a surprise. It was tiny. The reprint was much larger. I was really pleased with the detail, and that all the holes and features were correct. Clearly, some understanding of measurement and scaling is needed on my part; this can be recorded during image capture and then imported into the photogrammetry software.
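The usual approach is to photograph a scale bar of known length alongside the object; measuring that same bar in the unscaled model gives a factor to apply to every vertex. A tiny sketch of the arithmetic, with purely illustrative numbers:

```python
# A scale bar of known real-world length is photographed with the object.
# Measuring the same bar in the raw (arbitrary-unit) model gives a scale
# factor to apply to every vertex. All values here are illustrative.
real_length_mm = 50.0      # known length of the scale bar
model_length_units = 3.2   # the same bar measured in the unscaled model
scale = real_length_mm / model_length_units

def scale_vertex(v, s):
    """Scale one (x, y, z) vertex into real-world units."""
    return tuple(s * c for c in v)

print(scale_vertex((1.0, 2.0, 0.5), scale))  # vertex in millimetres
```

Metashape and Reality Capture handle this internally once the scale-bar markers are identified, but the principle is the same.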
Anticipating that we would be asked to work from home, I had quickly made several hundred images of the Historic Reading Room, shooting from the ground and upper levels with a Canon DSLR. I wish I’d taken several hundred more, as I now realise that both quality and quantity are very important in acquiring a decent model. Although this 3D interpretation has been well received, it is something we will definitely have to revisit when we return. The model was processed with Agisoft Metashape using 700 images.
Left Mouse button to rotate. Right Mouse button to pan. Mouse wheel to zoom in and out.
With limited Special Collections data to work with from home, I was pleased to see the offer of a group video call from Thomas Flynn, demonstrating the conversion of 2D images into a 3D space. The possibilities and interpretations are boundless, especially with so many rich resources available online.
Here I’ve used one image of a daguerreotype of Catherine Hannah Dunkerley. A daguerreotype can be a very difficult item to digitise due to its mirror-like reflective surface, only visible from certain angles. By using the 2D-to-3D technique described by Thomas, I was able to mimic the materiality of the daguerreotype. Thomas will be making some workflow videos demonstrating these techniques in the near future.
This is a digital “interpretation” of the actual effect of viewing a daguerreotype. I don’t mind saying I was pretty pleased with the result, achieved using Blender and Sketchfab’s 3D Settings editor.
You can see other examples of using a 2D image in a 3D environment on our Sketchfab account.
This 360° panoramic image is not photogrammetry but uses a repurposed image via Blender and Sketchfab. Thanks to Louis from Sketchfab for the template of a sphere, onto whose interior surface a JPEG has been mapped. If anyone has a VR/AR headset at home, I’d be interested to hear how it looks. Please let me know.
I have probably raised more questions for myself than I’ve answered. There are still a number of challenges ahead. Some of the main areas to focus on are colour accuracy, detail, scale and the challenge of imaging very fragile collection items. I also have to note that the quality of images matters as much as the quantity, and to remind myself that it takes time for the software to create these models, sometimes many hours. In the digital age this can be frustrating, as we are so used to instant results.
So where do we go next? I have only dabbled in photogrammetry for the past few weeks, and there is obviously a long way to go before we have adequately met the challenges this new technique presents.
We are aware that other departments in the University use photogrammetry and have access to a variety of structured light scanners. It will be interesting to compare the various techniques and see how they can be combined for increased output, detail and accuracy. It will also be interesting to see how the content we create can be used and reinterpreted by others.
We have also seen over the past few weeks the need for high-quality, accessible digitised content, with many heritage institutions offering their data freely as open-source material. I am sure that the way we work and what we can offer will change greatly over the next few months.
In the meantime, here are a couple of links to some of the most creative uses of photogrammetry that I’ve seen so far. There are some very clever people out there.
Thomas Flynn, Cultural Heritage Lead at Sketchfab. Thank you for your help.
Erik Lernestål at the National Historic Museums of Sweden has created some truly amazing 3D content.
I also have to mention this interactive viewer that Erik worked on with the Royal Armoury in Stockholm: a 55″ 4K touch screen on which you can interact with high-resolution 3D models of selected pieces of armour from the collection.
Check out this Live demo.