London aerial, 2015 — photogrammetry recovery
Fifty-two photographs taken from a window seat on a flight circling above London between 16:55 and 17:00 GMT on 6 July 2015. The plane was in a holding pattern, banking slowly through two long arcs over central and east London while it waited for a landing slot. The camera was a Canon EOS 5D Mark III with the EF 50 mm f/1.2L USM stopped down to f/4 — the wide-open lens has a softness that was beside the point here; what I needed was sharpness across the whole frame and depth of field. ISO 100 for resolution. 1/1000 to 1/1600 of a second to freeze the plane's slow vibration through the window glass. I made an exposure every five or six seconds with the lens flat against the perspex.
The frames were never meant to be a photographic series. They were meant to feed a photogrammetry reconstruction. Two camera positions viewing the same building, recorded across enough overlapping frames, are enough to triangulate the geometry between them. Fifty-two frames of the same chunk of city, from two arcs of a holding-pattern circle, should be enough to reconstruct the shape of London beneath the plane. That was the brief I gave myself in the air. Back at a desk in Oslo, I loaded the frames into Agisoft PhotoScan (the software was renamed Metashape in 2018; in 2015 it was still PhotoScan) and got partway through. PhotoScan aligned all fifty-two cameras into a single bundle, fitted the lens distortion model, and built a one-gigabyte dense point cloud. It did not build a mesh or a texture. Other work took priority and the project went into the drawer.
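The triangulation that underpins all of this can be shown in miniature. Given two camera centres and the rays from each toward the same rooftop corner, the midpoint of the rays' closest approach recovers the corner's 3D position. A toy sketch in plain Python; this is not PhotoScan's solver, which bundle-adjusts all cameras and points at once:

```python
# Toy two-view triangulation: the midpoint of closest approach between two
# viewing rays. Illustrative only; real photogrammetry refines every camera
# and point together in a bundle adjustment.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def triangulate(c1, d1, c2, d2):
    """Midpoint of closest approach between rays c1 + s*d1 and c2 + t*d2."""
    w0 = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only when the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Two camera positions four units apart, both sighting the same corner:
corner = (2.0, 3.0, 10.0)
c1, c2 = (0.0, 0.0, 0.0), (4.0, 0.0, 0.0)
est = triangulate(c1, sub(corner, c1), c2, sub(corner, c2))  # recovers corner
```

With noiseless rays the estimate lands exactly on the point; with fifty-two real frames, each pairwise estimate is noisy and the solver averages them out.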
What the model looks like, eleven years later
Recovering the model
Recovery happened in a single evening on 8 May 2026, in two parts. The first part: opening the PhotoScan .psz file, which turned out to be an ordinary zip archive, and finding inside it the calibrated camera positions for all fifty-two frames plus the one-gigabyte dense point cloud in PhotoScan's proprietary octree format. The mesh and the texture, the parts I never finished, were not in the file. So nothing to extract directly: the project was real, the dense data was there, but the final renderable mesh would have to be rebuilt.
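The zip inspection itself needs nothing beyond the standard library. A sketch of the kind of check involved; the member names below are stand-ins, since the real archive layout (an XML project document plus binary chunk data, in the versions I have seen) varies between PhotoScan releases:

```python
import io
import zipfile

def list_psz(data: bytes):
    """Return (name, size) pairs for every member of a .psz archive.

    A .psz file is an ordinary zip; the project document is XML and the
    dense point cloud sits alongside it in a proprietary binary format."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        return [(i.filename, i.file_size) for i in z.infolist()]

# Build a stand-in archive with the rough shape of the real file
# (member names here are illustrative, not the exact PhotoScan layout):
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("doc.xml", "<document><cameras/></document>")
    z.writestr("point_cloud.bin", b"\x00" * 64)

members = list_psz(buf.getvalue())
```

Pointing the same two lines of zipfile code at the real .psz is what revealed the calibrated cameras and the dense cloud, and the absence of any mesh.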
The second part was rebuilding it in open-source tools. I have no current Metashape licence and no graphics card, so the path was: COLMAP for the structure-from-motion check (verify the cameras still triangulate cleanly), OpenDroneMap running inside a colima-managed Docker VM for the dense reconstruction and meshing, and Blender as the head and tail of the pipeline (clean the Poisson mesh before texturing, package the textured result as a Draco-compressed glTF binary). The final artefact is the model embedded above. It is a 44-megabyte file containing 177,000 textured triangles and 215 small atlas pages, drawn in the browser by Google's <model-viewer> custom element. The detailed recipe is at 2016 - London Aerial/_readme.md in the working folder.
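The open-source leg of the rebuild can be sketched as a sequence of tool invocations. The flags below are COLMAP's and OpenDroneMap's standard ones, but the paths and project name are placeholders for this sketch, and the Blender clean-up and Draco export stage is left out:

```python
import shutil
import subprocess

# Structure-from-motion check in COLMAP: extract features, match all pairs,
# then run the incremental mapper. "frames/", "db.db" and "sparse/" are
# placeholder paths, not the project's actual layout.
COLMAP_STEPS = [
    ["colmap", "feature_extractor", "--database_path", "db.db",
     "--image_path", "frames"],
    ["colmap", "exhaustive_matcher", "--database_path", "db.db"],
    ["colmap", "mapper", "--database_path", "db.db",
     "--image_path", "frames", "--output_path", "sparse"],
]

# OpenDroneMap runs as a container; colima supplies the Docker daemon on a
# machine without Docker Desktop. "london" is a stand-in project name.
ODM_STEP = ["docker", "run", "--rm", "-v", "/path/to/work:/datasets",
            "opendronemap/odm", "--project-path", "/datasets", "london"]

def run_if_available(cmd):
    """Run one pipeline step, skipping quietly if the tool is not installed."""
    if shutil.which(cmd[0]) is None:
        return None
    return subprocess.run(cmd, check=True)
```

Keeping the steps as inspectable lists rather than one shell script made it easier to re-run a single stage when the mesh came out wrong.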
What's there in the model: the Thames meander through the City and Tower Bridge, the Canary Wharf cluster across the river, the buildings along the South Bank, the railway lines into Cannon Street and London Bridge. What's wrong with it: the river itself is meshed as a wavy non-surface (water has no texture for photogrammetry to triangulate against, and Poisson reconstruction tries to bridge the gap with whatever the dense cloud noise suggests). The skyscrapers are reconstructed as solid masses with reasonable rooflines but soft sides — fifty-two near-overhead frames give the geometry a top-down bias. The edges fade into scattered triangle islands, and that's deliberate. Photogrammetry meshes have characterful boundaries where the camera coverage thins out. Trimming them to a clean rectangle would erase the medium's signature.
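The water failure has a simple numerical core. Dense matching scores candidate patches with something like normalized cross-correlation, and a featureless patch has zero variance, so the score is undefined and every candidate match across the river is equally plausible. A toy illustration, not OpenDroneMap's actual matcher:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity patches.

    Returns None when either patch has zero variance: a flat patch gives
    the matcher no gradient to lock onto, so matching is ill-posed."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    if va == 0 or vb == 0:
        return None
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (va * vb)

roof  = [10, 200, 40, 180, 30]     # textured rooftop patch, first arc
roof2 = [12, 198, 41, 179, 28]     # same patch seen from the second arc
water = [128, 128, 128, 128, 128]  # featureless river surface

textured_score = ncc(roof, roof2)  # near 1.0: a confident match
water_score = ncc(water, water)    # None: nothing to triangulate against
```

That undefined score is why the dense cloud over the Thames is noise, and why Poisson reconstruction then bridges the gap with a wavy non-surface.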
What the medium says
This is photography that the eye can't see. Each of the fifty-two source frames is a near-conventional aerial photograph; you can hold any one of them up and read it as such. The reconstruction is the thing that the photographs together make possible. It has the same relationship to the photographs that the GPS track in Time that land forgot has to the daily diary set: substrate, not subject. The photograph is the primary surface; the geometry is the layer beneath that the photograph sits on. The 2014 piece Internet Machine works the same way at much higher resolution: 395 frames inside a Telefónica data centre, turned into navigable 3D scenes through the same family of camera-mapping techniques. London 2015 is the small messy cousin of that piece, eleven years late, run through open-source tools on a personal machine instead of a five-week post-production pipeline.
The original 2015 set is not on Flickr and was never published as photographs. This page is its first surfacing, eleven years after capture, as the 3D piece it was always meant to be.
Working method established in May 2002. The 2002 statement.