Keynote Speakers 



Robin Wang


Robin is co-founder and CEO of Living Optics. After completing degrees in Maths, Physics, and Spanish at the University of Melbourne and suffering a brief stint in the corporate world, he escaped to the University of Oxford for a PhD in physics. Halfway through the PhD, he spun out Living Optics, a company that is hell-bent on making hyperspectral imaging better, cheaper, and easier to use.

Reluctantly starting a hyperspectral camera company: a guide to the terrors of the start-up world

Abstract:
There are many opportunities to make mistakes when running a start-up, and I’ve taken up many of them. To help everyone else avoid the same (sometimes dumb) mistakes, I’ll take you through the journey of Living Optics, which started as a PhD ‘side project’ and has since raised more than £30M, grown to a company of 34 people, and now sells hyperspectral cameras worldwide. Our cameras operate at video rates and feature an all-transmissive optical design, eliminating the need for expensive and inefficient mirrors or gratings. The result is cameras that are both compact and cost-effective. Fortune 100 companies and startups alike use them in applications such as industrial inspection, healthcare, and agriculture.


Anya Hurlbert

Anya Hurlbert is Professor of Visual Neuroscience at Newcastle University in the UK.
Her background is in physics, medicine and neuroscience, with her higher education and early career research experience taking place on both sides of the Atlantic. She holds degrees from Princeton, Cambridge, MIT, and Harvard. Her research focuses on colour perception, light and behaviour, and image analysis in art and biomedicine. She co-founded the Institute of Neuroscience at Newcastle, served as Scientific Trustee of the National Gallery in London, and is a Trustee of the Science Museum Group. She is a member of several committees and boards in the field of vision science. Anya Hurlbert is also active in science outreach and has created science-based art exhibitions.

Does perceiving the illumination have anything to do with colour constancy?

Abstract:
An enduring question in vision science is whether people perceive the illumination and, if they do, how that perception affects the perception of surface colour, shape and space itself. Although estimates of the illumination colour are central to computational models of colour constancy and implicit in contemporary colour-correction algorithms for digital images, these computational estimates are typically not compared with people’s perceptions. While measurements of perceived illumination are complicated by interactions with perceived material, shape and space, experiments nonetheless suggest that perceived illumination does not determine perceived surface reflectance, and that the two are qualitatively and phenomenally different. I will describe physical “plenoptic” measurements of natural illumination, including its spatial, spectral and temporal variations, and compare these with people’s perceptions and computational estimates.


Michael Murdoch, RIT, Rochester (USA)

Michael J. Murdoch is an Associate Professor and Director of the Munsell Color Science Laboratory at the Rochester Institute of Technology with 25 years of research experience focused on color in advanced displays and LED lighting. He is a recipient of an NSF CAREER Award and a Fulbright U.S. Scholar Award. He holds a BS in Chemical Engineering from Cornell, an MS in Computer Science from RIT, and a PhD in Human-Technology Interaction from Eindhoven University of Technology in the Netherlands.

The Convergence of Lighting and Imaging

Abstract: 

Lighting and imaging – at least the display side of imaging – have an interesting co-history that is nearing convergence. Not long ago, both were constrained to vacuum tubes. Lighting has a hundred-year history of glowing filaments followed by an explosive recent decade or two made possible by LEDs. Today’s lights are often connected, smart, video-capable digital systems. Meanwhile, display systems have evolved into emissive, thin form factors from wearable to wall-sized. Contemporary lighting and imaging systems are both concerned with electroluminescence, digital control, content distribution, power efficiency, and visual perception. However, lighting tends to prioritize spectral resolution while imaging tends to prioritize spatial resolution. This talk will cover the physical and perceptual similarities between lighting and imaging systems, the perceptual performance metrics driving their development, and future opportunities their convergence may offer.


Focal Speakers:


Lou Gevaux, LNE - Conservatoire National des Arts et Métiers, Paris (France)

Metrological challenges in HDR imaging for luminance measurement

Abstract:
The development of affordable, high-performance CCD and CMOS sensors has led to the rise of imaging-based measurement instruments, such as Imaging Luminance Measurement Devices (ILMDs). These allow entire scenes to be characterized in a single capture, significantly saving time compared to point-based measurements. However, cameras—whether commercial, scientific, or industrial—must be calibrated to serve as accurate measurement tools. Proper calibration of HDR devices is challenging, as it requires multiple corrections that are only valid under specific conditions, making it essential for users to understand these metrological constraints.

Raimondo Schettini, University of Milano Bicocca (Italy)

Object Color Relighting with Progressively Decreasing Information

Abstract:
Object color relighting, the process of predicting an object’s colorimetric values under new lighting conditions, is a significant challenge in computational imaging and graphics. This technique has important applications in augmented reality, digital heritage, and e-commerce. In this paper, we address object color relighting under progressively decreasing information settings, ranging from full spectral knowledge to tristimulus-only input. Our framework systematically compares physics-based rendering, spectral reconstruction, and colorimetric mapping techniques across varying data regimes. Experiments span five benchmark reflectance datasets and eleven standard illuminants, with relighting accuracy assessed via the ΔE00 metric.

Results indicate that third-order polynomial regressions give good results when trained with small datasets, while neural spectral reconstruction achieves superior performance with large-scale training. Spectral methods also exhibit higher robustness to illuminant variability, emphasizing the value of intermediate spectral estimation in practical relighting scenarios.


Andrew Stockman, Institute of Ophthalmology University College London (UK)

Individualized colorimetry for correcting observer metamerism failures in narrow-band displays and lighting

Abstract:
Cone spectral sensitivities and their linear transforms, the colour matching functions (CMFs), can be used to specify colours, but they vary between observers because of individual differences. These differences can lead to people seeing different display colours and different illuminated colours even though, according to traditional colorimetry, the colours should appear the same. These failures of observer metamerism are exacerbated by modern narrow-band displays and lighting. The failures can be mitigated by using improved colorimetric standards and individualized colorimetry.


Giuseppe Claudio Guarnera, Department of Computer Science, University of York, UK

Lighting Beneath the Surface: Spectral and Polarised Cues for Next‑Generation Facial Imaging

Abstract: 
Subsurface scattering, blood perfusion, detailed surface geometry and specular highlights govern the perceived realism of digital humans. Advances in facial appearance capture have pushed the frontiers of what we can sense beneath the skin, transforming raw image values into actionable digital representations. Spectral techniques range from practical broadband-RGB acquisition through to dense hyperspectral sampling, allowing per-pixel estimation of biophysical skin quantities and age-related biomarkers. Rapid, display-based acquisition setups demonstrate that these capabilities, along with photometric information, can be delivered with commodity hardware in just a few shots. In parallel, division-of-focal-plane polarimetric cameras and neural implicit fields have shown that single-shot, multi-view polarisation cues can be exploited to disentangle diffuse and specular radiance, and to recover high-fidelity surface geometry without assuming polarised incident lighting. 

Yet these phenomena remain difficult to observe simultaneously with a single sensing modality. Additionally, the appearance of teeth, an important component of facial appearance, poses additional challenges due to strong surface specularity, translucency, and detailed microstructure. These trends motivate a practical capture-and-inference paradigm that leverages spectral and polarimetric cues. We illustrate this with two case studies: physiology-aware facial re-ageing from a single photo, and lightweight polarisation-based multi-view geometry that recovers crisp tooth geometry under uniform spherical lighting. Together, they yield photorealistic, identity-preserving models with lower capture burden and greater diagnostic value.


Hermine Chatoux


How to physically define a color for non-uniform surfaces?

Abstract:
To acquire images, light is necessary. In this paper, we focus on the impact of this light on non-uniform surfaces. We present two scenes under different light settings (reference lights such as D65 or illuminant A, and some deliberately poorly designed LED lights). After comparing the quality of the lights, we examine their impact on the rendering of the scene. The poorly designed lights clearly illustrate how light affects the rendering complexity of images.


    Further information about LIM 2025, Lighting and Imaging, will be available here shortly.
