
Meet the data & AI researchers: Pearse Keane’s Lab at ARVO 2026

  • INSIGHT communications team
  • 5 hours ago
  • 5 min read

Pearse Keane’s artificial medical intelligence lab continues to push the frontiers of ophthalmic and oculomics research, bringing together a diverse team of clinicians and computer scientists across the UCL Institute of Ophthalmology and Moorfields Eye Hospital, enabled by the INSIGHT Eye and Oculomics Health Data Research Hub.


Group photo of Pearse Keane with some of his research group members, standing on a staircase

This month, researchers from the Keane AI lab are sharing their latest projects at the annual meeting of the Association for Research in Vision and Ophthalmology (ARVO), held in Denver, Colorado. Together they will showcase a substantial breadth of work: building more precise and more equitable AI foundation models, discovering retinal biomarkers of systemic disease, and designing smarter clinical tools and screening interventions that can reach patients at scale.


Here we highlight some of the lab’s rising stars across five key themes, in addition to Pearse’s own case study as co-founder of a spin-out building AI as a medical device. 


Foundation Models and Large-Scale AI Platforms

Since releasing RETFound, the world’s first foundation model in ophthalmology, the Keane lab has been developing more powerful, generalisable model architectures and extending their predictive reach.


Dominic Williamson and co-authors have trained a novel foundation model on an extraordinary 140 million optical coherence tomography (OCT) B-scans from over 260,000 patients, using the DINOv3 self-supervised learning framework. By aggregating rich 2D B-scan embeddings with attention pooling to capture 3D volumetric structure, the model outperforms existing benchmarks, including RETFound, across a wide range of classification and regression tasks, establishing a new standard for device-agnostic OCT representation learning. The model has been developed as part of a collaboration between INSIGHT and insitro.
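The study’s actual architecture is not described here, but the core idea of attention pooling — learning to weight each B-scan’s embedding before averaging, so that informative slices dominate the volume-level representation — can be sketched in a few lines. This is a minimal, hypothetical illustration with made-up dimensions and a single learned query vector `w`, not the published model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(slice_embeddings, w):
    """Aggregate per-B-scan embeddings of shape (n_slices, dim)
    into one volume-level embedding of shape (dim,)."""
    scores = slice_embeddings @ w      # one relevance score per B-scan
    alpha = softmax(scores)            # attention weights, summing to 1
    return alpha @ slice_embeddings    # weighted average of embeddings

# toy example: 5 B-scan embeddings of dimension 4, random query vector
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 4))
w = rng.normal(size=4)
volume_vec = attention_pool(emb, w)
print(volume_vec.shape)  # (4,)
```

In a trained model, `w` (and often a small MLP producing the scores) is learned end to end, so the pooling itself discovers which slices of the 3D volume matter for the downstream task.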




Photo of Eden Ruffell

Eden Ruffell builds directly on this foundation model to tackle one of the most clinically pressing challenges in retinal disease: predicting how quickly age-related macular degeneration (AMD) will progress. Using a single baseline OCT volume, the model forecasts the most advanced AMD stage an eye is likely to reach within three years. It outperforms all competing baselines, including RETFound and OCTCube, opening the door to earlier identification of high-risk patients. The work is part of the INSIGHT and insitro collaboration.


Yilan Wu presents RETFound Plus, a next-generation longitudinal foundation model pre-trained on over 1.3 million colour fundus photographs collected over ten years. Unlike existing foundation models focused primarily on cross-sectional disease detection, RETFound Plus incorporates time-related information to improve prediction of disease incidence and progression. It demonstrates enhanced calibration and predictive accuracy across multi-ethnic cohorts for both eye conditions and systemic conditions such as stroke, diabetes and myocardial infarction.



Yukun Zhou describes an international benchmarking study examining the generalisability and fairness of models that predict retinal age, a promising non-invasive biomarker of biological ageing and systemic health risk. Training six independent models on cohorts from the UK, China, Japan, Australia, the USA, and Russia, Yukun’s study reveals that retinal age models exhibit population-specific behaviour, age-dependent bias, and a significant loss of accuracy when applied outside their development cohort. The authors highlight the need for rigorous fairness evaluation before clinical deployment.


AI for Disease Risk Prediction and Stratification

A major theme across the group is using AI and deep learning to support clinicians in identifying who will develop serious eye disease, and when, enabling more targeted, personalised care.



Photo of Paul Nderitu

Paul Nderitu investigates whether retinal vascular morphology (RVM) features extracted from routine colour fundus photographs can enhance prediction of proliferative diabetic retinopathy (PDR), a vision-threatening complication of diabetes. Applying the AUTOMORPH tool and a deep learning survival model (DeepSurv) to the Moorfields Diabetic Image Dataset (MIDAS), Paul demonstrates that incorporating retinal vascular metrics, such as vessel complexity and calibre, improves prediction of time to PDR treatment.


Lie Ju has developed an evidence-based AI framework for glaucoma screening and early detection, trained on over 110,000 fundus images supplemented by longitudinal cohorts from Moorfields Eye Hospital and a tertiary centre in China. The system achieves high accuracy in detecting glaucoma, and uniquely provides transparent, clinically interpretable structural features to support justified referral decisions. This marks an important step towards trustworthy AI in screening pathways.


Hyunmin Kim evaluates OCT and fundus imaging for systemic disease prediction, using data from the AlzEye data linkage project. The study asks a fundamental question for the emerging field of oculomics: which retinal imaging modality carries more information about systemic health? Findings from this large-scale, multi-cohort analysis will have important implications for how future oculomics research is designed and which data should be prioritised for linkage.


Technical Advances: AI Tools and Methodology

Alongside clinical applications, the group continues to advance the technical methodologies and tools that underpin robust, scalable medical AI.


Justin Engelmann presents two technical contributions at ARVO this year. His first study introduces DABS (Device Agnostic Boundary Segmentation), an easy-to-use tool for segmenting and quantifying retinal layers in OCT that works across devices. The innovation addresses a significant gap in available tooling for retinal disease and oculomics research. Justin’s second study investigates whether deep learning applied to OCT angiography (OCTA) images can predict continuous glucose monitoring (CGM) data and HbA1c, a potential new window for non-invasive metabolic monitoring.



Photo of Ariel Ong

Ariel Ong shares a scalable, resource-efficient pipeline for extracting structured clinical data from free-text ophthalmic clinical letters using large language models (LLMs). The pipeline, developed and validated on nine macular diseases as a proof-of-concept, achieves high performance (micro-F1 >0.95) across all extraction tasks. It also shows strong robustness across different LLM families and over time. Beyond performance, the multidimensional assessment framework incorporates pragmatic operational considerations such as costs and time. This innovation makes previously inaccessible free-text records usable for research at scale.


Datasets and Research Infrastructure

Building the data resources that underpin fair, robust medical AI is a priority for the group, and this year’s ARVO programme includes a new contribution.


Rahul Jonas presents NERO, a large UK-based hospital myopia dataset comprising 70,164 patients with myopic metrics, drawn from Moorfields Eye Hospital between 2008 and 2024. The dataset includes refractive data, axial length measurements, retinal imaging (OCT and colour fundus photography), and treatment records for myopic macular neovascularisation, providing a rich foundation for AI-driven research into myopia progression and its ocular and systemic complications.


Bridging AI and Clinical Practice

Translating AI from a research algorithm to real-world clinical benefit requires attention to implementation, education, and patient engagement.



Photo of Pearse Keane against the backdrop of a computer screen, which displays a colour fundus photograph

Pearse Keane gives an invited case study on the journey “from code to clinic”, drawing on the landmark Moorfields-Google DeepMind retinal AI algorithm published in Nature Medicine in 2018. Pearse shares hard-won insights on spinning out a company from academic institutions, navigating the regulatory pathway from experimental research code to certified medical device, and what this means for AI researchers looking to have real-world impact. Pearse is Professor of Artificial Medical Intelligence at UCL Institute of Ophthalmology, Consultant Ophthalmologist at Moorfields Eye Hospital, and director of the INSIGHT Eye & Oculomics Health Data Research Hub.


David Merle presents findings from qualitative interviews with 26 ophthalmology trainees across UK and German clinics, identifying the educational barriers that most affect early-career ophthalmologists. These barriers include lack of structured roadmaps, inconsistent onboarding, limited feedback and fragmented learning resources. The study derives design requirements for structured, technology-enabled training tools to address these gaps, with implications for how AI can support as well as transform clinical education.


Photo of Roxanne Crosby-Nwaobi

Roxanne Crosby-Nwaobi tackles the persistent problem of low attendance at diabetic retinopathy screening, particularly among younger and more deprived populations. Through qualitative interviews with patients and healthcare professionals, and evaluation of two behaviour change interventions (a social media video and an accredited online education course for GPs), Roxanne’s EROTES study describes promising approaches to improving uptake, including a 25% increase in patients rebooking missed appointments after the video intervention. Roxanne is a PI with the Keane lab, focused on improving health equity.


Abstracts can be found on the ARVO 2026 meeting planner.


To find out how INSIGHT supports and enables researchers, visit our Researcher overview.

 
 