Posts

AI endoscopy enables 3D surface measurements of pre-cancerous conditions in oesophagus

Clinicians and engineers in Oxford have begun using artificial intelligence alongside endoscopy to obtain more accurate measurements of the pre-cancerous condition Barrett’s oesophagus and so determine which patients are most at risk of developing cancer.

In a research paper published in the journal Gastroenterology, the researchers report that the new AI-driven 3D reconstruction of Barrett’s oesophagus achieved 97.2% accuracy in measuring, in real time, the extent of this pre-cancerous condition in the oesophagus. This would enable clinicians to assess the risk, the best surveillance interval and the response to treatment more quickly and confidently.

Barrett’s is a pre-malignant condition that develops in the lower oesophagus in response to acid reflux. The yearly risk of developing cancer with normal Barrett’s oesophagus is low – in the range of 0.1-0.4%, or about one in 200 patients. However, that risk increases with the extent of the Barrett’s lining.

Clinicians use a system called the Prague C&M criteria to give a standardised measure of Barrett’s oesophagus. This uses the circumferential length of the Barrett’s section and the maximum extent of the affected area. This score roughly determines the level of risk of developing cancer and how often the patient needs to be surveyed by an endoscopist, usually every five years for low-risk cases and two to three years for longer Barrett’s segments.
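For illustration, the two measurements combine into a single reported score: for example, C2M5 for a segment with 2 cm of circumferential involvement and a 5 cm maximal extent. A minimal sketch of that reporting convention (the helper function and values below are our own, not from the paper):

```python
def prague_score(circumferential_cm: float, maximal_cm: float) -> str:
    """Format a Prague C&M score from the circumferential (C) and
    maximal (M) extent of the Barrett's segment, in centimetres.
    The M value can never be shorter than the C value."""
    if maximal_cm < circumferential_cm:
        raise ValueError("maximal extent must be >= circumferential extent")
    return f"C{circumferential_cm:g}M{maximal_cm:g}"

# A segment with 2 cm circumferential and 5 cm maximal extent:
print(prague_score(2, 5))  # C2M5
```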

Oxford University Hospitals (OUH) NHS Foundation Trust has a cohort of around 800 patients with Barrett’s who have periodic endoscopic surveillance.

OUH Consultant Gastroenterologist Professor Barbara Braden, together with Dr Adam Bailey, oversees a large endoscopic surveillance programme for Barrett’s patients at OUH. She says the quality of the endoscopy is very dependent on the skill and expertise of the person carrying out the procedure.

“Until now, we have not had any accurate ways of measuring and quantifying the Barrett’s oesophagus. Currently, we insert the scope and then we estimate the length by pulling it back,” said Prof Braden.

“We asked colleagues from the Department of Engineering Science – Prof Jens Rittscher and Dr Sharib Ali – whether they could find a way to measure distances and areas from endoscopic videos to give us a more accurate picture of the Barrett’s area and they came up with the brilliant idea of three-dimensional surface reconstruction.”

Prof Braden, of the University of Oxford’s Translational Gastroenterology Unit, based at the John Radcliffe Hospital, added:

“Currently, you have to have a great deal of experience to know how to spot the subtle changes which indicate early neoplastic alterations in Barrett’s oesophagus. Most endoscopists don’t encounter an early Barrett’s cancer that often. So, instead of teaching thousands of endoscopists, by applying deep learning techniques to endoscopic videos you can train a program.”

The Oxford study is using technology to reconstruct the surface of the Barrett’s area in 3D from the endoscopy video, giving a C&M score automatically. This 3D reconstruction allows the clinician to quantify precisely the Barrett’s area including patches or ‘islands’ not connected to the main Barrett’s area.
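A toy sketch of the area-quantification step, assuming a binary segmentation mask and a single hypothetical per-pixel surface area (in the real system the per-pixel area varies across the reconstructed 3D surface); the function names `count_regions` and `barretts_area_mm2` are invented for illustration:

```python
import numpy as np

def count_regions(mask):
    """Count 4-connected foreground regions (the main segment plus any
    detached 'islands') in a binary mask with a simple flood fill."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                regions += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return regions

def barretts_area_mm2(mask, pixel_area_mm2):
    """Total Barrett's area: segmented pixels times per-pixel surface
    area (here a single hypothetical constant)."""
    return float(mask.sum()) * pixel_area_mm2

mask = np.zeros((8, 8), dtype=int)
mask[1:4, 1:4] = 1   # main Barrett's segment (9 pixels)
mask[6, 6] = 1       # a detached island (1 pixel)
print(barretts_area_mm2(mask, 0.25), count_regions(mask))  # 2.5 2
```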

Dr Sharib Ali, the first author of the paper and the main contributor of this innovative technology, is part of the team working on AI solutions for endoscopy at the University of Oxford’s Department of Engineering Science. He said:

“Automated segmentation of these Barrett’s areas and projecting them in 3D allows the clinician to not only report very accurately the extent of the Barrett’s area, but to pinpoint precisely the location of any dysplasia or tumour, which has not been possible up to now.”

The technique was tested on a purpose-built 3D-printed oesophagus phantom and on high-definition videos from 131 patients scored by expert endoscopists. The endoscopic phantom video data demonstrated 97.2% accuracy for the C&M score measuring the length, while the measurements for the whole Barrett’s oesophagus area achieved 98.4% accuracy. On patient data, the automated C&M measurements corresponded with the endoscopy expert scores.

“With this new AI technology, the reporting system will be much more rigorous and accurate than before. It makes it much easier when the clinician sees the patient again – they know exactly where to target biopsies or therapy. And the quicker and more efficient it is, the better the experience for the patient,” Dr Sharib Ali explained.

The research was supported by the NIHR Oxford Biomedical Research Centre (BRC), through its cancer and imaging themes.

Artificial intelligence tool for streamlining pathology workflow

Nearly 50,000 cases of prostate cancer are diagnosed each year in the UK. During the diagnostic process, men with suspected prostate cancer undergo a biopsy, which is analysed by pathology services. There are over 60,000 prostate biopsies performed in the UK annually, which represents a high workload for pathology teams. With increasing demand and a shortage of pathologists, tools that could help streamline this workflow would provide significant pathologist time savings and accelerate diagnoses.

To confidently diagnose prostate cancer, pathologists need to identify a number of tissue-architecture and cellular cues. All biopsies are stained with Hematoxylin & Eosin (H&E), which allows the pathologist to study the size and shape (morphology) of the cells and tissue. However, in 25-50% of cases, H&E staining alone does not provide sufficient evidence for a diagnosis, requiring the additional process of immunohistochemistry (IHC) to study other cellular features.

One bottleneck in the current pathology workflow is the requirement for a pathologist to review the H&E-stained biopsies to determine which require IHC. To address this need, pathologists Dr Richard Colling, Dr Lisa Browning and Professor Clare Verrill (Nuffield Department of Surgical Sciences and Oxford University Hospitals NHS Foundation Trust) teamed up with biomedical image analysts Andrea Chatrian, Professor Jens Rittscher and colleagues (Institute of Biomedical Engineering, Big Data Institute and Ludwig Oxford) to take a multidisciplinary approach.

In their paper in the journal Modern Pathology, the team used prostate biopsies annotated by pathologists at Oxford University Hospitals to train an artificial intelligence (AI) tool to detect tissue regions with ambiguous morphology and decide which cases needed IHC. The tool agreed with the pathologist’s review in 81% of cases on average. By enabling automated request of IHC based on the AI tool results, the pathologist would only need to review the case once all necessary staining had been carried out. This workflow improvement is estimated to save on average 11 minutes of pathologist time for each case, which scales up to 165 pathologist hours for 1000 prostate biopsies needing IHC.
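The triage step can be illustrated with a toy decision rule. The published tool is a trained deep-learning model, so the threshold, case names and values below are purely illustrative:

```python
def needs_ihc(ambiguous_area_fraction: float, threshold: float = 0.05) -> bool:
    """Toy triage rule: flag a biopsy for IHC when the AI-detected
    fraction of morphologically ambiguous tissue exceeds a threshold.
    (Illustrative only; the published tool learns this decision.)"""
    return ambiguous_area_fraction > threshold

# Automatically request IHC for flagged cases, so the pathologist
# reviews each slide only once, after all staining is complete.
cases = {"case_01": 0.12, "case_02": 0.01, "case_03": 0.30}
ihc_requests = [c for c, frac in cases.items() if needs_ihc(frac)]
print(ihc_requests)  # ['case_01', 'case_03']
```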

“The NHS spends £27 million on locum and private services to make up for the shortfall in pathology service provision. By using this AI tool to triage prostate biopsies for IHC, pathologists would spend less time reviewing these cases, which would not only lead to financial savings but it would also accelerate prostate cancer diagnoses to inform patients and treating clinicians earlier.” – Professor Clare Verrill, Nuffield Department of Surgical Sciences and Oxford University Hospitals NHS Foundation Trust.

The tool will now be developed and validated further using pathology data from different locations to account for variation in IHC requests between pathologist teams and centres. This future work will continue to take advantage of the PathLAKE Centre of Excellence for digital pathology and artificial intelligence, of which Oxford is a member.

This work was supported by PathLAKE via the Industrial Strategy Challenge Fund, managed and delivered by Innovate UK on behalf of UK Research and Innovation (UKRI), the NIHR Oxford Biomedical Centre, the Engineering and Physical Sciences Research Council (EPSRC), the Medical Research Council (MRC) and the Wellcome Trust.

Oxford spin-out influencing patient care worldwide

Optellum, a lung health company aiming to redefine early diagnosis and treatment of lung disease, today announced it received FDA clearance for its “Virtual Nodule Clinic”.

Optellum was co-founded by Oxford cancer researcher Prof. Sir Michael Brady with the mission of seeing every lung disease patient diagnosed and treated at the earliest possible stage, and cured.

Optellum’s initial product is the Virtual Nodule Clinic, the first AI-powered Clinical Decision Support software for lung cancer management. Their platform helps clinicians identify and track at-risk patients and speed up decisions for those with cancer while reducing unnecessary procedures.

Lung cancer kills more people than any other cancer. The current five-year survival rate is an abysmal 20%, primarily due to the majority of patients being diagnosed after symptoms have appeared and the disease has progressed to an advanced stage. This much-needed platform is the first such application of AI decision support for early lung cancer diagnosis cleared by the FDA.

Physician use of the Virtual Nodule Clinic has been shown to improve diagnostic accuracy and clinical decision-making. A clinical study, which underpinned the FDA clearance, engaged pulmonologists and radiologists to assess their accuracy in diagnosing lung nodules when using the Optellum software.

Dr Václav Potěšil, co-founder and CEO of Optellum says:

“This clearance will ensure clinicians have the clinical decision support they need to diagnose and treat lung cancer at the earliest possible stage, harnessing the power of physicians and AI working together – to the benefit of patients.

Our goal at Optellum is to redefine early diagnosis and treatment of lung cancer, and this FDA clearance is the first step on that journey. We look forward to empowering clinicians in every hospital, from our current customers at academic medical centers to local community hospitals, to offer patients with lung cancer and other deadly lung diseases the most optimal diagnosis and treatment.”

Using AI to improve the quality of endoscopy videos

Cancers detected at an earlier stage have a much higher chance of being treated successfully. The main method for diagnosing cancers of the gastrointestinal tract is endoscopy, when a long flexible tube with a camera at the end is inserted into the body, such as the oesophagus, stomach or colon, to observe any changes in the organ lining. Endoscopic methods such as radiofrequency ablation can also be used to prevent pre-cancerous regions from progressing to cancer if they are detected in time.

Unfortunately, during conventional endoscopy, the more easily treated pre-cancerous conditions and early stage cancers are harder to spot and often missed, especially by less experienced endoscopists. Cancer detection is made even more challenging by artefacts in the endoscopy video such as bubbles, debris, overexposure, light reflection and blurring, which can obscure key features and hinder efforts to automatically analyse endoscopy videos.

In an effort to improve the quality of video endoscopy, a team of researchers from the Institute for Biomedical Engineering (Sharib Ali and Jens Rittscher), the Translational Gastroenterology Unit (Barbara Braden, Adam Bailey and James East) and the Ludwig Institute for Cancer Research (Felix Zhou and Xin Lu) have developed a deep-learning framework for quality assessment of endoscopy videos in near real-time. This framework, published in the journal Medical Image Analysis, is able to reliably identify six different types of artefacts in the video, generate a quality score for each frame and restore mildly corrupted frames. Frame restoration can help in building visually coherent 2D or 3D maps for further analysis. In addition, providing quality scores can help trainees to assess and improve their endoscopy screening performance.
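One simple way to picture a per-frame quality score is as a penalty over the image area covered by detected artefact boxes. The sketch below makes that idea concrete; the class weights and the scoring formula are our own illustrative assumptions, not the published network:

```python
import numpy as np

# Hypothetical per-class severity weights; the published framework
# detects artefacts with a deep network, and these numbers are
# purely illustrative.
WEIGHTS = {"specularity": 0.3, "saturation": 0.5, "artefact": 1.0,
           "blur": 0.8, "contrast": 0.4, "bubbles": 0.6}

def frame_quality(frame_shape, detections):
    """Score a frame in [0, 1], where 1.0 means artefact-free.
    `detections` is a list of (class_name, x, y, width, height) boxes."""
    h, w = frame_shape
    penalty = np.zeros((h, w))
    for cls, x, y, bw, bh in detections:
        # Overlapping boxes keep the worst (highest) penalty per pixel.
        penalty[y:y + bh, x:x + bw] = np.maximum(
            penalty[y:y + bh, x:x + bw], WEIGHTS[cls])
    return 1.0 - penalty.mean()

# A clean frame scores 1.0; a frame half-covered by blur scores lower.
print(frame_quality((100, 100), []))                         # 1.0
print(frame_quality((100, 100), [("blur", 0, 0, 100, 50)]))  # ≈ 0.6
```

Frames below some quality cut-off could then be routed to the restoration step or excluded from downstream 2D/3D mapping.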

Future work aims to employ real-time computer algorithm-aided analysis of endoscopic images and videos, which will enable earlier identification of potentially cancerous changes automatically during endoscopy.

This work was supported by the NIHR Oxford Biomedical Research Centre, the EPSRC, the Ludwig Institute for Cancer Research and Health Data Research UK.

(1) Real-time detection of artefacts of different types, including specularity, saturation, artefact, blur, contrast and bubbles, each indicated with a different coloured box on the image. Artefact statistics and a quality score are generated, and frames suitable for restoration of blur, artefact and saturation are identified. (2) Fast and realistic frame restoration using discriminator-generator networks. (3) Restoration of the entire video: before restoration, many more frames were corrupted and fewer frames were of good quality than after restoration, when over 50% of frames had been restored.

Graphical abstract summarising the main messages of the publication. © The Authors CC-BY-NC-ND 4.0

New AI technology to help research into cancer metastasis

Cell migration is the process of cells moving around the body, such as immune cells moving through the body’s tissues to fight off disease, or the cells that move to fill the gap where a tissue has been injured. Whilst cell migration is an important process for regeneration and growth, it is also the process that allows cancer cells to invade and spread across the body.

Therefore, understanding the factors that regulate and instruct cells to move is an important part of understanding how we can prevent the metastasis of many cancers. One method of doing this is the scratch assay, which, as the name suggests, involves inflicting a wound or ‘scratch’ on cells grown in a Petri dish and analysing under a microscope how the surrounding cells react and migrate to ‘heal’ the scratch.

Although cell migration is intensively studied, we still do not have efficient therapies to target it in the context of cancer metastasis. Observing how cancer cells behave in response to artificial wounding, and how this behaviour changes with pharmacological drug treatment or gene editing, is important for fully understanding the factors that drive this process in tumours. However, current microscopic analysis of wound-healing data is hindered by the limited image resolution of these assays, so there is a need for new methods that overcome these challenges and help to answer these questions.

Dr Heba Sailem, a Research Fellow in the Department of Engineering Science, has led a study to develop a new deep-learning technology known as DeepScratch. DeepScratch can detect cells in heterogeneous image data with limited resolution, allowing researchers to better characterise changes in tissue arrangement in response to wounding and how these affect cell migration.

Tests of the technology have found that DeepScratch can accurately detect cells in both membrane and nuclei images under different treatment conditions affecting cell shape or adhesion, with over 95% accuracy. This outperforms traditional analysis methods, and the technique can also be applied to scratch assays on genetically mutated cells or cells under the influence of pharmaceutical drugs, which makes it applicable to cancer cell research too.

Dr Heba Sailem says:

“Scratch assays are a prevalent tool in biomedical studies; however, only the wound area is typically measured in these assays. The change in wound area does not reflect the cellular mechanisms that are affected by genetic or pharmacological treatments.

“By analysing the patterns formed by single cells during the healing process, we can learn much more about the biological mechanisms influenced by particular genetic or drug treatments than we can from the change in wound area alone.”

Using this technology, the team have already observed that cells respond to wounds by changing their spatial organisation: cells more distant from the wound have higher local cell density and are less spread out. Such reorganisation is affected differently when different cellular mechanisms are perturbed. This approach can be useful for identifying more specific therapeutic targets and advancing our understanding of the mechanisms driving cancer invasion.
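The density-versus-distance observation can be sketched with a toy profile computation over detected cell centroids; the geometry (a vertical wound edge, horizontal distance only) and all values are hypothetical:

```python
import numpy as np

def density_vs_distance(centroids, wound_x, bin_width=100.0):
    """Bin detected cell centroids by distance from a vertical wound
    edge at x = wound_x and count cells per bin -- a simple proxy for
    the local-density profile described above."""
    distances = np.abs(np.asarray(centroids)[:, 0] - wound_x)
    bins = (distances // bin_width).astype(int)
    return np.bincount(bins)

# Cells sparse near the wound and packed further away:
cells = [(10, 5), (120, 8), (130, 9), (140, 2), (145, 7)]
print(density_vs_distance(cells, wound_x=0))  # [1 4]
```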

The team predicts that DeepScratch will prove useful for cancer research into changes in cell structures during migration, for improving the understanding of various disease processes, and for engineering regenerative medicine therapies. You can read more about DeepScratch and its applications in a recent study published in Computational and Structural Biotechnology Journal.

About Heba

Dr Heba Sailem is a Sir Henry Wellcome Research Fellow at the Big Data Institute and Institute of Biomedical Engineering at the University of Oxford. Her research is focused on developing intelligent systems that help further biological discoveries in the field of cancer.

Using machine-learning approaches to identify blood cancer types

Myeloproliferative Neoplasms (MPNs) are a group of blood cancers that occur when stem cells in the bone marrow develop mutations that lead to over-production of blood cells – either red blood cells in Polycythaemia Vera (PV), or platelets in Essential Thrombocythaemia (ET). This carries an increased risk of blood clots, for example in the legs or lungs, as well as heart attacks and strokes.

In myelofibrosis, the most severe of the MPNs, destructive scarring (‘fibrosis’) of the bone marrow develops, leading to failure of the marrow to produce blood cells and severe symptoms. Patients with any MPN are at higher risk of developing leukaemia, especially those with myelofibrosis, in whom it develops in more than 1 in every 10 patients.

Unfortunately, we do not yet have any drug treatments that can cure these conditions. Treatments for ET and PV aim to control the blood counts and reduce the risk of blood clots. For myelofibrosis, targeted therapies such as ruxolitinib, a JAK inhibitor, can effectively control symptoms, but they do not alter the natural history of the disease, and survival remains poor, typically less than 5-10 years following diagnosis.

In the vast majority of cases, mutations are found in one of three genes – JAK2, CALR or MPL. Screening for these is important in MPN diagnosis; however, distinguishing between the MPN subtypes requires a careful examination of blood counts and of the morphological features of a bone marrow biopsy.

Unfortunately, assessment of the bone marrow is highly subjective and reliant on qualitative observations, and there is great variability even between expert haematopathologists. In particular, it is very hard to reliably distinguish between a mutation-negative MPN and a ‘reactive’ (non-cancer) bone marrow.

A more accurate method for diagnosis is much needed, to enable selection of the most appropriate treatment strategy for patients and to determine treatment targets. Megakaryocytes, or ‘megas’ – the large bone marrow cells that produce blood platelets – are very abnormal in all the MPNs and are thought to play a key role in the disease pathology. Interestingly, although the gene mutations underlying all three MPNs lead to an over-production of megas, subtle differences in the appearance and location of these cells within the bone marrow occur in the different MPN subtypes.

To try to improve MPN classification, a team led by Jens Rittscher (Department of Engineering Science) and Daniel Royston (Radcliffe Department of Medicine) developed an AI approach to screen and classify MPN cases based on features of the mega cells, discovering new features in their cell size, clustering and internal complexity. Their machine-learning approach revealed clear differences between MPN subtypes – the platform was able to classify patients more accurately by assessing subtle morphological differences in the biopsies that could not have been identified by the naked eye.
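As a toy illustration of subtype classification from megakaryocyte features, the sketch below uses a nearest-centroid rule on invented per-biopsy feature vectors; the published platform learns its features and classifier from images, so everything here is an assumption for illustration:

```python
import numpy as np

# Hypothetical per-biopsy feature vectors: (mean megakaryocyte size,
# clustering index, internal complexity). All values are invented.
TRAIN = {
    "ET":            np.array([[1.2, 0.3, 0.4], [1.3, 0.35, 0.45]]),
    "PV":            np.array([[1.0, 0.5, 0.5], [1.1, 0.55, 0.5]]),
    "myelofibrosis": np.array([[1.6, 0.8, 0.9], [1.5, 0.85, 0.95]]),
}

def classify(features):
    """Assign a biopsy to the MPN subtype with the nearest mean
    feature vector (a stand-in for the paper's trained model)."""
    centroids = {k: v.mean(axis=0) for k, v in TRAIN.items()}
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

print(classify(np.array([1.55, 0.82, 0.9])))  # myelofibrosis
```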

These findings have been published in Blood Advances. Dr Beth Psaila, a clinician scientist at the MRC Weatherall Institute of Molecular Medicine and a haematology consultant specialising in MPNs said:

“It has long been recognised that a multitude of subtle differences in megakaryocyte morphology can distinguish between the MPN subtypes. However, this means that assessment of bone marrow biopsies is poorly reproducible, sometimes leading to diagnostic uncertainty and inappropriate treatment plans for patients.

“The approach developed here is really exciting for the field, as it is now possible to perform deep phenotyping of megakaryocytes and more accurate disease classification using simple H&E slides which are routinely prepared in all diagnostic facilities. This will be incredibly useful both for research aimed at better understanding the role of megakaryocytes in blood cancers as well as improving diagnosis and treatment pathways for our patients.”

The team hopes that in the future, this work can be combined with other histological assessments to optimise the clinical application of AI approaches, and create a more comprehensive quantitative description of the bone-marrow microenvironment and its cancers.

About the researchers and the study

This work was funded by the NIHR Oxford Biomedical Research Centre and is the result of collaboration between Korsuk Sirinukunwattana (Department of Engineering), Alan Aberdeen (Ground Truth Labs Ltd.), Helen Theissen (Department of Engineering), Jens Rittscher (Department of Engineering) and Daniel Royston (Radcliffe Department of Medicine [NDCLS]).

Jens Rittscher is a Principal Investigator whose research aims to enhance our understanding of complex biological processes through the analysis of image data acquired at the microscopic scale. Jens develops algorithms and methods that enable the quantification of a broad range of phenotypic alterations, the precise localisation of signalling events, and the ability to correlate such events in the context of the biological specimen.

Korsuk Sirinukunwattana is a postdoctoral research assistant in Rittscher’s group, specialising in medical image analysis and computational pathology. His main research interest is the association between tissue morphology and molecular/genetic subtypes in various diseases.

Alan Aberdeen leads Oxford spinout Ground Truth Labs, a company supporting digital pathology research through on-demand analysis, biomarker discovery, and high-quality cohorts.

Helen Theissen is a doctoral research student in Rittscher’s group. Her research focuses on computational methods to characterise cellular subtypes and quantify the bone marrow microenvironment in MPNs.

Daniel Royston is a joint academic & consultant Haematopathologist at Oxford University Hospitals NHS Foundation Trust / Radcliffe Department of Medicine.

New digital classification method using AI developed for colorectal cancer

A new study from S:CORT demonstrates an easy, cheap way to determine colorectal cancer molecular subtype using AI deep-learning digital pathology technology

Tackling oesophageal cancer early detection challenges through AI

Dr Sharib Ali specialises in the applications of AI to early oesophageal cancer detection

NCITA: a new consortium on cancer imaging

Cancer imaging is an umbrella term for diagnostic procedures that identify cancer through imaging, such as X-ray, CT and ultrasound scans. There is no single imaging test that can accurately diagnose cancer, but a variety of imaging tests can be used in the monitoring of cancer and the planning of its treatment.

What is NCITA?

NCITA – the UK National Cancer Imaging Translational Accelerator – is a new consortium that brings together world leading medical imaging experts to create an infrastructure for standardising the cancer imaging process, in order to improve its application in clinical cancer treatment.

Research and medical experts from the University of Oxford have come together with UCL, University of Manchester, the Institute of Cancer Research, Imperial, Cambridge University and many more to create this open access platform.

How will NCITA help cancer research?

On top of bringing together leading experts in cancer imaging to share their knowledge, the NCITA consortium will create a variety of systems, software and facilities to help localise and distribute new research and create a centralised location for cancer-image data to be analysed.

NCITA will include a data repository for imaging, artificial intelligence (AI) tools and training opportunities – all of which will contribute to a revolution in the speed and accuracy of cancer diagnosis, tumour classification and assessment of patient response to treatment.

The NCITA network is led by Prof Shonit Punwani, Prof James O’Connor, Prof Eric Aboagye, Prof Geoff Higgins, Prof Evis Sala, Prof Dow Mu Koh, Prof Tony Ng, Prof Hing Leung and Prof Ruth Plummer with up to 49 co-investigators supporting the NCITA initiative.  NCITA is keen to expand and bring in new academic and industrial partnerships as it develops.

Go to the NCITA website to stay up to date with news about cancer imaging research.

For more information on this exciting new initiative, see the media release about the NCITA launch here.

AI research discovers link between smell genes and colon cancer

Research from Dr Heba Sailem, recently published in Molecular Systems Biology, showed that patients with specific smell-sensing genes ‘turned on’ are more likely to have worse colon cancer outcomes.

Through the development of a machine-learning approach to analyse the perturbation of over 18,000 genes, Dr Sailem and her team found that olfactory receptor gene expression may have some effect on the way that colon cancer cells are structured.

Dr Sailem used layers of Artificial Intelligence (AI), including computer algorithms, to detect changes in cancer cell appearance and organisation when the genes are turned down using siRNA technology. AI played a crucial part in this research: it allowed fast and efficient analysis and mapping of cell image data to the various gene functions studied, which greatly increased the amount of information that could be extracted and reduced human error.

Dr Sailem surveyed over 18,000 genes and found that specific smell-sensing genes, called olfactory receptor genes, are strongly associated with how colon cancer cells spread and align with each other, akin to the changes induced by turning down key colon cancer genes.
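One way such an association can be quantified is by comparing image-derived phenotype profiles of different gene knockdowns. In the sketch below, the profiles, gene labels and similarity measure are illustrative assumptions, not the study’s actual pipeline:

```python
import numpy as np

# Hypothetical phenotype profiles per knockdown ("_kd"): each value is
# a normalised image-derived feature (cell spread, alignment, density).
profiles = {
    "OR51E2_kd": np.array([0.1, 0.9, 0.8]),    # an olfactory receptor
    "APC_kd":    np.array([0.15, 0.85, 0.75]), # a key colon cancer gene
    "CTRL":      np.array([0.8, 0.2, 0.3]),    # untreated control
}

def similarity(a, b):
    """Pearson correlation between two phenotype profiles."""
    return float(np.corrcoef(profiles[a], profiles[b])[0, 1])

# A knockdown phenotype resembling a known cancer gene's phenotype
# suggests a functionally related role:
print(similarity("OR51E2_kd", "APC_kd") > similarity("OR51E2_kd", "CTRL"))  # True
```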

The practical patient implications of this research include how we might approach patients with colon cancer, depending on their genetic makeup. In the long run, Dr Sailem hopes that these findings will allow clinicians to survey patient genes, create specific predictions based on their genetics and create tailored treatments to best treat their cancer.

There is already a large body of research into the genes that influence the structure of cancer tissues, but studies such as this might help to find new target genes. For example, by reducing the expression of olfactory genes, we could potentially inhibit cancer cells from spreading and eventually invading other tissues, which is the major cause of cancer death.

About the Author

Dr Heba Sailem is a Sir Henry Wellcome Research Fellow at the Big Data Institute and Institute of Biomedical Engineering at the University of Oxford. Her research is focused on developing intelligent systems that help further biological discoveries in the field of cancer.

This paper is a result of three years of work, focusing on identifying the role of genetic expression on the spread and management of colon cancer.

Future research

Following this research, Dr Sailem hopes to apply this AI approach to a wider range of cancers, to see which genes are associated with and influence cancer tissue structure, proliferation and motility.

For more information about this research, see Dr Heba Sailem’s paper here.