Posts

Oxford spin-out influencing patient care worldwide

Optellum, a lung health company aiming to redefine early diagnosis and treatment of lung disease, today announced it received FDA clearance for its “Virtual Nodule Clinic”.

Optellum was co-founded by Oxford cancer researcher Prof. Sir Michael Brady with the mission of seeing every lung disease patient diagnosed and treated at the earliest possible stage, and cured.

Optellum’s initial product is the Virtual Nodule Clinic, the first AI-powered Clinical Decision Support software for lung cancer management. Their platform helps clinicians identify and track at-risk patients and speed up decisions for those with cancer while reducing unnecessary procedures.

Lung cancer kills more people than any other cancer. The current five-year survival rate is an abysmal 20%, primarily due to the majority of patients being diagnosed after symptoms have appeared and the disease has progressed to an advanced stage. This much-needed platform is the first such application of AI decision support for early lung cancer diagnosis cleared by the FDA.

Physician use of the Virtual Nodule Clinic has been shown to improve diagnostic accuracy and clinical decision-making. The clinical study that underpinned the FDA clearance engaged pulmonologists and radiologists to assess their accuracy in diagnosing lung nodules when using the Optellum software.

Dr Václav Potěšil, co-founder and CEO of Optellum says:

“This clearance will ensure clinicians have the clinical decision support they need to diagnose and treat lung cancer at the earliest possible stage, harnessing the power of physicians and AI working together – to the benefit of patients.

Our goal at Optellum is to redefine early diagnosis and treatment of lung cancer, and this FDA clearance is the first step on that journey. We look forward to empowering clinicians in every hospital, from our current customers at academic medical centers to local community hospitals, to offer patients with lung cancer and other deadly lung diseases the most optimal diagnosis and treatment.”

Using AI to improve the quality of endoscopy videos

Cancers detected at an earlier stage have a much higher chance of being treated successfully. The main method for diagnosing cancers of the gastrointestinal tract is endoscopy, in which a long flexible tube with a camera at its end is inserted into an organ such as the oesophagus, stomach or colon to observe any changes in the organ lining. Endoscopic methods such as radiofrequency ablation can also be used to prevent pre-cancerous regions from progressing to cancer, if they are detected in time.

Unfortunately, during conventional endoscopy, the more easily treated pre-cancerous conditions and early stage cancers are harder to spot and often missed, especially by less experienced endoscopists. Cancer detection is made even more challenging by artefacts in the endoscopy video such as bubbles, debris, overexposure, light reflection and blurring, which can obscure key features and hinder efforts to automatically analyse endoscopy videos.

In an effort to improve the quality of video endoscopy, a team of researchers from the Institute for Biomedical Engineering (Sharib Ali and Jens Rittscher), the Translational Gastroenterology Unit (Barbara Braden, Adam Bailey and James East) and the Ludwig Institute for Cancer Research (Felix Zhou and Xin Lu) have developed a deep-learning framework for quality assessment of endoscopy videos in near real-time. This framework, published in the journal Medical Image Analysis, is able to reliably identify six different types of artefacts in the video, generate a quality score for each frame and restore mildly corrupted frames. Frame restoration can help in building visually coherent 2D or 3D maps for further analysis. In addition, providing quality scores can help trainees to assess and improve their endoscopy screening performance.
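The per-frame quality scoring step can be sketched as follows. This is a minimal illustration only, not the published framework: the artefact class names match those reported in the paper, but the penalty weights, scoring formula and restoration threshold are invented assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    cls: str          # artefact class, e.g. "blur" or "bubbles"
    area_frac: float  # fraction of the frame covered by the bounding box

# Per-class penalty weights (invented for illustration): artefacts that
# obscure tissue most severely are penalised more heavily.
WEIGHTS = {
    "specularity": 0.2, "saturation": 0.6, "artefact": 0.5,
    "blur": 0.8, "contrast": 0.4, "bubbles": 0.3,
}

def frame_quality(detections: list[Detection]) -> float:
    """Quality score in [0, 1]; 1.0 means no artefacts were detected."""
    penalty = sum(WEIGHTS.get(d.cls, 0.5) * d.area_frac for d in detections)
    return max(0.0, 1.0 - penalty)

def is_restorable(detections: list[Detection], threshold: float = 0.5) -> bool:
    """Only mildly corrupted frames are worth passing to the restoration stage."""
    return frame_quality(detections) >= threshold
```

In a scheme like this, a frame filled with specular highlights scores higher than an equally covered blurred frame, reflecting how much recoverable information remains for restoration.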

Future work aims to employ real-time computer algorithm-aided analysis of endoscopic images and videos, which will enable earlier identification of potentially cancerous changes automatically during endoscopy.

This work was supported by the NIHR Oxford Biomedical Research Centre, the EPSRC, the Ludwig Institute for Cancer Research and Health Data Research UK.

(1) Real-time detection of artefacts of different types (specularity, saturation, artefact, blur, contrast, bubbles), each indicated with a different coloured box on the image; artefact statistics and a quality score are generated, and frames suitable for restoration of blur, artefact and saturation are identified. (2) Fast and realistic frame restoration using discriminator-generator networks. (3) Restoration of the entire video: before restoration, many more frames were corrupted and fewer frames were of good quality than after restoration, when over 50% of frames had been restored.

Graphical abstract summarising the main messages of the publication. © The Authors CC-BY-NC-ND 4.0

New AI technology to help research into cancer metastasis

Cell migration is the process of cells moving around the body, such as immune cells moving through the body’s tissues to fight off disease, or the cells that move to fill the gap where a tissue has been injured. Whilst cell migration is an important process for regeneration and growth, it is also the process that allows cancer cells to invade and spread across the body.

Therefore, understanding the factors that regulate and instruct cells to move is an important part of understanding how we can prevent metastasis in many cancers. One method of studying this is the scratch assay, which, as the name suggests, involves inflicting a wound or ‘scratch’ on cells grown in a Petri dish and observing under a microscope how the surrounding cells react and migrate to ‘heal’ the scratch.

Although cell migration is intensively studied, we still do not have efficient therapies to target it in the context of cancer metastasis. Observing how cancer cells behave in response to artificial wounding, and how this behaviour is altered by pharmacological drug treatment or gene editing, is important for fully understanding the factors that drive this process in tumours. However, current microscopic analysis of wound-healing data is hindered by the limited image resolution of these assays, so there is a need for new methods that overcome these challenges and help to answer these questions.

Dr Heba Sailem, a Research Fellow from the Department of Engineering, has led a study to develop a new deep-learning technology known as DeepScratch. DeepScratch can detect cells in heterogeneous image data of limited resolution, allowing researchers to better characterise changes in tissue arrangement in response to wounding and how these affect cell migration.

Tests of the technology have found that DeepScratch detects cells in both membrane and nuclei images with over 95% accuracy, across different treatment conditions that affect cell shape or adhesion. This outperforms traditional analysis methods, and the approach can also be applied when the scratch assays in question use genetically perturbed cells or cells under the influence of pharmaceutical drugs, which makes the technology applicable to cancer cell research too.

Dr Heba Sailem says:

“Scratch assays are a prevalent tool in biomedical studies; however, only the wound area is typically measured in these assays. The change in wound area does not reflect the cellular mechanisms that are affected by genetic or pharmacological treatments.

“By analysing the patterns formed by single cells during the healing process, we can learn much more about the biological mechanisms influenced by certain genetic or drug treatments than we can from the change in wound area alone.”

Using this technology, the team have already observed that cells respond to wounds by changing their spatial organisation, whereby cells more distant from the wound have higher local cell density and are less spread out. This reorganisation changes in different ways when different cellular mechanisms are perturbed. The approach can be useful for identifying more specific therapeutic targets and advancing our understanding of the mechanisms driving cancer invasion.
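The relationship between local cell density and distance from the wound can be quantified directly from detected cell centroids. The sketch below is a hypothetical, simplified version of such an analysis, not DeepScratch's actual post-processing: the vertical-wound model, neighbourhood radius and bin edges are all assumed parameters.

```python
import numpy as np

def local_density(centroids: np.ndarray, radius: float = 50.0) -> np.ndarray:
    """Number of neighbouring cells within `radius` of each cell centroid."""
    diffs = centroids[:, None, :] - centroids[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return (dists < radius).sum(axis=1) - 1  # subtract 1 to exclude the cell itself

def density_by_wound_distance(centroids, wound_x, bin_edges):
    """Mean local density of cells grouped by distance from a vertical wound
    at x = wound_x. `bin_edges` are upper edges; cells beyond the last edge
    are ignored."""
    dist = np.abs(centroids[:, 0] - wound_x)
    dens = local_density(centroids)
    idx = np.digitize(dist, bin_edges)
    return {edge: float(dens[idx == i].mean())
            for i, edge in enumerate(bin_edges) if np.any(idx == i)}
```

Comparing such density profiles between control and treated cells would expose the kind of spatial reorganisation described above, without relying on the wound area alone.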

The team predicts that DeepScratch will prove useful in cancer research studying changes in cell structures during migration, in improving the understanding of various disease processes, and in engineering regenerative medicine therapies. You can read more about DeepScratch and its applications in a recent study published in Computational and Structural Biotechnology Journal.

About Heba

Dr Heba Sailem is a Sir Henry Wellcome Research Fellow at the Big Data Institute and Institute of Biomedical Engineering at the University of Oxford. Her research is focused on developing intelligent systems that help further biological discoveries in the field of cancer.

Using machine-learning approaches to identify blood cancer types

Myeloproliferative Neoplasms (MPNs) are a group of blood cancers that occur when stem cells in the bone marrow develop mutations that lead to over-production of blood cells – either red blood cells in Polycythaemia Vera (PV) or platelets in Essential Thrombocythaemia (ET). This carries an increased risk of blood clots, such as in the legs or lungs, as well as heart attacks or strokes.

In myelofibrosis, the most severe of the MPNs, destructive scarring (‘fibrosis’) of the bone marrow develops, leading to failure of the marrow to produce blood cells and to severe symptoms. Patients with all MPNs are at higher risk of developing leukaemia, especially those with myelofibrosis, in whom it develops in more than 1 in every 10 patients.

Unfortunately, we do not yet have any drug treatments that can cure these conditions. Treatments for ET and PV aim to control the blood counts and reduce the risk of blood clots. For myelofibrosis, targeted therapies such as ruxolitinib, a JAK inhibitor, can effectively control symptoms, but they do not alter the natural history of the disease, and survival remains less than 5-10 years following diagnosis.

In the vast majority of cases, mutations are found in one of three genes: JAK2, CALR or MPL. Screening for these is important in MPN diagnosis; however, distinguishing between the MPN subtypes requires careful examination of blood counts and of the morphological features of a bone marrow biopsy.

Unfortunately, assessment of the bone marrow is highly subjective and reliant on qualitative observations, with great variability even between expert haematopathologists. In particular, it is very hard to reliably distinguish a mutation-negative MPN from a ‘reactive’ (non-cancer) bone marrow.

A more accurate method of diagnosis is much needed, both to enable selection of the most appropriate treatment strategy for patients and to determine treatment targets. Megakaryocytes, or ‘megas’ – the large bone marrow cells that produce blood platelets – are very abnormal in all the MPNs and are thought to play a key role in the disease pathology. Interestingly, although the gene mutations underlying all three MPNs lead to an over-production of megas, the appearance and location of these cells within the bone marrow differ subtly between the MPN subtypes.

To try to improve MPN classification, a team led by Jens Rittscher (Department of Engineering) and Daniel Royston (Radcliffe Department of Medicine) developed an AI approach to screen and classify MPN cases based on features of the mega cells, discovering new features in their cell size, clustering and internal complexity. Their machine-learning approach revealed clear differences between the MPN subtypes: the platform classified patients more accurately by assessing subtle morphological differences in the biopsies that could not have been identified by the naked eye.
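As a toy illustration of how per-cell mega measurements might be turned into a biopsy-level classification, the sketch below summarises each biopsy's cells into a feature vector and assigns the nearest subtype centroid. The feature names, summary statistics and nearest-centroid rule are illustrative assumptions, not the team's published pipeline.

```python
import numpy as np

def biopsy_features(megas: np.ndarray) -> np.ndarray:
    """Summarise per-mega measurements (rows of e.g. cell size, clustering
    score, internal complexity) into one fixed-length vector per biopsy."""
    return np.concatenate([megas.mean(axis=0), megas.std(axis=0)])

def fit_centroids(feature_vectors, labels):
    """Mean feature vector for each subtype label."""
    X, y = np.asarray(feature_vectors), np.asarray(labels)
    return {lab: X[y == lab].mean(axis=0) for lab in np.unique(y)}

def predict_subtype(centroids, features):
    """Assign the subtype whose centroid is closest in feature space."""
    return min(centroids, key=lambda lab: np.linalg.norm(features - centroids[lab]))
```

The key design point this illustrates is aggregation: subtype differences lie in the distribution of mega morphologies across a biopsy, so per-cell measurements must be pooled before classification.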

These findings have been published in Blood Advances. Dr Beth Psaila, a clinician scientist at the MRC Weatherall Institute of Molecular Medicine and a haematology consultant specialising in MPNs said:

“It has long been recognised that a multitude of subtle differences in megakaryocyte morphology can distinguish between the MPN subtypes. However, this means that assessment of bone marrow biopsies is poorly reproducible, sometimes leading to diagnostic uncertainty and inappropriate treatment plans for patients.

“The approach developed here is really exciting for the field, as it is now possible to perform deep phenotyping of megakaryocytes and more accurate disease classification using simple H&E slides which are routinely prepared in all diagnostic facilities. This will be incredibly useful both for research aimed at better understanding the role of megakaryocytes in blood cancers as well as improving diagnosis and treatment pathways for our patients.”

The team hopes that in the future, this work can be combined with other histological assessments to optimise the clinical application of AI approaches, and create a more comprehensive quantitative description of the bone-marrow microenvironment and its cancers.

About the researchers and the study

This work was funded by the NIHR Oxford Biomedical Research Centre and is the result of collaboration between Korsuk Sirinukunwattana (Department of Engineering), Alan Aberdeen (Ground Truth Labs Ltd.), Helen Theissen (Department of Engineering), Jens Rittscher (Department of Engineering) and Daniel Royston (Radcliffe Department of Medicine [NDCLS]).

Jens Rittscher is a Principal Investigator whose research aim is to enhance our understanding of complex biological processes through the analysis of image data acquired at the microscopic scale. Jens develops algorithms and methods that enable the quantification of a broad range of phenotypical alterations, the precise localisation of signalling events, and the ability to correlate such events in the context of the biological specimen.

Korsuk Sirinukunwattana is a postdoctoral research assistant in Rittscher’s group specialising in medical image analysis and computational pathology. His main research interest is the association between tissue morphology and molecular/genetic subtypes in various diseases.

Alan Aberdeen leads Oxford spinout Ground Truth Labs, a company supporting digital pathology research through on-demand analysis, biomarker discovery, and high-quality cohorts.

Helen Theissen is a doctoral research student in Rittscher’s group. Her research focuses on computational methods to characterise cellular subtypes and quantify the bone marrow microenvironment in MPNs.

Daniel Royston is a joint academic & consultant Haematopathologist at Oxford University Hospitals NHS Foundation Trust / Radcliffe Department of Medicine.

New digital classification method using AI developed for colorectal cancer

A new study from S:CORT demonstrates an easy, cheap way to determine colorectal cancer molecular subtype using AI deep-learning digital pathology technology

Tackling oesophageal cancer early detection challenges through AI

Dr Sharib Ali specialises in the applications of AI to early oesophageal cancer detection

NCITA: a new consortium on cancer imaging

Cancer imaging is an umbrella term for diagnostic procedures that identify cancer through imaging, such as X-ray, CT and ultrasound scans. No single imaging test can accurately diagnose cancer, but a variety of imaging tests can be used in the monitoring of cancer and the planning of its treatment.

What is NCITA?

NCITA – the UK National Cancer Imaging Translational Accelerator – is a new consortium that brings together world-leading medical imaging experts to create an infrastructure for standardising the cancer imaging process, in order to improve its application in clinical cancer treatment.

Research and medical experts from the University of Oxford have come together with UCL, University of Manchester, the Institute of Cancer Research, Imperial, Cambridge University and many more to create this open access platform.

How will NCITA help cancer research?

On top of bringing together leading experts in cancer imaging to share their knowledge, the NCITA consortium will create a variety of systems, software and facilities to help localise and distribute new research and create a centralised location for cancer-image data to be analysed.

NCITA will include a data repository for imaging, artificial intelligence (AI) tools and training opportunities – all of which will contribute to a revolution in the speed and accuracy of cancer diagnosis, tumour classification and assessment of patient response to treatment.

The NCITA network is led by Prof Shonit Punwani, Prof James O’Connor, Prof Eric Aboagye, Prof Geoff Higgins, Prof Evis Sala, Prof Dow Mu Koh, Prof Tony Ng, Prof Hing Leung and Prof Ruth Plummer with up to 49 co-investigators supporting the NCITA initiative.  NCITA is keen to expand and bring in new academic and industrial partnerships as it develops.

Go to the NCITA website to stay up to date with news about cancer imaging research.

For more information on this exciting new initiative, see the media release about the NCITA launch here.

AI research discovers link between smell genes and colon cancer

Research from Dr Heba Sailem, recently published in Molecular Systems Biology, showed that patients with specific smell-sensing genes ‘turned on’ are more likely to have worse colon cancer outcomes.

Through the development of a machine-learning approach to analyse perturbations of over 18,000 genes, Dr Sailem and her team found that olfactory receptor gene expression may have some effect on the way that colon cancer cells are structured.

Dr Sailem used layers of Artificial Intelligence (AI), including computer algorithms, to detect changes in cancer cell appearance and organisation when genes are turned down using siRNA technology. AI played a crucial part in this research: it allowed fast, efficient analysis and mapping of cell image data to the various gene functions studied, which greatly increased the amount of information that could be extracted and reduced human error.

Dr Sailem surveyed over 18,000 genes and found that specific smell-sensing genes, called olfactory receptor genes, are strongly associated with how colon cancer cells spread and align with each other, akin to the changes induced by turning down key colon cancer genes.
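One way to make this kind of association concrete is to summarise each gene knockdown as a vector of image-derived phenotype features and compare profiles by cosine similarity. The sketch below is a hypothetical simplification of such a screen analysis; the gene names and feature values are invented for illustration and are not from the study.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two phenotype feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query: np.ndarray, profiles: dict) -> str:
    """Return the knockdown whose phenotype profile best matches `query`."""
    return max(profiles, key=lambda gene: cosine(query, profiles[gene]))
```

Under this scheme, an olfactory receptor knockdown whose morphological profile lands closest to those of known colon cancer gene knockdowns would be flagged as phenotypically similar, which is the form of evidence described above.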

The practical implications of this research include how we might approach patients with colon cancer depending on their genetic makeup. In the long run, Dr Sailem hopes that these findings will allow clinicians to survey patient genes, make specific predictions based on their genetics, and create tailored treatments to best treat their cancer.

There is already a large body of research into the genes that influence the structure of cancer tissues, but studies such as this might help to find new target genes. For example, by reducing the expression of olfactory genes, we could potentially inhibit cancer cells from spreading and eventually invading other tissues, which is the major cause of cancer death.

About the Author


This paper is a result of three years of work, focusing on identifying the role of genetic expression on the spread and management of colon cancer.

Future research

Following this research, Dr Sailem hopes to apply this AI approach to a wider range of cancers, to see which genes are associated with and influence cancer tissue structure, proliferation and motility.

For more information about this research, see Dr Heba Sailem’s paper here.

Leveraging AI and image analysis technology to improve prognostication in colorectal cancer