Monday, February 6, 2023

9 Challenges to Address to Enable High Throughput Image Analytics

By Simon Adar, CEO, Code Ocean

Science these days is inseparably linked with computational analysis. All organizations, whether small biotechs, large (bio)pharma companies, or academic research groups, have a computational element to their work that is growing in importance as automation enables bench scientists to generate ever larger and more complex data sets. To make sense of that data, cloud workstations can be readily and flexibly accessed, and new software packages almost instantaneously perform analyses that in the past took days or weeks.

Deep learning has advanced the state of the art in image analysis and can process larger datasets than traditional methods. This world of big data, customizable computing power, and advanced image analysis tools has removed many of the old bottlenecks that slowed down innovation, but it has also created a set of new ones, primarily around the shareability, traceability, and reproducibility of image analyses.

Here is an overview of the nine most pressing challenges organizations struggle with:

  1. Managing computational teams and projects

With data generation no longer the research bottleneck, workflow challenges are now rate-limiting. The days of a biologist generating data and analyzing them in a spreadsheet are over. Now teams of bioinformaticians and computational researchers crunch complex multi-omics data sets created in the lab. The challenge is to manage these different skill sets and – in the absence of established best practices – develop a methodology for standardized data analysis that can be consistently applied and avoids ad hoc, project-based approaches that need to be reinvented for every project.

Each team has its own preferred tools, from programming languages (R, Python, or MATLAB) to computing environments (GPUs, FPGAs, and CPUs). Collaborating on multidisciplinary team projects can therefore be a complex undertaking.

  2. Splitting up complex tasks

Computational projects are designed to help answer an important scientific question. Complex problems, such as identifying the best target or drug candidate, need to be broken down into subtasks that are assigned to different computational teams.

Organizations struggle with splitting up large tasks and with enabling teams to quickly access data, launch the computational tools they need, and reassemble the different components of a project.

  3. Handling DevOps tasks

Before computational researchers can start analyzing data, a host of DevOps tasks must be completed, such as setting up security and computing infrastructure. The smaller an organization, the bigger the problem of executing these DevOps tasks efficiently. Larger organizations can hire software engineers, an expensive solution that often slows progress because computational researchers must wait for help from scarce IT resources.

  4. Collaborating with colleagues and external partners

For large organizations, the main challenge is enabling collaborative work. Knowledge needs to be shared, internally or externally, in a way that collaborators can readily build on. Static slides or spreadsheets are no longer adequate for deep knowledge transfer, but organizations generally do not have a centralized platform that hosts curated data and analyses and allows for easy sharing and quick iteration.

  5. Aggregating knowledge company-wide

Computational researchers often lack deep software engineering skills: they know enough about GitHub and Docker to be dangerous, but not enough to perform tasks such as merging code or releasing it to production. Hiring software developers solves this problem but creates a new one: organizations need to find a way for these teams to work together and to aggregate their knowledge.

  6. Keeping knowledge in the company

A common problem organizations of all sizes struggle with is capturing the knowledge an individual creates on their own machine. A team member leaving the organization highlights this problem: often their work is lost or extremely difficult to recover or recreate.

  7. Reducing cycle time for sharing

The cycle time for sharing complex analyses with biologists who don’t code is high. Currently, reducing that cycle time requires the involvement of the IT department or hiring specialized staff. Organizations need approaches that allow them to quickly share results and iterate without dramatically increasing overhead.

  8. Making costly decisions based on non-reproducible analyses

In biological and medical research, very expensive decisions are made based on computational analyses. Trust in the results and their reproducibility is critical before a decision – such as taking a compound into the clinic – can be made. Establishing trust often requires rerunning analyses that were done months or even years ago. While this sounds basic, the reality is that attempts to rerun old analyses almost always fail. New versions of software and new dependencies turn reproducing previous work into a time-consuming, frustrating, and often futile exercise when it should be as easy as clicking “rerun”.
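One common way to keep an old analysis rerunnable is to pin the entire computational environment alongside the code, so nothing can drift between the original run and a rerun years later. A minimal sketch using a Dockerfile (the image tag, package versions, and `analysis.py` script below are illustrative assumptions, not a prescription):

```dockerfile
# Pin the base image to an exact tag so the OS and Python version
# are identical whenever the image is rebuilt.
FROM python:3.10.13-slim

# Pin every analysis dependency to an exact version; an unpinned
# "pip install scikit-image" would resolve differently next year.
RUN pip install --no-cache-dir \
    numpy==1.24.4 \
    scikit-image==0.21.0

# Ship the analysis code inside the image so code and environment
# are versioned together as one artifact.
COPY analysis.py /work/analysis.py
WORKDIR /work
CMD ["python", "analysis.py"]
```

With the environment captured this way, rerunning a months-old analysis becomes a `docker build` and `docker run` rather than an archaeology project of reconstructing forgotten dependencies.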

  9. Keeping research gratifying and fun

Long cycle times, difficulties collaborating and sharing results, and hard-to-reproduce analyses slow down progress and can turn the fun and excitement of cutting-edge research into a drag, something organizations intent on retaining their computational researchers in times of record-low unemployment want to avoid.

Computational research is a young discipline, and growing pains are to be expected. Many of the critical pieces already exist. What is missing is a central platform that orchestrates and integrates the entire process, enables quick cycle times and easy sharing and collaboration, frees lab and computational researchers from the burden of DevOps tasks, and reliably generates the same results it did yesterday or six months ago.

Simon Adar
CEO, Code Ocean


