Title:
Sprocket WDL Runner
Challenge Summary:
The challenge involves advancing the development of "Sprocket," a Workflow Description Language (WDL) project manager written in Rust. The specific task is to enable Sprocket to execute WDL workflows on Kubernetes clusters, both local and remote, by using Kubernetes jobs. The aim is to prototype a feature where users can initiate bioinformatics workflows through a command similar to `sprocket run workflow.wdl -I input.json`. This effort would include taking inspiration from the command line interface of Oliver, another project under the St. Jude Cloud initiative, to guide the design of the `sprocket run` command interface.
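One way to picture the prototype: for each WDL task, `sprocket run` would render and submit a Kubernetes Job and wait for it to complete. The manifest below is a hand-written illustration of what such a generated Job might look like; the names, container image, and command are assumptions for illustration, not actual Sprocket output.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sprocket-task-align-reads   # illustrative task name
spec:
  backoffLimit: 0                   # let the engine, not Kubernetes, decide on retries
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: task
          image: quay.io/biocontainers/bwa:0.7.17--he4a0461_11  # the task's WDL runtime container
          command: ["/bin/sh", "-c", "bwa mem ref.fa reads.fq > out.sam"]
          resources:
            requests:
              cpu: "4"       # from the WDL task's runtime section
              memory: 8Gi
```

Running 100+ concurrent workflows then reduces to submitting many such Jobs and letting the cluster scheduler do the bin-packing.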
Benefit:
Implementing this feature in Sprocket would significantly enhance the scalability of bioinformatics computations, addressing a critical gap in current bioinformatics infrastructure. Most existing workflow engines struggle to efficiently manage more than a handful of concurrent workflows. By enabling the execution of over 100 concurrent workflows, Sprocket would provide a robust solution that can handle high-volume, high-throughput bioinformatics data processing, thereby accelerating research and development in computational biology and related fields.
Useful Tools/Packages/Software:
Submitter:
Clay McLeod, Computational Biology, St. Jude
Title:
Rust Differential Gene Expression Foundations Library
Challenge Summary:
The objective is to create a high-performance Rust library for differential gene expression (DGE) analysis, a critical component for the St. Jude Cloud team’s counts server project. Currently, most DGE analysis tools are implemented in R (e.g., DESeq2, limma, ComBat-seq), which may not meet the performance needs of high-throughput platforms. The challenge involves either binding Rust to existing statistical libraries like Stan or rewriting the necessary algorithms from scratch using Rust-specific scientific libraries such as ndarray and linfa_linear. The initial focus will be on:
The aim for a hackathon would be to establish a proof of concept that can be further developed and expanded by the broader community.
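The challenge itself targets Rust (ndarray, linfa), but the per-gene statistics at the heart of DGE analysis are language-agnostic. As a toy illustration, here is the kind of quantity such a library computes for each gene, sketched in dependency-free Python:

```python
import math

def welch_t(group_a, group_b):
    """Welch's t statistic for one gene's normalized counts in two conditions."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def log2_fold_change(group_a, group_b, pseudocount=1.0):
    """Log2 ratio of mean expression, with a pseudocount to stabilize low counts."""
    ma = sum(group_a) / len(group_a)
    mb = sum(group_b) / len(group_b)
    return math.log2((ma + pseudocount) / (mb + pseudocount))
```

Tools like DESeq2 go further (negative binomial models, dispersion shrinkage), which is where binding to Stan or reimplementing in Rust becomes the real work.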
Benefit:
Developing these capabilities in Rust would provide a robust, efficient, and scalable alternative to existing R-based tools, suitable for integration into high-performance bioinformatics infrastructure like the counts server being developed by St. Jude Cloud. This advancement would benefit the scientific community by providing faster, more reliable tools for DGE analysis, crucial for understanding gene expression differences in large datasets, enhancing both research throughput and insights.
Useful Tools/Packages/Software:
Submitter:
Clay McLeod, Computational Biology, St. Jude
Title:
Extension of `ngs` Command Line Tool to Include MarkDuplicates Command
Challenge Summary:
The challenge involves extending the `ngs` command line tool, developed by the St. Jude Rust Labs team, to include a MarkDuplicates command. The `ngs` package, which is written in Rust, aims to provide a robust and efficient set of tools for bioinformatics analysis. This extension will reimplement the process of marking duplicates in next-generation sequencing (NGS) data, addressing current deficiencies in existing bioinformatics tools.
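A common definition of a duplicate, used by tools such as Picard MarkDuplicates, is a read sharing the same reference, 5' alignment position, and strand as another, with the highest-quality read kept as the representative. A minimal Python illustration of that grouping logic (the real `ngs` command would be written in Rust and operate on BAM records):

```python
from collections import defaultdict

def mark_duplicates(reads):
    """Mark duplicates among aligned reads.

    Each read is a dict with 'name', 'chrom', 'pos' (5' alignment position),
    'strand', and 'qual_sum' (sum of base qualities). Reads sharing the same
    (chrom, pos, strand) signature are duplicates; the read with the highest
    quality sum is kept, and the rest get read['duplicate'] = True.
    """
    groups = defaultdict(list)
    for read in reads:
        groups[(read['chrom'], read['pos'], read['strand'])].append(read)
    for dupes in groups.values():
        best = max(dupes, key=lambda r: r['qual_sum'])
        for read in dupes:
            read['duplicate'] = read is not best
    return reads
```

A production implementation must also handle read pairs, soft clipping when computing the 5' position, and optical versus PCR duplicates, which is where existing tools often fall short.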
Benefit:
Introducing a MarkDuplicates command in the `ngs` package will offer the scientific community a more reliable and straightforward tool for identifying duplicate sequences in NGS data. This improvement will help in enhancing data accuracy and integrity in genomic studies, contributing to better research outcomes and more precise genetic analysis. A robust implementation in Rust will also ensure better performance and ease of integration into existing Rust-based bioinformatics pipelines.
Useful Tools/Packages/Software:
Submitter:
Clay McLeod, Computational Biology, St. Jude
Title:
Integrate Chainfile Liftover into `ngs` Command Line Tool
Challenge Summary:
The goal is to enhance the `ngs` command line tool, developed by the St. Jude Rust Labs team, by integrating a new subcommand that utilizes the `chainfile` crate. This subcommand will enable users to lift over genomic coordinates and variants between different genome builds. The project involves two main tasks:
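The heart of any liftover is mapping a coordinate through the aligned blocks of a chain file. A minimal Python sketch of that mapping (the `chainfile` crate's actual API is not assumed here; block tuples stand in for parsed chain records):

```python
import bisect

def liftover(blocks, pos):
    """Map a 0-based source coordinate to the target assembly.

    `blocks` is a sorted list of aligned chain blocks
    (src_start, src_end, tgt_start). Returns the target coordinate, or
    None if the position falls in an unaligned gap between blocks.
    """
    starts = [b[0] for b in blocks]
    i = bisect.bisect_right(starts, pos) - 1
    if i < 0:
        return None
    src_start, src_end, tgt_start = blocks[i]
    if pos >= src_end:
        return None  # position lies in a gap not covered by any block
    return tgt_start + (pos - src_start)
```

Lifting variants adds the harder cases on top of this: strand flips, reference allele changes between builds, and deletions spanning block boundaries.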
Benefit:
Implementing this functionality within the `ngs` tool would provide the scientific community with a robust, efficient toolset for genomic data analysis, particularly in the context of transitioning data between genome assemblies. This capability is crucial for ensuring the accuracy and relevance of genomic data as reference standards evolve. The integration of these features into a single toolset would streamline workflows and enhance the usability of `ngs` in genomic research.
Useful Tools/Packages/Software:
Submitter:
Clay McLeod, Computational Biology, St. Jude
Title:
Development of a Language Server Protocol for WDL in Rust
Challenge Summary:
The challenge is to create a Language Server Protocol (LSP) implementation for the Workflow Description Language (WDL) using Rust. This LSP should integrate with the `stjude-rust-labs/wdl` crate and provide features similar to those found in rust-analyzer, including formatting, code completion, and inline type hinting. The goal is to enhance the development environment for WDL by offering robust Integrated Development Environment (IDE) support, making it easier for developers to write, debug, and maintain WDL scripts.
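Independent of the language-analysis work, every LSP server speaks the same wire format: JSON-RPC messages framed by a Content-Length header, a blank line, and a UTF-8 body. A Python sketch of that framing (the Rust implementation would do the same over stdin/stdout):

```python
import json

def encode_lsp_message(payload):
    """Frame a JSON-RPC payload as the Language Server Protocol expects:
    a Content-Length header, a blank line, then the UTF-8 JSON body."""
    body = json.dumps(payload).encode('utf-8')
    header = f'Content-Length: {len(body)}\r\n\r\n'.encode('ascii')
    return header + body

def decode_lsp_message(data):
    """Parse one framed message back into a Python object."""
    header, _, body = data.partition(b'\r\n\r\n')
    length = int(header.split(b':')[1])
    return json.loads(body[:length].decode('utf-8'))
```

The substantive work then sits behind handlers for methods like `initialize`, `textDocument/completion`, and `textDocument/formatting`, backed by the `stjude-rust-labs/wdl` parser.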
Benefit:
Implementing an LSP for WDL will significantly improve the usability and accessibility of the WDL programming language, benefiting both the immediate research community at St. Jude and the broader scientific community engaged in computational biology and bioinformatics. Enhanced IDE support will streamline the development process, reduce errors, and increase productivity by providing real-time feedback and suggestions as developers write WDL scripts. This tool will fill a current gap in the ecosystem by providing high-quality, IDE-integrated tools tailored for WDL, thereby promoting its adoption and effectiveness in scientific workflows.
Useful Tools/Packages/Software:
Submitter:
Clay McLeod, Computational Biology, St. Jude
Title:
Sprocket WDL Runner Enhancements
Challenge Summary:
The challenge involves extending the capabilities of "Sprocket," a Rust-based project manager for Workflow Description Language (WDL). The desired enhancements include:
The expected deliverables include:
Benefit:
These enhancements would significantly streamline workflow creation and management, making it easier and faster to develop and deploy bioinformatics workflows. The addition of scaffolding tools would reduce the time and complexity involved in writing and maintaining WDL scripts. Introducing symbolic imports and package management would further enhance usability and maintainability of workflows, promoting reusability and collaboration within the scientific community.
Useful Tools/Packages/Software:
Submitter:
Clay McLeod, Computational Biology, St. Jude
Title:
Implementing a Cloud-Based Pipeline for Prioritizing Disease Predisposition Genes Utilizing Large Public Summary Genetic Count Data
Challenge Summary:
Previously, we developed CoCoRV (consistent summary counts based rare variant burden test), a computational tool to help prioritize disease predisposition genes and variants for rare diseases.
We implemented the CoCoRV pipeline in Nextflow so that it runs seamlessly on different platforms, i.e., personal computers, clusters, or cloud platforms. Nextflow can also execute jobs in parallel to speed up the pipeline. However, users still need to process large public control data sets such as gnomAD, and may not have adequate computational facilities to do so efficiently. To solve this challenge, we propose to implement the CoCoRV Nextflow pipeline on the DNAnexus cloud platform with a user-friendly interface. We need to achieve the following goals: 1) provide a self-contained and ready-to-run package for the cloud environment; 2) implement a user-friendly interface on the DNAnexus cloud platform to execute the pipeline, e.g., an interface to specify input parameters for variant annotation and for running CoCoRV; 3) optimize the Nextflow implementation to leverage the large number of nodes in the cloud for fast execution and efficient memory usage per node.
Benefit:
The cloud implementation of CoCoRV will include pre-processed and annotated public control data sets such as the latest gnomAD v4, and it will have all necessary packages installed. It will therefore be a ready-to-use pipeline: users do not need to install anything or preprocess the control data sets. They only need to upload their case datasets or directly explore datasets already available on the DNAnexus platform. Our cloud-based pipeline provides a cost-effective and user-friendly tool for researchers around the world to analyze their datasets for the discovery of disease predisposition genes.
Useful Tools/Packages/Software:
Submitter:
Saima Sultana Tithi, Cell & Molecular Biology, St. Jude
Title:
Enhancing Variant Detection Accuracy Using T2T-CHM13 with Liftover to hg19
Challenge Summary:
The proposed challenge is to investigate the feasibility of using the T2T-CHM13 reference genome to calculate structural and copy number variants from RNA and DNA, with a subsequent liftover to the hg19 reference. This approach aims to address the high rate of false positives typically generated by current variant detection methods, which are often due to gaps or mismapped regions in the reference genome. By leveraging more accurate reference data and potentially more sophisticated mapping techniques, the goal is to improve variant detection accuracy and facilitate easier comparison with existing variant data hosted on the St. Jude clinical genomics platform, St. Jude Cloud.
Benefit:
Improving the accuracy of variant detection and reducing false positives would significantly benefit not only the clinical genomics analysts at St. Jude but also the wider scientific community. By decreasing the manual review workload, the process of analyzing patient samples can be expedited, aligning with the clingen group’s goal to halve processing times. This enhancement in efficiency and reliability of genomic analyses could lead to faster and more accurate diagnoses and, by extension, more timely therapeutic interventions.
Useful Tools/Packages/Software:
Submitter:
David Rosenfeld, Computational Biology, St. Jude
Title:
Comprehensive Omics Catalogue for Hartwell
Challenge Summary:
The proposed challenge is to create an institution-wide catalogue of all omics studies conducted in Hartwell, focusing on St. Jude samples. This catalogue would serve as a centralized database to track and manage various types of omics data (genomics, proteomics, metabolomics, etc.) that have been generated across different laboratories and research groups within the institution.
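A minimal relational sketch of what such a catalogue could record; the table and column names below are illustrative assumptions, not an existing schema:

```sql
-- Illustrative schema; names and fields are assumptions.
CREATE TABLE sample (
    sample_id   TEXT PRIMARY KEY,
    subject_id  TEXT NOT NULL,
    tissue      TEXT
);

CREATE TABLE omics_assay (
    assay_id    SERIAL PRIMARY KEY,
    sample_id   TEXT NOT NULL REFERENCES sample(sample_id),
    assay_type  TEXT NOT NULL,   -- e.g. 'WGS', 'RNA-seq', 'proteomics'
    lab         TEXT,            -- generating laboratory or core
    date_run    DATE,
    data_path   TEXT             -- where the files live / how to request a share
);
```

Even a schema this small supports the key query: "has this sample already been assayed, and by whom?"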
Benefit:
Developing a comprehensive omics catalogue would provide significant benefits to both individual labs and the broader scientific community at St. Jude. By having a centralized repository of omics data, researchers can easily check if a particular sample has already been sequenced or analyzed, thus avoiding redundant sequencing efforts and associated costs. This efficiency not only saves resources but also accelerates research by facilitating easier access to existing data. Researchers can request file shares of needed data instead of initiating new sequencing projects, promoting collaboration and data reuse.
Useful Tools/Packages/Software:
Submitter:
Louis El Khoury, Pathology, St. Jude
Title:
Development of a Modern Super Enhancer Identification Tool
Challenge Summary:
Rank Ordering of Super Enhancers (ROSE) is the de facto tool for super enhancer identification. It is also now quite old (written 2010-2011), has never been updated, and relies on defunct language versions (Python 2). While there are a few "updated" versions floating around GitHub, the changes made are not well documented. In addition, its built-in annotation data is now very out of date, and updating it requires changes to the source code. It is also not installable via the typical methods for Python software (pip, conda, etc.). In short, it is very much showing its age and carries a large amount of technical debt because 'omics data structures were poorly defined when it was created.
This challenge would be to write a new and improved SE caller that can handle flexible annotation data (i.e. a GTF file), perform group-wise SE calling in a sensible manner, is easily installed and well-documented, outputs data in standardized and well-established formats (e.g. BED), and that has more control over how region stitching is performed (e.g. how promoters of densely packed genes may interfere or prevent stitching). Bonus points if (optional) gene expression information can be incorporated to improve SE-gene association/assignment, a problem that I've dabbled with in the past (https://github.com/j-andrews7/rosewater).
Super enhancer calling is still performed all the time, and an improved tool would see a lot of use. Dealer's choice on language (Python, R, Rust, whatever) so long as it's reasonably performant and easily distributed and installed without a trek through dependency hell.
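The stitching step mentioned above is simple to state: merge enhancer peaks whose gap falls below a threshold (ROSE's default is 12.5 kb); the interesting design questions are about when *not* to merge. A minimal Python sketch of the baseline behavior:

```python
def stitch_regions(regions, max_gap=12500):
    """Merge enhancer intervals on one chromosome whose gap is <= max_gap,
    mirroring the stitching step a ROSE-style super enhancer caller performs.

    `regions` is a list of (start, end) tuples; returns merged intervals.
    """
    stitched = []
    for start, end in sorted(regions):
        if stitched and start - stitched[-1][1] <= max_gap:
            stitched[-1][1] = max(stitched[-1][1], end)  # extend current stitch
        else:
            stitched.append([start, end])                # begin a new stitch
    return [tuple(r) for r in stitched]
```

A modern caller would make this step annotation-aware, e.g., breaking a stitch when it would span the promoters of several densely packed genes.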
Benefit:
Creating a modernized, efficient SE identification tool will significantly improve the robustness, transparency, and flexibility of super enhancer analysis. It will streamline and enhance research capabilities in genomic studies where super enhancers play a critical role, such as in cancer and developmental biology research. This tool would facilitate more accurate and faster analyses, promoting deeper insights into gene regulation complexities.
Useful Tools/Packages/Software:
R, Python, Rust; ROSE (https://github.com/younglab/ROSE) for reference
Submitter:
Jared Andrews, Developmental Neurobiology, St. Jude
Title:
ChatCAB: A GPT-Powered Departmental Wiki Chatbot
Challenge Summary:
The challenge is to develop ChatCAB, a GPT-powered chatbot designed to automatically answer inquiries using information from the Center for Applied Bioinformatics (CAB) department Wiki. The chatbot would be integrated with the department's internal systems to provide instant responses to frequently asked questions, leveraging the content within the departmental Wiki as its knowledge base.
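A common architecture for this is retrieval-augmented generation: rank wiki pages against the incoming question, then paste the top hits into the GPT prompt as context. A dependency-free sketch of the retrieval half (a production version would use embeddings rather than raw word counts):

```python
import math
from collections import Counter

def _vec(text):
    """Bag-of-words vector for a text."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question, wiki_pages, k=2):
    """Rank wiki pages by similarity to the question; the top-k pages would
    then be included in the GPT prompt as grounding context."""
    q = _vec(question)
    return sorted(wiki_pages, key=lambda p: _cosine(q, _vec(p['text'])), reverse=True)[:k]
```

Grounding answers in retrieved wiki text (and citing the source page) also keeps the chatbot's responses auditable, which matters for internal documentation.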
Benefit:
Implementing ChatCAB would significantly reduce the time CAB staff currently spend responding to routine inquiries, allowing them to focus on more complex and impactful projects. By automating responses to common questions, the chatbot would enhance efficiency and ensure consistent information dissemination within the department. Additionally, this tool could serve as a model for other departments, potentially broadening its impact across the scientific community by improving productivity and knowledge sharing.
Useful Tools/Packages/Software:
Submitter:
Xun Zhu, CAB, St. Jude
Title:
Manuscript Formatting Tool for Journal Submission
Challenge Summary:
The challenge is to develop a software or platform that enables authors to visualize their manuscripts in the specific publication format of various academic journals. Authors would upload their manuscript along with any figures and select their target journal. The software would then present the manuscript formatted according to the journal's guidelines. This tool aims to improve the readability of manuscripts and help authors identify typographical and formatting errors that might be overlooked in standard word processing formats.
Benefit:
Implementing this solution would significantly enhance the manuscript preparation process for researchers, particularly in terms of the quality and presentation of their work. For authors, this means a better reading experience and a higher chance of catching errors before submission, which can improve the likelihood of acceptance. For the scientific community at large, it streamlines the review process and ensures submissions adhere more closely to journal standards, facilitating clearer communication of research findings.
Useful Tools/Packages/Software:
Submitter:
Louis El Khoury, Pathology, St. Jude
Title:
AI-Assisted Manuscript Preparation Tool
Challenge Summary:
The challenge is to develop an AI-powered tool that aids in the preparation of scientific manuscripts by automating the initial review processes typically performed by scientific editors. This tool would utilize advanced language models to analyze manuscripts for connectivity, coherency, readability, and logical flow. It would identify common issues such as gaps in logic or inconsistencies in terminology and provide actionable suggestions to improve the manuscript. This technology aims to enhance the quality of manuscripts before they reach human editors, making the editing process more efficient and focused on deep scientific content.
Benefit:
An AI-assisted manuscript preparation tool would significantly benefit St. Jude's researchers by refining their manuscripts in the early stages, thus reducing the workload on scientific editors and enhancing the overall quality of submissions. This would streamline the manuscript preparation process, allowing scientists to focus more on their research rather than the nuances of writing. Additionally, this tool would be invaluable for researchers who do not have ready access to professional editing services, ultimately raising the standard and competitiveness of the manuscripts submitted for publication.
Useful Tools/Packages/Software:
Submitter:
Jaimin Patel, Structural Biology, St. Jude
Title:
In-House App for Maintenance Checklists
Challenge Summary:
The challenge is to develop an in-house application designed to manage routine rounds and inspections by maintenance personnel at St. Jude. The existing Computerized Maintenance Management Systems (CMMS) do not meet the specific needs of the facility operations team, leading to the decision to create a tailored solution. This app will feature a Java backend using PostgreSQL for data management and a React frontend to provide separate user interfaces for administrators and end-users.
Benefit:
Creating this app will digitalize the existing paper-based processes, enhancing efficiency and accuracy in recording and retrieving data for routine maintenance and inspections. This shift to digital solutions aligns with broader environmental goals by reducing paper use and streamlines operations, thus enabling maintenance staff to perform their duties more effectively. Digitizing these processes not only benefits the facility operations team at St. Jude but also serves as a model for similar departments in other institutions looking to modernize their operational workflows.
Useful Tools/Packages/Software:
Submitter:
Kennon Silence, FOM, St. Jude
Title:
Development of a Python-based Unbalanced Haar Technique for CNV Analysis
Challenge Summary:
The challenge is to implement the unbalanced Haar technique, a nonparametric function estimation method, in Python and apply it to genomic data for coverage and allele frequency decomposition. This implementation will serve as a foundational element for developing a versatile copy number variant (CNV) caller that can handle single samples across any cancer type and potentially various input types. The focus is on leveraging the mathematical properties of the unbalanced Haar transformation to efficiently identify uniform segments of the genome, which are indicative of potential CNVs.
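To make the method concrete, here is a minimal Python sketch of unbalanced-Haar-style segmentation: repeatedly split a segment at the position with the largest-magnitude unbalanced Haar coefficient, stopping when the coefficient falls below a threshold. The stopping threshold here is an arbitrary illustrative constant; a real CNV caller would derive it from a noise model.

```python
import math

def uh_coefficient(prefix, s, b, e):
    """Unbalanced Haar coefficient for splitting segment [s, e) at b,
    computed from a prefix-sum array for O(1) evaluation."""
    n, m = e - s, b - s
    left = prefix[b] - prefix[s]
    right = prefix[e] - prefix[b]
    return (math.sqrt((n - m) / (n * m)) * left
            - math.sqrt(m / (n * (n - m))) * right)

def segment(data, threshold=3.0):
    """Recursively split coverage data at the largest unbalanced Haar
    coefficient; returns breakpoint indices (candidate CNV boundaries)."""
    prefix = [0.0]
    for x in data:
        prefix.append(prefix[-1] + x)
    breakpoints = []

    def recurse(s, e):
        if e - s < 2:
            return
        b = max(range(s + 1, e), key=lambda i: abs(uh_coefficient(prefix, s, i, e)))
        if abs(uh_coefficient(prefix, s, b, e)) > threshold:
            breakpoints.append(b)
            recurse(s, b)
            recurse(b, e)

    recurse(0, len(data))
    return sorted(breakpoints)
```

Because each coefficient is O(1) via prefix sums, the decomposition stays fast even at genome scale, which is the property the challenge wants to exploit.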
Benefit:
Implementing the unbalanced Haar technique for CNV analysis would provide a fast and robust tool for genomic research, particularly in the context of cancer genomics. This method's ability to quickly identify uniform genomic regions could significantly enhance the accuracy and speed of copy number analysis, benefiting both clinical and research settings.
Useful Tools/Packages/Software:
Submitter:
Karol Szlachta, CAB, St. Jude
Title:
Optimization of Nextflow Bioinformatics Pipeline Architecture for Enhanced HPC Performance
Challenge Summary:
The challenge focuses on optimizing Nextflow bioinformatics pipeline architecture and the configuration of high-performance computing (HPC) resources to improve stability, efficiency, and user experience. The core issues include the frequent failure of pipelines due to immature module integration, the need for fine-tuning module parameters for different scientific applications, and the effective allocation of HPC resources like memory and CPU across various modules. Addressing these challenges involves deepening the understanding of each bioinformatics component's biological implications, refining the top-level architecture of Nextflow workflows, and enhancing collaboration with HPC architects.
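Much of the resource-allocation tuning described above lives in the Nextflow configuration rather than in the workflow code itself. An illustrative fragment; the executor, label names, and numbers are assumptions, not an existing St. Jude configuration:

```groovy
// Illustrative Nextflow config; labels and values are assumptions.
process {
    executor      = 'lsf'
    errorStrategy = 'retry'   // absorb transient module failures
    maxRetries    = 2

    withLabel: 'alignment' {
        cpus   = 8
        memory = '32 GB'
        time   = '12h'
    }
    withLabel: 'variant_calling' {
        cpus   = 4
        memory = '16 GB'
    }
}
```

Keeping per-module resources in labeled scopes like these lets HPC architects tune memory and CPU without touching pipeline logic, which is exactly the collaboration the challenge calls for.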
Benefit:
Optimizing Nextflow pipelines and their execution on HPC clusters will directly benefit the scientific community by delivering more reliable, efficient, and user-friendly bioinformatics analyses. Improved pipeline design and resource management will not only enhance the computational performance but also ensure that the bioinformatics analyses are more reproducible and scalable. This optimization is crucial for complex analyses such as whole-genome sequencing, isoform sequencing, and variant calling, which are integral to advancing our understanding of genomic complexities in various biological contexts.
Useful Tools/Packages/Software:
Submitter:
Wenchao Zhang, CAB, St. Jude
Title:
Enhancing DRAGEN Platform Usability with Large Language Models
Challenge Summary:
The challenge is to utilize a large language model (LLM) to enhance the usability of the Illumina DRAGEN platform, which is renowned for its high-speed secondary analysis of next-generation sequencing (NGS) data. DRAGEN uses a field-programmable gate array (FPGA) for hardware acceleration and features a complex command line interface with numerous configuration options. The goal is to develop an LLM that can generate tailored command line templates for different genomic analyses, suggest optimal parameters based on input data characteristics and intended research goals, and explain the impact of various settings. This tool aims to simplify the learning curve and enhance the efficiency of using DRAGEN, allowing researchers to concentrate more on analyzing results and less on operational complexities.
Benefit:
Implementing this solution would make the DRAGEN platform more accessible to users by providing intuitive command generation and parameter optimization guidance. It would reduce the time researchers spend navigating the platform's extensive documentation and trial-and-error process, facilitating quicker adoption and more effective use of DRAGEN's powerful genomic analysis capabilities. For the broader scientific community, this means accelerated research cycles, improved data analysis outcomes, and broader dissemination of DRAGEN's advanced analytical capabilities.
Useful Tools/Packages/Software:
Submitter:
Jose Pastrana, Computational Biology, St. Jude
Title:
Integrating Spatial Analysis Tools in Health Risk Assessment and Emergency Response Planning
Challenge Summary:
The challenge focuses on leveraging advanced spatial analysis tools to enhance health risk assessments and improve emergency response strategies in healthcare. Despite the innovation in spatial analysis technology, there is a significant underutilization in areas critical to public health, such as emergency response planning and patient outreach. Key issues include inadequate resource allocation, limited precision in targeting at-risk populations, reliance on outdated data, neglect of spatial considerations, failure to account for spatial interactions between variables, and inefficient deployment of resources.
Benefit:
Integrating spatial analysis into these areas will enable more precise and effective public health interventions, leading to better health outcomes. Solutions developed from this challenge could transform spatial epidemiology research, enhance evidence-based policy making, and promote healthcare access and equity. Specifically, they would enable more targeted and efficient deployment of healthcare resources, improved planning and execution of emergency response efforts, and overall, a more data-driven approach to public health.
Useful Tools/Packages/Software:
Submitter:
Siddhant Taneja, Epidemiology & Cancer Control, St. Jude
Title:
Optimizing High-Performance Computing Resource Allocation Through Statistical Analysis
Challenge Summary:
The challenge involves a collaborative effort between data scientists and high-performance computing (HPC) administrators at research institutions to analyze and optimize the use of computational resources. The goal is to develop a statistical model that can provide insights into the current usage patterns and inefficiencies in HPC resource allocation. By leveraging job submission data, which computer administrators have access to but may lack the expertise to analyze, and the analytical skills of researchers, the project aims to create a model that can guide both groups in making more informed decisions about resource management.
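As a toy example of the kind of statistic such a model would start from, here is a dependency-free Python sketch computing requested-versus-used memory efficiency from accounting records. The field names are assumptions about the scheduler's log format, not any particular scheduler's schema:

```python
def memory_efficiency(jobs):
    """Summarize requested vs. actually used memory from accounting records.

    `jobs` is a list of dicts with 'req_mem_gb' and 'used_mem_gb' (as a
    scheduler's accounting log would provide). Returns the overall
    used/requested ratio and the total over-requested gigabytes.
    """
    requested = sum(j['req_mem_gb'] for j in jobs)
    used = sum(j['used_mem_gb'] for j in jobs)
    return {
        'efficiency': used / requested if requested else 0.0,
        'wasted_gb': requested - used,
    }
```

Low efficiency ratios point directly at the over-requesting that inflates queue times, giving both administrators and researchers an actionable number.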
Benefit:
A solution to the proposed challenge will enhance the efficiency of HPC resource utilization, leading to reduced waiting times for job processing and increased productivity for researchers. This will not only benefit individual labs at St. Jude by improving the speed and effectiveness of their research computations but also contribute to a more sustainable and efficient use of resources across the entire scientific community. By optimizing resource allocation and job submission strategies, the model will help in maximizing the throughput and minimizing the operational costs of HPC facilities.
Useful Tools/Packages/Software:
Submitter:
Ziang Zhang, IS, St. Jude
Title:
statGPT: An R Package for Automated Statistical Reporting
Challenge Summary:
The challenge is to develop "statGPT," an R package designed to automatically generate publication-quality narratives, tables, figures, methods sections, and bibliographies from statistical analyses commonly used in scientific research. The package will integrate and expand upon features from the Simple Biostat Program (SBP) and the rctrack package. statGPT will transform the outputs of standard statistical functions (e.g., t.test, glm) into structured, narrative paragraphs that include references to figures and tables. It will also compile these elements into a report formatted similarly to a scientific manuscript, ready for journal submission.
Benefit:
statGPT will significantly enhance scientific productivity by automating the conversion of raw statistical data into comprehensive, ready-to-publish document formats. This will save researchers considerable time and effort typically spent on manually interpreting and presenting statistical results. The package will ensure that statistical reporting is consistent, accurate, and adheres to high standards of scientific communication, benefiting both individual researchers and the broader scientific community by facilitating quicker, clearer dissemination of research findings.
Useful Tools/Packages/Software:
Submitter:
Stanley B Pounds, Biostatistics, St. Jude
Title:
Integrating SAM with ImageJ for Enhanced Medical Image Segmentation
Challenge Summary:
The challenge involves integrating Meta AI's Segment Anything Model (SAM), a leading image segmentation tool, with the National Institutes of Health's ImageJ software (specifically, the FIJI distribution). SAM is renowned for its segmentation capabilities but currently lacks features for downstream analysis of segmented regions of interest (ROIs). By creating a plugin for ImageJ that incorporates SAM, researchers could not only segment medical and research images effectively but also perform extensive analyses on these images. This includes collating ROIs, and providing data on shape descriptors, mean intensity, and other crucial metrics, which are essential for detailed image analysis in medical and scientific research.
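The downstream measurements become straightforward once SAM has produced a mask. A minimal Python sketch of per-ROI measurements of the kind ImageJ reports (area, centroid, mean intensity); the plugin itself would be written against the ImageJ/FIJI API:

```python
def roi_measurements(mask, image):
    """Basic per-ROI measurements from a binary mask over an intensity image.

    `mask` is a 2-D list of 0/1 marking the ROI; `image` is a 2-D list of
    pixel intensities of the same shape. Returns area (pixel count),
    centroid (row, col), and mean intensity inside the ROI.
    """
    points = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    area = len(points)
    return {
        'area': area,
        'centroid': (sum(r for r, _ in points) / area,
                     sum(c for _, c in points) / area),
        'mean_intensity': sum(image[r][c] for r, c in points) / area,
    }
```

Collating one such record per SAM-generated ROI yields the results table that downstream statistical analysis needs.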
Benefit:
Integrating SAM with ImageJ would substantially elevate the capabilities of both tools, allowing researchers to conduct more precise and comprehensive image analyses. This would particularly benefit labs in fields such as developmental neurobiology, where detailed image segmentation and analysis are crucial. The scientific community would gain from having a powerful, open-source tool that facilitates both advanced segmentation and robust analysis, streamlining workflows and enhancing the quality of research outputs.
Useful Tools/Packages/Software:
Submitter:
Jason Vevea, Developmental Neurobiology, St. Jude
Title:
Accelerating Image Data Visualization from Acquired Raw Data
Challenge 1: NGFF Data Preparation and Object Storage/Sharding
Challenge Summary:
The objective is to efficiently generate and store intermediate Next-Generation File Format (NGFF) files to facilitate swift and effective data retrieval for visualization tools. A key issue involves handling the large volume of NGFF files created due to data being stored in default voxel sizes of 64x64x64. This sub-challenge focuses on optimizing the storage and retrieval process through the use of object storage solutions or sharding techniques to manage and mitigate the file count, enhancing the performance of both read and write operations that are crucial for subsequent visualization stages.
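The scale of the file-count problem is easy to quantify: the chunk count is the product of ceil(dim/64) over the axes, and sharding divides that by the shard capacity. A small Python sketch (the shard capacity of 4096 is an illustrative choice, not a format constant):

```python
import math

def chunk_count(shape, chunk=(64, 64, 64)):
    """Number of chunk files a volume of the given shape produces."""
    count = 1
    for dim, c in zip(shape, chunk):
        count *= math.ceil(dim / c)
    return count

def shard_count(n_chunks, chunks_per_shard=4096):
    """Grouping chunks into shards (as in sharded Zarr/Neuroglancer
    layouts) divides the file count by the shard capacity."""
    return math.ceil(n_chunks / chunks_per_shard)
```

A modest 2048³ volume already yields 32,768 chunk files at one resolution level, before the multiscale pyramid multiplies it further, which is why sharding or object storage is needed.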
Useful Tools/Packages/Software:
Challenge 2: Desktop Image Data Visualization using Napari and IDMS
Challenge Summary:
This sub-challenge involves the development of a desktop application using Napari for enhanced visualization, annotation, and analysis of multi-dimensional image arrays. The goal is to integrate Napari with the Image Data Management System (IDMS), which is under development at the Center of Bioimage Informatics (CBI). This integration will allow Napari to access registered images and their metadata through IDMS's REST API, providing a seamless user experience for desktop-based image data analysis.
Useful Tools/Packages/Software:
Challenge 3: Web Image Data Visualization using Neuroglancer
Challenge Summary:
The focus here is on enabling web-based visualization of volumetric data using Neuroglancer. The challenge involves setting up an HTTP-compatible data source that can work with Neuroglancer to visualize data (potentially utilizing the sharded or object-stored data generated in Challenge 1). This task requires a combination of frontend development and backend integration to ensure that Neuroglancer can effectively access and display the data hosted on local or network storage systems.
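For Neuroglancer's built-in "precomputed" data source, the backend mostly needs to answer HTTP requests for chunks at predictable paths (and send CORS headers, since the viewer runs in the browser). A Python sketch of the chunk key layout that source requests:

```python
def chunk_key(scale_key, offset, size):
    """Build the URL path Neuroglancer's 'precomputed' data source requests
    for one chunk: '<scale>/<x0>-<x1>_<y0>-<y1>_<z0>-<z1>'."""
    (x, y, z), (sx, sy, sz) = offset, size
    return f'{scale_key}/{x}-{x + sx}_{y}-{y + sy}_{z}-{z + sz}'
```

Any static file server that lays files out under these keys (alongside the dataset's `info` JSON) and allows cross-origin reads can feed Neuroglancer, so the sharded store from Challenge 1 slots in behind this naming scheme.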
Useful Tools/Packages/Software:
Benefit:
Implementing these solutions will significantly streamline the process from raw data acquisition to visualization, reducing dependency on external cloud storage and enhancing data governance. The in-house development of these visualization tools allows for rapid, direct interaction with image data, facilitating immediate and effective scientific analysis and exploration, which benefits not only a specific lab or group but the broader scientific community engaged in similar research activities.
Submitter:
Nishant Shakya, CBI and IS, St. Jude
Title:
Integrated Cell Segmentation and Tracking from Timelapse Microscopy
Challenge Summary:
The challenge is to develop an integrated solution for identifying and tracking cells in timelapse microscopy images, which encompasses handling cell movements in and out of focus or frame, cell division, and cell death. The solution aims to integrate cell segmentation and tracking into a single, efficient, and scalable system. Currently, tools like Cellpose for cell segmentation and Trackmate for cell tracking are used separately. Cellpose can run in parallel using GPU acceleration, whereas Trackmate is GUI-based, does not utilize GPU, and has limited parallelization capabilities. The goal is to combine these strengths and overcome the limitations to produce a cohesive tool that enhances the accuracy and speed of processing timelapse microscopy data.
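The linking half of the problem can be prototyped with greedy nearest-neighbor assignment between consecutive frames (Trackmate's LAP tracker is a more principled version of the same idea, and this formulation parallelizes per frame pair). A dependency-free Python sketch:

```python
import math

def link_frames(prev, curr, max_dist=20.0):
    """Greedily link cell detections between consecutive frames by centroid
    distance. Unmatched current detections start new tracks (division or
    cells entering the frame); unmatched previous ones end (death or exit).

    `prev` and `curr` are lists of (x, y) centroids; returns a list of
    (prev_index, curr_index) links.
    """
    pairs = sorted(
        ((math.dist(p, c), i, j) for i, p in enumerate(prev)
                                 for j, c in enumerate(curr)),
        key=lambda t: t[0],
    )
    links, used_prev, used_curr = [], set(), set()
    for d, i, j in pairs:
        if d > max_dist:
            break  # remaining pairs are even farther apart
        if i not in used_prev and j not in used_curr:
            links.append((i, j))
            used_prev.add(i)
            used_curr.add(j)
    return links
```

Feeding Cellpose's per-frame masks (reduced to centroids) through a linker like this, batched on the same GPU pipeline, is the kind of integration the challenge describes.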
Benefit:
Creating an integrated solution for cell tracking and segmentation will significantly enhance the efficiency of analyzing timelapse microscopy images, particularly in studies like the dynamics of phase-separated organelles inside cells. This improvement will reduce the computational time currently required, enabling more rapid and extensive analysis. The broader scientific community will benefit from having access to a tool that allows for the objective and efficient analysis of large volumes of timelapse images, facilitating more detailed and faster biological insights.
Useful Tools/Packages/Software:
Submitter:
Tapojyoti Das, Structural Biology, St. Jude