Changes between Version 3 and Version 4 of NeteSOW


Timestamp: Aug 20, 2020, 1:43:52 PM
Author: Tom Goddard
Comment: (none)
--

  • NeteSOW

v3 → v4

Line 1 changed:
  v3: = NIAID SOW Proposed deliverables (based on 2 FTE) =
  v4: = NIAID SOW Proposed deliverables (based on 2 FTE), January 2020 =

Line 3 changed:
  v3: Original [https://docs.google.com/document/d/1YY4Gp-2HSKM3SKT-kBkrUm92gCu6ocptIbgMHk15Qjs/edit# Google doc version]
  v4: Original [https://docs.google.com/document/d/1YY4Gp-2HSKM3SKT-kBkrUm92gCu6ocptIbgMHk15Qjs/edit# Google doc version].  Earlier [wiki:NeteSOWDraft SOW draft].

Line 5 (unchanged context): 1. Extensions to ChimeraX to support the NIH 3D pipeline

Lines 53–192, shown below, appear only in v3 (removed in v4):
== NIAID SOW draft, Dec 2019 ==

Group meeting Dec 19, 2019

R01 ChimeraX proposal
~2 FTE

1. NIH 3D pipeline using ChimeraX (Eric) ~.8
   * The NIH 3D Print Exchange pipeline, which uses Chimera, will be migrated to the new NIH 3D pipeline for both printing and VR.  The new pipeline will use ChimeraX.  The current scripts are those named Chimera*py in their GitHub site.
   * The current Chimera pipeline uses these features that are not in ChimeraX (list from Elaine); unclear whether any are “must haves”:
      * PubChem fetch
      * VRML export - Meghan’s 2016 email said of X3D and VRML “we only need X3D”
      * Import of fchk, gro, mol, sdf files
      * PDB biounit fetch
      * Combine structures, maybe not needed if only for molecular surfaces
      * Coulombic coloring, requires charge assignment
   * Export of other formats (possibly Collada, glTF (ASCII), FBX, OBJ with texture colors), X3D enhancements (see the export sketch after this list).
   * Sequence conservation coloring
   * Fetch sequence annotations (UniProt, domains, disease-associated mutations)
   * Read and visualize segmentation models.
2. Human Biomolecular Atlas Program (HuBMAP) multiscale visualization (Tom) ~.1
   * Models from medical imaging, 3D light microscopy, 3D electron microscopy
   * Connection with Nils Gehlenborg at Harvard, HIVE
   * Tom G sent email to Phil Cruz to ask for more details about BCBB’s role in HuBMAP
   * Oct 2019 Nature overview article on HuBMAP:
      * “focus of HuBMAP [is] on spatial molecular mapping”
      * “We anticipate that the first round of data will be released in the summer of 2020”
      * “HuBMAP, in collaboration with other NIH programs, plans to hold a joint meeting with the Human Cell Atlas initiative to identify and work on areas of harmonization and collaboration during the spring of 2020.”
      * “To ensure that browsers and visualization tools from HuBMAP are valuable, the consortium will work closely with anatomists, pathologists, and visualization and user experience experts, including those with expertise in virtual or augmented reality.”
      * “Ultimately, we hope to catalyse novel views on the organization of tissues, regarding not only which types of cells are neighbouring one another, but also the gene and protein expression patterns that define these cells, their phenotypes, and functional interactions. In addition to encouraging the establishment of intra- and extra-consortium collaborations that align with HuBMAP’s overall mission, we envision an easily accessible, publicly available user interface through which data can be used to visualize molecular landscapes at the single-cell level, pathways and networks for molecules of interest, and spatial and temporal changes across a given cell type of interest. Researchers will also be able to browse, search, download, and analyse the data in standard formats with rich metadata that, over time, will enable users to query and analyse datasets across similar programs.”
3. Segmentation capabilities for medical imaging, 3D light microscopy, 3D electron microscopy (Tom) ~.25
   * Interactive SimpleITK use in ChimeraX (see the SimpleITK sketch after this list)
   * Allow loading, visualizing, creating, measuring and saving segmentations.
   * Support the new EMDB-SFF segmentation file format from the EM Databank
4. Medical imaging (Tom) ~.25 - ~.5
   * Metadata browser for DICOM files (see the pydicom sketch after this list)
   * Support radiologist collaborator needs
   * Measuring changes over time: alignment, volume measurement, difference coloring
   * Enhanced lighting for improved perception of details.
5. Virtual reality (Conrad) ~.25 - ~.50
   * Improve multi-person VR beyond the work accomplished in SOW #2
      * Any participant can bring in a new model
      * Audio chat, third-party or built-in solution
      * Localizable connection server solution
   * Recording VR sessions (for VR playback? Or conventional video playback?)
6. Drug docking (Conrad)
   * VR UI
   * Turn surface display on and off
   * Show hydrogen bonds - for induced-fit docking results? (already have this and clashes for the single-receptor, multi-ligand situation)
   * Local minimization (use OpenMM; see the minimization sketch after this list), needs ligand parameterization, DockPrep/Antechamber
7. Documentation and training materials for the above new capabilities (Elaine) ~.4
   * NIH 3D pipeline
      * Provide ongoing advice on ChimeraX commands due to syntax and parameter differences from Chimera (lighting, cartoon style, etc.)
   * Segmentation capabilities
   * Medical imaging
   * Virtual reality
8. Administration ~.1
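
To make item 1 concrete: the fetch-render-export step of the pipeline could reduce to a short ChimeraX Python script along the lines of the sketch below. This is a minimal sketch, not the NIH 3D pipeline itself; the entry ID, styling commands, and output names are placeholders, and it assumes the glTF and X3D exporters cover what the pipeline needs.

{{{#!python
# Minimal fetch-render-export sketch for an NIH 3D-style job (item 1 above).
# Assumes it runs inside ChimeraX (e.g. "chimerax --nogui --script export3d.py"),
# where a "session" variable is provided to scripts.  Styling and file names
# are placeholders, not the pipeline's real parameters.
from chimerax.core.commands import run

def export_entry(session, pdb_id, out_prefix):
    run(session, f"open {pdb_id}")              # fetch the entry from the PDB
    run(session, "hide atoms; show cartoons")   # simple illustrative styling
    run(session, "lighting soft")
    run(session, f"save {out_prefix}.glb")      # binary glTF for web/VR viewers
    run(session, f"save {out_prefix}.x3d")      # X3D, the format the notes single out
    run(session, "close")                       # close all models before the next job

export_entry(session, "2bbv", "2bbv_print")     # example entry ID
}}}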
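
For the segmentation work in item 3, the non-interactive core that ChimeraX would wrap might look like the SimpleITK sketch below. It assumes a single-channel 3D image in a format SimpleITK can read, a bright foreground, and illustrative filter parameters; the EMDB-SFF output mentioned in item 3 would need separate I/O on top of this.

{{{#!python
# Rough SimpleITK segmentation pass: smooth, threshold, label, measure, save.
# Paths and parameters are illustrative only.
import SimpleITK as sitk

def segment_volume(in_path, out_path):
    img = sitk.Cast(sitk.ReadImage(in_path), sitk.sitkFloat32)
    smoothed = sitk.CurvatureFlow(img, timeStep=0.125, numberOfIterations=5)
    mask = sitk.OtsuThreshold(smoothed, 0, 1)        # bright foreground -> 1, background -> 0
    labels = sitk.ConnectedComponent(mask)           # one label per connected region
    stats = sitk.LabelShapeStatisticsImageFilter()   # per-segment measurements
    stats.Execute(labels)
    for lab in stats.GetLabels():
        print(lab, stats.GetPhysicalSize(lab))       # segment volume in physical units
    sitk.WriteImage(labels, out_path)                # save the label map

segment_volume("tomogram.nii.gz", "tomogram_labels.nii.gz")   # placeholder file names
}}}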
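
For the DICOM metadata browser in item 4, the underlying read could start from something like this pydicom sketch. The handful of fields shown, the .dcm suffix, and the directory layout are assumptions about what a browser would surface first.

{{{#!python
# Group DICOM files by series and pull a few header fields without reading
# pixel data - roughly the data a metadata browser (item 4) would display.
from pathlib import Path
import pydicom

def summarize_series(dicom_dir):
    series = {}
    for path in Path(dicom_dir).rglob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)   # header only
        series.setdefault(ds.SeriesInstanceUID, []).append(ds)
    for uid, slices in series.items():
        first = slices[0]
        print(first.get("PatientID", "?"),
              first.get("Modality", "?"),
              first.get("SeriesDescription", "?"),
              f"{len(slices)} files")

summarize_series("/data/ct_study")   # hypothetical study folder
}}}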
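
For the local minimization in item 6, a bare OpenMM sketch is below. It covers only a receptor that a standard Amber force field already parameterizes, which is exactly why the notes flag ligand parameterization (DockPrep/Antechamber) as the open piece; the force field and file names are placeholders.

{{{#!python
# Bare local-minimization sketch with OpenMM (item 6).  A docked ligand would
# first need its own parameters (the Antechamber/DockPrep question above).
import openmm
from openmm import app, unit

def minimize(pdb_path, out_path):
    pdb = app.PDBFile(pdb_path)
    ff = app.ForceField("amber14-all.xml", "implicit/gbn2.xml")   # implicit solvent
    system = ff.createSystem(pdb.topology, nonbondedMethod=app.NoCutoff)
    integrator = openmm.LangevinMiddleIntegrator(
        300 * unit.kelvin, 1 / unit.picosecond, 0.002 * unit.picoseconds)
    sim = app.Simulation(pdb.topology, system, integrator)
    sim.context.setPositions(pdb.positions)
    sim.minimizeEnergy(maxIterations=500)                         # local minimization only
    state = sim.context.getState(getPositions=True)
    with open(out_path, "w") as f:
        app.PDBFile.writeFile(pdb.topology, state.getPositions(), f)

minimize("receptor.pdb", "receptor_min.pdb")   # placeholder file names
}}}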

== NIAID SOW Ideas – Dec 2019 ==

Initial discussion of next SOW goals at the Dec 5, 2019 group meeting.  Followed up Dec 12, 2019 with a video conference with Phil Cruz, Meghan McCarthy, and Darrell Hurt.

* MSC/NIAID suggestions
   * Migration of the NIH 3D Print Exchange to NIH 3D (use ChimeraX)
   * Updates to ChimeraX to support the pipeline (Elaine’s list of missing pieces)
   * See below (scripts are all on GitHub)
   * Extended to VR/AR as well as 3D printing
      * Need to be able to handle the differences
   * Allow uploads of ChimeraX session files?
   * glTF text (ASCII) form support
   * Learning about the Python interface to ChimeraX
   * Additional file types (Collada, etc.)
* previous SOW Google doc with crossouts
* previous SOW discussion on the ChimeraX wiki
* feature list in the DICOM viewer notes (some are done or partially done, some are linked to tickets)
* many possibilities relating to segmentation
   * see especially the ChimeraX wiki link above
   * investigate use of machine learning to identify and annotate features
   * They are extending the ITK toolkit to microscopy images
   * Focus has been on medical imaging
* TomG: try to steer more toward data types with which we are more familiar, e.g. light microscopy and EM
   * Particularly as an overlap with SimpleITK uses for segmentation
* better support for multi-person VR sessions, hard to make reliable
   * firewall issues
   * VPN
   * AWS - rendezvous service? Conrad will investigate.
   * if our own service, we don't want everybody on it
   * investigate audio services (possibly integrated)
   * enable any participant to load new data without deleting others' data
   * material for multi-person VR sessions for remote training
* advancing VR/ViewDockX capabilities
   * Dock Prep
      * Antechamber?
   * anything else?
   * A lot of general interest.  Great VR use case.
   * Turn surface display on and off
   * Hydrogen bonding
   * Local minimization (use OpenMM)
      * How do we get force-field parameters for ligands?
   * Kudos for the tape measure tool
* 3D printing: enable porting the workflow from Chimera to ChimeraX
   * any tie-in to segmentation?
   * are they still interested in pursuing this?
* Tom F: avoid overlap with the NIGMS grant
* Sequence analysis/MAV?
   * coloring by conservation (conserved regions may be better vaccine targets, e.g. a universal flu vaccine, as in the PBS VR segment from June 2019)
   * fetch annotations or info (UniProt, domains, disease-associated mutations, ...); see the UniProt fetch sketch after this list
* hierarchy & smart level of detail in large and complex data and segmentations
   * segmentation browser
   * Support transitioning easily between different levels of detail
   * Google-Earth style for the models
   * Look at potential synergy with HuBMAP
* DICOM enhancements
   * DICOM browser showing data hierarchy, metadata
      * Show the 5 most important things, but support showing more metadata at the user's request
   * better/more presets for different tissue types in DICOM data
   * provide “smart” initial coloring (bones, organs, tissue types)
   * provide photorealistic lighting to visualize 3D images and volumes
* Wendell Lim's engineered cells - infectious disease tie-in? Opportunity to get NIAID more interested in light (optical) microscopy. Max Krummel.
* recording VR sessions (augmented reality, etc.)
* HTML documentation for any of these new features, of course
* Educational uses of ChimeraX and VR (e.g. undergraduate education -- multi-user especially)
   * Would an animation tool fit here?
* Other types of measurement (coupled with segmentation)
   * Volume measurement
   * Comparison capabilities (e.g. progression over time)
* Other sources of collaborators for medical imaging VR
   * Dmitri, other VA collaborators, Viv in addition to Beth
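
The annotation-fetch idea under Sequence analysis/MAV could start from UniProt's REST interface; a sketch using the current rest.uniprot.org JSON endpoint follows. The endpoint and JSON field names are assumptions about that service, and the accession is only an example. Mapping the returned ranges onto ChimeraX residue attributes for coloring would be the ChimeraX-side half of the work.

{{{#!python
# Fetch per-residue annotations (domains, variants, ...) for a UniProt entry,
# as a starting point for the Sequence analysis/MAV idea above.
import requests

def fetch_uniprot_features(accession):
    url = f"https://rest.uniprot.org/uniprotkb/{accession}.json"
    entry = requests.get(url, timeout=30).json()
    for feat in entry.get("features", []):
        loc = feat.get("location", {})
        print(feat.get("type"),
              loc.get("start", {}).get("value"),
              loc.get("end", {}).get("value"),
              feat.get("description", ""))

fetch_uniprot_features("P01308")   # human insulin, as an example
}}}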

Other crossouts from previous SOW:
* select and/or highlight voxels in volumetric data representations. Save the selection, and apply visualization commands to just the selection.
* provide “smart” initial coloring (bones, organs, tissue types)
* provide photorealistic lighting to visualize 3D images and volumes
* additional material may be presented in video format, e.g., “How-to” screen capture videos, mixed reality video capture showing person and data for tutorials and for explaining research results to the public