[chimera-dev] Help with "looping through PDB IDs" script
pett at cgl.ucsf.edu
Thu Apr 17 15:05:45 PDT 2014
On Apr 17, 2014, at 2:11 PM, Navya Shilpa Josyula <njosyu2 at uic.edu> wrote:
> Now I am trying to write CASTp information for each of my proteins into a separate file. As you suggested in an earlier email, the processCastpID function is in the gui.py file, not in the __init__.py file. Hope I am not missing anything here. As I understand it, this function fetches the 4 CASTp files, of which I would need only the ".poc" and ".pocInfo" files. From these two files I want to write only the atom list, pocID, and MS_Volume data into a single file, for all 400 proteins in my dataset. Is there a link or any script available for such a requirement?
There are some fine points that I missed in my answer yesterday, and the situation is further complicated by your use of .pdb1 files instead of the "normal" PDB entries.
For one thing, if you are going to use the .pdb1 files, then you will have to run CASTp yourself on each one and then process the results. In that case you might as well also parse the .poc and .pocInfo files yourself to determine which pocket each atom belongs to (the next-to-last field in the .poc file) and the volume of that pocket (listed in the .pocInfo file).
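A rough sketch of that parsing, assuming whitespace-separated columns with the pocket ID in the next-to-last field of each .poc ATOM/HETATM record as described above; the .pocInfo column positions (pocket ID, then volume in the last field) are guesses, so they are parameters you can adjust to the real layout:

```python
def parse_poc(lines):
    """Map atom serial number -> pocket ID from .poc ATOM/HETATM records."""
    atom_pocket = {}
    for line in lines:
        fields = line.split()
        if len(fields) < 3 or fields[0] not in ("ATOM", "HETATM"):
            continue
        serial = int(fields[1])       # atom serial number
        pocket = int(fields[-2])      # next-to-last field: pocket ID
        atom_pocket[serial] = pocket
    return atom_pocket

def parse_pocinfo(lines, id_field=1, vol_field=-1):
    """Map pocket ID -> pocket volume.  The field positions are
    assumptions, exposed as parameters so you can match them to the
    actual .pocInfo layout."""
    volumes = {}
    for line in lines:
        fields = line.split()
        if len(fields) < 2 or not fields[id_field].isdigit():
            continue  # skip headers and malformed lines
        volumes[int(fields[id_field])] = float(fields[vol_field])
    return volumes
```

With those two dictionaries, one output line per atom can combine the atom's pocket ID with that pocket's volume, appended to a single file across all 400 proteins.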
The main point I missed in my reply, which may now be moot because of the .pdb1 issue, is that processCastpID() builds its own structure. Therefore you would not open the PDB first; instead you would return the structure (along with the cavities list) from that method and make the structure available in Chimera with:
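A minimal sketch, assuming Chimera's standard openModels interface for registering a Molecule you built yourself (the structure/cavities return values are the modification described above, not the stock function signature):

```python
import chimera

# assume processCastpID() has been modified to return the Molecule it
# builds along with the cavity list, per the suggestion above
structure, cavities = processCastpID(castp_id)

# register the Molecule so it behaves like a normally opened model
chimera.openModels.add([structure])
```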
and then proceed with selecting the right residues, using currentResidues() to list them, and so on. If you didn't want to process the .pdb1 CASTp files yourself (after running CASTp on the .pdb1), you could use processCastpFiles() to get the cavity list and structure and proceed as I just outlined. Note that processCastpFiles() is in __init__.py, unlike processCastpID(), as you found.
> Again, as mentioned in my last email, since my output files will be huge in size, will I be able to write my files directly to a database table in SQL server?
I'm not much of an expert on this, but this page may help: DatabaseInterfaces - Python Wiki