Talk:AlphaFold



More resources


Parking some of these here:

  • AlphaFold's presentation at CASP14 was at 15:50 - 16:20 Tuesday (1 December; UK time). The session closed at 18:30, so reports may emerge soon.
    "John Jumper's bracing recap of AF2's pipeline at #CASP14 revealed major tweaks, adding structure-aware physics & geometry to the iterative NN design in an end-to-end learning of woven chain through a coalescing 'gas of rigid 3D bodies" [1] (3 Dec)
  • All CASP sessions are being video-recorded, but it is not confirmed whether all will be released. As of Thursday 03, the first-day sessions are available via the CASP website. As of Friday 04, a video of the talks by the Baker lab and the Zhang group is now up, but not AlphaFold, which followed them.
  • The AlphaFold 2018 presentations were uploaded, at [2] and [3]
  • "How Neural Networks took over CASP" gif -> pics NNs eat the world :-)
  • AF2 team *did* show some pics on how predicted contact maps evolved during AF2 convergence. (tweet)

reactions

(tweet: "Is protein folding 'solved'? Not quite. I spoke to a range of researchers about what #AlphaFold can and can't do - and why existing experimental techniques to understand proteins will still be needed")
  • CC-BY resource on protein sidechain rotational statistics and tweet
  • Twitter thread [9] including comment on T1040-D1, where assessors said there was a "pre-evaluation split". Tweeter says "the ground truth may contain experimental artifacts - e.g., many hydrophobic res are exposed". Also finds it extraordinary that AF2 could predict disulphide bonds. Cautions that "overtraining on static structures doesn't automatically solve the real-world problems that primarily needs *flexibility* prediction, such as drug discovery, protein misfolding/aggregation, antigen-antibody binding, etc."
OK. So the issue with T1040-D1 is that it comes from a very big protein (doi:10.1038/s41586-020-2921-5) which naturally splits into parts that then stick together. The sequence of only one of these parts was given for prediction. This is why, as labelled in the picture in the tweet, part of the structure contains an alpha helix with a run of exposed very hydrophobic residues. Normally, in aqueous solution, this would be very unphysical. In fact it is the burying of hydrophobic residues into the interior of the protein that, in free-energy terms, drives the whole folding process. The explanation here is that these hydrophobic residues are actually where this part of the protein sticks to its neighbour. Jheald (talk) 11:02, 6 December 2020 (UTC)

some discussion

Unfortunately, I do not have access to Business Insider, but speaking about the original tweet by Mike Thompson, yes, he is right: without solving the problem of high-precision refinement, one cannot talk about solving the protein folding problem. There is a good reason why such refinement is treated as a separate category at CASP. An additional problem here is the existence of multiple structures of flexible loops, and the overall problem of protein flexibility. People can determine a number of highly divergent structures of the same protein, especially if the protein is transmembrane - can AlphaFold reproduce the whole spectrum? High precision plus reproducing the flexibility is a must for ligand docking/drug design. This is also the reason one cannot consider any experimental structure as "the truth" like Mike does, and his comment about "10-20 times worse" is puzzling. My very best wishes (talk) 19:42, 3 December 2020 (UTC)
@My very best wishes: You can read the Business Insider piece by clicking "view source" & scrolling down past all the javascript. But there's not a huge amount there. Jheald (talk) 19:58, 3 December 2020 (UTC)Reply
Thank you! It does look like a huge step forward. As soon as AlphaFold-2 is available to the community as source code (and/or as a working public webserver), it can be independently re-tested and used in a variety of ways by others. Just for starters, I would expect AlphaFold to fail at predicting protein structures with intertwined association of subunits [10], but it would be interesting to see what it produces in such cases. I hope that the journal where they publish will make the availability of the software a condition of publication. This is needed: (a) to make sure that the results are truly reproducible, and (b) to make this method widely used, exactly as it deserves. In some way, a method does not exist until it is publicly available. My very best wishes (talk) 20:53, 3 December 2020 (UTC)
Yes, this is all exciting. According to this, the group is now focusing on calculations of protein complexes (which should also cover multi-domain proteins). Indeed, that is exactly what they should do, and it is straightforward using the very same approach. That would immensely expand the applicability of AlphaFold. I still wonder what they would get for proteins with multiple structures, like transmembrane transporters. My very best wishes (talk) 16:36, 4 December 2020 (UTC)
@Jheald. Most links above are just opinion pieces. This could be an important achievement, but the following is needed: (a) a peer-reviewed paper about AlphaFold-2 should be published; (b) the assessment at CASP14 should be published; and (c) most importantly, the method should be made available to the wider scientific community (as a public webserver or source code), so that other people can check how the method actually works in real life (CASP is not the last word here) and be able to use it.
  • Just to be completely clear: if AlphaFold-2 were publicly available and easy to use, people (me included) could run a large number of specific tests where this method would fail. Such tests would include linear peptides, proteins with multiple alternative structures (there are hundreds of such cases in the PDB), certain types of protein complexes, single sequences and small or questionable alignments as input, "intrinsically unfolded" proteins, etc. This might be one of the reasons why the authors of AlphaFold said they are not going to make it publicly available any time soon. My very best wishes (talk) 19:58, 5 December 2020 (UTC)
@My very best wishes: I agree. For a 'solution' to have been reached scientifically, other teams need to be able to replicate what AF2 has done. And we need qualitative, not just functional, understanding of how AF2 is doing so well -- is there 'extra' information that AF2 is finding, or holding on to better? What different qualitative things (and representations) is it managing to isolate in different parts of its attention units? Are some of these novel? The solution is not really a 'solution' until we understand it. And when it is understood enough, then that may give us clues to techniques that may be able to capture the essence of those new qualitative things in ways that may be faster, more efficient, less compute-intensive.
Now that we know publicly what is possible, DeepMind might well want to try to protect its lead. (Or it may not: Demis Hassabis is a bit more complicated than that). So information may or may not be quick to emerge. It's a difficult horse for DeepMind to ride, because I suspect information like that above will emerge, and sooner rather than later, if not from DeepMind then from somebody else. (eg, per the links above, Facebook's team in New York also look to be all over this space; and others won't be far behind). I imagine the lawyers have been all over this, and DeepMind's paymasters (Google). But ultimately I suspect that this will end up being a win for all mankind, that no one player will be able to corner or monopolise (though many may do very well). So, how important are the scientific bragging rights to DeepMind, of being able to explain, qualitatively, how their system achieves what it does, before somebody else does? And how long, do they calculate, before somebody else will be able to do that, regardless of whatever they do? Those are the questions that will condition what DeepMind does next. Last time I think most of their method was in the presentations they gave at CASP. (Not 100% sure, haven't checked against the published papers). This time apparently their presentation was pretty sketchy. Personally I would expect that a more detailed description of their system will emerge, though maybe not in the CASP proceedings; plus information about what some of the internal states of the model may look like. My guess is that that might appear within a year, possibly sooner, possibly once DeepMind have already worked out where they want to go next. But who knows?
Yes, what I have collected above are mostly opinion pieces. Where the article still needs work (IMO) is in the lead and the reactions section. In particular: what DeepMind has achieved; how significant is it; how was it reacted to (including counter-reactions); what are the limitations, opportunities, and next steps in the field. All of this needs work. It's for this that I have been trying to gather reactions; also to try to be aware if there are angles or relevant things I may have missed; also for any more info that may be out there about what it does and what makes that work. Jheald (talk) 12:37, 6 December 2020 (UTC)Reply
You said: "Now we know publicly what is possible" [to accomplish using AlphaFold-2]. No, we do not know it even after CASP-14; that's the point. The method must be made publicly available to other people, who would run a lot of additional tests (as I noted above) and use it. Can the results by the AlphaFold team at CASP-14 be reproduced by other researchers who were not developers of this method? We do not know even that. John Moult said: "In some sense, the problem is solved". In what sense, and was it really solved? We do not know until the authors (or someone else) make a public web server or software which is actually available to the community (of course it is NOT solved, as anyone can see even from the presentations at CASP-14; see slides 16-18 here, for example). Also, people are always supposed to publish their research in such detail that others can easily reproduce their results; this includes software if needed. My very best wishes (talk) 22:16, 6 December 2020 (UTC)
The point is that even without AF2's code, or even their detailed algorithm, we know now that this sustained level of detail is *possible* to achieve by code somehow, at least for proteins of the kind that get into the PDB. That, plus the fact that other major AI groups are already deeply into research of this space, will just in itself be a huge spur to people to try things. Even if the information were not forthcoming from DeepMind (and I think it will be), it is only a matter of time before more groups achieve this capability, and it becomes common knowledge how, and why such results can be obtained. Jheald (talk) 23:15, 6 December 2020 (UTC)Reply
  • I am saying that (a) AlphaFold still has a lower precision than X-ray crystallography, and probably than solution NMR spectroscopy, based on the results of the CASP-14 assessment; and (b) it probably will NOT be able to provide the "sustained level of detail" (as you say) for a lot of PDB entries, such as linear peptides (e.g. studied by solution NMR under various conditions), proteins with multiple alternative structures (there are hundreds of them in the PDB), many protein complexes, proteins represented by single or a few sequences in a genome, and "intrinsically unfolded" proteins. It so happened that such cases were either not present in the CASP-14 protein set, or were actually present in significant numbers [11] but were not taken for testing of AlphaFold-2. My very best wishes (talk) 00:07, 7 December 2020 (UTC)
@My very best wishes: I've made a first stab at expanding the "Responses" section, which I hope now supports a corresponding section in the lead. Let me know what you think! If anything, perhaps the article (lead in particular) now perhaps undersells what AF2 achieved. If there are bits that aren't referenced enough, I'd be grateful if you could tag them with {{cn}} or {{refimprove}}, rather than outright removing them, so I can see if I can find anything better to support them. Alternatively, if you know some good additional refs for any of the points, do please add them! Best regards, Jheald (talk) 00:35, 8 December 2020 (UTC)Reply
At the end of the "Responses" section, one of the well-known researchers says that "Their predictions of structures are very good, as good as those produced experimentally." This is obviously a false statement, as follows from the CASP-14 presentations (consider slides 16-18 here as an example). It is very bad to include misinformation in WP without explicitly saying it is misinformation, even if it can be sourced. One must simply follow WP:MEDRS for extraordinary claims, such as that one. This is not even close to anything WP:MEDRS. I suggest removing any extraordinary claims that "the problem was solved" sourced to non-WP:MEDRS publications. My very best wishes (talk) 01:20, 8 December 2020 (UTC)

A few questions

This is an extraordinary success! Some questions:
  1. What distance cutoff did they use to calculate GDT_TS in the second Figure on this wiki page? Is it "the average result of cutoffs at 1, 2, 4, and 8 Å", as our page Global distance test says? Was it always calculated only for model #1, or for "best of five"? This should be a single cutoff of ~1.5 A and only for structure #1. "Best of five" is manipulation of the data.
  2. What do the circles in the 2nd Figure on this page show? I can only guess these are the best GDT_TS values of predictions for individual proteins obtained using all methods by all groups? What would a similar chart look like specifically for AlphaFold? That would be something more relevant to the subject of this page.
  3. Speaking about the procedure, and especially the energy functions, was everything described here, or are there other publications on methodology?
  4. Do I understand correctly that inputs (in particular for the current AlphaFold version) are sequence alignments, and the results would be different for individual sequences?
  5. Did they say that the source code will soon be available, or perhaps even a public web server to use AlphaFold? My very best wishes (talk) 02:57, 3 December 2020 (UTC)
  6. Why didn't they apply their method to protein refinement and multi-subunit proteins at CASP-14? If it cannot be applied to such tasks, then the claim of solving the protein folding problem is at best premature.
  7. Do I understand correctly that these are not predictions/outputs of the program, but predictions by a large group of people who used the program? There is a big difference.
  8. How do they assess the difficulty of targets? In theory, for a method like that, one should run the targets against the entire PDB using SSM server (https://www.ebi.ac.uk/msd-srv/ssm/) to determine the % of structural coverage/overlap with known PDB structures. This coverage should then be compared with GDT_TS produced by prediction. This is something trivial, but did anyone do just that?
  9. What were the results of testing such method not in CASP setting, but simply for known structures in the PDB? For example, can it reproduce both inward- and outward-facing structures of TM transporters? Or it could be tested for new protein folds, a few of which are released by the PDB every week. My very best wishes (talk) 16:36, 3 December 2020 (UTC)Reply
@My very best wishes: Thank you so much for your scrutiny of the article yesterday, and for your eyes on this. In answer to your Qs, as far as I can:
1. Yes. [12] : "GDT_TS - GlobalDistanceTest_TotalScore : GDT_TS = (GDT_P1 + GDT_P2 + GDT_P4 + GDT_P8)/4, where GDT_Pn denotes percent of residues under distance cutoff <= nÅ".
Most of the tables on their system allow you to choose whether to plot model #1 or best-of-the-five. Not sure (atm) which was plotted (it maybe says in the video of the presentation; and no doubt it would be stated in the discussion of the corresponding pics in the Proceedings volumes for previous CASPs).
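To make the scoring concrete, here is a minimal sketch of the GDT_TS formula quoted above (Python; an illustration, not CASP's actual evaluation code, and it assumes the model has already been optimally superposed on the experimental structure, which is the hard part of the real calculation):

```python
import numpy as np

def gdt_ts(dists):
    """dists: per-residue CA-CA distances (in A) between a superposed
    model and the experimental structure."""
    dists = np.asarray(dists)
    # GDT_TS = (GDT_P1 + GDT_P2 + GDT_P4 + GDT_P8) / 4,
    # where GDT_Pn is the percentage of residues within n A.
    return np.mean([np.mean(dists <= c) * 100 for c in (1, 2, 4, 8)])

# A model with half its residues within 1 A and the other half ~5 A off
# still scores 62.5 -- which illustrates the point below about mediocre
# models receiving modest-looking scores.
print(gdt_ts([0.5] * 50 + [5.0] * 50))   # 62.5
```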
2. I think so. Detailed data is available from the site under Results Home -> Table Browser, where you can select just for AlphaFold; similarly for 2018. It's all also available as raw data to download, in an R-friendly format. For 2020, I think AlphaFold placed best for all but a handful of proteins, so the big circles are almost all its own. For 2018, it placed first for about half. I think the small circles on the right of the chart are mostly AlphaFold's; as for those to the left, I don't know how far off it was.
3 For AlphaFold 1, the Nature paper I think is the most detailed. There is also a write-up in the special issue of Proteins about the conference (link on article page), and the two original presentations they gave at the 2018 conf (links above). The YouTuber linked above does quite a nice talk-through of the Nature paper.
4 I think what they do with AlphaFold 2 is take the sequence they are given, and use standard methods to pull matching DNA sequences (given in their conference abstract). They then appear to use a transformer network to assess how individually significant each one of these sequences is to each residue position, in a context-dependent way that depends on everything else the system currently thinks it knows (in particular, but not only, its latest best guess of the structure). This information, plus the internal latent variables found to determine it, is one of the feeds into the second transformer network, which estimates which residues have relationships (near and far) to which other residues. All of this feeds into the prediction module. I don't know whether they take the alignment between sequences as a given, but it wouldn't surprise me ('single differentiable end-to-end system') if it was also up for grabs during the convergence process for a particular structure.
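Purely as an illustration of that kind of loop (the actual architecture was not public at the time of this discussion), here is a deliberately toy numpy sketch of coupled MSA and pair representations iteratively updating each other before a structure readout; every shape, update rule, and name below is a hypothetical stand-in, not DeepMind's design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_seq, n_res, d = 8, 50, 16                 # MSA depth, residues, channels

msa = rng.normal(size=(n_seq, n_res, d))    # per-sequence, per-residue features
pair = rng.normal(size=(n_res, n_res, d))   # residue-residue relationship features

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

for cycle in range(4):                      # iterative refinement
    # 1. Pair features bias attention between residue positions in the MSA.
    bias = pair.mean(axis=-1)                                   # (n_res, n_res)
    att = softmax(msa @ msa.transpose(0, 2, 1) / np.sqrt(d) + bias)
    msa = msa + att @ msa                   # residual update of the MSA repr.
    # 2. The updated MSA feeds back into the pair representation via an
    #    outer-product-style aggregation over sequences.
    pair = pair + 0.1 * np.einsum('sid,sjd->ijd', msa, msa) / n_seq

# 3. A structure module would read the final pair representation out into
#    3-D coordinates; here the averaged pair channels just stand in for a
#    predicted contact map.
print(pair.mean(axis=-1).shape)             # (50, 50)
```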
5 It's interesting, isn't it. Will they try to monetise it? Being an end-to-end system, it may well be easier to release than AlphaFold 1, where they released about half, but said the other half depended on third-party services, the internal configuration of their hardware, etc. I imagine at the very least a service would be available, though it might be by subscription. But I imagine that now people know this is possible, there will be 101 groups trying to clone it, and trying their own different tweaks or preferred sorts of networks. The extent to which DeepMind can 'own' this will be determined by what they can patent, how abstract and high-level a design the patent offices let them patent, and whether those patents survive objections or challenge. (Also, how determined DeepMind even are to try to own it). I *hope* that they won't be allowed to own anything so fundamental that it can't be worked around. In which case DeepMind probably won't be worth $100s bns. But who knows.
6 They were quite busy! A completely new blank-sheet design, and finding ways to make it work. No small task. I think they probably set themselves a defined project, then focused on doing the very best they could on that defined task, and then thought about where to go next. Would the same approach extend to multiple sub-units? It might well. There could still be co-evolution data. And it could be that their residue-residue attention system and their predictor might have got a bit better than conventional physics models. And there will no doubt be more new ideas to come. While the team said they'd taken AlphaFold 1 about as far as they thought it could go, I suspect they're right that what they've managed this week may only be the start of this road.
7 According to the abstract, there was very little human intervention. Sometimes they chose older runs for the #4 and #5 models, if otherwise the predictions would have been too similar.
8 Not sure. There obviously is a real-valued measure, from the charts, but I haven't found where the calculation is.
This is how they assessed difficulty in CASP 8 (2008) [13]. But the calculation may have developed. Jheald (talk) 22:02, 3 December 2020 (UTC)Reply
These guys suggested a particular difficulty metric in 2016, [14], but I don't know if CASP adopted it. Jheald (talk) 22:18, 3 December 2020 (UTC)Reply
Here's the overview of CASP 11 (ie 2014) [15], with a similar distance graph. Discussion of difficulty measure in Supp Materials [16], appendix 2, which cites this review of progress up to CASP 10: "we consider the difficulty in terms of two factors: first, the extent to which the most similar existing experimental structure may be superimposed on a modeling target, providing a template for comparative modeling; and second, the sequence identity between the target and the best template over the superimposed residues." This was still the reference cited in the review of CASP 13 (2018) [17], so would seem to be the scale used. Jheald (talk) 22:23, 3 December 2020 (UTC)Reply
So: for the actual calculation see "Target Difficulty" in the Methods section at the end of [18] (rather than the earlier sections on Target Difficulty, where it is critiqued). Jheald (talk) 22:44, 3 December 2020 (UTC)Reply
9 They trained the algorithm on the entire PDB, so DeepMind could maybe tell you :-)
Jheald (talk) 21:27, 3 December 2020 (UTC)
Thank you!
1. Well, I am afraid this makes the CASP assessment less certain. This way, even very poor predictions would get a modest score. I think the assessors should keep it simple and use a cutoff of ~1.5 A if anyone, including the organizers, wants to make the claim of solving the protein folding problem to some degree. That cutoff can be for CA atoms (which is what they usually do) or all atoms.
4. No, they used sequence alignments and said (last slide in the first pdf of their presentation): "With few or no alignments accuracy is much worse". This is exactly what experts would expect. The question is: how much worse? This is really important because there are numerous "singleton" sequences in genomes. This alone is a reason they cannot claim to have solved the protein folding problem.
5. This is sad. Sure, this is exactly what people will do, and a lot more. But that's the purpose. As Shota Rustaveli said, "What you hid has been lost, [but] what you gave [to others] is yours". And remember, what they created is not science (what have they learned about proteins by doing this?) but merely a "black box" which seems to be useful only as a predictor. I assume, however, that other people will be able to create their own codes based on the information they will have.
6. The refinement. Answered here. There is still a way to go.
7. That can make a huge difference based on results of previous CASPs. That means they were not able to completely automate the procedure, which is fine, but a different prediction category.
8. Thank you. So yes, that is exactly what assessors did [19]. I simply did not read CASP papers for a long time.
9. Yes, testing the method on the training set is not a good idea, especially in machine learning, because unlike in simple QSAR methods (for example), one does not even know how many adjustable/fitting parameters they have. This can be a million. Well, maybe less. The developers should know. But still, this is a 100% legitimate question: what was the performance of the method on the training set (i.e. the transporters I mentioned, etc.). I think the reviewers of their next publications should ask them to provide such results somewhere in a readable form, which would not be a problem for authors of the method. The reviewers should also ask them to recalculate GDT_TS as above. My very best wishes (talk) 23:22, 3 December 2020 (UTC)Reply
But here is the bottom line. Until they make a public web server, I can not even independently assess how their method is actually working for certain cases I would be interested in. I have seen http://www.predmp.com/#/home server [20], which is based on a similar methodology. Did it exceed my expectations? Yes. Was it really so great and useful for my work? No. My very best wishes (talk) 04:06, 4 December 2020 (UTC)Reply
"but merely a "black box" which seems to be useful only as a predictor's black box." This is a very common misconception. Tensorflow allows to visualise the neuro model and analyze it (it is a very beautiful thing)! There are ways to sublimate the nueronet to a smaller NNs, that way the backbone algorithm will become more obvious, that was done in Leela chess case. What is even more important is that it is ground truth algorithm, the question is now is to reverse engineer it, which was done many times for other NNs. There is though a possibility that complexity of the algorithm is indeed ~~ the model or its sublimated version, in which case creation of math. equvivalent will be too dumb to do. Valery Zapolodov (talk) 09:59, 2 August 2021 (UTC)Reply
If so, can you explain how they generated this model [21]? Was it taken from one of the PDB structures (which one?) and then optimized? Is it good according to any criteria, or some kind of "target function"? How and why did the network decide it should be such a structure? This is an example of a very simple structure. My very best wishes (talk) 16:36, 2 August 2021 (UTC)
Well, actually, I can see it. Running this model through SSM server [22] against the entire PDB shows membrane Diacylglycerol kinase as the template identified by AF2. And it might indeed be evolutionary related, who knows. Except that unlike the original/native structure/template, this AF 3D model does not make any sense, at least for the monomer (empty spaces between helices, etc.). And for the trimer? To check that, one would have to generate a model of the trimer, but AF2 apparently did not do it. My very best wishes (talk) 18:08, 2 August 2021 (UTC)Reply
This is not a simple structure; where is the membrane? It should be present too, to correctly fold it as a complex. Without it, an MSA is needed: https://mobile.twitter.com/glenhocky/status/1418622424669736971 and https://mobile.twitter.com/ivangushchin/status/1419403418775457794 Also, how should it know it is a trimer? They did not even know they could do homo-oligomers and complexes. The hack is from 24th July or something. Also, the UniProt DB (or some other DB) should have a field indicating it is a trimer. Valery Zapolodov (talk) 09:44, 3 August 2021 (UTC)
From the first tweet: Homooligomeric prediction in #alphafold works a little too good. ... Yes, of course. This is because the structure of the corresponding pentamer was already in the PDB library used by AF2 for "learning" and for generating the prediction for the monomer. Hence the five identical monomeric structures fit as pieces of a rigid jigsaw puzzle to create the pentamer from which they were taken in the first place. This is still basically homology modeling [of the pentamer], even if one does not specify: "use this specific PDB entry as template". From the 2nd tweet: "LytS has no template." How does he know that? Did he run the model against the entire PDB using the SSM server, as I did just above? Actually, there should be multiple templates for this protein. My very best wishes (talk) 17:13, 3 August 2021 (UTC)

DYK submission


Since the article has fallen off the bottom of the page at WP:ITNC (final state of discussion), I have made a submission to DYK with the following hook:

Did you know...

(I'm hoping they may be able to overlook that the submission is one day past the specified deadline for DYK. They sometimes do).

cc: @Ktin, Bender235, Gotitbro, Alexcalamaro, GreatCaesarsGhost, Glencoe2004, and Keilana: who supported it at WP:ITNC. Jheald (talk) 19:05, 8 December 2020 (UTC)Reply

Jheald, Thanks much for this one! I am pinging Yoninah who is an expert on these DYK topics. I agree that this article is good for DYK. Lots of good work has happened in the build out of this article. Ktin (talk) 19:15, 8 December 2020 (UTC)Reply
In the spirit of MOS:PEACOCK, it's probably best to simply state the facts: that AlphaFold has been the first competitor to reach over 90% prediction accuracy in the 26-year history of CASP. And then we could add an expert evaluation of that achievement. I guess what I'm trying to say is I'd prefer a blurb like: ... that AlphaFold 2 won the 14th biennial CASP competition achieving 92% accuracy, essentially solving the decades-old protein folding problem. I still can't believe this didn't "qualify" for ITN. --bender235 (talk) 19:40, 8 December 2020 (UTC)
One day late is fine. I agree that a simpler-worded hook is best. Yoninah (talk) 19:52, 8 December 2020 (UTC)Reply
If you've got a better hook, feel free to submit it. I've done what I can. Fair point about mos:peacock, something more substantive and crunchy is better. But note section below, re claims like "has solved... " Jheald (talk) 21:19, 8 December 2020 (UTC)Reply
(Added: For what it's worth, my original thought was DYK ... that the results of DeepMind's AlphaFold 2 program in the CASP 14 protein structure prediction competition have been called "astounding", transformational, and "a solution to a 50-year-old scientific challenge", but then I thought to drop the last bit, if it was no longer going to be in the article lead) Jheald (talk) 11:00, 9 December 2020 (UTC)Reply
  • "AlphaFold has been the first competitor to reach over 90% prediction accuracy in the 26-year history of CASP" - even that is slightly problematic because what does it mean "accuracy" of 90%? Global distance test includes distance cutoffs like 4 and 8 A for CA atoms in the best of five computational models (this is a manipulation with numbers!). Make it single cutoff of 1.5 A for all atoms (roughly the precision of solution NMR) in the model #1 - and what it will be? 70%? 50%? I do not know. This must be calculated and published. "essentially solving the decades-old protein folding problem. No, this is certainly not true - see my comments below. My very best wishes (talk) 23:55, 8 December 2020 (UTC)Reply
AlphaFold's detailed GDT-TS scores are available from CASP at [23] (select group='427 AlphaFold 2', model = '1', and targets = 'TBM' and 'FM'). From it one gets a median score (#46) of 91.72 (not quite sure why that doesn't exactly match what's been quoted elsewhere, which maybe was for all models, but it's very similar). A GDT-TS score of 92.5 implies a lower bound on GDT-1A of 70 for that structure, and on GDT-2A of 85; though as those are lower bounds, median GDT-1A and GDT-2A scores for AF2 will in fact be higher than that.
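To spell out the arithmetic behind those bounds: since the four cumulative percentages in the GDT_TS formula quoted earlier satisfy $P_1 \le P_2 \le P_4 \le P_8 \le 100$, a score of $\mathrm{GDT\_TS} = 92.5$ gives

\[
P_1 \;\ge\; 4\,\mathrm{GDT\_TS} - 300 \;=\; 70,
\qquad
P_2 \;\ge\; \tfrac{1}{2}\left(4\,\mathrm{GDT\_TS} - 200\right) \;=\; 85 .
\]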
AlQuraishi has a nice chart comparing AF2's scores to those of the group that performed 2nd best overall (a reasonable proxy for the state of the art excluding AF2). As the assessor said in the introduction to his presentation on high accuracy assessment "this year all structures have become high accuracy".
As to why so many of CASP's measures (eg the graph on the article page) show the best results out of all the groups, or the best result of all the models of a group: it may be that in the past, running X-ray experiments and interpreting their results was long, expensive and slow, whereas running models was cheap and quick, so one could afford to run many, and if any of them could help phase your X-ray data, that was a win. Given the astronomical number of possible structures, "best of 5" was not really "manipulation of numbers". Statistically, all but a tiny tiny fraction of random guesses are nothing like the true structure. The best of 5 random guesses will still be rubbish. For any of the guesses to be any good demonstrates skill. This year, however, AF2 has demonstrated extraordinary skill. Jheald (talk) 11:48, 9 December 2020 (UTC)
  • Speaking about GDT as the measure of success at CASP, it is a poor one. For example, having a GDT of 100% against a crystal/experimental structure does NOT mean the computational model is "as good as" the corresponding experimental structure. It only means it is close in terms of CA atom coordinates. Why didn't they simply use the rmsd of the coordinates of all atoms for model #1, as is commonly accepted in the field? The assessors introduced GDT a long time ago to be able to evaluate very poor quality models which are not even close to experimental structures. But GDT artificially inflates the success of prediction. I think this is a good time for the CASP assessors to get rid of GDT and simply use the rmsd of the coordinates of all atoms for model #1 as the measure of success, as experimentalists would do. My very best wishes (talk) 15:24, 9 December 2020 (UTC)
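For reference, the all-atom rmsd-after-superposition suggested above is a standard textbook computation (the Kabsch algorithm); a minimal sketch, assuming two (N, 3) coordinate arrays with matched atom ordering (this is not CASP's evaluation code):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between coordinate sets P and Q (both (N, 3) arrays, in A)
    after optimal rigid-body superposition."""
    P = P - P.mean(axis=0)                    # remove translations
    Q = Q - Q.mean(axis=0)
    # Optimal rotation from the SVD of the covariance matrix
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))
```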
  • I am not sure how to phrase this better. First, this certainly can be described as a "highly significant breakthrough" in the field of protein structure prediction using an AI-based approach (stunning, astounding, transformational, whatever). There is no dispute about it. But we cannot say that "AF2's predictions of structures are as good as those produced experimentally" (because they are not as good, according to the CASP-14 assessment) or that "AF2 has solved the protein structure prediction problem" - see below. My very best wishes (talk) 23:08, 8 December 2020 (UTC)

Did you know nomination (transclusion of live discussion page)

The following is an archived discussion of the DYK nomination of the article below. Please do not modify this page. Subsequent comments should be made on the appropriate discussion page (such as this nomination's talk page, the article's talk page or Wikipedia talk:Did you know), unless there is consensus to re-open the discussion at this page. No further edits should be made to this page.

The result was: promoted by SL93 (talk) 00:54, 5 February 2021 (UTC)

  • Comment: Note: I am one day over in submitting this, because it was previously up for consideration at WP:ITNC (discussion), and only fell off the page there at midnight this morning. So any leeway you could give it would be appreciated.
  • Reviewed: The Adults Are Talking

Converted from a redirect by Ktin (talk), Jheald (talk), and My very best wishes (talk). Nominated by Jheald (talk) at 18:36, 8 December 2020 (UTC).

  • @Bender235: While claims such as "In a serious sense the single protein single domain [prediction] problem is largely solved" have been widely made (that quote is from conference chair John Moult's closing presentation to the conference), were very widely featured as a top line in media coverage, and have also been supported in thoughtful commentary by eg Mohammed AlQuraishi [24], they have also met with opposition; and so we are not currently running them on the article. (Though this could be changed). See article talk page for extended discussions. That is why I submitted the DYK text as above.
Note also that while AF2 has made a very significant advance on the protein structure prediction problem, this is a different question from the question of how protein folding develops in nature, so caution should be taken not to confuse the two. — Preceding unsigned comment added by Jheald (talk · contribs) 09:43, 10 December 2020 (UTC)
  • Folks @Jheald, My very best wishes, Alexcalamaro, and Bender235:, this one has been open for some time now; let's go ahead and drive this one to closure. I think the below text is the best that someone on the homepage would be able to follow; anything more and we run the risk that folks find it too wordy or too complex. Let's move ahead, if you are good. Also @Yoninah: I do not want to presuppose your background, but can you read the below two hooks as a layperson and let me know if you a) find it interesting and b) generally get the gist of this one? If you are not a layperson for this topic, I am happy to go chase down some laypersons for this topic. Cheers. Ktin (talk) 22:42, 14 December 2020 (UTC)
ALT 3.0.... that DeepMind's protein-folding AI AlphaFold 2 has solved a 50-year-old grand challenge of biology? (source: MIT Technology Review).
OR
ALT 4.0 .... that DeepMind's AI AlphaFold 2 can predict the shape of proteins to within a width of an atom? (source: MIT Technology Review).
  • Wonderful. Thanks both of you, @Alexcalamaro and Bender235:. Please can one of you review the hooks per our guidelines and approve both of them; we can choose one of the two after that, or empower the posting Admin to make the choice. But, first step, let's approve the hooks. Cheers. Thanks again folks. Ktin (talk) 23:14, 14 December 2020 (UTC)
General: Article is new enough and long enough
Policy: Article is sourced, neutral, and free of copyright problems
Hook: Hook has been verified by provided inline citation
  • Cited:  
  • Interesting:  
Image: Image is freely licensed, used in the article, and clear at 100px.
QPQ: Done.

Overall: Both hooks ALT 3.0 and ALT 4.0 meet our guidelines Alexcalamaro (talk) 18:08, 15 December 2020 (UTC)

Passing the baton over to you, Yoninah, to take it from here. I am good with either of the hooks (ALT3 or ALT4). I know you had preferred ALT3 and Bender235 had preferred ALT4. Alexcalamaro -- do you want to cast the tie-breaker vote? ;) Ktin (talk) 18:54, 15 December 2020 (UTC)
I vote for ALT 3.0 option (after all, we are talking about folding proteins). Alexcalamaro (talk) 19:24, 15 December 2020 (UTC)Reply
Thanks much Alexcalamaro. Passing the baton to Yoninah. Over to you now for next steps :) Thanks everyone. I want to specially thank @Jheald and My very best wishes: who have done and continue to do lots of good work on the article. Genuinely thank you folks. Ktin (talk) 19:27, 15 December 2020 (UTC)Reply

  I think all these versions of the hooks, including ALT3 and ALT4, misinform the reader. No, the "50-year-old grand challenge of biology" has not been solved. There will be many future CASP meetings to assess further progress in this direction. Just saying that it "predicts the shape of proteins to within a width of an atom" is also wrong. No, it does not. AlphaFold-2 makes sufficiently precise predictions only for 2/3 of proteins, according to the CASP assessors. But even in these good cases it does NOT predict the protein structure with such precision for all atoms, as a reader would assume. Actually, such a claim is simply ridiculous, because there is protein dynamics, and there is no such thing as the width of an atom. There are only atomic radii, but this is not a single number; they are very different for different types of atoms. Also, this is not "shape" but three-dimensional structure. The referencing is to a misleading opinion piece. The author does make the claim that AlphaFold can predict the shape of proteins to within the width of an atom, but he apparently does not have the slightest idea what he is talking about. Let's not multiply the misinformation in Wikipedia. Please see the hook I suggested above (it can be shortened if needed). My very best wishes (talk) 19:51, 15 December 2020 (UTC)

  • Yes, there are indeed WP:News sources about it (some of which claim nonsense like predicting "the shape of proteins to within the width of an atom"). However, this is an extraordinary and exceptional claim about solving a fundamental scientific problem, and not everyone agrees (some similar WP:News-type sources claim the opposite). I think we do need some WP:MEDRS-quality sources here, such as serious independent scientific reviews. There are none. The method (AlphaFold-2) has not been published. The official CASP assessment has not been published in any peer-reviewed journal.
For example, as this article says, "DeepMind’s press release trumpeted “a solution to a 50-year-old grand challenge in biology” based on its breakthrough performance on a biennial competition dubbed the Critical Assessment of Protein Structure Prediction (CASP). But the company drew criticism from academics because it made its claim without publishing its results in a peer-reviewed paper. ... “Frankly the hype serves no one,” and so on. I just do not think we should multiply this "hype" in WP. My very best wishes (talk) 20:29, 15 December 2020 (UTC)
  • @My very best wishes: in a literal sense the protein folding problem is not "solved," since we can obviously always move the goalposts regarding the necessary precision (≥90% accuracy? ≥99%? ≥99.99%?). The jump in precision at this year's CASP certainly deserves to be called a "breakthrough." I agree that the catchy "width of an atom" is not a precisely determined length (just as the even more popular "width of a human hair" is not); the press release said less than two angstrom, which we could use, too. --bender235 (talk) 21:31, 15 December 2020 (UTC)Reply
Yes, one can say a "breakthrough" (I agree), but one cannot say that "the problem was solved", for a number of reasons: (a) the protein set at CASP is absolutely not a representative set of proteins (it included only one membrane protein, and the group was ranked #43 for this target; it did not include any "intertwined" protein structures, any linear peptides, or any proteins with unique sequences in genomes, and so on); (b) the method has not even been published and is not publicly available for independent evaluation; (c) AF2 failed for the single multi-domain protein in the CASP14 data set, while such proteins represent the majority in eukaryotes; and (d) the method was not tested for protein complexes. This is not at all about the percentage. We simply do not know that percentage. We do not even know the percentage at CASP until the assessment has been officially published. My very best wishes (talk) 18:52, 16 December 2020 (UTC)
  • I would oppose most of these hooks. OK, let's keep it simple. We do have the page AlphaFold. I think this is a fair page. However, every hook above (except my suggestion) simply contradicts this page. Does it follow from our AlphaFold page that it "has solved a 50-year-old grand challenge of biology"? No, it does not. Does it follow that AF2 "can predict the shape of proteins to within a width of an atom?" No, it does not. Not at all. Take the lead of this page and summarize it in the hook, please. That is what I was trying to do. My very best wishes (talk) 15:03, 16 December 2020 (UTC)
Now, let's consider the first hook at the top: that the results of DeepMind's AlphaFold 2 program in the CASP 14 protein structure prediction competition have been called "astounding" and transformational?. Well, this is actually much better than the later versions. Yes, this is advertisement (just as the others are), but at least it is not explicit misinformation. Some people did say that, and most importantly, yes, the results were very good. My very best wishes (talk) 15:23, 16 December 2020 (UTC)
  • In the spirit of serving our homepage readers, I will still recommend that we go with either ALT3 or ALT4. There is sufficient backing from WP:RS to move ahead. Ktin (talk) 00:00, 16 December 2020 (UTC)
  • Maybe we could replace the "problematic" word solve with crack (also used in the MIT review), so we keep the catchy hook for the "layperson" without multiplying the "hype". What do you think of this one? :
ALT 3.1 ... that DeepMind's protein-folding AI AlphaFold 2 has cracked a 50-year-old grand challenge of biology? (source: MIT Technology Review).

Alexcalamaro (talk) 04:06, 16 December 2020 (UTC)

@Alexcalamaro: I am good with this hook (i.e. ALT 3.1). Ktin (talk) 06:30, 16 December 2020 (UTC)Reply
OK. I am an uninvolved reviewer: I have not been at CASPs for a long time, and I have no connections to the CASP organizers or any participants. I only helped with editing the page about AF2 in WP. Here is my independent assessment. Yes, there was great progress in protein structure prediction at CASP14. True. However, the "protein folding problem" was NOT solved by AF2 (yet). This is hype. Here is why:
  1. There was only one transmembrane protein in the CASP14 dataset, and the AF2 team was ranked #43 for this target; the prediction for this target by AF2 or other groups is a far cry from solving the structure. Transmembrane proteins constitute at least 25-30% of the proteins in the human genome [25] (more by other estimates).
  2. The performance by AF2 was not great for multidomain proteins, as could be expected because AF2 was not tested for predicting protein complexes. The subunits in complexes are similar to domains. Up to 80% of all eukaryotic proteins are multidomain proteins [26].
Was it solved by AF2 at least for single domains of water-soluble proteins? There is no proof of that because
  1. Many proteins are represented by just a single sequence, or by a few related sequences, in sequence databases, so one cannot make a large sequence alignment. However, the AF2 method is actually based on using large high-quality sequence alignments. We do not know if AF2 was tested for such cases and how it performed.
  2. As follows from the presentations at CASP14 (for example, [27]), AF2 did NOT achieve the accuracy of experimental methods. Moreover, looking at the distance cutoff-sequence coverage graphs here for specific CASP14 targets (T1024, T1027, T1028, T1029, T1030, T1032, T1040, T1047, T1052, T1061, T1070, T1091, T1092, T1099, T1100), one can see they are not even very close. For example, T1024 has only 50% of residues covered by the best models at a distance cutoff of 2A. Yes, they correctly predicted the protein "fold", even the family where it belongs (which is great!), but this is a far cry from solving the "protein folding" problem.
  3. AF2 is not publicly available for an independent evaluation
  4. Neither AF2 nor the assessment of AF2 has been published, not only in WP:MEDRS sources, but in any peer-reviewed sources.
  5. The GDT measure used by the CASP assessors is a poor (insensitive) measure of performance for high-precision modeling. Having a GDT of 90 or 60 (e.g. [28]) does not mean that 90% or 60% of the structure was predicted with the same accuracy as provided by X-ray crystallography, for example.
My conclusion: Hook ALT 3.1 is misinformation. Do not do it. My very best wishes (talk) 14:31, 18 December 2020 (UTC)
  • Following the comments above by My very best wishes and aiming to reach a wide consensus, I propose the following alternative hook :
ALT 3.2 ... that DeepMind's protein-folding AI AlphaFold 2 has made great progress towards a decades-old grand challenge of biology? (source: Nature).

Alexcalamaro (talk) 08:05, 19 December 2020 (UTC)

Yes, I think that's OK, with one correction: if you need a ref, it should be this [29]. That MIT writer makes too many incorrect claims, such as that AF2 used 170,000 PDB structures for training (they used fewer), etc. My very best wishes (talk) 21:19, 19 December 2020 (UTC)
  • Comment I have changed the source of ALT3.1 to Nature, and added the hook text to the "Responses" section of the article (to meet Hooks criteria). We need more reviewers to validate the proposal. Alexcalamaro (talk) 06:31, 21 December 2020 (UTC)Reply
Hey @Yoninah and Ktin: I think we have a consensus here with ALT3.2. I am not very familiar with these matters. What is the next step in the DYK process? Thank you. Alexcalamaro (talk) 21:40, 26 December 2020 (UTC)Reply
Missed this one. @Yoninah: as an uninvolved editor, please can you help review this one? I know this has been waiting for a long time, but, worth wrapping this one imo. Appreciate your helping hand in the review. Ktin (talk) 22:56, 2 January 2021 (UTC)Reply
  •   OK, ALT3.2 looks good but there is a bit of run-on blue linking in the beginning of the hook. What words don't need to be linked? I also would like to know why the two images from a CASP presentation are licensed as fair use. It seems to me that OTRS permission should be obtained from the author. Alternately, can't someone draw up a similar graph that would be freely licensed? Yoninah (talk) 18:04, 9 January 2021 (UTC)Reply
The last point is addressed in the fair-use rationales. Regarding OTRS permission, before Christmas I emailed the DeepMind press account for the block-diagram image, and both the CASP account and John Moult for the graph, and didn't get back a reply from any of them. Jheald (talk) 19:12, 22 January 2021 (UTC)Reply
As for the hook, I would suggest unlinking "AI", as that should be pretty obvious and is a term known to most people. Jheald, still no update on the OTRS? Yoninah seems to be on a short break atm, but I think this DYK should be finished at some point. It's the only one remaining from November. --LordPeterII (talk) 12:03, 3 February 2021 (UTC)Reply
@LordPeterII: Thanks much. This has been waiting for quite some time. Thanks again for picking this up. Let's go without the image. I have written ALT 3.3 with AI removed. Appreciate your approval. Thanks. Ktin (talk) 03:52, 4 February 2021 (UTC)Reply
ALT 3.3 ... that DeepMind's protein-folding program AlphaFold 2 has made great progress towards a decades-old grand challenge of biology?
@Ktin: Oh, I'm afraid I don't feel confident enough to approve this nomination myself :/ I have never reviewed anything, and this article's topic is rather complex. I was merely pinging to inquire about the progress, to get the discussion going again. Maybe some experienced editor or admin can help out... maybe @Cwmhiraeth would you have time for this? (sorry, I'm not really sure whom to ask) --LordPeterII (talk) 09:35, 4 February 2021 (UTC)Reply

Claim: "AF2 has solved the protein structure prediction problem"


This claim is not in the article (removal diffs), for reasons set out by User:My very best wishes above.

At first I thought that missing it out was 'serving the steak without the sizzle', given how much the claim was repeated in the media (and by the CASP organisers). But, actually, I think the article reads all right without it. And what does "solving the protein structure prediction problem" mean anyway? Jheald (talk) 19:10, 8 December 2020 (UTC)Reply

Yes, we cannot say that "AF2 has solved the protein structure prediction problem" - for several reasons:
  1. Speaking about "protein structures" in nature: these are typically structures of protein complexes (proteins never work alone), but the AF2 team did not even try to predict structures of protein complexes;
  2. According to the presentation by the AF2 people, the results were much worse if the sequence alignment used as input for AlphaFold included only one or a few sequences. Well, maybe 40% of proteins in genomes would belong to that category, although I do not know the exact number (it probably can be found somewhere);
  3. The testing at CASP is grossly insufficient to make such a claim. Basically, the method was tested on only a few dozen proteins (those in CASP-14), while there are 120,000 protein structures in the PDB. Of course one does not need to check all these thousands, but only a limited number of cases which are known in advance to be the most challenging for AlphaFold (linear peptides, proteins with multiple alternative structures (there are hundreds of such cases in the PDB), proteins which form intertwined complexes [30], single sequences and small or questionable alignments as input, "intrinsically unfolded" proteins). That is how such methods should be tested.
  4. Such a claim was not made in any sources which would qualify as WP:MEDRS. My very best wishes (talk) 23:07, 8 December 2020 (UTC)
@My very best wishes: Can you clarify where (2) above was said? Was it a presentation about AF2 or AF1? Jheald (talk) 08:56, 9 December 2020 (UTC)
For general reference, here are some of the statements that have been made:
  • John Moult: "This is a big deal. In some sense, the problem is solved." (Nature, 30 November)
  • CASP press release ([31]_ "Artificial intelligence solution to a 50-year-old science challenge" (30 November)
  • John Moult: "But actually two-thirds of these points... are above 90; and in that sense, this is a real, atomic-level, competitive-with-experiment solution to the protein folding problem. This is a really extraordinary set of data. I wasn't sure I was going to live long enough to see this, but here it is. Again, with a few caveats, but the caveats are relatively minor." (CASP 14 introductory presentation, 30 November. CASP stream day 1 part 1, at 0:30:30)
  • John Moult: "In a serious sense the single protein single domain problem is largely solved" (CASP 14 closing presentation, 4 December. CASP stream day 5 part 3, at 0:04:40), cf tweet
but later "there was some talk that we should abandon this in CASP, as it is a solved problem. Clearly that's not true. It is solved by one group, and it is one solution; and I think what we heard in the nice discussion earlier today is there are probably many ways of solving this problem, and I think it is very important that we continue to monitor those, and see which actually work." (same, from 0:05.20)
Some pushback (most of which we note in the article):
See also discussion in AlQuraishi's new essay (he thinks it has been solved; summary of points to follow & comments).
User:My very best wishes's point about WP:MEDRS is a strong one (and "exceptional claims demand exceptional sourcing" -- WP:EXTRAORDINARY). Also that it may be better to work around a dubious or false claim, rather than to introduce it even with a rebuttal. Jheald (talk) 09:27, 9 December 2020 (UTC)Reply
The poor performance for single sequences: yes, this is something they said in 2018. What did they say about it this year? BTW, testing the performance of a method for single sequences versus sequence alignments is something more or less standard. The authors know it. Did the authors do such a test for predicting structures from the PDB, and what results did they get for AlphaFold-2? That could be a typical question from a reviewer of their future paper. My very best wishes (talk) 16:15, 9 December 2020 (UTC)
OK, I checked the concluding talk by John Moult accessible on Google [32], and I agree with him about everything except one point: that the problem of 3D prediction was largely solved by one group for single protein domains. Actually, we do not know that, because of insufficient testing at CASP-14 (see my point #3 above). I would like to ask the developers of AlphaFold2 (AF2) what was the performance of their method for the following categories of proteins:
  1. Transmembrane alpha-helical proteins. There was only one such example in the CASP14 dataset, with a simplest 4-helical fold. The server http://www.predmp.com/#/home makes pretty good predictions for such simple cases, but it fails for more difficult TM folds. Why should AF2 be better? Perhaps it is, but there is no proof.
  2. Linear peptides with environment-dependent conformations studied by solution NMR. There were no such cases in the CASP14 dataset.
  3. Proteins with multiple very different structures, like transmembrane transporters. There were no such cases in the CASP14 dataset.
  4. Proteins with multiple domains, and multiprotein complexes, especially those that form "intertwined" structures. There was one such case in the CASP14 dataset (where AF2 failed) and several cases in the CAPRI dataset [33], but they were not tested with AF2. Multidomain proteins are the rule rather than the exception for eukaryotic organisms, but they seem to be beyond the capability of the method.
  5. Proteins represented by a single sequence/small alignment in genomes. This is easy to check by simply using one sequence from a sequence alignment, but the developers of AF2 did not say anything about it at the meeting (did they?). In fact, AF2 is inherently based on using large sequence alignments, hence it is not expected to work (at all?!) for single sequences, which is the case for a lot of proteins.
  6. The measure of success (GDT) at CASP artificially inflates the success rate. They are supposed to use the percentage of coverage at a single rmsd of ~1.0 A for all atoms, calculated only for model #1, rather than a set of cutoffs of up to 8 A calculated for "the best of five" predictions. My very best wishes (talk) 15:26, 12 December 2020 (UTC)
Of course anyone could check this themselves if AlphaFold were publicly available and easy to use, but it is not available. The last figure on this page shows that the performance of automated web servers at CASP14 has reached the level of the best predictions made by AF-1 two years ago at CASP13. I hope the same will happen during the next two years. My very best wishes (talk) 21:40, 9 December 2020 (UTC)
  • [34] - John Moult said: "But actually two-thirds of these points [predicted structures]... are above 90 [GDT %]; and in that sense, this is a real, atomic-level, competitive-with-experiment solution to the protein folding problem." First of all, this is only 2/3 of the predicted structures. Secondly, let me respectfully disagree about being "competitive", i.e. as good as experimental structures or potentially better. The results of the assessment (as shown in the available PowerPoint presentations) prove something different. I do agree that the predictions by AF2 might be useful in the absence of experimental structures, but only under one condition: that AF2 be publicly available and properly tested (see above). My very best wishes (talk) 16:51, 12 December 2020 (UTC)

Claim: "AF2's predictions of structures are as good as those produced experimentally."


The claim should be treated with caution, per User:My very best wishes above (same removal diffs). It is not clear exactly on what basis the organisers have been making this claim, which has been contested. According to mvbw it is "obviously a false statement". There may be some nuance, but at the very least we should be cautious until we can clarify exactly on what basis it was being made. (Even though AF2's results were spectacularly good.) Jheald (talk) 19:19, 8 December 2020 (UTC)Reply

Yes, the predicted structures are "very close" for most (not all) CASP-14 targets, but they are not "as good". We cannot say that "AF2's predictions of structures are as good as those produced experimentally" simply because they are not as good - according to the assessment at CASP-14 (such as this, see plots on slides 16-18; this is one of many excellent illustrations by assessors). These assessments are currently available only as PowerPoint presentations. This will become even clearer when the official CASP-14 assessment is actually published somewhere. My very best wishes (talk) 22:55, 8 December 2020 (UTC)Reply
I think the claim is right. See: https://mobile.twitter.com/CrollTristan/status/1420388062522134529 2A00:1370:812D:9F38:3D3F:887F:E8C3:321C (talk) 21:45, 28 July 2021 (UTC)Reply

Tags on the lead


@N2e: Thanks for giving the article a good read-through. Regarding your tags on the lead, can you clarify exactly which part(s) of the statement you think need more support?

Others have objected[who?] that any such statement is premature, given that AlphaFold's method has not yet been made known in sufficient detail to allow discussion, nor full assessment of its limitations, nor independent reimplementation; nor are the reasons for its success yet understood qualitatively; and that considerable challenges remain before protein folding can be fully understood.

On the final bit, this was intended to be a nod to the fact that prediction of the final folded structure is not the same as a full quantitative understanding of how protein folding actually proceeds in nature to get to those structures (and how it can go wrong in nature). A number of people with an interest in the actual process of protein folding have made this point. In the 'responses' section we give a cite to Folding@Home saying this.

As for "who" has objected, I am not sure if I can think of a particular way to group them, but we do include some names in the Responses section, and some of the references there give more.

Does this help at all? Is there a way to adjust the text, that might make it clearer? Jheald (talk) 16:43, 12 December 2020 (UTC)Reply

I think that the part of the lead mentioning protein folding is actually OK. People did make such comments. I guess you just need to include the in-line references in the places labeled by N2e. My very best wishes (talk) 17:13, 12 December 2020 (UTC)Reply
Good question, Jheald. Article ledes should summarize the article prose in the body, and I was unable to find "limitation" or "objected" or "independent" in the body prose. Figured that, if I could, I could find out who in particular "objected." It may be better just to frame it with summary language about the extent to which these are obviously early results, and despite the high praise that seems to have come in (rather broadly) from certain scientists and researchers in the know, there is (as all good scientists know) a very long road ahead to assessing the true applied value of this competition win, and even getting replication in other "studies" (if I can think of the competition as just one particular study using one particular set of metrics and rules for judging merit). Cheers. N2e (talk) 17:43, 12 December 2020 (UTC)Reply

Attention modules


(extracted from section above)
I do have a concern about something else: a transformer design, that are used to refine a matrix of relationships between residue positions and other residue positions, and between residue positions and different sequences in the sequence alignment of identified similar DNA sequences respectively. What is that supposed to mean? That sounds like abracadabra to me. I fixed it now. Welcome to rephrase/explain better. My very best wishes (talk) 17:13, 12 December 2020 (UTC)Reply

@My very best wishes: So, what we know from the block diagram (and the similar, but slightly expanded, diagram from DeepMind's presentation at CASP) is that after having put together a multiple sequence alignment (MSA), AlphaFold then goes into an extensive run of iterations trying to refine two things. One is an array of residue-vs-residue information, similar to a contact map, indicating how much it thinks what is happening at one residue is relevant to what is happening at the other. The other is an array of residue-vs-sequence information, indicating how much it thinks each sequence is relevant to the relationships of each residue. These arrays are iteratively updated, because the sequences that are relevant to each residue inform what residues are relevant to that residue; and what residues are relevant to that residue helps refine the assessment of what sequences are relevant to that residue.
We know it's not good enough to do either of these refinements element-by-element, independently of what's going on elsewhere in the array. (As we note in the protein structure prediction article, that's why contact map prediction really wasn't very good before 2010). Instead the update has to work on the array as a whole, taking account of relationships between different elements within it.

This is where "attention" comes in, which DeepMind has emphasised quite hard. In the article we've summarised the attention mechanism as "layers that have the effect of bringing relevant data together and filtering out irrelevant data". There are other network configurations that attention can be applied to, but as of 2020 the neural network architecture that has become pretty much synonymous with "attention" is the transformer (machine learning model), which can be resource-expensive, but which has swept the machine learning world since 2017. DeepMind's language of graphs and edges is very much in line with this; when they show what they label as an "attention module" in slide 6 of their presentation, what they schematise is a transformer; when they give (eg language) as its typical domain of use, this is because natural language processing (eg for machine translation) is where transformers first appeared, and where they have driven RNNs from the field, because their results are so much better. In a language model, the transformer layers encode how related each word is to each word that went before -- more exactly, they identify a few of the words which went before (not necessarily the most recent) that are most relevant to the context of the present word, to establish its meaning, and therefore eg how it should be translated, or how (GPT-3) the sentence might continue. As GPT-3 shows, the specificity of relationships that a trained network of these things can encode can be quite staggering. (cf also [Image-GPT], which applied the same idea to 'continuing' images -- eg the continued cat image is quite extraordinary).

So the idea here is that the transformer network can recognise which residue/residue relationships are most associated with which other residue/residue relationships (ie which pixels in the square are likely to be most associated with which other pixels in the square, given what the square as a whole most looks like). Rather than 'continuing' half a cat image, you could instead use such a network to spot and fix bad pixels in such an image -- denoising it -- on the basis of the whole image. Similarly the refinement here. It is taking the initial array, and making it look more like what it thinks an array 'ought' to look like, on the basis of the different traits it's pattern-matching in that array. (A different array would trigger quite different associations between elements).

User:My very best wishes wrote elsewhere that AF2 needs at least some kind of MSA to proceed. I suspect this isn't true. I suspect that even with no MSA match, the transformer may still be able to do a fair amount of refinement on the residue/residue array (if this pixel is lit up, and this one, and this one, then that one probably should be too). What it learns from sequence co-evolution may just be additional to what it can already do. And then, in the other module, it uses the same kind of process to update the other 'square of pixels', showing what sequences it thinks are relevant to each residue, taking into account its new square of pixels of what residue pairs are most co-associated.
That's what I was trying to suggest a hint of, while couching it in very general terms, because for example we don't even know what kind of object each pixel in one of those pixel squares represents -- for example, in the residue/residue square, is it just a general 'associatedness' between the two residues, like the probability of the square being lit in an old-style contact map? Or does it try to internally represent a histogram of distance probabilities at this stage, as AlphaFold 1 did? Does it also try to capture relative orientations between the two residues (cf the 2019 paper from the Baker group)? I don't know. It could just be the first, because that would already mean the transformer trying to capture linkages between 60,000 'pixels' and 60,000 other pixels for a 250-residue protein.
So that's why I was just talking about 'a matrix of "relationships" between residue positions and other residue positions' that the module was trying to refine, as language for the residue/residue 'square of pixels'. I hope it gives at least some translation into words for what the block diagram(s) appear to show. But very happy if we can find a better form of words that feels less 'abracadabra'. Jheald (talk) 19:05, 12 December 2020 (UTC)Reply
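
To give the description above a more tangible shape, here is a deliberately toy Python/NumPy sketch of mutual, attention-based refinement of a residue-residue array and a sequence-residue array. It is emphatically not AlphaFold's actual architecture (which uses row- and column-wise attention with many further details not known at this point); the names, shapes, and the residual-update scheme are assumptions for illustration only:

    import numpy as np

    def attention(queries, keys, values):
        # Scaled dot-product attention: each element is updated using the
        # elements most relevant to it, rather than independently.
        scores = queries @ keys.T / np.sqrt(queries.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)           # softmax over relevance
        return w @ values

    n_res, n_seq, d = 16, 8, 32                      # toy sizes, not AF2's
    rng = np.random.default_rng(0)
    pair = rng.normal(size=(n_res * n_res, d))       # residue-vs-residue 'pixels'
    msa = rng.normal(size=(n_seq * n_res, d))        # sequence-vs-residue 'pixels'

    for _ in range(3):                               # iterative mutual refinement
        pair = pair + attention(pair, pair, pair)    # pair array attends to itself
        msa = msa + attention(msa, pair, pair)       # sequences attend to pair array
        pair = pair + attention(pair, msa, msa)      # pair array attends back to MSA
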
  • Well, according to this article, the method is based on co-evolutionary analysis of multiple sequence alignments. So the quality of such alignments could be the "bottleneck" of the method, and it would be expected to perform very poorly, if at all, for a single sequence. This is easy to check and something obvious, hence I am 100% sure that the authors did such tests, but there is nothing about it in their PowerPoint presentation this year [35], except one mention that adding several sequences for one target did help significantly. The presentation also does not say anything about testing their method beyond CASP targets. My very best wishes (talk) 21:14, 13 December 2020 (UTC)Reply
Co-evolutionary patterns between the sequences determined to be most relevant for each residue position (sharpened by the 'red' refinement) certainly play an important part in DeepMind's assessment of the residue/residue interactions (represented by the 'green' array), no question. But it would be incautious to assume that they are the whole story. From all the examples of known structures AF2 has been trained on, I suspect there is likely to be some green -> green refinement even with no input from co-evolution. How good pure green -> green refinement can be without input on residue/residue co-evolution I have no idea. Like you, I would assume this is something DeepMind have certainly tested, but as you note, so far they are not saying anything. Jheald (talk) 22:51, 13 December 2020 (UTC)Reply
Just to add: above I said I didn't know what kind of objects each one of these pixels represents. Having read up a bit more about transformers -- if the refinement process is a transformer transformation -- the object is likely to be quite a long vector, with some dimensions for the properties of residue A, some for the properties of residue B, some presumably representing the guess for the distance probability histogram ('distogram') between them, some maybe for some representation of the length along the chain between them, and quite possibly some for the relative orientation between them.
Transformer 'cost' goes as O(n²·d), where n is the number of objects, and d the number of dimensions of the properties vector associated with each one. If n is of order 600,000, so that n² is of order 360,000,000,000, it may in comparison be comparatively cheap to add a few more dimensions to d to let it record some idea of the relative orientations. It may explain why the algorithm is so compute-intensive, particularly if it goes through a substantial number of iterations in this phase (as the term 'trunk' used on the diagram may suggest). It looks like this is where that quote from Science applies: viz that the '"attention algorithm" ... mimics the way a person might assemble a jigsaw puzzle: first connecting pieces in small clumps—in this case clusters of amino acids—and then searching for ways to join the clumps in a larger whole.' -- since it evidently doesn't refer to the final structure model, which (according to the slides) produces an almost fully-formed topologically accurate structure first go. Jheald (talk) 20:54, 14 December 2020 (UTC)Reply
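
The arithmetic here is easy to check; a back-of-the-envelope sketch (my own illustration using the figures above; the width d = 64 is an arbitrary assumption):

    n = 600_000                  # number of pair 'objects' attended over
    d = 64                       # assumed feature-vector width per object
    scores = n * n               # pairwise relevance scores per layer: 3.6e11
    cost = n * n * d             # the O(n^2 * d) term that dominates
    extra = n * n * 8            # widening d by 8 dims (eg for orientations)
                                 # adds only a fraction of the existing cost
    print(f"{scores:.1e} {cost:.1e} {extra:.1e}")
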
Thank you! Yes, it would be very tempting and exciting to assume that the protein folding problem is almost solved or will be solved very soon. But as someone who knows how difficult and multifaceted this problem really is, I am a little sceptical, even after their results at CASP. A very long time ago I actually worked with a man who claimed that he knew an approach to solving it (and many other people thought and even claimed the same). It took a lot of effort for me to realize why exactly that approach was fundamentally flawed (it operates with a wrong energy, which is not even a correct potential energy, instead of operating with the Gibbs free energy of the protein, which would require an entirely different parameterization of the empirical energy functions). Unfortunately, I cannot say where these AF2 guys currently are in solving this problem without making some additional tests - along the lines outlined above on this page. My very best wishes (talk) 01:29, 15 December 2020 (UTC)Reply
I suppose with molecular dynamics, you're hoping to get the entropy side from the different lengths of time the model stays in each coarse-grained configuration (or the chance it gets into it at all). I've always understood one of the big questions to be how to account for all the bits and pieces you're not directly modelling -- such as nearby water molecules. Without treating those properly, your potentials are going to be very wrong. And yes, if you're not sampling through the different accessible energy states enough with molecular dynamics (eg if instead you're just trying to slide down a potential), then indeed the Gibbs free energy would seem to be what you need, rather than the potential energy. But then it is quite challenging to estimate the relevant entropy for the corresponding states, I would think. Jheald (talk) 22:51, 16 December 2020 (UTC)Reply
Oh no, there is nothing one can do with MD, at least as long as it uses the conventional energy functions. The thousands of kcal/mol in CHARMM/AMBER/etc. are meaningless. Here is why. The Van der Waals forces (or the corresponding interatomic potentials) have an electrostatic nature and are therefore environment-dependent. In condensed media they are much weaker than in conventional force fields (which presumably describe interactions in vapor/vacuum) and follow the "like dissolves like" rule (the conventional force fields do not). That follows from theory and has been experimentally proven by measuring the Hamaker constants; e.g., two surfaces separated by a dielectric fluid interact much more weakly than those separated by gas. There is a wonderful book, "Intermolecular and Surface Forces" by Jacob Israelachvili. It is all there. No wonder AF2 performed better. My very best wishes (talk) 01:35, 17 December 2020 (UTC)Reply
Additional data point: a new preprint from Facebook AI (thread), finding that a transformer network trained like BERT, but on protein sequences rather than language strings, can outperform the best current methods for contact map prediction. (And it still works in cases where the effective number of independent sequences may be as low as 10). The Facebook team find that just by training the model to predict the next amino-acid residue (ie not telling it about contact maps as part of the training at all), the system learns enough about correlated mutations in particular sequences that, when you run a similar sequence through the trained box, internal variables light up that directly predict contacts -- and predict better than the existing published state of the art. Jheald (talk) 22:18, 16 December 2020 (UTC)Reply
  • Unfortunately, this is just a "black box", similar to other "deep learning" tools. It is not clear how it works, and why it is allegedly successful. I can only look at an example from an area I know better. This is target T1024 (aka LmrP, aka 6t1z PDB code), a transmembrane transporter, one of those forming more than two different (inward-facing and outward-facing) structures. Importantly, it had 9 similar structures in the PDB (see here) which could be used for learning. The closest old structure (used by AF for learning) is 3o7p, with almost complete overlap and RMSD < 3 Å for 358 common CA atoms (easy to check using https://www.ebi.ac.uk/msd-srv/ssm/cgi-bin/ssmserver ). But the sequence homology is detectable only after structural superposition. That could give a great GDT if AF2 simply took one of the existing similar PDB structures from its training set. How did AF2 perform? See here - this is nice graphics showing many areas with CA...CA distances greater than 4 and even 8 Å (!) after structural superposition of the model with the target. The GDT is 60 and the RMSD is 3.7 Å in the CASP table. But to be sure, one must make an additional test: run the best model, i.e. T1024TS472_1 (was it generated by AF2? No, that was the group of Elofsson! [36]), against the PDB using the SSM server. Perhaps there is a better match for another conformation of another protein? The models are here. The result: the best superposition is indeed with 6t1z, and the RMSD is 2.7 Å for 351 common CA atoms (of ~400 CA atoms in total); the sequence-dependent overlap for common CA atoms is 93%. This is an excellent result, but still a far cry from solving the protein folding problem. My very best wishes (talk) 04:35, 18 December 2020 (UTC)Reply
  • Now I see. The prediction by AF2 for this FR target was ranked #43, and the best superposition was indeed with 6t1z, but with an RMSD of 2.4 Å for only 248 common CA atoms (of ~400 CA atoms in total), and the sequence-dependent overlap for common CA atoms was 81%. This is much worse than by other groups. Not a winner for this target. Based on that, AF2 performs much worse than other methods/groups for transmembrane proteins. The claim that AF2 is great for TM proteins appears to be false (see the sketch below for the underlying superposition arithmetic). My very best wishes (talk) 06:21, 18 December 2020 (UTC)Reply
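
For anyone who wants to reproduce this kind of check locally rather than via the SSM server, here is a minimal sketch of the underlying calculation, assuming two equal-length NumPy arrays of matched CA coordinates (the SSM server also finds the residue matching itself, which is the hard part and is not shown here; the 3 Å overlap cutoff is an assumption):

    import numpy as np

    def kabsch_rmsd(P, Q):
        # Least-squares (Kabsch) superposition of P onto Q, then CA RMSD in A.
        P = P - P.mean(axis=0)
        Q = Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(U @ Vt))       # guard against reflection
        R = U @ np.diag([1.0, 1.0, d]) @ Vt      # optimal rotation
        dists = np.linalg.norm(P @ R - Q, axis=1)
        rmsd = np.sqrt(np.mean(dists ** 2))
        overlap = 100.0 * (dists <= 3.0).mean()  # % of matched CA pairs within
        return rmsd, overlap                     # 3 A, akin to overlaps above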

Motion of protein


I'm no scientist. This may be a stupid question. But can a protein change its shape over time, and can this change be an important feature of its function? TomS TDotO (talk) 19:19, 12 December 2020 (UTC)Reply

Oh yes. See Protein dynamics, Conformational change, Allosteric regulation and Intrinsically unstructured proteins. Some of the alternative structures are almost not superimposable. Which brings up the question: which of these structures is AF2 supposed to determine? Well, I can see that AF2 was actually tested on one such protein (see my comments in the previous thread). AF2 did not perform so well: it was ranked #43 [37]. The GDT was ~60. My very best wishes (talk) 14:54, 18 December 2020 (UTC)Reply
Why, yes, it is actually what prions are. 109.252.90.168 (talk) 14:14, 17 July 2021 (UTC)Reply
  • Based on papers currently in BioRxiv, i.e. [38] (the example with calmodulin) and [39], AF2 cannot reproduce the actual functional flexibility of proteins. The structural differences between several AF2-generated models seem to reflect the uncertainty of the method, not the actual motions. However, the results described here [40] seem to be more encouraging. More research is needed. My very best wishes (talk) 00:21, 15 November 2021 (UTC)Reply
  • The results in this paper are in line with those for calmodulin (above), i.e. the "intermediate structures" can be generated. Are they actually relevant? Who knows. But let's consider the GPCR (appearing in the paper) as an example. What is a good modeling/prediction program supposed to accomplish here? It is supposed to generate an active conformation for a complex with an agonist, the inactive conformation for a complex with an antagonist, an intermediate conformation in a complex with a partial agonist, etc. Obviously, AF2 cannot even attempt this, because it does not model small ligands at all (if the ligand were another protein, then it potentially could). As for the conformational transition between the active and inactive states, the authors also did not even try to generate it. Can such a path be generated using other tools? Yes, sure, using the Morph Server [41] from the Database of Molecular Motions. Can AF2 do this better? There is no proof or indication of that. My very best wishes (talk) 19:17, 4 December 2021 (UTC)Reply

Database of protein structures predicted by AlphaFold-2


So, here it is. For example, the model for CD247 would be that one. However, one of the biologically relevant complexes for this protein would be that one: [42]. This is a perfect illustration of why the protein folding problem has not been solved yet. Still, the results of prediction for the single chain are interesting and correct to a certain approximation (I assume that the N-terminal segment in the model by AlphaFold is a signal sequence). Another random example: [43]. Of course such a location of the water-soluble domain relative to the transmembrane helix is impossible. But the overall fold of the water-soluble domain is possibly correct, except that it should be a dimer like here: [44]. My very best wishes (talk) 19:40, 22 July 2021 (UTC)Reply

Dah, it is not solved yet; some proteins are being constructed by chaperones, so... Also, to quote an article on this, "for the third of proteins in the human body that don't have a structure at all until they bind with others" - not that it is necessarily true, LOL. 109.252.90.143 (talk) 11:44, 24 July 2021 (UTC)Reply
1/3 - yes [45], and my first link above is an example of such a protein, except for a single TM helix (AF2 correctly predicted the C-terminal part of this protein as unfolded). Chaperones - no, irrelevant; they do not make proteins, they prevent aggregation/misfolding. Quickly checking a few examples (as above) shows that AF2 does fantastic work, just as during the last CASP. My very best wishes (talk) 01:24, 27 July 2021 (UTC)Reply
Chaperones do accelerate folding too. That is just a hack, just like double-strand breaks in neurons, but yes, you are right: AlphaFold does that too, perfectly. Which is nuts; look into the #alphafold and #alphafold2 hashtags on Twitter, it is insane... 2A00:1370:812D:9F38:C540:F561:9BF1:F82B (talk) 20:14, 27 July 2021 (UTC)Reply
I see: [46]. Thanks! The unfolded parts and proteins are definitely a thing [47],[48], but this is fully expected. One should look at folded proteins instead. Someone is already working to set up a web server: [49]. Yes, this is a must. The main problem for the future is protein complexes. But I am sure they are working to solve it right now. My very best wishes (talk) 21:04, 27 July 2021 (UTC)Reply
The packing of domains in many multi-domain proteins is a mess. This is very much like a tangled rope [50], as could be expected. It seems they do not have anything to reasonably pack these domains or subunits. Some of that can be fixed by post-processing such entries, and someone probably will do just that (obviously, one cannot just take a model like that [51] and use it). The bottom line: many people will find this useful for something. My very best wishes (talk) 21:19, 27 July 2021 (UTC)Reply
And of course there is a problem of verification, because the CASP dataset was highly biased, with only a couple of transmembrane proteins, for example. Speaking of which, this protein model predicted by AF2 with very high confidence [52] would be highly unusual and seems to contradict the experimentally studied transmembrane topology of the protein [53]. But it is a physically possible structure, unlike the previous one [54], where one needs to delete all low-confidence regions. My very best wishes (talk) 21:08, 28 July 2021 (UTC)Reply
Or that "confident prediction": [55] (the Uniprot visualizer allows one to look at the model with side-chains). It simply has no tertiary structure. I am getting the impression that ab initio models of transmembrane alpha-helical proteins by AF2 are mostly junk. Now I can see why. AF2 simply does not do any real ab initio modeling. All it does is copy-paste fragments of structures from the PDB + refinement. It does well, though, if there are any similar structures. My very best wishes (talk) 21:30, 28 July 2021 (UTC)Reply
Complexes already work in ColabFold. "Set up a web server" - they already did it. Part of it is done in Google Colab, though. "The Uniprot visualizer allows one to look at the model with side-chains" - Molstar there is broken, though. Compare https://alphafold.ebi.ac.uk/entry/P0ADL6 to https://www.uniprot.org/uniprot/P0ADL6#structure. 2A00:1370:812D:9F38:3D3F:887F:E8C3:321C (talk) 21:45, 28 July 2021 (UTC)Reply
  • No, thanks - if that is how it does "docking" of transmembrane helices, by leaving huge empty spaces between them [56]. But let's consider something well studied, a potassium channel. Is that a reasonable/possible model of the monomeric subunit? No, of course not. It can only exist as a part of the tetrameric complex, from where it was in essence "copy-pasted" by AF2 (the blue, high-confidence regions; the yellow ones have indeed been predicted by AF2 - incorrectly). This is just an enormous advertisement that a problem was solved when it was not. I think all such ML methods are like a child who is trying to guess the final answer instead of actually solving the problem. And he is an excellent guesser. My very best wishes (talk) 14:34, 29 July 2021 (UTC)Reply
Alphafold DB does not have this hack, since the Google people missed that. Yet, at least. It does not try to find only the final stage; there are other stages too. Sigh. Both of these mistakes were made by Twitter people, I wonder why. You can even make a step-by-step video of folding; it was done even in the Nature paper. Valery Zapolodov (talk) 18:54, 29 July 2021 (UTC)Reply
Sure, the AF2 procedure includes a number of steps. No one says that it does not. But what "hack" are you talking about? None of the examples above is related to Google Colab. Those are official results for specific proteins, generated by the AF2 people and available through their own website and Uniprot. All these examples, including the last two, are not mistakes, but results that AF2 would be expected to generate by working as described in their publications - especially for the potassium channel. If you have questions about why some of these structures are incorrect or even physically impossible, I can explain in more detail. My very best wishes (talk) 22:25, 29 July 2021 (UTC)Reply
  • The bottom line: the announcement by AlphaFold [57] is misleading. It was hailed as a solution to the 50-year old protein folding problem... AlphaFold demonstrated that AI could accurately predict the shape of a protein. What shape? That shape [58]? While the structures of individual domains might be good approximations to their respective native structures (I have no idea how good), that "shape"/structure of the whole protein is certainly wrong, even physically impossible. Or that structure [59]? This is simply not a tertiary structure of a protein, although it is supposed to have one. Both proteins are supposed to be folded and represent more or less random examples of proteins. The first of them is an example of a multi-domain single-pass transmembrane protein with significant sequence homology to other proteins in the PDB; the second one is an example of the simplest polytopic transmembrane protein. My very best wishes (talk) 20:03, 1 August 2021 (UTC)Reply
  • "AF2 procedure includes a number of steps" Yes, that too. But I meant in actual folding. "But what "hack" are you talking about?" See ColabFold on the bottom of the article. "official results" they do not have homo- or hetero- or membranes. Also there is only 1 result there out of 5, since there are 5 NNs actually. Valery Zapolodov (talk) 10:19, 3 August 2021 (UTC)Reply
  • Any publications demonstrating how reliable such predictions of oligomers might be? (I do not have time for that, but it would be fun to check if it can reproduce different structures of the same homodimer, like 2jwa, 2n2a, 5ob4, or other such examples). I also have big concerns about the reliability of the main AF2 version, specifically with regard to multi-domain and transmembrane proteins (just a couple of such examples were present at the last CASP, and AF2 did not perform so well for them). Unfortunately, AF2 and its derivatives (such as the "hack") cannot be verified just using the currently available PDB structures, because AF2 generates models based on the current version of the entire PDB. To really test the method, one must download the current set of AF2 models and then check them against truly new protein folds that will appear in future releases of the PDB and that have already been predicted in the current AF2 release for the several genomes (this is basically a continuous CAFASP approach). I am sure someone will do just that. My very best wishes (talk) 14:53, 3 August 2021 (UTC)Reply
The BioRxiv submission is about peptide-protein docking, a specific problem different from protein-protein docking in general. It says: We show on a large non-redundant set of 162 peptide-protein complexes that peptide-protein interactions can indeed be modeled accurately. OK. But I assume that AF2 was trained on a large set of protein structures from the PDB that included these 162 complexes and/or homologs of these complexes? If so, this is not really a test. Five times - are you talking about AF2? In any case, what are the references about these 5 times? FoldIt is of course very different from AF2, and they did not claim that they solved the protein folding problem (sure they did not). Note that new folds are relatively rare. Which new protein folds do you think the PDB released during the last couple of weeks? My very best wishes (talk) 17:27, 4 August 2021 (UTC)Reply
Foldit protein: https://twitter.com/sokrypton/status/1415998861789831175?s=20 New PDB structure: https://mobile.twitter.com/CrollTristan/status/1420388062522134529
Same guy fixes a chain of a public structure that somebody got backwards, WITH almost no homology and even no perfect BLAST: https://mobile.twitter.com/CrollTristan/status/1422238150223597571!
Alphafold was not trained on complexes (though there were some artifacts in PDB that were a part of training, but that is like 2 or 3 examples).
And finally a way to do folding many times to fold it even without MSA or any homology! https://twitter.com/sokrypton/status/1422942404588486657?s=20
And even a way to introduce a fake MSA and homology to make folding faster and more accurate: https://mobile.twitter.com/tsuname/status/1423181524174659589 Valery Zapolodov (talk) 13:06, 5 August 2021 (UTC)Reply
Thank you for pointing out these discussions on Twitter, but I was asking about WP:RS/peer-reviewed publications. Obviously, there is nothing yet with regard to serious verification of the thousands of models which were just made available by AF. My very best wishes (talk) 14:47, 5 August 2021 (UTC)Reply
I think this will be WP:MEDRS already, but there is one not peer reviewed (besides my comments: there are actually 0 (not 29) unpredicted human protein genes; I hope the author will fix that): https://www.biorxiv.org/content/10.1101/2021.08.03.454980v1 Valery Zapolodov (talk) 18:10, 5 August 2021 (UTC)Reply
Whatever. The models are certainly verifiable, so we will see. I was especially surprised by that one [60]. It does look like a truly ab initio and unexpected prediction, not necessarily a contradiction to the study of TM E. coli proteins (linked here); the repeat [61] checks out in the model, and a number of other criteria as well. My very best wishes (talk) 01:36, 7 August 2021 (UTC)Reply
Apparently AlphaFold can be used to predict splice isoforms (that low-confidence part is a "wrong" exon of a gene, cool): https://mobile.twitter.com/MichaelTress/status/1418263811602227222 Valery Zapolodov (talk) 18:47, 7 August 2021 (UTC)Reply
If you want to include any content sourced to posts on Twitter or preprints on BioRxiv, it falls under WP:SELFPUB (it can be included as a WP:PRIMARY source if written by recognized experts; merely a postdoc probably would not be recognized as such). But I would rather not use such sources, especially on a significant subject, such as this one. My very best wishes (talk) 23:08, 7 August 2021 (UTC)Reply
So, per the discussion above (a continuous verification of AF2 using more or less novel folds/families just released by the PDB), here is an example: 7p54 (released today) versus [62] (high confidence, single domain, not an intertwined oligomer). While the model is very similar to the structure based on the sequence-independent superposition (which is definitely a great result!), only 18% of residues are correctly superimposed in the sequence-dependent manner using the SSM server (e.g., Glu 286 in the model should be superimposed with Glu 286 in the structure, etc.). This is a far cry from solving the protein folding problem. Such a model is hardly useful for any practical purposes (such as protein-protein or protein-ligand docking), although it might be useful for analysis of protein evolution. My very best wishes (talk) 03:40, 11 August 2021 (UTC)Reply
LOL, what? That is a homodimer, as you can read in the PDB (it says homo 2-mer). Those are predicted without any problems, including some crazy 8:1-mers. See thread https://mobile.twitter.com/ivangushchin/status/1423629505776738304 and this for 8:1 https://mobile.twitter.com/LindorffLarsen/status/1425581943173832709 (complex (TIM9:TIM10)3) Valery Zapolodov (talk) 06:09, 14 August 2021 (UTC)Reply
The post on Twitter is about a different protein (HISTIDINE KINASE NARQ). I am talking about the relatively poor prediction for the monomer (only 18% of coinciding residues); the model for the dimer would have the very same errors at best. "Relatively poor prediction" means that it was significantly worse than what AF2 demonstrated at the last CASP. That was the first case of a sufficiently novel "fold" of a TM protein released by the PDB after the release of the AF2 models. If I see something else later, I will post it here. My very best wishes (talk) 20:13, 15 August 2021 (UTC)Reply
"the model for the dimer would have very same errors at best" That is just wrong. In practice too. "worse than was demonstrated by AF2 on the last CASP14" But they used 5 differents NNs and select best, while here they use only 1. Valery Zapolodov (talk) 09:36, 18 August 2021 (UTC)Reply
P.S. There is also another issue: all regions with low-reliability prediction must be simply removed from these models as definitely wrong. My very best wishes (talk) 01:49, 25 August 2021 (UTC)Reply
Again. Not all complexes (imagine one homomer and 2 heteromers all together) are auto-assembled (https://www.sciencedirect.com/science/article/pii/S0005272815000304?via%3Dihub). Some are constructed by other proteins/ribozymes/proteribozymes. In this case the membrane is what is doing this. The DB is still a beta; it just does the dumb thing of taking a predicted gene and its exons/splice isoforms and getting the 3D protein structure. Obviously, there should be a Uniprot field that says what homomer it is or what other protein it is a heteromer with. Or you can just brute-force it. Alphafold can do it, no problem. Valery Zapolodov (talk) 10:57, 25 August 2021 (UTC)Reply
I can only comment on the models of monomers they publicly deposited. And in that regard, the proton ATP synthase (you linked to) is interesting. Here, AF2 generates structures of certain individual subunits (e.g. [63], this is subunit q in PDB entry 6b8h) that physically cannot exist as a stable 3D structure outside of the complex (this is simply not a tertiary structure of a protein, but a collection of non-interacting secondary structures arbitrarily arranged in space). I would say this is a serious flaw of the method. My very best wishes (talk) 19:27, 25 August 2021 (UTC)Reply
They publicly deposited the NN itself. Use it. The structure of the monomer says nothing. This is not a flaw, this is an advantage; it shows it is not just a dumb NN, it shows it cracked the algorithm. Valery Zapolodov (talk) 21:31, 26 August 2021 (UTC)Reply
Oh no, they deposited 3D models of monomers precisely because this is information to be used by other researchers. And no, having monomeric structures that cannot physically exist for the monomers is a very serious flaw. It shows that the method has little or nothing to do with actual protein folding, and that the structures of some (many?) individual monomers (like [64]) are useless by themselves. Yes, to fix it they must include the membrane (something that they do not do). My very best wishes (talk) 22:55, 26 August 2021 (UTC)Reply
Adding the membrane is going to be done, but that again requires a field in the Uniprot DB or another DB to automate the project. Please note that membrane proteins need an MSA to work. Without it, it will not work, since they usually make no sense outside of the membrane. Yet in the links above I even showed a homo-2-mer membrane protein working perfectly without the membrane. So... Valery Zapolodov (talk) 16:41, 28 August 2021 (UTC)Reply
If we are talking about NarQ (your link [65]), then I assume that the experimental structures of several NarQ dimers [66] were used for the training of AF2. So you get as an output exactly what you put into the black box as input. It does not mean AF2 can do ab initio docking, at least until this is demonstrated at CAPRI and in publications. Other than that, I agree with the post on Twitter. The issue is the existence of multiple structures, because there are significant conformational transitions. Can AF2 reproduce them all? No, hardly. My very best wishes (talk) 19:33, 8 September 2021 (UTC)Reply
"get as an output exactly what you put to the black box as input" The opposite. Due to how NN works you 100% will not get the same result (because there is chaos component introduced at every stage) but also if the convergent common algorithm would not have been found, you would not have any result at all. "Can AF2 reproduce them all? " It can. Also some news: Alphafold for DNA awareness, https://www.biorxiv.org/content/10.1101/2021.08.25.457661v1 and alphafold for RNA on the latest Science cover. https://www.science.org/doi/abs/10.1126/science.abe5650 Did not read yet. Valery Zapolodov (talk) 12:00, 9 September 2021 (UTC)Reply
Sure, it will not be exactly the same, but one must assume that the information about the multiple existing structures [67] is already there and used by AF2 for prediction, although not necessarily directly (it can be direct usage too, depending on how the method was constructed). That's why I talked about verification using only new structures above. It can??? Any RS to support such an assertion? Even the tweet above does not support it at all. Please see the cases noted in the previous thread [68] to get perspective. My very best wishes (talk) 16:09, 9 September 2021 (UTC)Reply
According to the official AlphaFold paper, no complexes were involved in NN generation. Yet, I will again say that there were some mistagged one-chain PDBs that were actually complexes. Yes, it can, but not in the most insane examples like prions - maybe in the presence of an actual other misfolded protein to start the prion misfolding further. "Any RS to support such assertion?" Sure; even per the official AlphaFold DB FAQ, you can get different alternative forms of the same protein just by running it multiple times and by changing the initial seed. You really need to read it accurately, and I will be bold: "Where a protein is known to have multiple conformations AlphaFold usually only produces one of them. The output conformation cannot be reliably controlled." Just like in real life, there is no way to reliably say what conformation it will get, but it can be controlled with a seed or by choosing a different NN out of the 5. In the case of prions, the one conformation happens only with a very low probability of triggering misfolding - be it due to some third-party component, another prion from an ill person, or misfolding that happens once in a century. Same in AlphaFold. Valery Zapolodov (talk) 11:38, 13 September 2021 (UTC)Reply
Meaning there are NO RS to support such an assertion. Sure, AF can generate alternative structures for such complexes, but are they correct? How often are they correct? What is the RMSD? Now, if the results are actually subjective, as you say (they depend on the "initial seed"), that makes community experiments, such as CASP/CAPRI, necessary. Thank you for the discussion, but I think we are straying from the subject. Overall, after looking at these models, AF reminds me of Google Translate: it does provide a rough structure/translation in most cases (this is a great achievement!), but it is indeed very rough, with mistakes. My very best wishes (talk) 18:26, 15 September 2021 (UTC)Reply
Insane. https://www.biorxiv.org/content/10.1101/2021.09.14.460228v1 https://www.biorxiv.org/content/10.1101/2021.09.30.462231v1 Google Translate is not open source, yet internally Google has a much better translator that already works like a human. Since last year, the quality of translation into English from any language is perfect. Valery Zapolodov (talk) 10:03, 3 October 2021 (UTC)Reply
Wow! But I would rather wait to see what they can show at the next Critical Assessment of Prediction of Interactions. My very best wishes (talk) 23:38, 12 October 2021 (UTC)Reply
Apparently ColabFold people cracked the alternative conformation problem, maybe including prions. https://twitter.com/sokrypton/status/1464278167456256016?s=20 Valery Zapolodov (talk) 22:52, 26 November 2021 (UTC)Reply
"Obviously, there should be a Uniprot field number that says what homomer it is or what other protein it is heteromer with. Or you can just bruteforce it." So, this paper is bruteforcing complexes, just as I said :) Very insane supplementary material paper, that describes what they did, they also used Alphafold and Rosettafold. https://www.science.org/doi/10.1126/science.abm4805 https://modelarchive.org/doi/10.5452/ma-bak-cepc Valery Zapolodov (talk) 11:45, 5 December 2021 (UTC)Reply

AlphaFold-Multimer

  • So, here it is [69], by the authors of AlphaFold2: For heteromeric interfaces we successfully predict the interface (DockQ ≥ 0.23) in 67% of cases, and produce high accuracy predictions (DockQ ≥ 0.8) in 23% of cases, an improvement of +25 and +11 percentage points over the flexible linker modification of AlphaFold respectively. For homomeric interfaces we successfully predict the interface in 69% of cases, and produce high accuracy predictions in 34% of cases, an improvement of +5 percentage points in both instances. [For the sake of clarity, the 34% are included in the 69%, hence 31% are unsuccessful predictions.] OK. This is probably a good performance, and it can be independently verified, since they promised to release their new source code. On the other hand, only 23% high-accuracy predictions for heteromers may not be so great. Still, even the previous iteration of the method (i.e. AF-2 as implemented in Google Colab) performed much better than other general docking methods, according to papers sitting in BioRxiv. My very best wishes (talk) 17:51, 22 October 2021 (UTC)Reply
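
For readers unfamiliar with the scale: DockQ runs from 0 to 1, and the two thresholds quoted above correspond to the standard quality bands of the DockQ paper (Basu & Wallner, 2016), roughly as in this sketch:

    def dockq_band(dockq: float) -> str:
        # Standard DockQ quality bands; "successful" above means >= 0.23
        # (acceptable or better), "high accuracy" means >= 0.80.
        if dockq >= 0.80:
            return "high"
        if dockq >= 0.49:
            return "medium"
        if dockq >= 0.23:
            return "acceptable"
        return "incorrect"
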
Yes, I saw it [70]. My very best wishes (talk) 04:24, 9 November 2021 (UTC)Reply
BTW, one of the key papers you mentioned above on protein complexes was just published [71], and it leads to this repository of the most reliably modeled complex structures. This is very interesting. Some of these complexes are definitely great, but that is when there is a similar/homologous complex in the PDB. Others (like that one) are just terrible and obviously incorrect. Others, like this or this, are very loosely packed and unstable, with nonregular loops running in the middle of the membrane; almost certainly also incorrect, or at best rather far from the native structures. Is it because they also used Rosetta rather than just "pure" AF? I have no idea. The individual subunits may be good, though, after removal of the disordered loops. My very best wishes (talk) 02:55, 20 November 2021 (UTC)Reply
So, speaking about that [72] particularly telling example: the bigger protein in the binary complex is autophagy-related protein 9 (like that one). It is indeed very loosely packed as a monomer, but this is because it forms an intertwined homo-trimer (see the 7jlo PDB entry). And that model shows it as a monomer (?) with another protein passing through the middle of it in the membrane? Let's just say this is not likely, at best. What were the authors thinking? They did not mention it in the paper. My very best wishes (talk) 03:37, 20 November 2021 (UTC)Reply
Well, those comments should have been made in peer review. :) BTW, that is actually brute-forcing proteins for complexes, as I mentioned. Ha. There is a very interesting commentary in the human NPC paper supplement, page 27, see https://www.biorxiv.org/content/10.1101/2021.10.26.465776v1.supplementary-material?versioned=true The human NPC one is still not published, though the yeast one was. See: https://mobile.twitter.com/jankosinski/status/1459101373895786497 for this paper https://www.science.org/doi/10.1126/science.abd9776 Valery Zapolodov (talk) 11:58, 5 December 2021 (UTC)Reply
P.S. There are two significant omissions by the authors of the Multimer article [73]. First, they apparently tested their method for docking of only individual domains - as in the examples they provide in their Table S1. The results for multi-domain proteins are on average significantly worse, in my experience. Secondly, they did not say anything about the physically impossible models with interpenetrating polypeptide chains. Multimer produces such models very frequently, unlike the original AF2 version. Here this issue is discussed [74],[75]. Try running something simple like this [76], and see what Multimer does. The old version does better in this example, but on average, yes, Multimer was probably an improvement, as the authors demonstrated. My very best wishes (talk) 20:16, 11 December 2021 (UTC)Reply

Alphafold decodes whole NPC


The nuclear pore complex is 15 times bigger than the eukaryotic ribosome (46+33 proteins and 4 rRNAs, I will remind you, while the rDNA for those was decoded only recently, when the full human genome was completed in May 2021). Also a Nature article (just for lulz - it has a picture of the NPC and that is all). https://doi.org/10.1101/2021.10.26.465776 Valery Zapolodov (talk) 19:18, 7 November 2021 (UTC)Reply

"Alphafold decodes whole NPC". No, that is not what the cited article say. It say: "A novel aspect of our work is that we use AI-based structure prediction programs AlphaFold and RoseTTAfold to model all atomic structures used for fitting to the EM maps and modeled the entire NPC scaffold without directly using any X-ray structures or homology models. Predicted atomic structures traditionally exhibited various inaccuracies, limiting their usage for detailed near-atomic model building in low-resolution EM maps."
It also say: "Not all subunit or domain combinations that we attempted to model with AI-based structure prediction led to structural models that were consistent with complementary data, emphasizing that experimental structure determination will still be required in the future for cases in which a priori knowledge remains sparse." My very best wishes (talk) 23:45, 14 November 2021 (UTC)Reply
Here is the bottom line. No, this is actually an electron microscopy-based model of the complex. Yes, AF2 has fantastic potential to facilitate model building using low-resolution EM data. My very best wishes (talk) 23:56, 14 November 2021 (UTC)Reply
The PDB + dynamics are not even released yet; they will be released on the 25th here https://www.embl.org/about/info/course-and-conference-office/events/tbp21-01/ Who knows what part of it is what, but AFAIK the model would be impossible without AlphaFold (Multimer + ColabFold), as would the dynamics of the final model. See: https://twitter.com/MSiggel/status/1453721315462975488?s=20 Valery Zapolodov (talk) 16:12, 17 November 2021 (UTC)Reply
I will check. Now I am becoming a fan of AF2. Look at this [77]. Or this - it looks like a self-translocation of a protein through an incompletely closed transmembrane beta-barrel. Obviously, this is just a poorly generated model, but perhaps it reflects something real? These two proteins are apparently translocases of the TIC/TOC complex and the MICOS complex. Can their structures be modeled by AF2 without EM data? My very best wishes (talk) 18:42, 18 November 2021 (UTC)Reply
PDB + dynamics are still not released! Wow. They are so slow, but the yeast one was published on 12 November. Valery Zapolodov (talk) 12:12, 11 December 2021 (UTC)Reply
They published it, and just like the T2T genome, it is a whole special edition of Science! Most of it is about AlphaFold 2 and ColabFold, and from other authors too! Nuts! https://www.science.org/toc/science/376/6598 7R5K and 7R5J are still classified, though. Valery Zapolodov (talk) 20:35, 11 June 2022 (UTC)Reply
So apparently, even with this latest model, only 90% of the NPC is decoded. Saw it on Twitter https://twitter.com/zaqlinguini/status/1570312373440741378?t=6pFEY5dEFg1iCBxnI6pJ6Q&s=19 109.252.170.50 (talk) 22:11, 12 December 2022 (UTC)Reply

AlphaFill database



  • What I think should be changed:

Old text:

The model only predicts the main peptide chain, not the structures of missing co-factors, metals, and co- and post-translational modifications. This can be a large oversight for a number of biologically-relevant systems:[1] between 50% and 70% of the structures of the human proteome are incomplete without covalently-attached glycans.[2] On the other hand, since the model is trained from PDB models often with these modifications attached, the predicted structure is "frequently consistent with the expected structure in the presence of ions or cofactors".[3]

new text:

The model only predicts the main peptide chain, not the structures of missing co-factors, metals, and co- and post-translational modifications. This limitation has been addressed in the AlphaFill database, which uses sequence similarity with experimentally determined structures of proteins to "transplant" ligands and co-factors to the AlphaFold models.[4] Pre-computed filled models are available for the "1 million" edition, but they can be computed on-the-fly for all other models in the AlphaFold database or for user structures. AlphaFill handles only monomeric ligands - e.g. it will not add peptides, carbohydrates, or nucleotides. This can be a large oversight for a number of biologically-relevant systems:[1] between 50% and 70% of the structures of the human proteome are incomplete without covalently-attached glycans.[5]


  • Why it should be changed:

The edits reflect new literature on the opening statement (COI: the peer-reviewed paper providing this information is co-authored by myself, which I did not perceive as a COI, and I made the edit myself; I still do not see a COI in referencing my own work when it is directly relevant to an existing article and extends a specific scientific statement, based on clear peer-reviewed scientific press, and when anyone can indeed see that I am the author of the paper if they wish. I fail to perceive this as a COI; it's like telling me that my publications should not have my name on them.)

  • References supporting the possible change (format using the "cite" button):

[6]

Perrakis (talk) 14:44, 30 November 2022 (UTC)Reply
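
As an aside for readers: the "transplant" operation described in the proposed text boils down to a superposition-and-copy step. A minimal self-contained sketch (my own paraphrase, not the AlphaFill code; it assumes matched NumPy arrays of binding-site CA coordinates from the homolog and from the model):

    import numpy as np

    def superpose(P, Q):
        # Kabsch fit: rotation R and translation t mapping points P onto Q.
        Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - Pc).T @ (Q - Qc))
        d = np.sign(np.linalg.det(U @ Vt))          # avoid reflections
        R = U @ np.diag([1.0, 1.0, d]) @ Vt
        return R, Qc - Pc @ R

    def transplant(ligand_xyz, homolog_site_ca, model_site_ca):
        # Superpose the homolog's binding-site CA atoms onto the model's,
        # then apply the same transform to copy the ligand into the model.
        R, t = superpose(homolog_site_ca, model_site_ca)
        return ligand_xyz @ R + t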

References

  1. ^ a b Cite error: The named reference auto was invoked but never defined (see the help page).
  2. ^ An, Hyun Joo; Froehlich, John W; Lebrilla, Carlito B (2009-10-01). "Determination of glycosylation sites and site-specific heterogeneity in glycoproteins". Current Opinion in Chemical Biology. Analytical Techniques/Mechanisms. 13 (4): 421–426. doi:10.1016/j.cbpa.2009.07.022. ISSN 1367-5931. PMC 2749913. PMID 19700364.
  3. ^ a b Cite error: The named reference DB-Limitation was invoked but never defined (see the help page).
  4. ^ Hekkelman, M.L., de Vries, I., Joosten, R.P. et al. AlphaFill: enriching AlphaFold models with ligands and cofactors. Nat Methods (2022). https://doi.org/10.1038/s41592-022-01685-y
  5. ^ An, Hyun Joo; Froehlich, John W; Lebrilla, Carlito B (2009-10-01). "Determination of glycosylation sites and site-specific heterogeneity in glycoproteins". Current Opinion in Chemical Biology. Analytical Techniques/Mechanisms. 13 (4): 421–426. doi:10.1016/j.cbpa.2009.07.022. ISSN 1367-5931. PMC 2749913. PMID 19700364.
  6. ^ Hekkelman, M.L., de Vries, I., Joosten, R.P. et al. AlphaFill: enriching AlphaFold models with ligands and cofactors. Nat Methods (2022). https://doi.org/10.1038/s41592-022-01685-y
Any such addition should not include an external link, and it should be based on independent secondary sources rather than a primary citation. We need indication that other people have taken notice of your database. - MrOllie (talk) 14:54, 30 November 2022 (UTC)Reply

Alphafold once again helps with IFT-B like with NPC


This is something very important. https://www.embopress.org/doi/full/10.15252/embj.2022112440 109.252.170.50 (talk) 22:06, 12 December 2022 (UTC)Reply

Yesterday Deepmind deprecated old weights (only for Multimer)


So AF 2.3.0 is trained on many more proteins than before and has some new hacks. https://github.com/deepmind/alphafold/commit/9b18d6a966b9b08b2095dd77d8414a68d3d31fc9

We have fine-tuned new AlphaFold-Multimer weights using identical model architecture but a new training cutoff of 2021-09-30. Previously released versions of AlphaFold and AlphaFold-Multimer were trained using PDB structures with a release date before 2018-04-30, a cutoff date chosen to coincide with the start of the 2018 CASP13 assessment. The new training cutoff represents ~30% more data to train AlphaFold and more importantly includes much more data on large protein complexes. The new training cutoff includes 4× the number of electron microscopy structures and in aggregate twice the number of large structures (more than 2,000 residues)[^1]. Due to the significant increase in the number of large structures, we are also able to increase the size of training crops (subsets of the structure used to train AlphaFold) from 384 to 640 residues. These new AlphaFold-Multimer models are expected to be substantially more accurate on large protein complexes even though we use the same model architecture and training methodology as our previously released AlphaFold-Multimer paper. 109.252.170.50 (talk) 22:17, 12 December 2022 (UTC)Reply

CASP15 presentations, all (top 5) winners except for RNA are AF2-based


Ground truth https://predictioncenter.org/casp15/TARGETS_PDB/ (not all, no RNA)

https://predictioncenter.org/casp15/doc/presentations/

Video will also be available later. No (!) AF 2.3 with new weights (some are here: https://github.com/deepmind/alphafold/blob/9b18d6a966b9b08b2095dd77d8414a68d3d31fc9/docs/casp15_predictions.zip)

Openfold participated too. 109.252.170.50 (talk) 22:25, 12 December 2022 (UTC)Reply

Alphafold fixes ancient DNA problem by reading proteins


By directly reading proteins from a Genyornis newtoni egg. https://www.pnas.org/doi/10.1073/pnas.2109326119 109.252.170.50 (talk) 22:29, 12 December 2022 (UTC)Reply

"Responses" section


Almost all of the "Responses" section corresponds to a short period of time in late 2020 after AlphaFold 2 was unveiled but before technical details were given and the code made open-source. This is too focused, speculative, and presents relatively little interest nowadays. I propose to remove almost all of this section, and to update the "AlphaFold 2, 2020" subsection with recent content. Alenoach (talk) 06:42, 3 May 2024 (UTC)Reply

Yes, I think this section might be removed, or at least shortened and rewritten. My very best wishes (talk) 21:27, 12 May 2024 (UTC)Reply
Thanks for the response. By the way, are you sure that the section "Protein folding problem" needed to be removed? My impression is that it was a useful introduction for readers that don't know what protein folding is, and to explain the "historical" context and the methods that were used before AlphaFold. I don't think this section is really making the confusion that AlphaFold would be simulating the process of protein folding as suggested in the edit summary. Do you agree? Alenoach (talk) 22:13, 12 May 2024 (UTC)Reply
OK. I self-reverted. 15:24, 13 May 2024 (UTC)
I shortened the section. Alenoach (talk) 22:59, 12 May 2024 (UTC)
Looks good. My very best wishes (talk) 15:24, 13 May 2024 (UTC)
I think it might still be useful to mention that there were these concerns when AF2 was first released (including perhaps the Spiegel quote), but that with the release of the code and with the experience of use, those criticisms have largely gone away. At the moment I find the latest version a bit unbalanced with respect to the initial reaction: it cites the most enthusiastic puff pieces, but not those with reservations. I think we would actually make the positivity about AF2 more credible by noting that there were some initial reservations, but that they have not lasted. (Apart from any that have?) Jheald (talk) 21:03, 13 May 2024 (UTC)
Sure, you can add back some content on the reservations, as long as it's interesting and understandable for readers, and not too outdated. Alenoach (talk) 21:15, 13 May 2024 (UTC)
I agree this section sounds like an advertisement (it should not be), but Alenoach did good work by removing parts that are definitely outdated after the just-released AF-3 and the two latest versions of AF-Multimer (available in ColabFold), which are significantly better than the first Multimer version (the one with frequent atom overlaps that was used here).
This is complicated and should be considered separately for monomeric proteins, protein complexes, and complexes with ligands:
  1. Monomeric structures. One central issue was nicely illustrated by Figure 1 in this article ("The good, the bad and the ugly"): about 30% of the sequence is predicted with a low confidence score and should be discarded (a "dark matter"; see fig. 2 in the same paper). Note these are monomeric structures. Even though the paper was published in 2021, this remains the main issue of AF (and probably of the proteins themselves), for monomers and complexes alike. A minimal sketch of such confidence filtering is shown after this list.
  2. Complexes. That issue is worse for complexes, since they are typically predicted with lower protein-protein confidence scores ("ipTM" scores, see here). In practice the ipTM scores are usually in the low or medium range, and the precision for large complexes is mediocre, meaning that different sets of residues from the two subunits interact in the experimental (correct) structure and in the structure modeled by AFM, as one can judge by calculating the well-known DockQ score [78]. Some people try to generate as many divergent models as possible using different AFM versions (e.g. ptm, v2 and v3 from ColabFold) and to select the best of them using available experimental data, including experimental structures of partial complexes.
  3. Complexes with non-protein ligands. There is currently a single article in Nature by the authors of AF-3; there are no independent assessments. My very best wishes (talk) 22:04, 13 May 2024 (UTC)
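
On point 1 above: the confidence filtering can be done mechanically, because AlphaFold writes the per-residue pLDDT into the B-factor column of its output PDB files. A minimal Biopython sketch, assuming the commonly used cutoff of 70 (the file name is hypothetical):

    from Bio.PDB import PDBParser

    def confident_residues(pdb_path, plddt_cutoff=70.0):
        """Keep only residues whose pLDDT is at or above the cutoff.
        AlphaFold stores per-residue pLDDT in the B-factor column, so
        reading the B-factor of any atom in a residue gives its pLDDT."""
        structure = PDBParser(QUIET=True).get_structure("model", pdb_path)
        kept = []
        for residue in structure.get_residues():
            atoms = list(residue.get_atoms())
            if atoms and atoms[0].get_bfactor() >= plddt_cutoff:
                kept.append(residue)
        return kept

    # Usage (hypothetical file name):
    # high_conf = confident_residues("ranked_0.pdb")
    # print(len(high_conf), "residues at pLDDT >= 70")
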
P.S. Right now there are many new publications beyond the CASPs that assess various AF versions for predicting mutations, multiple conformations, complexes, etc. But it would take a lot of time to include them on the page. Overall, AF can produce a lot of interesting structures that look real, but one must verify each specific model very carefully through extensive use of all available experimental data on mutations, structures, functions, complexes, etc. See Hallucination (artificial intelligence); AF may have the same issue [79]. My very best wishes (talk) 19:16, 13 May 2024 (UTC)

How AF2 works?


Another thing that would be good to see updated is the discussion on how AF2 works.

The present text in the article was written just after AF2 was first unveiled, when details were limited. In time AF2 became much better understood, so that by the time the source code was released it was considered to be substantially as expected. IMO indicating how AF2 achieved what it did is really important, but our article doesn't really reflect the understanding that developed; in some regards what we currently give is at best unhelpful, at worst substantially misleading.

Alas, I don't have any time I can put into this at the moment, but IMO this is another section that could do with a substantial review / rework / rewrite (with sources). Thx, Jheald (talk) 19:11, 14 May 2024 (UTC)

I see, you are talking about the section AlphaFold#AlphaFold_2,_2020. I do not think it is misleading, but you are very welcome to improve it. I just checked AF-3. The server is easy to use and very fast (they have made the ColabFold Google Colab server defunct). There is no dramatic improvement compared to the latest AF2 version for large complexes, only small improvements in some cases. Cases like the homodimer of [80] are still a mess. The set of ligands is ridiculously insufficient; the set of PTMs is better, although also rather incomplete. Possibly a breakthrough for RNA and DNA complexes, but I did not check those. My very best wishes (talk) 15:34, 20 May 2024 (UTC)
Honestly, you seem to have much more expertise in this domain than most of us, so it's a bit hard to really make sense of your comments. I have the same impression as Jheald that this part needs to be updated, but I also don't feel knowledgeable enough. If you are motivated, feel free to modify the article directly, while keeping it relatively easy to understand for readers (mostly well-educated outsiders, I guess). No obligation of course, and thanks for your work. Alenoach (talk) 18:28, 26 May 2024 (UTC)
To avoid WP:OR, we need to borrow/summarize a simplified explanation from sources such as [81], [82] (the first few paragraphs), [83]. But ultimately this is just a "black box"; a user will not have the slightest idea why exactly such and such structure has been generated. The result does depend on the quality of the input, such as the multiple sequence alignment (MSA) (because correlations in the MSA play a role; see the toy illustration below), and on the existence of similar structures in the PDB, which affect the parameters obtained during training of the model. Perhaps I will add something later. My very best wishes (talk) 21:19, 27 May 2024 (UTC)
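
To illustrate why correlations in the MSA matter (this is only the classical covariation idea in toy form, not AF2's actual mechanism, which learns such couplings internally): columns of an alignment that co-vary across homologs hint that the corresponding residues are in 3D contact. A minimal mutual-information sketch in Python:

    import math
    from collections import Counter

    def column_mi(msa, i, j):
        """Mutual information between columns i and j of a toy MSA
        (a list of equal-length aligned sequences). Strongly co-varying
        columns classically hint that residues i and j are in contact."""
        n = len(msa)
        pairs = Counter((s[i], s[j]) for s in msa)
        pi = Counter(s[i] for s in msa)
        pj = Counter(s[j] for s in msa)
        mi = 0.0
        for (a, b), c in pairs.items():
            p_ab = c / n
            mi += p_ab * math.log(p_ab / ((pi[a] / n) * (pj[b] / n)))
        return mi

    # Toy alignment: columns 0 and 2 co-vary perfectly; column 1 does not.
    msa = ["AKE", "AKE", "ARE", "GKD", "GKD", "GRD"]
    print(round(column_mi(msa, 0, 2), 3))  # 0.693 (= ln 2, perfect coupling)
    print(round(column_mi(msa, 0, 1), 3))  # 0.0 (independent columns)
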
P.S. It does provide a significant improvement for protein complexes in cases when some proteins in the complex (such as Tyrosine-protein kinase Lck) have important PTMs, such as lipidation. One must include such PTMs when modeling with AF3. Overall, this is a fantastic modeling tool, but one that requires significant biological expertise to interpret the results of calculations, verification of every model through comparison with experimental data, sampling (e.g. by also calculating with AF2), sometimes modeling of protein complexes in pieces (e.g. complexes of large transmembrane Tyr kinases), etc. My very best wishes (talk) 16:00, 22 May 2024 (UTC)