PROJECT #16499 CANCER RESEARCH
FOLDING PERFORMANCE PROFILE

PROJECT SUMMARY

In drug discovery, particularly for cancer, maximizing state exploration is a useful and relatively novel strategy: providing new protein states and conformations for drug design methods to target increases the likelihood that a potential binder and inhibitor will be found.

However, in many cases a new state that is "useful for design" (i.e., one distinct enough to be worth targeting to identify novel drugs) requires a lot of sampling and simulation.

Sometimes even an exascale computer like Folding@home is not enough! Adaptive methods are very powerful here, but they have the drawback of requiring prior knowledge of the system, or guessing which protein features are worth adaptively exploring along, and that guess may not always turn out to be correct.
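To make that drawback concrete, here is a minimal sketch of one common count-based flavor of adaptive sampling (an assumed illustration, not necessarily the scheme used by any Folding@home project): you first have to discretize the trajectories into states built from hand-picked features, and only then can you preferentially seed new simulations from the least-visited states.

import numpy as np

def pick_seed_states(state_assignments, n_seeds=10, rng=None):
    """Return state indices to restart simulations from, favoring the
    least-visited states seen so far (count-based adaptive sampling)."""
    rng = rng or np.random.default_rng()
    states, counts = np.unique(state_assignments, return_counts=True)
    # Weight each state inversely to how often it has been sampled.
    weights = 1.0 / counts
    weights = weights / weights.sum()
    return rng.choice(states, size=n_seeds, p=weights)

# Toy example: 5,000 trajectory frames assigned to 20 discrete states.
# The quality of those state definitions is exactly the "guess" that
# adaptive methods depend on.
assignments = np.random.default_rng(0).integers(0, 20, size=5000)
print(pick_seed_states(assignments, n_seeds=5))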

Another promising strategy, explored in these projects, is to "accelerate" the simulations.

By broadly applying "boosters" to the simulation, we effectively "flatten" the energy landscape of a protein's conformations, allowing the protein to visit states more easily than it normally would.

Alongside the ability to discover new states from which we can seed further simulations, just as with adaptive sampling, these boosters have specific technical and physical properties that allow us to infer something about a new state's "accessibility" (i.e., where it sits on the landscape). In projects 16497–16499 we test three such boosters, accelerating our simulations to see how well boosted simulations work for our purposes.
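As a rough illustration of what a booster does, the sketch below uses a Gaussian accelerated MD (GaMD)-style harmonic boost; this is an assumed example only, since the specific boosters tested in 16497–16499 are not described here. The boost raises low-energy (deep) regions, compressing the energy range, and the per-frame reweighting factor both recovers unbiased statistics and hints at how accessible a newly found state was on the original landscape.

import numpy as np

def boost_energy(V, E, k):
    """Harmonic boost dV = 0.5 * k * (E - V)^2, applied wherever V < E.
    Deep wells receive the largest boost, flattening the landscape."""
    return np.where(V < E, 0.5 * k * (E - V) ** 2, 0.0)

def reweighting_factor(dV, temperature=300.0):
    """Per-frame weight exp(dV / kT) used to recover unbiased statistics;
    larger boosts imply the state sat deeper (less accessible) originally."""
    kT = 0.0019872041 * temperature  # gas constant in kcal/mol/K times T
    return np.exp(dV / kT)

# Toy 1-D double-well landscape (kcal/mol).
x = np.linspace(-1.5, 1.5, 301)
V = 5.0 * (x ** 2 - 1.0) ** 2

E = V.max()                    # boost everything below the current maximum
k = 1.0 / (V.max() - V.min())  # GaMD-style choice that preserves state ordering
dV = boost_energy(V, E, k)
V_boosted = V + dV

print(f"energy range before boost: {V.max() - V.min():.2f} kcal/mol")
print(f"energy range after boost:  {V_boosted.max() - V_boosted.min():.2f} kcal/mol")
print(f"largest reweighting factor: {reweighting_factor(dV).max():.1f}")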

Here we apply this approach to MET kinase, a protein drug target in many cancers such as non-small-cell lung carcinoma.

MET kinase is targeted by the drug crizotinib, but the kinase often evolves resistance, rendering the drug ineffective.

With our boosted simulations we hope to observe never-before-seen states of MET!

PROJECT INFO

Manager(s): Sukrit Singh

Institution: Memorial Sloan Kettering Cancer Center

Project URL: http://sukritsingh.github.io/

PROJECT WORK UNIT SUMMARY

Atoms: 59,897

Core: OPENMM_22

Status: Public

PROJECT FOLDING PPD AVERAGES BY GPU

PPDDB data as of Monday, 27 June 2022 11:45:25

Rank | Model Name | Folding@Home Identifier | Make | GPU Model | PPD Average | Points/WU Average | WUs/Day Average | WU Time Average
1 | GeForce RTX 3090 | GA102 [GeForce RTX 3090] | Nvidia | GA102 | 4,596,323 | 373,491 | 12.31 | 2 hrs 57 mins
2 | GeForce RTX 3080 Ti | GA102 [GeForce RTX 3080 Ti] | Nvidia | GA102 | 4,596,158 | 374,745 | 12.26 | 2 hrs 57 mins
3 | GeForce RTX 2080 Ti | TU102 [GeForce RTX 2080 Ti] M 13448 | Nvidia | TU102 | 4,413,920 | 371,857 | 11.87 | 2 hrs 1 mins
4 | GeForce RTX 3080 | GA102 [GeForce RTX 3080] | Nvidia | GA102 | 4,317,469 | 355,207 | 12.15 | 2 hrs 58 mins
5 | GeForce RTX 3080 Lite Hash Rate | GA102 [GeForce RTX 3080 Lite Hash Rate] | Nvidia | GA102 | 4,040,719 | 352,525 | 11.46 | 2 hrs 6 mins
6 | GeForce RTX 3090 Ti | GA102 [GeForce RTX 3090 Ti] | Nvidia | GA102 | 4,006,694 | 361,061 | 11.10 | 2 hrs 10 mins
7 | GeForce RTX 2080 Ti Rev. A | TU102 [GeForce RTX 2080 Ti Rev. A] M 13448 | Nvidia | TU102 | 3,500,813 | 342,886 | 10.21 | 2 hrs 21 mins
8 | GeForce RTX 3070 Ti | GA104 [GeForce RTX 3070 Ti] | Nvidia | GA104 | 3,499,245 | 341,522 | 10.25 | 2 hrs 21 mins
9 | GeForce RTX 3070 Lite Hash Rate | GA104 [GeForce RTX 3070 Lite Hash Rate] | Nvidia | GA104 | 3,263,454 | 328,782 | 9.93 | 2 hrs 25 mins
10 | GeForce RTX 3070 | GA104 [GeForce RTX 3070] | Nvidia | GA104 | 3,132,518 | 329,445 | 9.51 | 3 hrs 31 mins
11 | GeForce RTX 3070 Mobile / Max-Q | GA104M [GeForce RTX 3070 Mobile / Max-Q] | Nvidia | GA104M | 2,887,406 | 323,910 | 8.91 | 3 hrs 42 mins
12 | GeForce RTX 3080 Mobile / Max-Q 8GB/16GB | GA104M [GeForce RTX 3080 Mobile / Max-Q 8GB/16GB] | Nvidia | GA104M | 2,694,843 | 318,717 | 8.46 | 3 hrs 50 mins
13 | GeForce RTX 2080 Super | TU104 [GeForce RTX 2080 SUPER] | Nvidia | TU104 | 2,621,784 | 313,207 | 8.37 | 3 hrs 52 mins
14 | GeForce RTX 3060 Ti Lite Hash Rate | GA104 [GeForce RTX 3060 Ti Lite Hash Rate] | Nvidia | GA104 | 2,536,057 | 308,704 | 8.22 | 3 hrs 55 mins
15 | GeForce GTX 1080 Ti | GP102 [GeForce GTX 1080 Ti] 11380 | Nvidia | GP102 | 2,532,483 | 309,545 | 8.18 | 3 hrs 56 mins
16 | GeForce RTX 2060 | TU106 [Geforce RTX 2060] | Nvidia | TU106 | 2,378,844 | 299,695 | 7.94 | 3 hrs 1 mins
17 | RTX A5000 | GA102GL [RTX A5000] | Nvidia | GA102GL | 2,128,195 | 294,955 | 7.22 | 3 hrs 20 mins
18 | GeForce RTX 3060 | GA104 [GeForce RTX 3060] | Nvidia | GA104 | 1,785,117 | 275,421 | 6.48 | 4 hrs 42 mins
19 | GeForce RTX 2060 Super | TU106 [GeForce RTX 2060 SUPER] | Nvidia | TU106 | 1,784,270 | 272,338 | 6.55 | 4 hrs 40 mins
20 | GeForce RTX 3060 Lite Hash Rate | GA106 [GeForce RTX 3060 Lite Hash Rate] | Nvidia | GA106 | 1,683,213 | 272,792 | 6.17 | 4 hrs 53 mins
21 | Quadro RTX 4000 | TU104GL [Quadro RTX 4000] | Nvidia | TU104GL | 1,446,034 | 257,347 | 5.62 | 4 hrs 16 mins
22 | GeForce RTX 2070 | TU106 [GeForce RTX 2070] | Nvidia | TU106 | 1,425,646 | 251,883 | 5.66 | 4 hrs 14 mins
23 | GeForce GTX 1080 | GP104 [GeForce GTX 1080] 8873 | Nvidia | GP104 | 1,308,859 | 253,231 | 5.17 | 5 hrs 39 mins
24 | GeForce GTX 980 Ti | GM200 [GeForce GTX 980 Ti] 5632 | Nvidia | GM200 | 1,291,492 | 246,931 | 5.23 | 5 hrs 35 mins
25 | GeForce RTX 2060 | TU104 [GeForce RTX 2060] | Nvidia | TU104 | 1,177,543 | 240,051 | 4.91 | 5 hrs 54 mins
26 | GeForce GTX 1660 SUPER | TU116 [GeForce GTX 1660 SUPER] | Nvidia | TU116 | 1,146,022 | 243,319 | 4.71 | 5 hrs 6 mins
27 | GeForce RTX 3050 | GA106 [Geforce RTX 3050] | Nvidia | GA106 | 1,082,075 | 220,202 | 4.91 | 5 hrs 53 mins
28 | Quadro RTX 6000/8000 | TU102GL [Quadro RTX 6000/8000] | Nvidia | TU102GL | 943,379 | 223,194 | 4.23 | 6 hrs 41 mins
29 | GeForce GTX 1070 | GP104 [GeForce GTX 1070] 6463 | Nvidia | GP104 | 902,239 | 224,899 | 4.01 | 6 hrs 59 mins
30 | Tesla M40 | GM200GL [Tesla M40] 6844 | Nvidia | GM200GL | 868,796 | 217,702 | 3.99 | 6 hrs 1 mins
31 | GeForce GTX 1060 6GB | GP106 [GeForce GTX 1060 6GB] 4372 | Nvidia | GP106 | 860,948 | 217,146 | 3.96 | 6 hrs 3 mins
32 | P104-100 | GP104 [P104-100] | Nvidia | GP104 | 614,441 | 193,613 | 3.17 | 8 hrs 34 mins
33 | GeForce GTX 1660 | TU116 [GeForce GTX 1660] | Nvidia | TU116 | 567,622 | 197,677 | 2.87 | 8 hrs 21 mins
34 | GeForce GTX 1060 3GB | GP106 [GeForce GTX 1060 3GB] 3935 | Nvidia | GP106 | 543,245 | 185,007 | 2.94 | 8 hrs 10 mins
35 | GeForce GTX 1650 | TU117 [GeForce GTX 1650] | Nvidia | TU117 | 449,994 | 174,030 | 2.59 | 9 hrs 17 mins
36 | Quadro P1000 | GP107GL [Quadro P1000] | Nvidia | GP107GL | 193,126 | 130,001 | 1.49 | 16 hrs 9 mins
37 | GeForce GTX 750 Ti | GM107 [GeForce GTX 750 Ti] 1389 | Nvidia | GM107 | 142,525 | 118,705 | 1.20 | 20 hrs 59 mins
38 | Quadro K2200 | GM107GL [Quadro K2200] | Nvidia | GM107GL | 102,465 | 115,854 | 0.88 | 27 hrs 8 mins
39 | GeForce GT 1030 | GP108 [GeForce GT 1030] | Nvidia | GP108 | 85,275 | 99,656 | 0.86 | 28 hrs 3 mins
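
As a quick sanity check on how the columns above relate, the PPD average is roughly the points-per-WU average multiplied by the WUs-per-day average (the two averages are computed independently, so the match is only approximate). The small sketch below reproduces the GeForce RTX 3090 row to within a fraction of a percent, using figures copied from the table.

# PPD ~= (points per work unit) x (work units per day).
# Figures taken from the GeForce RTX 3090 row of the table above.
points_per_wu = 373_491
wus_per_day = 12.31

estimated_ppd = points_per_wu * wus_per_day
print(f"estimated PPD: {estimated_ppd:,.0f}")  # ~4,597,674 vs the reported 4,596,323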

PROJECT FOLDING PPD AVERAGES BY CPU BETA

PPDDB data as of Monday, 27 June 2022 11:45:25

Rank | CPU Model | Logical Processors (LP) | PPD-PLP (Avg PPD per 1 LP) | ALL LP-PPD (Estimated) | Make