PROJECT #16499 RESEARCH FOR CANCER
FOLDING PERFORMANCE PROFILE

PROJECT SUMMARY

In drug discovery, particularly for cancer, maximizing the exploration of protein states is a useful and relatively novel strategy: providing new protein states and conformations for drug design methods to target increases the likelihood that a potential binder and inhibitor will be found.

However, in many cases a new state that is "useful for design" (i.e., distinct enough to be worth targeting in the search for novel drugs) requires a great deal of sampling or simulation to reach.

Sometimes even an exascale computer like Folding@home is not enough! Adaptive sampling methods are very powerful here, but they have a drawback: they require prior knowledge of the system, or a guess about which protein features are worth exploring adaptively, and that guess may not turn out to be right.
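
To make the adaptive idea concrete, here is a minimal, hypothetical sketch of one common flavor of adaptive seeding ("least counts"): bin the frames seen so far along some chosen feature and restart new simulations from the least-visited regions. The feature choice, bin count, and function names below are illustrative assumptions, not the specific protocol used in these projects.

    import numpy as np

    def least_counts_seeds(feature, n_bins=20, n_seeds=5, rng=None):
        """Pick restart frames from the least-visited regions of a 1-D feature.

        feature : per-frame value of some chosen collective variable
                  (a distance, dihedral, etc. -- the "guess" mentioned above).
        Returns indices of frames to use as seeds for the next round.
        """
        rng = np.random.default_rng(rng)
        edges = np.histogram_bin_edges(feature, bins=n_bins)
        bin_ids = np.clip(np.digitize(feature, edges) - 1, 0, n_bins - 1)
        counts = np.bincount(bin_ids, minlength=n_bins)

        # rank the non-empty bins from least to most visited
        visited = np.flatnonzero(counts > 0)
        rarest = visited[np.argsort(counts[visited])]

        seeds = []
        for b in rarest[:n_seeds]:
            seeds.append(int(rng.choice(np.flatnonzero(bin_ids == b))))
        return seeds

    # toy usage: most frames pile up near feature ~ 0, a few excursions near 3
    feature = np.concatenate([np.random.normal(0.0, 0.2, 950),
                              np.random.normal(3.0, 0.2, 50)])
    print(least_counts_seeds(feature))  # indices drawn mostly from rare regions

If the chosen feature does not actually separate the interesting states, the seeds add little, which is exactly the drawback described above.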

Another promising strategy, explored in these projects, is to "Accelerate" the simulations.

By broadly applying "boosters" to the simulation, we effectively "flatten" the energy landscape of a protein's conformations, allowing the protein to visit states more easily than it normally would.
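
As a rough illustration of what "flattening" means, one widely used form of boost (from accelerated molecular dynamics) raises every part of the landscape that lies below a chosen threshold energy E, with the deepest basins raised the most, so the barriers between states shrink. The sketch below applies that formula to a toy one-dimensional double well; the threshold and alpha values are arbitrary, and this is not necessarily the exact booster used in projects 16497–16499.

    import numpy as np

    def toy_landscape(x):
        # two basins separated by a barrier (arbitrary energy units)
        return (x**2 - 1.0)**2 + 0.2 * x

    def amd_style_boost(v, threshold, alpha):
        # boost dV = (E - V)^2 / (alpha + E - V) wherever V < E, else 0
        return np.where(v < threshold,
                        (threshold - v)**2 / (alpha + threshold - v),
                        0.0)

    x = np.linspace(-1.3, 1.3, 400)
    v = toy_landscape(x)
    v_boosted = v + amd_style_boost(v, threshold=1.0, alpha=0.5)

    # barrier measured from the deepest well to the ridge between the wells
    print(f"original barrier: {v.max() - v.min():.2f}")              # ~1.2
    print(f"boosted barrier:  {v_boosted.max() - v_boosted.min():.2f}")  # ~0.4

Because the ridge sits near the threshold while the wells sit well below it, the boosted surface keeps the same states but makes hopping between them far easier.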

Beyond letting us discover new states from which we can seed further simulations, just as with adaptive sampling, these boosters have specific technical and physical properties that let us infer something about a new state's "accessibility" (i.e., where it sits on the energy landscape). In projects 16497–16499 we test three such boosters and assess how well boosted simulations work for our purposes.
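
The "accessibility" inference works because the boost added at every frame is known, so each frame's unbiased weight can be recovered and a newly discovered state can be placed back on the original landscape. Below is a minimal, hypothetical sketch of the simplest such exponential reweighting estimator (in practice, cumulant or Gaussian-approximation estimators are often preferred because the exponential average is noisy); the trajectory, state labels, and boost energies are made up for illustration.

    import numpy as np

    KT = 0.593  # kcal/mol at roughly 298 K

    def reweighted_populations(state_ids, boost_energy, n_states):
        """Recover unbiased state populations from a boosted trajectory.

        state_ids    : per-frame discrete state assignment (0 .. n_states-1)
        boost_energy : per-frame bias dV added by the booster (kcal/mol)
        Each frame is reweighted by exp(+dV / kT) to undo the bias.
        """
        w = np.exp(boost_energy / KT)
        pops = np.array([w[state_ids == s].sum() for s in range(n_states)])
        return pops / pops.sum()

    # hypothetical data: the native basin (state 0) received large boosts,
    # the newly discovered state (state 1) received small ones
    state_ids = np.array([0, 0, 0, 0, 0, 0, 1, 1])
    boost = np.array([2.4, 2.6, 2.5, 2.3, 2.7, 2.5, 0.4, 0.3])

    p = reweighted_populations(state_ids, boost, n_states=2)
    dG = -KT * np.log(p[1] / p[0])  # free-energy gap: how accessible the new state is
    print(p, f"dG(new vs. native) ~ {dG:.1f} kcal/mol")

A small dG means the new state is genuinely easy to reach on the unbiased landscape; a large dG means it was visited mostly thanks to the boost.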

Here we apply this approach to MET kinase, a drug target in many cancers such as non-small-cell lung carcinoma.

MET kinase is targeted by the drug crizotinib but often evolves resistance against the drug, rendering it ineffective.

With our boosted simulations we hope to observe never-before-seen states of MET!

PROJECT INFO

Manager(s): Sukrit Singh

Institution: Memorial Sloan Kettering Cancer Center

Project URL: http://sukritsingh.github.io/

PROJECT WORK UNIT SUMMARY

Atoms: 59,897

Core: OPENMM_22

Status: Public

PROJECT FOLDING PPD AVERAGES BY GPU

PPDDB data as of Sunday, 04 December 2022 18:13:27

Rank | Model Name | Folding@Home Identifier | Make | GPU Model | PPD Average | Points/WU Average | WUs/Day Average | WU Time Average
1 | GeForce RTX 3080 Ti | GA102 [GeForce RTX 3080 Ti] | Nvidia | GA102 | 4,691,792 | 375,393 | 12.50 | 2 hrs 55 mins
2 | GeForce RTX 3090 | GA102 [GeForce RTX 3090] | Nvidia | GA102 | 4,263,592 | 363,025 | 11.74 | 2 hrs 3 mins
3 | GeForce RTX 2080 Ti | TU102 [GeForce RTX 2080 Ti] M 13448 | Nvidia | TU102 | 4,112,924 | 358,352 | 11.48 | 2 hrs 5 mins
4 | GeForce RTX 3090 Ti | GA102 [GeForce RTX 3090 Ti] | Nvidia | GA102 | 4,006,694 | 361,061 | 11.10 | 2 hrs 10 mins
5 | GeForce RTX 3080 Lite Hash Rate | GA102 [GeForce RTX 3080 Lite Hash Rate] | Nvidia | GA102 | 3,662,456 | 333,339 | 10.99 | 2 hrs 11 mins
6 | GeForce RTX 3070 Ti | GA104 [GeForce RTX 3070 Ti] | Nvidia | GA104 | 3,610,837 | 346,196 | 10.43 | 2 hrs 18 mins
7 | GeForce RTX 2080 Ti Rev. A | TU102 [GeForce RTX 2080 Ti Rev. A] M 13448 | Nvidia | TU102 | 3,598,017 | 346,681 | 10.38 | 2 hrs 19 mins
8 | GeForce RTX 3080 | GA102 [GeForce RTX 3080] | Nvidia | GA102 | 3,594,526 | 336,072 | 10.70 | 2 hrs 15 mins
9 | GeForce RTX 3070 Lite Hash Rate | GA104 [GeForce RTX 3070 Lite Hash Rate] | Nvidia | GA104 | 3,200,045 | 327,659 | 9.77 | 2 hrs 27 mins
10 | GeForce RTX 3070 | GA104 [GeForce RTX 3070] | Nvidia | GA104 | 3,160,652 | 331,405 | 9.54 | 3 hrs 31 mins
11 | GeForce RTX 3070 Mobile / Max-Q | GA104M [GeForce RTX 3070 Mobile / Max-Q] | Nvidia | GA104M | 2,887,406 | 323,910 | 8.91 | 3 hrs 42 mins
12 | GeForce RTX 3080 Mobile / Max-Q 8GB/16GB | GA104M [GeForce RTX 3080 Mobile / Max-Q 8GB/16GB] | Nvidia | GA104M | 2,676,096 | 309,904 | 8.64 | 3 hrs 47 mins
13 | GeForce RTX 3060 Ti Lite Hash Rate | GA104 [GeForce RTX 3060 Ti Lite Hash Rate] | Nvidia | GA104 | 2,672,156 | 314,042 | 8.51 | 3 hrs 49 mins
14 | GeForce RTX 2080 Super | TU104 [GeForce RTX 2080 SUPER] | Nvidia | TU104 | 2,519,113 | 308,820 | 8.16 | 3 hrs 57 mins
15 | GeForce RTX 3060 Ti | GA104 [GeForce RTX 3060 Ti] | Nvidia | GA104 | 2,446,708 | 307,992 | 7.94 | 3 hrs 1 mins
16 | GeForce GTX 1080 Ti | GP102 [GeForce GTX 1080 Ti] 11380 | Nvidia | GP102 | 2,385,851 | 303,105 | 7.87 | 3 hrs 3 mins
17 | GeForce RTX 2060 | TU106 [Geforce RTX 2060] | Nvidia | TU106 | 2,378,844 | 299,695 | 7.94 | 3 hrs 1 mins
18 | RTX A5000 | GA102GL [RTX A5000] | Nvidia | GA102GL | 2,128,195 | 294,955 | 7.22 | 3 hrs 20 mins
19 | Tesla P100 16GB | GP100GL [Tesla P100 16GB] 9340 | Nvidia | GP100GL | 2,038,553 | 289,749 | 7.04 | 3 hrs 25 mins
20 | GeForce RTX 2060 Super | TU106 [GeForce RTX 2060 SUPER] | Nvidia | TU106 | 1,829,992 | 276,590 | 6.62 | 4 hrs 38 mins
21 | GeForce RTX 3060 | GA104 [GeForce RTX 3060] | Nvidia | GA104 | 1,785,117 | 275,421 | 6.48 | 4 hrs 42 mins
22 | GeForce RTX 3060 Lite Hash Rate | GA106 [GeForce RTX 3060 Lite Hash Rate] | Nvidia | GA106 | 1,698,251 | 273,473 | 6.21 | 4 hrs 52 mins
23 | GeForce RTX 2070 SUPER | TU104 [GeForce RTX 2070 SUPER] 8218 | Nvidia | TU104 | 1,454,429 | 254,255 | 5.72 | 4 hrs 12 mins
24 | Quadro RTX 4000 | TU104GL [Quadro RTX 4000] | Nvidia | TU104GL | 1,364,753 | 251,191 | 5.43 | 4 hrs 25 mins
25 | GeForce RTX 2070 | TU106 [GeForce RTX 2070] | Nvidia | TU106 | 1,319,167 | 251,111 | 5.25 | 5 hrs 34 mins
26 | GeForce GTX 1080 | GP104 [GeForce GTX 1080] 8873 | Nvidia | GP104 | 1,285,169 | 251,830 | 5.10 | 5 hrs 42 mins
27 | GeForce RTX 2060 Mobile / Max-Q | TU106M [GeForce RTX 2060 Mobile / Max-Q] | Nvidia | TU106M | 1,244,364 | 244,801 | 5.08 | 5 hrs 43 mins
28 | GeForce GTX 1660 SUPER | TU116 [GeForce GTX 1660 SUPER] | Nvidia | TU116 | 1,188,441 | 244,288 | 4.86 | 5 hrs 56 mins
29 | GeForce RTX 2060 | TU104 [GeForce RTX 2060] | Nvidia | TU104 | 1,177,543 | 240,051 | 4.91 | 5 hrs 54 mins
30 | GeForce GTX 1070 Ti | GP104 [GeForce GTX 1070 Ti] 8186 | Nvidia | GP104 | 1,131,420 | 250,395 | 4.52 | 5 hrs 19 mins
31 | Geforce RTX 3050 | GA106 [Geforce RTX 3050] | Nvidia | GA106 | 1,082,075 | 220,202 | 4.91 | 5 hrs 53 mins
32 | GeForce GTX 1070 | GP104 [GeForce GTX 1070] 6463 | Nvidia | GP104 | 1,004,899 | 228,517 | 4.40 | 5 hrs 27 mins
33 | GeForce GTX Titan X | GM200 [GeForce GTX Titan X] 6144 | Nvidia | GM200 | 995,482 | 239,160 | 4.16 | 6 hrs 46 mins
34 | Quadro RTX 6000/8000 | TU102GL [Quadro RTX 6000/8000] | Nvidia | TU102GL | 943,379 | 223,194 | 4.23 | 6 hrs 41 mins
35 | Tesla M40 | GM200GL [Tesla M40] 6844 | Nvidia | GM200GL | 869,104 | 217,965 | 3.99 | 6 hrs 1 mins
36 | GeForce GTX 980 | GM204 [GeForce GTX 980] 4612 | Nvidia | GM204 | 804,975 | 211,196 | 3.81 | 6 hrs 18 mins
37 | GeForce GTX 980 Ti | GM200 [GeForce GTX 980 Ti] 5632 | Nvidia | GM200 | 792,733 | 204,561 | 3.88 | 6 hrs 12 mins
38 | GeForce GTX 1060 6GB | GP106 [GeForce GTX 1060 6GB] 4372 | Nvidia | GP106 | 757,643 | 207,234 | 3.66 | 7 hrs 34 mins
39 | P104-100 | GP104 [P104-100] | Nvidia | GP104 | 614,441 | 193,613 | 3.17 | 8 hrs 34 mins
40 | GeForce GTX 1660 | TU116 [GeForce GTX 1660] | Nvidia | TU116 | 567,622 | 197,677 | 2.87 | 8 hrs 21 mins
41 | GeForce GTX 1060 3GB | GP106 [GeForce GTX 1060 3GB] 3935 | Nvidia | GP106 | 543,245 | 185,007 | 2.94 | 8 hrs 10 mins
42 | GeForce GTX 1650 | TU117 [GeForce GTX 1650] | Nvidia | TU117 | 449,994 | 174,030 | 2.59 | 9 hrs 17 mins
43 | GeForce GTX 950 | GM206 [GeForce GTX 950] 1572 | Nvidia | GM206 | 211,193 | 131,259 | 1.61 | 15 hrs 55 mins
44 | Quadro P1000 | GP107GL [Quadro P1000] | Nvidia | GP107GL | 149,753 | 119,445 | 1.25 | 19 hrs 9 mins
45 | GeForce GTX 750 Ti | GM107 [GeForce GTX 750 Ti] 1389 | Nvidia | GM107 | 128,052 | 120,869 | 1.06 | 23 hrs 39 mins
46 | GeForce GT 1030 | GP108 [GeForce GT 1030] | Nvidia | GP108 | 111,139 | 108,705 | 1.02 | 23 hrs 28 mins
47 | Quadro K2200 | GM107GL [Quadro K2200] | Nvidia | GM107GL | 102,465 | 115,854 | 0.88 | 27 hrs 8 mins
48 | Quadro K620 | GM107GL [Quadro K620] | Nvidia | GM107GL | 57,762 | 87,255 | 0.66 | 36 hrs 15 mins
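
As a reading aid for the table above, the columns are related by simple arithmetic: the PPD average is roughly the points earned per work unit multiplied by the work units completed per day, and the average WU time is roughly 24 hours divided by WUs per day. A quick sketch using the rank-1 row (the table's figures are averaged and rounded independently in the PPD database, so they will not match these estimates exactly):

    # consistency check using the rank-1 GeForce RTX 3080 Ti row above
    points_per_wu = 375_393   # Points/WU Average
    wus_per_day = 12.50       # WUs/Day Average

    estimated_ppd = points_per_wu * wus_per_day   # ~4.69 million, close to 4,691,792
    estimated_wu_hours = 24 / wus_per_day         # average hours spent per work unit

    print(f"estimated PPD:     {estimated_ppd:,.0f}")
    print(f"estimated WU time: {estimated_wu_hours:.2f} h")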

PROJECT FOLDING PPD AVERAGES BY CPU (BETA)

PPDDB data as of Sunday, 04 December 2022 18:13:27

Rank | CPU Model | Logical Processors (LP) | PPD per LP (average PPD for 1 LP) | All-LP PPD (estimated) | Make