PROJECT #16497 RESEARCH FOR CANCER
FOLDING PERFORMANCE PROFILE

PROJECT SUMMARY

In drug discovery, particularly for cancer, maximizing the exploration of protein states is a useful strategy: providing new protein states and conformations for drug design methods to target increases the likelihood of finding a potential binder or inhibitor.

However, in many cases a new state that is "useful for design" (i.e., one distinct enough to be worth targeting to identify novel drugs) requires a great deal of sampling or simulation.

Sometimes even exascale resources like Folding@home are not enough! Adaptive methods are very powerful here, but they require prior knowledge of the system, or a guess about which protein features are worth exploring adaptively, and that guess may not always turn out to be correct.

Another promising strategy, explored in these projects, is to "accelerate" the simulations.

By broadly applying "boosters" to the simulation, we effectively "flatten" the energy landscape of a protein's conformations, allowing the protein to visit states more easily than it normally would.
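One common booster of this kind (used here purely as an illustration; the specific boosters tested in these projects are not named above) is a Gaussian accelerated MD (GaMD) style boost, which adds dV = 0.5*k*(E - V)^2 whenever the potential energy V falls below a threshold E. The toy sketch below applies such a boost to a one-dimensional double-well potential and shows how the barrier between the two wells shrinks:

import numpy as np

def double_well(x):
    """Toy 1D potential with two minima separated by a barrier (kcal/mol)."""
    return 2.5 * (x**2 - 1.0)**2

def gamd_boost(v, e, k):
    """GaMD-style boost potential: dV = 0.5*k*(E - V)^2 wherever V < E, else 0."""
    return np.where(v < e, 0.5 * k * (e - v)**2, 0.0)

x = np.linspace(-1.8, 1.8, 721)
v = double_well(x)

# Standard GaMD "lower bound" parameter choice: E = Vmax and
# k = k0 / (Vmax - Vmin), with 0 < k0 <= 1 (here the maximum, k0 = 1).
v_min, v_max = v.min(), v.max()
k0 = 1.0
e = v_max
k = k0 / (v_max - v_min)

v_boosted = v + gamd_boost(v, e, k)

i_barrier = np.argmin(np.abs(x))   # top of the barrier, at x = 0
barrier = v[i_barrier] - v.min()
barrier_boosted = v_boosted[i_barrier] - v_boosted.min()
print(f"barrier: {barrier:.2f} kcal/mol unboosted, {barrier_boosted:.2f} kcal/mol boosted")

On the boosted surface the wells are raised more than the barrier top, so the effective barrier shrinks and the walker hops between states far more often.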

Besides letting us discover new states from which to seed further simulations, just as adaptive sampling does, these boosters have specific technical and physical properties that let us infer something about a new state's "accessibility" (i.e., where it sits on the landscape). In projects 16497–16499 we test three such boosters to determine how well boosted simulations work for our purposes.
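A useful property of this kind of boost (again a generic sketch, not necessarily the exact reweighting scheme used in these projects) is that the bias dV applied to each frame is known, so the unbiased landscape can be estimated afterwards by reweighting frames by exp(dV/kT). States that keep substantial population after reweighting sit low on the true landscape and are therefore more "accessible". A minimal reweighting sketch, with stand-in data in place of a real trajectory:

import numpy as np

KT = 0.593  # kB*T in kcal/mol at ~298 K

def reweighted_free_energy(cv, boost, bins=50):
    """Histogram a collective variable, weighting each frame by exp(dV/kT)
    to undo the boost, and return the unbiased free-energy profile F(cv)."""
    weights = np.exp(boost / KT)              # frames that were boosted more count more
    counts, edges = np.histogram(cv, bins=bins, weights=weights)
    prob = counts / counts.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    with np.errstate(divide="ignore"):
        f = -KT * np.log(prob)                # F = -kT ln p
    return centers, f - f[np.isfinite(f)].min()

# Hypothetical inputs: a collective variable (e.g. an activation-loop distance)
# and the per-frame boost energy dV, both recorded during the boosted simulation.
rng = np.random.default_rng(0)
cv = rng.normal(loc=1.0, scale=0.3, size=10_000)   # stand-in trajectory data
boost = rng.uniform(0.0, 2.0, size=10_000)         # stand-in boost energies

centers, free_energy = reweighted_free_energy(cv, boost)
print(free_energy[:5])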

Here we apply this approach to MET kinase, a protein drug target in many cancers such as non-small-cell lung carcinoma.

MET kinase is targeted by the drug crizotinib but often evolves resistance to it, rendering the drug ineffective.

With our boosted simulations we hope to observe never-before-seen states of MET!

PROJECT INFO

Manager(s): Sukrit Singh

Institution: Memorial Sloan Kettering Cancer Center

Project URL: http://sukritsingh.github.io/

PROJECT WORK UNIT SUMMARY

Atoms: 59,897

Core: OPENMM_22

Status: Public
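For context, the OPENMM_22 core runs each work unit as an OpenMM simulation on the donor's GPU. The sketch below shows roughly what such a setup looks like in OpenMM's Python API; the file names, force field, and run length are placeholders for illustration only and are not taken from this project.

# Minimal OpenMM (>= 7.6) setup sketch; 'met_kinase.pdb' and all settings
# are placeholders, not the actual project 16497 inputs.
from openmm import LangevinMiddleIntegrator, unit
from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds

pdb = PDBFile("met_kinase.pdb")
forcefield = ForceField("amber14-all.xml", "amber14/tip3pfb.xml")

system = forcefield.createSystem(
    pdb.topology,
    nonbondedMethod=PME,
    nonbondedCutoff=1.0 * unit.nanometer,
    constraints=HBonds,
)

integrator = LangevinMiddleIntegrator(
    300 * unit.kelvin, 1.0 / unit.picosecond, 0.002 * unit.picoseconds
)

simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()
simulation.step(500_000)  # ~1 ns at a 2 fs timestep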

PROJECT FOLDING PPD AVERAGES BY GPU

PPDDB data as of Saturday, 01 April 2023 12:14:51

Rank | Model Name | Folding@Home Identifier | Make | GPU Model | PPD Average | Points/WU Average | WUs/Day Average | WU Time Average
1 | GeForce RTX 3080 Ti | GA102 [GeForce RTX 3080 Ti] | Nvidia | GA102 | 4,480,435 | 261,932 | 17.11 | 1 hrs 24 mins
2 | GeForce RTX 3080 | GA102 [GeForce RTX 3080] | Nvidia | GA102 | 4,183,514 | 245,590 | 17.03 | 1 hrs 25 mins
3 | GeForce RTX 2080 Ti | TU102 [GeForce RTX 2080 Ti] M 13448 | Nvidia | TU102 | 4,076,443 | 252,568 | 16.14 | 1 hrs 29 mins
4 | GeForce RTX 3080 Lite Hash Rate | GA102 [GeForce RTX 3080 Lite Hash Rate] | Nvidia | GA102 | 3,949,215 | 248,052 | 15.92 | 2 hrs 30 mins
5 | GeForce RTX 3090 Ti | GA102 [GeForce RTX 3090 Ti] | Nvidia | GA102 | 3,927,275 | 256,064 | 15.34 | 2 hrs 34 mins
6 | GeForce RTX 3090 | GA102 [GeForce RTX 3090] | Nvidia | GA102 | 3,855,916 | 250,909 | 15.37 | 2 hrs 34 mins
7 | GeForce RTX 3070 Ti | GA104 [GeForce RTX 3070 Ti] | Nvidia | GA104 | 3,575,899 | 246,947 | 14.48 | 2 hrs 39 mins
8 | GeForce RTX 3070 Lite Hash Rate | GA104 [GeForce RTX 3070 Lite Hash Rate] | Nvidia | GA104 | 3,401,147 | 243,958 | 13.94 | 2 hrs 43 mins
9 | GeForce RTX 3070 | GA104 [GeForce RTX 3070] | Nvidia | GA104 | 3,166,536 | 237,224 | 13.35 | 2 hrs 48 mins
10 | GeForce RTX 2080 Ti Rev. A | TU102 [GeForce RTX 2080 Ti Rev. A] M 13448 | Nvidia | TU102 | 3,154,049 | 237,223 | 13.30 | 2 hrs 48 mins
11 | GeForce RTX 3070 Mobile / Max-Q | GA104M [GeForce RTX 3070 Mobile / Max-Q] | Nvidia | GA104M | 2,877,682 | 231,670 | 12.42 | 2 hrs 56 mins
12 | GeForce RTX 3080 Mobile / Max-Q 8GB/16GB | GA104M [GeForce RTX 3080 Mobile / Max-Q 8GB/16GB] | Nvidia | GA104M | 2,630,747 | 225,084 | 11.69 | 2 hrs 3 mins
13 | GeForce RTX 2060 | TU106 [Geforce RTX 2060] | Nvidia | TU106 | 2,474,693 | 209,404 | 11.82 | 2 hrs 2 mins
14 | GeForce RTX 2080 Super | TU104 [GeForce RTX 2080 SUPER] | Nvidia | TU104 | 2,393,545 | 216,133 | 11.07 | 2 hrs 10 mins
15 | GeForce GTX 1080 Ti | GP102 [GeForce GTX 1080 Ti] 11380 | Nvidia | GP102 | 2,390,952 | 212,719 | 11.24 | 2 hrs 8 mins
16 | RTX A5000 | GA102GL [RTX A5000] | Nvidia | GA102GL | 2,333,332 | 215,177 | 10.84 | 2 hrs 13 mins
17 | GeForce RTX 3060 Ti Lite Hash Rate | GA104 [GeForce RTX 3060 Ti Lite Hash Rate] | Nvidia | GA104 | 2,061,114 | 189,765 | 10.86 | 2 hrs 13 mins
18 | GeForce RTX 2070 SUPER | TU104 [GeForce RTX 2070 SUPER] 8218 | Nvidia | TU104 | 1,971,291 | 202,443 | 9.74 | 2 hrs 28 mins
19 | GeForce RTX 3060 | GA104 [GeForce RTX 3060] | Nvidia | GA104 | 1,701,605 | 193,562 | 8.79 | 3 hrs 44 mins
20 | GeForce RTX 2060 Super | TU106 [GeForce RTX 2060 SUPER] | Nvidia | TU106 | 1,682,038 | 187,593 | 8.97 | 3 hrs 41 mins
21 | GeForce RTX 3060 Lite Hash Rate | GA106 [GeForce RTX 3060 Lite Hash Rate] | Nvidia | GA106 | 1,499,415 | 170,370 | 8.80 | 3 hrs 44 mins
22 | GeForce RTX 3060 Mobile / Max-Q | GA106M [GeForce RTX 3060 Mobile / Max-Q] | Nvidia | GA106M | 1,454,258 | 166,501 | 8.73 | 3 hrs 45 mins
23 | Quadro RTX 4000 | TU104GL [Quadro RTX 4000] | Nvidia | TU104GL | 1,320,488 | 177,837 | 7.43 | 3 hrs 14 mins
24 | GeForce GTX 1070 Ti | GP104 [GeForce GTX 1070 Ti] 8186 | Nvidia | GP104 | 1,220,137 | 169,964 | 7.18 | 3 hrs 21 mins
25 | GeForce GTX 1080 | GP104 [GeForce GTX 1080] 8873 | Nvidia | GP104 | 1,195,852 | 169,781 | 7.04 | 3 hrs 24 mins
26 | GeForce RTX 2070 | TU106 [GeForce RTX 2070] | Nvidia | TU106 | 1,172,950 | 149,375 | 7.85 | 3 hrs 3 mins
27 | GeForce GTX 1660 SUPER | TU116 [GeForce GTX 1660 SUPER] | Nvidia | TU116 | 1,086,531 | 165,492 | 6.57 | 4 hrs 39 mins
28 | GeForce RTX 3050 | GA106 [Geforce RTX 3050] | Nvidia | GA106 | 1,083,729 | 166,517 | 6.51 | 4 hrs 41 mins
29 | GeForce RTX 2060 | TU104 [GeForce RTX 2060] | Nvidia | TU104 | 1,020,740 | 163,532 | 6.24 | 4 hrs 51 mins
30 | GeForce GTX 1660 Mobile | TU116M [GeForce GTX 1660 Mobile] | Nvidia | TU116M | 1,006,248 | 162,084 | 6.21 | 4 hrs 52 mins
31 | GeForce GTX 1070 | GP104 [GeForce GTX 1070] 6463 | Nvidia | GP104 | 1,000,580 | 158,053 | 6.33 | 4 hrs 47 mins
32 | Tesla M40 | GM200GL [Tesla M40] 6844 | Nvidia | GM200GL | 901,670 | 157,028 | 5.74 | 4 hrs 11 mins
33 | GeForce GTX 1060 6GB | GP106 [GeForce GTX 1060 6GB] 4372 | Nvidia | GP106 | 638,669 | 141,746 | 4.51 | 5 hrs 20 mins
34 | P104-100 | GP104 [P104-100] | Nvidia | GP104 | 624,600 | 139,072 | 4.49 | 5 hrs 21 mins
35 | GeForce GTX 1660 | TU116 [GeForce GTX 1660] | Nvidia | TU116 | 511,672 | 121,136 | 4.22 | 6 hrs 41 mins
36 | GeForce GTX 1650 Mobile / Max-Q | TU117M [GeForce GTX 1650 Mobile / Max-Q] | Nvidia | TU117M | 453,747 | 125,271 | 3.62 | 7 hrs 38 mins
37 | GeForce GTX 980 | GM204 [GeForce GTX 980] 4612 | Nvidia | GM204 | 397,699 | 121,206 | 3.28 | 7 hrs 19 mins
38 | GeForce GTX 1650 | TU117 [GeForce GTX 1650] | Nvidia | TU117 | 388,830 | 118,458 | 3.28 | 7 hrs 19 mins
39 | GeForce GTX 950 | GM206 [GeForce GTX 950] 1572 | Nvidia | GM206 | 213,667 | 96,936 | 2.20 | 11 hrs 53 mins
40 | Quadro P1000 | GP107GL [Quadro P1000] | Nvidia | GP107GL | 186,861 | 93,098 | 2.01 | 12 hrs 57 mins
41 | Quadro K2200 | GM107GL [Quadro K2200] | Nvidia | GM107GL | 117,604 | 80,027 | 1.47 | 16 hrs 20 mins
42 | Quadro M2000 | GM206GL [Quadro M2000] | Nvidia | GM206GL | 87,721 | 84,525 | 1.04 | 23 hrs 8 mins
43 | GeForce GT 1030 | GP108 [GeForce GT 1030] | Nvidia | GP108 | 75,096 | 74,032 | 1.01 | 24 hrs 40 mins
44 | Quadro K1200 | GM107GL [Quadro K1200] | Nvidia | GM107GL | 52,483 | 60,601 | 0.87 | 28 hrs 43 mins
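
The columns above are related by simple arithmetic: WUs/day is roughly 24 hours divided by the average WU time, and PPD is roughly points per WU multiplied by WUs per day. A quick sanity check against the top (RTX 3080 Ti) row, using only the values reported in the table:

# Sanity check of the table's arithmetic using the RTX 3080 Ti row above.
points_per_wu = 261_932
wus_per_day = 17.11
wu_time_hours = 1 + 24 / 60        # "1 hrs 24 mins"

estimated_wus_per_day = 24 / wu_time_hours
estimated_ppd = points_per_wu * wus_per_day

print(f"WUs/day from WU time: {estimated_wus_per_day:.2f} (table: {wus_per_day})")
print(f"PPD from points x WUs/day: {estimated_ppd:,.0f} (table: 4,480,435)")

The small discrepancies come from the fact that each column is an independently computed average.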

PROJECT FOLDING PPD AVERAGES BY CPU BETA

PPDDB data as of Saturday, 01 April 2023 12:14:51

Rank | Project | CPU Model | Logical Processors (LP) | PPD-PLP (Avg PPD per 1 LP) | All LP-PPD (Estimated) | Make