A few days back I stumbled across a new publication:
Curious, I scanned through the abstract, to find the following sentence: “… substrates of six enzymes from three different superfamilies were deconstructed into 41 overlapping fragments that were tested for activity or binding. Surprisingly, even those fragments containing the key reactive group had little activity, and most fragments did not bind measurably, …”. As a biochemist who spends a lot of time doing protein engineering and directed evolution on enzymes, I was not surprised by this statement: I routinely observe dramatic losses in activity even with substrates fairly similar to the native one.
I must have been bored with what I was doing at the time, because instead of moving on to the next paper, I kept on reading (and thinking) about it. Why did the authors bother not only to state but also to experimentally prove what so many people consider obvious? The answer is also obvious: what are the odds of isolating good enzyme inhibitors from large, fragment-based libraries of chemical compounds if not even carefully crafted substrate analogues are likely to work? It seems contradictory, yet this is arguably the most widespread approach to drug design.
Efforts are being made to classify enzymes by their function and substrate properties1. At the same time, concepts such as enzyme promiscuity2,3,4* and conformational selection5,6 caution us about how enzymes recognise (or fail to recognise) different substrates. Although on this point my perspective differs from the authors’, I take the point that rational substrate dissection can help inform inhibitor design. However, it seems to me that considering the properties of each enzyme (class) and the specifics of the reaction mechanism is a better approach than substrate deconstruction.
Easier said than done.
* Disclaimer: the last review is mine