👇 Deploy a biomarker at your peril!

Many think that a patient is either eligible for a drug or not based on whether they are positive for a biomarker.

It's not as simple as that.

Some patients cannot undergo a biopsy or afford the test.

Every test has false positives and false negatives.

Intratumoral heterogeneity may mean the sampled tissue lacks the biomarker even though it is present elsewhere in the tumor.

Some samples have poor quality or low tumor purity.

For all these reasons (and more), deploying a biomarker excludes more patients than just the truly biomarker-negative population.

So how can you ensure it's worth it to implement a biomarker?

Think carefully about the prevalence of the biomarker in the population you're testing and the response rate in BOTH the biomarker-positive and -negative groups.

For example, if 85-90% of the population is positive for the biomarker and the response rate in the biomarker-negative population is not zero, then it probably isn't worth implementing a selective biomarker.

Implementing the biomarker may deny the drug to more patients who would have responded than offering it to all comers would.
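The trade-off above can be sketched with simple arithmetic. This is a minimal, hypothetical illustration: the prevalence, response rates, and test sensitivity below are made-up numbers for demonstration, not data from any real biomarker or drug.

```python
def responders_denied(n, prevalence, rr_pos, rr_neg, sensitivity):
    """Estimate responders denied the drug per n patients when a
    selective biomarker test gates access (hypothetical model).

    prevalence  - fraction of patients truly biomarker-positive
    rr_pos      - response rate among biomarker-positive patients
    rr_neg      - response rate among biomarker-negative patients
    sensitivity - fraction of true positives the test detects
    """
    true_pos = n * prevalence
    true_neg = n * (1 - prevalence)
    # True positives the test misses (false negatives) are denied the drug,
    # as are all test-negative patients who would still have responded.
    missed_pos = true_pos * (1 - sensitivity)
    return missed_pos * rr_pos + true_neg * rr_neg

# Per 1000 patients: 87.5% prevalence, 40% vs 15% response, 90% sensitivity.
denied = responders_denied(1000, 0.875, 0.40, 0.15, 0.90)
print(round(denied, 1))  # -> 53.8 responders per 1000 denied by gating
```

With these illustrative numbers, gating on the biomarker denies roughly 54 would-be responders per 1000 patients, which is the kind of figure to weigh against the toxicity and cost avoided in true non-responders.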

You could even consider a complementary diagnostic (eg PD-L1 for Opdivo) where you don't need the biomarker to benefit from the drug, but PD-L1 testing can be informative if other therapies are available.

So folks, before you narrow your patient pool, consider the subtle intricacies of biomarker prevalence and its true impact on clinical utility.
