You Need to Reduce Duplicate References in Your Medical Literature Monitoring: Here’s Why

August 24, 2020

Too much information. In our everyday lives it’s an irritation, at worst. But in medical literature monitoring, it’s a major issue. How do you cut through the noise to find the reference that really matters? In our paper Better Relevance, we discussed how precision search can cut the volume of adverse event alerts by 30–50%. But this alone doesn’t solve the information overload problem for pharmacovigilance (PV) professionals. Teams still have to deal with deduplication. And because of the huge volume of sources and references in medical literature – and the different ways in which they are indexed – managing duplicate references is a serious headache in the medical literature monitoring process.


Our estimates suggest that on average, one-third of references retrieved in literature monitoring are duplicates; and 40% of references refer to more than one drug. This puts more pressure on already-stretched teams. And it increases the risk of inconsistent assessments. For example, the same reference may be reviewed several times for ICSRs, aggregate reports and safety signals – or one reference could be sent multiple times for case processing. Either way, duplicates are the start of a growing snowball of unnecessary work. They also create risk: obfuscating the regulatory clock start date, prompting inconsistent review and therefore compliance issues, and skewing aggregate reports.

The ultimate goal is to take note of each duplicate reference while ensuring that only one unique and relevant reference goes through to be reviewed and processed. Achieving this demands a combination of best practice and smart technology.

Using multi-database searches can remove many duplicate records from across multiple sources. And ensuring that alerts – automatic, regular search strategies – run against a long memory of their previous results also reduces the risk of duplicates. Finally, applying a robust, proven algorithm can take care of much of the “heavy lifting” in deduplication, leaving the reviewer only a quick final check on whether a flagged reference really is a duplicate or needs to be assessed.
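To illustrate the kind of matching such an algorithm performs, here is a minimal Python sketch. The field names (`doi`, `title`), the exact-DOI-then-fuzzy-title strategy, and the 0.9 similarity threshold are illustrative assumptions for this example, not the actual algorithm used in any particular product:

```python
import difflib
import re

def normalize(text):
    """Lowercase, strip punctuation and collapse whitespace for comparison."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def is_duplicate(ref_a, ref_b, title_threshold=0.9):
    """Flag two references as likely duplicates.

    References are dicts with a 'title' and an optional 'doi'
    (hypothetical field names chosen for this sketch).
    """
    # An exact DOI match is the strongest duplicate signal.
    doi_a, doi_b = ref_a.get("doi"), ref_b.get("doi")
    if doi_a and doi_b:
        return doi_a.lower() == doi_b.lower()
    # Otherwise fall back to fuzzy title similarity, since different
    # databases often index the same article with small title variations.
    ratio = difflib.SequenceMatcher(
        None, normalize(ref_a["title"]), normalize(ref_b["title"])
    ).ratio()
    return ratio >= title_threshold

def dedupe(references):
    """Keep only the first occurrence of each unique reference."""
    unique = []
    for ref in references:
        if not any(is_duplicate(ref, kept) for kept in unique):
            unique.append(ref)
    return unique

refs = [
    {"title": "Adverse events associated with Drug X: a case report",
     "doi": "10.1000/xyz123"},
    {"title": "Adverse Events Associated with Drug X: A Case Report.",
     "doi": "10.1000/XYZ123"},
    # Same article indexed elsewhere, no DOI recorded:
    {"title": "Adverse events associated with drug X - a case report"},
    {"title": "Efficacy of Drug Y in paediatric populations"},
]
print(len(dedupe(refs)))  # → 2
```

In practice a reviewer would still see the flagged pairs rather than having them silently discarded, which matches the best practice described above: the algorithm does the heavy lifting, and a person makes the final call.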

In Finding the One: How to manage the impact of duplicate references in medical literature monitoring, we explore the issues around duplicates in more depth, concluding with a checklist to benchmark your own practice.

With so much information now out there, managing duplicate references is essential for medical literature monitoring. What matters most is that your process is robust and reliable.
