Scientist Q&A Part 1: Samuel Yamin

After years of building evidence strategies with manufacturers, 3Aware Principal Clinical Scientist Samuel Yamin, MPH, has seen what holds up under scrutiny — and what doesn't. In this two-part Q&A session, Samuel shares what surprised him most about how the 3Aware MedTech-specific RWE platform handles device data — especially around device identification, procedural context, unstructured notes, and longitudinal follow-up — and how this changes what’s possible for PMCF, label expansion, and beyond.

1. What surprised you most coming from a medtech manufacturer to 3Aware?

A: How quickly we can move from a regulatory-driven research question to an evidence-ready analysis when the platform is purpose-built for in-depth data on device safety, performance, and clinical usage, and for facilitating RWE workflows, rather than simply pouring out raw RWD.

2. What did you expect would be similar—but wasn’t?

A: Sometimes it is assumed that “RWD is just RWD.” In practice, there is enormous variety and untapped potential, both in the quality of RWD and in what medtech researchers can do with it. To truly rely on device-level evidence, the medtech research community needs a different approach: confident device identification; detailed procedural context; and specifics on patient medical history, co-morbidities, and disease states, both intra-operatively and during longitudinal follow-up. These aren’t “nice-to-haves”; they’re foundational to putting RWD analyses on a par with other high-quality research methodologies.

3. From an evidence generation perspective, what was your first “aha” moment?

A: That goes all the way back to 2023, when I first learned about 3Aware. At the time I was working for a medtech manufacturer, managing PMCF work and related data strategies for many different products, and when 3Aware initially reached out I recall thinking, “If I don’t end up working with these folks, I definitely want to hire them for PMCF on the products I’m supporting.” Seeing how the platform enables creation of product-specific cohorts and facilitates outcomes tracking in a way that’s aligned with how regulatory questions are actually asked, and reviewed, really brought home what a powerful tool 3Aware offers: both structured and unstructured data elements curated, organized, and at the user’s fingertips for whatever types of analyses are useful to them.

4. What’s the biggest limitation you’ve seen with other RWD sources/platforms or other methodologies?

A: They often aren’t designed for device specificity, so you spend a lot of time validating what the data really represents before you can even start answering the question. There can be considerable uncertainty, or assumptions made, about whether a device was used, how it was actually used (e.g., on-label), and whether the clinical benefits were successfully delivered to the patient. Even with methodologies that offer better confidence in some of those details, we rarely see such a full picture of the patient’s clinical experience, from their medical history, through the full description of the procedure(s), out into their recovery and progress over time. Typically that is only possible with a full-blown clinical study, which requires far more resources than conducting the same work with 3Aware, and even then parameters like enrollment of the full sample size can be more uncertain than what 3Aware can assure at the outset.

5. Why is device specificity so hard in the real world?

A: Because devices live inside workflows—inventory, procedure documentation, clinician notes, follow-up care—so identity and context can fragment unless the system is built to connect them. This is particularly true of accessories or other tools used as components of a system, so in many cases, studies are subject to recall bias or limited by having to infer that a particular product was used.

6. How does 3Aware change your work as a clinical scientist?

A: It lets me spend more time where clinical scientists add the most value: study design, endpoints, confounding, interpretation, and regulatory narrative—not manual data wrangling. When regulators say they want manufacturers to “tell a story”, or make an argument justifying a label expansion or device certification, conducting research with 3Aware allows study teams and medical writers to do just that, rather than spend their energies justifying shortcomings in the data.

7. What surprised you about feasibility at 3Aware?

A: How thoroughly and efficiently we can determine whether a research question is answerable—cohort definition, follow-up time and depth, endpoint quality and availability—before teams invest months and budget on a study plan that can’t be fully executed.

8. You’ve written PMCF plans and reports—what’s different when the platform is purpose-built?

A: The evidence approach becomes more traceable and reproducible, which strengthens the logic from the PMCF plan through the data analysis to study completion and any conclusions drawn from the research.

9. What’s different about qualitative data when you’re using RWD?

A: It’s not just “supporting color.” When the methods are solid, qualitative data can meaningfully complement quantitative outcomes. Qualitative inputs—whether elements of a clinician’s notes from a follow-up visit, patient feedback, or answers to questionnaires (e.g., patient-reported outcome measures, or PROMs)—can be critical to understanding device usage patterns and interpreting safety/performance.

10. Coming from your EU MDR work, what stood out about “auditability” at 3Aware?

A: The emphasis on traceable, reproducible workflows. When you can clearly show how cohorts were defined, what endpoints were used, and how data quality was assessed, you’re far better positioned for review and scrutiny.

View Part 2: Longitudinal follow-up, fit-for-purpose RWE, data quality, common misconceptions, and evidence gaps.
