Clinical Trials Part 2: How Are the Results Used?
Dawn Hershman, M.D., M.S.
April 11, 2018

Dr. Dawn Hershman is professor of medicine and epidemiology at Columbia University. She also serves as leader of the Breast Cancer Program at the Herbert Irving Comprehensive Cancer Center at Columbia and is nationally recognized for her expertise in breast cancer treatment, prevention, and survivorship. A member of the Professional Advisory Board, Dr. Hershman also has conducted extensive research on breast cancer treatment and quality of life -- she has published more than 250 scientific papers and has received the Advanced Clinical Research Award in Breast Cancer from the American Society of Clinical Oncology and the Advanced Medical Achievement Award from the Avon Foundation. Dr. Hershman is also on the editorial board of the Journal of Clinical Oncology and is an associate editor at the Journal of the National Cancer Institute.

Listen to the podcast to hear Dr. Hershman explain:

  • how clinical trial results are used
  • how clinical trial results have changed the standard of care
  • why factors such as diet or exercise for reducing breast cancer risk are difficult to study in clinical trials
  • why some trials are stopped early because of good or not-so-good results

Running time: 29:12

This podcast is made possible by the generous support of MacroGenics. 

Jamie DePolo: Hello, everyone. Welcome to this edition of the podcast. I’m Jamie DePolo, the senior editor. Our guest today is Dr. Dawn Hershman. She’s professor of medicine and epidemiology at Columbia University. She also serves as leader of the breast cancer program at the Herbert Irving Comprehensive Cancer Center at Columbia and is nationally recognized for her expertise in breast cancer treatment, prevention, and survivorship. A member of the Professional Advisory Board, Dr. Hershman also has conducted extensive research on breast cancer treatment and quality of life.

She has published more than 250 scientific papers and has received the Advanced Clinical Research Award in Breast Cancer from the American Society of Clinical Oncology and the Advanced Medical Achievement Award from the Avon Foundation. Dr. Hershman is also on the editorial board of the Journal of Clinical Oncology and is an associate editor at the Journal of the National Cancer Institute.  

In our second podcast on clinical trials, Dr. Hershman is going to talk to us about the results of clinical trials, including how the results are used and how patients are told about the results. Dr. Hershman, welcome to the podcast. 

Dawn Hershman: Thank you for having me. 

Jamie DePolo: Once a clinical trial is completed, what happens to the results? How do you use them? 

Dawn Hershman: That’s an excellent question. Even when a trial stops accruing, sometimes it takes even longer to get all of the results. So depending on what the study is, sometimes the primary time of interest is a year after the last patient goes on trial. Sometimes it’s not until a certain number of patients have had a recurrence or even have died, depending on what the endpoint of the clinical trial is. So the way studies are done, it’s determined when you’re going to analyze the data based on what the most important endpoint of that study is, and it takes a long time to make sure that the data are clean and are what you think they’re going to be. But ultimately, once you’ve analyzed the data, you proceed exactly as you said you would in the study protocol, and once you get that information, you want to get it out so that you can change care.

Both positive studies and negative studies have really important information to give, and so you want to make sure that people learn of the results no matter what those results are. So the two ways we get that information out there is one, by presenting it at national meetings. And in the breast cancer world, we can either present it at general cancer meetings or breast-cancer-specific meetings so that a lot of people can hear the results at one time, and we also try to publish it in major journals. 

In addition to publishing it, there are thousands and thousands of journals out there. You want to make sure that people pay attention to your results, especially if you think they’re going to change care, so you have to work with the media and press to make sure that not only do physicians or scientists hear about the results, but the patients hear about the results, too. And that can happen through the general news media, but also through patient advocacy.

Jamie DePolo: That brings up another question for me. For the patients who are in the trial, do they hear about the results along with everyone else, or do they get any sort of special preview of what happened in the study? Are they updated as things start going along? I’m just curious. Is there some sort of early information that’s given to them as sort of a thank you for participating?  

Dawn Hershman: Usually, when the results are known, they’re not made public until what we call an embargo is released. So if you’re at a meeting and you’re presenting it, it’s usually not until the day before the meeting that anybody can find out the results, or similarly, with a paper, you can’t really give those results out until the paper is published. So unfortunately, you can’t disseminate results until the pre-specified date set by either a journal or a conference, even to participants in the trial.

But what we do, depending on the type of study, is try to send letters out to all the participants explaining the results, and if it’s a blinded study, letting them know what arm they were in. As a scientific organization in general, we’re not great at that. Sometimes it can take a long time after a study before the patients find out what arm they were in because it can take some time to go back and track that information down. But if it’s a big study, if it’s a study that’s going to impact the patients themselves, then it’s made a priority to disseminate that information more quickly.

Jamie DePolo: When we were first talking about how long it takes, you know, from when the trial ends until the results are published, is there an average time that might take? And there may not be, but I’m just curious, is it, I don't know, 3 years, 5 years?

Dawn Hershman: So you know, again, it depends on the study. If you’re talking about once the study’s complete, meaning that all the patients have been enrolled and followed up for the most appropriate period of time to assess the endpoint, then as soon as all the information has been gathered to analyze the data, it’s usually analyzed fairly quickly and presented within 6 months. Within the cooperative group, we have rules for that: as soon as the data are analyzed, they have to be either presented or submitted for publication within 6 months of those results becoming available.

Jamie DePolo: That’s actually much faster than I thought, because if there are a lot of authors on a paper and everyone needs to have input and review, I thought that process would take much longer. But that actually seems fairly quick, 6 months.

Dawn Hershman: Right. I mean, you have less control over smaller studies that have fewer rules, but for larger studies, especially studies that have the potential to change practice, usually the investigator knows ahead of time that the results are being analyzed, so you know, there’s time to prepare. Of course, if things don’t turn out the way you wanted them to or if you have to do more analyses, it can take a little bit longer. But in general, for a large treatment trial, the data aren’t made available until they’ve been reviewed by multiple statisticians at a very high level, but I think that the commitment is to try to get that information out as quickly as possible.

Now, it’s one thing to submit it to a journal versus having it be accepted from a journal, and that process in and of itself can either be short or very, very long, and investigators don’t have a lot of control over that process. 

Jamie DePolo: That’s a good point. That’s a good point. And that, in some cases I think, may help explain why we may hear about a study, say, being presented at the San Antonio Breast Cancer Symposium, and then you see the same research published maybe 3, 4 months later in a journal. Because one came first, maybe the journal took a little bit longer to decide to publish it, so that really isn’t uncommon from my viewpoint. I see a lot of research done that way.  

Dawn Hershman: Yeah, and 3 or 4 months is a short period of time, right, because sometimes if you present something at the San Antonio Breast conference, you might not even have all the data ready to write up the paper. And if you submit it to one journal, just in terms of the process, it could go back and forth with reviews. It could be 6 months, and then the journal might decide not to take it, and then you have to start the process all over and go to another journal, even if all the results are complete. So the process can take a couple months, but it could also take, you know, over a year. And even when a paper’s accepted, it can then take several months before it’s actually either put online or published in a journal.

Jamie DePolo: Yeah. There’s a long publishing calendar. You talked about how clinical trials can change the standard of care. Can you give us some examples of when that’s happened? And also, when a study is considered one that’s going to change a standard of care, how many times does that have to be replicated or supported before the standard of care actually changes?

Dawn Hershman: Sure. So again, when you think of some of the most recent studies in patients with breast cancer that’s hormone-receptor-positive, such as those of the class of medications called CDK4/6 inhibitors, for a variety of different drugs the results were so impressive that they really quickly changed the standard of care. Of course, practice doesn’t change until the drug is approved by the FDA, but once you start to have very strong results that have the potential to keep people alive longer, people have a tendency to change their practice very, very quickly.

The things that sometimes slow that down are, again, related to availability of the drug, like FDA approval, and even the cost of the drug, which might stop people from getting it. But usually, when you have a very good drug, multiple studies come out after that first one confirming those results, and more and more people feel confident using that drug in practice.

Other studies can come out that can change the standard of care based on maybe less evidence if the risk is much less to the patients. For example, if you think of something like scalp cooling, where there was one large randomized trial and one observational study, both of which showed a real benefit in terms of scalp cooling to preserve hair with very little risk to the patient, even a study like that can result in practice change almost immediately as long as people have access to that type of technology. 

Jamie DePolo: I see, because there’s really no risk to the patient to try it, or very minimal risk, I should say. Nothing has no risk. And just to backtrack a little bit, when you were talking about the CDK4/6 inhibitors, are those medicines things like Ibrance? Is that --

Dawn Hershman: Exactly. Ibrance, or palbociclib, plus ribociclib and abemaciclib; there are three of them. And those drugs all produce really substantial changes in outcome. Practitioners feel very confident using them.

There are other times where we change care maybe too quickly. An example of that might be something like Perjeta, or pertuzumab, where the FDA wanted to make that drug accessible to a lot of patients very quickly based on things like tumor shrinkage. But then when the large trials looking at survival or disease-free survival came out, the results were a little bit less impressive and showed a benefit, but a very, very small one. So you always wonder in that circumstance, would you have changed your practice for so many people if you had known how small the benefits were upfront? So it can go in both directions.

Jamie DePolo: Earlier on in the podcast, you talked about both positive trial results and negative trial results being helpful and possibly changing practice. So to explore that a little bit further, positive results, I’m assuming, means that you have a new drug and you find out that it works better than the current standard of care. Perhaps an example might be when research found that for postmenopausal women, aromatase inhibitors were a better treatment for hormone-receptor-positive breast cancer than tamoxifen was, and so that kind of became the standard of care there. So if that’s right, could you also give us an example of how a negative study can help either change the standard of care or be informative?  

Dawn Hershman: Sure. Absolutely. I mean, I can think of an example based on a study I did, which was looking at a supplement that was really being used a lot to prevent peripheral neuropathy, called L-carnitine. And there was a lot of information on the internet, for example, that it was very effective for a variety of conditions. And there were no treatments available for the prevention of the neuropathy that can come from taxanes or other chemotherapy agents, so many people were just taking it because they could buy it in a health food store. So we conducted a large randomized, placebo-controlled study of many hundreds of women, and found that the patients who got the supplement actually had worse neuropathy. And so that would be considered a negative study, and to a certain extent, you can inform patients. So for people who were taking this, where there was no evidence that it worked, now we have evidence that you really shouldn’t be taking it. And that can really help inform patient decision-making.

Jamie DePolo: That’s really good to know. And also, too, sometimes I read studies where people are looking to see if a new drug or using a drug in a new way is better than the standard of care, and the results show that it’s about the same. So that then, I suppose, reinforces that the standard of care is still the thing to do.  

Dawn Hershman: Exactly. Exactly. And sometimes we think that each new drug is going to be better, or that if there’s a good rationale for it, it will work, but that’s why you have to do studies. I mean, when I was a fellow, many women got bone marrow transplants for breast cancer because the early data suggested it worked, and people didn’t even want to be on clinical trials because they believed it worked so much. But then when those trials were actually done, they found that not only did it not help, but it caused a lot of toxicity. And so that’s where sometimes the things that we think we know the answers to don’t always come out the way we think they’re going to.

Jamie DePolo: Right. Now, there are some factors, and I’m thinking about diet, exercise for reducing the risk of breast cancer. Those are notoriously difficult to study in clinical trials, and can you help us understand why that is?  

Dawn Hershman: Absolutely. It’s a lot easier to give patients a pill that they can take and you can monitor whether or not they took it, and then there’s a placebo that they can take, and it doesn’t have the active ingredient. When you look at things like diet and exercise, we don’t know -- both groups eat and both groups move, right? So you don’t have as much control over what people do and don’t do. And while a lifetime of diet and exercise can affect outcome, we don’t know sometimes if we do a study of 6 months or 12 months of changing somebody’s diet if that’s going to affect long-term outcomes. But controlling what people do and how they behave for even a short period of time can be very difficult, let alone a long period of time.  

So behavioral interventions can be very, very challenging because you can have what we call either drop-in or drop-out. So people that were maybe randomized to the diet and exercise arm may not be compliant with that intervention, and people that were randomized to the usual care group may start to change their diet or exercise. And so that can make it difficult to see a difference between the two groups. 

Jamie DePolo: That makes sense, and also, too, sometimes I read about studies where, especially for diet, they ask people, “Well, tell me what you ate when you were 15 years old.” And the people in the study, maybe women who are now in their 50s and 60s -- I know I personally probably couldn’t tell you what I ate last week unless I kept a food diary. So then when the results come out and say, oh, you know, “Women who ate a lot of” -- and I’m just making this up -- “sugar when they were 15 have a higher risk of breast cancer,” I’m always slightly skeptical about that because I’m always wondering, are these people really accurately remembering what they ate?  

Dawn Hershman: Right, and there’s -- we call it bias that’s associated with those kinds of studies, right? So if you have breast cancer and you think there might be an association, you may be more likely to say, “Oh yeah, I must have had a lot of sugar, so that must be why I developed breast cancer.” So there can be a lot of misinformation based on biases that can come from that type of research. Those kinds of studies are good for getting a sense of associations, but they’re subject to a lot of methodologic problems because they can be inaccurate. 

Sometimes we use observational research like the study that you just described to help us define areas that we should do interventions in. But part of the reason why sometimes interventions don’t confirm observational findings can be maybe because the intervention wasn’t good enough or it didn’t go on for long enough, but it could also be that the observational findings aren’t accurate or are biased. And you see that a lot with nutrition studies. 

For example, I’m going to use a study that was done a long time ago called the CARET trial. There was an enormous amount of evidence from observational studies and animal studies and even some small experiments that beta carotene was good for cancer prevention, and especially for lung cancer prevention. And they did a huge interventional trial on thousands and thousands of people and found that patients who took beta carotene and smoked actually had a higher risk of developing lung cancer. And that’s been found with other supplements as well, such as selenium. And so that’s why sometimes there can be a discrepancy between what you find retrospectively and what you find prospectively in trials.

Jamie DePolo: Okay. Very good to know. Now, I read about some trials being stopped early because the results were either very good or not very good. And can you help us understand why would that happen and how good do the results have to be, or how bad do the results have to be, before a trial is stopped early?  

Dawn Hershman: Right. So every study has what’s called a data safety monitoring board. These are independent groups composed of people with various kinds of expertise, usually including statisticians, that look at the data at pre-specified times. And they’re set up really for patient protection. They have specified before the study what they consider success and what they consider failure of a drug. So sometimes trials are stopped early because very early on, there are a lot of toxicities that weren’t anticipated and that are more common in one group than the other. And if that is the case, they’ll stop a study early to make sure that they’ve protected people who participate in the trial from having an adverse consequence of being part of it.

Sometimes they’ll find that the results at a certain point are so similar between the two groups that even if they were to follow those patients for a longer period of time or enroll more patients, they’ll statistically never find a difference that’s meaningful. Then they’ll stop the study early both to save effort and to save time. 

Sometimes there will be such an impressive result between two groups, especially for a disease that doesn’t have any other treatment, that it’s sort of the opposite. Those results are so strong that even if they were to put more patients on or follow patients longer, the trial will still be considered a success. And then you want to be able to offer everybody that medication. So those are some of the reasons why a study might be stopped early, for both good and bad reasons.

Jamie DePolo: And when a study is stopped early because of good results, and the new treatment seems to be amazingly better than either the placebo or the current standard of care, then I’ve read in several studies where the patients who were on the not-new thing are allowed to switch over.  

Dawn Hershman: Yes. That does happen. But sometimes it doesn’t happen when they want to look at secondary outcomes such as survival. Often studies are designed in two different ways. Some studies allow a crossover, like you described, where at a certain point, if a patient progresses, they’re offered the drug so that they have that opportunity. But other studies are designed in such a way that they can’t get the drug until it’s FDA approved. So depending on what the most important outcome of that trial is -- sometimes it works in that way to the patient’s advantage and sometimes it doesn’t. 

Jamie DePolo: If the study is such that the patients are allowed to cross over and get the more helpful treatment, how are those results written up then? Does the study just end then when people start switching, or are the people who switched over, are they monitored to see if they have the same results? How does that work?  

Dawn Hershman: Usually for those studies, the primary outcome of those trials is the time to progression, right? So once a patient’s progressed, so to speak, that’s the endpoint of that trial, and then they can go onto the other drug, and it won’t affect the results. But if the study is more subtle and looking at long-term effects and they’re trying to get a drug approved, sometimes by crossing over, you can dampen the overall effects or make it impossible to find a positive effect. And so in that case, they want to stay true to the original study design so they can get the drug approved by the FDA. 

Jamie DePolo: That’s helpful. One last question. In your mind, how important are clinical trials to the future of breast cancer treatment?  

Dawn Hershman: They’ve been critically important for 35 years in terms of getting us to where we are now. It’s really the only way we can continue to incrementally move the field forward. You know, progress is sometimes not always so clear cut when you’re in the middle of it, but when you look at the history of what’s happened, it’s really incredible the options we have now that we didn’t have in the past. Not everybody is right for a clinical trial, but there are a lot of different ways of participating in research that can not only help you as a patient, but also help the scientific community understand how to either better treat breast cancer or prevent breast cancer or even prevent the side effects from breast cancer, make people live longer -- there are all different types of ways of obtaining knowledge. 

Jamie DePolo: Dr. Hershman, thank you so much. This has been a hugely helpful podcast. We really appreciate your time.  

Dawn Hershman: Absolutely. Happy to help.
