Notably, here is another recent example that Eric gets it (citing his comments from Seeking Alpha's transcript of the conference call):
"That being said, as I said in my opening comments, the study design from the statistical perspective is predicated on an 18 month accrual process with the 30 month overall duration; to the extent that we can exceed the enrollment projections that would be implicit in those numbers, we can expedite the study. The study does have an interim assessment, as I mentioned during my introductory comments, when half of the required events have occurred; that's basically close to half of the patients have had disease progression, because it's a 2 to 1 randomization. We have language in the protocol that will allow us to close the study early if the number of advanced don't occur in the projected number. That number then is based on the [null] hypothesis, which is patients in both arms of the study will have progression at the same rate. Obviously the goal of any clinical trial is to disprove the [null] hypothesis, and so as the sponsor, we expect that there will be a difference in the rate of progression, number of progressions, in the two study arms. And that's the basis for that additional opportunity to terminate the study if those events don't occur. So the fundamental design of the study, again, 18 months since last summer, 30 month overall; we will do our best to exceed that accrual rate. We can't force the patients to have disease progression, but we've done everything we can to make sure that in the event that there is disparity in disease progression between the two study arms that we are allowed to end the study at an appropriate time. Again, most importantly perhaps to the investment community and to prospective partners, is that we do have an interim assessment which allows us to examine and report interim data when the study is halfway through." (Underlined emphasis is mine; bracketed corrections of apparent transcription errors are also mine.)

If the trial succeeds, when could it?
First, foremost and most traditionally, Eric's comments during the conference call address the interim analysis built into the trial design: "An interim assessment of efficacy and safety will be performed by the IRC when 50% of the events required for the primary endpoint have occurred." One event is one patient's disease progressing. At one event per patient, the interim analysis would be conducted after 113 events have occurred (50% of 225 patients is 112.5, rounded up to 113).
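The arithmetic above can be sanity-checked with a short sketch (my own illustration; the 225-patient figure is from the trial design, and the assumption of at most one progression event per patient is my reading of it):

```python
import math

planned_patients = 225         # planned enrollment, 2:1 randomization
max_events = planned_patients  # at most one progression event per patient
interim_fraction = 0.50        # interim look when 50% of required events occur

events_for_interim = math.ceil(interim_fraction * max_events)
print(events_for_interim)  # 113
```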
This analysis should be triggered (i.e., I believe Eric would have designed the trial to be triggered) upon the first of (i) achieving the prescribed number of events (i.e., disease progressions) or (ii) reaching the prescribed period of time by which the trial should have achieved that number of events. Thus, (a) given (or estimating, and thus continually refining) the trial's enrollment rate and (b) using systemic chemotherapy progression data well established in the medical literature (from past clinical trials), one can form expectations about when an interim analysis might be conducted.
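To make that concrete, here is a rough Monte Carlo sketch, entirely my own illustration and not the sponsor's model, of when the 113th event might occur. The 18-month accrual period and 225-patient enrollment are from the stated trial design; uniform accrual and the 3-month blended median progression-free survival are assumptions chosen purely for illustration:

```python
import random
import statistics

random.seed(0)
ACCRUAL_MONTHS = 18    # stated accrual period
N_PATIENTS = 225       # planned enrollment
EVENTS_NEEDED = 113    # 50%-of-events interim trigger
MEDIAN_PFS = 3.0       # assumed blended median PFS in months (illustrative)
HAZARD = 0.693 / MEDIAN_PFS  # exponential hazard implied by that median

def months_to_interim():
    """Calendar month at which the 113th progression event occurs."""
    enroll = [random.uniform(0, ACCRUAL_MONTHS) for _ in range(N_PATIENTS)]
    event_times = sorted(t + random.expovariate(HAZARD) for t in enroll)
    return event_times[EVENTS_NEEDED - 1]

projections = [months_to_interim() for _ in range(2000)]
print(f"median projection: {statistics.median(projections):.1f} months")
```

Plugging in faster accrual (a shorter `ACCRUAL_MONTHS`) pulls the projected interim look earlier, which is the point made below about enrollment speed.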
Undertaking an interim analysis of a clinical trial costs alpha each time, where alpha is the probability of a Type 1 error: detecting an effect that is not present, or the incorrect rejection of a true null hypothesis (a "false positive"). Multiple interim analyses, and the cumulative cost of looking into the dataset [for efficacy] multiple times, narrow the margin of the hoped-for superiority of the treatment arm (PV-10) over the control arm (systemic chemotherapy). Eric designed the trial to have one interim analysis for efficacy, to see if PV-10 is overwhelmingly better than systemic chemotherapy, and presumably incorporated functionality to stop the trial for efficacy (although the trial very likely would continue to collect data so as to support evaluation of its secondary endpoints). This interim analysis of the progression data would be carried out by the trial's independent review committee ("IRC"), which may make a recommendation to stop the trial early, if appropriate, to the trial's independent data monitoring committee ("IDMC") or data monitoring committee ("DMC").
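A minimal sketch of the alpha "cost" described above (illustrative only; the trial's actual statistical plan is not public, and real group sequential designs typically use O'Brien-Fleming-type boundaries rather than the crude splits shown here):

```python
overall_alpha = 0.05
looks = 2  # one interim look plus the final analysis

# Crudest possible correction: Bonferroni splits alpha evenly across looks,
# so each look must clear a stricter per-look threshold.
bonferroni_per_look = overall_alpha / looks
print(bonferroni_per_look)  # 0.025 per look

# Haybittle-Peto style: spend almost no alpha at the interim, so the interim
# stops the trial only for an overwhelming effect and the final analysis
# keeps nearly the full 0.05.
interim_alpha, final_alpha = 0.001, 0.049
assert interim_alpha + final_alpha <= overall_alpha
```

Either way, the fewer the looks, the less alpha is spent before the final analysis, which is one reason a single interim look for efficacy is attractive.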
Second, there are mechanisms to look at the data on an interim basis other than for efficacy, such as interim analyses for safety and/or futility, which may or may not have pre-specified stopping rules. These looks at the data do not cost alpha but rather beta (and thus do not diminish the value of the interim analysis for efficacy), where beta is the probability of a Type 2 error: failing to detect an effect that is present, or the failure to reject a false null hypothesis (a "false negative"). For safety, the IDMC/DMC might have stopping rules that trigger if a certain number of adverse events have occurred, or the committee might simply review overall safety and recommend stoppage if appropriate. For futility, the IDMC/DMC might look at early trial data to determine whether PV-10 is unlikely to beat the control arm (in order to potentially stop the trial for futility), which would cost beta and not alpha.
These safety and futility analyses may occur at lower percentages of events than the efficacy analysis; however, it is not known whether Eric designed safety and/or futility analyses into the trial design. Even if he did so, it's unlikely he would discuss this topic, as he has incorporated a traditional interim analysis for efficacy, which is a "down the middle of the fairway" approach.
Thus, the faster the trial's enrollment, the faster the trial might reach these interim analyses.
Third, there should be a steering committee for the trial, which would ensure its appropriate conduct.
Now it's time for me to focus on other components of Provectus' clinical development program, while continuing to monitor and measure the company's pivotal melanoma Phase 3 trial.
Source documents:
1. Guidance for Industry, Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics, FDA, May 2007
2. Guidance for Clinical Trial Sponsors, Establishment and Operation of Clinical Trial Data Monitoring Committees, FDA, March 2006