PSU STAT 401 - Efficient group sequential designs


STATISTICS IN MEDICINE
Statist. Med. 2000; 00:1-6. Prepared using simauth.cls [Version: 2002/09/18 v1.11]

Efficient group sequential designs when there are several effect sizes under consideration

Christopher Jennison(1) and Bruce W. Turnbull(2,*)

(1) Department of Mathematical Sciences, University of Bath, Bath, BA2 7AY, U.K.
(2) School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY 14853, U.S.A.

SUMMARY

We consider the construction of efficient group sequential designs where the goal is a low expected sample size not only at the null hypothesis and the alternative (taken to be the minimal clinically meaningful effect size), but also at more optimistic anticipated effect sizes. Pre-specified Type I error rate and power requirements can be achieved both by standard group sequential tests and by more recently proposed adaptive procedures. We investigate four nested classes of designs: (A) group sequential tests with equal group sizes and stopping boundaries determined by a monomial error spending function (the "ρ-family"); (B) as A, but the initial group size is allowed to differ from the others; (C) group sequential tests with arbitrary group sizes and arbitrary boundaries, fixed in advance; (D) adaptive tests, as C, but at each analysis future group sizes and critical values are updated depending on the current value of the test statistic. By examining the performance of optimal procedures within each class, we conclude that class B provides simple and efficient designs with efficiency close to that of the more complex designs of classes C and D. We provide tables and figures illustrating the performances of optimal designs within each class and defining the optimal procedures of classes A and B. Copyright © 2000 John Wiley & Sons, Ltd.

KEY WORDS: clinical trial; group sequential test; sample size re-estimation; adaptive design; flexible design; optimal design; error spending function
*Correspondence to: School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY 14853, U.S.A.
Contract/grant sponsor: National Institutes of Health; contract/grant number: R01 CA66218
Received 19 April 2004; Revised 14 December 2004

1. INTRODUCTION

Along with practical considerations, the sample size for a clinical trial is determined by setting up null and alternative hypotheses concerning a primary parameter of interest, θ, and then specifying a Type I error rate α and power 1 − β to be controlled at a given treatment effect size θ = ∆. Usually, traditional values of α and β are used (e.g., α = 0.025 or 0.05; β = 0.05, 0.1 or 0.2); however, there can be much debate over the choice of ∆. Some textbooks advocate that ∆ should be chosen to represent the minimum "clinically relevant" or "commercially viable" effect size; see, for example, Senn [1, p. 170] and Piantadosi [2, p. 149]. Others, such as Shun et al. [3], say that ∆ can be taken to be the anticipated effect size, a value based on expectations from prior experimental, observational and theoretical evidence. Pocock [4] suggests that either approach might be taken: on pages 125 and 132, ∆ is to be a "realistic value", while in the example on page 128, it is to be a "clinically relevant" difference that is "important to detect". In Section 3.5 of the ICH Guidance E9 [5], it is also stated that ∆ is to be based on a judgement concerning either the minimal clinically relevant effect size or the "anticipated" effect.

The choice of ∆ is crucial because, for example, halving the chosen effect size leads to a quadrupling of the sample size for a fixed sample test (and of the maximum sample size for a group sequential test). Using the lower sample size appropriate to a high treatment effect will leave the trial underpowered to detect a smaller but still important effect.
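The quadrupling can be seen from the usual fixed-sample-size formula. As an illustrative sketch (assuming a one-sided level-α z-test comparing two normal means with common known variance σ², a setting not spelled out in the excerpt):

```latex
% Per-arm sample size achieving power 1-\beta at effect size \Delta:
n \;=\; \frac{2\sigma^{2}\,(z_{\alpha}+z_{\beta})^{2}}{\Delta^{2}}
```

Replacing ∆ by ∆/2 leaves z_α, z_β and σ unchanged, so n is multiplied by four.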
Because of this, Shun et al. [3] and others have proposed that the trial be designed using the higher effect size (and corresponding lower sample size), but that the sample size be re-estimated at an interim analysis based on the emerging observed treatment difference. This has been termed the "start small then ask for more" strategy [6]. Liu and Chi [7] present formal two-stage designs in which the first-stage sample size is sufficient to provide specified power at an expected effect size, but additional observations in the second stage increase power at smaller effect sizes and guarantee an overall power requirement at a minimal clinically significant treatment effect.

There have been several accounts in the literature of studies in which the sample size has been adapted in order to increase power at lower effect sizes. Cui et al. [8] report on a placebo-controlled myocardial infarction prevention trial with a sample size of 600 subjects per treatment arm, this number being based on a planned effect size of a 50% reduction in incidence and 95% power. However, midway through the trial, only about a 25% reduction in incidence was observed, a reduction which was still of clinical and commercial importance. Because of the low conditional power at this stage, the sponsor of the trial submitted a proposal to expand the sample size. In recent years, classes of procedures termed "flexible", "adaptive", "self-designing" or "variance spending" have been developed which enable such sample size re-estimation to be done while preserving the Type I error rate α. See Bauer [9], Proschan and Hunsberger [10], Fisher [11], Cui et al. [8], Wassmer [12], Li et al. [13], and Posch et al. [14], among others.

Remarks by some authors, e.g., Shen and Fisher [15] and Shun et al. [3], suggest a desire to set a specific power, 1 − β, at whatever is the true value of the effect size parameter. This aim may lead to adaptive designs with a power curve rising sharply from α at θ = 0, then remaining almost flat at 1 − β.
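The conditional-power reasoning behind the Cui et al. example can be sketched numerically. The following Python fragment is a minimal illustration, not code from the paper: the function names are mine, and it assumes a one-sided z-test comparing two normal means with known variance, using the standard normal approximation for the interim and final test statistics.

```python
from statistics import NormalDist

norm = NormalDist()

def per_arm_n(delta, sigma=1.0, alpha=0.025, beta=0.10):
    """Fixed-sample size per arm for a one-sided level-alpha z-test
    comparing two normal means with common known variance sigma^2."""
    z_a, z_b = norm.inv_cdf(1 - alpha), norm.inv_cdf(1 - beta)
    return 2 * sigma**2 * (z_a + z_b) ** 2 / delta**2

def conditional_power(z_interim, frac, theta, n_max, sigma=1.0, alpha=0.025):
    """Probability of rejecting at the final analysis, given an interim
    z-statistic observed at information fraction `frac`, if the true
    effect size is theta (normal-approximation sketch)."""
    i_max = n_max / (2 * sigma**2)   # Fisher information at the final look
    i_1 = frac * i_max               # information at the interim look
    z_a = norm.inv_cdf(1 - alpha)
    # Score statistic at the final look, conditional on the interim value,
    # is normal with mean z_interim*sqrt(i_1) + theta*(i_max - i_1) and
    # variance i_max - i_1; reject when it exceeds z_a*sqrt(i_max).
    num = z_interim * i_1**0.5 + theta * (i_max - i_1) - z_a * i_max**0.5
    return norm.cdf(num / (i_max - i_1) ** 0.5)

# Halving the effect size quadruples the required sample size:
n_full = per_arm_n(0.5)   # planned (optimistic) effect
n_half = per_arm_n(0.25)  # half the planned effect -> 4x the sample size
```

With an interim estimate near half the planned effect, `conditional_power` evaluated at the observed effect falls well below the nominal power, which illustrates why a sponsor in the Cui et al. situation would propose enlarging the trial.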
In consequence, a significant risk of a negative outcome remains even when the effect size is high and power close to one could easily have been attained.

All the above discussion supports the view that a clinical trial should guarantee power at effect sizes θ of clinical or commercial interest. Smaller effects are not pertinent since, as Shih [16, p. 517] states, ". . . trials need to consider sample size to detect a difference that is clinically meaningful, not merely to find a statistical significance." Limitations occur when the sample size needed to detect a

