6  Power computation for a group sequential design

6.1 Motivation

gsSurv() and gsSurvCalendar() derive sample sizes and enrollment rates to achieve a target power for a given hazard ratio. In practice, we often want to answer the reverse question: given a specified hazard ratio, fixed enrollment, dropout, and analysis timing assumptions, what power does the design achieve?

Common scenarios include:

  • Sensitivity analysis: What happens to power if the true hazard ratio is 0.75 instead of the design assumption of 0.65?
  • Changing alpha: What if the multiplicity scheme allows one-sided \(\alpha = 0.025\) instead of \(\alpha = 0.0125\)?
  • Modified enrollment: What if enrollment is slower than planned?
  • Different analysis times: What if interim analyses occur at calendar times that differ from the original design?

gsSurvPower() addresses these questions by computing power for a group sequential survival design under user-specified assumptions.

6.2 Design

gsSurvPower() accepts an optional gsSurv object x that provides defaults for all parameters. Any parameter the user explicitly specifies overrides the corresponding value from x. When x is not provided, all design parameters must be specified directly.

6.2.1 Hazard ratio roles

Two distinct hazard ratios play different roles:

  • hr: The assumed hazard ratio under which power is computed. This is the “what-if” treatment effect.
  • hr1: The design hazard ratio used to calibrate futility bounds. When x is provided, hr1 defaults to x$hr (the effect size the trial was originally designed for). Futility bounds remain calibrated to the design assumption even when power is evaluated under a different hr.

6.2.2 Analysis timing: calendar time vs. event-driven

Analysis times can be specified by calendar time, by target event counts, or by a combination of criteria. The choice has an important consequence for sensitivity analyses:

  • plannedCalendarTime fixes the calendar time of each analysis. Expected events are then recomputed under the assumed HR. A worse HR (closer to 1) produces more expected events at the same calendar time because the experimental-arm hazard is higher, so events accrue faster. This gives an “unconditional” power that reflects how the assumed treatment effect influences event accrual.

  • targetEvents fixes the event count at each analysis. The calendar time is solved for under the assumed HR. Since the event counts are held constant, the information fractions do not change with HR, and the resulting power matches the gsDesign power plot (plot(x, plottype = 2)) to numerical precision.

Both modes are useful. Calendar-time analyses are natural when the protocol specifies analysis dates; event-driven analyses are natural when the protocol specifies event targets.
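The calendar-time effect can be illustrated with a minimal base-R sketch. The assumptions here are for illustration only (exponential failure times, uniform accrual over [0, R], no dropout); this stands in for the package's eEvents() machinery, not a replacement for it, and expected_events() and ev() are hypothetical helper names.

```r
# Illustration only (assumed: exponential failures, uniform accrual over
# [0, R], no dropout); not the package's eEvents() implementation.
expected_events <- function(n, lambda, R, T) {
  stopifnot(T >= R)
  # P(event by calendar time T), averaged over uniform entry times in [0, R]
  p <- 1 - exp(-lambda * T) * (exp(lambda * R) - 1) / (lambda * R)
  n * p
}

lambdaC <- log(2) / 12  # control median of 12 months
ev <- function(hr) {    # total expected events across both arms at T = 28
  expected_events(315, lambdaC, R = 16, T = 28) +
    expected_events(315, lambdaC * hr, R = 16, T = 28)
}
ev(0.8) > ev(0.7)  # TRUE: a worse HR means more events at the same calendar time
```

Under event-driven timing the same comparison would instead shift the calendar times while holding the event counts fixed.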

Additional criteria can be combined per-analysis, each specified as a scalar (recycled to all k analyses) or a vector of length k with NA for “not applicable”:

  • maxExtension: Maximum time beyond the floor to wait for target events.
  • minTimeFromPreviousAnalysis: Minimum elapsed time since the previous analysis.
  • minN: Minimum sample size enrolled before analysis.
  • minFollowUp: Minimum follow-up after minN is reached.

When multiple criteria apply to a single analysis, the analysis time is the maximum of all floor criteria, with targetEvents potentially extending beyond the floor. maxExtension acts as a hard cap: the analysis time never exceeds plannedCalendarTime + maxExtension (or T[i-1] + maxExtension when no calendar time is specified), even if other criteria such as minTimeFromPreviousAnalysis or minN + minFollowUp would push it later.
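As a sketch of how these rules combine for a single analysis, the following base-R function mirrors the description above. resolve_analysis_time() and its arguments are illustrative names, not the package's internals; t_events stands in for the solved target-event time rather than being computed from the event curve.

```r
# Hypothetical sketch of the timing rules described above (not package code).
resolve_analysis_time <- function(plannedCalendarTime = NA, t_events = NA,
                                  t_prev = NA, minTimeFromPrevious = NA,
                                  t_minN_plus_followup = NA, maxExtension = NA) {
  floors <- c(plannedCalendarTime,
              if (!is.na(minTimeFromPrevious)) t_prev + minTimeFromPrevious,
              t_minN_plus_followup)
  fl <- max(floors, na.rm = TRUE)            # latest of all floor criteria
  t <- if (!is.na(t_events)) max(fl, t_events) else fl
  if (!is.na(maxExtension)) {                # hard cap beyond the planned time
    cap_base <- if (!is.na(plannedCalendarTime)) plannedCalendarTime else t_prev
    t <- min(t, cap_base + maxExtension)
  }
  t
}

# Target events reached late, so the analysis is capped at planned + maxExtension:
resolve_analysis_time(plannedCalendarTime = 12, t_events = 20, maxExtension = 6)
#> [1] 18
```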

6.2.3 Spending and method

  • spending: One of "information" (default) or "calendar". Information-based spending tracks the fraction of statistical information accumulated; calendar-based spending sets usTime = lsTime = T / max(T). Custom spending times can also be passed via usTime and lsTime.

  • method: One of "LachinFoulkes" (default), "Schoenfeld", "Freedman", or "BernsteinLagakos". Controls how fixed-design events (n.fix) are computed when x is not provided. When x is provided, x$n.fix is used directly for exact consistency with the design.
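For intuition about the fixed-design event count, the textbook Schoenfeld approximation can be written out in a few lines of base R. This is a sketch under standard assumptions; the package's method implementations may differ in detail, and schoenfeld_events() is an illustrative name.

```r
# Textbook Schoenfeld approximation for the fixed-design event count
# (illustration only; not the package's n.fix computation).
schoenfeld_events <- function(hr, alpha = 0.025, beta = 0.1, ratio = 1) {
  p <- ratio / (1 + ratio)  # allocation fraction, experimental arm
  ((qnorm(1 - alpha) + qnorm(1 - beta)) / (sqrt(p * (1 - p)) * log(hr)))^2
}
ceiling(schoenfeld_events(hr = 0.7, alpha = 0.025, beta = 0.1))
#> [1] 331
```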

6.2.4 Stratified targetEvents

targetEvents accepts a scalar (recycled), a vector of length k (one overall target per analysis), or a matrix with k rows and nstrata columns (per-stratum targets). A vector of length k is always interpreted as overall targets; to specify per-stratum targets for a single analysis, use a 1-row matrix.
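The shape handling can be sketched in base R as follows. normalize_target_events() is an illustrative name, not the package function; it only mirrors the interpretation rules stated above.

```r
# Sketch of the three accepted shapes of targetEvents (not package code):
# a matrix carries per-stratum targets, summed across columns per analysis.
normalize_target_events <- function(targetEvents, k) {
  if (is.matrix(targetEvents)) {
    stopifnot(nrow(targetEvents) == k)
    rowSums(targetEvents)        # per-stratum targets -> overall totals
  } else if (length(targetEvents) == 1) {
    rep(targetEvents, k)         # scalar recycled to all analyses
  } else {
    stopifnot(length(targetEvents) == k)
    targetEvents                 # one overall target per analysis
  }
}

# Two analyses, two strata (columns): overall targets are the row sums.
normalize_target_events(matrix(c(40, 80, 35, 70), nrow = 2), k = 2)
#> [1]  75 150
```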

6.3 Example: design and power computation

We first create a group sequential design, then evaluate power under the design assumptions and under an alternative hazard ratio.

design <- gsSurv(
  k = 3, test.type = 4, alpha = 0.025, sided = 1, beta = 0.1,
  sfu = sfHSD, sfupar = -4, sfl = sfHSD, sflpar = -2,
  lambdaC = log(2) / 12, hr = 0.7, hr0 = 1,
  eta = 0.01, gamma = 10, R = 16, minfup = 12, T = 28
)
design
Group sequential design (method=LachinFoulkes; k=3 analyses; Two-sided asymmetric with non-binding futility)
N=629.1 subjects | D=353.2 events | T=28.0 study duration | accrual=16.0 Accrual duration | minfup=12.0 minimum follow-up | ratio=1 randomization ratio (experimental/control)

Spending functions:
  Efficacy bounds derived using a Hwang-Shih-DeCani spending function with gamma = -4.
  Futility bounds derived using a Hwang-Shih-DeCani spending function with gamma = -2.

Analysis summary:
Method: LachinFoulkes 
    Analysis              Value Efficacy Futility
   IA 1: 33%                  Z   3.0107  -0.2388
      N: 490        p (1-sided)   0.0013   0.5944
 Events: 118       ~HR at bound   0.5741   1.0450
   Month: 12   P(Cross) if HR=1   0.0013   0.4056
             P(Cross) if HR=0.7   0.1412   0.0148
   IA 2: 67%                  Z   2.5465   0.9410
      N: 630        p (1-sided)   0.0054   0.1733
 Events: 236       ~HR at bound   0.7176   0.8846
   Month: 19   P(Cross) if HR=1   0.0062   0.8347
             P(Cross) if HR=0.7   0.5815   0.0437
       Final                  Z   1.9992   1.9992
      N: 630        p (1-sided)   0.0228   0.0228
 Events: 354       ~HR at bound   0.8084   0.8084
   Month: 28   P(Cross) if HR=1   0.0233   0.9767
             P(Cross) if HR=0.7   0.9000   0.1000

Key inputs (names preserved):
                               desc    item  value input
                    Accrual rate(s)   gamma 39.316    10
           Accrual rate duration(s)       R     16    16
             Control hazard rate(s) lambdaC  0.058 0.058
            Control dropout rate(s)     eta   0.01  0.01
       Experimental dropout rate(s)    etaE   0.01  etaE
 Event and dropout rate duration(s)       S   NULL     S

6.3.1 Power at design assumptions

Passing the design’s analysis times together with the design hazard ratio exactly reproduces the design power (90%). Internally, gsSurvPower() uses x$n.fix (when x is provided) so that the drift parameter \(\theta\) and the bounds are normalized identically to gsSurv(). The drift under the assumed HR is obtained by method-specific scaling:

  • For Schoenfeld, LachinFoulkes, and BernsteinLagakos: \(\theta_{\text{assumed}} = \theta_{\text{design}} \times |\log(\text{hr}/\text{hr}_0)| / |\log(\text{hr}_1/\text{hr}_0)|\)
  • For Freedman: \(\theta_{\text{assumed}} = \theta_{\text{design}} \times |\delta(\text{hr})| / |\delta(\text{hr}_1)|\) where \(\delta(\text{hr}) = (\text{hr} - 1) / (\text{hr} + 1/\text{ratio})\)

This reproduces the design power exactly at hr = hr1 and scales correctly for other hazard ratios, matching rpact::getPowerSurvival() to within 0.5% across methods.
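The two scaling rules can be written out directly as a base-R sketch; delta_ratio() is an illustrative name mirroring the formulas above, not the package's internal helper.

```r
# Sketch of the method-specific drift scaling (illustration, not package code).
delta_ratio <- function(hr, hr1, hr0 = 1, ratio = 1,
                        method = c("log-hr", "Freedman")) {
  method <- match.arg(method)
  if (method == "Freedman") {
    d <- function(h) (h - 1) / (h + 1 / ratio)
    abs(d(hr)) / abs(d(hr1))
  } else {
    # Schoenfeld, LachinFoulkes, and BernsteinLagakos all scale by log-HR
    abs(log(hr) - log(hr0)) / abs(log(hr1) - log(hr0))
  }
}

delta_ratio(hr = 0.7, hr1 = 0.7)  # exactly 1 at the design effect
delta_ratio(hr = 0.8, hr1 = 0.7)  # < 1: smaller drift, hence lower power
```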

pwr_design <- gsSurvPower(x = design, plannedCalendarTime = design$T)
cat("Design power:", round((1 - design$beta) * 100, 1), "%\n")
Design power: 90 %
cat("gsSurvPower:  ", round(pwr_design$power * 100, 1), "%\n")
gsSurvPower:   90 %

6.3.2 Power under a different hazard ratio

Suppose the true treatment effect is HR = 0.8 instead of the design assumption of 0.7:

pwr_worse <- gsSurvPower(x = design, hr = 0.8, plannedCalendarTime = design$T)
cat("Power at HR = 0.8:", round(pwr_worse$power * 100, 1), "%\n")
Power at HR = 0.8: 54.1 %

The futility bounds remain calibrated to the design HR (0.7), but power is evaluated under the assumed HR (0.8).

6.3.3 Power over a range of hazard ratios

hr_grid <- seq(0.55, 0.90, by = 0.05)
power_vals <- sapply(hr_grid, function(h) {
  p <- gsSurvPower(x = design, hr = h, plannedCalendarTime = design$T)
  p$power
})
results <- data.frame(HR = hr_grid, Power = round(power_vals * 100, 1))
results
    HR Power
1 0.55  99.9
2 0.60  99.4
3 0.65  97.1
4 0.70  90.0
5 0.75  75.2
6 0.80  54.1
7 0.85  32.5
8 0.90  16.2

6.4 Comparison with gsDesign power plots

The gsDesign package provides power plots via plot(design, plottype = 2). These hold event counts fixed at the design values and vary only the drift parameter \(\theta\). The table below compares three approaches across a range of hazard ratios:

  • gsDesign: gsProbability() with design bounds, design events, scaled \(\theta\). This is what plot(design, plottype = 2) computes.
  • gsSurvPower (fixed events): gsSurvPower() with targetEvents = design_events. Events are held constant; calendar times adjust.
  • gsSurvPower (fixed calendar): gsSurvPower() with plannedCalendarTime = design$T. Calendar times are held constant; events change with the assumed HR.

design_events <- rowSums(design$eDC) + rowSums(design$eDE)
hr_grid <- seq(0.55, 0.95, by = 0.05)

comparison <- data.frame(
  HR = hr_grid,
  gsDesign_plot = sapply(hr_grid, function(h) {
    delta_ratio <- abs(log(h)) / abs(log(design$hr))
    theta_h <- design$delta * delta_ratio
    gsp <- gsDesign::gsProbability(
      k = design$k, theta = theta_h,
      n.I = design$n.I,
      a = design$lower$bound, b = design$upper$bound, r = 18)
    sum(gsp$upper$prob)
  }),
  fixed_events = sapply(hr_grid, function(h) {
    gsSurvPower(x = design, hr = h, targetEvents = design_events)$power
  }),
  fixed_calendar = sapply(hr_grid, function(h) {
    gsSurvPower(x = design, hr = h, plannedCalendarTime = design$T)$power
  })
)
comparison[, -1] <- round(comparison[, -1] * 100, 2)
comparison
    HR gsDesign_plot fixed_events fixed_calendar
1 0.55         99.95        99.95          99.92
2 0.60         99.57        99.57          99.43
3 0.65         97.40        97.40          97.14
4 0.70         90.00        90.00          90.00
5 0.75         74.37        74.37          75.21
6 0.80         52.53        52.53          54.10
7 0.85         31.05        31.05          32.54
8 0.90         15.36        15.36          16.20
9 0.95          6.44         6.44           6.69

Key observations:

  • The gsDesign_plot and fixed_events columns match to numerical precision because both condition on the same event counts at each analysis. When using targetEvents, gsSurvPower() reproduces the gsDesign power plot exactly.

  • The fixed_calendar column differs modestly because fixing calendar times allows the expected event count to change with the assumed HR. A worse HR (closer to 1) produces more expected events at the same calendar time, since the experimental arm has a higher failure rate. This slightly changes the statistical information at each analysis, producing an “unconditional” power that accounts for the interplay between treatment effect and event accrual.

6.4.1 Bounds stability

When using targetEvents, the efficacy and futility bounds do not change with the assumed HR. The bounds are determined entirely by the design parameters (alpha/beta spending, information fractions, n.fix) and are reused directly from the input design x when the timing matches:

design_events <- rowSums(design$eDC) + rowSums(design$eDE)
cat("Design bounds (Z-scale):\n")
Design bounds (Z-scale):
cat("  Efficacy:", round(design$upper$bound, 4), "\n")
  Efficacy: 3.0107 2.5465 1.9992 
cat("  Futility:", round(design$lower$bound, 4), "\n\n")
  Futility: -0.2388 0.941 1.9992 
for (h in c(0.5, 0.7, 0.8, 1.0)) {
  pwr <- gsSurvPower(x = design, hr = h, targetEvents = design_events)
  cat(sprintf("HR=%.1f  Efficacy: %s  Futility: %s  (identical: %s)\n",
    h,
    paste(round(pwr$upper$bound, 4), collapse = ", "),
    paste(round(pwr$lower$bound, 4), collapse = ", "),
    identical(pwr$upper$bound, design$upper$bound) &&
      identical(pwr$lower$bound, design$lower$bound)))
}
HR=0.5  Efficacy: 3.0107, 2.5465, 1.9992  Futility: -0.2388, 0.941, 1.9992  (identical: TRUE)
HR=0.7  Efficacy: 3.0107, 2.5465, 1.9992  Futility: -0.2388, 0.941, 1.9992  (identical: TRUE)
HR=0.8  Efficacy: 3.0107, 2.5465, 1.9992  Futility: -0.2388, 0.941, 1.9992  (identical: TRUE)
HR=1.0  Efficacy: 3.0107, 2.5465, 1.9992  Futility: -0.2388, 0.941, 1.9992  (identical: TRUE)

With plannedCalendarTime, different assumed HRs produce different expected event counts and therefore different information fractions, so the bounds are appropriately recomputed via gsDesign::gsDesign().

6.5 Example: event-based timing

Instead of calendar times, analyses can be triggered by target event counts:

pwr_events <- gsSurvPower(
  x = design,
  targetEvents = c(75, 150, 225)
)
cat("Analysis times:", round(pwr_events$T, 1), "\n")
Analysis times: 9.7 14.2 18.2 
cat("Events at each analysis:",
    round(rowSums(pwr_events$eDC) + rowSums(pwr_events$eDE), 1), "\n")
Events at each analysis: 75 150 225 
cat("Power:", round(pwr_events$power * 100, 1), "%\n")
Power: 73.5 %

6.6 Example: without a reference design

gsSurvPower() can be used without a reference design by specifying all parameters directly:

pwr_direct <- gsSurvPower(
  k = 2, test.type = 4, alpha = 0.025, sided = 1,
  lambdaC = log(2) / 6, hr = 0.65, hr0 = 1,
  eta = 0.01, gamma = 8, R = 18, ratio = 1,
  sfu = sfHSD, sfupar = -4, sfl = sfHSD, sflpar = -2,
  plannedCalendarTime = c(24, 36)
)
cat("Power:", round(pwr_direct$power * 100, 1), "%\n")
Power: 62.5 %
pwr_direct
Group sequential design (method=LachinFoulkes; k=2 analyses; Two-sided asymmetric with non-binding futility)
HR=0.650 vs HR0=1.000 | alpha=0.025 (sided=1) | power=62.5%
N=144.0 subjects | D=120.0 events | T=36.0 study duration | accrual=18.0 Accrual duration | minfup=18.0 minimum follow-up | ratio=1 randomization ratio (experimental/control)

Spending functions:
  Efficacy bounds derived using a Hwang-Shih-DeCani spending function with gamma = -4.
  Futility bounds derived using a Hwang-Shih-DeCani spending function with gamma = -2.

Analysis summary:
Method: LachinFoulkes 
    Analysis               Value Efficacy Futility
   IA 1: 82%                   Z   2.2661   1.5089
      N: 144         p (1-sided)   0.0117   0.0657
  Events: 98        ~HR at bound   0.6325   0.7371
   Month: 24    P(Cross) if HR=1   0.0117   0.9343
             P(Cross) if HR=0.65   0.4490   0.2646
       Final                   Z   2.0138   2.0138
      N: 144         p (1-sided)   0.0220   0.0220
 Events: 121        ~HR at bound   0.6924   0.6924
   Month: 36    P(Cross) if HR=1   0.0228   0.9772
             P(Cross) if HR=0.65   0.6250   0.3750

Key inputs (names preserved):
                               desc    item value    input
                    Accrual rate(s)   gamma     8        8
           Accrual rate duration(s)       R    18       18
             Control hazard rate(s) lambdaC 0.116 log(2)/6
            Control dropout rate(s)     eta  0.01     0.01
       Experimental dropout rate(s)    etaE  0.01     etaE
 Event and dropout rate duration(s)       S  NULL        S

6.7 Code listing

gsSurvPower() source code
gsSurvPower
function (x = NULL, k = NULL, test.type = NULL, alpha = NULL, 
    sided = NULL, astar = NULL, sfu = NULL, sfupar = NULL, sfl = NULL, 
    sflpar = NULL, r = NULL, usTime = NULL, lsTime = NULL, lambdaC = NULL, 
    hr = NULL, hr0 = NULL, hr1 = NULL, eta = NULL, etaE = NULL, 
    gamma = NULL, R = NULL, S = NULL, ratio = NULL, minfup = NULL, 
    method = NULL, spending = c("information", "calendar"), plannedCalendarTime = NULL, 
    targetEvents = NULL, maxExtension = NULL, minTimeFromPreviousAnalysis = NULL, 
    minN = NULL, minFollowUp = NULL, tol = .Machine$double.eps^0.25) 
{
    spending <- match.arg(spending)
    if (!is.null(x)) {
        if (!inherits(x, "gsSurv")) 
            stop("x must be a gsSurv object")
        sided_infer <- function(tt) if (tt == 1) 
            1L
        else 2L
        if (is.null(k)) 
            k <- x$k
        if (is.null(test.type)) 
            test.type <- x$test.type
        if (is.null(sided)) 
            sided <- sided_infer(if (!is.null(test.type)) 
                test.type
            else x$test.type)
        if (is.null(alpha)) 
            alpha <- x$alpha * sided
        if (is.null(astar)) 
            astar <- x$astar
        if (is.null(sfu)) 
            sfu <- x$upper$sf
        if (is.null(sfupar)) 
            sfupar <- x$upper$param
        if (is.null(sfl)) 
            sfl <- x$lower$sf
        if (is.null(sflpar)) 
            sflpar <- x$lower$param
        if (is.null(r)) 
            r <- x$r
        if (is.null(lambdaC)) 
            lambdaC <- x$lambdaC
        if (is.null(hr)) 
            hr <- x$hr
        if (is.null(hr0)) 
            hr0 <- x$hr0
        if (is.null(hr1)) 
            hr1 <- x$hr
        if (is.null(eta)) 
            eta <- x$etaC
        if (is.null(etaE)) 
            etaE <- x$etaE
        if (is.null(gamma)) 
            gamma <- x$gamma
        if (is.null(R)) 
            R <- x$R
        if (is.null(S)) 
            S <- x$S
        if (is.null(ratio)) 
            ratio <- x$ratio
        if (is.null(minfup)) 
            minfup <- x$minfup
        if (is.null(method)) {
            method <- if (!is.null(x$method)) 
                x$method
            else "LachinFoulkes"
        }
        beta_design <- x$beta
    }
    else {
        if (is.null(k)) 
            stop("k must be specified when x is not provided")
        if (is.null(test.type)) 
            test.type <- 4L
        if (is.null(sided)) 
            sided <- 1L
        if (is.null(alpha)) 
            alpha <- 0.025
        if (is.null(astar)) 
            astar <- 0
        if (is.null(sfu)) 
            sfu <- gsDesign::sfHSD
        if (is.null(sfupar)) 
            sfupar <- -4
        if (is.null(sfl)) 
            sfl <- gsDesign::sfHSD
        if (is.null(sflpar)) 
            sflpar <- -2
        if (is.null(r)) 
            r <- 18
        if (is.null(lambdaC)) 
            lambdaC <- log(2)/6
        if (is.null(hr)) 
            hr <- 0.6
        if (is.null(hr0)) 
            hr0 <- 1
        if (is.null(hr1)) 
            hr1 <- hr
        if (is.null(eta)) 
            eta <- 0
        if (is.null(ratio)) 
            ratio <- 1
        if (is.null(R)) 
            R <- 12
        if (is.null(minfup)) 
            minfup <- 18
        if (is.null(method)) 
            method <- "LachinFoulkes"
        beta_design <- 0.1
    }
    method <- match.arg(method, c("LachinFoulkes", "Schoenfeld", 
        "Freedman", "BernsteinLagakos"))
    if (is.null(etaE)) 
        etaE <- eta
    if (!is.matrix(lambdaC)) 
        lambdaC <- matrix(if (is.vector(lambdaC)) 
            lambdaC
        else as.vector(lambdaC))
    nstrata <- ncol(lambdaC)
    nlambda <- nrow(lambdaC)
    etaC <- if (is.matrix(eta)) 
        eta
    else matrix(eta, nrow = nlambda, ncol = nstrata)
    etaE_mat <- if (is.matrix(etaE)) 
        etaE
    else matrix(etaE, nrow = nlambda, ncol = nstrata)
    if (!is.matrix(gamma)) 
        gamma <- matrix(gamma)
    Qe <- ratio/(1 + ratio)
    Qc <- 1 - Qe
    if (is.null(plannedCalendarTime) && is.null(targetEvents)) {
        if (!is.null(x)) {
            plannedCalendarTime <- x$T
        }
        else {
            stop("At least one of plannedCalendarTime or targetEvents must be specified")
        }
    }
    if (is.null(k)) {
        if (!is.null(plannedCalendarTime)) 
            k <- length(plannedCalendarTime)
        else if (is.matrix(targetEvents)) 
            k <- nrow(targetEvents)
        else if (!is.null(targetEvents)) 
            k <- length(targetEvents)
    }
    if (is.null(k) || k < 1) 
        stop("Could not determine number of analyses (k)")
    recycle_k <- function(p, nm) {
        if (is.null(p)) 
            return(rep(NA_real_, k))
        if (length(p) == 1) 
            return(rep(p, k))
        if (length(p) == k) 
            return(p)
        stop(paste(nm, "must have length 1 or", k))
    }
    pct <- recycle_k(plannedCalendarTime, "plannedCalendarTime")
    me <- recycle_k(maxExtension, "maxExtension")
    mtpa <- recycle_k(minTimeFromPreviousAnalysis, "minTimeFromPreviousAnalysis")
    mfu <- recycle_k(minFollowUp, "minFollowUp")
    mn <- recycle_k(minN, "minN")
    if (is.null(targetEvents)) {
        te <- rep(NA_real_, k)
    }
    else if (is.matrix(targetEvents)) {
        if (nrow(targetEvents) != k) 
            stop("targetEvents matrix must have k rows")
        te <- rowSums(targetEvents)
    }
    else {
        te <- recycle_k(targetEvents, "targetEvents")
    }
    ev_at <- function(t) {
        dc <- eEvents(lambda = lambdaC, eta = etaC, gamma = gamma * 
            Qc, R = R, S = S, T = t, minfup = 0)
        de <- eEvents(lambda = lambdaC * hr, eta = etaE_mat, 
            gamma = gamma * Qe, R = R, S = S, T = t, minfup = 0)
        list(eDC = dc$d, eDE = de$d, eNC = dc$n, eNE = de$n, 
            total_d = sum(dc$d + de$d), total_n = sum(dc$n + 
                de$n))
    }
    pct_max <- if (any(!is.na(pct))) 
        max(pct[!is.na(pct)])
    else 0
    T_ub <- max(sum(R) * 5, pct_max * 2, 200)
    find_t_events <- function(target) {
        f <- function(t) ev_at(t)$total_d - target
        if (f(T_ub) < 0) {
            warning("Target ", round(target), " events may not be achievable")
            return(T_ub)
        }
        if (f(0.001) >= 0) 
            return(0.001)
        uniroot(f, c(0.001, T_ub), tol = tol)$root
    }
    find_t_enroll <- function(target) {
        f <- function(t) ev_at(t)$total_n - target
        if (f(T_ub) < 0) 
            return(T_ub)
        if (f(0.001) >= 0) 
            return(0.001)
        uniroot(f, c(0.001, T_ub), tol = tol)$root
    }
    T_an <- numeric(k)
    for (i in seq_len(k)) {
        floors <- numeric(0)
        if (!is.na(pct[i])) 
            floors <- c(floors, pct[i])
        if (i > 1 && !is.na(mtpa[i])) 
            floors <- c(floors, T_an[i - 1] + mtpa[i])
        if (!is.na(mn[i])) {
            t_n <- find_t_enroll(mn[i])
            fu <- if (!is.na(mfu[i])) 
                mfu[i]
            else 0
            floors <- c(floors, t_n + fu)
        }
        fl <- if (length(floors) > 0) 
            max(floors)
        else 0.001
        if (!is.na(te[i])) {
            t_ev <- find_t_events(te[i])
            if (t_ev <= fl) {
                T_an[i] <- fl
            }
            else if (!is.na(me[i])) {
                T_an[i] <- min(t_ev, fl + me[i])
            }
            else {
                T_an[i] <- t_ev
            }
        }
        else {
            T_an[i] <- fl
        }
        if (!is.na(me[i]) && !is.na(pct[i])) {
            T_an[i] <- min(T_an[i], pct[i] + me[i])
        }
        else if (!is.na(me[i]) && i > 1) {
            T_an[i] <- min(T_an[i], T_an[i - 1] + me[i])
        }
    }
    eDC_mat <- eDE_mat <- eNC_mat <- eNE_mat <- NULL
    for (i in seq_len(k)) {
        ev <- ev_at(T_an[i])
        eDC_mat <- rbind(eDC_mat, ev$eDC)
        eDE_mat <- rbind(eDE_mat, ev$eDE)
        eNC_mat <- rbind(eNC_mat, ev$eNC)
        eNE_mat <- rbind(eNE_mat, ev$eNE)
    }
    total_d <- rowSums(eDC_mat) + rowSums(eDE_mat)
    timing <- total_d/max(total_d)
    compute_delta_ratio <- function(hr_num, hr_denom) {
        if (method == "Freedman") {
            delta_num <- (hr_num - 1)/(hr_num + 1/ratio)
            delta_den <- (hr_denom - 1)/(hr_denom + 1/ratio)
            abs(delta_num)/abs(delta_den)
        }
        else {
            abs(log(hr_num) - log(hr0))/abs(log(hr_denom) - log(hr0))
        }
    }
    if (!is.null(x) && !is.null(x$n.fix)) {
        n_fix <- x$n.fix
    }
    else {
        T_final <- T_an[k]
        minfup_nfix <- max(0, T_final - sum(R))
        n_fix <- nSurv(lambdaC = lambdaC, hr = hr1, hr0 = hr0, 
            eta = eta, etaE = etaE, gamma = gamma, R = R, S = S, 
            T = T_final, minfup = minfup_nfix, ratio = ratio, 
            alpha = alpha, beta = beta_design, sided = sided, 
            tol = tol, method = method)$d
    }
    if (spending == "calendar") {
        usTime_use <- T_an/max(T_an)
        lsTime_use <- usTime_use
    }
    else {
        usTime_use <- usTime
        lsTime_use <- lsTime
    }
    if (k == 1) {
        z_alpha <- qnorm(1 - alpha/sided)
        theta_design <- (z_alpha + qnorm(1 - beta_design))/sqrt(n_fix)
        theta_assumed <- theta_design * compute_delta_ratio(hr, 
            hr1)
        drift <- theta_assumed * sqrt(total_d[1])
        power_val <- pnorm(drift - z_alpha)
        design <- list(k = 1, test.type = test.type, alpha = alpha/sided, 
            sided = sided, n.I = total_d[1], n.fix = n_fix, timing = 1, 
            tol = tol, r = r, upper = list(bound = z_alpha, prob = matrix(c(alpha/sided, 
                power_val), nrow = 1)), lower = list(bound = -20, 
                prob = matrix(c(1 - alpha/sided, 1 - power_val), 
                  nrow = 1)), theta = c(0, theta_assumed), en = list(en = total_d[1]), 
            delta = theta_design, delta0 = log(hr0), delta1 = log(hr1), 
            astar = astar, beta = 1 - power_val)
        class(design) <- "gsDesign"
        pwr <- list(upper = list(prob = design$upper$prob), lower = list(prob = design$lower$prob), 
            en = design$en, theta = design$theta)
    }
    else {
        reuse_bounds <- !is.null(x) && !is.null(x$timing) && 
            length(x$timing) == k && isTRUE(all.equal(timing, 
            x$timing, tolerance = 1e-04))
        if (reuse_bounds) {
            design <- x
            lower_bounds <- x$lower$bound
            upper_bounds <- x$upper$bound
        }
        else {
            gs_args <- list(k = k, test.type = test.type, alpha = alpha/sided, 
                beta = beta_design, astar = astar, n.fix = n_fix, 
                timing = timing, sfu = sfu, sfupar = sfupar, 
                sfl = sfl, sflpar = sflpar, tol = tol, delta1 = log(hr1), 
                delta0 = log(hr0), usTime = usTime_use, lsTime = lsTime_use, 
                r = r)
            design <- do.call(gsDesign::gsDesign, gs_args)
            lower_bounds <- design$lower$bound
            upper_bounds <- design$upper$bound
        }
        if (length(lower_bounds) == 0) 
            lower_bounds <- rep(-20, k)
        theta_assumed <- design$delta * compute_delta_ratio(hr, 
            hr1)
        pwr <- gsDesign::gsProbability(k = k, theta = c(0, theta_assumed), 
            n.I = total_d, a = lower_bounds, b = upper_bounds, 
            r = r)
    }
    y <- design
    y$n.I <- total_d
    y$T <- T_an
    y$eDC <- eDC_mat
    y$eDE <- eDE_mat
    y$eNC <- eNC_mat
    y$eNE <- eNE_mat
    y$hr <- hr
    y$hr0 <- hr0
    y$hr1 <- hr1
    y$R <- R
    y$S <- S
    y$minfup <- minfup
    y$gamma <- gamma
    y$ratio <- ratio
    y$lambdaC <- lambdaC
    y$etaC <- etaC
    y$etaE <- etaE_mat
    y$variable <- "Power"
    y$sided <- sided
    y$tol <- tol
    y$method <- method
    y$spending <- spending
    y$call <- match.call()
    y$timing <- timing
    y$upper$prob <- pwr$upper$prob
    y$lower$prob <- pwr$lower$prob
    y$en <- pwr$en
    y$theta <- pwr$theta
    y$power <- sum(pwr$upper$prob[, 2])
    y$beta <- 1 - y$power
    class(y) <- c("gsSurv", "gsDesign")
    nameR <- nameperiod(cumsum(y$R))
    stratnames <- paste("Stratum", seq_len(ncol(y$lambdaC)))
    nameS <- if (is.null(y$S)) 
        "0-Inf"
    else nameperiod(cumsum(c(y$S, Inf)))
    rownames(y$lambdaC) <- nameS
    colnames(y$lambdaC) <- stratnames
    rownames(y$etaC) <- nameS
    colnames(y$etaC) <- stratnames
    rownames(y$etaE) <- nameS
    colnames(y$etaE) <- stratnames
    rownames(y$gamma) <- nameR
    colnames(y$gamma) <- stratnames
    return(y)
}