This example takes a look at incorporating a frailty, or random intercept, into a flexible parametric survival model, and how to fit such models in Stata. First we’ll use merlin to estimate our model, and then the more user-friendly wrapper command stmixed. More details on these models can be found in the following papers:

We can define a multilevel proportional hazards survival model as follows,

$$h_{ij}(t) = h_{0}(t) \exp (X_{ij}\beta + b_{i})$$

where

$$b_{i} \sim N(0,\sigma^{2})$$

Now, the flexible parametric model specifies the linear predictor on the log cumulative hazard scale, so in fact we have,

$$H_{ij}(t) = H_{0}(t)\exp (X_{ij}\beta + b_{i})$$

but since there are no time-dependent effects, \(\exp(X_{ij}\beta + b_{i})\) does not depend on \(t\), so differentiating \(H_{ij}(t)\) with respect to \(t\) returns the proportional hazards model above, and log cumulative hazard ratios can be interpreted as log hazard ratios. So, we have our vector of baseline covariates \(X_{ij}\) and associated conditional log hazard ratios \(\beta\) (conditional on the frailty, \(b_{i}\)). We can load an example simulated dataset from the stmixed package

. use http://fmwww.bc.edu/repec/bocode/s/stmixed_example1, clear

which represents a multi-centre trial scenario, with 100 centres each recruiting 60 patients, resulting in 6,000 observations. Two covariates were collected: a binary covariate \(X_{1}\) (coded 0/1), and a continuous covariate, \(X_{2}\), within the range [0,1].
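
Before fitting anything, it can be worth verifying the clustering structure directly. A quick sketch (not part of the original post; csize is just an illustrative variable name) to count the centres and check the cluster sizes:

. quietly tabulate centre
. display "Number of centres: " r(r)
. bysort centre: generate csize = _N    // patients per centre
. summarize csize                       // should report min = max = 60
. drop csize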

Let’s inspect the first few rows of our dataset,

.  list centre stime event x1 x2 in 1/5, noobs

  +-------------------------------------------+
  | centre      stime   event   x1         x2 |
  |-------------------------------------------|
  |      1   .2714585       1    1   .4306063 |
  |      1   .7353575       1    0   .5285968 |
  |      1   .8213144       1    0   .1442798 |
  |      1    .758931       1    1   .8612115 |
  |      1   .9547321       0    0    .501694 |
  +-------------------------------------------+

where centre denotes the centre number for each patient, stime contains our survival time, event our indicator variable for those that died (1) or were censored (0), x1 our binary covariate, and x2 our continuous covariate. We can fit our model as follows

. merlin (stime                             /// survival time
>                 x1 x2                     /// fixed covariates
>                 M1[centre]@1              /// random intercept
>                 ,                         ///
>         family(rp, df(3) failure(event))) //  baseline distribution
variables created: _rcs1_1 to _rcs1_3

Fitting fixed effects model:

Fitting full model:

Iteration 0:   log likelihood = -2195.8633  
Iteration 1:   log likelihood = -1964.8829  
Iteration 2:   log likelihood =  -1962.755  
Iteration 3:   log likelihood = -1962.7449  
Iteration 4:   log likelihood = -1962.7449  

Mixed effects regression model                           Number of obs = 6,000
Log likelihood = -1962.7449
------------------------------------------------------------------------------
             | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
stime:       |            
          x1 |   1.008152   .0367602    27.43   0.000      .936103      1.0802
          x2 |    -.96152   .0605554   -15.88   0.000    -1.080206   -.8428335
  M1[centre] |          1          .        .       .            .           .
       _cons |  -1.499116   .0994654   -15.07   0.000    -1.694065   -1.304167
-------------+----------------------------------------------------------------
centre:      |            
      sd(M1) |   .8965598   .0673174                      .7738693    1.038702
------------------------------------------------------------------------------
    Warning: Baseline spline coefficients not shown - use ml display

The key part of the syntax is specifying a normally distributed random effect, which we’ve called M1 (it must start with a capital letter, then a number), with the variable defining the cluster in square brackets, [centre], and finally the @1, which tells merlin not to estimate a coefficient on the random effect but to constrain it to 1. Our model converges nicely, and we observe an estimate of \(\sigma =\) 0.897 (95% CI: 0.774, 1.039), indicating substantial heterogeneity between centres.
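
The warning at the foot of the output notes that the baseline spline coefficients are not shown; as it suggests, ml display prints the full coefficient table. As a rough sketch of how we might also gauge the importance of the frailty (not from the original post; the stored-estimate names frailty and nofrailty are illustrative, and I’m assuming lrtest accepts stored merlin results), we could refit the model without the random intercept and compare the two fits:

. ml display                         // full table, including _rcs1_1 to _rcs1_3
. estimates store frailty            // keep the frailty model results
. capture drop _rcs1_*               // let merlin recreate its spline variables
. merlin (stime x1 x2, family(rp, df(3) failure(event)))
. estimates store nofrailty
. lrtest frailty nofrailty, force    // conservative: the null lies on the boundary

Note that the naive likelihood ratio test of \(\sigma = 0\) is conservative, since the null value sits on the boundary of the parameter space.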

merlin has a challenging syntax because it encompasses a lot of different modelling frameworks. To make it more user-friendly, we’ve developed a few wrapper commands to make our lives easier. One of these is stmixed, which is specifically designed to provide a more familiar Stata syntax for specifying random effects survival models; it essentially follows the same design as mestreg. It also has the added benefit of requiring the data to be stset before use, in keeping with Stata’s other survival commands. So we can fit the same model with,

. stset stime, failure(event)

Survival-time data settings

         Failure event: event!=0 & event<.
Observed time interval: (0, stime]
     Exit on or before: failure

--------------------------------------------------------------------------
      6,000  total observations
          0  exclusions
--------------------------------------------------------------------------
      6,000  observations remaining, representing
      3,427  failures in single-record/single-failure data
  3,550.738  total analysis time at risk and under observation
                                                At risk from t =         0
                                     Earliest observed entry t =         0
                                          Last observed exit t =  1.985816

. stmixed x1 x2 || centre:, distribution(rp) df(3)
Random effect M1: Intercept at level centre
variables created: _rcs1_1 to _rcs1_3

Fitting fixed effects model:

Fitting full model:

Iteration 0:   log likelihood = -2195.8633  
Iteration 1:   log likelihood = -1964.8829  
Iteration 2:   log likelihood =  -1962.755  
Iteration 3:   log likelihood = -1962.7449  
Iteration 4:   log likelihood = -1962.7449  

Mixed effects survival model                             Number of obs = 6,000
Log likelihood = -1962.7449
------------------------------------------------------------------------------
             | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
_t:          |            
          x1 |   1.008152   .0367602    27.43   0.000      .936103      1.0802
          x2 |    -.96152   .0605554   -15.88   0.000    -1.080206   -.8428335
  M1[centre] |          1          .        .       .            .           .
       _cons |  -1.499116   .0994654   -15.07   0.000    -1.694065   -1.304167
-------------+----------------------------------------------------------------
centre:      |            
      sd(M1) |   .8965598   .0673174                      .7738693    1.038702
------------------------------------------------------------------------------
    Warning: Baseline spline coefficients not shown - use ml display

obtaining identical estimates, unsurprisingly. stmixed gives us some helpful output telling us what it has named the random effect, and at what level it has been specified.

There are lots of predictions available post-estimation; take a look at help stmixed postestimation for more details.
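
For example, after stmixed we can obtain both population-averaged and conditional predictions. A minimal sketch (the variable names s_marg and s_cond are illustrative, and I’m assuming the survival statistic with the marginal and fixedonly options described in help stmixed postestimation):

. predict s_marg, survival marginal     // marginal survival, integrating over the frailty
. predict s_cond, survival fixedonly    // conditional survival with the random intercept at 0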
