In this post we’ll take a look at how to simulate survival data with a continuous, time-varying covariate. The aim is to simulate from a data-generating mechanism appropriate for evaluating a joint longitudinal-survival model. We’ll use the survsim command to simulate the survival times, and the merlin command to fit the corresponding true model.

Let’s assume a proportional hazards survival model, with the current value parameterisation. So for the ith patient, we have the observed longitudinal outcome,

$$y_{i}(t) = m_{i}(t) + \epsilon_{i}(t)$$

where

$$m_{i}(t) = X_{2i}(t) \beta_{2} + Z_{i}(t) b_{i}$$

and \(\epsilon_{i}(t)\) is our normally distributed residual variability. We call \(m_{i}(t)\) our trajectory function, representing the true underlying value of the continuous outcome at time t, which is a function of fixed and random effects, with associated design matrices and coefficients. We assume normally distributed random effects,

$$b_{i} \sim N(0, \Sigma)$$

Our survival model can be defined in terms of the hazard function,

$$h_{i}(t) = h_{0}(t) \exp (X_{1i}\beta_{1} + \alpha m_{i}(t))$$

where \(h_{0}(t)\) is the baseline hazard function, and \(X_{1i}\) is a vector of baseline covariates with associated log hazard ratios \(\beta_{1}\). We then link the current value of the biomarker directly to survival, with \(\alpha\) the log hazard ratio for a one-unit increase in the biomarker at time \(t\).

Let’s assume a simple random intercept and random linear trend for the biomarker trajectory, i.e.

$$m_{i}(t) = (\beta_{20} + b_{0i}) + (\beta_{21} + b_{1i})t$$

The challenge with simulating survival times from such a model is that the cumulative hazard function is analytically intractable, and hence cannot be inverted in closed form. This is one of the situations survsim was designed for (more details on the algorithm can be found in Crowther and Lambert (2013)). Assuming a step function for the biomarker would be simpler, but would not reflect the true data-generating mechanism, nor the benefit of the joint model, which models the biomarker continuously in time.
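
To make the problem concrete: under the model above, the cumulative hazard is

$$H_{i}(t) = \int_{0}^{t} h_{0}(u) \exp \left( X_{1i}\beta_{1} + \alpha m_{i}(u) \right) du$$

and a survival time can be simulated by drawing \(U \sim \text{Uniform}(0,1)\) and solving \(H_{i}(T) = -\log U\) for \(T\). Because \(m_{i}(u)\) enters the exponential as a function of \(u\), this integral has no closed form, so survsim evaluates it numerically and solves for \(T\) by iterative root finding; this is, in essence, the algorithm described by Crowther and Lambert (2013).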

We’ll simulate a dataset of 500 patients, generating an id variable.

. clear
. set seed 249587
. set obs 500
Number of observations (_N) was 0, now 500.
. gen id = _n

Next we simulate everything we need at the patient level. This includes a binary treatment group variable, trt, and the random effects for later use in the longitudinal data. We’ll simulate two independent random effects (note you can use drawnorm to specify a covariance structure; see the sketch after the code below), representing the subject-specific deviations of the intercept and slope.

. gen trt = runiform()>0.5
. gen b0 = rnormal()
. gen b1 = rnormal(0,0.1)
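
As an aside, if we wanted the intercept and slope random effects to be correlated, drawnorm can generate them jointly from a specified covariance matrix. A minimal sketch, with a purely illustrative covariance matrix (variances 1 and 0.01, correlation 0.3), which would replace the two rnormal() lines above:

. matrix Sigma = (1, .03 \ .03, .01)
. drawnorm b0 b1, cov(Sigma)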

Now we can simulate our survival times, still at the patient level. The key is being explicit in your definition of the hazard function. From above, substituting in the longitudinal trajectory, our hazard function becomes,

$$h_{i}(t) = h_{0}(t) \exp (X_{1i}\beta_{1} + \alpha \left[ X_{2i}(t) \beta_{2} + Z_{i}(t)b_{i}\right])$$

survsim allows you to specify a user-defined hazard function, meaning you have complete generality. The hazard function needs to be written in Mata code (Stata’s matrix programming language), but is fairly self-explanatory. Key things to note:

  • You refer to time using {t}
  • You can directly include Stata variables in the definition of your hazard function; they will be read in automatically
  • You must use the colon operator (e.g. :* and :+), which makes Mata perform element-by-element operations, so that the simulation calculations are vectorised and hence much faster

. survsim stime died,                                     /// new variables
>       hazard(                                           /// hazard function
>              0.1:*1.2:*{t}:^0.2 :*                      /// Weibull hazard
>              exp(0.2 :* (b0 :+ (0.1 :+ b1) :* {t}))     /// current value 
>              )                                          ///
>       covariates(trt -0.5)                              /// treatment effect
>       maxt(5)                                           //  admin. censoring
Warning: 294 survival times were above the upper limit of maxtime()
         They have been set to maxtime()
         You can identify them by _survsim_rc = 3

which assumes a scale and shape of \(\lambda = 0.1\) and \(\gamma = 1.2\) for the baseline Weibull hazard function, and \(\beta_{20}=0\) and \(\beta_{21}=0.1\) for the mean intercept and slope of the longitudinal trajectory. The association parameter is set to \(\alpha=0.2\). We also have a constant treatment effect with a log hazard ratio of -0.5.
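
Writing it out in full, the hazard we have just simulated from is

$$h_{i}(t) = 0.1 \times 1.2 \, t^{0.2} \exp \left( -0.5 \, \text{trt}_{i} + 0.2 \left[ b_{0i} + (0.1 + b_{1i}) t \right] \right)$$

which maps directly onto the hazard() and covariates() options in the call above.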

Now we can move on to generating the observed longitudinal data. We know the true trajectory function, so we can generate whatever observation scheme we like. We’ll simulate up to 5 observations per patient, measured at baseline and at 1, 2, 3 and 4 “years”. This is easiest to do using expand, and then generating our time variable, representing the time of observation, as the observation number (_n) minus 1.

. expand 5
(2,000 observations created)
. bys id : gen time = _n-1
. drop if time>stime
(410 observations deleted)

The last line simply drops any time points that occur after a patient’s event time. Now we generate the observed longitudinal responses at the observation time points, adding normally distributed residual variability with standard deviation 0.5.

. gen xb = b0 + (0.1 + b1) * time
. gen y = rnormal(xb,0.5)
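
In model terms, this generates

$$y_{ij} = b_{0i} + (0.1 + b_{1i}) t_{ij} + \epsilon_{ij}, \qquad \epsilon_{ij} \sim N(0, 0.5^{2})$$

at observation times \(t_{ij} \in \{0, 1, 2, 3, 4\}\), i.e. the same trajectory used when simulating the survival times, with \(\beta_{20} = 0\) and \(\beta_{21} = 0.1\).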

Finally, because we expand-ed the data, the survival time and event indicator were replicated across all the extra rows. Since we’re going to use merlin to estimate our joint model, and it will use every non-missing value of the survival outcome, it’s crucial we remove the repeats, and only pass a single event time and event indicator per id to the estimation command.

. bys id (time) : replace stime = . if _n>1
(1,590 real changes made, 1,590 to missing)

. bys id (time) : replace died = . if _n>1
(1,590 real changes made, 1,590 to missing)

So our final dataset looks like:

. list id trt time y stime died if id==1 | id==13, sepby(id)

      +------------------------------------------------+
      | id   trt   time           y       stime   died |
      |------------------------------------------------|
   1. |  1     0      0    1.332419           5      0 |
   2. |  1     0      1     1.35186           .      . |
   3. |  1     0      2    1.907379           .      . |
   4. |  1     0      3    1.459253           .      . |
   5. |  1     0      4    .6210317           .      . |
      |------------------------------------------------|
  53. | 13     1      0   -.9602892   2.8578734      1 |
  54. | 13     1      1    .1079234           .      . |
  55. | 13     1      2   -1.303469           .      . |
      +------------------------------------------------+

Here we have our patient identifier, id; treatment group, trt; the observation times of the longitudinal outcome, time; the observed values of the longitudinal outcome, y; the survival time, stime; and the associated event indicator, died.

We can fit the true data-generating joint model very simply using merlin, as follows

. merlin (stime                           /// survival time
>                 trt                     /// baseline treatment (PH)
>                 EV[y] ,                 /// Expected Value of y
>          family(weibull, failure(died)) /// Weibull distribution & event ind.
>                 timevar(stime))         /// time-dependent, because of EV[y]
>        (y                               /// response
>                 time                    /// fixed effect of time
>                 time#M2[id]@1           /// random effect on time
>                 M1[id]@1 ,              /// random intercept
>                 family(gaussian)        /// distribution
>                 timevar(time))          //  time-dependent

Fitting fixed effects model:

Fitting full model:

Iteration 0:   log likelihood = -3601.7852  
Iteration 1:   log likelihood =   -2917.04  
Iteration 2:   log likelihood = -2899.6754  
Iteration 3:   log likelihood = -2899.5656  
Iteration 4:   log likelihood = -2899.5656  

Mixed effects regression model                           Number of obs = 2,090
Log likelihood = -2899.5656
------------------------------------------------------------------------------
             | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
stime:       |            
         trt |  -.5093454   .1406999    -3.62   0.000    -.7851121   -.2335787
        EV[] |   .1964969   .0753046     2.61   0.009     .0489026    .3440913
       _cons |  -2.383151   .1512066   -15.76   0.000     -2.67951   -2.086791
  log(gamma) |   .1813281   .0668245     2.71   0.007     .0503545    .3123018
-------------+----------------------------------------------------------------
y:           |            
        time |   .1058904   .0096588    10.96   0.000     .0869596    .1248212
 time#M2[id] |          1          .        .       .            .           .
      M1[id] |          1          .        .       .            .           .
       _cons |   .0550412   .0461364     1.19   0.233    -.0353846    .1454669
  sd(resid.) |   .4920555   .0098604                       .473104    .5117661
-------------+----------------------------------------------------------------
id:          |            
      sd(M1) |    .952584   .0332475                      .8895989    1.020028
      sd(M2) |   .1013755   .0127412                      .0792412    .1296924
------------------------------------------------------------------------------

which links the expected value (current value) of the longitudinal outcome directly to survival by using the EV[] element type. Note the use of the timevar() option in both sub-models, which makes sure merlin knows which variable represents time in each, and allows the appropriate calculation of the likelihood. The model converges nicely, with parameter estimates close to the true values; a quick side-by-side comparison is given below. To convince ourselves further that everything is working, we could simply increase the sample size and check that the estimates are unbiased.
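
For a rough check, lining the estimates up against the values used in the simulation (assuming merlin’s Weibull parameterisation, so that the stime equation’s _cons corresponds to \(\log \lambda\)):

  • EV[]: 0.196 vs \(\alpha = 0.2\)
  • trt: -0.509 vs -0.5
  • log(gamma): 0.181 vs \(\log(1.2) \approx 0.182\)
  • _cons (stime): -2.383 vs \(\log(0.1) \approx -2.303\)
  • time: 0.106 vs \(\beta_{21} = 0.1\)
  • _cons (y): 0.055 vs \(\beta_{20} = 0\)
  • sd(resid.): 0.492 vs 0.5
  • sd(M1): 0.953 vs 1
  • sd(M2): 0.101 vs 0.1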

This example should hopefully provide a base case on which to expand for your own work. Check out the survsim and merlin pages for more.
