In this post we’ll take a look at how to simulate survival data with a continuous, time-varying covariate. The aim is to simulate from a data-generating mechanism appropriate for evaluating a joint longitudinal-survival model. We’ll use the `survsim` command to simulate the survival times, and the `merlin` command to fit the corresponding true model.

Let’s assume a proportional hazards survival model, with the current value parameterisation. So for the $i$th patient, we have the observed longitudinal outcome,

$$y_i(t_{ij}) = m_i(t_{ij}) + e_{ij}$$

where

$$m_i(t) = X_i(t)\beta + Z_i(t)b_i$$

and $e_{ij} \sim N(0, \sigma_e^2)$ is our normally distributed residual variability. We call $m_i(t)$ our trajectory function, representing the true underlying value of the continuous outcome at time $t$, which is a function of fixed and random effects, with associated design matrices $X_i(t)$ and $Z_i(t)$ and coefficients $\beta$ and $b_i$. We assume normally distributed random effects,

$$b_i \sim N(0, \Sigma)$$

Our survival model can be defined in terms of the hazard function,

$$h_i(t) = h_0(t) \exp\left( X_i \varphi + \alpha\, m_i(t) \right)$$

where $h_0(t)$ is the baseline hazard function, and $X_i$ is a vector of baseline covariates with associated log hazard ratios $\varphi$. We then link the current value of the biomarker, $m_i(t)$, directly to survival, where $\alpha$ is the log hazard ratio for a one-unit increase in the biomarker at time $t$.

Let’s assume a simple random intercept and random linear trend for the biomarker trajectory, i.e.

$$m_i(t) = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\, t$$

The challenge with simulating survival times from such a model is that the cumulative hazard function is analytically intractable, and hence non-invertible. This is one of the situations `survsim` was designed for (more details on the algorithm can be found in Crowther and Lambert (2013)). Assuming a step function for the biomarker would be simpler, but would not reflect the true data-generating mechanism, nor the benefit of the joint model, which can model time continuously.
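To get a feel for what such an algorithm does, here is a minimal sketch of the general idea from Crowther and Lambert (2013), written in Python purely for illustration (none of these function names are part of `survsim`): draw $u \sim U(0,1)$, approximate the cumulative hazard $H(t)$ by numerical integration, and solve $H(T) = -\log(u)$ for $T$ by bisection.

```python
import math
import random

def cumulative_hazard(hazard, t, n=2000):
    """Approximate H(t) = integral of h(s) over [0, t] by the trapezoidal rule."""
    if t <= 0.0:
        return 0.0
    step = t / n
    total = 0.5 * (hazard(1e-12) + hazard(t))  # avoid evaluating h exactly at 0
    for i in range(1, n):
        total += hazard(i * step)
    return total * step

def simulate_time(hazard, u, upper=50.0, tol=1e-8):
    """Solve H(T) = -log(u) for T by bisection; if H(upper) < -log(u),
    the result is effectively censored at `upper`."""
    target = -math.log(u)
    lo, hi = 0.0, upper
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cumulative_hazard(hazard, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: one draw from the hazard used in this post, for one patient
lam, gam, alpha, beta1 = 0.1, 1.2, 0.2, 0.1   # Weibull scale/shape, association, mean slope
b0, b1 = 0.5, 0.05                            # subject-specific random effects
h = lambda t: lam * gam * t ** (gam - 1) * math.exp(alpha * (b0 + (beta1 + b1) * t))
random.seed(1)
print(simulate_time(h, random.random()))
```

When the association is switched off, the hazard is plain Weibull and the numerical inversion agrees with the analytic one, which is a useful sanity check on the integration and root-finding.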

We’ll simulate a dataset of 500 patients, generating an `id` variable.

```
. clear
. set seed 249587
. set obs 500
Number of observations (_N) was 0, now 500.
. gen id = _n
```

Next we simulate the things we need at the patient level. This includes a binary treatment group variable, `trt`, and the random effects for later use in the longitudinal data. We’ll simulate two independent random effects (note you can use `drawnorm` to specify a covariance structure), representing the subject-specific deviations of the intercept and slope.

```
. gen trt = runiform()>0.5
. gen b0 = rnormal()
. gen b1 = rnormal(0,0.1)
```

Now we can simulate our survival times, still at the patient level. The key is being explicit in your definition of the hazard function. From above, substituting in the longitudinal trajectory, our hazard function becomes,

$$h_i(t) = h_0(t) \exp\left( X_i \varphi + \alpha \left\{ (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\, t \right\} \right)$$

`survsim` allows you to specify a user-defined hazard function, meaning you have complete generality. The hazard function needs to be written in Mata code (Stata’s matrix programming language), but is fairly self-explanatory. Key things to note:

- You refer to time using `{t}`
- You can directly include Stata variables in the definition of your hazard function; they will be read in automatically
- You must use the colon operator, which makes Mata do element-by-element operations, allowing the simulation calculations to be vectorised and hence much faster

```
. survsim stime died, /// new variables
> hazard( /// hazard function
> 0.1:*1.2:*{t}:^0.2 :* /// Weibull hazard
> exp(0.2 :* (b0 :+ (0.1 :+ b1) :* {t})) /// current value
> ) ///
> covariates(trt -0.5) /// treatment effect
> maxt(5) // admin. censoring
Warning: 294 survival times were above the upper limit of maxtime()
They have been set to maxtime()
You can identify them by _survsim_rc = 3
```

which assumes a scale of $\lambda = 0.1$ and shape of $\gamma = 1.2$ for the baseline Weibull hazard function, and $\beta_0 = 0$ and $\beta_1 = 0.1$ for the mean intercept and slope of the longitudinal trajectory. The association parameter is set to $\alpha = 0.2$. We also have a constant treatment effect with a log hazard ratio of $-0.5$.
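To make the role of each parameter explicit, the full hazard being simulated from, including the `covariates(trt -0.5)` term, can be written as a single function. A hypothetical Python translation (not `survsim` syntax):

```python
import math

def hazard(t, trt, b0, b1,
           lam=0.1, gam=1.2,      # Weibull scale and shape
           loghr_trt=-0.5,        # treatment log hazard ratio
           alpha=0.2,             # association parameter
           beta1=0.1):            # mean slope of the trajectory
    """Hazard for one patient at time t, mirroring the survsim call above."""
    baseline = lam * gam * t ** (gam - 1)
    trajectory = b0 + (beta1 + b1) * t
    return baseline * math.exp(loghr_trt * trt + alpha * trajectory)

# A treated patient's hazard is exp(-0.5) times that of an otherwise
# identical untreated patient, at every t
print(hazard(1.0, trt=1, b0=0.0, b1=0.0) /
      hazard(1.0, trt=0, b0=0.0, b1=0.0))   # exp(-0.5) ≈ 0.6065
```

Because the biomarker trajectory enters through `exp()`, the hazard ratio for treatment is constant over time, while the effect of the biomarker grows (or shrinks) with its current value.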

Now we can move on to generating the *observed* longitudinal data. We know the true trajectory function, so we can generate whatever observation scheme we like. We’ll simulate up to 5 observations per patient, measured at baseline, 1, 2, 3, 4 “years”. This is easiest to do using `expand`, and then generating our `time` variable to represent the time of observation, using the observation number (`_n`) minus 1.

```
. expand 5
(2,000 observations created)
. bys id : gen time = _n-1
. drop if time>stime
(410 observations deleted)
```

The last line simply drops any time points that occur after a patient’s event time. Now we generate the observed longitudinal responses, at the observation time points, incorporating some residual variability from the true normal distribution, with standard deviation 0.5.

```
. gen xb = b0 + (0.1 + b1) * time
. gen y = rnormal(xb,0.5)
```
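The same observation scheme is easy to prototype outside Stata. A hypothetical Python sketch of the expand/drop/noise steps for one patient (names chosen to mirror the Stata variables):

```python
import random

def observe(stime, b0, b1, beta1=0.1, sd=0.5, times=(0, 1, 2, 3, 4)):
    """Return (time, y) pairs at the scheduled visits up to the event time."""
    rows = []
    for t in times:
        if t > stime:                  # same role as: drop if time > stime
            break
        xb = b0 + (beta1 + b1) * t     # true trajectory at this visit
        rows.append((t, random.gauss(xb, sd)))   # add residual variability
    return rows

random.seed(249587)
print(observe(stime=2.86, b0=-0.96, b1=0.02))
```

A patient with an event at 2.86 “years” contributes measurements at 0, 1 and 2 only, exactly as in the Stata dataset above.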

Finally, because we `expand`-ed our survival times, it replicated them in all the extra rows. Since we’re going to use `merlin` to estimate our joint model, which can handle repeated event times, it’s crucial we remove the repeats, and only pass a single event time and event indicator per `id` to the estimation command.

```
. bys id (time) : replace stime = . if _n>1
(1,590 real changes made, 1,590 to missing)
. bys id (time) : replace died = . if _n>1
(1,590 real changes made, 1,590 to missing)
```
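The same tidy-up, sketched as a hypothetical Python function over per-row records (again, only an illustration of the logic, not anything `merlin` provides):

```python
def blank_repeats(rows):
    """Keep stime/died only on each id's first row (rows sorted by id, time)."""
    seen, out = set(), []
    for row in rows:
        row = dict(row)               # copy so the input is untouched
        if row["id"] in seen:
            row["stime"] = row["died"] = None
        else:
            seen.add(row["id"])
        out.append(row)
    return out

rows = [{"id": 1, "time": 0, "stime": 5.0, "died": 0},
        {"id": 1, "time": 1, "stime": 5.0, "died": 0},
        {"id": 13, "time": 0, "stime": 2.86, "died": 1}]
print(blank_repeats(rows))
```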

So our final dataset looks like:

```
. list id trt time y stime died if id==1 | id==13, sepby(id)
+------------------------------------------------+
| id trt time y stime died |
|------------------------------------------------|
1. | 1 0 0 1.332419 5 0 |
2. | 1 0 1 1.35186 . . |
3. | 1 0 2 1.907379 . . |
4. | 1 0 3 1.459253 . . |
5. | 1 0 4 .6210317 . . |
|------------------------------------------------|
53. | 13 1 0 -.9602892 2.8578734 1 |
54. | 13 1 1 .1079234 . . |
55. | 13 1 2 -1.303469 . . |
+------------------------------------------------+
```

Here we have our patient identifier, `id`, patient treatment group, `trt`, the observation times of the longitudinal outcome, `time`, the observed values of the longitudinal outcome, `y`, the survival time, `stime`, and associated event indicator, `died`.

We can fit the true data-generating joint model very simply using `merlin`, as follows:

```
. merlin (stime /// survival time
> trt /// baseline treatment (PH)
> EV[y] , /// Expected Value of y
> family(weibull, failure(died)) /// Weibull distribution & event ind.
> timevar(stime)) /// time-dependent, because of EV[y]
> (y /// response
> time /// fixed effect of time
> time#M2[id]@1 /// random effect on time
> M1[id]@1 , /// random intercept
> family(gaussian) /// distribution
> timevar(time)) // time-dependent
Fitting fixed effects model:
Fitting full model:
Iteration 0: log likelihood = -3601.7852
Iteration 1: log likelihood = -2917.04
Iteration 2: log likelihood = -2899.6754
Iteration 3: log likelihood = -2899.5656
Iteration 4: log likelihood = -2899.5656
Mixed effects regression model Number of obs = 2,090
Log likelihood = -2899.5656
------------------------------------------------------------------------------
| Coefficient Std. err. z P>|z| [95% conf. interval]
-------------+----------------------------------------------------------------
stime: |
trt | -.5093454 .1406999 -3.62 0.000 -.7851121 -.2335787
EV[] | .1964969 .0753046 2.61 0.009 .0489026 .3440913
_cons | -2.383151 .1512066 -15.76 0.000 -2.67951 -2.086791
log(gamma) | .1813281 .0668245 2.71 0.007 .0503545 .3123018
-------------+----------------------------------------------------------------
y: |
time | .1058904 .0096588 10.96 0.000 .0869596 .1248212
time#M2[id] | 1 . . . . .
M1[id] | 1 . . . . .
_cons | .0550412 .0461364 1.19 0.233 -.0353846 .1454669
sd(resid.) | .4920555 .0098604 .473104 .5117661
-------------+----------------------------------------------------------------
id: |
sd(M1) | .952584 .0332475 .8895989 1.020028
sd(M2) | .1013755 .0127412 .0792412 .1296924
------------------------------------------------------------------------------
```

which links the expected value (current value) of the longitudinal outcome directly to survival by using the `EV[]` element type. Note the use of the `timevar()` option in both sub-models, which makes sure `merlin` knows which variable represents time in each, and allows the appropriate calculation of the likelihood. The model converges nicely, with parameter estimates close to the true values. To convince ourselves that everything is working, we can simply increase the sample size further to check that the estimates are unbiased.

This example should hopefully provide a base case on which to expand for your own work. Check out the `survsim` and `merlin` pages for more.
