Let’s begin. We’ll work with a single continuous covariate, representing age, whose non-linear effect influences survival. We’ll simulate survival times under a data-generating model that incorporates a non-linear effect of age, then fit some models accounting for that non-linear effect, and finally make predictions at specified values of age. Sounds simple, but it’s often not.

Simulating survival data with a non-linear covariate effect

To simulate survival data we will use our survsim command. Our data-generating model is,

$$h(t) = \lambda \gamma t^{\gamma - 1} \exp (X_{1} \beta_{1} + X_{2} \beta_{2})$$

i.e. a Weibull baseline hazard, with both age (X1) and its square (X2) affecting survival, under proportional hazards. We choose sensible values for the baseline parameters, and for the coefficients of age and age^2. Coding it up, generating a dataset of 1000 observations, simulating age from N(30,5^2), and applying administrative censoring at 10 years, we have:

. clear
. set seed 98798
. set obs 1000
Number of observations (_N) was 0, now 1,000.
. gen age = rnormal(30,5)
. gen age2 = age^2

. survsim                                 /// command
>         stime died,                     /// new variables to store survival 
>                                         ///   time & event indicator
>         distribution(weibull)           /// survival time distribution
>         lambda(0.01) gamma(1.5)         /// scale and shape of the baseline
>         maxt(10)                        /// max. follow-up time (admin. 
>                                         ///   censoring)
>         covariates(age 0.01 age2 0.001) /// linear predictor - age and age^2 
>                                         ///   with coefficients (log hazard 
>                                         //    ratios)
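
Before moving on, a quick sanity check of the simulated data never hurts (a sketch of our own; output omitted): the mean of died gives the observed event proportion, and stime should not exceed the administrative censoring time of 10 years.

. summarize age stime died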

Fitting a model with a non-linear covariate effect

We’re working with survival data, so let’s now stset our simulated data.

. stset stime, failure(died)

Survival-time data settings

         Failure event: died!=0 & died<.
Observed time interval: (0, stime]
     Exit on or before: failure

--------------------------------------------------------------------------
      1,000  total observations
          0  exclusions
--------------------------------------------------------------------------
      1,000  observations remaining, representing
        653  failures in single-record/single-failure data
  6,756.636  total analysis time at risk and under observation
                                                At risk from t =         0
                                     Earliest observed entry t =         0
                                          Last observed exit t =        10

We can now start fitting models. We’re going to start by fitting the true model using the stmerlin command, which is a wrapper for the more powerful merlin command. One of the ways merlin differs from other estimation commands is in how non-linear effects can be modelled. But we’ll get to that.

. stmerlin age age#age , distribution(weibull)

Fitting full model:

Iteration 0:   log likelihood = -6756.6356  
Iteration 1:   log likelihood = -2159.8185  
Iteration 2:   log likelihood = -2153.1273  
Iteration 3:   log likelihood = -2078.2946  
Iteration 4:   log likelihood = -2073.9919  
Iteration 5:   log likelihood = -2073.8839  
Iteration 6:   log likelihood = -2073.8839  

Survival model                                           Number of obs = 1,000
Log likelihood = -2073.8839
------------------------------------------------------------------------------
             | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
_t:          |            
         age |  -.0263253   .0707263    -0.37   0.710    -.1649463    .1122957
     age#age |   .0018451   .0011354     1.63   0.104    -.0003802    .0040704
       _cons |   -4.31365   1.092658    -3.95   0.000     -6.45522    -2.17208
  log(gamma) |   .4132446   .0345778    11.95   0.000     .3454734    .4810158
------------------------------------------------------------------------------

All nice and simple. Now merlin and stmerlin have a few tricks in their syntax. They have element functions, notably fp(), rcs() and bs(), which directly create fractional polynomial, restricted cubic spline, and B-spline basis functions for you. Unlike with other commands in Stata, we don’t have to create such basis functions prior to model estimation; we can let merlin do it. Let’s take a look.

The above stmerlin model can be specified by using the fp() element function:

. stmerlin fp(age, powers(1 2)) , distribution(weibull)
variables created for model 1, component 1: _cmp_1_1_1 to _cmp_1_1_2

Fitting full model:

Iteration 0:   log likelihood = -6756.6356  
Iteration 1:   log likelihood = -2159.8185  
Iteration 2:   log likelihood = -2153.1273  
Iteration 3:   log likelihood = -2078.2946  
Iteration 4:   log likelihood = -2073.9919  
Iteration 5:   log likelihood = -2073.8839  
Iteration 6:   log likelihood = -2073.8839  

Survival model                                           Number of obs = 1,000
Log likelihood = -2073.8839
------------------------------------------------------------------------------
             | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
_t:          |            
      fp():1 |  -.0263253   .0707263    -0.37   0.710    -.1649463    .1122957
      fp():2 |   .0018451   .0011354     1.63   0.104    -.0003802    .0040704
       _cons |   -4.31365   1.092658    -3.95   0.000     -6.45522    -2.17208
  log(gamma) |   .4132446   .0345778    11.95   0.000     .3454734    .4810158
------------------------------------------------------------------------------

which reproduces the previous results. To explain further, fp() creates fractional polynomial basis functions of age with powers 1 and 2, i.e. age and age^2. If we favoured restricted cubic splines to model continuous covariates, we could use:

. stmerlin rcs(age, df(3) orthog) , distribution(weibull)
variables created for model 1, component 1: _cmp_1_1_1 to _cmp_1_1_3

Fitting full model:

Iteration 0:   log likelihood = -6756.6356  
Iteration 1:   log likelihood = -2159.8722  
Iteration 2:   log likelihood = -2152.5807  
Iteration 3:   log likelihood = -2077.5848  
Iteration 4:   log likelihood = -2073.2984  
Iteration 5:   log likelihood = -2073.1912  
Iteration 6:   log likelihood = -2073.1912  

Survival model                                           Number of obs = 1,000
Log likelihood = -2073.1912
------------------------------------------------------------------------------
             | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
_t:          |            
     rcs():1 |   .4146476   .0408083    10.16   0.000     .3346649    .4946303
     rcs():2 |  -.0781111   .0408836    -1.91   0.056    -.1582414    .0020193
     rcs():3 |    .045297   .0395261     1.15   0.252    -.0321728    .1227667
       _cons |  -3.402849    .116561   -29.19   0.000    -3.631304   -3.174393
  log(gamma) |   .4129337   .0345708    11.94   0.000     .3451761    .4806912
------------------------------------------------------------------------------

which would create a spline function with three degrees of freedom, and orthogonalise the basis functions (which often improves convergence).

stmerlin creates the basis functions for you, tells you what they are called, and leaves them in the dataset. Importantly, it labels them in the output so you can see what’s what. The elegant part is that we only have to specify the variable age to be expanded into splines, so changing the degrees of freedom and so on is extremely simple, as shown below; there’s no (re-)creating of basis functions prior to model fitting.
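
For instance (a sketch of our own, output omitted), moving to five degrees of freedom only requires changing the df() option; here we also drop the basis variables left behind by the previous fit first, as a precaution against name clashes:

. drop _cmp_1_1_*            // spline basis variables created by the previous rcs() fit
. stmerlin rcs(age, df(5) orthog) , distribution(weibull)

And the prediction syntax we turn to next is completely unchanged.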

Predicting from a model with a non-linear covariate effect

But! The benefit of this kind of syntax really comes into its own when obtaining predictions from a model. Since merlin creates the spline variables internally, when we want predictions at different values of age the spline basis is automatically recreated, and so getting a survival prediction for someone aged 50 is as simple as…

. predict s1, survival at(age 50)
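
And, as a quick sketch (the variable name s2 and the plotting choices here are just illustrative), predicting for someone aged 30 and overlaying the two survival curves against analysis time is just as easy:

. predict s2, survival at(age 30)
. line s1 s2 _t, sort legend(order(1 "age 50" 2 "age 30"))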

All rather simple, we hope you agree. The prediction call wouldn’t change if you used fractional polynomials, or more degrees of freedom, to model the effect of age. Now just imagine if you had a second continuous covariate…and then a spline-spline interaction…and then time-dependent effects on them… Convinced merlin makes life a bit easier…?
