Interval censoring occurs when we don’t know the exact time an event occurred, only that it occurred within a particular time interval. Such data is common in ophthalmology and dentistry, where events are only detected at scheduled appointments but actually occurred at some point since the previous visit. Arguably, all survival data is subject to some degree of interval censoring – recording time of death to the nearest day, for example, implies an interval of one day.

In this post, we will look at how to simulate interval censored survival times, and then show how to use merlin to fit an interval censored flexible parametric survival model. We simulate continuous time survival data, and then apply interval censoring.

First we’ll simulate a dataset of 1000 observations, and include a binary variable which we will assume is a treatment group indicator, coded 0 for control and 1 for active treatment. Observations will be assigned to treatment with 50% probability. And of course, we set a seed for reproducibility.

. clear

. set seed 72549

. set obs 1000
Number of observations (_N) was 0, now 1,000.

. gen trt = runiform()>0.5

We will use survsim to simulate survival times from a Weibull distribution with shape and scale parameters of 1.2 and 0.1, respectively. We’ll assume a log hazard ratio due to treatment of -0.5, and apply administrative right censoring at 5 years.

. survsim stime event ,   dist(weib) lambda(0.1) gamma(1.2) ///
>                         covariates(trt -0.5) maxtime(5)

We’ve simulated continuous, exactly observed survival times, which is what we would observe in an ideal world. We will now apply interval censoring. Let’s assume observations were only made at annual visits, so any event time is rounded up to the next whole year to give the right side of the interval, and rounded down (floored) to give the left side. This code is easily adapted to widen or shorten the intervals – a sketch of a shorter schedule follows the code below. The left interval will be stored in st1,

. gen st1 = floor(stime) if stime < 5
(590 missing values generated)

. replace stime = st1 + 1 if st1 < 5
(410 real changes made)
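
As an aside, here is a sketch of how the censoring step might be adapted to a shorter assessment schedule, say six-monthly visits – the half-year width is our own choice for illustration, and these two lines would be used in place of the two commands above:

. * Sketch: six-monthly visits instead of annual ones
. gen st1 = floor(stime*2)/2 if stime < 5    // left side: round down to the nearest half year
. replace stime = st1 + 0.5 if st1 < .       // right side: end of the half-year interval containing the event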

merlin identifies interval censored observations through the failure() indicator being coded 2, rather than 1 for exactly observed events or 0 for right censored observations, so we make that change (survsim codes all events as 1 by default). We’re assuming all events are interval censored, but merlin allows any combination of exactly observed, interval censored and right censored observations.

. replace event = 2 if event == 1
(410 real changes made)

Let’s have a quick look at the dataset (always recommended),

. list stime st1 event in 1/5

     +---------------------+
     | stime   st1   event |
     |---------------------|
  1. |     5     .       0 |
  2. |     5     .       0 |
  3. |     5     .       0 |
  4. |     5     4       2 |
  5. |     1     0       2 |
     +---------------------+

The first three observations are right censored at 5 years, and hence their st1 is missing. Observation 4 is an interval censored event which occurred between 4 and 5 years, and observation 5 is an event which occurred between 0 and 1 years.
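
A quick tabulation of the event indicator (not shown in the original output) is a handy check here – from the messages above we expect 590 right censored and 410 interval censored observations:

. tab event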

Now we are all set up and ready to fit an interval censored survival model with merlin. We will keep it relatively simple and show how to fit a Royston-Parmar flexible parametric survival model, which uses restricted cubic splines to model the baseline log cumulative hazard function, but importantly now allowing for interval censoring.
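
For reference (standard background rather than anything specific to this example), the Royston-Parmar model works on the log cumulative hazard scale,

$$\ln H(t \mid \texttt{trt}) = s\{\ln t; \boldsymbol{\gamma}\} + \beta \, \texttt{trt},$$

where $s\{\cdot\}$ is a restricted cubic spline. Interval censoring then enters the likelihood through a difference in survival probabilities: the contribution is $S(t)$ for a right censored time, the density $f(t)$ for an exactly observed event, and $S(t_L) - S(t_R)$ for an event known only to lie in the interval $(t_L, t_R]$ – which is precisely the information carried by st1 (left side) and stime (right side).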

The important option to focus on is linterval(), which passes the variable containing the left interval time for those observations that are interval censored; those observations must also have their event indicator coded 2. The right side of the interval is simply the value in your outcome variable, stime in our case. We specify our family() to be rp with 3 degrees of freedom for the baseline log cumulative hazard function (1 df would be equivalent to a Weibull – the true model), and add in the failure() and linterval() options,

. merlin  (stime trt , family(rp, df(3) failure(event) linterval(st1))) 
variables created: _rcs1_1 to _rcs1_3

Fitting full model:

Iteration 0:   log likelihood = -1792.7911  
Iteration 1:   log likelihood = -1507.7269  
Iteration 2:   log likelihood = -1321.6384  
Iteration 3:   log likelihood = -1316.8481  
Iteration 4:   log likelihood = -1316.8434  
Iteration 5:   log likelihood = -1316.8434  

Fixed effects regression model                           Number of obs = 1,000
Log likelihood = -1316.8434
------------------------------------------------------------------------------
             | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
stime:       |            
         trt |  -.6211561   .1019953    -6.09   0.000    -.8210632    -.421249
       _cons |  -.6726925   .0631678   -10.65   0.000    -.7964992   -.5488858
------------------------------------------------------------------------------
    Warning: Baseline spline coefficients not shown - use ml display

Our model converges smoothly and we obtain an estimated log hazard ratio due to treatment of -0.62, close to the true value of -0.5. You can increase the sample size to something large to remove Monte Carlo error and make sure there’s no bias – we leave that to you to check. Next steps would of course be to assess proportional hazards in our treatment effect by including an interaction between a function of time and trt (just add stime#trt, for example – sketched below), and we can do any number of other things (non-linear effects, frailties, …).
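
As a rough sketch of that proportional hazards check, following the post’s suggestion of simply adding stime#trt (treat this as a starting point – the exact syntax for time-dependent effects with the rp family is worth checking against the merlin documentation):

. * Sketch only: linear-in-time interaction to relax proportional hazards for trt
. merlin (stime trt stime#trt, family(rp, df(3) failure(event) linterval(st1)))

A Wald or likelihood ratio test of the interaction coefficient against the model above then gives a simple check of proportionality for the treatment effect.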

Hopefully this simple example gives a primer for both data setup and syntax. Remember, once a feature is added to merlin, it tends to work with everything – such as any of the other in-built survival distributions, as well as user-defined ones.
