Validating Excel Markov models in R

As part of my ongoing campaign to stop using Microsoft Excel for tasks to which it isn’t well suited, I’ve recently started validating all Excel-based Markov implementations using R. Regrettably, Excel is still the preferred “platform” for developing health economic models (hence R being relegated to validation work), but recreating Markov models in R starkly illustrates how much time could be saved if more reimbursement or health technology assessment agencies would consider models written in non-proprietary languages rather than proprietary file formats such as Excel.

As a brief recap, strict Markov chains are memoryless, probability-driven processes in which movement through a defined state space, S, is governed by a transition matrix; formally, the Markov property states:

$$ \Pr(X_{n+1}=x\mid X_1=x_1, X_2=x_2, \ldots, X_n=x_n) = \Pr(X_{n+1}=x\mid X_n=x_n) $$

where $$x, x_1, \ldots, x_n \in S$$, the state space of the chain.

A typical Markov modelling approach in Excel is to start with a transition matrix as a contiguous range at the top of the sheet, lay out the initial state distribution immediately below it, and model state transitions using a series of rows of SUMPRODUCT() formulae, each of which multiplies and sums the state distribution in the previous row with the relevant transition probability row. As an illustrative example, a simple 5-state Markov model running over 120 cycles (say, 10 years with a monthly cycle length) would result in 600 SUMPRODUCT() formulae. Despite the ease with which the cells can be populated using Excel's AutoFill functionality, any change to the model (e.g. adding a state) requires the whole range to be updated every time. Per-row summation checks against a tolerance on the order of machine epsilon (2^-52 for double precision) can highlight errors in the formulae, but these checks require yet more formulae to update should the model structure change.
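
For comparison, that particular sanity check is a one-liner in R; a minimal sketch, repeating the illustrative transition matrix used below:

tm <- matrix(c(
	0.9,0.1,0,0,0,
	0,0.9,0.1,0,0,
	0,0,0.9,0.1,0,
	0,0,0,0.9,0.1,
	0,0,0,0,1), byrow=TRUE, nrow=5)

# Every row of a valid transition matrix must sum to 1 (within tolerance)
stopifnot(all(abs(rowSums(tm) - 1) < .Machine$double.eps ^ 0.5))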

Conversely, in R, once the transition matrix and initial states have been defined, implementing a similar Markov model requires just one line of code (with the help of the expm matrix exponential package):

install.packages("expm")
library(expm)

tm <- matrix(c(
	0.9,0.1,0,0,0,
	0,0.9,0.1,0,0,
	0,0,0.9,0.1,0,
	0,0,0,0.9,0.1,
	0,0,0,0,1), byrow=TRUE, nrow=5)

initial <- c(1,0,0,0,0)

print(initial %*% (tm %^% 120))

             [,1]         [,2]         [,3]        [,4]     [,5]
[1,] 3.229246e-06 4.305661e-05 0.0002846521 0.001244035 0.998425

This simply prints out the final state distribution after 120 cycles, but the entire Markov trace, starting from the initial distribution at cycle 0, can be printed using R's built-in sapply() and t() functions without a dramatic increase in the complexity of the code:

print(t(sapply(0:120, function(x) initial %*% (tm %^% x))))

             [,1]         [,2]         [,3]        [,4]        [,5]
[1,] 1.000000e+00 0.000000e+00 0.0000000000 0.000000000 0.000000000
[2,] 9.000000e-01 1.000000e-01 0.0000000000 0.000000000 0.000000000
...

At this point in Excel, the state distributions might subsequently be used to tally up costs and quality-adjusted life expectancy (in quality-adjusted life years, or QALYs) for each state. This would require another two identically sized ranges of cells to capture the cost and QALY estimates for each state, trebling the number of formulae to 1,800. In R, adding in and summing up state costs is much more straightforward:

costs <- c(300, 350, 400, 200, 0)
timehorizon <- 120
utilities <- c(0.85, 0.8, 0.75, 0.85, 0)
discount <- 0.035

print(apply(sapply(1:timehorizon, function(x) initial %*% (tm %^% x) * costs * 1/(1+discount)^x), 1, sum))
print(apply(sapply(1:timehorizon, function(x) initial %*% (tm %^% x) * utilities * 1/(1+discount)^x), 1, sum))

[1] 2699.991 3499.853 3998.790 1997.099    0.000
[1] 7.649975 7.999664 7.497731 8.487670 0.000000

To output the top-line health economic outcomes over the entire Markov simulation, the code can be simplified further (since cycle-by-cycle results are no longer needed) to sum the costs and the quality-adjusted life expectancy across all states and cycles:

print(sum(sapply(1:timehorizon, function(x) initial %*% (tm %^% x) * costs * 1/(1+discount)^x)))
print(sum(sapply(1:timehorizon, function(x) initial %*% (tm %^% x) * utilities * 1/(1+discount)^x)))

[1] 6293.488
[1] 16.01336

And finally, with the introduction of a second transition matrix, a complete, two-arm, five-state Markov model with support for discounting and an arbitrary time horizon can be implemented in 10 lines of R as follows:

tm <- matrix(c(
	0.9,0.1,0,0,0,
	0,0.9,0.1,0,0,
	0,0,0.9,0.1,0,
	0,0,0,0.9,0.1,
	0,0,0,0,1), byrow=TRUE, nrow=5)
	
tm2 <- matrix(c(
	0.91,0.09,0,0,0,
	0,0.91,0.09,0,0,
	0,0,0.91,0.09,0,
	0,0,0,0.91,0.09,
	0,0,0,0,1), byrow=TRUE, nrow=5)

initial <- c(1,0,0,0,0)

costs <- c(300, 350, 400, 200, 0)
utilities <- c(0.85, 0.8, 0.75, 0.85, 0)
timehorizon <- 120
discount <- 0.035

cost <- c(
	sum(sapply(1:timehorizon, function(x) initial %*% (tm %^% x) * costs * 1/(1+discount)^x)),
	sum(sapply(1:timehorizon, function(x) initial %*% (tm2 %^% x) * costs * 1/(1+discount)^x))
)
qale <- c(
	sum(sapply(1:timehorizon, function(x) initial %*% (tm %^% x) * utilities * 1/(1+discount)^x)),
	sum(sapply(1:timehorizon, function(x) initial %*% (tm2 %^% x) * utilities * 1/(1+discount)^x))
)
print( (cost[1] - cost[2])/(qale[1] - qale[2]) )

[1] 395.0946

Having all of this functionality in 10 easily readable lines of code strikes an excellent balance between concision and clarity, and is definitely preferable to 1,800+ formulae in Excel with equivalent functionality (or even to a similar Markov model laid out in TreeAge). The above R code, while still slightly contrived in its simplicity, can be printed on a single sheet of paper and, crucially, can be very easily adapted and modified without needing to worry about filling ranges of cells with formulae including a mix of relative and absolute cell referencing.

While R is not necessarily the best language for health economic modelling, given how straightforwardly a Markov model (as but one example) can be implemented in it, it would not be a bad candidate for support by health technology assessment agencies for the purposes of economic evaluation. More importantly, regardless of the specific merits and shortcomings of R, the adoption or acceptance of any similar language would represent an excellent first step away from the current near-ubiquitous use of proprietary modelling technologies and platforms that are, in many cases, ill-suited to the task.

Randomness in Excel

When writing formulae in a default installation of Excel, RAND() is the only function from which randomness can be directly derived. According to the documentation, RAND() generates an “evenly distributed random number greater than or equal to 0 and less than 1”. Installation of the Analysis ToolPak also makes the RANDBETWEEN() function available for use, which takes two integers as arguments and, again according to the documentation, returns a “random integer number between the numbers you specify”.

Note that in the case of RAND(), I say directly derived, because it's also technically possible to use the old Microsoft-endorsed trick of taking the sixth decimal place onwards from the NOW() function and multiplying up to the range 0–1. (However, for a given calculation cycle the NOW() technique returns the same random value for every cell in the spreadsheet making use of it, so it's not always desirable.) Rounding out Excel's random number generation capabilities are two additional methods, which are slightly more cumbersome but have the key property of allowing a random seed to be specified: the Random Number Generation tool in the Analysis ToolPak and the Rnd() function in Visual Basic for Applications (VBA).

Before we dive into these last two methods, it’s worth walking through the evolution of random number generation in Excel as it’s been a tortuous (indeed, some might say torturous) route getting to where we are at the time of writing.

A brief history of RAND()

Prior to Excel 2003, the RAND() function used a linear congruential generator (LCG) to generate pseudorandom numbers between 0 and 1, starting from an initial value of $$X_0 = 0.5$$:

$$ X_{n+1} = (9821 X_n + 0.211327) \bmod 1 $$
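
For illustration, a minimal C sketch of this recurrence (using the initial value of 0.5 noted above; illustrative only):

#include <stdio.h>
#include <math.h>

/* The pre-Excel 2003 RAND() recurrence, iterated a handful of times */
int main(void) {
	double x = 0.5;
	int i;
	for (i = 0; i < 5; i++) {
		x = fmod(9821.0 * x + 0.211327, 1.0); /* keep only the fractional part */
		printf("%f\n", x);
	}
	return 0;
}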

Barreto and Howland characterised the randomness of the pre-2003 Excel RAND() function in their book on Monte Carlo simulation in Excel using a “trapping” technique, in which the values generated immediately after a value between 0.7 and 0.7001 were recorded and plotted over the whole 0–1 range:

[Figure: Barreto and Howland, Figure 9.2.2 — values generated immediately after a trapped value, plotted over the 0–1 range]

Predictably, this failed George Marsaglia's then-current DIEHARD battery of randomness tests and can trivially be shown to be a relatively poor random number generator (RNG). From Excel 2003 onwards, Microsoft switched RAND() to an implementation of the Wichmann-Hill algorithm. Wichmann-Hill is also a relatively simple generator, based on the summation of three LCGs:

#include <math.h>

float wichmann_hill_random (int *s1, int *s2, int *s3) {

	float value;

	/* Advance the three component LCGs */
	*s1 = ((171 * *s1) % 30269);
	*s2 = ((172 * *s2) % 30307);
	*s3 = ((170 * *s3) % 30323);

	/* Combine the three streams and keep the fractional part */
	value = fmod((float) (*s1) / 30269.0 
	           + (float) (*s2) / 30307.0 
	           + (float) (*s3) / 30323.0, 1.0);

	return value;

}

Unfortunately, the first Wichmann-Hill implementation in Excel was found to be incorrect, as it would eventually generate negative numbers (an impossibility according to the 1982 Wichmann-Hill paper). A 2004 bug fix was intended to address the problem. However, McCullough subsequently used “Zeisel recursion” (so named after a 1986 publication by Zeisel, in which the Chinese Remainder Theorem was employed to rewrite the Wichmann-Hill RNG as a single LCG) to demonstrate that the updated generator was still not consistent with the Wichmann-Hill RNG as published. Wichmann-Hill should have a periodicity of 6.95 × 10^12 (not the 2.78 × 10^13 — the product of the three moduli — claimed in the original paper) but, as McCullough and Heiser note, the actual periodicity of the Excel implementation is unknown.

While even the incorrect Excel implementation of Wichmann-Hill passed the DIEHARD tests, it failed aspects of all three of L’Ecuyer and Simard’s TestU01 test batteries (SmallCrush, Crush, and BigCrush), which were designed to supersede DIEHARD. Specifically, L’Ecuyer and Simard noted that Wichmann-Hill failed 2, 12, and 22 tests in the SmallCrush, Crush, and BigCrush batteries, respectively.

If the original RAND() documentation is to be believed, the Wichmann-Hill generator is still in use as of Excel 2010, which is listed in the “Applies to” section. However, the Office 2010 documentation notes that the “RAND function now uses a new random number algorithm” and Guy Mélard reports that there is “semi-official” confirmation that Excel 2010 and later in fact use the Mersenne Twister algorithm to power RAND().

The Mersenne Twister has a period length of 2^19937 − 1 and, according to L’Ecuyer and Simard, passes the full DIEHARD suite, all of the SmallCrush tests, and all but 2 each of the Crush and BigCrush tests. While many improvements have been made to RNGs since the publication of the Mersenne Twister (even within the same class of generator), using it at least brings Excel to parity with many modern programming environments, including the default RNGs in SPSS and R.

However, the exact implementation details of RAND() and RANDBETWEEN() are immaterial for applications requiring reproducible randomness, as neither function can be seeded. Seeded randomness can only be achieved using one of the two aforementioned approaches: the Random Number Generation tool in the Analysis ToolPak or VBA. In brief (and highly informal) testing, the Random Number Generation tool proved unbearably slow, generating around 36 uniformly distributed random numbers per second on a 2.3 GHz Core i7. So that leaves VBA.

Generating seeded random numbers using VBA

According to Microsoft’s documentation, the Rnd() function in Basic has always been built on a power-of-two-modulus LCG. In Microsoft Basic versions prior to Visual Basic 1.0, this took the form:

$$ X_{n+1} = (214{,}013 \, X_n + 2{,}531{,}011) \bmod 2^{24} $$

From Visual Basic versions 1.0 to 6.0, the multiplier and increment were updated:

$$ X_{n+1} = (1{,}140{,}671{,}485 \, X_n + 12{,}820{,}163) \bmod 2^{24} $$

Note that the performance of the VB 1.0–6.0 LCG was also characterised by Barreto and Howland in the figure above. It has a periodicity of 2^24 (16,777,216). Interestingly, Microsoft notes that the code used to implement the LCG underpinning Rnd() could not be written in VBA itself, due to its use of an unsigned long data type that isn’t available in VBA.
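
To make that point concrete, here is a rough C sketch of the VB 1.0–6.0 recurrence; the unsigned 32-bit arithmetic and bit masking below are exactly what VBA lacks, and the initial seed is arbitrary and purely illustrative:

#include <stdio.h>

/* VB 1.0-6.0 Rnd() recurrence; masking with 0xFFFFFF reduces modulo 2^24,
   so any wraparound in the 32-bit multiplication is harmless */
int main(void) {
	unsigned long state = 1; /* arbitrary illustrative seed */
	int i;
	for (i = 0; i < 5; i++) {
		state = (1140671485UL * state + 12820163UL) & 0xFFFFFFUL;
		printf("%f\n", (double) state / 16777216.0); /* scale to [0, 1) */
	}
	return 0;
}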

Unfortunately there isn’t much Rnd() documentation for VB versions later than 6, but between L’Ecuyer and Simard (who refer to it as a “toy generator”, in part because of its power-of-two modulus) and Mélard, we can surmise that this implementation is still present in VBA in Excel 2010 (and possibly later).

Thankfully, while Rnd() is an extremely unsophisticated RNG, it is at least easy to use and seed. There are two key functions related to random number generation in VBA: Rnd and Randomize. The documentation states that Randomize should be used to initialise the generator but, in practice, the call isn’t necessary: seeding the generator is as simple as calling Rnd() with a single negative number, as illustrated in the table below. Repeated calls to Rnd() with the same negative argument yield exactly the same value, and subsequent calls without any argument will continue the sequence using the seed from the first call.

If number is        Rnd generates
Less than zero      The same number every time, using number as the seed.
Greater than zero   The next random number in the sequence.
Equal to zero       The most recently generated number.
Not supplied        The next random number in the sequence.
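
To illustrate, a short VBA routine demonstrating this behaviour (the seed is illustrative, and the exact values returned will vary by Office version):

Sub SeededRndDemo()
	' A negative argument seeds the generator and returns the same
	' value every time for a given seed
	Debug.Print Rnd(-3)
	' Argument-free calls continue the sequence from that seed
	Debug.Print Rnd()
	Debug.Print Rnd()
	' Re-seeding with the same negative argument restarts the sequence
	Debug.Print Rnd(-3)
End Sub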

If random values are required for use in Excel formulae, a simple export routine can be used to quickly write relatively large arrays of random numbers out to a worksheet for later reference.
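
For illustration, one such routine might look like this (a sketch only; the sheet name, output range, and seed are illustrative):

Sub ExportSeededRandoms()
	Dim values(1 To 1000, 1 To 1) As Double
	Dim i As Long
	' Seed the generator so that the exported numbers are reproducible
	Call Rnd(-42)
	For i = 1 To 1000
		values(i, 1) = Rnd()
	Next i
	' Write the entire array to the worksheet in a single assignment
	Worksheets("Randoms").Range("A1:A1000").Value = values
End Sub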

Explicit, comprehensible covariance matrices in Java

As part of my code readability push in a recent work project, I’ve just tidied up the code responsible for creating, updating, and referring to covariance matrices of patient characteristics and treatment effects. I first searched the web for examples of how covariance matrices had been implemented previously in Java and found the Covariance class in the Apache Commons Mathematics Library, which, as it happens, we were already using for its MersenneTwister implementation and a few subclasses of AbstractRealDistribution.
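
For context, basic usage of the Covariance class is pleasantly terse (a minimal sketch with purely illustrative data, assuming Commons Math 3 on the classpath):

import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.stat.correlation.Covariance;

public class CovarianceDemo {
	public static void main(String[] args) {
		// Each row is an observation; each column is a variable
		double[][] data = {
			{7.2, 5.1}, {8.0, 5.9}, {6.5, 4.8}, {7.7, 5.5}
		};
		RealMatrix covarianceMatrix = new Covariance(data).getCovarianceMatrix();
		System.out.println(covarianceMatrix);
	}
}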

However, looking at the data structures involved, the Covariance class calculates and returns a covariance matrix as an instance implementing the RealMatrix interface (and accepts the raw data as either a two-dimensional array of doubles or as a RealMatrix). Looking at the Covariance source code, the current implementation returns an instance of BlockRealMatrix, which is divided by default into a series of 52 × 52 blocks (so 2,704 double values per block); each block is flattened in row-major order into a one-dimensional array, and the blocks are themselves stored, again in row-major order, in an outer array. This is clearly an extremely high-performance approach: three blocks are designed to fit into 64 KB of L1 cache (3 × 2,704 × 8 / 1,024 = 63.4 KB), allowing block multiplication (i.e. C[i][j] += A[i][k] × B[k][j]) to be conducted entirely within the cache.

This is all well and good, but passing 2D arrays or RealMatrix instances around runs contrary to my current goal of writing readable code. Not only that, but I want to store Pearson product-moment correlation coefficients alongside each covariance parameter so that the model can decide later whether or not to covary parameters whose correlation is not significant, which would mean two 2D matrices to retrieve both the covariance and the p-value. Given that I’m dealing with relatively small matrices and not doing anything too computationally intensive with them, I decided to write a thin wrapper class around an EnumMap of EnumMaps: the Enum for the inner and outer maps consists of elements describing each covaried characteristic, and each inner EnumMap maps an Enum element to a small tuple-like nested class that stores the covariance value and p-value in public instance variables.

Since the model uses a few different covariance matrices (including the aforementioned baseline patient characteristics and treatment effects, for example), I wrote a generic class that takes the Enum class in the constructor, such that it can be instantiated as follows:

CovarianceMatrix<PatientCharacteristic> matrix = new CovarianceMatrix<>(PatientCharacteristic.class);

The constructor then creates the 2D “matrix” of nested, tuple-like ValuePValuePair instances with all covariance and p-values set to 0.0. This is fairly memory-inefficient in that we immediately have n^2 instances of ValuePValuePair (where n is the number of enumerable fields), but it helped to get this up and running quickly without hitting any NullPointerExceptions or having to perform null checks. Once the data structure is set up, values can be added to the matrix as follows (with a couple of static final constants added at the top of the class to improve the readability of the code where it counts):

static final PatientCharacteristic HBA1C = PatientCharacteristic.HBA1C;
static final PatientCharacteristic TOTAL_CHOLESTEROL = PatientCharacteristic.TOTAL_CHOLESTEROL;

matrix.setCovarianceAndPValue(HBA1C, TOTAL_CHOLESTEROL, 0.28, 0.0001);

As a quick aside, it’s worth noting that, since matrix elements can (apparently) no longer be referred to numerically, it initially looks as though a lot of strict matrix-like functionality has been sacrificed as well. However, since the EnumMap-of-EnumMaps approach relies on the getEnumConstants() method (which always returns Enum elements in the order in which they’re declared) to generate the matrix, the numeric index of each row (and column) can be derived from any given Enum element as follows:

int index = java.util.Arrays.asList(PatientCharacteristic.class.getEnumConstants()).indexOf(PatientCharacteristic.HBA1C);

Using this approach, “lossless” conversion from a RealMatrix instance to this data structure and back would be possible (assuming initial knowledge of which rows and columns map to which characteristics), but even without the numeric references, this meets the current requirement, which is simply to use pre-calculated covariance matrices from SAS in Java in a clear and concise manner.
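
As a rough sketch of that round trip in one direction (a hypothetical helper, not part of the class below; it assumes Commons Math 3 and the CovarianceMatrix wrapper as defined below):

import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.RealMatrix;

static <K extends Enum<K>> RealMatrix toRealMatrix(CovarianceMatrix<K> matrix, Class<K> characteristics) {
	K[] keys = characteristics.getEnumConstants();
	double[][] values = new double[keys.length][keys.length];
	for (int i = 0; i < keys.length; i++) {
		for (int j = 0; j < keys.length; j++) {
			// Row and column order follow getEnumConstants(), as noted above
			values[i][j] = matrix.getCovariance(keys[i], keys[j]);
		}
	}
	return new Array2DRowRealMatrix(values);
}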

The full EnumMap wrapper class looks like this:

import java.util.EnumMap;

public class CovarianceMatrix<K extends Enum<K>> {
	
	private EnumMap<K, EnumMap<K, ValuePValuePair>> matrix;
	
	private static final class ValuePValuePair {
		public double value;
		public double pValue;

		public ValuePValuePair(double value, double pValue) {
			this.value = value;
			this.pValue = pValue;
		}
	}
	
	public CovarianceMatrix(Class<K> characteristics) {
		
		this.matrix = new EnumMap<>(characteristics);
		
		// Eagerly create every row map and value/p-value pair up front so
		// that the getters and setters below never encounter a null cell
		for (K initialCharacteristic : characteristics.getEnumConstants()) {
			for (K secondCharacteristic : characteristics.getEnumConstants()) {
				if (this.matrix.get(initialCharacteristic) != null) {
					this.matrix.get(initialCharacteristic).put(secondCharacteristic, new ValuePValuePair(0.0, 0.0));
				} else {
					EnumMap<K, ValuePValuePair> newValue = new EnumMap<>(characteristics);
					newValue.put(secondCharacteristic, new ValuePValuePair(0.0, 0.0));
					this.matrix.put(initialCharacteristic, newValue);
				}
			}
		}
	}
	
	public void setCovarianceAndPValue(K row, K column, double covariance, double pValue) {
		this.matrix.get(row).get(column).value = covariance;
		this.matrix.get(row).get(column).pValue = pValue;
	}

	public void setCovariance(K row, K column, double covariance) {
		this.matrix.get(row).get(column).value = covariance;
	}
	
	public void setCovariancePValue(K row, K column, double pValue) {
		this.matrix.get(row).get(column).pValue = pValue;
	}
	
	public double getCovariance(K row, K column) {
		return this.matrix.get(row).get(column).value;
	}
	
	public double getCovariancePValue(K row, K column) {
		return this.matrix.get(row).get(column).pValue;
	}
	
}

The next step is to perform some profiling on this in situ and probably switch over to instantiating the ValuePValuePair instances lazily, with the appropriate null checks added to the getters. But for now, the class is working very well and achieves the goal of making the clinical modelling portions of the code much more readable.
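
A possible shape for that lazy variant (a sketch only, not the final implementation; it assumes the Class<K> passed to the constructor is retained in a characteristics field):

private ValuePValuePair pairFor(K row, K column) {
	// Create row maps and value/p-value pairs on first access
	// rather than eagerly in the constructor
	return this.matrix
			.computeIfAbsent(row, r -> new EnumMap<>(this.characteristics))
			.computeIfAbsent(column, c -> new ValuePValuePair(0.0, 0.0));
}

public double getCovariance(K row, K column) {
	return pairFor(row, column).value;
}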