This draft paper develops some of the thoughts in my earliest blog post on NNEG valuation.
ABSTRACT: If the general level of house prices falls a long way, policymakers may introduce new policies which seek to support prices. This paper considers the effect of such interventions on the valuation of no-negative-equity guarantees (NNEG) in equity release mortgages. I discuss past examples of interventions, and policymaker statements which suggest the prospect of future interventions. I model interventions by a reflecting barrier expressed as a fraction of the current level of house prices. Reflection at the barrier is instantaneous, so the no-arbitrage property is preserved, and hence risk-neutral valuation of NNEG is possible. The reflecting barrier can alternatively be justified as a representation of the different economic nature of the underlying housing (and particularly freehold land) assets in NNEG valuations, compared with the underlying equity assets in many other option valuations.
Full text available on my main page.
When fractional shares are ranked and grouped in buckets, any ratio of successive bucket means >0.5 is suspect
Listening to this interview with John Hempton reminded me of his forensic scepticism about the following pie chart, taken from an investor presentation about the pharma company Valeant.
John wrote (in 2014, when Valeant was still riding high) that the chart looked implausible:
“What we are saying is the top ten products average 1.8 percent of sales each. The next ten average 1.2 percent.
Now let’s just – for the sake of argument – suggest that the top product is 3 percent of sales. And the next four average 2.3 percent. Well then the first five are 12.2 percent of sales – and the next five can only be 5.8 percent of sales (or they average 1.16 percent of sales).
But oops, that is too much concentration – because we know the next ten average 1.2 percent of sales. In fact it is far too much concentration as product number 11 needs to be more than 1.2 percent of sales.
I have fiddled with these numbers and they imply a distribution of sales flatter than I have ever seen in any product or category. In order to make the numbers work the differences between product sales have to be trivial all through the first fifteen products.”
This is an acute observation to make just skimming a presentation, and I’m not sure I would have got it. And what if the buckets (sectors on the pie chart) each contained 3 or 5 items instead of 10 – what difference would that make? Can we generalise this?
One way to think about it is to assume revenue shares by product form a geometric progression with a constant scaling factor, say r. E.g. if the first product has a 5% share, the second has 5% x 0.8 = 4% share, then the third has 4% x 0.8 = 3.2% share, etc. This won’t be exact in reality, but the sorting by size means it can’t be far off.
Then if the first term of the geometric progression – i.e. the share of the largest product – is x_1, the usual formula for the sum of the first 10 terms is

S_10 = x_1 (1 − r^10) / (1 − r)

and the mean over the first bucket is just this divided by ten.
We want to find the ratio {mean over bucket of next 10} / {mean over bucket of first 10}. Replacing 10 by n for generality, each term in the second bucket is r^n times the corresponding term in the first bucket, so this ratio of successive bucket means is simply

r^n
We can display this quantity in a two-way lookup table against the scaling factor, r, and the number of items in each bucket, n:
For the original Valeant example, n = 10, and the observed ratio of successive bucket means is 1.2% / 1.8% ≈ 0.66. So reading off the top number in the third column, this implies a scaling factor of around 0.96, which seems incredibly flat. Hence John’s intuition that the chart looked dodgy.
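Since the ratio of successive bucket means is r^n, the implied scaling factor can be backed out directly. A minimal sketch (the 0.66 and n = 10 are the Valeant figures quoted above):

```python
# Ratio of successive bucket means when shares follow a geometric progression.
# If shares scale by r, the mean of bucket k+1 over the mean of bucket k is r**n.

def bucket_mean_ratio(r, n):
    """Ratio of (mean of next n shares) to (mean of top n shares)."""
    top = sum(r**i for i in range(n))          # x_1 factors out of both buckets
    nxt = sum(r**i for i in range(n, 2 * n))
    return nxt / top                           # equals r**n

def implied_scaling_factor(ratio, n):
    """Invert ratio = r**n to recover the scaling factor."""
    return ratio ** (1.0 / n)

# Valeant: buckets of 10, observed ratio 1.2% / 1.8% ≈ 0.66
r = implied_scaling_factor(0.66, 10)   # ≈ 0.96 – implausibly flat
```
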
More generally, what’s “dodgy” depends on what the true scaling factor is. E.g. if the largest holdings in an investor’s portfolio are sorted by size, this might typically give a scaling factor of 0.8 or less; but not if the investor constantly rebalances the portfolio to equal weights (which some people say is a good idea, although the maths is subtle and a lot of published work on “volatility pumping” is flawed). For most quantities, my intuition is that a scaling factor as high as 0.9 (the third row of the table) would be rare. It’s also striking how quickly the values fall away for lower (more typical?) scaling factors. Overall, the following seems a reasonable rule of thumb:
When fractional shares (revenue by product, market shares by company, or similar quantities) are sorted by size and grouped into descending buckets, any ratio of successive bucket means above 0.5 (or perhaps even 0.3) is suspect.
This article by the investor Claire Barnes of Apollo Investment Management highlights an interesting property of compound growth in any quantity which forms part of a finite whole. In summary, the point is this. Suppose urban land area + rural land area = 1 (i.e. a finite land area), and assume urban land area grows at a constant percentage rate (in line with the ‘economic growth’ universally sought by governments). Then it follows that rural land area doesn’t just decline, it declines at an accelerating percentage rate.
Claire’s illustration copied below shows the case where rural land is initially 85% of the total, urban land is 15%, and urban land grows at 5% per annum. On these parameters, the rural land is extinguished after about 38 years.
Note the alarming and invidious property of the second graph. For many years the percentage rate of decline in the rural part is very small, too small for most people to notice. Then suddenly the rate of decline speeds up and the remaining rural part disappears quickly, too quickly to do anything about it.
Algebraically, we can represent this as follows.
Assuming that the urban part grows at a constant rate g per annum, we have

u(t) = u(0) (1 + g)^t

Note that continuously compounded percentage rates of growth can also be expressed as semi-logarithmic derivatives (this will be useful in a moment), e.g.

(d/dt) log u(t) = log(1 + g)

Also,

r(t) = 1 − u(t)

Then the continuously compounded percentage rate of decline of the rural part is the semi-logarithmic derivative. That is

−(d/dt) log r(t) = u′(t)/r(t) = log(1 + g) · u(t)/r(t)
To interpret this, note that the urban part increases at a constant continuous rate log (1+g). The rural part decreases at an accelerating rate. The rate is log (1+g) scaled by the current ratio of the two parts.
For small g, log (1+g) ≈ g (e.g. log 1.05 ≈ 0.0488), and so we can forget about logs and just scale the growth rate g. This makes intuitive sense – in some sense the “same” process is affecting the rural part, it just has to be scaled for the continuously varying relative sizes of the two parts.
The time until the rural part is exhausted is

t* = log(1 / u(0)) / log(1 + g)

which for u(0) = 15% and g = 5% gives 38.9 years, consistent with the graph above. In words: the time to extinction of the rural part is (approximately) inversely proportional to the growth rate of the urban part.
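The algebra above can be checked numerically with Claire’s parameters:

```python
import math

def rural_share(u0, g, t):
    """Rural share r(t) = 1 - u(t), with the urban part growing at rate g."""
    return 1.0 - u0 * (1.0 + g) ** t

def decline_rate(u0, g, t):
    """Continuously compounded rate of decline of the rural part:
    log(1+g) scaled by the current ratio u(t)/r(t)."""
    u = u0 * (1.0 + g) ** t
    return math.log(1.0 + g) * u / (1.0 - u)

def extinction_time(u0, g):
    """Time until the rural part is exhausted: log(1/u0) / log(1+g)."""
    return math.log(1.0 / u0) / math.log(1.0 + g)

t_star = extinction_time(0.15, 0.05)   # ≈ 38.9 years
early = decline_rate(0.15, 0.05, 1)    # barely noticeable at first
late = decline_rate(0.15, 0.05, 35)    # far faster near the end
```

The “gradually, then suddenly” pattern shows up directly: the decline rate at year 35 is more than an order of magnitude larger than at year 1.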
Pages 12–13 of this report from the World Bank give some data on current urban land and rates of growth, which suggest that for most Asian countries, the outlook is not quite as bad as suggested by the graph above. But there are probably ambiguities with the definition and measurement of “urban” and “rural”. And whatever the exact parameters, a precautionary principle seems sensible, because the overall pattern is quite general: when one part increases at a constant percentage rate, the other part doesn’t just decline, it declines at an accelerating percentage rate.
I agree that this does not seem to be widely appreciated. Perhaps it is appreciated by ecologists, but on a quick search I could not find obviously relevant commentary. Claire suggests a couple of reasons why it may be neglected. First, individuals and governments tend to focus more on things which show compound growth, because that is where investment and career and taxation opportunities tend to be found. We tend to be less aware of complementary things which are declining. Second, evidence-based people focus on things which are quantified; but things which are declining tend not to be quantified, precisely because of the lack of positive opportunities. I agree with both these points.
The alarming property of the graph is the “gradually, then suddenly” pattern of the rural part’s decline. This is not really exponential (a constant percentage rate of decline), it’s more alarming than that: it’s an accelerating percentage rate of decline. Following the famous quote about bankruptcy, perhaps we can call it Hemingway decline.
We can then summarise, loosely speaking, like this:
Where two parts form a finite whole, and one part increases at a constant percentage rate, the second part declines at an accelerating percentage rate. It declines gradually, then suddenly: Hemingway decline.
First, I need to be clear what I mean by “cheap” and “dear”. In this blog, I characterise an option as cheap (dear) if its price today is below (above) its expected value at maturity, discounted at the risk-free rate back to today. The expected value at maturity is calculated using a reasonable assumption for the underlying asset’s risk premium. I am not saying that I wish to buy or sell the options I label cheap or dear (I may wish to with some of them, but the label says nothing definite about that).
In a recent paper on equity release mortgages, Tony Jeffery and Andrew Smith note that the Black-Scholes formula gives prices for call options which are cheap (and put options dear), in the sense above. They note that this raises a puzzle: why should there be willing buyers for put options at prices which are expected to lose money? Their answer is that put options are a form of insurance, held in conjunction with other assets, which can substantially reduce losses in adverse conditions. So the investor’s reservation price – the highest price he is prepared to pay – is above expected value, at a level which implies a negative expected return on the put option in isolation. This is acceptable to the investor in the context of a reduction in the risk of the portfolio as a whole.
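The “calls cheap, puts dear” pattern is easy to reproduce numerically. A sketch under assumed illustrative parameters (a 6% real-world drift against a 2% risk-free rate): the Black-Scholes put price exceeds the discounted real-world expected payoff, while the call price falls short of it.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_put_payoff(S, K, T, sigma, drift):
    """E[max(K - S_T, 0)] when S_T is lognormal with the given drift."""
    d1 = (math.log(S / K) + (drift + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * norm_cdf(-d2) - S * math.exp(drift * T) * norm_cdf(-d1)

def expected_call_payoff(S, K, T, sigma, drift):
    """E[max(S_T - K, 0)] under the same lognormal model."""
    d1 = (math.log(S / K) + (drift + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * math.exp(drift * T) * norm_cdf(d1) - K * norm_cdf(d2)

S, K, T, sigma = 100.0, 100.0, 10.0, 0.20
r, mu = 0.02, 0.06   # risk-free rate; assumed real-world drift (risk premium 4%)

bs_put = math.exp(-r * T) * expected_put_payoff(S, K, T, sigma, r)    # drift r: Black-Scholes
ev_put = math.exp(-r * T) * expected_put_payoff(S, K, T, sigma, mu)   # drift mu: discounted EV
bs_call = math.exp(-r * T) * expected_call_payoff(S, K, T, sigma, r)
ev_call = math.exp(-r * T) * expected_call_payoff(S, K, T, sigma, mu)
# bs_put > ev_put (puts dear); bs_call < ev_call (calls cheap)
```

The effect follows from the drift alone: the Black-Scholes price is the discounted expected payoff with the drift forced down to the risk-free rate.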
An analogous argument can be made for the buyer of a call option, who is increasing the risk of his portfolio by adding the option, and so requires a positive expected return to justify this increase in risk. This rationalises the buyer having a reservation price somewhat below the expected value of the call option.
The above explanations focus on the demand for options. This blog focuses on the supply of options, and in particular the asymmetric practical impact of margin requirements on sellers of calls and puts.
The supply side: margin requirements
If Black-Scholes prices for puts are typically above expected values, can we make money by selling puts at these prices? There are academic studies which suggest that excess returns are available from doing precisely this: selling out-of-the-money short-term index put options (which are generally priced close to Black-Scholes). But no, I do not do this. The obstacle is margin requirements, a practical matter which academic studies usually overlook.
Buyers of options pay a premium at outset, but thereafter have no further potential liabilities. Sellers receive a premium, but also have to deposit initial margin with their broker, and then variation margin as the price moves against them. This seems likely to have asymmetric effects on the reservation prices at which sellers are willing to offer calls and puts, as follows:
- The seller of a call suffers margin calls as the index rises. When the index is rising, credit is likely to be easy; the rest of the seller’s portfolio is probably rising and saleable; and his increasing wealth implies declining marginal utility. The margin calls are not a problem.
- The seller of a put suffers margin calls as the index falls. When the index is falling – and especially when it is crashing – credit is likely to be difficult, the rest of the seller’s portfolio may be unsaleable, and declining wealth implies increasing marginal utility. The margin calls may be very difficult.
Margin issues are difficult to quantify. But papers which attempt to do so find that when margin issues are allowed for, the apparent attraction of selling index puts largely disappears.
This asymmetry – margin calls are easier to manage for sellers of calls than for sellers of puts – may help to explain why calls are supplied cheap and puts dear. But as with Jeffery and Smith’s demand-side argument, this is only directional, not quantitative. It doesn’t show that the discrepancy between Black-Scholes prices and expected values represents fair compensation for margin issues; it only notes that the observed discrepancies – puts priced higher relative to their expected values than calls – are in directions consistent with margin considerations.
If call options are cheap relative to expected values, can we make money by buying call options? If long-dated equity options were offered for sale priced on Black-Scholes, then maybe yes; this looks like it could be a low-cost form of non-recourse leverage, so I might buy some. But as they don’t seem to be offered, I haven’t. (Historically I did occasionally buy investment trust warrants; these were like call options with terms up to a few years, and sometimes very cheap.)
Further observations
A few further observations are in order.
First, to summarise the discussion above: the price at which I am prepared to sell an option depends on the margin requirements. Both expected values and Black-Scholes prices fail to account for this.
Second, the “calls cheap, puts dear” pattern of Black-Scholes prices is quite general. Section 6 of this paper by David Wilkie (presented at the AFIR Colloquium in 2001) gives more examples, and some algebra which shows that the effect is eliminated only when assuming a negative risk premium. Note that although the paper states at Section 6.9 that the “calls cheap, puts dear” pattern of Black-Scholes prices applies “for an investor with a linear utility function”, it will continue to apply (albeit to a lesser degree) as risk aversion increases, for all but the most risk-averse investors.
Third, in the absence of liquid markets in long-dated options, we have little evidence that many investors are actually prepared to pay prices for long-dated put options quite as high as Black-Scholes. Again, the “demand” and “supply” rationales are both directional, not quantitative: they rationalise why buyers and sellers might be prepared to trade at “some deviation” from expected values, but not at the particular “Black-Scholes deviation”.
Fourth, this Eumaeus blog argues that there is no puzzle to explain. It says that a forward contract can be synthesised by buying a call and selling a put, and observes that the difference between the call and put prices provided by Black-Scholes is equal to the forward price discounted at the risk-free rate. This argument shows that Black-Scholes prices are consistent with no-arbitrage between puts, calls and forward prices. But so what? Why should the investor postulated by Jeffery and Smith be concerned with this? The blog doesn’t seem to engage with the discrepancy between expected value and the prices at which investors may be prepared to trade; it just changes the subject.
Application to NNEG?
How does this discussion of margin issues apply to no negative equity guarantees? The writing of a NNEG doesn’t give rise to any exposure to margin calls. I am therefore willing to write it at a lower price than if margin calls were possible. To the extent the regulator wishes to impose a requirement for “initial margin”, in the form of “required capital” calculated using Black-Scholes, that will increase my price. But this is an artefact of the regulator’s exogenous imposition, not because I place any credence on Black-Scholes. And in the absence of all the apparatus of dynamic hedging – or at the very least, liquid markets in long-dated puts, calls and forwards – the regulator’s Black-Scholes calculation seems rather arbitrary.
Jeffery and Smith characterise this type of argument as a “false dilemma”. That is, they note that my observation that Black-Scholes is an arbitrary formula for long-dated unhedgeable options doesn’t show that any particular alternative is correct. But it seems to me that this characterisation can be turned around: in the absence of compelling hedging and no-arbitrage arguments, the reverence currently afforded to Black-Scholes prices amounts to a “false primacy”. Why should Black-Scholes be taken so seriously, and other approaches, such as percentiles from real-world stochastic models, not considered at all?
Conclusion
A “calls trade cheap, puts trade dear” pattern relative to expected values can be explained by demand-side and supply-side arguments as outlined above. But these arguments are directional, not quantitative. They justify “somewhat” cheap and dear, but not the particular amounts “Black-Scholes” cheap and dear.
I agree with Buffett: for long-term puts, Black-Scholes prices seem unreasonably far above expected values. And the gap between Black-Scholes prices and reasonable prices is larger for sellers with limited margin requirements, such as NNEG writers – and Buffett.
My Kent colleague Radu Tunaru’s recent report for the Institute and Faculty of Actuaries (IFoA) on valuation of no negative equity guarantees was discussed at a meeting at Staple Inn last Thursday. A controversial element was the adjusted rental yield calculation of 0.2 x 5% = 1%, where 5% is the observed gross yield on rented properties and 0.2 represents the proportion of houses in the UK which are rented out. I was initially sceptical myself: the objections from “some actuaries” enumerated as (a) and (b) on p33 of the paper correspond closely to my initial comments (and no doubt others too). This blog explains why I’ve (partly) changed my mind.
Houses are unusual assets in that roughly 20% have an observed rental yield and 80% don’t (20% are held as financial assets and 80% as consumption assets). There is no precise analogue of this when pricing options on individual shares or bonds. But here’s something close: many companies which could pay dividends choose not to do so. For pricing options on shares in these companies, we don’t make any reference to a “potential yield” which the company could pay if it so chose. And this seems correct, because when a company retains earnings, those earnings are embedded in the share price. The price generating process for the underlying share is different because the company retains earnings. (Modelling all prices as geometric Brownian motion sort of glosses over this, but the economic logic seems clear.)
Now on p33 of the report, Radu is trying to price a put on the house price index, not on an individual property. The price generating process for the index is one where only 20% of the houses in the index generate a yield. The lack of yield on the other 80% is embedded in the price generating process. Stated like this, I can see the point.
Whilst I can (belatedly) see the point of adjusting the observed rental yield, I’m still not sure I fully agree with the adjustment down to 1%. A useful comment at the IFoA discussion meeting on 28 February was that perhaps the 80% isn’t 0.8 x 0, it’s 0.8 x something. The “something” is the “utility yield”, reflecting the benefit the owner occupier derives from living in the house. In economics, a more common term seems to be “imputed rent”, so let’s call it the “imputed yield” from owner occupation.
The imputed yield is unobservable, but there are several reasons for thinking it might be lower than the observed rental yield: (a) mortgage finance is significantly cheaper for owner-occupiers than for most landlords; (b) all imputed rent is untaxed; (c) owner-occupiers don’t have management expenses or voids, and don’t require a profit margin from renting, etc.
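On this view, the report’s 0.2 x 5% is just the special case where the imputed yield is set to zero; any positive imputed yield lands the blended figure somewhere between 1% and 5%. A toy calculation (the 2% imputed yield is purely a hypothetical guess):

```python
rented_share = 0.20      # proportion of UK houses rented out (from the report)
rental_yield = 0.05      # observed gross rental yield on rented properties
imputed_yield = 0.02     # hypothetical "utility yield" for owner-occupiers

# Report's adjustment: imputed yield taken as zero
q_report = rented_share * rental_yield                                        # 1%

# Blended alternative: weight the two yields by their shares of the stock
q_blended = rented_share * rental_yield + (1 - rented_share) * imputed_yield  # 2.6%
```
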
As I said, on p33 Radu is pricing a put on the index. When we move on to individual properties, and assuming the imputed yield is less than the observed rental yield, the argument suggests that a put on an individual property is worth less if it’s owner-occupied than if it’s rented. This initially seems odd. But it’s also consistent with the idea that an owner-occupied house receives better maintenance, so that the price generating process for an owner-occupied house is nudged upwards compared to that for a rented house, like the share price of a company which retains earnings compared to an equivalent company which pays dividends.
However for NNEG there is a further twist. NNEG holders aren’t typical owneroccupiers; they’re owner occupiers who have entered into a contract which may weaken their incentives to maintain the property, and so perhaps nudges the price generating process back towards the one for rented houses. But still not the same as for rented houses.[1]
Note that the underlying phenomenon here is very general: the existence of a loan secured against an asset, or option on the asset, can change the value of the asset. Most financial theories seem to neglect this point.
If the above argument isn’t persuasive, here’s an alternative. Imagine a rental-only world in which owner occupation is not permitted; every house has to be either rented or left empty. Many people own one house and rent another to live in. Or perhaps more likely, housing ownership becomes more concentrated on very rich individuals and institutions. But in that world, the demand, supply, availability of mortgage finance and political intervention in the housing market would all be different; and so the yield and price generating process for housing would be different. It’s not obvious that the observed rental yield from the 20% in the extant market provides a good estimate of what would happen in a rental-only market.
In summary, imputing a zero yield to the 80% of owner occupiers seems questionable (as the report admits – see p32), but so does imputing 5% without further thought and adjustment. So even if I don’t ultimately agree with the 0.2 x 5% + 0.8 x 0% calculation, it highlights an important point, which I hadn’t thought of before.
Sandcastles in the air
The discussion above doesn’t touch on more fundamental doubts about the hedging approach to NNEG valuation. Another useful comment at the IFoA meeting was a fleeting allusion to “sandcastles in the air.” The basic constructs on which Black-Scholes-type option pricing depends – most fundamentally, liquid markets in forward contracts – are not even approximately operative for housing. So why are we even thinking of using option pricing models based on hedging, to value a guarantee which just isn’t hedgeable?
It might be better to include housing in a real-world economic scenario generator (ESG), and set the reserve to be sufficient for some low quantile or conditional tail expectation of the simulations. I don’t know of any extant ESGs which include house prices, but there doesn’t seem to be any technical reason for this; it’s just because institutions haven’t historically invested in housing. This approach would be more computationally intense, but that’s always a declining problem (so far). Perhaps this is the way it will be done in ten years’ time.
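As a sketch of what that might look like, with a simple geometric Brownian motion standing in for the ESG and entirely hypothetical parameters:

```python
import math, random

def nneg_quantile_reserve(S0=100.0, loan=30.0, rate=0.05, T=25,
                          mu=0.03, sigma=0.13, r=0.02,
                          n_paths=20000, quantile=0.995, seed=1):
    """Real-world simulation sketch for a single decrement year:
    roll up the loan at the fixed rate, simulate the house price under
    GBM with real-world drift mu, and reserve the discounted NNEG
    shortfall at a high quantile. All parameters are hypothetical."""
    rng = random.Random(seed)
    K = loan * (1 + rate) ** T                  # rolled-up loan at maturity
    shortfalls = []
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        S_T = S0 * math.exp((mu - 0.5 * sigma**2) * T
                            + sigma * math.sqrt(T) * z)
        shortfalls.append(max(K - S_T, 0.0) * math.exp(-r * T))
    shortfalls.sort()
    return shortfalls[int(quantile * n_paths)]

reserve = nneg_quantile_reserve()   # 99.5th percentile of discounted shortfall
```

A full version would handle mortality, morbidity and voluntary repayment decrements across all maturity years, and would use a properly calibrated house price model rather than plain GBM; the point is only that the machinery is straightforward.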
Finally, two points of personal context. First, my comments on the report are those of an interested reader; I am not part of the Kent Business School team commissioned by the IFoA (my affiliation is with another department at Kent). Second, despite recent characterisations as “firm-friendly”, my views are not reliably pro-industry; just look at some of my other work, starting with why insurers are wrong about adverse selection or the book at top right of this page!
[1] Contra this argument of weakened incentives for maintenance, there are reports that more than half of equity release borrowers use the proceeds at least partly for home improvements, e.g. UK Equity Release Monitor, Key Retirement, 2018. So maybe ERM houses get good maintenance at early durations, albeit perhaps not at long durations.
A confession: I am the actuary who disagreed with this post by Dean Buckner, a former PRA official, which asserts that the Black-Scholes formula gives a good valuation of an option under the assumption of mean reversion in prices. In a follow-up post he said my critique was “ingenious” but “wrong” and that it was similar to an earlier “Buffett mistake”. (I am also the “firm-friendly friend” who previously drew his attention to Buffett’s views on the pricing of long-term options as described there.) I would be very happy to go on making “Buffett mistakes” for the rest of my life. But despite further discussion with Dean, I remain in disagreement with both the original post and the follow-up. This post explicates my view.
Suppose that, as in my interpretation of the original post, the (95, 96, 95, 96…) price series is a prospectively assumed distribution. In this case, I say that a put option at 90 is always worth zero (except for the chance that our (95, 96, 95, 96…) assumption may be wrong, which has nothing to do with Black-Scholes).
Suppose, alternatively, as in the interpretation put forward in the follow-up post, the (95, 96, 95, 96…) price series is a retrospectively observed single path. In this case, I say that we have not departed from the classical assumptions of Black-Scholes; we have not made a prospective assumption of mean reversion. The (95, 96, 95, 96…) series is merely one possible observed path amongst many under geometric Brownian motion. If we then value an option using the Black-Scholes formula at every time t, we are implicitly making a prospective assumption of geometric Brownian motion at every time t. I agree that hedging allows us to correct for the retrospectively observed single path up to time t; but it says nothing about the validity of Black-Scholes valuation at time t if we were to make a prospective assumption of mean reversion at that time.(*)
In short, the original post does not show what it claims to show: that the Black-Scholes formula gives a good valuation of an option under a prospective assumption of mean reversion. Its criticisms of the Institute and Faculty of Actuaries (like much else on the Eumaeus website) are entertaining as slapstick, but also intemperate and wrong. And its claim that “The family of Black pricing models are amongst the most practical and robust models of reality that science possesses” is simply absurd.

(*) We may be able to fudge Black-Scholes by adjusting the volatility input to allow for mean reversion, for example as in this paper. But fudged Black-Scholes is not the same as Black-Scholes!
My previous post presumed some understanding not just of the Black-Scholes formula, but also of its derivation; in particular, of the hedging argument whereby the drift in the underlying asset can be ignored. My likely readers at this blog probably do understand this, but some other commentators may not. How to explain? Here, with some trepidation, is an attempt at analogy (I say “with some trepidation” because no analogies are perfect, and this one works best for recreational sailors).
Suppose you are sailing your boat at sea on a misty day. The mist briefly clears and in the distance you glimpse your friends, who are drift fishing on their boat. You quickly take a compass bearing. Then the mist closes in again.
If you steer the compass heading from the bearing through the mist, you will in due course successfully rendezvous with your friends. You don’t need to worry about any lateral tidal drift, because both boats are subject to the same drift. You have a rationale for ignoring the drift.
Now suppose instead that the sight you glimpsed through the mist was not a drifting boat, but a harbour entrance. In this case, if you take a compass bearing and then steer that heading through the mist, you will not successfully navigate to the harbour entrance. The harbour doesn’t drift with the tide. You have no rationale for ignoring the drift.
Summarising, we should ignore the drift when we have a rationale to do so (in an option context: continuous hedging), and not ignore it when we don’t.
This analogy isn’t perfect. In particular, it doesn’t encompass (ha ha) the idea of a policydriven reflecting barrier underpinning the assumed diffusion of house prices. But it’s a start. I welcome any suggestions for better analogies.
The Prudential Regulation Authority (PRA) has issued a consultation paper CP 13/18 on valuation of the no-negative-equity guarantee (NNEG) in equity release mortgages. I think the use of the Black-Scholes formula in this context is flawed, in ways which are more fundamental than suggested by the PRA’s rather bland observation that “some of the assumptions that allow the mathematical derivation of the formula…are not met.” The prescribed approach is likely to overestimate the value of NNEG.
Background
An equity release mortgage is a product where a home owner typically aged 60+ borrows 25–30% of the valuation of their house from an insurer. A fixed rate of interest is charged to the home owner, but not actually paid. On the home owner’s death or any earlier permanent vacation of the house (e.g. after they move into a care home), the loan plus accumulated interest is repayable from the sale proceeds of the house. The NNEG guarantees that the amount repayable will not exceed the sale proceeds of the house.
Equity release mortgages are typically parcelled up into ‘restructured ERM notes’ on an insurer’s balance sheet. The restructured notes earn a relatively high yield and closely match annuity liabilities, so firms are then allowed the regulatory benefit of the ‘matching adjustment’ – an increase in the discount rate which can be used to discount the liabilities when determining the firm’s regulatory solvency. The quantum of the matching adjustment depends on the spread above risk-free rates earned on the ERM notes. This spread is construed partly as an illiquidity premium, but also partly as a risk premium, including the risk of the NNEG; the latter should not in itself give rise to a matching adjustment benefit. The question then arises: in apportioning the spread between illiquidity and retained risks, how should firms value the NNEG?
CP 13/18 proposes to amend Supervisory Statement 3/17 to mandate the use of a Black-Scholes-type model with prescribed parameter values to value the NNEG.[i] The previous version of SS3/17 was somewhat less prescriptive. In the proposed amendments, some lip service continues to be given to the possibility of alternative approaches[ii], but this is now limited by an implication that any results which differ from the prescribed Black-Scholes model will automatically be regarded as suspect.[iii]
The proposed option valuation formula is the Black (1976) formula for an option on a forward price, slightly restated in CP 13/18 as follows:[iv]

NNEG value = K e^(−rT) N(−d_2) − S e^(−qT) N(−d_1)

where

d_1 = [log(S/K) + (r − q + σ²/2) T] / (σ√T)

and

d_2 = d_1 − σ√T
- N() is the standard Normal cumulative distribution function
- S is the spot price (current price) of the property
- T is the term to maturity (the NNEG is evaluated separately for each possible future year of maturity, with deterministic assumptions for mortality, morbidity and voluntary repayments)
- K is the loan plus rolled-up interest at time T
- r is the prescribed Solvency II risk-free interest rate for maturity T
- σ is the volatility of the property price, prescribed as 13%
- q is the deferment rate, prescribed as 1%.
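For concreteness, the restated formula can be coded directly. A minimal sketch using the prescribed σ = 13% and q = 1% (the loan-to-value, roll-up rate and term below are illustrative only):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def nneg_put(S, K, T, r, sigma=0.13, q=0.01):
    """Black (1976) put restated as in CP 13/18:
    spot price S, deferment rate q, prescribed sigma and q defaults."""
    d1 = (math.log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return (K * math.exp(-r * T) * norm_cdf(-d2)
            - S * math.exp(-q * T) * norm_cdf(-d1))

# Illustrative decrement year: 25% LTV loan rolled up at 5% for 15 years
S, loan, T, r = 100.0, 25.0, 15.0, 0.015
K = loan * 1.05 ** T
value = nneg_put(S, K, T, r)
```

In a full NNEG calculation this put value would be computed for each future year of maturity and weighted by the deterministic decrement probabilities.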
Where does Black-Scholes come from?
CP 13/18 includes a brief acknowledgement that “some of the assumptions that allow the mathematical derivation of the formula … are not met”[v], but then effectively mandates that formula anyway. To assess how important the missing assumptions are in the context of NNEG, we need to step back and recall the fundamental constructs from which Black-Scholes is derived.
Black-Scholes is an intuitively very surprising formula. A natural first thought is that the value of an option would depend on one’s expectation for the price of the underlying at expiry. But Black-Scholes says that this intuition of expectation-based pricing is wrong: the expected rate of growth of the underlying does not appear in the formula.
This surprising result arises from the constructs of dynamic hedging and arbitrage. Black-Scholes derives a value for an option by considering a long-short hedge portfolio in the option and the underlying. If this is continuously adjusted as the price of the underlying changes, the value of the portfolio can be kept neutral to the rate of growth of the underlying. Black-Scholes then argues that this risk-free portfolio must earn the risk-free rate. Why? Because profit-seeking arbitrageurs make it so. If it were not so, an arbitrageur could go long the riskless hedge portfolio, short riskless zero coupon bonds (or vice versa) and so earn risk-free profits. Since risk-free profits in practice seem thin on the ground, we conclude that market prices for options are generally set so as to avoid such arbitrages.
To an options market-maker, this no-arbitrage argument – avoiding the possibility that arbitrageurs can earn risk-free profits from our prices – is always more compelling than any expectations-based argument. Even if the market-maker thinks he has some insight into the expected price at expiry, quoting an expectations-based price is perilous, because it creates an exposure to arbitrages against him. In other words: no-arbitrage prices take precedence over expectations-based prices because for a market-maker, expectations-based prices are too dangerous to quote.
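The drift-elimination argument can be illustrated numerically. The toy simulation below (my own sketch of textbook delta-hedging; all parameter values are assumed for illustration) sells a call at the Black-Scholes price and rebalances the hedge at each step along simulated paths with two very different real-world drifts; in both cases the hedged P&L comes out close to zero, which is exactly why the drift does not enter the option price.

```python
import math
import random
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return S * N(d1) - K * math.exp(-r * T) * N(d1 - sigma * math.sqrt(T))

def hedged_pnl(mu, seed, S=100.0, K=100.0, T=1.0, r=0.02, sigma=0.2, steps=1000):
    """Sell a call at the Black-Scholes price, delta-hedge at each step under
    a GBM with real-world drift mu, and return the final P&L."""
    rng = random.Random(seed)
    dt = T / steps
    cash = bs_call(S, K, T, r, sigma)  # premium received for the call
    delta = 0.0
    for i in range(steps):
        t = T - i * dt                 # time remaining on the option
        d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
        new_delta = N(d1)
        cash -= (new_delta - delta) * S    # rebalance the stock holding
        delta = new_delta
        cash *= math.exp(r * dt)           # cash earns the risk-free rate
        S *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1))
    return cash + delta * S - max(S - K, 0.0)  # liquidate hedge, settle option

print(round(hedged_pnl(mu=0.15, seed=1), 2), round(hedged_pnl(mu=-0.05, seed=2), 2))
```

The residual P&L is small relative to the premium of roughly 9, whether the underlying trends up at 15% or down at 5% (and it shrinks further as `steps` increases). Without the ability to run this continuous rebalancing – as for housing – the argument collapses.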
But the Black-Scholes argument just given depends crucially on the idea of dynamic hedging: the existence of liquid markets which give the ability to continuously adjust the hedge portfolio in the underlying and the option. It also depends on the existence of arbitrageurs hungry for risk-free profits, and on market makers with limited capital who cannot afford to bleed risk-free profits to those arbitrageurs. All these elements are missing for housing. There are no markets in NNEGs. There are no markets in the appropriate underlying, that is forwards (or alternatively futures) on housing. There are no market makers or arbitrageurs. Dynamic hedging is simply not possible in any shape or form. This is not a failure of ‘some assumptions’ of Black-Scholes; it is a failure of the whole construct of Black-Scholes.
Recognising the absence of forward contracts, the PRA prescribed formula recasts the original Black (1976) formula by substituting the forward price discounted at the risk-free rate, F exp{–rt}, with the deferment price, S exp{–qt}, that is, a price agreed now and paid now to take possession of the property in future. The PRA comments that this substitution alleviates the problem of the lack of an observable forward price, because the deferment price can be readily estimated from the spot price (I will have more to say on this below). But as far as dynamic hedging is concerned, this modification does nothing to help us: there are still no markets in which to construct the dynamic hedge.
It is sometimes argued (eg Derman and Taleb 2005) that although the Black-Scholes formula as formally derived by academics requires dynamic hedging, it can alternatively be justified by the existence of forward contracts, puts and calls, and the constraint of put-call parity (ie the avoidance by market makers of prices which give rise to static arbitrages). But even this doesn’t help for housing: there are simply no markets in forward contracts, or puts and calls. If there are no puts and calls, it is hard to see how put-call parity could be a constraint.[vi]
Geometric Brownian motion is unrealistic for house prices, especially in the lower tail
The PRA argues that although “some of the assumptions” (in my view: foundational assumptions) for Black-Scholes are not met, this doesn’t really matter:
“The PRA is aware, as noted by respondents to DP 1/16, that some of the assumptions that allow the mathematical derivation of the formula in paragraph 3.20 for option valuation are not met in the residential property market. However, the PRA has not seen evidence that the approach set out in the proposed updated text of SS3/17 would automatically over- or under-estimate the allowance for NNEG, compared with other methods that are consistent with the four principles”[vii]
I think the PRA’s proposed approach will tend to overestimate the allowance for NNEG (albeit ‘automatically’ is always going to be a stretch, and in that sense the PRA’s wording here may be carefully chosen).
Black-Scholes assumes, when constructing the dynamic hedge, that the underlying in which we trade (a forward which doesn’t actually exist for housing!) follows a geometric Brownian motion. For housing this would have a positive trend above the risk-free interest rate, but the trend then gets eliminated from consideration in option valuation by the construct of the hedge portfolio. In effect, the value of the option is then evaluated as the expectation under a ‘risk-neutral measure’ where the underlying has an expected return equal to the risk-free rate, but wanders around that as per the Brownian motion, and in particular, can fall arbitrarily close to zero. The NNEG value resides in the possibility of outcomes in the lower tail.
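The concentration of value in the lower tail is easy to see by Monte Carlo. The sketch below (my own illustrative inputs, including an assumed 1.5% risk-free rate; σ = 13% and q = 1% are the prescribed values) simulates the risk-neutral GBM and measures how much of the NNEG payoff comes from paths where the price ends at least 30% below today’s level.

```python
import math
import random

random.seed(0)
S, K, T, r, q, sigma = 100.0, 120.0, 20.0, 0.015, 0.01, 0.13
n = 100_000

total, tail = 0.0, 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    # risk-neutral GBM: drift r - q, lognormal terminal price
    ST = S * math.exp((r - q - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    payoff = max(K - ST, 0.0)
    total += payoff
    if ST < 0.7 * S:          # paths where the price has fallen by 30% or more
        tail += payoff

value = math.exp(-r * T) * total / n      # discounted NNEG value for this maturity
tail_share = tail / total                 # share of value from deep-fall paths
print(round(value, 1), round(tail_share, 2))
```

On these inputs well over half of the option value comes from such deep-fall paths, which is why the realism of the model in the lower tail matters so much.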
This model of the underlying seems reasonable for an option on a hedgeable single stock. The expected return on the stock above the risk-free rate (i.e. the equity risk premium) is eliminated for option valuation purposes by the dynamic hedging argument; and the price of a single stock can indeed fall to zero, because companies can indeed go bust. But there are two reasons why a model in which the price can fall arbitrarily close to zero is not a reasonable model for forwards on house prices.
First, the long-term experience in the UK has been that house prices have tended to increase ahead of risk-free interest rates (and also ahead of inflation and even earnings). We do not know to what extent this will continue, but it seems unreasonable – again, in the absence of the hedging argument – to give it no weight at all.[viii]
Second, a deep and prolonged fall in house prices, with the attendant collapse in mortgage lending, widespread repossessions and distress in the electorate, seems overwhelmingly likely to induce a policymaker response. This is illustrated by the policymaker response to the modest fall in house prices following the 2008 financial crisis: policies such as purchasing first gilts and then corporate bonds (and even equities in Japan); the Term Funding Scheme to revive mortgage lending; and policies such as help-to-buy and associated schemes providing blatantly direct support for house prices. And all this was in response to a quite modest and short fall in prices! In a deeper or more prolonged slump, there are many more steps which policymakers can (and I believe will) take. In a country with its own currency, the government can (and I believe will) ultimately print money and buy houses. The activities and statements of central bankers worldwide in relation to asset purchases in recent years provide further general support for this notion of a policy response to deep and prolonged falls in asset (and particularly house) prices.
This is not the same as saying that I think house prices will always go up, or that housing is a better investment than shares, or that you can never lose by buying a house. I do not believe any of that. I just believe that overwhelmingly likely policymaker responses now provide a reflecting barrier under house prices, which makes geometric Brownian motion an unreasonable assumption for house prices in the lower tail. The level and firmness of that barrier is a matter for reasonable debate, but it seems unrealistic to pretend – as the prescribed NNEG formula does – that it does not exist at all.
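To get a feel for how much a barrier of this kind could matter, the toy simulation below values the NNEG with and without reflection of the log-price at a barrier. Everything here is an assumption for illustration – the 70% barrier level, the 1.5% risk-free rate and the other inputs – and reflection is applied crudely at discrete steps, not instantaneously as in a formal model.

```python
import math
import random

S0, K, T, r, q, sigma = 100.0, 120.0, 20.0, 0.015, 0.01, 0.13
barrier = math.log(0.7 * S0)    # assumed reflecting barrier: 70% of today's price
steps, n = 120, 10_000
dt = T / steps

def nneg_mc(reflect, seed):
    """Monte Carlo NNEG value; if reflect, mirror the log-price off the barrier."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = math.log(S0)
        for _ in range(steps):
            x += (r - q - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if reflect and x < barrier:
                x = 2 * barrier - x   # reflect the path back above the barrier
        total += max(K - math.exp(x), 0.0)
    return math.exp(-r * T) * total / n

plain = nneg_mc(False, seed=42)
barred = nneg_mc(True, seed=42)
print(round(plain, 1), round(barred, 1))
```

The barrier leaves prices free to fall a long way, but not arbitrarily far, and the NNEG value drops substantially as a result; the size of the drop is of course sensitive to the assumed barrier level.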
Update (29 Jan 2020): These ideas are developed in more detail in my paper "Valuation of no negative equity guarantees with a lower reflecting barrier", available on my main page.
A couple of decades ago, I did not have this belief in a ‘policy put’. My views have slowly changed, based on observation of public policy over the past 25 years, and especially the policy response to the relatively minor decline in house prices after 2008. I now recognise that for better or worse, I live in a country where most MPs own more than one property, former prime ministers buy whole apartment blocks to let,[ix] senior Bank of England policymakers assert in unguarded moments that ownership of property is a far superior form of personal investment to pensions[x], etc etc. Policymaker ideologies and preferences can of course slowly change, for example as new generations of MPs are elected; if and when they do, I might slowly change my mind about the reflecting barrier. But any change is likely to be very gradual, because the ideology amongst policymakers which substantiates the reflecting barrier runs much deeper than the political hue or personalities of any particular government.
One response to my beliefs about a ‘policy put’ on house prices might be to say that whilst everyone is entitled to their own opinions, different people will have different opinions, and none of these should enter into consideration in option valuation. We should instead stick to the risk-free rate as a ‘neutral’ view. But in the absence of hedging, this purported neutrality is an illusion: assuming (in effect) growth at the risk-free rate is just another belief, and I see no reason to give it primacy. In the absence of hedging, one has to take a view on the trend in prices; it cannot be conjured away as in the standard Black-Scholes.
One notable investor has made statements suggesting a belief that analogous arguments hold good for long-term equity indices as well. In his 2008 annual report, Warren Buffett discussed long-term put options written by Berkshire Hathaway on various stock indices. He gave an example of a 100-year put option on the Dow Jones index, and suggested that the Black-Scholes formula (with typical assumptions at the time of writing) very substantially overvalued this option. Cornell (2009) interprets Buffett’s commentary as reflecting “the belief that future nominal stock prices are not well approximated by a lognormal distribution, because inflationary policies of governments and central banks will limit future declines in nominal stock prices compared with those predicted by an historically estimated lognormal distribution” (I agree with this interpretation.)[xi]
I also note that the Bank of England ‘stress tests’ for banks involve a scenario where house prices fall by one-third in three years, but then resume their trend rate of growth. This completely rules out the deep and prolonged falls in house prices for which insurers are being asked to reserve on NNEGs. It’s not obvious why the reserving requirements for insurers writing NNEGs should be much more stringent than those for banks underwriting ordinary mortgages.
Deferment price less than spot price? Not necessarily.
In the prescribed NNEG formula quoted above, the PRA calls the quantity S exp{–qt} the ‘deferment price’ of the property, that is, the price payable now to take possession on a future date. There is no meaningful market in deferment prices over the periods of 20–40 years most relevant to NNEGs.[xii] The PRA nevertheless asserts that the deferment price must always be lower than the spot price of the property, on the following rationale:
“This statement is equivalent to the assertion that the deferment rate for a property is positive. The rationale can be seen by comparing the value of two contracts, one giving immediate possession of the property, the other giving possession (‘deferred possession’) whenever the exit occurs. The only difference between these contracts is the value of foregone rights (eg to income or use of the property) during the deferment period. This value should be positive for the residential properties used as collateral for ERMs.”[xiii]
In isolation, this appears a reasonable argument. But there are also reasonable counterarguments.
Housing today is owned mainly by owner-occupiers. They prefer a current interest to a deferred interest, because they need a roof over their heads, they like long-term security of occupation, and they like being able to make their own choices on extensions and repairs. In other words, they like the practical and sentimental benefits of home ownership. A minority of owners are buy-to-let landlords: they like the understandable form of the investment, the unusual ability to finance it largely with borrowed money, and perhaps the disengagement it facilitates from the distrusted pensions and savings industry.
For an insurer, on the other hand, these practical and sentimental benefits of a current interest in a house have no relevance. The main potential benefit of a current (as opposed to deferred) interest is the potential income from letting. But a current interest also has several disbenefits: tenants need to be managed, houses need to be maintained, from time to time there are costs (including possibly PR costs) of evicting tenants in arrears, and there is a possibility (through existing or new legislation) that tenants might acquire new rights. If on the other hand houses are kept vacant, this gives another set of problems: council tax, security and maintenance costs, and possibly very considerable PR costs of owning substantial amounts of empty housing. These disbenefits are not fanciful; their materiality can be inferred from the observable fact that despite the excellent long-term performance of housing as an investment, neither insurers nor any other financial institutions have shown any enthusiasm over the past several decades for housing as an asset class.
So current interests in houses are evidently not attractive to insurers and other institutional investors. Deferred interest might well be more attractive, particularly if in the form of cashsettled financial contracts, so that all the problems of current interests are permanently avoided. Even if a deferred interest is not strictly preferred, the relative valuation of a deferred interest compared to a current interest seems very likely to be much higher for an insurer than a typical individual owner.
Now if there were a substantial market for deferred interests, the money weight of individuals’ preference for current interests versus insurers’ preference for deferred interests would determine the relative market prices for the two types of interest (i.e. what the PRA calls the ‘deferment rate’). But we have the same problem as with the hedging arguments: the market for deferred interests does not exist on any meaningful scale. And this is not mere happenstance or oversight; to create such a market would require the development of legal and governance frameworks covering maintenance, insurance, the rights of occupiers during and on maturity of deferred interests, etc. In the absence of such a framework, the relative values of current interests and deferred interests remain a matter of conjecture. The PRA’s argument is a reasonable one, but not the only reasonable one, and therefore not as conclusive as CP 13/18 asserts.[xiv]
Negative deferment rates might offset omission of the reflecting barrier
The PRA argument that the deferment price should always be less than the corresponding spot price is sometimes characterised as a ‘positive deferment rate’ (i.e. the rate q in the deferment price S exp{–qt} is positive). The PRA says that some insurers may be using a deferment rate that is ‘too low’. Separately, I also noted above that the assumption of geometric Brownian motion seems unrealistic for long-term house prices in the lower tail, and that it would be more realistic to have a reflecting barrier under prices to represent the likely policy response to a deep and prolonged fall in house prices. Either the ‘error’ of omitting this reflecting barrier, or the ‘error’ of using a deferment rate that is ‘too low’, will act to reduce the NNEG valuation. So a low (or even negative) deferment rate combined with omission of the reflecting barrier might arrive at approximately the right answer, albeit arguably for the wrong reasons.
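The direction of the deferment-rate effect is easy to check with the prescribed formula itself. The sketch below (illustrative inputs of my own: spot 100, loan rolled up to 120, term 20 years, an assumed 1.5% risk-free rate) shows the NNEG value falling steadily as the deferment rate moves from the prescribed +1% through zero to −1%.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def nneg_put(S, K, T, r, q, sigma):
    """Prescribed Black (1976)-style put with deferment price S*exp(-q*T)."""
    d1 = (log(S / K) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * N(-d2) - S * exp(-q * T) * N(-d1)

# NNEG value for deferment rates of +1%, 0% and -1%
for q in (0.01, 0.0, -0.01):
    print(f"q = {q:+.0%}: NNEG = {nneg_put(100, 120, 20, 0.015, q, 0.13):.1f}")
```

A move of two percentage points in q roughly halves the NNEG value on these inputs, so the deferment rate assumption is far from a detail.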
Summary
(1) In the context of NNEGs, the complete inapplicability of dynamic hedging (or even put-call parity) makes the prescribed Black-Scholes formula somewhat arbitrary. At the very least, it seems unjustified to label this ‘correct’, and all other approaches ‘incorrect’.
(2) A deep and prolonged fall in house prices is almost certain to lead to a policymaker response. This means that the geometric Brownian motion assumed in Black-Scholes is too heavy in the lower tail. Since most of the NNEG value arises from this tail, the prescribed approach seems likely to overvalue the NNEG. There should be some implicit or explicit allowance in NNEG valuation for this policymaker response.
(3) Reserving requirements for insurers underwriting NNEGs should not be more stringent than those for banks underwriting ordinary mortgages.
(4) The PRA’s argument that the (hypothetical, unobserved) deferment price should always be less than the spot price (specifically: a minimum ‘deferment rate’ of 1% pa over the term of the NNEG) is not as obvious as CP 13/18 suggests. There are good counterarguments, which may justify a lower (perhaps even negative) deferment rate.
Notes
[i] CP 13/18 para 2.7 & proposed SS3/17 para 3.20.
[iii] Eg. proposed SS3/17 para 3.22.
[iv] Proposed SS 3/17 para 3.20
[vi] Derman, E. and Taleb, N.N. (2005) ‘The illusion of dynamic replication’, Quantitative Finance, 5(4): 323–326. By way of simple example, consider an underlying which trades at 100, where the call option with a 95 strike trades at 8 and the put at 3 (i.e. a time value of 3 for each). Now suppose the underlying starts trending upwards. Intuitively, we might guess that the call is now worth say 10 and the put is worth 2. But given puts, calls and a forward contract, this would create an arbitrage: with the spot unchanged (and ignoring carry), put-call parity requires the call price minus the put price to remain at 100 – 95 = 5, whereas the guessed prices give 10 – 2 = 8. Put-call parity in effect requires that the put and call are both independent of the trend in the underlying.
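A minimal check of the arithmetic in this note (assuming zero interest and carry for simplicity, and reading the strike as 95, the only strike on which the initial prices of 8 and 3 are parity-consistent):

```python
from math import exp

def parity_gap(call, put, spot, strike, r=0.0, T=1.0):
    """Residual of put-call parity C - P = S - K*exp(-r*T); zero means
    the three prices admit no static arbitrage (no dividends assumed)."""
    return (call - put) - (spot - strike * exp(-r * T))

print(parity_gap(call=8, put=3, spot=100, strike=95))   # → 0.0 (consistent)
print(parity_gap(call=10, put=2, spot=100, strike=95))  # → 3.0 (arbitrage)
```

A non-zero gap means a riskless profit is available from a static position in the call, the put and the forward, which is why trend-based quotes are untenable wherever these instruments actually trade.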
[viii] One might wonder about a possible inconsistency of housing increasing ahead of earnings indefinitely. Do housing costs eventually absorb 100% of earnings? David Miles has thought about this. He suggests there are no compelling economic reasons why houses shouldn’t eventually become assets like jets: utilised by many people, but owned by only a very few. See Miles, D. and Sefton, J. (2018) ‘Houses across time and across place’.
[ix] ‘Tony and Cherie Blair’s property portfolio worth estimated £27m’ The Guardian, 14 March 2016.
[x] ‘Property is better bet than a pension says Bank of England economist’ The Guardian, 28 August 2016.
[xi] Cornell, B. (2009) ‘Warren Buffett, Black-Scholes, and the valuation of long-dated options.’ Journal of Portfolio Management, Summer 2010: 107–111.
[xii] One might possibly refer to freehold reversions on short leaseholds, but this seems a feeble argument, because any market is realistically negligible.
[xiii] SS 3/17 proposed para 3.16.
[xiv] Eg SS3/17 proposed para 3.16.
