Sunday, 15 April 2018

Fair weather forecasting

The economics profession has had to endure some bad press over the last decade, in the wake of the global financial crisis which we failed to foresee and, in the UK case, the dire predictions in the aftermath of the Brexit referendum that were not borne out. But to cite these two examples is entirely to miss the point. Economics is not a predictive discipline – as I have noted countless times before – so criticising economists for failing to predict macroeconomic outcomes is a bit like criticising doctors for failing to predict when people will fall ill.

Another of the narratives which has become commonplace in recent years is the notion that economists predict with certainty. This fallacy was repeated again recently in a Bloomberg article by Mark Buchanan entitled Economists Should Stop Being So Certain. In fact, nothing could be further from the truth. The only thing most self-respecting economic forecasters know for certain is that their base case is more likely to be wrong than right. The Bank of England has for many years presented its economic growth and inflation forecasts in the form of a fan chart in which the bands get wider over time, reflecting the fact that the further ahead we forecast, the greater the forecast uncertainty (chart). Many other institutions now follow a similar approach, in which forecasts are presented as a distribution of probabilistic outcomes rather than a single point estimate.

Indeed, if there is a problem with certainty in economic forecasting, it is that media outlets tend to ascribe it to economic projections. It is, after all, a difficult story to sell to their readers that economists assign a 65% probability to a GDP growth forecast of 2%. As a consequence, the default option is to reference the central case.

One of the interesting aspects of Buchanan’s article, however, was the reference to the way in which the science of meteorology has tackled the problem of forecast uncertainty. This was based on a fascinating paper by Tim Palmer, a meteorologist, looking back at 25 years of ensemble modelling. The thrust of Palmer’s paper (here) is that uncertainty is an inherent part of forecasting, and that an ensemble approach that uses different sets of initial conditions in climatic modelling has been shown to reduce the inaccuracy of weather forecasts. In essence, inherent uncertainty is viewed as a feature that can be used to improve forecast accuracy and not as something to be avoided.

In fairness, economics has already made some progress on this front in recent years. We can think of forecast error as deriving from two main sources: parameter uncertainty and model uncertainty. Parameter uncertainty arises because, even if we are using the right model, its coefficients may be poorly estimated or we may have conditioned it on the wrong assumptions. We can try to account for this using stochastic simulation methods[1] which subject the model to a series of shocks and give us a range of possible outcomes that can be represented in the form of a fan chart. Model uncertainty raises the possibility that our forecast model may not be the right one to use in a given situation and that a different one may be more appropriate. Thus the academic literature in recent years has focused on the question of combining forecasts from different models and weighting the outcomes in a way which provides useful information[2], although this work has not yet found its way into the mainstream.
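To illustrate the stochastic simulation idea, the sketch below shocks a toy AR(1) model of GDP growth and reads off percentile bands at each horizon, which is the raw material for a fan chart. The model, its coefficients and the shock size are all illustrative assumptions of mine, not any institution’s actual forecasting model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical AR(1) model of y/y GDP growth: g_t = c + rho * g_{t-1} + e_t.
c, rho, sigma = 0.5, 0.75, 0.3   # illustrative coefficients, not estimates
g0 = 2.0                         # last observed growth rate, % y/y
horizon, n_sims = 8, 10_000      # quarters ahead, number of shocked paths

paths = np.empty((n_sims, horizon))
g = np.full(n_sims, g0)
for t in range(horizon):
    g = c + rho * g + rng.normal(0.0, sigma, n_sims)  # fresh shock per path
    paths[:, t] = g

# Percentile bands across the simulated paths form the fan chart:
# the outer bands widen as the horizon lengthens.
bands = np.percentile(paths, [5, 25, 50, 75, 95], axis=0)
width = bands[-1] - bands[0]     # 5%-95% span at each horizon
assert width[-1] > width[0]      # uncertainty grows further ahead
```

The widening of the 5%-95% span with the horizon is exactly the behaviour the Bank of England’s fan charts are designed to convey.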

Therefore, in response to Buchanan’s conclusion that “an emphasis on uncertainty could help economists regain the public’s trust”, I can only say that we are working on it. But as Palmer pointed out, “probabilistic forecasts are only going to be useful for decision making if the forecast probabilities are reliable – that is to say, if forecast probability is well calibrated with observed frequency.” Unfortunately we will need a lot more data before we can determine whether changes to economic forecasting methodology have produced an improvement in forecast accuracy, and so far the jury is still out. And unlike the weather, which at least obeys physical laws, the economy does not. But weather systems and the macroeconomy share the similarity that both are complex processes which can be sensitive to conditioning assumptions. Even if we cannot use the same techniques, there is certainly something to learn from the methodological approach adopted in meteorology.

Economics suffers from the further disadvantage that much of its analysis cuts into the political sphere and there are many high profile politicians who use forecast failures to dismiss outcomes that do not accord with their prior views. One such is the MP Steve Baker, a prominent Eurosceptic, who earlier this year said in parliament that economic forecasts are “always wrong.” It is worth once again quoting Palmer who noted that if predictions turn out to be false, “then at best it means that there are aspects of our science we do not understand and at worst it provides ammunition to those that would see [economics] as empirical, inexact and unscientific. But we wouldn’t say to a high-energy physicist that her subject was not an exact science because the fundamental law describing the evolution of quantum fields, the Schrödinger equation, was probabilistic and therefore not exact.”

As Carveth Read, the philosopher and logician noted, “It is better to be vaguely right than exactly wrong.” That is a pretty good goal towards which economic forecasting should strive.







[2] Bayesian Model Averaging is one of the favoured methods. See this paper by Mark Steel of Warwick University for an overview.

Tuesday, 10 April 2018

Revisiting Brexit demographics

Even some of those who believe Brexit to be a thoroughly bad idea are beginning to realise that it is a process that cannot now be stopped. Indeed, I have long believed that full EU membership will end in March 2019 because of (a) the investment that the UK government has sunk into delivering Brexit, which will likely preclude parliament overturning the decision, and (b) the sheer cost in terms of time and effort required to deliver a second referendum which rules out the option that people will be given a chance to change their mind.

Wolfgang Münchau in his FT column last week gave four reasons why The time for revoking Brexit has passed: (i) both sides have made significant progress towards an agreement; (ii) domestic opposition to Brexit remains fragmented, which means that it has been hard for Remainers to find a credible figurehead to get behind; (iii) the UK economy has held up better than expected, thus reducing the extent of buyers’ remorse and (iv) the EU has itself moved on, and having accepted that Brexit is inevitable is now turning to the issues which matter for its own future (relationships with the US and Russia and reforming EMU). Obviously this has not gone down well with hard-core Remainers but sane commentators, such as the lawyer David Allen Green, increasingly point out that the energy would be better spent trying to shape the post-2019 transition rather than fight battles that have already been lost.

In order to consider what should be the appropriate strategy – fight Brexit or shape the future – consider the demographic evidence. The ONS’ population projections suggest that the 55+ cohort which voted predominantly for Brexit will have declined by almost 1.6 million between mid-2016 and mid-2019 (chart). To put this into context, the margin of victory for Leave was slightly less than 1.3 million. Not surprisingly, the further ahead we roll the numbers the bigger the decline, such that by 2026 the 2016 cohort aged 55+ will have declined by 5.3 million (a 27% reduction). This raises the obvious question: In whose name is Brexit being conducted? It is all very well for older voters to opt to leave the EU, but it benefits neither them nor younger voters if they are not around to see it through. So on that basis, there is an argument in favour of continuing to oppose Brexit.


As an aside, I did do some back-of-the-envelope calculations a few months ago which bear repeating. If we were to apply a weighting structure based on the fact that younger voters have more to lose from leaving the EU, and therefore allow their votes to count for more, it is possible to come up with a scenario in which the June 2016 vote would have produced a Remain majority. Assume (arbitrarily) that votes carry a positive weight so long as voters are under the age of 90, with the weight derived as (90 – age) / 90. For those in the 18-24 age group, if we assume a median age of 21, applying the formula gives their vote a weight of 0.7667; for those in the 25-34 bracket, the median age is 29.5 and the weight declines to 0.672. As age rises, so the weight declines. Even allowing for a low turnout amongst younger voters, survey-based evidence of voting patterns indicates this would be enough to give Remain a 52.5%-47.5% majority. Whilst such an idea should not be taken too seriously, as it cuts across the principle of one person-one vote, it does at least try to introduce some of the inter-generational fairness which many people claim is lacking in the whole debate.
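The weighting rule is simple enough to sketch directly. The cohort median ages below are the same illustrative assumptions as in the text; the floor at age 90 reflects the (arbitrary) rule that only voters under 90 carry a positive weight.

```python
# Sketch of the (deliberately arbitrary) age-weighting rule described
# above: a vote carries weight (90 - age) / 90, floored at zero for 90+.
def vote_weight(age: float) -> float:
    return max(90.0 - age, 0.0) / 90.0

# Assumed cohort median ages, as in the text.
assert round(vote_weight(21.0), 4) == 0.7667   # 18-24 cohort
assert round(vote_weight(29.5), 4) == 0.6722   # 25-34 cohort
assert vote_weight(95.0) == 0.0                # weight floored at zero
```

Applying these weights to actual cohort-level turnout and vote shares is where the survey evidence comes in; the function alone just encodes the declining weight with age.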

The case for instead campaigning for the best post-Brexit settlement is also supported by the fact that the constituency most in favour of Brexit will soon become less politically relevant. The likes of Nigel Farage, who did so much to whip up Brexit support, might bewail the nature of any agreement hammered out between the UK and EU27 but he is increasingly becoming a political irrelevance as those who bought into his vision of a backward-looking Britain become less active (look out for UKIP to take a serious beating at the local UK elections on 3 May). The same may also be true for Jacob Rees-Mogg and Boris Johnson, the hardliners on the front line of Conservative politics, who belong to a party with an average membership age of at least 57 and whose numbers are only around 20% of their Labour opponents. The point, of course, being that Conservative parliamentary MPs will not in future have to be quite so beholden to their increasingly ageing party membership.

You can never say never on Brexit-related matters. But if there is to be any subsequent vote it will most likely only take the form of a parliamentary vote on the terms of the final agreement offered by the EU. After all, the will of the people has already been heard so there is no need to ask them again. It’s just a shame that large numbers of those who voted for Brexit will not be around to enjoy it.

Sunday, 8 April 2018

Don't be casual with words

They say that the pen is mightier than the sword. Consequently, it is incumbent upon us all to use our words judiciously. But it is also important that those consuming any given message are careful to interpret the information given to them without extrapolating beyond what is in front of them. This is particularly important in a world riddled with fake news in which messages can be subtly tweaked to say something that was not in the original communication, which is then passed on down the chain like the old game of Chinese whispers. It is also an issue for policymakers, particularly central bankers who are trying to communicate with markets and the wider public.

This issue was thrown into sharp relief by the recent TV interview by British Foreign Secretary Boris Johnson, who in response to the question of whether Russia was the source of the poison used in the Salisbury incident replied: “When I look at the evidence, the people from Porton Down, the laboratory… they were absolutely categorical, I mean, I asked the guy myself, I said, 'are you sure?' and he said 'there's no doubt.'” Only this week, Gary Aitkenhead, the chief executive of the government’s Defence Science and Technology Laboratory stated that whilst the government combined the laboratory’s scientific findings with information from other sources to conclude that Russia was responsible for the Salisbury attack, “we have not verified the precise source.”

It would appear that Johnson jumped to a conclusion that may not (yet) be supported by the evidence – statistically known as a Type I error. Meanwhile, the more cautious Aitkenhead refused to deal in speculation – as befitting someone leading a team of scientists. But whilst there is a discrepancy between these two versions of events, which has raised question marks against Johnson’s judgement, it is important to note that Aitkenhead did not say that the source was not Russian, as some of the more excitable media commentators have suggested.

I was similarly struck by a Twitter exchange involving the physicist Brian Cox who noted that “we have a generation of senior politicians who were not taught how to think properly - more science in their education would have helped. They use imprecise, woolly language, which is symptomatic of woolly thinking.” Cox was careful not to dismiss the arts and social sciences but was nonetheless inundated with comments accusing him of doing just that, thereby rather proving his point. People may disagree, but what I interpreted Cox as saying was that science demands very high levels of certainty and many people could benefit from understanding what constitutes a reasonable degree of proof before making pronouncements in public.

But perhaps the problem is as much to do with the medium through which many of our news stories are filtered. Take, for example, the way in which the actions of central banks are reported. In August 2013 the Bank of England unveiled a forward guidance strategy based on the unemployment rate. It announced that Bank Rate would not rise from its then-current level of 0.5% until the unemployment rate fell to 7%. This strategy was conditional upon ‘knockouts’ designed to allow for rate hikes if certain threats to inflation became evident.

Although in fact unemployment fell well below 7% over the next twelve months, the Bank did not raise rates for a variety of reasons – domestic inflation was falling whilst the international environment was plagued by euro zone uncertainty and concerns over Chinese events. Nonetheless, many people fell into the trap of arguing that the Bank’s intentions did not match its actions, which rather destroyed its credibility. But it is important to look at exactly what the BoE said: “the MPC intends not to raise Bank Rate from its current level of 0.5% at least until the … headline measure of the unemployment rate has fallen to a threshold of 7%.” This was not a commitment to raise rates once unemployment hit 7% – only a commitment not to do so as long as it remained above the threshold, which is a very different matter.
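The distinction between what the guidance permitted and what it required can be made concrete in a few lines of code. This is my own stylised reading, not the Bank’s formal reaction function; the knockout flag stands in for the inflation-threat conditions that could override the pledge.

```python
# Stylised reading of the 2013 guidance: the 7% unemployment threshold
# was a necessary condition for a rate rise, not a sufficient one.
def hike_permitted(unemployment: float, knockout: bool = False,
                   threshold: float = 7.0) -> bool:
    """True if the guidance permits (not requires!) a rate rise."""
    return unemployment < threshold or knockout

# Above the threshold with no knockout, a hike would break the pledge.
assert not hike_permitted(7.5)
# A knockout (an inflation threat) releases the MPC from the pledge.
assert hike_permitted(7.5, knockout=True)
# Below the threshold a hike is merely permitted; the MPC may still
# hold rates, as it did when unemployment fell below 7% in 2014.
assert hike_permitted(6.5)
```

The trap many commentators fell into was reading `hike_permitted` as `hike_required`: the guidance said nothing about what the Bank would do once the threshold was crossed.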

Despite the best efforts of the BoE to explain the conditional nature of economic forecasts and the risks surrounding the central case projection, the subtleties of this message are often lost in media translation. Thus the mechanistic nature of the initial forward guidance rule was always given more prominence than it deserved. Perhaps the BoE should have been more aware that the issue would be construed in this way, and framing a rule based on the unemployment rate laid it open to more criticism than was necessary. I am not convinced that the BoE did a great job of communicating its message at the time, but it certainly was not helped by some of the reporting surrounding its commentary. 

In his latest speech, MPC member Gertjan Vlieghe suggested that in his view it is “useful to provide a snapshot of how today’s central growth and inflation forecast map into my view of the likely central path of interest rates.” This is exactly the approach adopted by the Swedish Riksbank which sets out an illustrative path for the policy rate conditioned upon its economic forecast. Vlieghe pointed out that “if growth and inflation turn out differently from this central forecast, the path of interest rates will be different too. That should not be seen as a mistake, or a breaking of an earlier promise. It should be seen for what it is, namely an appropriate response to a changed economic outlook.”

Whilst this is totally correct, past UK experience suggests that if the BoE were to adopt such a strategy, a large number of people would misunderstand the nature of conditional versus unconditional forecasts and use this as a stick with which to beat the central bank when it is unable to deliver on its forecast. We all have to choose our words carefully, but it seems that central banks have to do so almost as much as foreign secretaries.

Thursday, 29 March 2018

Article 50: One year on


It is now exactly a year since Theresa May stood before UK parliament and announced that she was triggering the Article 50 mechanism that would sweep the UK out of the EU within two years. Halfway through the mandated two year time period, the UK has made more progress with regard to negotiating a deal than I believed possible at the time. Nonetheless many mistakes have been made along the way, and there is still much work to do before the UK is able to arrive at a deal which will minimise the damage caused by what I still consider to be an act of economic self-harm. Perhaps more significantly, the country remains as split as ever on Brexit. The ultras still want it at any price whilst there is still a significant core of Remainers who want to prevent it altogether.

Looking back over the past 12 months, there is certainly a lot less gung-ho rhetoric from the prime minister. The idea that “no deal is better than a bad deal” has been quietly dropped and some of the more strident language which was designed to keep the pro-Brexit faction of her party onside has been toned down. Of course, this is in large part the result of the ill-judged election call which cost the Conservatives their parliamentary majority last June, and which has weakened the prime minister’s position. More significantly, parliament has exercised a greater degree of control over the domestic legislation process than was initially envisaged. The government’s original plan was that it would be the prime driver of Brexit legislation but the Withdrawal Bill has been the subject of numerous amendments during its parliamentary passage and may not be the all-encompassing piece of legislation that was envisaged a year ago.

The toning down of domestic rhetoric is also a consequence of the Realpolitik of dealing with the EU27 across the negotiating table. For example, during her speech to parliament in March 2017, the PM promised to “bring an end to the jurisdiction of the European Court of Justice in Britain.” Earlier this month, she was forced to recognise that “even after we have left the jurisdiction of the ECJ, EU law and the decisions of the ECJ will continue to affect us.” That is a very different message to the one she tried to sell a year ago but it is a recognition that the form of close partnership that the UK wants with the EU27 will necessarily involve compromises that will not please everyone in her party. However, it raises the prospect of ongoing domestic political upheaval as it becomes clear that Brexit simply cannot take place on the no-compromise terms envisaged by many Leavers.

It was evident a year ago that the two year timeframe was never going to be long enough to ensure that the final agreement between the UK and EU27 could be ratified. And so it has proven, with the announcement last week that the two sides will implement a transition deal starting in a year’s time which runs to end-2020. The good news is that this will remove the prospect of a cliff-edge Brexit in March 2019, although it does not preclude the possibility that the cliff-edge has merely been postponed to December 2020. Nonetheless, this is good news for financial institutions in particular, who until yesterday were unsure whether the arrangements that allow cross-border transactions in financial services would come to an end in March 2019. However, the Bank of England has now opined that it “considers it reasonable for firms currently carrying on regulated activities in the UK by means of passporting rights … to plan that they will be able to continue undertaking these activities during the implementation period in much the same way as now.” In other words, we now have more time to prepare, and hopefully more information on the future of financial services will be forthcoming in the interim.

As regards the three key issues that formed the basis of phase one of the Brexit negotiations, the UK and EU27 are broadly agreed on guaranteeing citizens’ rights and the final exit bill. However, whilst both sides agree in principle on the issue of maintaining an open border between the Irish Republic and Northern Ireland, the UK has not yet come up with a solution which will satisfy the requirements of both sides. It is thus notable that whilst the UK and EU27 agree on 75% of the issues outlined in last week’s joint agreement document, the Irish border issue, and the thorny question of how much jurisdiction the ECJ will be allowed to have, remain to be resolved.

Compared to what was expected on the economy twelve months ago, GDP growth has been broadly in line but unemployment has fallen faster and CPI inflation has picked up more than anticipated. The threat of Brexit has clearly not derailed the economy but it has arguably underperformed relative to what might have occurred in its absence. Indeed, the UK remains at the bottom of the G7 growth league and is the only major economy which registered slower growth in 2017 than in 2016. I thus remain to be convinced that Brexit will prove a net benefit for the UK economy, for reasons that I have outlined numerous times before.

But my biggest issue with Brexit remains the way in which the Leave campaign made their case ahead of the referendum (and the allegations of funding impropriety which have surfaced in recent days do nothing to assuage these concerns) and the way in which the result was interpreted as a winner-take-all event. Leave supporters continue to believe that the “will of the people” justifies Brexit at any price and precludes the option of revisiting the decision. But parliamentary democracy in the UK is founded on the principle that no parliament can take a decision that binds its successors. Yet that is precisely what Brexit implies. It is the imposition of a policy that younger generations – and perhaps those not yet born – will have to contend with. It is in many respects a profoundly undemocratic decision.

Of course, the Leave side can reasonably contend that it is undemocratic to be shackled to an institution of which they do not wish to remain part. Thus, one year after triggering Article 50 and almost two years after the referendum, the legal and constitutional implications of the decision are no nearer being resolved. Expect us to be having much the same debate about the (de)merits of Brexit in March 2019 as we did in March 2017 (or even March 2016).

Wednesday, 28 March 2018

Examining the case for a wealth tax

I have pointed out previously that the huge fiscal tightening imposed on the UK over the past eight years has come about through deep cuts in spending and relatively little by way of additional taxation (most recently here). Now that the balance between current spending and revenue has been restored, there is no serious rationale for further swingeing spending cuts. Undoubtedly this was one of the factors supporting the announcement by Theresa May earlier this week that additional funding will be made available for the NHS.

Whilst this is a welcome development, the pressure on public finances has not suddenly gone away now that the deficit on current spending has been eliminated. If anything, as the population ages, the demands on healthcare and social services will continue to rise. It is not just the health system that is struggling to cope: The benefits system is under pressure too, and there is a huge wedge of people at the lower end of the income scale who are struggling to gain access to the benefits to which they are entitled.

What the government has not outlined is how much additional funding will be provided nor where it will come from. After eight years of grinding austerity, raising existing taxes to fund the additional resource requirements will not be acceptable to taxpayers who would regard it as yet another kick in the teeth for the squeezed middle. Indeed, raising income taxes appears to be a non-starter. In any case, efforts to compensate low-paid workers for the curbing of their benefits via an increase in personal income tax allowances are already estimated to have cost a cumulative £12bn in foregone revenue in FY 2017-18. Having raised VAT to an already-lofty 20%, the scope for raising indirect taxes is also limited. It would thus be sensible to look for alternative revenue sources, and two apparently radical fiscal suggestions have been given more prominence in recent weeks. One is the possibility of some form of wealth tax and the other is to introduce a hypothecated tax to fund such items as the NHS.

In this post I will consider only the option of a wealth tax and will come back to hypothecated taxes another time. The rationale for a wealth tax is that incomes, which form the basis of most direct taxes, have remained stable relative to GDP over the past three decades whereas wealth holdings have significantly increased. Thirty years ago, UK household net financial wealth holdings were a multiple of 1.2 times GDP but today the multiple stands at 2.3. The picture looks even more favourable if we add in wealth held in the form of housing. Financial and housing wealth together amount to around 5 times GDP compared to a multiple of 3 in 1988 (chart).

But why should this windfall gain be subject to tax? One strong argument is that taxes on income do not take into account the claim on overall resources that wealth confers. For example, there is a difference in the ability to pay a bill of (say) £1,000 between someone who earns £20,000 from labour income and someone who earns £20,000 as a return on a wealth stock of £1 million. As a result, a wealth tax will raise the overall progressivity of the tax system by taking account of the additional taxable capacity conferred by wealth. But wealth holdings are already subject to tax in some form or another. For example, liquidating wealth holdings subjects individuals to capital gains tax. Moreover, the flow of income accruing to a stock of financial wealth is liable to income tax. In addition, even if the wealth is untouched and generates no direct financial benefit to the individual, if it is passed on as a bequest to future generations it is subject to inheritance tax.

In any case, there are a huge number of practical difficulties associated with introducing a wealth tax. To name just a few: How much should it raise? On which assets should it be levied? At what rate should it be set? Should it be set at a single or graduated rate? How much (if any) of an individual’s wealth should be exempt? Even if we could agree on these issues, once such a tax has been implemented, two of the biggest ongoing problems are disclosure and valuation. The disclosure problem is obvious: It is easy to hide many forms of wealth (think how simple it is to hide small but precious items such as diamonds). As a result, compliance becomes a problem and even honest taxpayers have an incentive to cheat if their fellow citizens are not playing ball. In addition, the valuation problem is often underestimated, particularly if the absence of a market transaction makes it difficult to establish an appropriate valuation metric. It is for all these reasons that the proportion of OECD countries levying a wealth tax has fallen over the last three decades. In 1990, half of them did so (17) but by 2010 only France, Norway and Switzerland levied them on an ongoing basis.

Despite all the practical difficulties, there is a genuine case for some form of wealth tax on grounds of inter-generational fairness. For example, older generations tend to hold the vast bulk of the wealth whilst benefiting from additional public spending on areas such as the NHS. It is for this reason that the Resolution Foundation recently put forward a series of proposals to reform property taxes, including the introduction of a progressive property tax to replace the existing Council Tax and raising taxes on the highest-value properties.

Such measures will clearly not be popular with Conservative voters, which is one reason why they will not be implemented any time soon. But as the fiscal debate increasingly switches away from deficit reduction and focuses more on what the state can reasonably be expected to provide, the issue of inter-generational equity will inevitably rise up the list. We may not want to talk about wealth taxes today but it is an issue that is unlikely to go away.

Saturday, 24 March 2018

China crisis

The announcement that the US government plans to impose tariffs of 25% on $50 bn of goods imported from China has set the cat amongst the pigeons, with equity markets turning sharply lower and safe havens such as gold gaining ground. But whilst this may look like a simple application of economic nationalism, led by a president who clearly has no appreciation of the damage that he may be about to unleash, it is worth considering the underlying US grievances. First and foremost, the Administration believes that China has gained from the unfair expropriation of US intellectual property and that some form of recompense is required. But this is not only a US problem: Many western governments are beginning to worry about the asymmetries which their companies face in doing business in China – the US is simply the first to take action.

Three weeks ago The Economist, which has long been a cheerleader for global free trade, expressed reservations about the direction in which China is heading. In its view, “the China of Xi Jinping is a great mercantilist dragon under strict Communist Party control, using the power of its vast markets to cow and co-opt capitalist rivals, to bend and break the rules-based order.” The article went on to point out that “Chinese markets are opened only after they have ceased to matter” whilst regulators “take away computers filled with priceless intellectual property and global client lists.” A week later, the Financial Times ran a story headed “Backlash grows over Chinese deals for Germany’s corporate jewels” following the news that Geely has acquired a 10% stake in Daimler, which has raised fears that Daimler’s know-how in the field of electric vehicles will filter back to China without appropriate compensation.

Industrial espionage is not new, of course, and we should remember that some of the earliest examples of such activity involved the transfer of Chinese technological advantages into western hands. For example, in the 1800s China had a monopoly on tea growing until a British botanist, acting on behalf of the East India Company, smuggled tea plants and seeds to India and established an industry whose output eventually eclipsed that of China. But this does not assuage current western concerns that the heavy-handed techniques employed by the Chinese are backed by the government. Whilst western companies have long been required to hand over technological secrets before being allowed to conduct business in the Chinese market, the fear is now that many of China’s major foreign acquisitions have been funded by state-backed institutions. Indeed, the FT reports that the message conveyed by the Geely chairman to the German media, that the company wanted to cooperate with Daimler, was not the one he gave to his home audience which was that the action was designed to “support the growth of the Chinese auto industry” and “serve our national interests.”

Seen in these terms, it is hardly a surprise that the US feels that it needs to take some form of action. But despite Trump’s rhetoric to the contrary, it is hard to believe that the US really wants to embark on a major trade war. The fact that China responded with tariffs on only $3bn of imports from the US suggests that it is not willing to escalate the problem either. In the grand scheme of things, the US actions will have next to zero impact on Chinese GDP. A 25% tariff on $50bn of exports amounts to a total hit of $12.5bn which is insignificant in a Chinese economy whose output is valued at $12 trillion. But it is what comes next that matters.
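The back-of-the-envelope arithmetic is worth making explicit. The figures are the rounded ones quoted above, and treating the full tariff revenue as a loss of Chinese output makes this a crude upper bound rather than a proper model-based estimate.

```python
# Rough scale of the announced measures, using the rounded figures
# from the text. Treating the whole tariff bill as lost Chinese
# output overstates the hit, so this is an upper bound.
tariff_rate = 0.25
targeted_exports_bn = 50.0       # $bn of Chinese goods targeted
china_gdp_bn = 12_000.0          # approx. Chinese GDP, $bn

hit_bn = tariff_rate * targeted_exports_bn   # $12.5bn
share_of_gdp = hit_bn / china_gdp_bn         # roughly 0.1% of GDP

assert hit_bn == 12.5
assert round(100 * share_of_gdp, 2) == 0.1
```

Even on this deliberately pessimistic accounting, the direct hit is around a tenth of a percent of Chinese output, which is why the escalation path matters far more than the initial salvo.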

In assessing the outcomes, I am indebted to some modelling analysis conducted by Bloomberg analysts using the NiGEM global macro model. In its first scenario, Bloomberg assumed that the $50bn figure becomes a US revenue target, reflecting the estimated damage done to the economy by intellectual property theft. Yet even this outcome, four times larger than the $12.5bn hit implied by the measures actually announced, would cost China only 0.2% of GDP by 2020. Bloomberg thus concluded that China would be better off not retaliating, because the economic losses resulting from the inflation generated by higher import tariffs would exceed this amount. In a second simulation exercise, Bloomberg assessed the impact of a 45% tariff on all Chinese imports – a figure that Trump happened to mention on the campaign trail. This resulted in a 0.7% hit to GDP by 2020, which they concluded would “not be disastrous.”

But the real problem comes when the tariff war goes global and pulls in countries other than China. An across-the-board rise of 10% in US import tariffs, met with a similar response by all the US’s trading partners, results in a 0.5% drop in Chinese output but a 0.9% drop for the US. This highlights the self-defeating nature of tariff wars and results from the fact that, as Bloomberg pointed out, “the tariffs affect 100% of US trade, but for China and other countries, [they] only impact bilateral trade with the US.”
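The asymmetry is easy to see with a toy example (the trade figures below are made up purely for illustration, not actual trade data):

```python
# Why a global tariff war hurts the US disproportionately: US tariffs
# fall on ALL US imports, while each partner's retaliation falls only
# on its own bilateral imports from the US.
us_total_imports = 2400e9        # hypothetical: total US imports from everyone
partner_imports_from_us = 150e9  # hypothetical: one partner's imports from the US

tariff = 0.10
us_trade_taxed = tariff * us_total_imports            # spread across 100% of US trade
partner_trade_taxed = tariff * partner_imports_from_us  # bilateral trade only

print(f"Trade taxed on the US side:      ${us_trade_taxed / 1e9:.0f}bn")
print(f"Trade taxed by a single partner: ${partner_trade_taxed / 1e9:.0f}bn")
```

Each individual retaliator taxes only a slice of its trade, but the US ends up taxing (and being taxed on) all of its own.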

On the surface, the optimal Chinese response to higher tariffs, which in aggregate amount to little more than a gnat’s bite, would be to ignore them. Since China continues to grow faster than the US, it will – in the not-too-distant future – overtake the US as the world’s largest economy and will then be in a position to retaliate more effectively. In any case, there are other ways to respond. China is also the largest holder of US Treasury securities (chart). It could thus tweak the tail of the US by selling Treasuries, putting upward pressure on US longer-term interest rates.


But this issue is about more than just economics. It is a tale of two alpha economies flexing their political muscle, which runs the risk of miscalculation. Unlike Japan in the 1980s, when the US tried to exert pressure using similar tactics, China is a much more potent economic and political rival. It is also not politically allied with the US in the way that Japan was. I have never subscribed to the idea that China and the US are doomed to fall into the Thucydides Trap. But this is a time for cool heads and, as I noted in late 2016, it is at times like these that we will miss the rationality of an Obama.

Monday, 19 March 2018

Should we continue spending a penny?


One of the policy ideas floated in the wake of last week’s UK Spring Statement was the possibility of scrapping the lowest denomination coins (the 1p and 2p pieces). According to research cited by the Treasury, “surveys suggest that six in ten 1p and 2p coins are used in a transaction once before they leave the cash cycle. They are either saved, or in 8% of cases are thrown away” (see chart). Since the Royal Mint has to produce and issue additional coins to replace those falling out of circulation, and because “the cost of industry processing and distributing low denomination coins is the same as for high denomination coins”, this not unreasonably raises the question of whether we need the lowest denomination coins at all.


I have to confess that I have long wondered the same thing, given that over the years I have collected large quantities of pennies in jars, which weigh a lot but have little monetary value. Moreover, the amounts in coin which vendors are legally obliged to accept in the UK are very low. For example, the legally acceptable maximum payment in 1p and 2p coins is a mere 20p (it can be more, at the discretion of the payee). For 5p and 10p coins, that limit rises to £5. These limits are rather arbitrary: I can legally use only 20 x 1p coins in one transaction whereas I can use 100 x 5p coins (which are also irritatingly small). I also recall that the one time, many years ago, when I wanted to cash in my pile of bronze and took it down to the automatic machine at my local supermarket, the nominal value of the coins was something like £42 but I paid a £2 commission fee, corresponding to almost 5% (and yes, I know I should have taken it to a bank).
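For the record, the commission on that coin machine works out as follows (using the approximate figures above):

```python
# Commission charged by the supermarket coin-counting machine.
nominal_value = 42.0  # approximate value of the coins cashed in (GBP)
commission = 2.0      # flat fee charged (GBP)

fee_rate = commission / nominal_value  # just under 5%
print(f"Commission rate: {fee_rate:.1%}")
```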

So it should be clear that I am not a fan of coinage that clutters up space for little return. And the UK has form when it comes to taking small denomination coins out of circulation. Way back in 1960 the farthing was removed from circulation. For those unfamiliar with UK coinage, the farthing was equivalent to a quarter of a pre-decimal penny – around one-tenth of a modern penny. In 1984, the halfpenny was taken out of circulation and I do not recall any great wailing and gnashing of teeth at the time. Moreover, since 1984 the price level as measured by the CPI has increased by a factor of 2.5, so one penny today is worth around 0.4p in 1984 prices. In other words, the real value of the penny today is less than that of the halfpenny in 1984.
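The deflation arithmetic behind that comparison is simple enough to verify (a sketch using the factor-of-2.5 figure quoted above):

```python
# Deflating today's penny back to 1984 prices.
cpi_factor = 2.5   # rise in the price level since 1984
penny_today = 1.0  # pence

real_value_1984 = penny_today / cpi_factor  # 0.4p in 1984 prices
halfpenny = 0.5                             # the coin withdrawn in 1984 (pence)

print(f"1p today is worth {real_value_1984:.1f}p in 1984 prices")
print(f"Less than the old halfpenny: {real_value_1984 < halfpenny}")
```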

One of the arguments which is increasingly used in favour of demonetising the smallest unit is that its face value is often less than the cost of production. In the US, for example, it cost 1.5 cents to mint each penny in 2016, and although the US continues to issue the penny, the Bank of Jamaica recently announced it would phase out the one cent coin on the grounds that it is too expensive to make. The Royal Mint in the UK has not revealed how much it costs to strike a penny in the UK, but so far as we know it is less than its face value.

But the penny has a strong hold on the UK public imagination and last week’s suggestion was met with such a howl of outrage that the government was forced to back down. The penny has been around for a thousand years in various guises, although it was not until 1714 that it began its transformation from a little-used small silver coin to the bronze item that has become such a staple of the UK coinage system. Some people are concerned that abolishing the penny would encourage retailers to round up prices – and there may be some truth to that – but a better argument against abolition is that low-income households are bigger users of cash and would likely be hit disproportionately hard. Indeed, it is notable that the Treasury’s call for evidence was based on the notion that we are increasingly moving towards electronic cash – but whilst that may be true for most, it is not so for all. The charity sector was also quick to argue that the collection bucket is a useful place to get rid of low denomination coins.

But as much as I value tradition and have no desire to impoverish the less well-off, the arguments in favour of keeping the penny are not strong enough to save it in my view. Inflation has eroded its usefulness – as anyone who has tried to spend a penny in recent years can testify. It ought to go the way of the threepenny bit, the sixpence and the half crown – not to mention the pound note. In any case, there is still the 2p – why keep the penny when you can have two of them in one coin?