Wednesday, 31 January 2018
Janet Yellen: A job well done
Today’s FOMC meeting was effectively the last act of Chair Janet Yellen, whose four-year term expires on 3 February. It is unusual for a Chair to serve only a single term: hers is the shortest tenure since G. William Miller’s ill-fated 17-month spell in 1978-79 and one of the shortest since 1934 (only Miller and the curtailed chairmanship of Thomas McCabe were briefer). The Fed Chair is in the gift of the President, so he is quite within his rights not to renew Yellen’s term. Nonetheless, I cannot help thinking that the Administration may be missing out by not giving her another four years.
Compared to her two immediate predecessors, Yellen came across as relatively unflashy and low key. She never sought the limelight in the same way as Alan Greenspan, and as good an academic economist as she is, Yellen never seemed to exude the same star quality as Ben Bernanke (maybe that’s an unfair characterisation but it is purely a personal impression). Yet in her understated way, Yellen has moved the dial further forward as the Fed seeks to move away from the crisis measures of 2008-09. In many respects, Bernanke’s inheritance was the result of years of loose monetary policy and a relaxed attitude to markets under Greenspan. Accordingly, much of his eight years were spent trying to prevent the economic and financial system from collapsing, and Bernanke scored high marks for recognising the parallels with the Great Depression and introducing a massive monetary expansion to prevent a repeat.
When Yellen took over in 2014 the economy was on a solid footing but monetary policy was still in emergency mode, with interest rates at zero and the central bank balance sheet all but maxed out. The decision to start raising interest rates in late-2015 – the first increase in almost a decade – passed off without incident and an additional four increases, each of 25 bps, have not done any damage to the economy or to markets. Yellen also presided over the decision to start running down the Fed’s balance sheet, although it will be up to her successor (Jay Powell) to fully implement it.
On the whole, it is likely that Yellen will be judged as a safe pair of hands who navigated the Fed through some difficult waters. It appears that her only failing was to be a Democrat at a time when an avowedly Republican Congress was in place. Whilst it was conservative lawmakers’ distrust of the Fed’s QE policy, fully supported at the time by Yellen, which counted against her, it is ironic that she has overseen the start of balance sheet unwinding – a process which has never been tested in the modern era.
As of next week, Jay Powell will be occupying the big chair and although he is widely seen as the continuity candidate, he may have his work cut out. For one thing, the US expansion is already long in the tooth and, assuming nothing goes wrong beforehand, by May it will become the second longest expansion in recorded history. Quite how the Fed will respond if the economy starts to wobble may be an issue for the latter months of 2018. Then there is the question of how the Fed deals with any market wobble. For the last nine years, markets have generally only gone in one direction – upwards – but with valuations looking stretched it may not be too long before the bubble of optimism starts to deflate.
In the past, Greenspan and Bernanke were not averse to nudging monetary policy to help markets along. Whether Powell will act in the same way remains to be seen. But Janet Yellen will not be around for these issues to blot her copybook, which is a pity because the true test of how good central bankers are at their job is determined by their reaction to adversity. So we will never know how good she could have been, but as it is, Yellen can reflect on a job well done over the last four years.
Tuesday, 30 January 2018
Brexit: The (un)civil war continues
It has been clear all along that Brexit has little to do with economics and everything to do with a view which a certain group within the Conservative Party has of the UK and its place in the world. It is also not news that the form of Brexit which this group intends to pursue is not one which large parts of the electorate voted for – even those who voted in favour of leaving the EU. But it is increasingly evident that this is becoming an obstacle to the smooth running of government as the fissures within the Conservative Party threaten to split it apart.
Last week, Boris Johnson again broke ranks with his cabinet colleagues via a series of pre-briefed news articles by calling for additional NHS spending, with newspaper reports suggesting he was pushing for an extra £100m per week (£5bn per year) after Brexit. Recall that Johnson was one of the prime supporters of the claim that the UK would be able to save £350m per week after leaving the EU, a large proportion of which could be channelled towards health spending. Whatever you think of Johnson as a politician, his call for additional NHS spending is well made. According to the OBR’s projections, total health spending is set to decline from 7.2% of GDP in FY 2017-18 to 6.8% by 2019-20. Simply to hold spending constant as a share of GDP implies increasing funding by £166m per week by 2020. But as is often the case with Johnson, there is more to this than meets the eye, and the rest of his cabinet colleagues clearly did not think much of his attempts to hijack the political debate.
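As a rough illustration of where a figure of that order comes from, the sketch below compares spending held constant as a share of GDP with spending on the projected declining share. The GDP number is a purely illustrative placeholder (it is not the OBR’s projection), so treat the output as indicative only.

```python
# Back-of-the-envelope check: extra weekly health spending needed by FY 2019-20
# to hold the budget constant as a share of GDP rather than letting the share
# fall as projected. The GDP figure is an illustrative placeholder, not an OBR number.

gdp_2020 = 2_160e9   # assumed nominal GDP in FY 2019-20 (GBP) - placeholder
share_2018 = 0.072   # health spending share of GDP, FY 2017-18 (figure quoted above)
share_2020 = 0.068   # projected share by FY 2019-20 (figure quoted above)

spend_constant_share = share_2018 * gdp_2020   # spending if the share is held constant
spend_projected = share_2020 * gdp_2020        # spending on the projected declining share

extra_per_week = (spend_constant_share - spend_projected) / 52
print(f"Extra spending needed: ~GBP {extra_per_week / 1e6:.0f}m per week")
# With these assumptions the gap comes out in the region of GBP 160-170m per week,
# broadly consistent with the GBP 166m figure cited above.
```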
Meanwhile, Chancellor Philip Hammond is the focus of ire from the Leavers following his speech at the World Economic Forum in which he called for a soft Brexit that would result in only “very modest” changes to the UK’s relationship with the EU. The prime minister has been called upon to sack her Chancellor as concerns mount amongst pro-Leave MPs that the UK is “diluting Brexit” and that it may become an EU “vassal state” during the transition period. This comes at a time when leaked reports prepared for the government suggest that in the absence of a trade deal with the EU, output would be 8% below the pre-referendum baseline over a 15 year horizon. A free trade agreement with the EU – the current favoured option – would result in a 5% decline in output whilst the soft Brexit option (i.e. continued single market membership) would result in a 2% decline in GDP. All of which comes after Brexit Secretary David Davis refused to release impact assessments covering 58 sectors of the economy when requested to by parliament, claiming they did not exist.
All this is, to be sure, a deeply unsatisfactory state of affairs and it highlights the weakness of Theresa May’s position. Despite Boris Johnson’s constant flouting of collective cabinet responsibility, such is the strength of his grassroots support that the prime minister is unable to remove him without jeopardising her own position. If she were to bow to the minority group of hardline MPs calling for Hammond’s departure, the other half of the party would similarly revolt. There is thus mounting concern that there may be a challenge to May’s leadership – a procedure which requires the support of just 48 MPs – although since most Conservatives believe this would hasten their exit from government, this is not anybody’s favoured scenario.
But the Conservatives only have themselves to blame. It was their party that called the referendum, and their government which decided the terms on which they would seek to leave despite being warned of the dangers. Theresa May has compounded the problem by not offering any leadership on the Brexit issue. She has not articulated what she wants from the EU, other than the closest possible relationship, and in the words of FT commentator Philip Stephens, “Mrs May, it is obvious, has no organising vision of the shape of Britain’s post-Brexit relationship with its own continent ... As things stand, history will remember her as an accidental prime minister who foolishly squandered a parliamentary majority in an election she had no need to call — the worst prime minister of modern times with the exception, of course, of her immediate predecessor, David Cameron.”
Former LibDem leader Nick Clegg, also writing in the FT, noted that, as it currently stands, the proposed transition period, which will run beyond the end of the Article 50 period in March 2019, will leave the UK powerless: a member of the EU in all respects but one – the ability to have any say in writing EU legislation. This is very much the position Norway finds itself in now. Why this comes as any surprise to anybody beats me. I pointed out in 2015 that such a Norwegian outcome “would appear to be even less optimal than that which the UK faces today.”
There is increasingly little faith in the government’s ability to square this circle. It clearly appears that Theresa May gambled on the UK’s ability to quickly achieve a deal with the EU without thinking through the implications of what this might entail. In that sense, she is very much in tune with those elements of her party who have spent much of their political career either ignoring or misreading European issues. And now she is their prisoner.
Monday, 29 January 2018
Reflections from snow-topped mountains
One of the main strands of Donald Trump’s appeal to the American public is that he is an outsider. Of course, that is not true – and never has been. As the president said in a speech in Arizona last year, “I was a good student. I always hear about the elite. You know, the elite. They're elite? I went to better schools than they did. I was a better student than they were. I live in a bigger, more beautiful apartment, and I live in the White House, too, which is really great.” So it probably should not have been a great surprise that he became the first sitting president in almost two decades to attend the World Economic Forum’s jamboree for the great and the good in Davos[1].
Not surprisingly, Trump was the star of the show and the positive message Europe heard was that “America First does not mean America alone.” But he also pointed out that the world "cannot have free and open trade if some countries exploit the system" and that Washington "will no longer turn a blind eye to unfair trade policies." The WEF’s annual report indeed highlighted that the risk of some form of conflict was high on the list of issues which could derail the current benign economic environment. The WEF notes that “charismatic strongman politics is on the rise”, hastening the move away from the rules-based multilateralism which has underpinned the peace and prosperity of the post-WWII economic system. The US has blocked appointments to the WTO’s seven-member Appellate Body during Trump’s tenure, and two seats are currently waiting to be filled. A weakening of the WTO’s ability to resolve disputes does nothing to assuage concerns that trade tensions between the US and China could yet become a major problem.
But one of the biggest curiosities of the Davos bash is that it should happen at all. One of the reasons why “strongman politics” is on the rise is that many millions of ordinary voters feel left behind by the advance of the global capitalist economy, which appears to benefit the very few at the expense of the many. Two weeks ago BlackRock CEO Larry Fink distributed a letter addressed to the CEOs of global companies arguing that “society is demanding that companies, both public and private, serve a social purpose.” Economists such as Milton Friedman would not have agreed. Writing in 1970, Friedman argued that for a business executive to exercise social responsibility in the course of their work “must mean that he is to act in some way that is not in the interest of his employers”. Businesses which did anything other than maximise profits were “unwitting puppets of the intellectual forces that have been undermining the basis of a free society these past decades” and were guilty of “analytical looseness and lack of rigor.”
Arguably Friedman’s view is flawed because it fails to distinguish between short-term and long-term profit maximisation and ignores the non-pecuniary benefits which flow from social activities. Companies which simply attempt to maximise profits each year, whilst failing to treat their customers and employees with respect, will fail. But that is a different proposition to suggesting that companies exist to serve a social purpose. In any case, many large firms have long since taken ideas of social responsibility on board in drawing up their corporate sustainability programmes. Corporates do have a duty to act responsibly, of course, and those which fail to do so are held up to scrutiny. The problems faced by Volkswagen following the revelation that it falsified diesel emissions data highlight that there are costs associated with acting in a non-socially responsible manner.
But the more I listened to what Fink had to say, the less I was convinced by his message. Another of his Davos pronouncements was that too many people are excluded from the workings of financial markets as a result of financial illiteracy and more work has to be done to ensure “they don't feel frightened of moving their money into long term instruments.” Given that Fink is the CEO of a primarily passive investment fund, there is a certain irony (to say the least) in his desire to get more people involved in financial markets. His point that a lack of involvement ultimately hampers efforts to generate decent retirement incomes was valid. But at a time when many people are finding their incomes being severely squeezed, they simply do not have the excess resources to devote to financial investing – a problem the likes of Fink do not have.
I have no doubt that Fink’s views – and those of his fellow grandees – are motivated by a genuine concern that the system from which they have benefited is under threat, and that they believe there is a strong case for redistributing some of the wealth. Perhaps they are not aware of how their argument in favour of caring capitalism comes across – it does sound like a ‘let them eat cake’ view. Many people simply feel that they are being screwed and want a piece of the pie. That said, when it falls to the rich to talk about solving global inequality problems, it is small wonder that the ordinary voter has little faith in governments.
[1] In the interests of disclosure, I should point out I was not there. I assume my invitation was lost in the post.
Wednesday, 24 January 2018
Whose data is it anyway?
To the extent that economics is concerned with the study of how resources are allocated, a system of property rights impacts on the way these resources can be used. For example, if a person owns a piece of land they can choose (within limits) what to do with it, e.g. build a house or let it lie fallow. Other people have no right to determine how the land can be used. In a modern market economy, transactions between individuals involve the transfer of property rights and form the basis of the price determination process we see at work every day. These rights are backed up by a legal system designed to enforce the entitlement to a given bundle of goods (or services) and to record their transfer from one person to another.
However, in the digital age the delineation of property rights has become much more blurred. I was reminded of this recently by an article in The Economist which quoted Nikhil Pahwa, an Indian digital-rights activist, as saying “When they say, ‘Big data is the new oil,’ I answer, ‘But my data is not your resource.’” The context of his quote is India’s biometric ID scheme, Aadhaar, whose database is apparently rather leaky with the result that many people’s personal details find their way into the public domain. But it could equally be applied to the likes of Facebook, which owns the world’s largest personal dataset. At issue is the question: whose data is it?
Technically, of course, it belongs to the individual who posted it. But Facebook’s terms of service state quite explicitly that “you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook.” In other words, although you own the content, Facebook has carte blanche to do what it wants with it. From the company’s perspective this is great because it has a huge database upon which it can let loose its AI algorithms to generate ever more sophisticated consumer profiles. One of the great concerns expressed by network campaigners is that such huge databases act as a barrier to entry to smaller companies attempting to break into a particular market, because the lack of access to data means that their consumer profiling will always be inferior.
And this takes us right back to Pahwa’s point: Is it right that the data which we own, and which we give away for free, should be used by a profit maximising organisation to enrich shareholders? In their defence, big data companies argue that they do not charge for their services – Google clicks do not cost the user, so in that sense we are getting something for nothing. Except that is not quite true because we pay for it by giving up some data about ourselves, which may be trivial in isolation but when combined with the billions of pieces from other users, goes to make up a huge mosaic which Google can use to target its adverts more effectively.
In an interesting paper by Imanol Arrieta and co-authors, the argument is made that data providers should be paid for the information they yield so that they are compensated for their contribution to the world of AI – information which might in due course be used in the very systems that displace human workers. As data hoarding by Big Data companies increasingly raises public interest concerns, it is likely to provoke the interest of regulators keen to cut down the monopoly power of Google, Facebook et al. It would not be the first time that regulators have taken an interest in tech-related issues: Twenty years ago, the US government opened antitrust proceedings against Microsoft, accusing it of establishing a monopoly position and engaging in anti-competitive practices. And if data really is the new oil, as many commentators contend, recall how in the early twentieth century the US government forced the breakup of Standard Oil, accusing it of being an illegal monopoly.
Big Data companies are already potentially feeling the heat from the US Federal Communications Commission, which voted in December to dismantle its existing net neutrality rules. These rules prevent broadband suppliers from treating different types of internet traffic differently, and the likes of Google, Facebook et al are concerned that changes to the rules could impact upon their business models if they are discriminated against by internet service providers (ISPs). As an aside, there are many who argue that net neutrality impinges on the property rights of ISPs, but that is a subject for another day.
In order to alleviate regulators’ concerns, it might be prudent for the Big Data outfits to take some pre-emptive actions which show that they are taking mounting social concerns more seriously. For example, there is a case for suggesting that at least part of the data they collect could be shared across a range of platforms, thus creating an open-source database (after suitable efforts have been made to anonymise it). After all, it is a public resource – it is “our” information. Of course, this might mean an end to much of the apparently “free” content currently available online. However, both the tech industry and society as a whole are going to have to do some hard thinking about how to balance privacy issues against the cost of online services. If this does not happen, it is likely that government will take the decisions for us, which may not be to anyone’s liking.
Saturday, 20 January 2018
Public-private partnerships: An assessment
Modern economies depend on infrastructure that we generally take for granted. Indeed, we often only notice it when it fails. But the capital investment to build the roads, rail and hospitals upon which we depend does not come cheap, nor indeed does the funding required to run them on a day-to-day basis. Increasingly, therefore, governments have turned to the private sector to provide the required funding.
Such schemes generally involve a private investor assuming financial, technical and operational risk in return for a guaranteed fixed return from the public sector which acts as the final consumer of the service provided. This risk transfer puts the onus on the private sector to deliver a project as efficiently as possible in order to maximise the difference between the initial outlay and the revenue stream provided by the government. As a consequence, the public sector is off the hook for any cost overruns associated with big capital investment projects. A further advantage for the government is that much of the finance for such projects is treated as an off-balance sheet item in the public accounts which obviously flatters the public sector debt position, and provides an incentive for governments to put projects out to private sector tender.
In addition to capital investment, numerous day-to-day functions (e.g. the cleaning of public buildings, rubbish collection, IT and even law enforcement in the US) are increasingly contracted out to the private sector. The idea is that opening up the bidding process to competitive tendering puts downward pressure on costs so that we get the same services as before, only at lower cost. But the practice is rather different. A recent report by the UK National Audit Office found “no evidence of operational efficiency” in the hospital sector and that “the cost of services, like cleaning, in London hospitals is higher under PFI (Private Finance Initiative) contracts.” The NAO also found evidence that in an attempt to meet pre-specified levels of service “the contractually agreed standards under PFI have resulted in higher maintenance spending in PFI hospitals.”
Another problem, which was thrown into stark relief this week following the announcement that Carillion Plc – a major UK government contractor – has gone into liquidation, is the extent to which risk is really transferred away from the public sector. Although the company has ceased to trade, the economy still depends on many of the services which it provided. If no other buyer is found and the government does not step in, services such as the running of schools and prisons, the maintenance of railway infrastructure and the construction of major hospital projects, will cease. This is unthinkable. After all, Carillion ran all the catering, cleaning, laundry and car parking at the James Cook Hospital in Middlesbrough (NE England). A collapse of ancillary services will mean the closure of the hospital, which the government simply cannot allow to happen. So it could be forced to step in.
The UK railway industry has proven to be particularly troublesome with regard to private sector participation. The system is designed such that operators bid for a licence to run a rail franchise for a fixed period, and it is their responsibility to balance costs and revenues to ensure they can make a profit over the lifetime of the contract. There have been numerous instances of problems in the bidding process, including dubious bids and companies suffering financial difficulties. The latest such occurrence took place in late 2017, when the government allowed the private sector operator of the main London-Edinburgh route simply to walk away from its contract without any penalties after it overbid for the franchise, with the result that it cannot now make sufficient profit from the deal. Virgin Trains will not now pay a reported £2 billion, which is the sum outstanding over the remainder of the franchise which runs until 2023.
It has been widely suggested that this was allowed to happen for political reasons. A company that walks away from its obligations is unable to bid for a tender for the next three years. With a number of other franchises coming up for renewal over that period Virgin would be ineligible to participate, which would be bad for them and reduce the government’s choice of partners nominally capable of running such a franchise. Whatever the truth of the matter, the government’s action creates moral hazard by undermining the basis of private sector participation if taxpayers are acting as the ultimate backstop.
There are thus serious questions as to whether public-private partnerships (PPPs) deliver value for money, particularly when the government can raise finance at a lower cost than the private sector – the UK government can borrow at rates just over 1% whereas the private sector weighted average cost of capital (WACC) is above 4% (chart). Moreover, PPPs generally deliver a rate of return between 10% and 15%, implying that PPPs are very lucrative for the private sector. This might be acceptable if private investors were bearing all the risk, but where the government is forced to act as a backstop this is clearly not a good deal for taxpayers. Consequently, serious consideration has to be given as to whether PPPs are meeting the needs of taxpayers. This does not necessarily mean that they should be abandoned altogether, but they need to be used more judiciously to meet public investment needs.
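To see why that financing-cost gap matters, here is a stylised comparison of the annual payment required to service the same capital outlay at a rate close to government borrowing costs and at a rate closer to the private sector WACC. The project size, horizon and rates are hypothetical, chosen only to mirror the orders of magnitude quoted above.

```python
# Stylised comparison of annual financing costs for an identical project funded
# at public borrowing rates versus a private-sector cost of capital.
# All inputs are hypothetical illustrations, not figures from any actual PFI deal.

def annual_payment(principal: float, rate: float, years: int) -> float:
    """Level annual payment that repays `principal` over `years` at interest `rate`
    (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

capex = 500e6   # assumed capital cost of the project (GBP)
years = 30      # assumed contract length, typical of PFI deals

public = annual_payment(capex, 0.015, years)   # government borrowing at ~1.5%
private = annual_payment(capex, 0.045, years)  # private WACC at ~4.5%

print(f"Publicly financed:  ~GBP {public / 1e6:.1f}m per year")
print(f"Privately financed: ~GBP {private / 1e6:.1f}m per year")
print(f"Extra cost over the contract: ~GBP {(private - public) * years / 1e6:.0f}m")
```

On these assumptions the privately financed route costs roughly 50% more each year, and the cumulative premium over the life of the contract runs to several hundred million pounds – which is why the question of who really bears the risk matters so much.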
Wednesday, 17 January 2018
The rusting of the iron lady
Twelve months ago to the day, Theresa May stood up in Lancaster House in London and outlined the nature of the future relationship she was seeking with the EU. As I wrote at the time, “anyone looking for any great insight as to how the UK will be able to achieve the objectives she set out will have been disappointed. Mrs May in effect gave a wish list of what she wants for the UK, but the nature of the deal itself will depend on the compromise which can be achieved with the EU. And on that front, we will have to await the EU’s opening play to assess how realistic her goals are.”
One failed election campaign later – a campaign which severely weakened the prime minister’s political clout – the UK remains deadlocked in negotiations with an EU27 which has maintained an impressively coherent stance, despite attempts by the UK to sow division. The UK has also fallen into line with EU27 demands on its three main negotiation points (the rights of EU citizens, the Irish border question and the Brexit bill). Twelve months ago, the UK government was also awaiting the Supreme Court ruling on whether parliament would be allowed a vote on the triggering of Article 50. In the event, it was – despite the government’s initial position that this was not necessary. The EU Withdrawal Bill, which is now in the process of going through parliament, has also been watered down, with parliament to be given a vote on the final terms of the EU deal in yet another concession to MPs.
Indeed, looking back over the past year it is clear that the government has been forced to give way on a number of areas. Theresa May initially believed that the government could simply transcribe all EU law onto the UK statute book, and strike out those parts of the legislation it did not like, before agreeing an exit deal with the EU without any parliamentary oversight. She also told us in September 2016 that she would not give a “running commentary” on Brexit negotiations. But thanks to a rearguard action by many MPs, parliament now has a chance to scrutinise the Brexit deal whilst the House of Commons Library has published reports on the state of negotiations after each round of Brexit talks. This, of course, is precisely what should happen. After all, one of the key Brexit selling points was that the UK parliament should oversee UK laws (though strangely enough, the likes of the Daily Mail always kicked up huge objections any time the government’s authoritarian Brexit approach was challenged).
With numerous pro-Leave campaigners expressing regrets about how issues have been handled since the referendum, most notably the surprise suggestion by Nigel Farage that he would not necessarily be opposed to a second plebiscite, it does make you wonder how events will unfold over the next twelve months. As I noted last week, a second referendum is unlikely anytime soon. However, it is likely that a compromise agreement will be reached which postpones the final exit decision until end-2020, thus giving industry – including the so-far neglected financial services sector – more time to prepare for an uncertain future.
The government itself is in a precarious position, and survives in office only with the support of a confidence and supply agreement with the Democratic Unionists. In more normal times, it is hard to imagine Theresa May holding onto her position since she has proved, as The Economist wrote last week, to be anything but a safe pair of hands whilst “her biggest problem is more fundamental: she doesn’t have any ideas.” But she maintains the support of the Conservative party because she is the least divisive candidate in a party which is split down the middle on Brexit and which is paranoid that a change of leader would open the door to the opposition Labour party. The gossip regarding May’s survival chances continues to swirl and I have no idea whether she will still be in office in a year’s time. But over the next twelve months, we can look forward to the crossing of more red lines as the process of aligning the “will of the people” with the needs of the economy continues.
Monday, 15 January 2018
Making sense of macroeconomics
The reputation of macroeconomics took a battering in the wake of the global financial crisis after the discipline failed to predict the Great Recession. Although much of the criticism by outsiders is misplaced, there are grains of truth in it, and many academic economists would agree that there are areas where economics needs to improve.
This collection of papers from the Oxford Review of Economic Policy looks at the state of macroeconomics today and provides a range of opinions from leading macroeconomists. More importantly, it shines the spotlight on those areas where economics can be seen to have failed and offers some suggestions about how to take us forward. (The papers are not particularly technical and as such are relatively accessible. Credit should also go to the publishers, Oxford University Press, for taking this volume out from behind the paywall.)
David Vines and Samuel Wills make the point that macroeconomics has been here before – in the early 1930s and again in the 1970s, and both times the discipline evolved to try and make sense of changed circumstances. But in order to identify what has to change, we need to know where we are and what is wrong. At the centre of the debate stand New Keynesian Dynamic Stochastic General Equilibrium (DSGE) models, which form the workhorse model for policy analysis.
The general consensus is that they are not fit for purpose – a point I have made before (here and here). Such models are based on microfounded representative-agents – a theoretical approach which postulates that there is a typical household or firm whose behaviour is representative of the economy as a whole. I have always rather struggled with this approach because it assumes that all agents respond in the same way – something we know is not true in the case of households given differing time preferences, depending on age and educational attainment. An additional assumption that underpins such models is that expectations are formed rationally – something we know is not always true.
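To make the jargon concrete, these two assumptions typically enter through the consumption Euler equation at the heart of such models. The version below is a generic textbook form with CRRA utility, shown purely for illustration; it is not an equation taken from any of the papers in the volume.

```latex
% Consumption Euler equation of a representative household with CRRA utility
% (generic textbook form, for illustration only):
\[
  C_t^{-\sigma} \;=\; \beta \, \mathbb{E}_t\!\left[ (1 + r_{t+1})\, C_{t+1}^{-\sigma} \right]
\]
% C_t          : consumption of the single representative household
% \sigma       : coefficient of relative risk aversion
% \beta        : subjective discount factor
% r_{t+1}      : real interest rate between periods t and t+1
% \mathbb{E}_t : rational (model-consistent) expectations operator
```

One household, one first-order condition and one model-consistent expectations operator: these are precisely the representative-agent and rational expectations assumptions which, as discussed below, many contributors argue need to be relaxed.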
Thus the consensus appears to be that these two assumptions need to be relaxed if macroeconomics is going to be more relevant for future policy work. You might say that it is about time. Indeed it is a sad indictment that it took the failure of DSGE models during the financial crisis to convince proponents that their models were flawed when it was so obvious to many people all along.
In order to understand this failure, Simon Wren-Lewis offers an explanation as to why this form of thinking became so predominant to the exclusion of other types of model. He argues that the adoption of rational expectations was “a natural extension of the idea of rationality that was ubiquitous in microeconomic theory” and that “a new generation of economists found the idea of microfounding macroeconomics very attractive. As macroeconomists, they would prefer not to be seen by their microeconomic colleagues as pursuing a discipline with seemingly little connection to their own … Whatever the precise reasons, microfounded macroeconomic models became the norm in the better academic journals.” Indeed, Wren-Lewis has long argued that since academics could only get their work published in top journals if they went down this route, this promoted an “academic capture” process which led to the propagation of a flawed methodology.
Wren-Lewis also makes the point that much of so-called cutting edge analysis is no longer constrained to be as consistent with the data as was once the case. He notes that in the 1970s, when he began working on macro models “consistency with the data was the key criteria for equations to be admissible as part of a model. If the model didn’t match past data, we had no business using it to give policy advice.” There is, of course, a well-recognised trade-off between data coherency and theoretical consistency, and I have always believed that the trick is to find the optimal point between the two in the form of a structural economic model. It does not mean that the models I use are particularly great – they certainly would not make it into the academic journals – but they do allow me to provide a simplified theoretical justification for the structure of the model, in the knowledge that it is reasonably consistent with the data.
Ultimately one of the questions macroeconomists have to answer more clearly – particularly to outsiders – is what are we trying to achieve? Although much of the external criticism zooms in on the failure of economists to forecast the future, what we are really trying to do is better understand how the economy currently works and how it might be expected to respond to shocks (such as the financial crisis). Olivier Blanchard believes that “we need different types of macroeconomic models for different purposes” which allows a continued role for structural models, particularly for forecasting purposes. Whilst I agree with this, I have still not shaken off the conviction, best expressed by Ray Fair back in 1994 (here p28), that the structural model approach “is the best way of trying to learn how the macroeconomy works.” Structural models are far from perfect, but in my experience they are the least worst option at our disposal.
Thursday, 11 January 2018
Double or quits?
We are less than two weeks into the new year but a number of very odd things have already taken place. Arch-protectionist Donald Trump is prepared to rub shoulders with the global elite in Davos whilst Steve Bannon, who promised to “go nuclear” on those opposed to Trump’s populist nationalist agenda following his White House departure, has been fired by Breitbart News. Perhaps most surprisingly of all, Nigel Farage has suggested that a second referendum on the UK’s EU membership might be necessary to resolve the Brexit question once and for all.
This comes against the backdrop of a renewed campaign against Brexit. As Farage put it, “My mind is actually changing on all this. What is for certain is that the Cleggs, the Blairs, the Adonises will never, ever, ever give up. They will go on whinging and whining and moaning all the way through this process. So maybe, just maybe, I’m reaching the point of thinking that we should have a second referendum on EU membership … I think that if we had a second referendum on EU membership we would kill it off for a generation.”
This follows the comments by Andrew Adonis, former chair of the National Infrastructure Commission, who resigned at the end of December arguing that “good government has essentially broken down in the face of Brexit” and who will now devote more time to the issue of a second referendum. Former prime minister Tony Blair took a slightly different tack, with his Institute for Global Change highlighting the economic costs that are already visible. He also made the valid point that 2017 was too early to rethink the Brexit strategy but by 2019 it will be too late: “Realistically, 2018 will be the last chance to secure a say on whether the new relationship proposed with Europe is better than the existing one.” Whatever people might think of Blair – and he is widely reviled for his role in involving the UK in unpopular conflicts in the Middle East – he remains a formidable centrist politician and it is hard to disagree with much of the IGC’s analysis (if only Blair had applied a similar level of rigour to the weapons of mass destruction question in 2003, he would have a claim to be one of the greatest peacetime prime ministers).
On the question of a second referendum, my guess is that it is most unlikely. Despite calls for a second vote to give a verdict on the terms of the final EU deal, it is improbable because: (i) neither the Conservative nor Labour parties support the idea and (ii) it is too soon to reopen the divisions created by the 2016 referendum. Add to this the fact that Theresa May and her government have invested so much time and credibility in delivering Brexit, and it is inconceivable that they would be open to calling a second plebiscite.
Nonetheless, it is astonishing to hear Farage make his suggestion. In his view “the percentage that would vote to leave next time would be very much bigger than it was last time round. And we may just finish the whole thing off.” That is a very bold statement and like many of Farage’s predictions, probably not true. Although the economy has held up better than anticipated, consumers are being squeezed by the Brexit-induced decline in real wages. Moreover, with the question of NHS funding and staff shortages currently so prominent, Blair points out that “applications from EU nurses to work in the UK have fallen by 89% since the referendum” and “nearly 1 in 5 NHS doctors from the European Economic Area have made concrete plans to leave the UK.” I have also pointed to survey evidence that suggests a rising trend in people believing that voting for Brexit may have been the wrong decision (here). Any attempt to re-run the referendum would likely result in a very tight race and it is far from clear how it would pan out.
But let us suppose that in order to clear the air the government does accede to this suggestion. What should it do? First and foremost, it should introduce a minimum participation threshold. A simple in-out referendum which results in a narrow win for one side is not sufficient. In order for change to come into effect, it would have to be ratified by at least 40% of all eligible voters, in the same way as the Scottish devolution referendum of 1979. Assuming the electorate is the same size as in June 2016, the Leavers would have to gain 6.8% more votes (almost 1.2 million). But even if this were to happen, Remain voters would still argue that the 40% threshold represents a minority of the eligible electorate. Thus, an additional constraint might be that in the event Leave gains less than 50% of all eligible votes, it must secure a victory margin of at least 10 percentage points. If you want a really funky solution, perhaps we could weight votes according to age. Although this undermines the principle of one person-one vote, on the basis that younger voters have more to lose there is an argument that their votes should count for more[1].
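As a rough check on that arithmetic – a minimal sketch only, using the official June 2016 totals (an electorate of 46,500,001 and 17,410,742 Leave votes) and assuming the electorate is unchanged – the 40% rule works out as follows:

    # Back-of-the-envelope check on the 40%-of-electorate threshold,
    # using the official June 2016 referendum totals.
    ELECTORATE = 46_500_001    # registered voters, June 2016
    LEAVE_VOTES = 17_410_742   # actual Leave tally

    threshold = 0.40 * ELECTORATE        # votes needed to satisfy a 40% rule
    shortfall = threshold - LEAVE_VOTES  # extra Leave votes required
    pct_increase = 100 * shortfall / LEAVE_VOTES

    print(f"40% threshold: {threshold:,.0f} votes")
    print(f"Shortfall: {shortfall:,.0f} votes ({pct_increase:.1f}% more than in 2016)")

On those figures the threshold sits at roughly 18.6 million votes, i.e. about 1.2 million (6.8%) more than Leave actually polled.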
I stress that this is all hypothetical. But if the government were to take up Farage’s suggestion it would be easy enough to put in place a system which makes it very difficult for the leavers to win. There would be howls of protest from Brexiteers that the rules of the game have changed. But if the decision is to be binding (and let us recall that the 2016 referendum was purely advisory) we would have to be damn sure that the case is watertight. Only then will we Remainers shut up.
[1] On the basis of a voting system which raises the voting weight the further below the age of 90 you are, a back-of-the-envelope calculation suggests that Remain would have won the June 2016 referendum by a margin of 52.8% to 47.2%.
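For what it is worth, the calculation in footnote [1] can be sketched as follows. The weighting rule – each ballot counts for 90 minus the voter’s age, floored at zero – follows the footnote’s description, but the age bands, turnout figures and Remain shares used here are purely illustrative placeholders, not the data behind the 52.8% to 47.2% result.

    # Illustrative age-weighted tally of the kind described in footnote [1].
    # Each ballot is weighted by (90 - age), floored at zero, so younger
    # voters count for more. All numbers below are placeholders.
    bands = [
        # (midpoint age, votes cast in millions, Remain share of the band)
        (21, 2.0, 0.70),
        (30, 3.5, 0.60),
        (40, 5.0, 0.52),
        (50, 6.0, 0.45),
        (60, 7.0, 0.43),
        (72, 8.0, 0.40),
    ]

    weighted_remain = weighted_total = 0.0
    for age, votes_m, remain_share in bands:
        weight = max(90 - age, 0)
        weighted_total += weight * votes_m
        weighted_remain += weight * votes_m * remain_share

    print(f"Weighted Remain share: {100 * weighted_remain / weighted_total:.1f}%")

Re-running the same loop with actual estimates of turnout and vote share by age band would give the kind of result quoted in the footnote.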