Chief executives in the UK saw their salaries fall last year – but they are still far ahead of the average worker.
In 2016, chief executives of the UK’s FTSE 100 companies saw their compensation fall by almost a fifth compared with the previous year, reports The Telegraph. Meanwhile, in the US, The Washington Post reported a drop of nearly 4% in pay among the chief executives of the 350 largest American corporations.
In the UK, CEOs’ average compensation fell to £4.5 million in 2016 from £5.4 million in 2015. But that is still a hefty 160 times the average UK worker’s annual salary of £28,000. In the US, average CEO compensation dropped to US$15.6 million from US$16.3 million, which still leaves the average US executive 271 times better off than the average worker.
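As a quick arithmetic check, the ratios follow directly from the figures quoted above (a sketch using only the article’s numbers; the rounding is mine):

```python
# Sanity-checking the pay figures quoted in the article.
uk_ceo_pay = 4_500_000      # £4.5 million average FTSE 100 CEO pay, 2016
uk_worker_pay = 28_000      # £28,000 average UK worker salary

uk_ratio = uk_ceo_pay / uk_worker_pay
print(round(uk_ratio))      # ≈ 161, consistent with the "160 times" cited

us_ceo_2016 = 15_600_000    # US$15.6 million average US CEO pay, 2016
us_ceo_2015 = 16_300_000    # US$16.3 million average US CEO pay, 2015

us_drop = (us_ceo_2015 - us_ceo_2016) / us_ceo_2015
print(f"{us_drop:.1%}")     # ≈ 4.3%, the "nearly 4%" drop reported
```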
In the US, that ratio is well below its high-water mark of 2000, when the average executive raked in 376 times the average worker’s pay. But it remains far above the 20-to-1 ratio recorded in 1965 – and even the 59-to-1 ratio of 1989.
Compensation growth rates tell a similar story: between 1978 and 2016, CEO pay grew by as much as 937%, while typical worker compensation rose by just 11.2%, reports the Economic Policy Institute.
Americans, it seems, have little idea just how wide the gap is. In a 2014 study cited by The Washington Post, respondents estimated the gap between executives and unskilled workers at about 30-to-1.
Key factors are exerting downward pressure on executive salaries. In particular, investors are becoming increasingly sensitive to the lack of correlation between executive pay and company performance.
Despite automation, productivity growth has slowed in all the advanced economies. Why?
Computers are “everywhere but in the productivity statistics,” wrote Nobel laureate economist Robert Solow in 1987. This “productivity paradox” has only become more apparent since. Automation has taken over many jobs, and although robots and artificial intelligence continue to promise ever more radical change, productivity growth has been slowing in all advanced economies, writes Adair Turner, former chairman of the UK’s Financial Services Authority, in a blog post on the Institute for New Economic Thinking’s website.
As Turner points out, many explanations have been suggested for the slowdown: low business investment, poor worker skills, deteriorating infrastructure, excessive regulation; some even question whether information technology is as powerful as it is said to be. But perhaps it is time to rethink our notion of productivity, he says.
Our view of productivity is inherited from the shift from agriculture to industry, says Turner. “We start with 100 farmers who produce 100 units of food,” he writes. Then, thanks to technical progress, 50 farmers can produce the same amount. The other 50 move to factories, where they manufacture cars and washing machines. In this model, productivity doubles.
But what if the more productive farmers don’t want to buy cars or washing machines and, instead, hire the other 50 as domestic servants and artists? Then, even as farm-level productivity keeps rising, total measured output barely grows, and the economy-wide productivity growth rate tends toward zero.
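Turner’s thought experiment can be sketched in a few lines (a toy model using his illustrative numbers, not real data; the assumption that the servants’ and artists’ measured output is negligible is his):

```python
# Toy version of Turner's 100-farmer thought experiment.
workers = 100
food = 100                  # units of food, before and after the farm gains

# Scenario A: the 50 freed farmers manufacture cars and washing machines,
# assumed here to add another 100 units of measured output.
manufactures = 100
productivity_before = food / workers                    # 1.0
productivity_after_a = (food + manufactures) / workers  # 2.0 -> doubles

# Scenario B: the freed farmers become servants and artists whose
# measured output is negligible; total output stays flat.
productivity_after_b = food / workers                   # 1.0 -> zero growth

print(productivity_before, productivity_after_a, productivity_after_b)
```

The point of the sketch: farm-level efficiency doubles in both scenarios, but measured productivity growth depends entirely on what the freed workers produce.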
Our economy is full of “zero-sum” activities: legal services, financial regulators, cybercriminals and those who fight them, advertising agents, hairdressers, and so on. Increasingly, our economies are divided between automated, high-productivity-growth activities and low-productivity, low-wage jobs.
As a result, “measured GDP and gains in human welfare eventually may become entirely divorced,” writes Turner. By 2100, when solar-powered robots may manufacture just about everything, their output will probably account for a trivial share of GDP, precisely because it is so inexpensive. Measured GDP will instead be concentrated in zero-sum or hard-to-automate activities: housing rents, sports prizes, artistic events. Productivity growth will probably shrink almost to zero – but, at the same time, become almost irrelevant to human welfare.
Turner concludes, “Computers are not in the productivity statistics precisely because they are so powerful.”
Many people regard their pets as part of the family. But some question the ethics of having pets at all.
Pet animals pose a growing moral dilemma to humans, reports The Guardian. Is it right to thwart the nature of animals and their self-determination in order to bring them into our homes, where we dictate what they eat, how they sleep, what they look like and if they will be sterile?
The dilemma goes deeper: many adore their cats, yet cats are killers that account for the death of billions of small, furry and feathered animals every year. Ethicists ask: should we allow our pets free rein to their natural impulses, or should we stop having them altogether?
And deeper still: an increasing number of people think of pets as people, and consider them part of their family, their best friend. They wouldn’t sell them for a million dollars. A recent British study found that 12% of pet owners love their pet more than their partner, and 24% love them more than their best friend. Yet, each year in the US, 1.5 million shelter animals are euthanized.
Laws reflect our changing views. In 2015, for example, New Zealand and Quebec recognized animals as sentient beings and declared that they are no longer property. In the US, pets are still considered property, but 32 states protect them from domestic violence. At the same time, studies increasingly show us that the emotional lives of animals are richer and more complex than we imagined.
“The logical consequence is that the more we attribute … these characteristics [to them], the less right we have to control every single aspect of their lives,” says psychology professor Hal Herzog.
But perhaps the view of Tim Wass, chair of the Pet Charity, is more to the point: “It has already been decided by market forces and human nature … the reality is people have pets in the millions. The question is: how can we help them care for them correctly and appropriately?”