The problem with carbon budgets

It has been the best part of 30 years since the UN Framework Convention on Climate Change (an international treaty aiming to reduce atmospheric concentrations of greenhouse gases) was established. One of the first tasks agreed on was for countries to start documenting their emissions and reporting them annually so that decarbonisation could be monitored. To ensure proper accounting, a precise definition of “domestic emissions” was established, dictating which emissions countries should include on their ledgers. Many countries have since set themselves legal deadlines for reducing these emissions, such as the UK’s commitment to achieve ‘net zero’ carbon emissions by 2050.

This has brought about the concept of carbon budgeting: the idea that there is a certain level of permissible net emissions in a given year, which must be divided between all sectors of the economy. However, these budgets only cover the emissions that the UNFCCC regards as domestic emissions. This is not equivalent to a country’s total carbon footprint. The former is based on the emissions released on a country’s land, while the latter is based on the emissions resulting from the consumption of its population. The graph below shows the evolution of the UK’s emissions between 1970 and 2015 using these two different metrics.

The domestic emissions and consumption based emissions in the UK. Data source: [1]

Economists often reference Goodhart’s law, which states that “when a measure becomes a target, it ceases to be a good measure”. In other words, when we aim to change a specific metric, the changes made focus on the definition of that metric rather than on the problem it was measuring. Personally, I think this effect can be avoided with sensibly chosen metrics; however, it does seem to apply in this case: over the past 30 years domestic emissions have fallen consistently, while consumption-based emissions rose until 2008. The focus on reducing the emissions that “count” has led to those that don’t being neglected.

To understand the difference between the two metrics it is useful to break down the consumption-based emissions by source. The graph below shows a breakdown of the UK’s emissions in 2016.

A comparison of different estimates of UK emissions in 2016. Data source: [2]

International aviation and shipping are not included in the UNFCCC definition of domestic emissions. This is partly due to the difficulty of distributing the emissions between the origin and destination countries. The consequence is that these emissions are not accounted for in any country’s budget. This is alarming because shipping and aviation will be some of the hardest sectors to decarbonise, and without an incentive for countries to tackle them it’s hard to see how a solution will be found.

However, the largest discrepancy between the two metrics comes from so-called ‘imported emissions’. These are the emissions associated with the production and transport of goods imported from overseas, minus those associated with exported goods. Although these emissions will be captured in another country’s budget, only Annex I countries report their emissions annually. In the UK’s case, there has been a significant increase over the past few years in imports from China and India – see the graph below, noting that the data cover a smaller time range than the previous graph.

UK imports from China and India between 1995 and 2019. Data source: [3]

Neither China nor India are Annex I countries, so they aren’t subject to the annual reporting. Additionally, both countries use coal (the most carbon intensive fuel) for the majority of their electricity generation. This means that, even without the transportation emissions, the production of goods will likely be more carbon intensive than if they were produced domestically. In other words, Annex I countries are being incentivised to shift their emissions overseas, despite the fact that this will increase global emissions.

The concept of carbon budgeting makes sense, but we should be considering the global budget required to mitigate climate change, rather than individual countries virtue signalling with their own commitments.

References
[1] ONS, The decoupling of economic growth from carbon emissions: UK evidence, 2019.
[2] ONS, Net zero and the different official measures of the UK’s greenhouse gas emissions, 2019.
[3] https://tradingeconomics.com/united-kingdom/imports

The price of green electricity

Following significant investment, the past decade has seen a rapid fall in the price of renewable energy. Some now believe that renewables are cost-competitive with fossil fuels (on top of their obvious environmental benefits). However, making economic comparisons between electricity generation technologies is surprisingly difficult.

When quantifying the cost of generating a unit of energy, you need to consider all of the costs associated with producing it. On top of the fuel cost, this includes the costs of building, operating, and decommissioning the power plant. Historically, we only needed to compare fuel-combustion technologies, which meant that the only substantial differences were in fuel price. However, now that a more diverse range of technologies is available, a more rigorous comparison is necessary. Essentially, we need to work out all of the costs incurred over the lifetime of a plant and divide them by the amount of energy generated. This is referred to as the levelised cost of energy. Lazard publishes annual estimates of the maximum and minimum levelised cost of energy for different technologies, and the 2019 estimates are shown below.

The estimated range of cost for various generation technologies. Data source: [1]
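The levelised cost calculation itself can be made concrete with a small sketch. The plant parameters and the 7% discount rate below are made-up illustrative numbers, not values from Lazard’s analysis, and real LCOE models include far more detail (tax, degradation, decommissioning funds, and so on).

    # Illustrative LCOE sketch: discounted lifetime costs divided by
    # discounted lifetime generation. All inputs below are made up.

    def lcoe(capex, annual_opex, annual_fuel, annual_mwh, lifetime_years, discount_rate):
        costs = capex          # construction cost, incurred up front
        energy = 0.0
        for year in range(1, lifetime_years + 1):
            discount = (1 + discount_rate) ** year
            costs += (annual_opex + annual_fuel) / discount   # running costs each year
            energy += annual_mwh / discount                   # generation is discounted too
        return costs / energy  # cost per MWh

    # A hypothetical 100 MW wind farm at a 40% capacity factor over 25 years
    print(round(lcoe(capex=150e6, annual_opex=3e6, annual_fuel=0.0,
                     annual_mwh=100 * 8760 * 0.40, lifetime_years=25, discount_rate=0.07), 1))

The discounting is part of why capital-heavy technologies such as nuclear look expensive on this metric even though their fuel costs are small: most of the money is spent up front, while the energy arrives (and is discounted) over decades.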

The large gaps between the upper and lower bounds reflect the large number of variables that affect the final cost: to name a few, the amount you pay your workers, the size of the plant, and (for renewables) the quality of the natural resources available. Nuclear is expensive because the power stations are expensive to build, operate, and decommission. Solar panels are much cheaper than wind turbines, but you won’t get as much energy out of them, so the cost per unit of energy is similar. In terms of fossil fuels, gas is much cheaper than coal (perhaps the real incentive behind phasing coal out). Oil is now so expensive that I’ve excluded it from the graph; there is very little oil generation left in Europe, with the exception of Croatia (I’m not sure why).

When looking to decarbonise the electricity mix, these costs need to be considered in conjunction with the carbon intensity of each technology. Again, the whole life-cycle needs to be considered (it is currently impossible to manufacture a wind turbine without emitting carbon). The graph below shows the total emissions attributed to each kWh of electricity (in equivalent CO2) for the different technologies. Note that there is an argument that the emissions from waste/biofuels shouldn’t count, as they would eventually be emitted anyway, but frankly I don’t want to touch that debate with a stick.

The carbon intensity of electricity by generation type. Data source: [2]

Comparing both graphs, it is clear that wind and solar are the overall winners on green value-for-money. However, this is only looking at the cost of generation, which is not the only cost associated with running an electricity system. Wind and solar are uncontrollable, which is to say that you can’t decide when and how much the plants will output. This means that running a system with a large amount of solar or wind requires a method of energy storage and/or an excess generation capacity. Either of these will add cost that isn’t incurred with controllable power generation sources. Therefore, while nuclear power is much more expensive to generate, at least some of the difference will be offset by other system costs.

One way to compare the costs of running an electricity system with different fuel mixes is to look at the variety of systems that already exist. Given their similar electricity demand and available resources, there is a surprising range of fuel mixes across Europe. The graph below shows a subset* of European countries on axes of carbon intensity and consumer electricity price. The size of the marker is proportional to the annual electricity consumption of the country, while the different colours show the composition of the fuel mix. Generation sources are grouped into fossil fuels (coal, oil, gas), sustainable fuels (biofuels, waste), nuclear, and renewables (solar, wind, and hydro). Note that the data are from 2018, which (considering the pace of investment in renewables) already makes them somewhat out of date.

*chosen to show a broad range without sacrificing aesthetics – apologies to Portugal, which is hidden behind the UK.

The carbon intensity vs. consumer price of various electricity systems. Data source: [3-5]

I will be the first to say that consumer electricity price is a poor metric for true system cost. Varying government subsidies, physical geographies, and connections to other systems are just some of the factors which cloud the comparison. However, it is still interesting to look at the systems which are achieving low carbon intensities at a low cost.

The first point to make is that there seems to be little correlation between the price consumers are paying for their electricity and its carbon intensity. Norway, Sweden, and Finland all achieve low costs with a high renewable penetration, but this is achieved using hydropower, which doesn’t suffer from the intermittency issues that wind and solar do.

Denmark is by some way the leader in terms of intermittent renewables – with almost 50% of demand being met by solar or wind power. The competition for second place is between Germany, the UK, Ireland, Spain, and Portugal, all in the 20s. However, Danish consumers pay a high price for the privilege. This premium could be explained by the higher system costs associated with running on highly intermittent generation.

France is another interesting case study. Relying heavily on nuclear power to achieve a low carbon intensity should, in theory, give it the highest cost of energy generation. However, it has a relatively low consumer electricity price. This could be due to government subsidies, or to the lower system costs associated with nuclear power (likely both). I have no interest in getting into a nuclear power debate; suffice it to say there are pros and cons beyond those mentioned here.

Overall, it is quite difficult to work out whether the low carbon electricity systems of the future will be cheaper or more expensive than their existing counterparts. Existing systems were designed with large fuel-burning power plants in mind, so it is unsurprising that switching to smaller, variable sources will incur additional costs. However, with the price of wind and solar energy already low, and still dropping, it is easy to imagine that a system designed to run on such generation might be cheaper. Obviously, regardless of the answer to this question, transitioning to lower carbon electricity systems is necessary. However, perhaps this transition won’t come with the high price tag that some expect.

References
[1] Lazard, LCOE Perspectives, 2019
[2] IPCC Working Group III – Mitigation of Climate Change, Annex II.9.3 Lifecycle greenhouse gas emissions. pp 1306-1308.
[3] Country Specific Electricity Factors, Association of Issuing Bodies (AIB) 2018.
[4] Electricity Prices in Europe Compared, Selectra, 2018.
[5] The World Factbook, Central Intelligence Agency, 2014.

Electric vehicle smart charging: the transmission-distribution conflict

Electric vehicles will play a key role in decarbonising the transport sector. However, transitioning energy demand from oil to electricity will increase the strain on electricity networks. This will be especially true in countries that rely primarily on gas heating, as their electricity networks typically aren’t designed to handle large household loads.

The higher-level national network is referred to as the transmission network (think of those large pylons you see on the sides of motorways). At all times there needs to be an approximate balance between the total power demand and supply on the transmission network. If there is a mismatch between supply and demand, bad things start to happen very quickly (see the UK’s August 2019 blackout). One particular concern with electric vehicles is that charging might be concentrated in the evening, which would coincide with the existing peak demand. This would be expensive because additional power plants would need to be built, yet they would see very low utilisation (as they would only be required for a short period each day).

In order to avoid this, a framework for smart charging is being developed. This will allow users to be reimbursed for delaying charging to a time when the other load on the system is lower. The wholesale electricity price varies significantly throughout the day, as shown in the graph below. This means that, even without accounting for the avoided cost of new power stations, there is potentially a lot of value to be gained by shifting charging to off-peak times. One suggestion for a smart charging system is a variable charging tariff whose price moves to reflect the current wholesale electricity price (e.g. the Octopus Energy Agile tariff). A consumer (or, more likely, a piece of software) then makes charging decisions so as to minimise cost.

Fig 1: The average wholesale electricity price
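As a toy example of the decision such software might make, the sketch below picks the cheapest half-hourly slots of a made-up price day until the car’s energy need is met. The prices, battery requirement, and charger rating are all illustrative assumptions, not real tariff data.

    # Toy price-following charging schedule: fill the cheapest half-hours first.
    # Prices and vehicle parameters are illustrative, not real tariff data.
    import numpy as np

    prices = np.concatenate([          # 48 half-hourly prices in £/MWh
        np.full(12, 35.0),             # 00:00-06:00 overnight trough
        np.full(22, 45.0),             # 06:00-17:00 daytime
        np.full(8, 70.0),              # 17:00-21:00 evening peak
        np.full(6, 40.0),              # 21:00-00:00
    ])

    energy_needed_kwh = 30.0           # charge required before the morning
    charger_power_kw = 7.0             # typical home charger rating
    kwh_per_slot = charger_power_kw * 0.5

    n_slots = int(np.ceil(energy_needed_kwh / kwh_per_slot))
    cheapest = np.sort(np.argsort(prices)[:n_slots])      # indices of the cheapest slots
    # (the final slot is slightly over-filled; fine for a sketch)

    cost = kwh_per_slot * prices[cheapest].sum() / 1000   # £/MWh -> £/kWh
    print("charge in half-hours:", cheapest.tolist())
    print("approximate cost: £%.2f" % cost)

In practice the decision also has to respect when the car is plugged in and the limits of the battery and charger – and, as discussed next, the limits of the local network.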

A national tariff would have a similar effect on all vehicles regardless of their location. This is potentially problematic because there are also local network constraints that could be violated by vehicle charging. The lower-level network is referred to as the distribution network (think of the overhead lines you see at the side of the street). Violations of distribution network constraints have less serious consequences than violations of transmission system limits (e.g. a street-wide power cut rather than millions of consumers disconnected), but they are still undesirable as they necessitate costly upgrades to the network.

If operated correctly, smart charging could be used to avoid these network constraint violations. However, it is important that both the transmission network (national scale) and the distribution network (local scale) are considered simultaneously. If smart charging is used only to protect the transmission system, then many additional local networks will require upgrades. If it is used only to protect the distribution networks, then additional power plants will be needed. This point is illustrated below.

Fig. 2 shows the estimated GB electricity demand profile with a fully electrified fleet of private vehicles under various charging regimes. As demand must be met at all times, the required power generation capacity is dictated by the peak demand. Three charging schemes are shown: uncontrolled – without smart charging, controlled (T) – smart charging controlled at the transmission level (e.g. national tariff), and controlled (D) – smart charging to protect distribution networks. In terms of peak demand, controlling only for the distribution network is almost as bad as implementing no smart charging.

Fig 2: GB demand with 100% electric vehicle charging

It is worth unpacking this result slightly. Here it has been assumed that charging will occur in residential networks, meaning that the existing load on those networks will be mostly household load. Therefore, when smart charging is used to protect the local networks, it smooths out the household demand but doesn’t offset any of the other demand on the system. In the UK a large amount of the electricity demand is industrial (e.g. manufacturing, transport), and this demand tends to peak in the middle of the day – hence the new peak in the controlled (D) case.
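A stylised numerical example makes the conflict easier to see. The hourly profiles below are entirely synthetic (they are not the data behind Fig. 2), and the ‘smart charging’ is idealised as simply filling the valleys of whichever profile it can see: household demand only for controlled (D), or total national demand for controlled (T).

    # Synthetic hourly profiles (GW), purely for illustration - not the Fig. 2 data.
    import numpy as np

    household = np.array([8, 7, 7, 7, 7, 8, 10, 13, 14, 13, 12, 12,
                          12, 12, 13, 14, 16, 19, 20, 19, 17, 14, 11, 9])
    other = np.array([6, 6, 6, 6, 6, 7, 9, 12, 14, 16, 17, 18,
                      18, 17, 16, 15, 13, 11, 9, 8, 7, 7, 6, 6])
    ev_energy = 60.0  # GWh of charging to schedule over the day (made-up figure)

    def water_fill(profile, energy):
        # Spread `energy` so that it fills the valleys of `profile` up to a flat level
        lo, hi = profile.min(), profile.max() + energy
        for _ in range(100):                   # bisect for the fill level
            level = (lo + hi) / 2
            if np.maximum(level - profile, 0).sum() > energy:
                hi = level
            else:
                lo = level
        return np.maximum(level - profile, 0)

    ev_d = water_fill(household, ev_energy)            # controlled (D): flatten household demand only
    ev_t = water_fill(household + other, ev_energy)    # controlled (T): flatten national demand

    print("peak without EVs:    ", (household + other).max())
    print("peak, controlled (D):", round((household + other + ev_d).max(), 1))
    print("peak, controlled (T):", round((household + other + ev_t).max(), 1))

Even in this toy version, flattening only the household profile pushes charging into the middle of the day and raises the national peak, whereas flattening the total demand leaves the peak untouched.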

On the other hand, Fig. 3 shows the percentage of residential networks predicted to experience violations in each charging case. Without wanting to get bogged down by technical details, transformer violations cause street-wide power cuts and voltage violations result in poor power quality in homes at the ends of the network. Either type of violation would necessitate upgrades to the network. Controlling charging at the transmission level reduces the number of constraint violations, but there are still many that could be avoided.

Fig 3: The percentage of networks expected to have constraint violations.

While there is an inherent conflict between the two system levels, it is possible to achieve both the flat national demand and the local network protection simultaneously. This is because many networks have plenty of headroom, meaning they will not hit local constraints even with uncontrolled charging. Fig 4 shows an estimate of the percentage of networks that will experience violations, broken down by geography. Both the 100% electrified and the 2030 scenarios are shown.

Fig 4: The percentage of predicted network violations by geography.

Note that these are estimates based on imperfect information; to know the likelihood of a network overload with certainty, more extensive monitoring of the distribution system is required. However, these estimates demonstrate the scale of variation that can be expected between networks in different areas. The differences can be attributed to local driving behaviour, network design, and socio-economic factors. For example, many of the worst affected areas are the urban areas outside London, which have high population densities but poor public transport.

If the vehicles on the most constrained networks can be identified, then these can be controlled to protect their local network. Meanwhile, the vehicles on the least constrained networks can over-compensate in order to avoid the midday peak in national demand. This will require a more complicated smart charging system than a national tariff, and may bring up difficult questions when it comes to compensating consumers. However, the result will be a cheaper electricity system – in theory, lower prices for all.

In conclusion, in order to be most effective, smart charging needs to take account of the location of vehicles in the network. Let’s make national tariffs a stepping stone, not the destination.

[A/N] This post covered the core concept from my new journal paper. If you are interested in reading the full manuscript, it is available here.

Forecasting s-curves is hard

S-curves (or sigmoid functions) are commonly used to model the evolution of social or biological systems over time [1]. These functions start with exponential growth, then increase linearly, and finally level off (so they end up looking like a wonky s). Many things that we think of as exponential will actually follow an s-curve (otherwise the system would eventually reach infinity). One famous example is the adoption of a new technology. The graph below shows the percentage of US adults who own a smartphone over time, with a best-fit s-curve superimposed on top. In this case the exponential growth occurs because of the way publicity and supply are rolled out. However, there are only a limited number of potential consumers (some of whom will never get a smartphone), and so the growth gradually slows to zero.

US smartphone ownership [2]

Another example, and the reason that these curves have been back in the news, is the propagation of disease. In this case the exponential growth occurs when the virus is new, such that most people encountering it will not have developed immunity. The level-off occurs because the virus is no longer encountering people without immunity (either due to ‘herd immunity’ or isolation of those infected). The graph below shows the number of deaths in China from the SARS outbreak in 2003, again with a best-fit s-curve.

Deaths due to SARS in China [3]

S-curves have only three parameters, and so it is perhaps impressive that they fit such a variety of systems so well. Broadly, the three parameters describe the initial growth rate, the level-off rate, and the value at which the curve levels off. Therefore, if you can estimate these three numbers, then you have the trend curve. Many of us will have learnt in school that if there are three parameters to be found, you need three data points to define the function. This would suggest that you could perfectly predict the level-off point based on only three observations (spoiler: you can’t).
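For concreteness, one common three-parameter form is the logistic function (an illustrative choice on my part, not necessarily the exact parameterisation used for the fits above):

    f(t) = \frac{K}{1 + e^{-r(t - t_0)}}

Here K is the value at which the curve levels off, r sets how quickly it grows, and t_0 is the midpoint of the transition. In this symmetric form the growth and level-off rates are tied together; other sigmoids (the Gompertz curve, for example) allow them to differ.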

In reality, while we can say that the overall trend of the data is likely to fit some s-curve, the individual points will not all lie along it. This can be seen in both of the previous examples. The discrepancy is often described as ‘modelling error’, which comprises both errors in the measurement of the data and the fact that the s-curve model is fundamentally wrong. To quote George Box: “all models are wrong, but some are useful”.

Intuitively, it makes sense that it should not be possible to forecast the curve from the early data; to assume otherwise means believing that we can’t affect the outcome. However, in my experience “intuition” and “mathematics” can often be hard to reconcile. Therefore, I decided to investigate how much the “best fit s-curve” changes as more data becomes available. Below is an s-curve that I chose at random. The points shown are “noisy observations” – which is the maths-y way of saying ‘points from the curve with a random amount of error applied’.

In this case, the s-curve model is a perfect fit – I have literally generated the data from an s-curve. This means that if there were zero error we would only need three points to find the curve. All this is to say that this example is idealistic – in reality there is unlikely to be a curve that fits the data so well. Below is an animation showing the best-fit s-curve (found using a least squares optimisation) as more data becomes available.

It may not be surprising that the estimate is very bad during the exponential growth phase, but even in the linear phase (when 40+ points are available) the correct curve has not been found. In fact, it is only once the data start to level off that the correct s-curve is found. This is especially unhelpful when you consider that it can be quite hard to tell which part of the curve you are on; hindsight is 20/20.
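A minimal version of this experiment can be sketched with scipy’s curve_fit; the ‘true’ parameters and noise level below are arbitrary choices, and this is not the optimisation used for the animation itself.

    # Minimal s-curve refitting experiment: how does the best-fit logistic
    # change as more noisy observations become available?
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(t, K, r, t0):
        # Three-parameter logistic: levels off at K, grows at rate r, midpoint t0
        return K / (1 + np.exp(-r * (t - t0)))

    rng = np.random.default_rng(0)
    t = np.arange(100.0)
    true_K, true_r, true_t0 = 1000.0, 0.15, 60.0             # arbitrary 'true' curve
    y = sigmoid(t, true_K, true_r, true_t0) + rng.normal(0, 20, t.size)

    for n in (20, 40, 60, 80, 100):                          # refit on the first n points
        try:
            popt, _ = curve_fit(sigmoid, t[:n], y[:n],
                                p0=(max(y[:n].max(), 1.0), 0.1, t[:n].mean()),
                                maxfev=20000)
            print(f"first {n:3d} points -> estimated level-off {popt[0]:10.1f} (true {true_K:.0f})")
        except RuntimeError:
            print(f"first {n:3d} points -> fit did not converge")

The exact numbers depend on the noise seed, but the pattern should mirror the animation: estimates made during the growth phase are unreliable, and they only settle once the data begin to level off.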

This is not to say that it is impossible to model or predict s-curves, only that contextual information about the system being modelled is likely required. For biological systems, are there physical parameters which govern the initial growth rate? For technological changes, can the final level-off be reasonably estimated? This information is application specific. In other words, data enthusiasts (such as myself) should leave the modelling to the professionals.

Edit: 20/04/20
I’ve had several requests to share the code used to generate the animation. The optimisation I used is part of another project which I can’t share, but I have uploaded a script which should reproduce the animation here.

References
[1] Nieto et al., “Performance analysis of technology using the S curve model: the case for digital signal processing technologies”, 1998.
[2] Comscore whitepaper, “The 2016 U.S. Mobile App Report”, September 13, 2016.
[3] World Health Organisation https://www.who.int/csr/sars/country/en/