“It isn’t the code Ferguson ran to produce his famous Report 9. What’s been released on GitHub is a heavily modified derivative of it, after having been upgraded for over a month by a team from Microsoft and others….Clearly, Imperial are too embarrassed by the state of it…which is unacceptable given that it was paid for by the taxpayer and belongs to them.…This problem makes the code unusable for scientific purposes, given that a key part of the scientific method is the ability to replicate results. Without replication, the findings might not be real at all.”…[GitHub is owned by Microsoft]…“The code should have been made available to all other Profs & top Coders and Data Scientists & Bio-Statisticians to PEER Review BEFORE the UK and USA Gvts made their decisions.” Commenter Sean Flanagan
5/6/20, “Code Review of Ferguson’s Model,” LockdownSceptics.org, by Sue Denim (not the author’s real name)
“Imperial [College of London] finally released a derivative of Ferguson’s code. I figured I’d do a review of it and send you some of the things I noticed. I don’t know your background so apologies if some of this is pitched at the wrong level.
My background. I wrote software for 30 years. I worked at Google between 2006 and 2014, where I was a senior software engineer working on Maps, Gmail and account security. I spent the last five years at a US/UK firm where I designed the company’s database product, amongst other jobs and projects. I was also an independent consultant for a couple of years. Obviously I’m giving only my own professional opinion and not speaking for my current employer.
The code. It isn’t the code Ferguson ran to produce his famous Report 9. What’s been released on GitHub is a heavily modified derivative of it, after having been upgraded for over a month by a team from Microsoft and others. This revised codebase is split into multiple files for legibility and written in C++, whereas the original program was “a single 15,000 line file that had been worked on for a decade” (this is considered extremely poor practice). A request for the original code was made 8 days ago but ignored, and it will probably take some kind of legal compulsion to make them release it. Clearly, Imperial are too embarrassed by the state of it ever to release it of their own free will, which is unacceptable given that it was paid for by the taxpayer and belongs to them.
The model. What it’s doing is best described as “SimCity without the graphics”. It attempts to simulate households, schools, offices, people and their movements, etc. I won’t go further into the underlying assumptions, since that’s well explored elsewhere.
Non-deterministic outputs. Due to bugs, the code can produce very different results given identical inputs. They routinely act as if this is unimportant.
This problem makes the code unusable for scientific purposes, given that a key part of the scientific method is the ability to replicate results. Without replication, the findings might not be real at all – as the field of psychology has been finding out to its cost. Even if their original code was released, it’s apparent that the same numbers as in Report 9 might not come out of it. Non-deterministic outputs may take some explanation, as it’s not something anyone had previously floated as a possibility.
The documentation says: “The model is stochastic. Multiple runs with different seeds should be undertaken to see average behaviour.” “Stochastic” is just a scientific-sounding word for “random”. That’s not a problem if the randomness is intentional pseudo-randomness, i.e. the randomness is derived from a starting “seed” which is iterated to produce the random numbers. Such randomness is often used in Monte Carlo techniques. It’s safe because the seed can be recorded and the same (pseudo-)random numbers produced from it in future. Any kid who’s played Minecraft is familiar with pseudo-randomness because Minecraft gives you the seeds it uses to generate the random worlds, so by sharing seeds you can share worlds.
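To see concretely what safe, replicable pseudo-randomness looks like, here is a minimal C++ sketch (illustrative only, not taken from the Imperial code): two generators given the same recorded seed produce identical “random” streams on every run.

#include <iostream>
#include <random>

int main() {
    // Two generators started from the same recorded seed...
    std::mt19937 a(12345);
    std::mt19937 b(12345);

    // ...produce exactly the same "random" stream, run after run, which is
    // why recording the seed is enough to reproduce a whole result later.
    for (int i = 0; i < 5; ++i)
        std::cout << a() << " == " << b() << '\n';
    return 0;
}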
Clearly, the documentation wants us to think that, given a starting seed, the model will always produce the same results. Investigation reveals the truth: the code produces critically different results, even for identical starting seeds and parameters.
I’ll illustrate with a few bugs. In issue 116 a UK “red team” at Edinburgh University reports that they tried to use a mode that stores data tables in a more efficient format for faster loading, and discovered – to their surprise – that the resulting predictions varied by around 80,000 deaths. That mode doesn’t change anything about the world being simulated, so this was obviously a bug.
The Imperial team’s response is that it doesn’t matter: they are “aware of some small non-determinisms”, but “this has historically been considered acceptable because of the general stochastic nature of the model”. Note the phrasing here: Imperial know their code has such bugs, but act as if it’s some inherent randomness of the universe, rather than a result of amateur coding.
Apparently, in epidemiology, a difference of 80,000 deaths is “a small non-determinism”.
Imperial advised Edinburgh that the problem goes away if you run the model in single-threaded mode, like they do. This means they suggest using only a single CPU core rather than the many cores that any video game would successfully use. For a simulation of a country, using only a single CPU core is obviously a dire problem – as far from supercomputing as you can get. Nonetheless, that’s how Imperial use the code: they know it breaks when they try to run it faster. It’s clear from reading the code that in 2014 Imperial tried to make the code use multiple CPUs to speed it up, but never made it work reliably. This sort of programming is known to be difficult and usually requires senior, experienced engineers to get good results. Results that randomly change from run to run are a common consequence of thread-safety bugs. More colloquially, these are known as “Heisenbugs”.
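To make the class of bug concrete, here is a deliberately simplified C++ sketch (not code from the model): several threads update shared state without synchronisation, so identical inputs give a different total on almost every run.

#include <iostream>
#include <thread>
#include <vector>

int main() {
    long long total = 0;  // shared and unsynchronised: this is the bug

    auto work = [&total] {
        for (int i = 0; i < 1000000; ++i)
            total += 1;   // racy read-modify-write across threads
    };

    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back(work);
    for (auto& th : threads)
        th.join();

    // Should print 4000000, but the value varies from run to run -
    // the hallmark of a thread-safety "Heisenbug".
    std::cout << "total = " << total << '\n';
    return 0;
}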
But Edinburgh came back and reported that – even in single-threaded mode – they still see the problem. So Imperial’s understanding of the issue is wrong. Finally, Imperial admit there’s a bug by referencing a code change they’ve made that fixes it. The explanation given is “It looks like historically the second pair of seeds had been used at this point, to make the runs identical regardless of how the network was made, but that this had been changed when seed-resetting was implemented”. In other words, in the process of changing the model they made it non-replicable and never noticed.
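A hedged sketch of the kind of mistake being described (the structure and numbers are invented for illustration; this is not the Imperial code): if network construction draws from the same random stream as the epidemic run, then building the network differently – or loading it from a cached file – shifts the stream and changes the results. Re-seeding from a dedicated second seed after setup restores determinism.

#include <iostream>
#include <random>

int main() {
    const unsigned setup_seed = 111, run_seed = 222;

    // Two code paths: network built fresh (consumes 10 draws from the RNG)
    // versus network loaded from a cached file (consumes none).
    for (int draws_in_setup : {10, 0}) {
        std::mt19937 rng(setup_seed);
        for (int i = 0; i < draws_in_setup; ++i)
            rng();              // network construction uses the RNG

        rng.seed(run_seed);     // the "second seed": reset before the run
        // With the reset, the run's first draw is identical on both paths;
        // without it, the two paths would diverge.
        std::cout << "first draw of run: " << rng() << '\n';
    }
    return 0;
}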
Why didn’t they notice? Because their code is so deeply riddled with similar bugs and they struggled so much to fix them that they got into the habit of simply averaging the results of multiple runs to cover it up…and eventually this behaviour became normalised within the team.
In issue #30, someone reports that the model produces different outputs depending on what kind of computer it’s run on (regardless of the number of CPUs). Again, the explanation is that although this new problem “will just add to the issues”…“This isn’t a problem running the model in full as it is stochastic anyway”.
Although the academic on those threads isn’t Neil Ferguson, he is well aware that the code is filled with bugs that create random results. In change #107 he authored he comments: “It includes fixes to InitModel to ensure deterministic runs with holidays enabled”. In change #158 he describes the change only as “A lot of small changes, some critical to determinacy”.
Imperial are trying to have their cake and eat it. Reports of random results are dismissed with responses like “that’s not a problem, just run it a lot of times and take the average”, but at the same time, they’re fixing such bugs when they find them. They know their code can’t withstand scrutiny, so they hid it until professionals had a chance to fix it, but the damage from over a decade of amateur hobby programming is so extensive that even Microsoft were unable to make it run right.
No tests. In the discussion of the fix for the first bug, Imperial state the code used to be deterministic in that place but they broke it without noticing when changing the code.
Regressions like that are common when working on a complex piece of software, which is why industrial software-engineering teams write automated regression tests. These are programs that run the program with varying inputs and then check the outputs are what’s expected. Every proposed change is run against every test and if any tests fail, the change may not be made.
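As a minimal sketch of the idea (the model function, seed and golden value are hypothetical stand-ins, not the Imperial API): a regression test pins a seeded run to a previously recorded output, so any change that silently alters the results fails before it can be merged.

#include <cassert>
#include <cstdint>
#include <iostream>
#include <random>

// Stand-in for a deterministic model run: with a fixed seed,
// everything downstream is reproducible.
std::uint64_t run_model(std::uint64_t seed) {
    std::mt19937_64 rng(seed);
    std::uint64_t acc = 0;
    for (int i = 0; i < 1000; ++i)
        acc ^= rng();
    return acc;
}

int main() {
    // In a real suite this "golden" value would be a constant checked into
    // the repository from a trusted run; it is recorded at startup here
    // only to keep the sketch self-contained.
    const std::uint64_t golden = run_model(42);

    // The test proper: the same seed must reproduce the same output.
    assert(run_model(42) == golden);
    std::cout << "regression test passed\n";
    return 0;
}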
The Imperial code doesn’t seem to have working regression tests. They tried, but the extent of the random behaviour in their code left them defeated. On 4th April they said: “However, we haven’t had the time to work out a scalable and maintainable way of running the regression test in a way that allows a small amount of variation, but doesn’t let the figures drift over time.”
Beyond the apparently unsalvageable nature of this specific codebase, testing model predictions faces a fundamental problem, in that the authors don’t know what the “correct” answer is until long after the fact, and by then the code has changed again anyway, thus changing the set of bugs in it. So it’s unclear what regression tests really mean for models like this – even if they had some that worked.
Undocumented equations. Much of the code consists of formulas for which no purpose is given. John Carmack (a legendary video-game programmer) surmised that some of the code might have been automatically translated from FORTRAN some years ago.
For example, on line 510 of SetupModel.cpp there is a loop over all the “places” the simulation knows about. This code appears to be trying to calculate R0 for “places”. Hotels are excluded during this pass, without explanation.
This bit of code highlights an issue Caswell Bligh has discussed in your site’s comments: R0 isn’t a real characteristic of the virus. R0 is both an input to and an output of these models, and is routinely adjusted for different environments and situations. Models that consume their own outputs as inputs are a problem well known in the private sector – it can lead to rapid divergence and incorrect prediction. There’s a discussion of this problem in section 2.2 of the Google paper, “Machine Learning: The High Interest Credit Card of Technical Debt”.
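A toy illustration of why that feedback loop is dangerous (the numbers are invented purely for this sketch): a small systematic error compounds on every pass, so the model drifts steadily away from an unchanging reality.

#include <iostream>

int main() {
    const double r_true = 1.10;  // hypothetical, unchanging ground truth
    double r_model = 1.10;       // the model's estimate, initially correct
    const double bias = 1.02;    // a mere 2% systematic error per pass

    for (int pass = 1; pass <= 20; ++pass) {
        r_model *= bias;         // last pass's output becomes this pass's input
        std::cout << "pass " << pass << ": model R = " << r_model
                  << " (truth remains " << r_true << ")\n";
    }
    // After 20 passes the estimate is ~1.63, nearly 50% above reality.
    return 0;
}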
Continuing development. Despite being aware of the severe problems in their code that they “haven’t had time” to fix, the Imperial team continue to add new features; for instance, the model attempts to simulate the impact of digital contact tracing apps.
Adding new features to a codebase with this many quality problems will just compound them and make them worse. If I saw this in a company I was consulting for I’d immediately advise them to halt new feature development until thorough regression testing was in place and code quality had been improved.
Conclusions. All papers based on this code should be retracted immediately. Imperial’s modelling efforts should be reset with a new team that isn’t under Professor Ferguson, and which has a commitment to replicable results with published code from day one.
On a personal level, I’d go further and suggest that all academic epidemiology be defunded. This sort of work is best done by the insurance sector. Insurers employ modellers and data scientists, but also employ managers whose job is to decide whether a model is accurate enough for real world usage and professional software engineers to ensure model software is properly tested, understandable and so on. Academic efforts don’t have these people, and the results speak for themselves.”
“My identity. Sue Denim isn’t a real person (read it out). I’ve chosen to remain anonymous partly because of the intense fighting that surrounds lockdown, but there’s also a deeper reason. This situation has come about due to rampant credentialism and I’m tired of it. As the widespread dismay by programmers demonstrates, if anyone in SAGE or the Government had shown the code to a working software engineer they happened to know, alarm bells would have been rung immediately. Instead, the Government is dominated by academics who apparently felt unable to question anything done by a fellow professor. Meanwhile, average citizens like myself are told we should never question “expertise”. Although I’ve proven my Google employment to Toby, this mentality is damaging and needs to end: please, evaluate the claims I’ve made for yourself, or ask a programmer you know and trust to evaluate them for you.”
………………………………………………………
Among comments:
……………………………………………………
"Simon Conway-Smith
I had hoped Donald Trump would be a stronger leader than that, and insisted on any model being independently and repeatedly verified before making any decision." [Excerpt from his full comment below.]
......
Will Jones
Devastating. Heads must roll for this, and fundamental changes be made to the way government relates to academics and the standards expected of researchers. Imperial College should be ashamed of themselves.
……………………………………………………
"Simon Conway-Smith
I had hoped Donald Trump would be a stronger leader than that, and insisted on any model being independently and repeatedly verified before making any decision.".[Excerpt from full comment below].
......
Guest
Will Jones
Devastating. Heads must roll for this, and fundamental changes be made to the way government relates to academics and the standards expected of researchers. Imperial College should be ashamed of themselves.
175
20 hours ago
Imperial and the Professor should start to worry about claims for losses incurred as a result of decisions taken based on such a poor effort. Could we know, please, what this has cost, over how many years, and how much of the Professor’s career has been achieved on the back of it?
In 2002 he predicted 50,000 people would die of BSE. Actual number: 178 (national CJD research and surveillance team). In 2005 he predicted 200 million people would die of avian flu H5N1. Actual number, according to the WHO: 78. In 2009 he predicted that swine flu H1N1 would kill 65,000 people. Actual number: 457. In 2020 he predicted 500,000 Britons would die from Covid-19.
Still employed by the government. Maybe 5th time lucky?
…………………………………….
Juan Luna
Ferguson should be retired and his team disbanded. As a former software professional I am horrified at the state of the code explained here. But then, the University of East Anglia code for modelling climate change was just as bad. Academics and programming don’t go together.
I support the idea of letting the Insurance industry do the modelling. They are the experts in this field.
………………………………………………………
Perhaps, if enough people come to understand how badly this has been managed, they will start to ask the same questions of the climate scientists and demand to see their models published. It could be the start of some clearer reasoning on the whole subject, before we spend the trillions that are being demanded to avert or mitigate events that may never happen.
Let’s hope this “workings not required” attitude doesn’t get picked up by schoolkids taking their exams 🙂
https://www.bbc.com/news/uk-politics-52553229
At the end of the article, there is “analysis” from a BBC health correspondent.
With such pitiful performance from the national broadcaster, I think Ferguson and his team will face no consequences.
“In 2011 the Frontier Centre for Public Policy think tank interviewed Tim Ball and published his allegations about Mann and the CRU email controversy. Mann promptly sued for defamation[61] against Ball, the Frontier Centre and its interviewer.[62] In June 2019 the Frontier Centre apologized for publishing, on its website and in letters, “untrue and disparaging accusations which impugned the character of Dr. Mann”. It said that Mann had “graciously accepted our apology and retraction”.[63] This did not settle Mann’s claims against Ball, who remained a defendant.[64] On March 21, 2019, Ball applied to the court to dismiss the action for delay; this request was granted at a hearing on August 22, 2019, and court costs were awarded to Ball. The actual defamation claims were not judged, but instead the case was dismissed due to delay, for which Mann and his legal team were held responsible”
I’m afraid Ferguson is a very small part of the plan, and merely doing what he was hired for….
Robert Borland
-Academic science has not fallen victim to capitalism; it has fallen victim to bureaucracy and conformity. If you do not conform and espouse the expected and required outcomes, you are labelled a pariah, demonised and excluded. Evidence contradicting official policy is suppressed, falsified, or rationalised away….
-In this most recent marriage of political power and ‘modelling’ catastrophe, the solution has been to just come up with yet another model and to rationalise whatever policy is implemented as having been necessary; politicians will rarely if ever admit error of a policy course no matter what the cost, whether lives or money.
They seem to have changed their model in the last few days – the curves look more plausible now. However, plausible looking curves mean nothing – any one of us could take the existing data (up to today) and ‘extrapolate’ a curve into the future. So plausibility means nothing – it’s just making stuff up based on pseudo-science. In the UK, we’re not supposed to dissent, because that implies that we don’t want to ‘save lives’ or ‘protect the NHS’, so the pessimistic model wins. In the US, it’s different, depending on people’s politics, so I’m not going to try to analyse that.
So why do governments leap at these pseudo-models with their useless (but plausible-looking) predictions?...If there are competing crystal balls from different academics, the government will simply pick the one that matches its philosophy best, and claim that it is ‘following the science’.
They leap at them for fear of the MSM accusing them of not doing anything.
I had hoped Donald Trump would be a stronger leader than that, and insisted on any model being independently and repeatedly verified before making any decision.
…………………………………………………………
It’s perfectly normal not to want to disclose 30 year old code because, as has been proven by this very review, people will look at it and criticize it as if it was modern code.
So Ferguson evidently rewrote his program to be more consistent with modern coding standards before releasing it. And probably introduced a couple of bugs in the process. Given the fact that the original code was undocumented, old, and that he was under time pressure to produce it in a hurry, it would have been strange if this didn’t introduce some bugs. This does not, per se, invalidate the model….
I disagree with your framing of the author’s other criticisms as amounting to criticism of stochastic models. It does not appear the author has an issue with stochastic models, but rather with models where it is impossible to determine whether the variation in outputs is a product of intended pseudo-randomness or whether the variation is a product of unintended variability in the underlying process.
As a side note, I currently work on a code base that is pure C and close to 30 years old. It is properly composed of manageable-sized units and reasonably organized. It also has up-to-date function specifications and decent regression tests. When this was written, these were probably cutting-edge ideas, but they clearly weren’t unknown. Since then we’ve upgraded to using current-tech compilers, source code repositories, and critical peer review of all changes.
So there really is no excuse for using software models that are so deficient. The problem is these academics are ignorant of professional standards in software development and frankly don’t care. I’ve worked with a few over the course of my career and that has been my experience every time.
I was coding on a large multi-language and multi-machine project 40 years ago. This was before Jackson Structured Programming, but we were still required to document, to modularise, and to perform regression testing as well as test for new functionality. These were not new ideas when this model was originally created.
Instead we had the politicians deferring to the ‘scientists’, who were trying out a predictive model untested against real life. That seems to have worked out about as well as if you had sacked the sales team of a company and let the IT manager run sales simulations on his own according to a theory which had been developed by his mates…
Peak deaths in NHS hospitals in England were 874 on [4/08] 08/04. A week earlier, on [4/01] 01/04, there were 607 deaths. Crude Rt = 874/607 = 1.4. On average, a patient dying on [4/08] 08/04 would have been infected c. 17 days earlier, on [3/22] 22/03. So, by [3/22] 22/03 (before the full lockdown), Rt was (only) approx 1.4. OK, so that doesn’t tell us too much, but if we repeat the calculation and go back a further week to [3/15] 15/03, Rt was approx 2.3. Another week back to [3/08] 08/03 and it was approximately 4.0. Propagating forward a week from [3/22] 22/03, Rt then fell to 0.8 on [3/29] 29/03.
So you can see that Rt fell from 4.0 to 1.4 over the two weeks preceding the full lockdown and then from 1.4 to 0.8 over the following week, pretty much following the same trend regardless.
So, using the data we can see that we could have predicted the peak before the lockdown occurred, simply using the trend of Rt.
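The commenter’s arithmetic is easy to check. Here is a minimal sketch using only the two figures quoted above; the 17-day lag and the use of a weekly death ratio as an Rt proxy are the commenter’s own assumptions.

#include <iostream>

// Crude weekly growth ratio of deaths, used above as a proxy for Rt.
double crude_rt(double deaths_this_week, double deaths_prev_week) {
    return deaths_this_week / deaths_prev_week;
}

int main() {
    const double deaths_01_04 = 607;  // NHS England hospital deaths, 01/04
    const double deaths_08_04 = 874;  // the peak, 08/04

    // Deaths lag infection by roughly 17 days, so this ratio describes
    // Rt around 22/03, just before the full lockdown.
    std::cout << "crude Rt ~ " << crude_rt(deaths_08_04, deaths_01_04) << '\n';
    return 0;
}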
In my hypothesis, this was a consequence of limited social distancing (but not full lockdown) and the virus beginning to burn itself out naturally, with very large numbers of asymptomatic infections and a degree of prior immunity.
silent one
What are the deaths of those that have died FROM covid 19, and how are those written on the death certificates? How is it that those that die of a disease other than covid 19 are also included as covid 19 deaths, when they were only infected with covid 19? As we know there are asymptomatic carriers, so there MUST be deaths where they had covid but it was not a factor in the death, yet it was included on the death certificate. The numbers of deaths attributed to covid 19 have been over-inflated. Never mind that the test is for a general coronavirus and not specific to covid 19.
………………………………..
Tom Welsh
“Flu season deaths top 80,000 last year, CDC says”
Updated 1645 GMT (0045 HKT) September 27, 2018
https://edition.cnn.com/2018/09/26/health/flu-deaths-2017-2018-cdc-bn/index.html
There are plenty of countries without lockdown to compare against. So it is not an unverifiable hypothesis.
As with “global warming”, the politicians, bureaucrats and academics are circling the wagons together to protect their interlinked interests.
AlanReynolds
Epidemic curves are flat or down in so many countries with such different mitigation policies that it’s hard to say this policy or that made a big difference, aside from two – ban all international travel by ship or airplane, and stop mass transit commuting. No U.S. state could or did do either, but island states like New Zealand could and did both. In the U.S., state policies differ from doing everything (except banning travel and transit) to doing almost nothing (9 low-density Republican states, like Utah and the Dakotas). But again, Rt is at or below 1 in almost all U.S. states, meaning the curve is flat or down. Policymakers hope to take credit for something that happened regardless of their harsh or gentle “mitigation” efforts, but it looks like something else – such as more sunshine and humidity, or the virus just weakening for unknown reasons (as SARS-1 did in the U.S. by May). https://rt.live/
LorenzoValla
As an academic, I would expect you to be appalled that the program wasn’t peer reviewed….
All of the modern standards (modularization, documentation, code review, unit and regression testing, etc.) are standards because they are necessary to create a trustworthy and reliable program. This is standard practice in the private sector because when their programs don’t work, the business fails. Another difference here is that when that business fails, the program either dies with it or is reconstituted in a corrected form by another business. In an academic setting, it’s far more likely that the failure will be blamed on insufficient funding, or that more research is required, or some other excuse that escapes blame being correctly applied…..
whatever
Academic source code is uniformly shit. It is very rarely provided, and never “peer reviewed”. “Peer review” isn’t paid; it’s an extra “voluntary activity” done in one’s free time. You seriously think scientists have so much money that they’ll spend weeks peer reviewing each others’ 15K-line files looking for bugs?
Tom
I know nothing about the coding aspects, but have long harboured suspicions about Professor Ferguson and his work. The discrepancies between his projections and what is actually observed (and he has modelled many epidemics) are beyond surreal! He was the shadowy figure, incidentally, advising the Govt. on foot and mouth in 2001, research which was described as ‘seriously flawed’, and which decimated the farming industry via a quite disproportionate and unnecessary cull of animals.
I agree with the author that theoretical biologists should not be giving advice to the Govt. on these incredibly important issues at all! Let alone treated as ‘experts’ whose advice must be followed unquestioningly. I don’t know what the Govt. was thinking of. All this needs to come out in a review later, and, in my view, Ferguson needs to shoulder a large part of the blame if his advice is found to have done criminal damage to our country and our economy. This whole business has been handled very badly, not just by the UK but everyone, with the honourable exception of Sweden.
Russ Nelson
I’m not sure that the code we can see deserves much detailed analysis, since it is NOT what Ferguson ran. It has been munged by theoretically expert programmers and yet it STILL has horrific problems.
Eric B Rasmusen
The biggest problem…is not making the code public. I’m amazed at how in so many fields it’s considered okay to keep your data and code secret. That’s totally unscholarly, and makes the results uncheckable.
Annette Jones
I am a lay person who does not understand computer modelling….but for such huge decisions to be made without adequate peer review of the data is shocking.
LorenzoValla
The bottom line is that if the recommendations from a computer program are going to be used to make decisions that significantly affect the daily lives of millions of people, the friggen program absolutely needs to be as solid as possible, which includes frequent code review, proper documentation, and in-depth testing. Then, it needs to be shared for peer review.
Anne
Here are the results of Professor Ferguson’s previous modelling efforts.
Bird Flu “200m globally” – Actual 282
Swine flu “65,000 UK” – Actual 457
Mad Cow “50-50,000 UK” – Actual 177
Yasmin Mattox
It is stunning how awful this all is. The word criminal comes to mind. Thank you so much for this assessment.
Thomas
Are the mainstream media capable of covering this? That is what frightens me. Who is going to be the first to point out that the reason sick people weren’t getting hospital beds is because the models were telling us to expect thousands more sick people than there were? How many people died because of this?
This diktat that we can’t set free young people who are not threatened by the virus because the model says hundreds of thousands would die? All nonsense.
This is the greatest academic scandal in our history.
.........
I am science trained but a HW guy, not SW. I place most of my trust in measurements, especially ones that can be reproduced by others.
The infamous “Harry_Read_Me” file contained in the original Climate Gate release springs to mind. As I recall, it was a similar tale of a technician desperately trying to make sense of terrible software and coding being used by the “Climate Scientists” – one of whom had to ask for help using Excel…
Michael Hughes
Bill Gates funds Ferguson directly and indirectly.
We don’t need one of Bill Gates’ dodgy vaccines because our immune systems learned a trick or two over the past 100 million years or so.
“$79,006,570”
March 2020 Imperial College London – Bill & Melinda Gates Foundation https://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database/Grants/2020/03/OPP1210755
Germane to any discussion of Mr. Ferguson’s COVID-19 modelling software is that, aside from its contentious abuse as the first component of an applied Hegelian dialectic in justifying a near-global lock-down leading to the destruction of national economies resulting in a global recession primarily affecting the world’s urban population as supply chains, support services and law and order disintegrate in just-in-time food and fuel delivery-dependent cities, …
“Today, 55% of the world’s population lives in urban areas …”
16.05.2018
68% of the world population projected to live in urban areas by 2050, says UN | UN DESA | United Nations Department of Economic and Social Affairs https://www.un.org/development/desa/en/news/population/2018-revision-of-world-urbanization-prospects.html
“… An economic breakdown is more than just economic. It leads quickly to a social breakdown that involves looting, random violence, fraud and decadent behavior.
The Roaring ’20s in the U.S. (with Al Capone and Champagne baths) and Weimar Germany (with riots and cabaret) are good examples.
Looting, burglary and violence in the midst of a state of emergency are the shape of things to come.
The veneer of civilization is paper-thin and easily torn. Most people don’t realize how fragile it is. But they’re going to learn that lesson, I’m afraid.
Expect social disorder to get worse long before it gets better.”
14.04.2020 (James Rickards)
Worst Recession in 150 Years – The Daily Reckoning https://dailyreckoning.com/worst-recession-in-150-years/
…there’s also a private-public sector funded rush to fast-track the production of a now 20-year-old yet still unproven vaccine technology.
From Forbes:
“… When the genomic sequence of the virus was released online by Chinese scientists on January 11, 2020, the Cambridge, Massachusetts-based Moderna team had a vaccine design ready within 48 hours. It shipped a batch of its first vaccine candidate to the National Institutes of Health for a phase one study just 42 days after that. In early March, Moderna’s mRNA vaccine, which represents an entirely new way to provide immunity to disease, was injected into humans for the first time.
That’s lightning fast. Vaccines typically take years (or in some cases, decades) to develop,…
… The speed is made possible by a new technology: mRNA vaccines, … mRNA vaccines work kind of like a computer program: After the mRNA “code” is injected into the body, it instructs the machinery in your cells to produce particular proteins. Your body then becomes a vaccine factory, producing parts of the virus that trigger the immune system. In theory, this makes them safer and quicker to develop and manufacture, …
… The prospect that Moderna may have the technology to compress years into a few months and take on a virus that has crippled the global economy has investors salivating. …
… “If it works, we might have the best vaccine technology in the world,” Bancel says.
But that’s a big “if.” No mRNA vaccine currently exists on the market, and nobody knows for sure if the technology will work, much less against this virus. To date, nobody’s been able to make a vaccine that works against a human coronavirus. …
… Bancel isn’t the only optimist. In the past 20 years, there’s been an explosion of companies developing mRNA vaccines for a large swathe of diseases, and many have turned their attention towards the COVID-19 pandemic. German company BioNTech is working with Pfizer to develop an mRNA vaccine. Human trials have already begun. Another German company, CureVac, is backed by the Gates Foundation and is expected to begin vaccine trials this summer. Lexington, Massachusetts-based Translate Bio has partnered with French pharmaceutical giant Sanofi to develop its mRNA vaccine, with human trials expected to start later this year.
… But it is still all theoretical—there aren’t any mRNA vaccines on the market for any diseases yet. When asked how we know mRNA vaccines will work, Drew Weissman, a researcher at the University of Pennsylvania School of Medicine who has spent 13 years studying the technology, answered bluntly:
“We don’t.” There have been only a handful of human trials for any mRNA infectious disease vaccine, all of which have been focused on safety. There’s yet to be a trial showing mRNA vaccines are effective and long-lasting at preventing an infectious disease.
Scientists also don’t know how fast this coronavirus will mutate, which could affect how often a new vaccine will need to be created. If the virus mutates quickly, Weissman says, “We might have to make a new coronavirus vaccine every year or every couple of years.” …
… Nevertheless, the federal government is backing mRNA vaccines with serious cash. It has pledged to give nearly $500 million to Moderna alone for its COVID-19 vaccine. To speed development, the FDA has authorized both Moderna and BioNTech to begin vaccine trials in humans before safety-testing in animals was finished. …”
08.05.2020
Fueled By $500 Million In Federal Cash, Moderna Races To Make 1 Billion Doses Of An Unproven Cure https://www.forbes.com/sites/leahrosenbaum/2020/05/08/fueled-by-500-million-in-federal-cash-moderna-races-to-make-1-billion-doses-of-an-unproven-cure/
I, for one, will decline the offer of any such vaccine, and if necessary, resist its legislated imposition on my person to the point of imprisonment in the confidence that when under-informed members of the public who naively agree to be afflicted by such quackery begin suffering ill effects in numbers too large to be ignored, I will have grounds for an appeal.
“Taking their cue from Gates they agreed that overpopulation was a priority,”
May 26, 2009
Billionaires Try to Shrink World’s Population, Report Says – The Wealth Report – WSJ https://blogs.wsj.com/wealth/2009/05/26/billionaires-try-to-shrink-worlds-population-report-says/
The now-broken Times of London link in the above Wall Street Journal article:
May 24, 2009
Billionaire club in bid to curb overpopulation – Times Online https://web.archive.org/web/20110223015213/http://www.timesonline.co.uk/tol/news/world/us_and_americas/article6350303.ece
At 3:57 (cued) Bill Gates posits that carbon emissions can be curbed in part via a population reduction approach involving vaccines etc. (Note the audience response.)
Feb 20, 2010
TED
Innovating to zero! | Bill Gates – YouTube https://www.youtube.com/watch?v=JaF-fq2Zn7I&t=3m57s
Edward Reeves
On Monday I got so angry that I created a change.org petition on this very subject. https://www.change.org/p/never-again-the-uk-s-response-to-covid-19
Added: Statistics can't “predict” viruses. The Covid-19 virus can't even be “predicted” by flu virus stats:
“Statistics do not allow predicting the behavior of a living organism, in this case the behavior of a virus. You have to start by understanding that the “goal” of a virus is not to kill but only to spread. The virus only kills unintentionally, when the living organism in which it manages to settle does not have adequate antibodies. In other words, the virus does not intend to kill its carrier, nor to make a species disappear completely…simply because it would disappear with it.
In any case, extrapolating measures used to flu epidemics applying them to the current Covid-19 epidemic is something completely absurd: the flu affects a large number of children, which does not happen with Covid-19, which-speaking in demographic terms-mainly kills people of the so-called “third age”, diabetics and with hypertension problems. The viral load of children contaminated with Covid-19 is very light, so much so that it is not even known yet if they can become contagious.
On March 22 [2020], Professor Neil Ferguson acknowledged having made his calculations of the Covid-19 epidemic based on a database of influenza epidemics from 13 years ago.” …April 19, 2020, “Covid-19: Neil Ferguson, the Lysenko of liberalism,” Voltaire, Thierry Meyssan
……………………………………………………….
Ed. note: Image of needle and syringe added by blog editor Susan.