Jekyll2021-05-18T06:34:36+00:00/feed.xmlMario Teixeira Parenteacademic websiteCAMERA Workshop2021-05-07T00:00:00+00:002021-05-07T00:00:00+00:00/posts/2021/05/07/camera-workshop<p>From April 20–22, 2021, I had the opportunity to take part in a virtual workshop on <em>Autonomous Discovery in Science and Engineering</em> (<a href="https://autonomous-discovery.lbl.gov/">website</a>) organized by the <em>Center for Advanced Mathematics for Energy Research Applications</em> (<a href="https://www.camera.lbl.gov/">CAMERA</a>) at <em>Lawrence Berkeley National Laboratory</em> (<a href="https://www.lbl.gov/">LBNL</a>).</p>
<p>I gave a talk on <em>Autonomous Experiments for Neutron Three-Axis Spectrometers (TAS) with Log-Gaussian Processes</em> in the breakout session on <em>Autonomous Discovery in Neutron Scattering</em>.
The presentation covered recent methodological advances of our group in the application of log-Gaussian processes for autonomous neutron scattering experiments.</p>
<p>The other talks either focused on physical applications or presented methodological approaches to autonomous material discovery.
Although I could not fully follow the physics parts, I got a decent impression of the problems that groups in this area are trying to solve.</p>
<p>I will provide a link to an extended abstract of our contribution as soon as it is available.
<!-- You can find an extended abstract of our contribution [here](). --></p>Bayes’ theorem and medical screening tests2021-04-03T00:00:00+00:002021-04-03T00:00:00+00:00/posts/2021/04/03/bayes-tests<p>The Coronavirus pandemic has been an ever-present topic during the last 12 months and still is.
Detection of the virus <em>SARS-CoV-2</em> relies on medical screening tests such as PCR or antigen tests.
Many of these tests have been conducted, and continue to be conducted, on a daily basis, regardless of whether the tested persons show symptoms.
Since many of the so-called nonpharmaceutical interventions are based on the number of positive tests during the last week, it is of great importance to ensure that the test results are not only reliable on the level of a single test but also meaningful as a collection.</p>
<p>The following mathematical elaboration aims for an interpretation of one of the main statistical measures that are used when it comes to assessing the performance of so-called <em>binary classification tests</em> in medicine: the <em>positive predictive value</em> (PPV).
The PPV specifies the chance that a person with a positive test is indeed infected.</p>
<p>We approach this investigation by first explaining <em>Bayes’ theorem</em>, a well-known and famous result from Bayesian statistics.
With this, we will derive an expression for an upper bound of the PPV that gives insight into its nature with respect to two other important quantities, the <em>false positive rate</em> and the <em>prevalence</em>.</p>
<h1 id="bayes-theorem">Bayes’ theorem</h1>
<p>The famous theorem of Bayes, or just <em>Bayes’ theorem</em>, specifies how to “update” the chance (also called the <em>degree of belief</em> in the Bayesian view of probability) of a random event \(A\) after observing another random event \(B\) with \(\mathbf{P}(B)>0\), where \(\mathbf{P}(B)\) denotes the probability or chance of the event \(B\) occurring.</p>
<p>The theorem states that
\begin{equation}
\mathbf{P}(A\,\vert\,B) = \frac{\mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A)}{\mathbf{P}(B)}
\end{equation}
and can be informally interpreted by saying that the <em>prior probability</em> \(\mathbf{P}(A)\) is updated by the term \(\mathbf{P}(B\,\vert\,A)/\mathbf{P}(B)\) to the <em>posterior probability</em> \(\mathbf{P}(A\,\vert\,B)\) after observing that \(B\) occurred.
This form of Bayes’ theorem follows directly from the definition of conditional probability and the symmetry of the \(\cap\)-operation (intersection).</p>
<p>We can further concretize the above expression by regarding \(\mathbf{P}(B)\) as a so-called <em>marginal probability</em> and using well-known equalities.
That is, we can write
\begin{align}
\mathbf{P}(B) &= \mathbf{P}(B \cap A) + \mathbf{P}(B \cap \overline{A}) \newline
&= \mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A}) \cdot \mathbf{P}(\overline{A}) \newline
&= \mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A}) \cdot (1-\mathbf{P}(A)) \newline
&= [\mathbf{P}(B\,\vert\,A) - \mathbf{P}(B\,\vert\,\overline{A})] \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A}),
\end{align}
where \(\overline{A}\) denotes the event of \(A\) <em>not</em> occurring.</p>
<p>If additionally \(\mathbf{P}(A)>0\), we get that
\begin{align}
\mathbf{P}(A\,\vert\,B) &= \frac{\mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A)}{[\mathbf{P}(B\,\vert\,A) - \mathbf{P}(B\,\vert\,\overline{A})] \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A})} \newline
&= \frac{\mathbf{P}(B\,\vert\,A)}{\mathbf{P}(B\,\vert\,A) - \mathbf{P}(B\,\vert\,\overline{A}) + \frac{\mathbf{P}(B\,\vert\,\overline{A})}{\mathbf{P}(A)}}.
\end{align}</p>
<p>Furthermore, since \(\frac{\alpha}{\alpha+\beta} \leq \frac{1}{1+\beta}\) for values \(\alpha,\beta\geq0\) with \(\alpha\leq1\) and \(\alpha+\beta>0\), it holds that
\begin{equation}
\mathbf{P}(A\,\vert\,B) \leq \frac{1}{1 - \mathbf{P}(B\,\vert\,\overline{A}) + \frac{\mathbf{P}(B\,\vert\,\overline{A})}{\mathbf{P}(A)}}.
\end{equation}</p>
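<p>The inequality can be sanity-checked numerically. The following minimal sketch (the helper names <code>posterior</code> and <code>posterior_bound</code> are ours, purely for illustration) draws random conditional probabilities and confirms that the exact posterior never exceeds the bound:</p>

```python
import random

def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Exact P(A|B) via Bayes' theorem with the marginal P(B)."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)
    return p_b_given_a * p_a / p_b

def posterior_bound(p_a, p_b_given_not_a):
    """Upper bound 1 / (1 - P(B|not A) + P(B|not A) / P(A))."""
    return 1.0 / (1.0 - p_b_given_not_a + p_b_given_not_a / p_a)

random.seed(0)
for _ in range(10_000):
    p_a = random.uniform(0.001, 0.999)
    p_b_a = random.uniform(0.001, 0.999)
    p_b_na = random.uniform(0.001, 0.999)
    # the exact posterior never exceeds the derived upper bound
    assert posterior(p_a, p_b_a, p_b_na) <= posterior_bound(p_a, p_b_na) + 1e-12
```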
<h1 id="positive-predictive-value-of-medical-screening-tests">Positive predictive value of medical screening tests</h1>
<p>Let us now apply the above result to medical screening tests to get some insight into the <em>positive predictive value</em>.</p>
<p>For this, we denote the event of a person being infected as
\begin{equation}
I := \lbrace \text{Person is infected} \rbrace.
\end{equation}
The event \(I\) replaces what was denoted by the event \(A\) above.</p>
<p>The event that a test of this person is positive is denoted as
\begin{equation}
T_+ := \lbrace \text{Test of person is positive} \rbrace.
\end{equation}
The event \(T_+\) replaces what was denoted by the event \(B\) above.</p>
<p>Hence, the expression \(\mathbf{P}(I\,\vert\,T_+)\) denotes the probability that a person is indeed infected after getting a positive test result.</p>
<p>Applying the upper bound from above, we get that
\begin{equation}
\mathbf{P}(I\,\vert\,T_+) \leq \frac{1}{1 - \mathbf{P}(T_+\,\vert\,\overline{I}) + \frac{\mathbf{P}(T_+\,\vert\,\overline{I})}{\mathbf{P}(I)}},
\end{equation}
where \(\overline{I}\) denotes the event that the person is <em>not</em> infected.
The term \(\mathbf{P}(T_+\,\vert\,\overline{I})\) is also called the <em>false positive rate</em> (FPR) of the test and represents the ratio between the number of falsely positive tests and the number of noninfected persons.
The <em>prevalence</em> is denoted by \(\mathbf{P}(I)\) and specifies the proportion of infected persons in the whole population.</p>
<p>Finally, repeating the above inequality with the mentioned terms, we get that
\begin{equation}
\mathbf{P}(I\,\vert\,T_+) \leq \frac{1}{1 - \text{FPR} + \frac{\text{FPR}}{\text{Prevalence}}}.
\end{equation}</p>
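<p>To get a feeling for the numbers, the bound can be evaluated directly; the function name below is our own shorthand, not part of the post's original material:</p>

```python
def ppv_upper_bound(fpr, prevalence):
    """Upper bound on P(I | T+) in terms of FPR and prevalence."""
    return 1.0 / (1.0 - fpr + fpr / prevalence)

# Low prevalence makes even a small FPR costly:
# with FPR = 1% and prevalence = 1%, the PPV cannot exceed about 50%.
print(round(ppv_upper_bound(0.01, 0.01), 3))  # -> 0.503
```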
<p>The upper bound, viewed as a function of the FPR and the prevalence, is displayed in the following figure.</p>
<center><img src="/assets/images/fpr-preval-ppv.svg" /></center>
<p>Note that the \(y\)-axis has a <em>log</em> scale.</p>
<h1 id="interpretation">Interpretation</h1>
<p>The main observation from the above figure is that the PPV can get quite low if the FPR and the prevalence are unfavorably related.
More concretely, if the prevalence is low, say \(\text{prevalence}\approx1\%\),
then the test needs to be very accurate in the sense that its FPR should be close to zero; otherwise, the test risks becoming unreliable and invalid, which can lead to false assessments of the public health situation and thus provide incorrect information to policy makers.</p>Scientific Computing: attempting a definition2021-04-02T00:00:00+00:002021-04-02T00:00:00+00:00/posts/2021/04/02/scien-comp-def<p>First of all, “Scientific Computing” (SC) is an accepted term for a certain area of research, particularly among mathematicians and computer scientists, but also in the scientific community in general.
However, scientists seem to have varying notions of the term, even if they come from similar disciplines.
This text attempts to show why a clear definition of the term is not straightforward, but finally dares to give exactly that: a fairly clear (objective) definition.</p>
<p>Let us start with an obvious observation.
The term “Scientific Computing” consists of two words: “scientific” and “computing”.
We do not try to explain both words separately.
For the first, we would have to find a definition of “science”, a question that has been around for centuries and that philosophy, more precisely <a href="https://en.wikipedia.org/wiki/Philosophy_of_science"><em>philosophy of science</em></a>, tries to answer.</p>
<p>What we are rather looking for is a definition of the term “Scientific Computing” (as an interplay of both words) in which the word “scientific” is related to “computing”.
Hence, following the language, SC is a <em>particular kind of computing</em> that is <em>scientifically sound</em>, accepts the <em>scientific method</em>, and is thus open to criticism and discussion by the scientific community.</p>
<p>As opposed to these rather trivial observations, the more difficult question to answer is what SC <em>really does</em>, in the sense of questions like</p>
<ul>
<li>which areas of mathematics and computer science are used in SC and how they interact,</li>
<li>which problems are solved by SC and how.</li>
</ul>
<p>Attempts to define SC have often followed questions of this type.
However, doing so risks making the definition subjective too quickly.
For example, a statistician may answer the above questions in ways that differ substantially from the answers of a numerical analyst or a computer scientist, yet each is convinced that their own description is the more precise one.
This does not get us very far.</p>
<p>To find a more objective definition of SC, we need to circumvent classifications of the mentioned type.
We base our attempt of a definition on what we want to call the <em>three pillars of SC</em>:</p>
<ol>
<li>Theory,</li>
<li>Methodology,</li>
<li>Implementation.</li>
</ol>
<p>For this attempt, we need to agree on the following: “SC tries to solve problems that can be solved by computing, i.e., by using a computer.”
Such problems are called <em>computational problems</em> in the remainder and often involve <em>mathematical models</em>.</p>
<p>Now, the main point of our definition is that neither finding a method or an algorithm alone (methodology), nor proving a numerical result for its own sake (theory), nor an efficient implementation of an algorithm in a suitable programming language (implementation) without a connection to the former two tasks is what SC does.
Rather, it is the (often complex) interplay of all three parts.</p>
<p><br /><center><img src="/assets/images/sc-pillars.svg" /></center><br /></p>
<p>The main purpose of SC certainly is finding a method or algorithm that solves a computational problem.
However, following our definition, only the consideration and connection of all three aspects makes the approach a scientific computing approach.</p>
<h1 id="1-theory">1. Theory</h1>
<p>Theory, as we use the term in this context, leads to a <em>formal verification</em> of the developed algorithm.
For this, it utilizes a reasonable (mathematical and logical) formalism and useful notation to show that the algorithm is indeed solving the given computational problem.
The quality of the solution can be demonstrated as well.
As an example, numerical analysts can provide promising convergence results or insightful upper bounds on approximation errors.
Additionally, formal formulations can also lead to useful abstractions which potentially broadens the applicability of the method.</p>
<p>Most of the theory is done by mathematicians, or at least in a mathematical way.
Mathematical areas that are often applied are, e.g., linear algebra, calculus, numerical mathematics, probability theory, and statistics.
However, theoretical areas of computer science, such as computability theory or complexity theory, can also play a role here, depending on the concrete case.</p>
<h1 id="2-methodology">2. Methodology</h1>
<p>As mentioned, this is certainly the core of the scientific computing approach.
The main job of this part is the development of methods, algorithms, or techniques to solve the computational problem at hand.
Preferably, the approaches should be described algorithmically so that others can understand them.
It is then the theorist’s task to provide a proof of the quality of the approach to the community.
The implementation in software can get started as soon as there is a reasonable description of the method and a sufficiently large chance of success.</p>
<p>In our view, it is indeterminate whether the methodological part is dominated by mathematics or computer science.
We find that both disciplines can equally contribute here.</p>
<h1 id="3-implementation">3. Implementation</h1>
<p>Implementing a proposed method or algorithm is software development, more or less.
Of course, if the problem is computationally expensive, techniques of <em>high performance computing</em>, which we also see as part of implementation, should be applied.
It is the job of the software developer (or computer scientist) to produce code that efficiently executes the idea of the algorithm.
In this respect, software validation by suitable tests showing the correctness of the implementation is also necessary at this point.</p>
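<p>To illustrate what such a validating test can look like, here is a small hypothetical example: a composite trapezoidal rule checked against integrals whose values are known in closed form. The routine and its tests are our own sketch, not taken from any particular project:</p>

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule for f on [a, b] with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Validate against a known antiderivative: the integral of x^2 on [0, 1] is 1/3.
assert abs(trapezoid(lambda x: x * x, 0.0, 1.0, 1000) - 1.0 / 3.0) < 1e-6
# The rule is exact for affine functions, a second sanity check.
assert abs(trapezoid(lambda x: 2.0 * x + 1.0, 0.0, 2.0, 4) - 6.0) < 1e-12
```

Tests of this kind tie the implementation back to the theory pillar: the tolerance in the first assertion is justified by the known \(\mathcal{O}(h^2)\) error bound of the trapezoidal rule.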
<p>Since this part is mostly about software development, it is certainly dominated by computer science.
Of course, programming can also be done by mathematicians, who then, however, act as software developers.</p>
<p>In theory, all three parts can be done by one and the same person.
In most cases, however, more than one scientist is involved, since approaches can consist of multiple sufficiently complex subtasks that need to be handled by specialists.</p>
<h1 id="distinction-from-computational-science">Distinction from <em>Computational Science</em></h1>
<p>In contrast to a definition from <a href="https://en.wikipedia.org/wiki/Computational_science">Wikipedia</a>, which does <em>not</em> distinguish between SC and <em>Computational Science</em> (ClS; to be distinguished from CS, which often stands for computer science), we would like to promote such a distinction.</p>
<p>In our opinion, the focus of SC lies on the computing or computation aspect.
In other words, we have a computational problem that one tries to solve scientifically, guided by the three pillars mentioned above.</p>
<p>On the other hand, ClS, as the term says, is doing <em>science</em>, science in a <em>computational</em> manner.
This means that ClS tries to answer questions from a certain scientific area and hence always has the application in mind.
For example, problems from astrophysics are nowadays often solved computationally by simulations involving mathematical models that aim to reflect reality.
We can thus say that “SC is applied to do ClS” in this case.
Of course, computational problems in SC can be motivated by questions from ClS or from a certain scientific discipline directly, but they do not have to be.
Problems in SC can also emerge from other problems in SC.</p>
<h1 id="summary">Summary</h1>
<p>This text tried to formulate a new definition of <em>Scientific Computing</em>.
Existing approaches are often based on questions like which mathematical or computer science areas contribute to SC, which is rather subjective.
We aimed for establishing objectivity in the new definition by following another approach called <em>the three pillars of SC</em>: theory, methodology, implementation.
Finally, an explicit distinction from <em>Computational Science</em> was made, which, however, conflicts with other attempts; see, e.g., <a href="https://en.wikipedia.org/wiki/Computational_science">Wikipedia</a>.</p>UQ course at HM2021-04-01T00:00:00+00:002021-04-01T00:00:00+00:00/posts/2021/04/01/uq-course-hm<p>During this summer, I am teaching <a href="https://zpa.cs.hm.edu/public/module/374/"><em>Fundamentals of Uncertainty Quantification (UQ)</em></a> in a course for Bachelor students at the <a href="https://www.cs.hm.edu/">Department of Computer Science and Mathematics</a> (FK07) of the <a href="https://www.hm.edu/">University of Applied Sciences Munich</a> (HM).</p>
<p>At most schools, classes on UQ are only part of Master’s programs since they require decent knowledge of and education in various mathematical (linear algebra, calculus, probability theory, statistics, …) and computer science (programming, algorithms, data structures, …) subdisciplines.</p>
<p>However, we decided to offer a Bachelor’s course that introduces fundamental (as opposed to advanced) aspects of the field.
The introductory classes discuss motivating examples of why UQ actually matters and try to give a reasonable overview of the field and a fair description of the notion of <em>uncertainty</em>.
In a second chapter, we lay the basis for forthcoming contents, i.e., we repeat fundamental and necessary definitions and results of probability theory and statistics.
Basic random number sampling and Monte Carlo-type methods, along with the more advanced <em>Latin Hypercube Sampling</em> (LHS), are explained in chapter 3.
The final and main part of the course is chapter 4 in which we introduce techniques for <em>global sensitivity analysis</em> of mathematical models.
A table of contents and the models used for demonstration are placed below this text.</p>
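<p>To give a flavor of chapter 3, the following minimal sketch generates a Latin Hypercube Sample on the unit square. It is our own illustration, not the course's actual material:</p>

```python
import random

def latin_hypercube(n_samples, n_dims, seed=42):
    """Latin Hypercube Sample on [0, 1]^d: one point per stratum per dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # one uniform draw from each of the n equally probable strata ...
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        # ... paired with the other dimensions in shuffled order
        rng.shuffle(strata)
        for i, value in enumerate(strata):
            samples[i][d] = value
    return samples

points = latin_hypercube(10, 2)
# each stratum [i/10, (i+1)/10) holds exactly one coordinate in every dimension
for d in range(2):
    assert sorted(int(p[d] * 10) for p in points) == list(range(10))
```

Compared with plain Monte Carlo sampling, this stratification guarantees that every marginal range of the input space is covered, which typically reduces the variance of estimated output statistics.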
<p>Besides discussing the contents in a formal way, students can get their hands on some assignments as part of their practical training.
They implement methods from chapter 4 and test them on the SEIR model, a compartmental model from epidemiology for the spread of infectious diseases.</p>
<p>Other, more advanced but common UQ approaches such as <em>Forward UQ</em> or <em>Inverse UQ</em> are not discussed in this course.
They could be part of courses UQ II or UQ III, which, however, would be better suited to a Master's program.</p>
<p>I am very happy to get the opportunity from FK07 and HM to teach this course which is actually quite related to topics of my dissertation.
Having influence on young people and educating them to critically think about underlying assumptions and their consequences from a formal, informal, and intuitive perspective gives me great pleasure and fulfills me.</p>
<p><strong>Table of contents</strong>:</p>
<ol>
<li><strong>Introduction</strong><br />
1.1. Motivation<br />
1.2. Types and sources of uncertainties</li>
<li><strong>Fundamentals in probability theory and statistics</strong><br />
2.1. Random variables<br />
2.2. Expectation value and (co)variance<br />
2.3. Quantiles<br />
2.4. Important distributions<br />
2.5. Statistical estimators</li>
<li><strong>Sampling strategies</strong><br />
3.1. Pseudo-random number sampling<br />
3.2. Monte Carlo simulations<br />
3.3. Latin Hypercube Sampling (LHS)</li>
<li><strong>Global sensitivity analysis</strong><br />
4.1. Primitive approach<br />
4.2. Partial rank correlation coefficients<br />
4.3. Sobol indices</li>
</ol>
<p><strong>Models</strong>:</p>
<ul>
<li>Predator-prey model</li>
<li>Compartment model from epidemiology</li>
</ul>Start at JCNS-4 with AINX2020-12-04T00:00:00+00:002020-12-04T00:00:00+00:00/posts/2020/12/04/jcns-start<p>It is now two months since I started my Postdoc position at the <a href="https://www.fz-juelich.de/jcns/EN/Home/home_node.html">Jülich Centre for Neutron Science</a> (JCNS).
JCNS is an institute of the <a href="https://www.fz-juelich.de/">Forschungszentrum Jülich</a> which itself is part of the <a href="https://www.helmholtz.de/">Helmholtz association</a>.
More concretely, I am working in the <a href="https://www.fz-juelich.de/jcns/EN/Leistungen/ScientificComputing/_node.html">Scientific Computing group</a> of the JCNS-4 outstation at the <a href="http://www.frm2.tum.de/en/">FRM II</a> which is the TUM neutron source.</p>
<p>I was hired to contribute to the project <em>AINX</em> (<strong>A</strong>rtificial <strong>I</strong>ntelligence for <strong>N</strong>eutron and <strong>X</strong>-ray scattering) which investigates machine learning techniques on their use for neutron and X-ray scattering experiments.</p>
<p>The project is divided into two main phases.</p>
<p><strong>Phase 1:</strong> Together with instrument scientists for the triple-axis spectrometer <a href="https://wiki.mlz-garching.de/panda:index"><em>PANDA</em></a> (Twitter: <a href="https://twitter.com/PandaMlz">@PandaMlz</a>), my principal investigator Dr. Marina Ganeva and I try to guide corresponding experiments by using Gaussian process regression.
<a href="https://scikit-learn.org/stable/modules/gaussian_process.html">Gaussian processes</a> are capable of quantifying uncertainties in function approximation and, hence, they can provide reasonable suggestions for informative measurements locations, namely that with highest uncertainty.</p>
<p><strong>Phase 2:</strong> Many neutron experiments are disrupted by unfavorable artifacts like noise or background signals, spurious peaks, and others.
We aim to train neural networks such that they are able to uncover informative data by removing the mentioned disruptions.
More details need to be worked out when it comes to implementing this plan.</p>
<p>I am looking forward to all the new things I can learn and accomplish in the time ahead.
In particular, the highly interdisciplinary flavor of this project, working in a team with scientists from various backgrounds, will be interesting and fun.</p>PhD defense: Passed2020-09-17T00:00:00+00:002020-09-17T00:00:00+00:00/posts/2020/09/17/phd-defense<p>I am very happy to write that I finally passed my PhD defense.</p>
<p>The defense consisted of a 25-minute talk on the main outcomes of my research and a subsequent oral examination on the contents of the dissertation.</p>
<p>I want to thank the reviewers, examiners, and the chair of the examination board for their participation and interest.</p>Submission of my PhD thesis2020-05-25T00:00:00+00:002020-05-25T00:00:00+00:00/posts/2020/05/25/subm-thesis<p>I finally managed to submit my PhD thesis “<em>Active Subspaces in Bayesian Inverse Problems</em>”.</p>
<p>The thesis is now going to be reviewed by two reviewers.
If the reviews are then accepted by the department, we can conduct the defense, which is the final major step toward graduation.</p>
<p>EDIT: A final version of the thesis is available at the <a href="https://mediatum.ub.tum.de/?id=1546065">TUM University library</a>.</p>Sensitivities in SEIR models: a (very) quick investigation2020-04-18T00:00:00+00:002020-04-18T00:00:00+00:00/posts/2020/04/18/seir-sensit<p>The COVID-19 pandemic recently caused, and still causes, major problems in several respects.
Also, (mathematical) modelers face difficulties in simulating and predicting the final size of the pandemic.</p>
<p>The dynamics of the pandemic are often simulated by compartmental models.
Although they are known to contain some uncertainties, they can be utilized to reliably demonstrate the effect of intervention strategies.</p>
<h1 id="goal">Goal</h1>
<p>The goal of this post is to show that a particular compartmental model, the <em>SEIR model</em>, is not useful for prediction purposes due to high sensitivities in its model parameters, which is of interest since some of the more sophisticated models are based on SEIR.
For example, the COVID-19 model of the German <a href="https://www.rki.de/"><em>Robert Koch-Institut</em></a> (RKI) is an adjusted SEIR model with more compartments to reflect the complexity of the COVID-19 pandemic; see <a href="https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Modellierung_Deutschland.pdf">their publication</a>.</p>
<p>We assume that the reader is already familiar with the SEIR model and some statistics.
A short description can be seen on <a href="https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SEIR_model">Wikipedia</a>.</p>
<p><strong>Remark</strong>.
This post is <em>not</em> a scientific statement.
It only/superficially describes the result of a (very) quick investigation of SEIR parameter sensitivities that the author conducted in his spare time.</p>
<h1 id="seir-model">SEIR model</h1>
<p>As a reminder, SEIR (excluding births and deaths) describes the dynamics of an infectious disease with the ODE system
\begin{align}
\frac{dS}{dt} &= -\beta S \frac{I}{N}, \newline
\frac{dE}{dt} &= \beta S \frac{I}{N} - \alpha E, \newline
\frac{dI}{dt} &= \alpha E - \gamma I, \newline
\frac{dR}{dt} &= \gamma I.
\end{align}</p>
<p>It models four compartments (susceptibles – <strong>S</strong>, exposed – <strong>E</strong>, infectious – <strong>I</strong>, removed – <strong>R</strong>) and the transitions between them involving model parameters for transition rates.</p>
<p><br /><center><img src="/assets/images/post-seir-sensit/compartments.png" /></center><br /></p>
<p>The initial conditions for the ODE system above are
\begin{align}
S(0) &= N-I(0), \newline
E(0) &= 0, \newline
I(0) &= I_0, \newline
R(0) &= 0,
\end{align}
where \(N\) is the (fixed) total number of individuals.
Note that
\begin{equation} S(t)+E(t)+I(t)+R(t)=N\end{equation}
for all \(t\geq0\).</p>
<p><strong>Remark</strong>.
The unit of time \(t\) is <em>weeks</em>.</p>
<p>The model parameters are:</p>
<ul>
<li>\(\beta\) – transmission rate (average number of contacts per person per time),</li>
<li>\(\alpha\) – latency rate, or equivalently, \(\alpha^{-1}\) – mean duration of the latency period,</li>
<li>\(\gamma\) – recovery rate, or equivalently, \(\gamma^{-1}\) – mean duration of the infection,</li>
<li>\(I_0\) – initial number of infections.</li>
</ul>
<p>One important characteristic of an infection is the so-called <a href="https://en.wikipedia.org/wiki/Basic_reproduction_number"><em>basic reproduction number</em></a> denoted by \(\mathcal{R}_0\).
It indicates the expected number of direct infections caused by exactly one case in a population where all other individuals are susceptible.
For the SEIR model, it can be computed as
\begin{equation}\mathcal{R}_0=\frac{\beta}{\gamma}.\end{equation}</p>
<h1 id="rkis-assumptions">RKI’s assumptions</h1>
<p>In <a href="https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Modellierung_Deutschland.pdf">their publication</a>, RKI makes the following assumptions:</p>
<ul>
<li>\(\mathcal{R}_0=2\),</li>
<li>\(\alpha^{-1}=3/7\),</li>
<li>\(\gamma^{-1}=9/7\),</li>
<li>\(I_0=1000\).</li>
</ul>
<p>It follows that \(\beta = 14/9\).</p>
<h1 id="sensitivity-study">Sensitivity study</h1>
<p>In the following, we additionally assume a total population size of \(N=80 \cdot 10^6 = 80\text{ million}\).</p>
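<p>With these assumptions, the nominal model can be integrated numerically. The sketch below is a simplified stand-in for the code in the repository linked at the end of this post; it uses a classical Runge-Kutta scheme and the per-week rates \(\alpha=7/3\) and \(\gamma=7/9\) implied by the values above:</p>

```python
def simulate_seir(beta, alpha, gamma, i0, n_pop, t_end=60, dt=0.01):
    """Return I(t) at integer weeks t = 0, ..., t_end (classical RK4 in time)."""
    def rhs(y):
        s, e, i, _ = y
        new_exposed = beta * s * i / n_pop
        return (-new_exposed,
                new_exposed - alpha * e,
                alpha * e - gamma * i,
                gamma * i)

    y = (n_pop - i0, 0.0, i0, 0.0)  # (S, E, I, R) at t = 0
    weekly_infectious = [float(i0)]
    steps_per_week = round(1.0 / dt)
    for _ in range(t_end):
        for _ in range(steps_per_week):
            k1 = rhs(y)
            k2 = rhs(tuple(a + 0.5 * dt * b for a, b in zip(y, k1)))
            k3 = rhs(tuple(a + 0.5 * dt * b for a, b in zip(y, k2)))
            k4 = rhs(tuple(a + dt * b for a, b in zip(y, k3)))
            y = tuple(a + dt / 6.0 * (b1 + 2 * b2 + 2 * b3 + b4)
                      for a, b1, b2, b3, b4 in zip(y, k1, k2, k3, k4))
        weekly_infectious.append(y[2])
    return weekly_infectious

# RKI-style assumptions: R0 = 2 with alpha = 7/3 and gamma = 7/9 (per week)
infectious = simulate_seir(beta=14 / 9, alpha=7 / 3, gamma=7 / 9,
                           i0=1000, n_pop=80e6)
peak_week = max(range(len(infectious)), key=infectious.__getitem__)
```

Running the perturbation study then amounts to repeating this simulation for parameter vectors \(\theta\) sampled from \(\mu\), as described next.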
<p>We set
\begin{equation}\theta = (\beta, \alpha, \gamma, I_0)^\top \in \mathbf{R}^4\end{equation}
as the parameter vector and regard it as a <em>random</em> vector following a uniform distribution \(\mu=\mathcal{U}(R)\) on a rectangle \(R\) such that
\begin{equation}\theta_i \in [\theta_i^-,\theta_i^+]\end{equation}
for all \(i=1,\ldots,4\).
The boundaries of the rectangle are determined by a perturbation of \(\pm p\%\) of the RKI parameters above.
For example,
\begin{equation}
\beta^{\pm} = 14/9 \cdot \left(1 \pm \frac{p}{100}\right).
\end{equation}</p>
<p>Let us set the simulation time to \(T=60\text{ [weeks]}\) and define two maps.
The map
\begin{equation}
\mathcal{G}_1(\theta) := (I(t))_{t=0,1,\ldots,T}
\end{equation}
takes a particular parameter \(\theta\) and computes the corresponding number of infectious individuals for times \(t=0,1,\ldots,T\).
Additionally, the map
\begin{equation}
\mathcal{G}_2(\theta) := \mathop{\mathrm{arg\,max}}_{t=0,1,\ldots,T}{I(t)}
\end{equation}
computes the peak time of the infectious compartment.</p>
<p>For a fixed \(p\in[0,100]\), we investigate the two corresponding distributions, \(\mu(\mathcal{G}_1^{-1}(\cdot))\) and \(\mu(\mathcal{G}_2^{-1}(\cdot))\), by sampling \(M=1000\) times from \(\mu\) and computing \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) for each sample.</p>
<p>For a perturbation of \(p=5\%\), the (approximate) distributions are plotted in the following figure.</p>
<center><img src="/assets/images/post-seir-sensit/pert_5perc.svg" /></center>
<p>On the left, we see the distribution \(\mu(\mathcal{G}_1^{-1}(\cdot))\) with its mean, median, and a 95% quantile band, i.e. the band between the 2.5% and 97.5% quantile.
The right plot displays the distribution of the peaks.
The corresponding 95% quantile interval here is \([20, 24]\).</p>
<p>The same quantities are plotted for \(p=10\%\) in the following figure.</p>
<center><img src="/assets/images/post-seir-sensit/pert_10perc.svg" /></center>
<p>The 95% quantile interval for the infectious peaks on the right is \([19, 27]\).</p>
<p><strong>Remark</strong>.
This is not a serious sensitivity analysis.
There might be parameters that are more sensitive than others which is not visible by the investigated quantities and plots.</p>
<h1 id="conclusion">Conclusion</h1>
<p>The predictions of SEIR models are subject to uncertainties caused by sensitivities in its parameters.</p>
<p>For example, a perturbation of \(p=10\%\), which is likely to occur in practice, causes an uncertainty in the peak time of infectious individuals of about 8 weeks in the sense of a 95% quantile interval.</p>
<h1 id="source-code">Source code</h1>
<p>The source code to reproduce the above figures is put in a repository at <a href="https://bitbucket.org/m-parente/uq-tools/src/master/examples/epidemiology/">bitbucket</a>.</p>
<p>Since the samples for the plotted distributions are independent, we computed the corresponding ODE solutions in parallel using a program called <a href="https://github.com/TACC/launcher">launcher</a>.</p>Published: Generalized bounds for active subspaces2020-02-18T00:00:00+00:002020-02-18T00:00:00+00:00/posts/2020/02/18/asm-poincare-pub<p>I am very proud to announce that our article <em>Generalized bounds for active subspaces</em> by <em>Jonas Wallin</em>, <em>Barbara Wohlmuth</em>, and me was accepted and published in the <a href="https://projecteuclid.org/euclid.ejs"><em>Electronic Journal of Statistics</em></a>, which is open access.</p>
<p>I explained <a href="/posts/2019/10/06/asm-poincare-prepr">here</a> (<em>v1</em>) and <a href="/posts/2020/02/03/asm-poincare-rev">here</a> (<em>v2</em>, revised) what the article is about.</p>
<p><strong>Journal link:</strong> <a href="https://doi.org/10.1214/20-EJS1684">doi:10.1214/20-EJS1684</a><br />
<strong>arXiv link:</strong> <a href="https://arxiv.org/abs/1910.01399">arXiv:1910.01399</a></p>Revised preprint: Generalized bounds for active subspaces2020-02-03T00:00:00+00:002020-02-03T00:00:00+00:00/posts/2020/02/03/asm-poincare-rev<p><em>Jonas Wallin</em>, <em>Barbara Wohlmuth</em>, and I put a revised version of our article <em>Generalized bounds for active subspaces</em> on the <a href="https://arxiv.org/abs/1910.01399">arXiv</a>.
The main changes consist of a formalization of our results into a theorem/proof style, the consideration of a particular supremum (more below), and a revision of the section on future work with MGH distributions (multivariate generalized hyperbolics).</p>
<p>In the former version, our counterexample to existing theoretical results considered an <em>arbitrary</em> orthogonal transformation of the input variables which, however, had previously been used as one particular, explicitly defined transformation.
Since the related quantities appear in error bounds, we now consider their supremum over the set of all orthogonal matrices, which makes it valid for us to keep regarding arbitrary transformations.
In fact, we should justify why it is enough in our case to consider only rotations, a subset of the orthogonal transformations.</p>
<p>Finally, I want to thank both of my co-authors for their feedback and assistance in revising this manuscript.</p>