Mario Teixeira Parente, academic website (Jekyll feed, generated 2022-03-31)

<h1>UQ course at HM (2022-03-08, /posts/2022/03/08/uq-course-hm)</h1>
<p>During the coming summer, I will again give lectures on <a href="https://zpa.cs.hm.edu/public/module/374/"><em>Fundamentals of Uncertainty Quantification (UQ)</em></a> in a course for Bachelor students at the <a href="https://www.cs.hm.edu/">Department of Computer Science and Mathematics</a> (FK07) of the <a href="https://www.hm.edu/">University of Applied Sciences Munich</a> (HM).</p>
<p>The motivation behind this course and its contents can be found in a <a href="/posts/2021/04/01/uq-course-hm">post from last year</a>.</p>
<p>I am looking forward to giving these lectures again and meeting new interested students!</p>

<h1>Published: Benchmarking autonomous scattering experiments illustrated on TAS (2022-02-08, /posts/2022/02/08/benchmarking-scattering-pub)</h1>
<p>I am happy to announce my first article publication as a postdoc at the Jülich Centre for Neutron Science (JCNS).</p>
<p>My colleagues and I propose a benchmarking procedure that captures the essential components of measuring performance in autonomous scattering experiments.
The procedure is designed as a cost-benefit analysis and illustrated in the setting of <a href="https://en.wikipedia.org/wiki/Neutron_triple-axis_spectrometry">three-axes spectrometry</a> (TAS).</p>
<p>We are curious about the comments and feedback from the community and open to a critical discussion of our ideas.</p>

<h1>Mentor for the Max Weber Program (2021-10-01, /posts/2021/10/01/mentor-mwp)</h1>
<p>I feel honoured to announce that I became a “Mentor” in the <a href="https://www.elitenetzwerk.bayern.de/en/home/funding-programs/max-weber-program">Max Weber Program of the State of Bavaria</a> (Max Weber-Programm Bayern, MWP), which awards scholarships providing financial and non-material support to promising students.
During my time as a student, I was lucky to be part of this program myself and benefited a lot from its offerings.</p>
<p>Now, as an alumnus, I have the honour and responsibility of supporting current scholarship holders in my own mentoring group, which mainly consists of computer science and mathematics students from universities in Munich.
The idea of the mentoring format is that the mentor stays in touch with the whole group (together or individually) on a regular basis so that everybody gets the chance to exchange experiences or discuss general topics from academic life.</p>
<p>I am looking forward to the meetings with the students and hope to support them in their studies, but I also feel that they will certainly be a source of inspiration for me.</p>

<h1>Linear algebra course at HM (2021-09-14, /posts/2021/09/14/linalg-course-hm)</h1>
<p>From October 2021 to January 2022, I will be part of the first semester linear algebra course at the <a href="https://www.cs.hm.edu/en/home/index.en.html">Department of Computer Science and Mathematics</a> of the <a href="https://www.hm.edu/en/index.en.html">Munich University of Applied Sciences</a>.</p>
<p>As a team, <a href="https://www.cs.hm.edu/die_fakultaet/ansprechpartner/professoren/koester/index.de.html">Prof. Köster</a>, <a href="https://www.cs.hm.edu/die_fakultaet/ansprechpartner/professoren/ruckert/index.de.html">Prof. Ruckert</a>, and I will introduce freshmen (“Erstsemester”) to the basics of this beautiful subject.
Our list of contents is based on the famous <a href="https://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/">video lectures</a> of <a href="http://www-math.mit.edu/~gs/">Gilbert Strang</a> who teaches linear algebra from a more practical point of view and hence avoids becoming too formal too quickly.
This approach fits perfectly with the general program of the department, i.e., the emphasis lies on the <em>application</em> of concepts rather than on their theory.</p>
<p>The core concepts that we would like them to learn and experience are the following:</p>
<ul>
<li>Linear systems and matrices (Gaussian elimination, LU decomposition, inversion)</li>
<li>Basis and dimension of a subspace (linear independence, span)</li>
<li>The four fundamental subspaces of a matrix</li>
<li>Orthogonality and projections (least squares, Gram-Schmidt)</li>
<li>Determinants</li>
<li>Eigenvalues and eigenvectors (diagonalization)</li>
<li>Complex numbers</li>
</ul>
<p>The lectures and tutorials will be offered in person, but there is also an online option with the same contents.
I am especially curious since this course is my first experience with freshmen, and since they had to finish their time at high school under rather difficult conditions due to the Corona crisis.
My hope is that our team is able to provide a stimulating environment for them and contributes to a successful start to their time as college students.</p>

<h1>CAMERA Workshop (2021-05-07, /posts/2021/05/07/camera-workshop)</h1>
<p>From April 20–22, 2021, I had the opportunity to take part in a virtual workshop on <em>Autonomous Discovery in Science and Engineering</em> (<a href="https://autonomous-discovery.lbl.gov/">website</a>) organized by the <em>Center for Advanced Mathematics for Energy Research Applications</em> (<a href="https://www.camera.lbl.gov/">CAMERA</a>) at <em>Lawrence Berkeley National Laboratory</em> (<a href="https://www.lbl.gov/">LBNL</a>).</p>
<p>I gave a talk on <em>Autonomous Experiments for Neutron Three-Axis Spectrometers (TAS) with Log-Gaussian Processes</em> in the breakout session on <em>Autonomous Discovery in Neutron Scattering</em>.
The presentation covered recent methodological advances of our group in the application of log-Gaussian processes for autonomous neutron scattering experiments.</p>
<p>Other talks either focused on physical applications or presented methodological approaches to autonomous material discovery.
Although I was not able to fully follow the physics parts, I got a decent impression of the problems that groups in this area are trying to solve.</p>
<p>[EDIT: You can find an extended abstract of our contribution on <a href="https://arxiv.org/abs/2105.07716">arXiv</a>.]<br />
[EDIT: The <a href="https://autonomous-discovery.lbl.gov/material">material</a> of the workshop (including a <a href="https://www.osti.gov/biblio/1818491/">DOE report</a>) and the <a href="https://drive.google.com/file/d/1ERZdC9V-iCGpzIKvxcEOO9ku2F73gytF/view?usp=sharing">slides</a> of my talk are available.]</p>

<h1>Bayes’ theorem and medical screening tests (2021-04-03, /posts/2021/04/03/bayes-tests)</h1>
<p>The Coronavirus pandemic was an ever-present topic during the last 12 months and still is.
Attempts are made to detect the virus <em>SARS-CoV-2</em> by conducting medical screening tests such as PCR or antigen tests.
Many of these tests have been conducted and continue to be conducted on a daily basis, regardless of whether the tested persons show symptoms.
Since many of the so-called nonpharmaceutical interventions are based on the number of positive tests during the last week, it is of great importance to ensure that the test results are not only reliable on the level of a single test but also meaningful as a collection.</p>
<p>The following mathematical elaboration aims at an interpretation of one of the main statistical measures used to assess the performance of so-called <em>binary classification tests</em> in medicine: the <em>positive predictive value</em> (PPV).
The PPV specifies the chance that a person with a positive test is indeed infected.</p>
<p>We approach this investigation by first explaining <em>Bayes’ theorem</em>, a well-known and famous result from Bayesian statistics.
With this, we will derive an expression for an upper bound of the PPV that gives insight into its behavior with respect to two other important quantities, the <em>false positive rate</em> and the <em>prevalence</em>.</p>
<h1 id="bayes-theorem">Bayes’ theorem</h1>
<p>The famous theorem of Bayes, or simply <em>Bayes’ theorem</em>, specifies how to “update” the chance (also called the <em>degree of belief</em> in the Bayesian view of probability) of a random event \(A\) after observing another random event \(B\) with \(\mathbf{P}(B)>0\), where \(\mathbf{P}(B)\) denotes the probability of the event \(B\) occurring.</p>
<p>The theorem states that
\begin{equation}
\mathbf{P}(A\,\vert\,B) = \frac{\mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A)}{\mathbf{P}(B)}
\end{equation}
and can be informally interpreted by saying that the <em>prior probability</em> \(\mathbf{P}(A)\) is updated by the term \(\mathbf{P}(B\,\vert\,A)/\mathbf{P}(B)\) to the <em>posterior probability</em> \(\mathbf{P}(A\,\vert\,B)\) after observing that \(B\) occurred.
A proof of this form of Bayes’ theorem follows directly from the definition of conditional probability and the symmetry of the intersection operation, \(A \cap B = B \cap A\).</p>
<p>We can further concretize the above expression by regarding \(\mathbf{P}(B)\) as a so-called <em>marginal probability</em> and using well-known equalities.
That is, we can write
\begin{align}
\mathbf{P}(B) &= \mathbf{P}(B \cap A) + \mathbf{P}(B \cap \overline{A}) \newline
&= \mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A}) \cdot \mathbf{P}(\overline{A}) \newline
&= \mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A}) \cdot (1-\mathbf{P}(A)) \newline
&= [\mathbf{P}(B\,\vert\,A) - \mathbf{P}(B\,\vert\,\overline{A})] \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A}),
\end{align}
where \(\overline{A}\) denotes the event of \(A\) <em>not</em> occurring.</p>
<p>If additionally \(\mathbf{P}(A)>0\), we get that
\begin{align}
\mathbf{P}(A\,\vert\,B) &= \frac{\mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A)}{[\mathbf{P}(B\,\vert\,A) - \mathbf{P}(B\,\vert\,\overline{A})] \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A})} \newline
&= \frac{\mathbf{P}(B\,\vert\,A)}{\mathbf{P}(B\,\vert\,A) - \mathbf{P}(B\,\vert\,\overline{A}) + \frac{\mathbf{P}(B\,\vert\,\overline{A})}{\mathbf{P}(A)}}.
\end{align}</p>
<p>Furthermore, since \(\frac{\alpha}{\alpha+\beta} \leq \frac{1}{1+\beta}\) for nonnegative values \(\alpha,\beta\) with \(\alpha\leq1\) (applied here with \(\alpha = \mathbf{P}(B\,\vert\,A)\)), it holds that
\begin{equation}
\mathbf{P}(A\,\vert\,B) \leq \frac{1}{1 - \mathbf{P}(B\,\vert\,\overline{A}) + \frac{\mathbf{P}(B\,\vert\,\overline{A})}{\mathbf{P}(A)}}.
\end{equation}</p>
<h1 id="positive-predictive-value-of-medical-screening-tests">Positive predictive value of medical screening tests</h1>
<p>Let us now apply the above result to medical screening tests to gain some insight into the <em>positive predictive value</em>.</p>
<p>For this, we denote the event of a person being infected as
\begin{equation}
I := \lbrace \text{Person is infected} \rbrace.
\end{equation}
The event \(I\) replaces what was denoted by the event \(A\) above.</p>
<p>The event that a test of this person is positive is denoted as
\begin{equation}
T_+ := \lbrace \text{Test of person is positive} \rbrace.
\end{equation}
The event \(T_+\) replaces what was denoted by the event \(B\) above.</p>
<p>Hence, the expression \(\mathbf{P}(I\,\vert\,T_+)\) denotes the probability that a person is indeed infected after getting a positive test result.</p>
<p>Applying the upper bound from above, we get that
\begin{equation}
\mathbf{P}(I\,\vert\,T_+) \leq \frac{1}{1 - \mathbf{P}(T_+\,\vert\,\overline{I}) + \frac{\mathbf{P}(T_+\,\vert\,\overline{I})}{\mathbf{P}(I)}},
\end{equation}
where \(\overline{I}\) denotes the event that the person is <em>not</em> infected.
The term \(\mathbf{P}(T_+\,\vert\,\overline{I})\) is also called the <em>false positive rate</em> (FPR) of the test and represents the ratio between the number of falsely positive tests and the number of noninfected persons.
The <em>prevalence</em> is denoted by \(\mathbf{P}(I)\) and specifies the proportion of infected persons in the whole population.</p>
<p>Finally, repeating the above inequality with the mentioned terms, we get that
\begin{equation}
\mathbf{P}(I\,\vert\,T_+) \leq \frac{1}{1 - \text{FPR} + \frac{\text{FPR}}{\text{Prevalence}}}.
\end{equation}</p>
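<p>To make the bound concrete, here is a small numerical illustration in Python; the numbers are hypothetical and chosen only for illustration, not taken from a specific test:</p>

```python
def ppv_upper_bound(fpr, prevalence):
    """Upper bound on the positive predictive value P(I | T+)."""
    return 1.0 / (1.0 - fpr + fpr / prevalence)

# Hypothetical numbers: an FPR of 1% at a prevalence of 1% already
# caps the PPV at about 50%, i.e., roughly every second positive
# test result could come from a noninfected person.
bound = ppv_upper_bound(fpr=0.01, prevalence=0.01)
print(round(bound, 4))  # 0.5025
```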
<p>The upper bound, viewed as a function of the FPR and the prevalence, is displayed in the following figure.</p>
<center><img src="/assets/images/fpr-preval-ppv.svg" /></center>
<p>Note that the \(y\)-axis has a <em>log</em> scale.</p>
<h1 id="interpretation">Interpretation</h1>
<p>The main observation with the above figure is that the PPV can get quite low if the FPR and the prevalence are unfavorably related.
More concretely, if the prevalence is low, say \(\text{prevalence}\approx1\%\),
then the test needs to be very accurate in the sense that it should have an FPR close to zero; otherwise the test risks becoming unreliable, which can lead to false assessments of the public health situation and thus provide incorrect information to policy makers.</p>

<h1>Scientific Computing: attempting a definition (2021-04-02, /posts/2021/04/02/scien-comp-def)</h1>
<p>First of all, “Scientific Computing” (SC) is an accepted term for a certain area of research, particularly among mathematicians and computer scientists, but also for the scientific community in general.
However, scientists seem to have varying notions of the term, even if they come from similar disciplines.
This text attempts to show why a clear definition of the term is not straightforward, but finally dares to give exactly that: a fairly clear (objective) definition.</p>
<p>Let us start with an obvious observation.
The term “Scientific Computing” consists of two words: “scientific” and “computing”.
We do not try to explain the two words separately.
For the first, we would have to find a definition of “science”, a question that has existed for centuries and that philosophy, more precisely the <a href="https://en.wikipedia.org/wiki/Philosophy_of_science"><em>philosophy of science</em></a>, tries to answer.</p>
<p>What we are rather looking for is a definition of the term “Scientific Computing” (as an interplay of both words) in which the word “scientific” is related to “computing”.
Hence, following the language, SC is a <em>particular kind of computing</em> that is <em>scientifically sound</em>, accepts the <em>scientific method</em>, and is thus open to criticism and discussion by the scientific community.</p>
<p>As opposed to these rather trivial observations, the more difficult question to answer is what SC <em>really does</em>, in the sense of questions like</p>
<ul>
<li>which areas of mathematics and computer science are used in SC and how they interact,</li>
<li>which problems are solved by SC and how.</li>
</ul>
<p>Attempts have often been made to define SC along questions of this type.
However, doing so increases the risk of the definition getting subjective too quickly.
For example, a statistician has answers to the above questions that can substantially differ from the answers given by a numerical analyst or a computer scientist, but still everyone is convinced that their own description is the more precise one.
This does not get us very far.</p>
<p>To find a more objective definition of SC, we need to circumvent classifications of the mentioned type.
We base our attempt of a definition on what we want to call the <em>three pillars of SC</em>:</p>
<ol>
<li>Theory,</li>
<li>Methodology,</li>
<li>Implementation.</li>
</ol>
<p>For this attempt, we need to agree on the following: “SC tries to solve problems that can be solved by computing, i.e., by using a computer.”
Such problems are called <em>computational problems</em> in the remainder and often involve <em>mathematical models</em>.</p>
<p>Now, the main point of our definition is that neither finding a method or an algorithm alone (methodology), nor proving a numerical result for its own sake (theory), nor an efficient implementation of an algorithm in a suitable programming language (implementation) without a connection to the former two tasks is what SC does.
Much more, it is the (often complex) interplay of all of the three parts.</p>
<p><br /><center><img src="/assets/images/sc-pillars.svg" /></center><br /></p>
<p>The main purpose of SC certainly is finding a method or algorithm that solves a computational problem.
However, following our definition, only the consideration and connection of all three aspects makes the approach a scientific computing approach.</p>
<h1 id="1-theory">1. Theory</h1>
<p>Theory, as we use the term in this context, leads to a <em>formal verification</em> of the developed algorithm.
For this, it utilizes a reasonable (mathematical and logical) formalism and useful notation to show that the algorithm is indeed solving the given computational problem.
The quality of the solution can be demonstrated as well.
As an example, numerical analysts can provide promising convergence results or insightful upper bounds on approximation errors.
Additionally, formal formulations can also lead to useful abstractions which potentially broadens the applicability of the method.</p>
<p>Most of theory is done by mathematicians, or at least in a mathematical way.
Mathematical areas that are often applied are, e.g., linear algebra, calculus, numerical mathematics, probability theory, and statistics.
However, theoretical areas from computer science, such as computability theory or complexity theory, can also play a role here depending on the concrete case.</p>
<h1 id="2-methodology">2. Methodology</h1>
<p>As mentioned, this is certainly the core of the scientific computing approach.
The main job of this part is the development of methods, algorithms, or techniques to solve the computational problem at hand.
Preferably, the approaches should be described algorithmically so that others can understand them.
It is then the theorist’s task to provide a proof of the quality of the approach to the community.
The implementation in software can get started as soon as there is a reasonable description of the method and a sufficiently large chance of success.</p>
<p>In our view, it is indeterminate whether the methodological part is dominated by mathematics or computer science.
We find that both disciplines can equally contribute here.</p>
<h1 id="3-implementation">3. Implementation</h1>
<p>Implementing a proposed method or algorithm is software development, more or less.
Of course, if the problem is highly computationally expensive, techniques of <em>high performance computing</em>, which we also see as part of implementation, should be applied.
It is the job of the software developer (or computer scientist) to produce code that efficiently executes the idea of the algorithm.
In this respect, software validation by suitable tests showing the correctness of the implementation is also necessary at this point.</p>
<p>Since this part is mostly about software development, it is certainly dominated by computer science.
Of course, programming can also be done by mathematicians who however act as software developers then.</p>
<p>Theoretically, all of the above three parts can be done by one and the same person.
However, in most cases more than one scientist is involved, since approaches can consist of multiple sufficiently complex subtasks that need to be handled by specialists.</p>
<h1 id="distinction-from-computational-science">Distinction from <em>Computational Science</em></h1>
<p>In contrast to a definition from <a href="https://en.wikipedia.org/wiki/Computational_science">Wikipedia</a>, which does <em>not</em> distinguish between SC and <em>Computational Science</em> (ClS; to be distinguished from CS, which is often used for computer science), we would like to promote such a distinction.</p>
<p>The focus with SC lies on the computing or computation aspect, in our opinion.
In other words, we have a computational problem that one tries to solve scientifically, guided by the three pillars mentioned above.</p>
<p>On the other hand, ClS, as the term says, is doing <em>science</em>, science in a <em>computational</em> manner.
This means that ClS tries to answer questions from a certain scientific area and hence always has the application in mind.
For example, problems from astrophysics are nowadays often solved computationally by simulations involving mathematical models that aim to reflect reality.
We can thus say that “SC is applied to do ClS” in this case.
Of course, computational problems in SC can be motivated by questions from ClS or from a certain scientific discipline directly, but do not necessarily need to.
Problems in SC can also emerge from other problems in SC.</p>
<h1 id="summary">Summary</h1>
<p>This text tried to formulate a new definition of <em>Scientific Computing</em>.
Existing approaches are often based on questions like which mathematical or computer science areas contribute to SC, which is rather subjective.
We aimed for establishing objectivity in the new definition by following another approach called <em>the three pillars of SC</em>: theory, methodology, implementation.
Finally, an explicit distinction from <em>Computational Science</em> was made, which however conflicts with other attempts; see, e.g., <a href="https://en.wikipedia.org/wiki/Computational_science">Wikipedia</a>.</p>

<h1>UQ course at HM (2021-04-01, /posts/2021/04/01/uq-course-hm)</h1>
<p>During this summer, I am teaching <a href="https://zpa.cs.hm.edu/public/module/374/"><em>Fundamentals of Uncertainty Quantification (UQ)</em></a> in a course for Bachelor students at the <a href="https://www.cs.hm.edu/">Department of Computer Science and Mathematics</a> (FK07) of the <a href="https://www.hm.edu/">University of Applied Sciences Munich</a> (HM).</p>
<p>At most schools, classes on UQ are only part of Master’s programs since they require decent knowledge of and education in various mathematical (linear algebra, calculus, probability theory, statistics, …) and computer science (programming, algorithms, data structures, …) subdisciplines.</p>
<p>However, we decided to offer a Bachelor’s course that introduces fundamental (as opposed to advanced) aspects of the field.
The introductory classes discuss motivating examples of why UQ actually matters and try to give a reasonable overview of the field and a fair description of the notion of <em>uncertainty</em>.
In a second chapter, we lay the basis for forthcoming contents, i.e., we repeat fundamental and necessary definitions and results of probability theory and statistics.
Basic random number sampling and Monte Carlo-type methods, along with the more advanced <em>Latin Hypercube Sampling</em> (LHS), are explained in chapter 3.
The final and main part of the course is chapter 4 in which we introduce techniques for <em>global sensitivity analysis</em> of mathematical models.
A table of contents and the models used for demonstration are placed below this text.</p>
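<p>As a minimal sketch of the idea behind LHS (my own illustration, not the course's actual code): each dimension is divided into \(n\) equally probable bins, and every bin receives exactly one sample.</p>

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Draw n LHS points in [0, 1)^d: one point per bin in each dimension."""
    samples = np.empty((n, d))
    for j in range(d):
        # one uniform draw inside each of the n bins of width 1/n ...
        stratified = (rng.random(n) + np.arange(n)) / n
        # ... assigned to the sample rows in random order
        samples[:, j] = rng.permutation(stratified)
    return samples

points = latin_hypercube(n=5, d=2, rng=np.random.default_rng(0))
```

<p>In contrast to plain Monte Carlo sampling, this stratification guarantees that every marginal range is covered evenly even for small sample sizes.</p>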
<p>Besides discussing the contents in a formal way, students can get their hands on some assignments as part of their practical training.
They implement methods from chapter 4 and test them on the SEIR model, a compartmental model from epidemiology for the spread of infectious diseases.</p>
<p>Other more advanced but common UQ approaches such as <em>Forward UQ</em> or <em>Inverse UQ</em> are not discussed in this course.
They could be part of courses UQ II or UQ III which then, however, would be better suited as part of a Master’s program.</p>
<p>I am very happy to get the opportunity from FK07 and HM to teach this course which is actually quite related to topics of my dissertation.
Having influence on young people and educating them to critically think about underlying assumptions and their consequences from a formal, informal, and intuitive perspective gives me great pleasure and fulfills me.</p>
<p><strong>Table of contents</strong>:</p>
<ol>
<li><strong>Introduction</strong><br />
1.1. Motivation<br />
1.2. Types and sources of uncertainties</li>
<li><strong>Fundamentals in probability theory and statistics</strong><br />
2.1. Random variables<br />
2.2. Expectation value and (co)variance<br />
2.3. Quantiles<br />
2.4. Important distributions<br />
2.5. Statistical estimators</li>
<li><strong>Sampling strategies</strong><br />
3.1. Pseudo-random number sampling<br />
3.2. Monte Carlo simulations<br />
3.3. Latin Hypercube Sampling (LHS)</li>
<li><strong>Global sensitivity analysis</strong><br />
4.1. Primitive approach<br />
4.2. Partial rank correlation coefficients<br />
4.3. Sobol indices</li>
</ol>
<p><strong>Models</strong>:</p>
<ul>
<li>Predator-prey model</li>
<li>Compartment model from epidemiology</li>
</ul>

<h1>LENS ISIS Machine Learning School (2021-02-20, /posts/2021/02/20/ml-school)</h1>
<p>From 15 to 19 February, I was part of a series of online lectures at the LENS ISIS Machine Learning School.
It was prepared in collaboration with scientists from neutron facilities (<a href="https://europeanspallationsource.se/">ESS</a>, <a href="https://www.ill.eu/">ILL</a>, <a href="https://www.isis.stfc.ac.uk/">ISIS</a>, <a href="https://www.psi.ch/en/">PSI</a>) other than MLZ at Garching.</p>
<p>We had around 60 interested participants from the mentioned facilities each day.
Our aim was to give an understandable overview of the area of machine learning (ML) and its basic subareas to neutron scientists who are generally interested and/or are thinking about applying ML techniques to their problems.</p>
<p>Each lecture was split into a formal talk and a short hands-on tutorial session.
I will provide a link to the recordings of the talks and tutorial material once they are available.
A curriculum of the school can be found at the end of this post.</p>
<p>I want to thank all the other organizers for their engagement in making this school possible and hope to meet them in person someday.</p>
<p>–</p>
<p><strong>Day 1</strong></p>
<ul>
<li>Lecture 1: Introduction to deep learning and neural networks</li>
<li>Lecture 2: Dense neural networks and regression</li>
</ul>
<p><strong>Day 2</strong></p>
<ul>
<li>Lecture 3: Convolutional neural networks and classification</li>
<li>Lecture 4: Traditional ML methods</li>
</ul>
<p><strong>Day 3</strong></p>
<ul>
<li>Lecture 5: Image segmentation</li>
<li>Lecture 6: Recurrent neural networks</li>
</ul>
<p><strong>Day 4</strong></p>
<ul>
<li>Lecture 7: Generative Adversarial Networks, GANs</li>
<li>Lecture 8: Natural language processing and speech recognition</li>
</ul>
<p><strong>Day 5</strong></p>
<ul>
<li>Lecture 9: Uncertainty and attention</li>
<li>Lecture 10: Unsupervised and reinforcement learning</li>
</ul>

<h1>Start at JCNS-4 with AINX (2020-12-04, /posts/2020/12/04/jcns-start)</h1>
<p>It is now two months since I started my postdoc position at the <a href="https://www.fz-juelich.de/jcns/EN/Home/home_node.html">Jülich Centre for Neutron Science</a> (JCNS).
JCNS is an institute of the <a href="https://www.fz-juelich.de/">Forschungszentrum Jülich</a> which itself is part of the <a href="https://www.helmholtz.de/">Helmholtz association</a>.
More concretely, I am working in the <a href="https://www.fz-juelich.de/jcns/EN/Leistungen/ScientificComputing/_node.html">Scientific Computing group</a> of the JCNS-4 outstation at the <a href="http://www.frm2.tum.de/en/">FRM II</a> which is the TUM neutron source.</p>
<p>I was hired to contribute to the project <em>AINX</em> (<strong>A</strong>rtificial <strong>I</strong>ntelligence for <strong>N</strong>eutron and <strong>X</strong>-ray scattering) which investigates machine learning techniques on their use for neutron and X-ray scattering experiments.</p>
<p>The project is divided into two main phases.</p>
<p><strong>Phase 1:</strong> Together with instrument scientists for the triple-axis spectrometer <a href="https://wiki.mlz-garching.de/panda:index"><em>PANDA</em></a> (Twitter: <a href="https://twitter.com/PandaMlz">@PandaMlz</a>), my principal investigator Dr. Marina Ganeva and I try to guide corresponding experiments using Gaussian process regression.
<a href="https://scikit-learn.org/stable/modules/gaussian_process.html">Gaussian processes</a> are capable of quantifying uncertainties in function approximation and, hence, can provide reasonable suggestions for informative measurement locations, namely those with the highest uncertainty.</p>
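<p>The basic idea can be sketched as follows. This is a minimal handwritten 1-D Gaussian process with a squared-exponential kernel, purely for illustration; it is not the code running at PANDA, and the kernel length scale is an arbitrary choice:</p>

```python
import numpy as np

def sqexp(a, b, length=0.3):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def most_uncertain(x_obs, candidates, noise=1e-8):
    """Candidate location with the highest GP posterior variance."""
    K = sqexp(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = sqexp(candidates, x_obs)
    # posterior variance: k(x, x) - k(x, X) K^{-1} k(X, x)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return candidates[np.argmax(var)]

# with measurements at 0 and 1, the midpoint is least constrained
x_next = most_uncertain(np.array([0.0, 1.0]), np.linspace(0.0, 1.0, 101))
```

<p>Note that the posterior variance does not depend on the measured values themselves, only on where measurements were taken; in practice, acquisition criteria usually combine this uncertainty with the predicted signal.</p>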
<p><strong>Phase 2:</strong> Many neutron experiments are disrupted by unfavorable artifacts like noise or background signals, spurious peaks, and others.
We aim to train neural networks so that they will be able to uncover informative data by removing the mentioned disruptions.
More details need to be worked out when it comes to implementing this plan.</p>
<p>I am looking forward to all the new things I can learn and accomplish in the coming months.
In particular, the highly interdisciplinary flavor of this project, working in a team with scientists from various backgrounds, will be interesting and fun.</p>