<p>Mario Teixeira Parente, academic website</p>
<h1 id="ariane-preprint">Preprint: Log-Gaussian processes for AI-assisted TAS experiments (2022-09-05)</h1>
<p>I am pleased to announce a new preprint containing results of our work on AI-assisted TAS experiments.
Together with the <a href="https://mlz-garching.de/panda/en">PANDA</a> team around Astrid Schneidewind and Christian Franz, Georg Brandl from the <a href="https://mlz-garching.de/englisch.html">MLZ</a> Instrument Control Group, Uwe Stuhr (responsible for the <a href="https://www.psi.ch/en/sinq/eiger">EIGER</a> instrument at <a href="https://www.psi.ch/en">PSI</a>), and Marina Ganeva, leader of the MLZ Data Driven Discovery Group that I am part of, I put a lot of energy into providing evidence of the benefits and good performance of our approach <a href="https://jugit.fz-juelich.de/ainx/ariane">ARIANE</a> (<strong>AR</strong>tificial <strong>I</strong>ntelligence-<strong>A</strong>ssisted <strong>N</strong>eutron <strong>E</strong>xperiments).</p>
<p>The manuscript is an outcome of the project AINX (<strong>A</strong>rtificial <strong>I</strong>ntelligence for <strong>N</strong>eutron and <strong>X</strong>-ray scattering) funded by the <a href="https://helmholtz.ai">Helmholtz AI</a> cooperation unit of the German Helmholtz Association.</p>
<p>I would like to thank all the co-authors of this manuscript very much for their help in making these results possible.
I am curious about our next steps in the project and look forward to how we will approach them.</p>
<h1 id="eiger-experiment-pt-2">Experiment at EIGER (pt. 2) (2022-07-29)</h1>
<p><img src="/assets/images/psi-river.jpg" class="img-left no-margin-top" /></p>
<p>Our first experiment at the three-axes spectrometer <a href="https://www.psi.ch/en/sinq/eiger">EIGER</a> was mentioned in a <a href="/posts/2022/05/14/eiger-experim">previous post</a>.
From July 24 to 28, we returned to <a href="https://www.psi.ch/en/sinq">SINQ</a>/<a href="https://www.psi.ch/en">PSI</a> in Villigen (Switzerland) and performed a second round of experimental studies with <a href="https://jugit.fz-juelich.de/ainx/ariane">ARIANE</a>, our approach for AI-assisted TAS experiments.
We tested several different settings for this approach and studied its robustness with respect to changes in its parameters.
Indeed, for any computational approach, it is important to understand how sensitive its results are to variations in its inputs.</p>
<p>The results are included in a manuscript which will be submitted to a peer-reviewed journal soon.</p>
<p>I would like to thank those responsible for SINQ and EIGER again for providing beam time at their research facility and for their support during our two stays.</p>
<h1 id="eiger-experiment">Experiment at EIGER (2022-05-14)</h1>
<p><img src="/assets/images/psi-river-gray.jpg" class="img-left no-margin-top" /></p>
<p>On the weekend of May 7/8, my colleagues from <a href="https://www.fz-juelich.de/jcns">JCNS</a> at <a href="https://mlz-garching.de">MLZ</a> and I were given beam time at the thermal three-axes spectrometer <a href="https://www.psi.ch/en/sinq/eiger">EIGER</a> at the spallation source <a href="https://www.psi.ch/en/sinq">SINQ</a> of <a href="https://www.psi.ch/en">PSI</a> (Paul Scherrer Institute) in Villigen (Switzerland).</p>
<p>It was the first time that we had the opportunity to run our approach <a href="https://jugit.fz-juelich.de/ainx/ariane">ARIANE</a> (<strong>AR</strong>tificial <strong>I</strong>ntelligence-<strong>A</strong>ssisted <strong>N</strong>eutron <strong>E</strong>xperiments) for an AI-assisted TAS (<a href="https://en.wikipedia.org/wiki/Neutron_triple-axis_spectrometry">three-axes spectrometry</a>) experiment after over 1.5 years of development.
Our general aim was to reproduce results from <a href="https://doi.org/10.1103/PhysRevLett.112.175501">Li <em>et al.</em>, 2014</a> (Fig. 1b) using our approach and investigate some of its properties.</p>
<p>Since the results will be part of an upcoming submission to a peer-reviewed journal, we unfortunately cannot discuss them here.
However, there will be another stay at EIGER for two days in July in order to run our approach with different settings and complete our studies.
I am really looking forward to this trip, not only because it is another opportunity to test our approach, but also because I enjoyed the natural and professional environment there.</p>
<p>For now, I would like to thank those responsible for SINQ and EIGER for their support.</p>
<h1 id="uq-course-hm-2022">UQ course at HM (2022-03-08)</h1>
<p>During the coming summer, I will again give lectures on <a href="https://zpa.cs.hm.edu/public/module/374/"><em>Fundamentals of Uncertainty Quantification (UQ)</em></a> in a course for Bachelor students at the <a href="https://www.cs.hm.edu/">Department of Computer Science and Mathematics</a> (FK07) of the <a href="https://www.hm.edu/">University of Applied Sciences Munich</a> (HM).</p>
<p>The motivations and contents behind this course can be found in a <a href="/posts/2021/04/01/uq-course-hm">post from last year</a>.</p>
<p>I am looking forward to giving these lectures again and meeting new interested students!</p>
<h1 id="benchmarking-scattering-pub">Published: Benchmarking autonomous scattering experiments illustrated on TAS (2022-02-08)</h1>
<p>I am happy to announce my first article publication as a postdoc at the Jülich Centre for Neutron Science (JCNS).</p>
<p>My colleagues and I propose a benchmarking procedure that captures essential components when it comes to measuring performance in autonomous scattering experiments.
The procedure is designed as a cost-benefit analysis and illustrated on the setting of <a href="https://en.wikipedia.org/wiki/Neutron_triple-axis_spectrometry">three-axes spectrometry</a> (TAS).</p>
<p>We are curious about the comments and feedback from the community and open to a critical discussion of our ideas.</p>
<h1 id="mentor-mwp">Mentor for the Max Weber Program (2021-10-01)</h1>
<p>I feel honoured to announce that I have become a “Mentor” in the <a href="https://www.elitenetzwerk.bayern.de/en/home/funding-programs/max-weber-program">Max Weber Program of the State of Bavaria</a> (Max Weber-Programm Bayern, MWP), which awards scholarships with financial and non-material support to promising students.
During my time as a student, I was lucky to be part of this program myself and benefited a lot from its offers.</p>
<p>Now, as an alumnus, I have the honour and responsibility to support current scholarship holders in my own mentoring group which mainly consists of computer science and mathematics students from universities in Munich.
The idea of the mentoring format is that the mentor stays in touch with the whole group (together or individually) on a regular basis so that everybody gets the chance to exchange experiences or discuss general topics from academic life.</p>
<p>I am looking forward to meeting the students and hope to support them in their studies, but I also feel that they will certainly be a source of inspiration for me.</p>
<h1 id="linalg-course-hm">Linear algebra course at HM (2021-09-14)</h1>
<p>From October 2021 to January 2022, I will be part of the first semester linear algebra course at the <a href="https://www.cs.hm.edu/en/home/index.en.html">Department of Computer Science and Mathematics</a> of the <a href="https://www.hm.edu/en/index.en.html">Munich University of Applied Sciences</a>.</p>
<p>As a team, <a href="https://www.cs.hm.edu/die_fakultaet/ansprechpartner/professoren/koester/index.de.html">Prof. Köster</a>, <a href="https://www.cs.hm.edu/die_fakultaet/ansprechpartner/professoren/ruckert/index.de.html">Prof. Ruckert</a>, and I will introduce freshmen (“Erstsemester”) to the basics of this beautiful subject.
Our list of contents is based on the famous <a href="https://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/">video lectures</a> of <a href="http://www-math.mit.edu/~gs/">Gilbert Strang</a> who teaches linear algebra from a more practical point of view and hence avoids becoming too formal too quickly.
This approach fits perfectly with the general program of the department, i.e., the emphasis lies on the <em>application</em> of concepts rather than their theory.</p>
<p>The core concepts that we would like them to learn and experience are the following:</p>
<ul>
<li>Linear systems and matrices (Gaussian elimination, LU decomposition, inversion)</li>
<li>Basis and dimension of a subspace (linear independence, span)</li>
<li>The four fundamental subspaces of a matrix</li>
<li>Orthogonality and projections (least squares, Gram-Schmidt)</li>
<li>Determinants</li>
<li>Eigenvalues and eigenvectors (diagonalization)</li>
<li>Complex numbers</li>
</ul>
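<p>As a small, purely illustrative taste of the first item (this sketch is my own and not part of the course material), Gaussian elimination with partial pivoting fits into a few lines of Python:</p>

```python
def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination
    with partial pivoting, followed by back substitution."""
    n = len(A)
    # Work on the augmented matrix [A | b] so that row operations
    # act on the right-hand side at the same time.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # Partial pivoting: swap in the row with the largest entry
        # in column k among the remaining rows.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Back substitution on the upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Example: 2x + y = 3 and x + 3y = 5 have the solution x = 0.8, y = 1.4.
solution = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

<p>In the course itself, such routines are first developed on paper; libraries like NumPy then provide heavily optimized implementations of the same ideas.</p>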
<p>The lectures and tutorials will be offered in person, but there is also an online option with the same contents.
I am especially curious because this course is my first experience with freshmen and because they had to finish their time at high school under rather difficult conditions due to the coronavirus pandemic.
My hope is that our team can provide a stimulating environment for them and contribute to a successful start to their time as college students.</p>
<h1 id="camera-workshop">CAMERA Workshop (2021-05-07)</h1>
<p>From April 20–22, 2021, I had the opportunity to take part in a virtual workshop on <em>Autonomous Discovery in Science and Engineering</em> (<a href="https://autonomous-discovery.lbl.gov/">website</a>) organized by the <em>Center for Advanced Mathematics for Energy Research Applications</em> (<a href="https://www.camera.lbl.gov/">CAMERA</a>) at <em>Lawrence Berkeley National Laboratory</em> (<a href="https://www.lbl.gov/">LBNL</a>).</p>
<p>I gave a talk on <em>Autonomous Experiments for Neutron Three-Axis Spectrometers (TAS) with Log-Gaussian Processes</em> in the breakout session on <em>Autonomous Discovery in Neutron Scattering</em>.
The presentation covered recent methodological advances of our group in the application of log-Gaussian processes for autonomous neutron scattering experiments.</p>
<p>Other talks either focused on physical applications or presented methodological approaches to autonomous material discovery.
Although I was not able to fully follow the physics parts, I got a decent impression of the problems that groups in this area are trying to solve.</p>
<p>[EDIT: You can find an extended abstract of our contribution on <a href="https://arxiv.org/abs/2105.07716">arXiv</a>.]<br />
[EDIT: The <a href="https://autonomous-discovery.lbl.gov/material">material</a> of the workshop (including a <a href="https://www.osti.gov/biblio/1818491/">DOE report</a>) and the <a href="https://drive.google.com/file/d/1ERZdC9V-iCGpzIKvxcEOO9ku2F73gytF/view?usp=sharing">slides</a> of my talk are available.]</p>
<h1 id="bayes-tests">Bayes’ theorem and medical screening tests (2021-04-03)</h1>
<p>The coronavirus pandemic has been an ever-present topic during the last 12 months and still is.
Attempts are made to detect the virus <em>SARS-CoV-2</em> by conducting medical screening tests like PCR or antigen tests.
Many of these tests have been conducted and continue to be conducted on a daily basis, regardless of whether the tested persons show symptoms or not.
Since many of the so-called nonpharmaceutical interventions are based on the number of positive tests during the last week, it is of great importance to ensure that the test results are not only reliable on the level of a single test but also meaningful as a collection.</p>
<p>The following mathematical elaboration aims for an interpretation of one of the main statistical measures that are used when it comes to assessing the performance of so-called <em>binary classification tests</em> in medicine: the <em>positive predictive value</em> (PPV).
The PPV specifies the chance that a person with a positive test is indeed infected.</p>
<p>We approach this investigation by first explaining <em>Bayes’ theorem</em>, a well-known and famous result from Bayesian statistics.
With this, we will derive an expression for an upper bound on the PPV that gives insight into its nature with respect to two other important values, the <em>false positive rate</em> and the <em>prevalence</em>.</p>
<h1 id="bayes-theorem">Bayes’ theorem</h1>
<p>The famous theorem of Bayes, or just <em>Bayes’ theorem</em>, is specifying how to “update” the chance (also called <em>degree of belief</em> in the Bayesian view on the concept of probability) of a random event \(A\) after observing another random event \(B\) with \(\mathbf{P}(B)>0\), where \(\mathbf{P}(B)\) denotes the probability or chance of the event \(B\) occurring.</p>
<p>The theorem states that
\begin{equation}
\mathbf{P}(A\,\vert\,B) = \frac{\mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A)}{\mathbf{P}(B)}
\end{equation}
and can be informally interpreted by saying that the <em>prior probability</em> \(\mathbf{P}(A)\) is updated by the term \(\mathbf{P}(B\,\vert\,A)/\mathbf{P}(B)\) to the <em>posterior probability</em> \(\mathbf{P}(A\,\vert\,B)\) after observing that \(B\) occurred.
A proof of this form of Bayes’ theorem follows directly from the definition of conditional probability and the symmetry of the \(\cap\)-operation (intersection), since \(\mathbf{P}(A\,\vert\,B) \cdot \mathbf{P}(B) = \mathbf{P}(A \cap B) = \mathbf{P}(B \cap A) = \mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A)\).</p>
<p>We can further concretize the above expression by regarding \(\mathbf{P}(B)\) as a so-called <em>marginal probability</em> and using well-known equalities.
That is, we can write
\begin{align}
\mathbf{P}(B) &= \mathbf{P}(B \cap A) + \mathbf{P}(B \cap \overline{A}) \newline
&= \mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A}) \cdot \mathbf{P}(\overline{A}) \newline
&= \mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A}) \cdot (1-\mathbf{P}(A)) \newline
&= [\mathbf{P}(B\,\vert\,A) - \mathbf{P}(B\,\vert\,\overline{A})] \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A}),
\end{align}
where \(\overline{A}\) denotes the event of \(A\) <em>not</em> occurring.</p>
<p>If additionally \(\mathbf{P}(A)>0\), we get that
\begin{align}
\mathbf{P}(A\,\vert\,B) &= \frac{\mathbf{P}(B\,\vert\,A) \cdot \mathbf{P}(A)}{[\mathbf{P}(B\,\vert\,A) - \mathbf{P}(B\,\vert\,\overline{A})] \cdot \mathbf{P}(A) + \mathbf{P}(B\,\vert\,\overline{A})} \newline
&= \frac{\mathbf{P}(B\,\vert\,A)}{\mathbf{P}(B\,\vert\,A) - \mathbf{P}(B\,\vert\,\overline{A}) + \frac{\mathbf{P}(B\,\vert\,\overline{A})}{\mathbf{P}(A)}}.
\end{align}</p>
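<p>The rearrangement can be quickly sanity-checked numerically. In the following sketch, the probability values are hypothetical and chosen purely for illustration:</p>

```python
# Hypothetical probabilities, chosen only to illustrate the algebra
p_A = 0.01             # prior P(A)
p_B_given_A = 0.95     # P(B | A)
p_B_given_notA = 0.05  # P(B | not A)

# Marginal probability P(B) via the decomposition above
p_B = p_B_given_A * p_A + p_B_given_notA * (1.0 - p_A)

# Posterior P(A | B) via Bayes' theorem ...
posterior = p_B_given_A * p_A / p_B

# ... and via the rearranged expression; both must agree.
posterior_rearranged = p_B_given_A / (
    p_B_given_A - p_B_given_notA + p_B_given_notA / p_A
)
```

<p>With these hypothetical numbers, both expressions give a posterior of roughly 16%, despite the seemingly accurate test.</p>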
<p>Furthermore, since \(\frac{\alpha}{\alpha+\beta} \leq \frac{1}{1+\beta}\) for nonnegative values \(\alpha,\beta\) with \(\alpha\leq1\), it holds that
\begin{equation}
\mathbf{P}(A\,\vert\,B) \leq \frac{1}{1 - \mathbf{P}(B\,\vert\,\overline{A}) + \frac{\mathbf{P}(B\,\vert\,\overline{A})}{\mathbf{P}(A)}}.
\end{equation}</p>
<h1 id="positive-predictive-value-of-medical-screening-tests">Positive predictive value of medical screening tests</h1>
<p>Let us now apply the above result to medical screening tests to get some insight into the <em>positive predictive value</em>.</p>
<p>For this, we denote the event of a person being infected as
\begin{equation}
I := \lbrace \text{Person is infected} \rbrace.
\end{equation}
The event \(I\) replaces what was denoted by the event \(A\) above.</p>
<p>The event that a test of this person is positive is denoted as
\begin{equation}
T_+ := \lbrace \text{Test of person is positive} \rbrace.
\end{equation}
The event \(T_+\) replaces what was denoted by the event \(B\) above.</p>
<p>Hence, the expression \(\mathbf{P}(I\,\vert\,T_+)\) denotes the probability that a person is indeed infected after getting a positive test result.</p>
<p>Applying the upper bound from above, we get that
\begin{equation}
\mathbf{P}(I\,\vert\,T_+) \leq \frac{1}{1 - \mathbf{P}(T_+\,\vert\,\overline{I}) + \frac{\mathbf{P}(T_+\,\vert\,\overline{I})}{\mathbf{P}(I)}},
\end{equation}
where \(\overline{I}\) denotes the event that the person is <em>not</em> infected.
The term \(\mathbf{P}(T_+\,\vert\,\overline{I})\) is also called the <em>false positive rate</em> (FPR) of the test and represents the ratio between the number of falsely positive tests and the number of noninfected persons.
The <em>prevalence</em> is denoted by \(\mathbf{P}(I)\) and specifies the proportion of infected persons in the whole population.</p>
<p>Finally, rewriting the above inequality in these terms, we get that
\begin{equation}
\mathbf{P}(I\,\vert\,T_+) \leq \frac{1}{1 - \text{FPR} + \frac{\text{FPR}}{\text{Prevalence}}}.
\end{equation}</p>
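<p>The bound is straightforward to evaluate in code. As a hypothetical example (the numbers are for illustration only), take an FPR of 1% and a prevalence of 1%:</p>

```python
def ppv_upper_bound(fpr, prevalence):
    """Upper bound on the positive predictive value (PPV), as derived above.

    fpr and prevalence are probabilities, with prevalence > 0.
    """
    return 1.0 / (1.0 - fpr + fpr / prevalence)

# FPR = 1%, prevalence = 1%: even a test with perfect sensitivity
# has a PPV of at most 1 / 1.99, i.e., roughly 50%.
bound = ppv_upper_bound(0.01, 0.01)
```

<p>In other words, in this hypothetical setting about half of all positive results could stem from noninfected persons, which is exactly the effect visible in the figure below.</p>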
<p>The upper bound, viewed as a function of the FPR and the prevalence, is displayed in the following figure.</p>
<center><img src="/assets/images/fpr-preval-ppv.svg" /></center>
<p>Note that the \(y\)-axis has a <em>log</em> scale.</p>
<h1 id="interpretation">Interpretation</h1>
<p>The main observation from the above figure is that the PPV can get quite low if the FPR and the prevalence are unfavorably related.
More concretely, if the prevalence is low, say \(\text{prevalence}\approx1\%\),
then the test needs to be very accurate in the sense that its FPR should be close to zero; otherwise the test risks becoming unreliable, which can lead to false assessments of the public health situation and thus provide incorrect information to policy makers.</p>
<h1 id="scien-comp-def">Scientific Computing: attempting a definition (2021-04-02)</h1>
<p>First of all, “Scientific Computing” (SC) is an accepted term for a certain area of research, particularly among mathematicians and computer scientists, but also for the scientific community in general.
However, scientists seem to have varying notions of the term, even if they come from similar disciplines.
This text attempts to show why a clear definition of the term is not straightforward, but finally dares to give exactly that: a fairly clear (objective) definition.</p>
<p>Let us start with an obvious observation.
The term “Scientific Computing” consists of two words: “scientific” and “computing”.
We will not try to explain the two words separately.
For the first, we would have to find a definition of “science”, a question that has existed for centuries and that philosophy, more precisely <a href="https://en.wikipedia.org/wiki/Philosophy_of_science"><em>philosophy of science</em></a>, tries to answer.</p>
<p>What we are rather looking for is a definition of the term “Scientific Computing” (as an interplay of both words) in which the word “scientific” is related to “computing”.
Hence, following the language, SC is a <em>particular kind of computing</em> that is <em>scientifically sound</em>, accepts the <em>scientific method</em>, and is thus open to criticism and discussion by the scientific community.</p>
<p>As opposed to these rather trivial observations, the more difficult question to answer is what SC <em>really does</em>, in the sense of questions like</p>
<ul>
<li>which areas of mathematics and computer science are used in SC and how they interact,</li>
<li>which problems are solved by SC and how.</li>
</ul>
<p>Attempts to define SC have often followed questions of this type.
However, doing so risks the definition becoming subjective too quickly.
For example, a statistician may answer the above questions in ways that substantially differ from the answers given by a numerical analyst or a computer scientist, while everyone remains convinced that their own description is the more precise one.
This does not get us very far.</p>
<p>To find a more objective definition of SC, we need to circumvent classifications of the mentioned type.
We base our attempt of a definition on what we want to call the <em>three pillars of SC</em>:</p>
<ol>
<li>Theory,</li>
<li>Methodology,</li>
<li>Implementation.</li>
</ol>
<p>For this attempt, we need to agree on the following: “SC tries to solve problems that can be solved by computing, i.e., by using a computer.”
Such problems are called <em>computational problems</em> in the remainder and often involve <em>mathematical models</em>.</p>
<p>Now, the main point of our definition is that neither finding a method or an algorithm alone (methodology), nor proving a numerical result for its own sake (theory), nor an efficient implementation of an algorithm in a suitable programming language (implementation) without a connection to the former two tasks is what SC does.
Rather, it is the (often complex) interplay of all three parts.</p>
<p><br /><center><img src="/assets/images/sc-pillars.svg" /></center><br /></p>
<p>The main purpose of SC certainly is finding a method or algorithm that solves a computational problem.
However, following our definition, only the consideration and connection of all three aspects makes the approach a scientific computing approach.</p>
<h1 id="1-theory">1. Theory</h1>
<p>Theory, as we use the term in this context, leads to a <em>formal verification</em> of the developed algorithm.
For this, it utilizes a reasonable (mathematical and logical) formalism and useful notation to show that the algorithm is indeed solving the given computational problem.
The quality of the solution can be demonstrated as well.
As an example, numerical analysts can provide promising convergence results or insightful upper bounds on approximation errors.
Additionally, formal formulations can also lead to useful abstractions which potentially broaden the applicability of the method.</p>
<p>Most of theory is done by mathematicians, or at least in a mathematical way.
Mathematical areas that are often applied are, e.g., linear algebra, calculus, numerical mathematics, probability theory, and statistics.
However, theoretical areas from computer science, such as computability theory or complexity theory, can also play a role here, depending on the concrete case.</p>
<h1 id="2-methodology">2. Methodology</h1>
<p>As mentioned, this is certainly the core of the scientific computing approach.
The main job of this part is the development of methods, algorithms, or techniques to solve the computational problem at hand.
Preferably, the approaches are described algorithmically such that others can understand them.
It is then the theorist’s task to provide a proof of the quality of the approach to the community.
The implementation in software can get started as soon as there is a reasonable description of the method and a sufficiently large chance of success.</p>
<p>In our view, it is indeterminate whether the methodological part is dominated by mathematics or computer science.
We find that both disciplines can equally contribute here.</p>
<h1 id="3-implementation">3. Implementation</h1>
<p>Implementing a proposed method or algorithm is software development, more or less.
Of course, if the problem is highly computationally expensive, techniques of <em>high performance computing</em>, which we also see as part of implementation, should be applied.
It is the job of the software developer (or computer scientist) to produce code that efficiently executes the idea of the algorithm.
In this respect, software validation by suitable tests showing the correctness of the implementation is also necessary at this point.</p>
<p>Since this part is mostly about software development, it is certainly dominated by computer science.
Of course, programming can also be done by mathematicians who however act as software developers then.</p>
<p>In principle, all three of the above parts can be done by one and the same person.
In most cases, however, more than one scientist is involved, since approaches can consist of multiple sufficiently complex subtasks that need to be handled by specialists.</p>
<h1 id="distinction-from-computational-science">Distinction from <em>Computational Science</em></h1>
<p>In contrast to a definition from <a href="https://en.wikipedia.org/wiki/Computational_science">Wikipedia</a>, which does <em>not</em> distinguish between SC and <em>Computational Science</em> (ClS; to be distinguished from CS, which is often used for computer science), we would like to promote such a distinction.</p>
<p>In our opinion, the focus of SC lies on the computing or computation aspect.
In other words, we have a computational problem that one tries to solve scientifically, guided by the three pillars mentioned above.</p>
<p>On the other hand, ClS, as the term says, is doing <em>science</em> in a <em>computational</em> manner.
This means that ClS tries to answer questions from a certain scientific area and hence always has the application in mind.
For example, problems from astrophysics are nowadays often solved computationally by simulations involving mathematical models that aim to reflect reality.
We can thus say that “SC is applied to do ClS” in this case.
Of course, computational problems in SC can be motivated by questions from ClS or from a certain scientific discipline directly, but they do not have to be.
Problems in SC can also emerge from other problems in SC.</p>
<h1 id="summary">Summary</h1>
<p>This text tried to formulate a new definition of <em>Scientific Computing</em>.
Existing approaches are often based on questions like which mathematical or computer science areas contribute to SC, which is rather subjective.
We aimed to establish objectivity in the new definition by following a different approach, called <em>the three pillars of SC</em>: theory, methodology, implementation.
Finally, an explicit distinction to <em>Computational Science</em> was made which however conflicts with other attempts; see, e.g., <a href="https://en.wikipedia.org/wiki/Computational_science">Wikipedia</a>.</p>First of all, “Scientific Computing” (SC) is an accepted term for a certain area of research among mathematicians and computer scientists particularly, but also for the scientific community in general. However, scientists seem to have varying notions of the term, even if they come from similar disciplines. This text attempts to show why a clear definition of the term is not straightforward, but finally dares to do exactly that, a fairly clear (objective) definition.