Christopher Körber

Postdoc: Physics

The Standard Model (SM) of particle physics is currently the most precise theory describing fundamental physics and drives research at a global level. While the discovery of the Higgs particle further confirmed the SM's range of applicability, it also disfavored the existence of new physics – physics Beyond the Standard Model (BSM) – up to the TeV scale. This is challenging, as current theories cannot explain macroscopic observations like the existence of Dark Matter or the asymmetry between matter and antimatter. The essential question is therefore: where does one expect signals beyond the Standard Model on a microscopic level, and how can one describe them?

Complementary to the collider investigations of BSM signatures that take place at the high-energy frontier, it is also possible to probe for these signals with experiments at low energy via precision measurements. Indeed, it can be argued that such indirect searches may have greater reach, with sensitivities in some cases approaching the Grand Unified scale. Examples of such sensitive low-energy tests of BSM physics include searches for nucleon, electron, and atomic Electric Dipole Moments; efforts to directly detect Dark Matter through its scattering off atomic nuclei; measurements of lepton-number-violating processes like neutrinoless double beta decay; and sensitive tests for new sources of flavor violation through the conversion of a muon to an electron in the nuclear field. Though there is a variety of ideas on how to detect such signals, the common ground of these low-energy experiments is the measurement of BSM signals on a nuclear level – ranging from heavy nuclei such as xenon to planned experiments on light nuclei such as helium.

Different BSM theories answer these questions differently and lead to distinctive beyond-the-SM phenomena. Linking these experiments to the fundamental level is therefore of relevance for two reasons:

  • Identification of BSM sources

    While a non-vanishing signal would confirm the existence of BSM structures, to identify the sources of these signals, one needs to propagate the signal from the target (nuclear many-body level) to the level of the fundamental theory.

  • Experimental guidance

    Once one is able to propagate possible BSM interactions to the level of nuclei, it might be possible to identify special nuclei which feature a coherent enhancement or suppression of such structures and thus guide experimental efforts.

Theoretical descriptions of such phenomena still suffer from large uncontrolled uncertainties – mostly associated with the nuclear many-body methods or an insufficient treatment of relevant interactions. In some cases, the method-dependent extrinsic uncertainties have been reduced by several factors but can still be as large as 100%. Since the expected signals are small, an accurate and precise description is needed to discriminate between different BSM structures. It is therefore essential to understand the effects of all relevant uncertainties associated with the propagation of scales.

As the main objective of my current research, I intend to set up a consistent and accurate framework for analyzing BSM effects on a nuclear level, which can be systematically improved to reach the desired precision of the description.

ill-qa

While quantum computing proposes promising solutions to computational problems not accessible with classical approaches, due to current hardware constraints, most quantum algorithms are not yet capable of computing systems of practical relevance, and classical counterparts outperform them. To practically benefit from quantum architectures, one has to identify problems and algorithms with favorable scaling and improve on corresponding limitations depending on the available hardware. For this reason, we developed an algorithm that solves integer linear programming (ILP) problems, a classically NP-hard problem, on a quantum annealer, and investigated problem- and hardware-specific limitations. This work presents the formalism for mapping ILP problems to the annealing architecture, shows how to systematically improve computations utilizing optimized anneal schedules, and models the anneal process through a simulation. It illustrates the effects of decoherence and many-body localization for the minimum dominating set problem and compares annealing results against numerical simulations of the quantum architecture. We find that the algorithm outperforms random guessing but is limited to small problems, and that annealing schedules can be adjusted to reduce the effects of decoherence. Simulations qualitatively reproduce the algorithmic improvements of the modified annealing schedule, suggesting that the improvements have their origin in quantum effects.
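
As a rough illustration of the ILP-to-annealer mapping mentioned above (a minimal sketch, not the encoding used in the paper; the graph, variable names, penalty weight, and the brute-force check are illustrative assumptions), a small minimum dominating set instance can be cast as a QUBO by turning each covering constraint into a quadratic penalty with binary slack variables:

    # Illustrative sketch only: encode a small minimum-dominating-set instance as a
    # QUBO with quadratic penalties and binary slack variables, then verify by
    # brute force.  The graph, names, and penalty weight are assumptions.
    from itertools import product

    GRAPH = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # adjacency list of a toy graph
    PENALTY = 10.0  # must outweigh any objective gain from violating a constraint


    def build_qubo(graph, penalty=PENALTY):
        """Return (qubo, variables, offset) with qubo = {(i, j): coefficient}."""
        variables = [("x", v) for v in graph]  # x_v = 1 if vertex v is selected
        qubo, offset = {}, 0.0

        def add(i, j, coeff):
            key = tuple(sorted((i, j)))
            qubo[key] = qubo.get(key, 0.0) + coeff

        for v in graph:  # objective: minimize the number of selected vertices
            add(("x", v), ("x", v), 1.0)

        # One constraint per vertex v:  sum_{u in N[v]} x_u - 1 - sum_k 2^k s_{v,k} = 0,
        # enforced through the quadratic penalty  P * (residual)^2.
        for v, neighbors in graph.items():
            closed = [("x", u) for u in [v] + list(neighbors)]
            n_bits = max(1, (len(closed) - 1).bit_length())
            slacks = [(("s", v, k), 2.0 ** k) for k in range(n_bits)]
            variables += [s for s, _ in slacks]
            terms = [(var, 1.0) for var in closed] + [(s, -w) for s, w in slacks]
            for i, (vi, ci) in enumerate(terms):
                add(vi, vi, penalty * (ci * ci - 2.0 * ci))  # constant of the residual is -1
                for vj, cj in terms[i + 1:]:
                    add(vi, vj, 2.0 * penalty * ci * cj)
            offset += penalty  # from the (-1)^2 term of the expanded penalty
        return qubo, variables, offset


    def brute_force(qubo, variables, offset):
        """Exhaustive minimization -- only feasible for tiny instances."""
        best = (float("inf"), None)
        for bits in product((0, 1), repeat=len(variables)):
            values = dict(zip(variables, bits))
            energy = offset + sum(c * values[i] * values[j] for (i, j), c in qubo.items())
            best = min(best, (energy, bits))
        return best


    if __name__ == "__main__":
        qubo, variables, offset = build_qubo(GRAPH)
        energy, bits = brute_force(qubo, variables, offset)
        chosen = sorted(var[1] for var, bit in zip(variables, bits) if var[0] == "x" and bit)
        print("dominating set:", chosen, "| energy (= set size if feasible):", energy)

On real hardware, such a QUBO dictionary would be handed to the annealer instead of the brute-force minimizer used here for verification; the penalty weight has to be chosen large enough that violating a constraint can never lower the total energy.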

nn-s-wave-su3

We report on the first application of the stochastic Laplacian Heaviside method for computing multi-particle interactions with lattice QCD to the two-nucleon system. Like the Laplacian Heaviside method, this method allows for the construction of interpolating operators which can be used to construct a positive-definite set of two-nucleon correlation functions, unlike nearly all other applications of lattice QCD to two nucleons in the literature. It also allows for a variational analysis in which optimal linear combinations of the interpolating operators are formed that couple predominantly to the eigenstates of the system. Utilizing such methods has become of paramount importance in order to help resolve the discrepancy in the literature on whether two nucleons in either isospin channel form a bound state at pion masses heavier than physical, with the discrepancy persisting even at the $SU(3)$-flavor-symmetric point with all quark masses near the physical strange quark mass. This is the first in a series of papers aimed at resolving this discrepancy. In the present work, we employ the stochastic Laplacian Heaviside method without a hexaquark operator in the basis at a lattice spacing of $a\sim0.086$~fm, lattice volume of $L=48 a \simeq 4.1$~fm and pion mass $m_\pi \simeq 714$~MeV. With this setup, the observed spectrum of two-nucleon energy levels strongly disfavors the presence of a bound state in either the deuteron or dineutron channel.

– Published in Phys. Rev. D 102, 034507
fk-over-fpi

We report the results of a lattice QCD calculation of $F_K/F_\pi$ using Möbius Domain-Wall fermions computed on gradient-flowed $N_f=2+1+1$ HISQ ensembles. The calculation is performed with five values of the pion mass in the range $130 \lesssim m_\pi \lesssim 400$~MeV, four lattice spacings of $a\sim 0.15, 0.12, 0.09$ and $0.06$~fm, and multiple values of the lattice volume. The interpolation/extrapolation to the physical pion and kaon mass point, the continuum limit, and the infinite-volume limit is performed with a variety of different extrapolation functions utilizing both the relevant mixed-action effective field theory expressions as well as discretization-enhanced continuum chiral perturbation theory formulas. We find that the $a\sim0.06$~fm ensemble is helpful, but not necessary, to achieve a sub-percent determination of $F_K/F_\pi$. We also include an estimate of the strong isospin-breaking corrections and arrive at a final result of $F_{\hat{K}^+}/F_{\hat{\pi}^+} = 1.1942(45)$ with all sources of statistical and systematic uncertainty included. This is consistent with the FLAG average value, providing an important benchmark for our lattice action. Combining our result with experimental measurements of the pion and kaon leptonic decays leads to a determination of $|V_{us}|/|V_{ud}| = 0.2311(10)$.
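
For context, the $|V_{us}|/|V_{ud}|$ value quoted in the last sentence follows from the standard relation between the ratio of kaon and pion leptonic decay widths and $F_K/F_\pi$ (written here schematically, with radiative and electromagnetic corrections lumped into $\delta_{\rm EM}$):

$$\frac{\Gamma(K^\pm \to \mu^\pm \nu)}{\Gamma(\pi^\pm \to \mu^\pm \nu)} = \frac{|V_{us}|^2}{|V_{ud}|^2}\,\frac{F_K^2}{F_\pi^2}\,\frac{m_K\left(1 - m_\mu^2/m_K^2\right)^2}{m_\pi\left(1 - m_\mu^2/m_\pi^2\right)^2}\left(1 + \delta_{\rm EM}\right),$$

so that the lattice value of $F_K/F_\pi$, combined with the measured decay rates, fixes the CKM ratio.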

– Published in JOSS 02007
espressodb

EspressoDB is a programmatic object-relational mapping (ORM) data management framework implemented in Python and based on the Django web framework. EspressoDB was developed to streamline data management and to centralize and promote data integrity, while providing domain flexibility and ease of use. It is designed to integrate directly into user software, allowing dynamic access to vast amounts of relational data at runtime. Compared to existing ORM frameworks like SQLAlchemy or Django itself, EspressoDB lowers the barrier of access by simplifying the project setup and provides further features to ensure uniqueness and consistency across multiple data dependencies. In contrast to software like DVC, VisTrails, or Taverna, which describe the workflow of computations, EspressoDB interacts with the data itself and can thus be used in a complementary spirit.
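
To make the ORM aspect concrete, the sketch below shows the kind of relational model such a framework manages, written against the plain Django ORM that EspressoDB builds on; the model and field names are hypothetical, and EspressoDB's own base class, project scaffolding, and consistency checks are not shown.

    # Hypothetical Django-style models of the kind EspressoDB manages (to be placed
    # inside a Django app's models.py).  Names and fields are illustrative only.
    from django.db import models


    class Ensemble(models.Model):
        """A gauge-field ensemble, identified uniquely by its physical parameters."""

        short_tag = models.CharField(max_length=20, help_text="Human-readable label")
        a_fm = models.FloatField(help_text="Lattice spacing in fm")
        l_sites = models.IntegerField(help_text="Spatial extent in lattice units")
        mpi_mev = models.FloatField(help_text="Pion mass in MeV")

        class Meta:
            # Uniqueness is enforced on the physical parameters, not the label,
            # so duplicate entries cannot silently enter the database.
            constraints = [
                models.UniqueConstraint(
                    fields=["a_fm", "l_sites", "mpi_mev"], name="unique_ensemble"
                )
            ]


    class CorrelatorFile(models.Model):
        """A measurement file, linked to the ensemble it was computed on."""

        ensemble = models.ForeignKey(Ensemble, on_delete=models.CASCADE)
        path = models.TextField(help_text="Location of the data file")
        n_configs = models.IntegerField(help_text="Number of configurations")

At runtime, analysis code can then query the data through standard Django querysets, e.g. CorrelatorFile.objects.filter(ensemble__a_fm=0.09), which is the kind of dynamic access to relational data described above.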

luescher-nd

Contact interactions can be used to describe a system of particles at unitarity, contribute to the leading part of nuclear interactions and are numerically non-trivial because they require a proper regularization and renormalization scheme. We explain how to tune the coefficient of a contact interaction between non-relativistic particles on a discretized space in 1, 2, and 3 spatial dimensions such that we can remove all discretization artifacts. By taking advantage of a latticized Lüscher zeta function, we can achieve a momentum-independent scattering amplitude at any finite lattice spacing.
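
To illustrate the kind of tuning involved, the sketch below uses only the basic two-particle eigenvalue condition for a contact interaction on a periodic lattice, $1 = (C/V)\sum_q 1/(E - q^2/2\mu)$, with illustrative units and parameters; the paper's actual prescription works with a latticized Lüscher zeta function and improved dispersion relations in 1, 2, and 3 dimensions.

    # Minimal sketch (assumed units hbar = 1; parameters are illustrative, not the
    # paper's): tune the contact coupling C so that a chosen finite-volume energy
    # solves the lattice eigenvalue condition 1 = (C/V) * sum_q 1/(E - q^2/(2 mu)).
    import itertools
    import numpy as np


    def lattice_momenta_sq(n_sites, box_length):
        """Squared relative momenta q^2 on an n_sites^3 periodic lattice."""
        modes = np.arange(n_sites) - n_sites // 2  # integer modes in the Brillouin zone
        q2 = [
            (2.0 * np.pi / box_length) ** 2 * (nx**2 + ny**2 + nz**2)
            for nx, ny, nz in itertools.product(modes, repeat=3)
        ]
        return np.array(q2)


    def contact_coupling(target_energy, n_sites, box_length, mu=1.0):
        """Coupling C for which target_energy is an eigenvalue at this discretization."""
        q2 = lattice_momenta_sq(n_sites, box_length)
        bubble = np.sum(1.0 / (target_energy - q2 / (2.0 * mu))) / box_length**3
        return 1.0 / bubble


    if __name__ == "__main__":
        L, E_target = 10.0, -0.5  # box size and target (bound-state) energy, made up
        for n in (8, 16, 32):
            print(n, contact_coupling(E_target, n, L))

As the number of lattice sites grows at fixed box size, the coupling obtained this way runs with the lattice spacing; controlling this running analytically, such that the resulting scattering amplitude is momentum-independent, is what the latticized Lüscher zeta function accomplishes in the paper.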

– Published in Phys. Rev. B 100, 075141
hubbard-ergodicity

The Hubbard model arises naturally when electron-electron interactions are added to the tight-binding descriptions of many condensed matter systems. For instance, the two-dimensional Hubbard model on the honeycomb lattice is central to the ab initio description of the electronic structure of carbon nanomaterials, such as graphene. Such low-dimensional Hubbard models are advantageously studied with Markov chain Monte Carlo methods, such as Hybrid Monte Carlo (HMC). HMC is the standard algorithm of the lattice gauge theory community, as it is well suited to theories of dynamical fermions. As HMC performs continuous, global updates of the lattice degrees of freedom, it provides superior scaling with system size relative to local updating methods. A potential drawback of HMC is its susceptibility to ergodicity problems due to so-called exceptional configurations, for which the fermion operator cannot be inverted. Recently, ergodicity problems were found in some formulations of HMC simulations of the Hubbard model. Here, we address this issue directly and clarify under what conditions ergodicity is maintained or violated in HMC simulations of the Hubbard model. We study different lattice formulations of the fermion operator and provide explicit, representative calculations for small systems, often comparing to exact results. We show that a fermion operator can be found which is both computationally convenient and free of ergodicity problems.

– Published in Physics Today
simulations-for-high-school-students

In our previous publication, "A primer to numerical simulations: The perihelion motion of Mercury", we describe an approach to teaching numerical simulations to high school students. In this online article, we describe our motivation and the experiences we gained during the two summer schools at which we presented this course.

– Published in Universitäts- und Landesbibliothek Bonn
phd-thesis

Fundamental symmetries (and their violations) play a significant role in active experimental searches for Beyond the Standard Model (BSM) signatures. An example of such phenomena is the neutron Electric Dipole Moment (EDM), a measurement of which would be evidence for Charge-Parity (CP) violation not attributable to the basic description of the Standard Model (SM). Another example is the strange scalar quark content of the nucleon and its coupling to Weakly Interacting Massive Particles (WIMPs), which is a candidate model for Dark Matter (DM). The theoretical understanding of such processes is fraught with uncertainties and uncontrolled approximations. On the other hand, methods within nuclear physics, such as Lattice Quantum Chromodynamics (LQCD) and Effective Field Theories (EFTs), are emerging as powerful tools for calculating various types of nuclear and hadronic observables non-perturbatively. This research effort will use such tools to investigate phenomena related to BSM physics induced within light nuclear systems. As opposed to LQCD, which deals with quarks and gluons, in Nuclear Lattice Effective Field Theory (NLEFT) individual nucleons—protons and neutrons—form the degrees of freedom. From the symmetries of Quantum Chromodynamics (QCD), one can derive the most general interaction structures allowed on the level of these individual nucleons. In general, this includes an infinite number of possible interactions. Utilizing the framework of EFTs, more specifically for this work Chiral Perturbation Theory (χPT), one can systematically expand the nuclear behavior in a finite set of relevant nuclear interactions with a quantifiable accuracy. The fundamental parameters of this theory are related to experiments or LQCD computations. Using this set of effective nuclear interaction structures, one can describe many-nucleon systems by simulating the quantum behavior of each individual nucleon involved. The 'ab initio' method NLEFT introduces a spatial lattice which is finite in its volume (FV) and allows one to exploit powerful numerical tools in the form of statistical Hybrid Monte Carlo (HMC) algorithms. The effects of all three approximations—the statistical sampling, the finite volume, and the discretization of space—can be analytically understood and used to make a realistic and accurate estimate of the associated uncertainty. In the first part of the thesis, χPT is used to derive nuclear interactions with a possible BSM candidate up to Next-to-Leading Order (NLO) in the specific case of scalar interactions between DM and quarks or gluons. Following this analysis, Nuclear Matrix Elements (NMEs) are presented for light nuclei (2H, 3He and 3H), including a complete uncertainty estimation. These results will eventually serve as the benchmark for the many-body computations. In the second part of this thesis, the framework of NLEFT is briefly presented. It is shown how one can increase the accuracy of NLEFT by incorporating few-body forces in a non-perturbative manner. Finite-Volume (FV) and discretization effects are investigated and estimated for BSM NMEs on the lattice. Furthermore, it is shown how different boundary conditions can be used to decrease the size of FV effects and extend the scope of available lattice momenta to the range of physical interest.

– Published in Physics Education
mercury-numerical

Numerical simulations are playing an increasingly important role in modern science. In this work it is suggested to use a numerical study of the famous perihelion motion of the planet Mercury (one of the prime observables supporting Einstein's General Relativity) as a test case to teach numerical simulations to high school students. The paper includes details about the development of the code as well as a discussion of the visualization of the results. In addition, a method is discussed that allows one to estimate the size of the effect as well as the uncertainty of the approach a priori. At the same time, this enables the students to double-check the results found numerically. The course is structured into a basic block and two further refinements aimed at more advanced students.
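
To give a flavor of the kind of code the course develops (a minimal sketch under simplifying assumptions, not the course material itself; the correction strength alpha is deliberately exaggerated so that the precession becomes visible after a few simulated years):

    # Sketch: integrate Mercury's orbit with Newtonian gravity plus an additional
    # 1/r^2-enhanced term of the form used to mimic the general-relativistic
    # perihelion shift.  Units: AU, years, solar masses, so G*M_sun = 4*pi^2.
    import numpy as np

    GM = 4.0 * np.pi**2   # G * M_sun in AU^3 / yr^2
    ALPHA = 1.0e-2        # exaggerated correction strength in AU^2 (illustrative)


    def acceleration(position, alpha=ALPHA):
        """a = -GM/r^2 * (1 + alpha/r^2) * r_hat."""
        r = np.linalg.norm(position)
        return -GM * position / r**3 * (1.0 + alpha / r**2)


    def integrate(r0, v0, dt=1.0e-4, n_steps=200_000):
        """Euler-Cromer integration; returns the trajectory as an (n_steps, 2) array."""
        r, v = np.array(r0, dtype=float), np.array(v0, dtype=float)
        trajectory = np.empty((n_steps, 2))
        for i in range(n_steps):
            v += acceleration(r) * dt
            r += v * dt
            trajectory[i] = r
        return trajectory


    if __name__ == "__main__":
        # Start near Mercury's perihelion: r ~ 0.307 AU, v ~ 12.4 AU/yr.
        orbit = integrate(r0=(0.307, 0.0), v0=(0.0, 12.4))
        print("final position (AU):", orbit[-1])

Recording the points of closest approach over many orbits and fitting the drift of their angular position then yields the perihelion shift, which can be compared with the a-priori estimate discussed in the paper.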

– Published in Europhysics Letters
n-body-hs

We present a general auxiliary field transformation which generates effective interactions containing all possible N-body contact terms. The strength of the induced terms can be described analytically in terms of general coefficients associated with the transformation and is thus controllable. This transformation provides a novel way of sampling 3- and 4-body (and higher) contact interactions non-perturbatively in lattice quantum Monte Carlo simulations. We show that our method reproduces the exact solution for a two-site quantum mechanical problem.

– Published in EPJ Web Conf. Volume 175, 2018
cns-proceeding

We show how lattice Quantum Monte Carlo simulations can be used to calculate electronic properties of carbon nanotubes in the presence of strong electron-electron correlations. We employ the path integral formalism and use methods developed within the lattice QCD community for our numerical work and compare our results to empirical data of the Anti-Ferromagnetic Mott Insulating gap in large diameter tubes.

– Published in EPJ Web Conf. Volume 175, 2018
few-body-hs

Through the development of many-body methodology and algorithms, it has become possible to describe quantum systems composed of a large number of particles with great accuracy. Essential to all these methods is the application of auxiliary fields via the Hubbard-Stratonovich transformation. This transformation effectively reduces two-body interactions to interactions of one particle with the auxiliary field, thereby improving the computational scaling of the respective algorithms. The relevance of collective phenomena and interactions grows with the number of particles. For many theories, e.g. Chiral Perturbation Theory, the inclusion of three-body forces has become essential in order to further increase the accuracy on the many-body level. In this proceeding, the analytical framework for establishing a Hubbard-Stratonovich-like transformation, which allows for the systematic and controlled inclusion of contact three- and more-body interactions, is presented.
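
For orientation, the two-body transformation that is generalized here is the Gaussian identity (written schematically for a single density-like operator $\hat{n}$ and coupling $A>0$; conventions in the proceeding may differ),

$$e^{\frac{A}{2}\hat{n}^2} = \frac{1}{\sqrt{2\pi A}}\int_{-\infty}^{\infty} d\phi\, e^{-\frac{\phi^2}{2A} + \phi\,\hat{n}},$$

which trades the two-body term $\hat{n}^2$ for a one-body coupling of $\hat{n}$ to the auxiliary field $\phi$. The transformation presented in the proceeding is constructed such that the analogous integral also generates $\hat{n}^3$, $\hat{n}^4$, and higher contact terms with controllable coefficients.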

– Published in Phys. Rev. C 96, 035805
dm-light

We study the scattering of Dark Matter particles off various light nuclei within the framework of chiral effective field theory. We focus on scalar interactions and include one- and two-nucleon scattering processes whose form and strength are dictated by chiral symmetry. The nuclear wave functions are calculated from chiral effective field theory interactions as well, and we investigate the convergence pattern of the chiral expansion in the nuclear potential and the Dark Matter-nucleus currents. This allows us to provide a systematic uncertainty estimate of our calculations. We provide results for $^2$H, $^3$H, and $^3$He nuclei, which are theoretically interesting; the latter is also a potential target for experiments. We show that two-nucleon currents can be systematically included but are generally smaller than predicted by power counting and suffer from significant theoretical uncertainties even in light nuclei. We demonstrate that accurate high-order wave functions are necessary in order to incorporate two-nucleon currents. We discuss scenarios in which one-nucleon contributions are suppressed such that higher-order currents become dominant.

– Published in Phys. Rev. C 93, 054002
twisted-boundaries

We describe and implement twisted boundary conditions for the deuteron and triton systems within finite volumes using the nuclear lattice EFT formalism. We investigate the finite-volume dependence of these systems with different twist angles. We demonstrate how various finite-volume information can be used to improve calculations of binding energies in such a framework. Our results suggest that with the appropriate twisting of boundaries, infinite-volume binding energies can be reliably extracted from calculations using modest volume sizes with cubic length $L \approx 8–14$ fm. Of particular importance is our derivation and numerical verification of three-body analogs of “i-periodic” twist angles that eliminate the leading-order finite-volume effects to the three-body binding energy.
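
For reference, a twist angle $\theta$ enters through the boundary condition on the single-particle wave function and thereby shifts the quantized finite-volume momenta (written schematically for one spatial direction),

$$\psi(x + L) = e^{i\theta}\,\psi(x) \quad\Longrightarrow\quad p_n = \frac{2\pi n + \theta}{L}, \qquad n\in\mathbb{Z},$$

where “i-periodic” boundary conditions correspond to $e^{i\theta} = i$, i.e. $\theta = \pi/2$; the derivation in the paper identifies the three-body analog of this choice.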

– Published in Ruhr-Universität Bochum (Master thesis)
large-nc

This work tests the consistency of nucleon-nucleon forces derived within two different approximation schemes of Quantum Chromodynamics (QCD)—chiral perturbation theory ($\chi$PT) and large-$N_c$ QCD. The approximation schemes and the derivation of the potential are demonstrated in this work. The consistency of the chiral potential, derived using the method of unitary transformation, is verified for chiral orders $Q^\nu$ with $\nu = 0,2,4$. The methods used, as well as possible extensions to higher orders, are presented.

– On Demand Talk at Qubits 2020 (digital)
qa-ilp-qubits-2020

Many problems across several disciplines, ranging from production planning to DNA analysis, can be represented as constrained integer optimization problems. With classical algorithms, exact solutions to integer linear programming (ILP) problems are generally exponentially hard to obtain, and heuristic algorithms are employed to find approximate solutions. This talk demonstrates a new algorithm that solves such constrained ILPs on a quantum annealer. Details of the formalism are presented for the case of the minimum dominating set problem, and annealing results for the D-Wave 2000Q architecture are benchmarked against brute-force classical solutions. For a variety of different annealing schedules, results are compared against numerical simulations of the quantum architecture to interpret hardware-specific limitations, like the effects of decoherence and many-body localization.

– INT 20-2b contribution at Seattle
bridging-the-scales

What is the nature of so-called Dark Matter, and does it interact with regular matter other than through gravity? Direct detection experiments aim to answer this question. Propagating measurements (or constraints) to the fundamental theory requires bridging several scales—from the target nucleus to individual nucleons to the level of quarks & gluons and beyond. However, the sheer number of parameters in model-independent descriptions of DM and the uncertainties associated with bridging the scales make it difficult to fully quantify uncertainties from theory to experiment. This talk exemplifies challenges associated with propagating uncertainties and presents an in-progress open-source and open-data meta-analysis tool for connecting strong operators at the QCD scale to nuclear observables, aiming to pin down the associated uncertainties. Since this project intends to unify efforts from different communities (from EFTs & LQCD over many-body computations to experiments), this talk addresses the challenges faced and intends to spark conversations regarding community-specific interests.

– NSD Seminar at LBNL & RUB (14.05.2020)
dark-matter-nsd

What is the nature of so-called Dark Matter, and does it interact with regular matter other than through gravity? Direct detection experiments aim to answer this question. Yet, propagating measurements (or constraints) to the fundamental theory requires bridging several scales—from the target nucleus to individual nucleons to the level of quarks & gluons and beyond. In this talk, I describe the methods used, assumptions made, and challenges faced in bridging these scales to address the implications of measurements.

– Workshop at LBL
lattedb-poetic

What is LatteDB? LatteDB aims to integrate all aspects of a Lattice QCD calculation, ensuring data integrity and provenance. By offering this framework, we hope that lattice practitioners can spend more time thinking about science and less about these well-understood, yet complex and time-consuming, workflow challenges. LatteDB is built on top of EspressoDB and is publicly available under a BSD license. For more information, see also the LatteDB manuscript and the Django documentation. This talk presents work in progress.

– Parallel Session Talk at APS April Meeting 2019, Denver
perihelion-mercury

Numerical simulations play an increasingly important role in modern science. In this work, we suggest using a numerical study of the famous perihelion motion of the planet Mercury (one of the prime observables supporting Einstein's General Relativity) as a test case to teach numerical simulations to high school students. The project was presented as a one-day course at a student summer school. This work includes details about the development of the code (Python), for which no prior programming experience is needed, a discussion of the visualization, as well as our teaching experience from the course. The course encourages students to develop an intuition for numerical simulations and motivates them to explore problems themselves and to critically analyze results.

– Seminar at IKP, Jülich
git-tutorial

In this talk, a simple git tutorial for users not familiar with git is presented. The tutorial teaches the essential basics for collaborating with git. In addition, a cheat sheet, good-practice advice, and further references and recommendations are provided. Press the "?" key within the "Web Presentation" to get navigation hints. "Download" the presentation for an offline version, or print and save the "Web Presentation" to PDF for traditional formats (landscape format).

– PhD Defense at University of Bonn
phd-defense

What do we know about Dark Matter and how can we possibly describe it on a fundamental scale? Dark Matter direct detection experiments provide the intriguing possibility to discover the existence of Dark Matter. The interpretation of the experiments, however, depends on a description of Dark Matter. In this talk, I present the calculation of Dark Matter particles scattering off various light nuclei from first principles. The calculations are based on the framework of Chiral Effective Field Theory, which relates nuclear interactions to the underlying fundamental theory: Quantum Chromodynamics. I introduce this framework and show how it can be extended to include Dark Matter interactions.

– Seminar at TP2, Ruhr-Universität Bochum
tp2-seminar-2017

What do we know about Dark Matter and how can we possibly describe it on a fundamental scale? In this talk, I present the scattering of Dark Matter particles off various light nuclei within the framework of chiral effective field theory. I focus on scalar interactions and include one- and two-nucleon scattering processes whose form and strength are dictated by chiral symmetry. The nuclear wave functions are calculated from chiral effective field theory interactions as well. The convergence pattern of the chiral expansion in the nuclear potential and the Dark Matter-nucleus currents is investigated. This allows for a systematic uncertainty estimate of the calculations. Results are provided for ${}^2$H, ${}^3$H, and ${}^3$He nuclei, which are theoretically interesting; the latter is also a potential target for experiments.

– Poster at BCGS Poster Session 2017
bcgs-poster-2017

What do we know about Dark Matter and how can we possibly describe it on a fundamental scale? In this session, I present the scattering of Dark Matter particles off various light nuclei within the framework of chiral effective field theory. I focus on scalar interactions and include one- and two-nucleon scattering processes whose form and strength are dictated by chiral symmetry. The nuclear wave functions are calculated from chiral effective field theory interactions as well. The convergence pattern of the chiral expansion in the nuclear potential and the Dark Matter-nucleus currents is investigated. This allows for a systematic uncertainty estimate of the calculations. Results are provided for ${}^2$H, ${}^3$H, and ${}^3$He nuclei, which are theoretically interesting; the latter is also a potential target for experiments.

– Parallel Session Talk at Lattice 2017, Granada
n-body-hs-lattice

Through the development of many-body methodology and algorithms, it has become possible to describe quantum systems composed of a large number of particles with great accuracy. Essential to all these methods is the application of auxiliary fields via the Hubbard-Stratonovich transformation. This transformation effectively reduces two-body interactions to interactions of one particle with the auxiliary field, thereby improving the computational scaling of the respective algorithms. With an increasing number of particles, the relevance of collective phenomena and interactions grows as well. For many theories, e.g. Chiral Perturbation Theory, the inclusion of three-body forces has become essential in order to further increase the accuracy on the many-body level. In this seminar, the analytical framework for establishing a Hubbard-Stratonovich-like transformation, which allows for the systematic and controlled inclusion of contact three- and more-body interactions, is presented.

– Seminar at IKP
nleft

Nuclear Lattice Effective Field Theory (NLEFT) has proven to be a valuable candidate for pushing the borders of nuclear physics into the regime of nuclei such as Carbon-12 and has enabled the computation of nuclear matrix elements for larger systems. The applicability of NLEFT to such systems is enabled through the utilization of lattice stochastic approaches such as Hybrid Monte Carlo (HMC) algorithms. In this seminar, I briefly present the underlying principles and strategy behind modern algorithms.

– Conference Talk at IAS-Symposium, Jülich
IAS_DM

While there are several astrophysical indications for the existence of so-called "Dark Matter" (DM), direct detection of a candidate particle in a lab has not been successful thus far. A set of direct detection experiments using different targets could potentially test the various types of DM interactions with different properties of nuclei. To connect possible future measurements of DM signals to a candidate DM particle, these experimental signals must be propagated through atomic nuclei — composed of many protons and neutrons — to the fundamental level of the DM theory. Due to the complexity of describing the many-body nucleus, this propagation has historically been done in a model-dependent fashion. As a candidate for propagating possible experimental data to the level of protons and neutrons, Nuclear Lattice Effective Field Theory (NLEFT) provides an approach which enables a systematic reduction of uncertainties.

– Seminar at IKP, Jülich
juelich-nleft-non-stochastic

Recently, nuclear lattice effective field theory (NLEFT) has proven to be a valuable candidate for pushing the borders of nuclear physics into the regime of nuclei such as Carbon-12 and has enabled the computation of nuclear matrix elements for larger systems. The applicability of NLEFT to such systems is enabled through the utilization of lattice stochastic approaches such as Hybrid Monte Carlo (HMC) algorithms. As is common for effective field theories, low-energy coefficients (LECs), which describe the strength of the effective interactions, need to be determined before one is able to describe and predict physical systems. In this process, it is essential to understand the effect of lattice artifacts such as the discrete lattice spacing and the finite volume – independent of stochastic artifacts. Therefore, non-stochastic approaches for smaller systems are employed to estimate LECs. In this seminar, such a non-stochastic approach for extracting nuclear energy levels on the lattice will be presented.

– Invited Talk at INT, Seattle
INT-twists

In this talk, I describe twisted boundary conditions for the deuteron and triton systems within finite volumes using the nuclear lattice EFT formalism. The finite-volume dependence of these systems with different twist angles is presented. Various finite-volume information can be used to improve calculations of binding energies in such a framework. The results suggest that with the appropriate twisting of boundaries, infinite-volume binding energies can be reliably extracted from calculations using modest volume sizes with cubic length $L \approx 8–14$ fm. Of particular importance is the derivation and numerical verification of three-body analogs of “i-periodic” twist angles that eliminate the leading-order finite-volume effects to the three-body binding energy.

– Seminar at ECT$^*$, Trento
ECT-NLEFT-EDM

One of the greatest unsolved problems in physics is the asymmetry of observed matter – the amount of baryonic matter greatly exceeds the amount of antibaryonic matter. Based on the assumption that at some point in time the universe contained the same amount of particles and antiparticles, a possible explanation for this observation requires the existence of physical processes which violate charge-parity (CP) symmetry. Though it is known that CP-violating processes exist in nature, the effects of the CP-violating contributions of the Standard Model (SM) of particle physics are not sufficient to explain the current asymmetry of matter. Therefore, there must be additional sources of CP violation that come from beyond the Standard Model (BSM). A quantity for probing the magnitude of such CP-violating processes is the electric dipole moment (EDM) of nuclei. In this talk, a strategy for creating a link between a possible future measurement and its fundamental interpretation is presented.

– Conference Talk at Students Exchange Week in Bad Honnef
bad-honnef-large-Nc

Conference talk on the consistency of nucleon-nucleon forces derived by two different approximation schemes of Quantum Chromodynamics (QCD)—the chiral perturbation theory ($\chi$PT) and large-Nc QCD. The approximation schemes and the derivation of the potential are demonstrated. The consistency of the chiral potential, derived using the method of unitary transformation, is verified for chiral orders $Q^\nu$ for $\nu = 0,2,4$. Used methods, as well as possible extensions for higher orders, are presented.