Paper Explainer: Asymmetry Observables and the Origin of $R_{D^{(*)}}$ Anomalies

This is an explainer of my recent paper, written with my colleague here at Rutgers, David Shih, and David’s graduate student, Pouya Asadi. It is, in some sense, a follow-up to our previous collaboration, which for various reasons I wasn’t able to write up when it came out earlier this year.

This paper concerns itself with the $R_D$ and $R_{D^*}$ anomalies, so I better start off explaining what those are. The Standard Model has three “generations” of matter particles which are identical except for their masses. The lightest quarks are the up and down, then the charm and strange, and finally the heaviest pair, the top and bottom. The electron has the muon and the tau as progressively heavier partners. The heavier particles can decay into a lighter version only through the interaction with a $W$ boson — these are the only “flavor changing” processes in the Standard Model.

So, if you look at the $b$ quark, it can decay into a $c$ quark by radiating off a $W$ boson. Now, the $b$ quark only weighs around 5 GeV, so it can’t make a “real” $W$. Instead it creates a “virtual” $W$, which quickly decays. Some of the time, the $W$ turns into an electron, muon, or tau, plus its paired neutrino: \[ b \to c\, (W^- \to \ell \nu), \] where $\ell$ is one of the charged leptons. In the Standard Model, the $W$ couples to all three leptons $\ell = e, \mu, \tau$ with exactly the same strength; the only difference between them is their masses, and the tau, being much heavier than the mu or the electron, has noticeably less phase space available in the decay. Therefore, we could measure the ratio \[ \frac{b \to c\, (W^- \to \tau \nu)}{b \to c\, (W^- \to \ell \nu)}, \qquad \ell = e \text{ or } \mu, \] and the Standard Model predicts it very precisely: it would be one if the tau were as light as the electron and the muon, and the tau’s mass pulls it down significantly.
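To get a rough feel for why the tau channel is suppressed, compare the masses involved (approximate, standard values, not numbers from the paper): the $b \to c$ transition releases only about \[ m_b - m_c \approx 4.8\ \text{GeV} - 1.3\ \text{GeV} \approx 3.5\ \text{GeV} \] of energy to share between the charged lepton and the neutrino, while the tau alone weighs $m_\tau \approx 1.78$ GeV. Paying that mass cost leaves far less phase space for the tau mode than for the nearly massless electron or muon, which is why the Standard Model ratio comes out well below one.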

Now, we can’t measure “bare” quarks, so we have to look at the decays of mesons (bound states of a quark and an antiquark). In this case, we care about the $B$ meson (which contains a $b$ quark) decaying into either the $D$ or the $D^*$ meson (both of which contain a charm quark). Looking at mesons means we have all sorts of additional calculational and experimental issues to deal with (which is why we look at ratios: dividing one rate by another cancels out many of these effects). So we define \[ R_D = \frac{B \to D \tau \nu}{B \to D \ell \nu} \quad \text{and} \quad R_{D^*} = \frac{B \to D^* \tau \nu}{B \to D^* \ell \nu}, \qquad \ell = e \text{ or } \mu. \] In the Standard Model, including all effects, we can predict \[ R_D = 0.299 \pm 0.003, \qquad R_{D^*} = 0.258 \pm 0.005. \] When we measure these ratios, we find \[ R_D = 0.407 \pm 0.046, \qquad R_{D^*} = 0.304 \pm 0.015. \] All told, these two measurements combined are a $3.8\sigma$ deviation from the Standard Model, one of the largest discrepancies currently seen between theory and experiment. It’s not $5\sigma$ yet, so it isn’t a discovery, but it is very interesting.
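As a quick sanity check on that $3.8\sigma$ number, here is a back-of-the-envelope combination of the two deviations. This is only a sketch: it treats the two measurements as independent Gaussians, adds theory and experimental errors in quadrature, and ignores the correlation between $R_D$ and $R_{D^*}$ that the official average takes into account.

```python
import math

# Central values and uncertainties quoted above: Standard Model predictions
# and the current experimental world averages.
sm       = {"RD": (0.299, 0.003), "RDstar": (0.258, 0.005)}
measured = {"RD": (0.407, 0.046), "RDstar": (0.304, 0.015)}

def pull(obs):
    """Deviation of one observable, in units of the combined theory + experiment error."""
    sm_val, sm_err = sm[obs]
    ex_val, ex_err = measured[obs]
    return (ex_val - sm_val) / math.sqrt(sm_err**2 + ex_err**2)

pulls = {obs: pull(obs) for obs in sm}
combined = math.sqrt(sum(p**2 for p in pulls.values()))

print(pulls)     # roughly 2.3 sigma for R_D and 2.9 sigma for R_D*
print(combined)  # about 3.7 sigma; including the correlation brings it to ~3.8 sigma
```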

Now, these anomalies have been known for a while, and there are many theoretical ideas on how to explain them. The basic idea is usually to introduce a new particle that couples the $b$ and $c$ quarks to the tau and a neutrino. This new particle then acts as a new source of flavor violation, and mediates the decay into taus slightly more often than into the other charged leptons.

In our previous paper, we added one such new mediator: a particle like the $W$ boson that coupled to “right-handed” quarks and to a new “right-handed” neutrino alongside the tau lepton. This avoided several of the stringent constraints on new “left-handed” $W$-like bosons (left and right here refer to the way the particles are spinning: in the Standard Model, the $W$ boson only interacts with quarks and leptons whose spins are oriented opposite to their direction of motion). But there are other options: charged scalars, other new $W$-like bosons, and a class of particles called “leptoquarks,” which can decay into a quark and a lepton at the same time (something no particle in the Standard Model can do).

In this paper, we consider how to tell all these different ideas apart. We start off by considering the range of possible $R_D$ and $R_{D^*}$ values that each model could produce. For a given new mediator particle with a specific mass and a specific interaction with the Standard Model particles, you can change the two ratios by specific amounts, and the details of those interactions tell you how the two can change in combination. In the figures below, we show the region of the $R_D$–$R_{D^*}$ plane that each model can populate. Some models can only move along certain lines, while others can cover whole regions of the plane. We consider models with both left-handed and right-handed neutrinos.
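To see schematically why some models trace out lines while others fill regions (a generic illustration, not a formula from the paper): a mediator that generates only the same left-handed interaction the $W$ itself uses simply rescales the Standard Model amplitude, so both ratios shift by a common factor, \[ R_{D} = R_{D}^{\rm SM}\,\left|1 + C\right|^2, \qquad R_{D^*} = R_{D^*}^{\rm SM}\,\left|1 + C\right|^2, \] where $C$ is the strength of the new interaction relative to the $W$’s. Varying the single number $C$ just slides the prediction along a line through the Standard Model point. A mediator that generates several different interaction structures at once (scalar, tensor, right-handed, and so on) has more independent knobs to turn, and can fill out a two-dimensional region instead.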

[Figure: single_medL.png]

In these figures, we also show the current experimental measurements (the grey lines, centered at the current average values, with error ellipses around them), and the projected accuracy of a future experiment called Belle II. Belle II will generate about 55 billion $B$ mesons by 2025, and will measure these ratios with much higher precision (the red and magenta ellipses). We don’t know what values it will measure (obviously, otherwise why do the experiment?), so we pick two possibilities. The first is that the Belle II results land right at today’s average values, but with much smaller error bars. If that’s the case, the anomalies will grow to something like $10\sigma$, which would be amazing evidence for new physics. We also consider a case where the Belle II measurements come out lower than the current values, but still sit $5\sigma$ away from the Standard Model (this is entirely possible: it would be only a $2\sigma$ downward shift from the present central values). Still, that’s a discovery. Other outcomes are possible, but the $5\sigma$ case is a “worst-case” scenario for our purposes (other than an anomaly below $5\sigma$, but in that case our work would matter less).
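To build some intuition for how the significance scales with the error bars (the shrink factors below are purely illustrative and are not the paper’s actual Belle II projections), we can repeat the little exercise from above, keeping today’s central values but shrinking the experimental uncertainties:

```python
import math

# Same central values as before; only the experimental errors are shrunk,
# by a purely illustrative factor, with today's central values held fixed.
sm       = {"RD": (0.299, 0.003), "RDstar": (0.258, 0.005)}
measured = {"RD": (0.407, 0.046), "RDstar": (0.304, 0.015)}

def combined_sigma(shrink):
    """Naive combined deviation if the experimental errors improve by `shrink`."""
    total = 0.0
    for obs in sm:
        sm_val, sm_err = sm[obs]
        ex_val, ex_err = measured[obs]
        total += ((ex_val - sm_val) / math.sqrt(sm_err**2 + (ex_err / shrink)**2)) ** 2
    return math.sqrt(total)

for shrink in (1, 2, 3, 4):
    print(shrink, round(combined_sigma(shrink), 1))
# Prints roughly 3.7, 6.9, 9.5, and 11.7 sigma: with unchanged central values,
# improving the experimental errors by a factor of 3-4 lands in the
# neighborhood of the 10-sigma benchmark.
```

Of course, if the central values themselves drift downward, as in our $5\sigma$ benchmark, the significance grows more slowly than this.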

[Figure: single_medR.png]

Using just this, you can see that, once the measurements from Belle II are made, we might be able to narrow down the possible models of new physics (especially in the $10\sigma$ case). But we still won’t know for sure. So in our paper, we consider what else we can look at that would distinguish the various possibilities. Our benchmark goal is to see if we can distinguish left- and right-handed neutrinos.

We therefore consider “asymmetry observables”: quantities that capture how each model affects the direction of the tau lepton relative to the other particles in the decay, or how the spin of the tau tends to be oriented. The forward-backward asymmetry ${\cal A}_{FB}$, for example, measures how often the tau moves in the same direction as the $D$ (or $D^*$) versus the opposite direction. The polarization asymmetries ${\cal P}$ tell you how often the spin of the tau points along or against a particular direction in the decay. There are three directions you can choose; we denote them $\perp$, $\tau$, and $T$. The “$T$” direction will turn out to be especially interesting, but no one yet has a fully viable plan to measure it (though it is at least theoretically possible).
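Schematically, in terms of simple event counts (this is the generic form of such asymmetries, not the precise conventions of the paper), \[ {\cal A}_{FB} = \frac{N(\cos\theta > 0) - N(\cos\theta < 0)}{N(\cos\theta > 0) + N(\cos\theta < 0)}, \qquad {\cal P}_i = \frac{N(\text{spin along } i) - N(\text{spin against } i)}{N(\text{spin along } i) + N(\text{spin against } i)}, \] where $\theta$ is the angle between the tau and the $D^{(*)}$ directions and $i$ is one of the three axes $\perp$, $\tau$, $T$. Each asymmetry runs between $-1$ and $+1$, and different mediators push them around in different ways.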

For the $10\sigma$ and $5\sigma$ benchmark scenarios at Belle II, we can then work out the range of values we expect each of these observables to take in the different models of new physics. We end up with plots like the one below (figure 6 of the paper), showing how the various observables are correlated in each model: right-handed neutrino models in red, left-handed in green. The point is that, in the $10\sigma$ scenario, the green and red blobs are distinct: if you can measure all of these observables (even leaving out ${\cal P}_T$ in this case), you can determine whether the new physics involves left- or right-handed neutrinos.

You can do even better: for most possible outcomes, you can tell which specific model would be responsible for the new physics (assuming, again, that you can make all these measurements). In a few cases, certain leptoquark models can’t be told apart from other leptoquarks that use the same handedness of neutrino. There, the ${\cal P}_T$ measurement can break the degeneracy in most, though not all, cases. If you can make the measurement.

[Figure: optimistic.png (the $10\sigma$ benchmark)]

In the $5\sigma$ case, you can see from the blob plots below that the situation will be harder. In many cases the asymmetry observables will tell the left- and right-handed scenarios apart, but not in all of them. Again, the ${\cal P}_T$ measurement can come to the rescue, here completely breaking the degeneracy, assuming the errors on that measurement can be made sufficiently small. So overall a pretty positive result, though much depends on finding a way to measure one asymmetry observable that may have a big role to play.

[Figure: pessimistic.png (the $5\sigma$ benchmark)]

I haven’t played much with flavor models before; these projects with David and Pouya are some of my first forays into the field. Anomalies in these finicky flavor measurements tend to get less attention from the community than results from the LHC. But the anomalies are there in the data, they may well become more statistically significant, and they may turn out to be new physics. In that case, we’ll all have to become experts in the language of flavor physics.