Why is there a conserved quantity for every continuous global symmetry?

[Photo: Emmy Noether]

One of the most important theorems in physics states:

"To every differentiable global symmetry of a physical system there corresponds a conservation law"

This theorem is known as Noether's first theorem, in honor of the great mathematician Emmy Noether, who proved it in 1915 in the context of classical mechanics (both relativistic and non-relativistic, but not quantum). Incidentally, Noether was among the leading scientists and professors who lost their jobs to the intolerance of the Nazis when they came to power, since they immediately passed a law barring Jews and Communists from working in universities and public institutions. This happened before the Holocaust and the Second World War. It is important to remember this so that it does not happen again.

This relationship between symmetries and conservation laws established by Noether is one of the most powerful ideas human beings have ever had. Conservation laws are a very useful tool for finding out how the quantities describing a physical system change over time: knowing that certain quantities do not change allows us to write equations whose unknowns are the quantities that do change, and then use the conserved quantities to work out how the others evolve.

On the other hand, the symmetries of a physical system are related to its aesthetic aspect. For example, a sphere is beautiful because, no matter how you rotate it, it remains the same. Noether's theorem thus relates beauty to usefulness in physics in a certain way. Pragmatism and aesthetics go hand in hand.

However, for the physics student it is not immediately evident that a continuous symmetry implies a conserved quantity. At first sight, the two seem to have nothing to do with each other. What is the reason for this relationship?

Moreover, today we know that the world is not classical but quantum, and that classical mechanics is only an approximation to the behavior of physical systems in a certain limit. Noether's original proof therefore does not suffice for the fundamental laws of nature. Does Noether's theorem still hold in quantum mechanics?

These are the two questions we are going to answer in this post.


Continuous transformations in quantum mechanics

Quantum mechanics has forever changed our conception of the physical world, the foundations of information theory, and even our conception of mathematical objects. An example of the latter is what happens with transformation groups. The way a group $G$ of transformations acts on a physical system can be very complicated: the system can be highly nonlinear and have a highly nontrivial geometric structure. However, the principle of superposition of quantum states tells us that in quantum mechanics the states in which any physical system can be found are elements of a vector space $\mathcal{H}$, since any linear combination of quantum states represents another possible quantum state of the system.

This means that to each element $g$ of the group $G$, acting on the physical system, there corresponds a linear transformation $\pi(g)$, an operator that acts on this vector space of quantum states. Linear transformations are much easier to study. Moreover, these transformations must be unitary, since they are the only ones that preserve the total probability, as the sum of the probabilities of all possible events in quantum mechanics must always be equal to one, no matter how many transformations we apply to the physical system.

The set of all these linear transformations on the space of quantum states is said to form a representation of the group $G$ on the space $\mathcal{H}$. More precisely, a representation is a function $\pi$ that associates, to each element $a$ of a group, a linear transformation $\pi(a)$ on a vector space, in such a way that the group structure is respected: composing two transformations and then taking the representation of the result is the same as composing the representations of each transformation. Mathematically, it is written as:

$$ \pi(b) \cdot \pi(a)=\pi(ba)$$

It is evident that mapping all the elements of a group to the identity transformation of a vector space is trivially a representation, which is called the trivial or scalar representation. What is interesting for mathematicians is to study the non-trivial representations of the different groups, those that faithfully reproduce their structure. The dimension of the vector space $\mathcal{H}$ on which the operators $\pi(g)$ act is called the dimension of the representation $\pi$. It is also interesting from the mathematical point of view to study the irreducible representations of each group $G$. These are the ones that have no sub-representations, that is, no proper subspace $\mathcal{H}^\prime \subset \mathcal{H}$ invariant under all the operators $\pi(g)$, on which $\pi$ would restrict to a representation. This matters because, given two representations $\pi_1$ and $\pi_2$ acting on two vector spaces $\mathcal{H}_1$ and $\mathcal{H}_2$, it is always possible to construct the direct sum representation $\pi_1 \oplus \pi_2$, which acts on the direct sum space $\mathcal{H}_1 \oplus \mathcal{H}_2$ spanned by the union of a basis of $\mathcal{H}_1$ and a basis of $\mathcal{H}_2$. In matrix notation:

$$ \pi(a) = \begin{pmatrix}\pi_1(a) & 0 \\ 0 & \pi_2(a)\end{pmatrix} $$
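To make these definitions concrete, here is a minimal numerical sketch (Python with NumPy and SciPy; not part of the original argument) that represents U(1) by $2\times 2$ rotation matrices, checks the homomorphism property $\pi(b)\pi(a)=\pi(ba)$ (for U(1) the group law is simply addition of angles), and builds the block-diagonal direct sum with the trivial representation:

```python
import numpy as np
from scipy.linalg import block_diag

def pi(theta):
    """2-dimensional real representation of U(1): rotation by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

a, b = 0.7, 1.9  # two arbitrary group parameters

# Homomorphism property: pi(b) . pi(a) == pi(b + a), since the group law of U(1) is angle addition
assert np.allclose(pi(b) @ pi(a), pi(b + a))

# Direct sum of this 2-dimensional representation with the trivial (scalar) one
def pi_sum(theta):
    return block_diag(pi(theta), np.array([[1.0]]))

print(pi_sum(a))   # block-diagonal matrix, as in the formula above
```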

In physics, however, what is interesting is to know, in each concrete case, under which representations the quantum states of a given physical system transform. Since these transformations must preserve probability, the operators $\pi(g)$ must all be unitary, that is, their adjoint must equal their inverse. Such representations of the group $G$ are called unitary representations. Because unitary transformations preserve the scalar product, vectors that were orthogonal remain orthogonal after the transformation. This is why every reducible unitary representation can be written as a direct sum of irreducible representations, although it may be difficult to find the basis in which it takes the block-diagonal form shown in the matrix above.
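As a sketch of this decomposition (assuming the same $2\times 2$ rotation representation as in the example above, but now viewed as unitary operators on $\mathbb{C}^2$), the eigenvectors of any non-trivial $\pi(\theta)$ give the invariant one-dimensional subspaces, and in that basis the representation splits into the irreducible representations $\theta \mapsto e^{+i\theta}$ and $\theta \mapsto e^{-i\theta}$:

```python
import numpy as np

def pi(theta):
    """Rotation by theta, seen as a unitary operator on C^2."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]], dtype=complex)

theta = 0.9
eigvals, V = np.linalg.eig(pi(theta))   # eigenvectors span the invariant subspaces (theta != 0, pi)

# The eigenvalues are e^{+i theta} and e^{-i theta}
expected = np.array([np.exp(1j * theta), np.exp(-1j * theta)])
print(np.allclose(np.sort_complex(eigvals), np.sort_complex(expected)))   # True

# In the eigenvector basis, every pi(theta') is diagonal: the representation is the
# direct sum of the two one-dimensional irreducible representations.
print(np.round(np.linalg.inv(V) @ pi(0.3) @ V, 10))
```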


Lie algebra of the U(1) group


Consider, for example, rotations of a physical system around the $Z$ axis. A rotation by an angle $\theta$ can be identified with the complex number $e^{i\theta}$ of unit modulus, so these rotations form the group U(1). Since every element of U(1) can be written in the form $e^{i\theta}$, with $\theta$ a real parameter, the imaginary unit $i$ is said to be the generator of the U(1) group. This terminology is due to the fact that every element of the group infinitesimally close to the neutral element (the identity transformation) can be written as
$ e^{i\theta} \sim 1+i\theta $
when $\theta$ is small; that is, elements close to the identity are reached by adding quantities proportional to the imaginary unit. Analogously, since this element corresponds to the operator $e^{-iL_z\theta}$ in the representation $\pi$ in which the group acts on the vector space of quantum states, the antihermitian operator $X=-iL_z$ is said to be the generator of the action of the U(1) rotation group on the space of states. Since physicists prefer to work with quantum-mechanical observables, which are Hermitian operators, we say, with a slight abuse of language, that the generator of the rotations is $L_z$; that is, the angular momentum $L_z$ is the generator of the action of the U(1) rotation group on the space of states.

Mathematicians normally use a different language. U(1) is a Lie group, a set that, in addition to having a group structure, has a differential manifold structure, in this case of real dimension 1. The tangent space to this manifold at the point where the neutral element is located has real dimension 1, and is generated by the imaginary unit $i$. That is, every vector of that tangent space is of the form $i\theta$. This tangent space is called the Lie algebra of the U(1) group. The imaginary unit $i$ is therefore the generator of the Lie algebra of the U(1) group, which is denoted as u(1) and is simply the real line (or rather, the imaginary line). The unitary representation $\pi$ is nothing more than a function between the Lie group U(1) and the Lie group of unitary transformations that act on the state space of the quantum system. As such, it must have a derivative $\pi^\prime$, which is nothing more than a linear function that goes from the Lie algebra of the group U(1) to the tangent space at the identity of the Lie group of unitary transformations of the state space:

$ \pi^\prime (i\theta)=\theta \frac{d}{d\theta^\prime} \pi(e^{i\theta^\prime}) \arrowvert_{\theta^\prime=0}=\theta \frac{d}{d\theta^\prime} e^{-iL_z\theta^\prime}\arrowvert_{\theta^\prime=0}=-iL_z\theta $
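This relation is easy to check numerically. The following sketch (Python with NumPy/SciPy; the operator $L_z=\mathrm{diag}(1,0,-1)$ is an assumed spin-1-like toy example, with $\hbar=1$) verifies that $e^{-iL_z\theta}$ is unitary and that the derivative of the representation at the identity recovers the antihermitian generator $-iL_z$:

```python
import numpy as np
from scipy.linalg import expm

# Assumed toy representation: L_z with eigenvalues (magnetic quantum numbers) 1, 0, -1
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def pi(theta):
    """Unitary operator representing a rotation by theta about the Z axis."""
    return expm(-1j * theta * Lz)

theta = 0.4
U = pi(theta)

# Unitarity: U^dagger U = identity
assert np.allclose(U.conj().T @ U, np.eye(3))

# Numerical derivative at theta = 0 recovers the generator -i Lz
eps = 1e-6
dpi = (pi(eps) - pi(-eps)) / (2 * eps)
print(np.allclose(dpi, -1j * Lz, atol=1e-8))   # True
```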


Lie algebras and their representations


This happens not only for the group U(1) but for any other Lie group, except that in general the Lie algebra of the group will be a real vector space of dimension greater than 1, generated by operators $X$ that, in the general case of a non-abelian group, do not commute with each other. In particular, for a matrix group $G$, the Lie algebra is defined as the space of matrices $X$ such that

$ e^{tX}\in G $

for any real number $t$. It is important to note that not all elements of $G$ need to be expressible in this way, nor does it necessarily happen that each $t$ gives a different element of $G$. What does happen is that each vector of the Lie algebra $X$ defines a path in $G$ that passes through the identity element (when $t=0$) with velocity vector

$ \frac{d}{dt} \left( e^{tX} \right) \arrowvert_{t=0}=X $
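For instance, here is a short numerical sketch (the antisymmetric matrix $X$ below is just an assumed element of the Lie algebra so(3)) checking that $e^{tX}$ stays inside SO(3) for every $t$ and that the velocity vector of the path at the identity is $X$:

```python
import numpy as np
from scipy.linalg import expm

# An element of the Lie algebra so(3): a real antisymmetric matrix
# (this one generates rotations about the Z axis)
X = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

for t in (0.3, 1.7, -2.5):
    g = expm(t * X)
    # e^{tX} belongs to SO(3): orthogonal with unit determinant
    assert np.allclose(g.T @ g, np.eye(3))
    assert np.isclose(np.linalg.det(g), 1.0)

# Velocity vector of the path t -> e^{tX} at the identity (t = 0)
eps = 1e-6
velocity = (expm(eps * X) - expm(-eps * X)) / (2 * eps)
print(np.allclose(velocity, X, atol=1e-8))   # True
```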

What we have seen happen with representations of the U(1) group also holds for representations of other groups G, even if they are not unitary.

In this case, $\pi \left( e^{tX} \right)$ is a path in the space of linear transformations of the vector space on which the representation acts; it also passes through the identity transformation when $t=0$, and its velocity vector at that point is
$$ \pi^\prime(X)= \frac{d}{dt} \pi \left( e^{tX} \right) \arrowvert_{t=0} $$
Then we say that $\pi^\prime$ is the representation of the Lie algebra of the group G. Note that $\pi^\prime$ is uniquely determined by the representation of the Lie group $\pi$. Furthermore, it can be easily shown that
$ e^{t\pi^\prime(X)}=\pi \left( e^{tX} \right) $

The fact that the Lie algebra is a vector space over the field of real numbers allows us to define a special representation of the Lie group, called the adjoint representation, in which the element $g$ acts on the Lie algebra operator $X$ by the following operation
$$ (Ad(g))(X)=gXg^{-1} $$
In the case of the group U(1), being abelian, the adjoint representation is trivial. This means that the angular momentum operator $L_z$ does not change when we perform a rotation around the $Z$ axis. However, if the group $G$ is non-abelian, then in general the generators $X$ of the group will change when a transformation belonging to $G$ is performed. If that transformation has been generated by the operator $Y$, then the infinitesimal version of that change, which is also an element of the Lie algebra, is
$ \frac{d}{dt} (e^{tY}Xe^{-tY}) \arrowvert_{t=0}=[Y,X] $
which is equal to the commutator of $Y$ with $X$. Therefore, this commutator measures how $X$ changes infinitesimally under a transformation generated by $Y$.

Therefore, the Lie algebra representation of the group $G$ that corresponds to the adjoint representation $Ad$ of the group is:
$$ ad(Y)(X)=[Y,X] $$
And this is why mathematicians call this commutator the Lie bracket. Only when the two operators commute does the transformation generated by one of them leave the other invariant; moreover, in that case the observables associated with the two operators can take well-defined values simultaneously. But the non-abelian character of the group $G$ means that these commutators are not in general zero. The structure of the group $G$ thus gives rise to a Lie bracket structure on its tangent space at the identity, which is the Lie algebra of the group $G$. And this is why this tangent space is called the Lie algebra: in addition to having a vector space structure, it has an associated Lie bracket.
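The infinitesimal formula above can also be checked numerically. The following sketch (Python, assuming the standard spin-1 angular momentum matrices with $\hbar=1$) verifies that $\frac{d}{dt}\left(e^{tY}Xe^{-tY}\right)\arrowvert_{t=0}=[Y,X]$:

```python
import numpy as np
from scipy.linalg import expm

# Antihermitian generators X = -i L_x and Y = -i L_y for spin 1 (hbar = 1), an assumed example
s = 1 / np.sqrt(2)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]], dtype=complex)
X, Y = -1j * Lx, -1j * Ly

def Ad(t):
    """Adjoint action of e^{tY} on X."""
    g = expm(t * Y)
    return g @ X @ np.linalg.inv(g)

# Infinitesimal adjoint action = Lie bracket
eps = 1e-6
derivative = (Ad(eps) - Ad(-eps)) / (2 * eps)
bracket = Y @ X - X @ Y
print(np.allclose(derivative, bracket, atol=1e-8))   # True
```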

The corresponding formulas for the representation $\pi$ of $G$, which acts on the quantum state space, are:
$ \pi^\prime(gXg^{-1})=\pi(g)\pi^\prime (X) (\pi (g))^{-1} $
$ \pi^\prime ([Y,X])=[ \pi^\prime (Y) , \pi^\prime (X) ] $
Therefore, the representation $\pi^\prime$ inherits the same commutator structure as the Lie algebra it represents.
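As an illustration, here is a hedged check (assuming the usual spin-1/2 and spin-1 matrices, with $\hbar=1$) that two different unitary representations of the same group carry the same Lie bracket structure: both sets of generators satisfy $[J_x,J_y]=iJ_z$ and its cyclic permutations.

```python
import numpy as np

def check_su2(Jx, Jy, Jz):
    """Verify the commutation relations [J_x, J_y] = i J_z (and cyclic permutations)."""
    comm = lambda A, B: A @ B - B @ A
    return (np.allclose(comm(Jx, Jy), 1j * Jz)
            and np.allclose(comm(Jy, Jz), 1j * Jx)
            and np.allclose(comm(Jz, Jx), 1j * Jy))

# Spin-1/2 representation (Pauli matrices divided by 2)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

# Spin-1 representation
r = 1 / np.sqrt(2)
Jx = np.array([[0, r, 0], [r, 0, r], [0, r, 0]], dtype=complex)
Jy = np.array([[0, -1j*r, 0], [1j*r, 0, -1j*r], [0, 1j*r, 0]], dtype=complex)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

print(check_su2(sx, sy, sz), check_su2(Jx, Jy, Jz))   # True True
```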

The Noether theorem in quantum mechanics


The quantum state of the system changes over time, so the probability amplitudes of the different quantum-mechanical events will change, but the probabilities must still add up to one. In addition, the temporal evolution of two mutually exclusive quantum states (orthogonal to each other) must give rise to two states that are also mutually exclusive, and this evolution must be linear, by the principle of superposition of quantum states. The quantum states therefore change according to a certain unitary transformation, whose representation on the state space we call $\hat{U}(t)$. We can understand this unitary transformation as a representation of the additive group of real numbers (the time line) acting on the vector space of the quantum states of the system.

If we call $\psi(t_o)$ the wave function of the quantum system at a given instant $t_o$, the result of applying the evolution operator $\hat{U}(t)$ to the wave function is:
$ \psi(t+t_o)=\hat{U}(t) \psi(t_o) $
On the other hand, the Taylor expansion of the function $\psi(t+t_o)$ around $t=0$ is:
$ \psi(t+t_o)=\sum_{n=0}^\infty \frac{1}{n!}\frac{d^{n}\psi}{dt_o^{n}}(t_o)\,t^n=e^{t\frac{d}{dt_o}}\psi(t_o) $
which coincides with an exponential if we accept that a derivative operator can appear in the exponent. The conclusion is that the evolution operator in quantum mechanics can be written as the exponential of $-\frac{i}{\hbar}$ times a Hermitian operator:
$ \hat{U}(t)=e^{-\frac{i}{\hbar}\hat{H}t} $
where $\hat{H}=i\hbar\frac{d}{dt_o}$ is called the Hamiltonian operator of the system. Therefore, the Hamiltonian operator (multiplied by $-i$) is a vector belonging to the Lie algebra corresponding to a representation of the group of time translations on the vector space of quantum states of the system, that is, the generator of these time translations in that representation. Note that, in the case of time evolution, the sign convention is different from the one we have used for rotations and that is used for the rest of the transformations. With that convention, the generator of time translations is not $H$ but $-H$; that is, the generating vector of the Lie algebra is $X=+iH$, while for rotations it is $X=-iL_z$. The reason for this choice for the Hamiltonian is the signature of space-time in relativity, where the sign of the four-vector component corresponding to the energy is opposite to the sign of the components corresponding to the momentum. This makes the unitary operator that converts the wave function $\psi(t_o)$ into $\psi(t_o-t)$ equal to
$ e^{+\frac{i}{\hbar}\hat{H}t}$
However, the time evolution operator that transforms the wave function $\psi(t_o)$ into $\psi(t_o+t)$ must then be
$\hat{U}(t)=e^{-\frac{i}{\hbar}\hat{H}t}$
In quantum mechanics, the generators of unitary transformations are Hermitian operators that physically represent observables. In this case, the observable that the Hamiltonian represents is the energy. The eigenvalues of this operator are therefore the possible values of the system's energy. If the Hamiltonian of a system is time-independent, then its eigenvectors, which are also eigenvectors of the evolution operator, change under time evolution only by a phase factor
$ e^{-\frac{i}{\hbar}Et}=\cos(Et/\hbar)-i\sin(Et/\hbar),$
where $E$ is the corresponding eigenvalue. The temporal dependence of these states, which are called stationary, is that of a classical harmonic oscillator with angular frequency $\omega=E/\hbar$. As they remain eigenvectors of the Hamiltonian with the same eigenvalue, the system's energy remains the same. It is a conserved quantity.
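A small numerical illustration of this (the diagonal, time-independent Hamiltonian below is just an assumed toy example, with $\hbar=1$): an eigenstate only picks up the phase $e^{-iEt/\hbar}$, and the expectation value of the energy is conserved for any state.

```python
import numpy as np
from scipy.linalg import expm

H = np.diag([1.0, 2.0, 5.0]).astype(complex)     # assumed time-independent Hamiltonian (hbar = 1)

def U(t):
    return expm(-1j * H * t)

# An eigenstate of H only acquires the phase e^{-iEt}
psi0 = np.array([0.0, 1.0, 0.0], dtype=complex)  # eigenstate with E = 2
t = 3.7
print(np.allclose(U(t) @ psi0, np.exp(-1j * 2.0 * t) * psi0))   # True

# For any state, <H> is conserved under time evolution
rng = np.random.default_rng(0)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
E0 = np.vdot(psi, H @ psi).real
Et = np.vdot(U(t) @ psi, H @ (U(t) @ psi)).real
print(np.isclose(E0, Et))                                        # True
```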

Now consider the Lie group $G$ generated by both rotations around the $Z$ axis and temporal evolution of the quantum system. The action of this group on the system's state space is given by a representation in which each rotation angle $\theta$ and each time $t$ elapsed corresponds to a unitary operator
$$ U(\theta, t)=e^{-i\theta L_z-itH}$$
Therefore, in this case the Lie algebra of the group is a real vector space of dimension 2, and is generated, at least in its representation $\pi^\prime$ on the state space, by the anti-Hermitian operators $-iL_z$ and $-iH$. For a general quantum system, rotations around the $Z$ axis do not commute with time translations, so the group $G$ is non-abelian. This is reflected in its Lie algebra in the fact that the generator of rotations and the generator of time translations do not commute, so their corresponding representatives on the system's state space, $L_z$ and $H$, do not commute.
This physically means that for most quantum systems one can imagine, these two observables cannot be well defined simultaneously. If this is the case, when performing a rotation, instead of considering that the quantum state changes, we can consider that the Hamiltonian of the system changes according to the adjoint representation:
$$
H^\prime= e^{-i \theta L_z} H e^{+i \theta L_z}
$$
Thus, if this rotation is infinitesimal:
$$
H^\prime = H -id\theta [L_z, H]
$$
On the other hand, when evolving with time, instead of considering that the system's state changes, we can consider that the angular momentum operator changes in the form:
$$
L_z(t)= e^{itH} L_z e^{-itH}
$$
$L_z(t)$ is called the angular momentum operator in the Heisenberg picture. After an infinitesimal time, this operator changes in the form:
$$
\frac{dL_z}{dt} (t)=-i [L_z,H]
$$
Therefore, the commutator $[L_z,H]$ simultaneously measures two things: how the energy changes when a rotation is performed on the system, and how the angular momentum changes when the system evolves in time. What happens in those physical systems in which the Hamiltonian is invariant under rotations around the $Z$ axis? In that case, since $H$ and $L_z$ commute, the angular momentum operator in the Heisenberg picture does not change with time either. That is, the quantum systems whose Hamiltonians are symmetric under rotations are precisely those in which angular momentum is conserved. We thus obtain the quantum-mechanical version of Noether's theorem.
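Here is a minimal numerical sketch of this statement (Python with assumed toy operators, $\hbar=1$): with a Hamiltonian that commutes with $L_z$, the expectation value of $L_z$ does not change in time, while with a Hamiltonian that breaks the rotational symmetry it generally does.

```python
import numpy as np
from scipy.linalg import expm

Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
r = 1 / np.sqrt(2)
Lx = np.array([[0, r, 0], [r, 0, r], [0, r, 0]], dtype=complex)   # breaks the symmetry about Z

H_sym  = 0.5 * Lz @ Lz             # commutes with Lz: invariant under rotations about Z
H_asym = 0.5 * Lz @ Lz + 0.3 * Lx  # does not commute with Lz

rng = np.random.default_rng(1)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

def expectation_Lz(H, t):
    psi_t = expm(-1j * H * t) @ psi
    return np.vdot(psi_t, Lz @ psi_t).real

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(Lz, H_sym), 0))                                       # True: [Lz, H] = 0
print(np.isclose(expectation_Lz(H_sym, 0.0), expectation_Lz(H_sym, 2.0)))    # True: <Lz> conserved
print(np.allclose(comm(Lz, H_asym), 0))                                      # False
print(np.isclose(expectation_Lz(H_asym, 0.0), expectation_Lz(H_asym, 2.0)))  # False in general
```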
It is evident that what we have done with rotations also holds for the rest of differentiable transformations. If the quantum system has the symmetry of being invariant under all elements of a Lie group $G$, then the unitary operators that represent the action of $G$ on the quantum states commute with the temporal evolution operator.
This makes the generators of these unitary representations, which are Hermitian operators, also commute with the evolution operator and with the Hamiltonian. The physical quantities represented by these Hermitian operators are therefore also conserved quantities. In addition, by Schur's lemma, since the Hamiltonian commutes with the operators that implement the unitary representations of the symmetry group, it has to be proportional to the identity operator within each irreducible representation. This means that all states in the same irreducible representation of the symmetry group are degenerate: they have the same energy. Therefore, when a quantum system has a symmetry, the vector space of all states with the same energy is always a representation of that symmetry group. Moreover, that representation, being unitary, is always a direct sum of irreducible representations of the group $G$. This idea has made it possible to find a surprising connection between the theory of group representations and modular functions, a connection for which, incidentally, string theory has been necessary; string theory is today a fundamental tool not only for every theoretical physicist who wants to stay up to date in their discipline, but also for mathematicians.

The Noether theorem in classical mechanics


In classical mechanics, each state of the system is characterized by a point of coordinates $(q,p)$ in the phase space. The phase space can be very complicated, even singular, and studying how the transformations made to the physical system act on this space is much more complicated than in quantum mechanics, where the state space is linear.
Therefore, the easiest way to prove Noether's theorem in classical mechanics is to try to emulate what is done in quantum mechanics, linearizing the problem in some way. The technique to apply here is standard in modern mathematics: if we work not with a space itself but with the functions on that space, transformations of that space become much more tractable.
In classical physics, any function $f(q,p)$ represents a specific observable. If the Hamiltonian of the system were equal to the observable $f(q,p)$, then the state of the system would evolve over time according to the Hamilton equations
$$\dot{q}=\frac{\partial f}{\partial p}$$
$$\dot{p}=-\frac{\partial f}{\partial q}$$
which can be written in terms of the Poisson bracket as
$$\dot{g}=\{ g,f \}$$
where
$$\{ g,f \}= \frac{\partial g}{\partial q} \frac{\partial f}{\partial p} -  \frac{\partial g}{\partial p} \frac{\partial f}{\partial q} $$
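For readers who prefer to see the bracket in action, here is a short symbolic sketch (Python with SymPy; the harmonic-oscillator Hamiltonian is just an assumed example) computing $\{q,p\}=1$, the Hamilton equations $\dot q=\{q,H\}$ and $\dot p=\{p,H\}$, and the antisymmetry of the bracket:

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)
m, k = sp.symbols('m k', positive=True)

def poisson(g, f):
    """Poisson bracket {g, f} for a phase space with one degree of freedom."""
    return sp.diff(g, q) * sp.diff(f, p) - sp.diff(g, p) * sp.diff(f, q)

# Canonical bracket
print(poisson(q, p))            # 1

# Harmonic-oscillator Hamiltonian as an example of f generating time evolution
H = p**2 / (2*m) + k*q**2 / 2
print(poisson(q, H))            # p/m    (this is qdot)
print(poisson(p, H))            # -k*q   (this is pdot)

# Antisymmetry of the bracket
g = q**3 * p
print(sp.simplify(poisson(g, H) + poisson(H, g)))   # 0
```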

When the observable $f(q,p)$ is not the Hamiltonian of the system, we can interpret these equations as the equations that tell us how the classical state of the physical system changes when we perform a transformation generated by the observable $f(q,p)$ on it.

For example, the observable $p$ generates the transformation

$\frac{dq}{da}=\{ q,p \}=1 \Rightarrow q(a) = q(0) + a$

which is nothing but a translation in the $q$ coordinate. Physically, therefore, the Poisson bracket $\{ g,f \}$ tells us how the observable $g$ changes in classical mechanics when we perform on the physical system the transformation generated by the observable $f$. Mathematically, it is said that the observable $f$ acts as the moment map of a vector field in the phase space. This is nothing but a pedantic way of saying that to each function $f(q,p)$ corresponds a vector field $X_f$ which associates to each function $g(q,p)$ another function

$$X_f(g)=\{ g,f \}$$

This vector field gives us the velocity vector (careful, "velocity" in the phase space) associated with the trajectories in the phase space that give us how the physical system evolves when applying the transformation generated by $f$. This transformation is what physicists know as "canonical transformation", since it preserves the Poisson brackets.

As the Poisson bracket is antisymmetric, bilinear, and also satisfies the Jacobi identity, mathematically, it is said that the Poisson bracket is the Lie bracket that assigns the Lie algebra structure to the space of functions in the phase space. In fact, this Lie algebra, which has infinite dimension, was historically the first Lie algebra that was studied, although it is clearly more complicated than the Lie algebras we have worked with before in quantum mechanics, since in quantum mechanics we have the linearity of the Hilbert space, but not here.

The map that associates to each $f$ the vector field $-X_f$ is a homomorphism between the Lie algebra of functions on phase space and the Lie algebra of vector fields on phase space. But note that this homomorphism is not injective, since adding a constant to $f$ does not change the corresponding $X_f$. Nor is it surjective, because not all vector fields on phase space can be written in the form

$$X_f(g)=\{ g,f \}$$

The physical requirement that the Poisson bracket act as a derivative forces this Lie bracket to have an extra property: it must obey the Leibniz product rule. As a consequence, at least for polynomial functions on phase space, the Poisson bracket is determined solely by its values on linear functions, since the bracket of any other pair of functions reduces to brackets of linear functions by repeated application of the Leibniz rule. These values of the Poisson brackets of the linear functions define a symplectic form on the dual phase space (the space spanned by the coordinates $q$ and $p$)

$$\Omega (g,f) = \{ g,f \}$$

Note that $\Omega (g,f)$ is an antisymmetric and nondegenerate bilinear map that endows the dual phase space with the structure of a symplectic space. And this is why mathematicians call canonical transformations symplectomorphisms. By preserving the Poisson brackets, they also preserve the symplectic form defined from them.

Once we have all the machinery of Lie algebras in place, we see that the same thing happens in classical mechanics as in quantum mechanics, since the vector field
$$ X_H(g)=\{ g,H \} $$
gives us the velocity vector associated with the trajectories in the phase space that tell us how the physical system evolves when we apply the transformation generated by the Hamiltonian $H$, that is, it tells us how the physical quantity $g$ evolves with time. But at the same time, this same quantity also coincides, due to the antisymmetry of the Poisson bracket, with how the Hamiltonian $H$ changes as we change the system according to the transformation generated by $g$.
$$ -X_g(H)=\{ g,H \} $$
If the Poisson bracket between $H$ and $g$ is zero, it happens at the same time that the transformation generated by $g$ leaves the Hamiltonian invariant and that $g$ does not change with time. This is the content of the original theorem proved by Noether, although she did not do it this way, and which we now understand as a particular case of the corresponding theorem in quantum mechanics applied in the limit where the system can be approximated as classical. Recall that in this limit, the relationship between Poisson brackets and commutators is:
$$ \widehat{\{f,g\}}=-\frac{i}{\hbar}[\hat{f},\hat{g}] $$
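The classical statement can also be checked symbolically. Here is a SymPy sketch (the planar Hamiltonian with a potential depending only on $x^2+y^2$ is an assumed example): the bracket $\{L_z,H\}$ vanishes, so $L_z$ both generates a symmetry of $H$ and is conserved, while a potential that breaks the rotational symmetry gives a nonzero bracket.

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y', real=True)
m = sp.symbols('m', positive=True)
V = sp.Function('V')   # arbitrary potential profile

def poisson(g, f):
    """Poisson bracket for two degrees of freedom, (x, p_x) and (y, p_y)."""
    return (sp.diff(g, x)*sp.diff(f, px) - sp.diff(g, px)*sp.diff(f, x)
            + sp.diff(g, y)*sp.diff(f, py) - sp.diff(g, py)*sp.diff(f, y))

H = (px**2 + py**2) / (2*m) + V(x**2 + y**2)   # invariant under rotations about the Z axis
Lz = x*py - y*px                               # generator of those rotations

print(sp.simplify(poisson(Lz, H)))   # 0: Lz generates a symmetry of H and is conserved

# A Hamiltonian that is not rotationally invariant does not conserve Lz
H_broken = (px**2 + py**2) / (2*m) + V(x**2)
print(sp.simplify(poisson(Lz, H_broken)))      # nonzero in general
```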

Conclusion


In quantum mechanics, the possible states of the system, which give us maximal knowledge about it (although not complete knowledge in the classical sense, because of the uncertainty principle), form a Hilbert space, which is a linear vector space. This linearity makes transformations of the system much easier to handle than in classical mechanics, where the states of maximal knowledge, which are also states of complete knowledge, are points of a phase space that can be very complicated and even singular. The phase space is subject to far fewer restrictions than the Hilbert space of quantum states. Transformation groups are, for this reason, much easier to handle in quantum mechanics than in classical mechanics.

The original article by Noether, where she proves the relationship between differentiable symmetries of a system and conservation laws in classical mechanics, is difficult to read. But in quantum mechanics it is trivial to see this relationship between conservation laws and symmetries. It comes from the formula $[H,g]=0$, which, since the commutator is antisymmetric, can be understood in two ways. On the one hand, it can be read as the equation that gives the time derivative, in the Heisenberg picture, of the operator $g$. Its being zero means that $g$ is a conserved quantity.

On the other hand, that formula can be understood as the infinitesimal change of the Hamiltonian under the transformations generated by $g$. When it is zero, we can say that $g$ generates a symmetry. That is why, whenever there is a symmetry, its generator $g$ is conserved over time, and whenever an observable is conserved over time, that observable generates a symmetry.

In the classical limit, this proof still holds, but the commutators are replaced by Poisson brackets, which are more complicated structures. Quantum mechanics makes the proof of Noether's theorem more direct and simple. The linearity of the Hilbert space of quantum states makes everything much more elegant. The greater beauty of quantum mechanics compared to classical mechanics is also evident in Noether's theorem.

About the author: Sergio Montañez Naz has a PhD in physics and is a public high school teacher in the Community of Madrid.

This post is a translation of the post Por qué a toda simetría continua le corresponde una cantidad conservada using ChatGPT. Some proofreading is still needed, but most sentences are accurate.

