Table of Contents:
- a. A Theory Takes Form (and the Gluon is Introduced)
- b. Solving the Problems
- a. The Tale of the SSC and the LHC
- b. The Discovery of the Higgs Boson
- a. Introduction
- b. Fermions: Quarks and Leptons
- c. Bosons
Up until very recently, news out of the European Organization for Nuclear Research (CERN) regarding the progress of the new Large Hadron Collider (LHC) had been slow in coming, and nary a major discovery had been announced. On July 4th, though, all of that changed, as on that day CERN announced the discovery of nothing less than the Higgs boson, the ‘God particle’.
The potential discovery of the Higgs boson had been one of the principal reasons why physicists were so excited about the LHC; and therefore, within the scientific community the announcement was cause for a major celebration indeed. For most of the general public, however, while the announcement was certainly intriguing, there were many basic questions yet to be answered: Just what was the Higgs boson, and why had it been labeled the God particle? Why were physicists expecting to find it, and what did the discovery really mean? Adequately answering these questions was more than journalists could do in their compressed news segments and newspaper articles–and, besides, it was a task that many journalists were not up to regardless.
Jim Baggott’s new book Higgs: The Invention and Discovery of the ‘God Particle’ is meant to remedy this situation and provide the necessary context that the general public needs in order to understand the discovery of the Higgs boson and what it all means.
Baggott first takes us through the history of the development of the Standard Model of particle physics (of which the Higgs boson is a part). He begins with the discovery that atoms are made up of the still more elementary particles of electrons, protons and neutrons. He then takes us through the discovery of the still more fundamental particles of quarks, leptons and bosons, and the 4 fundamental forces that govern these particles: gravity, the electromagnetic force, the weak nuclear force, and the strong nuclear force.
At every step of the way, Baggott is sure to explain what difficulties confronted the understanding of particle physics that was current at the time, what theoretical models were developed to overcome these difficulties, and the empirical evidence that was used to establish which theoretical model won the day. For instance–and of crucial importance here–after learning of the 3 types of elementary particles and the 4 basic forces, we learn that there was a problem with the then-current theory regarding the masses of the elementary particles, in that the 4 forces alone were simply unable to account for them. In order to overcome this difficulty, some physicists postulated that there must be an energy field pervading all of space, since such a field appeared to be the only appealing way to solve the mass mystery. This field was called the Higgs field.
The problem was that there was as yet no empirical evidence that the Higgs field actually exists. What physicists did think, though, was that if it did exist, it would imply the existence of a certain type of boson particle, dubbed the Higgs boson. What this meant is that if physicists could find the Higgs boson, they would have empirical evidence that the Higgs field does in fact exist, and the problem regarding the masses of elementary particles would be adequately solved. On July 4th, it was the discovery of this very particle that was announced, and Baggott takes us behind the scenes at the LHC to explain just what went into the discovery.
While the discovery of the Higgs boson solved one major problem with the Standard Model, there are a few others that have yet to be solved—including the hierarchy problem, and the problem of unifying the fundamental forces into a single theory—and Baggott does touch on these issues as well. It is hoped that further work at the LHC may eventually help to resolve some of these problems.
What follows is a comprehensive summary of Jim Baggott’s Higgs: The Invention and Discovery of the ‘God Particle’
The systematic attempt to discern the basic building blocks of matter and the fundamental forces that govern this matter goes back (at least) to the ancient Greeks. One such Greek, the 5th-century BC philosopher Empedocles, theorized that all matter is made up of 4 basic elements: earth, air, fire and water—and that these elements are governed by 2 forces: love and strife. As Baggott explains, “the elements were judged to be eternal and indestructible, joined together in rather romantic combinations through the attractive force of Love and split apart through the repulsive force of Strife, to make up everything in the world” (loc. 213).
A contemporary of Empedocles, named Leucippus, developed a competing theory which had it that all matter is made up of small indestructible particles called atoms (literally ‘not-dividable’), which scatter randomly through empty space (the void), at times coming together and hooking on to one another to form perceptible objects, and at other times breaking away from one another, resulting in the destruction of these objects (loc. 219).
Just like the physicists of today, the Ancient Greeks used their powers of observation, intuition and reasoning to try and tease out the true nature of reality. However, one very important factor separates the approach used by the Ancient Greeks and that used by the scientists of today, and that is the latter’s reliance on experiments meant to rigorously test the theories that come up—otherwise known as the scientific method. As Baggott explains, “it was not until the development of a formal experimental philosophy in the early seventeenth century that it became possible to transcend the kind of speculative thinking that had characterized the theories of the Ancient Greeks. The old philosophy had tried to intuit the nature of material substance from observations contaminated with prejudices about how the world ought to be. The new scientists now tinkered with nature itself, teasing out evidence about how the world really is” (loc. 234).
Using the new experimental philosophy, scientists quickly began making exciting new discoveries about how the world really is. For example, it was discovered that matter and force are connected via the mass of matter. So, for instance, an object’s mass determines how it will react when met with a particular force, in that “a small object will accelerate much faster than a large one when kicked with the same force” (loc. 237). Similarly, it was discovered that an object’s mass determines its ability to generate a gravitational pull on another object. So, for instance, “the force of gravity generated by the moon is weaker than the force generated by the earth, because the moon is smaller and so possesses a smaller gravitational mass” (loc. 240). Interestingly, while contact force is very different from gravity, it was discovered that inertial mass and gravitational mass are actually identical (loc. 240). This suggested that inertia and gravity may be connected in some fundamental way.
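The inverse relationship between mass and acceleration described above is just Newton’s second law, and can be sketched with a quick calculation (the force and mass values here are arbitrary illustrative numbers, not figures from the book):

```python
# Newton's second law: a = F / m, with arbitrary illustrative values
force = 10.0                        # newtons
small_mass, large_mass = 1.0, 5.0   # kilograms

accel_small = force / small_mass    # the small object's acceleration
accel_large = force / large_mass    # the large object's acceleration

# the smaller object accelerates five times faster under the same "kick"
assert accel_small == 5 * accel_large
```

The same force applied to a fifth of the mass yields five times the acceleration, which is precisely the sense in which "a small object will accelerate much faster than a large one when kicked with the same force."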
Beyond the relationship between matter and force, discoveries were also made regarding the ultimate make-up of matter. To begin with, substances were found to be made up of molecules, which were themselves found to be made up of atoms of pure elements: “the fundamental Greek ‘element’ water was found to consist not of geometrical solids… but of molecules composed of atoms of the chemical elements hydrogen and oxygen, in a combination we write today as H2O” (loc. 244).
From here it was found that atoms are composed of still more elementary particles, as the electron was discovered by the physicist Joseph John Thomson in 1897 (loc. 247). Thomson’s discovery was followed up by a closer look at the composition of the atom, which revealed that it appears to be mostly empty space, and consists of a “positively charged nucleus, around which the negatively charged electrons orbit much like planets orbit the sun” (loc. 250).
This ‘planetary’ model of the atom is still the one that pops up most frequently.
In fact, this is probably the image of the atom that you remember from high school. As pervasive as it remains, however, it became clear almost immediately that this model is somewhat inaccurate. This proves to be the case because, as Baggott explains, “unlike planets moving around the sun, electrically charged particles moving in an electric field radiate energy in the form of electromagnetic waves. Such planetary electrons would exhaust their energy within a fraction of a second, and the internal architecture of the atom would then collapse” (loc. 254). In other words, if the atom truly looked like a planetary system it would implode immediately and cease to exist. So much for the planetary model of the atom.
Still, the model itself remains prevalent for a very simple reason: it represents the last time our understanding of the atom could be called anything near intuitive, and/or easily visualizable. Indeed, the solution to the atomic problem involves quantum mechanics, and to say that this theory represents an all-out assault on common sense would not be much of an exaggeration. To begin with, unlike the planetary model, which has it that the electron is a particle, it was later discovered (under quantum mechanics) that the electron could be explained much better if it were understood to be simultaneously a particle and a wave. As Baggott explains, “the electron is not just a particle—which we might visualize as a tiny ball of negatively charged matter—it is simultaneously both wave and particle. It is not ‘here’ or ‘there’, as might be expected of a localized bit of stuff, but literally ‘everywhere’ within the confines of its ghostly, delocalized wavefunction. Electrons do not orbit the nucleus as such. Instead their wavefunctions form characteristic three-dimensional patterns—which we call ‘orbitals’—in the space around the nucleus. The mathematical form of each orbital relates the probability of finding the now wholly mysterious electron at specific locations—‘here’ or ‘there’—inside the atom” (loc. 260). As the quote makes clear, the position of an electron can only be known probabilistically, and this is not a matter of a deficiency in our measuring tools; rather, it is an inherent feature of the mathematics used to describe the electron’s behavior.
Now, it is difficult enough (and maybe impossible) to visualize a single thing that takes two forms at once, but when you add in the fact that the location of that thing in one of its forms cannot be identified for certain, but is strictly a matter of probability, then good luck drawing that up in your imagination! Efforts to depict this set-up end up looking something like the following:
*The various images represent the same atom (the hydrogen atom) at different energy levels: as the energy level changes, the wavefunctions of the electron in the atom change. The shading of the wavefunctions represents the probability of finding the electron in that area of the wave: the brighter the area, the greater the likelihood that the electron is there.
In any event, while quantum mechanics is certainly counter-intuitive enough, it proved to be a very powerful tool in helping explain the organization and behavior of atoms. For instance, when the physicist Paul Dirac combined the mathematics of quantum mechanics with that of Einstein’s special theory of relativity to help explain the behavior of electrons, he found that the resulting hybrid implied that electrons should come in two distinct orientations, known as spin-up and spin-down–a feature that explained how electrons produce a magnetic field (loc. 289). In speaking of these spin orientations, Baggott explains that “these are not orientations along specific directions in conventional, three-dimensional space, but orientations in a ‘spin-space’ which has only two dimensions—up or down” (loc. 274). In fact, the property of spin had already been observed by experimentalists (loc. 265), so the fact that quantum mechanics predicted its existence gave the theory a major boost in terms of credibility.
By 1932, scientists were able to attain experimental evidence that the positively charged nucleus of an atom actually consists of not just a single particle, but two: the positively charged proton, and the neutral neutron (loc. 295). The story of matter now went like this: “all the material substance in the world is made of chemical elements. These elements come in a great variety of forms which make up the periodic table, from the lightest, hydrogen, to the heaviest-known, naturally occurring element, uranium. Each element consists of atoms. Each atom consists of a nucleus composed of varying numbers of positively charged protons and electrically neutral neutrons. Each element is characterized by the number of protons in the nuclei of its atoms. Hydrogen has one, helium two, lithium three, and so on, to uranium, which has 92. Surrounding the nucleus are negatively charged electrons, in numbers which balance the numbers of protons, so that overall the atom is electrically neutral. Each electron can take either a spin-up or spin-down orientation and each orbital can accommodate two electrons provided their spins are paired” (loc. 306). The model could also explain isotopes, since these were understood to be atoms that had picked up some extra neutrons in their nuclei (loc. 309).
Under this model, mass is inherent in the elementary particles themselves, and the mass of any object can be arrived at by way of adding up the masses of its elementary particles (primarily its protons and neutrons, “which account for about 99 per cent of the mass of every atom” [loc. 309]). Of course, as we have already seen, Einstein’s theory of special relativity was already on the scene by this point, and one of the key equations of this theory–the famous E=mc²–had it that mass is fully interchangeable with energy; and that therefore, mass is but another form of energy. As we shall soon see, the full implications of this truth would eventually find their way into the physics leading up to the discovery of our Higgs boson (loc. 336).
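The interchangeability of mass and energy can be made concrete with a back-of-the-envelope calculation of the rest energy locked up in a single proton (the constants below are rounded standard values, not figures from the book):

```python
# E = m * c^2: rest energy of a proton, using rounded physical constants
c = 2.998e8             # speed of light, in m/s
m_proton = 1.6726e-27   # proton rest mass, in kg
J_PER_MEV = 1.602e-13   # joules per mega-electronvolt

energy_joules = m_proton * c**2          # Einstein's mass-energy relation
energy_mev = energy_joules / J_PER_MEV   # ~938 MeV, the proton's rest energy
```

That ~938 MeV figure is the energy scale particle physicists work in, which is why particle masses are conventionally quoted in electronvolts rather than kilograms.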
While scientists had yet to see that they would eventually encounter problems with their understanding of mass as being inherent in elementary particles, they did see that there were problems with their understanding of the elementary particles themselves. Indeed, it had already been observed that the isotopes of certain types of elements do not maintain their form, but rather naturally transmute into other types of atoms over time. This process is known as radioactive decay. There are different kinds of radioactivity, and one, called beta-radioactivity, “involves the transformation of a neutron in a nucleus into a proton, accompanied by the ejection of a high-speed electron (a ‘beta-particle’)” (loc. 342). This is a particularly interesting process because it is essentially a kind of natural alchemy. Indeed, as Baggott explains, “changing the number of protons in the nucleus [of an atom] necessarily changes its chemical identity” (loc. 342). Of still greater importance for our purposes, though, is that the phenomenon of beta-radioactivity implies that neutrons are composites (loc. 346). In other words, it was clear that neutrons are not in fact elementary particles at all, and that therefore, there was still some work to be done in order to discover just what neutrons are made of, and what the ultimate elementary particles are (loc. 346).
On the force front, though, even more work remained to be done. To begin with, it was now clear that there were four fundamental forces holding matter together: gravity, the electromagnetic force, the strong nuclear force, and the weak nuclear force. Gravity was needed to account for the attraction between bodies at long distances, while the other 3 forces were needed to explain the various interactions occurring within the atom itself (loc. 350). Specifically, the electromagnetic force was needed in order to account for the interactions between charged particles within the atom (loc. 353). The strong nuclear force was needed in order to account for the attraction between protons and neutrons in the nucleus of the atom (loc. 365). Finally, the weak nuclear force was needed in order to account for the behavior of particles as they decay in the radioactive process (loc. 365).
Einstein had, by then, shed much light on the force of gravity when he was able to extend his special theory of relativity (which understands space and time as a unified entity known as space-time) to the phenomenon of gravity, with his general theory of relativity. Specifically, the general theory of relativity has it that gravity is actually an effect of the curvature of the space-time continuum. The electromagnetic force, too, was fairly well understood at the time, thanks to “the pioneering work of nineteenth century physicists which, among many notable achievements, laid the foundations for the power industry” (loc. 354). Still, the understanding of the electromagnetic force was divvied up between a myriad of narrow laws (loc. 470-81), and the phenomenon was badly in need of a single law to help straighten it all out. As for the two nuclear forces, these were still very new and enigmatic.
Given that quantum mechanics had already been used so successfully to help explain the behavior of electrons, this was the logical field for physicists to turn to in order to flesh out a fully unified theory of electromagnetism (loc. 590). Essentially, what physicists were after was “a quantum version of… equations that conformed to Einstein’s special theory of relativity” (loc. 590). While arriving at such a theory was not without considerable difficulties (and was interrupted by the Second World War [loc. 612-18]), the combined efforts of Werner Heisenberg, Wolfgang Pauli, Hendrik Kramers, Hans Bethe, Julian Schwinger, Richard Feynman, Sin-Itiro Tomonaga and Freeman Dyson eventually paid off, and a fully relativistic theory of Quantum Electrodynamics (QED) was finally formulated in 1948 (loc. 663-70).
QED postulates that the interactions between charged particles can be explained in terms of the activity of force particles (none other than photons), which are responsible for the electromagnetic field: “for example, as two electrons approach each other, they exchange a force particle [a photon] which causes them to be repelled” (loc. 359).
QED proved to be an extremely satisfying theory, according with experimental measurements to a remarkable degree. For instance, the g-factor of the electron is “a physical constant which reflects the strength of the interaction of an electron with a magnetic field” (loc. 629), and had been measured experimentally to have a value of 2.00231930482 (loc. 669). Meanwhile, as Baggott notes, “the g-factor for the electron is predicted by QED to have the value 2.00231930476” (loc. 669). As Richard Feynman put it, “‘it comes out like this: If you were to measure the distance from Los Angeles to New York to this accuracy, it would be exact to the thickness of a human hair’” (loc. 673).
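Feynman’s comparison can be checked directly from the two numbers quoted above (the Los Angeles–New York distance below is an approximate figure assumed for the illustration, not from the book):

```python
measured  = 2.00231930482   # experimentally measured g-factor (loc. 669)
predicted = 2.00231930476   # value predicted by QED (loc. 669)

# fractional disagreement between experiment and theory: roughly 3 parts in 10^11
relative_error = abs(measured - predicted) / measured

la_to_ny_m = 3.94e6   # approx. Los Angeles-New York distance in metres (assumption)
discrepancy_m = relative_error * la_to_ny_m   # ~0.1 mm: about a hair's width
```

An error of roughly a tenth of a millimetre over nearly four thousand kilometres is indeed about the width of a human hair, which is what makes QED one of the most precisely verified theories in all of science.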
The success of quantum mechanics was now firmly established, and physicists eagerly turned to it to try and explain the strong nuclear force. As Baggott explains, “it now seemed that the correct way to describe a fundamental particle and its interactions was in terms of a quantum field theory in which the force involved is carried by field particles” (loc. 676). Two of the physicists who led the way in the attempt to apply quantum mechanics to crack the strong nuclear force were Chen Ning Yang and Robert Mills (loc. 729).
Now, when quantum mechanics is applied to the strong force between protons and neutrons in the nucleus of an atom, it understands these particles as being essentially the same particle, but with two separate orientations (just like spin-up and spin-down electrons are the same particle with two different orientations) (loc. 717-21). This orientation of a proton/neutron is called isospin (loc. 721). Both protons and neutrons are able to reverse their isospin orientation, and, by doing so, turn into the other (loc. 733).
The fact that this process occurs makes the interaction between protons and neutrons more complex than the interactions involved in the electromagnetic force (loc. 737). Accordingly, the mathematics needed to accommodate this added complexity must itself be more complex. Specifically, in order to crack the electromagnetic force, physicists had turned to the mathematics from a simple symmetry group known as U(1). When it comes to the strong nuclear force, however, as Baggott explains, “the simple symmetry group U(1) is insufficient for this kind of complexity, and Yang and Mills reached for the symmetry group SU(2), the special unitary group of transformations of two complex variables. A larger symmetry group is needed simply because there are now two objects that can transform into each other” (loc. 737).
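To make the SU(2) talk slightly more concrete, here is a sketch (assuming NumPy) of a finite SU(2) transformation acting on the two-dimensional isospin space: it rotates a pure ‘proton’ state into a mixture of proton and neutron, exactly the kind of transformation between two objects that the larger symmetry group is needed to accommodate. The angle is arbitrary and the labels are illustrative:

```python
import numpy as np

# Pauli matrix sigma_y: one of the three generators of SU(2)
sigma_y = np.array([[0, -1j], [1j, 0]])

theta = 0.7  # arbitrary rotation angle in isospin space
# a finite SU(2) element built from the generator
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_y

# SU(2) elements are unitary with unit determinant
assert np.allclose(U @ U.conj().T, np.eye(2))
assert np.isclose(np.linalg.det(U), 1.0)

proton = np.array([1.0, 0.0])   # isospin 'up' state
rotated = U @ proton            # now a superposition of proton and neutron
```

The rotated state has a non-zero component along both basis directions, illustrating how the SU(2) structure lets the two nucleons transform into one another, something the one-dimensional U(1) group of electromagnetism has no room for.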
One of the implications of this added complexity is that 3 field particles are needed in order to carry the strong nuclear force (as opposed to the single field particle [the photon] involved in the electromagnetic force). As Baggott explains, “two of the three field particles were required to carry electric charge, accounting for the change in charge resulting from proton-neutron and neutron-proton interactions. Yang and Mills referred to these particles as B+ and B-. The third particle was neutral, like the photon, and was meant to account for proton-proton and neutron-neutron interactions in which there is no change in charge. This was referred to as B0” (loc. 744). Adding even more complexity to the issue is that these field particles were found to interact with one another (whereas photons do not interact with one another) (loc. 744).
Now, by using the mathematics from the SU(2) symmetry group mentioned above, Yang and Mills were able to come up with an initial quantum theory of the strong nuclear force. However, given the added complexities involved with the strong force, it is perhaps no surprise that there were certain problems with the math at first. For starters, some of the equations produced infinite terms that simply didn’t make sense (loc. 748). Actually, initial attempts at applying quantum mechanics to the electromagnetic force had produced similar infinite values, but in that case physicists were able to solve the infinities (through a process called renormalization) (loc. 607, 646, 653-69). Here, similar attempts to renormalize the infinities simply didn’t work (loc. 748).
There was yet another major problem with the mathematics of the SU(2) theory, however: the math implied that the field particles involved in the strong force should be massless, like the photon (loc. 748). As Baggott explains, though, this really didn’t add up: “Heisenberg and Japanese physicist Hideki Yukawa had suggested in 1935 that the field particles of short-range forces like the strong force should be ‘heavy’, i.e. they should be large, massive particles. Massless field particles for the strong force made no sense whatsoever” (loc. 750).
The Yang-Mills theory had looked promising, but ultimately the problems it encountered could not be solved at the time. As a result, physicists now turned their attention to the weak nuclear force.
While initial efforts to apply quantum mechanics to the strong nuclear force had run into problems, physicists recognized that the theory still held enormous potential, and it remained the best candidate to deal with the weak force. As mentioned above, when Yang and Mills applied quantum mechanics to the problem of the strong nuclear force, they came to the conclusion that they needed 3 field particles in order to account for it. Now, when Julian Schwinger confronted the problem of the weak nuclear force, he believed that 3 particles would also be needed here (loc. 850). As Baggott explains, in 1957 Schwinger “published an article in which he speculated that the weak force is carried by three field particles. Two of these particles, the W+ and W- (in modern parlance) are necessary to account for the transmission of electric charge in weak interactions. A third, neutral particle is needed to account for instances in which no charge is transferred” (loc. 850).
Actually, there was already evidence that there was in fact a strange connection between the weak nuclear force and electromagnetism, and Schwinger postulated that the neutral carrier of the weak force may in fact be none other than the photon of electromagnetism (loc. 850). When Schwinger ran through the math, though (or, rather, when Schwinger had his grad student Sheldon Glashow run through the math), Glashow found that the numbers simply didn’t work (loc. 871). Given that this was the case, Glashow switched strategies and began instead by “combining the Yang-Mills SU(2) gauge field with the U(1) gauge field of electromagnetism, in a product written SU(2) X U(1)” (loc. 871). This new theory covered both the weak nuclear force and the electromagnetic force; and, though falling short of being a fully unified electro-weak theory, did represent “a ‘mixture’ of weak and electromagnetic forces” (loc. 869).
The new theory preserved the three force carriers first postulated by Schwinger, but replaced the photon with a new neutral carrier that Glashow called Z0. Accordingly, “Glashow now had three massive weak-force particles equivalent to the triplet of B particles first introduced by Yang and Mills. These were the W+, W-, and Z0” (loc. 874).
Unfortunately, though, Glashow ran into some of the same problems that Yang and Mills had. As Baggott explains, “just as Yang and Mills had discovered, the SU(2) X U(1) field theory predicted that the carriers of the weak force should be massless, like the photon” (loc. 881). What’s more, any attempts to smuggle in the particles’ measured masses resulted in messy infinite values that made the entire theory nonsensical (loc. 881). Glashow, like Yang and Mills before him, simply “could not figure out how the field particles were supposed to acquire their mass” (loc. 881).
While efforts to understand the weak and strong nuclear forces had run into a wall, physicists were having much better luck finding new matter particles. As mentioned above, the neutron was only discovered in 1932, and by this time it was already strongly suspected that it was not in fact an elementary particle, as its behavior in beta-radioactivity implied that it was made up of still more fundamental bits.
Following the discovery of the neutron, a flood of other particles were also discovered. This was made possible by the fact that “cosmic rays—streams of high-energy particles from outer space—wash constantly over the upper atmosphere” (loc. 800), and, by this time, physicists had discovered that if they were up high enough, such as on the tops of certain mountains, they could detect some of these cosmic rays and their high-energy collisions (loc. 800).
While the discovery that new particles could be found simply by sending particle detectors up to the tops of mountains was certainly a great boon, the method did have a number of disadvantages. As Baggott explains, “such studies rely on chance detection of the particles and, because of their randomness, no two events ever have quite the same conditions” (loc. 800). Better than this by far is if you can create these high-energy particle collisions yourself in controlled conditions. And soon enough, physicists had developed the technology to do just this.
The great age of particle accelerators began in the late 1920s (loc. 1500). Essentially, the first particle accelerators worked by way of speeding up electrons and protons “by passing them through a linear sequence of oscillating electric fields” (loc. 1500), and then smashing them up against stationary objects to see what they’d get (loc. 1500). Gradually, physicists found new and better ways to speed up particles, thus leading to particle collisions of ever higher energy levels.
Now, intuitively, we tend to think that smashing up particles at higher and higher speeds would yield smaller and smaller bits, until finally, when you reach speeds high enough, you would break a particle into all of the elementary particles there are. While some particles react this way to certain high-speed collisions (loc. 1548), this is not the end of the story. Rather, high speed collisions can also excite particles to higher levels of energy, which then causes them to give off other particles, though they do not break down themselves (loc. 1545).
Now, elementary particles come in an array of different masses, and the heavier ones naturally decay into the smaller ones at reduced energy levels. In fact, the heavier particles only form where there are very high energy levels, such as when certain particles are smashed up at very high speeds (either in cosmic rays, or in particle accelerators and colliders), or where temperatures are very high (such as immediately after the big bang). In the case of particle accelerators and colliders, the heavier particles that are produced immediately decay back down to lighter ones, and/or combine with other elementary particles to produce composites. (This helps explain why the vast majority of the matter that we see around us is made up of just 3 of the lightest elementary particles: up-quarks and down-quarks [which form protons and neutrons], and electrons—more on this below).
In any event, using both the mountain-top method, and the particle accelerator method, physicists were now able to discover a whole plethora of new matter particles—some of them elementary, and some of them not. For instance, physicists began discovering leptons (including the neutrino [loc. 349], and the muon [loc. 817]), and antiparticles (including the positron [loc. 803]), both of which are elementary; but they were also finding baryons and mesons (including pions [loc. 820], kaons [loc. 820], sigma particles [loc. 903] and the lambda particle [loc. 820]), which are composites. Of course, physicists at the time were not aware that the baryons and mesons that they were finding were composites, and this only confused the search for elementary particles even more. (Physicists now know that baryons and mesons are in fact composites [known as hadrons], as will be explained in further detail below.)
*I have added an appendix to this article that explains the elementary particles that the Standard Model postulates, and how these particles relate to one another. You may wish to consult this appendix now in order to make more sense of the paragraph above. The information contained in the appendix also helps greatly with understanding the remainder of the article, for it offers a fully fleshed out understanding of what will now be revealed bit by bit.
Nevertheless, as physicists began trying to classify the new zoo of particles, it became clear that the baryons and mesons that they had found were made up of still more elementary bits (loc. 1054), and theories that attempted to explain just what these bits are began to pop up. One of these theories, forwarded by the physicist Robert Serber, held that hadrons (which include both baryons and mesons) are made up of 3 elementary particles and their antiparticles (loc. 1060): “in this model, each member of the baryon octet would be formed from combinations of the three new particles, and the meson octet from combinations of the fundamental particles and their anti-particles” (loc. 1060).
The theory carried some very quirky implications, though. For one, it implied that these new elementary particles had fractional electric charges—a bizarre concept. As the physicist Murray Gell-Mann commented at the time, the theory “mean[t] that the particles would have to have fractional electric charges -1/3, + 2/3, like so—in order to add up to a proton or neutron with a charge of plus one or zero.’… an appalling result” (loc. 1067). Appalling indeed: a fractionally charged particle had never once been observed, and there was absolutely no empirical evidence to suggest that such particles might exist (loc. 1075). As Baggott explains, “at no time in the 54 years that had elapsed since the notion of a fundamental unit of charge had been established had there been even the merest hint that there might exist particles with charge less than this” (loc. 1075). Given the quirky nature of these theoretical particles, Gell-Mann initially referred to them as ‘quorks,’ “a nonsense word deliberately chosen to highlight the absurdity of the suggestion” (loc. 1075).
Nevertheless, the fact that fractional charges had never been seen before did have a potential explanation. Gell-Mann himself reasoned that “if the ‘quorks’ were forever trapped or confined inside the larger hadrons then this might explain why fractionally charged particles had never been seen in experiments” (loc. 1081). Gell-Mann eventually warmed up to the idea of quorks enough that he published a paper outlining quork theory in 1964 (loc. 1087).
In the paper, Gell-Mann renamed his quorks quarks (after a line in James Joyce’s Finnegans Wake [loc. 1081]), and proposed that there were 3: an up-quark (u) with a charge of +2/3, a down-quark (d) with a charge of -1/3, and a strange-quark (s) with a charge of -1/3 (loc. 1089). As Baggott explains, “in this scheme the proton consists of two up-quarks and a down-quark (uud), with a total charge of +1. The neutron consists of an up-quark and two down-quarks (udd), with a total charge of zero” (loc. 1093).
As outlandish as the theory was, it did have several pleasing strengths. For one, it was able to explain the characteristic of isospin in terms of the number of quarks making up a hadron. As Baggott explains, “the neutron and proton possess isospins that can be calculated as half the number of up-quarks minus the number of down-quarks” (loc. 1093). In addition, the theory of quarks could also account for beta-radioactivity: “beta-radioactivity now involves the conversion of a down-quark in a neutron into an up-quark, turning the neutron into a proton, with the emission of a W- particle” (loc. 1096).
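The bookkeeping in the two paragraphs above is easy to verify with a few lines of arithmetic. Here is an illustrative sketch (the charges and quark contents come from the text; the function names are my own):

```python
# Electric charges of Gell-Mann's three quarks, in units of the proton charge
CHARGE = {"u": 2/3, "d": -1/3, "s": -1/3}

def charge(quarks):
    """Total electric charge of a hadron from its quark content."""
    return sum(CHARGE[q] for q in quarks)

def isospin(quarks):
    """Isospin as Baggott describes it: half of (number of up-quarks
    minus number of down-quarks)."""
    return (quarks.count("u") - quarks.count("d")) / 2

proton, neutron = "uud", "udd"
charge(proton)    # 2/3 + 2/3 - 1/3 = +1
charge(neutron)   # 2/3 - 1/3 - 1/3 = 0
isospin(proton)   # (2 - 1)/2 = +1/2
isospin(neutron)  # (1 - 2)/2 = -1/2
```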
Still, there was as yet no empirical evidence that quarks did in fact exist (loc. 1127). What’s more, the theory contained a couple of major theoretical difficulties. First off, the fact that it postulated that both protons and neutrons contain 2 quarks of the same variety put it at odds with the very well established Pauli exclusion principle, which holds that no 2 identical fermions (matter particles) can occupy the same quantum state (loc. 277). Furthermore, it was later found that the theory was at odds with the observed decay rate of pions (loc. 1447).
Eventually, however, it was discovered that both problems could be solved by modifying quark theory somewhat. Specifically, Gell-Mann, together with Harald Fritzsch and William Bardeen, developed a theory of quarks which postulated that quarks come in 3 different varieties or ‘colours’: each quark could be blue, red or green (loc. 1453). As Baggott explains, “baryons would be constituted from three quarks of different colour, such that their total ‘colour charge’ is zero and their product is ‘white’. For example, a proton could be thought to consist of a blue up-quark, a red up-quark and a green down-quark (ub ur dg). A neutron would consist of a blue up-quark, a red down-quark and a green down-quark (ub dr dg)” (loc. 1457) (both are pictured below). The model worked marvellously, and was able to solve both the Pauli exclusion principle problem and the pion decay problem (loc. 1461).
[Diagrams: the proton (ub ur dg) and the neutron (ub dr dg)]
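The colour rule can be sketched as a toy check: a baryon is colour-neutral (‘white’) only when its three quarks carry one colour each. (The list representation below is my own simplification; real colour charge is the strong-force analogue of electric charge, not a literal colour.)

```python
def is_white(colours):
    """True when a baryon carries exactly one quark of each colour."""
    return sorted(colours) == ["blue", "green", "red"]

# Colour assignments quoted from Baggott
proton_colours = ["blue", "red", "green"]    # ub ur dg
neutron_colours = ["blue", "red", "green"]   # ub dr dg

is_white(proton_colours)             # True: an allowed baryon
is_white(["blue", "blue", "green"])  # False: such a state is forbidden
```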
Actually, the new theory of quarks proved to have an even more impressive feature: it offered a new way to attack the strong nuclear force problem. Before we address this, though, we need to catch up on the force side of things, for while physicists were discovering new particles and developing quark theory, important progress was also being made on the force front.
The last time we checked in on the force problem, we saw that physicists were having trouble marrying special relativity with quantum mechanics in their efforts to explain the strong and weak nuclear forces. Specifically, physicists were finding that the theories implied that certain particles are massless, while these particles certainly appeared to have mass.
Slowly but surely, though, a solution to the mass problem was developing. To begin with, it dawned on the physicist Yoichiro Nambu that if a charged field pervaded empty space, otherwise massless particles would acquire mass through their interaction with this field (loc. 1032). As Baggott explains, “what this implies is that empty space is not actually empty. It contains energy in the form of an all-pervasive quantum field… The result was indeed protons and neutrons with mass” (loc. 1038). There was a problem with Nambu’s theory, though. The physicist Jeffrey Goldstone pointed out that while such a charged field would indeed lend mass to certain particles, the mathematics behind it implied that it would also create additional massless particles! Specifically, “in addition to giving mass to protons and neutrons, their model also predicted massless particles formed from nucleons and anti-nucleons” (loc. 1042). Once again, these massless particles didn’t make any sense, so the theory had some serious issues (loc. 1049).
Eventually, though, a solution to this problem was found. To begin with, it was realized by the physicist Philip Anderson that there should be a way to construe the mathematics of the theory in such a way that the massless particles that were created would simply cancel each other out (loc. 1140). The contention caused a great deal of excitement (loc. 1143), and a number of physicists went about seeing if they could in fact come up with the right math.
Shortly thereafter, success was achieved “independently by Belgian physicists Robert Brout and Francois Englert, English physicist Peter Higgs at Edinburgh University, and Gerald Guralnik, Carl Hagen, and Tom Kibble at Imperial College in London” (loc. 1146). The mechanism behind the theory came to be called the Higgs mechanism, “or, in some quarters more concerned with the democracy of discovery, the Brout-Englert-Higgs-Hagen-Guralnik-Kibble — BEHHGK, or ‘beck’ mechanism” (loc. 1146); and the charged field pervading space came to be called the Higgs field (loc. 1153).
According to the theory, the Higgs field interacts with particles and slows them down in the process (loc. 1178). This makes it appear as though the particle has mass in itself, but truly it only acquires its mass through the nature of the interaction. The degree to which the Higgs field slows down any given particle (and therefore, the mass that that particle acquires) depends on the degree to which that particle interacts with the field (loc. 1178). Some particles (such as the photon) do not interact with the Higgs field at all, and therefore remain massless, and thus move at the speed of light (loc. 1178). Baggott explains the process thus: “our instinct is to equate inertial mass with the amount of substance that the object possesses. The more ‘stuff’ it contains, the harder it is to accelerate. The Higgs mechanism turns this logic on its head. We now interpret the extent to which the particle’s acceleration is resisted by the Higgs field as the particle’s (inertial) mass. The concept of mass has vanished in a puff of logic. It has been replaced by interactions between otherwise massless particles and the Higgs field” (loc. 1189).
The Higgs mechanism held out the promise of solving the mass problem that had been encountered in efforts to come up with a field theory that would explain the strong and weak nuclear forces. The physicist Steven Weinberg eventually focused his attention on the weak nuclear force and found that by applying the Higgs mechanism to Glashow’s SU(2) X U(1) field theory, he could reduce it to a U(1) theory (loc. 1248). This new theory was a fully unified electro-weak theory that covered both the weak nuclear force and the electromagnetic force (loc. 1248). Weinberg’s electro-weak theory implied that the Higgs field has four components: “three of these would give mass to the W+, W-, and Z0 particles. The fourth would appear as a physical particle—the Higgs boson” (loc. 1255). The Higgs boson had thus been postulated as a hypothetical particle; now it had to be found.
Here is a nice little clip (in 2 parts) detailing the Higgs field and the Higgs boson:
This was exciting stuff, but Weinberg’s theory still faced a few problems. One of the major problems here was the same problem that had plagued earlier attempts at developing quantum field theories of the strong and weak nuclear forces: the theory contained messy infinite values. Physicists would need to find a way to renormalize these infinite values before the theory could be considered acceptable.
Eventually, though, this problem would be solved. It was the Dutch physicist Gerard ‘t Hooft who would finally come up with the right mathematics to solve the renormalization problem that plagued field theories (loc. 1404). When ‘t Hooft applied his solution to Weinberg’s electro-weak theory, the result was “a fully renormalizable quantum field theory of electro-weak interactions” (loc. 1402). Success at last! Without a doubt, “it was a major breakthrough” (loc. 1402).
a. A Theory Takes Form (and the Gluon Is Introduced)
Physicists now turned their attention to the strong nuclear force. As mentioned above, the quark theory developed by Gell-Mann and Fritzsch offered an enticing new way to attack the strong force problem. We are now ready to see how this played out.
To begin with, by the time Gell-Mann and Fritzsch started developing their theory of quarks, ‘t Hooft had just finalized the electro-weak theory. Fritzsch in particular was confident that it should be possible to use quarks (in conjunction with the new discoveries involving the Higgs field, and renormalization) to develop a new understanding of the strong force (loc. 1436). And this would in fact turn out to be correct.
In reference to quark theory (which had already solved Pauli’s exclusion principle problem, and the problem of pion decay), Gell-Mann explained that “‘we realized that it could also fix the dynamics, because we could build an SU(3) gauge theory, a Yang-Mills theory, on it’” (loc. 1464). Specifically, Gell-Mann and Fritzsch were able to take their understanding of quarks and add a system of field particles (called gluons) that could explain the strong force operating on them. As Baggott explains, “by September 1972, Gell-Mann and Fritzsch had elaborated a model consisting of three fractionally charged quarks which could take three ‘flavours’—up, down, and strange—and three colours, bound together by a system of eight coloured gluons, the carriers of the strong ‘colour force’” (loc. 1467).
There was a problem with the theory, though (of course, right?). By this time, preliminary evidence of the more elementary particles that make up protons and neutrons was beginning to filter in. As Baggott explains, “the results of experiments conducted at the Stanford Linear Accelerator Center (SLAC) in California hinted strongly that the proton consists of point-like constituents” (loc. 1485). However, it still wasn’t clear whether these point-like constituents were actually quarks (loc. 1485). What’s more, these particles seemed to behave in a way that put them at odds with what was understood of the strong force. Specifically, it was understood that the strong force should keep quarks squeezed tightly together. However, it was observed that “far from being held in a tight grip inside the proton, the constituents behave[d] as though they were entirely free to roam around inside their larger hosts. How was this meant to be compatible with quark confinement?” (loc. 1485).
b. Solving the Problems
However, as experimental evidence mounted, these problems began to disappear. To begin with, it became ever more clear that the point-like particles that were being detected within protons and neutrons were in fact quarks, as they demonstrated behavior that was entirely consistent with quark theory (loc. 1588).
What’s more, the strong force problem turned out to be no problem at all. For it was discovered that the strong force simply behaves in a way that is very counter-intuitive. As Baggott explains, “when imagining the nature of an interaction governed by a force between two particles, we tend to think of examples such as gravity or electromagnetism, in which the force grows stronger as the particles get closer together. But the strong force doesn’t behave in this way. The force exhibits what is known as asymptotic freedom” (loc. 1711). Asymptotic freedom means that the strong force between two particles gets stronger the further the two particles are away from each other. In effect, the strong force doesn’t kick in enough to keep two quarks from getting away from one another until they are already a certain distance apart. As Baggott explains, “it is as if the quarks were fastened to the ends of a strong elastic. When the quarks are close together inside a nucleon, the elastic is relaxed and there is little or no force between them. The force is experienced only when we try to pull the quarks apart and so stretch the elastic” (loc. 1718).
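Asymptotic freedom can be made quantitative with the standard one-loop formula for the ‘running’ strong coupling, which shrinks as the collision energy Q grows (i.e. as the quarks are probed at shorter distances). The sketch below assumes typical values for the number of quark flavours and the QCD scale, neither of which is given in the book:

```python
import math

def alpha_s(Q, n_flavours=5, qcd_scale=0.2):
    """One-loop QCD running coupling.

    Q and qcd_scale (Lambda) are in GeV; Lambda ~ 0.2 GeV and
    n_flavours = 5 are assumed, typical values.
    """
    return 12 * math.pi / ((33 - 2 * n_flavours) * math.log(Q**2 / qcd_scale**2))

alpha_s(10)    # coupling strength at 10 GeV
alpha_s(100)   # smaller at 100 GeV: the force weakens at short distances
```

The decreasing coupling at high energy is exactly why the SLAC constituents appeared to roam freely inside the proton.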
This truth about the strong force was discovered by the physicist David Gross and his graduate student Frank Wilczek, who, ironically, were trying to prove the exact opposite at the time (loc. 1718). The discovery was also made simultaneously by a young Harvard graduate student named David Politzer, and “their papers were published back-to-back in the June 1973 issue of Physical Review Letters” (loc. 1721).
All of the pieces were now in place for Gell-Mann and Fritzsch to complete their theory of the strong force. They were joined in this mission by a Swiss theorist named Heinrich Leutwyler, and “together they developed a Yang-Mills quantum field theory of three coloured quarks and eight coloured, massless gluons. To account for asymptotic freedom, the gluons were now required to carry colour charge” (loc. 1727). Gell-Mann christened the new theory quantum chromodynamics (QCD) (loc. 1731).
One interesting feature of QCD is that it implies that a large majority of the mass of any object is made up of the energy carried by gluons within protons and neutrons. As Baggott explains, “about 99 per cent of the mass of protons and neutrons is energy carried by the massless gluons that hold the quarks together” (loc. 1759). The remaining mass of protons and neutrons is made up by the interaction that their underlying quarks have with the Higgs field (loc. 2712).
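Baggott’s ‘99 per cent’ figure can be corroborated with rough numbers: the quark masses supplied via the Higgs field are tiny compared with the proton’s total mass. (The quark mass values below are approximate Particle Data Group figures, not taken from the book.)

```python
m_proton = 938.3          # proton mass, MeV
m_up, m_down = 2.2, 4.7   # approximate up- and down-quark masses, MeV

# The proton is uud, so its quarks' rest masses contribute only:
quark_rest_mass = 2 * m_up + m_down        # about 9 MeV
higgs_fraction = quark_rest_mass / m_proton
# higgs_fraction is roughly 0.01: only ~1% of the proton's mass comes
# from the quarks themselves; the rest is the energy of the gluon field
```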
The Standard Model of particle physics as we know it today (based on the SU(3) symmetry group of the strong force, together with the SU(2) X U(1) symmetry group of the electro-weak force [written SU(3) X SU(2) X U(1)]) was now essentially complete. All that remained to be done was to establish empirical evidence of the remaining particles that had been postulated (including the Higgs boson).
By this time, the first 2 generations of quarks and leptons had both been found. As Baggott explains, “there were now two ‘generations’ of fundamental particles, each consisting of two leptons and two quarks… The electron, electron neutrino, up-quark, and down-quark form the first generation. The muon, muon neutrino, strange-quark, and charm-quark form the second generation, differentiated from the first by their masses” (loc. 1786). With regards to the force particles, physicists had evidence of the photon of the electromagnetic force, as well as the gluons of the strong nuclear force.
In 1977, physicists discovered the tau lepton and the bottom-quark, confirming that there must in fact be 3 generations of quarks and leptons (loc. 1789). The W+, W-, and Z0 field particles of the weak nuclear force were discovered at CERN in 1983 (loc. 1901-12). The final quark (the top-quark) and the final lepton (the tau neutrino) were found at Fermilab in 1995 and 2000, respectively (loc. 2170, 2176). This left only the elusive Higgs boson to be found. Still, it was understood that a new generation of more powerful particle colliders was going to be needed to do the trick (loc. 1986).
a. The Tale of the SSC and the LHC
A particle collider of more than sufficient size (called the Superconducting Supercollider [SSC]) had been proposed in 1987, to be built in Texas and completed as early as 1999 (loc. 2052). However, the US government eventually rejected the proposal due to the enormous cost of the project (originally expected to cost $4.4 billion, the projected cost eventually ballooned [as these things tend to do] to $11 billion). In the end, the American government chose to support the building of the International Space Station rather than the SSC (loc. 2082). Nevertheless, the SSC did receive some initial funding, and work on the project had even started. Indeed, by the time the government made its final decision, “23 kilometres of tunnel had been excavated and $2 billion had been spent” (loc. 2082) on the SSC. Below is a picture of the abandoned project.
Just as the plans to build the SSC in the US collapsed, plans began in Europe to build a new collider at CERN (loc. 2093). As Baggott explains, the new Large Hadron Collider (LHC) “would produce collision energies up to 14 TeV, less than half the maximum energy of the SSC but more than enough to find the Higgs” (loc. 2096). Actually, the LHC project did not involve building a brand-new collider at all; rather, it entailed taking the existing LEP collider at CERN and upgrading it into the LHC (loc. 2091). The project was originally slated to cost $15 billion, and it certainly took some work to convince politicians in Europe to support it (loc. 2110-65), but eventually the necessary support was achieved, and in 2000 the project began (loc. 2272). While the LHC was originally scheduled to be completed by 2006, cost over-runs and budgetary constraints ultimately pushed the completion date back to August 2008 (loc. 2316). The LHC was switched on for the first time on September 10, 2008, but operated for only 9 days before experiencing technical difficulties (actually, a faulty magnet caused an explosion that damaged 53 other magnets and contaminated the system with soot [loc. 2330]). The LHC would have to be shut down for a full year before coming back online the following November (loc. 2336).
b. The Discovery of the Higgs Boson
While the LHC was now operational, it would take time before physicists were able to ramp up its energy potential to full capacity. Indeed, this process was expected to take several years to complete, and was scheduled to be finished only in 2012 (loc. 2386)—though this has now been pushed back to 2013. Nevertheless, this did not prevent physicists from using the system in the meantime. And data quickly started streaming in (loc. 2345).
As mentioned above, the more massive an elementary particle is, the more energy is required of a collision in order to produce it. Physicists were not sure exactly how heavy the Higgs boson should be, but were confident that it would be between 100 and 250 GeV (loc. 2186). By comparison, the most massive elementary particle that had been found to that point was the top-quark, which rang in at 175 GeV (loc. 2174), while the next most massive particle below it was the Z0 particle, which has a mass of around 95 GeV (loc. 1907). Now, a certain amount of energy is needed to form any one of the more massive elementary particles. However, achieving this amount of energy does not guarantee that you will produce the particle you are looking for (loc. 2365, 2548). Nevertheless, it is possible for physicists to calculate the likelihood, or probability, that a given particle will form at a given energy level; and therefore, they can use these probabilities to help them interpret the data that they collect, and to tell whether the data represents evidence of a certain particle.
Still, random events can contaminate the data (loc. 2548), and therefore many events of the kind you are looking for must be witnessed before you can say with any certainty that they represent the particle you are after (loc. 2551). In fact, in order for physicists to declare that they have made a discovery, they must be 99.9999% certain that the phenomenon in question represents the particle they are looking for, and not random noise (loc. 2362). The evidence needed to achieve this level of certainty is known as five-sigma evidence (loc. 2362). As you can well imagine, achieving five-sigma evidence requires capturing many events of the kind you are looking for. And since a particle as massive as the Higgs simply doesn’t form very often, this can take a great many particle collisions, and a great deal of time indeed (loc. 2366). What’s more, it helps greatly if you know just how heavy the particle you are looking for is supposed to be, because then you can concentrate your efforts on a particular region of the data; however, as mentioned above, physicists did not have this luxury when it came to the Higgs (loc. 2390).
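The certainty threshold described above corresponds to a statistical fluctuation five standard deviations above the expected background. A quick check of the quoted figure, using the one-sided convention particle physicists typically apply:

```python
from math import erfc, sqrt

def one_sided_p(n_sigma):
    """Probability that a standard normal variable exceeds n_sigma."""
    return 0.5 * erfc(n_sigma / sqrt(2))

p_five_sigma = one_sided_p(5)   # about 2.9e-7
certainty = 1 - p_five_sigma    # about 99.99997%, i.e. the quoted '99.9999%'
```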
It is for these reasons that it took so long for physicists to find the Higgs boson at the LHC. Here is a nice little clip detailing some of the difficulties mentioned above:
Nevertheless, physicists were eventually able to overcome these obstacles, and to declare with five-sigma certainty that they had in fact found the Higgs particle. The discovery was announced on July 4, 2012 (loc. 2647-75). As Baggott notes, the new particle “has a mass of between 125-126 GeV and interacts with other Standard Model particles in precisely the way expected of the Higgs boson” (loc. 2678).
Still, much work remains to be done in order to pin down the precise characteristics of the particle that was found (loc. 2688). And future work at the LHC is expected to do just this. Actually, this work is more important than you might think, for while the discovery of the Higgs boson essentially completed the Standard Model, there are still plenty of questions yet to be answered. And it is to these unanswered questions that we will turn next.
One very interesting implication of the Standard Model is that it implies that the electromagnetic force and the weak nuclear force were—at one time in the very early stages of the universe—unified into a single force (loc. 1927). As Baggott explains, “the electro-weak theory implie[s] that at some time shortly after the big bang, the temperature of the universe would have been so high that the weak nuclear force and the electromagnetic force would have been indistinguishable. There was instead a single electro-weak force carried by massless bosons” (loc. 1929). As the universe cooled, the Higgs field took form, and “the massless bosons of electromagnetism (photons) continued unimpeded, but the weak-force bosons interacted with the Higgs field and gained mass to become the W and Z particles” (loc. 1932). The end result is that the electromagnetic force and the weak nuclear force now manifest themselves in very different ways (loc. 1933).
One problem with the Standard Model as it now stands is that the mass of the Higgs boson carries certain implications regarding the strength of the weak nuclear force: it implies that the weak force should actually be much weaker than it shows itself to be. This is known as the hierarchy problem (loc. 2223), and it is a major thorn in the side of the theory.
In addition, it was mentioned above that the Standard Model regards the electromagnetic force and the weak nuclear force as being ultimately one and the same. In fact, it has been shown that the strength of the electromagnetic force, the weak nuclear force, and the strong nuclear force—as they are understood by the Standard Model—are nearly the same at a certain temperature that would in fact have existed a fraction of a second after the big bang (loc. 1940). As Baggott explains, “it seems reasonable to suppose that in this ‘grand unification epoch’, the strong nuclear force and the electro-weak force would have been… indistinguishable, collapsing into a single ‘electro-nuclear’ force” (loc. 1940). However, a way has yet to be found within the Standard Model of unifying the 3 forces (attempts to do so are referred to as grand unified theories [GUTs] [loc. 1944]). Indeed, as Baggott explains, “despite Glashow, Weinberg, and Salam’s ultimately successful combination of the weak and electromagnetic forces, the SU(3) X SU(2) X U(1) structure of Yang-Mills field theories that makes up the Standard Model is far from being a fully unified theory of particle forces” (loc. 2226).
Interestingly, there is a theory that solves both the hierarchy problem and the unification problem. It is known as the theory of supersymmetry (SUSY). As Baggott explains, “there are many varieties of supersymmetric theories but one of the simplest—first proposed in 1981 and called the Minimal Supersymmetric Standard Model (MSSM)—features ‘super-multiplets’ which connect matter particles (fermions) with the bosons that carry forces between them” (loc. 2232). One implication of the theory is that for every particle in the Standard Model, there is a massive supersymmetric particle that differs in its spin (from its correlate) by ½ (loc. 2235). As Baggott explains, “the partner of the electron is called the selectron (a shortening of scalar electron). Each quark is partnered by a corresponding squark. Likewise, for every boson in the Standard Model, there is a corresponding supersymmetric boson, called a bosino, which is actually a fermion. Supersymmetric partners for the photon, W, and Z particles are the photino, wino, and zino” (loc. 2238). In addition to the supersymmetric particles, the MSSM also postulates the existence of 5 Higgs particles, each with a different mass (loc. 2242). So, if the MSSM is correct, the LHC may yet find more Higgs particles (loc. 2268).
The MSSM is able to solve the hierarchy problem because the activity of the supersymmetric particles makes proper sense of the strength of the weak force (loc. 2242). As for the unification of the electromagnetic, weak, and strong forces, the MSSM is able to accomplish this as well, since, as Baggott explains, “in the MSSM, the strengths of the three particle forces are predicted to converge on a single point” (loc. 2248).
In addition to this, early indications are that the MSSM might be able to solve a long-standing mystery in cosmology as well: the mystery of ‘dark matter’. Based on observations of the effect of gravity on clusters of galaxies, it would appear that there is much more matter out there than can be explained by the forms of matter that we are currently aware of (loc. 2251). This mystery matter has been dubbed ‘dark matter’. As Baggott explains, “observations of the cosmic microwave background radiation by the COBE and, more recently, WMAP satellites, suggest that dark matter constitutes about 22 per cent of the mass-energy of the universe. About 73 per cent is ‘dark energy’, associated with an all-pervasive vacuum energy field, leaving the ‘visible’ matter of the universe: stars, neutrinos, and heavy elements—everything we are and everything we can see—to account for less than five per cent” (loc. 2258). (I touch upon these topics in much greater detail in my executive summary of Lawrence Krauss’ A Universe from Nothing: Why There Is Something Rather than Nothing).
The MSSM postulates particles that do not interact with either the electromagnetic force or the strong force (such as neutralinos), and these particles could well account for the dark matter that cosmologists have detected in the universe (loc. 2258). The only problem with the MSSM is that there is no evidence of any of the supersymmetric particles that it postulates. Nevertheless, if they do exist, it is thought that the LHC may well be able to detect them (loc. 2265).
Of course, it is also possible that the LHC may end up finding something not predicted by either the Standard Model or the MSSM. Given that this is the case, it becomes clear that finding the Higgs boson is not the only purpose of the LHC. As Baggott puts it, the purpose of the LHC “[is] about pushing beyond the Standard Model; it [is] about our ability to understand what things are made of and how these things have shaped our universe” (loc. 2268). Whatever the LHC finds, we’ll now be watching with a better understanding of what it all means.
*To purchase this book at Amazon.com, please click here: Higgs: The Invention and Discovery of the God Particle
According to the Standard Model of particle physics (which was first fully fleshed out in the 1970s), there are two basic types of elementary particles: matter particles (called fermions) and force particles (called bosons). Bosons have been found for the electromagnetic force, the strong nuclear force, and the weak nuclear force, but not for the gravitational force (just why the latter has not been found is still a bit of a mystery). Fermions are split into two basic types: quarks and leptons. All of these can be seen in the diagram below:
The major difference between quarks and leptons is that quarks experience all four of the basic forces, whereas leptons do not. Specifically, half of the 6 leptons (known as the electron-like leptons) do not experience the strong nuclear force, whereas the other half (known as the neutrinos) experience neither the strong nuclear force nor the electromagnetic force (a visual representation of this can be seen in the diagram below; specifically, the screw represents the strong nuclear force, the magnet represents the electromagnetic force, and the nuclear symbol represents the weak nuclear force—all particles, remember, experience the gravitational force).
In addition to the 6 leptons, there are also 6 quarks. The quarks and leptons differ from each other in a few different ways, but one of the major differences between them is their mass. As you can see from the diagram above, both quarks and leptons are split into 3 ‘generations’ of mass, with each successive generation being heavier than the generation before it (the mass differential between the 3 generations is also nicely depicted in the diagram below). The quarks and leptons are also distinguished by their particular electric charge (which is labeled in the diagram above). All fermions have the same spin, ½ (also labeled in the diagram above).
All particles also have antiparticles. Antiparticles are identical to their particle correlate, but have the opposite electric charge. Antiparticles are only created in high-energy collisions (and some are also yielded as the product of beta-radioactivity). When particles and anti-particles meet they annihilate one another and produce field particles such as photons, gluons, or weak force carriers, which explains why antiparticles are not very prevalent in the universe.
Bosons (which are force carrying particles, otherwise known as field particles) are a little more straightforward than fermions. Each of the 3 field forces (the electromagnetic force, the strong nuclear force, and the weak nuclear force) has its own field particle (or particles), whose activity accounts for the behavior of its corresponding force. Photons are the field particle of the electromagnetic force. Gluons are the field particles of the strong nuclear force. Weak force bosons are the field particles of the weak nuclear force.
The following is a diagram that charts which bosons interact with which fermions, and other bosons (gluons and the charged bosons of the weak force also interact with themselves).
Here is a nice video that explains all of this (though it’s just a touch out of date, as you’ll see):
Many believe that a fully quantized theory of gravity should be achievable. The proposed field particle of the gravitational force is known as the graviton, but it has not yet been observed. Here is a nice little clip of the proposed graviton:
The diagram below is a nice representation of how everything works (or might work) together:
Here is a nice video detailing how everything works together (beginning at the beginning of the universe). It should be noted that the video is overly optimistic regarding what is known about the unity of the 4 fundamental forces (particularly gravity).
*Thank you for taking the time to read this article. If you have enjoyed this summary of Jim Baggott’s Higgs: The Invention and Discovery of the ‘God Particle’ or just have a thought, please feel free to leave a comment below. Also, if you feel others may benefit from this article, please feel free to click on the g+1 symbol below, or share it on one of the umpteen social networking sites hidden beneath the ‘share’ button.
The Book Reporter