The following

content is provided under a Creative

Commons license. Your support will help MIT

OpenCourseWare continue to offer high quality

educational resources for free. To make a donation or to

view additional materials from hundreds of MIT courses,

visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, in that

case, let’s get going. Today’s lecture will be

mainly on the blackboard. I have a few slides

I want to show. And what we want to talk about

is the inflationary universe model. So I’ll start by describing

the mechanism of inflation, how it happens. Inflation is based

on the physics of scalar fields and gravity. As I think we’ve said, in

present day particle theory, and by present day I mean

not yet string theory, all particles are

described as fields, quantum excitations

of the field. The analogy that most people

are at least qualitatively familiar with, is

the photon, which is a quantum excitation of

the electromagnetic field. But in fact, to describe

a relativistic theory of interacting

particles, the only way we really know for

any kind of particle is to introduce a field

and describe the particle as a quantized

excitation of the field. So when we talk

about a scalar field, that’s the quantum

representation of some kind of a

scalar particle. And scalar in this

case, means spinless, same in all directions. A scalar field then,

is just a number defined at each point in space. The only scalar field that we’ve

actually seen in nature so far, is the Higgs field. And indeed, inflation

is very much modeled on the Higgs field. Although the field

that drives inflation, which is by definition,

called the inflaton, is probably not the Higgs

field of the standard model. Although recently, actually,

in the past few years, people have written a

number of papers proposing that maybe the Higgs field

of the standard model could, in fact, be the

field that drives inflation. So we don’t know. It’s an open question. But in any case, the

field that drives inflation is some kind of

cousin, at least, of the Higgs field

of the standard model and that has many of

the same properties. In particular, the

properties of a scalar field are pretty much summarized by

its potential energy function. Energy density is a function

of the value of the field. And there are two kinds of

potential energy functions that I like to talk about. One is the kind that is used

in new inflationary models, plotted as potential energy

versus field value. It has a plateau with a peak

at someplace, which is usually assumed to be phi equals zero

and a potential energy function which may or may not be

symmetric about phi equals zero but I’ll assume it is,

just for simplicity. And the second type, which

I’d like to talk about mainly for comparison.

This is really the one that will

be interesting. But one could also imagine

a potential energy function which really has a local

minimum someplace, which is not the global minimum. And I’ll draw it with

the local minimum at the origin and the

global minima, two of them degenerate, elsewhere. And this, again, is a

graph of V versus phi. And the reason I’m

drawing this is partly for historical interest. This was what was used in the

original inflationary model, in my original paper. It does not work. But we’ll want to talk

about why it does not work. In both cases we’re

interested in a state, which can be called a false vacuum,

which is a state where the scalar field is just

sitting at phi equals zero. In the case of the second

of these potentials, the state at phi equals zero,

being a local minimum, is classically

completely stable. If one had a scalar field

in some region of space just sitting in that

minimum, there’d be no place where energy could

come from that would drive it out of that minimum

over the barrier. In the second case, the

field is classically stable. But it’s still possible for it

to quantum mechanically tunnel through the barrier. And that process has been

calculated and understood. Originally by Sidney

Coleman and collaborators. And an important feature

of that tunneling is that it does not

happen globally. You might think that there

would be some probability that suddenly everywhere

in the universe the scalar field would

tunnel over the barrier and go down to the other side. The probability of

that is zero, as you might realize if you

thought about it a bit more. There’s just no

way that the scalar field that far

over there is going to know to tunnel at the

same time as the scalar field far over there. So the tunneling happens

locally and it really happens in a very

small region, which then tunnels over the

barrier and the scalar field starts rolling down on the other

side in this small region. And then that region grows. The scalar field, as it

rolls over the barrier, pulls the scalar field nearby. And the region

grows with a speed that rapidly approaches

the speed of light. In the New

inflationary potential, where we have actually

a local maximum here, the situation is

classically unstable, in the sense that the smallest

possible fluctuation can start the field rolling down the hill. And in particular,

quantum fluctuations will, if nothing

else, start the field rolling down the hill in

some finite amount of time. We’re interested

in the case where the amount of time it takes

for quantum fluctuations to push the field off the hill

is relatively long compared to time scales involved

in the early universe. And the key time scale

involved in the early universe is the Hubble time. And the Hubble time is just

driven by the energy density. So one can calculate

the Hubble time. And one is interested

in the case for building

inflationary models, where the top of the hill is

smooth enough, gentle enough, has a small enough

second derivative so that the amount of time it

will take for the scalar field to roll off the hill

is long compared to the Hubble time

of the universe. So in both cases,

for short times, the scalar field is just

stuck at the origin. And that’s what’s

important, as far as what we want to talk about next. So the characteristic

of this state called the false vacuum is

that the scalar field is pinned

at a high energy density. And by pinned, I mean its value

just can’t change quickly. In general, particle

physicists use the word vacuum to mean the state of lowest

possible energy density. When we call this

a false vacuum, we’re really using the word

false in the sense of the word temporary. These states are

temporary vacuums, in that for some period of time,

which is long by early universe standards, the energy

density can’t get any lower. So it’s acting like a vacuum. Now what are the

consequences of that? The important

consequence of that is that the pressure has

to be large and negative, and in fact equal to the

negative of the energy density. And there are two ways we can

convince ourselves of that. The first is that if we remember

the cosmological equation that we derived somewhere in the

middle of the course for rho-dot. We learned that rho-dot is

equal to minus 3 a-dot over a, where a is the scale

factor, times rho plus the pressure

over c squared. Now what we’re saying here is

that as the universe expands, the scalar field is just stuck

at this false vacuum value. The energy density is stuck at

the energy density associated with that value of the

field, the potential energy density of the field itself. And therefore, rho-dot will be

zero as the universe expands. And if rho-dot is

zero, we can just read off from this equation

what the pressure has to be. Rho-dot equals zero implies

that the pressure is just equal to minus the mass

density times c squared, which is another way of saying

it’s minus the energy density. I’m using u for energy density

and rho for mass density. And they just differ by

a factor of c squared. So this is the straightforward

equation method of seeing the answer here. But if you want to explain this
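For reference, the equation just described, with u the energy density and rho the mass density (a transcription of what was presumably on the blackboard):

```latex
\dot{\rho} \;=\; -\,3\,\frac{\dot a}{a}\left(\rho + \frac{p}{c^{2}}\right),
\qquad
\dot{\rho} = 0
\;\;\Longrightarrow\;\;
p \;=\; -\,\rho\,c^{2} \;=\; -\,u
```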

to your roommates or somebody who is not taking

this class, there’s also a simple argument based

on a thought experiment, which I think is worth

keeping in mind. And we’ve used this

argument before, actually, in similar contexts. If we imagine a piston chamber

in our thought experiment. And in our piston

chamber, we’re going to put false vacuum

on the inside. And the false vacuum will

have an energy density, I’ll call it u sub f. And on the outside, we’re going to have

a zero energy vacuum. Now, since 1998,

we’ve known that our vacuum is

not a zero energy vacuum. We seem to be seeing a non-zero

vacuum energy in our universe. However, even if that’s

true, the vacuum energy of our universe is

incredibly small compared to the false

vacuum energy density that we’re talking about in

terms of the early universe. So you could still very

well approximate it as zero and not worry about it. So that’s what

we’ll be doing here. So we’ll think of

the outside as being either a fictitious vacuum,

which by definition, has zero energy density and

we can talk about it even if it doesn’t exist. Or we could think of it

as being the real vacuum in our universe, which has

an energy density which is approximately

zero on this scale. Now what we want to

do is just imagine pulling out that piston. So we have now created

an extra region on the interior of

the piston chamber. And we’re going to be

assuming that we’ve somehow rigged the walls of the chamber

so that the false vacuum will be stretched as we

pull out the piston. The piston is attached to

the false vacuum in some way. So this entire interior

region is now false vacuum. And therefore, the volume

of the false vacuum region has enlarged. And if we call the

extra region here delta v, the volume

of that region, we now have a situation where

the energy has increased by the energy density of the

false vacuum times delta v by changing the

volume of the chamber. Now, energy has to be conserved. So this energy has to be

equal to the work that was done by whatever force

pulled out on this piston. We won’t need to specify who

was pulling on the piston, but the work done when

one pulls out on a piston is just equal to

minus p times delta v, the work done by the person

pulling on the piston. So in the normal case, the

pressure would be positive, the piston would be

pushing out on the person holding the piston and the

interior would be doing work on the person pulling out. And it would be positive if

the pressure were positive. But that’s the work done on

the person, and what we want is the

work done on the gas. And that’s minus

p times delta v. So if energy is conserved,

the work done on the gas has to be equal to the

change in energy of the gas. And the change in energy

of the gas is that. So conservation

of energy implies that delta e equals delta

w, or u sub f times delta v equals minus p times delta

v, which of course implies that p is equal to minus

u sub f, as we said before. So the point is that if the
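In symbols, the piston argument just given (presumably the blackboard version) is:

```latex
\Delta E \;=\; u_f\,\Delta V,
\qquad
\Delta W \;=\; -\,p\,\Delta V,
\qquad
\Delta E = \Delta W
\;\;\Longrightarrow\;\;
p \;=\; -\,u_f
```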

energy inside the piston is going to increase, the

person pulling out on the piston had better be doing work, had

better be doing positive work. And if the pressure

inside were positive, the person pulling

out on the piston would be doing negative work. The piston would

be pushing on him. So for it to make sense

here, with the energy in the piston increasing,

the person pulling out has to really be pulling

against a suction. He has to do work to pull out. And a suction means

a negative pressure, if we have zero

pressure outside. Pressure inside

has to be negative. So we could reach this

conclusion either of two ways, and both give the same answer. The pressure is just equal

to minus the energy density. Yes? AUDIENCE: I’m a little

confused about why the energy’s increasing inside. Because why couldn’t

you just say the energy density decreases

with the increased volume? PROFESSOR: OK the question

is why couldn’t you just say that the

energy density would decrease with the

increased volume. That certainly is

what will happen if you have normal gas inside. What makes this particular

false vacuum odd is the origin of this

energy density, which is the potential energy

density of the field. So if we were talking

about the situation, for example, which

is the clearest cut, the only way the energy

density here could go down is if the scalar field

goes up over the barrier and then comes down over here. And there’s no way to drive

it there, except to wait for a quantum fluctuation,

which is a very slow process. And similarly, here

there’s no barrier, so it can just roll down. But that takes a

certain amount of time. And we’re assuming that all the

things we’re talking about here are happening on

a time that’s fast compared to the amount of time

it takes for the scalar field to roll. So what makes this peculiar

false vacuum special is that it cannot lower its

energy density quickly. And that’s what the word

false vacuum implies. And there are states like that. And then those

states necessarily have a negative

pressure, or pressure that’s equal to good accuracy

to minus the energy density. Or I say to good accuracy only

because the energy density could change a

little bit slowly, at least for the top case. But it’s limited how

much it can change. OK, now what are

the consequences of this cosmologically? Well we’ve also learned that

we could write the second order Friedmann equation, which is the

equation that really tells us what the force of gravity is doing. a-double-dot is equal to minus

4 pi over 3 G times rho plus 3 p over c squared. Now for the false vacuum,

p is equal to minus rho c squared, that is, minus

the energy density. And that means that this term is

negative and three times as big as that term. So for the false

vacuum, this quantity, which we normally think

of as being positive, becomes negative. I should write a factor of a here. And that means that
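With the factor of a written in, the equation and its false-vacuum form are (again, presumably the blackboard version):

```latex
\ddot a \;=\; -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) a,
\qquad
p = -\rho\,c^{2}
\;\;\Longrightarrow\;\;
\ddot a \;=\; +\,\frac{8\pi G}{3}\,\rho\, a \;>\; 0
```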

instead of gravity slowing down the expansion of the

universe, for a false vacuum the expansion is accelerated. And that’s also what

we’re seeing today with the vacuum

energy, which behaves the same way as

this false vacuum and produces gravitational

repulsion in exactly the same way. So false vacuum implies

gravitational repulsion. OK, this basically is the

mechanism of inflation. So we’re sort of finished

with this chapter. Are any questions

before I go on about how this gravitational

repulsion arises? Michael. AUDIENCE: So, for the top vacuum

that you’ve drawn up there, where there’s no barrier,

you just roll slowly, are we assuming that it

takes a long time for it to begin to roll or

after it’s started rolling that it also takes

a very long time to reach the bottom. PROFESSOR: I guess I’d

say both. “Begin to roll” is not that well defined. Because it may have

an infinitesimal velocity from the time you

start discussing it. And then that

infinitesimal velocity gets bigger and bigger. But I think what we’re saying is

that the whole process, however you divide it up, is going

to take a long time compared to the time it takes for

the exponential expansion to set in. OK. So now, I’d like to

take this physics and just put a

scenario around it. And we’ll call it the new

inflationary scenario because that’s what it is. Maybe now I should

mention a little bit more about the history here. When I wrote my

original paper, I was assuming a

potential of something like this, because it seemed

generic, and it created inflation, and I was able to understand

that inflation would solve a number of cosmological

problems, which are the problems that

we’ve talked about. And we’ll come back

to talk about how inflation solves them. But, one still has

to end inflation. In this model,

inflation would end only by the tunneling

of the scalar field through the barrier,

which as I said, happens in small

regions which then grow. Those regions are spherical

so they’re called bubbles. And the whole process really is

very much the way water boils. When you boil water, it forms

very small bubbles initially, and the bubbles grow and then

start colliding with each other and making a big frothy mess. And it turns out that that’s

exactly what would happen in the early universe

if you had this model. When I first started

thinking about it, I hoped that these bubbles

could collide with each other while they’re still

small and merge into a uniform, hot region of

the new phase, a phase where the scalar field is

not there, but there. But that turned

out not to be the case. It turned out that the bubble

formation process produced horrible inhomogeneities

that there did not seem to be any way to cure. And that then was

the downfall of the original inflationary model. But a few years later, Andrei

Linde in the Soviet Union, and independently, Albrecht

and Steinhardt in the US, proposed what came to be called

the new inflationary model, which started with a

different assumption about what the underlying

potential energy function for the

scalar field was. Instead of assuming

something like this, which might be called

generic in some sense, they instead assumed

something that’s a little bit more special,

a potential energy function with a

very flat plateau somewhere, which, well, we

normally put in the middle. And this has the advantage

that inflation ends, not by bubble nucleation via

tunneling, but instead, by just small fluctuations

building up and pushing the scalar field down the hill. And what makes it

work, basically is that those small fluctuations

have some spatial correlations built into them. So over some small region, which

I will call a coherence region, the fluctuations are

essentially uniform. And the other

important feature is that once the scalar

field starts to roll, it still has some nearly

flat hill to roll on. So a significant

amount of inflation happens after this homogeneous

coherence region forms. So the initial coherence

region can be microscopic, but it is then stretched

by the inflation that continues as the

scalar field rolls down the hill towards the bottom. So that process of stretching

the coherence region after it has already formed is

what makes this model workable, while this model was not. So that’s the basic story of

how new inflation succeeded in allowing inflation

to end gracefully, which is the phrase that was used. The problems associated

with this model came to be called the

graceful exit problem. And this is the first solution

to the graceful exit problem. There are now other solutions. But they’re very

similar actually. So I’ll just write

here that it’s a modification of the

original inflationary model to solve this graceful

exit problem. Now I should say a little bit

about how inflation starts. But I can only say a

little bit about it because the bottom line

really is we don’t know. We still don’t have any real

theory of initial conditions for cosmology, whether

it’s inflationary cosmology or any kind of cosmology. The nice feature of

inflation is that it allows a significantly broader

set of initial conditions than is required, for example,

in the standard cosmological model, where, as we discussed,

the needed initial conditions are very precisely specified. I might say a few things though,

about ideas people have had. One idea, which I think

sounds very reasonable, is due to Andrei Linde. And it’s a vague

idea, so it really needs to be more precise

before it could really be considered a theory. But this is just the idea

that the universe started out with some kind of chaotic

random initial conditions. And then the hope is simply that

inflation will start somewhere. That somewhere in the initial

chaotic distribution there’ll be a place where the

scalar field will have the right properties,

the right configuration to initiate inflation. There are also

models by Vilenkin, Alex Vilenkin of Tufts, and

independently, Andrei Linde, who by the way, is at Stanford. They both worked on

models where the universe could begin by a quantum

tunneling process, starting from

absolutely nothing. I wrote here absolutely

nothing and that’s more nothing than nothing. You might think of nothing

as just empty space. But from the point of view

of general relativity, as you already know

enough to understand, empty space is not

really nothing. Empty space is really

a dynamical system. Empty space can bend and

twist and stretch and do all kinds of complicated things. It’s really no different,

in some basic sense, from a big piece of rubber. So nothingness

really is intended to mean a state where there’s

not only no matter present, but also no space and

no time, really nothing. One way to think

of it, perhaps, is as a closed universe, the

limit as the size of the closed universe goes to zero so

that there’s nothing left. None of these

theories are precise. We don’t really know how to

precisely formulate them. In this tunneling

from nothing, one is talking about tunneling

in the context where the structure of space itself

changes during the tunneling process. So it’s tunneling in the

context of general relativity. And we don’t really have

a successful quantum theory of general relativity. So these ideas are very

speculative and quite vague. But they do indicate

some possibilities for how the universe

might have started. An idea closely related to

this tunneling from nothing is the Hartle and Hawking proposal.

This is Jim Hartle, of UC Santa Barbara, who’s also the

author of a general relativity textbook now. And Stephen Hawking,

who you must know from Cambridge University. They proposed something

called the wave function of the universe. From their point of view,

it’s self contradictory to talk about the

universe having an origin because before the origin of

the universe, space and time were not even defined. And therefore, you could not

think of there being a time before the universe was created. And therefore the universe

didn’t actually get created. It just is and has some

earliest possible time. And that’s what this wave

function of the universe formalism reflects. But otherwise, it’s

pretty similar, really, to the idea of

tunneling from nothing. The idea is that the universe

had some kind of a quantum origin, which determined the

initial state of the universe. In any case, for the purpose

of inflation, what we really need to assume, and this could

be an assumption which follows from any of these

theories, is that the

early universe contained at least a patch, and

we don’t know exactly how big the patch

has to be, but greater than or about equal to the

inverse Hubble constant times the speed of light,

the Hubble length. And this initial patch

also has to be expanding, or else it would just collapse. It really has to

be expanding faster than a certain threshold. But I won’t try to put that

all into the one sentence. Oh, I didn’t say

patch of what yet. Whoops. That’s where the average

value of phi is about zero. And by average, I mean averaging

over rapid fluctuations, if there are any. And if one has this, no

matter where one got it from, inflation will begin. And once inflation

begins, it doesn’t matter much how it begins. To see what happens next, it’s

easiest to at least pretend that, to a good

approximation, you can treat a small

region of this patch as if it were

homogeneous and behaving like a Robertson-Walker universe

of the type we know about. Then we can write the first

order Friedmann equation, which is a little

bit more informative than the second order one. I’m going to leave out

the curvature term. We’ll argue later that the

curvature term becomes small quickly. But for our first

pass, we’ll just assume that the universe

is described by something as simple as the

Friedmann Robertson-Walker equation for a flat universe. For rho, we’re just

going to put rho sub f. We’ll assume that

our space is just dominated by this false

vacuum energy density. And this can easily be solved. It just says that the

first derivative divided by the function

itself is a constant. Just take the square

root of this equation. And that is an equation

which just immediately gets solved and gives

you an exponential. So you find that for late

times, you just get a of t behaving as a constant times

an exponential of time, where the exponential time constant,

which I’m calling chi, is just the square

root of this coefficient: the square root of 8 pi

G over 3 times rho sub f. So clearly this is the
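That is, dropping the curvature term (this is presumably what was written on the blackboard):

```latex
\left(\frac{\dot a}{a}\right)^{2} \;=\; \frac{8\pi G}{3}\,\rho_f
\;\;\Longrightarrow\;\;
a(t) \;=\; \mathrm{const}\times e^{\chi t},
\qquad
\chi \;=\; \sqrt{\frac{8\pi G}{3}\,\rho_f}
```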

solution to that equation. And for late times, it is the

solution that will dominate. Actually it is

the only solution, the way I’ve already

simplified this. But if we started with the

full system of equations, there’d be other solutions with

different initial conditions. But this is always

what you would be led to, for late times. The exponential

expansion would dominate. So that’s what

inflation basically is. It’s a period of

exponential expansion. There are a few

features of inflation, which help us to understand

why it is so robust. That is, why no

matter how it starts, it leads to the same result. So one feature of inflation

I’d like to mention is the cosmological no-hair. Some people call it a

theorem and some people call it a conjecture. I think the more precise

statement about this theorem is that you can prove it as

a theorem perturbatively. That is, if all initial

deviations are small, you can really

prove it, but people think it’s true, even

beyond perturbation theory. And in that case

it’s a conjecture. But it’s basically the statement

that if one has a system with p equals minus rho c squared

and rho is greater than zero, if that describes the

matter, then essentially any metric that you

start with will evolve into this exponentially

expanding flat metric. Any system will evolve

to locally resemble a flat exponentially

expanding space time. And the word locally there

is needed to make it true. If, for example, you start

with a closed universe, as just a simple example, which has

this kind of matter filling it. It will start to

grow exponentially. It will always stay

a closed universe. It will never become

literally flat. But as it gets bigger and

bigger, any piece of it will look flatter and flatter. And it will keep getting bigger

and bigger exponentially fast forever. So it will rapidly

approach a space which looks like an

exponentially expanding flat space. Now this exponentially

expanding flat space time has a name, which is

de Sitter space, named after a Dutch astronomer. It was discovered

early on in the history of general relativity. 1917, I think, was the date

that de Sitter wrote his paper about de Sitter space. It has some very

interesting properties, not all of which

de Sitter noticed. In spite of the fact

that I’m describing it as a flat exponentially

expanding space time, that’s not the only

possible description. It turns out that

the same space time, by changing what you

call equal time surfaces, can be described as

either an open or closed Robertson-Walker universe,

completely homogeneous in both cases. So it’s very weird. But the easy way to think

of it for most purposes, is as this flat exponentially

expanding picture. OK. The next thing I want to point

out about de Sitter spaces is that they have what are

called event horizons. Now early in the course

we talked about horizons and didn’t really try

to qualify the name. The horizons that we

used to talk about are technically called

particle horizons. Those are horizons

that have to do with the past history

of the universe and are related to the fact

that since the universe has only a finite past history,

or at least as a cosmological model, there’s a finite

distance that light could have traveled up

until this time. And we cannot have any way of

seeing anything that’s further away than that maximum distance

that light could have traveled. That’s the particle horizon. These event horizons

are different. They’re related really, to

the future of the universe, rather than the past. It’s a statement that,

because of the fact that these universes are

exponentially expanding, if two events that happen

at a particular time are separated from each other

by more than a certain distance, then the light

from one will never reach the future

evolution of the other. And one can see that by

looking at the total coordinate distance that light could

travel between any two times. So I’m going to let

delta r of t1 t2 be equal to the

coordinate distance that light travels

from t1 to t2. And I’m going to

assume that a of t is given by exactly

this formula. And I’ll write out

const because I don’t want to write

it too many times. I could give it a one variable

symbol if I wanted to. This delta r of t1 t2

is just the integral of the coordinate

velocity of light from t1 to t2: the integral of c

divided by a of t, dt. The coordinate velocity of light

is just c divided by a of t. We’ve seen that formula before. And this can easily be done

by putting in what a of t is. And we get c over

the constant that appeared in that

formula, whatever it is, times chi, the

exponential expansion rate, all times e to the minus chi t1

minus e to the minus chi t2. And now, the question we
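Carrying out the integral with a(t) = const × e^(chi t), the result being described is:

```latex
\Delta r(t_1, t_2)
\;=\; \int_{t_1}^{t_2} \frac{c}{a(t)}\,dt
\;=\; \frac{c}{\mathrm{const}\cdot\chi}
\left(e^{-\chi t_1} - e^{-\chi t_2}\right)
```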

want to ask ourselves is, suppose we let this

light ray travel for an arbitrarily long

amount of time, which means taking t2 to infinity. And the important feature

of this expression is that as t2 goes to

infinity, the expression approaches a finite value. The second term just disappears. And you’re left

with the first term. So no matter how long

you wait, anything that started out with a

coordinate separation larger than that value, that

asymptotic value, will just never be

reached by the light pulse that you’ve sent. And that’s what this

event horizon is. And it’s easy to

see what it actually amounts to numerically. If you want to know how far

away the object has to be now, in physical terms, so that its

coordinate distance is larger than the maximum we get

here, we know how to do that. The maximum value can be

written as just the limit as t2 goes to infinity

of delta r of t1 t2. And we have the expression

for it right here. It’s just the first

piece of this answer. And this is the

coordinate distance. If we want to know the present

physical distance of something which is at that

coordinate distance, we would just multiply it

by the present scale factor. And present here means, t1

and t2 are the arguments here, and we just want to multiply

by a of t1 to get the physical distance of an object

which is at this boundary, the boundary of what we’ll be

able to receive a light ray from and what we won’t. So this is the event horizon

distance, physical distance, and it’s just equal to

c times chi inverse. When you multiply

by a of t1, you cancel the constant

in the denominator and you cancel the

e to the minus chi t1. And you’re just left

with c times chi inverse. Which is the Hubble length. It’s the inverse

Hubble constant times the speed of light, which

is the Hubble length. So anything that is further away

than one Hubble length from us now, if that object emits a light

ray, we will never receive it. And that’s called

the event horizon. Now the reason this is
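The limit just described can be checked numerically. The sketch below uses arbitrary sample values for the scale-factor constant and for chi (illustrative choices, not numbers from the lecture), integrates c over a(t) out to a time much longer than 1 over chi, and confirms that the physical event-horizon distance a(t1) times delta-r comes out to c times chi inverse, the Hubble length.

```python
import numpy as np

# Illustrative sample values (not from the lecture): a(t) = C * exp(chi * t)
c = 1.0      # speed of light in arbitrary units
C = 2.0      # scale-factor constant
chi = 3.0    # exponential expansion rate

def a(t):
    return C * np.exp(chi * t)

# Coordinate distance light travels from t1 to t2: integral of c / a(t) dt.
# Take t2 >> 1/chi to approximate t2 -> infinity.
t1, t2 = 0.5, 40.0
t = np.linspace(t1, t2, 400_001)
f = c / a(t)
delta_r = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule

# Physical event-horizon distance: multiply the coordinate distance
# by the scale factor at t1.
event_horizon = a(t1) * delta_r
print(event_horizon, c / chi)  # the two agree: c times chi inverse
```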

important is nothing travels faster than light. And that means that

in a de Sitter space, everything is limited in

how far it can ever get. And an important

implication of that is that if, in our full space,

which may not be entirely de Sitter space, if we

have a de Sitter region, but junk outside that,

which we don’t understand, don’t know how to predict,

could be anything, we would still know, even

without knowing what’s outside, that whatever’s

outside can never penetrate into the

de Sitter region by more than one event horizon,

by more than one Hubble length. So the interior of

the de Sitter region is protected from

anything on the outside. And that is a rigorous

theorem of general relativity, this protection. And that means that once you

have a sizable region of de Sitter space, no matter

what’s going on outside, it’s never going to disappear. It will always be protected

by this event horizon. I should give you

now a few sample numbers associated

with this scenario. And here I have to say

that we don’t really know very accurately what are

the right numbers to give here. So I think the word sample

numbers was well chosen there. What we don’t know is what

energy scale inflation actually happened at in the

history of the universe. It turns out that the

consequences are pretty much identical for most questions,

or all the questions that people have been able so far

to investigate observationally, regardless of what energy

scale inflation happened. Inflation was

originally invented in the context of

Grand Unified Theories. And I think that’s still

a very plausible context in which inflation

might have happened. And the sample

numbers I’ll give you will be numbers associated

with Grand Unified Theories. And what starts the whole

story is the energy scale of Grand Unified Theories,

which is about 10 to the 16 GeV, where a GeV is a billion

electron volts. And this number is arrived at by

measuring, at accessible energies with accelerators,

the interaction strengths of the three

fundamental interactions of the standard model

of particle physics. The standard model

of particle physics is based on three different

gauge groups: SU(3), SU(2), and U(1). Each one of those

gauge groups has associated with it an

interaction strength. And they can be measured. And that’s where we

start this discussion. Then once you measure them

at accessible energies, which is like 100 GeV, or

something like that, then you can

theoretically extrapolate to much higher energies. And what is found is

that to good accuracy, the three actually

meet at one point. And that is the

underlying basis, really, of Grand Unified Theories. That’s what allows the

possibility that all three interactions are really just a

manifestation of one underlying interaction, where the one

underlying interaction is made to look like three

interactions at lower energies through this process

called spontaneous symmetry breaking, which was talked about

a little bit in a lecture I gave the time before

last, I think, or probably in Scott's lecture. Now this meeting

of the three lines is decent in the context

of what is literally the standard model

of particle physics. But if one modifies the standard

model of particle physics by incorporating supersymmetry,

a symmetry between fermions and bosons, and that involves

adding a lot of extra particles because none of the

particles that we know of make up a fermion boson pair. So in a supersymmetric model

for every known particle, you introduce a new

unknown particle, which would be its

supersymmetric partner. In that minimal

supersymmetric extension of the standard model,

the meeting of the lines works much better. So it’s a piece of evidence

in favor of supersymmetry. In any case, where

the lines meet to good approximation in either

one of these two discussions, whether it’s supersymmetric or

not, is at about 10 to the 16 GeV. So that becomes the

fundamental mass scale of the grand unified theories. Hold on a second. That's what I'm looking for. Now once one has

this mass scale, one can figure out an

appropriate mass density. And that’s what we’re

really interested in, what would be an

appropriate mass density for a false vacuum

in a grand unified theory. And one can develop

that, and we really don’t know how to do any better. Because as I’ve told

you, we don’t know really know how to calculate

vacuum energies anyway. But as a dimensional

analysis answer, we can get the answer because

it is really uniquely determined by dimensional

analysis up to factors. If one wants to make an

energy density out of E GUT and constants of

physics, the only way to do that is to take E

gut to the fourth power and divide it by h bar

cubed c to the fifth. And you can convince

yourself at home that that gives you

an energy density– a mass density, excuse me. And you could even evaluate it

numerically. And this is about

equal to 2.3 times 10 to the 81 grams per

centimeter cubed. So it’s a fantastically

high mass density, 10 to the 81 grams

per centimeter cubed. And if one puts this

into the formula for chi, the exponential

expansion rate, chi turns out to be about 2.8 times

10 to the minus 38 seconds. And c times chi inverse,

the Hubble length, it turns out to be about 8 times

10 to the minus 28 centimeters. So all these numbers are off

scale by human standards. And that’s just a

feature of the fact the Grand Unified Theories are

off scale by human standards. AUDIENCE: [INAUDIBLE] PROFESSOR: Do I

have this backwards? No, this is incredibly small. This is 10 to the minus 28. AUDIENCE: So then

it’s chi [INAUDIBLE] PROFESSOR: I’m sorry. Hold on. Yeah, no this– AUDIENCE: Chi should

be [INAUDIBLE] PROFESSOR: Chi

inverse is a time. C times the time is a distance. So I think that’s right. AUDIENCE: So is chi inverse

10 to the [INAUDIBLE] PROFESSOR: Yeah, if we’re in

cgs units, Chi inverse by itself would differ by a

factor of 10 to the 10. So it would be 10

to the minus 38. Hm. Hold on. This must be chi inverse. AUDIENCE: Oh, OK. PROFESSOR: There is

an inconsistency here. You are right. Yes, that’s chi inverse. This is time. And then this just

multiplies by c. OK so the way this
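As a hedged back-of-envelope check on those sample numbers, here is a minimal sketch in SI units; the constants are standard, and the 10 to the 16 GeV scale is the assumed input.

```python
import math

# Hedged sketch (not from the lecture notes): GUT-scale sample numbers in SI.
hbar = 1.054571817e-34     # J s
c    = 2.99792458e8        # m / s
G    = 6.67430e-11         # m^3 kg^-1 s^-2
eV   = 1.602176634e-19     # J

E_gut = 1e16 * 1e9 * eV    # the assumed 10^16 GeV scale, expressed in joules

# Dimensional analysis: the only mass density you can build from E_gut, hbar, c
rho = E_gut**4 / (hbar**3 * c**5)       # kg / m^3
rho_cgs = rho * 1e-3                    # ~2.3e81 g / cm^3, the lecture's value

# chi from the first-order Friedmann equation with rho fixed at the false-vacuum value
chi = math.sqrt(8 * math.pi * G * rho / 3)
chi_inverse = 1 / chi                   # ~2.8e-38 s
hubble_length_cm = (c / chi) * 100      # ~8e-28 cm

print(rho_cgs, chi_inverse, hubble_length_cm)
```

The exact coefficients in rho are unknown (dimensional analysis only fixes it up to factors), so these numbers are order-of-magnitude only.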

scenario would work is, we would start with the early

universe with some patch of order of magnitude this size, which I might point out

is 14 orders of magnitude smaller than the size of

a single proton, which would be about 10 to the

minus 13 centimeters. So 15 orders of

magnitude, maybe. And then we would

need enough inflation, so that at the end of

inflation, the patch should be on the order of maybe one

to 10 centimeters or more. It has to be at least about this

big, but could be much bigger. There's no problem with

it being much bigger. Much bigger would just

mean there’s more inflation than you minimally needed. There’s no problem with

having too much inflation. And then it’s a

matter of checking and a calculation, which

I’ll tell you the answer of. If we want to go from some

size of the end of inflation to the present universe–

And that’s really what we’re interested in, ultimately,

getting the present universe. –there’d be a further

coasting expansion from the end of

inflation until now, which can just be calculated

by using the idea that a times temperature, scale factor times

temperature, is a constant. So the increase in

the scale factor is proportional to the decrease

in the temperature. And the reheat temperature

of this model– Maybe I didn’t describe

reheating exactly, I’ll describe it

quickly in words. At the end of inflation,

the scalar field is destabilized by

these fluctuations and rolls down the hill, then

oscillates about the bottom. And when it oscillates

about the bottom, we need to take into account the

fact that this field interacts with other fields. And it then gives its energy

to the other fields, basically the standard model

fields ultimately, heating them up, producing

the hot soup of particles that we think of as the starting

point for the conventional Big Bang Theory. So this reheating

process at the end of inflation as

the inflaton field oscillates about its minimum,

reproduces the starting point of the conventional

Big Bang Theory. And it produces a

temperature which is comparable to the

temperature that you started at, which is the temperature

scale of the theory. So if it’s Grand

Unified Theory scales, we would reheat to a temperature

of order 10 to the 16 GeV. And then, to ask what will

be the expansion factor between then and now, it would

be 10 to the 16 GeV divided by the Boltzmann constant

times 2.7 Kelvin. This is the ratio of

the temperature then to the temperature now,

both expressed as energies. And then we might

want to multiply this by 10 centimeters, if we say

that at the end of inflation, the universe was 10

centimeters across– the size at the end of inflation. This, I worked out at

home, is about 450 times 10 to the 9th light years. And we would want

something like 40 times 10 to the 9th light years to

explain the present universe. So this is about

10 times too big. And that’s OK. It means that we could

get by with one centimeter and 10 centimeters is

being a bit generous. So inflations would start

with this tiny patch. At the end of

inflation the patch would have grown

to one or maybe 10 or perhaps more,

centimeters in length. And then by coasting

up to today it becomes something that’s

larger than the region that we now observe. And that’s basically

how the scenario works. Any questions about those

numbers or the general pattern of what we’re

talking about here? OK. What I want to talk about

next, and this will pretty much be where we’ll stop,

although a few other things we might mention

if we have time, I want to talk about

how it solves the three cosmological problems

that we have discussed of the conventional

Big Bang model. And the explanations are

actually quite simple. So we can go through

them pretty quickly. First we had the horizons

slash homogeneity problem. Remember that was caused, or

could be stated as, the problem that the early universe

expanded so fast that the different

pieces of it did not have time to talk to each other. And, in particular, when the

cosmic microwave background was released, points at opposite

sides of the universe were separated from each other

by about maybe 50 horizon distances, we calculated. And that means there’s no way

they could have communicated with each other, and

therefore no way we could explain how

they turned out to have the same temperature

at the same time. In this case, what

we’ve done is, we’ve inserted into the

history of the universe an extra phase of evolution,

the inflationary phase. And if we go back

to the beginning of the inflationary

phase, we see that that problem

is just not there. And if it’s not there,

it doesn’t develop later. At the beginning of

the inflationary phase, by assumption, the region that

we’re starting to talk about was about horizon

length in size. And if we had enough inflation

to produce 10 centimeters out of that, that was 10

times more than we needed, it would mean that the entire

observed universe would be coming from a region

that would be only about a tenth of the size

of this Hubble length. So that would therefore

be well inside the horizon at that time. And that means that if you

allow a little bit of leeway with these numbers by having a

little bit of extra inflation, there can be plenty of time

for the entire region that’s going to become our

presently observed region, to come to a uniform temperature

by the ordinary processes of thermal equilibrium. Because they’re much less than

the horizon distance apart. And then once the uniformity is

established, before inflation, when the region that

we’re talking about is incredibly tiny, inflation

takes over and stretches that tiny region

so that today, it’s large enough to encompass

everything that we see. And therefore

everything that we see had a causally

connected past and had time at the early stages to come

to uniform temperature, which is then preserved as

the whole thing expands. So that gives a very

simple explanation for the homogeneity problem. Basically before inflation

the region was tiny. Second on our list was

the flatness problem. And the basis of that

problem was the calculation that we did about how omega

minus 1 evolves in time. And we discovered

that omega minus 1 always grows in magnitude

during conventional evolution of the universe. And that therefore, for omega

minus 1 to be small today, it would have to be amazingly

small in the early universe, as small as 10 to the minus

18 at one second after the Big Bang to be consistent with

present measurements of omega minus 1. The key element there was

this unstable equilibrium and the fact that omega minus

1 always grew with time. And that depended on

the Friedman equations. During inflation, the

Friedman equations in some schematic sense

were the same equations, but the rho's that go

into it are different. So the equations

basically are different. And if we look at the key

equation, the first order Friedman equation,

H squared equals 8 pi over 3, G rho, minus

kc squared over a squared. This was the

equation that we used to derive this flatness problem. We could see

immediately, if we now think about it, during

the inflationary process, things are completely reversed. Omega is driven towards

1, and exponentially fast. And the way I see that is to

just ask what this equation does during inflation. And during inflation,

we just replace rho by this constant

value rho sub f, the energy density of the

false vacuum is fixed. And that means that during

inflation, this term is fixed. This term is falling off

like 1 over a squared. And a is growing

exponentially with time. So that means that

this term is decreasing relative to that term

by a huge factor, by the square of the

expansion factor. So in our sample

numbers over there, we were talking about an

overall expansion from 10 to the minus 27

centimeters to 10 centimeters. That's expansion by a

factor of 10 to the 28. In that case, during

inflation, this term decreases by a factor

of 10 to the 56 while this term

remains constant. And that means that

by the end inflation, this is completely

negligible and this equation without this extra term means

you have a flat universe. So during inflation,

the universe is driven towards flatness,

like one over a squared, which is 1 over the square of this

exponential expansion factor, so very, very rapidly. And finally, the third of the
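In numbers, the flatness argument above looks like this; a minimal sketch using the lecture's sample expansion factor, with an arbitrary assumed initial deviation of omega from 1.

```python
# Hedged sketch: during inflation H is essentially constant, so
# |omega - 1| = k c^2 / (a H)^2 simply falls like 1 / a^2.
expansion = 1e28              # sample linear expansion factor from the lecture
omega_minus_1_start = 0.5     # arbitrary order-one initial deviation (assumed)

omega_minus_1_end = omega_minus_1_start / expansion**2
print(omega_minus_1_end)      # ~5e-57: driven toward flatness exponentially fast
```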

problems that we talked about was the monopole problem. We argued, originally

Kibble argued, that you’d expect approximately

one of these monopoles to form per horizon volume,

just because the monopoles are basically twists in

the scalar fields. And there’s no way the scalar

fields can organize themselves on distances larger than

the horizon distance. So you’d expect

something on order of– It’s a crude argument. –but something on

the order of one knot in the scalar field

per horizon volume. And that led to far too

many magnetic monopoles, fantastically too many. And the formation of one

monopole per horizon volume is hard to avoid. I don’t know of any

way of avoiding it. But what gets us out

of the problem here, is that we can easily arrange

in our inflationary model for the bulk of the

inflation to happen after the monopoles form. And that means

the monopoles will be diluted by the

exponential expansion that will occur after

the monopoles form. The rest of the

matter is not diluted, because when

inflation takes place it’s at a constant

energy density so the amount of other

stuff that will be produced is not diminished by

this extra expansion. But the monopolies,

which will produce first, will be thinned out

by the expansion. So the basic idea here

is that the volume goes by a factor of the order,

using our sample numbers, it's linear growth by

a factor of 10 to the 28. Volumes go like cubes

of linear distances. So 10 to the 28 cubed is

10 to the 84, I think. Probably right. And that means that we

can dilute these monopoles by a fantastic factor and make

everything work, if we just arrange for the

monopoles to be produced before the exponential

expansion sets in. OK. Finally, and I think
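The dilution factor itself is simple arithmetic; a hedged sketch with the lecture's sample numbers, taking Kibble's rough one-monopole-per-horizon-volume estimate as the starting density.

```python
# Hedged sketch: monopoles formed before inflation are diluted by the volume
# growth, while the constant-density false vacuum is not similarly diluted.
linear_expansion = 10**28            # sample linear factor from the lecture
volume_dilution = linear_expansion**3
n_before = 1.0                       # ~ one monopole per initial horizon volume
n_after = n_before / volume_dilution
print(volume_dilution == 10**84)     # True: volumes go like the cube
print(n_after)                       # ~1e-84 monopoles per initial horizon volume
```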

this is probably the last thing we

will talk about, another problem that we

could have talked about and we’ll talk about

the solution of it now, even though we never really

talked about it as a problem, is the small scale

nonuniformities of the universe. And if we look out

around the universe we don’t see a uniform

mass distribution, we see stars and stars collected

in galaxies and galaxies collected in clusters

and clusters collected in super clusters, a very

complicated array of structure in matter. Those are all nonuniformities. And we think we understand how

they evolve from early times because we also see in the

cosmic microwave background radiation, small fluctuations,

which we can now actually measure to very high

degree of accuracy. Those small fluctuations

provide the seeds for the structure in the

universe that happens later because of the fact that the

universe is gravitationally unstable. So at very early times

what we’re seeing directly in the CMB, these

nonuniformities were only at the level

of one part in 100,000. But nonetheless,

in regions where there was slightly

more mass density, that pulls in

slightly more matter, producing still stronger

gravitational field pulling in more matter and that

amplifies the fluctuations. And that effect, we believe,

is enough to account for all the structure that

we see in the universe as originating from these tiny

ripples on the cosmic microwave background. But that still

leaves the question of where do these tiny

ripples come from. And in conventional

cosmology, one really had no idea where

they come from. One knew they had to be there

even before they were seen because we had to account for

the structure in the universe and how it evolved. When they finally were seen,

they were just right. Everything fit together. In inflationary cosmology,

the exponential expansion tends to smooth everything out. And for a while, those

of us working on it were very worried that inflation

would produce a perfectly smooth universe and we’d

have no way of accounting for the small fluctuations

that were needed to explain the existence of

stars and galaxies. But then it was realized

that quantum theory can come to our rescue. Classically, inflation

would smooth everything out and produce a uniform

mass density everywhere. But quantum

mechanically, because quantum mechanical theories are

fundamentally probabilistic, the classical prediction

of a uniform density turns into a quantum mechanical

prediction of an almost uniform density, but with some

places being slightly higher than that uniform density, other

places being slightly lower. And qualitatively,

that’s exactly what we see in the cosmic microwave

background radiation. And furthermore, we can

do it quantitatively. One can actually

calculate the effects of these quantum fluctuations. And that’s what I

want to show you now. The actual data on that,

which is just gorgeous. Shown here is the

WMAP seven-year data, where what's being

plotted is the amplitude of the fluctuations versus

the angular wavelength. One is seeing these as

a pattern on the sky. So the wavelength you see as

an angle, not as a distance. And long wavelengths

are at the left. Short wavelengths

are at the right. It’s really done as

a multipole expansion, if you know what that means. And those numbers are

shown on the top. And the data points are

shown as these black bars with their appropriate errors. And the red line is the

theoretical prediction due to inflation, putting

in the amount of dark energy that we need to fit

the data that we also measure from the supernovae. And it’s absolutely gorgeous. So I have a little

Eureka guy to show you how happy I was when

I saw this graph. And with the help

of Max Tegmark, we’ve also put on

this graph what other theories of

cosmology would give. So if we had an

open universe, where omega was just 0.2 or 0.3,

as many people believed before 1998, we would have

gotten this yellow line. If we had inflation

without dark energy, making omega equal to 1 out of

matter, out of ordinary matter, we would get this

greenish line, which also doesn’t fit

the data at all. And there’s also something

called cosmic strings that we haven’t talked about. It was for a time, thought

to be a possible source of the fluctuations

in the universe. But once this data came in, that

became completely ruled out. Now this is not quite

the latest data. The latest data come from

the Planck satellite. And it was released last March. And I don’t have that

plotted on the same scale, but this is the latest

data which, as you see, fits even more gorgeously

than the data from WMAP. The more accurately

it gets measured, the better it fits the

theoretical expectations. Now I should mention for

truth in advertising, that this curve is to some

extent fit to the data. It's actually a

six parameter fit. But of those six parameters,

I don’t have time to talk about them in

detail, but four of them are pretty much determined

by other features. Two of them are just

fit to the data. And one of them

is something that changes the shape a little bit. It’s the opacity of

the space between us and the surface of

last scattering. An important parameter that’s

fit that you should know about, is the height of the curve. The height of the curve

can, in principle, be predicted by

inflation if you knew the full details of

this potential energy function that I’ve erased

for the scalar field. But we don’t. We just have some

qualitative idea about what it might look like. So the height of the

curve is fit to the data. But nonetheless, the location of

all these peaks and everything really just come out of the

theory and it’s just gorgeous. And it works wonderfully. So the bottom line

is I think inflation does look like a very good

explanation for the very early universe. It’s kind of bizarre since

it talks about times like 10 to the minus 35 seconds

after the Big Bang which seemed like a totally incredible

extrapolation from physics that we know. But nonetheless,

marvelously, it produces data that agrees fantastically

with what astronomers are now measuring. So we’ll stop there. I want to thank you all

for being in the class. It’s really been a

fun class to teach. I have very much enjoyed

all of your questions and enjoyed getting

to know you and hope to continue to see you around. Thank you.
