4. Best responses in soccer and business partnerships

February 15, 2020


Professor Ben Polak: So
last time we introduced a new idea and the new idea was that
of best response. And what was the idea?
The idea was to think of a strategy that is the best you
can do, given your belief about what the other people are doing:
what your opponents are doing, what other players are doing.
And you could think of this ─ you could think of this
belief as the belief that rationalizes that choice.
So if you have a boss you might want to ─
and he or she is going to ask you why you chose the action you
did. If you took a best response to
some belief, you can say I took this action
because I believed other people are going to do this.
And since that was the best you could do under that belief,
you’ll hopefully keep your job. I promised that today we would
look at the most important game in the world.
And, as announced last time, the most important game is the
penalty kick game. So this is a game that occurs
in soccer and just to give an idea of how important it is for
those people who are unfortunate enough not to be soccer fans
here, the last World Cup was decided
on penalty kicks. In England’s case,
England goes out of every single World Cup and every
single European competition because it loses on penalty
kicks, usually to Germany,
it has to be said. And more immediately,
this weekend, as all of you are thinking the
most important event in the world was whatever was happening in
Congress to do with Iraq, actually, the most important
event in the world was taking place in England where my
favorite team, Portsmouth, were playing Kaj,
the head TA’s favorite team, Liverpool.
And, about a third of a way through that game there was a
penalty, and … I’ll let you know what happened
later. So keep at the back of your
mind that the real world example that matters here is Portsmouth
versus Liverpool this weekend. (Kaj, the head TA is
Scandinavian so I’ve got no idea why he’s supporting Liverpool
anyway, but I think maybe he spells it
like this or something like this).
So what we’re going to do is we’re going to look at some
numbers that are approximately the probabilities of scoring
when you kick the penalty kick in different directions.
But just make sure everyone ─ do I need to explain
what’s going on here? Is everyone familiar with this
situation? There’s one guy who’s going to
run up and kick the ball. The goal keeper is standing at
the goal. And their aim is to kick it
into the goal. That’s probably enough.
You’ve all seen this right? If you haven’t seen this,
go see it. I mean come on!
So things you should do in life: read Shakespeare and see a
soccer game. So the rough numbers for this
are as follows ─ and actually later on in the
class I’ll give you some more accurate numbers,
but these will do for now. There are three ways,
the goal─ the attacker could kick the
ball. He could kick the ball to the
left, the middle, or the right.
And I shouldn’t just say he here of course,
I mean this is he or she, but if I get that wrong as we go on,
please forgive me for it. The goalie can dive to the left
or the right. In principle the goalie could
stay in the middle. We’ll come back and talk about
that later. So this is the guy who is
shooting, he’s called the shooter and this is the goalie.
These are roughly ─ well, let me put up the payoffs
for this game and then I’ll explain them.
So you’ll notice that I’m just going to put in numbers here and
then the negative of the number and the numbers are roughly like
this: (4,-4). So the numbers are (4,
-4), (9, -9), (6, -6), (6,
-6), (9, -9) and (4, -4).
And the idea here is that the number 4 represents 40% chance
of scoring if you shoot the ball to the left of the goal and the
goal keeper dives to the left. So the payoff here is something
like u_1(left, left), shooting left when the goal keeper dives to the left, is equal to 4, by which I mean there's a 40%
chance of scoring. So the number for the–The
payoff for the shooter is his probability of scoring and the
payoff for the goal keeper is just the negative of that.
Let’s keep things simple. As I said before,
for now we’ll ignore the possibility that the goal keeper
could stay put. So how should we start
analyzing this important game? Well we start with the ideas we
learned already several weeks ago now, or more than a week
ago, which is the idea of dominant strategies.
Does either player here have a dominated strategy?
Does either player have a dominated strategy?
No, it’s kind of clear that they don’t.
Let’s just look at the shooter, for example.
So you might think that maybe middle dominates left,
but notice that middle has a higher payoff against left than
shooting to the left. It has a lower payoff if the
goalie dives to the right. So, not surprisingly in this
game, it turns out, that if the goalie dived to the
left you’re best off shooting to the right,
second best off shooting to the middle, and worst off shooting
to the left. That’s if the goalie dives to
the left. And if the goalie dives to the
right, you’re best off shooting to the left, second best off
shooting to the middle, and worst off by shooting to
the right; and that’s kind of common sense.
Okay, so if we had stopped the class after the first week where
all we learned to do was to delete dominated strategies,
we’d be stuck. We’d have nothing to say about
this game and as I said before, this is the most important
game, so that would be bad news for Game Theory.
But luckily, we can do a little bit better
than that. Before I do that,
let’s just take a poll of the class.
How many of you, if you were playing for,
I guess it’s going to be America, which is a sad thing to
start with, never mind. You guys are playing for
America and you’re taking this penalty kick and it’s the last
kick in the World Cup, how many of you,
show of hands, how many of you would shoot to
the left? How many of you would shoot to
the middle? How many of you would shoot to
the right? We’ve got kind of an even split
there, pretty much an even split.
We’re going to assume these are the correct numbers and we’re
going to see if that even split is really a good idea or not.
So how should we go about thinking about this?
What I suggest we do is we do what we did last time and we
start to draw a picture to figure out what my expected
payoff is, depending on what I believe the
goalie is going to do. So this is the same kind of
picture we drew last time. So on the horizontal axis is my
belief, and my belief is essentially the probability that
the goalie dives to the right. Now as I did last time,
let me put in two axes to make the picture a little easier to
draw. So this is 0 and this is 1.
And you probably have lines in your notes but I don’t,
so let me just help myself a bit.
So this is 2, 4, 6, 8, 10, so this is going to be 2, 4, 6, 8, and 10, and over here 2, 4, 6, 8, and 10, and 2, 4, 6, 8, and 10. This would be the basis of my
picture. So let's start with the possibility of shooting to the left. Let's do this in red.
So I shoot to the left and the goalie dives to the left,
my payoff is what? It’s 4.
If I shoot to the left and there’s no probability of the
goalie diving to the right, which means that they dive to
the left, then my payoff is 4, meaning I score 40% of the
time. If I shoot to the left and the
goalie dives to the right, then I score 90% of the time, so my payoff is 9. By the way, why is it 90% of the
time and not 100% of the time? I could miss;
okay, I could miss. That happens rather often it
turns out, well 10% of the time. So we know this is going to be
a straight line in between, so let’s put this line in.
So what’s this? It’s the expected payoff to
Player I of shooting to the left as it depends on the probability
that the goal keeper dives to the right.
And conversely, we can put in …
well let’s do them in order. So middle: so if I shoot to the
middle and the goal keeper dives to the left,
then my payoff is 6, or I score 60% of the
time, and if I shoot to the middle and the goalie dives to
the right I still score 60% of the time,
so once again it’s a straight line in between.
So this line represents the expected payoff of shooting to
the middle as a function of the probability that the goal keeper
dives to the right. Finally ─
let’s do it in green ─ let’s look at the payoffs,
expected payoffs, if I shoot to the right.
So if I shoot to the right and the goalie dives to the left,
then I score with probability .9, or my payoff is 9.
Conversely, if I shoot to the right and he or she dives to the
right, then I score 40% of the time, so here's my payoff, 4.
And here’s my green line representing my expected payoff
as the shooter, from shooting to the right,
as a function of the probability that the goalie
dives to the right. Did everyone understand how I
constructed this picture? Easier picture than the one we
constructed last time. So what does everyone notice
from this picture? What’s the first thing that
jumps out at you from this picture?
Assuming these numbers are true, what jumps out at you from
this picture? Can we get some mikes up here?
So Ale, can we get this guy? Stand up first, the guy in red.
What’s your name? Don’t hold the mike;
just shout. Student: There’s no
point at which the 6, at which shooting in the
middle gets a higher payoff. Professor Ben Polak:
Exactly, exactly. So the thing that I hope jumps
out at you from this picture is (no prizes for guessing that this crossing point is at ½): so if the probability that the goalie's going to dive to the right is less than a ½, then the best you can do is represented by this green line, which is shoot to the right. So if the goalie is going to dive to the right with probability less than a ½, you should shoot to the right. Conversely, if you think the goalie's going to dive to the right with probability more than
a ½, then the best you can do is
represented by the pink line, and that’s shooting to the
left, or if you think the goalie’s
going to dive to the right with the probability more than a
½, the best you can do,
your best response is to shoot to the left.
And there is no belief you could possibly hold given these
numbers in this game that could ever rationalize shooting the
ball to the middle. Is that right?
So no: to say it another way, middle is not a best response
to any belief I can hold about the goal keeper,
to any belief. So there’s a lesson here,
and it’s pretty much (just to make the lesson resonate again):
imagine there you are in the World Cup,
you’re playing for England, you have to justify your
actions not only to your teammates and your manager,
and your boss, but to about 60 million rather
angry fans. What’s the lesson here?
I’m hoping it was going to be obvious, what’s the lesson here?
The lesson is, do not shoot to the middle.
Let me qualify that lesson slightly, unless you’re German.
Germans can do whatever they like.
Now, it turns out that about a third of the way through the game between my team Portsmouth and Kaj's team Liverpool this weekend, there was a penalty. Portsmouth had a penalty and
the guy who was going to take the penalty came up to kick the
penalty and he kicked it to the middle and it was saved.
So just confirming this lesson, not only did that spoil
my weekend but it also spoiled my opportunity to make fun of
Kaj all week, so it was really a big deal.
So this weekend a penalty was missed exactly by somebody
ignoring this rule. There’s a more general lesson
here, and the more general lesson is, of course:
do not choose a strategy that is never a best response to
anything you could believe. The more general lesson,
do not choose a strategy that is never a best response to any
belief. Notice here,
just to underline something which came up at the end last
time, that doesn’t just mean beliefs
of the form, the goalie’s going to dive left or the goalie’s
going to dive right. It means all probabilities in
between. So we’re allowing you to,
for example, to hold the belief that it’s
equally likely that the goalie dives left or dives right.
But if there’s no belief that could possibly justify it,
don’t do it. And underlining what arises in
this game, notice that in this game we’re able to eliminate one
of the strategies, in this case the strategy of
shooting to the middle, even though nothing was
dominated. So when we looked at domination
and deleted dominated strategies, we got nowhere here.
Here, at least, we got somewhere,
we got rid of the idea of shooting to the middle.
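
(For anyone who wants to check that claim in code: below is a minimal Python sketch, mine rather than the lecture's, that scans every belief p from 0 to 1 and confirms that shooting to the middle never comes out on top under the rough payoffs from the board.)

    # Sketch, not from the lecture: the shooter's rough payoffs from the matrix above.
    # Each entry is (payoff if the goalie dives left, payoff if the goalie dives right);
    # p is the believed probability that the goalie dives to the right.
    payoffs = {"left": (4, 9), "middle": (6, 6), "right": (9, 4)}

    def expected_payoff(action, p):
        u_if_left, u_if_right = payoffs[action]
        return (1 - p) * u_if_left + p * u_if_right

    for p in [i / 100 for i in range(101)]:
        best = max(payoffs, key=lambda a: expected_payoff(a, p))
        assert best != "middle"   # right is best for p below 1/2, left above 1/2, middle never
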
Now if you can just persuade the English and Portsmouth
soccer players of this lesson, I’d be very happy.
So before we leave it, I’ve been making a point in
this class of coming back to reality from time to time,
so this is a very simple model of the soccer game in reality.
Let’s just try, any of you on the Yale Soccer
Team? No?
Have any of you played soccer for your college?
One or two. Have you ever played soccer?
How many of you have ever played soccer?
Okay, good I was getting worried there for a second.
So, one thing we said last time was when we put up a model and
try and draw lessons from it, we should just take a step back
and say, what’s missing here? So let’s try and get some kind
of ─ I’ll come off the stage to make it easier for
Jude. What’s missing here?
What’s missing in this model of this piece of soccer,
this game within a game? What’s missing here?
Why is this not necessarily a hundred percent accurate model?
I’ll need some mikes up here. Can you?
You have to really shout because you’re miles from the
mike there. Student: You might be
better kicking to the left or to the right depending on whether
you’re right handed or left handed.
Professor Ben Polak: Good, so one thing that’s
clearly missing here is I’m ignoring that in fact right
footed players find it easier to shoot to their left,
which is actually the goalie’s right.
So right footed players find it easier to shoot to the left as
facing, to shoot across the goal.
Does everyone confirm that’s true?
Yeah, anyone ever tried this? It's a little easier to hit the
ball hard to the opposite side from the side which is your foot
and that’s the same principle in baseball.
It’s a little bit easier to pull the ball hard then it is to
hit the ball to the opposite field.
Yes? Student: Players don’t
make their decision before, and then stick with it
necessarily. Professor Ben Polak: All
right, so players are making decisions as they’re running up.
I think that’s okay here, right? We can think of this as the
decision happening at the instant at which you kicked.
So you’re right that you could have made your decision back in
the locker room, or you could have made the
decision at half time, but ultimately what matters
─ let’s hope that goes away.
We sure that it’s not off my mike?
Just in case I’ll move my mike a bit lower.
So I’m going to shout louder because my mike is now lower.
It doesn’t really matter exactly when the decision is
made. At the end of the day,
the goalie doesn’t know the decision of the shooter and the
shooter doesn’t know the decision of the goalie.
So it’s as if that decision is made instantaneously as the
shooter is running up. What else?
Yeah, can we get this. Tae can you get this guy here?
Stand up. Shout out.
Student: The goalie might stay in the middle.
Professor Ben Polak: The goalie might stay in the middle.
That’s a good point, of course, I’ve abstracted from
that here, and in fact, we’ll come back,
I think, I’ll try and put that onto a problem set,
but I think you’re right, it is an issue here.
Anything else? Well let me put up some real
numbers and we’ll see about how much the correspondence to what
we’ve got here. So I gave you some numbers I
made up actually a long time ago, but since I’ve been using
this game in class, somebody went out and checked.
And it turns out that ignoring middle for a second,
ignoring middle ─ so these are real numbers,
and these numbers come from a paper in the AER by Chiappori
and some co-authors and for everyone at Yale,
I’ll make that paper available to you through JSTORE or through
the Yale Library, so you can go look at it if
you’d like. What they worked out was the
following table. And again, we need to be a
little bit careful here. So I’m going to put the left
and right in inverted commas because actually what they did
was they corrected for people’s natural direction and not
natural direction. So the idea here is shooting to
the left if you’re right footed is the natural direction,
so left here means the natural direction.
Of course, if you’re left footed it goes the other way,
but they’ve corrected for that. It turns out that the
probabilities of scoring here are as follows,
63.6, 94.4, 89.3, and 43.7.
So things are not–I haven’t given you the numbers for the
middle but–So you can see that whoever it was who said,
you’re slightly better off, you score with slightly higher
probabilities when you kick to your natural side is exactly
right. Things are still not
dominated and we could still have done exactly the same
analysis, and actually you can see I’m
not very far off in the numbers I made up, but things are not
perfectly symmetric. I forget who it was who said
that, but that does turn out to be true.
Certainly the goalie staying put is an issue,
as I said we’ll deal with that in the problem set,
but there’s another issue here. Let me just raise one more
issue. One more issue is,
you have another decision when you run up to hit this,
hit the penalty other than just left and right.
Someone whose played the game, what’s the other decision you
really face? Can I get the woman here?
What’s the other decision you face?
Student: You could kick up to the top corner.
Professor Ben Polak: Okay, you can kick up and down,
that’s true. Okay, that’s true actually,
that’s true. But I meant something else,
that’s right, but I meant something else.
What else is there? Try this guy here.
Student: Spin. Professor Ben Polak:
Well that’s getting subtle here. It’s a much more basic thing,
what’s a more basic thing? What’s a more basic thing here?
Take it, yeah right in front of you.
Student: Speed. Professor Ben Polak:
Speed, right. So another decision you face is
do you just try and kick this ball as hard as you can or do
you try and place it? That’s probably as important a
decision as deciding which direction to
hit, and it turns out to matter. So, for example,
if you’re the kind of person, (which is I have to say all I
ever was), if you’re the kind of person
who can kick the ball fairly hard but not very accurately,
then it actually might change these numbers.
If you can kick the ball very hard, but not very accurately,
then if you try and shoot to the left or right,
you’re slightly more likely to miss.
On the other hand, as you shoot to the middle,
since you’re kicking the ball hard, you’re slightly more
likely to score. Now, if this all seems like
arcane and irrelevant detail, let’s just see why this matters
in the picture, and then we’ll leave soccer,
at least for today. So if you’re the kind of person
who can kick the ball hard but not accurately,
then it’s going to lower the probabilities of scoring as you
kick towards the right because you might miss,
and it’s going to lower the probability of scoring as you
hit towards the left because you’re likely to miss,
and it might actually raise the probability of your scoring as
you hit towards the middle, because you hit the ball so
hard it’s really pretty hard for the goal keeper to stop it.
Here it goes in the middle, and if you look carefully there (I didn't really make it clear enough), you can see that suddenly a strategy that looked crazy, shooting to the middle, starts to seem okay. It turns out,
if you look at those dotted lines, there’s an area in the
middle, the area between here and here,
this little area here, you actually might be just fine
shooting to the middle. So in reality,
we need to take into account a little bit more,
and in particular, we need to take into account
the abilities of players to hit the ball accurately and/or hard.
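
(To see how that can change the earlier conclusion, here is the same kind of sketch with invented numbers for a hard-but-inaccurate kicker; the payoffs 3, 8, 7, 7, 8, 3 below are purely illustrative assumptions of mine, not the numbers from the paper.)

    # Sketch with made-up payoffs: the corners become a bit riskier, the middle a
    # bit better, for someone who kicks hard but not very accurately.
    payoffs = {"left": (3, 8), "middle": (7, 7), "right": (8, 3)}

    def expected_payoff(action, p):
        u_if_left, u_if_right = payoffs[action]
        return (1 - p) * u_if_left + p * u_if_right

    middle_best = [i / 100 for i in range(101)
                   if all(expected_payoff("middle", i / 100) >= expected_payoff(a, i / 100)
                          for a in payoffs)]
    print(min(middle_best), max(middle_best))   # about 0.2 and 0.8

With these made-up numbers, shooting to the middle is a best response for any belief between roughly 0.2 and 0.8, which is the little area between the dotted lines on the board.
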
And if those people, if you’re interested in that
─ and I realize at this point I probably lost the
interest of most Americans in the room,
but for the non-Americans in the room, the people who are
interested in the real world ─ as I said before,
I’ll put that article online and that goes through all the
gory detail of this. I should just say that the data
I just gave you is real data but it’s actually mixed ability
data. This data comes half from the
Italian league, which is pretty good and half
from the French league, which sucks.
So who knows how much we should trust it.
Okay, so that was our example for the day and our first brush
with reality for the day. Let’s clean the board and do
some work. Do a bit more formal stuff here.
So here we have an example but I want to go back to the
generality and to a bit of formalism.
By the way, I should tell you that the game ended nil-nil or
0-0. It’s a moral victory for me I
think. So I want to be formal about
these things I’ve been mentioning informally.
And in particular, I want to be formal about the
definition of best response. I’m going to put down two
different definitions of best response, one of which
corresponds to best response to somebody else playing a
particular strategy like left and right,
and the other is just going to correspond to the more general
idea of a best response to a belief.
It’ll allow us to use our notation and just be a little
bit more nerdy. So Player i’s strategy,
Ŝ_i (there’s going to be a hat to single it
out) is a best response (always abbreviated BR),
to the strategy S_- i of the other players if ─
and here’s our real excuse to use our notation ─
if the payoff to Player i from choosing Ŝ_i
against S_- i is weakly bigger than her payoff
from choosing some other strategy,
S_i’, against S_- i and
this better hold for all S_i’ available to
Player i. So in previous definitions,
we’ve seen the qualifier, for all, be on the other
player’s strategy. Here, the qualifier for all is
on my strategy. So strategy Ŝ_i
is a best response to the strategy S_- i of the
other players if my payoff from choosing Ŝ_i
against S_- i, is weakly bigger than that from
choosing S_i’ against S_- i,
and this better hold for all possible other strategies i
could choose. There’s another way of writing
that, that’s kind of useful, or equivalently,
Ŝ_i solves the following.
It maximizes my payoff against S_- i.
So you’re all used to, I’m hoping everyone is used to
seeing the term max. So I solve the maximization
problem, how do I maximize my payoff given that other people
are choosing S_- i? Again, for the math phobics in
the room, don’t panic, this is just writing down in
words what we’ve already seen a couple of times already today,
well today and last time. Let’s generalize this
definition a little bit, since we want it to allow for
more general beliefs. So just rewriting,
Player i’s strategy, same thing, Ŝ_i
is a best response. But now let’s be careful,
best response to the belief P about the other player’s
choices, if ─ and it is going to
look remarkably similar except now I’m going to have
expectation ─ if the expected payoff to
Player i from choosing Ŝ_i,
given that she holds this belief P, is bigger than her
expected payoff from choosing any other strategy,
given she holds this belief P; and this better hold for all
S_i’ that she could choose.
So very similar idea, but the only thing is,
I’m slightly abusing notation here by saying that my payoff
depends on my strategy and a belief,
but what I really mean is my expected payoff.
This is the expectation given this belief.
Once again, we can write it the other way, or Ŝ_i
solves max when I choose S_i,
to maximize my expected payoff, this time from choosing S_i against S_- i, where the expectation over S_- i is taken with respect to the belief P.
What do I mean by expected payoff?
Just in our example, so just to make clear what that
expectation means, so for example,
the expected payoff to Player i in the game above from choosing
left given she holds the belief P is equal to the probability
that the goal keeper dives to the left, times Player i's payoff from choosing left against left, plus the probability that the goal keeper dives to the right, times Player i's payoff from choosing left
against right. Okay, so expectation with
respect to P just means exactly what you expect it to mean.
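
(Written as code rather than notation, the definition is just an argmax of an expected payoff. The sketch below is mine, not the lecture's; a best response to a pure strategy is the special case where the belief puts probability 1 on a single strategy.)

    # Sketch: best response to a belief.  u_i(s_i, s_minus_i) is player i's payoff;
    # belief is a dictionary of probabilities over the other players' strategies.
    def expected_payoff(u_i, s_i, belief):
        return sum(prob * u_i(s_i, s_minus_i) for s_minus_i, prob in belief.items())

    def best_response(u_i, own_strategies, belief):
        return max(own_strategies, key=lambda s_i: expected_payoff(u_i, s_i, belief))

    # The penalty-kick example: the shooter's payoffs, and a 50-50 belief.
    u_shooter = {("left", "left"): 4, ("left", "right"): 9,
                 ("middle", "left"): 6, ("middle", "right"): 6,
                 ("right", "left"): 9, ("right", "right"): 4}
    def u(s, t):
        return u_shooter[(s, t)]
    print(expected_payoff(u, "left", {"left": 0.5, "right": 0.5}))   # 0.5*4 + 0.5*9 = 6.5
    print(best_response(u, ["left", "middle", "right"], {"left": 0.5, "right": 0.5}))
    # -> 'left' (it ties with 'right' at 6.5; 'middle' only gives 6)
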
So this is a little bit of math, a little formality,
but is everyone okay with that? I haven’t done anything here.
All I’ve done is write down slightly boringly and nerdily,
exactly what we already saw in a couple of occasions.
Student: [inaudible] Professor Ben Polak:
Thank you. So that right now is going to
seem a little bit like a sudden blast of notation,
so let’s just remind ourselves what we really care about is the
idea, it’s not the notation, and let’s spend the next half
hour applying these ideas to an application.
So this application is not as important as soccer,
but it’s a bit more Economicsy, so I can justify it under the
Economics title of the class. So clearing off my soccer game.
So imagine–What we’re going to look at is a game involving a
partnership. So Partnership Game.
And I believe this game is covered in some detail in the
Watson textbook, or something very close to it
is, if you’re having trouble. The idea is this.
There are two individuals who are going to supply an input to
a joint project. So that could be a firm,
it could be a law firm, for example,
and they’re going to share equally in the profits .
So one example would be a firm that they both earn,
sorry, they both own, and another example would be
two of you working as a study group on my homework assignment.
So they’re going to share equally in the profits of this
firm, or this joint project, but you’re going to supply
efforts individually. So let’s just be a bit more
formal. So the players are going to be
the two agents and they own this firm let’s call it.
They own this firm jointly and they split the profits,
so they share 50% of the profits each.
So it’s a profit- sharing partnership.
Each agent is going to choose her effort level to put into
this firm. So, it could be that you’re
deciding, as a lawyer, how many hours you’re going to
spend on the job. So for most of you these
decisions will be a question of whether you spend 20 hours a day
at the firm or 21 hours a day at the firm,
something like that. For most of you on your
homework assignments, I’m hoping it’s a little less
than that, but not much less than that.
So the strategy choices, we’re not going to do it in
hours, let’s just normalize and regard these choices as living
in 0 to 4, and you can choose any number
of hours between 0 and 4. Just to mention as we go past
it, a novelty here. Every game we’ve seen in the
class so far has had a discrete number of strategies.
Even the game, when you chose numbers,
you chose numbers 1, 2, 3, 4, 5, up to 100,
there were 100 strategies. Here there’s a continuum of
strategies. You could choose any real
number in the interval [0,4]. So you have a continuum of
possible choices. That’s not going to bother us
but let’s point out it’s there. So there’s a continuum of
strategies. In principle,
you could bill your clients for fractions of a second or
fractions of a minute. Let’s wonder what the firm
profit is given by. So this partnership,
this law firm, its profit is given by the
following expression: 4 times the effort of Player I,
plus the effort of Player II, plus a parameter I’ll call B,
times the product of their efforts.
This is their profit. And I won’t tell you what B is
for now, but let’s just–I mean I won’t tell you exactly what it
is–but I’ll explain it. We’ll assume that B lives
between 0 and a 1/4 and it’s known, I just want to be able to
vary it later. So what’s the idea here?
The idea is Player I directly contributes profits to the firm
by working, as does Player II. But they also contribute
through this interaction term. How do we think of that
interaction term? How do we think of that term B
S_1 S_2? When you’re working on your
homework assignments, if your product,
the thing you hand in was just S_1 + S_2,
then you might think what? You might think there’s no
point working in a study group at all.
If the product is just the sum or multiple of a sum of the
inputs, there wouldn’t be much of a point working in a team at
all. It’s the fact you’re getting
this extra benefit from working with someone else that makes it
worth while working as a team to start with.
Is that right? So we can think of this term
as having to do with complementarity, or synergy, a very unpopular
word these days but still: synergy.
So we’re going to assume that when you work together there are
some synergies. Some of you are good at some
parts of the homework, some of them are good at other
parts of the homework. And so in this law firm,
one of these guys is an expert on intellectual property and the
other one on fraud or something. So I’ve got the agents.
I’ve got the strategy set. I know something about the
profits of the firm. I need to tell you about their
payoffs. So the payoffs:
the payoff for Player I is going to depend,
of course, on her choice and on the choice of her partner,
and it’s going to equal a ½–because they’re
splitting the profits–so a ½ of the profits.
So a ½ of 4 times (S_1 plus S_2 plus B S_1 S_2).
She gets half of those profits but it also costs her
S_1 squared. So S_1 squared is her
effort costs, it’s her input costs.
This is the effort cost. Similarly, Player
II–everything’s symmetric here–Player II’s payoff is the
same thing. This term is the same except
we’re going to subtract off Player II’s efforts squared:
S_2 squared. So you get the profits of the
firm minus the disutility of having missed all that sleep.
There’s a guy in about the fifth row there who’s missed too
much sleep, so somebody just nudge him.
That’s it, good. We won’t put him on camera just
nudge him. That’s it, good.
There you go. Next time we’ll use the camera
for that. So now we have everything we
want to analyze this firm and to analyze how things are going to
work, either when you’re working on
your homework assignments or in the law partnership.
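
(If it helps to see those payoffs written out as code, here is a minimal sketch of mine, with B, the synergy parameter between 0 and 1/4, left as an argument.)

    # Sketch, not from the lecture: each partner gets half the firm's profit,
    # 4*(s1 + s2 + B*s1*s2), minus the square of her own effort.
    def profit(s1, s2, B):
        return 4 * (s1 + s2 + B * s1 * s2)

    def u1(s1, s2, B):
        return 0.5 * profit(s1, s2, B) - s1 ** 2

    def u2(s1, s2, B):
        return 0.5 * profit(s1, s2, B) - s2 ** 2
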
Again, just to make this relevant to you,
I mean this is very stylized of course,
but a huge number of businesses out there are partnerships and
do have this kind of profit sharing rule and do have
synergies. So this is a relevant issue in
a lot of businesses. Now we’re going to analyze
this–no secret here–we’re going to analyze this using the
idea of best response. That’s not a surprise to any of
you since that’s where we started the day.
So, in particular, I want to figure out what is
Player I’s best response to each possible choice of Player II?
What is Player I’s best response for each possible
choice S_2 of Player II?
How should I go about doing that?
How should I do that? So here what we did before was
we drew these graphs with probabilities,
with beliefs of Player I and the problem here is,
previously we had a nice simple graph to draw because there were
just two strategies for Player II.
Player II was a goalie, he could dive to the left or
the right. Problem is here,
that Player II has a continuum of strategies and trying to draw
all possible probabilities over an infinite number of objects on
the board is more than my drawing can do.
Too hard. So we need some other technique.
How are we going to find out Player I’s best response?
Somebody? Wave your hands in the air,
way back in the corner. Can I, can we,
let’s get the mike. Stand up but wait for the mike
to come to you. How are we going to do it?
How are we going to figure out what Player I’s best response
is? Shout loudly.
Student: [inaudible] Professor Ben Polak:
Good, okay. That’s certainly the first step.
We’ve got that, actually we’ve got that.
So here’s Player I’s payoff as a function of what Player II
chooses and what Player I chooses, so we have that
already. We have Player I’s payoff as a
function of the two efforts and now I want to find out what is
Player I’s best efforts given a particular choice of
S_2? Yeah.
Student: Take a derivative of S_1.
Professor Ben Polak: Good, take a derivative and–
Student: Set it equal to zero.
Professor Ben Polak: Okay, good.
So we’re going to use calculus. We’re going to use calculus of
one variable. We’re just changing one
variable, S_1. How many of you–we won’t let
the camera see you–how many of you have not–I’m not going to
ask for a show of hands at all. If you have not seen the
calculus that I’m about to use on the board,
or more likely, if you’ve forgotten it since
high school, don’t panic. There is a chapter in the back
of the book, I think it’s chapter 25, that goes over this,
it refreshes your memory of such calculus.
And if you haven’t, if you’ve never seen it before,
if you haven’t taken, for example,
the equivalent of Math 112, come and see us.
We’ll probably try and line up a quick calculus lesson,
a special section for those people.
So if what I’m going to do now is scary, come and see us and
we’ll deal with it. All right, what I’m going to
do, we want to take a derivative of this thing.
What we’re going to do is, we’re asking the question,
what is the maximum, choosing S_1,
of this payoff. Can I multiply the ½ by the 4 just to save myself some time? So the payoff is 2 times (S_1 plus S_2 plus B S_1 S_2) minus S_1 squared. We're asking the question,
taking S_2 as given, what S_1 maximizes
this expression and as the gentleman at the back said,
I’m going to differentiate and then I’m going to set the thing
equal to 0. So I’m almost bound to get this
wrong on the board. So can you all watch me like a
hawk a second? So if we differentiate this
object, I’m going to find a first order condition in a
second. All right, so we differentiate.
I’m going to have 2 still, and then this S_1 is
going to become a 1, and this S_1 here is
going to become a plus B S_2,
everyone happy with that? This S_1 squared is
going to become a minus 2S_1.
That was just differentiating. Everyone happy with the way I
differentiated? Is this coming back from high
school? The cogs are spinning now?
To make this a first order condition, I'm going to say “at the best response,” put a hat over the S_1. At the best response this is equal to 0: 2 times (1 plus B S_2) minus 2 Ŝ_1 equals 0.
Yeah Tae, can you get the guy again.
Student: Wouldn’t that be 2, oh sorry,
never mind. Professor Ben Polak:
Okay, you’re right to shout out because I’m very–I mean doing
it on the board I’m very likely to make mistakes,
but okay. So I differentiated this
object, this is my first derivative and I set it equal to
0. Now in a second I’m going to
work with that, but I want to make sure I’m
going to find a maximum and not a minimum,
so how do I make sure I’m finding a maximum and not a
minimum? I take a look at the second
derivative, which is the second order condition.
So I’m going to differentiate this object again with respect
to S_1. Pretend the hat isn't there for a
second. And none of this has an
S_1 in it, so that all goes away.
And I’m going to get minus 2, which came from here:
minus 2 and that is in fact negative, which is what I wanted
to know. To find a maximum I want the
second derivative to be negative.
So here it is, I’ve got my first order
condition. It tells me that the best
response to S_2 is the Ŝ_1 that solves
this equation, that solves this first order
condition. We can just rewrite that,
if I divide through by 2 and rearrange, it’s going to tell me
that Ŝ_1 is equal to 1 plus B S_2.
So this thing is equal to Player I’s best response given
S_2. Now I could go through again
and do exactly the same thing for Player II,
but I’m not going to do that because everything’s symmetric.
So everyone happy with that? So I could at the same–I could
do the same kind of analysis but we know I’ll get the same
answer. So similarly,
I would find that Ŝ_2 equals 1 plus
B S_1 and this is the best response of Player II,
as it depends on Player I’s choice of effort S_1.
Okay, now I found out what Player I’s best response is to
Player II, and what Player II’s best response is to Player I for
each possible choice of Player II up here,
and for each possible choice of Player I down there.
Now, let’s see if we can get a bit further.
And to get a bit further, let’s draw a picture.
What I’m going to do is I want to draw the two functions we
just found and see what they look like.
This is all in your notes already, so I can get rid of it.
What I can do with is some more chalk.
Excuse me. So what I’m going to do is,
let’s draw a picture that has S_1 on the horizontal
axis and S_2 on the vertical axis.
And there are different choices here 1, 2, 3, and 4 for Player I, and here's the 45º line. If I'm careful I should get this right: 1, 2, 3, and 4 are the possible choices for Player II.
Now before I draw it I better decide what B is going to be.
Okay, so I’m going to draw for the case–I’m going to draw the
best response of Player I and I’m going to draw the best
response for Player II in a minute,
for the case B equals 1/4. So we said B was somewhere
between 0 and 1/4, let’s draw the case for B
equals 1/4. So the expression I want to
draw first of all is the best response of Player I as a
function of S_2 and we agreed that that was given by 1
plus B S_2, so now 1 plus 1/4 S_2.
So for each possible choice of S_2,
I’m going to draw Player I’s best response and we’ll do it in
red. So if Player II chooses 0,
what is Player I’s best response?
Somebody shout out. Student: 1.
Professor Ben Polak: 1, okay.
So 1 plus 1/4 of 0 is 1, so if Player II chooses 0,
Player I’s best response is to choose 1.
What if Player II chooses 4? If Player II chooses 4,
what would be Player I’s best response?
So it’ll be 1 plus 1/4 times 4,1/4 times 4 is 1,
so 1 plus 1 is 2, so Player II’s best response in
that case will be 2. So if Player I chooses 4,
Player II should choose, I’m sorry, Player II chooses 4,
Player I should choose 2, and this is a straight line in
between. So the line I’ve just drawn is
the best response for Player I as it depends on Player II’s
choice. Everyone happy with how I drew
that? I’m assuming you’re taking it
on faith that it is a straight line in between,
but it is. The way we read this graph,
is you give me an S_2,
I read across to the pink line and drop down,
and that tells me the best response for Player I.
Now we can do the same for Player II, we can draw Player
II’s best response as it depends on the choices of Player I,
but rather than go through any math, I already know what that
line’s going to look like. What does that line look like?
Somebody raise your hand. Somebody?
What will Player II’s best response look like as a function
of Player I’s choice in the same picture?
Someone we haven’t had before, we’ve had all these guys
before, someone else. Yeah, there’s a guy in the
middle, can we get in to him? Yeah, maybe it’s easier from
that side. Shout out loud so the mike can
hear you. Student: It should be a
reflection across the 45° line.
Professor Ben Polak: Right, exactly.
So if I drew the equivalent line for Player II,
which is Player II’s best response for each choice of
Player I, we’re simply flipping the
identities of the players, which means we’ll be reflecting
everything in that 45º line.
So it’ll go from 1 here to 2 here, and it’ll look like this.
So this the best response for Player II for every possible
choice of Player I, and just to make sure we
understand it, what this blue line tells me is
you give me an S_1, an effort level of Player I,
I read up to the blue line and go across and that tells me
Player II’s best response. Okay, now we’re making some
progress. What do we notice?
Remember we said that one of the lessons of today’s class,
the second lesson. The first lesson was don’t
shoot towards the middle of the goal and the second more general
lesson was what? It was don’t ever play a
strategy that is never a best response to anything.
I admit I’m cheating a little bit here because I’m ignoring
beliefs, but trust me that’s okay in this game.
So are there strategies here that are never a best response
to anything? Put another way,
what strategies of Player I’s are ever a best response?
Anybody? Well, let’s have a look.
If Player II chooses 0 then Player I’s best response is 1,
and that’s as low as he ever goes.
So these strategies down here less than 1 are never a best
response for Player I. If Player II chooses 4,
then the synergy leads Player I to raise his best response all
the way up to 2, but these strategies up here
above 2 are never a best response for Player I.
Is that right? So the strategies below 1 and
above 2 are never a best response for Player I.
Similarly, for Player II, the lowest Player I could ever
do, is choose 0, in which case Player II would
want to choose 1, so the strategies below 1 are
never a best response for Player II.
And the strategies above 2 were never a best response for Player
II. So let’s actually–you might
want to be a little bit gentle in your own notebook–but on my
board let’s get rid of all these strategies that are never a best
response. So all of these strategies for
Player I are gone, and all of these strategies for
Player I are gone. You might want to not scribble
quite so much on your own notebook, but still.
And all these strategies for Player II are gone,
and all these strategies for Player II are gone,
and what’s left? A lot of scribble is left.
What’s left? So I claim if you look
carefully there’s a little box in here that’s still alive.
I’ve deleted all the strategies that were best–that are never
best responses for Player I and all the strategies that are
never best responses for Player II,
and what I’ve got left is that little box.
But I can’t see that little box, so what I’m going to do is
I’m going to redraw that little box.
So let’s redraw it. So it goes from 1 to 2 this
time. I’m just going to blow up that
box. So this now is 1,1 and up here
is 2,2 and let’s put in numbers of quarters, so this will be,
what will it be? It’ll be 5/4,6/4 and 7/4,
and over here it’ll be 5/4, and 6/4, and 7/4.
And let’s just draw how those, that pink and blue line look in
that box. This is just a picture of that
little box, so it’s going to turn out it goes from–the pink
line goes from here to here and the blue line goes from here to
here. We can work it out at home and
check it carefully, but this isn’t that incorrect.
So what I’ve done is I’ve redrawn the picture we just had
and blown it up. And have any of you seen that
picture before? Anyone here seen that picture
before? That’s the picture we just had,
except I’ve changed the numbers a bit.
Once I deleted all the strategies that were never a
best response and just focused on that little box of strategies
that survived, the picture looked exactly the
same as it did before, albeit it blown up and the
numbers changed. So what have we done so far?
We said players should never play a strategy that’s never a
best response to anything, so we threw those away.
Now what’s left? What should we do now?
So some of the strategies that we didn’t throw away were best
responses to things, but the things they were best
responses to have now been thrown away.
Is that right? This should be something
familiar from when we were deleting dominated strategies.
The strategies I’m about to throw away now,
they’re not–it isn’t that they’re not best responses,
they are best responses to something.
But the things they were best responses to,
we know are not going to be played, because they themselves
were not best responses to anything.
So what strategies do I have in mind?
What strategies am I about to throw away?
Well, for example, for Player I we know now that
Player II is never going to choose any strategy below 1,
and so the lowest Player II will ever choose is 1,
and Player I's best response to anything 1 and above is never less than 5/4, so these strategies less than 5/4 can go.
The highest Player II ever chooses is 2,
and the highest response that Player I ever makes to any
strategy 2 or less is 6/4, so all these things bigger than
6/4 can go. Let’s be careful here.
These strategies I’m about to delete, it isn’t that they’re
never best responses, they were best responses to
things, but the things they were best responses to,
are things that are never going to be played,
so they’re irrelevant. So we’re throwing away all of
the strategies less than 5/4 for Player I and bigger than 6/4 for
Player I (6/4 being 1½), and similarly for Player II.
And if I did this–and again, don’t scribble too much in your
notes–but if we just make it clear what’s going on here,
I’m actually going to delete these strategies since they’re
never going to be played–I end up with a little box again.
So everyone see what I did? I started with a game.
I found out what Player I’s best response was for every
possible choice of Player II, and I found out what Player
II’s best response was for every possible strategy of Player I.
I threw away all strategies that were never a best response,
then I looked at the strategies that were left.
I said those strategies that were a best response to things
that have now been thrown away, but not best response
otherwise, I can throw those away too.
And when I threw those away, I was left once again with a
little box, and I could do it again, and again,
and again. If I go on doing this exercise
again, and again, and again what am I going to
end up with? Shout it out,
what am I going to end up with? The intersection, right?
If I keep on constructing these boxes within boxes,
so the next box would be a little box in here.
I’m not going to draw it, but it’s something like this.
But if you keep on doing boxes within boxes,
I’m going to converge in on that intersection.
So if we know people are not going to play something that's never a best response, and we know people are not going to play a best response to something which is never a best response, and so on, etc., etc., etc.
We’re going to converge in, in this game,
to just one strategy for each player, which is where they
intersect. So what we’re going to converge
in on is this: S_1*, let's call it in this case, is equal to 1 plus B S_2*, and that S_2* is equal to 1 plus B S_1*.
Actually, we can do it a little better than that,
since we know the game is symmetric,
we know that S_1* is actually equal to
S_2*. So taking advantage of the fact
that we know S_1 is equal to S_2 (because
we’re lying on the 45º line),
I can simplify things by making S_1* equal to
S_2*. So now I’ve got–actually,
that looks like three equations, it’s really just two
equations, because one of them implies the other.
And I can solve them, and if I solve them out I’m
going to get something like (let me just be careful) I’m going to
get something like: (1–B) S_1* is equal to 1, or S_1* equals S_2* equals 1 over (1–B).
And again, anytime I’m doing algebra on the board,
someone should check me at home, so just have a quick look
at that. Is that right?
I think that’s right. My algebra, which is often
wrong, suggests that the solution is S_1*
equals S_2* equals 1 over (1–B).
But what I’m doing, it’s just math,
there’s nothing interesting going on.
I’m just trying to solve out for the equation of this point.
So what did we learn here? We learned that in this game
deleting strategies that are never best response,
and then deleting strategies that are never best response to
anything that is a best response and so on and so forth,
yielded a single strategy for each player.
Just one strategy for each player and that strategy was
given by this equation. So if we were management
consultants working for McKinsey or something,
and we were brought in to advise you on your homework
assignments, or this law partnership on
their work practices, we would come down with a
prediction that this is how much work you’re going to get.
Question, is this amount of work a good amount of work or a
bad amount of work? Here you are,
you’re working for Mckinsey, you’ve been hired by Joe Smith
and Ann Blogs to figure out their strategy,
working on a problem in a team on working on my homework
assignments. You figured out how much work
they’re going to contribute. Is this a good amount of work?
Are they contributing too much, too little?
Because the answer is, it depends: compared to what? So let me rephrase,
are these people, are these pair of partners in
the firm, or two students working on
their homework assignments, are they working more or less
than an efficient level? Let’s have a poll,
who thinks more? Turn the camera out into the
audience, let’s have a look. Who thinks more?
Who thinks they’re working just right?
Who thinks less? A lot of abstentions here.
I think they’re working too little here compared to what’s
efficient. I’ll get you to solve it out on
a homework assignment, so you can actually prove that.
You can prove that, in fact, if you were writing a
contract, if there was a social planner you’d work more.
But let’s try and get to grips why.
Why is it that when we see these law partners,
or medical partners, or whatever it happens to be,
or students together on a homework assignment,
why is it we tend to get inefficiently little effort when
we start figuring out the strategy and working through the
game? I’m conceding the answer.
I’m telling you they’re going to work too little.
Why do they end up not working hard enough?
Any takers? Can we get a mike in here, yeah.
Student: Because if they work any harder than that,
then the other person is just going to slack off instead.
Professor Ben Polak: All right, so there’s something
about that, there’s something. On the other hand,
this isn’t really, I mean the intuition you’re
giving me is kind of a Prisoner’s Dilemma intuition.
Saying I’m going to let the other guy work and I’ll shirk.
But there’s something, I think there’s something in
that, but there’s a little bit more going on here,
what more is going on? I think that’s a good first
step. There is something of that.
Yeah. Student: If there are
two people working together, there’s about half as much work
for each person to do. Professor Ben Polak:
That’s true, but that would suggest it doesn’t matter if
they slack off. What’s going on here,
so go back to your Economics 115 or 150, if you took either
of those courses. What’s the problem here?
What’s underlying the problem? Let’s get this guy in the pink
down here. Student: They only
capture fifty percent of their marginal benefit.
Professor Ben Polak: That’s the point.
Good, well your name is? Student: Patrick.
Professor Ben Polak: So Patrick is giving,
I think, it’s the correct answer here.
The problem here isn’t really about the amount of work.
It isn’t even, by the way, about the synergy.
You might think it’s because of this synergy that they don’t
take into account correctly. That isn’t the problem here.
It turns out even without the synergy this problem would be
there. The problem is what Patrick
said. The problem is that at the
margin, I, a worker in this firm, be it a law partnership or a homework-solving group, bear the full cost at the margin for any extra unit of effort I put in, but I only reap half the benefits. At the margin, I'm bearing the full cost of the extra unit of effort I contribute, but I'm only reaping half of the induced profits of the firm, because of profit sharing.
That leads all of us to put in too little effort.
What’s the general term that captures all such situations in
Economics? It’s an “externality.”
It’s an externality. There’s an externality here.
When I’m figuring out how much effort to contribute to this
firm I don’t take into account that other half of profits that
goes to you. So this isn’t to do with the
synergy. It isn’t to do with something
complicated. It’s something you knew back in
115. If you have profit sharing in a
firm or profit sharing in homework assignment,
or any joint projects, you have to worry about too
little effort being contributed because there’s an externality.
My effort benefits you, not just
While we’ve got this on the board, let’s just think a little
bit more. What would happen if we changed
the degree of the synergy? What would happen if we lowered
B? So B is the degree of synergy across these workers; if we lowered B,
what would happen to our picture?
Let me redraw a picture unscribbled.
We had a picture that looked something like this.
This was S_1 and this was S_2.
If we lowered the degree of synergy, what would happen to
the effort level that we’d find by this method?
What would happen? What would happen to the
picture? Anybody, again this is a 115
kind of exercise, we’re going to be moving lines
around. Yeah, Henry isn’t it?
So let’s get a mike in to Henry. Student: The lines will
get shallower and eventually become horizontal and vertical
respectively. Professor Ben Polak: All
right, good. So the pink line is actually
going to get steeper, but I know what you mean.
So the pink line is going to move towards the vertical,
and the blue line is going to move towards the horizontal,
and notice that the amount of effort that we generate,
goes down dramatically, goes down in this direction.
So if we lower the synergy here, not only do I contribute
less effort, but you know I contribute less effort,
and therefore you contribute less effort and so on.
So we get this scissors effect of looking at it this way.
We could draw other lessons from this, but let me try and
move on a little bit. We decided in this game to
solve it by looking at best responses, deleting things that
were never a best response, looking again,
deleting things that were never a best response,
and so on and so forth, and luckily,
in this particular game, things converged and they
converged to the points where the pink and the blue line
crossed. What do we call that point?
What do we call the point where the pink and the blue line
cross? That’s an important idea for
this class. That’s going to turn out to be
what’s called a Nash Equilibrium.
So we know what it’s called. How many of you have heard the
term Nash Equilibrium before? How many of you saw the movie
about Nash? We’ll come back and talk about
that a bit next time. So this is a Nash Equilibrium,
but okay we know what it is in jargon, and we know,
we kind of knew that was going to be an important point,
because most of you have taken Economics courses before and you
know that whenever lines cross in Economics it’s important,
right? But what does it mean here?
Why is it–what’s going on at that line?
What does it tell us that the pink and the blue line cross?
What makes that point special? What does it mean to say the
pink and the blue line cross? Can I get the guy way back,
like three rows behind you in purple?
Shout out again. Student: It means that
neither player has an incentive to deviate from that point.
Professor Ben Polak: All right, well that’s correct.
So let’s try and read that through.
So I don’t know your name, your name is?
Student: Allen. Professor Ben Polak: So
Allen is saying if Player I is choosing this strategy and
Player II is choosing her corresponding strategy here,
neither player has an incentive to deviate.
Another way of saying it is: neither player wants to move
away. So if Player I chooses
S_1*, Player II will want to choose
S_2* since that’s her best response.
If Player II is playing S_2*,
Player I will want to play S_1* since that’s his
best response. Neither has any incentive to
move away. So more succinctly,
Player I and Player II, at this point where the lines
cross, Player I and Player II are playing a best response to
each other. The players are playing a best
response to each other. So clearly in this game,
it’s where the lines cross. Let’s go back to the game we
played with the numbers. Everyone had to choose a number
and the winner was going to be the person who was closest to 2/3 of
the average. (By the way,
the winner’s never picked up their winnings for that,
so they still can.) So in that game, what’s the Nash
Equilibrium in that game? Everyone choosing 1.
How do we know that’s the Nash Equilibrium of that game?
How do we know that everyone choosing 1 is the Nash
Equilibrium in the game where you all chose numbers?
Well let’s just use the definition.
If everybody chose a 1, the average in the class would
be 1, 2/3 of that would be 2/3, and you can't go down below 1,
so everyone’s best response would be to choose 1.
Say it again: if everyone chose 1,
then everyone’s best response would be to choose 1,
so that would be a Nash Equilibrium.
Did people play Nash Equilibrium when we played that
game? No, they didn’t.
Not initially at any rate, but notice as we played the
game repeatedly, what happens?
As we played the game repeatedly, we noticed that play
seemed to converge down towards 1.
Is that right? In this game,
when we played the game repeatedly, it seemed like play converged towards the equilibrium.
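
(For a toy version of that convergence, here is a sketch of mine; real classmates do not adjust this mechanically, but suppose each round everyone simply guesses 2/3 of the previous round's average, never going below the lowest allowed choice of 1.)

    # Sketch: naive repeated play of the 2/3-of-the-average game.
    guess = 50.0                        # everyone starts at some common guess
    for t in range(10):
        guess = max(1.0, (2 / 3) * guess)
        print(t + 1, round(guess, 2))
    # The guesses fall round by round toward 1, the Nash Equilibrium.
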
Now that’s not always going to happen but it’s kind of a nice
feature about Nash Equilibrium. Sometimes play tends to
converge there. Nash Equilibrium’s going to be
a huge idea from now to the mid-term exam and we’re going to
pick it up and see more examples on Wednesday.
