I performed some explicit calculations, which may help answer this
question: Let's say a player plays two games against two players, and
all of the players have the same rating to start with, which for the sake
of argument is 1200 (under 2100, so that K=32 in the ratings formula). One
game is a loss and one game is a win. No matter which order the
games are completed, the player's rating increases by a net amount
of 0.736 points. If the player takes the loss first, then his (or her)
rating first decreases by 16, then increases by 16.736 (because the
win expectancy for the second game is now lower). If he takes the win
first, then his rating first increases by 16, then decreases by 15.264
(because the win expectancy for the second game is now higher). In
either case, the outcome is the same. Also, in either case, one
opponent's rating goes up or down by 16, and the other's rating either
goes down by 16.736 or goes up by 15.264. In either case, the
average rating of all the players remains 1200.
If the win expectancy for the second game is higher (after taking the
first win), shouldn't the decrease in rating be greater (not lower, as
you suggest)? I don't know the exact formula and haven't done the
calculations, but I would suspect that after the first win bringing you to
1216, the subsequent loss would bring you down 16.736 (not 15.264)
to under 1200. As opposed to 1184 followed by (+16.736) to
1200.736. The average rating among all players will stay the same,
but in one case the player would end at 1199.264 and in the other at
1200.736.
Just from a layman's perspective... a win w/ a lower rating to begin
with will give you a larger increase...AND a loss w/ a higher rating to
begin with will give you a larger decrease...Therefore, take the loss
first so your decrease will be smaller and your increase will be larger.
I couldn't confirm this mathematically because I don't know how to
calculate fractional exponents 🙁 (I'm not even sure that ^ is
supposed to designate exponents). My math is spotty, but I'm pretty
sure I've got the concept right.
Anyway, correct me if I'm wrong.
Franklin
Your first sentence is incorrect, so I will correct you as you asked: the
higher the win expectancy, the smaller the change in rating, win or
lose. The "^" symbol indeed stands for exponentiation. The formula
can be found in the help file. As a graduate level mathematics
professor, I stand by my original post.
Much as I hate to contradict, fexkorn is actually right - not because of an error
in your calculations (I hasten to add), but because the help page is not
entirely clear:
The formula is better expressed:
Rn = Ro + K * (S - we)
where S is 1 for a win and 0 for a loss, and we is your win expectancy as
described in the help.
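In code it would look something like this (just a rough sketch in Python with my
own variable names, assuming the standard expectancy formula from the help and
K = 32 for players rated under 2100):

    def win_expectancy(r_own, r_opp):
        # we = 1 / (1 + 10^((Ropp - Rown) / 400))
        return 1 / (1 + 10 ** ((r_opp - r_own) / 400))

    def new_rating(r_own, r_opp, score, k=32):
        # score S is 1 for a win, 0 for a loss (0.5 for a draw)
        return r_own + k * (score - win_expectancy(r_own, r_opp))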
(I'm getting very nervous about clicking the post button now, so I'm adding a
disclaimer: I'm not a maths professor, and this is supposed to clarify, not enrage.
Please correct me nicely if I am still wrong!)
Thanks for the clarification, tom and koenig...I hope my previous post
didn't sound condescending. That was not my intent...and I
CERTAINLY don't claim any mathematical superiority...I actually went
to the Help section to get the formula but was unable to work it through
because I couldn't remember how to do fractional exponents!
Your explanation is also consistent with the help page. That certainly
would make a difference in the outcome. Currently, the help page
states that one's rating will increase with each game, by the same
amount, regardless of the outcome! This is clearly ludicrous and
should be corrected. My question to you is: How do you know your
formula is correct? I hesitate to spend any more time analyzing the
original question until I know for sure the system used for the
calculation.
P.S. I did not mean to sound enraged in my previous post. Although
my credentials are as I stated them, it was meant as tongue-in-cheek
humor. I guess I should have used an emoticon :-/
I am going to reply to this question afresh, since my previous reply
was in error due to a misinterpretation of the formula in the Help file,
as pointed out by 'tom'. I have analyzed the formula in his post and
confirmed it by observing the changes in the ratings in a recent game
of mine. My attempt at clarification follows:
If Player 1 wins, Player 1's rating increases by K * (1 - E1),
where E1 is the win expectancy of Player 1:
E1 = 1/(1+10^((R2 - R1)/400))
Player 2's rating also decreases by this same amount (using E1 in the
formula, not E2). Equivalently, one could say that Player 2's rating
decreases by K * E2, where E2 = 1/(1+10^((R1 - R2)/400)), and that
Player 1's rating increases by this same amount.
In general, one can say that a player's rating increases by K * (S - E),
where S is that player's result (0=loss, 1=win, 0.5=draw) and E is that
player's win expectancy (probability). Note that it can be shown that
E1 + E2 = 1, as required by probability theory.
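A quick numerical sketch may help (the ratings 1300 and 1250 are hypothetical,
chosen only for illustration, with K = 32):

    k, r1, r2 = 32, 1300, 1250
    e1 = 1 / (1 + 10 ** ((r2 - r1) / 400))   # Player 1's win expectancy
    e2 = 1 / (1 + 10 ** ((r1 - r2) / 400))   # Player 2's win expectancy
    print(e1 + e2)                # 1.0, as claimed
    print(k * (1 - e1), k * e2)   # identical: Player 1's gain on a win equals Player 2's loss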
So, in answer to the original question, yes, it is better to lose the
games you are going to lose first, then win the games you are going
to win. Let's say that a player plays against two people, and all three
start with the same rating, which is less than 2100, so that K = 32.
Losing first, then winning yields a net rating increase of 0.736,
whereas winning first, then losing yields a net rating decrease of
0.736. Incidentally, the first opponent's rating in either case goes up
or down by 16, while the second opponent's rating changes by
16.736. The average of all the ratings remains unchanged.
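For anyone who wants to verify the arithmetic, here is a short sketch (Python,
with my own naming; it simply applies the update K * (S - E) twice in each
order, with all three players starting at 1200 and K = 32):

    def expectancy(r_own, r_opp):
        return 1 / (1 + 10 ** ((r_opp - r_own) / 400))

    def update(r_own, r_opp, score, k=32):
        return r_own + k * (score - expectancy(r_own, r_opp))

    # Loss first, then win (each game is against a fresh 1200-rated opponent)
    r = update(1200, 1200, 0)   # 1184.0
    r = update(r, 1200, 1)      # about 1200.736, a net gain of 0.736

    # Win first, then loss
    r = update(1200, 1200, 1)   # 1216.0
    r = update(r, 1200, 0)      # about 1199.264, a net loss of 0.736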