Originally posted by Richardt Hansen
the solution is almost trivial:
Well actually you're both right 😉
The "statistical" tool can ONLY be used as an indication of whether or not a player is using engine assistance. Why is that?
We can use Fritz to evaluate a player's moves, and we can find out how many are the 1st choice, 2nd choice, and so on.
From Cludi's post I can see that the "statistical" tool developed gives a p-value assessment.
I have faith ...... Let's get a new super team up and running 😀
you take a control group, the past and contemporary masters, who will obviously represent the most accurate play, and thus the closest group to the stone cold tactical perfection of engine play.
from this group you can calculate the mean and variance, and voilà, you've got a set of confidence levels. and it gets even better: those levels are very conservative, as few if any players here can possibly sustain GM-level play over a longer period. so if you get a 'guilty' result with, say, 99.99% confidence assuming the investigated player is GM level, the reality is that a suspected amateur's guilt is even more probable.
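The control-group idea above can be sketched in a few lines. This is only a toy illustration of the reasoning, not anyone's actual detection tool: the control-group match-up rates and the suspect's rate below are invented numbers, and real tools would use far more data and care.

```python
import math

# Hypothetical control group: engine match-up rates (fraction of moves matching
# the engine's first choice) from verifiably human master games.
control_rates = [0.52, 0.55, 0.49, 0.58, 0.51, 0.54, 0.50, 0.56, 0.53, 0.57]

def z_score(sample_rate, control):
    """How many standard deviations a suspect's rate sits above the control mean."""
    n = len(control)
    mean = sum(control) / n
    variance = sum((r - mean) ** 2 for r in control) / (n - 1)  # sample variance
    return (sample_rate - mean) / math.sqrt(variance)

suspect_rate = 0.87  # invented suspect match-up rate
z = z_score(suspect_rate, control_rates)
print(f"z = {z:.1f}")  # far outside the human band
```

With these made-up numbers the suspect sits many standard deviations above the human mean, which is the sense in which the confidence levels are "conservative": a real amateur sustaining such a rate is even less plausible than a GM doing so.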
Originally posted by murrow
in practice it's just really not doable, as single games can have even zero moves to analyse, especially in CC. database moves are discarded before engine analysis. and it's not unheard of scoring 100% match-up in a single game.
Obviously (to someone with knowledge of statistics), the smaller the set of games, the higher the threshold of statistical significance. But there's no reason in principle why allegations relating to a specific selection of games could not be tested statistically.
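The point about sample size can be made concrete with a one-sided binomial test. The sketch below is hypothetical throughout: it assumes an invented human baseline probability of 0.55 of matching the engine's first choice, and asks what match-up rate would be needed for significance at a given level as the number of analysed moves grows.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance a human baseline
    produces k or more engine matches in n analysed moves."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def threshold_rate(n, p0=0.55, alpha=1e-4):
    """Smallest match-up rate that is significant at level alpha given n moves."""
    for k in range(n + 1):
        if binom_tail(n, k, p0) <= alpha:
            return k / n
    return 1.0

for n in (30, 100, 300):
    # the required rate drops as the number of analysed moves grows
    print(n, threshold_rate(n))
```

This shows exactly the effect described: with only 30 analysable moves a player must match the engine at an extreme rate before the result is significant, while over hundreds of moves a much more modest excess over the human baseline suffices.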
however, there can be single games with pretty concrete proof of engine use, detected by human analysis. the biggest group is well-known easy theoretical endings outside the engine's computational horizon, where any decent human player can see (for example) the draw instantly, but an engine user plays on thinking "+1.83" means he's winning.
Originally posted by wormwood
I think this method has 2 problems:
the solution is almost trivial:
you take a control group, the past and contemporary masters, who will obviously represent the most accurate play, and thus the closest group to the stone cold tactical perfection of engine play.
from this group you can calculate the mean and variance, and voilà, you've got a set of confidence levels. and it gets even be ...[text shortened]... er is GM level, the reality is that it's even more probable the suspected amateur is guilty.
1) modern engines are strategically very good chess players; Rybka is claimed to evaluate positions at least as well as a master.
2) it's very likely that high-level correspondence games are tactically way more accurate than OTB GM games.
Originally posted by wormwood
I wasn't suggesting single games.
in practice it's just really not doable, as single games can have even zero moves to analyse, especially in CC. database moves are discarded before engine analysis. and it's not unheard of scoring 100% match-up in a single game.
I was suggesting a specific set of games, e.g. the 20 games a player played in a tournament against opponents of roughly equal rating.
Originally posted by diskamyl
1) That actually makes it easier to identify cheaters;
I think this method has 2 problems:
1) modern engines [b]are strategically very good chess players, Rybka is claimed to evaluate positions at least as good as a master.
2) it's very likely that high level correspondence games are tactically way accurate than OTB GM games.[/b]
2) That's not corroborated by pre-engine CC evidence.
Edit - Wasn't the vote on Game Mods supposed to be up by now?
Originally posted by diskamyl
1) no they're not, the only thing they do is calculate. their 'strategic understanding' is a rigid set of hard-coded assumptions with absolutely no relation to the position at hand. we've been over this many times already.
I think this method has 2 problems:
1) modern engines [b]are strategically very good chess players, Rybka is claimed to evaluate positions at least as good as a master.
2) it's very likely that high level correspondence games are tactically way accurate than OTB GM games.[/b]
2) gatecrasher wrote: "We found that the match-up rates of pre-computer era CC GMs and modern OTB GMs are very similar. Even super GMs are only about a percentage point or so higher than the norm. All strong verifiably human play exists in quite a tight band."
Originally posted by Palynka
In my opinion, all the statistical tools are doing here is narrowing down the choices. They are used to give a probability that someone is cheating. I think this probability is still highly subjective, as factors such as the speed of the computer, the time it is left to think and the exact chess programme being run will all affect the results, as will the quality of the DB used to determine when (if) someone leaves theory.
Statistical analysis cannot "prove" anything beyond all uncertainty when the data contain randomness or disturbances and the samples are finite.
It can, however, provide us with evidence. Which is exactly what you admit that it does.
Once the statistical analysis is completed, it surely becomes necessary to look through a player's games to find precise evidence of computer moves. The higher the probability, and the weaker the true strength of the player, the greater the likelihood of finding such moves. It is finding engine moves that proves a player is cheating, not a statistical analysis of his games.
Genuinely strong players would simply be able to avoid engine moves, making the likelihood of them being caught remote; but then why should a genuinely strong player (like cludi) cheat when he doesn't need to? However, it is likely that even such a player would slip up and make an obvious engine move eventually.
Originally posted by Dragon Fire
I'm going to go out on a limb here, but I assume it would certainly have to be more than one engine move to be proof beyond any doubt.
In my opinion all statistical tools are doing here is narrowing down the choices. They are used to give a probablity that someone is cheating. I think this probability is still highly subjective as factors such as the speed of the computer, the time it is left to think and the exact chess programme being run will all affect the results as will the qualit ...[text shortened]... it is likely that even such a player would slip up and make an obvious engine move eventually.
Originally posted by Very Rusty
I am still not buying it either. I mean, in almost all of my games, once it is completely out of the opening or any game I can find that corresponds with my position... I look for the best move... and as I have posted earlier, I had a friend run quite a few of my games, and it came out that Fritz thought my moves were the best too... as well as thinking my opponents' moves were the correct ones.
I going to go out on a limb here, but I assume it would Certainly have to be more than one engine move, to be proof beyond any doubt.
Now this was not on every move but it was quite a few.
Certain positions call for certain moves... it all depends on whether you are an aggressive player or a passive one.
Since I don't own Fritz I have no clue whether it plays passively or aggressively... or if you can set it for that or what.
I think I am gonna buy that dang Shredder program so I can finally see what all of the fuss is about with all of this.
Dave
An aspect that hasn't been much commented on is the quantity (as well as the quality) of the moves played. My suspicions against the likes of Ironman and Meman were not just based on the nature of their moves (though they were pretty blatant) but the sheer quantity of the moves they managed to play. They would often churn out thousands of precise, machine-like moves month after month. (And apparently they were active on other sites too!) I simply don't think the strongest players in the world with nothing better to do would be able to play such a quantity of moves at that standard. However, I do accept that it might be difficult to include such factors in testing mechanisms.
Originally posted by Palynka
Perhaps there isn't a need for a vote! It just might be that they will run with the names that were mentioned. Most of them seemed suitable to me.
1) That actually makes it easier to identify cheaters;
2) That's not corroborated by pre-engine CC evidence.
Edit - Wasn't the vote on Game Mods supposed to be up by now?
Originally posted by Very Rusty
Perhaps there isn't a need for a vote ! It just might be that they may run with the names that were mentioned? Most of them seemed to be suitable to me.

Russ (RHP Code Monkey), 12 Mar '08 13:19:
Thread closed, and the vote will be created within 24 hours.
-Russ
Originally posted by Northern Lad
Well, that is a good indicator right there... I mean, my game load is already killing me, and the clan thing keeps adding new ones. So in the opening my MCO is right beside me, as well as any specific opening book on my shelf. But after a while, on certain games, I have to slow down... check my database... check the Pitt site... then set up my analysis set (I hate that "analyze board" feature... I have to feel the pieces; it seems to make me think better).
An aspect that hasn't been much commented on is the quantity (as well as the quality) of the moves played. My suspicions against the likes of Ironman and Meman were not just based on the nature of their moves (though they were pretty blatant) but the sheer quantity of the moves they managed to play. They would often churn out thousands of precise, machi ...[text shortened]... However, I do accept that it might be difficult to include such factors in testing mechanisms.
So if anyone is playing a ton of games, moving fast in all of them, and playing perfect moves... something would be up.
Good point NL
Dave