A major issue with an online chess competition recently came to my attention. A student, and a very decent and honest person I’ve known for years, was banned for alleged ‘cheating’. There was no right of appeal and no opportunity to answer the charges. The only option open to him was to ‘promise he wouldn’t do it again’, thereby admitting guilt. He wouldn’t do that because he hadn’t cheated in the first place.
Prior to being banned he had been working hard on his game and was showing an upswing from a previous plateau. He had been working on his tactics and endgames and assiduously studying my Building an Opening Repertoire course that brings PLANNING to one’s opening and early middle game play. He had booked extra lessons and things were starting to come together. Then suddenly (but not unexpectedly) he hit a good patch in which he disposed of some opponents with aplomb. They played rather horribly but a series of good wins was still a healthy sign with regard to his chess.
So what had set the alarms off? Essentially a computer algorithm had detected unusually good play in a series of games, well above his expected level. It wasn’t a question of him choosing the top computer pick in each position, it just judged his play to be way better than it was previously. So how was it decided that he was cheating rather than improving, or even just having a good run?
After writing in to vouch for his honesty, hard work and the upswing in his chess, I had several approaches from people saying how great and reliable the detection system was and how, by implication, my student had to be cheating. There seemed to be a certain unwillingness to discuss the exact nature of their procedures, but in one conversation I learned that the validity of the computer algorithm that flagged him was partly based on ‘admissions of guilt’. At this point I saw a problem.
When players are flagged and given the option to ‘promise not to cheat again’ to re-enter the competition, denial will mean that they lose any fees they’ve paid to participate in this competition or on the server on which it is hosted. If these ‘promises’ are then taken as ‘admissions of guilt’, the detection software may seem to have amazing results, at least via the ‘admission’ criteria. Of course it doesn’t take a massive understanding of statistics and testing to know that this is not fair. Those accused are being put under pressure and being given a clear incentive to admit they cheated, whether or not they actually did. Smack on the wrist and then back to the tournament with the system being given a slap on the back for its ‘accuracy’.
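To see why validating the algorithm on admissions is statistically worthless, consider a rough sketch in Python. All the numbers below are hypothetical, purely for illustration — the point is only that if innocent players have a strong financial incentive to ‘promise not to cheat again’, the admission rate will run well above the true rate of cheating among those flagged:

```python
import random

random.seed(1)

# Hypothetical figures for illustration only -- not from any real site.
flagged = 1000           # players flagged by the algorithm
true_cheaters = 400      # of those, suppose only 40% actually cheated
p_admit_guilty = 0.90    # cheaters mostly 'promise' and carry on playing
p_admit_innocent = 0.60  # many innocents also 'promise' rather than lose fees

admissions = 0
for i in range(flagged):
    guilty = i < true_cheaters
    p = p_admit_guilty if guilty else p_admit_innocent
    if random.random() < p:
        admissions += 1

# What the site might tout as the algorithm's 'accuracy'...
apparent_accuracy = admissions / flagged
# ...versus the fraction of flagged players who really cheated.
true_precision = true_cheaters / flagged

print(f"apparent 'accuracy' from admissions: {apparent_accuracy:.0%}")
print(f"actual fraction of flagged players who cheated: {true_precision:.0%}")
```

Under these made-up assumptions the admission rate comes out around 70%, even though only 40% of the flagged players cheated. Coerced admissions measure the pressure to confess, not the accuracy of the detector.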
I have been assured that other methods have been used to verify the algorithm’s accuracy but details have not been supplied. Could it be ‘human judgement’, that most flawed of tests? In any case I would welcome a fuller disclosure of testing procedures, as I’m sure all chess players would.
Why should a chess site be using such a clearly flawed criterion as a coerced admission? A certain amount of it may be simple ignorance of legal and testing procedures. But they also have a problem that many unscrupulous individuals may be using outside help, possibly in very subtle ways. Meanwhile they need to make it look as if internet chess is being fairly played in order to attract people to it. So there must be a temptation to smudge the legal and scientific integrity of their tests because a ‘greater good’ is at stake.
Meanwhile I reckon that a lot of innocent players are probably being falsely accused and banned, and will leave internet competition because they won’t lie and admit they cheated. Obviously this is a gross injustice, so if you’ve been one of them then I’d like to hear from you via the contact form. Discretion is assured, and your stories may help open this can of worms. If I get enough new material I will revisit this issue in a later article, while keeping names out of it.