I don't know if anyone here is interested in Trivia Crack. I am. It has become my main smartphone-based time-waster.
Anyway, I've been noticing recently that the first choice seems to be correct more often than raw chance would predict. By way of background, for anyone who doesn't play, and hasn't played, T-Crack: every question comes with four answer choices.
Now, sometimes such perceptions can be off. So I decided to do a little checking. I will track how many times the right choice comes up in each of the four positions. To be clear, I will not always track this. If I'm on the bus without pen and paper, I don't want that to keep me from playing. But what I will do is explicitly decide, before the fact, that I will track results for a particular session. Put another way, I won't decide, after playing, that I will track the results for questions already asked. Doing that would introduce the possibility of me subconsciously biasing the results.
But if I only include results that come after I have made the explicit decision to track what happens in a particular session, and if I always include the results once I have made the decision, that should eliminate the possibility of self-deception.
So far, I have done this twice, for two games. My results so far: across those two games I saw 39 questions, and the first choice was correct 23 times. That's more than half. Now, I am not sure how much weight a sample of 39 questions can bear, but I think this is suggestive. Remember, it's not as if I played these games, noticed that the first choice was correct a lot, and then decided to save the results. I decided before playing that I would track the results, because I suspected the first choice would come up a lot.
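For what it's worth, here's a rough back-of-the-envelope check of how unlikely 23-out-of-39 would be if the correct answer were equally likely to land in any of the four positions. This is just a sketch, using the numbers from my two games and nothing but the Python standard library:

```python
from math import comb

# Observed so far: 39 questions across two games, first choice correct 23 times.
n, k = 39, 23
p = 0.25  # chance the first position holds the answer if all four positions are equally likely

# One-sided binomial tail probability: P(X >= k) under that chance-only assumption.
p_value = sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k, n + 1))

expected = n * p
print(f"Expected by chance: {expected:.1f} of {n}")
print(f"Observed: {k} of {n}")
print(f"P(at least {k} by chance) = {p_value:.2e}")
```

By pure chance you'd expect the first position to be right only about 10 times out of 39, so 23 would be a big departure if it holds up with more data.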
I will report back once I have a larger sample. I don't know whether keeping the split by category matters, though it will be interesting to see if I can notice a difference. Right now, what I have doesn't suggest anything, at least not to me. But I realize I don't have enough data to draw any strong conclusions.
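Once I have the full position-by-position counts (and maybe the per-category split), the natural check is a chi-square goodness-of-fit test against an even spread over the four positions. Here's a sketch of what that would look like, assuming scipy is available; the counts below are made-up placeholders, not real data:

```python
from scipy.stats import chisquare

# Placeholder counts only -- NOT real data. Replace with the actual tallies of
# how many times the correct answer appeared in positions 1 through 4.
observed = [23, 6, 5, 5]            # hypothetical split of 39 questions
expected = [sum(observed) / 4] * 4  # even spread across the four positions

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p-value = {p_value:.4f}")
```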
If anyone else plays T-Crack, I'd be curious to know whether you're seeing the same thing, and what platform you play on. If this is a glitch in the software, I wonder whether it's unique to Android (my platform) or a cross-platform issue.
Perhaps I shouldn't post this. If there is a glitch, then knowing that fact gives me a competitive advantage over those who don't know. Posting this makes it more likely that others will know, and more likely that it will get fixed.
So, just consider that I am possibly sacrificing my T-Crack performance in the name of science. Or whatever.