I'm new to Python and working on a small project to simulate how often a streak of six heads or six tails comes up in a series of random coin tosses.
I came up with the code easily enough (I thought), but when I tried writing it a second way I noticed a discrepancy in the output. The first script reports a streak between 3% and 3.5% of the time, while the second reports one only 1.5% to 2% of the time. I've gone through both scripts line by line looking for a reason for the difference in the way I've written them, but I can't see it.
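As a rough sanity check (this is just my own back-of-the-envelope estimate, assuming a fair coin and six independent 50/50 flips), I'd expect all six tosses to match in about 2 out of 2**6 sets, i.e. roughly 3.1% of the time, which seems closer to what the first script gives me:

# Rough expected rate: 2 favourable sequences (all heads or all tails)
# out of 2**6 equally likely sequences of six fair flips.
expected = 2 / 2**6
print(expected)  # 0.03125, i.e. about 3.1%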
Can anyone explain why the first script tends to produce more streaks than the second?
Script 1
import random

streak = 0
for number in range(10000):
    setOfSix = 0
    # Generate a random set of heads or tails.
    for tosses in range(6):
        flip = random.randint(0, 1)
        if flip == 0:
            setOfSix -= 1
        else:
            setOfSix += 1
    # Check if there is a streak in the set of tosses.
    if setOfSix == 6:
        streak += 1
    elif setOfSix == -6:
        streak += 1
    else:
        pass
print(streak)
Script 2
import random

streak = 0
for number in range(10000):
    setOfTosses = []
    # Generate a random list of heads or tails.
    for tosses in range(6):
        flip = random.randint(0, 1)
        if flip == 0:
            setOfTosses.append('T')
        else:
            setOfTosses.append('H')
    # Check if there is a streak in the set of tosses.
    if 'H' and 'T' in setOfTosses:
        pass
    else:
        streak += 1
print(streak)