Predicting the outcome of the World Cup in 150 lines of Python
It seems like machine learning and deep learning are everywhere these days. AI can do anything! From writing essays to cheat on tests to having an infinite image-generating machine running on your own computer, the possibilities are endless. So it should come as no surprise that you can use deep learning to predict the outcome of major sports tournaments.
I mean, it shouldn't come as a surprise, and I'm sure you can, but today we're going to talk about how to predict the Qatar World Cup without any machine learning toolkits, in just 150 lines of Python. I'm going to use Neptyne, the programmable spreadsheet, partly because that's the startup I co-founded, but also because it's a good platform for this sort of thing. The principles should work on any Python platform, though.
Our basic plan is to acquire recently played matches, calculate the relative strength of each country, and then simulate the tournament a thousand times so we have an idea of the likely outcomes.
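In broad strokes the whole thing boils down to three calls. A rough sketch, where upload_results and train_model are the functions we'll define below and run_simulations is just a placeholder name for the Monte Carlo loop at the end:

upload_results()         # pull recent international matches into a History sheet
train_model()            # fit a strength rating per team and store it in a Model sheet
run_simulations(1000)    # hypothetical driver: replay the bracket and tally the winners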
Training Data
Predictions are hard, especially about the future, they say, but you have to start with the past. Kaggle has a good data set, so let's download that. It contains every international match since 1872, which is a bit much. We only want recent matches, and only from countries that have played more than ten of them:
from collections import Counter
import itertools

# nt and spreadsheet functions like DATEVALUE are available in Neptyne code
def upload_results():
    """Call this function and upload the data from Kaggle"""
    tbl = nt.upload("Kaggle International football results", "*.csv").dropna()
    # keep only the 2500 most recent matches
    cutoff_date = sorted(tbl["date"])[-2500]
    tbl = tbl[tbl["date"] >= cutoff_date]
    # count how often each country appears, home or away
    counts = Counter(
        itertools.chain.from_iterable(
            (rec['home_team'], rec['away_team']) for _, rec in tbl.iterrows()
        )
    )
    # countries that have played ten matches or fewer get dropped
    threshold = {country for country, count in counts.items() if count <= 10}
    res = [
        (DATEVALUE(rec['date']), rec['home_team'],
         rec['away_team'], rec['home_score'], rec['away_score'])
        for _, rec in tbl.iterrows()
        if rec['home_team'] not in threshold and rec['away_team'] not in threshold
    ]
    res.insert(0, ["Date", "Home team", "Away team", "Goals for", "Goals against"])
    if "History" in nt.sheets:
        nt.sheets.delete_sheet("History")
    nt.sheets.new_sheet("History")
    History!B1 = res
This code takes the CSV we downloaded, keeps the last 2,500 matches, filters out teams that haven't played enough, keeps only the columns we're interested in, and sticks the result in a new sheet in our spreadsheet called History.
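If you're not on Neptyne, the same filtering can be done in plain pandas. A minimal sketch, assuming the Kaggle file has been saved locally as results.csv (a filename I'm guessing here):

import itertools
from collections import Counter
import pandas as pd

# the same filtering without the spreadsheet; "results.csv" is an assumed local filename
tbl = pd.read_csv("results.csv").dropna()
tbl = tbl.sort_values("date").tail(2500)          # keep the most recent 2500 matches
counts = Counter(itertools.chain(tbl["home_team"], tbl["away_team"]))
keep = {country for country, count in counts.items() if count > 10}
history = tbl[tbl["home_team"].isin(keep) & tbl["away_team"].isin(keep)][
    ["date", "home_team", "away_team", "home_score", "away_score"]
]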
Feats of Strength
Next we want to use this historical data to calculate the relative strength of each team. Here's where deep learning could come in, but it doesn't. Instead we do the straightforward thing: we try to assign each team a score such that when two teams play each other, the difference in their scores corresponds to the expected goal difference between them. We start by assigning each team a score of zero and then nudge those scores towards the observed results, one match at a time. Rinse and repeat for a few hundred iterations and we get reasonable ratings. Here's the code:
from collections import defaultdict
import numpy as np

def train_model():
    scores = defaultdict(float)
    min_goals = []
    home_advantage = 0
    for i in range(500):
        # learning rate that decays as the iterations progress
        delta = 5 / (i * 10 + 50)
        total_diff = []
        total_abs_diff = 0
        for country1, country2, goals1, goals2 in History!C2:F:
            if i == 0:
                min_goals.append(min(goals1, goals2))
            # residual: actual goal difference minus what the ratings (and home advantage) predict
            diff = goals1 - goals2 - (scores[country1] - scores[country2]) - home_advantage
            total_diff.append(diff)
            total_abs_diff += abs(diff)
            # nudge both ratings towards the observed result
            scores[country1] = scores[country1] + diff * delta
            scores[country2] = scores[country2] - diff * delta
            home_advantage += delta * diff / 20
        # track the average absolute error (one output row per ten iterations)
        Model!F2:G[i // 10] = [[i + 1, total_abs_diff / len(History!C2:F)]]
    Model!C2 = sorted(scores.items(), key=lambda rec: rec[1], reverse=True)
    Model!A2 = "Score Std:", "Av Min Goals:", "Min Goals Std:", "Home advantage"
    Model!B2 = np.std(total_diff), sum(min_goals) / len(min_goals), np.std(min_goals), home_advantage
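To make the update rule concrete, here's what a single step looks like for one match, ignoring the home advantage term and using made-up numbers:

# a toy illustration of one update step (all numbers are made up)
scores = {"A": 0.0, "B": 0.0}
delta = 0.1
goals_a, goals_b = 3, 1
# residual: how far off the current ratings are for this result
diff = (goals_a - goals_b) - (scores["A"] - scores["B"])   # 2.0
scores["A"] += diff * delta    # A moves up to 0.2
scores["B"] -= diff * delta    # B moves down to -0.2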
We read the history from the sheet we just created and write the results to another sheet called Model. We also keep track of the standard deviation of the residuals and of the minimum number of goals scored per match (both its average and its standard deviation). Keeping track of the home advantage makes the model a little more stable. Here are the results:
Brazil is always a favorite, as is Germany, which ranks last in our overview of the teams with a decent score.
Predicting Games
We can now simulate individual games using some simple code:
def predict(team1, team2):
    # look up each team's rating in the Model sheet
    score1 = VLOOKUP(team1, Model!C2:D200, 2, False)
    score2 = VLOOKUP(team2, Model!C2:D200, 2, False)
    # draw a goal difference around the predicted difference
    gf = np.random.normal(score1 - score2, Model!B2)
    ga = 0
    if gf < 0:
        ga = -gf
        gf = 0
    # add a (noisy) minimum number of goals for both sides
    mg = max(0, np.random.normal(Model!B3, Model!B4))
    gf = max(int(round(mg + gf)), 0)
    ga = max(int(round(mg + ga)), 0)
    return gf, ga
This depends a bit more on the spreadsheet than the other code, but it should be easy to follow. We look up the scores of both teams and then draw a random goal difference, normally distributed around the predicted difference. We add the minimum number of goals, again normally distributed, and make sure we don't return values below zero.
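As a quick sanity check we can call this repeatedly for one pairing and tally the outcomes. A minimal sketch to run inside the sheet; match_odds is just a helper name introduced here:

from collections import Counter

def match_odds(team1, team2, n=1000):
    # estimate win/draw/loss probabilities by simulating the match n times
    outcomes = Counter()
    for _ in range(n):
        gf, ga = predict(team1, team2)
        outcomes["win" if gf > ga else "draw" if gf == ga else "loss"] += 1
    return {outcome: count / n for outcome, count in outcomes.items()}

match_odds("Brazil", "Germany")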
Simulate the tournament
Simulating the entire tournament is now a matter of running this over all the matches. In a Neptyne spreadsheet we can easily model the structure of the tournament using some custom functions, for example one that works out the outcome of a group:
def standings(games):
    all_teams = {}
    for team1, _flag, team2, result in games:
        # played, won, drawn, lost, goals for, goals against, goal difference, points
        all_teams[team1] = 8 * [0]
        all_teams[team2] = 8 * [0]
    for team1, _flag, team2, result in games:
        if result:
            # split_score (defined elsewhere in the sheet) turns a result into two goal counts
            goals1, goals2 = split_score(result)
            for team, gf, ga in zip((team1, team2), (goals1, goals2), (goals2, goals1)):
                entry = all_teams[team]
                entry[0] += 1                        # games played
                entry[1] += 1 if gf > ga else 0      # wins
                entry[2] += 1 if gf == ga else 0     # draws
                entry[3] += 1 if gf < ga else 0      # losses
                entry[4] += gf                       # goals for
                entry[5] += ga                       # goals against
                entry[6] += gf - ga                  # goal difference
                entry[7] += 3 if gf > ga else (1 if gf == ga else 0)  # points
    # sort by points, which sit at index 8 once the team name is prepended
    return sorted(([k, *v] for k, v in all_teams.items()), key=lambda r: r[8], reverse=True)
This returns the group results given a list of games played. For group A this predicts an outcome like:
Looks reasonable to me. Netherlands taking the lead!
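To fill those results in during a simulation we can feed predict back into the grid. A minimal sketch, assuming the games rows look like the ones above (team, flag, team, result) and that split_score accepts a result written as "2-1":

def simulate_group(teams):
    # play a single round-robin; the flag column is left empty and the
    # result string format is an assumption about what split_score expects
    games = []
    for team1, team2 in itertools.combinations(teams, 2):
        gf, ga = predict(team1, team2)
        games.append((team1, "", team2, f"{gf}-{ga}"))
    return standings(games)

simulate_group(("Netherlands", "Senegal", "Ecuador", "Qatar"))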
We can take this all the way and simulate all the games up to the final. Here's one predicted final:
Simulating multiple tournaments
So now all we have to do is run the tournament a thousand times and we get the following prediction:
So, Brazil is the favorite! Who would have thought? A 4.1% chance for the Netherlands, though.
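The outer loop itself is tiny. A minimal sketch, assuming a simulate_tournament() helper (not shown here) that plays the groups and the knockout bracket with predict and returns the champion:

from collections import Counter

def winner_odds(n=1000):
    # run the whole tournament n times and tally who lifts the trophy;
    # simulate_tournament() is a hypothetical helper built on predict()
    champions = Counter(simulate_tournament() for _ in range(n))
    return {team: count / n for team, count in champions.most_common()}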
That's it for today. If you want to have a look at the code, check out the Neptyne Spreadsheet. If you'd like your own account, please join our waitlist and we'll get you coding up a spreadsheet before you know it.
Neptyne: The Programmable Spreadsheet