Machine Learning for Soccer Analytics




From this table, we noted that function based algorithms. Also it is possible to obtain a ranked list of. REP Tree. Note that for REP Tree the model generates a tree from which we extract. In this pruning strategy, for all the four. We stop the pruning at that. For attackers dataset, the result summary for the Linear Regression algorithm is. The results for other three datasets can be found in Appendix. Iterative Local Pruning, Attackers: We observe that the pruning saturates very quickly.

Tree the pruning saturates after two iterations; also for Linear Regression the. M5P algorithm takes three or four. The mean absolute errors at the saturation iteration of the. When we compare these results to the result obtained with. Global Ranked Pruning (Figure 2). The Iterative Local Pruning strategy saturates at that iteration where the mean absolute. The set of attributes at this saturation point gives us the set of. Another aspect to be noted here is that, even though this saturation point is the.
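The thesis performs this pruning with WEKA learners such as REP Tree, M5P, Linear Regression and LeastMedSq; purely as an illustration of the loop itself, the sketch below redoes the idea in Python with a scikit-learn regressor, treating non-zero feature importances as "the attributes the model actually uses". The learner, the importance test and the stopping rule are assumptions, not the thesis code.

    # Sketch of Iterative Local Pruning: retrain, keep only the attributes the
    # model actually uses, and stop when the attribute set no longer shrinks.
    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import cross_val_score

    def iterative_local_pruning(X: pd.DataFrame, y, max_iter=10):
        attrs = list(X.columns)
        history = []
        for _ in range(max_iter):
            model = DecisionTreeRegressor(random_state=0).fit(X[attrs], y)
            mae = -cross_val_score(model, X[attrs], y,
                                   scoring="neg_mean_absolute_error", cv=10).mean()
            history.append((len(attrs), mae))
            # non-zero importance = attribute actually used by the fitted tree
            used = [a for a, imp in zip(attrs, model.feature_importances_) if imp > 0]
            if len(used) == len(attrs):   # saturation: nothing more is pruned
                break
            attrs = used
        return attrs, history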

For example on attackers-dataset. The reduction in the number. When M5P is applied to goalkeepers dataset it. It is interesting to note here that Global Ranked Pruning.

LeastMedSq we observe no change in mean absolute error with Iterative Ranked Pruning, and it saturates with highest number of attributes in its set. This pruning strategy is similar to Iterative Local Pruning except that we use. We implement this pruning. Out of this list we discard all those. We use the remaining set of. We keep on repeating the process. We perform this activity for all.
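The excerpt does not preserve the exact ranking or discard rule for this ranked variant, so the following is only a guess at its shape: rank the remaining attributes with some scoring function, discard everything below a cutoff, and repeat until the set stabilises. Both score_fn and cutoff are placeholders supplied by the caller (for instance the tree importances from the previous sketch).

    # Hypothetical shape of the ranked pruning loop; score_fn and cutoff are
    # placeholders, not values taken from the thesis.
    def iterative_ranked_pruning(X, y, score_fn, cutoff=0.01, max_iter=10):
        attrs = list(X.columns)
        for _ in range(max_iter):
            scores = score_fn(X[attrs], y)            # dict: attribute -> score
            kept = [a for a in attrs if scores[a] >= cutoff]
            if len(kept) == len(attrs) or not kept:   # saturated, or nothing left
                break
            attrs = kept
        return attrs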

Results for other three positions can be. The Table 2. Regression over all the four player types. For attackers, both the threshold values. The results are summarised in Table 2, Threshold Pruning Optimal Results. Local Pruning and Threshold Pruning. We obtained optimal lists of attributes for. We compare the lists. For measuring the similarity between two ranked lists of.

We calculate the mean of all these similarities to obtain. Using this approach, the similarity at higher ranks. In this graph each of the selected attribute lists is compared with the. To avoid comparison of the same two lists twice, each list on the axis. For the other similarity results. We observe that. This is a rank aggregation problem.
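The excerpt does not name the similarity measure, but one common choice that weights agreement at the top of the lists more heavily is average overlap; the sketch below is offered only as an example of such a top-weighted measure, not as the measure actually used in the thesis.

    # Average overlap between two ranked lists: agreement of the top-d prefixes,
    # averaged over increasing depths d, so the head of the lists counts most.
    def average_overlap(list_a, list_b, depth=None):
        depth = depth or min(len(list_a), len(list_b))
        overlaps = []
        for d in range(1, depth + 1):
            top_a, top_b = set(list_a[:d]), set(list_b[:d])
            overlaps.append(len(top_a & top_b) / d)
        return sum(overlaps) / depth

Averaging the pairwise values over all pairs of optimal lists then gives a single similarity figure per pair, of the kind plotted in the comparison graph.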

We solve this problem in the following. For each of the player types, we create a list of all attributes present in all the. Next, we create a reward and penalty strategy for. For each attribute we note its occurrence in all the optimal lists; a minimal sketch of one such scheme follows.
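The exact reward and penalty values are not recoverable from this excerpt, so the numbers below are assumptions; the sketch only shows the overall shape of such a scheme: credit an attribute for every list it appears in (more credit for higher ranks) and charge it for every list it is missing from.

    # Hypothetical reward-and-penalty aggregation over several optimal
    # attribute lists; the credit and penalty values are assumptions.
    def aggregate_rankings(ranked_lists, penalty=1.0):
        scores = {}
        for lst in ranked_lists:
            n = len(lst)
            for rank, attr in enumerate(lst):                      # rank 0 = best
                scores[attr] = scores.get(attr, 0.0) + (n - rank)  # reward
        all_attrs = set(scores)
        for lst in ranked_lists:
            for attr in all_attrs - set(lst):
                scores[attr] -= penalty                            # penalty for absence
        return sorted(scores, key=scores.get, reverse=True)        # aggregated ranking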

Hence the reward is the sum of its rank from. Then we calculate the penalty. The top 25 attributes for the attacker-list. (Figure captions: Similarity between optimal lists for attackers; legend for list of optimal attackers attribute lists.) Left Foot Blocked Shots. Team Formation. Total Fouls Won. Touches open play opp six yard. Headed Shots On Target. Successful Ball Touch.

Total Successful Passes All. Fouls Won not in danger area. Saves Made from Outside Box. Position in Formation. Right Foot Blocked Shots.
Characterising Match Outcome

In this chapter, our objective is to identify the most important performance attributes of. We will further investigate the extent to. We will prepare the datasets in suitable formats and perform several. We will use attribute selection to identify the most important. Out of the attributes of players, attributes. The dataset released by OPTA did not contain match-outcomes.

As we generated the match dataset from player dataset we. Some of the performance metrics were averaged while most of them were. In the newly generated match dataset, each instance represented a. Hence the number of attributes in the dataset. Next, we analysed the data for dimensionality reduction. Then we create another data table. Hence this dataset has 20 instances, one for each team.
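As a rough illustration of this step (the column names and the choice of which metrics to sum versus average are invented for the example, not the OPTA field names), the player-level rows can be grouped by match and team and aggregated:

    # Illustrative only: build a match-level table from player-level rows.
    import pandas as pd

    def players_to_matches(players: pd.DataFrame) -> pd.DataFrame:
        agg = {
            "goals": "sum",           # count-type metrics are summed
            "total_passes": "sum",
            "tackles_won": "sum",
            "pass_accuracy": "mean",  # rate-type metrics are averaged
            "rating": "mean",
        }
        return (players
                .groupby(["match_id", "team_id"], as_index=False)
                .agg(agg))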

We compare the. In this way we corroborate. Now we have obtained appropriate. This is done to avoid generating trivial rules for characterising. The goals scored and conceded can on their own determine the. Next, we will execute several machine learning algorithms on. We execute a set of machine learning algorithms such as MultiLayer Perceptron and Functional Trees.

The WEKA toolkit was. In the case of ensemble based algorithms the best. Hence bagging and boosting algorithms were not advantageous over other algorithms. For evaluation of performance of algorithms we used fold cross-validation. We observe that MultiLayer Perceptron and Functional Trees. We observe that we get almost. This indicates that the data.
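The actual runs used the WEKA implementations; as a sketch only, rough scikit-learn analogues of some of these learners can be compared with cross-validation in a few lines (the substitutes below are stand-ins, not the WEKA algorithms themselves):

    # Cross-validated comparison of rough analogues of the WEKA learners.
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier    # ~ MultiLayer Perceptron
    from sklearn.svm import SVC                          # ~ SMO (SVM classifier)
    from sklearn.neighbors import KNeighborsClassifier   # ~ instance-based (KStar-like)
    from sklearn.tree import DecisionTreeClassifier      # ~ information-gain tree

    def compare_classifiers(X, y, folds=10):
        models = {
            "MLP": MLPClassifier(max_iter=1000),
            "SVM": SVC(),
            "kNN": KNeighborsClassifier(),
            "Tree": DecisionTreeClassifier(),
        }
        return {name: cross_val_score(m, X, y, cv=folds).mean()
                for name, m in models.items()}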

Then we use attribute selection based on gain ratio to decrease the number of. A minimum threshold is set and attributes with gain ratio below that. The initial threshold is set to be.
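Gain ratio is the information gain of an attribute divided by its split information, which penalises attributes that fragment the data into many small groups. The sketch below is an illustrative reimplementation for nominal (or pre-discretised) attributes, not the WEKA evaluator itself.

    # Gain ratio = information gain / split information, for nominal attributes.
    import numpy as np
    import pandas as pd

    def entropy(series: pd.Series) -> float:
        p = series.value_counts(normalize=True).to_numpy()
        return float(-(p * np.log2(p)).sum())

    def gain_ratio(attr: pd.Series, target: pd.Series) -> float:
        base = entropy(target)
        weights = attr.value_counts(normalize=True)
        cond = sum(w * entropy(target[attr == v]) for v, w in weights.items())
        split_info = entropy(attr)
        return 0.0 if split_info == 0 else (base - cond) / split_info

    # keep only attributes whose gain ratio clears the chosen threshold
    def select_by_gain_ratio(df: pd.DataFrame, target: pd.Series, threshold: float):
        return [c for c in df.columns if gain_ratio(df[c], target) >= threshold]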

We get 59 attributes. Then we change the threshold to 0. We observe that increasing. The results are for threshold value of 0. We observe that instance based algorithms like. KStar are worst performers; tree based algorithms using information gain also perform. Also as we go on decreasing the number of attributes.

We observe that the performance of Multilayer Perceptron. Optimization algorithm maintains its good prediction. KStar algorithm works better with fewer and more useful attributes. Functional Tree and Sequential Minimal.

Optimization algorithms give us reasonably good prediction even with threshold value. These top 34 attributes are shown in Table 3. For a more comprehensive list we can say that the 59 attributes obtained with. The rest of the attributes. We have the best performance with 34 attributes as. Also observing the performances of all algorithms we can conclude that the top.

(Figure and table captions: ... vs. gain-ratio threshold; Top 34 highest gain-ratio team attributes; Naive Bayes and KStar algorithms.)
Combining Player Ratings

In this chapter we will study how player expert ratings are correlated to match. For this we need to perform. Next, we analyse how good our ratings system is for characterising the current match. This can be done using either only the ratings for the current match or it. Then we use our ratings system to predict the next match outcome where we use.

The purpose of this experiment is to achieve the third and fourth objectives of our. Therefore the two objectives for the current experiment are: To evaluate the aggregation. Additionally, if the team ratings are good at predicting the match outcome, it. First, it could mean that the best. This is shown in Figure 4.

Hence this hypothesis is weaker. The second hypothesis is that the ratings. This case is highly likely. (Figure caption: Predicting match outcome using ratings of player performances for.) In the second case, as shown in Figure 4. To investigate how well team ratings generated from the aggregation method. The dataset contains instances, one instance for every. It has attributes, all of which. For this experiment we also need to use information at.

This list of matches played can then be appropriately linked to the. Expert ratings of player.
Computing Team Ratings
Using the prepared dataset from the previous section, for each match of the EPL. Hence we have. The number of instances is which equals. However, these 27 attributes were created based on intuition and understanding of.
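As an illustration of the kind of aggregation involved (the column names are hypothetical and only a handful of the 27 attributes are reproduced here), per-player expert ratings for one match can be rolled up into team-level attributes such as a team average and a per-position maximum for each side:

    # Illustrative roll-up of per-player ratings into team rating attributes.
    import pandas as pd

    POSITIONS = ["goalkeeper", "defender", "midfielder", "attacker"]

    def team_rating_features(match_players: pd.DataFrame) -> dict:
        features = {}
        for side in ["home", "away"]:
            team = match_players[match_players["side"] == side]
            features[f"{side}_avg_rating"] = team["rating"].mean()
            for pos in POSITIONS:
                pos_ratings = team.loc[team["position"] == pos, "rating"]
                features[f"{side}_max_{pos}_rating"] = pos_ratings.max()
        return features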

We should further check whether we can add additional attributes or modify. For this purpose, we execute. Now, we will try to prune the least important attributes, i.e. We observe these results. This selected subset of.

Hence, we create a new match dataset. The process of creating this. The target class is match outcome. We observe that Sequential Minimal Optimization and Functional Trees algorithms give us good results in terms of percentage of correctly classified instances. These results are shown in Table 4. Then we try to estimate. (Table caption: Predicting outcome with 27 rating attributes.) As a result, we observe that the team average rating of home and away. When we use only. Moreover the maximum rating at player positions.

Based upon these insights we realise that the best set of attributes for. The best results obtained with this subset of attributes. Best subset of ratings attributes for classifying match outcome: Team average rating for home team. Team average rating of away team. Maximum ratings for attacker for home team and. Maximum ratings for attacker for away team. With this result we can say that the list of attributes.

This validates the usefulness of our ratings aggregation method. Therefore the method to create the ratings attributes shown in Table 4. (Table caption: Predicting outcome with 8 selected attributes.)
Correlating Team Ratings with
The target class is the match outcome. The best aggregated ratings.

However, the aggregated. Moreover, the number of past matches to include in. We should also note that. Sometimes a player is. For this reason we will employ a weighting scheme.

Hence, if the window. However, in cases when the. This current weighted average scheme ensures that in all cases the. Initially we use a window size of two. Next, we increase the window size in steps of. We observe the trend of performance. The results for predicting the match outcome using the player ratings from the. The window size indicates the number of past matches including. Also, note that more importance.
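The thesis only states that recent matches are given more importance and that a shorter history is used when fewer matches are available; the linear weights in the sketch below are therefore an assumption, shown only to make the shape of the scheme concrete.

    # Weighted average of the last w ratings, recent matches weighted more.
    def weighted_window_average(ratings, w):
        window = ratings[-w:] if len(ratings) >= w else ratings  # use what exists
        weights = list(range(1, len(window) + 1))  # oldest = 1, ..., newest = len(window)
        return sum(r * wt for r, wt in zip(window, weights)) / sum(weights)

For example, with a window of three the newest rating carries weight 3/6 and the oldest only 1/6.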

From the plot of this data in Figure 4. This trend shows that. And the further we consider the past performances, the worse our prediction. This trend is seen even though we used a weighted scheme to take more.

So the decrease in prediction performance using an. Nevertheless, this is an empirical observation and does not rule out. We should also note that the best 8 attributes mentioned. This reiterates the importance of these. This task is intended to satisfy the last objective of the thesis. As shown in Figure 4.

For aggregating the player ratings over a match, we use the same. Moreover we can either use. Similar to the previous section 4. However, this window will not include the current match. Hence a window size of three means that we include the ratings for the three matches. And similarly we use a weighted average scheme. For example, with a window size of. In case the number of past. The intent of this scheme is to have more contribution from recent matches in the.
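The only difference from the earlier scheme is that the window stops before the match being predicted, so its own ratings never leak into the features; reusing the weighted_window_average sketch from above (history here is a hypothetical list of ratings ordered by date):

    # Features for predicting match `match_index` come only from earlier matches.
    def prediction_features(history, match_index, w):
        past = history[:match_index]          # strictly before the current match
        return weighted_window_average(past, w) if past else None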

(Figure caption: ... Matches, using SMO Algorithm.) When we are using all the 27 attributes this critical point is reached at a. The prediction accuracy decreases after. The best performance we managed to achieve is. We observe that the. Another point of contrast between results of this predictive.

This points to the fact that the outcome of the current match is best characterised by the ratings. For our best prediction results, Figure 4. The overall weighted averaged F-measure being 0. When we remove all the.

This proves that an important reason for the low accuracy of our prediction for all. We were able to. Using this aggregation strategy. This set of attributes is shown in list 4. Next, in section 4. This shows a positive correlation between match. In the last part of this chapter, section 4.

The best accuracy we obtained was. We observed that this low accuracy. The simplicity of the rules and the familiarity of the tactical moves. In this thesis we used Machine Learning techniques on soccer match data to. We split our dataset for the four player positions, i.e.

Then for each player position, all the optimal lists of. These performance attributes best determine. Next, we performed a set of.

From our results we conclude. We had performance attributes in our datasets which had almost. We also observed that. We presented an aggregation strategy using which. Next, we selected a list of 8 attributes. The second part of the third objective was to investigate. The number of past matches to be included in our aggregation.

The prediction performance rapidly deteriorated. This indicated that the outcome of a soccer match is. This in turn asserts the. It was also observed that the best 8 aggregated. Thus, using our aggregation strategy we. Our fourth objective was to investigate the prediction of future matches using the.

For this task we generated the team ratings attributes. However, to generate the team ratings attributes we used only the. The number of previous matches to include was.

This thesis has explored the applications of Machine Learning techniques in soccer. This work can be extended with availability of more detailed datasets. For example, in this thesis we were restricted to only four player positions which. Furthermore, for the aggregation of ratings over the. In this thesis, we limited ourselves to notational analysis. With the availability of better match. In this thesis, we.

The opportunities for applications of Machine Learning techniques in soccer analytics. Goals scored by a player against his own team. Any goal attempt that goes into the net, or would have gone into the net but for being.

Any goal attempt where the ball is going wide of the target, misses the goal or hits. Any goal attempt heading roughly on target toward goal which is blocked by a. Sum of goals scored by the team and the own goals scored by the opposition. Total Goals Conceded.

Sum of goals scored by the opponent team and the own goals scored by the team. A pass splitting the defence for a team-mate to run on to. Each pass is logged with X and Y co-ordinates for its point of origin and destination. This is an attempt by a player to beat an opponent in possession of the ball. A Tackle Won.

A Tackle Lost. This is a defensive action where a player kicks the ball away from his own goal with. This is where a player blocks a shot from an opposing player. This is where a player wins back the ball when it has gone loose or where the ball. Where a player shields the ball from an opponent and is successful in letting it run.

Any infringement that is penalised as foul play by a referee. Where a player is fouled by an opponent. There is no foul won for a handball or a. A duel is a contest between two players of opposing sides in the match. Aerial Challenge won - Aerial Challenge lost. This is where two players challenge in the air against each other.

The player that. When more than two players are. The player who has been beaten is given a Challenge lost if they do not win the ball. A tackle is awarded if a player wins the ball from another player who is in possession. If he is attempting to beat the tackler, the other player will get an unsuccessful.

If he is in possession but not attempting to "beat" his man, then he will. Foul won - Foul conceded. The player winning the foul is deemed to have won the duel and the player committing. A goalkeeper preventing the ball from entering the goal with any part of his body.

A player or team who does not concede a goal for the full match. A high ball that is caught by the goalkeeper. A high ball that is punched clear by the goalkeeper. A high ball where the goalkeeper gets hands on the ball but drops it from his grasp.
A Brief Introduction to Terms
This appendix is created using information obtained from WEKA documentation.

This algorithm uses linear regression for prediction. Additionally, the WEKA implementation. This algorithm implements the sequential minimal optimization algorithm for training a.

This generates a least median squared linear regression. It utilises the existing WEKA. LinearRegression class to form predictions. Least squared regression functions are.

The least squared regression with. This algorithm creates base routines for generating M5 Model trees and rules. Bagging with REP Tree. Bootstrap aggregating (bagging) is a machine learning ensemble meta-algorithm. It also reduces variance and helps to avoid. A decision stump is a machine. The predicted value is the expected value of the mean. The nodes in this network are all sigmoid. It does not allow missing. It selects the attribute that gives.
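As a rough analogue of the "Bagging with REP Tree" configuration (scikit-learn has no REP Tree, so an ordinary depth-limited decision tree stands in; this is an illustration, not the thesis setup):

    # Bootstrap aggregating over decision trees, analogous to WEKA's Bagging.
    from sklearn.ensemble import BaggingRegressor
    from sklearn.tree import DecisionTreeRegressor

    bagged_trees = BaggingRegressor(DecisionTreeRegressor(max_depth=5),
                                    n_estimators=10, random_state=0)
    # bagged_trees.fit(X_train, y_train); bagged_trees.predict(X_test)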

Locally Weighted Learning. This is an instance-based algorithm to assign instance weights which are then used. Bayes or regression e. It can also do distance weighting. Functional Trees. Fuzzy Unordered Rule Induction Algorithm.

Only nominal class problems can be tackled. This is the average of the magnitudes of the individual errors, without taking account of. This is the mean of the squares of the individual errors.

This is computed by dividing the absolute error by the absolute error obtained by. This is the time taken to test the model through the selected testing scheme. The Receiver Operating Characteristic is a plot of true positive rate against false.

The best possible prediction method would yield a point in the upper. This statistic is used to measure the agreement between predicted and observed.
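Written out explicitly as a sketch (these are the standard definitions, not WEKA's code): the mean absolute error averages the error magnitudes, the relative absolute error divides the model's total absolute error by that of a predictor that always outputs the mean of the observed values, and the kappa statistic is the chance-corrected agreement between predicted and observed classes.

    import numpy as np

    def mean_absolute_error(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return float(np.mean(np.abs(y_true - y_pred)))

    def relative_absolute_error(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        baseline = np.abs(y_true - y_true.mean()).sum()   # predict-the-mean baseline
        return float(np.abs(y_true - y_pred).sum() / baseline)

    def cohen_kappa(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        labels = np.unique(np.concatenate([y_true, y_pred]))
        po = np.mean(y_true == y_pred)                    # observed agreement
        pe = sum(np.mean(y_true == l) * np.mean(y_pred == l) for l in labels)
        return float((po - pe) / (1 - pe))                # chance-corrected agreement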

It is calculated as the harmonic mean of precision and recall.
Tables and Figures
Algorithms results with Defenders dataset attributes. Algorithms results with Goalkeepers dataset attributes. Results from Chapter 2. Mean Absolute Error Vs Number. Mean Absolute Error Vs. Function based algorithms.

Lazy and Rule based algorithms. Iterative Local Pruning, Defenders. Iterative Local Pruning, Goalkeepers. Similarity between optimal lists for defenders. Legend for list of optimal defenders attribute lists. Similarity between optimal lists for goalkeepers. Legend for list of optimal goalkeepers attribute lists. Results from Chapter 3.


I needed an application for using non-linear state space methods, which would normally imply a project involving financial time series, but I felt like doing something different.

It quickly became clear to me that this interest was less obscure than I had thought, and that there was a fairly established academic and commercial interest in the field. Similarly, there was a well-established blogging scene in sports analytics. So there was a lot of existing content to work with.

But I noticed a glaring problem: the barriers to entry for making quantitative sports models are high. Given that there was an established interest in predicting sports using statistical methods, it was natural to ask whether these barriers to entry could be reduced to make the field more accessible.

My answer was Throne: participating is as simple as submitting probabilities to the platform, which will then record your performance relative to other users. The live prediction style of the competitions also creates strong incentives for building particular types of models, in particular:

As a platform, Throne also gives you tools to help construct your models. All of these features are free for our registered users to access. Please knock yourself out! It is easy to register: