The Nuts and Bolts of Athlete Monitoring
I’ve taught courses in a range of disciplines over the last few years—physiology, kinesiology, theories of strength and conditioning, and lab instrumentation, to name a few—and yet somehow always manage to hit the same topic at some point during the semester: athlete monitoring. Even though my students might roll their eyes, I see athlete monitoring as one of the easiest ways to bridge the theoretical concepts discussed in class to real-world application. Energy systems? Using field assessments of VO2max to determine “fitness groups” for spring conditioning. Kinematics and kinetics? Using vertical jump data collected on dual force plates to track a soccer athlete’s progress following ACL reconstruction. Trust me, I have an example for every occasion.
While I certainly don’t think everyone who’s going into sports performance needs to possess the same level of fanaticism, a strong understanding and healthy appreciation of the nuts and bolts of athlete monitoring is definitely a prerequisite nowadays. When you boil things down, training for sport is an optimization problem: how do we give our athletes the best chance of competitive success (that is, how do we maximize their preparedness?) while simultaneously minimizing their injury risk? Unfortunately, there’s no single best answer to that question. Speaking as a former sports scientist for a men’s college soccer team, no two teams are ever the same. Each athlete has his own training and injury histories and level of motivation, comes from a different coaching philosophy, and responds differently to training. This makes training an ever-evolving process, where coaches and athletes alike are always searching for The Next Big Thing™ to take their performance to the next level.
If you ask me, though, these coaches and athletes are trying to run before they can crawl, are missing the forest for the trees, are…insert other clichés here. Essentially, they’re looking to solve complex problems with complex solutions when some basic monitoring data might provide insight. To continue the soccer example, a coach might believe his team is losing matches because they don’t possess well in the middle third (the middle, well, third of the field). They tend to have trouble converting their possession into goals while also frequently losing the ball and giving up goals to quick counterattacks. The coach decides to implement possession-focused training sessions he got from a friend of a friend of a friend at Man United, yet nothing changes. Well, maybe it’s their composure in the final third…or the center backs’ fitness…or the fact that their midfield and fullback positions are revolving doors of athletes because of constant injuries (in case you’re curious, player availability is a major contributor to competitive success in soccer 1,2).
You might think I’m being hyperbolic, but I lived that story and have seen similar things in some of my own graduate research. The first paper in my dissertation 3 was a survey of men’s college soccer coaches across the US. One hundred twenty coaching staffs responded about their training and monitoring practices. Over 60% claimed they monitored their athletes in some way, but it became clear we had different definitions of athlete monitoring the deeper I dug into the data. Twenty-seven of the teams didn’t quantify training load, while 12 “quantified” fatigue via the “eyeball test” and “having sense.” While this was a survey focused on men’s college soccer, I would wager a depressingly similar trend exists for other sports and levels of play. While we need to get more teams on the athlete monitoring hype train, it’s evident we first need to be clear what athlete monitoring is.
What is Athlete Monitoring?
The survey results I described above are what you might call the “black box” approach to training.4 We know the inputs (the athletes, the coach) and the outputs (wins and losses, competitive rank), but we understand very little about the inner workings of the process (training design, training loads, athletes’ responses to training, injury statistics, etc.). Black box, or performance-based, coaching is limited to answering questions such as “how did the team fare compared to last year?” or “did their rank improve?” Contrast that with the “white box,” or evidence-based, approach, where we investigate and account for the inner workings I mentioned earlier. Armed with this contextual information, we can begin to answer deeper questions. For example, “did our athletes’ fitness improve across the season?” or “how well did we manage our athletes’ fatigue?” Essentially, a properly implemented athlete monitoring program helps us understand both the training prescription and the athletes’ response to that training. We can use this information both in the moment—“fast-moving sports science”—and retroactively—“slow-moving sports science”—to modify and improve our training program over time.5
Beginning an athlete monitoring program can seem like a daunting task with all the various technologies and data analysis techniques that exist. I know I certainly struggled with this problem when I started working with the soccer team, as I wanted to implement all sorts of complex monitoring with multiple sources of data, statistical modeling, a dashboard, and all that other jazz. Trying to implement all that at once was overwhelming, and after a semester I went back to the drawing board. I concluded that, instead of driving myself crazy trying to implement too much, the athlete monitoring program needed to answer two basic questions: 1) What did the athletes do? 2) How did they respond? And the data needed to be presented in an easily digestible format that answered the coaching staff’s (and my) questions. So, I fell back to session RPE and daily ratings of fatigue and soreness and created some ugly plots that managed to get the point across.
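For readers unfamiliar with session RPE, the arithmetic is simple: a session’s training load is the athlete’s rating of perceived exertion (on the CR-10 scale) multiplied by the session duration in minutes (Foster’s method). The sketch below illustrates the idea; the athlete ratings and durations are invented for illustration, not data from my program.

```python
# Session-RPE training load: load (arbitrary units) = RPE (CR-10, 0-10)
# x session duration (minutes). All numbers below are hypothetical.

def session_load(rpe, minutes):
    """Training load in arbitrary units for a single session."""
    if not 0 <= rpe <= 10:
        raise ValueError("RPE should be on the CR-10 scale (0-10)")
    return rpe * minutes

def weekly_load(sessions):
    """Sum session loads for a list of (rpe, minutes) tuples."""
    return sum(session_load(rpe, minutes) for rpe, minutes in sessions)

# A hypothetical training week: hard match, recovery, training, light session.
week = [(6, 90), (4, 60), (7, 75), (3, 45)]
print(weekly_load(week))  # 540 + 240 + 525 + 135 = 1440
```

Summing loads by week is the usual starting point; from there you can chart week-to-week changes for each athlete, which is often all a coaching staff needs to see.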
The data you collect and how you report them will be both sport- and context-dependent. From a sport perspective, you might monitor a weightlifter’s volume load (what they did) and squat jumps at a variety of loads (how they responded), whereas you might monitor an endurance athlete’s time in heart rate zones and their performance on a race-specific time trial. Talks with fellow coaches, journal articles, and books on athlete monitoring can all go a long way in helping you determine the best tools for the respective teams you’re working with.
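To make the weightlifting example concrete, volume load is just the sum of sets × reps × load across the session’s exercises. A minimal sketch, with made-up numbers:

```python
# Volume load for a strength session = sum of sets x reps x load (kg).
# The session below is hypothetical.

def volume_load(exercises):
    """Total volume load in kg from (sets, reps, load_kg) tuples."""
    return sum(sets * reps * load for sets, reps, load in exercises)

session = [(5, 5, 100),  # e.g., back squat
           (3, 8, 60),   # e.g., press
           (4, 6, 80)]   # e.g., pull variation
print(volume_load(session))  # 2500 + 1440 + 1920 = 5860
```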
As for context, that’s where things get hairy. You can create the Best Protocol Ever™, but financial constraints, athlete and coach buy-in, time constraints, etc. can all throw wrenches into the works. Dr. Ryan Alexander, the head of sports science for Atlanta United FC (who won the MLS Cup in their second season as a franchise), provided a great example of this at the NSCA’s Georgia state clinic last year. They had the money to monitor whatever he wanted, he explained, but many of his athletes had no experience with an athlete monitoring program. He was lucky to get them to wear their GPS units every session, let alone complete a battery of fitness tests, answer a daily mood questionnaire, or perform regular vertical jump monitoring. So, he started simple with an eye on continuing to develop their processes over subsequent seasons.
Keep Moving Forward
I want to highlight that last bit again: continued development. I firmly believe that an athlete monitoring program should be ever-evolving. Have your core processes—for us, that became session RPE, weighted vertical jump testing, and GPS analysis—but don’t be afraid to experiment with new technologies and ideas. From the endurance athlete example above, you might begin by setting their heart rate zones with a field-based time trial. Once you achieve athlete and coach buy-in and/or secure the necessary equipment, you might progress to lab-based assessment with a metabolic cart, lactate analysis, and all that good stuff. Regardless of what you implement, however, ensure some empirical support exists. And, no, “I saw it at a conference/on Twitter” is not empirical support…I’ll save that rant for another day.
As your athlete monitoring program evolves, the questions you’re attempting to answer should too. These questions should be driven by both your coaching and sports medicine staffs and by your own curiosities. Granted, there is often considerable overlap between these groups; the questions are just reworded. For instance, the possession-related problems I mentioned earlier boiled down to a string of recurrent non-contact injuries in our key starters. I wanted to know how we could reduce the risk of injury and reduce the likelihood of a follow-up injury once the athlete returned to play. Our coach wanted to know the same thing, he just asked it a little differently: “Matt, what’s with all the hamstrings?” Using the training load and fatigue data we were already collecting (thankfully, what did they do? how did they respond? can help you answer a lot of questions), we realized we weren’t doing a great job of 1) preparing the athletes for conference play, 2) managing their post-match fatigue, or 3) gradually returning them to play. Unfortunately, it was too late to use this new information (it was a…bad…year to say the least), but we used what we learned to radically alter our training program over the next two seasons. We went from 11 non-contact time-loss injuries with multiple re-injuries that year to zero the next and two the year after that. Your mileage will vary, of course, and similar results are not guaranteed. But the process is the same regardless of what you’re trying to address—pose a question, collect data (or use previously collected data) to answer that question, make a change to your training, observe the results, pose a question…you get the idea.
Analyzing and Interpreting Your Data
You could probably fill a library with the available literature concerning athlete monitoring data analysis techniques. While I don’t have the space to discuss all the nuances of data analysis and interpretation, hopefully I can at least set you down the right path. A few readings to start with are Sands et al.,6 Sands and Stone,7 and McGuigan.8
A common theme you’ll notice in the available literature is that, despite all the complex analysis methods available, we’re ultimately looking for irregular patterns in our data. Outliers or anomalies, if you will. When you combine knowledge of anatomy and physiology with a healthy dose of training theory, you can begin to expect certain (qualitative) patterns when a specific training stimulus is applied. Individual differences will lead to quantitative differences between athletes, of course, but the basic patterns are still there. An athlete who breaks the expected pattern is cause for deeper scrutiny.
First, however, you need to get a handle on the noise (or variation) inherent in the tools you’re using. Check out Will Hopkins’ website (www.sportsci.org) for some very in-depth explanations on the underlying math and procedures, but to put things plainly: no measurement tool is perfect. Both natural biological variation and measurement errors will produce noise that obscures the true value of the variable you’re measuring. You can use published validation studies or in-house validation to get a handle on how much of this noise exists in your data. Coupled with longitudinal monitoring that establishes what’s normal or expected for your athlete(s), you can highlight results that fall outside your expectations.6,7 From there, how you handle these anomalous data will depend on the question(s) you’re trying to answer. And that, dear reader, is where the art of sports science comes into play.
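One simple way to operationalize the idea above: establish a rolling baseline for each athlete, estimate the noise (the larger of the athlete’s day-to-day variation and the tool’s measurement error), and flag any new result that falls further from baseline than that noise band allows. The sketch below is a minimal illustration under those assumptions; the jump heights, the typical error value, and the threshold multiplier `k` are all invented, and in practice you’d pull the typical error from a validation study or in-house reliability testing.

```python
# Sketch: flag monitoring results that fall outside an athlete's
# expected range. All numbers below are hypothetical.

from statistics import mean, stdev

def flag_anomaly(history, new_value, typical_error, k=1.5):
    """Return True if new_value sits more than k x noise away from the
    athlete's rolling baseline. Noise is the larger of the athlete's
    own day-to-day variation (stdev of history) and the measurement
    tool's typical error."""
    baseline = mean(history)
    noise = max(stdev(history), typical_error)
    return abs(new_value - baseline) > k * noise

# Hypothetical countermovement jump heights (cm) from recent sessions:
history = [41.2, 40.8, 41.5, 40.9, 41.1]  # baseline = 41.1 cm

print(flag_anomaly(history, 41.0, typical_error=0.8))  # stable -> False
print(flag_anomaly(history, 38.5, typical_error=0.8))  # big drop -> True
```

A flagged result isn’t automatically a problem—it’s a prompt for a conversation with the athlete and a closer look at the surrounding training data.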
It’s hard to get too in-depth in 2,000 words, but hopefully that goes to show you that athlete monitoring isn’t some complicated affair. To recap the process of implementing an athlete monitoring program, get a handle on your athletes’ training load, the responses to that load, and the variability of the data you’re collecting. Use these data to answer questions posed by your coaching staff or to satisfy your own curiosity. From there, you can steer the development of your athlete monitoring program via both sport-specific and context-specific factors. Regardless of your chosen methods, keep your eyes on the ultimate prize: improving your athletes’ performance and keeping them healthy.
By: Dr. Matt Sams
1. Eirale C, Tol JL, Farooq A, Smiley F, Chalabi H. Low injury rate strongly correlates with team success in Qatari professional football. Br J Sports Med. 2013;47(12):807-808.
2. Hagglund M, Walden M, Magnusson H, Kristenson K, Bengtsson H, Ekstrand J. Injuries affect team performance negatively in professional football: An 11-year follow-up of the UEFA Champions League injury study. Br J Sports Med. 2013;47(12):738-742.
3. Sams ML, Sato K, DeWeese BH, Sayers AL, Stone MH. An examination of the workloads and the effectiveness of an athlete monitoring program in NCAA Division I men’s soccer. 2017.
4. DeWeese BH, Gray HS, Sams ML, Scruggs SK, Serrano AJ. Revising the definition of periodization: Merging historical principles with modern concern. Olympic Coach. 2013:5-19.
5. Coutts AJ. Working Fast and working slow: The benefits of embedding research in high-performance sport. Int J Sports Physiol Perform. 2016;11:1-2. doi:10.1123/IJSPP.2015-0781
6. Sands WA, Kavanaugh AA, Murray SR, McNeal JR, Jemni M. Modern techniques and technologies applied to training and performance monitoring. Int J Sports Physiol Perform. 2017;12(S2):S2-63-S2-72. doi:10.1123/ijspp.2016-0405
7. Sands WA, Stone MH. Monitoring the elite athlete. Olympic Coach. 200