The Athlete Monitoring Startup Kit

Athlete monitoring and sport science teams are making their way into many professional organizations and some big-time Division I schools, and this has already started to trickle down to DII and DIII schools as well. Many coaches who don't have access to a sport science team, or even a sport-science-minded strength and conditioning staff, are asking how to get in on the action. Before you go all-in on high-dollar technology or invest your or your staff's time in collecting data, there are two questions you need to ask yourself: "What do I want to know?" and "How will I use this new information to direct training?" If you can't answer those two questions, at a bare minimum, hold off on the monitoring until you can.

Another minimum threshold to meet before initiating any monitoring is training hard enough for monitoring to be useful. Good training is the basics; athlete monitoring is the complex. Walk before you run. If you aren't doing the basics well and training hard on a fairly regular basis, why would adding complex monitoring tools to determine when to train make anything better?


The purpose of athlete monitoring should be to give us a clearer picture of when the body is more or less adaptable to training. Really, this was the goal of periodization before expensive monitoring tools were available: making educated guesses about when the body should be stressed and when it should be rested. Consistent monitoring combined with consistent training should lead us to a clearer picture of the amount of stress and rest an athlete needs.

Many believe that athlete monitoring is reserved only for the teams and schools that have the budget for it. While it might be easier for those teams to get a hold of the most current and most expensive technology, that doesn't mean you're excluded from the party. It's also important to note that "new" and "expensive" don't necessarily correlate with usefulness. They can, but how you use the information is what really matters.

To decide where to invest our resources, we first have to understand a few basic concepts that determine which monitoring tools would be the most useful.

Internal Load – The physiological and psychological stress an athlete feels they incurred from training.

External Load – The quantifiable amount of work the athlete performed.

Subjective Data – Data we can collect that tells us about an athlete’s psychological or physiological state from their point of view.

Objective Data – Data about an athlete that is measurable by observation or testing.

Here’s a chart to show where different monitoring tools would fall based on the definitions above:

[Chart: Monitoring grid]

The question then becomes: what type of information is best? Internal or external, subjective or objective? It depends. Different situations and scenarios might call for different needs. Restricted resources might be a limiting factor when it comes to collecting data, but for the sake of discussion, we don't necessarily have to choose just one route, and it's best if we don't.

Research has indicated that it is best to use both internal and external training load as indicators of fatigue. Typically, most like to look at the external loading of the athlete: sets, reps, distance run, power output, etc. This is all great to look at, but if it's the only thing we're looking at, we're missing the other half of the picture, and arguably the more important half: what the athlete is experiencing. We also need to know how the athlete is being affected by the training. No two athletes handle the same training load exactly alike; many have totally different physiological reactions as well as different perceptions of training. This is where internal training load comes in.

As we see the need to diversify our internal and external monitoring, we want to do the same with objective and subjective data. While subjective data may trump objective data over both the short and long term, subjective data by itself, specifically wellness surveys, does have its downsides. An athlete may become robotic in answering daily questionnaires or session RPE, responding with a preset series of numbers they haven't truly thought about. A second, less typical, downside is an athlete answering dishonestly to circumvent training. Both have the potential to skew an athlete's baseline and lead to training prescriptions that don't optimally fit their state of readiness.

Objective data typically comes from machine-based technology, such as GPS systems. If you've ever dealt with technology, you know that it has its occasional flaws. If you're using technology in your athlete monitoring, what happens if that piece of technology abruptly fails? What if you can't travel with it? Using different streams of data allows for these hiccups without skipping a beat when it comes to the overall picture. Gaps in data can become problematic, and we will examine this in greater depth later.

For these reasons, we don't want to rely on just one of these avenues to collect data if we don't need to. The idea is to create a 360-degree view of the athlete, using different methods to create a checks-and-balances system. Charting accessible monitoring tools in their appropriate domains, as the chart above illustrates, can make choosing which tools to use much easier. The goal is to diversify your choices among the quadrants rather than overloading just one. If you don't have the means to collect data from many or even all four quadrants, it's still worth doing; you may just have to be a little more critical about where you spend your time and how you use the data.

To show how objective and subjective data can be used in combination, we'll use RPE x Session Duration. This is sometimes referred to as training load, and is also known as the Foster method. Referring back to the chart above, this is a simple way to monitor session fatigue from both an internal/subjective and an external/objective standpoint.

If an entire team had a 90-minute training session, it's not wise to assume it stressed each athlete equally just because they all practiced for the same duration. Yet if duration is our only indicator of stress, it would have us believe that the entire team was stressed the same, and therefore that they all need the same recovery time. This way of measuring stress is purely objective and doesn't take the athlete's perspective into account.

For this example we'll use a 90-minute session multiplied by an RPE on a 10-point scale: how hard each athlete thought the 90-minute practice was. We can see that not every athlete experiences practice the same way. One athlete may feel it was a hard practice while another might feel it was too easy.

[Chart: Calculation of training load.]
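The calculation itself is trivial to automate. Here is a minimal sketch of the Foster method (session RPE multiplied by duration in minutes); the three athletes and their RPE ratings are hypothetical numbers chosen to illustrate how the same 90-minute session can produce very different loads:

```python
def training_load(rpe: int, duration_min: int) -> int:
    """Foster method: session RPE (1-10) multiplied by duration in minutes."""
    return rpe * duration_min

session_minutes = 90

# Hypothetical post-session ratings for three athletes at the same practice
rpes = {"Athlete A": 8, "Athlete B": 5, "Athlete C": 3}

for athlete, rpe in rpes.items():
    load = training_load(rpe, session_minutes)
    print(f"{athlete}: {load} arbitrary units")
# Athlete A: 720, Athlete B: 450, Athlete C: 270
```

The units are arbitrary; what matters is comparing each athlete's load against their own history, not against a teammate's.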

We can start to see how subjective and objective data together can be more useful than either one on their own.

This is the case with quite a bit of information. While we don’t need to make equations based on subjective and objective data, we do need to take into account both sides of the picture.

Once you determine what metrics you want to track and what equipment you choose to use, we need some data history before we can make too many decisions on training. In an ideal world we could monitor year round, but most of the time this isn’t possible. In a team setting, early pre-season is usually a good time to start monitoring because it can create the data history we need to make decisions late in pre-season and during the season. On top of this it can create some early buy-in as well as set the tone of it being compulsory for the remainder of the season.

A week or two of past data on athletes should give us enough information to make useful changes to training. If we take any one day out of the context of the bigger picture, it could lead us to poor training decisions. Looking at any one day of monitoring is like looking at just a few pixels out of an entire photo. The more days of data collected, the clearer the picture becomes.

[Image: Pixels are like daily data points.]

While just a few pixels can certainly give you information, that information carries much more weight if you can see the entire photo. Daily numbers can be useful in an acute setting, but daily numbers captured over time will let you see the patterns and trends needed to make longer-term changes in training or lifestyle habits. Accumulating the relevant data needed to make meaningful changes usually takes one to two weeks on the short end and four weeks on the high end.

On a separate but related note, gaps in daily data create the same problem in a different way. If you've monitored your athletes over 60 days, but for 20 of those days data went uncollected for one reason or another, the long-term data becomes harder to interpret since pixels are missing from the photo.

Using a simple survey question as an example, we'll see how not having enough normative data for an athlete can be problematic.

To the question “What was the quality of your sleep last night?”, let’s assume that Athlete A, B, and C all respond with a “6” on a scale of 1-10 (1=very poor, 10=best sleep ever).

If you looked at this daily score without each individual's previous data history, you might assume that a score of 6 affects each of these athletes the same. It's not quite that easy. To illustrate why longer-term collection is vital to making changes to training, let's take this a step further and give each athlete a sleep quality history. It might look like this:

[Chart: A 7-day sleep history.]

You might already see what I'm getting at. A subjective score of 6 is not the same for Athlete A, B, or C. This is where trends and previous data become very important: they set a "baseline" for each athlete. Creating an individual baseline can be as simple as a rolling average, in this case over the 7 previous days. If we use this method, here are the baselines for each of those athletes:

[Chart: A 7-day sleep history with a rolling baseline.]

Now you can see how a sleep quality of 6 isn’t really the same for each athlete. When we compare a score of 6 to their baseline, Athlete A might have had a poor night of sleep while Athlete C might have had a relatively good night of sleep.
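The rolling-baseline comparison is easy to compute. Below is a sketch using invented 7-day histories (the actual values in the charts above may differ): Athlete A habitually sleeps well, Athlete B averages a 6, and Athlete C habitually sleeps poorly, so the same score of 6 lands very differently relative to each baseline:

```python
from statistics import mean

def rolling_baseline(scores, window=7):
    """Average of the most recent `window` daily scores."""
    return mean(scores[-window:])

# Hypothetical 7-day sleep-quality histories (1 = very poor, 10 = best sleep ever)
history = {
    "Athlete A": [8, 9, 8, 9, 8, 9, 8],  # habitually sleeps well
    "Athlete B": [6, 6, 7, 6, 5, 6, 6],  # a 6 is typical
    "Athlete C": [4, 3, 4, 5, 4, 3, 4],  # habitually sleeps poorly
}

today = 6  # all three athletes report a 6 this morning
for athlete, scores in history.items():
    baseline = rolling_baseline(scores)
    print(f"{athlete}: baseline={baseline:.1f}, today vs baseline={today - baseline:+.1f}")
```

With these numbers, today's 6 sits well below Athlete A's baseline (a flag worth noting), right on Athlete B's, and well above Athlete C's.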

There are certainly more monitoring techniques than these two examples (surveys and training load), but they're a great place to start for teams that don't have anything in place. It also needs to be said that the best technology doesn't matter if you don't know how to use it or use it inefficiently. You can collect wellness surveys and RPEs with just a pen and paper; nothing fancy.

One of the most important aspects to keep in mind is that good athlete monitoring isn't about the equipment; it's about collecting meaningful metrics and making them actionable. You can buy the fanciest equipment, collect all the data points you want, and chart it all out in fancy graphs, but if you don't use it to change how your team or individuals train in some capacity, it's a waste of resources.

Athlete monitoring isn’t about the equipment, it’s about collecting meaningful metrics and making it actionable.

Assuming the athlete is not injured or in a return-to-play scenario, there are two main reasons you would need to diverge from the plan based on the data you collect: too many consecutive days with heavy fatigue, or too many consecutive days with low fatigue. Anecdotally, 4+ consecutive days on either end of the fatigue spectrum is too many. If the athlete stays in a heavy fatigue state for too long, the likelihood of injury and overtraining increases. If the athlete stays in a low fatigue state for too long, detraining starts to occur, performance decreases, and the likelihood of injury actually increases as well, since the athlete can no longer meet the demands of the sport during games.

When it comes to heavy fatigue, we can attack it by:

  • reducing training load in practice via either session duration or RPE.
  • reducing training load in the weight room by cutting volume by ~20%, or taking the weight room off the schedule entirely.
  • adding recovery to the daily schedule (massage, hot tub, cold tub, etc.).
  • giving an athlete the entire day off.

For consecutive low fatigue days, we can:

  • add specific fitness work after practice (10 minutes of specific work is typically enough).
  • add specific fitness work after games – for reserves that did not play (the above rule applies here as well).
  • increase training load in the weight room by increasing intensity (scenario dependent) or volume (~20%).
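The "4+ consecutive days" rule of thumb is easy to turn into an automatic flag. Here is a sketch, assuming each day's fatigue has already been classified as "heavy", "moderate", or "low" (however you choose to classify it); the threshold of 4 days follows the anecdotal limit above:

```python
def consecutive_days_in_state(daily_states, state):
    """Count how many of the most recent days were in the given state."""
    count = 0
    for s in reversed(daily_states):
        if s != state:
            break
        count += 1
    return count

def needs_intervention(daily_states, limit=4):
    """Return 'heavy' or 'low' if the athlete has spent `limit`+ consecutive
    days at that end of the fatigue spectrum, else None."""
    for state in ("heavy", "low"):
        if consecutive_days_in_state(daily_states, state) >= limit:
            return state
    return None

# Hypothetical week ending with four straight heavy-fatigue days
week = ["moderate", "heavy", "heavy", "heavy", "heavy"]
print(needs_intervention(week))  # flags "heavy": reduce load / add recovery
```

A flag of "heavy" points to the first list of interventions above; "low" points to the second.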

One last thing that needs to be considered when implementing monitoring is logistics.

In his Pacey Performance Podcast interview, Carl Valle mentions that if the monitoring techniques used on your athletes are cumbersome and take longer than 1 or 2 minutes, you've probably lost your athletes. I experienced this exact idea when working with the Vancouver Whitecaps sport science department. The guys didn't want to do any more work than they were already doing; nothing really wrong with this, we just need to account for it. Things like having GPS units in their compression gear before they arrived, and making sure the morning survey only took a minute of their time, were critical in getting them to actually comply. The key is to make the process foolproof on the athlete's side: the more we reduce the number of steps involved for the athlete, the more likely they are to comply.

A great way to approach data collection is asking yourself – if you didn’t collect a particular stream of data for a week, would it matter? Would the coaches and other performance staff notice or care that it is missing? If they wouldn’t notice that it’s missing, you might have to rethink why you’re collecting the data or spending time charting it out. This can help decipher what data is really usable and what may not be worth your time.

When we gather all the information that we want, can we use it quickly? If it takes the sport science staff days to decipher and make sense of the data collected, the window of usefulness shrinks. The turnaround time on data should be as close to instant as we can possibly make it. With technology this is fairly easy, since it does most of the back-end work for you, but if you're monitoring out of an Excel sheet and creating your own charts and graphs, it can be a little tougher. We hear this in coaching a lot: "be a Twitter coach," meaning communicate in as few words as possible. Do the same with data. Only include what is necessary.

Making the data readable is a necessity, especially if the head coach believes in athlete monitoring but lacks the knowledge to interpret the numbers. Use things like color codes to help other coaches understand the numbers you already understand. Something as simple as associating the colors of a street light with fatigue can bridge the gap, using something almost everyone already understands to translate a domain very few people do.

Red (Stop) = Heavy Fatigue
Yellow (Yield) = Moderate Fatigue
Green (Go) = Low Fatigue

Using this method, a coach could look at a roster of 30 athletes and know where everyone stands within a few seconds of looking at it.
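One simple way to generate the street-light code, sketched below under an assumed rule: compare today's training load to the athlete's rolling baseline, with a hypothetical ±15% tolerance band (the band and the roster numbers are illustrative, not a validated threshold):

```python
def fatigue_color(load: float, baseline: float, band: float = 0.15) -> str:
    """Traffic-light code for today's load vs. the athlete's own baseline.
    `band` is an assumed +/-15% tolerance around baseline."""
    if load > baseline * (1 + band):
        return "RED"     # heavy fatigue: stop, reduce load / add recovery
    if load < baseline * (1 - band):
        return "GREEN"   # low fatigue: go, consider adding work
    return "YELLOW"      # moderate fatigue: proceed as planned

# Hypothetical roster: (today's training load, rolling baseline)
roster = {"Athlete A": (720, 500), "Athlete B": (450, 460), "Athlete C": (270, 520)}

for athlete, (today, baseline) in roster.items():
    print(athlete, fatigue_color(today, baseline))
# Athlete A RED, Athlete B YELLOW, Athlete C GREEN
```

Rendered as colored cells in a spreadsheet or dashboard, this gives a coach the at-a-glance roster view described above.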

Beginning athlete monitoring can be a daunting task, so let's keep in mind a few of the above points as we implement and tweak our own monitoring systems:

  • Train hard enough for athlete monitoring to be useful.
  • Use a combination of internal, external, objective, and subjective data where possible.
  • Relative baselines are needed to make changes to training.
  • Don’t let an athlete spend too much time in the heavy or low fatigue state.
  • Think logistics!

These are just a few things you'd want to keep in mind while implementing monitoring strategies. It's inevitable that tweaks will be needed in the monitoring process, especially in its early stages. Keep tweaking until you find something that works in your team setting. With new information and ideas being shared constantly and technology improving at an exponential rate, athlete monitoring should be an evolving process for all teams.


John Grace is a coach at Athletic Lab Sports Performance Training Center in Cary, NC - USA. John has his CSCS, USAW Level 1 certification, USATF Level 1 certification and has worked as an assistant fitness coach for the Vancouver Whitecaps of the MLS.