
a coaches' guide to strength development: PART VI - awareness & data collection


Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance … do not trust anyone—including yourself—to tell you how much you should trust their judgment.
Daniel Kahneman 


As discussed in the last post, a key to the organization of training loads is not to blindly follow pre-set ‘periodization’ models, but to develop a training-coaching philosophy that guides the way you and the athlete(s) move through the macro- and meso-levels.


Philosophy - reasoning

Our philosophy is concerned with what counts as genuine knowledge (epistemology) and with the correct principles of reasoning (logic). It is based upon rational thinking, and deals very little with instinct, faith, or analogy. It relies primarily on logic: a conscious analysis of all the options, a prediction of potential outcomes, and an understanding of their relative utility.

But this takes time.  
Time that is often not available to coaches.
So instead we rely on what is often called our ‘awareness’, and it is this that is essential to understanding micro-adaptation.  


As it has been a while since we released the philosophy section, we strongly suggest you go back and review it, as it will lend a little more context to what we will discuss in this long section on awareness - what it means, and how it can be supported by simple data-driven metrics.


We hope you enjoy, and find it useful to your own coaching practice.  The second section (beginning with 'Tracking What Matters') of this post is written by Matt. 


First - a disclaimer:
If you read the last couple of posts in this series, you may feel we are ‘anti-plan’ - we are not.  

Having a plan is important.  
But it must be preceded by a competent understanding of our philosophical underpinnings, and it should not be so overly detailed that it affects the manner in which we respond to the dynamic nature of the daily perturbations in athlete response.

Planning - and the terminology that comes with it - can give us a false sense of security.  Pre-set ‘periodization’ models deflect responsibility.  They allow us to pass the buck.  If things go wrong, then it wasn’t us - we can blame the model.  And there is an easy fix - we just switch the model!


We began with the philosophy section because it is this philosophy that gives us the ability to ask the relevant questions.

But it does not answer them.  

Philosophy asks the question.  Science gives us the answer.  

Philosophy is our theory.  Science is the experiment.  

Science is the coach’s attempt to answer a philosophical question by forming a hypothesis (the program), then running a trial-and-error experiment - the results of which form the conclusion, furthering our philosophical understanding and leading to further questions (and therefore, further experiments).


Philosophy is the macro.  Science is the micro. 
It is the interplay between the two - knowledge at breadth, and knowledge at depth - that distinguishes successful coaches. 


Dual Process Theory

I am a quick thinker.  Ask me a question, and invariably, you will receive an answer immediately.  Matt is a little more purposeful - he thinks things through, weighing up all sides of an issue before reaching a decision.  These two very distinct ways of arriving at an answer are together known as dual process theory.

Popularized by Daniel Kahneman in his book Thinking, Fast and Slow, dual process theory has its roots in the early days of neurological research, when, in the 1800s, Wigan and Hughlings Jackson identified two cognitive systems: what were then known as verbal-analytic and narrative-experiential.

Kahneman has since referred to the two systems as intuition (or system 1) and reasoning (system 2).  Intuition is fast and automatic; while reasoning is slower, and more conscious.  

“System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control … System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.”
Kahneman


I’m currently in the UK, struggling first-hand with System 1 and System 2 oscillation:  not only am I having to drive on the other side of the road, but everything inside the car is backwards as well - from the side of the car I sit on, to the stick shift.  What is normally a fairly automatic process is now clearly far front-brain.  I expect that by the end of the week, though, most of this System 2 work will be taken over by System 1, and I can go back to driving with more fluidity.  


How does this relate to coaching?

It is important that the coach is able to switch between the two systems depending upon the situation - the dynamic interaction between Systems 1 & 2 allows us not only to make on-the-spot coaching decisions, but also to debrief these decisions later, comparing them with previous situations, and ultimately learning from these experiences.

As mentioned, System 1 and System 2 are equally important, and expertise in each relies on experience and knowledge.  System 1 effectiveness increases as it is provided with more context through System 2.  The simplest way to understand this is to think of System 1 as a pattern-recognition device.  It relies on heuristics that have been developed over time through experiences of similarity, representativeness, and attributions of causality (Tversky & Kahneman).  Your subconscious mind finds links between your new situation and various patterns of these past experiences.


“Skills are acquired in an environment of feedback and opportunity for learning … thinking about things - discussing things - helps to develop models-heuristics that will both cognitively and intuitively aid in making decisions”
Kahneman


When I first got to the UK in 2010, there was a distinct divide between the ‘scientific’ coaches and the ‘practical’ coaches.  The practical coaches had years of experience - usually as both an athlete and a coach - and in many cases, considerable international success.  The old argument that empirical knowledge precedes scientific rationalization held very much true in the UK coaching world at the time.  ‘Practical’ coaches relied on their own personal experience as athletes, on pre-set training models, and on their own intuition - rather than a deep philosophical (what they termed ‘scientific’) understanding of the sport or event.  The problem with this is that it is this deep understanding that leads to robust intuitive skills.  System 1 expertise has its base in System 2 depth.

It is true, however, that many of these coaches were highly successful internationally - and we all know ‘traditional’ coaches who enjoy much success, without a deep understanding of their sport.  

Perhaps because of this reliance on ‘intuition’ they have built proficient System 1 skills.  Compare this with the medical world of a few generations ago:  medicine was - by necessity - focused on clinical symptoms, rather than lab values.  Intuitive skills were developed because the only things doctors had to go by were these symptoms; they were forced to become extremely careful observers.

No one will argue, though, that medicine has not progressed over the last few decades - it most certainly has; and the best doctors are those whose keen observational skills are supported by an understanding of all the data available to them.

The key is in this understanding.  We can all understand anything better than we currently do.  As a self-confessed ‘generalist’, I often find myself skipping over headlines rather than delving a little deeper into the details (perhaps this is why Matt and I work so well together, as he - through the necessity of doing a PhD - digs very deeply).  As discussed in the last section, we try to direct focus towards the fundamentals.  The key is to figure out what these are, and to understand them in depth.


Be Present

“It is of little use to us to be able to remember and predict if it makes us unable to live in the present”
Alan Watts 

Nothing annoys me more than watching a coach not paying attention during training.  Whether it is mindless chatting with others, continuous checking of texts and emails on their phone, or just simple day-dreaming, coaches who do not pay attention do not deserve to call themselves coaches.  Real coaching requires constant attention - being present.

At risk of sounding overly zen, when we are not present, our ability to react is compromised.  An over-dependence on planning (the future) and former program outcomes (the past) impedes our appreciation of what is happening right now.  Let us learn from the past - but not be dictated to by it.  And let’s not get too preoccupied with the future, as that kidnaps value from the present.

When not distracted by meaningless banter, text messages from home, and what you are going to cook for dinner, we can learn to ‘listen’ with a quiet mind: what the Quakers call the ‘still small voice within’. Even well-meaning coaches can allow a busy mind to get in the way of absolute awareness: I often find myself busy planning future sessions, over-analyzing technique, theorizing, and judging; there is a fine line between being analytical and being so critical you lose sense of everything else around: the proverbial losing sight of the forest for the trees. It is important we are not biased by what we expect to see - instead remaining open to what we actually see.


Dissociation

A recent challenge to a coach’s presence is the continued improvement in technology - both general (smartphones, mostly) and specific (various tracking devices, etc.).  And while much of this technology has aided our understanding of athlete adaptive processes, much of it also hinders our intuitive skills.  With every new technological advancement, our intuition is further impaired.  Instead of developing our skills of perception, we rely on technology.  It is a symptom of our insecurity - just as we use mechanistic prediction models as ‘crutches’, we do the same with the latest technological gadget.  It takes the pressure away from us.

Scientists Lancelot Whyte and Trigant Burrow called this move away from instinct ‘dissociation’ - simply, that we have allowed reasoning to dominate our lives out of all proportion to intuition.  Although the term initially referred to the chasm that has developed between our brains and our bodies, this dissociation has only increased with the continued rapid pace of technological advancement.

As with your colleges, so with a hundred ‘modern improvements’; there is an illusion about them; there is not always positive advance … Our inventions are wont to be pretty toys, which distract our attention from serious things.  They are but improved means to an unimproved end, an end which it was already but too easy to arrive at …
Henry David Thoreau


Bias

Having a clear philosophy helps to guide us through major methodological questions in our programming.  It helps us with macro- and meso-planning.  Active improvement of our intuitive skills allows us to move more effectively between System 1 and System 2 thinking - making faster and better decisions when necessary - and enables us to better understand micro-planning.

But we are still human - and subject to all manner of biases.  

“All mortals are fallible, even the smartest among us, including the scientists. We are prey to cognitive lapses, some of them built into the very machinery of thinking, such as the statistical fallacies we are prone to commit.”
Tversky &  Kahneman

Our intuitions will always bias us, and our judgments will always be dependent on a background of intuitions. However, there is room for improvement - by being aware of our biases, by understanding our fallibility, by speaking to colleagues when we are unsure, by occasionally seeking out contradictory beliefs, and by truly paying attention to our practice, we can reduce their relative impact.


Regression to the Mean

In addition, we often mistakenly attribute an adaptation as being caused by a certain stimulus, when the response would have happened anyway.  This is a highly important concept to understand - more than just the difference between correlation and causation, regression to the mean can bias our conclusions quite easily.  

To account for errors stemming from bias and regression to the mean, scientific experimentation employs a control group - a group against which a treatment effect can be compared.  However, training rarely has a control group.  If a coach or athlete believes something might be effective, it is often difficult to withhold that treatment or training stimulus purely to guard against bias.  And as training rarely has controls, our only option is to improve our intuitive skills and - without burying ourselves in hard-to-decipher data and complicated monitoring systems - devise a high-quality, simple data-collection system.
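To make regression to the mean concrete, here is a minimal simulation sketch (Python, entirely made-up numbers): athletes are tested, the weakest performers at baseline are singled out for a hypothetical intervention that does nothing at all, and yet on re-test they appear to improve - purely because extreme scores tend to drift back toward each athlete’s norm.

```python
import random

random.seed(1)

# Each athlete has a stable 'true' ability plus day-to-day noise.
# The 'intervention' below does nothing at all.
true_ability = [random.gauss(100, 5) for _ in range(200)]

baseline = [t + random.gauss(0, 5) for t in true_ability]
retest = [t + random.gauss(0, 5) for t in true_ability]  # no real change

# Single out the 'worst' 20% at baseline - the athletes a coach might
# target with a special intervention.
cutoff = sorted(baseline)[len(baseline) // 5]
flagged = [i for i, b in enumerate(baseline) if b <= cutoff]

change_flagged = sum(retest[i] - baseline[i] for i in flagged) / len(flagged)
change_all = sum(r - b for r, b in zip(retest, baseline)) / len(baseline)

print(f"Mean change, flagged athletes: {change_flagged:+.1f}")  # clearly positive
print(f"Mean change, whole group:      {change_all:+.1f}")      # roughly zero
```

Without a comparison group, that apparent improvement is exactly the kind of result we would be tempted to credit to the intervention.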

...



Tracking What Matters

Intuitively, great coaches monitor quality of movement and an athlete’s psychological disposition using the power of visual observation and conversation.  However, it’s tough to quantify these highly important elements.  For this section, I will assume we all agree on the importance of a coach who pays attention and the importance of capitalizing on intuition and the powers of daily interaction with an athlete to make decisions.  These elements must be considered and used alongside objectively determined metrics at all times.


I have to be honest: over my 15 years working with elite athletes from a variety of sports, this has consistently been one of the more challenging aspects for me.  The soft skills of strength coaching - such as motivating athletes, programming, and cueing technique - are so much easier to come by, especially for the natural coaches out there.  Further, they are oftentimes much more valued by athletes and teams than demonstrating impact on performance through data.

I’ve always said, right or wrong, the biggest success factors for a strength coach are looking the part, effective athlete communication, and being able to get along in a high performance sport environment (see more on this in part III of this series from Brett Bartholomew).  The relative ease of acquiring these skills compared to implementing a data-driven decision-making process and athlete-monitoring framework is significant.  Not only is there a considerable amount of time involved in collecting and analyzing data, but there is also a great deal of confusion around what metrics truly matter.  It’s interesting to note that despite the real challenges in bringing training from art to science, doing so sure seems to be a consistent theme in the training philosophy of many strength coaches.  Whether or not an effective data-driven system is actually being implemented is a whole other question.

In my early days, I can think of a few strength coaches who all independently viewed their programs as ‘scientific’, and believed objective metrics were critical, yet arrived at totally different conclusions as to what mattered.  I recall receiving a range of advice from: “testing explosive ability never told me anything – I only track progression in gym lifts” to “we need to do entry and exit testing for all training phases” - which at the time involved using a linear position transducer to assess everything from ‘power’ to strength curves (what a waste of time that was).  
As a young strength coach, I was highly confused.  On the one hand I saw strength coaches simply tracking changes in body composition and progression in lifts, and on the other, I saw strength coaches digging deeper into other strength abilities.  I also read about all sorts of different tests in both scientific publications and textbooks.  I had my undergraduate degree in Kinesiology, and I had spent a lot of time learning the art of strength development through my own career as a weightlifter and gym rat, but despite this, I could never really figure out how - or why - I should implement more of a data-driven approach.

I was also somewhat shocked to see that the quantification of training load was nothing more than a theoretical consideration for most programs.  On the one hand, monitoring training load was the cornerstone of any periodization textbook worth its salt; on the other hand, I can’t remember a single strength coach who could actually answer the question: how much has your athlete trained in the past week?

This seemed like a pretty straightforward question when it came down to effective periodization.  While it was always discussed in terms of the theory of strength development, it just didn’t seem overly important in practical terms for the acute programming process.

The truth is that the softer skills I discussed earlier are really the big rocks in the bucket.  They get you a long way in terms of effective strength programming and delivery.  However, the final few percent of programming effectiveness can only really be obtained when specific program elements are tracked and monitored.  While there are several important angles for assessment and monitoring - such as nutrition and health - I’m going to focus primarily on those implemented on an ongoing basis related to strength development and athlete adaptation.  I’m also going to focus on elements that can be quantified, rather than the important qualitative observations great coaches make every single day.

The PUSH band is an excellent example of a simple, easy to use device that does not get in the way of your coaching

Why Monitor?  

Several years ago, I watched a very interesting documentary.  Whether the scientific claims of what I’m about to discuss are supported is not something I want to debate - instead I want to focus on the analogy to sport.  The documentary focused on climate change and began by highlighting a very bizarre observation for climatologists: a temperature spike following one of America’s worst days - the 9/11 terror attacks.  Global temperatures are (generally) quite stable, so this observation baffled scientists.

It’s interesting to note that to solve this mystery, they turned in part to a consistently recorded metric called 24-hour pan evaporation.  This wasn’t a complicated measurement system - as the name indicates, farmers literally placed a pan of water outside and measured the amount of water that evaporated in a 24-hour period.  It is an indirect measure of the amount of sunlight that hits the Earth.  Well, lo and behold, in the period after 9/11 this data demonstrated that more sunlight had indeed hit the Earth, which likely explained the increase in temperatures.  The scientists went on to infer that, as a result of the airplanes being grounded, the vapour trails - which are an air pollutant in the upper atmosphere - diminished, and as this air pollutant actually reflects sunlight, a significant cooling variable had been eliminated.  This finding was somewhat profound and changed the way climatologists model climate change.  All this arose from a crude, but consistently implemented, monitoring program using a farmer’s pan.

I can think of several examples in my career when athlete performance was compromised for some reason or another - and it was consistently tracked data that helped us navigate our way out of a mess.  Similar to the example above, the value of the data showed not during the good times, but when something unexplainable occurred.  The ability to rely on years of well-tracked data brought clarity to an otherwise stressful and highly uncertain period.  This is one big reason why monitoring athletes is important: when shit hits the fan, data helps bring clarity to a situation often plagued by a high potential for bad decision-making.

There are other important reasons for data driven monitoring such as: 

  1. preventing our own bias from colouring the true pros and cons of a training intervention
  2. the ability to review big chunks of data to find trends and generate new understanding
  3. the ability to create better processes for a sport around the time course of adaptation in specific strength abilities as it relates to sport performance (e.g. How much strength is enough? What strength abilities matter?  How long does it take to develop an athlete in a given sport?)


The other theme that resonated with me from the climate change example is that monitoring doesn’t need to be complicated.  Simple metrics can get you a long way.  In Calgary, I work with a great boxing coach (Doug Harder, Bowmont Boxing Club) who is one of the best coaches I know.  Doug talks about the importance of mastering the jab with all athletes - whether they are novices or professional fighters.  The reason is that the jab, while a highly simple strike, is also one of the most important: it sets the foundation for all the other punches, establishes range, and allows a fighter to control the fight.  It’s a simple punch, but it goes a long way toward setting the stage for everything else.

I take a similar approach to monitoring athletes.  Similar to mastering the jab, I try to answer simple questions and master simple metrics before delving into the complicated stuff.  As Stu outlined previously, it is the question that drives the scientific process.  With respect to the monitoring system, in the absence of good questions you can guarantee that at the end of a year you will be sitting with spreadsheets upon spreadsheets of data, zero clue where to start, and a feeling that all of this monitoring is a bunch of bollocks.  Strong and simple questions drive the process for implementing a data-driven monitoring system.


    The questions that matter to me are:

    1. How much training has the athlete done in the past week?
    2. How does the athlete feel he or she is adapting to the training load?
    3. How much neuromuscular fatigue is accumulating?
    4. What is the athlete’s structural tolerance? (Structural tolerance is a measure of the musculoskeletal capacity to handle load.)
    5. How is performance tracking in my key indicator lifts?


      The remainder of this section will detail how I go about answering these questions.


      How much training has the athlete done in the past week?

      The teams I work with do not have overly large budgets.  As such, we often can’t afford a $20,000 price tag for a commercially available monitoring system.  Instead, I use Google Docs and generate my own form.  The training log form asks a couple of specific questions:

      1. How long did you train (in minutes)?
      2. How hard was the training session (rating of perceived exertion, scale from 1 to 10)?

Example of Issurin’s meso-organization of training load

The product of these two variables gives the session load, and has been referred to in the literature as the session RPE (Foster, 1998).  By tallying up the session loads for each day over a given week, I can estimate the weekly training load for the athletes.  This allows me to identify insufficient variation in training load (monotony), as well as sharp and inappropriate increases or decreases in training load, and it gives me an anchor against which I can compare changes in other performance and monitoring metrics.
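As a quick sketch of the arithmetic (hypothetical numbers; the monotony and strain calculations follow the approach generally attributed to Foster - daily mean load divided by the standard deviation of daily load, and weekly load multiplied by monotony):

```python
from statistics import mean, pstdev

# One hypothetical week of entries: (duration in minutes, session RPE 1-10).
# A rest day is simply a day with no sessions.
week = {
    "Mon": [(60, 7)],
    "Tue": [(45, 5), (30, 4)],  # two sessions
    "Wed": [],                  # rest day
    "Thu": [(75, 8)],
    "Fri": [(60, 6)],
    "Sat": [(90, 9)],
    "Sun": [],
}

# Session load = duration x RPE; daily load = sum of that day's session loads.
daily_loads = [sum(minutes * rpe for minutes, rpe in sessions)
               for sessions in week.values()]

weekly_load = sum(daily_loads)
monotony = mean(daily_loads) / pstdev(daily_loads)  # low day-to-day variation -> high monotony
strain = weekly_load * monotony

print(f"Weekly load: {weekly_load} AU  monotony: {monotony:.2f}  strain: {strain:.0f}")
```

A free form feeding a spreadsheet like this is all that is needed to answer the first question every week.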


        How does the athlete feel he or she is adapting to the training load?

Time and again, subjective athlete wellness assessments prove to be among the most telling metrics for coaches.  In fact, this is what we obtain from our athletes every day simply by watching body language, having quick conversations to gauge energy levels, and watching warm-up.  The only difference here is that I am systematically measuring an athlete’s qualitative self-assessment of their wellness in terms of indicators for non-functional overreaching and overtraining syndrome.

        My questionnaire of choice is the Hooper-MacKinnon Questionnaire (Hooper et al., 1995).  

        Again, I save thousands of dollars and develop my own forms online for free.  Athletes complete this form each morning without putting too much thought, energy or contemplation into the answers; we don’t want athletes to become fixated or obsessed on how they are feeling.  

Instead, it’s a quick and effortless response that is done consistently each day.  This data allows me to go back over large time scales to evaluate how the athlete felt, and to anchor my training load and other performance/monitoring metrics to the athlete’s subjective feeling about how things were going during a particular period.  It’s also a great conversation starter for the daily training environment.  My good buddy Tyler Goodale (strength coach and performance lead for the Canadian Women’s Rugby Sevens program) has his strength and conditioning team review this information before each training session.  It helps direct the conversations that occur in the daily training environment with up to 20 athletes.  Based on the athlete wellness monitoring data, Tyler knows whom he needs to talk to and how the conversation should be focused.
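Here is a minimal sketch of how that daily questionnaire data might be screened before a session (the items, scale, and threshold below are hypothetical - the actual Hooper-MacKinnon items and scoring should be taken from Hooper et al., 1995): each athlete’s summed score for the day is compared against their own recent baseline, flagging anyone sitting well above it.

```python
from statistics import mean, pstdev

# Hypothetical daily entries: higher ratings = worse (1 = very good, 7 = very poor).
# The items loosely follow Hooper-style categories (sleep, fatigue, stress, soreness).
history = {  # previous two weeks of summed daily scores, per athlete
    "athlete_A": [9, 10, 8, 11, 9, 10, 12, 9, 8, 10, 11, 9, 10, 9],
}
today = {"athlete_A": {"sleep": 5, "fatigue": 5, "stress": 3, "soreness": 4}}

def flag_athletes(history, today, z_cutoff=1.5):
    """Return athletes whose summed score today sits well above their own baseline."""
    flagged = []
    for athlete, past_scores in history.items():
        score_today = sum(today[athlete].values())
        baseline, spread = mean(past_scores), pstdev(past_scores)
        if spread and (score_today - baseline) / spread > z_cutoff:
            flagged.append((athlete, score_today, round(baseline, 1)))
    return flagged

print(flag_athletes(history, today))  # e.g. [('athlete_A', 17, 9.6)]
```

The flagged list is simply who to talk to first - exactly the way the morning review described above directs conversations in the daily training environment.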


        How much neuromuscular fatigue is accumulating?

This is a tough question to answer.  However, what we know is that while high-frequency fatigue recovers in a matter of hours, low-frequency fatigue - or chronic low-frequency force depression - can persist for many days (my hunch is up to a couple of weeks; I think we send plenty of athletes into competition with way too much fatigue, which may explain why unexpected lay-offs for small injuries or illnesses oftentimes lead to best-ever performances for an athlete… they were finally rested!).  Low-frequency fatigue is responsible for ‘dead leg syndrome’, or the feeling of heavy legs when we need to do routine movements like running up a flight of stairs.  The problem with this type of fatigue is that it diminishes rate of force development, which is key for explosive strength.  As many sports require explosive strength for successful performance, this is an important element to monitor.

Monitoring neuromuscular fatigue is a touch more complicated to implement.  At our training centre we have athletes perform weekly or bi-weekly vertical jump tests using a few jump variants.  Generally speaking, we control jump technique pretty carefully and measure jump performance.  Our go-to jump performance variable is take-off velocity, which is easily calculated from the vertical ground reaction force using force-plate methodology, and has excellent reliability, making it easy to detect a meaningful change.  We consistently measure coefficients of variation below 1% in a diverse group of athlete populations, and we typically use a cut-off of 5% to 10% for determining when training volume needs to be adjusted.  We also use it to understand the individual nuances between recovery in explosive strength abilities and training load/exercise prescription.  This helps us to devise appropriate taper strategies for athletes in sports reliant on explosive strength.
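For context, here is a simplified sketch of the underlying calculation (made-up values; a real implementation needs careful handling of quiet standing, body-mass estimation, the integration start point, and take-off detection): take-off velocity follows from the impulse-momentum relationship applied to the net vertical ground reaction force, and a drop beyond the chosen cut-off versus the athlete’s baseline is what flags accumulating fatigue.

```python
import numpy as np

def takeoff_velocity(grf, body_mass, fs=1000):
    """Estimate vertical take-off velocity (m/s) from vertical GRF (N).

    grf       : 1-D array of vertical ground reaction force, sampled from quiet
                standing through to the instant of take-off.
    body_mass : athlete body mass in kg (e.g. taken from the quiet-standing phase).
    fs        : force plate sampling frequency in Hz.
    """
    g = 9.81
    net_force = np.asarray(grf) - body_mass * g       # force accelerating the athlete
    velocity = np.cumsum(net_force / body_mass) / fs  # simple numerical integration
    return float(velocity[-1])                        # velocity at the final (take-off) sample

def meaningful_drop(current, baseline, threshold=0.05):
    """Flag a drop in take-off velocity beyond the chosen cut-off (5-10% in the text)."""
    return (baseline - current) / baseline > threshold

# Tiny synthetic example: an 80 kg athlete producing a flat 1600 N for 0.30 s before take-off.
fs, mass = 1000, 80.0
grf = np.full(int(0.30 * fs), 1600.0)
print(f"Take-off velocity: {takeoff_velocity(grf, mass, fs):.2f} m/s")

baseline_v, current_v = 2.80, 2.62  # hypothetical weekly test results (m/s)
print(meaningful_drop(current_v, baseline_v))  # True -> consider adjusting training volume
```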


        What is the structural tolerance of the athlete?

By answering this question, I am trying to identify an athlete who might get injured before the injury happens.  Essentially, I’m looking for something to support the important feedback I receive from my soft tissue therapist and from a daily qualitative movement assessment.  This is a new area of research for us, so the data I’m discussing has yet to be truly fleshed out the way it should be from the standpoint of robust scientific investigation.  At our training centre, in addition to measuring jump performance, we also employ a dual force-plate system that allows us to simultaneously measure the vertical ground reaction force from the right and left limbs during a double-leg jump.  We calculate a functional asymmetry index by measuring the kinetic impulse from the right and left limbs separately over specific jump phases.

We then use this metric - the kinetic impulse asymmetry index - to identify athletes who might have diminished lower-body structural tolerance.  Our preliminary data indicate that an asymmetry index above 15%-20% in the eccentric deceleration phase of a countermovement jump is predictive of lower-body injury in elite athletes.  I just presented this data at the International Society of Biomechanics Conference in Glasgow, Scotland.  As such, we flag athletes who show big changes in asymmetry, or who get into this red zone, and adjust training appropriately - or triage to the appropriate person on our medical team.  Assessing functional asymmetry in this manner is also very useful for monitoring an athlete throughout the return-to-sport process after injury.
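A rough sketch of the calculation (hypothetical force traces and a simplified phase definition): the impulse from each limb over the phase of interest is compared, and the percent difference relative to the between-limb mean gives the asymmetry index, with the 15%-20% zone described above acting as the flag.

```python
import numpy as np

def phase_impulse(grf, fs=1000):
    """Impulse (N*s) of one limb's vertical GRF over a single jump phase."""
    return float(np.sum(grf)) / fs  # simple rectangle-rule integration

def asymmetry_index(impulse_left, impulse_right):
    """Percent difference between limbs, relative to the between-limb mean."""
    mean_impulse = (impulse_left + impulse_right) / 2.0
    return abs(impulse_left - impulse_right) / mean_impulse * 100.0

# Hypothetical eccentric-deceleration-phase traces from a dual force-plate system.
fs = 1000
t = np.arange(0, 0.20, 1.0 / fs)
left = 900 + 600 * np.sin(np.pi * t / 0.20)   # N
right = 720 + 480 * np.sin(np.pi * t / 0.20)  # N - a noticeably weaker limb

ai = asymmetry_index(phase_impulse(left, fs), phase_impulse(right, fs))
print(f"Kinetic impulse asymmetry: {ai:.1f}%")
if ai > 15:  # the red zone described above
    print("Flag for review - adjust training or triage to the medical team.")
```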


        How is performance tracking in the key indicator lifts?

        I think a simple but highly important question regarding the efficacy of a strength program is whether or not an athlete is developing maximal strength.  There are lots of reasons why maximal strength is critical for athletes competing in many different sports, and plenty of evidence that at the very least, maximal strength is a foundational performance factor that should be optimized for each sport and each athlete (as discussed in part I of this series).  

In each training program, I have one or two key indicator lifts that are the primary focal point of the program.  I want to track and monitor the progression in load that occurs over a training phase.  There are many ways to accomplish this - the simplest being to have the athlete record their lifts on their program with pen and paper, and then upload this data into a spreadsheet (highly time-consuming and a pain in the ass).  Additionally, many training centres have apps and devices at each training station that eliminate the need for pen and paper, and make this type of data collection much easier.  If you have the budget to afford such a system, then your problem is solved.

I personally turn to free online tools and develop my own tracking forms.  This eliminates the need for the athlete to put pen to paper and saves a ton of time in terms of entering this data into a spreadsheet.  At the bottom of my Training Load form discussed above, I have a small section that allows the athlete to select from a handful of indicator lifts and enter the maximum load lifted in kilograms on their best set, along with the number of successful repetitions.  Some may criticize this approach, as it does not allow me to calculate the total tonnage or lifting volume from a given workout, but I have found this information not to be of great value, as my weekly training load metric gives me greater insight into how much training an athlete has done in a given week.  Instead, the more relevant question is: how is performance tracking in my key indicator lift?  Is the athlete progressing steadily in the gym or flatlining?  To get at this question, I find that the load and reps lifted on the best set for a given day tell me everything I need to know.
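A minimal sketch of how those best-set entries can be turned into a progression check (hypothetical data; the estimated-1RM conversion uses the common Epley formula purely as a stand-in, since comparing best sets of different rep counts needs some normalization):

```python
# Hypothetical best-set entries for one athlete's key indicator lift,
# pulled from the training-load form: (date, load in kg, reps on the best set).
best_sets = [
    ("2016-01-04", 140, 5),
    ("2016-01-11", 145, 5),
    ("2016-01-18", 145, 5),
    ("2016-01-25", 150, 4),
    ("2016-02-01", 150, 5),
]

def estimated_1rm(load, reps):
    """Epley estimate - a stand-in; any consistent conversion works for trend tracking."""
    return load * (1 + reps / 30.0)

e1rms = [estimated_1rm(load, reps) for _, load, reps in best_sets]

# Crude progression check: compare the latest estimate against the phase average.
latest, phase_mean = e1rms[-1], sum(e1rms) / len(e1rms)
change_pct = (latest - phase_mean) / phase_mean * 100
print(f"Latest estimated 1RM: {latest:.1f} kg ({change_pct:+.1f}% vs phase average)")
```

A steadily rising estimate suggests the program is doing its job; a flat or falling trend over a phase is the cue to dig into the training load and wellness data collected alongside it.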


        In summary, I’ll bring your attention back to the key points: 

        1. A coach who pays attention is the best monitoring tool going
        2. Data-driven systems are necessary to confirm a coach’s intuition, and to protect against biases
        3. Great questions drive a successful data-driven monitoring program
        4. Effective monitoring systems give you the final few percent in performance - which is what every elite athlete is chasing
        5. Simple metrics collected consistently over time are extremely valuable, and are often more valuable than sophisticated measurement tools that are unsupported by good questions and difficult to implement


          Ask the right questions, pick your metrics carefully - and be consistent.  A data-driven approach to programming doesn’t happen overnight and instead is something that requires commitment and curiosity.    


          Thanks for reading ... if you enjoyed this post, 
          please share on Twitter or Facebook



