S3 2018 Recap

An Overview

The first S3 (Sports Science Summit) is officially in the books. Organized (at least from what I can tell) and moderated by Gary McCoy, the conference was meant to provide insights from some of the most progressive sports science programs in the US. I’ll be honest: I was skeptical when I first heard about the conference last week, because I’ve been burned more than once by larger national conferences that talked a big game yet failed to deliver. Let me just say, S3 delivered! Each of the talks and panel discussions was extremely interesting and gave me a little more hope for sports science in the US (I’ll save my rant for another day).

Recap

Gary McCoy started things off with some opening remarks and his beliefs on the current state of sports science in the US. It didn’t take long for me to decide we’re kindred spirits, because one of the first points he made was that sports science isn’t a “thing”–it’s a process. Too often, coaching staffs hire a “sports scientist” with the hope that they’ll solve all the team’s problems overnight. When things haven’t changed appreciably in the first few months (or even the first year), said sports scientist should probably start updating their resume. Cue the inevitable quote from the coach/admin: “Yeah, we tried that sports science thing, but it didn’t really work for us.” I think these all-too-common scenarios pop up for two reasons: 1) some individuals and companies oversell the benefits of a sports science program for their own short-term gain while conflating the presence of technology with sports science, and 2) there is a disconnect between what we do and what administrators think we do.

To the first point, I’ve spoken to several coaches who have purchased X hardware and software because a company rep promised them the world (eliminate injuries, win every game, assess player readiness with this one weird metric, that sorta stuff). Tens of thousands of dollars (and one or two added employees) later, that shiny hardware sits on the shelf collecting dust and the password to the software platform has long been forgotten. The coach is left with a bad taste in their mouth about “that sports science thing,” and progress in the field is set back.

(I spent way too long looking for a “What I think I do” meme. Imagine there’s one here for this next bit.)

The above situation tends to play out over and over again because there’s a disconnect between what sports science actually is and what those outside the field think it is. At its core, sports science is about optimizing performance, maintaining or improving athlete wellbeing, and minimizing injury risk. That’s it. Sports science isn’t running logistic regression models, making pretty graphics to impress stakeholders, or telling the coach and S&C staff “no” or “do less.” Instead, sports science is looking at an athlete or a team and attempting to answer questions. E.g.:

  • “Why do we have so many injuries at this time of year?”
  • “How can we increase our player availability?”
  • “How fit do we need to be?”
  • “Can we train through this opponent?”
  • “How can we make practice more efficient?”

…with each question ultimately referring back to those goals I mentioned before. We might use a healthy dose of statistics and data visualization techniques along the way to answering said questions–and we might even have to have tough conversations with a coach from time to time–but we use data and the tools available to us as a means to an end, not ends in themselves. And unfortunately, the process of identifying a question, collecting and analyzing data, implementing a plan, and re-assessing takes time. I spent five years with ETSU men’s soccer, and while some questions were answered relatively quickly (“How can we improve player availability during conference play?”), questions like “How do we reduce the likelihood of an ACL injury after spring break?” were tougher nuts to crack (four years of poring over data, in fact). In the current age of smartphones, text messages, and instant gratification, explaining to a coach or athletic director that it might take a while to effect positive change isn’t well received. Yet it’s imperative we help administrators understand sports science is a process that takes time to bring about change, not some nebulous thing that works or doesn’t work.

That thought ties in nicely with a common theme shared by many of the speakers (Gary McCoy, Shaun Huls, Steve Tashjian, Ben Peterson, and the Orlando Magic high performance staff): communication reigns supreme in the high performance world. To have a successful high performance team, everyone needs to be on the same page. That includes the coaches, S&C, sports medicine, sports science, sports nutrition, you name it. Further, we need to lean on and collaborate with each other instead of selfishly staying inside our little box and never reaching across the aisle. Obviously, we shouldn’t overstep into another professional’s domain (I’m not going to tell my head or assistant coach how to design a possession drill), but we should work synergistically with each member of the high performance team to provide the best service possible to our athletes (I would suggest drill dimensions, team sizes, drill lengths, etc. to the coaches to help us hit our desired internal and external load targets). I’ll again refer you back to the three goals of sports science I mentioned above.

Ben Peterson had an especially interesting take on communication in the high performance world. Most of us are trained in hard skills during our master’s and PhD programs, but few programs emphasize the soft skills (etiquette, listening, getting along, small talk, that sort of stuff). Yet the soft skills are vital in overcoming that administrative disconnect I mentioned before. You naturally pick up some soft skills in the course of working with athletes and coaches from a variety of backgrounds, but Ben suggested that making a conscious effort to improve your soft skills can go a long way toward improving coach and athlete buy-in and can ultimately make life easier for all parties involved. From personal experience, dealing with athletes from less rigid cultural backgrounds became a whole lot easier once I took a more empathetic and positive approach.

The afternoon’s talks were my personal favorites. Full disclosure (if you couldn’t already guess): I’m a technology and monitoring nerd. And boy, did John Meyer and Marcus Elliott deliver in spades. John is the Associate AD of Sports Science and Performance at USC (the Californian variety), while Marcus is the founder and head of P3 (Peak Performance Project). Both have spent their respective careers collecting and analyzing vast amounts of monitoring data as a means to improve the training and rehabilitative processes. They shared some pretty interesting work on concussion management, return to play, and predictive analytics that had me turning to the guy next to me and saying, “Well, guess I’m moving to LA to work for one of these guys.” I’m not sure I was kidding. Suffice it to say, I plan on keeping an eye on the work they’re doing and have some new ideas I want to implement in my own athlete training and monitoring.

Some Personal Thoughts

While I really enjoyed the conference, I do have some disagreements with a few of the speakers on a philosophical level.

On Evolution (No, Not That One)

Several speakers mentioned the idea of constant evolution of your sports science program. While I agree that a program that isn’t pushing forward and seeking to continually improve itself year after year is effectively dead, evolution for evolution’s sake (or to be different or “brave,” as one speaker put it) is just as damaging as stagnation. Yes, we should constantly look to improve our training program, monitoring program, coaching feedback, etc., but change should occur in a logical, progressive way that addresses one (or more) of our three goals (optimized performance, maintained or improved wellbeing, reduced injury risk). It’s OK that you’ve used the same vertical jump protocol to assess your athletes’ explosive capabilities for the last 8 years AS LONG AS the data you obtain from the test are leveraged in some way. You might even improve your testing protocol by adding new technology that provides deeper insights (Vertec -> switch mats -> force plates). And hey, it’s perfectly fine to test new things with your athletes as long as you can justify their inclusion and they won’t put an undue burden on the athletes.

As an example, my first year with ETSU men’s soccer didn’t involve any athlete monitoring beyond sRPE and pre- and post-season lab testing. My second year, we used force plates to monitor the athletes’ vertical jumps once a week. We found that weighted squat jump height was strongly related to the athletes’ accumulated training load, so the next three years involved performing weighted squat jumps on match day to assess the athletes’ fatigue states. We used that data both to make training modifications in the moment and to adjust our training program in subsequent seasons. My point is that while the test was modified over time (force plates -> a switch mat, once weekly -> every pre-game), the fundamental core of using the data to monitor the athletes’ fatigue state remained the same.
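If you’re curious what that kind of relationship check might look like in code, here’s a rough sketch in Python/pandas. The file name, column names, and the seven-day rolling window are illustrative placeholders, not the actual ETSU protocol.

```python
# Rough sketch: how strongly does weighted squat jump height track
# accumulated training load? All names below are illustrative placeholders.
import pandas as pd

log = pd.read_csv("monitoring_log.csv")  # hypothetical daily monitoring log

# Accumulated load: 7-day rolling sum of sRPE load, computed per athlete
log["acc_load"] = log.groupby("athlete_id")["srpe_load"].transform(
    lambda s: s.rolling(7, min_periods=1).sum()
)

# Keep only the days an athlete actually performed a weighted squat jump
jumps = log.dropna(subset=["wsj_height_cm"])

# Pearson correlation between accumulated load and jump height
print(jumps["acc_load"].corr(jumps["wsj_height_cm"]))
```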

On the Sports Science See-Saw

This is tangentially related to the previous point. One thing that continues to frustrate me about the field is that we tend to swing wildly from one viewpoint to another. In ACL rehabilitation circles, for instance, asymmetry was the only thing that mattered for years. Then someone proclaimed asymmetry was dead and that force production was all that mattered, and everyone suddenly parroted that as gospel. In reality, both are important. Yes, the involved limb needs to be able to tolerate the forces encountered in match play, but if the non-involved limb is experiencing disproportionately greater forces, either from “picking up the slack” or from a learned compensation pattern, there’s a good chance of a follow-up injury. In the words of my all-time favorite commercial: why not both?

The same goes for nutrition and training theory. Some ideas were shared during the conference that came across as “60 years of performance research is wrong”…based on my observations…with a single group of athletes…over a two-year span. All I’m saying is pump the brakes a bit, let the research develop, and take a more nuanced approach to what you’re publicizing. Of course, if what you’re doing is working for you, great, keep it up. But be careful about making broad generalizations to other populations before the data are in. Likewise, as a practitioner listening to these types of presentations, critically evaluate what you’re hearing and don’t be quick to chase every new fad just because it worked with one group of athletes.

On Machine Learning

Interestingly, most of the speakers didn’t feel machine learning would have a major impact on the field in the near future. Maybe I’m stepping on my previous point a bit here, but I have a feeling machine learning will make its mark sooner than we think. Don’t get me wrong, I don’t think machine learning is going to solve all of our problems, but I do believe we’ll be able to answer some narrowly defined questions as the technology continues to develop and as the data we collect on athletes continue to grow. For instance, several papers have come out over the last three years that use supervised machine learning to predict sRPE values from external workload measures (distance traveled, minutes played, sprints performed, etc.).
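To make that concrete, here’s a minimal sketch of that kind of supervised model using scikit-learn. The file, the feature columns, and the choice of a random forest are illustrative assumptions on my part; the actual papers used a variety of models and feature sets.

```python
# Minimal sketch: predict sRPE from external workload measures.
# File and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_sessions.csv")  # one row per athlete-session
features = ["total_distance_m", "minutes_played", "sprint_count", "hsr_distance_m"]
X, y = df[features], df["srpe"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"Test MAE: {mean_absolute_error(y_test, preds):.2f} AU")
```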

An interesting application I’ve been toying around with is the idea of “anomaly detection.” If we predict a certain sRPE value for an athlete, yet their actual response is substantially greater than the prediction (beyond the model’s typical error, of course), we might flag that athlete for a deeper dive into their data. If their pre-training mood state and HRV are trending negatively, we might give them some additional rest. Likewise, if everything looks good from a pre-training standpoint, maybe something happened in training that they’re trying to conceal (be it an injury, a bad practice, getting yelled at by coach, whatever).
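Continuing from the sketch above, the flagging logic might look something like this. The two-times-MAE threshold is a deliberately simple stand-in for a proper prediction interval.

```python
# Sketch of the anomaly-detection idea, reusing `model`, `df`, `features`,
# and the train/test split from the previous example.
import numpy as np

# Typical model error, estimated on the training data
mae = np.mean(np.abs(y_train - model.predict(X_train)))

today = df.tail(20)  # hypothetical: today's session, one row per athlete
expected = model.predict(today[features])
residual = today["srpe"].to_numpy() - expected

# Flag athletes whose reported sRPE far exceeds what we predicted
flagged = today.loc[residual > 2 * mae, "athlete_id"]
print("Worth a deeper dive:", list(flagged))
```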

Another application of machine learning I see becoming popular is dimension reduction and identification of individualized metrics for each athlete. We can already perform dimension reduction to some degree with correlation analyses, PCA, and that sort of stuff, but machine learning may be able to help us better understand what external workload variables drive a specific athlete’s internal response. See Bartlett et al. as an example of what I’m talking about.
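A toy version of that idea, again reusing the hypothetical data from the sketches above: fit a simple model per athlete and rank which external load variables best explain that athlete’s internal response.

```python
# Per-athlete feature importance, loosely in the spirit of Bartlett et al.
# Reuses the hypothetical `df` and `features` from the earlier sketches.
for athlete, sessions in df.groupby("athlete_id"):
    if len(sessions) < 30:  # skip athletes with too few observations
        continue
    m = RandomForestRegressor(n_estimators=300, random_state=42)
    m.fit(sessions[features], sessions["srpe"])
    ranked = sorted(
        zip(features, m.feature_importances_), key=lambda fi: fi[1], reverse=True
    )
    print(athlete, ranked[:2])  # this athlete's two most influential drivers
```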

Staying Out of the Weeds

Most importantly, though, I think it’s a good idea to echo what I said at the end of my last blog post: master the big before worrying about the small. The college athletes I’ve worked with for the last six years are very different from the Olympic athletes I’ve had the privilege to work with. The Olympic athletes make life easy (…well, most of the time): they sleep 9-10 hours a night, they take a 20-minute mid-day nap, they eat exactly what they need to eat when they need to eat it, and they’re extremely neurotic about their training. For them, extremely in-depth analysis of their data, experimentation with new training modalities, supervised machine learning to individualize what we’re monitoring, and all that good stuff are worthwhile endeavors. For my collegiate athletes…not so much. They don’t sleep, they take four-hour naps because they didn’t sleep, they don’t eat, their time management skills are shit, and most of them are not invested in the training process. For those athletes, I’m wasting my time by digging super deep into their data. Instead of doing a deep dive to understand why they didn’t perform well in the last match, all I need to do is ask a teammate to find out they were up until 3 AM playing cards in the hotel lobby (unfortunately, a true story…we got spanked that match).

So before you implement 15 different tools to monitor your athletes, start by reading the room. Understand the team and the team culture. If you’re dealing with immature athletes who aren’t that invested in the process, start small and use a few quick, non-invasive methods of assessment. Focus on the big areas (are they sleeping? eating?) and go from there. As you develop a better culture and get the athletes to buy into what you’re doing, you can begin to implement further monitoring techniques and maybe even get into some advanced methods of assessment.

Anyway, that’s enough ranting for one blog post. I’d love to get your thoughts on what I discussed in this post, so feel free to reach out on Twitter (@DrMattSams) or shoot me an email: samsperformancetraining@gmail.com
