SPORTSCIENCE · sportsci.org


News & Comment / In Brief

This issue

 

   Guest Editorial: Sport Science–a Misfit. Our discipline straddles too many academic boundaries.

   Sad Stats.  No suitable statistical package for sport scientists. Plus tips and tricks for SPSS.

   Magnitude Matters. A slideshow from the 2006 ACSM meeting.

   Preparing Graphics for Publication. Tips and tricks for the Office platform.

   New and Updated Research Slideshows. Student supervision, dimensions of research, making inferences, controlled trials.

  Reprint pdf · Reprint doc

 

Guest Editorial: Sport Science–a Misfit

Stephen Seiler, Agder University College, Faculty of Health and Sport, Kristiansand 4604, Norway. Email. Sportscience 10, 55-56, 2006 (sportsci.org/2006/inbrief.htm#misfit). Published Dec 15, 2006. ©2006

Earlier this year I asked folks on the Sportscience list to tell me what Faculty or College their sport science, physical education or kinesiology program was organized under at their university or college.  I asked because a strange suggestion for faculty reorganization had come up where I work, and I was trying to gather some info to demonstrate just how strange the suggestion was.

The question seemed to touch a nerve, based on the number of requests I received to send a summary of the results. And I got a huge response: more than 70 replies.  It was clear that our discipline struggles with an identity problem within the greater university community. No big surprise there–we know who we are and what we do, but university administrators who draw (re)organizational charts often do not.  As the list below shows, sport science has been organized under just about every possible faculty one can imagine.  And at least 10% of the institutions are in the process of “reorganizing” at any given time, based on this snapshot. 

Some typical organizational structures did emerge. In fact, three structures accounted for over half the responses.  The most common was that we are part of a College of Education.  Based on this survey, this structure is now found almost exclusively in the US, where teacher education and a major in Physical Education are the historical basis, even as numerous non-teaching, health and exercise science-type majors have emerged over the last 20-30 years.  The second most common structure had sport science organized under a Health Sciences umbrella.  This solution was the most “international.”  The third common solution was for human movement studies or sport science to be organized as a freestanding Division/Faculty/School.  This sounds great if you are big enough to pull it off.

After that, well, read for yourself.  Sport science has been placed all over.  The general organizational leaning around the world is to align sport and physical education with the natural and biological sciences, and not social sciences.  However, it could be that the composition of this email list biases the findings in that direction.  Here is the breakdown of responses in detail:

   College of Education, 18.  One of these was named Institute of Education.  Another is moving to a Faculty of Health Science.

   Faculty, School or College of Health Sciences, Health Professions, Health and Human Services, 13.

   Freestanding Division, Faculty or School of Human Movement, Sport Sciences, Physical Education or Kinesiology, 9.

   Faculty of Medicine or Medical Science, 4.

   Faculty or Department of Life Sciences or Biological Sciences, 3.

   Faculty of Arts and Sciences, 2.

I had difficulty grouping the remainder.  Here they all are as singletons:

   School of Health and Natural Sciences - Department of Health and Human Performance

   School of Science and Engineering - School of Biological Sciences

   Institute for Systems and Membrane Biology

   Faculty of Computing, Health, and Science - School of Exercise

   School of Fine and Applied Arts - Department of Health, Leisure and Exercise Science

   Faculty of Arts, Education and Human Development

   Faculty of Social Sciences - Department of Exercise Science

   Faculty of Philosophy - Institute of Sport Science and Sport (at a traditional German University)

   Faculty of Science

   Faculty of Business - School of Leisure, Sport and Tourism

   Faculty of Applied Technology (respondent said that this included everything from accountancy to mechanical engineering)

   Faculty of Empirical Human Science (a subdivision of the Philosophical Faculty–also at a German University)

   Faculty of Philosophy - Institute for Sport Science (for now; in 2008 it closes and joins ranks with the German University of Sport in Cologne)

   College of Liberal Arts, Social Sciences Division - Department of Physical Education

   School of Humanities (along with English, History, Linguistics, Religious Studies, Philosophy, and Art History)

   School of Science and Technology - Department of Kinesiology

   Faculty of Science, Engineering and Health - School of Health and Human Performance

   Faculty of Science, Engineering and Health (was in Faculty of Arts, Health, and Sciences but got moved in a faculty rationalization)

   An undergraduate department in School of Health, with the graduate department in the Medical School

   One program was split such that the Sport Science program was under the Faculty of Science and the Physical Education and Sport Studies program was under the Faculty of Education

   Another program was actually split among three faculties:  Education, Science, and Arts

What was the suggestion for my faculty that I found so strange?  Well, a supposedly external commission wanted to split out the nursing program and merge it with a parallel nursing education program in another city (although still part of our university).  What was left of the former Faculty of Health and Sport (nursing education, food and nutrition, sport science and sport education, outdoor recreation, physical education, and continuing education in health related professions) would be moved to the Faculty of Art and Culture, along with music, dance, textile arts, etc.   A weird proposal, yes, but not unprecedented, judging by the responses to the survey.  Fortunately we managed to convince our political academicians that this reorganization would not be appropriate for students or staff.

 

Sad Stats

Will G Hopkins, Sport and Recreation, AUT University, Auckland 0627, New Zealand. Email. Sportscience 10, 56-57, 2006 (sportsci.org/2006/inbrief.htm#sad). Reviewed by Alan M Batterham, School of Health and Social Care, University of Teesside, Middlesbrough TS1 3BA, UK.  Published Dec 15, 2006. ©2006.  Reviewer's Commentary.

Update June 2015. As explained in detail in an In-brief item in the 2015 issue, the Zip-compressed file of files explaining how to use SPSS to do mixed modeling now contains an Excel file of data and another Word doc explaining how to import and analyze reliability data with one-way and two-way mixed models. See the article on spreadsheets for validity and reliability in the 2015 issue for a more detailed explanation of the reliability models. But wait, there's more… I have included Excel data and a Word doc explaining how to analyze a controlled trial in SPSS using change scores as the dependent variable. (The data and instructions originally in the Zip file were for a mixed model using the original pre and post scores, not the change scores.  Those files are still there.) The data are those in the spreadsheet for analysis of a controlled trial at this site, which I have included in the Zip file.

This year (2006) I made a serious attempt to identify a stats package that I could recommend to my research students and colleagues.  While none was good enough for a recommendation, SPSS was the least disappointing.  At the end of this editorial are some links to instructions and trial data for SPSS.

I particularly wanted a package that would do mixed modeling, an advanced form of linear modeling that allows you to specify and estimate sources of variability in your data.  Mixed modeling is great for the usual kind of continuous dependent variable when errors are different for different groups of subjects or when the error changes between measurements on the same subjects. The package I use is the Statistical Analysis System (SAS), and the mixed-model procedure (Proc Mixed) in SAS meets most of my needs, but I can't recommend it.  SAS is expensive (annual academic institutional licenses start at around US$10,000), it takes years to become an independent user of the full command-code version, and the interface is far from friendly. 
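As an aside, for readers who script their analyses, here is a minimal sketch of the kind of model I mean, in Python with the statsmodels package (not one of the packages reviewed here, and, like the simpler packages below, unable to specify different errors in different groups).  The data file and column names are hypothetical.

   # Sketch of a repeated-measures mixed model for a controlled trial.
   # Assumes a long-format dataset with hypothetical columns:
   # score, group (control/exptal), time (pre/post), subject.
   import pandas as pd
   import statsmodels.formula.api as smf

   data = pd.read_csv("trial.csv")  # hypothetical data file

   # Fixed effects for group, time and their interaction; a random
   # intercept for each subject captures between-subject variability.
   model = smf.mixedlm("score ~ group * time", data, groups=data["subject"])
   result = model.fit()
   print(result.summary())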

I started my quest as usual with a message to the Sportscience list.  People on the list suggested Statistica, Stata, SPSS, and JMP (one of the two menu-driven versions of SAS).  I chased up and tried the latest versions of all these packages.  A late entry is the package known simply as "R".  I had tried the mixed model in this package several years ago but was unable to crack the absurdly obscure code.  A graduate student at my last institution has recently assured me that it is worth the effort and is going to teach me how to use it in the New Year.  Stay tuned.

Statistica's version 7 was the most user-friendly package, but its mixed model was too simple.  I couldn't use it to specify different errors in different groups or random variation in slopes (random numeric effects). 

I didn't discover if the mixed model in Stata worked, because it was clear that this package was aimed at expert statisticians.  It also sported a thinly disguised saurian DOS interface.

The hype at the JMP website gave me hope, so I eagerly downloaded the 30-day free trial.  I had already experienced great disappointment a year or so ago with the menu-driven version of the main SAS package, the so-called Enterprise Guide.  Incredibly, the mixed model in the Guide platform was dysfunctional.  In any case, the Guide is part of the main SAS package, which is too expensive for many academics.  JMP is a lot cheaper. 

The hype was unjustified.  JMP turned out to be an honest but failed attempt at a new view of statistics.  In trying to avoid the usual statistical jargon, they developed an almost entirely new jargon that was equally confusing.  And I discovered that I couldn't dial up the customized estimates that I need routinely for controlled trials.  I tried with data consisting of pre and post measurements in a control and experimental group, with sex subgroups. There are two routes: via the parameter estimates for the model, and via least-squares means.  Well, the parameter estimates are impossibly complicated in JMP, because the modeling works properly only if you include all main effects and interactions less than the full sex*group*time interaction.  Alas, to combine all those parameters to get the difference between sexes in the difference between groups in the post-pre change is beyond my capabilities on the days when my IQ dips below 200, so I can't expect you folks to use it.  The least-squares-means route was more straightforward, but when you combine the levels you want, an inappropriate constant divisor is introduced that you can't suppress.  For example, when you dial up post-pre for control-exptal, you get half the correct answer!  I was using data that I had generated with known effects and that I analyzed in the full SAS package, which gave the right answer.  Goodness knows what JMP would give if you tried to dial up something like a post value minus the mean of two pre values for the exptal minus the control for females minus males. 
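For the record, the estimate in question reduces to a simple contrast of change scores, which you can verify with a few lines of code.  Here is a sketch in Python with made-up numbers, purely to illustrate the arithmetic that JMP halved:

   # Difference in the changes (exptal minus control), computed directly
   # from post-pre change scores.  The numbers are invented for illustration.
   import numpy as np
   from scipy import stats

   change_control = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
   change_exptal = np.array([2.4, 2.9, 2.1, 3.0, 2.6])

   # The customized estimate: mean change in exptal minus mean change in control.
   diff_in_changes = change_exptal.mean() - change_control.mean()

   # An unequal-variances t-test for the same contrast.
   t, p = stats.ttest_ind(change_exptal, change_control, equal_var=False)
   print(f"difference in the changes = {diff_in_changes:.2f}, p = {p:.3f}")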

JMP was more powerful than SPSS and Statistica for specifying random effects, but there was no option for different random effects in different groups.  I was hoping to access and tweak the command script in JMP to get the right estimates for fixed effects and more flexible random effects, but JMP's script is nothing like the Fortran/Basic of the main SAS package, and it was untweakable, by me anyway.  By the way, I could find no explanation of what JMP stands for.

And now, SPSS version 14…  Initially I could not get it to do simple difference-in-the-changes or other customized estimates from the group*time interaction in a controlled trial. But the mixed model was working with numeric random effects, so I hit on a novel way to use dummy variables to model the outcome of a treatment as a fixed and random effect.  For more information link to a Word doc and a zip-compressed folder of files for mixed modeling.  Adding in another between-subject effect (sex*group*time) would be too difficult, but if you ever reach this point with SPSS, read the article in this issue about using a spreadsheet for combining the outcomes from separate analyses of females and males.  The Word doc also explains how to use SPSS for descriptive stats, reliability, validity, and modeling of binary outcome variables.  The latter is ridiculously complicated in SPSS, but if you get this far, a zip-compressed folder of files for binary outcomes may be helpful.
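I can't reproduce the SPSS menus here, but a rough analogue of the dummy-variable trick, sketched in Python's statsmodels with hypothetical names, looks like this (whether it converges will depend on the data):

   # The dummy variable 'treat' is 0 for all pre measurements and for the
   # control group, and 1 for exptal post measurements.  As a fixed effect
   # it estimates the mean treatment effect; entering it also as a random
   # effect (re_formula) allows individual responses to the treatment.
   import pandas as pd
   import statsmodels.formula.api as smf

   data = pd.read_csv("trial.csv")  # hypothetical data file
   data["treat"] = ((data["group"] == "exptal") &
                    (data["time"] == "post")).astype(int)

   model = smf.mixedlm("score ~ treat + time", data,
                       groups=data["subject"], re_formula="~treat")
   result = model.fit()
   print(result.summary())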

Having produced these additional materials for use with SPSS, I nevertheless advise you not to use them for controlled trials: my most recent spreadsheets for controlled trials do a better job for continuous variables.  The spreadsheets limit you to one covariate at a time, but you can include an additional grouping covariate using another spreadsheet, as described in another article in this issue.  Now we need spreadsheets to perform generalized linear modeling of binary outcome variables.  I'm looking into it.
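Meantime, for anyone scripting their analyses, a generalized linear model for a binary outcome is straightforward in other environments.  A minimal sketch in Python's statsmodels, with hypothetical variable names:

   # Logistic regression (a generalized linear model) for a binary outcome.
   import numpy as np
   import pandas as pd
   import statsmodels.api as sm
   import statsmodels.formula.api as smf

   data = pd.read_csv("trial.csv")  # hypothetical: injured (0/1), group
   model = smf.glm("injured ~ group", data, family=sm.families.Binomial())
   result = model.fit()
   print(np.exp(result.params))  # exponentiated coefficients are odds ratios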

Reviewer's commentary.

 

Magnitude Matters

Will G Hopkins, Sport and Recreation, AUT University, Auckland 0627, New Zealand. Email. Sportscience 10, 58, 2006 (sportsci.org/2006/inbrief.htm#magnitude).  Reviewed by Dwight Thé, Syracuse University, Syracuse, NY 13244-5040.  Published Dec 20, 2006. ©2006.  Reviewer's Commentary

At long last, magnitude is becoming a buzzword in research analysis.  Guidelines for authors in biomedical and psychological disciplines are now including calls for reporting and interpretation of the magnitude of treatment outcomes and other effects.  For example, the International Committee of Medical Journal Editors advises authors at their website to show "specific effect sizes" and to "avoid relying solely on statistical hypothesis testing…, which fails to convey important information about effect size".  The Publication Manual of the American Psychological Association (5th edition, 2001) now has a section on "Effect Size and Strength of Relationship" and identifies 15 ways to express magnitudes, although I do not approve of some of these.  Generic measures of effect magnitude and their interpretation are also important when combining studies in a meta-analysis, and mention of effect measures occurs throughout the Cochrane Reviewers' Handbook. 

I had the opportunity to contribute to a symposium on Tradition and Innovation in Data Analysis at this year's annual conference of the American College of Sports Medicine.  I therefore opted to summarize some of the guidelines related to magnitude and to provide new insights and practical advice on calculation and presentation of effect magnitudes. My talk was entitled Magnitude Matters: Effect Size in Research and Clinical Practice, and you can download it in Powerpoint or PDF format.

In the slideshow I explain how the magnitude of an outcome is important when estimating sample size and when interpreting the clinical or practical importance of the outcome.  I then identify generic outcome measures that facilitate interpretation of magnitude: correlation coefficients, standardized differences and changes in means, and relative risk and other measures of relative frequency.  For each measure I identify a minimum clinically important difference and other thresholds of magnitude when used with patients, healthy individuals, and competitive athletes.
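By way of illustration, the three generic measures reduce to simple formulas.  Here is a sketch in Python with invented numbers (the magnitude thresholds themselves are in the slideshow, not reproduced here):

   # The three generic magnitude measures, computed for invented data.
   import numpy as np

   # Correlation coefficient between two variables.
   x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
   y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
   r = np.corrcoef(x, y)[0, 1]

   # Standardized difference in means: raw difference divided by the
   # between-subject standard deviation (here, pooled across groups).
   group_a = np.array([12.0, 14.0, 13.5, 15.0])
   group_b = np.array([16.0, 17.5, 16.5, 18.0])
   pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
   standardized_diff = (group_b.mean() - group_a.mean()) / pooled_sd

   # Relative risk: proportion affected in one group divided by the other.
   relative_risk = (6 / 50) / (15 / 50)

   print(r, standardized_diff, relative_risk)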

I made several changes to this item and the slideshow in response to the reviewer's comments. However, we disagree about use of variance explained to assess magnitude. 

Update July 30, 2010: A new article/slideshow on magnitudes of effects, including extensive material on linear models, is now available in the 2010 issue.

Update August 10, 2009: The slideshow now includes a complete set of magnitude thresholds (small, moderate, large, very large, extremely large) for competitive performance of individual athletes.  I derived the thresholds above small using the method of simulation described in Hopkins et al. (1999).  They are included in Hopkins et al. (2009), which you can cite if you use them; for more, see the item Progressive Statistics Updated in the 2009 issue of Sportscience.

Hopkins WG, Hawley JA, Burke LM (1999). Design and analysis of research on sport performance enhancement. Medicine and Science in Sports and Exercise 31, 472-485

Hopkins WG, Marshall SW, Batterham AM, Hanin J (2009). Progressive statistics for studies in sports medicine and exercise science. Medicine and Science in Sports and Exercise 41, 3-12

 

Preparing Graphics for Publication

Will G Hopkins, Sport and Recreation, AUT University, Auckland 0627, New Zealand. Email. Sportscience 10, 58-59, 2006 (sportsci.org/2006/inbrief.htm#graphic). Reviewed by Andrew Pinder, Health and Safety Laboratory, Buxton SK17 9JN, UK.  Published Dec 15, 2006. ©2006. Updated 26 Jan 2008.

Update Oct 2008. There is a major problem with copying graphs from Excel to Powerpoint in Office 2007. See the In-brief item in Sportscience 2008.

If you use Excel and Powerpoint to make figures and graphs for publication, you may have run into the problem of how to create them in a high-resolution format for on-line submission.  Last year, with help from the Sportscience list, I sorted out a strategy that works for the journal Medicine and Science in Sports and Exercise, but I omitted to include it as an In-brief item.  Herewith, a summary.

First, some advice on creating and editing figures…  I make them the same size as they will appear in the journal, to get everything looking right.  Axes and tick marks look better if the lines are ½ pt.  The lines and symbols in a line graph look better if they are a bit thicker, ¾ pt.  Make all lines continuous in Excel, then convert to dashed or dotted when you clean up the figure in Powerpoint.  Make the symbols on graphs big enough, about 6 pt.  Use Arial Narrow 9 or 10 pt for labels: select the whole figure, right-click and select Format Chart Area, select the fonts via the Font tab, and unselect Autoscale.

Paste graphs from Excel into Powerpoint to clean them up, as follows…  Copy the graph to the clipboard in Excel.  Set up a blank slide in Powerpoint, and make sure you have the Drawing toolbar active.  (Customize it for future use by adding Group, Ungroup, Bring to Front, Send to Back, Bring Forward, and Send Backward.)  Use Paste Special/Picture (Enhanced Metafile).  Ungroup the object twice, whereupon all picture elements will be selected.  Click to the side to unselect, then click on elements you want to modify.  It's easier if you delete the white background before editing any elements.  (You can't see the background, but you will see it selected when you click on it.)  Play with various combinations of the Shift, Ctrl and Alt keys when you click and/or drag, until you master the various tricks.  One trick you might not discover: move selected objects one pixel at a time by using the up, down and sideways arrows while pressing the Ctrl key.

To make a Powerpoint figure bigger for a slideshow, UNgroup all elements in the figure, then copy and paste into another Powerpoint file as an enhanced metafile.  (If you don’t ungroup before you copy, you get overlaid duplicates of the grouped elements when you paste, which can be troublesome.)  When you click and drag one corner to enlarge, you will find all elements including fonts increase in size.  Then ungroup (twice) and colorize text and objects.  You can use the same approach in reverse to convert a figure from a slideshow into a figure for publication.

When inserting a graphic into a Word doc, create a table with two cells in one column and invisible borders, then paste the graphic (as an enhanced metafile) into one of the cells.  Double-click on the graphic, click the Layout tab, and choose In line with text.  Change the vertical borders on the table to fit the graphic, then type a legend in the empty cell.  Change the horizontal position of the table by selecting it then Shift-click-dragging the borders, or by right-clicking, selecting Table Properties, then choosing the alignment you want.

Now, the high-resolution format for publication…  Convert the Powerpoint to a PDF using the Adobe Acrobat PDF editor.  (You may have to get a copy off someone.)  In the PDF, crop the blank space around each figure using the crop tool.  (Just draw a rectangle around each figure with the tool and hit Enter.)  This maneuver reduces the file size.  If the journal accepts only TIFF files, save the PDF as such: Adobe generates a new TIFF for each figure in the PDF, numbering them sequentially.  The files are manageable in size (2-4 MB) and have adequate resolution.  MSSE and presumably other journals also accept EPS files (one per figure).  These are preferable to TIFFs, because EPS is one of those "vector" formats that contains all the info needed to regenerate the figure perfectly at any resolution, and the files are small (100-200 KB).  The EPS file can contain embedded bitmap images too, of course.  Vector graphics packages other than Powerpoint (Corel Draw, Adobe Illustrator, Fireworks, Xara X) can also produce EPS files for publication.

Several people on the email list suggested saving figures as a TIFF directly from Powerpoint.  I tried this strategy in various ways, but it always produced images that were too pixelly.  There were also various suggestions for pasting into image editors via the clipboard, but the editors I tried (Photoshop, Fireworks, Microsoft Photo Editor) all produced similar unacceptably pixelly images.
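Incidentally, if you script your graphs rather than drawing them in Office (an approach this article doesn't cover), the vector-vs-bitmap distinction is the same.  A minimal sketch in Python's matplotlib, with illustrative sizes and file names (TIFF output requires the Pillow package):

   # Export the same figure as vector EPS and as high-resolution bitmap TIFF.
   import matplotlib.pyplot as plt

   fig, ax = plt.subplots(figsize=(3.5, 2.5))  # journal column width, in inches
   ax.plot([0, 1, 2, 3], [10, 12, 11, 14], marker="o", linewidth=0.75)
   ax.set_xlabel("Trial")
   ax.set_ylabel("Power (W)")

   fig.savefig("figure1.eps")            # vector: regenerates at any resolution
   fig.savefig("figure1.tiff", dpi=600)  # bitmap: resolution fixed by dpi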

 

New and Updated Research Slideshows: Student Supervision, Dimensions of Research, Making Inferences, Controlled Trials

Will G Hopkins, Sport and Recreation, AUT University, Auckland 0627, New Zealand. Email. Sportscience 10, 59, 2006 (sportsci.org/2006/inbrief.htm#slides). Reviewed by Steve Olivier, Social and Health Sciences, University of Abertay Dundee, Dundee, DD1 1HG, UK.  Published Dec 15, 2006. ©2006

Update Feb 27, 2007: I have made some improvements to the slideshows on student supervision and dimensions of research, as suggested by the reviewer.

Earlier this year I was invited to present a talk on the topic of Establishing and Maintaining the Relationship between Student and Supervisor.  The talk was aimed at new supervisors of research students in one of the tertiary institutions I visit regularly.  The talk consisted mainly of a summary of research on the topic that I was able to access via Google Scholar, but I included advice based on my own experience.  Download the talk in Powerpoint or PDF format.

I also ran a workshop recently about research design and analysis for a national sports institute.  In the process I created or updated several slideshows that may be useful for your own understanding and teaching about research, as follows.

The slideshow for the 2002 article about Dimensions of Research now includes two more legitimate kinds of research project: a review of literature, and development or investigation of a method.  Download the updated slideshow in Powerpoint or PDF format. 

The slideshow about clinical vs statistical significance is improved somewhat and is now called Making Inferences. Download in Powerpoint or PDF format.  See also the article by Batterham and Hopkins on this topic in last year's issue.

The article about the different kinds of controlled trial now has a short slideshow (in Powerpoint only) summarizing the key points.

————