Chong R*
Department of Physical Therapy, Georgia Regents University, Augusta, Georgia, USA
Corresponding Author: Raymond Chong, Department of Physical Therapy, Georgia Regents University, Augusta, Georgia, USA; E-mail: rchong8@hotmail.com
Received October 15, 2013; Accepted October 17, 2013; Published October 19, 2013
Citation: Chong R (2013) Analyses of Data: Single Trials versus Averaging. J Nov Physiother 3:e131. doi:10.4172/2165-7025.1000e131
Copyright: © 2013 Chong R. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Should we use average values when we take multiple measurements in our test subjects or patients? Does the answer depend on the research or clinical question? Humans have a great capacity to adapt and learn. We also exhibit variability in our movements and responses. This variability arises in part because we become more accurate with repeated attempts at a task. Even in activities that are considered well-learned, such as keeping our balance during standing or walking, we occasionally still observe a first-trial response that differs from subsequent attempts. Regardless of the question of interest, I propose to you that this adaptive behavior is important to capture and analyze. If it is present, we should report it. Why? Because that is who and how we are. That is what humans do.
In research, it is sometimes unclear whether adaptation is assessed prior to aggregate analyses. Researchers typically use one of several methods to deal with its potential presence. Here I highlight two routines that researchers commonly apply:
The first is averaging identical consecutive trials, whereby the researcher simply uses the mean value for comparisons across groups of subjects or treatment effects. Averaging assumes, to a certain extent, that the trial-to-trial behavior is random. This assumption is often overlooked. If something systematic is going on, e.g., a trial-by-trial change in response due to adaptation, learning, or fatigue, potentially valuable clinical information is lost when the trials are averaged.
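To make this concrete, here is a minimal sketch in Python (an arbitrary tooling choice; the scores are hypothetical) of two subjects whose three consecutive trials average to the same value even though their trial-by-trial behavior is opposite:

```python
from statistics import mean

# Hypothetical scores for three identical consecutive trials
subject_a = [60.0, 70.0, 80.0]   # systematic improvement (adaptation)
subject_b = [80.0, 70.0, 60.0]   # systematic decline (e.g., fatigue)

for label, trials in (("A", subject_a), ("B", subject_b)):
    change = trials[-1] - trials[0]   # simple first-to-last change
    print(f"Subject {label}: mean = {mean(trials):.1f}, "
          f"trial 1 -> 3 change = {change:+.1f}")
```

Both subjects average to 70.0, yet one is adapting while the other is deteriorating; a mean value alone cannot distinguish them.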
The second routine is randomizing the order of testing. Some tests are prescribed in a particular order, but researchers sometimes do not follow the sequence and instead randomize the order of the tests, presumably to eliminate the learning effect associated with the assessment. For reasons that are often not explained, researchers do not wish to study this effect or test for its occurrence. Randomization, however, may confound the experimental design.
Let me illustrate with an example. There is a computerized postural sway test called the Sensory Organization Test (SOT). It has six conditions, and each condition is tested three times in a block, for a total of 18 trials. The conditions are administered in ascending order of difficulty: three trials of condition one (the easiest) are completed first, followed by three trials of condition two, and so on until the sixth and most challenging condition is completed. Subjects are known to adapt to the SOT, and there are good reasons not to randomize the test order.
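For orientation, here is a minimal sketch (in Python, with illustrative condition and trial labels only) of the prescribed 18-trial schedule next to a randomized alternative:

```python
import random

# Prescribed SOT schedule: six conditions in ascending order of
# difficulty, three consecutive trials per condition
prescribed = [(cond, trial) for cond in range(1, 7) for trial in range(1, 4)]

# A randomized alternative, as some researchers administer it
randomized = prescribed.copy()
random.shuffle(randomized)

print("prescribed:", prescribed[:6], "...")  # condition 1 always comes first
print("randomized:", randomized[:6], "...")  # any condition may come first
```

Under randomization, nothing prevents a subject from facing condition six before condition one, which is exactly the problem raised below.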
Firstly, and most importantly, randomization means a subject may experience a more difficult condition before an easier one. Do you see the problem here? The adaptation effect in the easier trials is now magnified, while the opposite is the case for the more difficult trials.
Secondly, the SOT happens to be a recognized balance control assessment tool. Following the standard protocol makes follow-up studies possible, thereby affording the results of a study clinical relevance and application. Results from different studies can also be directly compared if the test sequences are kept the same.
Thirdly, by keeping the testing order as prescribed, first- versus third-trial performance in each condition can be studied (see the sketch after this list of reasons). Not all subjects display adaptive behavior. For those whose performance is abnormal, we should find out why.
Fourthly, and this is more of a question than a reason: why ignore the remarkable adaptive behavior of humans? Why wash it out with randomization and potentially confound the study?
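As flagged under the third reason, here is a minimal sketch of a first- versus third-trial comparison, assuming Python with SciPy and hypothetical equilibrium scores from eight subjects on a single SOT condition:

```python
from scipy import stats

# Hypothetical equilibrium scores (0-100, higher = more stable)
trial_1 = [52, 48, 55, 60, 45, 50, 58, 47]
trial_3 = [61, 57, 62, 66, 55, 58, 63, 54]

# Paired test: does the third trial differ systematically from the first?
t_stat, p_value = stats.ttest_rel(trial_3, trial_1)
gains = [b - a for a, b in zip(trial_1, trial_3)]

print(f"mean trial 1 -> 3 gain: {sum(gains) / len(gains):.1f} points")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

Subjects with zero or negative gains on an otherwise adaptable condition are precisely the ones worth following up.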
When researchers analyze trials separately rather than averaging or randomizing them, they often observe adaptive behavior, for example in balance control during stance [1-9] and walking [10], as well as in other activities including gymnastics [11], dual-tasking [12,13], upper extremity visuomotor adaptation [14], postural stability [15], and reaction time [16]. Contradictory statistical outcomes may occur when trials are analyzed as an average versus separately, creating uncertainty in the research and clinical communities. The contradictions may not be appreciated unless the methodologies are scrutinized in detail [12]. They may also produce inconclusive results in meta-analyses.
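Such contradictions are easy to produce. In the following sketch (again Python with SciPy, entirely hypothetical data), two groups share the same overall mean, so a comparison of averaged trials finds nothing, while the single-trial comparisons reveal opposite group differences on the first and third trials:

```python
from statistics import mean
from scipy import stats

# Hypothetical per-subject scores [trial 1, trial 2, trial 3]:
# controls adapt across trials; patients stay flat
controls = [[50, 60, 70], [48, 59, 69], [52, 61, 72],
            [49, 58, 68], [51, 62, 71], [50, 60, 70]]
patients = [[60, 60, 60], [59, 61, 59], [61, 60, 62],
            [58, 59, 60], [62, 61, 61], [60, 60, 59]]

# Group comparison on per-subject averages
t_avg, p_avg = stats.ttest_ind([mean(s) for s in controls],
                               [mean(s) for s in patients])
print(f"averaged trials: t = {t_avg:.2f}, p = {p_avg:.3f}")

# Group comparison on each trial separately
for i in range(3):
    t_i, p_i = stats.ttest_ind([s[i] for s in controls],
                               [s[i] for s in patients])
    print(f"trial {i + 1} alone:   t = {t_i:.2f}, p = {p_i:.3f}")
```

A reviewer who saw only the averaged result would conclude the groups are equivalent; a trial-by-trial analysis shows they differ in opposite directions at the start and end of testing.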
Researchers and clinicians must be willing to cast a wider net in their evaluation of test subjects and patients. They should assess for the presence of adaptation before averaging the data. In journals such as JNP, there is no page limit to constrain the amount and level of detail of the data analysis. Such analytical practices should therefore be encouraged. The result should be a more thorough understanding of how humans perform and adapt to task demands. As this knowledge is communicated rapidly and freely in an open-access forum, researchers and clinicians can quickly incorporate the findings into research and clinical practice.
References
|