
Exercise As the Sum of Our Choices Between Behavioral Alternatives: the Decisional Preferences in Exercising (DPEX) Test

Psychology of Sport and Exercise (2024)

University of Potsdam

Abstract
Exercising can be theorized as the result of choosing one behavior over alternative behaviors. The Decisional Preferences in Exercising (DPEX) test is a computerized, easy-to-use, publicly available (open-source Python code: https://osf.io/ahbjr/), and highly adaptive research tool based on this rationale. In the DPEX, participants complete a series of trials in which they choose between two images by pressing a key on the computer keyboard: one image shows a physical exercise and the other a non-exercise behavioral alternative. Pairings are randomly assembled, trial by trial, from two definable pools of stimuli. The test can be scored either with a crossed random effects model (facilitating the use of different stimulus material in different studies without compromising the comparability of test scores) or with a simple proportion score. Data from diverse study samples (N = 451) showed strong correlations of DPEX scores with past and future exercise behavior (r = 0.42 and 0.47, respectively) as well as with affective experiences with exercise (e.g., 'pleasure-displeasure': r = 0.47). DPEX test scores discriminated between exercisers and non-exercisers in a receiver operating characteristic (ROC) curve analysis. The DPEX may be used to examine research questions derived from dual-process theories, to test the effects of psychological states on behavioral choices, or to evaluate the effects of behavior change interventions. The DPEX also helps to avoid common method bias in the assessment of exercise behavior, for example, when psychological variables are measured with questionnaires.
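
The trial logic and the simple proportion score can be illustrated with a minimal Python sketch. This is not the published OSF implementation: the pool contents, the function names (assemble_trials, proportion_score), and the simulated responses are hypothetical, and a real administration would display the paired images and record key presses with an experiment library such as PsychoPy.

import random

# Two definable stimulus pools, as in the DPEX rationale; the file
# names here are placeholders for the actual image material.
exercise_pool = ["jogging.png", "swimming.png", "cycling.png"]
alternative_pool = ["watching_tv.png", "reading.png", "napping.png"]

def assemble_trials(n_trials, rng):
    """Randomly pair one exercise image with one non-exercise
    alternative on each trial."""
    trials = []
    for _ in range(n_trials):
        pair = [rng.choice(exercise_pool), rng.choice(alternative_pool)]
        rng.shuffle(pair)  # randomize which side the exercise image appears on
        trials.append(tuple(pair))
    return trials

def proportion_score(chose_exercise):
    """Simple proportion score: the share of trials on which the
    exercise image was chosen (0 = never, 1 = always)."""
    return sum(chose_exercise) / len(chose_exercise)

rng = random.Random(42)
trials = assemble_trials(20, rng)

# Placeholder for data collection: simulate a participant who picks
# the exercise image on roughly 60% of trials; a real session would
# present each pair and log the key press instead.
chose_exercise = [1 if rng.random() < 0.6 else 0 for _ in trials]

print(f"DPEX proportion score: {proportion_score(chose_exercise):.2f}")

The alternative scoring via a crossed random effects model (participant effects crossed with stimulus effects) would typically be fit with a mixed-model package such as lme4 in R; as the abstract notes, this is what allows different studies to use different stimulus material without compromising the comparability of test scores.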
Key words
Exercise behavior, Decisional preferences, Motivation, Dual processing