There are two options for using Sit2Stand.ai: 1) Self-Assessment or 2) For Researchers.
Self-assessment consists of receiving instructions, uploading a single video, and receiving basic outputs (test time, maximum trunk flexion, and maximum trunk acceleration). With the "For Researchers" tool, multiple videos can be uploaded at once, and an email will be sent when results are ready. For this option, researchers can record their own videos and choose whether they want the basic outputs or extended outputs. Details can be found in our webinar and manuscript.
If you would like to customize our web application or processing code, you can do so from our GitHub repositories here.
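To make the basic outputs concrete, here is a minimal sketch of how they could be computed from a trunk-angle time series. The angle values and frame rate below are hypothetical placeholders, not Sit2Stand.ai's published algorithm; the real pipeline derives angles from pose-estimation key points.

```python
import numpy as np

# Hypothetical trunk-flexion angle series (degrees) sampled at 30 fps.
fps = 30.0
trunk_angle = np.array([0, 5, 18, 30, 22, 10, 3, 0], dtype=float)

# Test time: elapsed seconds from the first to the last frame.
test_time = (len(trunk_angle) - 1) / fps

# Maximum trunk flexion: peak angle over the trial.
max_flexion = trunk_angle.max()

# Maximum trunk acceleration: peak magnitude of the second
# derivative of the angle series (deg/s^2).
velocity = np.gradient(trunk_angle, 1.0 / fps)
acceleration = np.gradient(velocity, 1.0 / fps)
max_acceleration = np.abs(acceleration).max()

print(f"test time: {test_time:.2f} s, max flexion: {max_flexion:.0f} deg")
```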
What kind of device is required to use Sit2Stand.ai?
Any device equipped with a camera can be used to record the video. Recording can be done directly in the web app or separately. The video can then be uploaded to the web app from any device with an Internet connection.
After processing the videos with pose estimation, we calculate joint angles by computing the angle between key points in the camera plane (e.g., the knee angle is the angle between the ankle, knee, and hip key points). Additional details can be found in our manuscript and webinar.
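The angle-between-key-points computation described above can be sketched as follows. The coordinates are hypothetical 2D pixel positions; the function name and example values are illustrative, not taken from the Sit2Stand.ai code.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c in the camera plane.

    For the knee angle, a, b, c would be the ankle, knee, and hip
    key points returned by pose estimation.
    """
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

# A fully extended leg (ankle, knee, hip collinear) gives ~180 degrees.
print(round(joint_angle((100, 400), (100, 250), (100, 100)), 1))  # 180.0
```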
What are the optimal conditions for recording the sit-to-stand test?
We found that pose estimation performed best in a well-lit room, with one person in the frame, without background mirrors or highly reflective surfaces in which the participant can be seen, without obstructions in front of the participant, with the participant's full body in view, with the participant wearing fitted clothing, and with a standard chair without armrests, wheels, or thick cushioning. You can view the instructions we give to participants in our instruction video.
Can I use more than one camera?
Sit2Stand.ai is designed for video upload from one camera per capture. If you are interested in using multiple cameras, we recommend OpenCap.
Can Sit2Stand.ai integrate with external forces or a musculoskeletal model?
The current Sit2Stand.ai pipeline does not integrate external forces or a musculoskeletal model. However, you can build upon the Sit2Stand.ai pipeline to input the joint position key points into a musculoskeletal model (like the OpenCap pipeline) or add external force data.
Can I use Sit2Stand.ai to track progress over time?
While you can periodically upload videos and retrieve results, we have not yet collected scientific evidence to show that change in these variables is significant and clinically relevant. For that reason, we don't advise making any decisions based on periodic measurements.
Can I use Sit2Stand.ai to track multiple people at the same time?
Sit2Stand.ai only assesses the closest person to the camera. Therefore, it cannot currently
track multiple people at the same time.
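One common heuristic for picking the person closest to the camera is to take the detection with the largest bounding box, since nearer people occupy more of the frame. This is a sketch of that idea under stated assumptions, not necessarily the exact rule Sit2Stand.ai uses.

```python
def closest_person(detections):
    """Pick the detection assumed closest to the camera.

    Each detection is a bounding box (x, y, w, h) in pixels; the
    largest-area box is treated as the nearest person. This heuristic
    is an assumption, not the published Sit2Stand.ai selection rule.
    """
    return max(detections, key=lambda box: box[2] * box[3])

people = [(10, 10, 50, 120), (200, 5, 90, 220)]
print(closest_person(people))  # (200, 5, 90, 220)
```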
What are common issues for participants using Sit2Stand.ai?
We found that "in the wild," participants' videos varied in camera position and orientation. It was also common for participants to move out of the camera frame or to have an obstruction blocking part of their body.
What are other applications of the tool beyond osteoarthritis?
This tool was primarily designed to evaluate the relationship between the sit-to-stand test and osteoarthritis. However, there are a number of studies on how OpenPose, the underlying technology, can be used to quantify movement and provide meaningful information for decision-making. For example, see this paper for an application of a related technology.
What other factors should I consider when carrying out a digital biomechanics study?
Are the uploaded videos saved? How secure are they?
By using Sit2Stand.ai, you acknowledge that content submitted to this website is stored on Amazon Web Services (AWS) Cloud servers and that it will be available under a CC BY 4.0 license. Videos submitted to the website are transferred using the secure HTTPS protocol and are stored at a unique random location in the AWS cloud. Videos are periodically moved off the cloud servers.
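The "unique random location" idea can be sketched with a random UUID prefix on each upload path. The function name and key layout below are hypothetical; the actual Sit2Stand.ai storage scheme is not published.

```python
import uuid

def storage_key(filename):
    """Build a unique, hard-to-guess storage path for an uploaded video.

    A random UUID prefix keeps each upload at its own location, so one
    upload's path cannot be derived from another's. Illustrative only;
    not the real Sit2Stand.ai key scheme.
    """
    return f"uploads/{uuid.uuid4()}/{filename}"

print(storage_key("sit_to_stand.mp4"))
```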
What considerations should be made for ethical use and IRB approval?
All study procedures should be approved by the institution of the researcher submitting videos. To help with approval, researchers can anonymize the videos before submitting them.