Sit2Stand FAQ

The Web App

  1. How do I use Sit2Stand.ai?
    • There are two options for using Sit2Stand.ai: 1) Self-Assessment or 2) For Researchers. Self-Assessment consists of receiving instructions, uploading a single video, and receiving basic outputs (test time, maximum trunk flexion, and maximum trunk acceleration); an illustrative sketch of how such metrics can be computed appears after this answer. With the “For Researchers” tool, multiple videos can be uploaded at once, and an email is sent when the results are ready. For this option, researchers can record their own videos and choose between the basic outputs and extended outputs. Details can be found in our webinar and manuscript.
    • If you would like to customize our web application or processing code, you can do so from our GitHub repositories here and here.
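    • As a rough sketch of how such outputs could be derived from pose estimation results (the trunk-angle definition, frame rate, and finite-difference acceleration below are illustrative assumptions, not the exact definitions used by Sit2Stand.ai), consider a per-frame trunk flexion angle:

      import numpy as np

      def basic_outputs(trunk_angle_deg, fps=30.0):
          """Illustrative metrics from a per-frame trunk flexion angle (degrees)."""
          t = np.arange(len(trunk_angle_deg)) / fps        # frame time stamps (s)
          test_time = t[-1] - t[0]                         # clip duration as a stand-in for test time (assumption)
          max_flexion = float(np.max(trunk_angle_deg))     # peak trunk flexion (deg)
          # Angular acceleration via finite differences (deg/s^2).
          accel = np.gradient(np.gradient(trunk_angle_deg, t), t)
          max_accel = float(np.max(np.abs(accel)))
          return {"test_time_s": float(test_time),
                  "max_trunk_flexion_deg": max_flexion,
                  "max_trunk_acceleration_deg_s2": max_accel}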
  2. What kind of device is required to use Sit2Stand.ai?
    • Any device equipped with a camera can be used to record the video. Recording can be done directly in the web app or separately. The video can then be uploaded to the web app from any device with an Internet connection.
  3. What pose estimation algorithm is used?
    • Sit2Stand.ai uses OpenPose, the underlying pose estimation technology, to estimate joint position key points from each video frame. Additional details can be found in our manuscript and GitHub repository.
  4. How are joint angles calculated?
    • After processing the videos with pose estimation, we calculate joint angles by computing the angle between key points in the camera plane (e.g., the knee angle is the angle formed by the ankle, knee, and hip key points). Additional details can be found in our manuscript and GitHub repository.
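    • As a minimal sketch of this calculation (not the exact code from our repository; the example coordinates are made up), the angle at a joint can be computed from the 2-D image-plane vectors to its two neighboring key points:

      import numpy as np

      def joint_angle(a, b, c):
          """Angle (degrees) at key point b formed by key points a and c,
          e.g. (ankle, knee, hip) for the knee angle, using 2-D pixel coordinates."""
          a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
          v1, v2 = a - b, c - b                  # vectors from the joint to its neighbors
          cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
          return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

      # Example: knee angle from (x, y) pixel coordinates of ankle, knee, and hip.
      print(joint_angle((420, 900), (410, 700), (400, 480)))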
  5. What are the optimal conditions for recording the sit-to-stand test?
    • We found that pose estimation performed best in a well-lit room with only one person in view; without background mirrors or highly reflective surfaces in which the participant is reflected; without obstructions in front of the participant; with the participant’s full body in view and the participant wearing fitted clothing; and with a standard chair without armrests, wheels, or thick cushioning. You can view the instructions we give to participants in our instruction video.
  6. Can I use more than one camera?
    • Sit2Stand.ai is designed for video upload from a single camera per capture. If you are interested in using multiple cameras, we recommend OpenCap.
  7. Can Sit2Stand.ai integrate with external forces or a musculoskeletal model?
    • The current Sit2Stand.ai pipeline does not integrate external forces or a musculoskeletal model. However, you can build upon the Sit2Stand.ai pipeline to input the joint position key points into a musculoskeletal model (like the OpenCap pipeline) or add external forces.
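    • As one hedged illustration (the file layout and marker names below are assumptions, not a format required by OpenCap or any specific musculoskeletal tool), per-frame key points could be exported to a simple table for a downstream modeling or dynamics pipeline; note that most musculoskeletal pipelines ultimately expect 3-D marker data, so 2-D key points would need further processing:

      import csv

      def export_keypoints_csv(frames, marker_names, path, fps=30.0):
          """Write per-frame 2-D key points to a CSV for downstream processing.

          frames       : list of dicts mapping marker name -> (x, y) pixel coordinates
          marker_names : ordered list of marker names to export
          fps          : assumed video frame rate used to derive time stamps
          """
          with open(path, "w", newline="") as f:
              writer = csv.writer(f)
              header = ["time_s"]
              for name in marker_names:
                  header += [f"{name}_x", f"{name}_y"]
              writer.writerow(header)
              for i, frame in enumerate(frames):
                  row = [i / fps]
                  for name in marker_names:
                      x, y = frame.get(name, (float("nan"), float("nan")))
                      row += [x, y]
                  writer.writerow(row)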
  8. Can I use Sit2Stand.ai to track progress over time?
    • While you can periodically upload videos and retrieve results, we have not yet collected enough scientific evidence to show that changes in these variables are significant and clinically relevant. For that reason, we do not advise making decisions based on periodic measurements.
  9. Can I use Sit2Stand.ai to track multiple people at the same time?
    • Sit2Stand.ai only assesses the person closest to the camera. Therefore, it cannot currently track multiple people at the same time.
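    • As an illustration of this kind of heuristic (an assumption about the approach, not the exact Sit2Stand.ai code), one could pick the detection whose key points span the largest vertical extent in the frame, a rough proxy for being closest to the camera, from OpenPose-style JSON output:

      import json

      def closest_person(openpose_json_path, conf_threshold=0.2):
          """Pick the detected person with the largest vertical key-point extent.

          Assumes OpenPose-style JSON: {"people": [{"pose_keypoints_2d": [x, y, c, ...]}]}.
          """
          with open(openpose_json_path) as f:
              people = json.load(f).get("people", [])
          best, best_height = None, -1.0
          for person in people:
              kps = person["pose_keypoints_2d"]
              # Keep y-coordinates of key points detected with sufficient confidence.
              ys = [kps[i + 1] for i in range(0, len(kps), 3) if kps[i + 2] > conf_threshold]
              if len(ys) < 2:
                  continue
              height = max(ys) - min(ys)
              if height > best_height:
                  best, best_height = person, height
          return best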
  10. What are common issues for participants using Sit2Stand.ai?
    • We found that, “in the wild,” participants’ videos varied in camera position and orientation. It was also common for participants to move out of the camera frame or to be partially blocked by an obstruction.
  11. What are other applications of the tool beyond osteoarthritis?
    • This tool was primarily designed to evaluate the relationship between the sit-to-stand test and osteoarthritis. However, there are a number of studies on how OpenPose, the underlying technology, can be used to quantify movement and provide meaningful information for decision-making. For example, see this paper for an application of a related technology to track cerebral palsy.
  12. What other factors should I consider when carrying out a digital biomechanics study?

Security and Ethics

  1. Are the videos uploaded saved? What is the security of the saved videos?
    • By using Sit2Stand.ai, you acknowledge that content submitted to this website is stored on Amazon Web Services (AWS) Cloud servers and that it will be available under a CC BY 4.0 license. Videos submitted to the website are transferred using the secure HTTPS protocol and are stored at a unique random location in the AWS cloud. Videos are periodically moved from the cloud to a secure drive on the Stanford network. See our terms of use for additional information.
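    • As a hedged sketch of this kind of upload scheme (not the actual Sit2Stand.ai backend; the bucket name and key layout are placeholders), storing a file at a unique random location on AWS over HTTPS might look like:

      import uuid
      import boto3

      def upload_video(local_path, bucket="example-sit2stand-bucket"):
          """Upload a video to a unique, hard-to-guess S3 key (illustrative only)."""
          key = f"uploads/{uuid.uuid4()}/{local_path.rsplit('/', 1)[-1]}"
          s3 = boto3.client("s3")          # boto3 S3 endpoints use HTTPS by default
          s3.upload_file(local_path, bucket, key)
          return key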
  2. What considerations should be made for ethical use and IRB approval?
    • All study procedures should be approved by the institutional review board (IRB) of the researcher’s institution before videos are submitted. To help with approval, researchers can anonymize the videos before submitting them.
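    • As one hedged example of anonymization (a sketch only, not an endorsement of a specific tool or a guarantee of de-identification), faces can be detected and blurred frame by frame with OpenCV before upload:

      import cv2

      def blur_faces(in_path, out_path):
          """Blur detected faces in a video as a simple anonymization step (illustrative only)."""
          detector = cv2.CascadeClassifier(
              cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
          cap = cv2.VideoCapture(in_path)
          fps = cap.get(cv2.CAP_PROP_FPS)
          size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                  int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
          writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                  frame[y:y + h, x:x + w] = cv2.GaussianBlur(frame[y:y + h, x:x + w], (51, 51), 0)
              writer.write(frame)
          cap.release()
          writer.release()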