
1 Introduction

In the past few decades, a great deal of research has been carried out on yoga. As a result, a few applications have been developed that provide daily guidance on yoga, and a few databases have been built that contain collections of different types of yoga activities. More recently, research on yoga has taken a different turn, with systems being developed for real-time posture monitoring and for analyzing human body temperature and pressure during yoga.

1.1 Significance

This paper mainly focuses on a real-time system that addresses posture monitoring while performing yoga, based on comparing the gathered data with existing yoga data stored in a database.

The proposed system monitors the posture of yoga aasanaas in real time, without guidance from a human expert, while the practitioner performs the different steps of an aasanaa. The system uses its underlying knowledge about the postures of each aasanaa as a reference against which the real-time practitioner is compared, and thus monitors the posture. In summary, the system helps practitioners practice yoga smoothly without any human expert guidance.

1.2 Pose Estimation Algorithm with Bottom-Up Approach for 3D Videos

Human pose estimation aims at predicting the positions of human body parts and joints in images or videos. Since gestures are frequently determined by specific body configurations, perceiving the posture of the human body is critical for movement recognition.

In general, pose estimation methods can be grouped into bottom-up and top-down approaches. Bottom-up approaches estimate the body joints first and then cluster them to form individual poses; these methods were pioneered by DeepCut. Top-down approaches run a person detector first and then estimate the body joints inside the detected bounding boxes.
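As an illustration of keypoint-based pose estimation, the minimal sketch below extracts 2D body landmarks from a single frame. It assumes the MediaPipe Pose and OpenCV libraries; the paper does not prescribe a particular pose estimator, so this is only a hedged, illustrative example.

```python
# Minimal sketch: extract 2D body keypoints from one frame.
# Assumes MediaPipe Pose and OpenCV are available; the paper does not
# prescribe a specific pose estimator, so this is illustrative only.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_keypoints(frame_bgr):
    """Return a list of (x, y, visibility) landmarks in normalized image coordinates."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return []
    return [(lm.x, lm.y, lm.visibility) for lm in results.pose_landmarks.landmark]

if __name__ == "__main__":
    image = cv2.imread("practitioner_frame.jpg")  # hypothetical input frame
    keypoints = extract_keypoints(image)
    print(f"Detected {len(keypoints)} landmarks")
```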

3D human pose estimation is used to predict the positions of body joints in 3D space. In addition to the 3D pose, some approaches also recover a 3D human mesh from images or videos. This area has attracted considerable attention in recent years, since it provides rich 3D structural information about the human body. It can be applied to numerous applications, such as 3D animation, virtual or augmented reality, and 3D action analysis. 3D human pose estimation can be performed on monocular images or videos.

Most approaches use an N-joint rigid kinematic model, in which the human body is represented as an object with joints and limbs, comprising the kinematic structure of the body and information about its shape.

Here are three kinds of models for human body modeling:

The kinematic model, also known as the skeleton-based model, is used for both 2D and 3D pose estimation [9]. This flexible and intuitive human body model comprises a set of joint locations and limb orientations that represent the structure of the human body. Consequently, skeleton-based pose estimation models are used to capture the relationships among different body parts [10]. However, kinematic models are limited in representing surface or contour information, as shown in Fig. 1.

Fig. 1
Three illustrations depict the human body models, namely, kinematic, planar, and volumetric.

Types of models for human body modeling

The planar model, or contour-based model, is used for 2D pose estimation. Planar models are used to represent the appearance and shape of the human body [11]. Typically, body parts are represented by several rectangles approximating the contours of the human body [12]. A well-known example is the active shape model (ASM), which captures the full human body shape and its contour deformations by means of principal component analysis, as shown in Fig. 1.

The volumetric model is used for 3D pose estimation [13]. Several standard 3D human body models are used in deep-learning-based 3D human pose estimation for recovering the 3D human mesh [14, 15]. For example, GHUM and GHUML(ite) are fully trainable end-to-end deep learning pipelines, trained on a high-resolution dataset of full-body scans of over 60,000 human configurations, that model statistical and articulated 3D human body shape and pose, as shown in Fig. 1.
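To make the kinematic (skeleton-based) representation concrete, the sketch below shows one possible data structure for a joint tree with parent links, from which limb orientations can be derived. The joint names and hierarchy are illustrative assumptions, not a model prescribed by the cited works.

```python
# Illustrative sketch of a kinematic (skeleton-based) body model:
# each joint stores its position and a reference to its parent,
# so limb orientations can be derived from parent-child pairs.
# Joint names and hierarchy below are assumptions for illustration.
from dataclasses import dataclass
from typing import Optional, Dict, Tuple

@dataclass
class Joint:
    name: str
    position: Tuple[float, float]   # (x, y); extend to (x, y, z) for 3D
    parent: Optional[str] = None    # name of the parent joint, None for the root

def limb_vector(skeleton: Dict[str, Joint], joint_name: str) -> Optional[Tuple[float, float]]:
    """Orientation of the limb connecting a joint to its parent."""
    joint = skeleton[joint_name]
    if joint.parent is None:
        return None
    px, py = skeleton[joint.parent].position
    x, y = joint.position
    return (x - px, y - py)

# A tiny example skeleton (one arm only).
skeleton = {
    "shoulder": Joint("shoulder", (0.0, 0.0)),
    "elbow": Joint("elbow", (0.3, 0.1), parent="shoulder"),
    "wrist": Joint("wrist", (0.5, 0.3), parent="elbow"),
}
print(limb_vector(skeleton, "wrist"))  # orientation of the forearm
```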

2 Literature Survey

Yoga is an activity that boosts physical and mental health. Various aasanaas are performed in yoga, and they should be practiced under the guidance of a yoga expert. If yoga is done in the absence of an expert, mistakes may occur that lead to physical problems. In this context, various systems have been designed for posture monitoring, yoga databases, measuring the effectiveness of yoga, and verifying correct posture based on body temperature, blood pressure, and similar signals.

Thangavelu and Mani [1] proposed a real-time monitoring system for yoga practitioners, which monitors yoga activity. Bowyer and Kevin [2] surveyed approaches to three-dimensional face recognition, demonstrating ways of recognizing points on the face. Dileep and Danti [3] demonstrated a lines-of-connectivity face model for recognizing human facial expressions, which explains how to identify facial points for distinguishing different expressions. Lee et al. [4] demonstrated a posture monitoring system for preventing physical illness in smartphone users, which addresses the negative impact of the posture adopted while looking at a mobile phone. Islam et al. [5] worked on yoga posture recognition by detecting human joint points in real time using the Microsoft Kinect, where points are identified for detection. Patsadu et al. [6] worked on human gesture recognition using the Kinect camera, which demonstrates the type of data gathered. Obdržálek et al. [7] proposed real-time human pose detection and tracking for telerehabilitation in virtual reality, which describes the processing involved in pose detection. Lee and Nguyen [8] demonstrated human posture recognition using the human skeleton provided by Kinect.

The prime focus of this paper is monitoring body posture while performing yoga using video, by comparing it with the existing data in the database, i.e., the training set. The yoga sequence used in this paper is Suryanamaskar, which consists of 12 different steps, and the proposed system is built to identify the postures in it. The rest of this paper is organized as follows: Sect. 3 describes the proposed methodology, Sect. 4 presents the proposed algorithm, Sect. 5 deals with experimental analysis and assumptions, Sect. 6 draws the conclusions and discussion, and finally, Sect. 7 provides the future scope of the proposed approach.

3 Proposed Methodology

The methodology of the proposed system is broadly classified as follows:

  a.

    Posture capturing via video: While the practitioner performs the aasanaas, data is captured from a high-definition camera fixed to the left side of the yoga practitioner. The camera captures the video and monitors body posture positions such as bending of the arms or waist at a given angle, keeping the knees straight in some positions, keeping the entire body straight, parallel alignment of the head and palms, and the inclination of the hands or legs. The captured data is then matched with the data stored in the database for the particular aasanaa, i.e., the training set. A minimal capture sketch is given after this list.

  b.

    Offline data storage in terms of templates: The data about the aasanaa for which the system is built has to be organized as a series of steps, and the activities to be performed in the aasanaa must be described clearly and stored in a database. The database used in the proposed system must be structured so that its data can be compared with the real-time data in suitable formats. Therefore, the selection, design, and construction of the database must be done carefully.
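The following sketch illustrates how frames could be captured from a side-mounted camera and how per-frame joint angles could be stored as an offline template. The angle-extraction callback, the file name, and the JSON template format are assumptions made only for illustration, not the authors' storage design.

```python
# Sketch: capture video frames from a side-mounted camera and store
# per-frame joint angles as an offline template for one aasanaa.
# The angle-extraction callback and the JSON template format are assumptions.
import json
import cv2

def record_template(camera_index: int, aasanaa_name: str, extract_angles, max_frames: int = 300):
    """Capture frames and save the extracted joint angles as a template."""
    capture = cv2.VideoCapture(camera_index)
    template = {"aasanaa": aasanaa_name, "frames": []}
    try:
        for _ in range(max_frames):
            ok, frame = capture.read()
            if not ok:
                break
            angles = extract_angles(frame)  # hypothetical: returns {joint_name: angle_in_degrees}
            template["frames"].append(angles)
    finally:
        capture.release()
    with open(f"{aasanaa_name}_template.json", "w") as f:
        json.dump(template, f)
    return template
```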

The particular type of yoga on which the proposed system is implemented is Suryanamaskar.

3.1 Suryanamaskar

Prerequisite: In the world of yoga, Suryanamaskar is a basic-level activity. It should be performed on a flat floor with a yoga mat. The prerequisite posture for performing Suryanamaskar is as follows:

Stand straight on the floor and ensure that both feet are together along a reference line. All steps should be performed with respect to this reference line in order to follow the proper method of Suryanamaskar. The sequence of steps in Suryanamaskar is shown in Fig. 2.

Fig. 2
A photograph depicts silhouettes of a person doing the 12 steps of Suryanamaskar against a setting sun backdrop.

Sequence of steps in Suryanamaskar

  • Step 1: Namaskarasana

  • In this aasanaa, bring both hands in front of the chest, press the palms together firmly, and keep the breathing normal.

  • Step 2: Urdhvasana

  • In this step, raise both hands upward and raise the head in proportion with the hands at a 90-degree inclination; inhale and hold the breath in that position for a few seconds.

  • Step 3: Hastapadasana

  • In this step, place both palms flat on the floor at shoulder width, keeping the knees straight and both palms and toes on the proper reference line; exhale and hold for a few seconds.

  • Step 4: Ekapada Prasaranasana (right or left leg alternatively)

  • In this step, move one leg backward and land on the floor only on the toes, placing the knee of that leg on the floor (no weight should be applied to the knee; it should just touch the floor); keep the breathing normal.

  • Step 5: Dwipada Prasaranasana

  • In this step, move the other foot next to the first leg as in step 4, and keep the body straight like a stick, parallel to the ground. Now the entire body weight rests on the palms and toes. Keep the breathing normal and hold the position for a few seconds.

  • Step 6: Bhoodharasana

  • In this step, using the strength of the entire body, bring the feet fully to the ground by raising the hips upward. The body takes on an inverted “V” shape. Stay in the same position for a few seconds with normal breathing.

  • Step 7: Saashtaangapraneepaataasana

  • In this step, bring the body into a horizontal position with respect to the floor by touching only the forehead, chest, and knees to the floor (the body weight should remain on the palms and toes); inhale and hold the position for a few seconds.

  • Step 8: Bhujangaasana

  • In this step, raise the head and shoulders and look upward; in this position the waist should be close to the floor; inhale and hold the position for a few seconds.

  • Step 9: Bhoodharasana

  • Perform step 6.

  • Step 10: Ekapada Prasaranasana (right or left leg alternatively)

  • Perform step 4 for the corresponding leg in reverse order.

  • Step 11: Hastapadasana

  • Perform step 3 for the corresponding leg in reverse order.

  • Step 12: Namaskarasana

  • Return to Namaskarasana, i.e., step 1.

The proposed methodology of the system consists of identifying the necessary points on the human image, as shown in Fig. 3, and establishing a relationship between these points in terms of angles while the aasanaa is performed.

Fig. 3
An illustration depicts the silhouette of a person with a total of 19 marked points on the whole body.

Identification of points

In this paper, capturing is done from the left or right side view of the practitioner, so the points are identified either in the left side view or in the right side view. The identification points for the left side view are 1, 2, 4, 5, 7, 9, 11, 13, 15, 17, 19, and for the right side view 1, 2, 3, 5, 6, 8, 10, 12, 14, 16, 18, respectively. In the present work, capturing is done from the left side of the practitioner, and the points are identified as shown in Fig. 4.

Fig. 4
An illustration of the side profile of a human body with a total of 19 marked points.

Left side viewpoints

In the next step, the angle between various points is measured and recorded for each aasanaa. In Fig. 5, the angles are measured for Bhoodharasana as a sample for presentation; similarly, the angles for each aasanaa are measured and recorded. The angles are measured using vectors formed from the identified points; a small angle-computation sketch is given after Fig. 5.

Fig. 5
An illustration depicts a continuous line pattern with angles formed from various points on the silhouette of a person doing the Bhoodharasana step.

Identification of angles
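As a minimal sketch of the vector-based angle measurement, the function below computes the angle at a middle joint from three identified points; the coordinates used in the example are hypothetical.

```python
# Sketch: compute the angle (in degrees) at joint B formed by points A-B-C,
# using the vectors B->A and B->C. Coordinates in the example are hypothetical.
import math

def joint_angle(a, b, c):
    """Angle at point b formed by the segments b-a and b-c, in degrees."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    if norm == 0:
        raise ValueError("Degenerate joint: two points coincide")
    cos_theta = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_theta))

# Example: angle at the hip joint (image coordinates, y increasing downward).
shoulder, hip, knee = (0.2, 0.6), (0.5, 0.2), (0.8, 0.6)
print(round(joint_angle(shoulder, hip, knee), 1))
```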

4 Proposed Algorithm

  • Step 1: Read the input video data from both cameras, i.e., the left and front cameras, and the breath data from the attached breathing sensor, and store them instantaneously. (The first reading is taken as the training set and stored in the database; step 1 is then repeated for the testing set.) The activities performed while reading the video data are the identification of the points, given by θn, and the time spent in a particular posture, Tn, measured in milliseconds.

  • Step 2: Verify the mapping between the front camera data, given by βF, and the left side camera data, given by βL, for the training set, given by

    $$ \beta_n = \mathrm{MappingFunction}\left( \beta_F, \beta_L \right) $$
    (1)
  • Step 3: Compare the data collected from the front and left side cameras with the training set data instantaneously, once the system has read the identification points for the testing set video data βn. The difference in angle between the training and testing sets is given by

    $$ \mu_n = \mathrm{MatchDistance}\sum_{i = 0}^{n - 1} \left( \theta_i - \beta_i \right) $$
    (2)
  • Step 4: Verify the breath data for each posture, given by BTn at time intervals Tn, by comparing it with the training set data, given by Bn at time intervals Tn. The difference, denoted BTRn at time intervals Tn, is as follows:

    $$ \mathrm{BTR}_n = \mathrm{MatchDistance}\sum_{i = 0}^{n - 1} \left( B_i - \mathrm{BT}_i \right) $$
    (3)
  • Step 5: Set the allowable threshold value, in terms of angle for the video data and in terms of BPM for the breath data, for the successful completion of a posture. Threshold values are included because practitioners have differently sized body structures. The calculation can be done as shown below:

    $$ \text{For video data:}\quad \mu_n = \theta_n + 10 $$
    (4)

    Repeat steps 1 to 5 for each aasanaa.

  • Step 6: Calculate the result, in terms of the percentage of successfully completed postures, by consolidating the results for all the aasanaas; prepare a report in a proper format and display it to the practitioner. A sketch of the comparison and threshold checks in steps 3–6 is given after this list.
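The sketch below illustrates, under stated assumptions, how steps 3 to 6 could be realized: the per-posture angle differences, the breath differences, the threshold checks, and the final percentage. The breath threshold, data layout, and function names are assumptions made for illustration, not the authors' implementation.

```python
# Sketch of steps 3-6: compare testing-set angles and breath data against
# the training set, apply thresholds, and report the percentage of postures
# completed successfully. The breath threshold and data layout are assumptions.
ANGLE_THRESHOLD_DEG = 10.0   # allowable angle deviation per posture (Eq. 4)
BREATH_THRESHOLD_BPM = 5.0   # assumed allowable breath-rate deviation

def posture_passes(train_angles, test_angles, train_bpm, test_bpm):
    """Check one posture: all joint angles and the breath rate must stay within the thresholds."""
    angle_ok = all(abs(t - s) <= ANGLE_THRESHOLD_DEG for t, s in zip(train_angles, test_angles))
    breath_ok = abs(train_bpm - test_bpm) <= BREATH_THRESHOLD_BPM
    return angle_ok and breath_ok

def completion_percentage(training_set, testing_set):
    """training_set/testing_set: list of (joint_angles, bpm) tuples, one entry per posture."""
    passed = sum(
        posture_passes(tr_angles, te_angles, tr_bpm, te_bpm)
        for (tr_angles, tr_bpm), (te_angles, te_bpm) in zip(training_set, testing_set)
    )
    return 100.0 * passed / len(training_set)

# Example with two postures (hypothetical numbers).
training = [([165.0, 90.0], 14.0), ([73.0, 170.0], 15.0)]
testing = [([160.0, 95.0], 16.0), ([60.0, 168.0], 15.5)]
print(f"{completion_percentage(training, testing):.0f}% of postures completed")
```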

5 Experimental Analysis and Assumptions

The experimental analysis and assumptions are as follows. The posture of the yoga practitioner is captured as video, and a single camera is used to capture the video from one side; in this paper, left-sided video capturing is preferred. In the next stage, the proposed algorithm compares the angles computed from the collected vectors (the testing set) with the training set, which was previously captured as video using the same procedure and stored in the database.

A threshold value must be fixed to compensate for postures in the aasanaas with small differences in angle. The threshold, termed T, is fixed for the training set and consists of the angle value of a vector for a particular aasanaa together with a plus-or-minus tolerance. These plus-or-minus values constitute the threshold, and the vector value for a particular aasanaa in the testing set is considered valid when it matches the training set value within the threshold.
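As a purely illustrative example with assumed numbers: if the training set records a hip angle of 75° for a posture and the tolerance is T = ±10°, then a testing-set angle is accepted when it lies in the interval [65°, 85°]; an observed 82° would pass, whereas 88° would be rejected.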

6 Conclusions and Discussions

The proposed model specifies an automated system for monitoring yoga posture. The system receives input from a camera in the form of video; in the proposed system, the camera is placed on the left side of the practitioner, from where the data is captured. This data is called the testing set. The testing set is compared with the training set, which is already stored in the database in video format. The training set consists of the angles of the aasanaas performed by an expert practitioner with perfect posture, and hence it is considered the training set. A mathematical model compares the training set and the testing set in terms of vectors, and a conclusion is drawn.

Since the proposed system is intended for global use, and considering the varied body structures of humans, an attempt has been made to allow some relaxation in the angles of the aasanaas; i.e., a threshold T gives the plus-or-minus angular error permitted while performing an aasanaa. The method of fixing the threshold T for each aasanaa is out of the scope of this paper and may be considered as future work.

7 Future Scope

In this paper, a model system for monitoring yoga posture is proposed. The proposed system uses video capture for monitoring yoga postures. In this context, the system uses a high-definition camera placed on the left side of the yoga practitioner, and posture information is gathered from the video by identifying a set of points.

The proposed system can be further developed by placing one more high-definition camera in front of the yoga practitioner, thereby gathering additional posture data from further identification points. At that stage, the yoga monitoring system will have two HD cameras, one on the left side and one in front. The front camera is needed to ensure that the body remains balanced while performing particular yoga aasanaas. In the next stage, an algorithm compares the data collected by the front and side cameras subject to certain constraints and thresholds, and a conclusion is drawn.