TWI891564B - Device and method for analyzing yoga movements and physiological conditions to generate practice plan - Google Patents
- Publication number: TWI891564B (application TW113143037A)
- Authority: TW (Taiwan)
- Prior art keywords: data, movement, practice, yoga, user
- Landscapes: Processing Or Creating Images (AREA)
Description
一種產生瑜伽運動練習計畫之裝置及其方法，特別係指一種分析瑜伽動作與生理狀況以產生練習計畫之裝置及方法。A device and method for generating a yoga exercise practice plan, and more particularly, a device and method for analyzing yoga movements and physiological conditions to generate a practice plan.
瑜伽是一個通過提升意識,幫助人類充分發揮潛能的體系。瑜伽姿勢運用古老而易於掌握的技巧,改善人們生理、心理、情感和精神方面的能力,是一種達到身體、心靈與精神和諧統一的運動方式。現代人所稱的瑜伽主要是一系列的修身養心方法,包括調身的體位法、調息的呼吸法、調心的冥想法等。Yoga is a system that helps people realize their full potential by elevating consciousness. Yoga postures utilize ancient yet accessible techniques to improve physical, mental, emotional, and spiritual abilities. It is a form of exercise that promotes harmony between body, mind, and spirit. What we now call yoga primarily encompasses a series of methods for self-cultivation and spiritual well-being, including asanas for physical conditioning, breathing techniques for pranayama, and meditation for spiritual well-being.
目前的瑜伽課程大多是提供體位(瑜伽動作)的教學,也就是由提供各種體位的姿勢指導,因此,現有的瑜伽課程除了在瑜伽教室面對瑜伽教練上課之外,也可以透過影像教學,甚至,目前也有瑜伽運動的輔助工具。Most current yoga classes provide instruction in asanas (yoga movements), that is, they provide guidance on various asanas. Therefore, in addition to taking classes in person with yoga instructors in yoga classrooms, existing yoga classes can also be taught through video teaching, and there are even auxiliary tools for yoga exercises.
然而,現有瑜伽運動的輔助工具大多僅提供靜態的姿勢指導,並沒有教練可以根據使用者的實際動作給予指導,如此,使用者無法確定姿勢是否正確,可能無法達到有效的訓練效果。However, most existing yoga training tools only provide static posture guidance, without a coach to provide guidance based on the user's actual movements. As a result, users cannot be sure whether their postures are correct and may not achieve effective training results.
綜上所述，可知先前技術中長期以來一直存在瑜伽運動輔助工具提供使用者確認姿勢是否正確的問題，因此有必要提出改進的技術手段，來解決此一問題。In summary, the prior art has long faced the problem of how a yoga exercise auxiliary tool can let the user confirm whether a posture is correct; it is therefore necessary to propose improved technical means to solve this problem.
有鑒於先前技術存在瑜伽運動輔助工具提供使用者確認姿勢是否正確的問題，本發明遂揭露一種分析瑜伽動作與生理狀況以產生練習計畫之裝置及方法，其中：In view of the prior-art problem of enabling a yoga exercise auxiliary tool to let the user confirm whether a posture is correct, the present invention discloses a device and method for analyzing yoga movements and physiological conditions to generate a practice plan, wherein:
本發明所揭露之分析瑜伽動作與生理狀況以產生練習計畫之裝置，至少包含：資料取得模組，用以取得使用者影像及與使用者影像同步收集之生理狀態資料；影像分析模組，用以分析使用者影像以產生初始姿勢資料，初始姿勢資料包含使用者之身體之多個關鍵部位之位置與角度，且初始姿勢資料與生理狀態資料對應；狀態判斷模組，用以依據初始姿勢資料及生理狀態資料判斷使用者之初始身體狀態；計畫產生模組，用以選擇與初始身體狀態相符之多個瑜伽動作以產生練習計畫，練習計畫包含該些瑜伽動作及與各瑜伽動作對應之練習時間；資料載入模組，用以載入與當前之瑜伽動作對應之示範動作資料；實境互動模組，用以使用實境技術模擬虛擬教練，並依據示範動作資料使用實境技術模擬並顯示虛擬教練進行當前之瑜伽動作及說明動作要點。The device disclosed in the present invention for analyzing yoga movements and physiological conditions to generate a practice plan comprises at least: a data acquisition module for acquiring a user image and physiological status data collected synchronously with the user image; an image analysis module for analyzing the user image to generate initial posture data, the initial posture data including the positions and angles of multiple key parts of the user's body and corresponding to the physiological status data; a state judgment module for judging the user's initial physical state according to the initial posture data and the physiological status data; a plan generation module for selecting multiple yoga movements that match the initial physical state to generate a practice plan, the practice plan including those yoga movements and the practice time corresponding to each yoga movement; a data loading module for loading demonstration movement data corresponding to the current yoga movement; and a reality interaction module for simulating a virtual coach using reality technology, and for using the reality technology, according to the demonstration movement data, to simulate and display the virtual coach performing the current yoga movement and explaining its key points.
本發明所揭露之分析瑜伽動作與生理狀況以產生練習計畫之方法，其步驟至少包括：取得使用者影像及與使用者影像同步收集之生理狀態資料；分析使用者影像以產生初始姿勢資料，初始姿勢資料包含使用者之身體之多個關鍵部位之位置與角度，且初始姿勢資料與生理狀態資料對應；依據初始姿勢資料及生理狀態資料判斷使用者之初始身體狀態；選擇與初始身體狀態相符之多個瑜伽動作以產生練習計畫，練習計畫包含該些瑜伽動作及與各瑜伽動作對應之練習時間；使用實境技術模擬虛擬教練；載入與當前之瑜伽動作對應之示範動作資料；依據示範動作資料使用實境技術模擬並顯示虛擬教練進行當前之瑜伽動作及說明動作要點。The method disclosed in the present invention for analyzing yoga movements and physiological conditions to generate a practice plan includes at least the following steps: obtaining a user image and physiological status data collected synchronously with the user image; analyzing the user image to generate initial posture data, the initial posture data including the positions and angles of multiple key parts of the user's body and corresponding to the physiological status data; judging the user's initial physical state according to the initial posture data and the physiological status data; selecting multiple yoga movements that match the initial physical state to generate a practice plan, the practice plan including those yoga movements and the practice time corresponding to each yoga movement; simulating a virtual coach using reality technology; loading demonstration movement data corresponding to the current yoga movement; and, according to the demonstration movement data, using the reality technology to simulate and display the virtual coach performing the current yoga movement and explaining its key points.
本發明所揭露之裝置及方法如上,與先前技術之間的差異在於本發明透過分析使用者影像以產生使用者的初始姿勢資料,並依據使用者的初始姿勢資料及與使用者影像同步收集的生理狀態資料判斷使用者的初始身體狀態,及選擇與初始身體狀態相符的瑜伽動作以產生訓練計畫後,依據訓練計畫中之瑜伽動作的示範動作資料使用實境技術模擬並顯示虛擬教練進行動作示範與要點說明,藉以解決先前技術所存在的問題,並可以達成提供即時動作指導和個人練習計畫之技術功效。The device and method disclosed in the present invention, as described above, differ from prior art in that the present invention generates initial posture data of the user by analyzing user images. Based on the user's initial posture data and physiological status data collected simultaneously with the user images, the present invention determines the user's initial physical condition. Yoga movements that match the initial physical condition are selected to generate a training plan. Based on the demonstration movement data of the yoga movements in the training plan, the present invention uses real-world technology to simulate and display a virtual coach demonstrating the movements and explaining key points. This solves the problems of prior art and achieves the technical benefits of providing real-time movement guidance and personalized practice plans.
以下將配合圖式及實施例來詳細說明本發明之特徵與實施方式,內容足以使任何熟習相關技藝者能夠輕易地充分理解本發明解決技術問題所應用的技術手段並據以實施,藉此實現本發明可達成的功效。The following will be used in conjunction with drawings and embodiments to describe in detail the features and implementation methods of the present invention. The content is sufficient to enable anyone familiar with the relevant technology to easily and fully understand the technical means used by the present invention to solve the technical problems and implement them accordingly, thereby achieving the effects that can be achieved by the present invention.
本發明可以依據使用者進行瑜伽運動時的姿勢與生理狀態產生適合使用者的練習計畫,並在使用者依據練習計畫進行瑜伽練習時透過實境技術給予動作指導。其中,本發明所提之實境技術可以是擴增實境或混合實境,但本發明並不以此為限。The present invention can generate a suitable exercise plan based on the user's yoga posture and physiological state during the exercise, and provide movement guidance through real-world technology while the user practices yoga according to the exercise plan. The real-world technology mentioned in the present invention can be augmented reality or mixed reality, but the present invention is not limited to such.
實現本發明之裝置可以是計算設備,本發明所提之計算設備包含但不限於一個或多個處理模組、一條或多條記憶體模組、以及連接不同硬體元件(包括記憶體模組和處理模組)的匯流排等硬體元件。透過所包含之多個硬體元件,計算設備可以載入並執行作業系統,使作業系統在計算設備上運行,也可以執行軟體或程式。計算設備也包含一個外殼,上述之各個硬體元件設置於外殼內。The apparatus implementing the present invention may be a computing device. The computing device described herein includes, but is not limited to, one or more processing modules, one or more memory modules, and hardware components such as a bus that connects various hardware components (including the memory modules and processing modules). Through these hardware components, the computing device can load and execute an operating system, allowing the operating system to run on the computing device, and can also execute software or programs. The computing device also includes a housing, within which the aforementioned hardware components are housed.
本發明所提之計算設備的匯流排可以包含一種或多個類型,例如包含資料匯流排(data bus)、位址匯流排(address bus)、控制匯流排(control bus)、擴充功能匯流排(expansion bus)、及/或局域匯流排(local bus)等類型的匯流排。計算設備的匯流排包括但不限於的工業標準架構(Industry Standard Architecture, ISA)匯流排、周邊元件互連(Peripheral Component Interconnect, PCI)匯流排、視頻電子標準協會(Video Electronics Standards Association, VESA)局域匯流排、以及串列的通用序列匯流排(Universal Serial Bus, USB)、快速周邊元件互連(PCI Express, PCI-E/PCIe)匯流排等。The bus of the computing device provided in the present invention may include one or more types of buses, such as a data bus, an address bus, a control bus, an expansion bus, and/or a local bus. Buses used in computing devices include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Peripheral Component Interconnect (PCI) bus, the Video Electronics Standards Association (VESA) local bus, the Universal Serial Bus (USB) bus, and the PCI Express (PCI-E/PCIe) bus.
本發明所提之計算設備的處理模組與匯流排耦接。處理模組包含暫存器(Register)組或暫存器空間,暫存器組或暫存器空間可以完全的被設置在處理模組之處理晶片上,或全部或部分被設置在處理晶片外並經由專用電氣連接及/或經由匯流排耦接至處理晶片。處理模組可為中央處理器、微處理器或任何合適的處理元件。若計算設備為多處理器設備,也就是計算設備包含多個處理模組,則計算設備所包含的處理模組都相同或類似,且透過匯流排耦接與通訊。在部分的實施例中,處理模組可以解釋一個計算機指令或一連串的多個計算機指令以進行特定的運算或操作,例如,數學運算、邏輯運算、資料比對、複製/移動資料等,藉以驅動計算設備中的其他硬體元件或運行作業系統或執行各種程式及/或模組。計算機指令可以是組合語言指令、指令集架構指令、機器指令、機器相關指令、微指令、韌體指令、或者以一種或多種程式語言的任意組合編寫的初始碼或目的碼(Object Code),且計算機指令可以完全地在單一個計算設備上被執行、部分地在單一個計算設備上被執行、部分在一個計算設備上被執行且部分在相連接之另一計算設備上被執行。其中,上述之程式語言包括物件導向(Object-oriented)的程式語言,如Common Lisp、Python、C++、Objective-C、Smalltalk、Delphi、Java、Swift、C#、Perl、Ruby等,及常規的程序式(Procedural)程式語言,如C語言或其他類似的程式語言。The processing module of the computing device proposed in the present invention is coupled to a bus. The processing module includes a register group or register space, which can be completely set on the processing chip of the processing module, or completely or partially set outside the processing chip and coupled to the processing chip via a dedicated electrical connection and/or via a bus. The processing module can be a central processing unit, a microprocessor, or any suitable processing element. If the computing device is a multi-processor device, that is, the computing device includes multiple processing modules, then the processing modules included in the computing device are the same or similar, and are coupled and communicated through a bus. In some embodiments, the processing module can interpret a computer instruction or a series of multiple computer instructions to perform specific calculations or operations, such as mathematical operations, logical operations, data comparison, copying/moving data, etc., to drive other hardware components in the computing device or run an operating system or execute various programs and/or modules. The computer instructions can be assembly language instructions, instruction set architecture instructions, machine instructions, machine-related instructions, microinstructions, firmware instructions, or initialization code or object code written in any combination of one or more programming languages, and the computer instructions can be executed entirely on a single computing device, partially on a single computing device, or partially on one computing device and partially on another connected computing device. The above-mentioned programming languages include object-oriented programming languages such as Common Lisp, Python, C++, Objective-C, Smalltalk, Delphi, Java, Swift, C#, Perl, Ruby, etc., and conventional procedural programming languages such as C or other similar programming languages.
計算設備中通常也包含一個或多個晶片組(Chipset)。計算設備的處理模組可以與晶片組耦接或透過匯流排與晶片組電性連接。晶片組是由一個或多個積體電路(Integrated Circuit, IC)組成,包含記憶體控制器以及周邊輸出入(I/O)控制器等,也就是說,記憶體控制器以及周邊輸出入控制器可以包含在一個積體電路內,也可以使用兩個或更多的積體電路實現。晶片組通常提供了輸出入和記憶體管理功能、以及提供多個通用及/或專用暫存器、計時器等,其中,上述之通用及/或專用暫存器與計時器可以讓耦接或電性連接至晶片組的一個或多個處理模組存取或使用。在部分的實施例中,晶片組也可能屬於處理模組的一部份。Computing devices usually also include one or more chipsets. The processing module of the computing device can be coupled to the chipset or electrically connected to the chipset through a bus. The chipset is composed of one or more integrated circuits (ICs), including a memory controller and a peripheral input/output (I/O) controller, etc. In other words, the memory controller and the peripheral input/output (I/O) controller can be included in one IC, or can be implemented using two or more ICs. The chipset usually provides input/output and memory management functions, as well as multiple general-purpose and/or dedicated registers, timers, etc., wherein the above-mentioned general-purpose and/or dedicated registers and timers can be accessed or used by one or more processing modules coupled or electrically connected to the chipset. In some embodiments, the chipset may also be part of the processing module.
計算設備的處理模組也可以透過記憶體控制器存取安裝於計算設備上的記憶體模組和大容量儲存區中的資料。上述之記憶體模組包含任何類型的揮發性記憶體(volatile memory)及/或非揮發性(non-volatile memory, NVRAM)記憶體,例如靜態隨機存取記憶體(Static Random Access Memory, SRAM)、動態隨機存取記憶體(Dynamic Random Access Memory, DRAM)、唯讀記憶體(Read-Only Memory, ROM)、快閃記憶體(Flash memory)等。上述之大容量儲存區可以包含任何類型的儲存裝置或儲存媒體,例如,硬碟機、光碟(optical disc)、隨身碟(flash drive)、記憶卡(memory card)、固態硬碟(Solid State Disk, SSD)、或任何其他儲存裝置等。也就是說,記憶體控制器可以存取靜態隨機存取記憶體、動態隨機存取記憶體、快閃記憶體、硬碟機、固態硬碟中的資料。The processing module of a computing device can also access data from memory modules and mass storage areas installed on the computing device through a memory controller. The aforementioned memory modules include any type of volatile memory and/or non-volatile memory (NVRAM), such as static random access memory (SRAM), dynamic random access memory (DRAM), read-only memory (ROM), and flash memory. The aforementioned mass storage area can include any type of storage device or storage media, such as a hard drive, optical disc, flash drive, memory card, solid-state drive (SSD), or any other storage device. In other words, the memory controller can access data in static random access memory (SRAM), dynamic random access memory (DRAM), flash memory, hard drives, and SSDs.
計算設備的處理模組也可以透過周邊輸出入控制器經由周邊輸出入匯流排與周邊輸出裝置、周邊輸入裝置、通訊介面、各種資料或訊號接收裝置等周邊裝置或介面連接並通訊。周邊輸入裝置可以是任何類型的輸入裝置,例如鍵盤、滑鼠、軌跡球、觸控板、搖桿等,周邊輸出裝置可以是任何類型的輸出裝置,例如顯示器、印表機等,周邊輸入裝置與周邊輸出裝置也可以是同一裝置,例如觸控螢幕等。通訊介面可以包含無線通訊介面及/或有線通訊介面,無線通訊介面可以包含支援無線區域網路(如Wi-Fi、Zigbee等)、藍牙、紅外線、近場通訊(Near-field communication, NFC)、3G/4G/5G等行動通訊網路(蜂巢式網路)或其他無線資料傳輸協定的介面,有線通訊介面可為乙太網路裝置、DSL數據機、纜線(Cable)數據機、非同步傳輸模式(Asynchronous Transfer Mode, ATM)裝置、或光纖通訊介面及/或元件等。資料或訊號接收裝置可以包含GPS接收器或生理訊號接收器,生理訊號接收器所接收的生理訊號包含但不限於心跳、血氧等。處理模組可以週期性地輪詢(polling)各種周邊裝置與介面,使得計算設備能夠透過各種周邊裝置與介面進行資料的輸入與輸出,也能夠與具有上面描述之硬體元件的另一個計算設備進行通訊。The processing module of a computing device can also connect to and communicate with peripheral devices or interfaces, such as peripheral output devices, peripheral input devices, communication interfaces, and various data or signal receiving devices, through a peripheral input/output controller via a peripheral input/output bus. A peripheral input device can be any type of input device, such as a keyboard, mouse, trackball, touchpad, or joystick. A peripheral output device can be any type of output device, such as a monitor or printer. The peripheral input device and peripheral output device can also be the same device, such as a touchscreen. The communication interface may include a wireless communication interface and/or a wired communication interface. The wireless communication interface may include an interface supporting wireless local area networks (such as Wi-Fi, Zigbee, etc.), Bluetooth, infrared, near-field communication (NFC), 3G/4G/5G mobile communication networks (cellular networks), or other wireless data transmission protocols. The wired communication interface may be an Ethernet device, a DSL modem, a cable modem, an asynchronous transfer mode (ATM) device, or an optical fiber communication interface and/or component. The data or signal receiving device may include a GPS receiver or a physiological signal receiver. The physiological signals received by the physiological signal receiver include, but are not limited to, heart rate and blood oxygen levels. The processing module can periodically poll various peripheral devices and interfaces, allowing the computing device to input and output data through various peripheral devices and interfaces, and also to communicate with another computing device having the hardware components described above.
以下先以「第1圖」本發明所提之分析瑜伽動作與生理狀況以產生練習計畫之裝置之元件示意圖來說明實現本發明的裝置。如「第1圖」所示,本發明之裝置100含有記憶體110、攝影機120、通訊介面130、儲存媒體140、輸入元件150、處理器170、匯流排190。其中,處理器170透過匯流排190與記憶體110、攝影機120、通訊介面130、儲存媒體140、輸入元件150連接。The following will first illustrate the device implementing the present invention, using Figure 1, a schematic diagram of the components of the device for analyzing yoga movements and physiological conditions to generate a practice plan. As shown in Figure 1, the device 100 of the present invention includes a memory 110, a camera 120, a communication interface 130, a storage medium 140, an input device 150, a processor 170, and a bus 190. Processor 170 is connected to memory 110, camera 120, communication interface 130, storage medium 140, and input device 150 via bus 190.
記憶體110可以儲存一組或多組計算機指令。Memory 110 can store one or more sets of computer instructions.
攝影機120可以包含電路板、鏡頭組件與影像感測元件(圖中均未示),鏡頭組件與影像感測元件透過電路板連接。攝影機120可以透過鏡頭組件與影像感測元件擷取影像。在本發明中,裝置100並不限於包含一個攝影機120,也可以包含多個同步擷取影像的攝影機120。Camera 120 may include a circuit board, a lens assembly, and an image sensor (not shown). The lens assembly and the image sensor are connected via the circuit board. Camera 120 can capture images using the lens assembly and the image sensor. In the present invention, device 100 is not limited to including a single camera 120; it may also include multiple cameras 120 that capture images synchronously.
通訊介面130可以連線到外部的網路儲存裝置或伺服器等網路裝置,並向所連線的網路裝置請求並下載資料。The communication interface 130 can be connected to an external network storage device or server and other network devices, and request and download data from the connected network device.
儲存媒體140可以儲存通訊介面130所下載的資料或訊號，也可以儲存提供給處理器170或處理器170運作時所需要的資料或訊號，還可以儲存處理器170所產生的資料或訊號。The storage medium 140 can store data or signals downloaded via the communication interface 130, store data or signals provided to the processor 170 or required for the processor 170 to operate, and store data or signals generated by the processor 170.
輸入元件150可以透過裝置100的周邊輸入裝置提供輸入資料。例如,輸入元件150可以透過鍵盤、滑鼠、觸控板、觸控螢幕輸入資料。The input element 150 can provide input data through a peripheral input device of the device 100. For example, the input element 150 can input data through a keyboard, a mouse, a touchpad, or a touch screen.
處理器170可以如「第2圖」本發明所提之模組示意圖所示,包含資料取得模組210、影像分析模組220、資料載入模組240、狀態判斷模組250、計畫產生模組260、實境互動模組290等模組,也可以包含可附加的動作指導模組270。在部分的實施例中,處理器170可以執行記憶體110所儲存的計算機指令,並可以在執行計算機指令後產生「第2圖」中的各模組;在另一部份的實施例中,「第2圖」中的各模組可以是由一個或多個電路及/或完整或部分的晶片等硬體元件產生,即處理器170包含組成「第2圖」中之各模組的硬體元件,也就是說,處理器170所包含的各模組可以是軟體模組,也可以是硬體模組,本發明沒有特別的限制。The processor 170 may include modules such as a data acquisition module 210, an image analysis module 220, a data loading module 240, a state judgment module 250, a plan generation module 260, and a reality interaction module 290 as shown in the module schematic diagram of the present invention in "Figure 2", and may also include an additional action guidance module 270. In some embodiments, the processor 170 can execute computer instructions stored in the memory 110 and can generate the modules in "Figure 2" after executing the computer instructions; in other embodiments, the modules in "Figure 2" can be generated by one or more circuits and/or hardware components such as complete or partial chips, that is, the processor 170 includes hardware components that constitute the modules in "Figure 2", that is, the modules included in the processor 170 can be software modules or hardware modules, and the present invention has no special limitations.
資料取得模組210負責取得使用者影像。資料取得模組210可以透過攝影機120擷取使用者影像,也可以提供輸入元件150輸入使用者影像的影像存放路徑,並依據影像存放路徑由儲存媒體140中讀出使用者影像,或透過通訊介面130連線至資料伺服器(圖中未示)下載使用者影像。The data acquisition module 210 is responsible for acquiring user images. The data acquisition module 210 can capture user images via the camera 120 or provide the input component 150 with the image storage path of the user image. The module then reads the user image from the storage medium 140 based on the image storage path, or connects to a data server (not shown) via the communication interface 130 to download the user image.
資料取得模組210也負責取得與所取得之使用者影像同步收集的生理狀態資料。資料取得模組210所取得的生理狀態資料包含但不限於心跳率、呼吸率、血氧濃度,甚至可以包含力量、心肺能力等。一般而言,生理狀態資料可以由使用者所穿戴之穿戴裝置(如智慧手環、智慧手錶、心率感測器等)偵測使用者的生理狀態而產生,並可以將所產生的生理狀態資料直接傳送給裝置100儲存或傳送給實境裝置400轉送到裝置100,如此,資料取得模組210可以透過通訊介面130接收穿戴裝置(圖中未示)或實境裝置400所傳送的生理狀態資料,也可以依據輸入元件150所輸入的檔案存放路徑由儲存媒體140中讀出預先儲存的生理狀態資料或透過通訊介面130由資料伺服器下載生理狀態資料,但資料取得模組210取得生理狀態資料的方式並不以上述為限。The data acquisition module 210 is also responsible for acquiring physiological status data collected synchronously with the acquired user images. The physiological status data acquired by the data acquisition module 210 includes but is not limited to heart rate, respiratory rate, blood oxygen concentration, and may even include strength, cardiopulmonary capacity, etc. Generally speaking, physiological status data can be generated by a wearable device (such as a smart bracelet, smart watch, heart rate sensor, etc.) worn by a user to detect the user's physiological status. The generated physiological status data can be directly transmitted to device 100 for storage or transmitted to real-world device 400 for transfer to device 100. In this way, data acquisition module 210 can receive physiological status data transmitted by the wearable device (not shown) or real-world device 400 via communication interface 130. It can also read pre-stored physiological status data from storage medium 140 based on the file storage path input by input component 150, or download physiological status data from a data server via communication interface 130. However, the methods for data acquisition module 210 to obtain physiological status data are not limited to the above.
資料取得模組210也可以即時取得使用者的練習影像。一般而言,資料取得模組210可以透過攝影機120擷取使用者的練習影像,或可以透過通訊介面130接收實境裝置400所傳送的練習影像。The data acquisition module 210 can also acquire the user's practice images in real time. Generally speaking, the data acquisition module 210 can capture the user's practice images through the camera 120 or receive the practice images transmitted by the real device 400 through the communication interface 130.
影像分析模組220負責分析資料取得模組210所取得的使用者影像以產生與該生理狀態資料對應的初始姿勢資料,影像分析模組220所產生的初始姿勢資料包含使用者之身體的多個關鍵部位之位置與角度。上述之關鍵部位包含但不限於頭頂、頸部、肩膀、胸口、手肘、髖部、骨盆底部、膝蓋等,影像分析模組220可以透過深度學習模型或電腦視覺技術分析使用者影像,舉例來說,若使用者影像中包含深度資料,則影像分析模組220可以使用如OpenPose、PostNet、HRNet等姿勢估計模型,藉以透過卷積神經網路(CNN)取得使用者影像中的人體輪廓與局部(如頭部、四肢等)部位的特徵以產生表示使用者影像中各像素在關鍵部位之機率的熱圖(Heatmap)與表示使用者影像中各像素偏離關鍵部位之向量值的偏移圖(Offset Map),並依據熱圖與偏移圖判斷出所有熱圖中各關鍵部位的最大值位置,及將所判斷出之最大值位置做為使用者影像中使用者之身體的關鍵部位,並將各關鍵部位對應到使用者影像中的深度資料,藉以透過個關鍵部位的深度資料判斷各關鍵部位在空間中的位置座標與角度;若使用者影像中包含不同角度同步擷取的影像,則影像分析模組220可以在使用姿勢估計模型檢測使用者影像中使用者之身體的關鍵部位後,透過三角測量法來計算各關鍵部位在空間中的位置與角度。The image analysis module 220 is responsible for analyzing the user image acquired by the data acquisition module 210 to generate initial posture data corresponding to the physiological status data. The initial posture data generated by the image analysis module 220 includes the positions and angles of multiple key parts of the user's body. The above-mentioned key parts include but are not limited to the top of the head, neck, shoulders, chest, elbows, hips, pelvic floor, knees, etc. The image analysis module 220 can analyze the user image through a deep learning model or computer vision technology. For example, if the user image contains depth data, the image analysis module 220 can use a pose estimation model such as OpenPose, PostNet, HRNet, etc. to obtain the features of the human body contour and local parts (such as head, limbs, etc.) in the user image through a convolutional neural network (CNN) to generate a heat map (Heatmap) representing the probability of each pixel in the user image being in the key part and an offset map (Offset) representing the vector value of each pixel in the user image deviating from the key part. Map), and based on the heat map and the offset map, determine the maximum value position of each key part in all the heat maps, and use the determined maximum value position as the key part of the user's body in the user image, and correspond each key part to the depth data in the user image, so as to determine the position coordinates and angle of each key part in space through the depth data of each key part; if the user image includes images captured synchronously at different angles, the image analysis module 220 can use the posture estimation model to detect the key parts of the user's body in the user image, and then calculate the position and angle of each key part in space through triangulation.
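The heatmap/offset-map decoding and joint-angle computation described above can be sketched as follows. This is a minimal illustration only: it assumes a pose-estimation backbone (such as the CNN-based models named in this paragraph) has already produced per-keypoint heatmaps and offset maps, and the names `decode_keypoints`, `lift_to_3d`, `joint_angle` and the output stride of 8 are illustrative assumptions rather than anything specified in the patent.

```python
import numpy as np

def decode_keypoints(heatmaps, offsets, stride=8):
    """Decode 2D keypoints from per-part heatmaps and offset maps.

    heatmaps: (K, H, W) array, probability of each pixel being keypoint k.
    offsets:  (K, H, W, 2) array, (dy, dx) refinement vectors per pixel.
    Returns a (K, 2) array of (y, x) image coordinates.
    """
    num_parts = heatmaps.shape[0]
    keypoints = np.zeros((num_parts, 2), dtype=np.float32)
    for k in range(num_parts):
        # Position of the maximum heatmap response for part k.
        y, x = np.unravel_index(np.argmax(heatmaps[k]), heatmaps[k].shape)
        dy, dx = offsets[k, y, x]
        keypoints[k] = (y * stride + dy, x * stride + dx)
    return keypoints

def lift_to_3d(keypoints_2d, depth_map):
    """Attach the depth value at each 2D keypoint (simple back-projection stub)."""
    pts_3d = []
    for y, x in keypoints_2d:
        z = depth_map[int(round(y)), int(round(x))]
        pts_3d.append((x, y, z))
    return np.asarray(pts_3d, dtype=np.float32)

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by segments b->a and b->c, e.g. shoulder-elbow-wrist."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```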
影像分析模組220也可以分析資料取得模組210所取得之練習影像以產生練習姿勢資料。一般而言,影像分析模組220可以使用上述相同的過程分析練習影像以產生練習姿勢資料。The image analysis module 220 can also analyze the training images obtained by the data acquisition module 210 to generate training posture data. Generally speaking, the image analysis module 220 can use the same process as described above to analyze the training images to generate training posture data.
影像分析模組220也可以依據所產生的初始姿勢資料判斷使用者影像中使用者所做出的瑜伽動作,也可以依據所產生的練習姿勢資料判斷使用者當前所練習的瑜伽動作。舉例來說,影像分析模組220可以依據預先定義之各關鍵部位的連接方式連接初始姿勢資料所包含之使用姿勢估計模型所產生的人體各關鍵部位以產生人體骨架圖,並可以比對各種瑜伽動作之標準骨架圖與所產生之人體骨架圖的相似度,藉以判斷初始姿勢資料或練習姿勢資料所對應之瑜伽動作,也就是判斷使用者所做出之瑜伽動作,但本發明並不以此為限。Image analysis module 220 may also determine the yoga pose performed by the user in the user image based on the generated initial posture data, or may determine the yoga pose currently being practiced by the user based on the generated practice posture data. For example, image analysis module 220 may connect the key human body parts generated by the posture estimation model included in the initial posture data according to a predefined connection method for each key part to generate a human skeleton diagram. The generated human skeleton diagram may then be compared for similarity with standard skeleton diagrams of various yoga poses to determine the yoga pose corresponding to the initial posture data or the practice posture data, that is, to determine the yoga pose performed by the user, but the present invention is not limited to this.
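One way to realize the skeleton-comparison idea in this paragraph is to normalize both skeletons and compare the directions of their corresponding bone segments. The edge list, the pelvis/torso normalization, and the similarity measure below are assumptions chosen for illustration; the patent only requires that the user's skeleton be compared against standard skeletons of known poses.

```python
import numpy as np

# Hypothetical connection scheme: pairs of keypoint indices forming the skeleton edges.
SKELETON_EDGES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]

def normalize_pose(keypoints):
    """Center on the root joint (index 0 here) and scale by the first segment so body size cancels out."""
    pts = np.asarray(keypoints, dtype=np.float32)
    pts = pts - pts[0]
    scale = np.linalg.norm(pts[1]) + 1e-8  # hypothetical torso segment
    return pts / scale

def pose_similarity(pose_a, pose_b):
    """Mean per-edge direction similarity between two skeletons (1.0 means identical)."""
    a, b = normalize_pose(pose_a), normalize_pose(pose_b)
    sims = []
    for i, j in SKELETON_EDGES:
        va, vb = a[j] - a[i], b[j] - b[i]
        cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-8)
        sims.append(cos)
    return float(np.mean(sims))

def classify_pose(user_pose, standard_poses):
    """Return the name of the standard yoga pose whose skeleton best matches the user's."""
    return max(standard_poses, key=lambda name: pose_similarity(user_pose, standard_poses[name]))
```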
資料載入模組240可以載入與影像分析模組220所產生之初始姿勢資料對應的標準姿勢資料,例如,由儲存媒體140載入或透過通訊介面130連線到資料伺服器下載影像分析模組220所判斷出之瑜伽動作的標準姿勢資料;資料載入模組240也負責載入與當前之瑜伽動作對應的示範動作資料。本發明所提之標準姿勢資料包含對應之瑜伽動作在理想情況下之身體各關節的位置與相對距離和身體角度。Data loading module 240 can load standard posture data corresponding to the initial posture data generated by image analysis module 220. For example, standard yoga posture data identified by image analysis module 220 can be loaded from storage medium 140 or downloaded from a data server via communication interface 130. Data loading module 240 is also responsible for loading demonstration movement data corresponding to the current yoga movement. The standard posture data proposed in the present invention includes the positions, relative distances, and body angles of each joint under ideal conditions for the corresponding yoga movement.
狀態判斷模組250負責依據影像分析模組220所產生之初始姿勢資料及資料取得模組210所取得之生理狀態資料判斷使用者的初始身體狀態。狀態判斷模組250可以依據資料取得模組210所取得之使用者影像的時間同步對應的初始姿勢資料與生理狀態資料,並依據同步的初始姿勢資料與生理狀態資料判斷使用者的初始身體狀態。The state determination module 250 is responsible for determining the user's initial physical state based on the initial posture data generated by the image analysis module 220 and the physiological state data acquired by the data acquisition module 210. The state determination module 250 can determine the user's initial physical state based on the synchronized initial posture data and physiological state data corresponding to the user's images acquired by the data acquisition module 210.
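Synchronizing the posture data with the physiological data by time, as described here, can be as simple as nearest-neighbour matching on timestamps. The sketch below assumes both data streams carry a `timestamp` field in seconds, which is an assumption for illustration since the patent does not prescribe a data format.

```python
def synchronize(posture_samples, physio_samples, tolerance=0.5):
    """Pair each posture sample with the physiological sample closest in time.

    Both inputs are lists of dicts with a 'timestamp' key (seconds); pairs whose
    timestamps differ by more than `tolerance` seconds are dropped.
    """
    physio_sorted = sorted(physio_samples, key=lambda s: s["timestamp"])
    pairs = []
    for pose in posture_samples:
        nearest = min(physio_sorted, key=lambda s: abs(s["timestamp"] - pose["timestamp"]))
        if abs(nearest["timestamp"] - pose["timestamp"]) <= tolerance:
            pairs.append((pose, nearest))
    return pairs
```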
更詳細的,狀態判斷模組250可以依據影像分析模組220所產生之初始姿勢資料與資料載入模組240所載入之標準姿勢資料判斷動作完成度,也可以依據初始姿勢資料判斷動作流暢度,並可以依據所判斷出之動作完成度與動作流暢度判斷初始身體狀態。舉例來說,狀態判斷模組250可以依據標準姿勢資料中各個關鍵部位的連接關係連接初始姿勢資料中之特定的關鍵部位已建立骨架,並可以計算初始姿勢資料中各個關鍵部位的位置與角度,及可以比對初始姿勢資料與標準姿勢資料中各關鍵部位的位置,藉以找出偏差(如角度或位置上的不同),例如,狀態判斷模組250可以通過初始姿勢資料與標準姿勢資料中肩膀、手肘、手腕形成的角度是否產生角度偏差來判斷手臂的伸展是否到位;又如,狀態判斷模組250可以計算初始姿勢資料與標準姿勢資料中各關鍵部位之間的距離和相對位置以判斷位置偏差,如當瑜伽動作為下犬式時,狀態判斷模組250可以檢查初始姿勢資料與標準姿勢資料以判斷髖關節是否足夠向上延展,並通過髖關節與手腳關鍵部位(手肘、手腕、膝蓋、腳踝等)的位置比對來確定動作是否到位,之後,可以對每個關鍵部位的偏差值進行加權計算,以得出偏差分數。在部分的實施例中,若某些關鍵部位與動作完成度有較大的關聯(如髖關節在下犬式中的作用),則這些關鍵部位的偏差值會被賦予較高權重。例如,對於一個標準下犬式動作,髖關節的角度、手臂與地面的夾角、腳跟的位置是主要的指標,狀態判斷模組250賦予這些指標的偏差值較高的權重,藉以可以著重依據這些指標的偏差值與權重計算出偏差分數以評估動作的準確性。In more detail, the state judgment module 250 can judge the completion of the movement based on the initial posture data generated by the image analysis module 220 and the standard posture data loaded by the data loading module 240, and can also judge the smoothness of the movement based on the initial posture data, and can judge the initial body state based on the determined movement completion and movement smoothness. For example, the state judgment module 250 can connect the specific key parts in the initial posture data based on the connection relationship of each key part in the standard posture data to establish a skeleton, and can calculate the position and angle of each key part in the initial posture data, and can compare the position of each key part in the initial posture data with the standard posture data to find the deviation (such as the difference in angle or position). For example, the state judgment module 250 can judge the arm by whether the angle formed by the shoulder, elbow, and wrist in the initial posture data and the standard posture data has an angle deviation. For another example, the state judgment module 250 can calculate the distance and relative position between the initial posture data and the key parts in the standard posture data to determine the position deviation. For example, when the yoga pose is the downward dog pose, the state judgment module 250 can check the initial posture data and the standard posture data to determine whether the hip joint is sufficiently extended upward, and determine whether the movement is in place by comparing the position of the hip joint with the key parts of the hands and feet (elbows, wrists, knees, ankles, etc.). Afterwards, the deviation value of each key part can be weighted to obtain a deviation score. In some embodiments, if certain key areas are significantly correlated with movement completion (e.g., the role of the hip joint in Downward Dog), the deviation values of these key areas are given higher weights. For example, for a standard Downward Dog, the angle of the hip joint, the angle between the arm and the ground, and the position of the heels are key indicators. The state judgment module 250 assigns higher weights to the deviation values of these indicators, thereby focusing on calculating the deviation score based on the deviation values and weights of these indicators to evaluate the accuracy of the movement.
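A hedged sketch of the weighted deviation scoring described above follows. The indicator names and weights for Downward Dog are made-up values for illustration; the patent only specifies that indicators strongly tied to movement completion receive higher weights.

```python
# Hypothetical per-indicator weights for Downward Dog: hip-related measurements dominate.
DOWNWARD_DOG_WEIGHTS = {"hip_angle": 0.4, "arm_ground_angle": 0.3, "heel_position": 0.2, "other": 0.1}

def deviation_score(user_metrics, standard_metrics, weights):
    """Weighted sum of absolute deviations between user and standard pose indicators.

    Both metric dicts map an indicator name (an angle in degrees or a position error in cm)
    to a value; a lower score means the pose is closer to the standard.
    """
    score = 0.0
    for name, weight in weights.items():
        diff = abs(user_metrics.get(name, 0.0) - standard_metrics.get(name, 0.0))
        score += weight * diff
    return score

# Example: the hip angle is 20 degrees short of the standard, the other indicators are close.
user = {"hip_angle": 150.0, "arm_ground_angle": 58.0, "heel_position": 3.0, "other": 1.0}
standard = {"hip_angle": 170.0, "arm_ground_angle": 60.0, "heel_position": 0.0, "other": 0.0}
print(deviation_score(user, standard, DOWNWARD_DOG_WEIGHTS))  # 0.4*20 + 0.3*2 + 0.2*3 + 0.1*1 = 9.3
```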
狀態判斷模組250也可以依據影像分析模組220所分析出之練習姿勢資料與標準姿勢資料判斷動作完成度。例如,狀態判斷模組250可以依據標準姿勢資料中之各關鍵部位所連接而成之標準骨架圖與練習姿勢資料中各關鍵部位所連接而成之人體骨架圖的相似度、標準姿勢資料中預先針對不同瑜伽動作所定義之各關鍵部位間的相對距離及相對位置與練習姿勢資料中之各關鍵部位間的相對距離與相對位置的偏差值計算動作完成度,但本發明並不以此為限。The state determination module 250 may also determine the degree of movement completion based on the practice posture data and the standard posture data analyzed by the image analysis module 220. For example, the state determination module 250 may calculate the degree of movement completion based on the similarity between a standard skeleton diagram formed by connecting key parts in the standard posture data and a human skeleton diagram formed by connecting key parts in the practice posture data, and the deviation between the relative distances and relative positions between key parts pre-defined for different yoga poses in the standard posture data and the relative distances and relative positions between key parts in the practice posture data, but the present invention is not limited to this.
狀態判斷模組250也可以依據動作完成度、動作流暢度及生理狀態資料之變化評估使用者之疲勞狀況。例如，狀態判斷模組250可以跟蹤使用者對瑜伽動作的完成度(準確度)，並判斷動作的完成度是否隨著練習時間推進而下降，進而判斷疲勞累積情況，例如，當動作的完成度下降時，表示使用者疲勞度增加；相似的，狀態判斷模組250也可以追蹤使用者在變化瑜伽動作的流暢度，並依據流暢度是否隨著練習時間推進而下降判斷疲勞累積情況，例如，當動作的流暢度下降時，使用者疲勞度增加；狀態判斷模組250也可以依據使用者做出相同瑜伽動作時生理狀態資料的變化判斷疲勞累積情況，例如，當心率或呼吸率增加時表示使用者疲勞度增加。The state judgment module 250 can also assess the user's fatigue based on changes in movement completion, movement fluency, and the physiological status data. For example, the state judgment module 250 can track the completion (accuracy) of the user's yoga movements and determine whether completion declines as the practice session progresses, thereby judging accumulated fatigue; a drop in completion indicates increasing fatigue. Similarly, the state judgment module 250 can track how fluently the user transitions between yoga movements and judge accumulated fatigue based on whether that fluency declines as practice progresses; a drop in fluency likewise indicates increasing fatigue. The state judgment module 250 can also judge accumulated fatigue from changes in the physiological status data while the user performs the same yoga movement; for example, a rising heart rate or respiratory rate indicates increasing fatigue.
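The fatigue assessment could, for instance, combine the downward trend in completion with the upward trend in heart rate, as in the sketch below. The 50/50 weighting and the normalization are illustrative assumptions, not values taken from the patent.

```python
def fatigue_index(completion_history, heart_rate_history):
    """Rough fatigue estimate from the trends of pose completion and heart rate.

    completion_history: completion scores (0-1) for repetitions of the same pose, oldest first.
    heart_rate_history: heart rate samples (bpm) collected over the same repetitions.
    Returns a value in [0, 1]; higher means more accumulated fatigue.
    """
    if len(completion_history) < 2 or len(heart_rate_history) < 2:
        return 0.0
    completion_drop = max(0.0, completion_history[0] - completion_history[-1])
    hr_rise = max(0.0, heart_rate_history[-1] - heart_rate_history[0]) / max(heart_rate_history[0], 1.0)
    return min(1.0, 0.5 * completion_drop + 0.5 * hr_rise)

print(fatigue_index([0.9, 0.8, 0.7], [95, 110, 125]))  # completion drops, heart rate climbs -> ~0.26
```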
計畫產生模組260負責選擇與初始身體狀態相符之多個瑜伽動作以產生練習計畫,練習計畫包含被選擇的瑜伽動作及與各個被選擇之瑜伽動作對應的練習時間。其中,計畫產生模組260可以為初學者可以選擇較簡單的變體動作,而為進階使用者選擇難度較高的挑戰動作。舉例來說,計畫產生模組260可以由儲存媒體140或透過通訊介面130下載包含瑜伽動作與難度等級的動作難度清單,並依據初始身體狀態由動作難度清單中選出相符的瑜伽動作,但本發明並不以此為限。The plan generation module 260 is responsible for selecting a plurality of yoga poses that match the initial physical condition to generate a practice plan. The practice plan includes the selected yoga poses and the corresponding practice time for each selected yoga pose. The plan generation module 260 may select simpler variations for beginners and more challenging poses for advanced users. For example, the plan generation module 260 may download a difficulty list containing yoga poses and difficulty levels from the storage medium 140 or via the communication interface 130, and select yoga poses from the difficulty list that match the initial physical condition, but the present invention is not limited to this.
計畫產生模組260可以依據狀態判斷模組250所判斷出之動作完成度與資料取得模組210所取得之生理狀態資料調整練習計畫。舉例來說,若心率和呼吸速率異常增高,計畫產生模組260可以延長休息時間或降低練習之動作的難度,同時,計畫產生模組260也可以根據使用者的進步情況逐步提高動作的難度,例如,若使用者能穩定完成現有計畫的所有動作,且使用者的動作的完成度都達到一定值以上,則計畫產生模組260可以在練習計畫中增加難度較高的新動作或延長練習計畫中某些動作的練習時間。The plan generation module 260 can adjust the exercise plan based on the exercise completion rate determined by the state determination module 250 and the physiological status data obtained by the data acquisition module 210. For example, if the heart rate and respiratory rate increase abnormally, the plan generation module 260 can extend the rest period or reduce the difficulty of the exercise. At the same time, the plan generation module 260 can also gradually increase the difficulty of the exercise based on the user's progress. For example, if the user can consistently complete all the exercises in the existing plan and the user's exercise completion rate reaches or exceeds a certain value, the plan generation module 260 can add new, more difficult exercises to the exercise plan or extend the practice time of certain exercises in the exercise plan.
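Putting the plan generation of the previous paragraph and the adjustment rules of this paragraph together, a minimal sketch might look like the following. The difficulty list, the fitness-level margin, and the 1.5x-resting-heart-rate threshold are assumptions chosen for illustration; the patent leaves these policies open.

```python
# Hypothetical difficulty list; in the patent this would be read from storage or downloaded.
ACTION_DIFFICULTY = [
    {"name": "Child's Pose", "difficulty": 1, "minutes": 3},
    {"name": "Downward Dog", "difficulty": 2, "minutes": 5},
    {"name": "Warrior II", "difficulty": 3, "minutes": 5},
    {"name": "Crow Pose", "difficulty": 5, "minutes": 2},
]

def generate_plan(fitness_level, margin=1):
    """Pick poses whose difficulty does not exceed the user's level by more than the margin."""
    return [dict(a) for a in ACTION_DIFFICULTY if a["difficulty"] <= fitness_level + margin]

def adjust_plan(plan, avg_completion, heart_rate, resting_heart_rate, difficulty_list):
    """Lower the load when physiological strain is high; raise difficulty when completion is high."""
    adjusted = [dict(a) for a in plan]
    if heart_rate > 1.5 * resting_heart_rate:
        # Abnormal strain: drop the hardest pose and shorten the remaining practice times.
        adjusted.sort(key=lambda a: a["difficulty"])
        adjusted = adjusted[:-1] or adjusted
        for a in adjusted:
            a["minutes"] = max(1, a["minutes"] - 1)
    elif avg_completion >= 0.9:
        # Steady progress: extend practice time and add the next harder pose if one exists.
        for a in adjusted:
            a["minutes"] += 1
        current_max = max((a["difficulty"] for a in adjusted), default=0)
        harder = [a for a in difficulty_list if a["difficulty"] > current_max]
        if harder:
            adjusted.append(dict(min(harder, key=lambda a: a["difficulty"])))
    return adjusted
```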
動作指導模組270可以在影像分析模組220所分析出之練習姿勢資料與標準姿勢資料之偏差值達到門檻值時產生指導訊息。The action guidance module 270 can generate a guidance message when the deviation between the training posture data analyzed by the image analysis module 220 and the standard posture data reaches a threshold value.
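The threshold check performed by the action guidance module 270 reduces to comparing per-part deviations against a limit and emitting one message per violation. The 15-degree threshold and the message wording below are illustrative assumptions.

```python
def guidance_messages(deviations, threshold=15.0):
    """Emit one coaching hint per key part whose deviation (in degrees) reaches the threshold."""
    messages = []
    for part, deviation in deviations.items():
        if deviation >= threshold:
            messages.append(f"Adjust your {part}: it is about {deviation:.0f} degrees away from the standard pose.")
    return messages

print(guidance_messages({"hip": 22.0, "elbow": 5.0}))  # only the hip triggers a hint
```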
實境互動模組290負責使用實境技術模擬虛擬教練,其中,虛擬教練可以是使用3D建模軟體所設計產生之擬真的人物或Q版的可愛角色,本發明沒有特別的限制。The reality interaction module 290 is responsible for simulating a virtual coach using reality technology. The virtual coach can be a realistic person or a cute Q-version character designed using 3D modeling software. The present invention has no particular limitations.
實境互動模組290也負責依據當前進行之瑜伽動作的示範動作資料使用相同的實境技術模擬並顯示虛擬教練進行當前進行之瑜伽動作及說明該瑜伽動作之動作要點。其中,實境互動模組290可以提供使用者變換視角,使得使用者可以由不同的角度觀看瑜伽動作的細節來學習正確的姿勢。The real-world interaction module 290 is also responsible for using the same real-world technology to simulate and display a virtual instructor performing the yoga poses and explaining the key points of the yoga poses based on the model movement data of the yoga poses currently being performed. The real-world interaction module 290 can also provide users with the ability to change perspectives, allowing them to observe the details of the yoga poses from different angles and learn the correct postures.
實境互動模組290也可以依據動作指導模組270所產生之指導訊息使用相同的實境技術模擬虛擬教練給予使用者動作指導。The reality interaction module 290 can also use the same reality technology to simulate a virtual coach to provide action guidance to the user based on the guidance information generated by the action guidance module 270.
接著以一個實施例來解說本發明的系統運作與方法,並請參照「第3A圖」本發明所提之分析瑜伽動作與生理狀況以產生練習計畫之方法流程圖。在本實施例中,假設裝置100為提供實境裝置連接的實境伺服器,但本發明並不以此為限。Next, the system operation and method of the present invention will be explained using an embodiment. Please refer to FIG. 3A for a flow chart of the method for analyzing yoga movements and physiological conditions to generate a practice plan. In this embodiment, it is assumed that device 100 is a real-world server that provides connectivity to real-world devices, but the present invention is not limited to this.
裝置100的資料取得模組210可以取得使用者影像及與被取得之使用者影像同步收集的生理狀態資料(步驟310),裝置100的影像分析模組220可以分析資料取得模組210所取得之使用者影像以產生與資料取得模組210所取得之生理狀態資料對應的初始姿勢資料(步驟320)。在本實施例中,假設初始姿勢資料包含使用者之身體之多個關鍵部位之位置與角度。The data acquisition module 210 of the device 100 can acquire a user image and physiological status data collected simultaneously with the acquired user image (step 310). The image analysis module 220 of the device 100 can analyze the user image acquired by the data acquisition module 210 to generate initial posture data corresponding to the physiological status data acquired by the data acquisition module 210 (step 320). In this embodiment, it is assumed that the initial posture data includes the positions and angles of multiple key parts of the user's body.
在裝置100的影像分析模組220產生初始姿勢資料後，裝置100的狀態判斷模組250可以依據影像分析模組220所產生之初始姿勢資料及資料取得模組210所取得之生理狀態資料判斷使用者的初始身體狀態(步驟330)。在本實施例中，假設狀態判斷模組250可以比對初始姿勢資料與資料載入模組240所讀出之標準姿勢資料以判斷使用者的初始身體狀態。After the image analysis module 220 of the device 100 generates the initial posture data, the state judgment module 250 of the device 100 can judge the user's initial physical state based on the initial posture data generated by the image analysis module 220 and the physiological status data acquired by the data acquisition module 210 (step 330). In this embodiment, it is assumed that the state judgment module 250 compares the initial posture data with the standard posture data read out by the data loading module 240 to judge the user's initial physical state.
在裝置100的狀態判斷模組250判斷出使用者的初始身體狀態後,裝置100的計畫產生模組260可以選擇與初始身體狀態相符之多個瑜伽動作並設定被選擇之瑜伽動作的練習時間以產生練習計畫(步驟340)。After the state determination module 250 of the device 100 determines the initial physical state of the user, the plan generation module 260 of the device 100 can select a plurality of yoga poses that match the initial physical state and set the practice time of the selected yoga poses to generate a practice plan (step 340).
之後,裝置100的實境互動模組290可以使用實境技術模擬虛擬教練(步驟350)。在本實施例中,假設實境互動模組290可以產生虛擬教練的模型資料並透過裝置100的通訊介面130將模型資料傳送到實境裝置400,使得實境裝置400可以依據所接收到的模型資料使用擴增實境模擬並顯示虛擬教練。Afterwards, the reality interaction module 290 of the device 100 can simulate the virtual coach using reality technology (step 350). In this embodiment, it is assumed that the reality interaction module 290 can generate model data of the virtual coach and transmit the model data to the reality device 400 via the communication interface 130 of the device 100, so that the reality device 400 can use the received model data to augment the reality simulation and display the virtual coach.
同時，裝置100的實境互動模組290也可以載入與當前之瑜伽動作對應的示範動作資料(步驟361)，並可以依據示範動作資料使用實境技術模擬並顯示虛擬教練進行當前之瑜伽動作及說明動作要點(步驟365)。在本實施例中，假設實境互動模組290可以將所載入的示範動作資料透過裝置100的通訊介面130傳送到實境裝置400，使得實境裝置400可以依據所接收到的示範動作資料使用擴增實境模擬並顯示虛擬教練做出瑜伽動作，並說明當前所進行之瑜伽動作的動作要點。At the same time, the reality interaction module 290 of the device 100 can also load the demonstration movement data corresponding to the current yoga movement (step 361), and can use reality technology, according to the demonstration movement data, to simulate and display the virtual coach performing the current yoga movement and explaining its key points (step 365). In this embodiment, it is assumed that the reality interaction module 290 transmits the loaded demonstration movement data to the reality device 400 through the communication interface 130 of the device 100, so that the reality device 400 can use augmented reality, according to the received demonstration movement data, to simulate and display the virtual coach performing the yoga movement and explaining the key points of the yoga movement currently being performed.
如此,透過本發明,可以產生適合使用者的練習計畫,並在使用者依據練習計畫進行瑜伽練習時透過實境技術給予動作指導。In this way, through the present invention, a practice plan suitable for the user can be generated, and when the user practices yoga according to the practice plan, movement guidance can be given through real-world technology.
上述實施例中，在裝置100的實境互動模組290依據示範動作資料使用實境技術模擬並顯示虛擬教練進行當前之瑜伽動作及說明動作要點(步驟365)後，也可以如「第3B圖」之流程所示，裝置100的資料取得模組210可以即時取得使用者之練習影像與生理狀態資料(步驟371)，裝置100的影像分析模組220可以分析使用者之練習影像以產生練習姿勢資料(步驟375)，裝置100的狀態判斷模組250可以依據練習姿勢資料與標準姿勢資料判斷動作完成度(步驟381)，裝置100的動作指導模組270可以在判斷練習姿勢資料與標準姿勢資料之差異達到門檻值時產生指導訊息(步驟385)，裝置100的實境互動模組290可以依據指導訊息使用實境技術模擬虛擬教練給予動作指導(步驟391)，同時，裝置100的計畫產生模組260可以依據動作完成度與生理狀態資料調整練習計畫(步驟395)。In the above embodiment, after the reality interaction module 290 of the device 100 uses reality technology to simulate and display the virtual coach performing the current yoga movement and explaining its key points according to the demonstration movement data (step 365), the flow shown in Figure 3B may also follow: the data acquisition module 210 of the device 100 acquires the user's practice image and physiological status data in real time (step 371); the image analysis module 220 of the device 100 analyzes the practice image to generate practice posture data (step 375); the state judgment module 250 of the device 100 judges the degree of movement completion based on the practice posture data and the standard posture data (step 381); the action guidance module 270 of the device 100 generates a guidance message when the difference between the practice posture data and the standard posture data reaches a threshold (step 385); the reality interaction module 290 of the device 100 uses reality technology to simulate the virtual coach giving movement guidance according to the guidance message (step 391); and, at the same time, the plan generation module 260 of the device 100 adjusts the practice plan based on the movement completion and the physiological status data (step 395).
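The Figure 3B loop (steps 371 to 395) can be summarized as a single control loop. The sketch below is deliberately abstract: every dependency (camera capture, wearable readout, pose model, completion scoring, guidance generation, the AR coach, and plan adjustment) is passed in as a callable, because the patent does not fix any of those interfaces; all names here are illustrative.

```python
import time

def practice_loop(capture_frame, read_physio, estimate_pose, score_completion,
                  make_guidance, show_guidance, adjust_plan, plan, duration_s=60.0):
    """Skeleton of the steps 371-395 loop: capture, analyze, judge, guide, then adjust the plan."""
    completions, heart_rates = [], []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        frame, physio = capture_frame(), read_physio()   # step 371
        pose = estimate_pose(frame)                       # step 375
        completion = score_completion(pose)               # step 381
        completions.append(completion)
        heart_rates.append(physio["heart_rate"])
        for hint in make_guidance(pose):                  # step 385
            show_guidance(hint)                           # step 391 (virtual coach)
        time.sleep(1.0)
    avg_completion = sum(completions) / max(len(completions), 1)
    return adjust_plan(plan, avg_completion, heart_rates)  # step 395
```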
綜上所述,可知本發明與先前技術之間的差異在於具有分析使用者影像以產生使用者的初始姿勢資料,並依據使用者的初始姿勢資料及與使用者影像同步收集的生理狀態資料判斷使用者的初始身體狀態,及選擇與初始身體狀態相符的瑜伽動作以產生訓練計畫後,依據訓練計畫中之瑜伽動作的示範動作資料使用實境技術模擬並顯示虛擬教練進行動作示範與要點說明之技術手段,藉由此一技術手段可以來解決先前技術所存在無法提供使用者確認姿勢是否正確的問題,進而達成提供即時動作指導和個人練習計畫之技術功效。In summary, the difference between the present invention and prior art lies in its ability to analyze user images to generate the user's initial posture data, determine the user's initial physical condition based on the user's initial posture data and physiological status data collected simultaneously with the user's images, select yoga movements that match the initial physical condition to generate a training plan, and then use real-world technology to simulate and display a virtual coach demonstrating the movements and explaining key points based on the demonstration movement data of the yoga movements in the training plan. This technical means can solve the problem of prior art that cannot provide users with confirmation whether their posture is correct, thereby achieving the technical effect of providing real-time movement guidance and personalized practice plans.
再者,本發明之分析瑜伽動作與生理狀況以產生練習計畫之方法,可實現於硬體、軟體或硬體與軟體之組合中,亦可在電腦系統中以集中方式實現或以不同元件散佈於若干互連之電腦系統的分散方式實現。Furthermore, the method of analyzing yoga movements and physiological conditions to generate a practice plan of the present invention can be implemented in hardware, software, or a combination of hardware and software. It can also be implemented in a centralized manner in a computer system or in a distributed manner with different components distributed across several interconnected computer systems.
雖然本發明所揭露之實施方式如上，惟所述之內容並非用以直接限定本發明之專利保護範圍。任何本發明所屬技術領域中具有通常知識者，在不脫離本發明所揭露之精神和範圍的前提下，對本發明之實施的形式上及細節上作些許之更動潤飾，均屬於本發明之專利保護範圍。本發明之專利保護範圍，仍須以所附之申請專利範圍所界定者為準。Although the embodiments of the present invention are disclosed as above, the foregoing is not intended to directly limit the scope of patent protection of the present invention. Any person of ordinary skill in the art to which the present invention pertains may make minor modifications and refinements to the form and details of its implementation without departing from the spirit and scope disclosed herein, and such modifications remain within the scope of patent protection of the present invention. The scope of patent protection of the present invention shall still be defined by the appended claims.
100: 裝置 110: 記憶體 120: 攝影機 130: 通訊介面 140: 儲存媒體 150: 輸入元件 170: 處理器 190: 匯流排 210: 資料取得模組 220: 影像分析模組 240: 資料載入模組 250: 狀態判斷模組 260: 計畫產生模組 270: 動作指導模組 290: 實境互動模組 400: 實境裝置 步驟310: 取得使用者影像及同步收集之生理狀態資料 步驟320: 分析使用者影像以產生與生理狀態資料對應之初始姿勢資料 步驟330: 依據初始姿勢資料及生理狀態資料判斷使用者之初始身體狀態 步驟340: 選擇與初始身體狀態相符之瑜伽動作以產生練習計畫,練習計畫包含瑜伽動作及對應之練習時間 步驟350: 使用實境技術模擬虛擬教練 步驟361: 載入與當前之瑜伽動作對應之示範動作資料 步驟365: 依據示範動作資料使用實境技術模擬並顯示虛擬教練進行當前之瑜伽動作及說明動作要點 步驟371: 即時取得使用者之練習影像與生理狀態資料 步驟375: 分析使用者之練習影像以產生練習姿勢資料 步驟381: 依據練習姿勢資料與標準姿勢資料判斷動作完成度 步驟385: 判斷練習姿勢資料與標準姿勢資料之差異達到門檻值時產生指導訊息 步驟391: 依據指導訊息使用實境技術模擬虛擬教練給予動作指導 步驟395: 依據動作完成度與生理狀態資料調整練習計畫 100: Device 110: Memory 120: Camera 130: Communication Interface 140: Storage Media 150: Input Device 170: Processor 190: Bus 210: Data Acquisition Module 220: Image Analysis Module 240: Data Loading Module 250: State Detection Module 260: Project Generation Module 270: Action Guidance Module 290: Reality Interaction Module 400: Reality Device Step 310: Acquire user images and simultaneously collected physiological status data Step 320: Analyze user images to generate initial posture data corresponding to the physiological status data Step 330: Determine the user's initial physical state based on the initial posture data and physiological status data Step 340: Select yoga poses that match the initial physical state to generate a practice plan, which includes yoga poses and corresponding practice times Step 350: Use reality technology to simulate a virtual instructor Step 361: Load demonstration movement data corresponding to the current yoga pose Step 365: Using real-world technology, simulate and display a virtual instructor performing the current yoga pose based on the demonstration movement data and explain the key points of the movement. Step 371: Real-time acquisition of the user's practice images and physiological status data. Step 375: Analyze the user's practice images to generate practice posture data. Step 381: Determine the completion of the movement based on the practice posture data and the standard posture data. Step 385: Generate a coaching message when the difference between the practice posture data and the standard posture data reaches a threshold. Step 391: Based on the coaching information, use real-world technology to simulate a virtual coach providing movement guidance. Step 395: Adjust the exercise plan based on movement completion and physiological status data.
第1圖為本發明所提之分析瑜伽動作與生理狀況以產生練習計畫之裝置之元件示意圖。 第2圖為本發明所提之處理器之模組示意圖。 第3A圖為本發明所提之分析瑜伽動作與生理狀況以產生練習計畫之方法流程圖。 第3B圖為本發明所提之調整練習計畫及給予動作指導之方法流程圖。 Figure 1 is a schematic diagram of the components of the device for analyzing yoga movements and physiological conditions to generate a practice plan according to the present invention. Figure 2 is a schematic diagram of the modules of the processor according to the present invention. Figure 3A is a flow chart of the method for analyzing yoga movements and physiological conditions to generate a practice plan according to the present invention. Figure 3B is a flow chart of the method for adjusting a practice plan and providing practice guidance according to the present invention.
步驟310:取得使用者影像及同步收集之生理狀態資料 Step 310: Obtain user images and simultaneously collected physiological status data
步驟320:分析使用者影像以產生與生理狀態資料對應之初始姿勢資料 Step 320: Analyze user images to generate initial posture data corresponding to physiological state data
步驟330:依據初始姿勢資料及生理狀態資料判斷使用者之初始身體狀態 Step 330: Determine the user's initial physical condition based on the initial posture data and physiological status data.
步驟340:選擇與初始身體狀態相符之瑜伽動作以產生練習計畫,練習計畫包含瑜伽動作及對應之練習時間 Step 340: Select yoga poses that match your initial physical condition to generate a practice plan. The practice plan includes the yoga poses and the corresponding practice time.
步驟350:使用實境技術模擬虛擬教練 Step 350: Use real-world technology to simulate a virtual coach
步驟361:載入與當前之瑜伽動作對應之示範動作資料 Step 361: Load the demonstration movement data corresponding to the current yoga movement.
步驟365:依據示範動作資料使用實境技術模擬並顯示虛擬教練進行當前之瑜伽動作及說明動作要點 Step 365: Use real-world technology to simulate and display a virtual instructor performing the current yoga movements based on the demonstration movement data and explaining the key points of the movements.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW113143037A TWI891564B (en) | 2024-11-08 | 2024-11-08 | Device and method for analyzing yoga movements and physiological conditions to generate practice plan |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| TWI891564B true TWI891564B (en) | 2025-07-21 |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TW202209275A (en) * | 2020-08-15 | 2022-03-01 | 喬山健康科技股份有限公司 | Fitness exercise guidance apparatus capable of guiding the user to perform fitness exercise by using interactive images |
| TW202243706A (en) * | 2021-04-02 | 2022-11-16 | 美商愛康有限公司 | Virtual environment workout controls |
| JP2024140925A (en) * | 2023-03-28 | 2024-10-10 | 株式会社日立製作所 | Exercise support system and exercise support method |