CN110584782B - Medical image processing method, medical image processing apparatus, medical system, computer, and storage medium - Google Patents
- Publication number: CN110584782B (application CN201910932560.3A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A—HUMAN NECESSITIES; A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE; A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B34/25—User interfaces for surgical systems
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for (under A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges)
- A61B90/361—Image-producing devices, e.g. surgical cameras
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
- A61B2034/2068—Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
Abstract
The present application relates to a medical image processing method, apparatus, medical system, computer, and storage medium. The method comprises: receiving a three-dimensional scan image of a target object, a global image of the preset space in which the target object is located, and a real-time image of a local view angle within that space, in order to obtain a fused image. The three-dimensional scan image of the target object contains a scan image of a region of interest, and the global image contains three-dimensional spatial coordinate information for objects in the preset space. Through image registration among the three-dimensional scan image, the global image, and the real-time image, the scan image of the region of interest is selectively presented in the real-time image in real time as the local view angle changes. By registering the scan image and the real-time image each with the global image, all three images are converted into the same three-dimensional coordinate system, which determines the relative position of the scan image within the real-time image; the scan image can therefore be displayed inside the real-time image, making observation easier and improving efficiency.
Description
Technical Field
The present invention relates to the field of medical imaging, and in particular to a medical image processing method, apparatus, medical system, computer device, and storage medium.
Background
During medical procedures performed inside the human body, accidents may occur if the doctor or operator cannot observe the internal anatomy. For example, when an interventional device (e.g., a radio-frequency ablation catheter) is used in a patient with complex vascular anatomy, the device may become tangled after insertion, compromising safety. In such procedures the doctor or operator therefore generally needs to observe the inside of the body through a medical scan image.
Conventional approaches either scan continuously during the procedure (e.g., X-ray radiography) or display a scan image acquired beforehand. Continuous scanning, however, inflicts substantial radiation exposure, while a pre-acquired scan image cannot satisfy the need for real-time observation. Moreover, the scan image does not correspond directly to the actual treatment site, so the doctor or operator must look back and forth between the image and the patient; this prolongs the operation, increasing the patient's discomfort and the difficulty of the procedure.
Disclosure of Invention
Accordingly, there is a need for a medical image processing method, apparatus, computer device, and storage medium that can display the medical scan image, in real time, at the corresponding position on the actual part of the patient's body, thereby assisting the doctor and improving efficiency.
The invention provides a medical image processing method, which comprises the following steps:
Step 1: receiving a three-dimensional scan image of a target object, a global image of the preset space in which the target object is located, and a real-time image of a local view angle within the preset space; the three-dimensional scan image of the target object comprises a scan image of a region of interest, and the global image comprises three-dimensional spatial coordinate information for objects in the preset space;
Step 2: selectively presenting the scan image of the region of interest in the real-time image in real time, as the local view angle changes, through image registration among the three-dimensional scan image, the global image, and the real-time image, so as to obtain a fused image; wherein, when part or all of the region of interest lies within the local view angle, the scan image of that part (or of the whole region) is presented in the real-time image, and when the region of interest does not lie within the local view angle, its scan image is not presented.
In this method, the scan image of the region of interest and the real-time image of the doctor's local view angle are each registered with the global image of the space, so that the contents of all three images are converted into the same three-dimensional coordinate system. This determines where in the real-time image the scan image of the region of interest belongs, so the scan can be displayed directly within the doctor's view, making observation easier and the procedure more efficient.
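A minimal sketch of the selective-presentation rule in step 2, under the simplifying assumption (for illustration only, not part of the patent) that the region of interest and the volume covered by the local view angle are both approximated by axis-aligned boxes in the shared room coordinate system:

```python
import numpy as np

def visible_portion(roi_min, roi_max, view_min, view_max):
    """Return the overlap between the region of interest and the volume
    covered by the local view angle (both as axis-aligned boxes in room
    coordinates), or None when the region lies entirely outside the view."""
    lo = np.maximum(roi_min, view_min)      # element-wise intersection
    hi = np.minimum(roi_max, view_max)
    if np.any(lo >= hi):                    # separated along some axis
        return None                         # do not present the scan image
    return lo, hi                           # present only this sub-volume
```

With the region of interest spanning (2..4)³ and the view covering (3..10)³, only the (3..4)³ sub-volume would be presented; a disjoint view yields `None` and no overlay at all.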
In one embodiment, the three-dimensional scan image of the target object further comprises a scan image of a localization area;
Step 2 then specifically comprises:
registering the scan image of the region of interest with the global image using the localization area, to determine the three-dimensional spatial coordinates of the region of interest in the preset space;
registering the real-time image with the global image, using an object that appears in both as a reference, to determine the three-dimensional spatial coordinates of the real-time image in the preset space;
determining the corresponding position of the region of interest within the real-time image from the three-dimensional spatial coordinates of both in the preset space;
displaying the scan image of the region of interest at that position in the real-time image, to obtain the fused image.
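The chain of coordinate conversions above can be sketched as composing two homogeneous transforms, both estimated against the global (room) image; the 4×4 matrix representation and function name are illustrative assumptions:

```python
import numpy as np

def to_live_frame(T_scan2room, T_live2room, p_scan):
    """Map a point from the 3-D scan image into the real-time image frame
    by passing through the shared room (global) coordinate system:
    scan -> room (first registration), then room -> live (second)."""
    p = np.append(np.asarray(p_scan, float), 1.0)    # homogeneous point
    p_room = T_scan2room @ p                         # scan -> room
    p_live = np.linalg.inv(T_live2room) @ p_room     # room -> live view
    return p_live[:3]

# Toy transforms: the scan frame is shifted 1 unit along x from the room
# frame, the live camera frame 2 units along y.
T_scan2room = np.eye(4); T_scan2room[:3, 3] = [1.0, 0.0, 0.0]
T_live2room = np.eye(4); T_live2room[:3, 3] = [0.0, 2.0, 0.0]
```

The scan-frame origin then lands at (1, −2, 0) in the live frame, which is exactly "the corresponding position of the region of interest in the real-time image" in the terms of this embodiment.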
In one embodiment, the step of registering the three-dimensional scan image with the global image using the localization area comprises:
acquiring a first reference mark of the localization area in the three-dimensional scan image and a second reference mark of the localization area in the global image;
and registering the three-dimensional scan image with the global image according to the position of the first reference mark in the scan image and the position of the second reference mark in the global image.
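One common way to realize registration from paired reference marks is the Kabsch (orthogonal Procrustes) algorithm, which recovers the rigid rotation and translation aligning the first reference marks onto the second. This is a generic sketch of that technique, not necessarily the registration the patent itself uses:

```python
import numpy as np

def register_landmarks(P, Q):
    """Estimate the rigid transform (R, t) with R @ p + t ~= q for paired
    landmarks P (e.g. first reference marks, scan frame) and Q (second
    reference marks, global frame), via the Kabsch algorithm."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cp).T @ (Q - cq)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard vs. reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given at least three non-collinear mark pairs, the recovered (R, t) converts every scan-frame coordinate into the global frame, which is what fixes the region of interest in the preset space's coordinate system.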
In one embodiment, the localization area is the head region of the target object, and the step of acquiring the first reference mark of the localization area in the three-dimensional scan image and the second reference mark of the localization area in the global image comprises:
identifying the skull of the target object in the three-dimensional scan image as the first reference mark;
identifying facial features of the target object in the global image, via a face recognition algorithm, as the second reference mark;
wherein the facial features include at least one of the eyes, mouth, nose, ears, and eyebrows.
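If the facial features recognized in the global image come from a depth camera, each detected feature pixel can be lifted to a 3-D point in the camera's coordinate system by pinhole back-projection, giving the second reference marks spatial coordinates. The intrinsics below (fx, fy, cx, cy) are assumed calibrated and the function name is illustrative:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a detected facial-feature pixel (u, v) with measured depth
    into the depth camera's 3-D coordinate system using the pinhole
    model; fx, fy, cx, cy are the camera's calibrated intrinsics."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A feature detected at the principal point lies on the optical axis:
# backproject(320, 240, 1.0, 600.0, 600.0, 320.0, 240.0) -> (0.0, 0.0, 1.0)
```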
In one embodiment, a magnetic field is provided in the preset space. After displaying the scan image of the region of interest at the corresponding location to obtain the fused image, the method further comprises:
acquiring the magnetic-field three-dimensional coordinates of a preset object in the preset space;
deriving, from those coordinates and the preset object's three-dimensional spatial coordinates, the relative relationship between the spatial coordinate system and the magnetic-field coordinate system of the preset space;
acquiring the magnetic-field three-dimensional coordinates of the interventional device in the preset space;
determining the interventional device's three-dimensional spatial coordinates in the preset space from that relative relationship and the device's magnetic-field coordinates;
determining the device's corresponding position in the fused image from its three-dimensional spatial coordinates;
displaying a virtual image of the interventional device at that position on the fused image;
wherein the interventional device is used for interventional treatment of the region of interest of the target object.
In one embodiment, the method further comprises:
acquiring updated magnetic-field three-dimensional coordinates of the interventional device in real time as the device moves;
and updating the position of the device's virtual image in the fused image from the updated coordinates, displaying the device's moving path.
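A toy sketch of the magnetic-to-room conversion and path tracking described above, under the simplifying assumption that the magnetic-field frame and the room frame differ only by a fixed translation estimated from the preset object; a real system would estimate a full rigid transform, e.g. from several tracked markers. The class and names are illustrative:

```python
import numpy as np

class CatheterTracker:
    """Convert the interventional device's magnetic-field coordinates into
    room coordinates and record its moving path. Simplifying assumption:
    the two frames differ only by a fixed translation, estimated once from
    a preset object whose position is known in both systems."""
    def __init__(self, preset_room, preset_mag):
        # relative relationship between the two coordinate systems
        self.offset = np.asarray(preset_room, float) - np.asarray(preset_mag, float)
        self.path = []                      # moving path, room coordinates
    def update(self, device_mag):
        p = np.asarray(device_mag, float) + self.offset   # magnetic -> room
        self.path.append(p)                 # extend the displayed path
        return p
```

Each `update` call corresponds to one real-time magnetic reading; the accumulated `path` is what the embodiment displays as the device's moving path in the fused image.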
The invention also provides a medical image processing device, comprising:
an image receiving module, configured to receive a three-dimensional scan image of a target object, a global image of the preset space in which the target object is located, and a real-time image of a local view angle within the preset space, wherein the global image comprises three-dimensional spatial coordinate information for objects in the preset space and the three-dimensional scan image comprises a scan image of a region of interest; and
an image processing module, configured to selectively present the scan image of the region of interest in the real-time image in real time, as the local view angle changes, through image registration among the three-dimensional scan image, the global image, and the real-time image, so as to obtain a fused image; wherein, when part or all of the region of interest lies within the local view angle, the scan image of that part (or of the whole region) is presented in the real-time image, and when the region of interest does not lie within the local view angle, its scan image is not presented.
In this apparatus, the scan image of the region of interest and the real-time image of the doctor's local view angle are each registered with the global image of the space, so that the contents of all three images are converted into the same three-dimensional coordinate system. This determines where in the real-time image the scan image belongs, so the scan can be displayed within the doctor's view, making observation easier and the procedure more efficient.
In one embodiment, the three-dimensional scan image of the target object further comprises a scan image of the localization area, and the image processing module comprises:
a first registration module, configured to register the three-dimensional scan image with the global image using the localization area, to determine the three-dimensional spatial coordinates of the region of interest in the preset space;
a second registration module, configured to register the real-time image with the global image, to determine the three-dimensional spatial coordinates of the real-time image in the preset space;
a position determining module, configured to determine the corresponding position of the region of interest in the real-time image from the three-dimensional spatial coordinates of both in the preset space; and
an image fusion module, configured to display the scan image of the region of interest at that position, to obtain the fused image.
In one embodiment, the medical image processing apparatus is applied in a medical system in which a magnetic field is provided in the preset space. The medical system comprises an interventional device, for interventional treatment of the region of interest of the target object, and magnetic positioning devices disposed in the interventional device and in a preset object, which acquire the magnetic-field three-dimensional coordinates of the interventional device and the preset object within the magnetic field; the processing apparatus is communicatively connected to the magnetic positioning devices.
The processing apparatus is further configured to determine, from the preset object's magnetic-field and spatial coordinates, the relative relationship between the spatial and magnetic-field coordinate systems of the preset space; to determine the interventional device's spatial coordinates from that relationship and the device's magnetic-field coordinates; and thereby to determine the device's position in the fused image and display a virtual image of the device at that position.
The present invention also provides a medical system comprising:
an interventional device for interventional treatment of a region of interest of the target object;
a magnetic field generating device for generating a magnetic field in the preset space;
a plurality of magnetic positioning devices, at least one disposed in the interventional device and the rest in a preset object, for acquiring the magnetic-field three-dimensional coordinates of the interventional device and the preset object within the magnetic field; and
the medical image processing apparatus described above, communicatively connected to the plurality of magnetic positioning devices and configured to determine the relative relationship between the spatial and magnetic-field coordinate systems of the preset space from the preset object's magnetic-field and spatial coordinates, determine the interventional device's spatial coordinates from that relationship and the device's magnetic-field coordinates, and thereby determine the device's position in the fused image and display a virtual image of the device at that position.
In one embodiment, the medical system further comprises:
a camera device for acquiring the global image of the preset space; and
a visualization device for acquiring the real-time image of the local view angle in the preset space and displaying the fused image;
the medical image processing apparatus being communicatively connected to the camera device and the visualization device respectively, to receive the global image and the real-time image.
In one embodiment, the preset object is the camera device and/or the visualization device.
In one embodiment, the medical system further comprises:
a slide rail disposed in the preset space, on which the camera device is movably mounted.
In one embodiment, the medical system further comprises a scanning device for acquiring the scan image of the region of interest of the target object, the medical image processing apparatus being communicatively connected to the scanning device to receive the three-dimensional scan image. The scanning device comprises at least one of a computed tomography (CT) device, a magnetic resonance imaging (MRI) device, and a digital subtraction angiography (DSA) device; the camera device comprises a depth camera and/or a holographic camera; and the visualization device comprises augmented reality glasses and/or mixed reality glasses.
In one embodiment, the interventional device comprises at least one of a guidewire, a catheter, a sheath, and a puncture needle.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable by the processor, the processor implementing the steps of the above method when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
Drawings
FIG. 1 is a flow diagram illustrating a method for processing a medical image according to one embodiment;
FIG. 2 is a detailed flowchart of a medical image processing method according to an embodiment;
FIG. 3 is a flowchart of the step of registering the three-dimensional scan image with the global image using the localization area, in a medical image processing method according to an embodiment;
FIG. 4 is a flowchart illustrating the steps of acquiring the first reference identifier and the second reference identifier in the processing method of the medical image in the embodiment of FIG. 3;
FIG. 5 is a schematic diagram of registration of a three-dimensional scan image and a global image in one embodiment;
FIG. 6 is a flowchart illustrating a method of processing a medical image according to another embodiment;
FIG. 7 is a schematic diagram showing the configuration of a medical image processing apparatus according to an embodiment;
FIG. 8 is a block diagram of a medical system in one embodiment;
FIG. 9 is a schematic diagram of the configuration of the medical system in one embodiment;
fig. 10 is a schematic structural view of a medical system in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a flowchart of a medical image processing method according to an embodiment. As shown in Fig. 1, the method includes:
step S12: receiving a three-dimensional scanning image of a target object, a global image of a preset space where the target object is located and a real-time image of a local visual angle in the preset space; the three-dimensional scanning image of the target object comprises a scanning image of an interested area, and the global image comprises three-dimensional space coordinate information of an object in a preset space;
specifically, the target object may be a patient or a tester receiving medical treatment, the preset space may be a ward or an operating room where corresponding medical treatment is performed, and first, a medical scan is required, and specifically, the target object may be scanned by a Computed Tomography (CT) device, a Magnetic Resonance Imaging (MRI) device, or a Digital Subtraction Angiography (DSA) device, and the scanning region includes a region of interest, and the region of interest may be an organ or a tissue of the target object in the region of interest. And performing three-dimensional reconstruction on the scanned data to obtain a three-dimensional scanning image of the target object, wherein the three-dimensional scanning image of the target object comprises a scanning image of the region of interest. The global image in the preset space can be obtained by shooting through devices such as a holographic camera or a depth camera, and the holographic camera or the depth camera can correspondingly establish a three-dimensional space coordinate system in the preset space when shooting the global image, wherein the global image comprises images of all objects in the preset space and corresponding three-dimensional space coordinate information. After the global image is acquired, a real-time image of a local view angle in a preset space, that is, a real-time image watched by a doctor or an operator, is acquired by shooting through a camera or a visualization device worn by the doctor or the operator, and the view angle of the real-time image is the same as the real-time view angle of the doctor or the operator.
Step S14: selectively presenting the scan image of the region of interest in the real-time image in real time, as the local view angle changes, through image registration among the three-dimensional scan image, the global image, and the real-time image, so as to obtain a fused image. When part or all of the region of interest lies within the local view angle, the scan image of that part (or of the whole region) is presented in the real-time image; when the region of interest does not lie within the local view angle, its scan image is not presented.
Specifically, after the three-dimensional scan image, the global image, and the real-time image are received, image registration can be performed among them. The registration procedure for the three images can be chosen to suit the situation; it is generally carried out using the same (or a corresponding) object in the images as a reference, which may be a specific part of the target object or a separately placed marker. Registration yields the three-dimensional spatial coordinates, in the preset space, of the objects in the scan image and in the real-time image, so the scan image of the region of interest and the real-time image can both be converted into the preset space's coordinate system, and the position of the region of interest within the real-time image follows from its coordinates. Whenever the doctor's real-time view angle covers the region of interest, its three-dimensional scan image can be displayed at the corresponding position in the real-time image, letting the doctor observe the internal structure of the region directly. This is more convenient and intuitive, and improves the efficiency of subsequent operations.
Fig. 2 is a detailed flowchart of a medical image processing method according to an embodiment. As shown in Fig. 2, building on the embodiment above, the method includes:
step S110: receiving a three-dimensional scanning image of a target object, a global image of a preset space where the target object is located and a real-time image of a local visual angle in the preset space, wherein the global image comprises three-dimensional space coordinate information of objects (including people and objects) in the preset space, and the three-dimensional scanning image of the target object comprises a scanning image of a region of interest and a scanning image of a positioning region.
In particular, for a target object, in addition to acquiring a scan image of a region of interest, a scan image of a localized region needs to be acquired to facilitate subsequent image registration. The localization area may be an organ or tissue of the target object, etc., and the localization area may be a different area from the region of interest, or may be a partially or completely overlapping area. And scanning and three-dimensionally reconstructing the region of interest and the positioning region to obtain a scanning image of the positioning region and the region of interest. The method comprises the steps of obtaining a global image in a preset space through shooting by a holographic camera, a depth camera and other devices, obtaining a real-time image through shooting by a camera or visual equipment worn by a doctor or an operator and the like, wherein the visual angle of the real-time image is the same as the real-time watching visual angle of the doctor or the operator.
Step S130: registering the three-dimensional scan image with the global image using the localization region, to determine the three-dimensional spatial coordinates of the region of interest in the preset space.
Specifically, after the three-dimensional scan image and the global image are received, they can be registered, generally using the same object of the localization region in both images as a reference. The reference may be chosen to suit the situation: registration may use a specific part of the localization region, or a marker placed in the localization region. From the reference object's position in each image, registration determines the positional relationship between objects in the three-dimensional scan image and objects in the global image. Because the global image contains three-dimensional spatial coordinate information for all objects in the preset space, this relationship converts the scan image into the preset space's coordinate system, fixing the three-dimensional spatial coordinates of the region of interest.
Step S150: registering the real-time image with the global image, using the same object in both as a reference, to determine the three-dimensional spatial coordinates of the real-time image in the preset space.
Specifically, after the real-time image of the doctor's view angle is acquired, it must be registered with the global image, generally using the same object in both images as a reference; the reference may be chosen to suit the situation, and objects with relatively fixed positions, such as the hospital bed in the preset space, make good references. From the reference object's position in each image, registration determines the positional relationship between objects in the real-time image and objects in the global image. Because the global image contains three-dimensional spatial coordinate information for the objects in the preset space, this relationship converts the real-time image into the preset space's coordinate system, fixing the three-dimensional spatial coordinates of the objects visible in the doctor's view.
From the above description, it is understood that the global image may also include the three-dimensional space coordinate information of only some objects (including people and things) in the preset space, and the real-time image the three-dimensional space coordinate information of some objects within the local view angle, as long as the registration of these two images with the three-dimensional scan image is not affected.
Step S170: and determining the corresponding position of the region of interest in the real-time image based on the three-dimensional space coordinates of the region of interest and the real-time image in the preset space.
Step S190: the scanned image of the region of interest is displayed at the corresponding location in the real-time image to obtain a fused image.
Specifically, after the three-dimensional scan image and the real-time image have each been registered with the global image, the region of interest and the real-time image are both converted into the three-dimensional space coordinate system of the preset space, so that the position of the region of interest in the real-time image, that is, its position within the doctor's view angle, can be determined from the three-dimensional space coordinates of the region of interest. The three-dimensional scan image and the real-time image can then be fused, with the scan image of the region of interest displayed at the corresponding position in the real-time image. The fused image may be displayed on a visualization device worn by the doctor, such as mixed reality glasses, so that the scan image of the region of interest is projected onto the corresponding position of the real human body within the doctor's real-time view angle. The internal structure of the region of interest can thus be observed more conveniently and intuitively, which improves the efficiency of subsequent processing operations.
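As an illustrative sketch (not part of the original disclosure; the camera parameters and helper names are assumptions), the display step can be modeled as projecting the region of interest's preset-space coordinates into the real-time view with a pinhole camera model and alpha-blending a scan patch at the projected position:

```python
import numpy as np

def project_point(p_world, K, R, t):
    """Project a 3-D point (preset-space coordinates) into the real-time
    image plane using camera extrinsics (R, t) and intrinsics K."""
    p_cam = R @ p_world + t
    u, v, w = K @ p_cam
    return np.array([u / w, v / w])

def overlay(frame, scan_patch, center, alpha=0.5):
    """Alpha-blend the scanned ROI patch onto the live frame around `center`."""
    h, w = scan_patch.shape[:2]
    y0, x0 = int(center[1] - h // 2), int(center[0] - w // 2)
    roi = frame[y0:y0 + h, x0:x0 + w]
    frame[y0:y0 + h, x0:x0 + w] = ((1 - alpha) * roi
                                   + alpha * scan_patch).astype(frame.dtype)
    return frame

K = np.array([[800.0, 0.0, 320.0],       # illustrative intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])   # camera 2 m in front of the ROI
center = project_point(np.zeros(3), K, R, t)  # ROI at the preset-space origin
frame = overlay(np.zeros((480, 640), np.uint8),
                np.full((40, 40), 200, np.uint8), center)
```

In practice the extrinsics (R, t) of the visualization device would come from the real-time-to-global registration described above, updated as the doctor's view angle changes.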
According to the above medical image processing method, the scan image of the region of interest and the real-time image of the doctor's local view angle are each registered with the global image of the preset space, so that the contents of the three images are converted into the same three-dimensional space coordinate system and the corresponding position of the medical scan image of the region of interest in the real-time image is determined. The scan image of the region of interest can therefore be displayed in the real-time image of the doctor's view angle, which facilitates the doctor's observation and improves processing efficiency.
Fig. 3 is a flowchart illustrating a step S130 of the medical image processing method in an embodiment, and in an embodiment, as shown in fig. 3, the step S130 may specifically include:
step S132: and acquiring a first reference identifier of the positioning area in the three-dimensional scanning image and a second reference identifier of the positioning area in the global image.
Step S134: and registering the three-dimensional scanning image and the global image according to the position of the first reference mark in the three-dimensional scanning image and the position of the second reference mark in the global image.
Specifically, after the three-dimensional scan image and the global image are acquired, the objects in the two images may be extracted and recognized respectively, so that the same object is found in both images to serve as the reference object for registration. For example, the image of the reference object in the localization area as displayed in the three-dimensional scan image is used as the first reference identifier, and the image of the reference object as displayed in the global image is used as the second reference identifier. The reference object should generally be an object with a fixed position and a clear shape and outline, so that it can be conveniently identified in both the three-dimensional scan image and the global image. For example, because the body of the target object may be covered by a white cloth during scanning and treatment, with only the head exposed, the head is easier to recognize in the global image and can therefore be used as the reference object. The two images are then registered according to the positions of the displayed images of the target object's head in the three-dimensional scan image and in the global image.
Further, fig. 4 is a schematic flowchart of the step S132 of the medical image processing method in the foregoing embodiment, and in a preferred embodiment, as shown in fig. 4, the positioning region is a head region of the target object, and the step S132 may specifically include:
step S1322: acquiring a skull of a target object in a three-dimensional scanning image as a first reference identifier;
step S1324: and recognizing the facial features of the target object in the global image as a second reference mark through a face recognition algorithm. Wherein the facial features include at least one of eyes, mouth, nose, ears, and eyebrows.
The accurate position information of the eyes, nose, mouth, and so on in the skull scan image can be extracted using a face matching algorithm, and the first reference identifier in the three-dimensional scan image is matched with the second reference identifier in the global image. The position information of the first reference identifier in the three-dimensional scan image is thereby converted into the three-dimensional space coordinate system of the preset space, completing the registration of the entire three-dimensional scan image with the global image.
It is understood that, when registering the three-dimensional scan image with the global image, other parts or additional markers may be selected as the reference object instead of the head of the target object, so that registration can still be performed when no head image of the target object is available. For example, positioning objects can be separately arranged near the region of interest, and image registration can be carried out according to the positions of the positioning objects as displayed in the three-dimensional scan image and in the global image. Moreover, by adjusting the number, shape, and placement of the positioning objects, this approach can be more convenient and accurate than using the head as the reference object, effectively improving the precision of image registration.
In a specific embodiment, fig. 5 is a schematic diagram of the registration of a three-dimensional scan image with a global image. As shown in fig. 5, the region of interest is a venous vascular access running from the femoral vein of the target object to the subcardial cardiac vein and the cardiac system. As shown in fig. 5 (a), the reference object may be the head of the target object, and the left global image and the right three-dimensional scan image may be registered by the position of the skull. Alternatively, as shown in fig. 5 (b), markers may be placed at specific positions on the body of the target object; for example, three positioning markers arranged in a triangle on the torso serve as reference objects for registration. The positioning markers may be radiopaque objects affixed to the body, so that they are clearly visible in the three-dimensional scan image, and the left global image and the right three-dimensional scan image can then be registered according to the positions of the three positioning markers. In this case, the localization area is the torso of the target object: the first reference identifier of the localization area in the three-dimensional scan image is the scan image of the three positioning markers, and the second reference identifier of the localization area in the global image is the captured image of the three markers.
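To illustrate how such fiducials might be located automatically (a sketch under assumptions, not part of the original disclosure: radiopaque markers are taken to appear as bright connected blobs in a CT slice, and the threshold is hypothetical), a simple flood-fill can group above-threshold pixels into blobs and return their centroids for use in registration:

```python
import numpy as np

def marker_centroids(ct_slice, threshold):
    """Find centroids of radiopaque fiducial markers in a 2-D CT slice.
    Pixels above `threshold` are grouped into 4-connected blobs with a
    stack-based flood fill; each blob's mean coordinate is one marker."""
    mask = ct_slice > threshold
    visited = np.zeros_like(mask, dtype=bool)
    centroids = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                stack, blob = [(y, x)], []
                visited[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                centroids.append(np.array(blob, float).mean(axis=0))
    return centroids

# Synthetic slice with three bright 2x2 markers arranged in a triangle:
slice_ = np.zeros((50, 50))
for cy, cx in [(10, 10), (10, 40), (40, 25)]:
    slice_[cy:cy + 2, cx:cx + 2] = 3000.0     # far above soft-tissue values
centroids = marker_centroids(slice_, threshold=1000.0)
```

The three centroids, paired with the markers' positions detected in the global image, would supply the point correspondences needed for the registration step.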
Fig. 6 is a schematic flow chart of a medical image processing method in another embodiment, and as shown in fig. 6, on the basis of the above embodiment, a magnetic field is provided in a preset space in the present embodiment, and an interventional device is used for performing interventional treatment on a region of interest of a target object. After the step of displaying the three-dimensional image of the region of interest at the corresponding position to obtain the fused image, the processing method of the medical image in this embodiment may further include:
step S210: and acquiring the magnetic field three-dimensional coordinates of a preset object in a preset space.
Specifically, after the fused image is obtained, the region of interest of the target object may be further treated by an interventional device, which may specifically include a guide wire, a catheter, a sheath, a puncture needle, or the like. In order to localize the interventional device during the interventional procedure, a magnetic field localization technique may be used in the preset space. The magnetic field may be generated by a magnetic field generator disposed in the preset space. The magnetic field three-dimensional coordinates of one or more preset objects are obtained first, where a preset object is an object within the coverage range of the magnetic field in the preset space; for example, a hospital bed may serve as a preset object, and the camera device used to obtain the global image or the visualization device worn by the doctor in the above embodiments may also serve as preset objects. A magnetic positioning sensor or the like is arranged at the position of each preset object, and the three-dimensional coordinates of the preset object in the magnetic field are determined from the signal fed back by the magnetic sensor.
Step S220: and acquiring the relative relation between the three-dimensional space coordinate and the magnetic field three-dimensional coordinate in the preset space according to the magnetic field three-dimensional coordinate and the three-dimensional space coordinate of the preset object.
Specifically, since the three-dimensional space coordinates of each preset object are determined from the global image, the relative relationship between the magnetic field three-dimensional coordinates and the three-dimensional space coordinates within the preset space can be calculated from the magnetic field three-dimensional coordinates and the three-dimensional space coordinates of the preset objects. A functional relation for converting between the two coordinate systems can generally be obtained, so that the magnetic field coordinate system is matched with the three-dimensional space coordinate system of the preset space. Preferably, the accuracy of the conversion between the three-dimensional space coordinates and the magnetic field three-dimensional coordinates can be improved by sampling and calculating over multiple groups of preset objects.
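The functional relation between the two coordinate systems can be sketched, purely for illustration (not part of the original disclosure; the sampled values are invented), as a least-squares affine fit over several preset objects whose positions are known in both systems:

```python
import numpy as np

def fit_affine(mag_pts, space_pts):
    """Least-squares affine map from magnetic-field coordinates to
    preset-space coordinates, fitted from several sampled preset objects.
    Solves space = A @ mag + b by stacking homogeneous coordinates."""
    n = len(mag_pts)
    M = np.hstack([mag_pts, np.ones((n, 1))])   # (n, 4) homogeneous matrix
    X, *_ = np.linalg.lstsq(M, space_pts, rcond=None)
    A, b = X[:3].T, X[3]
    return A, b

# Sensors on several preset objects (bed, camera device, glasses, ...) read
# out in both coordinate systems; here the true relation is a translation
# by [1, 2, 3] (illustrative values):
mag   = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                  [0., 0., 1.], [1., 1., 1.]])
space = mag + np.array([1., 2., 3.])

A, b = fit_affine(mag, space)
device_space = A @ np.array([0.5, 0.5, 0.5]) + b   # convert a device reading
```

Sampling more preset objects over-determines the fit, which is what the preferred multi-group sampling above exploits to improve conversion accuracy.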
Step S230: and acquiring the three-dimensional coordinates of the magnetic field of the interventional device in a preset space.
Step S240: and determining the three-dimensional space coordinate of the intervention device in the preset space based on the relative relationship and the magnetic field three-dimensional coordinate of the intervention device.
In particular, a magnetic positioning sensor is also arranged on the interventional device, and after the interventional device enters the body or a blood vessel of the target object, its real-time magnetic field three-dimensional coordinates in the magnetic field can be determined from the signals fed back by this sensor. The magnetic field three-dimensional coordinates of the interventional device are then converted, according to the relative relationship between the magnetic field three-dimensional coordinates and the three-dimensional space coordinates obtained in the preceding step, into the three-dimensional space coordinates of the interventional device in the preset space. It can be understood that, in order to improve the accuracy of obtaining the magnetic field three-dimensional coordinates of the interventional device, the magnetic field generating device can be arranged under the sickbed, a position that is closer to the interventional device and has fewer spatial obstacles.
Step S250: and determining the corresponding position of the intervention device in the fusion image based on the three-dimensional space coordinates of the intervention device in the preset space.
Step S260: a virtual image of the interventional device is displayed at a corresponding location on the fused image.
Specifically, after the three-dimensional space coordinates of the interventional device in the preset space are determined, the position corresponding to those coordinates is located in the fused image of the real-time image and the three-dimensional scan image, and a virtual image of the interventional device is displayed at that position. The virtual image of the interventional device may be an image obtained by scanning the interventional device, or an icon drawn according to the shape and scale of the interventional device.
In this way, under the operating view angle of the doctor or operator, the scan image of the region of interest and the virtual image of the interventional device are seen projected on the real human body, and the position of the interventional device inside the target object, as well as its position relative to the region of interest, can be known in real time. This helps the doctor track the state of the interventional device inside the target object, improves the efficiency of subsequent operations, prevents the interventional device from injuring the human body, and improves the safety of the medical procedure.
Further, in an embodiment, the method for processing a medical image may further include:
step S270: and under the condition that the position of the interventional device moves, acquiring the updated magnetic field three-dimensional coordinates of the interventional device in real time.
Step S280: and updating the corresponding position of the virtual image of the intervention device in the fusion image based on the updated magnetic field three-dimensional coordinates, and displaying the moving path of the intervention device.
Specifically, because the interventional device may move within the body of the target object, the magnetic positioning sensor must work continuously to feed back the updated magnetic field three-dimensional coordinates of the interventional device as it moves. The updated three-dimensional space coordinates of the interventional device can then be determined from the relative relationship between the magnetic field three-dimensional coordinates and the three-dimensional space coordinates, and the virtual image of the interventional device is displayed at the position in the fused image corresponding to the updated coordinates. As the position of the interventional device is updated, its moving path and direction can be displayed on the fused image, for example marked with dotted lines, so that the doctor can monitor the movement process and trend of the interventional device. This prevents the moving interventional device from causing injury inside the target object and further improves the safety of the medical procedure.
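As a minimal sketch of the path-display bookkeeping (illustrative only, with invented class and method names; not part of the original disclosure), the fused-image renderer could keep a history of converted device positions and expose the segments of the trail to draw as a dashed polyline:

```python
import numpy as np

class DevicePathTracker:
    """Keep the history of interventional-device positions so the fused
    image can show the current marker plus a dashed trail. Positions are
    preset-space coordinates already converted from magnetic readings."""

    def __init__(self):
        self.path = []

    def update(self, space_coord):
        """Record the latest converted three-dimensional space coordinate."""
        self.path.append(np.asarray(space_coord, float))

    def current(self):
        """Most recent device position (where the virtual image is drawn)."""
        return self.path[-1]

    def trail_segments(self):
        """Consecutive position pairs, e.g. for drawing a dashed polyline."""
        return list(zip(self.path[:-1], self.path[1:]))

tracker = DevicePathTracker()
for p in [(0, 0, 0), (0, 0, 5), (0, 2, 8)]:     # device advancing in a vessel
    tracker.update(p)
```

Each new magnetic reading converted via the fitted coordinate relation would be passed to `update`, and the renderer would redraw the marker at `current()` and the dotted trail from `trail_segments()`.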
In the above embodiment of the medical image processing method, the three-dimensional scan image is fused with the global image and the real-time image is fused with the global image (the order of these two fusions is not limited in the present invention; they may also be performed simultaneously), and then the three-dimensional scan image is fused with the real-time image. Of course, in other embodiments, the real-time image and the three-dimensional scan image may be fused first, and the fused result then fused with the global image, which is not limited in the present invention.
Fig. 7 is a schematic structural diagram of a medical image processing apparatus according to an embodiment. As shown in fig. 7, in an embodiment, a medical image processing apparatus 300 includes an image receiving module 310 configured to receive a three-dimensional scan image of a target object (including scan images of the localization area and the region of interest), a global image of the preset space where the target object is located, and a real-time image of a local view angle within the preset space, where the global image includes the three-dimensional space coordinate information of all objects in the preset space. The apparatus also includes an image processing module configured to selectively present the scan image of the region of interest in the real-time image in real time, according to changes of the local view angle, through image registration among the three-dimensional scan image, the global image, and the real-time image: when part or all of the region of interest lies within the local view angle, the scan image of that part or all of the region of interest is presented in the real-time image; when no region of interest lies within the local view angle, the scan image of the region of interest does not appear in the real-time image.
Preferably, the image processing module specifically includes a first registration module 330, a second registration module 350, a position determination module 370 and an image fusion module 390, where the first registration module 330 is configured to register the three-dimensional scan image with the global image by using the localization area to determine three-dimensional space coordinates of the region of interest in the predetermined space; the second registration module 350 is configured to register the real-time image with the global image to determine a three-dimensional space coordinate of the real-time image in a preset space; the position determining module 370 is configured to determine a corresponding position of the region of interest in the real-time image based on the three-dimensional space coordinates of the region of interest and the real-time image in the preset space; the image fusion module 390 is configured to display the scanned image of the region of interest at a corresponding location in the real-time image to obtain a fused image.
Specifically, the image receiving module 310 is communicatively connected, in a wired or wireless manner, to a scanning device such as a CT or MR device, to a camera device such as a holographic camera, and to a visualization device. The image receiving module 310 receives the scan images of the localization area and the region of interest obtained by the scanning device, the global image captured by the holographic camera or similar device, and the real-time image of the doctor's view angle obtained by the visualization device. The image receiving module 310 sends the three-dimensional scan image and the global image to the first registration module 330, and sends the real-time image and the global image to the second registration module 350. After receiving the three-dimensional scan image and the global image, the first registration module 330 performs image registration using the position of the image displayed by the reference object of the localization area in the three-dimensional scan image and the position of the image displayed by the reference object in the global image, where the reference object may be a human head or other markers separately arranged in the localization area. After registration, the three-dimensional space coordinates of the region of interest in the preset space are determined from the three-dimensional space coordinate information in the global image, and the first registration module 330 sends the obtained three-dimensional space coordinates of the region of interest to the position determination module 370.
After receiving the real-time image and the global image, the second registration module 350 registers the reference object in the real-time image with the reference object in the global image, where the reference object may be a relatively fixed object in the preset space, such as a hospital bed or a cabinet, or another identifier separately placed in the preset space. After registration, the three-dimensional space coordinates in the preset space of the objects within the local view angle of the real-time image are determined from the three-dimensional space coordinate information in the global image, and the second registration module 350 sends the obtained three-dimensional space coordinates of the real-time image to the position determination module 370.
After receiving the three-dimensional space coordinates of the region of interest and the real-time image, the position determining module 370 finds a corresponding position in the real-time image that is the same as the three-dimensional space coordinates of the region of interest in the three-dimensional space coordinate system of the preset space, and sends the position to the image fusion module 390. After receiving the corresponding position information of the region of interest in the real-time image, the image fusion module 390 fuses the scanned image of the region of interest with the real-time image according to the position, and displays the scanned image of the region of interest at the corresponding position of the real-time image, thereby obtaining a final fused image.
Further, in an embodiment, the medical image processing apparatus 300 is applied to a medical system in which a magnetic field is provided in the preset space. The medical system includes an interventional device and magnetic positioning devices; the interventional device is used for performing interventional treatment on the region of interest of the target object, and the magnetic positioning devices are arranged in the interventional device and in a preset object and are used for acquiring the magnetic field three-dimensional coordinates of the interventional device and the preset object in the magnetic field. The medical image processing apparatus 300 is communicatively connected with the magnetic positioning devices and is further configured to determine the relative relationship between the three-dimensional space coordinates and the magnetic field three-dimensional coordinates within the preset space from the magnetic field three-dimensional coordinates and the three-dimensional space coordinates of the preset object, to determine the three-dimensional space coordinates of the interventional device in the preset space based on this relative relationship and the magnetic field three-dimensional coordinates of the interventional device, thereby determining the corresponding position of the interventional device in the fused image, and to display a virtual image of the interventional device at the corresponding position on the fused image.
The medical image processing apparatus 300 registers the three-dimensional scan image of the target object and the real-time image of the doctor's local view angle each with the global image of the preset space, so that the contents of the three images are converted into the same three-dimensional space coordinate system and the corresponding position of the medical scan image of the region of interest in the real-time image is determined. The scan image of the region of interest can therefore be displayed in the real-time image of the doctor's view angle, facilitating the doctor's observation and improving processing efficiency.
FIG. 8 is a block diagram of a medical system in one embodiment, as shown in FIG. 8, in which a medical system 50 is disposed in a predetermined space, the medical system 50 comprising: the system comprises a scanning device 510, an image pickup device 530, a visualization device 550 and a medical image processing device 300, wherein the scanning device 510 is used for acquiring a three-dimensional scanning image (including a scanning image of a positioning area and a region of interest) of a target object, the image pickup device 530 is used for acquiring a global image of a preset space, and the global image comprises three-dimensional space coordinate information of all objects in the preset space; the visualization device 550 is used for acquiring a real-time image of a local view angle in a preset space. The medical image processing apparatus 300 is communicatively connected to the scanning apparatus 510, the imaging apparatus 530, and the visualization apparatus 550, respectively, and is configured to receive the three-dimensional scan image, the global image, and the real-time image, perform fusion processing on the three-dimensional scan image, the global image, and the real-time image, and generate a fusion image.
Specifically, in the medical system 50, the scanning device 510 scans and three-dimensionally reconstructs a region of interest of the target object to obtain a three-dimensional scan image of the target object, including a positioning region and a scan image of the region of interest. The camera 530 performs panoramic shooting on a preset space where the medical system 50 is located, and obtains a global image including three-dimensional space coordinate information of all objects in the preset space. The visualization device 550 is worn by a doctor or an operator, and acquires a local real-time image of the visual angle of the doctor or the operator in real time. The scanning device 510, the imaging device 530, and the visualization device 550 transmit the three-dimensional scan image, the global image, and the real-time image to the medical image processing device 300. The processing apparatus 300 of the medical image may specifically be an image workstation, and the like, and the processing apparatus 300 of the medical image respectively registers the three-dimensional scan image and the real-time image with the global image to obtain three-dimensional space coordinates of the region of interest and the real-time image in a preset space, so as to determine a corresponding position of the region of interest in the real-time image, and display the scan image of the region of interest at the corresponding position of the real-time image. The doctor can watch the scanned image of the region of interest at the visual angle and project the scanned image at the corresponding position of the real human body, so that the doctor can observe the scanned image more conveniently and visually, and the processing efficiency is improved.
Fig. 9 is a schematic structural diagram of the medical system of the above embodiment. In one embodiment, as shown in fig. 9, the scanning device 510 includes at least one of a computed tomography device, a magnetic resonance scanning device, and a digital subtraction angiography device; the camera device 530 comprises a depth camera and/or a holographic camera; and the visualization device 550 includes augmented reality glasses and/or mixed reality glasses. The application scenario in fig. 9 may be a doctor performing a cardiac interventional operation in which an interventional device or the like must reach a target site. For example, in an operation on an arrhythmia patient, the interventional device needs to enter through the femoral vein and, via the inferior vena cava, gradually reach the heart through a vascular access; likewise, for a coronary heart disease patient, the interventional device needs to enter through the femoral or radial artery and advance step by step to the heart through a vascular access. The medical system 50 can virtually project the blood vessel map and the heart chamber model of the inside of the human body onto the visualization device, with their positions matched to the corresponding positions on the real human body, and the doctor can also see on the visualization device the moving state of the interventional device within the blood vessels and the heart, helping the doctor operate the interventional device accurately.
Specifically, the medical system 50 is disposed in a preset space, which may be an operating room. A carrying platform 520, such as an operating table or a hospital bed, is disposed in the operating room, and the target object generally lies on the carrying platform 520. The scanning device 510 scans and images the target object on the carrying platform 520; the scanning device 510 may specifically be a CT device, an MRI device, or a DSA device, for example a C-arm device that scans and images the target object. The camera device 530 performs panoramic shooting of the operating room; it may specifically be a holographic camera or a depth camera, so that the images it captures contain the three-dimensional space coordinate information of the operating room and the medical system obtains three-dimensional stereo data of the entire preset space environment and of the target object. In order to capture all positions in the operating room and reduce occlusion of the camera device 530 by obstacles, the camera device 530 may be arranged on the ceiling.
The visualization device 550 is a display device worn by a doctor or an operator, and may specifically be glasses-type devices based on Mixed Reality (MR) or Augmented Reality (AR), for example, the visualization device 550 may specifically include a Hololens system of Microsoft corporation, a Meta system of Metavision corporation, a Magic Leap system of Magic Leap corporation, and the like. The visualization device 550 may acquire a digitized three-dimensional environment image of the view angle of the doctor through a depth camera provided therein. Further, the visualization device 550 is also used for displaying the fused image. The mixed reality glasses or the augmented reality glasses can also project a fused image of the three-dimensional scanning image to form a holographic image on the lens. Specifically, the visualization device is internally provided with a corresponding algorithm for acquiring the position of the device in a preset space environment, and the virtual image required by a user is manually or automatically projected at a designated position to form a holographic image and projected on a lens, so that the combination of virtual and real is realized, a doctor or an operator can see a real target object through eyes, and can see a three-dimensional scanning image at the position of an interested area to visually observe the inside of the interested area.
The medical system 50 respectively registers the scanned image of the region of interest and the real-time image of the local view angle of the doctor with the global image in the space, so as to convert the contents of the three images into the same three-dimensional space coordinate system, and determine the corresponding position of the medical scanned image of the region of interest in the real-time image, so that the scanned image of the region of interest can be displayed in the real-time image of the view angle of the doctor, the doctor can conveniently observe the scanned image, and the processing efficiency is improved.
In one embodiment, the medical system 50 may further include: an interventional device 570 for performing an interventional procedure on a region of interest of the target object; a magnetic field generating device 590 for generating a magnetic field in the preset space; and a plurality of magnetic positioning devices (not labeled in the figures). At least one magnetic positioning device is arranged in the interventional device 570, and the remaining magnetic positioning devices are arranged in the camera device 530 and/or the visualization device 550; each magnetic positioning device is used to acquire its own magnetic field three-dimensional coordinates within the magnetic field. The processing device 300 of the medical image is further communicatively connected to the plurality of magnetic positioning devices. It is configured to determine the relative relationship between the three-dimensional space coordinates and the magnetic field three-dimensional coordinates within the preset space from the magnetic field three-dimensional coordinates and the three-dimensional space coordinates of the camera device 530 and/or the visualization device 550, to determine the three-dimensional space coordinates of the interventional device 570 in the preset space based on this relative relationship and the magnetic field three-dimensional coordinates of the interventional device 570, so as to determine the corresponding position of the interventional device 570 in the fused image, and to display a virtual image of the interventional device 570 at that corresponding position on the fused image.
Specifically, in the medical system 50, the doctor or operator needs to operate the interventional device 570 to perform the intervention on the target object. The interventional device 570 may specifically include a guidewire, a catheter, a sheath, a puncture needle, or the like. Since the interventional device 570 enters the target object during the procedure, its position cannot be determined directly from acquired images, so it can be localized by magnetic field localization techniques. The magnetic field generating device 590 is configured to generate a magnetic field of a certain frequency; to improve the positioning accuracy of the interventional device 570, the magnetic field generating device 590 may generally be disposed under the carrying platform 520, so that it is closer to the interventional device and fewer obstacles lie between them. One or more magnetic positioning devices may be disposed on the interventional device 570, and a magnetic positioning device may be disposed on the camera device 530 or the visualization device 550. The magnetic positioning device may specifically be a magnetic sensor; within the magnetic field environment of the magnetic field generating device 590, the position of each magnetic sensor, that is, its magnetic field three-dimensional coordinates, can be determined. For example, in a radio frequency ablation procedure, a magnetic field generator is generally placed under the bed to emit a magnetic field of a certain frequency, and one or more magnetic sensors are arranged at the distal end of the interventional device, whose positions can be determined in the magnetic field environment. A virtual heart model can be constructed from the positions of the plurality of magnetic sensors at different times, and the shape and characteristics of the catheter, such as its position and bending shape, can also be determined within the virtual heart model.
Since the three-dimensional space coordinates of the camera device 530 and the visualization device 550 in the operating room have already been determined from the global image, once the magnetic field three-dimensional coordinates of the camera device 530 or the visualization device 550 are acquired, the relative relationship between the magnetic field three-dimensional coordinates and the three-dimensional space coordinates of the operating room can be calculated, and the three-dimensional space coordinates of the interventional device 570 in the operating room can then be determined from this relative relationship and the magnetic field three-dimensional coordinates of the interventional device 570. It can be understood that, in addition to the camera device 530 and the visualization device 550, magnetic positioning devices may be disposed on other objects within the magnetic field coverage of the preset space, or several magnetic positioning devices may be disposed in the preset space simultaneously, to achieve more accurate calibration and positioning and to improve the accuracy of the conversion between the magnetic field three-dimensional coordinates and the three-dimensional space coordinates of the interventional device 570.
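The relative relationship between the magnetic field coordinates and the operating-room coordinates described above can be represented, as one illustrative and non-limiting possibility, as a 4x4 homogeneous transform. Once it is known (for example, from a sensor mounted on the camera device whose room coordinates are already determined), every magnetic reading of the interventional device converts to room coordinates with a single matrix multiply. All matrices and point values below are hypothetical placeholders:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply_transform(T, p):
    """Map a 3D point p through the homogeneous transform T."""
    return (T @ np.append(p, 1.0))[:3]

# Hypothetical relative relationship: the magnetic frame is rotated 90 deg
# about z and shifted relative to the room (operating-room) frame.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
t = np.array([1.0, 2.0, 0.5])
T_room_from_mag = make_transform(R, t)   # the "relative relationship"

# A magnetic sensor reading from the interventional device, in magnetic coordinates:
p_mag = np.array([0.3, 0.0, 0.1])
p_room = apply_transform(T_room_from_mag, p_mag)  # device position in room coordinates

# The relationship can also be inverted to go from room to magnetic coordinates:
T_mag_from_room = np.linalg.inv(T_room_from_mag)
```

Calibrating with several sensors at known room positions, rather than one, would over-determine this transform and allow a least-squares fit, which is consistent with the more accurate calibration mentioned above.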
After the three-dimensional space coordinates of the interventional device 570 are obtained, a virtual image of the interventional device 570 can be displayed at the corresponding position of the fused image on the lens of the visualization device 550. The virtual image of the interventional device 570 may be, for example, a photograph or a virtual icon of the interventional device 570. In this way, through the visualization device 550 the doctor or operator can simultaneously see the real target object, the three-dimensional scan image of the region of interest projected onto the real target object, and the virtual image of the interventional device 570, so the doctor can know in real time the position of the interventional device 570 within the target object's body and its relative position with respect to the region of interest, which facilitates the interventional treatment. Meanwhile, the moving path and direction of the interventional device 570 can also be displayed, so that the doctor can monitor its moving process and trend, preventing damage to the target object during the intervention and further improving safety.
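Determining where on the lens the virtual image should appear can be sketched, under the assumption of a pinhole camera model for the viewer (the intrinsic matrix `K` and the pose below are hypothetical values, not the patent's prescribed method), as projecting the device's room-frame 3D coordinates into the wearer's 2D view:

```python
import numpy as np

def project_to_view(K, T_view_from_room, p_room):
    """Project a room-frame 3D point into the wearer's 2D view using a
    pinhole camera model. K is the 3x3 intrinsic matrix; T_view_from_room
    is the 4x4 transform expressing room coordinates in the viewer frame."""
    p_view = (T_view_from_room @ np.append(p_room, 1.0))[:3]
    x, y, z = p_view
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return (u, v), z > 0.0   # z > 0: the point lies in front of the viewer

# Hypothetical intrinsics and pose (identity pose: viewer frame == room frame)
K = np.array([[800.,   0., 640.],
              [  0., 800., 360.],
              [  0.,   0.,   1.]])
T = np.eye(4)

# Device tip 2 m straight ahead of the viewer, slightly right of and below center
(u, v), visible = project_to_view(K, T, np.array([0.1, 0.05, 2.0]))
```

Repeating the projection for successive device positions would trace the moving path mentioned above; when the projected point falls outside the image bounds or behind the viewer, the virtual image is simply not drawn.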
Fig. 10 is a schematic structural diagram of a medical system in another embodiment. As shown in Fig. 10, in one embodiment, the medical system 60 may include a scanning device, a carrying platform 620, a camera device 630, a visualization device 650, an interventional device 670, a magnetic field generating device 690, a magnetic positioning device, and a medical image processing device 400, each of which may be the same as the corresponding structure in the above embodiments. The medical system in this embodiment may further include a sliding rail 640 arranged in the preset space, with the camera device 630 movably mounted on the sliding rail 640.
Specifically, in the medical system 60, a magnetic positioning device is disposed on the camera device 630. To ensure that the magnetic sensor on the camera device 630 remains within the magnetic field of the magnetic field generating device 690 while the device can still completely acquire a panoramic image of the preset space, a sliding rail 640 may be disposed near the carrying platform 620 and the camera device 630 mounted on it. The position of the camera device 630 can then be moved flexibly, so that it stays within the magnetic field and can capture the panoramic image from a suitable position, which is more flexible and convenient. The processing device 400 of medical images in the medical system 60 may be an image workstation comprising a display, on which the fused image, a virtual image of the interventional device 670, or other medical information may also be shown for reference by the doctor or operator.
In this embodiment, the environment image of the preset space may be acquired by the visualization device 650, and a specific reference object in the localization region is identified, for example, the head of the target object or a separately placed localization mark as in the above embodiments. The three-dimensional scan image and the image acquired by the visualization device 650 are then registered according to the reference object to determine the three-dimensional space coordinate information of the three-dimensional scan image. In addition, the relative position relationship between the camera device 630 and the interventional device 670 is determined from the magnetic field three-dimensional coordinates, and the relative position information of the three-dimensional space coordinates is corrected according to the magnetic field positioning information, so as to achieve more accurate registration and positioning.
In one embodiment, a medical device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the program when executed by the processor performing the steps of: acquiring a three-dimensional scan image of a target object (including scanned images of a localization area and a region of interest), a global image of the preset space where the target object is located, and a real-time image of a local view angle in the preset space, wherein the global image includes three-dimensional space coordinate information of objects in the preset space; registering the three-dimensional scan image with the global image using the localization area to determine the three-dimensional space coordinates of the region of interest in the preset space; registering the real-time image with the global image to determine the three-dimensional space coordinates of the real-time image in the preset space; determining the corresponding position of the region of interest in the real-time image based on the three-dimensional space coordinates of the region of interest and of the real-time image in the preset space; and displaying the scanned image of the region of interest at the corresponding position to obtain a fused image.
Optionally, the processor may further perform the following steps when executing the program: determining the relative relationship between the three-dimensional space coordinates and the magnetic field three-dimensional coordinates in the preset space according to the magnetic field three-dimensional coordinates and the three-dimensional space coordinates of the preset object; and determining, based on this relative relationship and the magnetic field three-dimensional coordinates of the interventional device, the three-dimensional space coordinates of the interventional device in the preset space, so as to determine the corresponding position of the interventional device in the fused image and display a virtual image of the interventional device at that corresponding position on the fused image.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, causes the processor to perform the steps of: acquiring a three-dimensional scan image of a target object (including scanned images of a localization area and a region of interest), a global image of the preset space where the target object is located, and a real-time image of a local view angle in the preset space, wherein the global image includes three-dimensional space coordinate information of objects in the preset space; registering the three-dimensional scan image with the global image using the localization area to determine the three-dimensional space coordinates of the region of interest in the preset space; registering the real-time image with the global image to determine the three-dimensional space coordinates of the real-time image in the preset space; determining the corresponding position of the region of interest in the real-time image based on the three-dimensional space coordinates of the region of interest and of the real-time image in the preset space; and displaying the scanned image of the region of interest at the corresponding position to obtain a fused image.
Optionally, the computer program, when executed by the processor, may further cause the processor to perform the steps of: determining the relative relationship between the three-dimensional space coordinates and the magnetic field three-dimensional coordinates in the preset space according to the magnetic field three-dimensional coordinates and the three-dimensional space coordinates of the preset object; and determining, based on this relative relationship and the magnetic field three-dimensional coordinates of the interventional device, the three-dimensional space coordinates of the interventional device in the preset space, so as to determine the corresponding position of the interventional device in the fused image and display a virtual image of the interventional device at that corresponding position on the fused image.
For the specific limitations of the computer-readable storage medium and the computer device, reference may be made to the specific limitations of the method above, which are not repeated here.
It should be noted that, as one of ordinary skill in the art can appreciate, all or part of the processes of the above methods may be implemented by instructing related hardware through a computer program, and the program may be stored in a computer-readable storage medium; the above described programs, when executed, may comprise the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM) or a Random Access Memory (RAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (15)
1. A method of processing a medical image, comprising:
step 1: receiving a three-dimensional scanning image of a target object, a global image of a preset space where the target object is located and a real-time image of a local visual angle in the preset space; the three-dimensional scanning image of the target object comprises a scanning image of an interested area, the global image comprises three-dimensional space coordinate information of an object in a preset space, the three-dimensional scanning image of the target object also comprises a scanning image of a positioning area, the three-dimensional space coordinate information of the object in the preset space is obtained by directly shooting an image, and the preset space comprises a ward or an operating room;
step 2: registering the scanned image of the region of interest with the global image using the localization area to determine three-dimensional spatial coordinates of the region of interest in the preset space; using the same object in the real-time image and the global image as a reference, and registering the real-time image and the global image to determine a three-dimensional space coordinate of the real-time image in the preset space; selectively presenting the scanned image of the region of interest in the real-time image in real time according to the change of the local visual angle through image registration among the three-dimensional scanned image, the global image and the real-time image to obtain a fused image; wherein when part or all of the region of interest exists within the local view angle, a scanned image of part or all of the region of interest is presented in the real-time image; when the region of interest is not present within the local view angle, the scanned image of the region of interest is not presented in the real-time image.
2. The method according to claim 1, wherein the step 2 specifically comprises:
determining the corresponding position of the region of interest in the real-time image based on the three-dimensional space coordinates of the region of interest and the real-time image in the preset space;
displaying the scanned image of the region of interest at the corresponding position in the real-time image to obtain the fused image.
3. The method of claim 2, wherein the step of registering the three-dimensional scan image with the global image using the localization zone comprises:
acquiring a first reference identifier of the positioning area in the three-dimensional scanning image and a second reference identifier of the positioning area in the global image;
and registering the three-dimensional scanning image and the global image according to the position of the first reference mark in the three-dimensional scanning image and the position of the second reference mark in the global image.
4. The method according to claim 3, wherein the localization area is a head area of the target object, and the step of acquiring a first reference identifier of the localization area in the three-dimensional scan image and a second reference identifier of the localization area in the global image comprises:
acquiring a skull of the target object in the three-dimensional scanning image as the first reference mark for identification;
identifying the facial features of the target object in the global image as the second reference mark through a face recognition algorithm;
wherein the facial features include at least one of eyes, mouth, nose, ears, and eyebrows.
5. A medical image processing apparatus, comprising:
the image receiving module is used for receiving a three-dimensional scanning image of a target object, a global image of a preset space where the target object is located and a real-time image of a local visual angle in the preset space, wherein the global image comprises three-dimensional space coordinate information of an object in the preset space, the three-dimensional scanning image of the target object comprises a scanning image of an interested area, the three-dimensional scanning image of the target object also comprises a scanning image of a positioning area, the three-dimensional space coordinate information of the object in the preset space is acquired by directly shooting an image, and the preset space comprises a ward or an operating room; and
an image processing module comprising a first registration module and a second registration module, wherein the first registration module is used for registering the three-dimensional scan image with the global image by using the positioning region to determine three-dimensional space coordinates of the region of interest in the preset space; the second registration module is used for registering the real-time image with the global image so as to determine the three-dimensional space coordinates of the real-time image in the preset space; the image processing module is used for selectively presenting the scanned image of the region of interest in the real-time image in real time according to the change of the local visual angle through image registration among the three-dimensional scanned image, the global image and the real-time image so as to obtain a fused image; wherein when part or all of the region of interest exists within the local view angle, a scanned image of part or all of the region of interest is presented in the real-time image; when the region of interest is not present within the local view angle, the scanned image of the region of interest is not presented in the real-time image.
6. A medical image processing apparatus according to claim 5, wherein the image processing module comprises:
the position determining module is used for determining the corresponding position of the region of interest in the real-time image based on the three-dimensional space coordinates of the region of interest and the real-time image in the preset space;
and the image fusion module is used for displaying the scanned image of the region of interest at the corresponding position so as to obtain the fused image.
7. The medical image processing apparatus according to claim 5, wherein the medical image processing apparatus is applied in a medical system, a magnetic field is disposed in the preset space, the medical system includes an interventional device and a magnetic positioning device, the interventional device is used for performing interventional treatment on a region of interest of the target object, the magnetic positioning device is disposed in the interventional device and a preset object, the magnetic positioning device is used for acquiring three-dimensional coordinates of the magnetic field of the interventional device and the preset object in the magnetic field, and the medical image processing apparatus is connected to the magnetic positioning device in communication;
wherein the processing device of the medical image is further configured to determine a relative relationship between the three-dimensional space coordinate and the magnetic field three-dimensional coordinate in the preset space according to the magnetic field three-dimensional coordinate and the three-dimensional space coordinate of the preset object, and determine the three-dimensional space coordinate of the interventional device in the preset space based on the relative relationship and the magnetic field three-dimensional coordinate of the interventional device, so as to determine a corresponding position of the interventional device in the fusion image, and display a virtual image of the interventional device at the corresponding position on the fusion image.
8. A medical system, characterized in that the medical system comprises:
interventional means for interventional treatment of a region of interest of a target object;
the magnetic field generating device is used for generating a magnetic field in the preset space;
a plurality of magnetic positioning devices, wherein at least one magnetic positioning device is arranged in the interventional device, the remaining magnetic positioning devices are arranged in a preset object, and the magnetic positioning devices are used for acquiring magnetic field three-dimensional coordinates of the interventional device and the preset object in the magnetic field; and
the medical image processing apparatus according to claim 5 or 6, wherein the medical image processing apparatus is further communicatively connected to the plurality of magnetic positioning apparatuses, and is configured to determine a relative relationship between three-dimensional space coordinates and magnetic field three-dimensional coordinates within the preset space according to the magnetic field three-dimensional coordinates and the three-dimensional space coordinates of the preset object, and determine three-dimensional space coordinates of the interventional device in the preset space based on the relative relationship and the magnetic field three-dimensional coordinates of the interventional device, so as to determine a corresponding position of the interventional device in the fused image, and display a virtual image of the interventional device at the corresponding position on the fused image.
9. The medical system of claim 8, further comprising:
the camera device is used for acquiring the global image of the preset space; and
the visualization device is used for acquiring the real-time image of the local visual angle in the preset space and displaying the fusion image;
the processing device of the medical image is respectively connected with the camera device and the visualization device in a communication mode so as to receive the global image and the real-time image.
10. The medical system of claim 9, wherein the preset object is the camera device and/or the visualization device.
11. The medical system of claim 9, further comprising:
the sliding rail is arranged in the preset space, and the camera shooting device is movably arranged on the sliding rail.
12. The medical system of claim 9, further comprising:
a scanning device for acquiring the scanning image of the region of interest of the target object, and a processing device of the medical image is in communication connection with the scanning device to receive the three-dimensional scanning image;
wherein the scanning device comprises at least one of a computed tomography device, a magnetic resonance scanning device, and a digital subtraction angiography device; the camera device comprises a depth camera and/or a holographic camera; the visualization device includes augmented reality glasses and/or mixed reality glasses.
13. The medical system of claim 8, wherein the interventional device comprises at least one of a guidewire, a catheter, a sheath, and a puncture needle.
14. A computer device comprising a memory, a processor and a computer program stored on the memory and executable by the processor, characterized in that the steps of the method of any of claims 1 to 4 are implemented when the computer program is executed by the processor.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910932560.3A CN110584782B (en) | 2019-09-29 | 2019-09-29 | Medical image processing method, medical image processing apparatus, medical system, computer, and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110584782A CN110584782A (en) | 2019-12-20 |
| CN110584782B true CN110584782B (en) | 2021-05-14 |
Family
ID=68864572
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910932560.3A Active CN110584782B (en) | 2019-09-29 | 2019-09-29 | Medical image processing method, medical image processing apparatus, medical system, computer, and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110584782B (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111710028B (en) * | 2020-05-27 | 2023-06-30 | 北京东软医疗设备有限公司 | Three-dimensional contrast image generation method and device, storage medium and electronic equipment |
| CN114283177B (en) * | 2020-09-27 | 2025-03-18 | 北京猎户星空科技有限公司 | Image registration method, device, electronic device and readable storage medium |
| CN113069206B (en) * | 2021-03-23 | 2022-08-05 | 江西麦帝施科技有限公司 | Image guiding method and system based on electromagnetic navigation |
| CN113081265B (en) * | 2021-03-24 | 2022-11-15 | 重庆博仕康科技有限公司 | Surgical navigation space registration method and device and surgical navigation system |
| CN113693738A (en) * | 2021-08-27 | 2021-11-26 | 南京长城智慧医疗科技有限公司 | Operation system based on intelligent display |
| CN116269831A (en) * | 2023-03-17 | 2023-06-23 | 上海兰甲医疗科技有限公司 | Holographic image-based surgical assistance system |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6235038B1 (en) * | 1999-10-28 | 2001-05-22 | Medtronic Surgical Navigation Technologies | System for translation of electromagnetic and optical localization systems |
| CN103211655A (en) * | 2013-04-11 | 2013-07-24 | 深圳先进技术研究院 | Navigation system and navigation method of orthopedic operation |
| CN107536643A (en) * | 2017-08-18 | 2018-01-05 | 北京航空航天大学 | A kind of augmented reality operation guiding system of Healing in Anterior Cruciate Ligament Reconstruction |
| CN110101452A (en) * | 2019-05-10 | 2019-08-09 | 山东威高医疗科技有限公司 | A kind of optomagnetic integrated positioning navigation method for surgical operation |
| CN110169822A (en) * | 2018-02-19 | 2019-08-27 | 格罗伯斯医疗有限公司 | Augmented reality navigation system for use with robotic surgical system and method of use thereof |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9681925B2 (en) * | 2004-04-21 | 2017-06-20 | Siemens Medical Solutions Usa, Inc. | Method for augmented reality instrument placement using an image based navigation system |
| CN101073528B (en) * | 2007-06-22 | 2010-10-06 | 北京航空航天大学 | Digital operating bed system with double-plane positioning and double-eyes visual tracting |
| CN102341046B (en) * | 2009-03-24 | 2015-12-16 | 伊顿株式会社 | Surgical robot system and control method using augmented reality technology |
| CN101797182A (en) * | 2010-05-20 | 2010-08-11 | 北京理工大学 | Nasal endoscope minimally invasive operation navigating system based on augmented reality technique |
| WO2013134623A1 (en) * | 2012-03-08 | 2013-09-12 | Neutar, Llc | Patient and procedure customized fixation and targeting devices for stereotactic frames |
2019-09-29: application CN201910932560.3A filed; granted as patent CN110584782B (status: Active)
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110584782B (en) | Medical image processing method, medical image processing apparatus, medical system, computer, and storage medium | |
| JP4822634B2 (en) | A method for obtaining coordinate transformation for guidance of an object | |
| US10952795B2 (en) | System and method for glass state view in real-time three-dimensional (3D) cardiac imaging | |
| JP7051307B2 (en) | Medical image diagnostic equipment | |
| US10163204B2 (en) | Tracking-based 3D model enhancement | |
| CN102077248B (en) | For in the equipment of experimenter's inner position objects and method | |
| KR101458585B1 (en) | Radiopaque Hemisphere Shape Maker for Cardiovascular Diagnosis and Procedure Guiding Image Real Time Registration | |
| JP7049325B2 (en) | Visualization of image objects related to instruments in in-vitro images | |
| US20220323164A1 (en) | Method For Stylus And Hand Gesture Based Image Guided Surgery | |
| IL293233A (en) | Registration of an image with a tracking system | |
| US20080234570A1 (en) | System For Guiding a Medical Instrument in a Patient Body | |
| KR20190005177A (en) | Method and apparatus for image-based searching | |
| JP2001245880A (en) | Method of judging position of medical instrument | |
| US10849583B2 (en) | Medical image diagnostic apparatus and medical image processing apparatus | |
| EP3673854B1 (en) | Correcting medical scans | |
| KR20140052524A (en) | Method, apparatus and system for correcting medical image by patient's pose variation | |
| JPH09173352A (en) | Medical navigation system | |
| JP6878028B2 (en) | Medical image diagnostic system and mixed reality image generator | |
| US20240341860A1 (en) | System and method for illustrating a pose of an object | |
| KR20140120157A (en) | Radiopaque Hemisphere Shape Maker Based Registration Method of Radiopaque 3D Maker for Cardiovascular Diagnosis and Procedure Guiding Image | |
| US20250072969A1 (en) | Systems and methods for integrating intra-operative image data with minimally invasive medical techniques | |
| JP2017217154A (en) | X-ray CT apparatus | |
| WO2020002071A1 (en) | Gestural scan parameter setting | |
| JP2022506030A (en) | Patient carrier positioning | |
| WO2014163317A1 (en) | System for aligning x-ray angiography image and ct angiography image on basis of radiopaque hemispherical three-dimensional marker, and method for conducting cardiovascular operation by aligning ct angiography image and x-ray angiography image |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||