CN102103690A - Method for automatically portioning hair area - Google Patents


Info

Publication number: CN102103690A
Authority: CN
Application number: CN 201110055823
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 孙知信, 邹大海
Assignee (original and current): Nanjing Post and Telecommunication University
Application filed by Nanjing Post and Telecommunication University
Priority to CN 201110055823; publication of CN102103690A

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for automatically segmenting a hair area, relating to the technical field of hair segmentation and belonging to the field of computer vision. The method comprises the following steps: step 1, face detection: a face detection module detects the face position in an input face picture with a trained cascade classifier; step 2, target/background marking: a target-background marking module outlines the region of interest and finds the most likely target and background markers from position and color features; and step 3, image segmentation: guided by the target and background markers, an image segmentation module segments and outputs the hair area. The method applies image processing techniques (face detection, target/background marking, image segmentation and the like) to an input image and outputs the hair area in the image. It can accurately detect the hair area under complicated background conditions, which provides a good basis for research and applications such as identity recognition, gender and age estimation, and image retrieval.

Description

An Automatic Hair Region Segmentation Method

Technical Field

The invention relates to an automatic hair segmentation method and belongs to the field of computer vision.

Background Art

Hair is an important feature of the human body; with it, applications such as identity recognition, age and gender estimation, and image retrieval can be realized. Studies have shown that hair is an important cue for distinguishing similar faces: a change of hairstyle can cause mistaken judgments about a person's identity, so hair features can supply auxiliary information for identity recognition. The application of hair features is therefore of great significance in computer vision.

The shape and color of hair vary with gender, age, and ethnicity, and people can change their hairstyle and hair color at will, which makes the detection, description, analysis, and application of hair features difficult. Over a given period, however, a person's hair color and shape remain fixed, which makes the study of hair features feasible.

Before hair features can be described, analyzed, or matched, an important task is to locate the hair region; only once the hair region is determined can feature extraction, feature matching, and similar operations be performed within it. Current methods for determining the hair region generally build on accurate detection of the face region and then segment the region using features such as hair color, texture, shape, and position.

Yacoob and Davis were the first to study hair region determination and hair feature description. They proposed a skin-color model matching method: a cascade classifier first locates the face and eyes accurately; three rectangles are then placed on the forehead and below the eyes according to their relative positions, and the colors inside them are used to build a skin-color model. Three further small rectangles are placed at the upper, left, and right parts of the face boundary; their relative positions were fixed by trial and error so that they fall within the hair region as far as possible. Pixels in these three rectangles are matched against the skin-color model and skin pixels are removed, and a hair color model is built from the remaining colors. The whole image is then scanned, and each pixel is classified as hair or not by the distance between its RGB value and the model. Because this method judges hair pixels by color alone, it can only segment hair against simple, normalized backgrounds, and its accuracy is low when the background is complex.

Lee et al. proposed building a Gaussian mixture model (GMM) from hair color and position information for hair segmentation. The GMM is built in two parts, offline training and online updating. For an image, the most likely regions are first found from the distribution probabilities of background, face, and hair; points in these regions are compared with the GMM, and points that do not fit the model are removed. A graph cut algorithm then minimizes an energy function to determine some hair and face pixels, the GMM is updated from these pixels, and segmentation is repeated with the updated model until all pixels are assigned. Because the hair position term in the energy function uses absolute position, the face in a test image must sit exactly in the middle of the picture, or many false detections occur. Moreover, as the paper itself notes, hair is easily confused with dark backgrounds. Finally, the pixel-level multi-class graph cut is computationally complex and requires a long running time.

C. Rousset and P. Y. Coulon proposed combining a frequency-domain mask with a color mask to segment hair. The head region is first defined from the face box of Viola and Jones together with anthropometric data. Since hair is textured, the image is scanned with a Gaussian band-pass filter to obtain a frequency map, and hair pixels are separated from background pixels by thresholding it. Within the segmented hair area, a sample window is drawn above the face; the colors in this window are taken as hair color to build a hair color model, which is then used to judge hair pixels. Hair pixels inside the head region serve as target markers, while the background pixels from the texture segmentation, together with pixels that the texture step labels as hair but that lie outside the head region, serve as background markers. The image is then segmented with the automatic segmentation method of Levin et al. This method exploits both color and texture features, but it misses detections under complex backgrounds and lighting conditions.

In summary, existing hair segmentation techniques all have certain limitations, so a more effective method for segmenting the hair region is needed.

Summary of the Invention

The object of the present invention is to provide an automatic hair region segmentation method that applies image processing techniques (face detection, target/background marking, and image segmentation) to an input image and outputs the hair region in that image. The method can detect the hair region accurately under complex background conditions, providing a good basis for research and applications such as identity recognition, gender and age estimation, and image retrieval.

An automatic hair region segmentation method comprises the following steps:

Step 1, face detection: a face detection module locates the face position in an input face picture with a trained cascade classifier;

Step 2, target/background marking: a target-background marking module draws a region of interest around the face position and finds the most likely target and background markers from position and color features;

Step 3, image segmentation: guided by the target and background markers, an image segmentation module segments the hair region and outputs it.

The face detection of step 1 comprises: extracting Haar features; training weak classifiers; selecting optimized weak classifiers iteratively with the AdaBoost algorithm to generate strong classifiers; and performing real-time detection to obtain the face region R_f.

The AdaBoost procedure is as follows: compute sample integral images for the collected face and non-face sample sets to obtain rectangle-feature prototypes; compute the rectangle feature values to obtain the feature set; determine thresholds and generate a weak classifier from each rectangle feature to obtain the weak-classifier set; select the optimal weak classifiers and run AdaBoost to train strong classifiers, obtaining the strong-classifier set; then check whether non-face images remain: if so, add non-face samples to the non-face sample set and repeat the steps above; if not, the cascade classifier is obtained directly.

The target-background marking of step 2 comprises the following steps:

(1) Determining the region of interest: from the face region position and prior probability knowledge, the region of interest is determined on the basis of the face region R_f by the following formulas:

width of the "hair and face" region = 3.6 × face width,

height of the "hair and face" region = 3.7 × face height;
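As an illustration, the region-of-interest box can be derived from a detected face box as follows. The patent fixes only the size ratios, so the horizontal centering, the upward bias of the box, and the helper name `head_roi` are assumptions of this sketch:

```python
def head_roi(face_x, face_y, face_w, face_h, img_w, img_h):
    """Compute the "hair and face" region of interest from a face box.

    Size follows the patent's ratios (width = 3.6 * face width,
    height = 3.7 * face height); placement (centered horizontally,
    biased upward to cover the hair) is an assumption of this sketch.
    Returns (left, top, right, bottom), clipped to the image bounds.
    """
    roi_w = 3.6 * face_w
    roi_h = 3.7 * face_h
    cx = face_x + face_w / 2.0
    left = max(0.0, cx - roi_w / 2.0)
    # place three quarters of the extra height above the face (assumed)
    top = max(0.0, face_y - (roi_h - face_h) * 0.75)
    right = min(float(img_w), left + roi_w)
    bottom = min(float(img_h), top + roi_h)
    return left, top, right, bottom
```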

(2) Face color feature extraction: a skin-color probability distribution is computed over the face region R_f; the pixels whose probability exceeds 1% are kept, and their mean μ and covariance matrix C are computed to obtain a Gaussian skin-color model;

(3) Target pixel marking: on the basis of the face region R_f, a sample window R_h that is certain to include hair is drawn, where:

L_x^{R_h} = L_x^{R_f},    R_x^{R_h} = R_x^{R_f}

U_y^{R_h} = U_y^{R_f} − (1/2)(R_x^{R_f} − L_x^{R_f})

D_y^{R_h} = D_y^{R_f} + (1/4)(R_x^{R_f} − L_x^{R_f})

where L_x^{R_h}, R_x^{R_h}, U_y^{R_h}, D_y^{R_h} denote the left, right, upper, and lower boundaries of the hair sample window R_h, and L_x^{R_f}, R_x^{R_f}, U_y^{R_f}, D_y^{R_f} likewise denote the left, right, upper, and lower boundaries of the face region R_f;

Within this window, skin-color pixels are removed according to the face skin-color model, hair color features are extracted from the remaining pixels, and the hair pixels are marked as target markers;
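A minimal sketch of the sample-window computation, in image coordinates with y growing downward (`hair_sample_window` is a hypothetical helper name):

```python
def hair_sample_window(Lx_f, Rx_f, Uy_f, Dy_f):
    """Hair sample window R_h from the face box R_f, per the patent's
    boundary formulas: same left/right bounds as the face, upper bound
    raised by half the face width, lower bound lowered by a quarter of
    the face width (y grows downward)."""
    face_w = Rx_f - Lx_f
    Lx_h, Rx_h = Lx_f, Rx_f        # L_x^{R_h} = L_x^{R_f}, R_x^{R_h} = R_x^{R_f}
    Uy_h = Uy_f - 0.5 * face_w     # U_y^{R_h} = U_y^{R_f} - (1/2) * face width
    Dy_h = Dy_f + 0.25 * face_w    # D_y^{R_h} = D_y^{R_f} + (1/4) * face width
    return Lx_h, Rx_h, Uy_h, Dy_h
```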

(4) Background pixel marking: one area at the center of the face region R_f and one at each of the upper-left and upper-right corners of the region of interest are taken as background areas and marked as background.

The image segmentation of step 3 comprises:

initial image segmentation: the image inside the region of interest is pre-segmented into small regions with the mean shift algorithm;

target/background region labeling: a region containing a background marker is labeled a background region, a region containing a target marker is labeled a target region, and the remaining regions are labeled regions to be merged;

MSRM segmentation: the background regions, target regions, and regions to be merged are merged iteratively under the maximal-similarity principle, and the final segmentation is obtained after several iterations.

The MSRM segmentation algorithm of the present invention comprises the following steps:

A. Regions in the background set are iteratively merged with their adjacent regions until no new region is formed;

B. Regions in the target set are iteratively merged with their adjacent regions until no new region is formed;

C. Regions in the to-be-merged set are iteratively merged with their adjacent regions until no new region is formed;

Steps A, B, and C are repeated until step C performs no merge.

The implementation of the above technique is characterized by: (1) the face position is used to determine the region of interest, and target pixels are found and marked from color and position information; (2) when marking target and background pixels, it is ensured that the marked pixels really are target and background; (3) all operations are performed in the YC_rC_b color space, and similarity is computed from the C_r and C_b components only, which reduces system complexity.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the four Haar features used in the automatic hair region segmentation method of the present invention;

Fig. 2 is a flow diagram of the automatic hair region segmentation system of the present invention;

Fig. 3 is a flow diagram of the AdaBoost training module of the system;

Fig. 4 is a flow diagram of the target-background marking module of the system;

Fig. 5 is a flow diagram of the image segmentation module of the system.

Detailed Description

As shown in Fig. 2, an automatic hair region segmentation method comprises the following steps:

Step 1: face detection. The face position is located in an input image with a trained cascade classifier.

Step 2: target/background marking. The face position is used to draw a region of interest, and the most likely target and background markers are found from features such as position and color.

Step 3: image segmentation. Guided by the markers made in step 2, the hair region is segmented with image segmentation techniques.

The face detection of step 1 comprises: classifier training, in which the same (weak) classifier is trained on different training sets and the classifiers obtained on these sets are combined into a final strong classifier; and a detection stage, in which the trained classifier detects the face and yields the face region, denoted R_f.

The classifier training process comprises the following steps:

(1) Extracting Haar features

Commonly used Haar features come in three types and four forms, as shown in Figs. 1A-1D: 2-rectangle, 3-rectangle, and 4-rectangle features. More numerous and more complex features can of course be designed on top of these four. Since there are usually nearly ten thousand training samples and the number of rectangle features is very large, summing all pixels inside a rectangle for every feature evaluation would drastically slow training and detection. A new image representation, the integral image, is therefore introduced: the value of a rectangle feature depends only on the integral image at the rectangle's corner points, so the cost of evaluating a feature is constant regardless of the rectangle's scale. One pass over the image then suffices to evaluate the features of all sub-windows.
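The integral-image trick can be sketched in a few lines of Python; `integral_image` and `rect_sum` are illustrative helpers, not the patent's implementation:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle [x0..x1] x [y0..y1]
    from four table lookups, independent of the rectangle's size."""
    s = ii[y1][x1]
    if x0 > 0:
        s -= ii[y1][x0 - 1]
    if y0 > 0:
        s -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        s += ii[y0 - 1][x0 - 1]
    return s
```

A Haar feature value is then a signed combination of a few `rect_sum` calls.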

(2) Generating weak classifiers

Each Haar feature corresponds to one weak classifier, and each weak classifier is defined by the parameters of its corresponding Haar feature; the feature parameters are obtained by collecting statistics over the training samples at the feature's position. The weak classifier trained inside AdaBoost can be any classifier, including decision trees, neural networks, and hidden Markov models; if the weak classifier is a linear neural network, AdaBoost constructs one node of a multilayer perceptron at each round.

(3) Selecting optimized weak classifiers with the AdaBoost algorithm

Not every Haar feature describes some property of the gray-level distribution of faces well. Applied to face detection, the basic idea of AdaBoost is to train the same (weak) classifier on different training sets and then combine the classifiers obtained on these sets into a final strong classifier. The different training sets are realized by adjusting the weight of each sample. Initially all samples have the same weight; the weights of samples misclassified by h1 are increased, and those of correctly classified samples are decreased, so that the misclassified samples stand out, giving a new sample distribution U2. Under the new distribution the weak classifier is trained again, yielding h2. Continuing in this way, T weak classifiers are obtained after T rounds, and these T weak classifiers are combined (boosted) with appropriate weights into the desired strong classifier.
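The boosting loop described above can be sketched as follows. This is generic discrete AdaBoost over an arbitrary pool of weak classifiers (plain functions returning +1/-1), not the patent's exact cascade-training code:

```python
import math

def adaboost(stumps, X, y, T):
    """Run T rounds of discrete AdaBoost.

    stumps: candidate weak classifiers, each h(x) -> +1/-1;
    X: samples; y: their +1/-1 labels.
    Returns the chosen weak classifiers and their vote weights (alphas).
    """
    n = len(X)
    w = [1.0 / n] * n                      # uniform initial sample weights
    chosen, alphas = [], []
    for _ in range(T):
        # pick the weak classifier with the lowest weighted error
        errs = [sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
                for h in stumps]
        err, h = min(zip(errs, stumps), key=lambda p: p[0])
        err = min(max(err, 1e-10), 1 - 1e-10)
        a = 0.5 * math.log((1 - err) / err)
        chosen.append(h)
        alphas.append(a)
        # re-weight: boost the misclassified samples, damp the correct ones
        w = [wi * math.exp(-a * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return chosen, alphas

def strong_classify(chosen, alphas, x):
    """Weighted vote of the chosen weak classifiers."""
    score = sum(a * h(x) for h, a in zip(chosen, alphas))
    return 1 if score >= 0 else -1
```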

As shown in Fig. 4, the target-background marking of step 2 comprises the following steps:

(1) Determine the region of interest from the face region.

Possible hair areas are formed from many normalized hair training images, and the head-region formula is defined from these hair areas. From the clustering of the hair, the head region containing both hair and face is defined as follows:

width of the "hair and face" region = 3.6 × face width

height of the "hair and face" region = 3.7 × face height

This yields a region containing all the hair and the face; all subsequent operations are performed inside this region.

(2) Extracting face color features

The present invention uses the YC_rC_b space as the mapping space for the color distribution statistics: it is little affected by brightness changes, its two chroma dimensions are independently distributed, and it bounds the skin-color distribution well. It shares the advantage of HSI-type formats of separating out the luminance component, and it is obtained from the RGB format by a linear transform.

The face region R_f obtained in step 1 contains colors other than facial skin, such as some hair and background colors, but skin color has the highest proportion. The C_r and C_b components are each divided into 64 equal parts, and the probability distribution of the C_rC_b components of all pixels in R_f is computed. The pixels whose probability exceeds 1% are selected, and, according to the following formulas:

χ = (C_r, C_b)^T

μ = E(χ)

C = E[(χ − μ)(χ − μ)^T]

the mean μ and covariance matrix C of the vector χ = (C_r, C_b)^T of facial skin chroma are obtained, and a two-dimensional Gaussian model is fitted.
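A sketch of the model fit and of the similarity test used later, assuming NumPy and plain lists of chroma values (`fit_skin_model` and `skin_similarity` are hypothetical helper names):

```python
import numpy as np

def fit_skin_model(cr_vals, cb_vals):
    """Fit the 2-D Gaussian skin-color model: chi = (Cr, Cb)^T,
    mu = E(chi), C = E[(chi - mu)(chi - mu)^T], from the chroma
    values of the retained face pixels."""
    chi = np.stack([np.asarray(cr_vals, float), np.asarray(cb_vals, float)])
    mu = chi.mean(axis=1)
    C = np.cov(chi, bias=True)  # population covariance, matching E[...]
    return mu, C

def skin_similarity(cr, cb, mu, C):
    """P(Cr, Cb) = exp(-(1/2) (chi - mu)^T C^{-1} (chi - mu)):
    likelihood of a pixel's chroma under the skin model."""
    d = np.array([cr, cb], float) - mu
    return float(np.exp(-0.5 * d @ np.linalg.inv(C) @ d))
```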

(3) Marking target pixels

On the upper part of the face region R_f, a hair sample window R_h is drawn, where:

L_x^{R_h} = L_x^{R_f},    R_x^{R_h} = R_x^{R_f}

U_y^{R_h} = U_y^{R_f} − (1/2)(R_x^{R_f} − L_x^{R_f})

D_y^{R_h} = D_y^{R_f} + (1/4)(R_x^{R_f} − L_x^{R_f})

where L_x^{R_h}, R_x^{R_h}, U_y^{R_h}, D_y^{R_h} denote the left, right, upper, and lower boundaries of the hair sample window R_h, and L_x^{R_f}, R_x^{R_f}, U_y^{R_f}, D_y^{R_f} likewise denote the left, right, upper, and lower boundaries of the face region R_f.

The similarity between the pixels in the hair sample window R_h and the Gaussian model from the face color feature extraction is computed by:

P(C_r, C_b) = exp{−(1/2)(χ − μ)^T C^{−1} (χ − μ)}

χ = (C_r, C_b)^T

where μ is the mean and C the covariance matrix obtained during face color feature extraction.

Pixels in the hair sample window R_h whose color is close to the facial skin color are removed; the C_rC_b components are divided into 32 groups, and the probability distribution of the C_rC_b components of the remaining pixels in R_h is computed. The range with the highest probability is taken as the hair-color range, and the pixels of R_h that fall in this range are marked as target markers, guaranteeing that everything marked as target is hair.
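The dominant-range selection can be sketched as a 2-D histogram argmax. The patent does not give exact binning code, so the joint (Cr, Cb) cell layout and the helper name are assumptions:

```python
def dominant_chroma_range(cr_vals, cb_vals, bins=32):
    """Quantize Cr and Cb (assumed 0..255) into `bins` groups each and
    return the (cr_bin, cb_bin) cell containing the most pixels: the
    'hair color range' from which target pixels are then marked."""
    step = 256 // bins
    counts = {}
    for cr, cb in zip(cr_vals, cb_vals):
        key = (cr // step, cb // step)
        counts[key] = counts.get(key, 0) + 1
    return max(counts, key=counts.get)
```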

(4) Marking background pixels

A region R_b2 whose width and height are each half those of R_f is drawn at the center of the face region R_f, and all pixels in R_b2 are marked as background. Likewise, an isosceles right-triangle region is drawn at each of the upper-left and upper-right corners of the region of interest, with its vertex at the corresponding top corner of the region of interest, and the pixels inside these regions are marked as background. It is guaranteed that the regions marked as background contain no hair pixels.

The segmentation of step 3 comprises the following steps, as shown in Fig. 5: step one, initial segmentation with mean shift; step two, labeling the background regions M_B and target regions M_O from the markers; step three, region merging with the improved MSRM algorithm.

Mean shift is used as the initial segmentation of step one mainly because, compared with other methods, it produces less over-segmentation and preserves boundary information better. With it, a good segmentation can be obtained from very little marker information, giving better robustness.
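To illustrate the mode-seeking idea behind mean shift, here is a one-dimensional toy version with a flat kernel; real mean-shift image segmentation runs the same update jointly over spatial and color coordinates and groups pixels that converge to the same mode:

```python
def mean_shift_mode(points, start, bandwidth, iters=100):
    """1-D mean shift: repeatedly move the estimate to the mean of the
    points lying within `bandwidth`, until it converges on a local mode."""
    x = float(start)
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-6:
            break
        x = new_x
    return x
```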

In step two, the background regions M_B and target regions M_O are determined by whether the regions produced in step one contain markers: a region containing a background marker belongs to the background set M_B, and a region containing a target marker belongs to the target set M_O. Regions containing neither target nor background markers are regions to be merged, denoted N. Most of the image consists of such regions, and the task is to decide for each region in N whether it is target or background.

In step three, the MSRM segmentation merges the regions to be merged N with the background set M_B and target set M_O according to the maximal-similarity principle, in three stages:

Stage 1: For each region B ∈ M_B, the set S_B = {A_i} of all its adjacent regions is found. For each A_i ∈ N, the set S_{A_i} = {S_j^{A_i}}, j = 1, 2, ..., k, of its adjacent regions is found; obviously B ∈ S_{A_i}. The similarity between A_i and every element of S_{A_i} is computed, and the merge condition is:

ρ(A_i, B) = max_{j=1,2,...,k} ρ(A_i, S_j^{A_i})

where ρ(A_i, B) denotes the similarity between regions A_i and B, and ρ(A_i, S_j^{A_i}) is defined likewise.

If the condition holds, A_i is merged into B and the merged region is still labeled B; otherwise A_i and B remain unchanged.

This process is iterative: after each pass M_B and N are updated and the iteration continues, stopping when no new region is merged.

Stage 2: in the same way, each region O ∈ M_O is iteratively merged until no new region is merged.

Stage 3: after the two stages above, some unlabeled regions remain, mainly because the similarity among these unlabeled regions is too high for any region in M_B or M_O to absorb them. These regions are therefore merged among themselves, by the same method as in stage 1.

After stage 3, the process returns to stage 1 and continues, ending when stage 3 performs no merge.
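A compact sketch of this three-stage merging schedule over a region adjacency graph. The data layout (dicts of pixel sets and neighbor sets) and the helper names are assumptions of this sketch; `sim` would be the Bhattacharyya similarity described below:

```python
def msrm(regions, adjacency, labels, sim):
    """regions: id -> set of pixels; adjacency: id -> set of neighbor ids;
    labels: id -> 'B' (background), 'O' (target) or None (unlabeled);
    sim(a, b): similarity of two regions.  An unlabeled region A_i is
    merged into a neighbor B only when B is A_i's most similar neighbor."""

    def merge(a, b):
        # fold region a into b: union pixels, rewire the adjacency graph
        regions[b] |= regions.pop(a)
        adjacency[b] |= adjacency.pop(a)
        adjacency[b] -= {a, b}
        for nbrs in adjacency.values():
            if a in nbrs:
                nbrs.discard(a)
                nbrs.add(b)
        del labels[a]

    def stage(target_labels):
        merged_any, changed = False, True
        while changed:                      # iterate until no new merge
            changed = False
            for b in [r for r in list(regions) if labels.get(r) in target_labels]:
                if b not in regions:
                    continue
                for a in list(adjacency.get(b, ())):
                    if a not in regions or labels.get(a, 'x') is not None:
                        continue            # only unlabeled regions are absorbed
                    if sim(a, b) >= max(sim(a, n) for n in adjacency[a]):
                        merge(a, b)
                        merged_any = changed = True
        return merged_any

    while True:
        stage({'B'})                        # stage 1: grow background regions
        stage({'O'})                        # stage 2: grow target regions
        if not stage({None}):               # stage 3: merge leftover regions
            break
    return labels
```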

The similarity of two regions is expressed by the Bhattacharyya coefficient, computed as follows:

ρ(R, Q) = Σ_{u=1}^{256} √(Hist_R^u · Hist_Q^u)

where Hist_R and Hist_Q are the normalized histograms of regions R and Q respectively, and the superscript u denotes the u-th histogram element. Hist is built by quantizing the C_r and C_b color channels into 16 levels each, producing a 16 × 16 = 256-bin color feature space; the color histogram of a region's pixels over these 256 bins gives the probability of falling into each bin.

The Bhattacharyya coefficient is in fact the cosine of the angle between the vectors (√Hist_R^1, ..., √Hist_R^256) and (√Hist_Q^1, ..., √Hist_Q^256): the larger the Bhattacharyya coefficient between regions R and Q, the higher the similarity between them. If two regions have similar content, their histograms are close, and the Bhattacharyya coefficient between them is large.
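The histogram construction and similarity measure can be sketched with NumPy. This assumes 8-bit Cr/Cb channel arrays as input; the function names are placeholders.

```python
import numpy as np

def crcb_hist(cr, cb):
    """256-bin colour histogram: Cr and Cb (0-255) quantised to 16 levels each,
    giving 16 x 16 = 256 colour features; normalised so the entries sum to 1."""
    feature = (cr // 16) * 16 + (cb // 16)           # feature index in [0, 255]
    hist = np.bincount(feature.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def bhattacharyya(hist_r, hist_q):
    """rho(R, Q) = sum_u sqrt(Hist_R^u * Hist_Q^u): the cosine of the angle
    between the element-wise square roots of the two normalised histograms."""
    return float(np.sum(np.sqrt(hist_r * hist_q)))
```

Two regions with identical colour content give ρ = 1, and regions whose colours fall in disjoint bins give ρ = 0, matching the cosine interpretation above.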

According to another aspect of the present invention, a system for automatic hair region segmentation is provided, the system comprising the following parts:

a face detection module, used to detect the face region in the input picture by means of Haar features and an AdaBoost classifier, and comprising an AdaBoost classifier training module and a real-time detection module;

a target/background marking module, used to delineate the region of interest according to the face information, and to mark target and background pixels within the region of interest according to position and color information;

an image segmentation module, which segments the image with the improved MSRM algorithm according to the marked labels.

According to the present invention, the target/background marking module comprises: a region-of-interest determination module, which determines the region of interest on the basis of the face region R_f according to the face position and prior probability knowledge; a face color feature extraction module, which builds a skin-color probability distribution over the face region R_f, selects the pixels whose probability exceeds 1%, and computes the mean μ and covariance matrix C to obtain a Gaussian skin-color model; a target pixel marking module, which delineates, on the basis of R_f, a sample window region R_h that is certain to contain hair, removes skin-color pixels within it according to the skin-color model, extracts hair color features from the remaining pixels, and marks the hair pixels; and a background pixel marking module, which takes one block at the center of the face region R_f and one at each of the upper-left and upper-right corners of the region of interest as background regions and marks them as background.
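The face-color extraction step (mean μ and covariance matrix C of the skin samples, yielding a Gaussian model) can be sketched as follows. The function names and the use of (Cr, Cb) sample vectors are assumptions; the patent's 1% probability threshold would then be applied to the resulting likelihoods.

```python
import numpy as np

def fit_skin_gaussian(samples):
    """Fit the Gaussian skin-colour model from (Cr, Cb) samples taken inside
    the detected face region R_f: returns mean mu and covariance matrix C."""
    mu = samples.mean(axis=0)
    C = np.cov(samples, rowvar=False)
    return mu, C

def skin_likelihood(x, mu, C):
    """Unnormalised Gaussian likelihood exp(-0.5 * (x - mu)^T C^{-1} (x - mu)),
    used to decide which pixels in the hair sample window R_h are skin."""
    d = np.asarray(x, dtype=float) - mu
    m = d @ np.linalg.inv(C) @ d           # squared Mahalanobis distance
    return float(np.exp(-0.5 * m))

rng = np.random.default_rng(0)
skin = rng.normal([150.0, 110.0], 3.0, size=(500, 2))   # synthetic skin samples
mu, C = fit_skin_gaussian(skin)
```

A pixel near μ scores close to 1, while a clearly non-skin colour scores near 0 and survives the skin-removal step as a hair candidate.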

According to the present invention, the image segmentation module comprises: an initial segmentation module, which performs an initial segmentation of the image inside the region of interest with the mean shift algorithm; a target/background region labeling module, which uses the target and background marks to label the segmented regions as background regions M_B, target regions M_O, and regions to be merged N; and an MSRM segmentation module, consisting of a background region merging module, a target region merging module, and a to-be-merged region merging module, which obtains the final segmentation result after multiple iterations.

Claims (6)

1. An automatic hair region segmentation method, characterized by comprising the following steps:

Step 1: face detection: using a face detection module to detect the face position in an input face picture with a trained cascade classifier;

Step 2: target/background marking: using a target/background marking module to delineate a region of interest around the face position, and finding the most likely target and background marks according to position and color features;

Step 3: image segmentation: according to the target and background marks, using an image segmentation module to segment the hair region and output it.

2. The automatic hair region segmentation method according to claim 1, characterized in that the face detection of step 1 comprises the following steps: extracting Haar features, training weak classifiers, iteratively selecting optimized weak classifiers with the AdaBoost algorithm to generate a strong classifier, and performing real-time detection to obtain the face region R_f.

3. The automatic hair region segmentation method according to claim 2, characterized in that the specific process of the AdaBoost algorithm is as follows: computing sample integral images for the collected face sample set and non-face sample set, respectively, to obtain rectangle feature prototypes; computing the rectangle feature values to obtain the feature set; determining thresholds and generating the corresponding weak classifiers from the rectangle feature set to obtain the weak classifier set; selecting the optimal weak classifiers and calling the AdaBoost algorithm to train strong classifiers, obtaining the strong classifier set; then judging whether further non-face picture sets remain: if yes, adding non-face samples to the non-face sample set and repeating the above steps; if no, directly obtaining the cascade classifier.

4. The automatic hair region segmentation method according to claim 1, characterized in that the target/background marking of step 2 comprises the following steps:

(1) determining the region of interest: according to the face position and prior probability knowledge, the region of interest is determined on the basis of the face region R_f by the formulas:

"hair and face" region width = 3.6 × face width,
"hair and face" region height = 3.7 × face height;

(2) face color feature extraction: building a skin-color probability distribution over the face region R_f, selecting the pixels whose probability exceeds 1%, and computing the mean μ and covariance matrix C to obtain a Gaussian skin-color model;

(3) target pixel marking: delineating, on the basis of the face region R_f, a sample window region R_h that is certain to contain hair, where:

L_x^{R_h} = L_x^{R_f}
R_x^{R_h} = R_x^{R_f}
U_y^{R_h} = U_y^{R_f} - (1/2)(R_x^{R_f} - L_x^{R_f})
D_y^{R_h} = D_y^{R_f} + (1/4)(R_x^{R_f} - L_x^{R_f})

in which L_x^{R_h}, R_x^{R_h}, U_y^{R_h}, D_y^{R_h} denote the left, right, upper, and lower boundaries of the hair sample window R_h, and L_x^{R_f}, R_x^{R_f}, U_y^{R_f}, D_y^{R_f} likewise denote the left, right, upper, and lower boundaries of the face region R_f; within this window, skin-color pixels are removed according to the face skin-color model, hair color features are extracted from the remaining pixels, and the hair pixels are marked as target marks;

(4) background pixel marking: taking one block at the center of the face region R_f and one at each of the upper-left and upper-right corners of the region of interest as background regions, and marking them as background.
5. The method according to claim 1, characterized in that the image segmentation of step 3 comprises: initial image segmentation, performing an initial segmentation of the image in the region of interest with the mean shift algorithm to obtain small regions; target/background region labeling, whereby a region containing a background mark is labeled as a background region, a region containing a target mark is labeled as a target region, and the remaining regions are labeled as regions to be merged; and MSRM segmentation, iteratively merging the background regions, target regions, and regions to be merged according to the maximal-similarity principle, obtaining the final segmentation result after multiple iterations.

6. The method according to claim 5, characterized in that the MSRM segmentation algorithm comprises the following process:

A. iteratively merging the regions in the background set with their adjacent regions, the iteration stopping when no new region is formed;

B. iteratively merging the regions in the target set with their adjacent regions, the iteration stopping when no new region is formed;

C. iteratively merging the regions in the to-be-merged set with their adjacent regions, the iteration stopping when no new region is formed; repeating the above steps A, B, and C until step C produces no iterative merging action.
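The geometric formulas of claim 4, the region-of-interest size and the hair sample window R_h, can be sketched directly. Image coordinates are assumed (y grows downward, so subtracting from U_y moves the boundary up), and the claim specifies only the ROI's size, not its placement; the function names are placeholders.

```python
def roi_size(face_w, face_h):
    """'Hair and face' region of interest, per claim 4(1):
    width = 3.6 * face width, height = 3.7 * face height."""
    return 3.6 * face_w, 3.7 * face_h

def hair_sample_window(Lx_f, Rx_f, Uy_f, Dy_f):
    """Hair sample window R_h from the face box R_f, per claim 4(3)."""
    w = Rx_f - Lx_f                  # face width
    Lx_h = Lx_f                      # same left boundary as the face
    Rx_h = Rx_f                      # same right boundary as the face
    Uy_h = Uy_f - w / 2              # upper boundary raised by half the face width
    Dy_h = Dy_f + w / 4              # lower boundary lowered by a quarter of it
    return Lx_h, Rx_h, Uy_h, Dy_h

print(roi_size(100, 130))
print(hair_sample_window(100, 200, 50, 180))
```

Skin-color pixels inside this window would then be removed with the Gaussian skin model, leaving hair pixels as the target marks.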
CN 201110055823 2011-03-09 2011-03-09 Method for automatically portioning hair area Pending CN102103690A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110055823 CN102103690A (en) 2011-03-09 2011-03-09 Method for automatically portioning hair area


Publications (1)

Publication Number Publication Date
CN102103690A true CN102103690A (en) 2011-06-22



Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100128939A1 (en) * 2008-11-25 2010-05-27 Eastman Kodak Company Hair segmentation
CN101877058A (en) * 2010-02-10 2010-11-03 杭州海康威视软件有限公司 People flow rate statistical method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jifeng Ning et al., "Interactive image segmentation by maximal similarity based region merging", Pattern Recognition, vol. 43, no. 2, 28 February 2010, pp. 448-449, fig. 3 (relevant to claims 5, 6) *
Fu Wenlin, "Research on Image Segmentation Technology and Hair Segmentation Application", Master's thesis, 31 October 2010, pp. 36-53 (relevant to claims 1-3, 5, 6) *

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368300A (en) * 2011-09-07 2012-03-07 常州蓝城信息科技有限公司 Target population various characteristics extraction method based on complex environment
CN102509073A (en) * 2011-10-17 2012-06-20 上海交通大学 Static target segmentation method based on Gauss background model
CN102521611A (en) * 2011-12-13 2012-06-27 广东威创视讯科技股份有限公司 Touched object identification method based on touch screen
CN102831394A (en) * 2012-07-23 2012-12-19 常州蓝城信息科技有限公司 Human face recognizing method based on split-merge algorithm
CN104718559A (en) * 2012-10-22 2015-06-17 诺基亚技术有限公司 Classifying image samples
US10096127B2 (en) 2012-10-22 2018-10-09 Nokia Technologies Oy Classifying image samples
CN105474232A (en) * 2013-06-17 2016-04-06 匡特莫格公司 System and method for biometric identification
CN103473780B (en) * 2013-09-22 2016-05-25 广州市幸福网络技术有限公司 The method of portrait background figure a kind of
CN103473780A (en) * 2013-09-22 2013-12-25 广州市幸福网络技术有限公司 Portrait background cutout method
CN103955962A (en) * 2014-04-21 2014-07-30 华为软件技术有限公司 Device and method for virtualizing human hair growth
CN103955962B (en) * 2014-04-21 2018-03-09 华为软件技术有限公司 A kind of device and method of virtual human hair's generation
CN104318558A (en) * 2014-10-17 2015-01-28 浙江大学 Multi-information fusion based gesture segmentation method under complex scenarios
CN104318558B (en) * 2014-10-17 2017-06-23 浙江大学 Hand Gesture Segmentation method based on Multi-information acquisition under complex scene
CN105844706A (en) * 2016-04-19 2016-08-10 浙江大学 Full-automatic three-dimensional hair modeling method based on single image
US10665013B2 (en) 2016-04-19 2020-05-26 Zhejiang University Method for single-image-based fully automatic three-dimensional hair modeling
CN105844706B (en) * 2016-04-19 2018-08-07 浙江大学 A kind of full-automatic three-dimensional scalp electroacupuncture method based on single image
CN106022221B (en) * 2016-05-09 2021-11-30 腾讯科技(深圳)有限公司 Image processing method and system
CN106022221A (en) * 2016-05-09 2016-10-12 腾讯科技(深圳)有限公司 Image processing method and processing system
CN106446781A (en) * 2016-08-29 2017-02-22 厦门美图之家科技有限公司 Face image processing method and face image processing device
CN106419923A (en) * 2016-10-27 2017-02-22 南京阿凡达机器人科技有限公司 Height measurement method based on monocular machine vision
WO2018076977A1 (en) * 2016-10-27 2018-05-03 南京阿凡达机器人科技有限公司 Height measurement method based on monocular machine vision
CN109923385B (en) * 2016-11-11 2021-09-21 汉高股份有限及两合公司 Method and apparatus for determining hair color uniformity
CN109923385A (en) * 2016-11-11 2019-06-21 汉高股份有限及两合公司 The method and apparatus for determining hair color uniformity
CN106778827B (en) * 2016-11-28 2019-04-23 南京鑫和汇通电子科技有限公司 A method for evaluating hair density based on line clustering
CN106778827A (en) * 2016-11-28 2017-05-31 南京英云创鑫信息技术有限公司 A kind of hair density appraisal procedure based on lines cluster
CN106611160A (en) * 2016-12-15 2017-05-03 中山大学 CNN (Convolutional Neural Network) based image hair identification method and device
CN106611160B (en) * 2016-12-15 2019-12-17 中山大学 A method and device for image hair recognition based on convolutional neural network
CN107122791A (en) * 2017-03-15 2017-09-01 国网山东省电力公司威海供电公司 Electricity business hall employee's hair style specification detection method based on color development and Texture Matching
CN108198192A (en) * 2018-01-15 2018-06-22 任俊芬 A kind of quick human body segmentation's method of high-precision based on deep learning
CN108460336A (en) * 2018-01-29 2018-08-28 南京邮电大学 A kind of pedestrian detection method based on deep learning
CN109117760A (en) * 2018-07-27 2019-01-01 北京旷视科技有限公司 Image processing method, device, electronic equipment and computer-readable medium
CN109117760B (en) * 2018-07-27 2021-01-22 北京旷视科技有限公司 Image processing method, apparatus, electronic device and computer readable medium
CN111292247A (en) * 2018-12-07 2020-06-16 北京字节跳动网络技术有限公司 Image processing method and device
CN111539960B (en) * 2019-03-25 2023-10-24 华为技术有限公司 Image processing methods and related equipment
US12131443B2 (en) 2019-03-25 2024-10-29 Huawei Technologies Co., Ltd. Image processing method and related device
CN111539960A (en) * 2019-03-25 2020-08-14 华为技术有限公司 Image processing method and related equipment
CN110009708A (en) * 2019-04-10 2019-07-12 上海大学 Method, system and terminal for hair color transformation based on image color segmentation
CN110287807A (en) * 2019-05-31 2019-09-27 上海亿童科技有限公司 A kind of human body information acquisition method, apparatus and system
US11288807B2 (en) 2019-06-03 2022-03-29 Beijing Dajia Internet Information Technology Co., Ltd. Method, electronic device and storage medium for segmenting image
CN110189340A (en) * 2019-06-03 2019-08-30 北京达佳互联信息技术有限公司 Image partition method, device, electronic equipment and storage medium
CN110189340B (en) * 2019-06-03 2022-01-21 北京达佳互联信息技术有限公司 Image segmentation method and device, electronic equipment and storage medium
CN111815733A (en) * 2020-08-07 2020-10-23 深兰科技(上海)有限公司 Video coloring method and system
CN114078083A (en) * 2020-08-11 2022-02-22 北京达佳互联信息技术有限公司 Hair transformation model generation method and device, and hair transformation method and device
CN114078083B (en) * 2020-08-11 2024-11-22 北京达佳互联信息技术有限公司 Hair transformation model generation method and device, hair transformation method and device
CN112084965A (en) * 2020-09-11 2020-12-15 义乌市悦美科技有限公司 Scalp hair detection device and system
CN112102196A (en) * 2020-09-16 2020-12-18 广州虎牙科技有限公司 Image hairdressing processing method and device, electronic equipment and readable storage medium
CN113033662A (en) * 2021-03-25 2021-06-25 北京华宇信息技术有限公司 Multi-video association method and device
CN114187309A (en) * 2022-01-11 2022-03-15 盛视科技股份有限公司 Hair segmentation method and system based on convolutional neural network
CN114187309B (en) * 2022-01-11 2024-10-15 盛视科技股份有限公司 Hair segmentation method and system based on convolutional neural network
CN114663274A (en) * 2022-02-24 2022-06-24 浙江大学 A method and device for removing hair from portrait images based on GAN network

Similar Documents

Publication Publication Date Title
CN102103690A (en) Method for automatically portioning hair area
Ban et al. Face detection based on skin color likelihood
Shahab et al. ICDAR 2011 robust reading competition challenge 2: Reading text in scene images
CN102831447B (en) Method for identifying multi-class facial expressions at high precision
Ott et al. Implicit color segmentation features for pedestrian and object detection
CN102436636B (en) Method and system for segmenting hair automatically
Bekhouche et al. Pyramid multi-level features for facial demographic estimation
CN102194108B (en) Smile face expression recognition method based on clustering linear discriminant analysis of feature selection
EP3101594A1 (en) Saliency information acquisition device and saliency information acquisition method
CN105160317A (en) Pedestrian gender identification method based on regional blocks
Tsai et al. Road sign detection using eigen colour
Asi et al. A coarse-to-fine approach for layout analysis of ancient manuscripts
CN102163281B (en) Real-time human body detection method based on AdaBoost frame and colour of head
CN103824052A (en) Multilevel semantic feature-based face feature extraction method and recognition method
CN102682287A (en) Pedestrian detection method based on saliency information
Yang et al. Real-time traffic sign detection via color probability model and integral channel features
WO2011074014A2 (en) A system for lip corner detection using vision based approach
CN108664969B (en) A Conditional Random Field Based Road Sign Recognition Method
CN105718866A (en) Visual target detection and identification method
Du et al. Wavelet domain local binary pattern features for writer identification
Warrell et al. Labelfaces: Parsing facial features by multiclass labeling with an epitome prior
Lodh et al. Flower recognition system based on color and GIST features
CN110008920A (en) Research on facial expression recognition method
CN114373079A (en) A Fast and Accurate Ground Penetrating Radar Target Detection Method
Deshmukh et al. Real-time traffic sign recognition system based on colour image segmentation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110622