CN103440864A - Personality characteristic forecasting method based on voices - Google Patents

Personality characteristic forecasting method based on voices

Info

Publication number
CN103440864A
Authority
CN
China
Prior art keywords
personality
voice
prediction
feature
acoustic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013103292952A
Other languages
Chinese (zh)
Inventor
赵欢
张希翔
陈佐
郑睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN2013103292952A priority Critical patent/CN103440864A/en
Publication of CN103440864A publication Critical patent/CN103440864A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract



The invention discloses a voice-based personality trait prediction method. The implementation steps are as follows: personality assessments are administered to multiple reference subjects to obtain score values for multiple personality trait factors; speech segments are collected from the reference subjects, multiple acoustic prosodic features are extracted from them, and multiple statistical feature values are computed; a speech-based personality prediction machine learning model is built, and each reference subject's personality trait factor scores together with the statistical feature values are input into the model for training; a speech segment is then collected from the person to be assessed, its acoustic prosodic features and statistical features are extracted and input into the trained model to obtain the personality trait factor scores corresponding to each acoustic prosodic feature, and the scores for each factor are combined by weighted summation to produce and output the subject's personality trait factor scores. The invention has the advantages of simple collection of prediction material, a fast prediction process, and objective, accurate results.


Description

Voice-based personality trait prediction method
Technical field
The present invention relates to the field of applied computer technology, and specifically to a voice-based personality trait prediction method.
Background
At present, personality prediction on the Internet generally takes the form of written questionnaires. Questionnaire-based personality assessment has a rich body of research results, such as the Big Five personality test and the Cattell Sixteen Personality Factor Questionnaire (16PF). However, users must spend considerable time answering questions in this manner; the time required depends on the number of items and on the test-taker's answering speed, the procedure is long and tedious, test-takers easily become bored and resistant, and the accuracy of the result depends on the test-taker's subjective cooperation. This approach is therefore poorly suited to the simple, convenient, "fast-food" application model favored on the Internet.
The technical scheme of patent application No. 201010606120.8 discloses a personality testing method and device based on interactive voice question-and-answer supporting multiple dialect backgrounds, converting the written question-and-answer system of personality testing into a voice question-and-answer mode. This solves the adaptability and convenience problems of special populations to some extent, but does not fundamentally resolve the excessive length of the testing process. In addition, the technical scheme of patent application No. 201310059465.X discloses analyzing and predicting personality traits from a user's handwriting images. Although this removes the lengthy question-answering time, handwritten images are not widely used in current mobile and online social networking, so the prediction data is difficult to collect. The voice-based personality prediction approach of the present invention involves few steps and is simple to operate, can be promoted in numerous applications on the mobile Internet and in mobile environments, and can thus provide accurate and efficient social services for users. Therefore, how to overcome the shortcomings of personality trait prediction on Internet and mobile platforms (long duration, results affected by subjective factors, and measurement data that is hard to obtain) and provide users with a simple, easy-to-use "fast-food" personality prediction approach has become a technical problem to be solved urgently.
Summary of the invention
In view of the above problems in the prior art, the problem to be solved by the present invention is to provide a voice-based personality trait prediction method with a short prediction time, objective and accurate results, and simple and convenient material collection.
In order to solve the above problems, the technical solution adopted by the present invention is:
A voice-based personality trait prediction method, whose implementation steps are as follows:
1) Build a speech-based personality prediction machine learning model: administer personality assessments to multiple selected reference subjects to obtain multiple personality trait factor score values that serve as ground-truth benchmark scores of the personality trait factors; collect speech segments of the reference subjects' normal speech, preprocess the segments and extract multiple acoustic prosodic features, then extract multiple statistical feature values of the acoustic prosodic features; build a speech-based personality prediction machine learning model containing the mapping from acoustic prosodic features to personality trait factor scores, and train it by inputting each reference subject's personality trait factor scores together with the statistical feature values corresponding to each acoustic prosodic feature of the subject's speech segments.
2) Personality trait prediction: collect the normal speech of the person to be assessed to obtain a speech segment to be predicted; preprocess the segment and extract multiple acoustic prosodic features and the corresponding statistical features; input these into the trained model to perform regression analysis on the personality trait factor scores, obtaining the personality trait factor scores corresponding to each acoustic prosodic feature and statistical feature; compute the weighted sum of all personality trait factor scores corresponding to each feature, and finally output the subject's personality trait factor scores.
As further improvements of the technical solution of the present invention:
In step 1), the personality assessment administered to the selected reference subjects is specifically one of the Big Five personality test, the Minnesota Multiphasic Personality Inventory, and the Cattell 16PF test.
In steps 1) and 2), the detailed steps of preprocessing a speech segment and extracting multiple acoustic prosodic features are as follows: apply pre-emphasis, windowing, framing, and endpoint detection to the speech segment to obtain a preprocessed segment; from each preprocessed segment, extract acoustic prosodic features including several of Mel-frequency cepstral coefficients, linear prediction cepstral coefficients, perceptual linear prediction coefficients, pitch, the first two formants, energy, voiced segment length, unvoiced segment length, short-time zero-crossing rate, harmonics-to-noise ratio, and the long-term average spectrum.
Extracting multiple statistical feature values of the acoustic prosodic features in step 1) specifically refers to extracting several of the maximum, minimum, mean, variance, relative entropy, slope, and difference values of the acoustic prosodic features.
The speech-based personality prediction machine learning model in step 1) specifically refers to one of a Gaussian-kernel support vector machine statistical model, a logistic regression model, a decision tree model, a least-squares model, a perceptron algorithm model, a boosting model, a hidden Markov model, a Gaussian mixture model, a neural network model, and a deep learning model.
The present invention has the following technical effects: by means of the pre-built speech-based personality prediction machine learning model, personality trait prediction can be performed on any speech segment provided by the user. Statistical learning methods are used to establish the mapping between speech features and personality trait factors and to predict each personality factor index, overcoming the shortcomings of traditional personality prediction (long duration, results affected by subjective factors, and measurement material that is hard to obtain). The method can take full advantage of the fact that sound material is easy to obtain in current online and mobile social networking: the acoustic prosodic features of any speech segment submitted by the user for the person to be assessed are extracted, statistical learning is used to compute the multiple personality trait factor scores corresponding to the segment, and these scores are combined by weighted summation into the subject's final comprehensive personality trait scores. On this basis, fast personality-based social services can be provided for users, such as best-match dating, interpersonal relationship prediction, and career planning. The method has the advantages of a short prediction time, objective and accurate results, simple and convenient material collection, and a wide range of applications.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the method of this embodiment of the present invention.
Fig. 2 is a schematic diagram of the principle of personality trait prediction in this embodiment of the present invention.
Embodiment
As shown in Fig. 1, the implementation steps of the voice-based personality trait prediction method of this embodiment are as follows:
1. Build the speech-based personality prediction machine learning model.
1.1) Administer personality assessments to multiple selected reference subjects to obtain multiple personality trait factor scores as the ground-truth benchmark scores of the personality trait factors. In this embodiment, the assessment administered to the selected reference subjects is the Big Five personality test, which yields each reference subject's scores on five personality trait factors: Neuroticism, Extroversion, Openness, Agreeableness, and Conscientiousness. Alternatively, the Minnesota Multiphasic Personality Inventory or the Cattell 16PF test may be used; these likewise yield multiple personality trait factor scores, and the number of factor scores may vary with the specific assessment method.
1.2) Collect speech segments of the reference subjects' normal speech, preprocess the segments, and extract multiple acoustic prosodic features. In this embodiment, 400 reference subjects were selected, and each recorded 10 arbitrary segments of normal speech of about 15 seconds, for a total of 4000 speech segments. Since experimental datasets of more than 300 samples generally satisfy the needs of psychological analysis, the speech segments used to build the model in this embodiment meet the relevant sampling standards; about two thirds of the segments are used as the training set and the remaining third as the test set. In this embodiment, the detailed steps of preprocessing a speech segment and obtaining its acoustic prosodic features are as follows: apply speech preprocessing (pre-emphasis, windowing, framing, and endpoint detection in turn) to obtain preprocessed segments, then from each segment extract Mel-frequency cepstral coefficients (MFCC), pitch (the number of vocal-fold vibrations per second, related to tone and intonation), the first two formants (F1 and F2), energy, voiced segment length (L0), unvoiced segment length (L1; combined with L0 it relates to speaking rate), perceptual linear prediction (PLP) coefficients, short-time zero-crossing rate, harmonics-to-noise ratio (HNR), and the long-term average spectrum (LTAS) as the extracted acoustic prosodic features.
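The preprocessing chain above can be sketched in a few lines of NumPy. This is a minimal illustration only: the frame length, hop size, pre-emphasis coefficient, and energy threshold are assumptions not specified by the patent, the endpoint detection is a crude energy gate, and only two of the simpler features (short-time energy and zero-crossing rate) are computed; the fuller feature set (MFCC, PLP, formants, HNR) would need a dedicated signal-processing library.

```python
import numpy as np

def preprocess_and_features(signal, frame_len=256, hop=128,
                            preemph=0.97, energy_floor=1e-3):
    """Pre-emphasis -> Hamming windowing -> framing -> crude
    energy-based endpoint detection, then per-frame short-time
    energy and zero-crossing rate (hypothetical parameter values)."""
    # Pre-emphasis: y[n] = x[n] - a * x[n-1]
    emphasized = np.append(signal[0], signal[1:] - preemph * signal[:-1])

    # Framing with a Hamming window
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    window = np.hamming(frame_len)
    frames = np.stack([emphasized[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])

    # Per-frame short-time energy and zero-crossing rate
    energy = np.sum(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

    # Crude endpoint detection: keep frames above an energy threshold
    voiced = energy > energy_floor * np.max(energy)
    return energy[voiced], zcr[voiced]

sr = 11025                                   # embodiment's sampling rate
t = np.arange(sr) / sr                       # one second of audio
tone = 0.5 * np.sin(2 * np.pi * 220 * t)     # synthetic "voiced" signal
energy, zcr = preprocess_and_features(tone)
```

The synthetic tone stands in for a recorded segment; on real speech the energy gate would drop silent frames, which is what the patent's endpoint detection is for.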
1.3) Extract multiple statistical feature values of the acoustic prosodic features. In this embodiment, this specifically refers to extracting several of the following for each acoustic prosodic feature: maximum (Max), minimum (Min), mean (Mean), variance (Stdev), relative entropy (KL), slope, and difference values.
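The statistics of step 1.3) for one feature contour can be sketched as follows. Note one assumption: the patent names "relative entropy (KL)" without defining it, so the snippet compares the contour's normalized histogram against a uniform distribution, which is one plausible reading, not the patent's definition.

```python
import numpy as np

def statistical_features(x):
    """Statistical feature values of one acoustic prosodic feature
    contour (max, min, mean, stdev, KL, slope, difference values)."""
    x = np.asarray(x, dtype=float)
    # KL divergence of the contour's histogram vs. uniform (assumption)
    hist, _ = np.histogram(x, bins=10)
    p = hist / hist.sum()
    q = np.full_like(p, 1.0 / len(p))
    kl = float(np.sum(p[p > 0] * np.log(p[p > 0] / q[p > 0])))
    # Slope of a linear fit over time, i.e. the contour's overall trend
    slope = float(np.polyfit(np.arange(len(x)), x, 1)[0])
    return {
        "max": float(np.max(x)),
        "min": float(np.min(x)),
        "mean": float(np.mean(x)),
        "stdev": float(np.std(x)),
        "kl": kl,
        "slope": slope,
        "delta": np.diff(x),       # first-order difference values
    }

pitch_contour = 200 + 5 * np.arange(50)   # hypothetical rising pitch track
feats = statistical_features(pitch_contour)
```

A vector of such statistics per prosodic feature is what gets fed to the model, rather than the raw frame-level contours.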
1.4) Build the speech-based personality prediction machine learning model containing the mapping from acoustic prosodic features to personality trait factor scores, and train it by inputting each reference subject's personality trait factor scores together with the statistical feature values corresponding to each acoustic prosodic feature of the subject's speech segments.
In this embodiment, the model into which each reference subject's personality trait factor scores and the corresponding statistical feature values of the acoustic prosodic features of the speech segments are input is specifically a Gaussian-kernel support vector machine statistical model; each acoustic prosodic feature of each speech segment is associated with the five corresponding personality trait factor scores (Neuroticism, Extroversion, Openness, Agreeableness, Conscientiousness). Alternatively, other speech-based personality prediction machine learning models may be used as needed, including a logistic regression model, decision tree model, least-squares model, perceptron algorithm model, boosting model, hidden Markov model, Gaussian mixture model, neural network model, or deep learning model; for any such model, the accuracy is related to the number of training samples, and the more training samples, the higher the accuracy. After the reference subjects' personality trait factor scores and the corresponding statistical feature values of the acoustic prosodic features are input into the Gaussian-kernel SVM statistical model, training is completed and a Gaussian-kernel SVM statistical model containing the mapping from acoustic prosodic features to personality trait factor scores is obtained: the aforementioned speech-based personality prediction machine learning model. Because this model contains the mapping from acoustic prosodic features to personality trait factor scores, personality prediction can be performed on any speech segment provided by the user; the mapping between the segment's acoustic prosodic features and the personality trait factor scores is used to predict each personality factor index, laying the foundation for personality trait prediction.
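The training stage can be sketched with scikit-learn's RBF-kernel support vector regressor, which is one concrete instance of the "Gaussian-kernel support vector machine" named above, with one regressor per Big Five factor. The data below is synthetic stand-in data; the feature dimensionality, the one-model-per-factor arrangement, and the default hyperparameters are all assumptions, not details taken from the patent.

```python
import numpy as np
from sklearn.svm import SVR  # RBF kernel = Gaussian kernel

rng = np.random.default_rng(0)

# Hypothetical training data standing in for the reference subjects:
# each row is one segment's statistical feature vector, each column
# of Y one Big Five personality factor score.
n_samples, n_stats = 300, 7
X = rng.normal(size=(n_samples, n_stats))
true_w = rng.normal(size=(n_stats, 5))
Y = X @ true_w + 0.1 * rng.normal(size=(n_samples, 5))

factors = ["Neuroticism", "Extroversion", "Openness",
           "Agreeableness", "Conscientiousness"]
# One Gaussian-kernel SVM regressor trained per personality factor
models = {name: SVR(kernel="rbf").fit(X, Y[:, i])
          for i, name in enumerate(factors)}

# Regression prediction for a new segment's feature vector (step 2.2)
x_new = rng.normal(size=(1, n_stats))
scores = {name: float(m.predict(x_new)[0]) for name, m in models.items()}
```

In practice one such set of regressors would be trained per acoustic prosodic feature, so that each feature independently yields five factor scores, as the embodiment describes.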
2. Personality trait prediction.
2.1) Collect the normal speech of the person to be assessed to obtain a speech segment to be predicted; preprocess the segment and extract the statistical features corresponding to the acoustic prosodic features. Speech segments can be collected in two ways: first, the user may select an already-recorded speech segment file on a mobile phone, computer, tablet, or other electronic device and submit it over the network to the voice collection interface of a system applying the method of this embodiment; second, the user may use the system's real-time recording function to record a segment and submit it to the voice collection interface. In this embodiment, the voice collection interface receives the speech segment audio file submitted by the user over the network; the sampling rate is 11025 Hz, and the audio files are all saved in WAV format. The steps for preprocessing the segment and obtaining its acoustic prosodic features are the same as in step 1.2) and are not repeated here.
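Reading a submitted 11025 Hz WAV segment can be done with Python's standard `wave` module. To stay self-contained the snippet writes a synthetic segment first; the file name and 16-bit mono format are illustrative assumptions, since the patent specifies only the sampling rate and the WAV container.

```python
import wave
import numpy as np

def read_wav_mono(path):
    """Read a 16-bit mono WAV file into floats in [-1, 1]."""
    with wave.open(path, "rb") as w:
        sr = w.getframerate()
        raw = w.readframes(w.getnframes())
    samples = np.frombuffer(raw, dtype="<i2").astype(np.float64) / 32768.0
    return samples, sr

# Write a synthetic one-second 11025 Hz segment, then read it back.
sr = 11025
t = np.arange(sr) / sr
tone = (0.5 * np.sin(2 * np.pi * 220 * t) * 32767).astype("<i2")
with wave.open("segment.wav", "wb") as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(sr)      # the embodiment's 11025 Hz rate
    w.writeframes(tone.tobytes())

samples, rate = read_wav_mono("segment.wav")
```

The float array returned here is what would be handed to the preprocessing chain of step 1.2).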
2.2) Input the acoustic prosodic features and the corresponding statistical features into the speech-based personality prediction machine learning model to perform regression analysis on the personality trait factor scores, obtaining the personality trait factor scores corresponding to each acoustic prosodic feature and statistical feature. Because the model contains the mapping from acoustic prosodic features to personality trait factor scores, inputting the acoustic prosodic features into the model and performing regression analysis yields score values for the five personality trait factors (Neuroticism, Extroversion, Openness, Agreeableness, Conscientiousness). Finally, five personality trait factor scores are obtained for each acoustic prosodic feature; that is, each acoustic prosodic feature corresponds to five personality trait factor scores.
2.3) Compute the weighted sum of all personality trait factor scores corresponding to each acoustic prosodic feature, and finally output the subject's personality trait factor scores.
As shown in Fig. 2, this embodiment first collects a speech segment in step 2.1) and extracts multiple acoustic prosodic features and statistical features: the acoustic prosodic features include pitch, formants (the first two, F1 and F2), and so on, and the computed statistical features include maximum, minimum, mean, variance, relative entropy, and so on. After step 2.2), in which the features are input into the speech-based personality prediction machine learning model for regression analysis of the personality trait factor scores, each acoustic prosodic feature yields score values for the five personality trait factors (Neuroticism, Extroversion, Openness, Agreeableness, Conscientiousness). Finally, in step 2.3), the five sets of personality trait factor scores are combined by weighted summation to produce the final scores of the five personality factors, and the five predicted personality factor indices are output as the subject's final personality trait prediction result.
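The weighted summation of step 2.3) is a small computation. The patent does not state the weights, so equal weights (a plain average over features) are assumed here, and the per-feature scores and feature names are invented for illustration.

```python
import numpy as np

# Hypothetical per-feature scores from step 2.2): one row per acoustic
# prosodic feature, one column per Big Five factor score predicted
# from that feature alone.
feature_names = ["pitch", "F1", "F2", "energy", "zcr", "hnr"]
per_feature_scores = np.array([
    [55, 62, 48, 70, 66],
    [50, 60, 52, 68, 64],
    [58, 59, 47, 71, 63],
    [54, 61, 50, 69, 65],
    [52, 63, 49, 67, 62],
    [56, 58, 51, 72, 66],
], dtype=float)

# Equal weights assumed; the patent only specifies "weighted sum".
weights = np.full(len(feature_names), 1.0 / len(feature_names))
final_scores = weights @ per_feature_scores   # one score per factor

factors = ["Neuroticism", "Extroversion", "Openness",
           "Agreeableness", "Conscientiousness"]
result = dict(zip(factors, final_scores.round(2)))
```

Non-uniform weights could instead be fit on the training set, e.g. to favor the prosodic features whose per-feature regressors validate best; the patent leaves this choice open.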
In summary, by means of the pre-built speech-based personality prediction machine learning model, this embodiment can perform personality prediction on any speech segment provided by the user. Statistical learning methods establish the mapping between speech features and personality trait factors and predict each personality factor index, overcoming the shortcomings of traditional personality prediction (long duration, results affected by subjective factors, and measurement material that is hard to obtain) while taking full advantage of the ease of obtaining sound material in current online and mobile social networking. The acoustic and prosodic features of any speech segment submitted by the user for the person to be assessed are extracted, statistical learning computes the corresponding personality trait factor scores, and these are combined by weighted summation into the subject's final comprehensive personality trait scores, on the basis of which social services can be provided. In accuracy comparison experiments in which personality trait prediction was carried out for multiple reference subjects, the experimental data show that the prediction accuracy of this embodiment reaches about 67%, close to the roughly 75% accuracy of manual personality assessment in the prior art, which satisfies the demand for fast and accurate personality trait prediction. The method can provide users with fast personality-based services such as best-match dating, interpersonal relationship prediction, and career planning, and has the advantages of a short prediction time, objective and accurate results, simple and convenient material collection, and a wide range of applications.
The above is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited to the above embodiment; all technical solutions falling under the inventive concept of the present invention belong to its scope of protection. It should be pointed out that, for those skilled in the art, improvements and modifications that do not depart from the principles of the present invention should also be regarded as within the scope of protection of the present invention.

Claims (5)

1. A voice-based personality trait prediction method, characterized in that the implementation steps are as follows:
1) Build a speech-based personality prediction machine learning model: administer personality assessments to multiple selected reference subjects to obtain multiple personality trait factor score values serving as ground-truth benchmark scores of the reference subjects' personality trait factors; collect speech segments of the reference subjects' normal speech, preprocess the segments and extract multiple acoustic prosodic features, and extract multiple statistical feature values of the acoustic prosodic features; build a speech-based personality prediction machine learning model containing the mapping from acoustic prosodic features to personality trait factor scores, and input each reference subject's personality trait factor scores and the statistical feature values corresponding to each acoustic prosodic feature of the speech segments into the model for training;
2) Personality trait prediction: collect the normal speech of the person to be assessed to obtain a speech segment to be predicted; preprocess the segment and extract multiple acoustic prosodic features and the corresponding statistical features; input them into the speech-based personality prediction machine learning model to perform regression analysis on the personality trait factor scores, obtaining the personality trait factor scores corresponding to each acoustic prosodic feature and statistical feature; compute the weighted sum of all personality trait factor scores corresponding to each feature, and finally obtain and output the subject's personality trait factor scores.
2. The voice-based personality trait prediction method according to claim 1, characterized in that the personality assessment administered to the selected reference subjects in step 1) is specifically one of the Big Five personality test, the Minnesota Multiphasic Personality Inventory, and the Cattell 16PF test.
3. The voice-based personality trait prediction method according to claim 2, characterized in that in steps 1) and 2) the detailed steps of preprocessing a speech segment and extracting multiple acoustic prosodic features are as follows: apply pre-emphasis, windowing, framing, and endpoint detection to the speech segment to obtain a preprocessed segment, and from each preprocessed segment extract acoustic prosodic features including several of Mel-frequency cepstral coefficients, linear prediction cepstral coefficients, perceptual linear prediction coefficients, pitch, the first two formants, energy, voiced segment length, unvoiced segment length, short-time zero-crossing rate, harmonics-to-noise ratio, and the long-term average spectrum.
4. The voice-based personality trait prediction method according to claim 3, characterized in that extracting multiple statistical feature values of the acoustic prosodic features in step 1) specifically refers to extracting several of the maximum, minimum, mean, variance, relative entropy, slope, and difference values of the acoustic prosodic features.
5. The voice-based personality trait prediction method according to any one of claims 1 to 4, characterized in that the speech-based personality prediction machine learning model in step 1) specifically refers to one of a Gaussian-kernel support vector machine statistical model, a logistic regression model, a decision tree model, a least-squares model, a perceptron algorithm model, a boosting model, a hidden Markov model, a Gaussian mixture model, a neural network model, and a deep learning model.
CN2013103292952A 2013-07-31 2013-07-31 Personality characteristic forecasting method based on voices Pending CN103440864A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013103292952A CN103440864A (en) 2013-07-31 2013-07-31 Personality characteristic forecasting method based on voices


Publications (1)

Publication Number Publication Date
CN103440864A true CN103440864A (en) 2013-12-11

Family

ID=49694555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013103292952A Pending CN103440864A (en) 2013-07-31 2013-07-31 Personality characteristic forecasting method based on voices

Country Status (1)

Country Link
CN (1) CN103440864A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105022929A (en) * 2015-08-07 2015-11-04 北京环度智慧智能技术研究所有限公司 Cognition accuracy analysis method for personality trait value test
CN105069294A (en) * 2015-08-07 2015-11-18 北京环度智慧智能技术研究所有限公司 Calculation and analysis method for testing cognitive competence values
CN105147304A (en) * 2015-08-07 2015-12-16 北京环度智慧智能技术研究所有限公司 A Stimulus Information Preparation Method for Personality Trait Value Test
CN107348962A (en) * 2017-06-01 2017-11-17 清华大学 A kind of personal traits measuring method and equipment based on brain-computer interface technology
CN107689012A (en) * 2017-09-06 2018-02-13 王锦程 A kind of marriage and making friend's matching process
CN108175424A (en) * 2015-08-07 2018-06-19 北京环度智慧智能技术研究所有限公司 A kind of test system for cognition ability value test
CN108829668A (en) * 2018-05-30 2018-11-16 平安科技(深圳)有限公司 Text information generation method and device, computer equipment and storage medium
CN109192277A (en) * 2018-08-29 2019-01-11 沈阳康泰电子科技股份有限公司 A kind of psychological characteristics measure based on general effective question and answer scale
CN109672930A (en) * 2018-12-25 2019-04-23 北京心法科技有限公司 Personality association type emotional arousal method and apparatus
CN110111810A (en) * 2019-04-29 2019-08-09 华院数据技术(上海)有限公司 Voice personality prediction technique based on convolutional neural networks
CN110652294A (en) * 2019-09-16 2020-01-07 清华大学 Creativity personality trait measuring method and device based on electroencephalogram signals
CN111460245A (en) * 2019-01-22 2020-07-28 刘宏军 Multi-dimensional crowd characteristic measuring method
CN112561474A (en) * 2020-12-14 2021-03-26 华南理工大学 Intelligent personality characteristic evaluation method based on multi-source data fusion
CN112786054A (en) * 2021-02-25 2021-05-11 深圳壹账通智能科技有限公司 Intelligent interview evaluation method, device and equipment based on voice and storage medium
CN116631446A (en) * 2023-07-26 2023-08-22 上海迎智正能文化发展有限公司 Behavior mode analysis method and system based on speech analysis
EP3186751B1 (en) * 2014-08-26 2024-07-24 Google LLC Localized learning from a global model

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1694162A (en) * 2005-03-31 2005-11-09 金庆镐 Speech recognition analysis system and service method
CN101359995A (en) * 2008-09-28 2009-02-04 腾讯科技(深圳)有限公司 Method and apparatus providing on-line service
CN101375304A (en) * 2006-01-31 2009-02-25 松下电器产业株式会社 Suggesting device, suggesting method, suggesting program, and recording medium having suggesting program recorded thereon
EP2233077A1 (en) * 2007-12-07 2010-09-29 Zaidanhojin Shin-Iryozaidan Personality testing apparatus
CN101999903A (en) * 2010-12-27 2011-04-06 中国人民解放军第四军医大学 Voice type personality characteristic detection system based on multiple dialect backgrounds
CN103106346A (en) * 2013-02-25 2013-05-15 中山大学 Character prediction system based on off-line writing picture division and identification

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1694162A (en) * 2005-03-31 2005-11-09 金庆镐 Speech recognition analysis system and service method
CN101375304A (en) * 2006-01-31 2009-02-25 松下电器产业株式会社 Suggesting device, suggesting method, suggesting program, and recording medium having suggesting program recorded thereon
EP2233077A1 (en) * 2007-12-07 2010-09-29 Zaidanhojin Shin-Iryozaidan Personality testing apparatus
CN101359995A (en) * 2008-09-28 2009-02-04 腾讯科技(深圳)有限公司 Method and apparatus providing on-line service
CN101999903A (en) * 2010-12-27 2011-04-06 中国人民解放军第四军医大学 Voice type personality characteristic detection system based on multiple dialect backgrounds
CN103106346A (en) * 2013-02-25 2013-05-15 中山大学 Character prediction system based on off-line writing picture division and identification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GELAREH MOHAMMADI ET AL.: "Automatic Personality Perception: Prediction of Trait Attribution Based on Prosodic Features", IEEE TRANSACTIONS ON AFFECTIVE COMPUTING *
ZHAO LI: "Speech Signal Processing" (《语音信号处理》), 31 March 2003 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3186751B1 (en) * 2014-08-26 2024-07-24 Google LLC Localized learning from a global model
CN105022929A (en) * 2015-08-07 2015-11-04 北京环度智慧智能技术研究所有限公司 Cognition accuracy analysis method for personality trait value test
CN105069294A (en) * 2015-08-07 2015-11-18 北京环度智慧智能技术研究所有限公司 Calculation and analysis method for testing cognitive competence values
CN105147304A (en) * 2015-08-07 2015-12-16 北京环度智慧智能技术研究所有限公司 A Stimulus Information Preparation Method for Personality Trait Value Test
CN105147304B (en) * 2015-08-07 2018-01-09 北京环度智慧智能技术研究所有限公司 A Stimulus Information Preparation Method for Personality Trait Value Test
CN108065942B (en) * 2015-08-07 2021-02-05 北京智能阳光科技有限公司 Method for compiling stimulation information aiming at oriental personality characteristics
CN108042147A (en) * 2015-08-07 2018-05-18 北京环度智慧智能技术研究所有限公司 A kind of stimulus information provides device
CN108065942A (en) * 2015-08-07 2018-05-25 北京环度智慧智能技术研究所有限公司 A kind of preparation method of stimulus information for east personality characteristics
CN105069294B (en) * 2015-08-07 2018-06-15 北京环度智慧智能技术研究所有限公司 A kind of calculation and analysis method for cognition ability value test
CN108175424A (en) * 2015-08-07 2018-06-19 北京环度智慧智能技术研究所有限公司 A kind of test system for cognition ability value test
CN108175424B (en) * 2015-08-07 2020-12-11 北京智能阳光科技有限公司 Test system for cognitive ability value test
CN107348962B (en) * 2017-06-01 2019-10-18 清华大学 A method and device for measuring personality traits based on brain-computer interface technology
CN107348962A (en) * 2017-06-01 2017-11-17 清华大学 A kind of personal traits measuring method and equipment based on brain-computer interface technology
CN107689012A (en) * 2017-09-06 2018-02-13 王锦程 A kind of marriage and making friend's matching process
CN108829668B (en) * 2018-05-30 2021-11-16 平安科技(深圳)有限公司 Text information generation method and device, computer equipment and storage medium
CN108829668A (en) * 2018-05-30 2018-11-16 平安科技(深圳)有限公司 Text information generation method and device, computer equipment and storage medium
CN109192277B (en) * 2018-08-29 2021-11-02 沈阳康泰电子科技股份有限公司 Psychological characteristic measuring method based on universal effective question-answering ruler
CN109192277A (en) * 2018-08-29 2019-01-11 沈阳康泰电子科技股份有限公司 A kind of psychological characteristics measure based on general effective question and answer scale
CN109672930A (en) * 2018-12-25 2019-04-23 北京心法科技有限公司 Personality association type emotional arousal method and apparatus
CN111460245A (en) * 2019-01-22 2020-07-28 刘宏军 Multi-dimensional crowd characteristic measuring method
CN110111810B (en) * 2019-04-29 2020-12-18 华院数据技术(上海)有限公司 Voice personality prediction method based on convolutional neural network
CN110111810A (en) * 2019-04-29 2019-08-09 华院数据技术(上海)有限公司 Voice personality prediction technique based on convolutional neural networks
CN110652294B (en) * 2019-09-16 2020-08-25 清华大学 Creativity personality trait measuring method and device based on electroencephalogram signals
CN110652294A (en) * 2019-09-16 2020-01-07 清华大学 Creativity personality trait measuring method and device based on electroencephalogram signals
CN112561474A (en) * 2020-12-14 2021-03-26 华南理工大学 Intelligent personality characteristic evaluation method based on multi-source data fusion
CN112561474B (en) * 2020-12-14 2024-04-30 华南理工大学 A method for evaluating intelligent personality characteristics based on multi-source data fusion
CN112786054A (en) * 2021-02-25 2021-05-11 深圳壹账通智能科技有限公司 Intelligent interview evaluation method, device and equipment based on voice and storage medium
CN112786054B (en) * 2021-02-25 2024-06-11 深圳壹账通智能科技有限公司 Intelligent interview evaluation method, device, equipment and storage medium based on voice
CN116631446A (en) * 2023-07-26 2023-08-22 上海迎智正能文化发展有限公司 Behavior mode analysis method and system based on speech analysis
CN116631446B (en) * 2023-07-26 2023-11-03 上海迎智正能文化发展有限公司 Behavior mode analysis method and system based on speech analysis

Similar Documents

Publication Publication Date Title
CN103440864A (en) Personality characteristic forecasting method based on voices
Huang et al. Depression detection from short utterances via diverse smartphones in natural environmental conditions
CN104732977B (en) A kind of online spoken language pronunciation quality evaluating method and system
Bhakre et al. Emotion recognition on the basis of audio signal using Naive Bayes classifier
Dibazar et al. Pathological voice assessment
CN103559892B (en) Oral evaluation method and system
KR20240135018A (en) Multi-modal system and method for voice-based mental health assessment using emotional stimuli
Golabbakhsh et al. Automatic identification of hypernasality in normal and cleft lip and palate patients with acoustic analysis of speech
CN103559894B (en) Oral evaluation method and system
Gillespie et al. Cross-Database Models for the Classification of Dysarthria Presence.
CN102222500A (en) Extracting method and modeling method for Chinese speech emotion combining emotion points
CN101261832A (en) Extraction and modeling method of emotional information in Chinese speech
CN103366735B (en) The mapping method of speech data and device
CN101996635B (en) Evaluation method of English pronunciation quality based on stress prominence
CN103366759A (en) Speech data evaluation method and speech data evaluation device
CN102655003A (en) Method for recognizing emotion points of Chinese pronunciation based on sound-track modulating signals MFCC (Mel Frequency Cepstrum Coefficient)
Sabir et al. Improved algorithm for pathological and normal voices identification
Dubey et al. Detection and assessment of hypernasality in repaired cleft palate speech using vocal tract and residual features
Rahman et al. Dynamic time warping assisted svm classifier for bangla speech recognition
CN111341346A (en) Language expression capability evaluation method and system for fusion depth language generation model
Babu et al. Forensic speaker recognition system using machine learning
CN202758611U (en) Speech data evaluation device
Jacob et al. Prosodic feature based speech emotion recognition at segmental and supra segmental levels
Jaid et al. Review of Automatic Speaker Profiling: Features, Methods, and Challenges
Speights et al. Computer-assisted syllable analysis of continuous speech as a measure of child speech disorder

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20131211