CN109766844B - A mobile terminal identity identification and memory method based on brooch device - Google Patents
A mobile terminal identity identification and memory method based on brooch device
- Publication number
- CN109766844B CN201910031147.XA
- Authority
- CN
- China
- Prior art keywords
- face image
- neural network
- mobile terminal
- brooch
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
- 238000000034 method Methods 0.000 title claims abstract description 30
- 238000013528 artificial neural network Methods 0.000 claims abstract description 70
- 210000002569 neuron Anatomy 0.000 claims description 25
- 238000012549 training Methods 0.000 claims description 25
- 230000006870 function Effects 0.000 claims description 9
- 210000004205 output neuron Anatomy 0.000 claims description 6
- 238000012937 correction Methods 0.000 claims description 5
- 238000010606 normalization Methods 0.000 claims description 5
- 238000012545 processing Methods 0.000 claims description 5
- 238000004364 calculation method Methods 0.000 claims description 4
- 238000000513 principal component analysis Methods 0.000 claims description 4
- 230000004913 activation Effects 0.000 claims description 3
- 238000007781 pre-processing Methods 0.000 claims description 3
- 238000001514 detection method Methods 0.000 claims description 2
- 238000012986 modification Methods 0.000 claims description 2
- 230000004048 modification Effects 0.000 claims description 2
- 230000001815 facial effect Effects 0.000 claims 3
- 238000004891 communication Methods 0.000 description 8
- 238000010586 diagram Methods 0.000 description 3
- 230000008676 import Effects 0.000 description 3
- 230000008569 process Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 1
- 239000002360 explosive Substances 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
Landscapes
- Image Analysis (AREA)
Abstract
The invention proposes a mobile terminal identity identification and memory method based on a brooch device, which enables users to learn the identity of the person they are talking to in real time. The method mainly comprises the following steps. Step 1: the user downloads and installs the smart companion memory client on the mobile terminal. Step 2: the user starts the smart companion memory client, and the face image captured by the brooch device is transmitted to the mobile terminal via Bluetooth. Step 3: the mobile terminal forwards the received face image to the server, where a trained preset neural network classifies and recognizes it. Using only a simple brooch device and a mobile terminal, the method quickly identifies the other party, displays the identification and memory results in real time, and automatically prompts the user with the other party's name and related information, thereby greatly reducing the probability of mistaking someone or calling them by the wrong name.
Description
Technical Field
The present invention relates to the technical field of electronic equipment, and in particular to a mobile terminal identity identification and memory method based on a brooch device.
Background Art
In the era of big data, data grows explosively, and the amount of information people need to handle increases by the day. Business people handle many engagements and maintain a very wide circle of contacts, which makes it difficult for them to accurately remember the name and related information of every person they meet in business settings. Mistaking someone or calling them by the wrong name causes unnecessary embarrassment and misunderstanding, leaves a bad impression, and directly affects subsequent communication and the building and maintenance of relationships. How to help business people quickly and accurately identify the people they interact with has therefore become an increasingly urgent need.
Memory also gradually deteriorates with age. For middle-aged and elderly people, accurately remembering the relatives and friends around them can likewise be difficult, so they too need a tool to help them identify relatives and friends accurately.
Summary of the Invention
The purpose of the present invention is to provide a mobile terminal identity identification and memory method based on a brooch device. The brooch device captures a face image and sends it to a mobile terminal via Bluetooth; a neural network then classifies and recognizes the face, and the identification and memory results are displayed on the mobile terminal with an automatic prompt to the user. This allows the user to learn the other party's identity in real time and greatly reduces the probability of mistaking someone or calling them by the wrong name.
To achieve the above purpose, the present invention proposes a mobile terminal identity identification and memory method based on a brooch device, comprising the following steps:
S1) The user downloads and installs the smart companion memory client on the mobile terminal.
S2) The user starts the smart companion memory client. If the user is starting the client for the first time, the user first registers on the client and pre-imports several groups of face images together with the contact information associated with each image; these pre-imported face images and their associated contact information are saved to the server-side database and used as a training set to train a preset neural network on the server. After training is complete, the client starts the brooch device, receives the face image captured by it, and finally turns on the mobile terminal's Bluetooth to pair the terminal with the brooch device, so that the captured face image is transmitted to the mobile terminal over Bluetooth. If the user is not starting the client for the first time, the user first logs in to the client; the client then starts the brooch device, receives the captured face image, and finally turns on Bluetooth to pair the mobile terminal with the brooch device and transmit the captured face image to the mobile terminal.
The brooch device captures a face image when the user presses the photo button on the device.
S3) The mobile terminal forwards the received face image to the server, where the preset neural network trained in step S2) classifies and recognizes it; the result is then matched against the contact information associated with the pre-imported face images in the server-side database. If the match succeeds, a match-success message is displayed on the mobile terminal interface. If the match fails, a match-failure message is displayed, and the unrecognized face image together with its contact information is saved to the unmatched library in the server-side database; after saving, that contact information can be added to the server-side contact information library.
In addition, the smart companion memory client also provides pages for adding, deleting, modifying and querying contacts in the contact information library, a history page for viewing successfully matched contacts, a page displaying saved face images and their associated contact information, and a client settings page.
The mobile terminal is a smartphone, a tablet computer, or the like.
The present invention provides a mobile terminal identity identification and memory method based on a brooch device. Using only a simple brooch device and a mobile terminal, the other party's identity can be recognized quickly; the smart companion memory client on the mobile terminal displays the identification and memory results in real time and automatically prompts the user with the other party's name and related information. This lets the user learn the other party's identity in real time, enables deeper and more active communication, helps maintain good relationships, and greatly reduces the probability of mistaking someone or calling them by the wrong name.
Brief Description of the Drawings
FIG. 1 is a flowchart of the mobile terminal identity identification and memory method of the present invention.
FIG. 2 is a detailed operation flowchart of the mobile terminal matching a face image according to the present invention.
FIG. 3 is a detailed operation flowchart of training face images with the neural network algorithm on the server side according to the present invention.
FIG. 4 shows the user registration and login interface of the smart companion memory client of the present invention.
FIG. 5 shows the match-success display page of the smart companion memory client of the present invention.
FIG. 6 shows the match-failure display page of the smart companion memory client of the present invention.
FIG. 7 shows the unmatched-library page of the smart companion memory client of the present invention.
FIG. 8 shows the page for adding a new contact in the smart companion memory client of the present invention.
FIG. 9 shows the history page of the smart companion memory client of the present invention for viewing successfully matched contacts.
FIG. 10 shows the page displaying information about saved contact face images in the smart companion memory client of the present invention.
FIG. 11 shows the settings page of the smart companion memory client of the present invention.
FIG. 12 is a schematic diagram of the neural network used in the present invention.
Detailed Description of the Embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and specific embodiments.
The present invention proposes a mobile terminal identity identification and memory method based on a brooch device, the flowchart of which is shown in FIG. 1. The method includes the following steps:
S1) The user downloads and installs the smart companion memory client on the mobile terminal.
S2) The user starts the smart companion memory client. The registration and login interface is shown in FIG. 4. If the user is starting the client for the first time, the user taps the [Register] button to register and pre-imports several groups of face images together with the contact information associated with each image; these pre-imported face images and their associated contact information are saved to the server-side database and used as a training set to train a back-propagation (BP) neural network on the server. After the BP neural network has been trained, the client starts the brooch device, receives the face image captured by it, and finally turns on the mobile terminal's Bluetooth to pair the terminal with the brooch device, so that the captured face image is transmitted to the mobile terminal over Bluetooth. If the user is not starting the client for the first time, the user first enters the account name and password and taps the [Login] button to log in; the client then starts the brooch device, receives the captured face image, and finally turns on Bluetooth to pair the mobile terminal with the brooch device and transmit the captured face image to the mobile terminal. The brooch device captures a face image when the user presses the photo button on the device. A sketch of this client-side flow is given below.
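Purely as an illustration of the client-side flow just described, the sketch below receives a captured image from the already paired brooch device and forwards it to the server. The Bluetooth receive step is stubbed out, and the server URL, endpoint name and response fields are hypothetical placeholders rather than anything specified in the patent.

```python
import requests  # assumed HTTP client; the actual transport is not specified in the patent

SERVER_URL = "https://example.com/api"  # hypothetical server endpoint


def receive_image_from_brooch() -> bytes:
    """Placeholder for receiving one JPEG frame from the paired brooch device over Bluetooth.

    The real client would read from the Bluetooth link established when the terminal
    and the brooch device were paired; since that transport is not described in the
    patent, this stub simply loads a local file for testing.
    """
    with open("captured_face.jpg", "rb") as f:
        return f.read()


def send_image_to_server(image_bytes: bytes, user_token: str) -> dict:
    """Forward the captured face image to the server for classification (step S3)."""
    response = requests.post(
        f"{SERVER_URL}/recognize",            # hypothetical endpoint name
        files={"face_image": ("face.jpg", image_bytes, "image/jpeg")},
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"matched": true, "name": "...", "info": "..."}


if __name__ == "__main__":
    token = "demo-token"                      # obtained at registration/login in the real client
    image = receive_image_from_brooch()
    result = send_image_to_server(image, token)
    if result.get("matched"):
        print("Match success:", result.get("name"), result.get("info"))
    else:
        print("Match failed; image saved to the unmatched library on the server.")
```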
The training set contains several groups of samples. Each group of samples contains several attributes, some of which relate to the face image and the rest of which relate to the contact information associated with that face image; an illustrative representation of such a sample is sketched below.
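As a minimal sketch of this sample structure only, one possible in-memory representation is shown here; the field names are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class TrainingSample:
    """One group of samples: face-image attributes plus associated contact attributes."""
    face_features: np.ndarray          # attributes derived from the face image (e.g. a 1x20 PCA vector)
    contact_attributes: np.ndarray     # encoded contact-information attributes used as the training target
    contact_record: dict = field(default_factory=dict)  # raw contact details kept for display on the client
```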
S3) The mobile terminal forwards the received face image to the server, where the BP neural network trained in step S2) classifies and recognizes it; the result is then matched against the contact information associated with the pre-imported face images in the server-side database. The detailed matching procedure of the mobile terminal's smart companion memory client is shown in FIG. 2:
If the match succeeds, a match-success message is displayed on the mobile terminal interface, as shown in FIG. 5. The displayed image is the current face image captured by the brooch device, and the displayed information is the user's description of the contact associated with that face image, such as a name and a location. The user can also tap the [Replace] icon in the client to decide whether to replace the pre-imported face image in the server-side database that carries the same contact information with the current face image captured by the brooch device. If the user chooses to replace it, the current captured face image is saved to the server-side database and the replaced pre-imported face image is deleted; the server then uses the new face image, together with the contact information of the replaced image, as a new training sample for subsequent BP neural network training. If the user chooses not to replace it, the current captured face image is saved to the history page in the client.
If the match fails, a match-failure message is displayed on the mobile terminal interface, and the current face image captured by the brooch device, together with its associated contact information, is saved to the unmatched library in the server-side database, as shown in FIG. 6. The user can open the unmatched library to view the face images that failed to match and add or delete contact information for them, as shown in FIG. 7. Specifically, the user taps the [Add] icon to add contact information for a face image in the unmatched library, such as a name and a position, as shown in FIG. 8; the user taps the [Save] icon to save the current captured face image and its contact information to the server-side database as a new training sample for subsequent BP neural network training; and the user taps the [Delete] icon to delete the face image from the unmatched library. Deleted face images can still be viewed on the history page in the client, as shown in FIG. 9. A sketch of this server-side matching logic follows.
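The sketch below illustrates the server-side branching described above: classify the incoming face feature vector with the trained network, return the associated contact information on success, and file the image in the unmatched library on failure. The function names, the confidence threshold and the storage structures are assumptions introduced for illustration; the patent does not specify them.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off; not specified in the patent


def match_face(feature_vector: np.ndarray, network, contact_db: dict, unmatched_db: list) -> dict:
    """Classify a 1x20 PCA feature vector and match it against the contact database.

    `network` is assumed to expose a `predict` method returning one score per contact
    category; `contact_db` maps a class index to the contact information imported in
    step S2); `unmatched_db` stands in for the unmatched library.
    """
    scores = np.ravel(network.predict(feature_vector))   # output neuron activations
    best_class = int(np.argmax(scores))
    confidence = float(scores[best_class])

    if confidence >= CONFIDENCE_THRESHOLD and best_class in contact_db:
        # Match success: return the stored contact information for display on the client.
        return {"matched": True, "contact": contact_db[best_class], "confidence": confidence}

    # Match failure: keep the image so the user can label it and extend the training set.
    unmatched_db.append({"feature": feature_vector, "confidence": confidence})
    return {"matched": False}
```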
Further, before the pre-imported face images and their associated contact information are used as the training set, the pre-imported face images must be preprocessed. The preprocessing method includes the following steps:
a1) First, denoise a pre-imported face image.
a2) Apply geometric correction to the denoised face image from step a1) in order to align its position and eliminate the effects of face scale changes and rotation.
a3) Because the face recognition rate is strongly affected by lighting conditions and the average grey value differs from image to image, normalize the grey-level amplitude of the geometrically corrected face image from step a2).
a4) Determine whether a face is present in the image after the grey-level normalization of step a3): if so, go to step a41); otherwise go to step a42).
a41) Perform key-point detection and alignment on the normalized face image from step a3), convert it into a 1×20-dimensional feature column vector using the principal component analysis (PCA) algorithm, and feed this vector, together with the contact information associated with the face image, into the BP neural network as one group of sample inputs to train the network; then go to step a5).
a42) Remind the user on the mobile terminal interface that this face image is invalid and that a new pre-imported face image must be imported, then go to step a1).
a5) Import the next pre-imported face image and go to step a1), until all pre-imported face images have been preprocessed; a sketch of this preprocessing pipeline is given below.
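The following sketch shows one possible realization of steps a1) to a5) using OpenCV and scikit-learn. The concrete operators (non-local-means denoising, a fixed-size resize standing in for geometric correction, histogram equalization for grey-level normalization, a Haar cascade for face detection) and the 100×100 working resolution are assumptions chosen for illustration; the patent names the steps but not the operators.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def preprocess_one(path: str) -> np.ndarray | None:
    """Steps a1)-a4): denoise, correct, normalize, and check that a face is present."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return None
    img = cv2.fastNlMeansDenoising(img)            # a1) denoising
    img = cv2.resize(img, (100, 100))              # a2) stand-in for geometric correction
    img = cv2.equalizeHist(img)                    # a3) grey-level amplitude normalization
    faces = FACE_CASCADE.detectMultiScale(img)     # a4) face presence check
    if len(faces) == 0:
        return None                                # a42) invalid image, ask for a new one
    x, y, w, h = faces[0]
    face = cv2.resize(img[y:y + h, x:x + w], (100, 100))  # a41) crude crop/alignment
    return face.flatten().astype(np.float32)


def build_training_vectors(image_paths: list[str]) -> np.ndarray:
    """Step a41): project every valid face onto a 1x20 PCA feature vector (needs >= 20 images)."""
    flat = [v for v in (preprocess_one(p) for p in image_paths) if v is not None]
    pca = PCA(n_components=20)                     # 20 matches the BP input layer size
    return pca.fit_transform(np.stack(flat))       # one 1x20 vector per face image
```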
Further, the method of training the BP neural network with the training set is shown in FIG. 3 and specifically includes the following steps:
b1) Initialize the BP neural network: randomly assign a non-zero value to each weight and bias in the BP neural network, then go to step b2).
b2) Input a group of samples for the BP neural network to learn, and compute, in a forward pass, the input and output values of the neurons in every layer of the BP neural network.
b3) Check whether the actual output of the final output layer of the BP neural network processed in step b2) agrees with the expected output: if it does, go to step b31); otherwise go to step b32).
b31) Input the next group of samples for the BP neural network to learn, then go to step b4).
b32) Compute the local gradient of every layer of the BP neural network according to the back-propagation algorithm, then go to step b33).
b33) Correct every weight and bias in the BP neural network according to the computed local gradients of each layer, then go to step b2).
b4) Check whether the BP neural network has learned all the samples: if so, go to step b5); otherwise go to step b2).
b5) End the training; a minimal sketch of this training loop is given below.
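The sketch below is a minimal NumPy implementation of the b1) to b5) loop for the three-layer network described next (20 inputs, one sigmoid hidden layer, one sigmoid output layer, squared-error loss). The tolerance used to decide whether the actual output agrees with the expected output, and the use of plain per-sample gradient descent, are assumptions made for illustration.

```python
import numpy as np


def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))


def train_bp(X: np.ndarray, Y: np.ndarray, hidden: int, lr: float = 0.1,
             tol: float = 1e-3, epochs: int = 1000) -> tuple:
    """b1)-b5): per-sample back-propagation training of a 3-layer network."""
    d, n = X.shape[1], Y.shape[1]
    rng = np.random.default_rng(0)
    # b1) random initialization of weights and biases (non-zero almost surely)
    W1, b1 = rng.uniform(-0.5, 0.5, (d, hidden)), rng.uniform(-0.5, 0.5, hidden)
    W2, b2 = rng.uniform(-0.5, 0.5, (hidden, n)), rng.uniform(-0.5, 0.5, n)

    for _ in range(epochs):                      # outer loop until b4) is satisfied
        for x, y in zip(X, Y):                   # b2) present one sample, forward pass
            h = sigmoid(x @ W1 + b1)             # hidden layer output
            o = sigmoid(h @ W2 + b2)             # output layer output
            err = o - y
            if 0.5 * np.sum(err ** 2) < tol:     # b3)/b31) output already acceptable
                continue
            # b32) local gradients of the output and hidden layers
            delta_o = err * o * (1.0 - o)
            delta_h = (delta_o @ W2.T) * h * (1.0 - h)
            # b33) correct every weight and bias, then return to b2)
            W2 -= lr * np.outer(h, delta_o)
            b2 -= lr * delta_o
            W1 -= lr * np.outer(x, delta_h)
            b1 -= lr * delta_h
    return W1, b1, W2, b2                        # b5) training finished
```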
A three-layer BP neural network is used here because it can realize an arbitrary nonlinear mapping; networks with more layers are not used because, compared with a three-layer network, they are more prone to getting trapped in local minima of the weights and biases when updated with the back-propagation algorithm, as shown in FIG. 12. The input layer of this back-propagation (BP) network has 20 neurons, the number of hidden-layer neurons is computed as shown in Equation (1.7), and the number of output-layer neurons depends on the number of categories of contact information in the server-side database, i.e. the number of attributes of the associated contact information.
To reduce the error rate, an improved Sigmoid function, shown as Equation (1.1), is used as the activation function of the BP neural network,
where $\chi = w_1 x_1 + w_2 x_2 + w_3 x_3 + \dots + w_d x_d + b$ and $b$ is the bias. The $m$-th group of samples is denoted $(X_m, Y_m)$, where $m = 1, 2, 3, \dots, q$ and $q$ is the total number of samples; $X_m = (x_1, x_2, \dots, x_d)$ denotes the attributes of the $m$-th sample that relate to the face image, and $Y_m$ denotes the attributes that relate to the contact information associated with the face image in the $m$-th sample; $x_p$ ($p = 1, 2, 3, \dots, d$) is the value of the $p$-th attribute of the $m$-th sample in $X_m$; $d$ is the number of attributes of a sample in $X_m$, i.e. the number of input-layer neurons of the BP neural network, with $d = 20$; and $w_p$ ($p = 1, 2, 3, \dots, d$) is the weight corresponding to $x_p$.
The error function of the BP neural network at sample $(X_m, Y_m)$ is given by Equation (1.2):

$$E_m = \frac{1}{2}\sum_{i=1}^{n}\left(\hat{y}^{(m)}_i - y^{(m)}_i\right)^2 \qquad (1.2)$$

where $n$ is the number of attributes of a sample in $Y_m$, i.e. the number of output neurons of the BP neural network, with $n = 1024$; $y^{(m)}_i$ is the actual output of the $i$-th output neuron for the $m$-th sample, and $\hat{y}^{(m)}_i$ is the expected output of the $i$-th output neuron for the $m$-th sample.
For each group of samples, the correction rules for every weight and bias in the BP neural network are given by Equations (1.3)-(1.4), (1.5) and (1.6):

$$\Delta w^{(l)}_{ij} = -\rho\,\frac{\partial E}{\partial w^{(l)}_{ij}} \quad (1.3), \qquad w^{(l)}_{ij} \leftarrow w^{(l)}_{ij} + \Delta w^{(l)}_{ij} \quad (1.4)$$

$$\Delta b^{(l)}_{i} = -\rho\,\frac{\partial E}{\partial b^{(l)}_{i}} \quad (1.5), \qquad b^{(l)}_{i} \leftarrow b^{(l)}_{i} + \Delta b^{(l)}_{i} \quad (1.6)$$

where, for each group of samples, $w^{(l)}_{ij}$ denotes the weight from the $j$-th neuron in layer $l-1$ of the BP neural network to the $i$-th neuron in layer $l$, $\Delta w^{(l)}_{ij}$ denotes the computed adjustment of that weight, $E$ is the error function computed for each group of samples by the method of Equation (1.2), and $\partial E / \partial w^{(l)}_{ij}$ is the partial derivative of $E$ with respect to $w^{(l)}_{ij}$. Likewise, for each group of samples, $b^{(l)}_{i}$ is the bias of the $i$-th neuron in layer $l$, $\Delta b^{(l)}_{i}$ is the computed adjustment of that bias, and $\partial E / \partial b^{(l)}_{i}$ is the partial derivative of $E$ with respect to $b^{(l)}_{i}$. $\rho$ is the learning rate, taking a value between 0 and 1. The number of hidden-layer neurons is $N = \lceil\sqrt{d+n}\,\rceil + a$ (Equation (1.7)), where $\lceil\cdot\rceil$ denotes rounding up, $d$ is the number of input-layer neurons of the BP neural network, $n$ is the number of output-layer neurons, and the constant $a$ ranges from 1 to 10; because there are many input features, $a = 10$ here. The BP neural network has 3 layers, namely the input layer in layer 1, the hidden layer in layer 2 and the output layer in layer 3, i.e. $l = 2, 3$. A short worked example of this layer sizing is given below.
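As a small worked example of the hidden-layer sizing $N = \lceil\sqrt{d+n}\,\rceil + a$ with $d = 20$ and $a = 10$; the value of $n$ used here is taken from the output-layer size stated for Equation (1.2) and is otherwise an assumption for illustration, since in general it depends on the contact-information categories in the database.

```python
import math


def hidden_layer_size(d: int, n: int, a: int = 10) -> int:
    """N = ceil(sqrt(d + n)) + a, the empirical hidden-layer sizing used above."""
    return math.ceil(math.sqrt(d + n)) + a


if __name__ == "__main__":
    d = 20      # input neurons: length of the PCA feature vector
    n = 1024    # output neurons, as stated for Equation (1.2)
    print(hidden_layer_size(d, n))  # ceil(sqrt(1044)) + 10 = 33 + 10 = 43
```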
As shown in FIG. 10, in this embodiment of the present invention, tapping the [Contact] icon displays the contact information associated with the saved face images, grouped for display. In this embodiment the contact information is grouped by the location where the photo was taken, such as Shanghai, Chengdu or Wuhan; the user may also group it in other ways according to their own needs.
As shown in FIG. 11, in this example of the present invention, tapping the [Settings] icon displays the client settings, such as the location at which face images are classified and recognized, the specific time of classification and recognition, and the description of the identity information attached to face images.
Tapping the [Bluetooth] icon displays the Bluetooth pairing information.
Tapping the [Log out] icon logs out of the current account; the user must log in again on the next launch.
In addition, the smart companion memory client also includes a history page for viewing successfully matched contacts.
The above-mentioned mobile terminal is a smartphone, a tablet computer, or the like.
The present invention provides a mobile terminal identity identification and memory method based on a brooch device. A simple brooch device captures a face image and sends it to a mobile terminal via Bluetooth, a neural network then classifies and recognizes the face, and the mobile terminal displays the identification and memory results in real time, automatically prompting the user with the other party's name and related information. This lets the user learn the other party's identity in real time, enables deeper and more active communication, helps maintain good relationships, and greatly reduces the probability of mistaking someone or calling them by the wrong name.
The specific embodiments of the present invention are described above, but only by way of example; the interface figures are schematic. Various modifications may be made to these embodiments in practical applications without departing from the principle and spirit of the present invention. Changes made by equivalent substitution are obvious, and all inventions and creations that make use of the concept of the present invention fall within the scope of protection.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910031147.XA CN109766844B (en) | 2019-01-14 | 2019-01-14 | A mobile terminal identity identification and memory method based on brooch device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910031147.XA CN109766844B (en) | 2019-01-14 | 2019-01-14 | A mobile terminal identity identification and memory method based on brooch device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN109766844A CN109766844A (en) | 2019-05-17 |
| CN109766844B true CN109766844B (en) | 2022-10-14 |
Family
ID=66454004
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910031147.XA Expired - Fee Related CN109766844B (en) | 2019-01-14 | 2019-01-14 | A mobile terminal identity identification and memory method based on brooch device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109766844B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110163436A (en) * | 2019-05-23 | 2019-08-23 | 西北工业大学 | Intelligent workshop production optimization method based on bottleneck prediction |
| CN111899035B (en) * | 2020-07-31 | 2024-04-30 | 西安加安信息科技有限公司 | High-end wine authentication method, mobile terminal and computer storage medium |
| CN114359287A (en) * | 2022-03-21 | 2022-04-15 | 青岛正信德宇信息科技有限公司 | Image data processing method and device |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107133576A (en) * | 2017-04-17 | 2017-09-05 | 北京小米移动软件有限公司 | Age of user recognition methods and device |
| CN107590141A (en) * | 2017-10-19 | 2018-01-16 | 崔玉桂 | A kind of user's meet prompt terminal and method |
| CN109117801A (en) * | 2018-08-20 | 2019-01-01 | 深圳壹账通智能科技有限公司 | Method, apparatus, terminal and the computer readable storage medium of recognition of face |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9042596B2 (en) * | 2012-06-14 | 2015-05-26 | Medibotics Llc | Willpower watch (TM)—a wearable food consumption monitor |
| CN108596140A (en) * | 2018-05-08 | 2018-09-28 | 青岛海信移动通信技术股份有限公司 | A kind of mobile terminal face identification method and system |
- 2019-01-14 CN CN201910031147.XA patent/CN109766844B/en not_active Expired - Fee Related
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107133576A (en) * | 2017-04-17 | 2017-09-05 | 北京小米移动软件有限公司 | Age of user recognition methods and device |
| CN107590141A (en) * | 2017-10-19 | 2018-01-16 | 崔玉桂 | A kind of user's meet prompt terminal and method |
| CN109117801A (en) * | 2018-08-20 | 2019-01-01 | 深圳壹账通智能科技有限公司 | Method, apparatus, terminal and the computer readable storage medium of recognition of face |
Non-Patent Citations (1)
| Title |
|---|
| BP神经网络在人脸识别中的应用研究 (Application research of BP neural networks in face recognition); 冯玉涵; 《计算机光盘软件与应用》 (Computer CD Software and Applications); 2014-01-15; Vol. 17, No. 2; pp. 152, 154 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN109766844A (en) | 2019-05-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111738357B (en) | Method, device and equipment for identifying garbage pictures | |
| CN112395979B (en) | Image-based health state identification method, device, equipment and storage medium | |
| CN108197532A (en) | The method, apparatus and computer installation of recognition of face | |
| CN115115552A (en) | Image correction model training method, image correction device and computer equipment | |
| CN109766844B (en) | A mobile terminal identity identification and memory method based on brooch device | |
| CN103020602B (en) | Based on the face identification method of neural network | |
| CN110991249A (en) | Face detection method, face detection device, electronic equipment and medium | |
| CN114127801B (en) | Systems and methods for utilizing person identifiability across a network of devices | |
| US11899765B2 (en) | Dual-factor identification system and method with adaptive enrollment | |
| CN112329586B (en) | Customer return visit method and device based on emotion recognition and computer equipment | |
| CN110489659A (en) | Data matching method and device | |
| CN110472509B (en) | Fat-lean recognition method and device based on face image and electronic equipment | |
| CN113591603A (en) | Certificate verification method and device, electronic equipment and storage medium | |
| US11295117B2 (en) | Facial modelling and matching systems and methods | |
| US11430283B2 (en) | Methods and systems for delivering a document | |
| CN111382410A (en) | Face verification method and system | |
| CN110516426A (en) | Identity identifying method, certification terminal, device and readable storage medium storing program for executing | |
| JP2023530893A (en) | Data processing and trading decision system | |
| CN110084142B (en) | Age privacy protection method and system for face recognition | |
| US20210256468A1 (en) | Methods and systems for improved mail delivery and notifications | |
| US12243336B2 (en) | Authentication of age, gender, and other biometric data from live images of users | |
| CN114387635B (en) | Method, device and electronic device for updating biometric database | |
| CN119337353B (en) | Multi-level network authentication and access control system | |
| CN108830217B (en) | Automatic signature distinguishing method based on fuzzy mean hash learning | |
| US20230046250A1 (en) | System and method for user interface management to provide an augmented reality-based therapeutic experience |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20221014 |