Facial Expression Recognition Could Soon Become a Reality for Online Advertising
Image courtesy of Vanguard Visions, Flickr.
In the next few years, online advertising may offer you products based on your facial expression. The proposed method, devised by PhD students from Prince of Songkla University in Thailand, analyzes the movements of facial features—such as the eyes and nose—to determine your emotional reaction when viewing an advert. The greatest benefit for consumers who consent to having their facial expressions used in this way is the delivery of more relevant, higher-quality marketing. To achieve this, the algorithm would determine which ads elicit a positive reaction in the viewer and then show them similar commercials in the future. Using such an algorithm, advertising platforms like Google could also avoid showing adverts that provoke a negative reaction, improving the overall quality of the ads they serve.
Over the last decade, conventional advertising revenue from TV and newspapers has steadily declined, with online marketing spend forecast to overtake TV's by 2019. Advertising firms are therefore looking for ways to make online advertising more profitable. Currently, companies are charged each time somebody clicks on one of their adverts. By exploring facial recognition, advertising platforms like Google hope to increase the number of clicks and thereby grow their ad revenue.
Although previous facial emotion recognition techniques were good at detecting whether an individual was happy or sad, they were less accurate at determining other basic emotions, such as anger and surprise. This made them less reliable, and less suitable for use in advertising, than the method proposed by Stankovic and Karnjanadecha, which distinguishes all seven basic facial expressions (happiness, sadness, surprise, anger, fear, disgust, and contempt) with an accuracy of 96.8%.
Stankovic and Karnjanadecha’s new method greatly improves the recognition rate by reducing errors caused by head movement. The biggest challenge faced by many facial recognition algorithms is compensating for unexpected head movements while recognition takes place: a user cannot be expected to keep their head perfectly still just to receive tailored ads. The new method addresses this by tracking a stable facial feature, such as the nose, as a reference point throughout the recognition stage.
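The idea of a reference point can be sketched as follows. This is an illustrative example only, not the authors' actual implementation: the landmark names, coordinates, and data layout are assumptions, but it shows how expressing every landmark relative to the nose makes whole-head translation cancel out of the measurements.

```python
# Illustrative sketch: normalize facial landmarks relative to a reference
# feature (here, the nose) so that head translation between frames does
# not register as facial-feature movement.

def normalize_landmarks(landmarks, reference="nose"):
    """Express each landmark relative to the reference feature."""
    ref_x, ref_y = landmarks[reference]
    return {
        name: (x - ref_x, y - ref_y)
        for name, (x, y) in landmarks.items()
        if name != reference
    }

# Two frames in which the whole head shifted by (5, -3) pixels but the
# expression itself did not change: the normalized coordinates match.
frame_a = {"nose": (100, 120), "left_eye": (80, 95), "right_eye": (120, 95)}
frame_b = {"nose": (105, 117), "left_eye": (85, 92), "right_eye": (125, 92)}

assert normalize_landmarks(frame_a) == normalize_landmarks(frame_b)
```

Any residual difference between normalized frames can then be attributed to the expression rather than to head movement.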
Similar to existing algorithms, the approach uses multiple images of the facial expression as it unfolds, not just when the expression reaches its peak. The movements of facial landmarks are then measured, compensating for the head movement described above. Next, they are compared against a database of pre-existing image sequences, each labelled with a known expression. This database, called CK+, is used by a wide range of researchers in facial expression recognition; it contains 593 image sequences from 123 subjects, making it a reliable facial-expression benchmark.
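The matching step above can be sketched as a simple nearest-neighbour comparison. This is a minimal toy illustration, not the authors' algorithm: the one-value-per-frame feature, the Euclidean distance, and the three-entry "database" are all assumptions standing in for the real landmark measurements and the CK+ sequences.

```python
# Illustrative sketch: match a measured landmark-movement sequence against
# labelled reference sequences by picking the closest one.
import math

def sequence_distance(seq_a, seq_b):
    """Euclidean distance between two equal-length sequences of
    per-frame landmark displacement values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(seq_a, seq_b)))

def classify(measured, database):
    """Return the expression label of the closest reference sequence."""
    return min(database, key=lambda label: sequence_distance(measured, database[label]))

# Toy reference database: one displacement sequence per expression.
database = {
    "happiness": [0.0, 0.4, 0.9, 1.2],  # features move moderately far
    "surprise":  [0.0, 0.8, 1.6, 2.0],  # features move far and fast
    "sadness":   [0.0, 0.1, 0.2, 0.3],  # small, slow movement
}

print(classify([0.0, 0.35, 1.0, 1.1], database))  # prints "happiness"
```

A real system would compare many landmarks at once and use the hundreds of labelled sequences in CK+, but the principle is the same: the measured sequence is assigned the label of its nearest match.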
Compared with Lucey et al.’s previously proposed method, Stankovic and Karnjanadecha’s approach increases the likelihood of detecting the right facial expression from 88.6% to 96.8%. Moreover, the accuracy is consistent across different facial expressions, whereas the previous method achieved 100% accuracy for happiness but only 68% for sadness. The new approach can therefore be more successfully deployed in real-life situations. For example, advertising platforms could charge companies premium rates when a consumer reacts overwhelmingly positively to one of their adverts, as this would suggest the consumer is more likely to buy the product, or at least to view the brand more favourably. Platforms like Facebook would also know which adverts annoyed their users and could remove them from circulation, which would inevitably raise the quality of ads.
The biggest drawback facing the implementation of facial expression recognition is the associated invasion of privacy. A user may not like the idea that Google captures images of their face every time they perform a search. One solution would be to let the user turn the facial expression recognition feature on or off, with an icon at the top of the browser indicating when recognition is active. The advertising firm could also give legally binding assurances that the personal data collected is not used for any other purpose.
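An opt-in gate of the kind suggested above might look like the following. This is a hypothetical sketch: the class and method names are assumptions, and a real browser feature would involve far more, but it illustrates the key property that nothing is captured unless the user has explicitly consented.

```python
# Illustrative sketch: an opt-in gate where no frames reach the
# recognition pipeline unless the user has enabled the feature.

class ExpressionRecognition:
    def __init__(self):
        self.enabled = False  # off by default: explicit consent required

    def set_consent(self, enabled):
        """Toggle the feature; a UI icon would reflect this state."""
        self.enabled = enabled

    def capture_frame(self, camera_frame):
        """Pass a frame on for analysis only when the user has opted in."""
        if not self.enabled:
            return None  # nothing is captured or stored
        return camera_frame

rec = ExpressionRecognition()
print(rec.capture_frame("frame-1"))  # prints None: no consent given yet
rec.set_consent(True)
print(rec.capture_frame("frame-1"))  # prints frame-1
```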
In the next few years, new algorithms will further enhance recognition rates. Progress is already being made on a third version of the CK database, which will contain even more image sequences from more subjects, improving the reliability of matching measurements to the right facial expressions. Facial-expression-recognition software has already been trialled commercially; for example, Plan UK’s “Because I’m a girl” campaign changed the advert displayed based on the viewer’s gender. This kind of reactive advertising can only increase as the technology becomes more widespread.
- Vanguard Visions, “Search Engine Online Advertising”, 2014.
- Stankovic, I. and Karnjanadecha, M., “Use of septum as reference point in a neurophysiologic approach to facial expression recognition”, Songklanakarin Journal of Science and Technology, 2013.
- MarketingCharts, “Online and Traditional Media Advertising Outlook, 2015–2019”, 2015.
- Leon, H., “How 5 brands have used facial recognition technology”, 2015.