Recent technological innovations in artificial intelligence (AI) and machine learning are transforming the advertising landscape, enabling highly personalized ads, the use of chatbots in e-commerce and brand communication, campaigns using deepfakes, and synthetic ads created by generative adversarial networks (GANs). These developments have led to the proliferation of synthetic advertising, a type of manipulated advertising in which ads are “generated or edited through the artificial and automatic production and modification of data” (Campbell et al., 2021, p. 1).
Alarmingly, synthetic advertising that involves highly sophisticated manipulation (e.g., personalized ads, hyper-realistic videos) has been argued to heighten consumers’ perceptions of the ads’ realness and creativity, which in turn lead to more positive attitudes and greater purchase intentions. Moreover, consumers often find it difficult to detect the falsities involved in synthetic advertising, which prevents them from making informed decisions. Extant research on this advertising phenomenon, and the policy recommendations that should follow from it, are lagging behind.
This project seeks to fill these gaps. Its aims are to: 1) theoretically define synthetic advertising and examine consumers’ cognitive and affective processing of it; 2) develop a scale to gauge consumers’ literacy toward AI-powered synthetic advertising and identify their knowledge gaps; 3) educate and inform consumers in the literacy needed to recognize ad falsity across different forms of synthetic advertising, empowering them to make informed decisions; and 4) offer evidence-based, actionable policy recommendations to regulatory bodies in Singapore.