17/01/2024 (Malaysia) - OpenAI, the developer behind ChatGPT, announced its intention to combat disinformation ahead of several key elections this year in countries that are home to half of the world's population. Amid the global AI boom spurred by the success of ChatGPT, concerns have grown that such technologies could be misused to spread disinformation and sway voters. In response, OpenAI has decided to restrict the use of its technologies, including ChatGPT and DALL-E 3, for political campaigning, particularly in major electoral countries such as the US, India, and Britain.
The company expressed its commitment to preventing its technology from undermining democratic processes in a recent blog post. OpenAI is currently assessing the impact of its tools on personalized persuasion and has temporarily halted their use in political campaigning and lobbying efforts until a clearer understanding is reached.
The World Economic Forum recently identified AI-driven disinformation and misinformation as significant global risks. These could potentially destabilize newly elected governments in major economies, underscoring the urgent need for solutions. The rapid development of AI text and image generators has heightened these risks, making it difficult to distinguish between authentic and manipulated content.
To address these challenges, OpenAI is developing new tools to provide reliable attribution for text generated by ChatGPT and to detect images created with DALL-E 3. The company plans to implement digital credentials that use cryptography to trace content origins, through the Coalition for Content Provenance and Authenticity (C2PA), a coalition whose members include major tech firms such as Microsoft, Sony, Adobe, Nikon, and Canon. Furthermore, ChatGPT will direct users to official websites for procedural election information, and DALL-E 3 includes safeguards to prevent the creation of images depicting real individuals, including politicians.
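The core idea behind cryptographic provenance credentials of the kind C2PA standardizes is to bind a hash of the content to a signed origin claim, so any later edit to the content breaks the binding. The sketch below illustrates that mechanism only; it is not the C2PA format. Real C2PA manifests are embedded in media files and signed with X.509 certificates via COSE, whereas this demo uses an HMAC with a made-up shared key (`SIGNING_KEY`) purely to show the tamper-evidence principle.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration; real C2PA manifests are
# signed with certificate-backed keys, not a shared secret.
SIGNING_KEY = b"demo-provenance-key"

def make_manifest(content: bytes, generator: str) -> dict:
    """Bind a content hash and an origin claim into a signed manifest."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Accept only if the signature is intact and the content hash matches."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"\x89PNG...stand-in image bytes"
manifest = make_manifest(image, "DALL-E 3")
assert verify_manifest(image, manifest)            # untouched content verifies
assert not verify_manifest(image + b"x", manifest) # any edit breaks the binding
```

Because the signature covers the content hash, a verifier can detect both a forged origin claim and post-generation tampering with the image bytes, which is what makes such credentials useful for tracing AI-generated media.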
This move by OpenAI follows similar steps taken by US tech giants such as Google and Meta to limit AI's role in election interference. The importance of these measures is highlighted by the proliferation of deepfakes and doctored content in political contexts, as seen in recent elections. These developments underscore the growing concern about the impact of AI on the integrity of information and the trust in political institutions.