A Balanced Approach to Artificial Intelligence for the 2025 PH Elections: Ethical Guidelines, Transparency Approaches, and Political Education

LENTE recognizes that advancements in Artificial Intelligence, such as generative AI, are increasingly used in information and influence operations or misused to peddle disinformation, particularly within the context of elections. The use of deepfakes in the elections of Indonesia and India this year, in the Nigerian and Slovakian elections last year, and in the run-up to the US elections demonstrates the concrete need to safeguard elections from the misuse of AI for election disinformation.

However, LENTE believes it is crucial to approach AI with a healthy level of caution, and it is equally important not to be alarmist, to avoid inciting unnecessary fear. Moreover, any framework or initiative responding to the risks AI poses to democratic institutions must recognize AI's embeddedness in our global digital ecosystem, its widespread adoption across countless industries and sectors, and its role in driving innovation and socio-economic advancement.

Thus, LENTE advocates for a balanced perspective that recognizes the potential challenges posed by AI to our electoral processes, while also acknowledging the impossibility or impracticality of banning AI considering how deeply integrated it is into the infrastructure of modern society. Instead of outright prohibition, LENTE believes that it is more feasible to focus on developing robust ethical guidelines and/or regulatory frameworks that ensure AI is used responsibly during elections.

In its Media Monitoring Project for the 2022 National and Local Elections, LENTE found that the bulk of disinformation originated from affiliate pages. These "Affiliate or Unofficial" pages primarily endorse a candidate, regardless of whether they are directly managed by the candidate or their campaign staff (Section 9c, COMELEC Resolution No. 10730). LENTE's Social Media Monitoring revealed that most disinformation posts on these pages had an average engagement of 1,000 or fewer, highlighting how such posts can evade fact-checkers while still reaching unsuspecting users. Consequently, LENTE recommends rigorous social media monitoring by the Commission to effectively operationalize this provision.

Furthermore, enforcing a ban on AI would require the same level of rigorous social media monitoring by the Commission: the same efforts would be necessary for COMELEC to monitor the proper usage of AI and to take action against the proliferation of misinformation and disinformation. LENTE believes that a ban alone would not prevent content creators from using generative AI for disinformation, which could still evade detection. Without proactive efforts and dedicated personnel from the Commission to conduct thorough social media monitoring, it will be challenging to uphold the principles essential to our democratic process.

The COMELEC should also engage with social media and tech companies, since platforms like TikTok,[1] X[2] (formerly Twitter), Meta,[3] and YouTube[4] have implemented various measures to combat disinformation. The first is content labeling,[5] where misleading or inappropriate content is labeled based on fact-checking data; this was applied during the 2022 National and Local Elections in the Philippines.[6] The second is AI content disclosure,[7] where creators must disclose AI-generated content, especially when it is realistic enough to be mistaken for real entities. The third is policy enforcement through platform community guidelines, where violations can lead to content restriction, suspension, or deplatforming. OpenAI[8] and Midjourney[9] also have policies regarding the use of their tools for political campaigning, although these remain somewhat limited in terms of impact.[10]

The COMELEC should enlist the help of the Department of Information and Communications Technology and other relevant agencies in the development of an “AI Ethics Circular” similar to the effort of the Ministry of Communication and Information (Kominfo) of Indonesia.[11] Kominfo’s Circular No. 9, dated 19 December 2023, outlined the “ethical values, implementation of ethical values, and responsibility in the use and development of AI” that apply to AI-based programming activities in the public and private sectors.

LENTE also recommends the following points in approaching disinformation:

1. Focus on transparency approaches over content regulation: In previous research in which LENTE mapped the disinformation policies of other countries, purely content-based regulation, like that of Bahrain, Cambodia, Egypt, Jordan, Qatar, Thailand, and Saudi Arabia, was shown to lead to censorship and abuse. Content regulation involves broad and ambiguously worded definitions of what constitutes false content or disinformation, used in conjunction with criminal sanctions. Transparency initiatives are less draconian yet effective in addressing the problem of disinformation, as they increase the perception of accountability on the part of suppliers of political content. The AI content disclosure policies of social media platforms, as discussed above, are contributing to a more robust transparency approach toward AI and disinformation. Moreover, there have been calls for more transparency in algorithm development and user data protection.

Comprehensive media, information, and digital literacy programs are needed to empower discerning information consumption. However, on top of these public education efforts, transparency initiatives by COMELEC should also involve providing candidates and political parties a platform for healthy and productive debate and giving them equal opportunities to address election disinformation against them. RA 9006, or the Fair Elections Act, empowers the Commission to do this through the Affirmative Action provision (Section 7) and COMELEC Space and Time (Section 8).

2. Systematic and consistent monitoring of social media is needed: Transparency initiatives should be enabled by systematic and constant monitoring of both traditional and online media, as the two are inseparable sources of information for voters. LENTE recommends that the COMELEC harness the expertise of civil society organizations that have the methodological know-how and tools to effectively monitor social media during elections.

3. Lobby for additional budget to operationalize the Affirmative Action of the Commission on Elections and establish a Social Media Monitoring Unit: To strengthen and operationalize the provision on COMELEC’s affirmative action under the Fair Elections Act, COMELEC, through the Education and Information Department (EID), must lobby for an additional budget to provide national candidates more equal campaigning opportunities and to enhance the quality of election information. This would give meaning to the intent of the framers of the Fair Elections Act. Moreover, the additional budget would also help the EID establish a Social Media Monitoring Unit with a dedicated team of social media monitors.

4. COMELEC should lead a multi-stakeholder effort in tackling disinformation: In relation to points 1 and 2, this effort should involve not only election monitoring organizations/civil society and government agencies but also PR and advertising firms, media, social media companies, political parties, and candidates.

The creation of a Voluntary Code of Conduct for the Practice of Political PR and Advertising was identified as a preliminary initiative to promote transparency and cooperation and to address the problematic aspects of online campaigning and election disinformation previously experienced in the 2016 and 2019 elections. LENTE reached out to multiple PR and Advertising Associations to consult with them on the proposal, which resulted in a Code of Conduct supported by the KBP, PANA, and UPMG. Efforts to engage more PR and Advertising firms in a discussion on their roles and responsibilities in ensuring the integrity of elections should emanate from the Commission on Elections.

LENTE lauds the COMELEC for being proactive in its approach to mis- and disinformation. LENTE strongly recommends exploring ethical guidelines, transparency initiatives, and digital literacy/political education to help the public navigate the challenges posed by disinformation, including AI-enabled disinformation, in our elections.

  1. Newsroom | TikTok. “New Labels for Disclosing AI-Generated Content,” August 16, 2019.
    https://newsroom.tiktok.com/en-us/new-labels-for-disclosing-ai-generated-content. ↩︎
  2. “Our Synthetic and Manipulated Media Policy | X Help.” Accessed June 3, 2024.
    https://help.twitter.com/en/rules-and-policies/manipulated-media. ↩︎
  3. Meta. “Labeling AI-Generated Images on Facebook, Instagram and Threads,” February 6, 2024.
    https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/. ↩︎
  4. “How Does YouTube Responsibly Approach Generative AI? - How YouTube Works.” Accessed June 3, 2024. https://www.youtube.com/howyoutubeworks/our-commitments/responsible-ai/. ↩︎
  5. Wilson, Mark. “Here Is Facebook’s First Serious Attempt To Fight Fake News.” Fast Company,
    December 15, 2016. https://www.fastcompany.com/3066630/here-is-facebooks-first-serious-attempt-to-fight-fake-news. ↩︎
  6. See a screenshot of the content label during the 2022 NLE here. ↩︎
  7. “Our Approach to Responsible AI Innovation - YouTube Blog.” Accessed June 3, 2024.
    https://blog.youtube/inside-youtube/our-approach-to-responsible-ai-innovation/. ↩︎
  8. “OpenAI Won’t Let Politicians Use Its Tech for Campaigning, for Now - The Washington Post.”
    Accessed June 3, 2024. https://www.washingtonpost.com/technology/2024/01/15/openai-election-misinformation-disinformation/. ↩︎
  9. Helmore, Edward. “AI Firm Considers Banning Creation of Political Images for 2024 Elections.” The Guardian, February 10, 2024, sec. Technology. https://www.theguardian.com/technology/2024/feb/10/ai-political-images-ban-trump-biden-midjourney. ↩︎
  10. “A Look at the Political Policies of AI Tools | Bipartisan Policy Center.” Accessed June 3, 2024. https://bipartisanpolicy.org/blog/political-policies-ai-tools/. ↩︎
  11. “Indonesia: Kominfo Publishes Circular on AI Ethics | News Post | DataGuidance.” Accessed June 3, 2024. https://www.dataguidance.com/news/indonesia-kominfo-publishes-circular-ai-ethics. ↩︎


