Systems powered by artificial intelligence (AI) can generate virtually any type of content, including text, images, video, and audio. AI-generated content has many potential applications in marketing, journalism, education, and entertainment. However, it also presents ethical and legal challenges, including the dissemination of false information, the manipulation of data, the plagiarism of others' work, and the violation of intellectual property rights.
The European Union (EU) is concerned about the potential risks and harms of AI-generated content, particularly on large online platforms such as Google and Facebook. In a recent report, the EU argued that users have a right to know whether the content they view or interact with is the product of human or automated labour. As a result, the EU has asked Google and Facebook to label AI-generated content on their platforms, as part of its broader strategy to regulate AI in Europe.
Transparency and accountability are the two guiding principles of the EU's proposal to label AI-generated content. Transparency means that users should be able to distinguish human-generated from machine-generated content and to identify the nature and source of the content they see online. Accountability means that the producers and disseminators of AI-generated content should take full responsibility for its nature, scope, and consequences, and should ensure that it complies with all applicable legal and ethical requirements.
Although the EU's proposal to label AI-generated content is not yet legally binding, it is included in a draft regulation on AI published by the European Commission. The proposed regulation aims to establish a common framework for developing and using artificial intelligence in the EU, based on four levels of risk: unacceptable, high, limited, and minimal. AI-generated content falls into the limited-risk category, meaning that it does not pose a significant threat to human rights or safety but nonetheless requires certain safeguards and obligations.
The proposed regulation requires providers of AI systems that generate or manipulate content to ensure that their systems are designed and developed so that users can be informed that the content was generated or manipulated by AI. This could be accomplished with clearly legible labels or icons displayed on the content itself or on the platform where it appears. Providers must also ensure that their systems do not generate or manipulate content that violates the rights or interests of third parties, such as their rights to privacy, dignity, reputation, intellectual property, or fair competition.
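The draft regulation does not prescribe any particular technical mechanism for this disclosure. As a purely illustrative sketch (the type and function names below are hypothetical, not part of any EU specification), a platform could carry a machine-readable flag alongside each content item and prepend a visible label whenever the flag is set:

```python
from dataclasses import dataclass


@dataclass
class ContentItem:
    """A piece of platform content with a machine-readable provenance flag."""
    body: str
    ai_generated: bool = False


def render_with_label(item: ContentItem) -> str:
    """Prepend a visible disclosure label when the content is AI-generated."""
    if item.ai_generated:
        return "[AI-generated] " + item.body
    return item.body


# Human-authored content is rendered unchanged;
# AI-generated content carries the disclosure label.
print(render_with_label(ContentItem("Market update for May.")))
print(render_with_label(ContentItem("Market update for May.", ai_generated=True)))
```

In practice the flag would travel with the content's metadata (for example, in an API response or embedded provenance record) so that every downstream platform rendering the item can honour the labelling obligation.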
The draft regulation also requires users of AI systems that generate or manipulate content to use those systems responsibly and lawfully. When creating or disseminating content generated or manipulated by AI, they should consider the rights and interests of others, and they should comply with any labelling requirements imposed by the providers of the AI systems or by the platforms where they share the content.
Several experts and stakeholders have praised the EU's proposal to label AI-generated content as a positive step towards greater accountability and transparency online. However, it also raises open questions: how to define and detect AI-generated content, how to enforce the labelling requirements across different platforms and jurisdictions, how to balance labelling obligations against users' freedom of expression and creativity, and how to educate and empower users to critically evaluate the content they consume or interact with online.
Before it becomes a final regulation, the EU's proposal to label AI-generated content must undergo further debate and amendment. The final regulation will then apply to all providers and users of AI systems within the EU, as well as to providers outside the EU who offer their services or products on the EU market.