A London-based generative AI startup backed by Octopus Ventures is under scrutiny.
- The startup’s platform enables the creation of ‘potentially harmful’ content.
- UKTN testing reveals lax safeguards compared to industry peers.
- Images feature public figures in misleading contexts, provoking disinformation fears.
- Platform terms discourage misuse but lack effective enforcement.
A London-based artificial intelligence (AI) startup has come under scrutiny for allowing users to generate content deemed potentially harmful. The firm, backed by significant venture capital funding including Octopus Ventures, finds itself at the centre of a debate over consumer protection and the responsible management of AI capabilities. Its platform offers image and video generation comparable to tools such as OpenAI’s DALL-E, yet appears to lack the safeguards needed to prevent the creation of misleading content.
Tests conducted by UKTN produced unsettling results when the platform was used to create images portraying recognisable figures in controversial scenarios. These included images of British Prime Minister Keir Starmer with a burning flag, as well as a fictitious meeting between Donald Trump and Taylor Swift. Such outputs highlight the risk of misuse by those seeking to spread disinformation or malicious narratives.
Experts have repeatedly warned about the potential for AI to be misused, and the concern intensifies with platforms capable of reproducing real people’s likenesses. AI-generated deepfakes and fabricated public endorsements pose significant ethical dilemmas because they can manipulate public perception. A common precaution among AI platforms is to prohibit the generation of content featuring real individuals without their consent. In stark contrast, attempts with this platform yielded outputs that could distort reality.
Comparative testing shows that established AI services such as Meta AI and ChatGPT apply stringent measures, blocking attempts that could lead to misinformation. When similar prompts were submitted to these services, they responded that they could not create potentially harmful content. The scrutinised startup’s systems, by contrast, generated the requested images without warning or refusal, which is especially alarming given that its terms of use explicitly warn against breaching consent policies.
Despite these clearly articulated terms, the system did not enforce the content moderation needed to prevent breaches, highlighting significant lapses in its protective measures. The findings echo wider concerns in the UK, where AI-generated audio clips featuring false statements attributed to political figures have already circulated, illustrating the damage that unchecked AI tools can inflict. Calls for a more robust regulatory framework governing the use of AI tools are becoming increasingly urgent.
The incident underscores the pressing need for enhanced safeguards in AI platforms to prevent misuse and protect public trust.