Started June 7th, 2024 · 25 replies · Latest reply by Out-Of-Sync 2 months, 3 weeks ago
It definitely degrades the experience when useless computer-generated "art" posts with 780213 tags appear in every search. This was **already** happening even before generative AI: some script kiddie creates 10,000 random mashups and posts them. It's even happened here, though luckily not as widespread as in other places.
I used AI on two albums for generating breakbeats, but the novelty wore off and the quality was pretty bad too. The rationale was that breakbeats are usually sourced through non-legal means anyway, so generating new breakbeats from existing material didn't seem like such a bad thing. But there's only so much it can do, and I ended up modifying the breakbeats with chains of effects anyway. I didn't upload any of it to Freesound.
The issues surrounding "generative AI" are so obfuscated by misleading terminology, shady politics and outright propaganda that I'm not even sure where to begin. I've been trying to follow the public debates closely, albeit more from the skeptical, tech-critical side of things. While I try to remain open to technological "innovations" and their potential benefits, the move (gamble?) to shift as much social, capital and energy resources as fast as possible to machine-driven computational probability processes (i.e. "AI") reveals some deeply concerning but all too familiar historical patterns (which are not even specifically about technology itself). I will only highlight the two that I feel are most relevant to the debate around the mods offering up the whole of Freesound as training data (regardless of how any contributors may have felt about this).

The first is a fairly clear and "inevitable" power shift toward automation that goes all the way back to the earliest days of the "industrial revolution", when skilled textile workers were reduced or eliminated by factory owners in favour of more sophisticated machinery to increase output while cutting back on labour costs. In this way you could consider me a "digital luddite" who prefers hands-on "digital labour" over mass-automated processes that are supposed to "optimise" our lives.

Another familiar pattern is how the "AI boom" has followed strategies for concentrating wealth and power under "liberal economics" through what one might consider 'primitive accumulation', and by this I mean the Marxist variant, whereby public or common goods and resources are privatised on a grand scale through coercion or violence by existing landowners. There are parallels in the 'digital world'. It's quite clear that the main stakeholders in the "AI boom" are already the main players (most can be considered anti-competitive monopolies) and have enforced their position primarily through scraping of publicly available data and outright theft of Creative Commons and copyrighted material, with zero accountability so far (why have they scrapped all their 'ethics departments', starting around 2018?). Since business and ethics can't coexist, this has opened the floodgates for rapidly growing abuse of "AI" for corrupt lobbying, closed-door policy making, scams, deception and disinformation campaigns. Yes, the latter are mostly criminal activity, but as has often been proven in the glorious 'digital economy', yesterday's scams become tomorrow's business models.

So, sure, I understand these are much bigger issues and we're mostly poor researchers and creators. Freesound is a community-driven research and sharing platform with fairly high standards, and I appreciate its aims, commitments and everything you've done so far. But without clear safeguards and policies that protect users' contributions (as a form of digital labour) from outright abuse that only feeds damaging historical trends, many questions remain open for me (particularly: what is it I am actually contributing to?). While I probably will not remove the files I've uploaded so far (since the damage seems to have been done), I find myself far less enthusiastic about contributing from now on, until more protections are in place that can be legally enforced and clear moderation policies are enacted.
I'm pretty sure AI will fail like any other buzzword. The quality of LLMs has gone down so much this year, with all these generated pictures, sounds and music being scraped back into the training data; it's a disaster. What's more, people hate AI-generated content even more this year than last, since it's nothing new and the quality is just disgusting to many.
The fear of losing human creative control to a machine has existed since the dawn of actual robots in the mid-1920s. We have feared losing our identity (via our capacity to reason) to machines, whatever form they took. Though the means of creating these machines is only 100 years old, the concept of a humanoid machine has been around since 400 BC or so. E.g., in Cretan mythology the bronze man Talos guarded the island of Crete. There have been many imagined automatons since.
Now, if you have any information up on the internet, it has already been used to train the first generations of AI. So, one could say, 'the damage is done.' That said, I wholeheartedly support AI labeling or identification. Transparency is so important, especially in an age where it is so easy for any person, or any machine, to create 'fake' versions of almost anything.
However, and here is the rub (I say this optimistically): humans get very tired of mimicry and repetition, and this is exactly how machines, bots, and AI make their art. The human mind, and more importantly the mind of an entire collective (a group of humans), is extremely talented at discerning tone, feeling, and shades of meaning. This ability is even more acute when those humans are also involved in a creative process together, like making art, music, or... recording/crafting sounds.
So, I have hope that we will manage to retain our capacity to create, even in the face of these new tools that we call 'AI'.
The important work is to make sure the internet is free and transparent, and that AI is open source and free to all. If it becomes the tool of a select few, or is used only for the sake of profit... it will fail, as all 'trickle-up' systems must.