@Edges is using AI to generate music.
The basis of most AIs is knowledge. An AI has to know what to do in order to meet its given criteria when generating something. Take ChatGPT for example. A few years ago, it was not very good at its job and was more or less an internet toy for people to play with. Nowadays, ChatGPT’s writing can easily be mistaken for a human’s. This is a result of better training and just more data to learn from in general.
Let me reiterate: if an AI is badly trained or simply hasn't been trained on much yet, it won't have the knowledge required to perform a requested task to human or near-human standards. The shortfall is most obvious in AIs made to generate audio, video, or imagery. Every kind of generative AI has its own rendition of the 7-fingered hand.
Music AIs are relatively new, so their giveaways are no more obvious to most listeners now than those of thispersondoesnotexist.com (for example) were when it launched. Back then, people thought those faces looked real. Now, not so much.
Usually, an audio-based generative AI will create a white noise signal, then use subtractive spectral synthesis to pick out whatever frequencies it needs from the white noise to make a certain sound. Sometimes it fails to do that latter step effectively: if it's not 100% sure which frequencies to select at a certain point, some of that white noise will seep through.
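To make that concrete, here's a rough sketch of the idea in Python (numpy/scipy), not any actual model's code: carve a tone out of white noise with a spectral mask, once with a "confident" narrow mask and once with an "uncertain" broad one. The spectral_mask helper and its confidence-to-width rule are made up purely for the demo.

```python
# Sketch of "subtractive spectral synthesis from white noise":
# keep only the frequencies you want via an STFT mask. A confident
# (narrow) mask gives a clean tone; an uncertain (broad, diluted)
# mask lets broadband noise leak into the result.
import numpy as np
from scipy.signal import stft, istft

sr = 22050                       # sample rate in Hz, arbitrary for the demo
noise = np.random.randn(sr * 2)  # 2 seconds of white noise

# Analyze the noise into a time-frequency grid.
f, t, Z = stft(noise, fs=sr, nperseg=1024)

def spectral_mask(freqs, target_hz, confidence):
    # Gaussian "selection" around the target frequency. High confidence
    # means a narrow peak; low confidence means a wide, sloppy one.
    width = 50.0 / confidence    # hypothetical rule, just for illustration
    return np.exp(-0.5 * ((freqs - target_hz) / width) ** 2)

# Confident pick of a 440 Hz tone vs. an uncertain pick mixed with raw noise.
confident = spectral_mask(f, 440.0, confidence=1.0)[:, None] * Z
uncertain = 0.7 * spectral_mask(f, 440.0, confidence=0.1)[:, None] * Z + 0.3 * Z

_, clean = istft(confident, fs=sr, nperseg=1024)
_, noisy = istft(uncertain, fs=sr, nperseg=1024)
# `clean` is roughly a pure 440 Hz tone carved out of the noise;
# `noisy` is the same tone with white-noise seepage riding under it.
```

Play `clean` and `noisy` back to back and the second one has exactly the kind of hiss under the tone that I'm talking about.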
It’s comparable to a person mumbling or stifling their speech when they’re not sure how to pronounce a word. Right now, it’s audible in EVERY music AI. You can hear it in leads, chords, and pretty much everything that isn’t primarily a transient.
It’s audible in @Edges’s tracks. Where I said it would be. It might be hard to hear at first, but if you listen to a few audio samples of a voiceover AI screwing up that frequency-picking thing I mentioned earlier, it gets more and more obvious when you listen to AI music as well. This track contains some examples.
Neon Dreams has noisy vocals as well.