Best Technologies

What happens when you feed AI nothing

by News Room
June 18, 2025
in Tech

If you stumbled across Terence Broad’s AI-generated artwork (un)stable equilibrium on YouTube, you might assume he’d trained a model on the works of the painter Mark Rothko — the earlier, lighter pieces, before his vision became darker and suffused with doom. Like early-period Rothko, Broad’s AI-generated images consist of simple fields of pure color, but they’re morphing, continuously changing form and hue.

But Broad didn’t train his AI on Rothko; he didn’t train it on any data at all. By hacking a neural network and locking elements of it into a recursive loop, he was able to induce the AI to produce images with no training data whatsoever — no inputs, no influences. Depending on your perspective, Broad’s art is either a pioneering display of pure artificial creativity, a look into the very soul of AI, or a clever but meaningless electronic by-product, closer to guitar feedback than music. In any case, his work points the way toward a more creative and ethical use of generative AI beyond the large-scale manufacture of derivative slop now oozing through our visual culture.

Broad has deep reservations about the ethics of training generative AI on other people’s work, but his main inspiration for (un)stable equilibrium wasn’t philosophical; it was a crappy job. In 2016, after searching for a job in machine learning that didn’t involve surveillance, Broad found employment at a firm that ran a network of traffic cameras in the city of Milton Keynes, with an emphasis on data privacy. “My job was training these models and managing these huge datasets, like 150,000 images all around the most boring city in the UK,” says Broad. “And I just got so sick of managing datasets. When I started my art practice, I was like, I’m not doing it — I’m not making [datasets].”

Legal threats from a multinational corporation pushed him further away from inputs. One of Broad’s early artistic successes involved training a type of artificial neural network called an autoencoder on every frame of the film Blade Runner (1982), and then asking it to generate a copy of the film. The result, bits of which are still available online, is simultaneously a demonstration of the limitations, circa 2016, of generative AI, and a wry commentary on the perils of human-created intelligence. Broad posted the video online, where it soon received major attention — and a DMCA takedown notice from Warner Bros. “Whenever you get a DMCA takedown, you can contest it,” Broad says. “But then you make yourself liable to be sued in an American court, which, as a new graduate with lots of debt, was not something I was willing to risk.”
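
The autoencoder technique behind the Blade Runner piece can be sketched in miniature. The NumPy toy below is a minimal illustration with synthetic data, not Broad's actual model: each "frame" is squeezed through a narrow bottleneck, and the encoder/decoder pair is trained by plain gradient descent to reconstruct its input — the "generate a copy of the film" operation at small scale.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "frames": 200 vectors in 16-D that secretly live on a 4-D subspace,
# standing in for film frames that share underlying structure.
frames = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 16))

# Linear autoencoder: compress to a 4-D bottleneck, then reconstruct.
enc = rng.normal(size=(16, 4)) * 0.1
dec = rng.normal(size=(4, 16)) * 0.1

def loss(X, enc, dec):
    return np.mean((X @ enc @ dec - X) ** 2)

before = loss(frames, enc, dec)
lr = 0.01
for _ in range(3000):
    code = frames @ enc            # compress each frame to 4 numbers
    err = code @ dec - frames      # reconstruction error
    # gradient descent on mean squared error (constant factors folded into lr)
    grad_dec = code.T @ err / len(frames)
    grad_enc = frames.T @ (err @ dec.T) / len(frames)
    dec -= lr * grad_dec
    enc -= lr * grad_enc
after = loss(frames, enc, dec)
# Reconstruction error drops sharply: the bottleneck has learned the frames'
# shared structure, which is what lets an autoencoder "replay" its input.
```

Because the bottleneck is narrower than the data, the network can only reconstruct what it has modeled — which is also why Broad's 2016 Blade Runner copy came out blurry and strange rather than pixel-perfect.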

When a journalist from Vox contacted Warner Bros. for comment, the studio quickly rescinded the notice — only to reissue it soon after. (Broad says the video has been reposted several times, and always receives a takedown notice — a process that, ironically, is largely conducted via AI.) Curators began to contact Broad, and he soon got exhibitions at the Whitney, the Barbican, Ars Electronica, and other venues. But anxiety over the work’s murky legal status was crushing. “I remember when I went over to the private view of the show at the Whitney, and I remember being sat on a plane and I was shitting myself because I was like, Oh, Warner Bros. are going to shut it down,” Broad recalls. “I was super paranoid about it. Thankfully, I never got sued by Warner Bros., but that was something that really stuck with me. After that, I was like, I want to practice, but I don’t want to be making work that’s just derived off other people’s work without their consent, without paying them. Since 2016, I’ve not trained a sort of generative AI model on anyone else’s data to make my art.”

In 2018, Broad started a PhD in computer science at Goldsmiths, University of London. It was there, he says, that he started grappling with the full implications of his vow of data abstinence. “How could you train a generative AI model without imitating data? It took me a while to realize that that was an oxymoron. A generative model is just a statistical model of data that just imitates the data it’s been trained on. So I kind of had to find other ways of framing the question.” Broad soon turned his attention to the generative adversarial network, or GAN, an AI model that was then much in vogue. In a conventional GAN, two neural networks — the discriminator and the generator — combine to train each other. Both networks analyze a dataset, and then the generator attempts to fool the discriminator by generating fake data; when it fails, it adjusts its parameters, and when it succeeds, the discriminator adjusts. At the end of this training process, the tug-of-war between discriminator and generator will, theoretically, produce an equilibrium that enables the GAN to produce data on par with the original training set.
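
The adversarial loop described above fits in a few lines. The toy below is a hand-rolled one-dimensional GAN with hand-derived gradients — illustrative only, not any production model or Broad's code: a two-parameter generator learns to place its samples near the mean of the "real" Gaussian data by fooling a logistic-regression discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the GAN should learn to imitate: samples from N(3.0, 0.5).
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: affine map from noise z ~ N(0, 1) to a fake sample.
w_g, b_g = 1.0, 0.0
# Discriminator: logistic regression scoring "how real" a sample looks.
w_d, b_d = 0.1, 0.0

lr = 0.05
for _ in range(2000):
    # Discriminator step: push D(real) up and D(fake) down.
    x, z = real_batch(32), rng.normal(0, 1, 32)
    g = w_g * z + b_g
    d_real, d_fake = sigmoid(w_d * x + b_d), sigmoid(w_d * g + b_d)
    w_d -= lr * np.mean(-(1 - d_real) * x + d_fake * g)
    b_d -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: adjust (w_g, b_g) so fakes fool the discriminator
    # (non-saturating loss -log D(G(z)), gradient taken by hand).
    z = rng.normal(0, 1, 32)
    g = w_g * z + b_g
    dgrad = -(1 - sigmoid(w_d * g + b_d)) * w_d
    w_g -= lr * np.mean(dgrad * z)
    b_g -= lr * np.mean(dgrad)

# After training, generated samples should cluster near the real mean of 3.0.
fake_mean = np.mean(w_g * rng.normal(0, 1, 1000) + b_g)
```

At equilibrium the discriminator can no longer tell real from fake, which is exactly the "ideal" endpoint of the tug-of-war — and the thing Broad removes when he takes the dataset away.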

Broad’s eureka moment was an intuition that he could replace the training data in the GAN with another generator network, loop it to the first generator network, and direct them to imitate each other. His early efforts led to mode collapse and produced “gray blobs; nothing exciting,” says Broad. But when he inserted a color variance loss term into the system, the images became more complex, more vibrant. Subsequent experiments with the internal elements of the GAN pushed the work even further. “The input to [a GAN] is called a latent vector. It’s basically a big number array,” says Broad. “And you can kind of smoothly transition between different points in the possibility space of generation, kind of moving around the possibility space of the two networks. And I think one of the interesting things is how it could just sort of infinitely generate new things.”
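
Broad's trained networks aren't reproduced here, but the latent-vector mechanics he describes can be demonstrated with any stand-in generator (below, just a fixed random map): pick two points in the latent space and walk the line between them, and the outputs morph smoothly from one to the other.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in generator: a fixed random map from an 8-D latent vector to 8
# "pixels". (Broad's generator is a trained network; only the latent-space
# mechanics are illustrated here.)
W = rng.normal(size=(8, 8))

def generate(z):
    return np.tanh(W @ z)          # bounded outputs, like normalized pixels

z_a, z_b = rng.normal(size=8), rng.normal(size=8)   # two latent points

# Walking the line between z_a and z_b morphs one output into the other:
# each frame differs only slightly from its neighbor, which is why the
# generated colour fields appear to change form continuously.
frames = [generate((1 - t) * z_a + t * z_b) for t in np.linspace(0, 1, 16)]
diffs = [np.abs(b - a).max() for a, b in zip(frames, frames[1:])]
```

Because fresh latent vectors can be drawn forever, the possibility space never runs dry — the "infinitely generate new things" property Broad points to.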

The Rothko comparison was immediately apparent in his initial results; Broad says he saved those first images in a folder titled “Rothko-esque.” (Broad also says that when he presented the works that comprise (un)stable equilibrium at a tech conference, someone in the audience angrily called him a liar when he said he hadn’t input any data into the GAN, and insisted that he must’ve trained it on color field paintings.) But the comparison sort of misses the point; the brilliance in Broad’s work resides in the process, not the output. He didn’t set out to create Rothko-esque images; he set out to uncover the latent creativity of the networks he was working with.

Did he succeed? Even Broad’s not entirely sure. When asked if the images in (un)stable equilibrium are the genuine product of a “pure” artificial creativity, he says, “No external representation or feature is imposed on the networks’ outputs per se, but I have speculated that my personal aesthetic preferences have had some influence on this process as a form of ‘meta-heuristic.’ I also think why it outputs what it does is a bit of a mystery. I’ve had lots of academics suggest I try to investigate and understand why it outputs what it does, but to be honest I am quite happy with the mystery of it!”

Talking to him about his process, and reading through his PhD thesis, one of the takeaways is that, even at the highest academic level, people don’t really understand exactly how generative AI works. Compare generative AI tools like Midjourney, with their exclusive emphasis on “prompt engineering,” to something like Photoshop, which allows users to adjust a nearly endless number of settings and elements. We know that if we feed generative AI data, a composite of those inputs will come out the other side, but no one really knows, on a granular level, what’s happening inside the black box. (Some of this is intentional; Broad notes the irony of a company called OpenAI being highly secretive about its models and inputs.)

Broad’s explorations of inputless output shed some light on the internal processes of AI, even if his efforts sometimes sound more like early lobotomists rooting around in the brain with an ice pick than the subtler explorations of, say, psychoanalysis. Revealing how these models work also demystifies them — critical at a time when techno-optimists and doomers alike are laboring under what Broad calls “bullshit,” the “mirage” of an all-powerful, quasi-mystical AI. “We think that they’re doing far more than they are,” says Broad. “But it’s just a bunch of matrix multiplications. It’s very easy to get in there and start changing things.”
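
Broad's closing point is easy to make concrete. The sketch below uses a hypothetical two-layer network, not any model discussed in the piece: a forward pass really is a couple of matrix multiplications, and "getting in there and changing things" is a one-line edit.

```python
import numpy as np

rng = np.random.default_rng(3)

# A small two-layer network's forward pass: two matrix multiplications
# with a ReLU in between -- nothing mystical.
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 3))

def forward(x, W1, W2):
    return np.maximum(0.0, x @ W1) @ W2

x = rng.normal(size=8)
baseline = forward(x, W1, W2)

# "Changing things": the final layer is linear, so doubling its weight
# matrix exactly doubles every output -- the model keeps running, just
# differently.
hacked = forward(x, W1, 2 * W2)
```

Scaled up by a few billion parameters, this is still the basic substance of a generative model — which is why hacking on the internals, as Broad does, is tractable at all.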

Source: The Verge

Tags: AI, Tech


© 2022 All Right Reserved - Blue Planet Global Media Network
