Why does AI art have too many fingers?
A notable quirk of AI art is that it often represents people with profoundly weird hands. Strange, extra-fingered hands have become a common tell that a piece of art was artificially generated. The oddity offers more insight into how generative AI does (and doesn’t) work. Start with the corpus that DALL-E and similar visual generative AI tools draw from: pictures of people usually provide a good look at their face, but their hands are often partially obscured or shown at odd angles, so you can’t see all the fingers at once. Add to that the fact that hands are structurally complex; they’re notoriously difficult for people, even trained artists, to draw. And one thing DALL-E isn’t doing is assembling an elaborate 3D model of hands from the various 2D depictions in its training set. That’s not how it works. DALL-E doesn’t even necessarily know that “hands” is a coherent category of thing to be reasoned about. All it can do is predict, based on the images it has seen, what a similar image might look like. Despite huge amounts of training data, those predictions often fall short.
Phipps speculates that one factor is a lack of negative input.
It mostly trains on positive examples, as far as I know. They didn’t give it a picture of a seven fingered hand and tell it “NO! Bad example of a hand. Don’t do this.” So it predicts the space of the possible, not the space of the impossible. Basically, it was never told to not create a seven fingered hand.
There’s also the factor that these models don’t think of the drawings they’re making as a coherent whole; rather, they assemble a series of components that are likely to be in proximity to one another, as shown by the training data. DALL-E may not know that a hand is supposed to have five fingers, but it does know that a finger is likely to be immediately adjacent to another finger. So, sometimes, it just keeps adding fingers. (You can get the same results with teeth.) In fact, even this description of DALL-E’s process is probably anthropomorphizing it too much; as Phipps says, “I doubt it has even the understanding of a finger. More likely, it is predicting pixel color, and finger-colored pixels tend to be next to other finger-colored pixels.”
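To make that intuition concrete, here’s a deliberately crude toy in Python. It is not how DALL-E actually works (DALL-E generates images, not token sequences), but it shows why a generator that learns only local adjacency from positive examples, with no concept of a whole hand, drifts around the right finger count instead of landing on it. Every name in it is invented for illustration.

```python
import random

# Toy analogy only: a bigram "hand generator" trained purely on positive
# examples. Every training hand has exactly five fingers, but the model
# records only which token tends to follow which token.
training_hands = [["palm"] + ["finger"] * 5 + ["end"]] * 1000

transitions = {}  # token -> list of observed next tokens
for hand in training_hands:
    for current, nxt in zip(hand, hand[1:]):
        transitions.setdefault(current, []).append(nxt)

def generate_hand(rng):
    """Grow a hand one token at a time, looking only at the previous token."""
    tokens = ["palm"]
    while tokens[-1] != "end" and len(tokens) < 20:
        tokens.append(rng.choice(transitions[tokens[-1]]))
    return tokens

rng = random.Random(42)
for _ in range(5):
    hand = generate_hand(rng)
    print(hand.count("finger"), "fingers:", " ".join(hand))
```

Run it a few times and the toy produces hands with three fingers, five fingers, or eight; nothing in the adjacency statistics encodes the total, even though every training example had exactly five.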
Potential negative impacts of generative AI
These examples point to one of the major limitations of generative AI: what people in the industry call hallucinations, a perhaps misleading term for output that is, by the standards of the humans who use it, false or incorrect. All computer systems occasionally make mistakes, of course, but these errors are particularly problematic because end users are unlikely to spot them: if you are asking a production AI chatbot a question, you generally won’t already know the answer yourself. You are also more likely to accept an answer delivered in the confident, fully idiomatic prose that ChatGPT and other models like it produce, even when the information is incorrect.
Even if a generative AI could produce output that’s hallucination-free, there are various potential negative impacts:
- Cheap and easy content creation: Hopefully it’s clear by now that ChatGPT and other generative AIs are not real minds capable of creative output or insight. But the truth is that not everything that’s written or drawn needs to be particularly creative. Many research papers at the high school or college undergraduate level only aim to synthesize publicly available data, which makes them a perfect target for generative AI. And the fact that synthetic prose or art can now be produced automatically, at a superhuman scale, may have weird or unforeseen results. Spam artists are already using ChatGPT to write phishing emails, for instance.
- Intellectual property: Who owns an AI-generated image or text? If a copyrighted work forms part of an AI’s training set, is the AI “plagiarizing” that work when it generates synthetic data, even if it doesn’t copy it word for word? These are thorny, untested legal questions.
- Bias: The content produced by generative AI is entirely determined by the underlying data on which it’s trained. Because that data is produced by humans with all their flaws and biases, the generated results can also be flawed and biased, especially if the models operate without human guardrails. OpenAI, the company that created ChatGPT, built safeguards into the model before opening it to public use to prevent it from doing things like using racial slurs; however, others have claimed that these sorts of safety measures represent their own kind of bias.
- Power consumption: In addition to heady philosophical questions, generative AI raises some very practical issues: for one thing, training a generative AI model is hugely compute intensive. This can result in big cloud computing bills for companies trying to get into this space, and ultimately raises the question of whether the increased power consumption—and, ultimately, greenhouse gas emissions—is worth the final result. (We also see this question come up regarding cryptocurrencies and blockchain technology.)
Use cases for generative AI
Despite these potential problems, the promise of generative AI is hard to miss. ChatGPT’s ability to extract useful information from huge data sets in response to natural language queries has search giants salivating. Microsoft is testing its own AI chatbot for Bing search, internally codenamed “Sydney,” though it’s still in beta and the results have been decidedly mixed.
But Phipps thinks that more specialized types of search are a perfect fit for this technology. “One of my last customers at IBM was a large international shipping company that also had a billion-dollar supply chain consulting side business,” he says.
Their problem was that they couldn’t hire and train entry level supply chain consultants fast enough—they were losing out on business because they couldn’t get simple customer questions answered quickly. We built a chatbot to help entry level consultants search the company’s extensive library of supply chain manuals and presentations that they could turn around to the customer.
If I were to build a solution for that same customer today, just a year after I built the first one, I would 100% use ChatGPT and it would likely be far superior to the one I built. What’s nice about that use case is that there is still an expert human-in-the-loop double-checking the answer. That mitigates a lot of the ethical issues. There is a huge market for those kinds of intelligent search tools meant for experts.
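For a sense of what that looks like in practice, here’s a minimal sketch of the pattern Phipps describes, not his actual system: retrieve likely relevant passages from an internal document library, have a chat model draft an answer, and leave the final check to a human expert. It assumes the official openai Python SDK; the placeholder passages, the search_manuals helper, and the model name are all illustrative stand-ins.

```python
# A minimal sketch of an expert-facing search assistant, not Phipps's system.
# Assumes the official `openai` Python SDK (>= 1.0); MANUAL_PASSAGES,
# search_manuals, and the model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MANUAL_PASSAGES = [
    "Placeholder excerpt 1 from a supply chain manual.",
    "Placeholder excerpt 2 from a client-facing presentation.",
    # In practice, load and chunk the company's real manuals here.
]

def search_manuals(question, top_k=3):
    """Hypothetical retrieval step: naive keyword-overlap ranking.
    A real deployment would use a proper search or vector index."""
    words = set(question.lower().split())
    ranked = sorted(
        MANUAL_PASSAGES,
        key=lambda passage: len(words & set(passage.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def draft_answer(question):
    """Draft an answer from retrieved excerpts for a human consultant to verify."""
    excerpts = "\n\n".join(search_manuals(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "Answer only from the provided excerpts, cite which one "
                        "you used, and say 'not found' if they don't cover it."},
            {"role": "user",
             "content": f"Excerpts:\n{excerpts}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

The design choice that matters is the one Phipps points to: the model’s draft never goes straight to the customer. An entry-level consultant checks it against the cited excerpts first, which blunts the hallucination problem.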
Other potential use cases include:
- Code generation: The idea that generative AI might write computer code for us has been bubbling around for years now. It turns out that large language models like ChatGPT can understand programming languages as well as natural spoken languages, and while generative AI probably isn’t going to replace programmers in the immediate future, it can help increase their productivity (a short sketch of that workflow follows this list).
- Cheap and easy content creation: As much as this one is a concern (listed above), it’s also an opportunity. The same AI that writes spam emails can write legitimate marketing emails, and there’s been an explosion of AI copywriting startups. Generative AI thrives when it comes to highly structured forms of prose that don’t require much creativity, like resumes and cover letters.
- Engineering design: Visual art and natural language have gotten a lot of attention in the generative AI space because they’re easy for ordinary people to grasp. But similar techniques are being used to design everything from microchips to new drugs—and will almost certainly enter the IT architecture design space soon enough.
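On the code-generation item above, here’s a hedged sketch of what the productivity boost tends to look like: ask a model for a small, well-specified function, then verify the result against tests you wrote yourself before accepting it. It again assumes the official openai Python SDK; the slugify task, prompt, and model name are invented for illustration.

```python
# Hedged sketch of LLM-assisted coding with a human (and tests) in the loop.
# Assumes the official `openai` Python SDK; the task, prompt, and model name
# are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Write a Python function slugify(title: str) -> str that lowercases the "
    "title, replaces runs of non-alphanumeric characters with single hyphens, "
    "and strips leading and trailing hyphens. Return only the code."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model
    messages=[{"role": "user", "content": PROMPT}],
)
code = reply.choices[0].message.content.strip()
if code.startswith("```"):  # models often wrap code in Markdown fences
    code = code.split("\n", 1)[1].rsplit("```", 1)[0]

# The programmer stays in charge: read the code, then run your own tests.
# If the assert fails, you reject or fix the generated function.
namespace = {}
exec(code, namespace)  # only execute code you have reviewed
assert namespace["slugify"]("Hello, World!") == "hello-world"
print("generated slugify passed the smoke test")
```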
Conclusion
Generative AI will surely disrupt some industries and will alter, or eliminate, many jobs. Articles like this one will continue to be written by human beings, however, at least for now. CNET recently tried putting generative AI to work writing articles, but the effort foundered on a wave of hallucinations. If you’re worried, you may want to get in on the hot new job of tomorrow: AI prompt engineering.