Play Iconary, a simple drawing game that hides a deceptively deep AI

It may not seem like it takes a lot of smarts to play a game like Pictionary, but in fact it involves a range of subtle and abstract visual and linguistic skills. An AI built to play a similar game is correspondingly complex, and its interpretations and creations when you play with it (as you can now) may seem eerily human. It’s also refreshing to have such an agent working collaboratively with you rather than beating you with superhuman skill.

Iconary, as the game’s creators at the Allen Institute for AI decided to call it to avoid lawsuits from Mattel, has you drawing and arranging icons to form phrases, or guessing at the arrangements of the computer player.

For instance, if you were to get the phrase “woman drinking milk from a glass,” you’d probably start by drawing a woman as a stick figure, then select the “woman” icon from the computer’s interpretations of your sketch. Then you’d draw a glass and place it near the woman. Then… milk? How do you draw milk? There is actually a milk bottle icon if you look for it, but you could also draw a cow and put that in or next to the glass.

The computer then guesses at what you’ve put together, and after a few tries it would probably get it. You can also play it the other way, where the computer arranges icons and you have to guess.

Now, let’s get this right out of the way: this is very different from Google’s superficially similar “Quick, Draw!” game. In that one, the system can only guess whether your drawing is one of a few hundred preselected objects it’s been specifically trained to recognize.

Not only are there some 75,000 phrases supported in Iconary, with more being added regularly, but there’s no way to train the AI on them directly: the number of ways any one of them can be represented is effectively unbounded.

“When you start bringing in phrases, the problem space explodes,” explained Ali Farhadi, one of the creators of the project; I talked with him and researcher Aniruddha Kembhavi about Iconary ahead of its release. “Sure, you can easily recognize a cat or a dog. But can you recognize a cat purring, or a dog scratching its back? There’s a huge diversity in elements people choose and how they position them.”

Although Pictionary may seem at first like a game that depends on your drawing skill, it’s really much more about arranging ideas and understanding the relationships between them: seeing the intent behind the drawing. How else could some people recognize a word or phrase from a handful of crude shapes and some arrows?

The AI behind Iconary, then, isn’t a drawing recognition engine at all but one that has been trained to recognize relationships between objects, informed by their type, position, number and everything else. This is, the researchers say, the most significant example of AI collaborating meaningfully with humans yet created.
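To make that concrete, here’s a rough Python sketch of how an icon scene might be represented so a model can reason over types, positions and relative sizes rather than pen strokes. This structure is my own illustration, not AllenAI’s actual format:

```python
# A loose sketch, not AllenAI's actual data format, of how an icon
# arrangement might be encoded for a model that reasons about the
# relationships between objects rather than raw pen strokes: each icon
# carries a type, a position and a relative size, and the guesser scores
# candidate phrases against the whole scene.
from dataclasses import dataclass

@dataclass
class IconPlacement:
    icon: str           # e.g. "woman", "glass", "cow"
    x: float            # normalized horizontal position (0.0 = left edge)
    y: float            # normalized vertical position (0.0 = top edge)
    scale: float = 1.0  # relative size, which can itself carry meaning

# "Woman drinking milk from a glass", drawn as icons:
scene = [
    IconPlacement("woman", x=0.30, y=0.50),
    IconPlacement("glass", x=0.55, y=0.55, scale=0.6),
    IconPlacement("cow",   x=0.56, y=0.53, scale=0.3),  # cow in the glass stands in for "milk"
]
```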

And this logic is kept fuzzy enough that several “person” icons gathered together could mean women, men, people, group, crowd, team or anything else. How would you know if it was a “team”? Well, if you put a soccer ball near it or put them on a playing field, it becomes obvious. If there’s a blackboard there, it’s probably a class. And so on.
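As a toy illustration, here’s roughly what that context-driven disambiguation might look like if it were hand-coded. In reality Iconary learns these associations rather than using a lookup table, and the “stage” entry below is my own invention:

```python
# A hand-coded toy version of context-driven disambiguation; Iconary's
# real model learns these associations instead of consulting a table.
CONTEXT_HINTS = {
    "soccer ball": "team",
    "blackboard": "class",
    "stage": "audience",  # invented entry, not from the article
}

def interpret_person_cluster(nearby_icons):
    """Pick a collective noun for a cluster of person icons from context."""
    for icon in nearby_icons:
        if icon in CONTEXT_HINTS:
            return CONTEXT_HINTS[icon]
    return "crowd"  # default when nothing narrows it down

print(interpret_person_cluster(["soccer ball"]))  # -> team
print(interpret_person_cluster(["blackboard"]))   # -> class
```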

Of course, I say “and so on,” but that small phrase in a way encompasses the entirety of human intuition and years of training on how to view and interpret the visual world. Naturally Iconary isn’t nearly as good at it as we are, but its logic is frequently surprisingly human.

If you can only get part of the answer, you can ask the AI to draw again, and just like we do in Pictionary it will adapt its representation to address your needs.

It was of course trained on human drawings collected via Mechanical Turk, but it isn’t just replicating what people drew. If the only thing it ever saw to represent a scientist was a man next to a microscope, how would it know to recognize the same idea in a woman, or in someone standing next to an atom or a rocket? In fact, the model has never been exposed to the phrases you can play with now. As the researchers write:

AllenAI has never before encountered the unique phrases in Iconary, yet our preliminary games have shown that our AI system is able to both successfully depict and understand phrases with a human partner with an often surprising deftness and nuance. This feat requires combining natural language understanding, computer vision, and the use of common sense to reason about phrases and their depictions within the constraints of a small vocabulary of possible icons. Being successful at Iconary requires skills beyond basic pattern recognition, including multi-hop reasoning, abstraction, collaboration, and adaptation.

Instead of simply pairing “ball” with “sport,” it learned about why those objects are related and how to exert some basic common sense, a sort of Holy Grail in AI, though this is only a small step in that direction. If one person draws “father” as a large figure next to a smaller one, it isn’t obvious to the computer that the father is the big one, not the small one. And it’s another logical jump that a “mother” would be a similarly sized woman, or that the small one is a child.

But by observing how people used the objects and how they relate to one another, the AI built up a network of ideas about how different things are represented or related. “Child” is closer to “student” than to “horse,” for instance. And “student” is close to “desk” and “laptop.” So if you draw a child by a desk, maybe it’s a student? This kind of robust logic is so simple to us that we don’t even recognize we’re doing it, but it’s incredibly hard to build into a machine learning agent.
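For the curious, here’s a minimal sketch of that “closer to” notion using vector embeddings and cosine similarity. The researchers haven’t said this is how Iconary works internally, and the toy vectors below are invented purely for illustration:

```python
# A minimal sketch of concept "distance" via embeddings and cosine
# similarity; the 3-d vectors here are made up for illustration only.
import numpy as np

embeddings = {
    "child":   np.array([0.9, 0.1, 0.0]),
    "student": np.array([0.8, 0.3, 0.1]),
    "desk":    np.array([0.6, 0.5, 0.2]),
    "horse":   np.array([0.0, 0.1, 0.9]),
}

def similarity(a, b):
    """Cosine similarity: near 1.0 for related concepts, near 0.0 for distant ones."""
    va, vb = embeddings[a], embeddings[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity("child", "student"))  # high (~0.96): related concepts
print(similarity("child", "horse"))    # low (~0.01): distant concepts
# A guesser can use such distances to jump from "child near desk" to "student."
```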

This type of AI is deceptively broad and intelligent, but it isn’t flashy the way that the human-destroying AlphaStar or AlphaGo are. It isn’t superhuman — in fact, it’s not even close to human. But board and PC games are tightly bounded problem spaces with set rules and limits. Visual expression of a complex phrase like “crowd celebrating a victory on a street” isn’t a question of how fast the computer can process, but the depth of its understanding of the concepts involved, and how others think about them.

This kind of learning is also more broadly applicable in the real world. Robots and self-driving cars do need to know how to exceed human capacity in some cases, but it’s also massively important to be able to understand the world around them in the same way people do. When it sees a person by a hospital bed holding a book, what does that mean? When a person leaves a knife out next to a whole tomato? And so on.

“Real life problems involve semantics, abstraction and collaboration,” said Farhadi. “They involve theory of mind.”

Interestingly, the agent is a bit biased (as these things tend to be) owing to the natural bias of our language: images “read” from left to right, since people tend to draw them in the same direction they read, so keep that in mind when arranging your icons.

Try playing a couple of games both drawing and guessing, and you may be surprised at the cleverness and weirdness of the AI’s suggestions. Don’t feel bad about skipping one — the agent is still learning, and sometimes its attempts to represent ideas are a bit too abstract. But I certainly found myself impressed more than baffled.

If you’d like to learn more, stay tuned: The team behind the system will be publishing a paper on it later this year. I’ll update this post when that happens.

from TechCrunch https://tcrn.ch/2t7nEji