The RobotArt Competition was started by Andrew Conru as a way to merge two of his passions – technology and art – and to encourage the advancement of both together. Fundamental to the competition is the belief that creativity and expression are emerging in unexpected ways from our relationship with technology. This contest hopes to explore and advance that development over the next five years.
Andrew holds a Ph.D. in mechanical engineering design from Stanford University. He has taught control systems at Santa Clara University and robotics at San Jose State University. He is also an amateur artist who started painting in 2015 as a New Year’s resolution.
Robots that paint with direct human involvement can get input in multiple ways. The most obvious is to create a physical tool that a human artist moves while the robot mimics the motions remotely, in a way similar to remote doctors performing surgery. When the competition was first set up, it was thought that this would be a popular technique, but it turned out that teams wanted a greater challenge. More dramatically, the eyePaint team used an eye-tracking system to remotely tell the robot how to move. cloudPainter also allowed people on the Internet to paint remotely, sending brush stroke commands to a robot in Washington D.C. There is nothing to stop the robotic artists from collaborating with one another, either.
The software-generated painting commands can be considered the technique of the “artist,” and they are often similar to how human artists paint. The human designers of a robotic artist approach the creation of these techniques in different ways. The most direct way is to start with a photo or image of what is to be painted and then write software that looks for patterns and shapes that can be converted into brush strokes. A straightforward technique is to simplify a photo into several different colors and then tell the robot to paint each color separately. Paintings done like this can have a “paint by number” look (see artwork by RHIT as examples).
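The “paint by number” idea above can be sketched in a few lines of code. This is a toy illustration, not any team’s actual software: it snaps each pixel to the nearest color in a small palette and groups pixels by color, so the robot could paint all pixels of one color in a single pass. The palette and the three-pixel “image” are invented for the example.

```python
# Toy "paint by number" sketch: reduce an image to a small palette, then
# group pixel positions by palette color so each color is one painting pass.

def nearest(color, palette):
    """Index of the palette entry closest to `color` (squared RGB distance)."""
    return min(range(len(palette)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(color, palette[i])))

def quantize_to_layers(pixels, palette):
    """Assign each (x, y, rgb) pixel to a palette color; one layer per color."""
    layers = {i: [] for i in range(len(palette))}
    for x, y, rgb in pixels:
        layers[nearest(rgb, palette)].append((x, y))
    return layers

palette = [(0, 0, 0), (255, 255, 255), (200, 30, 30)]  # black, white, red
pixels = [(0, 0, (10, 10, 10)),     # nearly black
          (1, 0, (240, 240, 240)),  # nearly white
          (2, 0, (180, 40, 35))]    # reddish
layers = quantize_to_layers(pixels, palette)
# The robot would now paint layers[0] in black, layers[1] in white, and so on.
```

Each layer is just a list of positions to visit with one loaded brush, which is what gives these paintings their flat, posterized look.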
A more elaborate technique is to let the robot paint a bit “outside the lines” and then use a camera to see what adjustments the robot needs to make so that the painting more closely matches the desired image (see artwork by e-David and cloudPainter as examples of techniques that use this corrective feedback loop to improve a painting over time). Other techniques include strategies that decompose an image into large abstract shapes and paint them first in what human artists call an underlayer. They then paint additional layers on top to add detail and richness to the painting (see artwork by TAIDA for examples). The human designer gets to create different painting techniques for a wide range of painting styles. By looking at the different styles of artwork submitted in this year's contest, you can see how each technique becomes the signature style of its robot.
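The corrective feedback loop can also be sketched in miniature. The following is only in the spirit of e-David and cloudPainter, whose real systems are far more sophisticated: a “camera” reading of the canvas is compared with the target after every stroke, and the next stroke goes wherever the error is largest. Canvas and target here are just lists of grayscale values.

```python
# Toy corrective feedback loop: repeatedly compare canvas to target and
# "paint" (nudge) the worst-matching cell. A real robot would capture a
# camera image and place a physical brush stroke instead.

def paint_with_feedback(target, canvas, step=64, rounds=10):
    """Greedy loop: move the worst cell toward the target, at most `step` per round."""
    for _ in range(rounds):
        errors = [abs(t - c) for t, c in zip(target, canvas)]
        worst = max(range(len(errors)), key=errors.__getitem__)
        if errors[worst] == 0:
            break  # canvas already matches the target everywhere
        delta = target[worst] - canvas[worst]
        canvas[worst] += max(-step, min(step, delta))  # clamp the correction
    return canvas

target = [200, 50, 120]
canvas = [0, 0, 0]
result = paint_with_feedback(target, canvas)
```

The `step` cap mimics the fact that one brush stroke can only change the canvas so much; the loop converges because every round strictly shrinks the largest remaining error.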
Starting with a source photo or image isn’t the only way to create art with robots. It is possible to completely generate the desired image using AI (artificial intelligence) and then use either specially designed techniques or some of the painting techniques mentioned previously to create the physical painting. For example, Picassnake looks at the frequency patterns of music and converts them into color and brush stroke commands to create colorful abstractions. NoRAA took a more analytical approach by creating software to generate an assortment of interesting color and shape patterns. We anticipate that future RobotArt teams will even try to synthesize existing art using machine learning techniques like deep learning to either determine and repurpose its core essence or extend the creative process in interesting new ways (see Google’s “inception” art for more examples).
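A music-to-painting mapping of the kind Picassnake uses could look something like the toy sketch below. To be clear, this is not Picassnake’s actual algorithm; every mapping here (frequency range to hue, loudness to brush size) is invented for illustration.

```python
# Toy music-to-stroke mapping: a tone's frequency picks the hue and its
# loudness picks the brush size. All constants are made up for illustration.

import colorsys

def tone_to_stroke(freq_hz, amplitude, f_min=20.0, f_max=2000.0):
    """Map a tone to (rgb_color, brush_radius)."""
    # Normalize the frequency into [0, 1] and treat it as a hue.
    hue = max(0.0, min(1.0, (freq_hz - f_min) / (f_max - f_min)))
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    rgb = tuple(round(c * 255) for c in (r, g, b))
    # Louder tones get a bigger brush, from 1 to 10 units.
    radius = 1 + round(9 * max(0.0, min(1.0, amplitude)))
    return rgb, radius

stroke = tone_to_stroke(440.0, 1.0)  # the note A4, at full loudness
```

A full system would run this over the frequency spectrum of each slice of audio, producing a stream of colored strokes rather than a single one.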
How do we know if a robot succeeds in creating art?
Whether or not a robot has succeeded in creating something beautiful is easy to judge. You don’t need to be an expert to know if you like an artwork. This is why we decided to open up part of the judging process to the general public. We had over 2,200 people register to vote on their favorite artwork. Each person was given 10 votes that they could distribute among their favorites – with up to 3 votes per artwork.
What are some benefits of students participating?
Engineering students rarely get an opportunity to take art-related classes. The RobotArt competition is a way to give them a chance to get hands-on with the art creation process. Students quickly find that science and engineering share many of the innovative problem-solving skills used by artists, such as balancing detail and abstraction, looking at reality in different ways, and planning steps that build on each other to become greater than the sum of their parts. Oh yeah, and it’s cool. Who doesn’t like watching a robot whip out a painting?
Does robot art invalidate human generated art?
Art can be appreciated on many levels – technical mastery, aesthetics, and even a personal connection with the artist (ask any parent with their child’s artwork taped to the refrigerator). The camera didn’t invalidate the portrait artist, as the portrait artist was often trying to capture a deeper emotion than a perfect copy of the sitter. However, the camera opened up a whole new form of art – photography. Likewise, human-generated art will always be highly respected, not only for its creativity but also for its place in our shared human experience. While we may be impressed by AI chess software, we are thrilled and impressed by human grandmasters.
That being said, robots and AI will enable artists to attempt art that is perhaps more intellectually or physically ambitious than before. AI advances in human mimicry or extension might also affect the fundamental connection between the artwork and those who interact with it. While this contest can be thought of as a “John Henry” moment, in which we investigate the fuzzy, less analytical comparisons between human and machine, in art there are no losers. We all win when we see something beautiful.
Why a robot art competition NOW?
Several converging trends make the timing right for this competition. First, with growing access to inexpensive yet powerful robots (see 7Bot on Kickstarter), we are at a point where robots will become commonplace. It’s a bit like how easy online tools for creating web pages reached critical mass: once there is access, people can become more creative. Second, advances in computer processing speed and in AI, especially in image processing and object recognition, will enable the software that controls robots to think in ways similar to humans.
Where is this competition headed?
We are excited to see how teams build upon this year’s technical successes to focus more on the creative process and artistry in the next four years of competitions. We are also considering holding an art auction of selected artworks following the announcement of winners in the 2017 competition.