Monday, August 12, 2013

Art Loops

A creative system involves artist, art and viewer in an information-generative feedback loop. A creative system’s success can be measured by the amount of information generated. Some art conveys a deep message to a targeted audience, some a shallow message to a general audience. The art that fails to reach anyone returns to dust. The art that achieves both depth and breadth disseminates via mechanical reproduction, and can be considered extraordinarily successful. Hans Haacke is an artist more aware of the feedback loop than perhaps any other. His pieces are site-specific reflections upon the politics and environments of the art world. Some work is more reserved: Condensation Cube is a plexiglass box which maintains an equilibrium with the museum climate, showing the water cycle in isolation. In a more aggressive gesture, Haacke produced a series of framed photographs and text: Shapolsky et al. Manhattan Real Estate Holdings, A Real Time Social System, as of May 1, 1971. The series outlines the shady dealings of a landlord named Harry Shapolsky. Haacke created the piece for a show at the Guggenheim Museum, which the trustees promptly cancelled upon learning of the plan. This act of censorship is, of course, the best affirmation Haacke could have hoped for.

Condensation Cube
Hans Haacke
1963 - 1965
Water in plexiglass

Influence of the three components of a creative system— artist, art and viewer— is not evenly distributed. Some artists lend creative control to the viewer directly. Some encode complex ideas fully in the work itself, some rely on wall text, some offer only abstractions. At the center of all art is metaphor: colors become sentiments, figures become archetypes, symbols abound. It is through metaphor that an artist expresses ideas. Good art is both surprising and relatable. A good artist blends deliberation with instinct— they intend to represent something, but remain open to the influence of their medium. By being beautiful, or funny, or eye-catching, the art invites viewers to spend time interpreting it. The artist relies on large swaths of overlapping taste, or else finds the right audience. 

Creative art presents ideas not otherwise available to the senses. Images of mythical creatures, extinct creatures, and creatures of an artist’s invention are examples of creative art. Representing these three kinds of creatures well requires a biological understanding of how animal bodies work and look, a practical understanding of an artistic medium, awareness of historical context, and an experienced aesthetic to fill in the gaps. Often the most challenging part of creation is deletion. Much creativity happens in the edit. Robert Rauschenberg is infamous for his Erased de Kooning Drawing. To paraphrase his interview on the piece by SFMOMA, it was a continuation of his series of white painted canvases. He began by erasing his own drawings but was unsatisfied, and decided it must be art first. So he went to de Kooning, whom he considered the most indisputably artistic, and asked for a drawing. After careful consideration, de Kooning selected a piece that was both dear to him and inclusive of several different media, making it as difficult to erase as possible. Rauschenberg says the piece took three months, and uncountable erasers.

Erased de Kooning Drawing
Robert Rauschenberg
1953
Drawing media on paper

Creative systems occur at micro and macro levels simultaneously. At the center of this scale is the local ecology of an artist creating and viewing their own work. To the left of the scale are subroutines of neurons and nerves. To the right are collaboratives, institutions, and arenas. The richer the subsystems, the more successful the work. This idea reveals the first opportunity for a synthetic creative system: unlike a typical human artist, a robot feels no different painting in a studio than in a gallery. Viewers are often left unaware of what goes into a piece, and artists are often secretive about their process. Viewers can watch a robot work with unabashed curiosity. If designed to work quickly, a robot can even take willing viewers on as subjects.

Wednesday, July 31, 2013

Hue Sort

My project is too big for a regular spreadsheet but not quite ready for a relational database. After testing the waters of the Python Imaging Library, I've settled on a file system to organize my colors. Six top level folders named Red, Orange, Yellow, Green, Blue, and Purple each have sixty subfolders. This allots one folder for each of the 360 hues in the color wheel. I'm going to name each of these 360 hues myself, and ask my viewers to name as many of the 16,777,216 unique hex colors as possible.
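Here's a rough sketch of that scaffolding in Python (the root folder name and zero-padded hue labels are just illustrative choices, not necessarily what my actual script uses):

```python
import os

# Six top-level color families, sixty hue subfolders each: 360 in total.
FAMILIES = ["Red", "Orange", "Yellow", "Green", "Blue", "Purple"]

def build_hue_tree(root="colors"):
    for i, family in enumerate(FAMILIES):
        for offset in range(60):
            hue = i * 60 + offset  # one folder per degree on the color wheel
            os.makedirs(os.path.join(root, family, f"{hue:03d}"), exist_ok=True)

build_hue_tree()
```

Each of the 16,777,216 hex colors can then be filed under whichever of the 360 hue folders it's nearest to.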

But I just hit a stumbling block. The 360 major computer generated hues do not fall neatly into 6 categories of 60. I made a little program to draw swatches for each hue so I could name them all; I was hoping to end up with a column for each ROYGBP. I shifted the spectrum so the hues were offset by 30 which helped a bit, but not much. There's too much Green, and not enough Yellow.
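For anyone who wants to reproduce the problem: my original swatch program was in Processing, but the same logic fits in a few lines of Python using the standard colorsys module, with the 30-degree shift applied when binning hues into ROYGBP columns:

```python
import colorsys

FAMILIES = ["Red", "Orange", "Yellow", "Green", "Blue", "Purple"]

def hue_to_rgb(hue_deg):
    """Convert a fully saturated hue (0-359) to 8-bit RGB."""
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360.0, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

# Shift the spectrum by 30 degrees so each ROYGBP column is centered
# on its primary or secondary, then bin 60 hues per family.
columns = {name: [] for name in FAMILIES}
for hue in range(360):
    shifted = (hue + 30) % 360
    columns[FAMILIES[shifted // 60]].append((hue, hue_to_rgb(hue)))
```

Rendering the swatches makes the imbalance easy to see: hue 90 lands in the Yellow column but already reads as green to my eye.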

The visible spectrum image from Wikipedia looks much more balanced. I wonder if it's rendered in something more attuned to this than Processing, or if the person who generated it did some lerping. Either way, I don't think I'm going to use the Processing hues as the second tier of my color hierarchy. I'll hand-pick them. I'll deal with Magenta (which occurs in the color wheel but not the rainbow) by splitting it across Red and Purple.

Below the Wikipedia image are the spectrums from Processing and the ColorPy library I will be using in the near future. ColorPy is better, but still not great. Computer screens are a matrix of Red, Green & Blue LEDs— displaying secondary and composite colors by blending and gamma correction— which might also account for the Yellow deficit. I look forward to building my own spectrum, and finding the paint and light equivalents.

Thursday, July 25, 2013

Pair Programming

Tom and I did some pair programming last week: he drove (wrote the code) and I navigated (dictated instructions). What we ended up with (link to GitHub) was something that crawls through text looking for color words. It divides the text into lines, and the lines into words. Any time the words red, orange, yellow, green, blue, or purple come up, the other words in that line are filed into a matrix. Once the algorithm is finished, it returns the set of words that have more than one correlation. For example, given the text, The Loves of Krishna in Indian Painting and Poetry by W. G. Archer, the program returned this:
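For anyone who'd like to tinker, the core idea fits in a few lines of Python. This is a simplified sketch of the approach, not the exact code from our repo:

```python
from collections import defaultdict

COLOR_WORDS = {"red", "orange", "yellow", "green", "blue", "purple"}

def color_collocations(text, min_count=2):
    """Count words that share a line with a color word; keep only the
    words that co-occur more than once."""
    counts = defaultdict(lambda: defaultdict(int))
    for line in text.lower().splitlines():
        words = line.split()
        for color in COLOR_WORDS & set(words):
            for word in words:
                if word != color and word not in COLOR_WORDS:
                    counts[color][word] += 1
    return {c: {w: n for w, n in ws.items() if n >= min_count}
            for c, ws in counts.items()}
```

Feeding it a text where "krishna" keeps appearing on lines with "blue" returns {"blue": {"krishna": 2}}, which is the kind of correlation the crawler is after.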

The only really interesting result here is that Krishna is indeed blue (the literal translation of the name is "black" or "dark" but most depictions give him blue skin). It's singular, but still very exciting. One text is too small a dataset, so I'm building up some compilations of my favorite poets, transcendentalists, aesthetic philosophers, etc. I'll also tokenize the text by sentence rather than line (except in poems), and weigh the associated words by how close they are to the color word (so that in the line, red shoes by the newsstand, shoes gets more points than newsstand). I'll also add the words color/s and colour/s.

The results improved dramatically when I added words like black, dark, white, and light. These words are used much more often, particularly in metaphor. It occurred to me to start collecting those for Nila, my black and white painting robot, and I'm thrilled with the idea.

Thursday, July 18, 2013

Neko's Brain Trust

On Monday, Neko and I were successfully funded on Kickstarter! It's tremendously exciting and I can't thank my supporters enough. I just put in a big motor order and look forward to a minor rebuild. As for software, there are two things I'm now teaching Neko: how to select a digital color given a set of words, and how to select pigments to represent that digital color. My new vocabulary for the day is distributional semantics. This is a practice based on the idea that words with similar distributions have similar definitions; as J. R. Firth put it, "You shall know a word by the company it keeps." I'm building a text crawler that will look for my six core words— Red, Orange, Yellow, Green, Blue, Purple— and find words that are highly collocated with each. I'm picking out old public domain books to walk through, and would love suggestions on books with a high frequency of these words. While I'm waiting for the new motors to arrive, I'll put all the most-associated words into my database. Some of the Kickstarter funds will be allocated to a Mechanical Turk bid, though I'm not sure quite what to ask yet.
The second task is matching the pigments to the colors on-screen. For this I've ordered a color sensor which Neko can use to compare what's on the palette to the target color. I've picked a set of mixing pigments— Cadmium Red Medium, Cadmium Yellow Lemon, and Phthalo Blue— to be Neko's go-to adjusters.
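I haven't written the matching code yet, but the basic loop might look something like this. Plain Euclidean distance in RGB and a linear 10% blend are stand-ins (real pigment mixing is subtractive, and a perceptual metric like CIELAB delta-E would be better), and the pigment RGB values are rough approximations:

```python
import math

# Approximate screen RGBs for the mixing pigments (illustrative values only).
ADJUSTERS = {
    "Cadmium Red Medium": (227, 38, 54),
    "Cadmium Yellow Lemon": (255, 244, 79),
    "Phthalo Blue": (0, 15, 137),
}

def pick_adjuster(palette_reading, target):
    """Choose the pigment that nudges the palette mix toward the target,
    simulating a 10% addition of each adjuster with a naive linear blend."""
    best, best_d = None, math.dist(palette_reading, target)
    for name, rgb in ADJUSTERS.items():
        candidate = tuple(0.9 * p + 0.1 * a for p, a in zip(palette_reading, rgb))
        d = math.dist(candidate, target)
        if d < best_d:
            best, best_d = name, d
    return best
```

Given a grayish sensor reading and a bluish target, this suggests Phthalo Blue; it returns None when no adjuster improves the match, which is the cue to stop mixing.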

Monday, June 24, 2013

Random Walk to Remember

The random walk is my Hello World— the foundation code I always write to get things up and running. If I'm using a new visualizer, I try to make it look like TV static. I'll use random colors or numbers or motions to get a broad sense of what my new system can do.

I'm pleased with Neko's code so far; it's very well-organized. The random walk is my best yet, so I'll explain it for the sake of anyone who doesn't know how to read code but would like a little puzzle to try out.

This is a screenshot of one of the functions in my program: randomWalk(). I tell the function to repeat infinitely with two parameters: the maximum distance the brush should jump, and the time to pause after each iteration. My current parameters are 50 steps and 500 milliseconds, so when the computer reads the instruction randomWalk(50, 500) it will run this piece of code with the variables jump and del set to 50 steps and 500ms, respectively.

This function uses randomness in two ways: it randomly picks one of the six motors, then it randomly picks the number of steps for that motor to move.
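For readers who prefer Python to Arduino C, here's the same logic as a simulation. The motor count and parameter values match my setup; the print statement stands in for the actual motor commands:

```python
import random
import time

NUM_MOTORS = 6

def random_walk_step(jump=50):
    """One iteration of the random walk: pick one of the six motors at
    random, then pick a step count between -jump and +jump."""
    motor = random.randrange(NUM_MOTORS)
    steps = random.randint(-jump, jump)
    return motor, steps

def random_walk(jump=50, delay_ms=500, iterations=None):
    """Simulates randomWalk(50, 500); runs forever when iterations is None."""
    done = 0
    while iterations is None or done < iterations:
        motor, steps = random_walk_step(jump)
        print(f"motor {motor}: move {steps} steps")  # stand-in for a motor command
        time.sleep(delay_ms / 1000.0)
        done += 1
```

Calling random_walk(50, 500) mirrors the instruction in the screenshot: one random motor, one random jump, a half-second pause, repeat.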

Neko also has a motorized shoulder which is not moved in the random walk, but instead responds to the ultrasonic sensor. This keeps the brush on the canvas. The accelerometer is read in a balanceBrush() function, but I don't use it much yet. There's also a readGyro() function, adapted from this tutorial, which prints the gyroscope readings but does nothing more; another not-yet. A zip file of Neko's Arduino program can be found here— a link I'll update with each new version.

Reading live data as scrolling lines of serial output can be quite meditative. I just disabled the motors and moved the brush around the canvas by hand, allowing my own neural network to get a sense of how the sensor data changes while drawing different shapes and lines. As a stepping-stone to a Kalman filter, I'll now try writing a complementary filter to combine accelerometer and gyroscope data and determine how the arm is moving.
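As a first stab, the complementary filter really is just one line: integrate the gyro for short-term accuracy and lean on the accelerometer to correct long-term drift. The alpha weight is a tuning knob; 0.98 is a common starting point, not gospel:

```python
def complementary_filter(angle, accel_angle, gyro_rate, dt, alpha=0.98):
    """Fuse a gyro rate (deg/s: fast but drifty) with an accelerometer
    angle (deg: noisy but drift-free) into one smoothed angle estimate."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Each time new sensor readings arrive, the previous estimate is fed back in: angle = complementary_filter(angle, accel_angle, gyro_rate, dt).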

Thursday, June 20, 2013

Beeping with Robots

The central objective of my work is to allow robots to express themselves using oil paint. But today I've been thinking about incorporating another mode of expression: beeps. My first Roomba was a castaway from my mother who, though an enthusiastic early adopter, has vacuuming standards the robot was unable to meet. Eventually I tried teaching it to paint, which went well until it went poorly. Its stair-detection feature failed (presumably paint-related) and it plummeted off a sawhorse.

Now I have a newer model Roomba and although it's better at vacuuming, the updated voice is a huge disappointment. The red model had an all-beep vocabulary. The white one has the saccharine GPS navigatress saying things like "open Roomba's brush cage" or "please charge Roomba." It's speaking in the third person, in parroted English. The Roomba only needs to vocalize three things: I'm ready to clean, I'm in distress, I've successfully cleaned. Whatever caused the distress is apparent as soon as it's got your attention.

I've been testing out some basic emotive themes on my keyboard today, and came up with some beeps to try out on Neko.

Monday, June 17, 2013

Terra Verte

Little Neko finally got some paint tonight: Terra Verte. He's just doing a random walk, using the ultrasonic sensor to keep the brush at the canvas, and the accelerometer to stay upright. I'm hoping that eventually, using a random walk and a canvas, Neko could learn to address the canvas as an X-Y plane. Writing something like that isn't easy, but it's less tedious than hard-coding all the different angles. I'm hoping I can get away without putting potentiometers at each joint.

A List of Robot Artists

I am not alone in my efforts to build artistic robots. I've seen many, some already decades old. This post will serve as a list to be expanded with time. It won't be comprehensive— there are too many, often similar— just my favorites. If you know of any I've missed, please leave a comment.


Tuesday, June 11, 2013

A New Subroutine

I married a man who obviously had 60+ years of interesting conversation in him, and a roadtrip yesterday was no exception. We were considering the information content of photographs versus paintings. Our general consensus was that they're equal in richness, but tell different stories. To test this theory I will start with a photograph, try to replicate it in paint, then slowly replace information until I have something different enough to be interesting. I think the way I'd like to do this is take a photograph of me, Tom, and our little dog Hector. I'll make a series of paintings, each conveying a more abstracted version of the same image. In a little envelope taped behind the canvas will be my intended information delta.

Thursday, May 30, 2013

Sensing Brush Location

Getting Neko back and forth across the country proved extremely difficult, so I've been rebuilding him at half-size. In the process I decided to add a gyroscope (measures tilt about 3 axes) to the accelerometer (measures static and dynamic acceleration, due to gravity and motion respectively) and ultrasonic sensor (measures distance to the nearest object by emitting an inaudible ping, and timing how long it takes to bounce back). In Neko's first iteration, I had the accelerometer on the shoulder joint, telling those motors to keep the arm generally upright. I had the ping sensor on the wrist, telling the shoulder motors to keep the brush at the right distance from the canvas. The wrist motors moved the brush back and forth as the elbow motor moved from the top of the canvas to the bottom. The shoulder had continuous rotation servos, all others were standard servos.

For half-pint Neko, all the servos are continuous rotation. I was aiming for uniformity, and the ability to scale up to stepper motors if I rebuild the robot at full-size. The accelerometer will tell me pitch and roll, but I need pitch and yaw. I think a gyroscope can fill the gaps. I got some trusty-sounding advice from r/robotics on sensor placement, and now have everything mounted together. I'm still pretty confused, and scared of whatever Unscented Kalman Filtering is, but I'm making progress. My current technical difficulty is that I don't know how to use the gyroscope in Python (the Prototyping library is extremely limited, and not really meeting my motor needs either), and don't know how to log data without it (Firmata seems deprecated). 
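For my own notes, here is how pitch and roll fall out of a static accelerometer reading, using gravity as the reference vector. This is also exactly why yaw is unobservable from the accelerometer alone (rotating about gravity doesn't change the reading), and why the gyroscope has to fill that gap:

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Estimate pitch and roll in degrees from accelerometer axes,
    assuming the only acceleration present is gravity."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

With the arm level, (ax, ay, az) reads roughly (0, 0, 1) in g-units and both angles come out near zero.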

Friday, April 26, 2013

The Robotic Gaze

I began the morning by reading a fantastic summation of the male gaze. Not the happiest way to start out, but a subject at the back of my mind. I recently watched two movies at opposite ends of a cultural spectrum: The Godfather, and Cannonball Run II. The latter was so unredeeming I left the room and was bitterly chastised for over-sensitivity. The former had obvious merit and I stayed the course. But it was so male dominated that I didn't feel offended, I felt bored. There was so little I could relate to. My interest was sustained by the beauty of the shots and Marlon Brando's acting, and by my husband's delivered promise of a good parting shot. The only woman who really speaks is Diane Keaton. She's wonderful, but in her two big scenes she's a walking incarnation of this footnote from The System of Objects:
12. 'Loud' colours are meant to strike the eye. If you wear a red suit you are more than naked — you become a pure object with no inward reality. The fact that women's tailored suits tend to be in bright colours is a reflection of the social status of women as objects. - Jean Baudrillard, 1968

It was fortuitous to read this text and see this movie for the first time within weeks of each other. I couldn't stop thinking about it. I love this section of the book; here is what surrounds the footnote:
The world of colours is opposed to the world of values, and the 'chic' invariably implies the elimination of appearances in favour of being:[12] black, white, grey — whatever registers zero on the colour scale — is correspondingly paradigmatic of dignity, repression, and moral standing.

'Natural Colour'
Colours would not celebrate their release from this anathema until very late. It would be generations before cars and typewriters came in anything but black, and even longer before refrigerators and washbasins broke with their universal whiteness. It was painting that liberated colour, but it still took a very long time for the effects to register in everyday life. - p. 30
The good news here is that robots are ungendered. They're unburdened by history, with the potential for an aesthetic unshaped by its influence. Maybe they'll show us that the female human form is objectively beautiful and we were right all along. Maybe they'll show us something different. I really love the photos my robot Nila takes while she's painting, and am trying to figure out how to expand this with Neko.

The viewer never saw the image feed from Nila's camera, because people like looking at their own image, and I wanted them to look at Nila. I think I'll change this with Neko, and am experimenting with different modes of interaction.

To cleanse the palate, here is my favorite piece of art on the subject. The camera is given its rightful place as neutral observer. Below is an interview with the artist by MoMA.

Picture for Women, Jeff Wall, 1979

Thursday, April 25, 2013

Learning About Learning

I started Andrew Ng's Coursera class on Machine Learning this week. It's fun so far, and I've learned some new terminology to help frame my goals. There are two domains in which I aim to use ML: 1) learning to associate colors with words through an expansive database, and 2) learning to recommend a color given a text prompt. Here's a tidy definition of ML, quoted by Ng, and how I think it can be applied to both of my domains:
A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E. - Tom Mitchell, 1998
1) In the case of Neko learning from datasets:

   E is the collocation of colors and words in a database.
   T is the clustering and re-clustering of colors with words.
   P is the score of the clusters (how well-sorted they are).

An example of k-means clustering

2) In the case of Neko learning from people:

   E is testing colors on different individuals.
   T is returning a color, given some text.
   P is the number of well-liked colors.

An example of a support vector machine

In categorical terms, Case 1 is unsupervised clustering and Case 2 is supervised classification. K-means is a likely algorithm for the former, and a support vector machine for the latter. Because order is meaningful (Orange is closer to Red than Yellow), color is a regression problem with continuously valued output. But there is a sense in which colors are discrete as well, so that's what I'm mulling over now.
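To make Case 1 concrete, here is a toy k-means in plain Python, clustering colors as RGB tuples. A real run would use a proper library and smarter initialization; seeding the centroids with the first k points is the naive shortcut here:

```python
import math

def kmeans(points, k, iterations=20):
    """Minimal k-means sketch: group colors (as RGB tuples) into k clusters.
    Naive initialization: the first k points seed the centroids."""
    centroids = [tuple(map(float, p)) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its members.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(d) / len(members) for d in zip(*members))
    return centroids, clusters
```

Given a handful of reds and blues, the two clusters separate cleanly; the interesting question for Neko is what the centroids mean once word-association scores are added as extra dimensions.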

Monday, April 22, 2013

How Color Works

Color is a signal. It's an interaction between radiating bodies, reflective surfaces, and vision systems. Radiating bodies, like a lamp or the Sun, undergo a physical process that emits energy in tiny packets called photons. These photons travel as waves, with wavelengths varying across the range of the electromagnetic spectrum. At the low-energy end of the visible sub-spectrum is red, with a very long wavelength. At the high-energy end of the range is violet, with a very short wavelength. After traveling the path of least time, the photons hit the retina of the eye, stimulate rod and cone cells tethered through the optic nerve to the brain, and become the phenomenon of light, color, and image.
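The wavelength bands are worth pinning down. These are the commonly quoted textbook approximations; the boundaries are fuzzy and vary from source to source:

```python
# Approximate visible-spectrum bands in nanometers (boundaries are fuzzy
# and differ between references; these are common textbook values).
SPECTRUM_NM = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def wavelength_to_name(nm):
    for lo, hi, name in SPECTRUM_NM:
        if lo <= nm < hi:
            return name
    return None  # outside the visible range
```

Note how narrow the yellow band is compared to green: the hue imbalance I keep running into on screen has a counterpart in the physical spectrum.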

Twilight in the Wilderness, Frederick Edwin Church,  1860
Consider this Hudson River School painting of a sunset. We call sunlight white but it's technically golden, and this is exaggerated at sunset, when atmospheric haze makes it easier to see the star. The light is generally white because the photons emitted by the Sun are full-spectrum. This does not mean all photons travel at an average wavelength; some photons travel on the red wavelength, some on the green, etc. Each color occurs in roughly equal proportion, so the beams blend into white. Church used Lead White (now deprecated) under secretive blends of oil, wax and resin to give his painted Sun internal reflection.

As matter interacts with photons on the journey to Earth, the wavelength dictates each color's reaction. The sky is blue because the paths of the short blue waves are more prone to collision; blue light scatters easily off the gases and dust of the atmosphere. Church used a pigment called Cerulean, which was developed for skies, despite being a little too green. Interestingly, if our vision were more accurate, the sky would be violet. Our brain makes it blue. Our art history makes it turquoise.

Clouds are wonderful paint and color subjects. They're generally white, because the water droplets that compose them scatter sunlight and make them glow. When clouds are dense with vapor, on the brink of rain, less light escapes and they turn gray. At twilight only the photons on longer waves reach the clouds, lighting them up red like fire. Of course that's a bad analogy, because we tend to see fire at relatively low temperatures— reds and yellows are called 'warm colors' while blues and greens are called 'cool colors,' despite the fact that blue light is hotter than red.

The remaining colors in the painting are desaturated but mostly green: the predominant color of plant-life. Terra Verte is a likely pigment. Plants live through a process called photosynthesis: photons from sunlight provide the energy to chemically bond carbon dioxide to water, forming carbohydrates. The subroutine of translating light energy to electric energy occurs in the photosynthetic reaction center: a mesh of proteins and pigments. Chlorophyll is a typical pigment— chlorophyll absorbs the blue and red light, and reflects the green. If a plant's leaf is the color of Terra Verte, it's because the leaf's pigments absorb the rest of the visible spectrum and send the green wavelengths back to the eye. A well-informed painter might also know Terra Verte has been used as an undertone for flesh since the 11th Century, and can subtly anthropomorphize a landscape.

Friday, April 19, 2013

Supplying a Robot with Paint

Paint delivery is one of the more challenging aspects of art robotics. Paint is messy, wet, complex, and ruins a brush that's allowed to dry. I use oil paint because it's extremely beautiful, but also because it's a particularly difficult medium that art roboticists tend to avoid.

Spray paint is the most common solution to the problem. (Ink works as well; I'll address that in a future post on drawing.) The first artistic robots, and the ones I consider most successful, are spray painters. They work in factories worldwide, like this trio in Germany:

This is an ideal application for robots. It's a toxic setting that requires an absolutely steady, smooth and consistent hand. Spray paint requires a constant distance from the surface to create an even coat, which is much easier for a robot to maintain than for a person. Though the paint won't give them respiratory problems, it can still gum up the works, which is why they're fitted with protective socks.

My 6 month old palette
It takes a lot of dexterity to get oil paint onto a palette: selecting, grabbing, moving and squeezing the tube, along with several rounds of cleaning the brush when finished. I've tried every mechanical delivery system I could think of (paint rolled out of tubes by motorized rods, linseed oil delivered through a solenoid-opened tube to dry pigment, hanging bottles of diluted paint with mechanized lids). Until my robots have a hand built for more than a paintbrush, it's something I'll do for them, so we can keep using my beloved oil paint.

Monday, April 15, 2013

Neko the Modernist

Modernism is a movement in philosophy and art, concerned with self-examination. It begins by discarding all a priori knowledge, and building up a system of evidence-based understanding. The modernist uses their medium to examine the medium itself.
“[Modernist art] happens to convert theoretical possibilities into empirical ones, in doing which it tests many theories about art for their relevance to the actual practice and actual experience of art.” Clement Greenberg, Modernist Painting, 1960
Neko's database will allow him to correlate words to colors to pigments. It's my hypothesis that he'll reveal a wealth of information about how color can be used in painting to represent different concepts. I consider him a color field painter.
“Through its fetishization of the base, the sculpture reaches downward to absorb the pedestal into itself and away from actual place; and through the representation of its own materials or the process of its construction, the sculpture depicts its own autonomy.” Rosalind Krauss, Sculpture in the Expanded Field, 1979
Autonomy is the goal as I reconstruct Neko. This will come through 1) a well-balanced feedback system with me, viewers, and the work; 2) a unified, articulate, transportable body and pedestal; 3) a well-organized archive.

Why teach robots to oil paint?

Some form of this question is the one I'm most frequently asked. I have many answers but the one that's most honest is probably the least satisfying: I do it because I like it. I like bringing arms to life from unassuming piles of desk clutter. I like thinking about all the surrounding ideas, reading the research, and discussing it with everyone I can. It seems like the right thing to be doing.

More often, my answers to this question are: 1. as a gesture of friendship, 2. to learn more about oil painting, 3. to uncover an objective standard of taste. These are the answers I'll be addressing in this blog using examples from art history, philosophy, color theory, and computer science.