Sunday, October 7, 2018

Steampunk city-sized computers

"When the vast extent of a machine sufficiently large to include all words and sequences is considered, we observe at once the absolute impossibility of forming one for practical purposes, in as much as it would cover an area exceeding probably all London, and the very attempt to move its respective parts upon each other, would inevitably cause its own destruction."

From The Process of Thought Adapted to Words and Language, by Alfred Smee, 1851

Monday, October 1, 2018

Logic Machines and Diagrams

I just got from Amazon a little-known book by Martin Gardner called Logic Machines and Diagrams. The cover shows Ramon Llull's diagram, the same one that was the cover of the prototype copy of Machinamenta. Here's the first paragraph of the foreword:
If the various branches of discovery were to be measured by their relative antiquities, then of all scientific pursuits the mechanization of thought must be the most respectable. The ancient Babylonians had mechanical aids to reckoning, and in Plato's time geometers were already building machines to support formal derivations. 
I think this is going to be my kind of book!  

Friday, January 5, 2018

Semantic Primes

I've been thinking about how to build up a dictionary the way you would if you were trapped on a desert island with someone who didn't speak your language and you wanted to teach it to them from scratch. I got a very interesting book called Semantics: Primes and Universals by Anna Wierzbicka that talks about the very first words you would use to start such a project. The "semantic primes" are around 40 words that she takes as undefinable but from which a much larger number of words can be defined. She says that these words are special because every human language has a word (or a sense of a word) that is a direct translation of each of these 40 words, and that they are among the first concepts that children learn to express. She also talks a little about a simplified grammar that lets you combine these words.

learnthesewordsfirst.com is an online dictionary/lesson plan that builds up English in this way. It starts with

  • 61 semantic primes, defined mainly by pictures and examples. Using only these 61 words, it defines
  • 300 "semantic molecules." Using only these semantic molecules, it defines
  • 2,000 words used in the Longman Defining Vocabulary. These words are used to define
  • 230,000 words in the Longman Dictionary of Contemporary English.



Now imagine that you found a way to program the meaning of these 61 words into a robot, and programmed in the ability to read a sentence using these 61 words and derive the meaning of a new word from that. You could build up to the meaning of all the words in the dictionary this way. Programming those first 61 words and the grammar would be challenging, but I don't think impossible.
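
As a toy illustration of the bookkeeping such a program would need, here's a minimal Python sketch. The prime list and the definitions in it are made-up stand-ins (not the real 61 primes or the site's wording); the only point is the rule that a new word is accepted only if its definition uses words that are already known:

```python
# Toy sketch of layered vocabulary bootstrapping. A word is only accepted if its
# definition uses words the system already "knows", starting from a seed of primes.
# The prime list below is a small illustrative subset, not the site's real 61 words.

SEMANTIC_PRIMES = {
    "i", "you", "someone", "people", "something", "thing", "this", "other",
    "one", "two", "some", "all", "good", "bad", "big", "small", "kind", "part",
    "do", "happen", "know", "think", "want", "see", "say", "not", "can",
    "because", "if", "like", "very", "more", "when", "where", "inside",
}
GRAMMAR_WORDS = {"a", "an", "the", "is", "are", "of", "that", "it", "to", "and"}

known_words = set(SEMANTIC_PRIMES) | GRAMMAR_WORDS
definitions = {}

def add_word(word, definition):
    """Accept `word` only if its definition is built from already-known words."""
    tokens = definition.lower().replace(",", " ").replace(".", " ").split()
    unknown = set(tokens) - known_words
    if unknown:
        raise ValueError(f"{word!r} depends on unknown words: {sorted(unknown)}")
    definitions[word] = definition
    known_words.add(word)

# Made-up definitions, building upward layer by layer:
add_word("animal", "a kind of thing that is not like people, this kind of thing can do something")
add_word("cat", "a kind of small animal, people like this kind of animal")
add_word("kitten", "a very small cat")

print(sorted(definitions))  # ['animal', 'cat', 'kitten']
```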



I don't think this would be sufficient to understand everything about those concepts. Suppose I gave the definition "an arc-shaped fruit, around 6-12 inches long, with soft white flesh and a skin that is green when unripe, yellow when ripe, and soft and brown when overripe, and grows in bunches." This would be enough to pick out a banana from any other food in the supermarket, but it wouldn't tell you much about what a banana really looks like. You wouldn't be able to recognize a banana split from such a definition. A good definition generally tells you just enough to distinguish the item from any potential confusers. But it would be an excellent start that you could begin to flesh out with other capabilities.   

Wednesday, December 6, 2017

Some of my recent experiments with style transfer

These combine style transfer with some manual editing. I'm using the deep learning output as a tool to get images I like, and to see what I can get it to do. Some of the tricks I've learned:


  • If you want to add details to a real item, make sure that the scale and lighting are consistent between the source and style images.
  • Working in monochrome is generally more successful than depending on the style transfer to get the colors right.
  • Bas-relief-style details and other shallow sculptural details work really well.
  • If two images are very similar, you can use the details of the high-resolution one to enhance the low-resolution one. But this requires a very close correspondence.
  • You can make fractals by doing the low-res to high-res trick repeatedly with the same image, zooming in on small parts of it.
  • You can force the output to have symmetry by making sure both the source and style image are symmetric. This works especially well when the style image is symmetric with some shallow depth but lit from one side; then the output will be lit that way, too.
  • You can use frequency decomposition to add details to an image without affecting the overall composition. In Photoshop, this is done by the following process (a code sketch of the same split follows this list):

    • detail image: high pass at radius n, linear light blending mode, 50% opacity, on top of:
    • background image: gaussian blur at radius n
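
Here's a rough NumPy/SciPy equivalent of that Photoshop layer stack -- a sketch only, assuming aligned, same-size RGB images, with hypothetical filenames, and using gaussian_filter's sigma as a stand-in for Photoshop's radius (the two aren't an exact match):

```python
# Frequency split: keep the low frequencies (composition) of one image and add the
# high frequencies (detail) of another. Equivalent in spirit to the layer stack above.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def low_pass(img, sigma):
    # Blur only the spatial axes of an (H, W, 3) RGB array.
    return gaussian_filter(img, sigma=(sigma, sigma, 0))

def transfer_details(background, detail_source, sigma=4.0):
    """Keep the overall composition of `background`, add the fine detail of
    `detail_source`. The two images must be the same size and roughly aligned."""
    bg = background.astype(np.float32)
    det = detail_source.astype(np.float32)
    high = det - low_pass(det, sigma)   # high-pass residual: detail only
    out = low_pass(bg, sigma) + high    # plays the role of the linear-light layer
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical filenames:
composition = np.asarray(Image.open("style_transfer_output.png").convert("RGB"))
details = np.asarray(Image.open("high_res_details.png").convert("RGB"))
Image.fromarray(transfer_details(composition, details, sigma=4.0)).save("combined.png")
```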


Friday, June 30, 2017

Generation of new artistic styles


A deep learning system that can apply an existing style to a new image is one thing, but can an AI actually generate a new artistic style? These researchers (an assorted group including people from Rutgers and Facebook) have experimented with using generative adversarial networks to create artwork that is recognized by another neural network as artwork, but doesn't seem to come from any existing style. Personally I like the clouds on the upper right the best, but there are interesting things going on in all the images-- I like how it uses color gradients in particular. It's an interesting new advance, but to feel like art I think one thing this lacks is a motivation for choosing a particular subject. I would like to see a system that makes aesthetic choices with the goal of expressing an idea about the world that comes from having experienced and thought about the world.
The following images generated by the adversarial system were the ones judged most highly by human subjects in their respective categories:
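
For anyone curious what the objective described above looks like when written down, here's a short PyTorch-style sketch. The discriminator here is hypothetical (it's assumed to return both an art/not-art logit and per-style logits), and this is only an illustration of the idea, not the exact loss from the paper:

```python
# Sketch of a two-part generator objective: "looks like art" + "not like any one style".
import torch
import torch.nn.functional as F

def generator_loss(discriminator, fake_images):
    """`discriminator` is a hypothetical model returning (art_logit, style_logits):
    an art/not-art score plus a classification over the known art styles."""
    art_logit, style_logits = discriminator(fake_images)

    # 1) The generated images should be judged to be art.
    art_loss = F.binary_cross_entropy_with_logits(
        art_logit, torch.ones_like(art_logit))

    # 2) ...but the style head should be maximally unsure which known style they
    #    belong to: cross-entropy against a uniform distribution over styles,
    #    which reduces to the negative mean log-probability across style classes.
    log_probs = F.log_softmax(style_logits, dim=1)
    style_ambiguity_loss = -log_probs.mean(dim=1).mean()

    return art_loss + style_ambiguity_loss
```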


Link to the paper


Wednesday, February 22, 2017

Estimating photos from sketches


When I finished writing Machinamenta six years ago, I suggested some things that could be done to make artificial creativity go beyond simple kaleidoscope patterns. In many ways, deep learning software has surpassed the suggestions I put forward. Here is another example. 
This work comes out of the Berkeley AI research lab at the University of California, Berkeley. Alexei Efros is a familiar name-- he worked on image quilting and automatic photo pop-up and was at CMU (along with Martial Hebert and Abhinav Gupta) during the period I was working with them professionally.
The way this works is that a neural network is trained on pairs of images. The right-hand image of each pair is a photograph of a cat; the left-hand image is an automatic edge detection on the photograph, using the HED edge detector. This means that no humans were needed to create the training data-- important because of how many training samples are needed. It does mean that the edges it is looking for are not necessarily the ones people perceive as most important, but modern edge detectors like HED do a much better job of that than the Canny edge detector, which was the best available when I first started working on computer vision.
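
Here's a minimal sketch of that kind of automatic pair generation. OpenCV's Canny detector stands in for HED (which needs a pretrained network), and the folder name is hypothetical, so treat this as an illustration of the pipeline rather than the authors' actual preprocessing:

```python
# Automatic training-pair generation: (edge map, photo) with no human labeling.
# Canny stands in for HED here; HED itself requires a pretrained network.
import glob
import cv2  # OpenCV

def make_pair(photo_path, size=256):
    photo = cv2.imread(photo_path)
    photo = cv2.resize(photo, (size, size))
    gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)
    # The network is then trained to map the edge image back to the photo.
    return edges, photo

for i, path in enumerate(sorted(glob.glob("cat_photos/*.jpg"))):  # hypothetical folder
    edges, photo = make_pair(path)
    cv2.imwrite(f"pairs/{i:06d}_edges.png", edges)
    cv2.imwrite(f"pairs/{i:06d}_photo.png", photo)
```
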
The sketch contains far less information than the photograph. The only reason it is possible to do this at all is that the system has a great prior model of what cats look like, and does its best to fit that model to the constraints of the sketch. I wonder if drawing a Siamese profile would be enough of a hint to give it Siamese coloration? What happens if you try to draw a dog, or a pig, or a house instead?

Try it out yourself; it's a lot of fun.

The original paper