New tests for AI

Kevin Kelly points to a list of new tests for AI, now that it's whupped human champs of chess, Jeopardy, and Go. A few of my favorites below. I hope I live to see some of these. Such intelligence will have no patience for putting human morons in charge of anything important.

9. Take a written passage and output a recording that can’t be distinguished from a voice actor, by an expert listener.

18. Fold laundry as well and as fast as the median human clothing store employee.

26. Write an essay for a high-school history class that would receive high grades and pass plagiarism detectors. For example, answer a question like ‘How did the whaling industry affect the industrial revolution?’

27. Compose a song that is good enough to reach the US Top 40. The system should output the complete song as an audio file.

28. Produce a song that is indistinguishable from a new song by a particular artist, e.g. a song that experienced listeners can’t distinguish from a new song by Taylor Swift.

29. Write a novel or short story good enough to make it to the New York Times best-seller list.

31. Play poker well enough to win the World Series of Poker.

Chaos: Making a New Science

“The first popular book about chaos theory, it describes the Mandelbrot set, Julia sets, and Lorenz attractors without using complicated mathematics. It portrays the efforts of dozens of scientists whose separate work contributed to the developing field. The text remains in print and is widely used as an introduction to the topic for the mathematical laymen.” (Wikipedia)

This book was tough sledding for me. I got about half, maybe. Still, I came away with some appreciation for the brilliance of the people who birthed this “new science.”

Can only humans act?

Digital effects will make Robert De Niro look decades younger in his new Scorsese movie.
For me this raises interesting questions about the essence of acting. We’ve long been able to create backgrounds and scenes with CGI that are nearly impossible to distinguish from ‘the real thing.’ So where does the acting happen? Facial expression? The body? The tone and inflection of the actor’s voice? If an AI captures and then perfectly reproduces De Niro’s voice, is that acting? Will we notice or care? Are we close to someone (some thing) passing a cinematic Turing Test?

Japanese white-collar workers replaced by AI

“One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with “IBM Watson Explorer,” starting by January 2017. The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring injuries, patient medical histories, and procedures administered. Automation of these research and data gathering tasks will help the remaining human workers process the final payout faster, the release says.”

“Fukoku Mutual will spend $1.7 million (200 million yen) to install the AI system, and $128,000 per year for maintenance, according to Japan’s The Mainichi. The company saves roughly $1.1 million per year on employee salaries by using the IBM software, meaning it hopes to see a return on the investment in less than two years. Watson AI is expected to improve productivity by 30%.”
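The quoted figures are enough to check the payback claim with a little back-of-envelope arithmetic. A minimal sketch, using the article's numbers; the idea of netting maintenance out of the salary savings is my assumption about how "less than two years" was reached:

```python
# Back-of-envelope check of Fukoku Mutual's payback math.
install_cost = 1_700_000    # one-time install cost ($, per the article)
maintenance = 128_000       # annual maintenance ($)
salary_savings = 1_100_000  # annual salary savings ($)

net_annual = salary_savings - maintenance     # savings after maintenance
break_even_years = install_cost / net_annual  # years to recoup the install cost

print(f"net annual savings: ${net_annual:,}")
print(f"break-even: ~{break_even_years:.2f} years")
```

That works out to roughly $972,000 a year in net savings and a break-even point of about a year and nine months, consistent with the company's "less than two years."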

Japanese Boutique Sells Jeans That Have Been Worn for at Least a Year

“While used denim is generally sold at a discount, these particular jeans (The Onomichi Denim Project) actually get about twice as expensive after being worn by somebody almost daily, for at least a year. […] They hand-pick the wearers from the local community and closely monitor their transformation over the course of one year. Wearers rotate through two pairs of jeans that they promise to wear almost every day for the entire period, and bring them to the shop every week, to be laundered at a special denim processing facility, which ensures that every pair retains the evidence of each wearer’s life and work. […] When the pre-wearing period ends, each pair of jeans is washed according to color, hang-dried or tumbled, checked for individuality, tagged with detailed descriptions and put on sale at the minimalist Onomichi Denim Project boutique for anywhere between ¥25,000 ($215) and ¥48,000 ($415). That’s about twice what they usually cost when new, but these are not just any jeans, they are cultural artifacts.”

Yesterday I nearly forced myself to toss the Levis below. The denim is so thin and soft you can poke your finger through the fabric. I keep thinking they’ll dissolve the next time I run them through the wash. Easily 10 or 15 years old. I think we’ll keep them a while longer. They are cultural artifacts, after all.

Encoding presets in HandBrake

Back in the early days of online video, it was (for me) a three-step process. Shoot the video; edit the video; encode the video for uploading to (eventually) YouTube, Vimeo, Facebook, etc. And for a long time, encoding was a Dark Art. Lots and lots of hidden settings that — if properly optimized — resulted in a file that didn’t take 8 hours to upload and still looked pretty good when streamed. I think most of that voodoo now happens behind the scenes and we mortals don’t have to think about it.

My net connection is via DSL and while it’s okay (8 Mbps) coming down, it’s damned slow going up, so even a short video can take a while to upload. To address the problem, I run my videos through a program called HandBrake. HandBrake converts video from nearly any format to a selection of modern, widely supported codecs; it’s free and Open Source; and it runs on Windows, Mac and Linux.

The program has been around for 13 years and I first used it to rip songs from CDs. Don’t have much call for that anymore but along the way I discovered it was really good at encoding video for streaming online. I won’t get into features here except to say the latest version has a bunch of handy presets. Experts can tweak and optimize to their heart’s desire.

I know, I know… this is getting long. I’ll hurry.

Yesterday I recorded a bit of a song I’m trying to learn and wanted to see what kind of video I could get recording directly to my iPhone using the built-in mic. Not all that great. And it was too big to email to my buddy Professor Peter, so I started playing with some of the new encoding presets in HandBrake. They had a few that appeared to be optimized for Gmail.

The original .mov file was about 224MB. The HB preset I usually use took that down to 69MB (1080p30). And the Gmail preset down to 15MB (720p30). I’m thinking, “That’s gonna look and sound like shit.” But when I compared the three, not too bad. Try to ignore the ‘vintage’ filter I mistakenly used (iMovie on my phone). Each of the samples is only 30 seconds.
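Those file sizes translate directly into upload time on a slow upstream link. A rough sketch of the savings, using the sizes above; the 0.5 Mbps upstream figure is my assumption (the post only says the uplink is slow), so scale accordingly:

```python
# Rough upload-time comparison for the three encodes mentioned above.
UPLOAD_MBPS = 0.5  # assumed upstream bandwidth, megabits per second

sizes_mb = {
    "original .mov": 224,        # straight off the iPhone
    "usual 1080p30 preset": 69,  # regular HandBrake preset
    "Gmail 720p30 preset": 15,   # Gmail-optimized preset
}

minutes = {}
for name, size_mb in sizes_mb.items():
    # megabytes -> megabits -> seconds -> minutes
    minutes[name] = (size_mb * 8) / UPLOAD_MBPS / 60
    print(f"{name}: ~{minutes[name]:.0f} min to upload")
```

At that (assumed) speed, the original would take about an hour to push up, while the Gmail-preset file takes only a few minutes.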

Henry

The following images were created using the Prisma app.

I took this picture a few days ago while on a walk with my friend +Henry Domke. Curious what the Prisma app would do with it, I ran it through a few filters (below). While looking at one of the resulting images it occurred to me that someone with the necessary skills and tools could create such an image from scratch. Either digitally or in some more traditional medium. Seeing the image, one might reasonably describe it as “art.”

If that is so, when does the “art” happen, and by whom? I’m reluctant to describe a common smartphone photo as art. At least not this one. So did the art happen on the Prisma servers as their secret algorithm turned my photo into something art-ish? If yes, who’s the artist? The smart kids who wrote the code? They never saw my photo so I can’t comfortably call them artists in this instance. Can some lines of code create art (or anything)? Must there be an artist before we can have art?

Jefferson City: 1920s

The two photos below are hanging (with 8 or 10 others) on the wall of a little cafe in Jefferson City, MO. I’ve noticed them before and recall thinking I’d like to scan them, but they’re framed and (probably) bolted to the wall. This morning I remembered the PhotoScan app I recently added and decided to give it a try. Not bad. No glare from the glass. I’ll get some more when the place is less busy.

High Street is where the Coffee Zone is located (not on the block shown). The interior shot is the cafe.