Technology

Humans are in a symbiotic relationship with another species. Without this relationship, modern people would not exist.

The only thing that really separates us from other species is our shapeshifting survival strategy. Most animals have a survival strategy that sets like concrete into a virtual job description. The job description includes what food you eat, how you acquire it, how you shelter, mate, and raise young: a strategy.

Survival strategies are like an idea that grows into an organism. The organism fully commits to the idea by adapting, physically becoming the technology (the claws, teeth, running muscles, etc.) needed for the job. These adaptations improve success within the strategy, but it’s like evolving into a corner. It’s like gambling the house on your betting system.

Humanity substitutes tools for adaptations; we don’t morph into a specialized form and narrow down. Tool use isn’t exactly what separates us, though. There are numerous “generalist” species, such as birds and nonhuman primates, that cleverly use tools. The difference is that these species are elegantly solving problems posed by tasks in their existing job description and surviving better as a result, but never innovating to the extent of “leveling up” to a different job.

We have a sort of default simple job of walking around in a group picking up food. We have a second sense for noticing useful stuff and trying it. It’s as simple as understanding that that stick will help us knock down more fruit. When faced with more complicated problems like catching fish, we try all sorts of half-assed ideas before getting a flash of insight like damming up the far end of the pool in the stream to trap the fish for easier capture.


Not my writing or source. I thought their title was a bit much and rewrote it in Hugh language. This is an intriguing step toward creative innovation in artificial intelligence. Up to this point, AI creativity has been sleight of hand, working within the “variety tolerances” of very complex algorithms. This is an interesting innovation, but just the beginning.

And as usual, let’s hope they don’t kill us all.

Computers Evolve a New Path Toward Human Intelligence

Jeff Clune / Quanta Magazine

In 2007, Kenneth Stanley, a computer scientist at the University of Central Florida, was playing with Picbreeder, a website he and his students had created, when an alien became a race car and changed his life. On Picbreeder, users would see an array of 15 similar images, composed of geometric shapes or swirly patterns, all variations on a theme. On occasion, some might resemble a real object, like a butterfly or a face. Users were asked to select one, and they typically clicked on whatever they found most interesting. Once they did, a new set of images, all variations on their choice, would populate the screen. From this playful exploration, a catalog of fanciful designs emerged…

…One day Stanley spotted something resembling an alien face on the site and began evolving it, selecting a child and grandchild and so on. By chance, the round eyes moved lower and began to resemble the wheels of a car. Stanley went with it and evolved a spiffy-looking sports car. He kept thinking about the fact that if he had started trying to evolve a car from scratch, instead of from an alien, he might never have done it, and he wondered what that implied about attacking problems directly. “It had a huge impact on my whole life,” he said. He looked at other interesting images that had emerged on Picbreeder, traced their lineages, and realized that nearly all of them had evolved by way of something that looked completely different. “Once I saw the evidence for that, I was just blown away.”

Read on: Link to the article 
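Stripped of the actual CPPN-and-NEAT machinery Picbreeder uses under the hood, the select-then-vary loop the article describes can be sketched in a few lines. Everything here is a stand-in I've invented for illustration: the "genome" is a flat parameter vector rather than a network, mutation is simple Gaussian drift, and the human picker is simulated by a scoring function.

```python
import random

def mutate(genome, rate=0.2, scale=0.3):
    # Produce a child genome: each parameter has a chance to drift slightly.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def breed_generation(parent, n_children=15):
    # Picbreeder shows an array of 15 variations of the chosen image;
    # here the "image" is just a flat parameter vector.
    return [mutate(parent) for _ in range(n_children)]

def interactive_evolution(pick, genome_len=8, generations=10):
    # Start from a random genome and repeatedly let the picker choose
    # which child becomes the next parent -- no fixed objective anywhere.
    parent = [random.uniform(-1, 1) for _ in range(genome_len)]
    for _ in range(generations):
        children = breed_generation(parent)
        parent = pick(children)
    return parent

# Stand-in "user": always picks the child whose parameters sum highest.
lineage_end = interactive_evolution(pick=lambda kids: max(kids, key=sum))
```

The interesting point survives the simplification: the lineage is steered only by whatever looks most interesting at each step, with no fixed target, which is how an alien face can drift into a race car.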


And their vision of our future

The soft brutality of crappy algorithms, and how this model delivers blunt reflections of prejudice.

Corporations don’t want our opinions with the nuances and compassion still attached. They capture a casually harsh picture of us, and that picture is reflected in the development of the only tools we have to communicate with each other…which reinforces its impact on society.

  • Virtually all internet hubs traffic in this lowest-common-denominator model, embracing and broadcasting our worst and weakest traits as norms. How do we fight this?
  • Whatever systems we have for societal healing and repair (do we have any?) are drowned in a flood of unanticipated impacts.
  • How will these results affect the assumptions built into business and social planning?
  • How will those plans reinforce our tacit assumptions about each other?

Emil Protalinski, writing for VentureBeat: At the Movethedial Global Summit in Toronto yesterday, I listened intently to a talk titled “No polite fictions: What AI reveals about humanity.” Kathryn Hume, Borealis AI’s director of product, listed a bunch of AI and algorithmic failures — we’ve seen plenty of that. But it was how Hume described algorithms that really stood out to me. “Algorithms are like convex mirrors that refract human biases, but do it in a pretty blunt way,” Hume said. “They don’t permit polite fictions like those that we often sustain our society with.” I really like this analogy. It’s probably the best one I’ve heard so far, because it doesn’t end there. Later in her talk, Hume took it further, after discussing an algorithm biased against black people used to predict future criminals in the U.S.

“These systems don’t permit polite fictions,” Hume said. “They’re actually a mirror that can enable us to directly observe what might be wrong in society so that we can fix it. But we need to be careful, because if we don’t design these systems well, all that they’re going to do is encode what’s in the data and potentially amplify the prejudices that exist in society today.” If an algorithm is designed poorly or — as almost anyone in AI will tell you nowadays — if your data is inherently biased, the result will be too. Chances are you’ve heard this so often it’s been hammered into your brain.
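Hume's point about systems encoding what's in the data can be shown with a deliberately crude toy model (the groups, counts, and outcomes below are invented for illustration, not drawn from any real dataset): a predictor that simply memorizes the majority outcome per group turns a statistical skew in its training records into a categorical rule.

```python
from collections import Counter

# Toy historical records with a human bias baked in: group "B" was
# flagged with outcome 1 far more often than group "A".
data = [("A", 0)] * 80 + [("A", 1)] * 20 + [("B", 0)] * 40 + [("B", 1)] * 60

def majority_predictor(records):
    # "Learns" by memorizing the majority label per group -- the bluntest
    # possible model, encoding exactly whatever the data says.
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = majority_predictor(data)
print(model)  # {'A': 0, 'B': 1} -- a 60/40 skew is now an absolute rule
```

A 60 percent tendency in the records becomes a 100 percent prediction for everyone in the group: the mirror doesn't just reflect the skew, it hardens it.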

 


“[We] cannot find out the use of steam engines, until comes steam-engine-time.” Charles Fort

Charles Fort was an extraordinary thinker and a witty if challenging writer. Born into the heart of the steam-driven industrial revolution, he was nonplussed to learn about the Aeolipile, an ancient steam engine described by Hero of Alexandria in the first century. It was a very simple device, and researchers aren’t certain whether it was an entertaining party trick or had some small practical use. We do know that its impact on its own historical period was zero. It didn’t capture the imagination of the time or generate new ideas and new technologies. It was intellectually inert.

What then makes a technological breakthrough roar into life seemingly from nowhere? Why do paradigm shifts sometimes appear startlingly fast?

“Steam engine time” may sound too techno-mystical to be an idea of practical use, but I think the meaning is straightforward. Steam engine time (or gunpowder time, or antibiotic time) is an EMERGENT effect of laying down sufficient essential substrate to make the idea fertile. That substrate collects slowly and incrementally. It consists of two kinds of underlying readiness: 1. technological and 2. intellectual.

  1. The sort of hardware needed to express the idea physically must be “off the shelf” accessible. Not like Superstore accessible but in the general world of the moment and probably serving completely unrelated purposes at present. If you have to invent a bunch of other things to compile and test your idea, it isn’t time yet.
  2. There must be a sort of slowly heating or charging excitement growing in the community of innovators and thinkers. They may keep their thoughts to themselves but related ideas are percolating and making connections throughout the surrounding world. The questions are crystallizing and there is a growing sense of urgency. Competition plays a part too. Pride and fear add to the pressure. This process speeds up when more people are engaging with the issue.

If you’ve read my stuff on Darwin and Wallace, you know of their competition, but the IDEA of evolution was on a low boil everywhere in their cultural moment. The substrate was laid and the moment was fertile. Their theories (and others) could emerge in a powerful, future-shaping way only from this state of readiness. A breakthrough theory that arrives before the substrate is ripe and ready is roundly ignored.

Feuding Dutchmen, and Telescope Time

With the Renaissance came a new freedom of thought and hunger for knowledge. Ptolemaic knowledge of astronomy was rediscovered and published along with mythology, astrology, and philosophy. Our place in the universe was one of the ideas beginning to bubble in many minds. Technology and craftsmanship rose from the old, rediscovered knowledge and quickly had a practical impact. It was inevitable that as glassmaking and lens-grinding techniques improved in the late 1500s, someone would hold up two lenses and observe what they could do.

The first patent application for a telescope came from Dutch eyeglass maker Hans Lippershey. In 1608, Lippershey claimed he’d invented a device that could magnify objects three times. His telescope had a concave eyepiece aligned with an objective convex lens. Another eyeglass maker, Zacharias Jansen, claimed Lippershey had stolen the idea from him. Jansen and Lippershey lived in the same town and both worked on making optical instruments.

We have no evidence that Lippershey did not develop his telescope independently; therefore, because of the patent application, he gets the credit for the telescope, while Jansen is credited with inventing the compound microscope. Both appear somehow to have been a part of the development of both instruments.

This is an extraordinary impact on science and the future from one little Dutch town and two very competitive residents. Our exploration of the very big and far, and the very small and close, comes to us courtesy of this jealous, grumpy lens-grinding soap opera.
