infocology


A new digital ecology is evolving, and humans are being left behind

Incomprehensible computer behaviors have evolved out of high-frequency stock trading, and humans aren't sure why. Eventually, it could start affecting high-tech warfare, too. We spoke with a researcher at the University of Miami who thinks humans will be outpaced by a new "machine ecology." For all intents and purposes, the genesis of this new world came in 2006, with the introduction of legislation that made high-frequency stock trading a viable option. This form of rapid-fire trading involves algorithms, or bots, that can make decisions on the order of milliseconds (ms). By contrast, it takes a human at least one full second to both recognize and react to potential danger. Consequently, humans are progressively being left out of the trading loop.

And indeed, it's a realm that's rapidly expanding. For example, a new dedicated transatlantic cable is being built between US and UK traders that could cut transaction times by another 5 ms. In addition, a new purpose-built chip, iX-eCute, is being launched that can prepare trades in an astounding 740 nanoseconds. (via A new digital ecology is evolving, and humans are being left behind)
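To make those timescales concrete, here is a minimal back-of-the-envelope sketch in Python. The quoted figures (roughly one second for a human to recognize and react, millisecond-scale bot decisions, 740 nanoseconds per trade for iX-eCute) come from the excerpt above; the variable names and the arithmetic are ours.

```python
# Rough comparison of the timescales quoted above (all in seconds).
HUMAN_REACTION = 1.0     # ~1 s for a human to recognize and react
BOT_DECISION = 1e-3      # bot decisions on the order of 1 ms
CHIP_TRADE = 740e-9      # 740 ns for the iX-eCute chip to prepare a trade

print(f"bot decisions per human reaction: {HUMAN_REACTION / BOT_DECISION:,.0f}")
print(f"chip trades per human reaction:   {HUMAN_REACTION / CHIP_TRADE:,.0f}")
```

On these numbers, a single human reaction time spans about a thousand bot decisions and well over a million chip-prepared trades, which is the precise sense in which humans are being left out of the loop.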

A Polytopia reader - Introductory links to Wildcat writings

(Having been asked for it, this is a re-post for those that missed it)

A Cyber Soaring Humanity

1. A Cyber Soaring Humanity (or The rise of the Cyber Unified Civilization)

2. The Natural Asymmetry of Infocologies

3. This mountain has no top

4. Hybrid futures, Knowmads and the Notion state

5. Hybrid futures and Knowmads (pt 2)

6. Knowmads as metabolic reactors of information (Hybrid Futures and Knowmads pt 3)

7. Knowmads as Aesthetic Curators of information (Hybrid Futures & Knowmads pt 4)

8. Knowmads as Critical Relevancies (Hybrid Futures & Knowmads pt 5)

9. Aesthetic Management As The Future Of Joy (or a Foray in InfoBeauty)

Forays in Philotopia

# Polytopia as Rhizomatic Hyperconnectivity- a new form of wisdom emerges

# The Future History of Individualism (Pt.1)

# Parsing Hyper Humanism – a different angle to Posthumanism

# The Luxurious Ambiguity of Intelligence in Hyperconnectivity

Cyber Identity

# Fluid affinities replace nucleic identity

# What is it like to be a ‘Nym’ - A Polytopian Stance

# Some will be Gangsters of Poetry, Some will be Pan-Symbolists

We have always lived in an information economy, a fact that tends to be obscured by the immense amount of information now available at our fingertips. The huge amount of talk generated by the current infoconomy explosion takes little, if any, account of the fact that ever since knowledge was first passed from parent to child and from culture to culture, the barter coin of trade has been information. Whether the information passed was gossip or the way to light a fire, the method of creating a better blade or the latest fashion fad, information has always been the basis of human interaction.

(now writing the next step)

Wildcat: The Natural Asymmetry of Infocologies

What species would become dominant on Earth if humans died out?

See on Scoop.it - Knowmads, Infocology of the future


In a post-apocalyptic future, what might happen to life if humans left the scene? After all, humans are very likely to disappear long before the sun expands into a red giant and exterminates all living things from the Earth.

Assuming that we don’t extinguish all other life as we disappear (an unlikely feat in spite of our unique propensity for driving extinction), history tells us to expect some pretty fundamental changes when humans are no longer the planet’s dominant animal species.

So if we were given the chance to peek forward in time at the Earth some 50m years after our disappearance, what would we find? Which animal or group of animals would “take over” as the dominant species? Would we have a Planet of the Apes, as imagined in popular fiction? Or would the Earth come to be dominated by dolphins, or rats, or water bears, or cockroaches or pigs, or ants?

The question has inspired a lot of popular speculation and many writers have offered lists of candidate species. Before offering any guesses, however, we need to carefully explain what we mean by a dominant species.

Let's stick to the animal kingdom

One could argue that the current era is an age of flowering plants. But most people aren’t imagining Audrey Two in Little Shop of Horrors when they envision life in the future (even the fictional triffids had characteristically animal features – predatory behaviour and the ability to move).


See on theconversation.com
In praise of artificial food — Rachel Laudan — Aeon Opinions

See on Scoop.it - Knowmads, Infocology of the future

Artificial food. That’s what humans eat. I say this to anyone who will listen. ‘Oh yes,’ comes the reply. ‘The more’s the pity. Cheap, nasty, imitation food-like substances. It’s high time to return to natural food.’ But, no, I mean artificial in its original sense of man-made, produced by humans, artfully created.

Our distant ancestors found little good in the food that nature provided. Greens had too few calories to sustain life, chewy meat came tightly wrapped in awkward-sized packages known as living animals, nuts were bitter or oily, roots tended to be poisonous, and grains were tiny and so hard that they passed undigested through the system. Acquiring and digesting food was a constant struggle.

So sometime in the distant past, at least 20,000 years ago and probably much more, members of our species decided they could improve on nature. They discovered how to process raw foods by using fire to cook them, or stones to chop and grind them, or coopting microorganisms to ferment them. They began creating niches for the more edible species, breeding sweeter fruits, less toxic roots, and bigger grains. In short, they created the art of cookery to transform the natural.

The art of cookery, they believed – and modern science has confirmed – produced more food that, on balance, was nutritious, easier to digest, safer, longer-lasting and better-tasting than raw plants and meat. With the benefit of hindsight, anthropologists such as Richard Wrangham at Harvard have argued that bodies changed as the energy formerly spent on digesting was diverted to brains that increased in size, and society evolved as a response to cooked, and hence communal, meals.


See on aeon.co
You Don't Know as Much as You Think: False Expertise

See on Scoop.it - Knowmads, Infocology of the future


It is only logical to trust our instincts if we think we know a lot about a subject, right? New research suggests the opposite: self-proclaimed experts are more likely to fall victim to a phenomenon known as overclaiming, professing to know things they really do not.

People overclaim for a host of reasons, including a desire to influence others’ opinions—when people think they are being judged, they will try to appear smarter. Yet sometimes overclaiming is not deliberate; rather it is an honest overestimation of knowledge.

In a series of experiments published in July in Psychological Science, researchers at Cornell University tested people’s likelihood to overclaim in a variety of scenarios. In the first two experiments, participants rated how knowledgeable they believed themselves to be about a variety of topics, then rated how well they knew each of 15 terms, three of which were fake. The more knowledgeable people rated themselves to be on a particular topic, the more likely they were to claim knowledge of the fake terms in that field. In a third experiment, additional participants took the same tests, but half were warned that some terms would be fake. The warning reduced overclaiming in general but did not change the positive correlation between self-perceived knowledge and overclaiming.

In a final experiment, the researchers manipulated participants’ self-perceived knowledge by giving one group a difficult geography quiz, one group an easy quiz and one group no quiz. Participants who took the easy quiz then rated themselves as knowing more about geography than did participants in the other groups and consequently were more likely to overclaim knowledge of fake terms on a subsequent test.
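The design is easy to picture as a toy simulation. The sketch below is not the Cornell researchers' code; it simply assumes that the probability of claiming a fake term rises with self-rated knowledge (the parameters are invented) and checks that this assumption reproduces the reported positive correlation.

```python
# Toy model of overclaiming: higher self-rated knowledge -> higher
# chance of "recognizing" terms that do not exist.
import random

random.seed(42)

N_PARTICIPANTS = 200
N_FAKE_TERMS = 3  # three of the fifteen terms were fake

def simulate_participant():
    self_rating = random.randint(1, 7)  # self-perceived knowledge, 1-7
    # Assumed model: claim probability grows linearly with self-rating.
    p_overclaim = 0.05 + 0.10 * (self_rating - 1)
    fake_claims = sum(random.random() < p_overclaim for _ in range(N_FAKE_TERMS))
    return self_rating, fake_claims

data = [simulate_participant() for _ in range(N_PARTICIPANTS)]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

ratings, claims = zip(*data)
print(f"correlation(self-rating, fake-term claims) = {pearson(ratings, claims):.2f}")
```

Under this invented model the printed correlation comes out clearly positive, mirroring the pattern the study reports.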


See on scientificamerican.com
A Radical Way of Paying for College … From 18th-Century Scotland

See on Scoop.it - Knowmads, Infocology of the future


The saddest part of the debate over how to rein in the cost of college is that rising prices have not been tied to any real improvement in the quality of education. Skyrocketing tuition, it's generally agreed, has been brought on by the expansion of student services. There seem to be nothing but bad choices: allow the status quo to persist and saddle students with debt that will hamper their ability to buy houses, start families, or even get the jobs they need to pay off their debt; or make college (and graduate school, argues Samuel Garner, a bioethicist who chronicled his personal student-debt crisis in Slate) taxpayer-funded, and risk a larger and more catastrophic version of the cost escalation that can come with a pot of free money.

While extravagances such as hot tubs, movie theaters, and climbing walls may seem to make this discussion distinctively modern, parts of today's college-cost dilemma are recognizable, in fact, in an 18th-century debate about how best to finance a university's operations. The question was important enough that Adam Smith took time out from analyzing more traditional economic subjects like the corn laws to devote a long section of The Wealth of Nations to it. And with cause: the Scottish universities of the 18th century, much like America's today, were quickly becoming the universally acknowledged ticket to social advancement.

Smith, despite accusations of Connery-esque misplaced nationalism, was justly proud of the Scottish system of universities, which ran on a radical (by today’s standards, at least) system in which students paid their professors directly. Scotland had begun the 18th century with the humiliating Act of Union, which rendered it subject to the British Empire and shuttered its parliament. The country then boasted only three universities, all of which taught an obsolete, traditional medieval curriculum (in Latin, no less), and carried a reputation as a backwater of subsistence farmers and awkward, only recently semi-civilized rubes.


See on theatlantic.com
Google achieves AI 'breakthrough' by beating Go champion - BBC News

See on Scoop.it - Knowmads, Infocology of the future


A Google artificial intelligence program has beaten the European champion of the board game Go.

The Chinese game is viewed as a much tougher challenge than chess for computers because there are many more ways a Go match can play out.

The tech company’s DeepMind division said its software had beaten its human rival five games to nil.

One independent expert called it a breakthrough for AI with potentially far-reaching consequences.

The achievement was announced to coincide with the publication of a paper, in the scientific journal Nature, detailing the techniques used.

Earlier on Wednesday, Facebook’s chief executive had said its own AI project had been “getting close” to beating humans at Go.

But the research he referred to indicated its software was ranked only as an “advanced amateur” and not a “professional level” player.
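One way to make "many more ways a Go match can play out" concrete is to compare rough game-tree sizes. The sketch below uses commonly cited textbook averages (a branching factor of about 35 over about 80 plies for chess, versus about 250 over about 150 plies for Go); these are standard AI-course estimates, not numbers from the BBC article.

```python
import math

# Approximate game-tree size: branching_factor ** game_length (in plies).
CHESS_BRANCHING, CHESS_PLIES = 35, 80
GO_BRANCHING, GO_PLIES = 250, 150

chess_exponent = CHESS_PLIES * math.log10(CHESS_BRANCHING)
go_exponent = GO_PLIES * math.log10(GO_BRANCHING)

print(f"chess game tree ~ 10^{chess_exponent:.0f}")  # ~ 10^124
print(f"go game tree    ~ 10^{go_exponent:.0f}")     # ~ 10^360
```

A gap of more than two hundred orders of magnitude is why brute-force search, which works tolerably well for chess, was never going to crack Go, and why the independent expert quoted above treats this result as a breakthrough.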


See on bbc.com
The Many Minds of Marvin Minsky (R.I.P.)

See on Scoop.it - Knowmads, Infocology of the future


Marvin Minsky, a pioneer of artificial intelligence, died on Sunday, January 24, in Boston, according to The New York Times. He was 88. Minsky contributed two important articles to Scientific American: Artificial Intelligence, on his theories of multiple minds, and Will Robots Inherit The Earth?, on the future of AI. I profiled Minsky for Scientific American in 1993, after spending an afternoon with him at MIT's Artificial Intelligence Laboratory, and again in The End of Science. Below is an edited version of the latter profile. – John Horgan

Before I visited Marvin Minsky at MIT, colleagues warned me that he might be defensive, even hostile. If I did not want the interview cut short, I should not ask him too bluntly about the falling fortunes of artificial intelligence or of his own particular theories of the mind. A former associate pleaded with me not to take advantage of Minsky’s penchant for outrageous utterances. “Ask him if he means it, and if he doesn’t say it three times you shouldn’t use it.”

When I met Minsky, he was rather edgy, but the condition seemed congenital rather than acquired. He fidgeted ceaselessly, blinking, waggling his foot, pushing things about his desk. Unlike most scientific celebrities, he gave the impression of conceiving ideas and tropes from scratch rather than retrieving them whole from memory. He was often but not always incisive. “I’m rambling here,” he muttered after a riff on verifying mind-models collapsed in a heap of sentence fragments.

Even his physical appearance had an improvisational air. His large, round head seemed entirely bald but was actually fringed by hairs as transparent as optical fibers. He wore a braided belt that supported, in addition to his pants, a belly pack and a tiny holster containing pliers with retractable jaws. With his paunch and vaguely Asian features, he resembled Buddha–Buddha reincarnated as a hyperactive hacker.


See on blogs.scientificamerican.com