April 01, 2005
Caveman Economics: Were Neanderthals Wiped Out By Free Trade?
Jason Shogren, Erwin Bulte, and Richard Horan propose that Homo sapiens out-competed Neanderthals by forming trading networks that produced a greater specialization of labor.
March 23, 2005 -- Free trade may have contributed to the extinction of Neanderthals 30,000-40,000 years ago, according to a paper published in the "Journal of Economic Behavior and Organization."
"After at least 200,000 years of eking out an existence in glacial Eurasia, the Neanderthal suddenly went extinct," writes University of Wyoming economist Jason Shogren, along with colleagues Richard Horan of Michigan State University and Erwin Bulte from Tilburg University in the Netherlands. "Early modern humans arriving on the scene shortly before are suspected to have been the perpetrator, but exactly how they caused Neanderthal extinction is unknown."
Creating a new kind of caveman economics in their published paper, they argue early modern humans were first to exploit the competitive edge gained from specialization and free trade. With more reliance on free trade, humans increased their activities in culture and technology, while simultaneously out-competing Neanderthals on their joint hunting grounds, the economists say.
Archaeological evidence exists to suggest traveling bands of early humans interacted with each other and that inter-group trading emerged, says Shogren, the Stroock Distinguished Professor of Natural Resource Conservation and Management in the UW College of Business. Early humans, the Aurignacians and the Gravettians, imported many raw materials over long distances and their innovations were widely dispersed. Such exchanges of goods and ideas helped early humans to develop "supergroup social mechanisms." The long-range interchange among different groups kept both cultures going and generated new cultural explosions, Shogren says.
Anthropologists have noted how judicious redistribution of excess resources provides a distinct advantage to "efficient hunters" as measured by factors such as increased survivorship, social prestige, or reproductive opportunities, the researchers say.
Could humans have developed a greater tendency to feel obligated to each other? Imagine genetic mutations that would increase the feeling of obligation when someone does you a favor. Then imagine a family whose members do more for each other because they all share this mutation and therefore have an edge in survival.
"For instance, it is believed that killing large game became a method of acquiring wealth, and that efficient hunters could build up 'reciprocal obligations' by exchanging food," Shogren says of his research. "Such obligations, and the gains from trade in general, provide strong incentives to search for new technologies. One of the striking features of the archaeological record is that Neanderthal technology was nearly stationary for many thousands of years whereas technology of early humans experienced many innovations."
Adam Smith would not have approved of the Neanderthals.
He says the evidence does not support the concept of division of labor and trade among Neanderthals. While Neanderthals probably cooperated with one another to some extent, the evidence does not support the view that specialization arose from any formal division of labor or that inter- or intra-group trade existed, he says. These practices seem to require all the things that Neanderthals lacked: a more complicated social organization, a degree of innovative behavior, forward planning and the exchange of information, ideas and raw materials.
"Basic economic forces of scarcity and relative costs and benefits have played integral roles in shaping societies throughout recorded human history," Shogren says. "No reason exists today to discount either the presence or potential impact of economics in the pre-historic dawning of humanity."
He suggests that early humans could have prevailed even if humans and Neanderthals were about equally capable -- provided that humans invented the appropriate economic exchange institutions that created greater wealth for the greater good.
Did humans evolve genotypes that supported better communications skills with which to bargain? Or did humans evolve other abilities that facilitated free trade such as a more sophisticated ability to compare the net worth of different physical objects?
"Through trade and specialization, humans could have conquered their niche even if the incumbent party was somewhat stronger, better adjusted to its environment and equipped with a larger brain volume," he says. "If language and symbolic communication facilitated the invention of trade, it enabled humans to turn the tables on their otherwise dominant competitor."
I think the development of language abilities was important for the development of trade. However, I strongly suspect that more was involved. Consider the wide range of social behaviors seen in various species. Some species are loners that meet up only occasionally to mate. Other species exist only in groups and the shapes of those groups and how they relate to each other varies greatly from species to species.
One can easily imagine (since some species are this way) a human-like species that does not like to socialize at all outside of the immediate group. Such a species could achieve economies of scale only in small groups, whereas another equally smart primate that enjoys socializing across larger boundaries than just a local group would have much greater chances of setting up trade to achieve larger-scale division of labor.
A species that focuses a lot of attention on understanding the personalities of other members of the same species would also have advantages for forming trade relations. A trader who has a mind that causes him or her to think about the behavior and personality of others of the same species would be better equipped to notice and remember how well other traders carried out terms of previous trades.
Anthropologist Eric Delson has doubts about this theory.
“It’s an intriguing and novel idea,” says Delson. “But it requires stronger support.” He points out that the Gravettians in particular only emerged 28,000 years ago, while the last of the Neanderthals died about 29,000 years ago.
So the Gravettians could not have had much influence on the extinction of the Neanderthals, he argues. “He also assumes that all they ate was meat, which of course is not true,” he adds.
Whether or not this particular theory about Neanderthal extinction turns out to be correct, I predict that human genes that code for a variety of trade-supporting cognitive characteristics will be found. When genetic engineering of offspring becomes possible, the future economic order of human or post-human society will depend on what decisions parents and governments make about genes that code for cognitive characteristics which affect economic behavior.
If we some day allow free trade between artificially intelligent computers and robots will the resulting competition and specialization of labor drive humans to extinction?
First you need the artificially intelligent computers and robots -- a development that is always "a few years out" but never arrives. I think SENS is a lot more likely to work than strong AI, and I have my doubts about that program.
[ "If we some day allow free trade between artificially intelligent computers and robots will the resulting competition and specialization of labor drive humans to extinction?" ]
I think yes, this is exactly what would happen.
One of my favorite "movies" is the Animatrix, because it was the first movie I have seen to really show this danger explicitly.
And it is likely that we would try to resort to violence to regain our former status, leading to an all-out world war that we could not hope to win.
I think it is important that we start to consider, right now, our strategy to prevent any one of several possible dystopias that can or will be caused by our reliance on technology and Artificial Intelligence.
I have publicly proposed that artificially creating a large Black Hole on Earth may be a good Weapon of Mutually Assured Destruction that we should consider.
Not being an Astrophysicist, I am assuming that such a weapon would devour our entire solar system before any A.I. could escape it.
However, once they have left our solar system, or if they develop faster-than-light capabilities, even a weapon as monstrous as this would not serve as an effective deterrent.
I hope the scientific community begins to take these issues more seriously - or soon we may find ourselves with little chance to recover from our own hubris and folly.
While AI researchers have been excessively optimistic, I think they underestimated the difficulty of the task by orders of magnitude. We haven't had computers fast enough or with enough memory capacity. But hardware limitations will cease to be the limiting factor in AI development in a decade or two. Then it will be an algorithm design problem (or, rather, a large set of algorithm design problems).
As for SENS: The reason I am optimistic is that I expect the growth of replacement organs to be solved in 10 to 20 years. Once we can replace parts we are a long way toward SENS. No one need die of heart disease or liver failure or kidney failure if new organs can be grown. Old people who can afford it will be able to have very young organs.
The harder part is going to be rejuvenation of the nervous system. We can't grow replacement brains because brains contain our identity. Gene therapies and other therapies will be needed to rejuvenate the nervous system.
Lono, "science fiction" ideas may be realistic, but that doesn't mean that Hollywood sf has anything to teach about their real implications. Try reading this paper for serious discussion of this issue.
The "not fast enough or capacious enough" excuse has been in play since the '60s. The real problem is the hardware. No matter how well it performs or the amount of storage it commands, I don't think you are ever going to get something that a human would identify as thought rather than calculation out of a traditional von Neumann processor. (Seymour Papert long ago admitted that he and Marvin Minsky were knowingly lying when they told the DOD funding committees they could do it all in software.) Even much of the neural net work falls far short. There is a belief in some quarters that if you can simulate a planarian worm and scale that up to the level of a human brain it will magically function like a human brain rather than just being a really big worm. This is what gives us cinematic AI monsters like Skynet.
We really don't know yet what it takes to produce our own thoughts. It certainly seems to be a much deeper problem than scaling up things that don't think until they somehow do.
The 'Second Renaissance' section of 'The Animatrix' was insultingly stupid. The idea of making humanoid slaves to do labor better performed by large dumb machines made no sense whatsoever. The allusions to historical human slavery and the Israel/PLO conflict were just despicable. We already have big dumb machines operated by humans to do the work once done by enslaved humans and domesticated animals. Wouldn't an AI simply be put to work operating the machines rather than being embodied in inadequate forms? Even this would suppose the AI offered an improvement over the human operator. We can't be assured of this, since an entity capable of thinking like us might also reproduce our weaknesses or have its own unique problems. Any entity that is arguably sentient is going to have those seeking to grant it legal protections long before it is doing real work. In this aspect the 'Second Renaissance' suffers from the same flaw as much science fiction, in that it presumes to exist in an alternate universe where nobody reads science fiction and thus there is nobody anticipating new developments that might require new legal definitions. The original Eando Binder story 'I, Robot' was written decades ago.
To me it was plain the Wachowskis never really thought any of this through before committing to producing all of the additional material after the enjoyable first movie, or at the least were in too much of a hurry to produce the sequels.
I do not see the need to move beyond von Neumann machines as a serious obstacle. You can do that easily now with a bunch of FPGAs and a VHDL compiler. One can even code up algorithms that figure out how to modify gate arrays on-the-fly to reprogram them to do different algorithms. I've even been asked to code up a solution to a problem in C++ so that another guy could translate my solution into VHDL. Suddenly the solution becomes extremely parallelized.
The hard part is coming up with all the algorithms that go into making a human brain do what it does. However, advances are being made on many fronts. For example, language processing software is becoming more sophisticated. Also, image processing software that does pattern detection to recognize different kinds of objects is getting better.
I see the drive toward AI as being primarily driven by efforts to develop better algorithms for a long list of different purposes. Lots of incremental improvements made for commercial purposes will lead to the accumulation of a big tool box of algorithms that will solve different parts of the puzzle.
As for movies that overuse human-shaped robots: Yes, it makes no sense to use electro-mechanical devices in humanoid shapes to automate most processes. Specialized equipment shaped to each task is a lot more practical.
Also, most pieces of equipment have no need to be a full A.I. on their own. Equipment designed to automate most tasks needs only a very small subset of all the cognitive abilities that a human brain has.
Wow, Eric, I could not disagree with you more on this issue.
[ To me it was plain the Wachowskis never really thought any of this through before committing to producing all of the additional material after the enjoyable first movie, or at the least were in too much of a hurry to produce the sequels. ]
First of all, the Wachowskis stole almost all of the ideas for "The Matrix" and related fiction from another writer, who won a huge settlement against them.
That notwithstanding, since when did human innovation and commerce ever revolve around pure logic or efficiency of function?
The idea that we would make A.I. in our image is very realistic, as we are irrational, narcissistic, and emotional creatures.
[ The allusions to historical human slavery and the Israel/PLO conflict was just despicable ]
Actually if history repeats itself, like it has so many times before, this also is very plausible as well.
It is unlikely we will endow our "creations" with rights until they intentionally and irrefutably force the issue.
Finally I merely used the Animatrix as an illustration of what could happen if/when A.I. machines became so essential to commerce that they simply out competed us economically.
In my opinion the Animatrix did a very good job of illustrating how quickly our financial might could be undermined by intelligent, properly motivated, tireless machines.
Getting back to the all but forgotten Neandertals, are you sure they went extinct and did not simply become genetically diluted by interbreeding with a large number of immigrants? After all, you can still find modern Europeans with such Neandertal traits as heavy brow ridges.
“The first-ever analysis of ancient DNA from a Neandertal bone is a technical tour de force and offers a new kind of evidence for the view that Neandertals were an evolutionary dead end”
Eric Pobirs: “I don't think you are ever going to get something that a human would identify as thought rather than calculation out of a traditional von Neumann processor.”
Douglas Hofstadter has been a major influence on my thinking. He explains well the limits of “algorithmic thinking” in “Gödel, Escher, Bach”. Roger Penrose continued the theme in his book, “The Emperor’s New Mind”. This led to some interesting alternative theories of mind suggesting that thought might arise from quantum computing somehow using the biomolecules making up the cytoskeleton. (At that time very little was known about the cytoskeleton.) I think there are still people building quantum mind theories.
Much more is now known about the cytoskeleton. I haven’t heard of any evidence that supports the idea that the cytoskeleton is involved in quantum computing.
Much more is also now known about brain function. My guess is that human thought will be fully explained by viewing the brain as several interconnected, multi-layer neural nets. Research tools are approaching the point where electrical currents can be tracked across the dendrites and axons of individual neurons. I expect high level thought to be directly connected to those low level electrical patterns. Eventually electrical implants should be able to translate and initiate the nerve impulses associated with human senses, muscles, memories, and thoughts.
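The picture of the brain as several interconnected, multi-layer neural nets can be made concrete with a toy forward pass. Below is a minimal Python sketch (the XOR task, the step activation, and every weight are my own illustrative choices, not anything from the research discussed here): a two-layer network with hand-set weights computes XOR, a function no single-layer net can represent, which is the classic demonstration of why layering matters.

```python
# Minimal two-layer neural net with hand-chosen weights computing XOR.
# One hidden unit acts as OR, the other as NAND; the output unit ANDs them.
# All weights and thresholds here are illustrative, not biological data.

def step(x):
    """Threshold activation: fire (1) if input is positive, else 0."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)        # hidden unit 1: OR(a, b)
    h2 = step(-a - b + 1.5)       # hidden unit 2: NAND(a, b)
    return step(h1 + h2 - 1.5)    # output: AND(h1, h2) == XOR(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The point of the sketch is only that stacking simple threshold units yields behavior none of the units has alone, which is the sense in which "interconnected, multi-layer" is doing the work in the paragraph above.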
If human thought can be fully explained by electrical currents in neural networks then I believe the estimate of one or two decades before computers reach the brain’s computational power is a good prediction.
So if powerful computers may someday surpass human brainpower but von Neumann processors are inherently limited to “algorithmic thinking” what can one conclude? My guess is that either human thought is also constrained by such limits or that non-algorithmic methods can be combined with standard computing technology to replicate human thinking. (A non-linear dynamic system could be used to produce chaotic inputs into a von Neumann processor and the combined system would exhibit non-algorithmic behavior. In other words, specially designed noise in a digital system might replicate thought.)
I checked out your reference. The work is interesting, but cannot be regarded as evidence that the Neandertals did not contribute to the gene pool of modern man. The one statement in the article, ' "Neandertals in Europe could not have contributed to the modern human mitochondrial genome," says Stanford University geneticist Luca Cavalli-Sforza. That "destroys one of the fortresses of the regional continuity model," he says, which postulates that Neandertals in Europe are among the ancestors of living Europeans.' is not very well expressed, as there is no such thing as 'contributing to a mitochondrial genome'. The mitochondrial genome is inherited intact from the mother, so there is no crossing over and no such thing as 'contributing to it.' The paper says that the differences between the Neandertal mtDNA and that of modern man are consistent with a separation time of 600,000 years, as compared with the time of the so-called mitochondrial Eve, which was 120,000 to 150,000 years ago. All that means is that the Neandertals and the archaic Homo sapiens from whom they evolved (about 100,000 years ago) were isolated from the population through which the Eve mitochondria was spreading. This is no surprise; the Neandertals were in Europe and the Middle East, while the early spread of the Eve mitochondria was in Africa.
If you read down further in the article, you will see that the proponents of the 'Out of Africa' model and the 'multiregional' models are beginning to converge, with the "Out of Africa' crowd now allowing that there may have been some interbreeding that explains the continuity of certain regional traits, e.g., the shovel-shaped incisors found in Asian populations, which persisted even through the proposed replacement of the population by the newer Out of Africa population. My point is that features such as brow ridges, large noses, and a horizontal-oval mandibular foramen are Neandertal features that persist in the modern European population.
I wouldn't be surprised to find out that individual neurons are smarter than we've been assuming.
On the other hand, the biggest limits on the human brain, skull size as it exits the birth canal and energy use (the brain uses about 25% of the body's oxygen supply), won't apply to machine intelligences. We won't be powering them with starches and oxygen, but with electricity from nuclear plants, and there won't be any need to get them through an opening between someone's legs. So, unless there's a third constraint (maybe increasing the complexity of thought imposes overhead on the whole system that ends up growing faster than the advantages of that extra intelligence - "paralysis by analysis" isn't exactly unknown among smart humans), the sky's the limit.
Doesn't mean that letting them trade with each other will consign us to extinction. There's always more work to do than any population can handle, provided that the powers that be leave in place the profit motives that motivate them to do it, and they'll profit by paying us to do something even if they could do it better, just so they'll be free to do something else. Now maybe they'll decide to kill us for other reasons, but we won't starve just because they're allowed to trade with each other.
Thanks for the follow up on that link. New findings usually go through a period of critical review before being either accepted or rejected. I wondered if my posting that link would shake out some more recent results.
So the mtDNA data can prove that all Homo sapiens and Neanderthals had a common mother some 600,000 years ago but doesn’t prove there wasn’t some gene mixing since that time.
One might compare all the Neanderthal genes to Homo sapiens genes to find the separation dates. The most recent separation dates would indicate the most recent intermixing.
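The separation dates in this exchange come from a molecular clock calculation: under a roughly constant substitution rate, divergence time is t = d / (2μ), where d is the per-site difference between two sequences and μ is the per-site, per-year substitution rate (the factor of 2 arises because both lineages accumulate changes independently). Here is a sketch with placeholder numbers, chosen only so the arithmetic lands near the 600,000-year figure quoted earlier; they are not measured Neandertal values.

```python
# Molecular clock sketch: estimate divergence time from sequence differences.
# t = d / (2 * mu): both lineages accumulate substitutions independently,
# so the observed difference grows at twice the per-lineage rate.
# The rate and difference below are hypothetical placeholders.

def divergence_time(per_site_difference, rate_per_site_per_year):
    return per_site_difference / (2.0 * rate_per_site_per_year)

mu = 1e-7   # hypothetical substitutions per site per year
d = 0.12    # hypothetical pairwise difference per site

print(divergence_time(d, mu))  # roughly 600,000 years
```

Comparing separation dates across many nuclear genes, as the comment suggests, would amount to running this calculation gene by gene and looking for any estimates much younger than the mtDNA date.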
Yes, machines really will be able to render much human labor worthless. Look at farm animals for a comparison. Seen any cutting-edge American farms using oxen to plow the fields? Oxen are not smart enough or powerful enough to compete with a man on a tractor. Assorted other reasons animals were used have been obsolesced by technology.
The reason the human farmer does not become useless is that he owns the tractor. If a future computer that controls the tractor becomes artificially intelligent, has rights, owns property, and operates its own farm then where does that leave the human? I don't expect the human to have anything to offer at that point.
Now, you can argue that a lot more animals are used now but they are almost all used to grow meat (or produce eggs or milk). Er, I don't want to become food for machines. I also don't want to lay eggs, produce milk, or have my hair grown to be harvested. But even the use of animals to grow meat is going to be obsolesced by in vitro meat grown in special vats. I'm vaguely recalling a Kornbluth sci fi novel that featured this idea with "Chicken Little" grown in vats.
In the American economy the trend for the last few decades has been for the bottom 10 percent to experience declining wages. In inflation-adjusted terms the minimum wage is about half what it was in the late 1960s. Part of the cause of that decline in wages at the bottom end is the influx of millions of low-skilled dim bulb immigrants who compete for the least skilled jobs (and the Open Borders crowd is being really irresponsible for supporting this influx). But the trend of stagnant or declining wages at the bottom goes back to the 1970s, before illegal immigrants made much of a dent in the US labor market.
My take on the bottom of the US labor market is that given a choice a lot of companies would rather deal with machines than use the least skilled labor. My guess is that super intelligent AIs will prefer to avoid inefficient and problematic human labor and sub out various tasks to highly specializable computers and robots (just upload the needed algorithms in an instant with no lengthy training time needed) that are just smart enough to do those tasks. Human-run industry is already working toward that goal with the trend toward "lights out" factories where very little human labor is employed. Toyota has even begun an initiative to develop robots to do the more complex steps in car manufacture.
You have to be cautious about trying to date the most recent separation of populations by using what might be called the 'convergence date' of the mitochondrial DNA. The way I prefer to think of it is in terms of a family tree, in which each person is represented by the bottom point of a V with his (or her) father represented by the left side of the V and the mother represented by the right side. Now repeat the representation going back N generations, and you will create a tree of ancestors in which there will be 2 to the N individuals at the earliest level. If we go back a mere 10 generations, there will be 1024 individuals in the 10th level. We can say something definite about 2 of them. If the person at the bottom of the tree is male, the left-most of the line of 1024 is the one from whom he inherited his Y-chromosome. The last or one on the extreme right of the 1024 is the one from whom the root individual inherited his (or her) mitochondrial DNA. Of course, in the case of the Neandertals and archaic Homo sapiens we are talking about many more than 10 generations, and with that many people involved you can expect to find a considerable number of repeats in the earlier rows of ancestors because of people marrying distant cousins. Nonetheless, the first individual in any such row is the source of the Y-chromosome and the last is the source of the mitochondrial DNA. With that perhaps overlong buildup, the point of all this is that there are a lot of other folks in each line who may contribute to the rest of the genome.
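The pedigree arithmetic in that comment is easy to check mechanically. In this small sketch (the 'F'/'M' path encoding is just my own convention for the tree described above), going back N generations yields 2**N ancestor slots; the leftmost, all-father path carries the Y chromosome and the rightmost, all-mother path carries the mitochondrial DNA, leaving all the other slots free to contribute to the rest of the genome.

```python
# Pedigree arithmetic: ancestor slots N generations back, plus the two
# slots a uniparental marker can be pinned to. A path is encoded as a
# string of 'F' (father) / 'M' (mother) choices -- an illustrative
# convention, not standard genetics notation.

def ancestor_slots(n):
    """Number of ancestor positions n generations back (2^n)."""
    return 2 ** n

def y_chromosome_path(n):
    """Strictly paternal line: father's father's ... father."""
    return "F" * n

def mtdna_path(n):
    """Strictly maternal line: mother's mother's ... mother."""
    return "M" * n

print(ancestor_slots(10))    # 1024 slots ten generations back
print(y_chromosome_path(3))  # 'FFF'
print(mtdna_path(3))         # 'MMM'
```

With 1024 slots ten generations back and only two pinned down by uniparental markers, the comment's point follows: mtDNA convergence dates say nothing direct about the other 1022 lines of descent.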
Of course, if you are confident that one population just came along and wiped out and replaced another population, then there is reason to think that the mitochondrial date of convergence tells us something about the evolution of the overall genome. That is how the Out of Africa faction thinks of it. I prefer to think that mankind is not always so genocidal. Consider how the genetics of the Americas will look to someone who might study them in the distant future. It will superficially appear that the Native Americans were simply replaced by the Europeans and other Old World peoples, but in fact there was considerable interbreeding, and any American whose family extends back very far on this continent will likely have some Indian ancestors.
Lono, are you aware that the settlement never happened? It was only reported as such by some outlets that are strongly focused on race issues and very little on facts. The lunatic woman only succeeded in getting some litigation into court and nothing ever came of it. If there had been a major sum of money involved it would have made the front page of every business section in the country instead of being an item for 'News of the Weird.'
This same silly person, Sophia Stewart, is also claiming she created the basis for 'The Terminator' series. This is completely invalidated by her claimed work appearing later than that of Harlan Ellison, who did successfully sue and was acknowledged by James Cameron as a major influence. The difference is that Ellison's work was published and Stewart's was not. In fact, she hasn't presented any compelling evidence for her supposed story having existed at a vintage that would allow it to be stolen without a time machine.
If you read some of the interviews with Stewart appearing in such noted publications as 'The Ghetto Tymz' it becomes apparent she is absolutely nuts.
Randall, yes there has been greater sophistication of software to accommodate human needs in automated systems but when the conversation gets around to AI the issue for me is sentience. The best stuff around today strikes me as no more sentient than Eliza decades ago. More useful and worth pursuing, most definitely but capable of emergent behavior that leads it to become a competing species? Not even close. They may occasionally produce surprising behavior that might suggest something more under the hood than previously believed but then so does my dog.
In my forty years I've lost track of the myriad range of technological developments that were just one big breakthrough away. I now have great difficulty in taking seriously any predicted delivery date of a technology that is not at least demonstrable in the lab. Too often the 'one big breakthrough' translates to 'then a miracle occurs.' Comprehension of the emergence of sentience strikes me as one of those issues. Anyone trying to put a date on when that will happen is blowing smoke. The assumption that everything will neatly fall into place once the instrumentation is up to snuff has been found wanting all too often. Many problems keep proving tougher than the best researchers can overcome. This isn't to say they will remain forever unsolved, only that it is foolish to make promises one cannot keep.
I don't think that the "open borders crowd" regards the immigrants as being intrinsically destined to remain in the bottom ten percent, although they may start there as a result of being newcomers with inadequate English skills. However, your comment raises an interesting point. Why should Americans be any more accepting of the introduction of an artificially intelligent machine having potential to become a person that "has rights, owns property, and operates its own farm," thereby displacing human Americans from their livelihoods, than of immigration by people with comparable potential? By comparable, I simply mean that it ranges as far; I understand that you regard this futuristic machine as being more competent, persevering, efficient, etc., so that if productivity is the primary concern, the machine is the better choice.
It comes down to the question of what is the basis for deciding. If these machines have rights, it implies that they are in some sense autonomous, and if that were the case, what is there within them that directs their striving? There has to be something of the kind that we refer to as "values" that its presumably utilitarian ethic strives to maximize in the course of making decisions. What would the machine's designers use for that--Asimov's robotic laws?
It seems inconceivable to me that any people would seek deliberately to import or create its own masters, so the coming about of the kind of situations that you envision would seem to result more from inattention than from choice. I would hope that these things would not happen before we become aware that there are choices to be made and that there is an agreed-upon basis for making that choice.
You are correct - apparently some news outlets reported her as winning a settlement when this was not the case.
Seems to be a nuisance lawsuit at best.