TALLAHASSEE, Fla. -- Members of the successful Apollo space program are experiencing higher rates of cardiovascular problems that are thought to be caused by their exposure to deep space radiation, according to a Florida State University researcher.
In a new paper in Scientific Reports, FSU Dean of the College of Human Sciences and Professor Michael Delp explains that the men who traveled into deep space as part of the lunar missions were exposed to levels of galactic cosmic radiation that have not been experienced by any other astronauts or cosmonauts. That exposure is now manifesting itself as cardiovascular problems.
If you want to visit the Moon, be prepared to die from a heart attack.
Delp found that 43 percent of deceased Apollo astronauts died from a cardiovascular problem. That is four to five times higher than non-flight astronauts and astronauts who have traveled in low Earth orbit.
These missions did not last long. Apollo 11, the first lunar landing mission with Neil Armstrong and Buzz Aldrin setting foot on the Moon, lasted only 195 hours total, and some of that time was spent near Earth. Imagine a Mars mission with exposures lasting many months rather than one or two weeks.
I've said this before: we need really big advances in biotechnology to adapt us to space travel and colonization of another planet. Without those advances we should stay on Earth.
Robots have been automating blue collar occupations such as manufacturing, while some heavily cerebral occupations have been immune to automation. But that's starting to change: robots are going to automate some professional occupations.
They list manual dexterity as one of the human skills that robots have a hard time with. But it really depends on setting and risks. A robotic fruit picker has easier safety requirements than a robotic nurse. A robotic fruit picker also has an enormously shorter list of tasks than does a robotic nurse.
Highly routine professional tasks are more prone to automation. A substantial chunk of accounting can get automated, especially as more of the data arrives in automated data flows with standard formats.
One claim in the article is that social intelligence is not going to get automated any time soon. But a lot of companies are hiring machine learning modelers to predict a lengthening list of things about what people might do. Those prediction models often can beat humans in specialized categories of predictions about what we want or what we might do. For example, models can predict the rate of criminal recidivism and even, at date of birth, the odds of someone eventually becoming a criminal. Such models will become much more accurate when DNA sequencing costs fall another order of magnitude and everyone gets their DNA sequenced.
As his work has been put into use across the country, Berk's academic pursuits have become progressively fantastical. He's currently working on an algorithm that he says will be able to predict at the time of someone's birth how likely she is to commit a crime by the time she turns 18. The only limit to applications like this, in Berk's mind, is the data he can find to feed into them.
Rather than think of a single thing called social intelligence, it is better to think of a long list of tasks that involve social intelligence. Then for each task ask how susceptible it is to automation. Jobs that require workers to make predictions about human behavior (e.g. parole boards and some pieces of police detective work) are certainly susceptible to automation for the prediction part of the job. That's true for a large chunk of marketing (e.g. deciding whom to send each sort of credit card offer to).
But an institution has to be willing to use and accept the output of prediction models. That's not always the case. College admissions could be done much more accurately with prediction models than with human application reviewers. Models could greatly exceed the performance of current admissions staff. But colleges do not want to use the models. A college of lower rank that wanted to raise its ranking and did not mind using models could build models for whom to offer scholarships and at what dollar amounts in order to attract higher ranked students. A college could rank all applicants using a prediction model of likely college performance. The opportunity exists for the college willing to automate and use a model.
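The applicant-ranking idea above can be sketched in a few lines. This is a purely hypothetical illustration: the feature names, weights, and linear scoring formula are invented here, not drawn from any real admissions model.

```python
# Hypothetical sketch: rank applicants by a predicted-performance score.
# Features, weights, and the linear formula are illustrative only.

def predict_performance(applicant, weights):
    """Linear score: weighted sum of normalized applicant features."""
    return sum(weights[k] * applicant.get(k, 0.0) for k in weights)

def rank_applicants(applicants, weights):
    """Return applicants sorted from highest to lowest predicted score."""
    return sorted(applicants,
                  key=lambda a: predict_performance(a, weights),
                  reverse=True)

# Invented example data (all features normalized to 0..1):
weights = {"gpa": 0.5, "test_score": 0.3, "essay_rating": 0.2}
applicants = [
    {"name": "A", "gpa": 0.9, "test_score": 0.8, "essay_rating": 0.7},
    {"name": "B", "gpa": 0.7, "test_score": 0.95, "essay_rating": 0.9},
]
ranked = rank_applicants(applicants, weights)
```

A real model would learn the weights from historical student outcomes rather than hand-pick them, but the institutional decision is the same: score every applicant and act on the ranking.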
Early stage autonomous vehicles won't be able to handle cities. But they could take over some driving tasks. How to use their more limited capabilities?
"It's much easier to solve self-driving when the weather's good, for example, than when it's snowy, dark or rainy. And it's easier on highways and in suburbs," says Dixon. "So you can imagine pushing a button on your Uber or Lyft app, and depending on the situation and location, an autonomous car comes or a person comes."
What this makes me think: We need a way for highway-capable autonomous taxis to pick up passengers in relatively simple zones, areas simple enough for them to understand before they are ready to take on full city complexity. Imagine walking down to a place that is like a highway rest station but is really an autonomous vehicle passenger stop. Such a station would exist to give autonomous vehicles simple environments where they can let passengers on and off. Then the vehicles go back onto the highway.
I could imagine paid access turnpikes doing this. I could even imagine paid access turnpikes that put your car on a multi-car autonomous towing vehicle so that turnpikes could switch to fully autonomous mode (with lots of advantages for tight electronic cooperation between vehicles). Existing human-operated cars could get down the turnpike without their owners having to drive. The turnpikes could achieve a big increase in safety, speed, and throughput.
Autonomous highway vehicles could be made to work in any area where long stretches of road are simple enough for first generation autonomous vehicles to handle. These vehicles would not be able to handle complex surface roads with lots of pedestrians. But they could handle less complex environments. Automated measurement of road environment complexity could identify which roads the first gen autonomous vehicles could handle.
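One way to picture the automated complexity measurement suggested above: count up the complicating features on each road segment and compare against a capability threshold. The feature names, weights, and threshold below are all invented for illustration; a real system would derive them from sensor and map data.

```python
# Illustrative sketch: decide whether a road segment is simple enough
# for a first-generation autonomous vehicle. All weights, features,
# and the threshold are hypothetical.

def complexity_score(segment):
    """Crude weighted count of complicating features on a segment."""
    return (3 * segment.get("pedestrian_crossings", 0)
            + 2 * segment.get("uncontrolled_intersections", 0)
            + 1 * segment.get("lane_merges", 0))

def first_gen_can_handle(segment, threshold=5):
    """A segment is 'simple enough' if complexity stays below threshold."""
    return complexity_score(segment) < threshold

highway = {"pedestrian_crossings": 0,
           "uncontrolled_intersections": 0,
           "lane_merges": 2}
downtown = {"pedestrian_crossings": 4,
            "uncontrolled_intersections": 3,
            "lane_merges": 1}
```

Run over a whole road network, a scorer like this would mark out the highway-and-suburb subset that early autonomous vehicles could serve, leaving complex surface streets to human drivers.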
Tim O'Reilly says Don't Replace People. Augment Them.
If we let machines put us out of work, it will be because of a failure of imagination and the will to make a better future!
That sure sounds nice. But imaginations driven to find ways to automate and cut costs will do whatever cuts costs and boosts profits the most. If they can totally eliminate human labor and doing so will be cheaper then that's what companies will do. Really, from the perspective of managers of capital, why not? If you can make materials enter a factory and a car or home appliance come out on the other side with no human in the factory then why not? How will putting a human in the loop help? If the automated factory can make the exact design as specified and humans aren't needed then what's the business reason to use humans? There isn't one. Get over it and move on.
O'Reilly argues that technology extends human capability. Which humans? Certainly not the replaced humans. Technology definitely extends the capabilities of scientists, engineers, and some managers (while replacing other managers and some types of engineers). It even extends the capabilities of many other workers. But often this capabilities extension is a transitional phase until machines can fully replace the humans.
Take all sorts of manual laborers for example. Is there really a big future for manual laborers whose labor is enhanced by robots? The trend has been to automate manual labor. I do not see that trend changing. Granted, rising affluence of knowledge workers sometimes causes them to shift expenditures toward kinds of manual labor (e.g. gardeners or home improvement workers) that they could not previously afford. But shifting income distribution patterns do not suggest manual laborers are maintaining their pricing power in the labor market.
O'Reilly thinks companies that only automate existing processes rather than developing new things to do are exhibiting a failure of imagination. But if a large portion of the population is dependent on the management elite to keep trying to find ways to make their continued presence in the labor market essential then, well, that's a change from past practice. Engineers of 100 or 50 years ago did not need to worry about how to make manual laborers useful. The usefulness of the manual laborers (and other low skilled workers) came as a side effect of industries pursuing improvements in processes and development of new products.
Lee Drutman and Yascha Mounk ask Will automation kill the middle class -- and democracy with it?
Can democracy thrive when more and more benefits accrue to machines that are stronger in body, and quicker in mind, than any mere mortal? And will the machines' owners remain willing to honor the claims of their social inferiors when they no longer need them to make their food, or to staff their companies, or to fight their wars?
I do not see how this kills democracy. They think the elites will undermine democracy. But that assumes the elites see value in sticking around. Once elites no longer need large populations, why should they keep their robots in large population countries with all the risks and taxes that entails? What I think is a more likely scenario: the end of the need for manual labor and low skilled labor causes capital and management elites to decamp from big Western democracies, taking their engineers and scientists with them. Their likely destinations will be much smaller states where they can pay much smaller slices of their production as taxes. Those left behind could still manage to operate a democracy I think.
Before 1870 three quarters of the people worked as farmers. That certainly made for a different sort of democracy than what industrialization made possible. Even before World War II the very small city of Washington DC was a sleepy backwater. Now it is an ideological and economic battleground of interests and factions. If the biggest capitalists of America and Germany and France and England go elsewhere these countries will still have national capitals. It's just that some of the battles over redistribution won't be held in those capitals. In a nutshell: it'll be far easier to shift production when very few people are needed to actually do production.
We learned yesterday evening that NHTSA is opening a preliminary evaluation into the performance of Autopilot during a recent fatal crash that occurred in a Model S. This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles.
From the above figures are you thinking that Tesla Autopilot is safer than the average driver? That was my initial (admittedly dumb) reaction. But the different death rates reported here do not allow an apples-to-apples comparison. In what ways is the death rate of Tesla on Autopilot different from that of the average vehicle? Here are some factors to consider:
How many miles we need to measure to detect a safety advantage from autonomous vehicles depends on how big the safety advantage is. Imagine that an autonomous vehicle system was so great it would be involved in a fatal accident only once every 3 billion miles. Well, if we hit the half billion miles traveled with no fatalities we'd be feeling pretty good about the autonomous driving system under test.
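A back-of-envelope version of that intuition: if fatal crashes are modeled as a Poisson process, the probability of seeing zero fatalities in M miles is exp(-M/R), where R is the miles-per-fatality rate. The 94-million-mile figure comes from the quoted Tesla statement; the 3-billion-mile rate is the hypothetical above.

```python
# Back-of-envelope check, assuming fatal crashes follow a Poisson
# process. 94M miles/fatality is from the quoted figures for US
# drivers overall; 3B miles/fatality is the hypothetical system above.
import math

def prob_zero_fatalities(miles, miles_per_fatality):
    """P(no fatal crash over `miles`) under a Poisson model."""
    return math.exp(-miles / miles_per_fatality)

# If the system were only as safe as the average US driver, a clean
# 500 million miles would be very unlikely:
p_avg = prob_zero_fatalities(500e6, 94e6)   # under 1 percent
# If it really were a 1-per-3-billion-mile system, a clean 500
# million miles would be the expected outcome:
p_good = prob_zero_fatalities(500e6, 3e9)   # around 85 percent
```

So a long fatality-free stretch is strong evidence against "merely average" safety, even though it cannot by itself pin down how good the system actually is.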
Another complication: with Tesla's system undergoing rapid updating, we aren't comparing the same system today that we were 3 or 6 months ago. So measuring progress is hard. How can we know Tesla's latest update is really safer than one from a few months ago? Short answer: we can't.
I firmly believe that autonomous vehicles are going to cause a huge drop in car collisions and fatalities in the next 20 years. I'm far less clear on when the drop in fatality rates will start.
Autonomous vehicles will increase the difficulty of comparison shopping for vehicle safety. How are we going to know whether a particular car model's autonomous driving feature is at least as safe as we are?
As I've previously stated, robots will take over restaurant cooking before they take over truck driving. The reason: A restaurant back room is a far more controlled environment than an open highway or city street. A cooking robot does not have to deal with as much complexity.
The political movement for higher minimum wages is accelerating the move to automate restaurants. Robotic pizza, robotic tacos, robotic coffee: it is all coming. One of the coolest consequences: full automation with robotic cooks will enable you to order preparation of your own custom recipe. You will be able to choose from recipes found on the web and have local restaurants bid in an automated online auction to prepare your recipe.
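The recipe auction above amounts to a reverse auction: restaurants bid down the price of preparing your dish and the lowest bid wins. A minimal sketch, with restaurant names and bid amounts invented purely for illustration:

```python
# Hypothetical sketch of the custom-recipe auction: each robotic
# restaurant submits a price for preparing a given recipe, and the
# lowest bid wins. Names and prices are invented for illustration.

def run_reverse_auction(bids):
    """bids: dict mapping restaurant -> price. Return (winner, price)."""
    return min(bids.items(), key=lambda item: item[1])

bids = {
    "PizzaBot Kitchen": 11.50,
    "TacoMatic": 9.75,
    "AutoWok": 10.25,
}
winner, price = run_reverse_auction(bids)
```

A production version would also score bids on preparation time, ratings, and delivery distance rather than price alone, but the mechanism is the same: fully automated kitchens make the preparation of an arbitrary recipe a commodity that can be bid on.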
High wage cities such as San Francisco will drive this revolution. In 10 years mature robots will be cheaper. Restaurant food will be cheaper, better tasting, and prepared more precisely to your preferences. Plus, preparation will be faster, you will be able to order before you arrive at a restaurant, and eventually (maybe in 15-20 years) the food will be delivered to your home by robots.