- A company has developed a type of technology that allows a machine to effectively learn from fewer examples and refine its knowledge as further examples are provided.
- This technology could be applied to everything from teaching a smartphone to recognize a user's preferences to helping autonomous driving systems quickly identify obstacles.
You've heard it before-- this is the stuff of science fiction. But it's not. It's real, it's here, and it's one of the surest signs that our civilization can take advantage of computing in new and powerful ways, including but not limited to robotics.
And robotics is where this gets interesting. We've heard a lot of talk about advanced engineering-- about robotics taking over jobs. With technology of the sort described in this article, we could see robots being trained to build new machines that incrementally increase efficiency through greater automation. It's quite possibly the path to the end of work, an idea that has been explored by both The Atlantic and The New York Times.
Mike Wall, writing for Space.com, reveals that "For the first time ever, a private company has permission to land on the moon."
It's a great idea and long overdue. As excited as I am for the development of the moon, I do have one concern: changing the face of the moon that we all see at night. If there's anything all humans have in common (beyond their anatomy), it's that we've all looked at the same moon, whenever it was present, for the countless generations we've been around. That heritage is worth preserving. The face of the satellite that's exposed to Earth should remain relatively unchanged for as long as possible.
Why am I concerned? As awesome as the whole endeavour is, this promotional video, a marketing sizzle reel, makes Moon Express look like a company that's not very used to showing restraint.
The "robot" used in the police killing of an armed and dangerous Dallas man, suspected of killing five DPD officers and wounding seven more, was, to be sure, a rolling drone, somewhat modified, rather than some autonomous homunculus on a mission to kill a man.
Still, it's a disquieting moment when a device like this remote-controlled unit, designed to investigate and possibly remove bombs, is used to deliver one meant to detonate.
Top of mind is the detachment and ease with which this state-sanctioned killing took place. Still, one has to wonder whether the use of the robotic device was really any more detached and easy than the act of a man fuelled by hate, who pointed a high-powered rifle at unsuspecting lawmen and women and pulled the trigger, hitting 12 people. Horrifying.
CNN describes how the situation unfolded here (story and video), and ZDNet discusses the controversy of death by government robot here. It's important to give some thought to these issues as we move toward a world where our technology penetrates every facet of our civil life.
Joseph Cox and Jason Koebler, writing for Motherboard:
A video of the aftermath of a fatal shooting of a black man by a police officer was temporarily removed from Facebook. The company has said the removal was due to a “technical glitch.”
“We're very sorry that the video was inaccessible,” a Facebook spokesperson told The Telegraph. “It was down to a technical glitch and restored as soon as we were able to investigate.”
They go on to say:
The video has since been restored, but with a "Warning—Graphic Video" disclaimer.
“Videos that contain graphic content can shock, offend and upset. Are you sure you want to see this?” the disclaimer continues.
Facebook did not respond to a series of questions about the apparent glitch, or if the video was flagged by a user or by Facebook itself.
This is problematic. Just last week, Facebook announced that it was going to change its news feed algorithms to favour content generated and uploaded by users rather than by news sites like Upworthy and CNN. Here we have someone attempting to make sure the world sees the injustice she believes she and her lover are facing, and Facebook essentially silences her for a time.
Now it could be that of the millions of users who viewed the video, thousands or tens of thousands flagged the content as inappropriate. If that's the case, then they should just say so.
Given the aforementioned algorithm change and the recent kerfuffle about its Trending News section suffering from liberal editorial bias, Facebook is in jeopardy of losing the trust it has regained over the last several years after multiple privacy issues.
The bottom line is that while the social media giant is clearly a private company with its own processes, Facebook needs to come up with a clear set of guidelines for people both inside and outside the company to work with. That would end this confusion, as well as build and maintain trust.
Over in Europe, where they hate technology, Charles Riley, writing for CNN Money, reveals findings from a new draft policy report that seeks to penalize corporations by taxing them on robotic manufacturing techniques. The report, headed by Mady Delvaux, a Luxembourg representative to the EU Parliament, states in part that the EU should secure tax revenue not from people, nor from corporations, but from the machines that corporations use to generate goods: robots. Here are some choice quotes:
"The proposal suggests that robots should have to register with authorities, and says laws should be written to hold machines liable for damage they cause, such as loss of jobs."
"If advanced robots start replacing human workers in large numbers, the report recommends the European Commission force their owners to pay taxes or contribute to social security. The establishment of a basic income, or guaranteed welfare program, is also suggested as a protection against human unemployment."
Switzerland just held an open debate and referendum on a "basic income," and it was roundly rejected.
Back to the point-- taxing companies for their gains in efficiency is problematic. For nearly a century, the Western world has dreamt of and worked toward an agenda of productivity that frees humans from the sort of physical toil that can lead to workplace injuries and long-term health problems, by building machines that can take over those roles and perform them more efficiently. These efforts, in turn, lead to (1) a better quality of life for workers, (2) positive economic growth as productivity rises, and (3) growth in the standard of living as manufactured goods and devices become cheaper.
Amazon.com uses just such robots to make sure it can deliver purchased items to customers quickly, while saving valuable space in its warehouses.
In the case of autonomous cars, we'll see fewer taxis on the road and fewer drivers taking the wheel, which will likely lead to fewer costly accidents, less traffic, and fewer injuries. Stopping manufacturers from pursuing this course by throwing a new tax in the way of the efficiency incentives tied to this new technology is a bad idea.
Let's not forget what happened when Google was told by Spain that its service of helping people find news stories was going to be taxed: Google News went away, leaving Spain's public less informed than their fellow EU citizens.
There is some good news nestled deep within this report. It suggests implementing Asimov's Three Laws of Robotics on all new artificial intelligence devices. They're simple to understand and are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Considering that the governments of the world have agreed to such pacts regarding Antarctica, chemical weapons, lasers, and even outer space, the above three principles shouldn't be too hard to agree upon.
Taxes, however, are an entirely different story.