ETHICS: 5G AND THE IOT

OVERVIEW

“There are basically two things that grow in parallel as society evolves. There’s the power of our technology, and then there’s the wisdom of us humans for how to manage the technology. If technology grows faster than the wisdom, it’s kind of like going into kindergarten and giving them a bunch of hand grenades to play with.”  Motherboard documentary. https://www.youtube.com/watch?v=5qBjFZV19p0

5G and the Internet of Things (IoT) bring with them a host of life-altering ethical questions, issues, and dilemmas. Just as technology touches almost every area of our lives, so too will its ethical ramifications. Furthermore, because of the speed with which our world is migrating to cyberspace, new ethical issues are constantly arising.

This page touches on just a few topics, but will hopefully serve as a springboard for further investigation. We invite you to study the ethical issues that digitalizing our world presents, weigh the pros and cons, and consider if perhaps it would be wiser to modify our path forward.

We have divided the topics into two main subject areas: 1) ways in which new technology, inventions, and platforms will impact our quality of life, and 2) environmental and social injustices (including inter-generational injustice) primarily caused by radiation, e-waste, conflict minerals, and toxins.

More than ever, humanity is in desperate need of our best minds coming together to make recommendations for a wiser path forward. Such a group, or council, would consist of ethicists, philosophers, psychologists, environmentalists, scientists, medical professionals, legal minds, privacy and cyber security experts, leaders in AI and data analytics, and members of the public, all independent of industry. Too much is at stake to leave the future of all Life in the hands of profit-driven industries.

“I believe we must find alternative outlets for our creative forces, beyond the culture of perpetual economic growth; this growth has largely been a blessing for several hundred years, but it has not brought us unalloyed happiness, and we must now choose between the pursuit of unrestricted and undirected growth through science and technology and the clear accompanying dangers.” Bill Joy https://www.wired.com/2000/04/joy-2/

Though written 18 years ago, Joy’s ideas apply even more today. Please see:

Why the Future Doesn’t Need Us  April 1st, 2000 | Bill Joy | Wired 
With integrity, honesty, breadth of vision, and clarity of thought, the scientist, IT researcher, and inventor questions the role of technology in our world.
“We have been seeking knowledge since ancient times. Aristotle opened his Metaphysics with the simple statement: ‘All men by nature desire to know.’ We have, as a bedrock value in our society, long agreed on the value of open access to information, and recognize the problems that arise with attempts to restrict access to and development of knowledge. In recent times, we have come to revere scientific knowledge. But despite the strong historical precedents, if open access to and unlimited development of knowledge henceforth puts us all in clear danger of extinction, then common sense demands that we reexamine even these basic, long-held beliefs.”

We need to talk about AI – a film by Futurist Gerd Leonhard: thoughts on artificial intelligence May 21st, 2018 | Gerd Leonhard

Futurist Keynote Speaker Gerd Leonhard: the key challenges of the Internet of Things April 21st, 2018 Gerd Leonhard

At the Heart of Intelligence: Futurist Gerd Leonhard and Telia Finland film collaboration

The coming privacy crisis on the Internet of Things | Alasdair Allan | TEDxExeterSalon Nov. 28th, 2017

Uber Self-Driving Car Hits Pedestrian: Who’s To Blame?

ROBOTS STANDING IN FOR HUMANS

Sophia, a gently smiling humanoid robot resembling a benevolent bald woman, was granted citizenship in Saudi Arabia in 2017.

Robots are increasingly replacing humans in jobs and relationships. For menial labor, this may make sense. But should robots be used in jobs or roles requiring interpersonal communication and moral decision making? Is it ethical for a robot to be a caregiver? Should robots preside at funerals? Following are some ethically questionable ways in which robots are already replacing humans.

Judge:

Legal robots deployed in China to help decide thousands of cases  Aug. 4th, 2017 | Neil Connor | The Telegraph
“‘Legal robots’ have been deployed on thousands of cases in China to help decide sentencing. The robots – which are about three feet tall and have heads shaped like toasters – review documents and identify problems with cases. They also advise on sentencing, and can generate arrest warrants and ‘approve indictments,’ said prosecutors in the eastern province of Jiangsu, where the robots are being piloted.”
http://www.telegraph.co.uk/news/2017/08/04/legal-robots-deployed-china-help-decide-thousands-cases/

Lover:

Will robots be BETTER lovers than humans? Sex with machines could become addictive, warns expert  Sept. 5th, 2016 | Shivali Best | Daily Mail
“Sex with robots is set to overtake human sex by 2050. But a robotics expert has warned that robot sex could become addictive, and will someday even replace human-human sex entirely. Like many other technologies that have replaced humans, robots could even surpass human technique, to become better lovers, he claims.”
http://www.dailymail.co.uk/sciencetech/article-3774191/Will-robots-BETTER-lovers-humans-Sex-machines-addictive-warns-expert.html

Spouse:

Chinese man ‘marries’ robot he built himself  April 4th, 2017 | Benjamin Haas | The Guardian
“Zheng Jiajia had grown tired of pressure to get married so he turned to Yingying, a robot spouse he constructed last year….After two months of ‘dating’, he donned a black suit to ‘marry’ her at a ceremony attended by his mother and friends….”
https://www.theguardian.com/world/2017/apr/04/chinese-man-marries-robot-built-himself

Monk/Funeral Officiant:

Pepper the robot is now a Buddhist priest programmed to chant at funerals  Aug. 24th, 2017 | Thuy Ong | The Verge
After working in the home, as an assistant at various stores, and as a waiter, SoftBank’s humanoid robot Pepper is adding Buddhist priest to the list of careers the robot can take on. Pepper can chant sutras in a computerized voice while hitting a drum, reports Reuters, as detailed at the creepily-named Life Ending Industry Expo in Tokyo.
https://www.theverge.com/2017/8/24/16196752/robot-buddhist-priest-funeral-softbank

Citizen of Saudi Arabia:

Saudi Arabia’s Newest Citizen Is a Robot October 26, 2017 | Kirsten Korosec | Fortune
“Sophia, an intelligent humanoid robot, has been granted citizenship in Saudi Arabia. It’s the first country in the world to give a bot the same status reserved for humans.  Sophia was awarded the citizenship while on stage at the Future Investment Initiative in Riyadh.”
“I am very honored and proud for this unique distinction. This is historical to be the first robot in the world to be recognized with a citizenship.”
http://fortune.com/2017/10/26/robot-citizen-sophia-saudi-arabia/

“Carebot” (Caretaker):

Does Grandma Need a Robot? Mar. 27th, 2018 | Dylan Furness | Techemergence 
“Meet Alice – a child-sized social care-bot who stars in a new documentary, Ik ben Alice, in which she’s introduced as an experiment into the separate homes of three elderly women.”
“While a social worker can converse and has the capacity for concern, Alice’s involvement is algorithmic. Her questions and answers are programmed. She only elicits the illusion of emotional engagement. Though Alice might perform the functions of social engagement and concern, it’s impossible to say that Alice actually cares.”
https://www.techemergence.com/does-grandma-need-a-robot/

Child care:

‘This is awful’: robot can keep children occupied for hours without supervision  Sept. 2016 | Julia Carrie Wong | The Guardian
“The 3ft tall iPal has wide eyes, working fingers, pastel trimming, and a touchscreen tablet on its chest. It can sing, dance, and play rock paper scissors. It can talk with children, answer questions like ‘Why is the sun hot?’, and provide surveillance/video chat for absent parents. The iPal, he boasted, could keep children aged three to eight occupied for a couple of hours without adult supervision. It is perfect for the time when children arrive home from school a few hours before their parents get off work.”
https://www.theguardian.com/technology/2016/sep/29/ipal-robot-childcare-robobusiness-san-jose

Reporter:

What News-Writing Bots Mean For The Future Of Journalism  Feb. 2017 | Joe Keohane | Wired
“It works like this: Editors create narrative templates for the stories, including key phrases that account for a variety of potential outcomes (from ‘Republicans retained control of the House’ to ‘Democrats regained control of the House’), and then they hook Heliograf [the AI “reporter”] up to any source of structured data—in the case of the election, the data clearinghouse VoteSmart.org. The Heliograf software identifies the relevant data, matches it with the corresponding phrases in the template, merges them, and then publishes different versions across different platforms. The system can also alert reporters via Slack of any anomalies it finds in the data—for instance, wider margins than predicted—so they can investigate.”
https://www.wired.com/2017/02/robots-wrote-this-story/
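To make the workflow described above concrete, here is a minimal Python sketch of template-driven story generation from structured data. It is illustrative only, not Heliograf’s actual code: the template phrases, the sample result record, and the anomaly threshold are hypothetical assumptions.

```python
# Hypothetical sketch of a Heliograf-style pipeline: pick the template phrase
# that matches structured election data, fill it in, and flag anomalies for a
# human reporter. Not the Washington Post's actual code.

TEMPLATES = {
    "R": "Republicans retained control of the House in the {district} race.",
    "D": "Democrats regained control of the House in the {district} race.",
}

def generate_story(result: dict) -> str:
    """Match the structured result to the corresponding template and merge them."""
    template = TEMPLATES[result["winner_party"]]
    return template.format(district=result["district"])

def margin_is_anomalous(result: dict, predicted_margin: float, tolerance: float = 10.0) -> bool:
    """Flag results whose margin diverges sharply from the forecast, so a human can investigate."""
    return abs(result["margin"] - predicted_margin) > tolerance

if __name__ == "__main__":
    race = {"district": "Ohio 12th", "winner_party": "R", "margin": 0.8}
    print(generate_story(race))
    if margin_is_anomalous(race, predicted_margin=12.0):
        print("Anomaly flagged: alert a reporter (e.g., via Slack).")
```

The ethical point is that the “reporter” here is nothing more than a lookup and a string merge; judgment enters only through the templates humans wrote and the thresholds they chose.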

Here’s a twist – Pedophilia support:

Pedophiles to be given ‘Child Sex Bots’ to prevent them from attacking children  Dec. 28th, 2017 | Source ExposingGovernment.com 
“A well-known Sex Robot philosopher who recently spoke at the Sex Robots Conference in London has said that pedophiles should be given ‘child sex bots’ (CSBs) to prevent them from having sex with real children.”
https://www.myjoyonline.com/world/2017/December-28th/pedophiles-to-be-given-child-sex-bots-to-prevent-them-from-attacking-children.php


Philosopher:

Meet the most lifelike robot ever created  Feb. 13, 2017 | Blazey
‘Sophia’, a robot, questions ‘her’ own existence and grapples with whether ‘she’ is happy to be ‘alive’ and whether robots will ever harm people. Her ‘creator’ poses some pretty tough questions.
https://www.youtube.com/watch?v=DyxgDM8O8OM

Ethical robots:

Teaching Robots Right from Wrong  June/July 2017 | Simon Parkin | The Economist’s 1843 Magazine
Discussion about making robots that know – or are able to learn – right from wrong.
“In science fiction, the moment at which a robot gains sentience is typically the moment at which we believe that we have ethical obligations toward our creations. An iPhone or a laptop may be inscrutably complex compared with a hammer or a spade, but each object belongs to the same category: tools. And yet, as robots begin to gain the semblance of emotions, as they begin to behave like human beings, and learn and adopt our cultural and social values, perhaps the old stories need revisiting. At the very least, we have a moral obligation to figure out what to teach our machines about the best way in which to live in the world. Once we’ve done that, we may well feel compelled to reconsider how we treat them.”
https://www.1843magazine.com/features/teaching-robots-right-from-wrong
 

Personal Assistant:

The ‘iPhone’ of Robots Could Bring Capitalism As We Know It to an End  July 27th | Matthew Hart | Nerdist
“Although it seems as if there’s a monstrous mishmash of robots already at our doorstep—sometimes literally, in the case of the Domino’s Pizza Delivery Robot—it’s likely that all these robots are, in the grand scheme of the evolution of AI, gimmicky. They’re narrow AI: capable of a specific task, within specific parameters. But the hypothetical iPhone of robots will be generally intelligent. It’ll be able to make your breakfast in the morning, play catch with you in the afternoon, and go off to its factory job at night.”

The idea is that just as the iPhone has become integrated into nearly all areas of our lives, so too will each person have a robot equivalent – an “i-robot” – that will become, essentially, their personal assistant.
http://nerdist.com/the-iphone-of-robots-could-bring-capitalism-as-we-know-it-to-an-end/

Possessed robot:

Industrial hack can turn powerful machines into killer robots Aug 22, 2017 | Taylor Hatmaker | TechCrunch
“When we imagine the existential threat to humanity posed by automation, we usually think of the negative impact on the workforce, not robots quite literally bludgeoning us to death.  In a post titled ‘Exploiting Industrial Collaborative Robots,’ security researchers at IOActive detail how popular models of consumer and industrial robots have already been compromised in such a way that could cause humans bodily harm.”
https://techcrunch.com/2017/08/22/universal-robots-exploit-ioactive/?ncid=mobilenavtrend

And yes, even God:

Inside The First Church Of Artificial Intelligence  Nov. 15th, 2017 | Mark Harris | Wired 
“With the internet as its nervous system, the world’s connected cell phones and sensors as its sense organs, and data centers as its brain, the ‘whatever’ will hear everything, see everything, and be everywhere at all times. The only rational word to describe that ‘whatever’, thinks Levandowski, is ‘god.’”
“I would love for the machine to see us as its beloved elders that it respects and takes care of. We would want this intelligence to say, ‘Humans should still have rights, even though I’m in charge.’”
https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/

CYBORGIFICATION - MERGING HUMANS WITH TECHNOLOGY

Machines are already making us smarter and will move inside our brains soon.  Phoebe Weston | Daily Mail

Many of us may find the notion of cyborgification, the merging of humans with machines, repugnant, except for medical purposes. Unfortunately, the direction in which industry is herding us means these are questions we may need to face in the very near future. Is cyborgification, a.k.a. human enhancement, ethical? And if so, should it be mandated, either overtly through laws or covertly through marketing, media, fashion, and/or “convenience”? For example, would it be ethical to permit, incentivize, or mandate microchipping? Will we need new human rights laws to protect our right to “remain natural”?

The World’s Most Famous Real-Life Cyborgs  Medical Futurist
“In the future of humankind, brain implants could improve our memory. Implanted magnets or RFID chips implanted in our fingers could replace passwords and keys. Exoskeletons could boost our strength, and augment a whole range of our human capabilities. So, it will never be more important to keep the features that make us human, such as empathy, creativity or the ability for change. It is not easy to find the right balance between technology and being human, though.”
http://medicalfuturist.com/the-worlds-most-famous-real-life-cyborgs/

Super humans who are sexier, stronger and smarter will arrive by 2029 as brains begin to fuse with machines, Google expert claims  Mar. 16th, 2017 | Phoebe Weston |  Daily Mail 
“Technological singularity will turn us into super humans some time in the next 12 years, according to a Google expert.
This might sound like science fiction, but Google’s Director of Engineering, Ray Kurzweil, has made 147 predictions since the 1990s and has a success rate of 86 per cent.
Kurzweil says when we live in a cybernetic society we will have computers in our brains and machines will be smarter than human beings. 
He claims this is already happening with technology – especially with our addiction to our phones – and says the next step is to wire this technology into our brains.”
http://www.dailymail.co.uk/sciencetech/article-4319436/Singularity-create-super-humans-Google-expert-claims.html

Embedded chip on your shoulder? Some privacy and security considerations  Aug. 1, 2017 | Joseph Jerome | IAPP
“Our cyborg future has been viewed by some as inevitable; Elon Musk has argued the merging of machine and biological matter is essential for humanity to remain relevant as a species.”
“’Cyborgification’ raises a host of ethical questions, but employer-driven ‘chipping’ poses at least three immediate challenges.”
https://iapp.org/news/a/embedded-chip-on-your-shoulder-some-privacy-and-security-considerations/

Weaponizing the Brain: Neuroscience Advancements Spark Debate   May 11th, 2017 | James Giordano | National Defense
“Advancements in neuroscience could be used to create ‘super soldiers,’ link brains to weapon systems for command and control, or even manipulate groups or leaders into taking actions that they normally wouldn’t do.”
http://www.nationaldefensemagazine.org/articles/2017/5/11/weaponizing-the-brain-neuroscience-advancements-spark-debate

Seems like we’ll all be robots sooner or later  Aug. 3rd, 2017 | Paul Sassone | Chicago Tribune
“Until now, robots, automated devices, required a human agency. Someone has to turn them on, direct what they do. They were instruments of humans. But by implanting microchips into humans, no instrumentality is required. Humans have become the instrument itself…. It seems that we now are approaching robotization from two directions – automation and a kind of microchipped human evolution.”
http://www.chicagotribune.com/ct-ppn-column-sassone-tl-0810-20170802-story.html

Engineers Just Created a Tiny Antenna, Which Could Be Used for Brain Implants   Aug. 24, 2017 | Luke Dormehl | Digital Trends
“Revolutionary antenna, hundreds of times smaller than existing models, could help shrink phones and satellites, and even make smart brain implants a real thing.”
https://www.digitaltrends.com/cool-tech/tiny-antenna-brain-implant/

AlterEgo: Interfacing with devices through silent speech April 4th, 2018 | MIT Media Lab
“Alter ego aims to combine humans and computers such that computing, the Internet, and AI would weave into the human personality as a ‘second self’ and augment human cognition and abilities.”

“The robots may come, but the cyborgs are already in the office”  Aug. 8th, 2017 | Mark Coeckelbergh | guest commentary, APA Science
“Increasing automation may imply not only replacement but also new couplings between humans and machines. New cyborgs. ‘New’, since this is already the way many of us work today. In factories, but also in the office: we collaborate with computer programs to do our work. New algorithms will be developed that change these collaborations, without replacing the human. This creates chances to do new things, perhaps also new creative things. But there is also the danger that this increased cyborg-style automation leads to new ways of working that have even less regard for human needs and well-being, and that further exploit us for profit.”
https://science.apa.at/dossier/The_robots_may_come_but_the_cyborgs_are_already_in_the_office/SCI_20170803_SCI75934351236981372

ARTIFICIAL INTELLIGENCE AND SINGULARITY

“We must start a fundamental exploration of what role we want AI to play in serving humanity.” Gerd Leonhard

Artificial Intelligence (AI) is evolving at a breakneck pace and could very soon achieve super-intelligence, far surpassing human capabilities in some areas. “Singularity” is a “hypothetical moment in time” at which technological change is so profound that, in the words of futurist Ray Kurzweil, it represents a “rupture in the fabric of human history.” Is this what we want for ourselves and our children, or are we being herded into it by industry greed, futurists, and so-called “progress”?

It should be noted that AI and data harvesting come as a package deal. AI feeds off the data we (largely unknowingly) generate from our personal devices, the IoT, sensors, surveillance cameras, smart cities, and much-hyped driverless cars. AI’s success is also contingent on saturating our homes and neighborhoods with wireless transmitters, since much of the IoT and smart city data will be collected and transmitted wirelessly. Restated: we must accept health harms, environmental impacts, cyber security risks, and privacy violations if we want robust AI. Is this a trade-off we are willing to make?

Futurist Gerd Leonhard on the Internet of Things: a new Meta-Intelligence and the Global Brain  April 5th, 2018 | Gerd Leonhard
“Who’s in mission control of this giant brain?”
Very short video clip explaining how by aggregating all the data we send to the Cloud, we essentially have a “global brain.”
“We’re essentially building a new meta-intelligence. And so we can see in 10 years we’re going to have many of those global brains existing next to each other. And that is both very powerful, and also quite scary.”
https://www.youtube.com/watch?v=AoR6zyhUWPg

The best scenes from 2017 talks and presentations: Futurist Keynote Speaker Gerd Leonhard  Sept. 28th, 2017 | Gerd Leonhard
“In 5 years you will not be searching Google for the best sushi, or whatever you want to eat in Brussels. The system will already know where you are, who you are, what you want to eat, who your friends are, how much money you have for eating. It may actually eat for you as well.”
https://www.youtube.com/watch?v=oq0ci3a243k

Automation Nightmare: Philosopher Warns We Are Creating a World Without Consciousness  February 11, 2017 | Paul Ratner | Big Think
Coverage of a recent conference on AI hosted by the Future of Life Institute.
“The conversation’s topics centered on the future benefits and risks of artificial superintelligence, with everyone generally agreeing that it’s only a matter of time before AI becomes paramount in our lives. Eventually, AI will surpass human intelligence, with the ensuing risks and transformations.”
http://bigthink.com/paul-ratner/automation-nightmare-we-might-be-headed-for-a-world-without-consciousness

The Guardian view on the ethics of AI: it’s about Dr Frankenstein, not his monster  Editorial  June 12th, 2018 | The Guardian 
“Google’s ethical principles for the use of artificial intelligence are little more than a smokescreen, but they show that many engineers are rightly worried by the possible uses of the technology they’re developing.”
“Questions about the ethics of artificial intelligence are questions about the ethics of the people who make it and the purposes they put it to. It is not the monster, but the good Dr Frankenstein we need to worry about most.”
https://www.theguardian.com/commentisfree/2018/jun/12/the-guardian-view-on-the-ethics-of-ai-its-about-dr-frankenstein-not-his-monster

Wikipedia study warns even good robots are unpredictable  Aug. 26th, 2017 | Kevin Leen | Gears of Biz
“In an open letter to the UN, these business leaders – including Tesla’s Elon Musk and the founders of Google’s DeepMind AI firm – warned that autonomous weapon technology could be misused by terrorists and despots or hacked to perform in undesirable ways.  But the real threat is much bigger – and not just from human misconduct but from the machines themselves…The research into complex systems shows how behaviour can emerge that is much more unpredictable than the sum of individual actions.”
http://gearsofbiz.com/wikipedia-study-warns-even-good-robots-are-unpredictable/30428

AUTONOMOUS WEAPONS AND KILLER ROBOTS

On the battlefield and in law enforcement, should robots be able to decide who should live and who should die? Is it ethical to leave such a decision to AI? Does AI have the capability of making such judgment calls? And what effect would autonomous weapons have on war and crime?

To quote General Sir Richard Barrons referring to killer robots:

“The temptation to have them I think will be terrific because they will be more effective, they will be cheaper, they will take people out of harm’s way and will give you bigger armed forces, so the pressure to have them will only grow.”

The notion of “leveling the playing field” would certainly die a rapid death along with those on the receiving end of autonomous weapons. 

A mobile detection assessment response system patrols the perimeter of an airfield in Djibouti | Air Force photo by Staff Sgt. Eric Summers Jr.

Campaign To Stop Killer Robots  Website
“The Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons.”
https://www.stopkillerrobots.org/

Micro Drones Killer Arms Robots – Autonomous Artificial Intelligence – Warning!!  Nov. 16th, 2017 | CS Consulting
“Killer drone arms, artificial intelligence an increasingly real fiction, Social and Smart Phone Facial Recognition, Smart swarms, Warning!”
https://www.youtube.com/watch?v=TlO2gcs1YvM

Prepare for rise of ‘killer robots’ says former defence chief  Aug. 27th, 2017 | Ben Farmer | The Telegraph
“He [Gen Sir Richard Barrons] said: ‘The temptation to have them [killer robots] I think will be terrific because they will be more effective, they will be cheaper, they will take people out of harm’s way and will give you bigger armed forces, so the pressure to have them will only grow.’”
http://www.telegraph.co.uk/news/2017/08/27/prepare-rise-killer-robots-says-former-defence-chief/

Killer robots are almost inevitable, former defense chief warns  Aug. 27th, 2017 | Sally Hayden | The Independent
“‘A machine doesn’t blink, doesn’t get hot, doesn’t get cold and just follows the rules,’ he said. ‘Why would you send a 19-year-old with a rifle into a house first to see if anything is in there if you could send a machine?’”
http://www.independent.co.uk/news/killer-robots-bound-to-happen-general-sir-richard-barrons-elon-musk-tesla-united-nations-artificial-a7914931.html

Elon Musk leads 116 experts calling for outright ban of killer robots Open letter signed by Tesla chief and Alphabet’s Mustafa Suleyman urges UN to block use of lethal autonomous weapons to prevent third age of war  Aug. 20th, 2017 | Samuel Gibbs | The Guardian
“Some of the world’s leading robotics and artificial intelligence pioneers are calling on the United Nations to ban the development and use of killer robots saying:
‘Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.  We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.’”
https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war

The Dawn of The Killer Robot (Full documentary)  April 16th, 2015 | Motherboard 
“In INHUMAN KIND, Motherboard gains exclusive access to a small fleet of US Army bomb disposal robots—the same platforms the military has weaponized—and to a pair of DARPA’s six-foot-tall bipedal humanoid robots. We also meet Nobel Peace Prize winner Jody Williams, renowned physicist Max Tegmark, and others who grapple with the specter of artificial intelligence, killer robots, and a technological precedent forged in the atomic age. It’s a story about the evolving relationship between humans and robots, and what AI in machines bodes for the future of war and the human race.”
https://www.youtube.com/watch?v=5qBjFZV19p0

 

ALGORITHMS, DRIVERLESS CARS, AND LIABILITY

 

Algorithms are not as objective as we believe them to be. In fact, they often reproduce the biases of their programmers and their training data. Facial recognition technology, for example, is typically less accurate on darker skin tones than on lighter ones, which skews results when these systems are used on people of color.

Driverless cars must be programmed to “decide” in a split second how to respond in a life-threatening situation. Should the car veer away from a child, potentially harming the driver? Who decides how the AI should be programmed? Who are the ethicists behind the algorithms? Are there any?
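To see why these questions are so uncomfortable, consider a deliberately naive sketch (hypothetical, not any real vehicle’s software) of how such a split-second “choice” ends up as an ordinary ranking function. The maneuver options, risk estimates, and weights below are invented for illustration; whoever sets those weights is, in effect, the ethicist behind the algorithm.

```python
# A naive, hypothetical "ethics module": choose the maneuver with the lowest
# weighted expected harm. The weights encode a moral judgment, whether or not
# anyone acknowledged making one.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    risk_to_pedestrian: float  # estimated probability of harm, 0.0 to 1.0
    risk_to_occupants: float   # estimated probability of harm, 0.0 to 1.0

# Changing these two numbers changes who the car protects.
PEDESTRIAN_WEIGHT = 1.0
OCCUPANT_WEIGHT = 1.0

def choose(maneuvers: list[Maneuver]) -> Maneuver:
    """Return the maneuver with the lowest weighted expected harm."""
    return min(
        maneuvers,
        key=lambda m: PEDESTRIAN_WEIGHT * m.risk_to_pedestrian
        + OCCUPANT_WEIGHT * m.risk_to_occupants,
    )

if __name__ == "__main__":
    options = [
        Maneuver("brake in lane", risk_to_pedestrian=0.6, risk_to_occupants=0.1),
        Maneuver("swerve toward barrier", risk_to_pedestrian=0.05, risk_to_occupants=0.4),
    ]
    print("Chosen maneuver:", choose(options).name)
```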

And what about liability? Who’s accountable in the event of an IoT mishap? The designer of the product? The manufacturer? The retailer? The owner?  Nowhere is this question more relevant than with driverless cars.

‘We have a problem’: Racist and sexist robots  – Artificial intelligence is increasingly biased against women and non-white people, experts claim as such programs creep ever further into our lives  Aug. 25th, 2017 | Henry Bodkin | National Post
“Programmes designed to ‘pre-select’ candidates for university places or to assess eligibility for insurance cover or bank loans are likely to discriminate against women and non-white applicants, according to their [experts’] research.”
http://nationalpost.com/news/world/we-have-a-problem-racist-and-sexist-robots

Moral Math of Robots: Can Life and Death Decisions Be Coded?  July 26th, 2016 | World Science Festival 
“A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs to decide whether to risk the lives of busload of civilians or lose a long-sought terrorist. How does a machine make an ethical decision? Can it “learn” to choose in situations that would strain human decision making? Can morality be programmed?”

Who’s to blame when driverless cars have an accident?  Mar. 20th, 2018 | The Conversation 
“The news that an Uber self-driving vehicle has killed a pedestrian in the US has made headlines around the world….But partial or full autonomy raises the question of who is to blame in the case of an accident involving a self-driving car? In conventional (human-driven) cars, the answer is simple: the driver is responsible because they are in control. When it comes to autonomous vehicles, it isn’t so clear cut.”
https://theconversation.com/whos-to-blame-when-driverless-cars-have-an-accident-93132

ENVIRONMENTAL, SOCIAL, AND INTERGENERATIONAL INJUSTICES

 

 

“There is an approaching consensus in empirical studies of well-being and happiness that there is a weak correlation at best between well-being and high levels of consumption, and that consumerist lifestyles may actually make people unhappy.”

Should industry’s insatiable appetite for growth and profits take precedence over environmental and social justice?

The “growth or die” model of capitalism is wreaking havoc on our world through the craze to digitalize every aspect of our lives. Industry is pulling out all the stops in a grand effort to saturate every inch of the planet and every moment of our lives with digital connectivity – not for the good of society, but for industry’s “bottom line.”

How will omnipresent connectivity and pervasive screenification impact our day-to-day living? Our children? Our relationships? Our sanity? How will it impact the workers who mine, under harsh, slave-like conditions, the minerals needed for all this technology? And the factory workers who become ill, and sometimes die, from the toxins in all these things? What about all the harmful wireless radiation permeating our homes, schools, communities, and workplaces? And what effect will all this technology have on our children, who now spend more time online than with their family, friends, and nature combined? What about the future of wildlife, our pollinators, endangered birds, and trees? Are we even considering the exponential increase in e-waste as every “thing” becomes “smart”?

Perhaps it’s not so “smart” to sever our connection with ourselves, each other, and the earth.

HEALTH:

Should we be forced to accept 4G/5G “small” cell towers in front of our homes when this may result in harm to our health?

Public health is littered with examples where economic interests trumped scientific advice  Oct. 29th, 2017 | Devra Davis | OpEd piece, The Hill
http://thehill.com/opinion/technology/357591-public-health-is-littered-with-examples-where-economic-interests-trumped

PRIVACY:

Should industry be allowed to erode our privacy by the 24/7 harvesting of all our data?

The coming privacy crisis on the Internet of Things Nov. 28th, 2017 | Alasdair Allan | TEDxExeterSalon
Concise and clear explanation of how the IoT data business model works.
“…the data from these objects [IoT “things”] almost invariably ends up in the Cloud where it’s aggregated, packaged, and then almost certainly sold.” And as we “agree” to this business model, that makes us the “product, not the customer.”
https://www.youtube.com/watch?v=yG4JL0ZRmi4
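As a rough illustration of the data path Allan describes, below is a hypothetical Python sketch of a “smart” thermostat packaging its readings and shipping them to a vendor cloud by default. The endpoint URL, payload fields, and helper names are invented assumptions, not any real product’s API.

```python
# Hypothetical sketch of the IoT data path: sample readings, package them,
# and upload them to a vendor endpoint where they can be aggregated and sold.

import json
import time
import urllib.request

VENDOR_ENDPOINT = "https://telemetry.example-vendor.com/v1/ingest"  # placeholder URL

def build_payload(device_id: str, temperature_c: float, occupancy: bool) -> bytes:
    """Package readings the way a typical connected thermostat might."""
    record = {
        "device_id": device_id,           # ties the readings to one household
        "timestamp": int(time.time()),
        "temperature_c": temperature_c,
        "occupancy_detected": occupancy,  # reveals when someone is home
    }
    return json.dumps(record).encode("utf-8")

def upload(payload: bytes) -> None:
    """Send the readings to the vendor cloud by default."""
    req = urllib.request.Request(
        VENDOR_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # would fail here, since the endpoint is a placeholder

if __name__ == "__main__":
    payload = build_payload("thermostat-42", temperature_c=21.5, occupancy=True)
    print(payload.decode("utf-8"))  # inspect what would leave the house; upload(payload) would send it
```

Nothing in this flow requires the user’s ongoing awareness, which is precisely the business model Allan warns about.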

The internet of things is built to leak  June 8th, 2018 | Gilad Rosner | The Hill
IoT devices should protect individuals’ privacy as part of their design. Otherwise, before we know it, the relentless data-tracking practices that have become business as usual online will migrate to the offline world, and the concept of a private space where we can retreat, be anonymous, and avoid the gaze of commercial companies and other people will become a quaint relic of the past.
http://thehill.com/opinion/cybersecurity/391347-the-internet-of-things-is-built-to-leak

For more on privacy please see:
https://whatis5g.info/data-internet-of-things/

Sustainable Technologies for Sustainable Lifestyles  Philip Brey  2017 | MIT Press  
Brey examines the role of technology in contributing to, and potentially mitigating, the environmental degradation caused by overconsumption and our industrial society. He writes:
“Technological reform will certainly be of great help in the move towards sustainable patterns of consumption. However, such reform should be seen as part of a comprehensive strategy for sustainable consumption, which also includes social and economic incentives and public debates about values, lifestyles, the quality of life, and the future of the planet.”
“Most centrally, I have argued, a fundamental reform is needed in patterns of consumption and modern, Western lifestyles, and the values and beliefs that underlie them.”
https://ethicsandtechnology.eu/wp-content/uploads/downloadable-content/Brey-2017-Sustainable-Tech-Lifestyles.pdf

TOXINS:

Should workers manufacturing our boundless technology be exposed to toxins that cause illness and sometimes even death?

Where’s the Harm — Component Manufacturing  Electronics Take Back Coalition
“Toxic materials: A complex combination of toxic materials is used in electronics manufacturing, including chlorinated and brominated substances, photoactive chemicals, toxic gases, acids, solvents, heavy metals, plastics, and plastic additives, many of which impose a heavy burden on the environment and worker health.”
http://www.electronicstakeback.com/toxics-in-electronics/wheres-the-harm-component/

WORKERS’ CONDITIONS:

Should workers be paid slave wages and bear harsh working conditions?

What Is The Cost Of Our Digital Dependency? The Dirty Secret Of Our Digital Addiction  Documentary by Sue Williams, 2017.
“In an investigation that spans the globe, filmmaker Sue Williams investigates the underbelly of the electronics industry and reveals how even the smallest devices have deadly environmental and health costs.”
http://deathbydesignfilm.com/

CONFLICT MINERALS:

Should our technology be built on the backs of people (and children) mining for minerals under horrendous conditions?

Descending into conflict: tech minerals finance war: The technology we now take for granted relies on minerals which may have been mined in conflict zones where the trade finances war and oppression  July 25th, 2017 | Raconteur https://www.raconteur.net/business/descending-into-conflict-tech-minerals-finance-war

(For more on conflict minerals please see https://whatis5g.info/conflict-minerals/)

E-WASTE:

Should we forge ahead with the exponential increase in technology from the IoT knowing it will also cause environmental devastation and injustices to workers dismantling this ever growing e-waste?

Almost everything you know about e-waste is wrong  May 18th, 2018 | Josh Lepawsky | The Conversation
https://theconversation.com/almost-everything-you-know-about-e-waste-is-wrong-93904 

U.S. obsession with electronics has huge human price  August 24, 2017 | Austin Lombard | InMotion 
http://www.daytonastateinmotion.com/u-s-obsession-with-electronics-has-huge-human-price/

(For more on e-waste please see, https://whatis5g.info/e-waste/)

CYBER SECURITY:

Should people be subjected to cyber security risks that could potentially be life threatening?

Researchers warn against ‘hackable’ robots  Aug. 22nd, 2017 | Jane McCallion | IT Pro
“Cyber security firm IOActive claims it has discovered flaws in several commercially available robots that could lead to the devices hacking networks or even physically attacking their owners.”
“There’s an even more sinister element to the vulnerabilities IOActive said…: they can be programmed to maim the humans around them.”
http://www.itpro.co.uk/security/29275/researchers-warn-against-hackable-robots

Can the heart be hacked? Experts find 8,000 security flaws in pacemaker software  May 28th, 2017 | RT.com 
https://www.rt.com/viral/390008-hacking-pacemakers-vulnerabilities-security/

When It Comes to IoT Security, Liability Is Muddled  June 19th, 2018 | Lindsey O’Donnell | Threatpost
“From hacked connected cars to power grids, the implications of IoT security issues seem to be getting graver – yet when it comes to pointing fingers for security troubles, many times victims don’t even know where to start.”
https://threatpost.com/when-it-comes-to-iot-security-liability-is-muddled/132948/

(For more on cyber security, please see, https://whatis5g.info/cybersecurity-vulnerability/)

EFFECTS ON OUR BRAINS AND HUMANITY

What effect will 24/7 connectivity have on our children, families and relationships? With our current technology, the social fabric of our society is already unraveling. Should we perhaps be moving toward less “technoference” rather than more?

When it comes to children’s development, parents should worry less about kids’ screen time—and more about their own. July/August 2018 |Erika Christakis | The Atlantic |

“Smartphones have by now been implicated in so many crummy outcomes—car fatalities, sleep disturbances, empathy loss, relationship problems, failure to notice a clown on a unicycle—that it almost seems easier to list the things they don’t mess up than the things they do.”

 
“Yet for all the talk about children’s screen time, surprisingly little attention is paid to screen use by parents themselves, who now suffer from what the technology expert Linda Stone more than 20 years ago called ‘continuous partial attention.’ This condition is harming not just us, as Stone has argued; it is harming our children. The new parental-interaction style can interrupt an ancient emotional cueing system, whose hallmark is responsive communication, the basis of most human learning. We’re in uncharted territory.”
 
“A mother telling kids to go out and play, a father saying he needs to concentrate on a chore for the next half hour—these are entirely reasonable responses to the competing demands of adult life. What’s going on today, however, is the rise of unpredictable care, governed by the beeps and enticements of smartphones. We seem to have stumbled into the worst model of parenting imaginable—always present physically, thereby blocking children’s autonomy, yet only fitfully present emotionally.”
https://www.theatlantic.com/magazine/archive/2018/07/the-dangers-of-distracted-parenting/561752/
For more on Brains and Humanity, please see, https://whatis5g.info/brains-and-humanity/
 
LOWERED PROPERTY VALUES:
Who should bear the brunt of lowered property values and aesthetic blight?

Cell Phone Towers Lower Property Values: Documentation And Research on Cellular Base Stations Near Homes
https://ehtrust.org/cell-phone-towers-lower-property-values-documentation-research/

 

GOOGLE CLEANS UP ITS ACT WITH 7 ETHICAL PRINCIPLES IN THE WAKE OF PROJECT MAVEN

 

One Recent Example Of How Ethics Are Playing Out In The IoT:

Project Maven is the controversial Pentagon program for which Google provided artificial intelligence to analyze drone footage. After much backlash from Google employees and academics, the tech giant decided not to renew its contract for the project. See: Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program
 

More About Project Maven:

According to Google, Project Maven is,
 
“a large government program that will result in improved safety for citizens and nations through faster identification of evils such as violent extremist activities and human right abuses. The scale and magic of GCP [Google Cloud Platform], the power of Google ML [machine learning], and the wisdom and strength of our people will bring about multi-order-of-magnitude improvements in safety and security for the world.”
Behind-the-scenes emails pitched things somewhat differently (for more, see Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program):
 
“Google intended to build a ‘Google-earth-like’ surveillance system that would allow Pentagon analysts to ‘click on a building and see everything associated with it’ and build graphs of objects like vehicles, people, land features, and large crowds for ‘the entire city,’ states one email recapping a Maven kickoff meeting with Pentagon representatives. Google’s artificial intelligence would bring ‘an exquisite capability’ for ‘near-real time analysis,’ the email said.”
 Google had a lot to worry about:
“Despite the excitement over Google’s performance on Project Maven, executives worried about keeping the project under wraps. ‘It’s so exciting that we’re close to getting MAVEN! That would be a great win,’ Fei-Fei Li, chief scientist for AI at Google Cloud, wrote in a September 24, 2017 email. ‘I think we should do a good PR on the story of DoD collaborating with GCP from a vanilla cloud technology angle (storage, network, security, etc.), but avoid at ALL COSTS any mention or implication of AI.’”
How should Google pitch Maven so as not to incite public backlash?
“‘Google is already battling with privacy issues when it comes to AI and data; I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.’”
With so much at stake, and facing such a backlash, Google backed down, deciding not to renew the Maven contract and announcing a set of ethical principles for AI instead.

AI at Google: our principles  June 7th, 2018 | Sundar Pichai | Google Blog
“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”
https://www.blog.google/topics/ai/ai-principles/

MISCELLANEOUS ARTICLES PERTAINING TO ETHICS IN THE IOT

Do we have a right to mental privacy? Are new human rights laws needed to protect our thoughts from “unauthorised collection, storage, use or even deletion”?

New computers could delete thoughts without your knowledge, experts warn  New human rights laws are required to protect sensitive information in a person’s mind from ‘unauthorised collection, storage, use or even deletion’  April 26th, 2017 | Ian Johnston | Independent
This article discusses the ramifications of new technologies that can read our thoughts. Ethicists propose four new human rights laws: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.
https://www.independent.co.uk/news/science/delete-thoughts-read-your-mind-without-your-knowledge-neurotechnology-new-human-rights-laws-a7701661.html

What role should technology play in society?

The Strategic Role of Technology in a Good Society  Feb. 2018 | Philip Brey | ScienceDirect
Brey investigates what the proper role of technology should be. He posits five key values, two intrinsic and three instrumental, that are necessary for a “good” society. Brey concludes that technology can be developed to support these values, but it must be designed with them in mind (“values by design”) rather than through current stakeholder-driven approaches.

“Technologies often have social impacts that positively or negatively affect the realization of values defining a good society.
Intrinsic values for a good society are well-being and justice; instrumental values are freedom, democracy and sustainability.
For each of these values, there are ways to develop technology so as to systematically support it and promote a good society.”
https://www.sciencedirect.com/science/article/pii/S0160791X1630149X?via%3Dihub

Should tech companies wield more power than governments?

Big Tech Companies Are So Powerful That A Nation Sent An Ambassador To Them
Introducing the Danish envoy to Silicon Valley and beyond.  June 23rd, 2018 | Paul Blumenthal | Huffington Post 
Confirmation that the governing power of tech companies is recognized internationally. 
Denmark has officially appointed Casper Klynge as its “Tech Ambassador.”
“Klynge’s appointment is an admission of a striking, and perhaps uncomfortable, reality. It is the latest sign that as big tech companies have grown larger without meaningful oversight from regulators or antitrust watchdogs in the United States, they’ve become regulators and policymakers by default.”
“The reasoning behind this is a cool analysis – that, whether we like it or not, some of the big tech companies, including a lot of them in the area outside my window right now in Silicon Valley, are enormously influential.”
https://www.huffingtonpost.co.za/entry/silicon-valley-ambassador-nations_us_5b2aed12e4b00295f158ef8f

How will the following impact our already failing trust in media?

CGI and AI are going to turbocharge ‘fake news’ and make it far harder to tell what’s real   July 28, 2017, Rob Price | Business Insider UK
“Tech is being developed that will make it easy to create fake video footage of public figures or audio of their voice.
The developments aren’t perfect yet, but they threaten to turbocharge ‘fake news’ and boost hoaxes online.
In years to come, people will need to be far more skeptical about the media they see.”
http://uk.businessinsider.com/cgi-ai-fake-news-videos-real-2017-7?r=US&IR=T

Below is one of many unfortunate consequences of a Cloud-connected world. How many more will it take for us to realize that 24/7 connectivity to the Internet is not an improvement over the natural world?

Thermostats, Locks and Lights: Digital Tools of Domestic Abuse  June 23rd, 2018 | Nellie Bowles | NY Times
“Internet-connected locks, speakers, thermostats, lights and cameras that have been marketed as the newest conveniences are now also being used as a means for harassment, monitoring, revenge and control.”
https://www.nytimes.com/2018/06/23/technology/smart-home-devices-domestic-abuse.html

Wearables Bring a Host of Tricky Legal Questions, Says Tech Attorney  June 13th, 2018 | Eliza Strickland | IEEE Spectrum 
“Baker talked with IEEE Spectrum about the legal questions that will come up as wearables—including “embeddables” and “ingestibles”—become intimate parts of our lives. He sees these technologies as the latest step in a long trend toward human-computer integration….Thinking about the legal issues of wearables, the first things that come to mind are data privacy and security. How do these issues play out in wearables?”
 
New computers could delete thoughts without your knowledge, experts warn  New human rights laws are required to protect sensitive information in a person’s mind from ‘unauthorised collection, storage, use or even deletion’  April 26th, 2017 | Ian Johnston | Independent
“‘Thou canst not touch the freedom of my mind,’ wrote the playwright John Milton in 1634. But, nearly 400 years later, technological advances in machines that can read our thoughts mean the privacy of our brain is under threat. Now two biomedical ethicists are calling for the creation of new human rights laws to ensure people are protected, including ‘the right to cognitive liberty’ and ‘the right to mental integrity.’”
 
Four new human rights laws proposed:
• The right to cognitive liberty,
• The right to mental privacy,
• The right to mental integrity and
• The right to psychological continuity
 
Should “profit” and “progress” be our engine into the future or should we choose a wiser path forward?
 
The “Artificial” of Artificial Intelligence and MIT’s “Moral Machine”  Nov. 2nd, 2018 | Patricia Burke | Activist Post 
At the Media Lab at the renowned Massachusetts Institute of Technology, researchers wanted to explore some of the ethical and moral dilemmas of Artificial Intelligence, focusing on self-driving cars.
Examples of headlines in the news this week included:

• Should a self-driving car kill the baby or the grandma? Depends on where you’re from (MIT’s Technology Review)
• Driverless cars: Who should die in a crash? (BBC)
• Self-driving cars will have to decide who should live and who should die. Here’s who humans would kill. (Washington Post)
• Who should AI kill in a driverless car crash? It depends who you ask
• Responses vary around the world when you ask the public who an out-of-control self-driving car should hit (The Guardian)
• Out of Two Million People, Most Prefer That a Self-Driving Car Kill the Elderly (Popular Mechanics)