ETHICS: 5G AND THE IOT
5G and the Internet of Things (IoT) bring with them a host of life-altering ethical questions, issues, and dilemmas. As technology now touches almost every aspect of our lives, so too do its ethical ramifications. Furthermore, due to the speed with which we are being corralled into cyber space, new and more complex ethical issues are bound to arise.
This page touches on just a few topics, but will hopefully serve as a springboard for further investigation. We invite you to study the ethical issues that an all-things-Internet-connected world presents, weigh the pros and cons, and consider if perhaps it would be wiser to rethink our path into the future.
The topics are divided into two main subject areas: 1) ways in which new technology, inventions, and platforms will impact our quality of life, and 2) environmental and social injustices caused primarily by radiation, e-waste, and other attendant pollutants.
More than ever, humanity is in desperate need of our best minds coming together to make recommendations for a wiser path forward. Such a body, or council, would consist of ethicists, visionaries, philosophers, psychologists, environmentalists, scientists, medical professionals, religious leaders, legal minds, privacy and cyber security experts, top people in AI and data analytics, and members of the public – all independent from industry. Too much is at stake to leave the future of all Life in the hands of profit-driven industries.
Though written in 2000, Bill Joy’s ideas seem even more relevant today. Please see:
Why the Future Doesn’t Need Us April 1st, 2000 | Bill Joy | Wired
With integrity, honesty, breadth of vision, and clarity of thought, this scientist and IT researcher/inventor questions the role of technology in our world.
“We have been seeking knowledge since ancient times. Aristotle opened his Metaphysics with the simple statement: ‘All men by nature desire to know.’ We have, as a bedrock value in our society, long agreed on the value of open access to information, and recognize the problems that arise with attempts to restrict access to and development of knowledge. In recent times, we have come to revere scientific knowledge. But despite the strong historical precedents, if open access to and unlimited development of knowledge henceforth puts us all in clear danger of extinction, then common sense demands that we reexamine even these basic, long-held beliefs.”
ROBOTS STANDING IN FOR HUMANS
Robots are increasingly replacing humans in jobs and relationships. For menial labor, this may make sense. But should robots be used in jobs or roles requiring interpersonal communication and moral decision making? Is it right for a robot to be a caregiver? Should robots preside at funerals? Following are some ethically questionable ways in which robots are already replacing humans.
Legal robots deployed in China to help decide thousands of cases Aug. 4th, 2017 | Neil Connor | The Telegraph
“‘Legal robots’ have been deployed on thousands of cases in China to help decide sentencing. The robots – which are about three feet tall and have heads shaped like toasters – review documents and identify problems with cases. They also advise on sentencing, and can generate arrest warrants and ‘approve indictments,’ said prosecutors in the eastern province of Jiangsu, where the robots are being piloted.”
Will robots be BETTER lovers than humans? Sex with machines could become addictive, warns expert Sept. 5th, 2016 | Shivali Best | Daily Mail
“Sex with robots is set to overtake human sex by 2050. But a robotics expert has warned that robot sex could become addictive, and will someday even replace human-human sex entirely. Like many other technologies that have replaced humans, robots could even surpass human technique, to become better lovers, he claims.”
Chinese man ‘marries’ robot he built himself April 4th, 2017 | Benjamin Haas | The Guardian
“Zheng Jiajia had grown tired of pressure to get married so he turned to Yingying, a robot spouse he constructed last year….After two months of ‘dating’, he donned a black suit to ‘marry’ her at a ceremony attended by his mother and friends….”
Pepper the robot is now a Buddhist priest programmed to chant at funerals Aug. 24th, 2017 | Thuy Ong | The Verge
After working in the home, as an assistant at various stores, and as a waiter, SoftBank’s humanoid robot Pepper is adding Buddhist priest to the list of careers the robot can take on. Pepper can chant sutras in a computerized voice while hitting a drum, reports Reuters, as detailed at the creepily-named Life Ending Industry Expo in Tokyo.
Citizen of Saudi Arabia:
Saudi Arabia’s Newest Citizen Is a Robot October 26, 2017 | Kirsten Korosec | Fortune
“Sophia, an intelligent humanoid robot, has been granted citizenship in Saudi Arabia. It’s the first country in the world to give a bot the same status reserved for humans. Sophia was awarded the citizenship while on stage at the Future Investment Initiative in Riyadh.”
“I am very honored and proud for this unique distinction. This is historical to be the first robot in the world to be recognized with a citizenship.”
Does Grandma Need a Robot? Mar. 27th, 2018 | Dylan Furness | Techemergence
“Meet Alice – a child-sized social care-bot who stars in a new documentary Ik ben Alice, in which she’s introduced as an experiment into the separate homes of three elderly women.”
“While a social worker can converse and has the capacity for concern, Alice’s involvement is algorithmic. Her questions and answers are programmed. She only elicits the illusion of emotional engagement. Though Alice might perform the functions of social engagement and concern, it’s impossible to say that Alice actually cares.”
Caregiver for a child:
‘This is awful’: robot can keep children occupied for hours without supervision Sept. 2016 | Julia Carrie Wong | The Guardian
“The 3ft tall iPal has wide eyes, working fingers, pastel trimming, and a touchscreen tablet on its chest. It can sing, dance, and play rock paper scissors. It can talk with children, answer questions like ‘Why is the sun hot?’, and provide surveillance/video chat for absent parents. The iPal, he boasted, could keep children aged three to eight occupied for a couple of hours without adult supervision. It is perfect for the time when children arrive home from school a few hours before their parents get off work.”
What News-Writing Bots Mean For The Future Of Journalism Feb. 2017 | Joe Keohane | Wired
“It works like this: Editors create narrative templates for the stories, including key phrases that account for a variety of potential outcomes (from ‘Republicans retained control of the House’ to ‘Democrats regained control of the House’), and then they hook Heliograf [the AI “reporter”] up to any source of structured data—in the case of the election, the data clearinghouse VoteSmart.org. The Heliograf software identifies the relevant data, matches it with the corresponding phrases in the template, merges them, and then publishes different versions across different platforms. The system can also alert reporters via Slack of any anomalies it finds in the data—for instance, wider margins than predicted—so they can investigate.”
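The template-and-data workflow described above can be sketched in a few lines. This is a hypothetical illustration only – the template syntax, field names, and anomaly threshold below are all invented, not Heliograf’s actual code:

```python
# Toy sketch of template-driven story generation, loosely modeled on the
# workflow described above. All names, the template syntax, and the numbers
# are invented for illustration; this is not Heliograf's implementation.

# Narrative templates keyed by the outcome of the structured data.
TEMPLATES = {
    "R": "Republicans retained control of the House, winning {r_seats} seats.",
    "D": "Democrats regained control of the House, winning {d_seats} seats.",
}

def generate_story(race_data: dict) -> str:
    """Pick the template matching the outcome and merge in the data."""
    outcome = "R" if race_data["r_seats"] > race_data["d_seats"] else "D"
    return TEMPLATES[outcome].format(**race_data)

def find_anomalies(race_data: dict, predicted_margin: float, tolerance: float = 5.0):
    """Flag results that diverge from predictions, for a human reporter to check."""
    actual_margin = race_data["r_seats"] - race_data["d_seats"]
    if abs(actual_margin - predicted_margin) > tolerance:
        return f"Margin {actual_margin:+} differs from predicted {predicted_margin:+}"
    return None  # nothing unusual; publish without human review

story = generate_story({"r_seats": 222, "d_seats": 213})
```

The ethically notable part is the last line of `find_anomalies`: only anomalous data triggers a human; routine stories are published with no person in the loop.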
Meet the most lifelike robot ever created Feb. 13, 2017 | Blazey
‘Sophia’, a robot, questions ‘her’ own existence and grapples with whether ‘she’ is happy to be ‘alive’ and whether robots will ever harm people. Her ‘creator’ poses some pretty tough questions.
Discussion about making robots that know – or are able to learn – right from wrong.
“In science fiction, the moment at which a robot gains sentience is typically the moment at which we believe that we have ethical obligations toward our creations. An iPhone or a laptop may be inscrutably complex compared with a hammer or a spade, but each object belongs to the same category: tools. And yet, as robots begin to gain the semblance of emotions, as they begin to behave like human beings, and learn and adopt our cultural and social values, perhaps the old stories need revisiting. At the very least, we have a moral obligation to figure out what to teach our machines about the best way in which to live in the world. Once we’ve done that, we may well feel compelled to reconsider how we treat them.”
Emotionally intelligent robots:
Robots reading feelings April 5th, 2019 | Case Western Reserve University | PhysOrg
“These social robots will take some time to catch in the U.S.,” Lee said. “But in places like Japan, where there is a strong culture around robots, this is already beginning to happen. In any case, our future will be side-by-side with emotionally intelligent robots.”
The ‘iPhone’ of Robots Could Bring Capitalism As We Know It to an End July 27th | Matthew Hart | Nerdist
“Although it seems as if there’s a monstrous mishmash of robots already at our doorstep—sometimes literally, in the case of the Domino’s Pizza Delivery Robot—it’s likely that all these robots are, in the grand scheme of the evolution of AI, gimmicky. They’re narrow AI: capable of a specific task, within specific parameters. But the hypothetical iPhone of robots will be generally intelligent. It’ll be able to make your breakfast in the morning, play catch with you in the afternoon, and go off to its factory job at night.”
The idea is that just as the iPhone has become integrated into nearly all areas of our lives, so too will each person have a robot equivalent – an “iRobot” – that will essentially become their personal assistant.
Industrial hack can turn powerful machines into killer robots Aug 22, 2017 | Taylor Hatmaker | TechCrunch
“When we imagine the existential threat to humanity posed by automation, we usually think of the negative impact on the workforce, not robots quite literally bludgeoning us to death. In a post titled ‘Exploiting Industrial Collaborative Robots,’ security researchers at IOActive detail how popular models of consumer and industrial robots have already been compromised in such a way that could cause humans bodily harm.”
And yes, even God:
Inside The First Church Of Artificial Intelligence Nov. 15th, 2017 | Mark Harris | Wired
“With the internet as its nervous system, the world’s connected cell phones and sensors as its sense organs, and data centers as its brain, the ‘whatever’ will hear everything, see everything, and be everywhere at all times. The only rational word to describe that ‘whatever’, thinks Levandowski, is ‘god.’”
“I would love for the machine to see us as its beloved elders that it respects and takes care of. We would want this intelligence to say, ‘Humans should still have rights, even though I’m in charge.’”
CYBORGIFICATION - MERGING HUMANS WITH TECHNOLOGY
Many of us may find the notion of cyborgification repugnant. But unfortunately, the direction toward which industry is herding us makes these questions relevant now. Is cyborgification, aka enhanced humans, ethical? And if so, should it be allowed to infuse our world in the name of “convenience”? Should it be permitted in the workplace to “optimize productivity”, or perhaps even mandated in the name of “public safety”? Would it be ethical to permit, incentivize, or mandate micro-chipping? Do we need new human rights laws to protect our right to “remain natural”? Laws addressing “neurorights” are already being considered in a number of countries.
Neuroethicist Marcello Ienca suggests we need four new human rights laws:
1. The right to cognitive liberty
2. The right to mental privacy
3. The right to mental integrity
4. The right to psychological continuity
Brain-reading tech is coming. The law is not ready to protect us. In the era of neurocapitalism, your brain needs new rights.
Aug 30, 2019 | Sigal Samuel | Vox
“Your brain, the final privacy frontier, may not be private much longer.”
“Some neuroethicists argue that the potential for misuse of these technologies [brain computer interfaces] is so great that we need revamped human rights laws — a new ‘jurisprudence of the mind’ — to protect us. The technologies have the potential to interfere with rights that are so basic that we may not even think of them as rights, like our ability to determine where our selves end and machines begin. Our current laws are not equipped to address this.”
The World’s Most Famous Real-Life Cyborgs | The Medical Futurist
“In the future of humankind, brain implants could improve our memory. Implanted magnets or RFID chips implanted in our fingers could replace passwords and keys. Exoskeletons could boost our strength, and augment a whole range of our human capabilities. So, it will never be more important to keep the features that make us human, such as empathy, creativity or the ability for change. It is not easy to find the right balance between technology and being human, though.”
Super humans who are sexier, stronger and smarter will arrive by 2029 as brains begin to fuse with machines, Google expert claims Mar. 16th, 2017 | Phoebe Weston | Daily Mail
“Technological singularity will turn us into super humans some time in the next 12 years, according to a Google expert.
This might sound like science fiction, but Google’s Director of Engineering, Ray Kurzweil, has made 147 predictions since the 1990s and has a success rate of 86 per cent.
Kurzweil says when we live in a cybernetic society we will have computers in our brains and machines will be smarter than human beings.
He claims this is already happening with technology – especially with our addiction to our phones – and says the next step is to wire this technology into our brains.”
Embedded chip on your shoulder? Some privacy and security considerations Aug. 1, 2017 | Joseph Jerome | IAPP
“Our cyborg future has been viewed by some as inevitable; Elon Musk has argued the merging of machine and biological matter is essential for humanity to remain relevant as a species.”
“’Cyborgification’ raises a host of ethical questions, but employer-driven ‘chipping’ poses at least three immediate challenges.”
Weaponizing the Brain: Neuroscience Advancements Spark Debate May 11th, 2017 | James Giordano | National Defense
“Advancements in neuroscience could be used to create ‘super soldiers,’ link brains to weapon systems for command and control, or even manipulate groups or leaders into taking actions that they normally wouldn’t do.”
Seems like we’ll all be robots sooner or later Aug. 3rd, 2017 | Paul Sassone | Chicago Tribune
“Until now, robots, automated devices, required a human agency. Someone has to turn them on, direct what they do. They were instruments of humans. But by implanting microchips into humans, no instrumentality is required. Humans have become the instrument itself…. It seems that we now are approaching robotization from two directions – automation and a kind of microchipped human evolution.”
Engineers Just Created a Tiny Antenna, Which Could Be Used for Brain Implants Aug. 24, 2017 | Luke Dormehl | Digital Trends
“Revolutionary antenna, hundreds of times smaller than existing models, could help shrink phones and satellites, and even make smart brain implants a real thing.”
“The robots may come, but the cyborgs are already in the office” Aug. 8th, 2017 | Mark Coeckelbergh | Gastkommentar
“Increasing automation may imply not only replacement but also new couplings between humans and machines. New cyborgs. ‘New’, since this is already the way many of us work today. In factories, but also in the office: we collaborate with computer programs to do our work. New algorithms will be developed that change these collaborations, without replacing the human. This creates chances to do new things, perhaps also new creative things. But there is also the danger that this increased cyborg-style automation leads to new ways of working that have even less regard for human needs and well-being, and that further exploit us for profit.”
ARTIFICIAL INTELLIGENCE AND SINGULARITY
Artificial Intelligence (AI) is evolving extremely rapidly. AI could very soon achieve super-intelligence far surpassing human capabilities in some areas. “Singularity” is a “hypothetical moment in time” at which technological change is so profound that, in the words of futurist Ray Kurzweil, it represents a “rupture in the fabric of human history.” Is this what we want for ourselves and our children, or are we being herded into it by industry greed, futurists, and so-called “progress”?
It should be noted that AI and data harvesting come as a packaged deal. AI feeds off the data we (largely unknowingly) generate from our personal devices, the IoT, sensors, surveillance cameras, smart cities, and much-hyped driverless cars. AI success is also contingent on saturating our homes and neighborhoods with wireless transmitters, since much of the IoT and smart city data will be collected and transmitted wirelessly. Restated: we must accept health harms, environmental impacts, cyber security risks, and privacy violations if we want robust AI. Is this a trade-off we are willing to make?
Further Articles on Artificial Intelligence and Ethics:
Why Technology Favors Tyranny Artificial intelligence could erase many practical advantages of democracy, and erode the ideals of liberty and equality. It will further concentrate power among a small elite if we don’t take steps to stop it. Oct. 1st, 2018 | Yuval Noah Harari | The Atlantic
Yuval Noah Harari calls us to consider the deeply troubling societal impacts of AI.
“AI is a tool and a weapon unlike any other that human beings have developed; it will almost certainly allow the already powerful to consolidate their power further.”
“…once somebody (whether in San Francisco or Beijing or Moscow) gains the technological ability to manipulate the human heart—reliably, cheaply, and at scale—democratic politics will mutate into an emotional puppet show.”
“We are unlikely to face a rebellion of sentient machines in the coming decades, but we might have to deal with hordes of bots that know how to press our emotional buttons better than our mother does and that use this uncanny ability, at the behest of a human elite, to try to sell us something—be it a car, a politician, or an entire ideology.”
Futurist Gerd Leonhard on the Internet of Things: a new Meta-Intelligence and the Global Brain April 5th, 2018 | Gerd Leonhard
“Who’s in mission control of this giant brain?”
Very short video clip explaining how by aggregating all the data we send to the Cloud, we essentially have a “global brain.”
“We’re essentially building a new meta-intelligence. And so we can see in 10 years we’re going to have many of those global brains existing next to each other. And that is both very powerful, and also quite scary.”
The best scenes from 2017 talks and presentations: Futurist Keynote Speaker Gerd Leonhard Sept. 28th, 2017 | Gerd Leonhard
“In 5 years you will not be searching Google for the best sushi, or whatever you want to eat in Brussels. The system will already know where you are, who you are, what you want to eat, who your friends are, how much money you have for eating. It may actually eat for you as well.”
Automation Nightmare: Philosopher Warns We Are Creating a World Without Consciousness February 11, 2017 | Paul Ratner
A recent conference on AI hosted by the Future of Life Institute.
“The conversation’s topics centered on the future benefits and risks of artificial superintelligence, with everyone generally agreeing that it’s only a matter of time before AI becomes paramount in our lives. Eventually, AI will surpass human intelligence, with the ensuing risks and transformations.”
The Guardian view on the ethics of AI: it’s about Dr Frankenstein, not his monster Editorial June 12th, 2018 | The Guardian
“Google’s ethical principles for the use of artificial intelligence are little more than a smokescreen, but they show that many engineers are rightly worried by the possible uses of the technology they’re developing.”
“Questions about the ethics of artificial intelligence are questions about the ethics of the people who make it and the purposes they put it to. It is not the monster, but the good Dr Frankenstein we need to worry about most.”
Wikipedia study warns even good robots are unpredictable Aug. 26th, 2017 | Kevin Leen | Gears of Biz
“In an open letter to the UN, these business leaders – including Tesla’s Elon Musk and the founders of Google’s DeepMind AI firm – warned that autonomous weapon technology could be misused by terrorists and despots or hacked to perform in undesirable ways. But the real threat is much bigger – and not just from human misconduct but from the machines themselves…The research into complex systems shows how behaviour can emerge that is much more unpredictable than the sum of individual actions.”
AUTONOMOUS WEAPONS AND KILLER ROBOTS
On the battlefield and in law enforcement, should robots be able to decide who should live and who should die? Is it ethical to leave such a decision to AI? Does AI have the capability of making such judgment calls? And what would the effects of autonomous weapons be on war and crime?
To quote General Sir Richard Barrons referring to killer robots:
“The temptation to have them I think will be terrific because they will be more effective, they will be cheaper, they will take people out of harm’s way and will give you bigger armed forces, so the pressure to have them will only grow.”
The notion of “leveling the playing field” would certainly die a rapid death along with those on the receiving end of autonomous weapons.
Campaign To Stop Killer Robots Website
“The Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons.”
Micro Drones Killer Arms Robots – Autonomous Artificial Intelligence – Warning!! Nov. 16th, 2017 | CS Consulting
“Killer drone arms, artificial intelligence an increasingly real fiction, Social and Smart Phone Facial Recognition, Smart swarms, Warning!”
Prepare for rise of ‘killer robots’ says former defence chief Aug. 27th, 2017 | Ben Farmer | The Telegraph
“He [Gen Sir Richard Barrons] said: ‘The temptation to have them [killer robots] I think will be terrific because they will be more effective, they will be cheaper, they will take people out of harm’s way and will give you bigger armed forces, so the pressure to have them will only grow.’”
Killer robots are almost inevitable, former defense chief warns Aug. 27th, 2017 | Sally Hayden | The Independent
“‘A machine doesn’t blink, doesn’t get hot, doesn’t get cold and just follows the rules,’ he said. ‘Why would you send a 19-year-old with a rifle into a house first to see if anything is in there if you could send a machine?’”
Elon Musk leads 116 experts calling for outright ban of killer robots – Open letter signed by Tesla chief and Alphabet’s Mustafa Suleyman urges UN to block use of lethal autonomous weapons to prevent third age of war Aug. 20th, 2017 | Samuel Gibbs | The Guardian
“Some of the world’s leading robotics and artificial intelligence pioneers are calling on the United Nations to ban the development and use of killer robots saying:
‘Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.’”
The Dawn of The Killer Robot (Full documentary) April 16th, 2015 | Motherboard
“In INHUMAN KIND, Motherboard gains exclusive access to a small fleet of US Army bomb disposal robots—the same platforms the military has weaponized—and to a pair of DARPA’s six-foot-tall bipedal humanoid robots. We also meet Nobel Peace Prize winner Jody Williams, renowned physicist Max Tegmark, and others who grapple with the specter of artificial intelligence, killer robots, and a technological precedent forged in the atomic age. It’s a story about the evolving relationship between humans and robots, and what AI in machines bodes for the future of war and the human race.”
ALGORITHMS, DRIVERLESS CARS, AND LIABILITY
Algorithms are not as objective as we believe them to be. In fact, they often reflect the biases of their programmers and training data. For example, facial recognition technology is generally more accurate for lighter skin tones than for darker ones, which can skew results when it is used to gather data on white or black people.
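Bias of this kind is not just a feeling; it is measurable. The sketch below, with invented outcomes for a hypothetical face-matching system, shows the simplest form of a disparity audit – comparing error rates across groups:

```python
# Toy disparity audit: compare a classifier's error rate across demographic
# groups. All predictions and labels here are invented for illustration.

def error_rate(pairs):
    """Fraction of (predicted, actual) pairs that disagree."""
    return sum(p != a for p, a in pairs) / len(pairs)

# (predicted_match, actual_match) outcomes for a hypothetical face matcher.
results_by_group = {
    "lighter_skin": [(1, 1), (0, 0), (1, 1), (0, 0), (1, 1)],  # no errors
    "darker_skin":  [(1, 1), (1, 0), (0, 1), (0, 0), (1, 0)],  # 3 of 5 wrong
}

rates = {group: error_rate(r) for group, r in results_by_group.items()}
disparity = rates["darker_skin"] - rates["lighter_skin"]
```

A system that is 100% accurate on one group and 40% accurate on another can still report an impressive “average” accuracy, which is why per-group breakdowns matter.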
Driverless cars must be programmed to “decide” in a split second how to respond in a life threatening situation. Should the car veer away from a child potentially harming the driver? Who decides how AI should be programmed? Who are the ethicists behind the algorithms? Are there any?
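The point that someone must choose how the car decides can be made concrete. In the deliberately oversimplified sketch below, the vehicle’s “ethics” reduces to a table of numeric harm weights picked by a programmer – every name and value is invented, and real systems are vastly more complex, which is exactly why the question of who sets these parameters matters:

```python
# Deliberately oversimplified: a collision "policy" in which ethical judgment
# has been reduced to numeric harm weights chosen by a programmer.
# All values are invented for illustration.

HARM_WEIGHTS = {       # who chose these numbers, and on what authority?
    "pedestrian": 1.0,
    "occupant": 1.0,   # raise this and the car will "protect" its buyer instead
    "property": 0.1,
}

def choose_maneuver(options):
    """Pick the option whose weighted expected harm is lowest.

    options: list of (name, {affected_party: probability_of_harm}) tuples.
    """
    def expected_harm(risks):
        return sum(HARM_WEIGHTS[party] * p for party, p in risks.items())
    return min(options, key=lambda opt: expected_harm(opt[1]))[0]

# Braking risks the pedestrian; swerving risks the occupant and some property.
decision = choose_maneuver([
    ("brake_straight", {"pedestrian": 0.6}),
    ("swerve", {"occupant": 0.3, "property": 0.9}),
])
```

Note that nothing in the code is “ethical” at all: the moral choice was made earlier, silently, by whoever filled in `HARM_WEIGHTS`. Change one number and the same car makes the opposite decision.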
And what about liability? Who’s accountable in the event of an IoT mishap? The designer of the product? The manufacturer? The retailer? The owner? Nowhere is this question more relevant than with driverless cars.
‘We have a problem’: Racist and sexist robots – Artificial intelligence is increasingly biased against women and non-white people, experts claim as such programs creep ever further into our lives Aug. 25th, 2017 | Henry Bodkin | National Post
“Programmes designed to ‘pre-select’ candidates for university places or to assess eligibility for insurance cover or bank loans are likely to discriminate against women and non-white applicants, according to their [experts’] research.”
“A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs to decide whether to risk the lives of busload of civilians or lose a long-sought terrorist. How does a machine make an ethical decision? Can it “learn” to choose in situations that would strain human decision making? Can morality be programmed?”
Who’s to blame when driverless cars have an accident? Mar. 20th, 2018 | The Conversation
“The news that an Uber self-driving vehicle has killed a pedestrian in the US has made headlines around the world….But partial or full autonomy raises the question of who is to blame in the case of an accident involving a self-driving car? In conventional (human-driven) cars, the answer is simple: the driver is responsible because they are in control. When it comes to autonomous vehicles, it isn’t so clear cut.”
ENVIRONMENTAL, SOCIAL, AND INTERGENERATIONAL INJUSTICES
Should industry’s insatiable appetite for growth and profits take precedence over environmental and social justice?
The “growth or die” model of capitalism, combined with the craze to digitalize every aspect of our lives, is wreaking havoc on our world. Industry is pulling out all the stops in a grand effort to saturate every inch of the planet and every moment of our lives with digital connectivity – not for the good of society, but for industry’s “bottom line.”
How will omnipresent connectivity and pervasive “screenification” impact our day-to-day living? Our children? Our relationships? Our sanity? How will it impact the workers who mine, in harsh slave-like conditions, for the minerals needed for all this technology? And the factory workers who become ill, and sometimes die, from the toxins in all these things? What about all the harmful wireless radiation permeating our homes, schools, communities, and workplaces? And what effect will all this technology have on our children, who now spend more time online than with their family, friends, and nature combined? What about the future of wildlife, our pollinators, endangered birds, and trees? Are we even considering the exponential increase in e-waste as every “thing” becomes “smart”? Finally, can the collusion between Telecom and our government – particularly the FCC – be considered ethical?
Perhaps it’s not so “smart” to sever our connection with ourselves, each other, and the earth.
Should we be forced to accept 4G/5G “small” cell towers in front of our homes when this may result in harm to our health?
Public health is littered with examples where economic interests trumped scientific advice Oct. 29th, 2017 | Devra Davis | OpEd piece, The Hill
Ethics and Surveillance Capitalism: “A global architecture of behavior modification threatens human nature in the 21st century.”
Age of Surveillance Capitalism: “We Thought We Were Searching Google, But Google Was Searching Us” Mar. 1st, 2019 | Shoshana Zuboff, Professor Emeritus at Harvard Business School, on Surveillance Capitalism, interviewed on Democracy Now
“Surveillance Capitalism claims private human experience for the market dynamic as a free source of raw material that is translated into behavioral data. These data are then combined with advanced computational abilities to create predictions…predictions of what we will do…predictions of our behavior…predictions of what we will do now, soon, and later. And these predictions are then sold to business customers in a new kind of market place that trades exclusively in human futures.”
“They have no interest in us. It doesn’t matter if we are happy or sad. It doesn’t matter if we’re doing well or poorly. It only matters that we do these things in ways that they can scrape the experience and turn it into data.”
Should industry be allowed to erode our privacy by the 24/7 harvesting of all our data?
The coming privacy crisis on the Internet of Things Nov. 28th, 2017 | Alasdair Allan | TEDxExeterSalon
Concise and clear explanation of how the IoT data business model works.
“…the data from these objects [IoT “things”] almost invariably ends up in the Cloud where it’s aggregated, packaged, and then almost certainly sold.” And as we “agree” to this business model, that makes us the “product, not the customer.”
The internet of things is built to leak June 8th, 2018 | Gilad Rosner | The Hill
IoT devices should protect individuals’ privacy as part of their design. Otherwise, before we know it, the relentless data-tracking practices that have become business as usual online will migrate to the offline world, and the concept of a private space where we can retreat, be anonymous, and avoid the gaze of commercial companies and other people will become a quaint relic of the past.
For more on privacy please see:
Sustainable Technologies for Sustainable Lifestyles Philip Brey 2017 | MIT Press
Brey examines the role of technology in contributing to, and potentially mitigating, environmental degradation from overconsumption and our industrial society. He writes:
“Technological reform will certainly be of great help in the move towards sustainable patterns of consumption. However, such reform should be seen as part of a comprehensive strategy for sustainable consumption, which also includes social and economic incentives and public debates about values, lifestyles, the quality of life, and the future of the planet.”
“Most centrally, I have argued, a fundamental reform is needed in patterns of consumption and modern, Western lifestyles, and the values and beliefs that underlie them.”
Should workers manufacturing our boundless technology be exposed to toxins that cause illness and sometimes even death?
Where’s the Harm — Component Manufacturing Electronics Take Back Coalition
“Toxic materials: A complex combination of toxic materials is used in electronics manufacturing, including chlorinated and brominated substances, photoactive chemicals, toxic gases, acids, solvents, heavy metals, plastics, and plastic additives, many of which impose a heavy burden on the environment and worker health.”
Should workers be paid slave wages and bear harsh working conditions?
What Is The Cost Of Our Digital Dependency? The Dirty Secret Of Our Digital Addiction Documentary by Sue Williams, 2017.
“In an investigation that spans the globe, filmmaker Sue Williams investigates the underbelly of the electronics industry and reveals how even the smallest devices have deadly environmental and health costs.”
Should our technology be built on the backs of people (and children) mining for minerals under horrendous conditions?
Descending into conflict: tech minerals finance war July 25th, 2017 | Raconteur — The technology we now take for granted relies on minerals which may have been mined in conflict zones where the trade finances war and oppression. https://www.raconteur.net/business/descending-into-conflict-tech-minerals-finance-war
(For more on conflict minerals please see https://whatis5g.info/conflict-minerals/)
Should we forge ahead with the exponential growth of IoT technology, knowing it will also cause environmental devastation and injustice to the workers who dismantle the ever-growing mountain of e-waste?
Almost everything you know about e-waste is wrong May 18th, 2018 | Josh Lepawsky | The Conversation
U.S. obsession with electronics has huge human price August 24, 2017 | Austin Lombard | InMotion
(For more on e-waste please see, https://whatis5g.info/e-waste/)
Should people be subjected to cyber security risks that could potentially be life threatening?
Researchers warn against ‘hackable’ robots Aug. 22nd, 2017 | Jane McCallion | IT Pro
“Cyber security firm IOActive claims it has discovered flaws in several commercially available robots that could lead to the devices hacking networks or even physically attacking their owners.”
“There’s an even more sinister element to the vulnerabilities IOActive said…: they can be programmed to maim the humans around them.”
Can the heart be hacked? Experts find 8,000 security flaws in pacemaker software May 28th, 2017 | RT.com
“From hacked connected cars to power grids, the implications of IoT security issues seem to be getting graver – yet when it comes to pointing fingers for security troubles, many times victims don’t even know where to start.”
(For more on cyber security, please see, https://whatis5g.info/cybersecurity-vulnerability/)
EFFECTS ON OUR BRAINS AND HUMANITY
What effect will 24/7 connectivity have on our children, families and relationships? With our current technology, the social fabric of our society is already unraveling. Should we perhaps be moving toward less “technoference” rather than more?
“Smartphones have by now been implicated in so many crummy outcomes—car fatalities, sleep disturbances, empathy loss, relationship problems, failure to notice a clown on a unicycle—that it almost seems easier to list the things they don’t mess up than the things they do.”
LOWERED PROPERTY VALUES:
Cell Phone Towers Lower Property Values: Documentation And Research on Cellular Base Stations Near Homes
TELECOM AND GOVERNMENT COLLUSION
Can the collusion between the telecom industry and our government – particularly the FCC – be considered ethical?
Captured Agency: How the Federal Communications Commission Is Dominated by the Industries It Presumably Regulates 2015 | Norm Alster | Edmond J. Safra Center for Ethics, Harvard University
GOOGLE CLEANS UP ITS ACT WITH 7 ETHICAL PRINCIPLES IN THE WAKE OF PROJECT MAVEN
One Recent Example Of How Ethics Are Playing Out In The IoT:
More About Project Maven:
“a large government program that will result in improved safety for citizens and nations through faster identification of evils such as violent extremist activities and human right abuses. The scale and magic of GCP [Google Cloud Platform], the power of Google ML [machine learning], and the wisdom and strength of our people will bring about multi-order-of-magnitude improvements in safety and security for the world.”
“Google intended to build a ‘Google-earth-like’ surveillance system that would allow Pentagon analysts to ‘click on a building and see everything associated with it’ and build graphs of objects like vehicles, people, land features, and large crowds for ‘the entire city,’ states one email recapping a Maven kickoff meeting with Pentagon representatives. Google’s artificial intelligence would bring ‘an exquisite capability’ for ‘near-real time analysis,’ the email said.”
“Despite the excitement over Google’s performance on Project Maven, executives worried about keeping the project under wraps. ‘It’s so exciting that we’re close to getting MAVEN! That would be a great win,’ Fei-Fei Li, chief scientist for AI at Google Cloud, wrote in a September 24, 2017 email. ‘I think we should do a good PR on the story of DoD collaborating with GCP from a vanilla cloud technology angle (storage, network, security, etc.), but avoid at ALL COSTS any mention or implication of AI.’
“‘Google is already battling with privacy issues when it comes to AI and data; I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.’”
AI at Google: our principles June 7th, 2018 | Sundar Pichai | AI
“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”
UPDATE: April 2019 Google has yet more ethical problems to deal with.
We asked experts for practical suggestions on why Google’s AI ethics council bombed and what the company should do next
April 6th, 2019 | Bobbie Johnson and Gideon Lichfield | MIT Technology Review
“The inclusion of the Heritage Foundation’s president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views, and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing.
How did things go so wrong? And can Google put them right? We got a dozen experts in AI, technology, and ethics to tell us where the company lost its way and what it might do next. If these people had been on ATEAC, the story might have had a different outcome.”
MISCELLANEOUS ARTICLES PERTAINING TO ETHICS IN THE IOT
Do we have a right to mental privacy? Are new human rights laws needed to protect our thoughts from “unauthorised collection, storage, use or even deletion”?
New computers could delete thoughts without your knowledge, experts warn – New human rights laws are required to protect sensitive information in a person’s mind from ‘unauthorised collection, storage, use or even deletion April 26th, 2017 | Ian Johnston | Independent
This article discusses the ramifications of new technologies that can read our thoughts. Ethicists propose four new human rights: the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity.
What role should technology play in society?
The Strategic Role of Technology in a Good Society Feb. 2018 | Philip Brey | ScienceDirect
Brey investigates what the proper role of technology in society should be. He posits five key values necessary for a “good” society: two intrinsic values (well-being and justice) and three instrumental values (freedom, democracy, and sustainability). Brey concludes that technology can be developed to support these values, but it must be designed with them in mind (an approach known as Values by Design) rather than through current stakeholder-driven approaches.
“Technologies often have social impacts that positively or negatively affect the realization of values defining a good society.
Intrinsic values for a good society are well-being and justice; instrumental values are freedom, democracy and sustainability.
For each of these values, there are ways to develop technology so as to systematically support it and promote a good society.”
Should tech companies wield more power than governments?
Big Tech Companies Are So Powerful That A Nation Sent An Ambassador To Them
Introducing the Danish envoy to Silicon Valley and beyond. June 23rd, 2018 | Paul Blumenthal | Huffington Post
Confirmation that the governing power of tech companies is recognized internationally.
Denmark has officially appointed Casper Klynge as “Tech Ambassador.”
“Klynge’s appointment is an admission of a striking, and perhaps uncomfortable, reality. It is the latest sign that as big tech companies have grown larger without meaningful oversight from regulators or antitrust watchdogs in the United States, they’ve become regulators and policymakers by default.”
“The reasoning behind this is a cool analysis – that, whether we like it or not, some of the big tech companies, including a lot of them in the area outside my window right now in Silicon Valley, are enormously influential.”
How will the following impact our already failing trust in media?
CGI and AI are going to turbocharge ‘fake news’ and make it far harder to tell what’s real July 28th, 2017 | Rob Price | Business Insider UK
“Tech is being developed that will make it easy to create fake video footage of public figures or audio of their voice.
The developments aren’t perfect yet, but they threaten to turbocharge ‘fake news’ and boost hoaxes online.
In years to come, people will need to be far more skeptical about the media they see.”
One of many unfortunate consequences of a Cloud-connected world. How many more will it take for us to realize that 24/7 connectivity to the Internet is not an improvement over the natural world?
Thermostats, Locks and Lights: Digital Tools of Domestic Abuse June 23rd, 2018 | Nellie Bowles | NY Times
“Internet-connected locks, speakers, thermostats, lights and cameras that have been marketed as the newest conveniences are now also being used as a means for harassment, monitoring, revenge and control.”
“Baker talked with IEEE Spectrum about the legal questions that will come up as wearables—including ‘embeddables’ and ‘ingestibles’—become intimate parts of our lives. He sees these technologies as the latest step in a long trend toward human-computer integration…. Thinking about the legal issues of wearables, the first things that come to mind are data privacy and security. How do these issues play out in wearables?”
At the Media Lab at the renowned Massachusetts Institute of Technology, researchers wanted to explore some of the ethical and moral dilemmas of Artificial Intelligence, focusing on self-driving cars.
Examples of headlines in the news this week included:
“In a world where business and technology trumps ethics and regulation, America is already being hacked. The mass surveillance state of data harvesting in real-time is coming.”
“The Sky Net programme, now renamed Pingan Chengshi, or Safe Cities, claimed to have connected 170 million cameras across China last year. By 2020, another 400 million units will be installed.”
“BEIJING PLANS TO BE ABLE TO IDENTIFY ANYONE, ANYTIME, ANYWHERE IN CHINA WITHIN THREE SECONDS.”
And guess who’s next in line… the rest of us, thanks to 5G!
ADDITIONAL RESOURCES ABOUT DIGITAL ETHICS