Noodle Notes: Ethical Machine Learning
First up is a paper in the AMA Journal of Ethics about AI and healthcare titled “Is it ethical to use prognostic estimates from machine learning to treat psychosis?” You might think that’s looking pretty far into the future, but just this week an insurance company announced plans to use wearable-device data to price life insurance. So this is not the future we’re reading about; it’s now.

We’re also reading this Forbes piece on the questionable ethics of China’s new social credit system, which tracks citizens around the clock. The author uses it as one of several examples of the challenge of imparting (or failing to impart) ethics in our AI models. Again, the system is already being tested live, so this is not the future; it is now.

Brookings takes a stab at how corporations can take responsibility for ethical dilemmas in AI, focusing on five areas: weapons and military-grade applications, law enforcement, government surveillance, racial bias, and social credit.
We enjoyed this Medium post about shifting an engineer’s mindset from “how” to “why.” It emphasizes how this shift could benefit machine learning by challenging bias, addressing issues of data collection and privacy, and centering ethics in AI.
Last but not least, we’ll close out by revisiting Nick Bostrom’s 2011 paper on the ethics of artificial intelligence.
—
Photos by Clark Tibbs, Fabian Albert and Ibrahim Rifath on Unsplash
