Noodle Notes: Ethical Machine Learning
First up is a paper in the AMA Journal of Ethics about AI and healthcare, titled "Is it ethical to use prognostic estimates from machine learning to treat psychosis?" That may sound like a question for the distant future, but just this week an insurance company announced a plan to use wearable device data to set life insurance pricing. This is not the future we're reading about; it's now.
We're also reading a Forbes piece on the questionable ethics of China's new social credit system, which tracks citizens around the clock. The author uses it as one of several examples of the challenge of building ethics into (or leaving ethics out of) our AI models. Again, this system is already being tested live, so this is not the future – it is now.
Brookings takes a stab at the challenge of corporations taking responsibility for ethical dilemmas in AI. The piece focuses on five areas in particular: weapons and military-grade applications, law enforcement, government surveillance, racial bias, and social credit.
We enjoyed this Medium post about shifting an engineer's mindset from "how" to "why." It argues that this shift could benefit machine learning by challenging bias, addressing issues of data collection and privacy, and centering ethics in AI.
Last, but not least, we close by revisiting Nick Bostrom's 2011 paper on the ethics of artificial intelligence.
- Targeting the Right Signals with Demand Signal AI
- The Promise and Purpose Behind Noodle.ai Enterprise AI®
- Digging into Noodle.ai’s Asset Health AI Application
- Energy Conservation AI: Let’s talk about your carbon footprint
- Noodle.ai Named to Supply & Demand Chain Executive’s SDCE 100 Top Supply Chain Projects for 2019