Noodle Notes: Ethical Machine Learning
First up is a paper in the AMA Journal of Ethics about AI and healthcare, titled "Is It Ethical to Use Prognostic Estimates from Machine Learning to Treat Psychosis?" You might think that's looking pretty far into the future, but just this week an insurance company announced a plan to use wearable-device data to price life insurance. This is not the future we're reading about; it's now.
We're also reading this Forbes piece on the questionable ethics of China's new social credit system, which aims to track everyone in the country all of the time. The author uses it as one of several examples of the challenge of building ethics into (or leaving ethics out of) our AI models. Again, this system is already being tested live, so this, too, is not the future; it is now.
Brookings takes a stab at how corporations can take responsibility for ethical dilemmas in AI, focusing on five challenges: weapons and military-grade applications, law enforcement, government surveillance, racial bias, and social credit.
We enjoyed this Medium post about changing an engineer's mindset from "how" to "why." It emphasizes how this shift could benefit machine learning by challenging bias, addressing issues of data collection and privacy, and centering ethics in AI.
Last but not least, we'll close out by revisiting Nick Bostrom's 2011 paper on ethics and artificial intelligence.