Noodle Notes: Ethical machine learning
First up is a paper in the AMA Journal of Ethics about AI and healthcare titled “Is it ethical to use prognostic estimates from machine learning to treat psychosis?” You might think that’s looking pretty far into the future, but just this week an insurance company announced a plan to use wearable-device data to price life insurance. So this is not the future we’re reading about; it’s now.
We’re also reading this Forbes piece on the questionable ethics of China’s new social credit system, which tracks everyone in the country around the clock. The author uses it as one example among several of the challenge of imparting (or failing to impart) ethics in our AI models. Again, this system is already being tested live, so this is not the future – it is now.
Brookings takes a stab at the challenge of corporations taking responsibility for ethical dilemmas in AI. The piece focuses on five areas: weapons and military-grade applications, law enforcement, government surveillance, racial bias, and social credit.
We enjoyed this Medium post about shifting an engineer’s mindset from “how” to “why.” It argues that this shift could benefit machine learning by challenging bias, helping with issues of data collection and privacy, and centering ethics in AI.
Last, but not least, we’ll close it out by revisiting Nick Bostrom’s 2011 paper on ethics and artificial intelligence.