Here at Noodle we spend a great deal of time thinking of ways to center ethics in our machine learning and data science work. This means that we’re constantly reading about (and thinking about) ethics in AI from every possible angle: from healthcare to industry and beyond.

First up is a paper in the AMA Journal of Ethics about AI and healthcare titled “Is it ethical to use prognostic estimates from machine learning to treat psychosis?” You might think that’s looking pretty far into the future, but just this week an insurance company announced a plan to use wearable-device data to set pricing in life insurance. This is not the future we’re reading about; it’s now.

We’re also reading this Forbes piece on the questionable ethics of tracking everyone in a country around the clock under the new Chinese social credit system; the author uses it as one of several examples of how we do (or don’t) build ethics into our AI models. Again, this system is already being tested live, so this is not the future; it is now.

Brookings takes a stab at the challenge of getting corporations to take responsibility for ethical dilemmas in AI, focusing on five areas: weapons and military-grade applications, law enforcement, government surveillance, racial bias, and social credit.

We enjoyed this Medium post about shifting an engineer’s mindset from “how” to “why.” It argues that this shift could benefit machine learning by challenging bias, addressing issues of data collection and privacy, and centering ethics in AI.

Last but not least, we’ll close by revisiting Nick Bostrom’s 2011 paper on the ethics of artificial intelligence.

Photos by Clark Tibbs, Fabian Albert, and Ibrahim Rifath on Unsplash
