Algorithmic decision-making has enormous potential to do good.
But things can go drastically wrong when decisions are entrusted to algorithms without ensuring those algorithms adhere to established ethical norms. Two recent examples illustrate how government agencies have failed to automate fairness.
Algorithms can take much of the hard work out of tough decisions. But to avoid failures like the Robodebt debacle or unfair parole rulings, we need to ensure these machines operate within human ethical norms.