Anna Felländer is one of Sweden’s leading experts on the effects of digitalization on organisations, society and the economy. She has long been passionate about making meaning from data. As Head of Financial Analysis at the crisis management office of the Prime Minister of Sweden during the financial crisis of 2008, Anna learned that a multidisciplinary perspective can uncover insights and concealed patterns that traditional statistics can’t. Later, as an advisor to the Swedish government and as the Chief Economist at Swedbank, her key research analysed the hidden values of digitalization from an economic, societal and business perspective.
It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was causing a negative externality — a new type of digital pollution. Consumers had opted in to receive the benefits of digitalization, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities — the digital pollution — of their AI technology. Regulators had difficulty interpreting AI well enough to regulate it appropriately, and customers didn’t understand how their data was being used in the black box of AI algorithms.
Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions.