Just like particles in physics, reports sometimes collide in odd ways.
This week, the European Consumer Organisation (BEUC) revealed that, according to a large survey, “less than 20 per cent of Europeans believe that current laws ‘efficiently regulate’ artificial intelligence (AI), and 56 per cent have low trust in authorities to exert effective control over the technology” (see below). “When a consumer believes they have been harmed because of AI-based products or services, they are not only unable to identify who’s responsible but also feel that they can’t rely on authorities to protect them,” explains BEUC’s deputy director general Ursula Pachl.
But as management consulting firm Boston Consulting Group recently argued, the time has come for ‘AI-powered governments’, especially amidst the Covid-19 pandemic, when “government professionals should engage with the populations they serve, to support the use of AI and other technology to power change”.
This task, if taken up, is as crucial as it is immense. Yet only it will allow existing legislation to be updated to make the most of AI’s presumed potential. All the more so as AI ethics groups seem to repeat one of society’s classic mistakes: they fail to account for the cultural and regional contexts in which AI operates, as the MIT Technology Review underlines.
- Olivier Dessibourg, GESDA