Daily Brief

Good morning, this is Michelle. With deteriorating relations between Russia and western countries, space stability could be more at risk than ever. This week, countries began a series of discussions on how to avoid a war in space by setting some ground rules.

We also bring you the latest update from UN Geneva, plus a Ukraine Story about Russians who oppose the war and refuse to leave their country.


Michelle Langrand

11.05.2022


On our radar



The International Space Station on 8 November 2021. (Credit: NASA)

Policing space. Diplomatic delegations are gathering this week in Geneva to discuss how to maintain stability in space. With some 25,000 satellites having orbited the Earth since 1957, when the Soviet Union successfully launched Sputnik 1, and the rising militarisation of space over the last decade, there is growing concern that conflict could ignite at any moment. The meeting, held at the UN Geneva headquarters at the Palais des Nations from 9 to 13 May, will be the first of four sessions by a working group tasked with finding common rules for responsible behaviour in space.

Geneva Solutions (EN)

Here's what else is happening


Science and diplomacy reads by GESDA



(Credit: Wikimedia)

AI can now produce texts that are almost indistinguishable from those written by humans. These algorithms, called large language models (LLMs), made tremendous progress in 2021. Meta (Facebook) just announced it has created a massive new LLM, and is giving it away for free (read below). But what if LLMs started to write science articles? The very foundation of the centuries-old, global generation of knowledge would be shaken!

As Nature sums it up, LLMs “can churn out astonishingly convincing prose, translate between languages, answer questions and even produce code”. But there are caveats: LLMs “have deep flaws, parroting misinformation, prejudice, toxic language” or problematic stereotypes in the zillions of documents they’re trained on. “And researchers worry that streams of apparently authoritative computer-generated language that’s indistinguishable from human writing could cause distrust and confusion.”

Shobita Parthasarathy, a specialist in the governance of emerging technologies at the University of Michigan in Ann Arbor, is one of them. She thinks, among other things, that “some people will use LLM to generate fake or near-fake papers, if it is easy and they think that it will help their career”. More generally, “the algorithmic [texts] could make errors, include outdated information or remove nuance and uncertainty, without users appreciating this. If anyone can use LLMs to make complex research comprehensible, they risk getting a simplified, idealised view of science that’s at odds with the messy reality, which could threaten professionalism and authority. It might also exacerbate problems of public trust in science.”

Joelle Pineau, managing director at Meta AI, whose team created the newly announced LLM, responds to such critics in an MIT Tech Review article (read below): “There were a lot of conversations about how to do that in a way that lets us sleep at night, knowing that there’s a non-zero risk in terms of reputation, a non-zero risk in terms of harm.” She dismisses the idea that you should not release a model because it’s too dangerous. “I understand the weaknesses of these models, but that’s not a research mindset,” she adds, concluding that “the only way to build trust is extreme transparency”.

For Parthasarathy, that is not enough. In a report, the scientist and her colleagues plead for regulatory bodies to step in with general rules for LLMs: “It’s fascinating to me that hardly any AI tools have been put through systematic regulations or standard-maintaining mechanisms.”

When it comes to the possibility of such algorithms writing science articles, and hence producing new knowledge that might be hard to distinguish from fake content, the goal is simple: to make sure that LLMs, one of the most interesting and powerful achievements of science, do not become one of its most insidious enemies.

- Olivier Dessibourg, GESDA

Meta has built a massive new language AI – and it’s giving it away for free.

MIT Technology Review (EN)

Race to cut carbon emissions fuels climate tech boom. Investors are pouring money into start-ups that want to build a greener economy.

Financial Times (EN)

From burst bubble to medical marvel: how lipid nanoparticles became the future of gene therapy.

STAT (EN)

Human genetic engineering is coming. We must discuss the social and political implications now.

The Globe and Mail (EN)

Scientists used AI to create an enzyme that breaks down plastic in a week instead of a century.

Singularity Hub (EN)

Worried that quantum computers will supercharge hacking, White House calls for encryption shift.

Science (EN)

Neurorights Foundation launches major publication ‘International human rights protection gaps in the age of neurotechnology’.

Neurorights Foundation (EN)


This selection is proposed by the Geneva Science and Diplomacy Anticipator (GESDA), which works on anticipating cutting-edge science and technological advances to develop innovative and inclusive solutions for the benefit of the planet and its inhabitants.


GS news is a new media project covering the world of international cooperation and development. Don’t hesitate to forward our newsletter!

Have a good day!

Avenue du Bouchet 2
1209 Genève
Suisse