Dear reader,
Welcome to BN Edition: concise analysis of the stories that offer us hints at our unfolding future. Fresh from the desks of the Brilliant Noise team.
Each edition takes a handful of stories from recent weeks and asks three things:
What? The story in a few sentences.
So what? Why do I need to know?
What next? What do I need to do or watch out for?
This week, find out more about three ways AI is reshaping economic and geopolitical power, and raising issues of safety and security along the way.
Our CEO Antony Mayfield is being interviewed on stage at the World Media Group’s Innovation Forum on 5th June in London. Let us know if you’ll be there. Also check out our sister newsletter, Antonym.
Nvidia is the most important stock in the world
What?
Another big tech company announcing big profits? No. This is different. The financial markets saw Nvidia’s quarterly results this week as an indicator of which way the whole economy will go. It beat market expectations, already high, by a couple of billion dollars.
More chips bought = more AI investment = more economic growth.
Nvidia’s chips have been in high demand by Big Tech companies rushing to develop artificial intelligence products.
The company expects continued growth with the launch of its new Blackwell chips and plans to roll out more powerful chips at a consistent pace.
Demand for Nvidia's AI data centre graphics processing units remains high, and the company's data centre business has become the main engine of its growth.
Source: Financial Times
So what?
They say data is the new oil. But it’s more accurate to say that compute is the new oil.
In simple terms, compute describes the manipulation of information: it's used to organise and process data. In a knowledge economy, if you own the compute power, you can process the most information, and ultimately own it and sell it back to us as knowledge.
What next?
Analysts say that because Nvidia holds the key (the chips) to compute power, it is likely to join Apple and Microsoft as one of the only companies ever to be valued at $3 trillion. For perspective, the UK’s GDP was about $3.4 trillion last year.
Nvidia makes the chips that create the compute power firing all the LLMs built by the likes of OpenAI, Google and Anthropic. The value of its stock has become a new indicator for growth across the board. The stock is the power. And, as with any raw resource, whoever controls the resource that powers the world controls the world.
Nvidia is becoming the engine of the US economy… and, by extension, the world. All this from a chip maker.
AI’s Black Box is unlocked
What?
Anthropic, the Amazon-backed rival to OpenAI, has made progress in understanding the inner workings of the large language models (LLMs) that power generative AI. And they’ve done it by acting like neuroscientists studying a brain.
Source: Wired
So what?
‘Interpretability’ is the field of computer science that tries to understand how AI systems work. AI researcher Chris Olah and his team at Anthropic have been studying artificial neural networks, which are like the brains of AI systems.
They wanted to understand how these AI systems work and why they give certain answers. The problem is that AI systems are like a "black box" – we don’t really know how they get from a prompt to an answer.
Olah and his team made a breakthrough by treating AI systems like brains and figuring out which parts of the system are activated when they are asked something. Much as doctors use an MRI scan to observe the brain activity involved in certain tasks or emotions, the team at Anthropic put questions to their LLM and identified the combinations of artificial neurons that signify specific concepts, or ‘features’.
It’s early days, but their work has implications for AI safety. By understanding these features, researchers can make AI systems safer and reduce bias. For example, they can suppress features that represent unsafe computer code or instructions for making dangerous products. They can also control how much attention the AI system pays to certain features by adjusting a dial.
This is important because it helps us use AI in a safer and more responsible way.
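For the technically minded, here is a toy sketch of that ‘dial’ idea in Python. It illustrates the general concept only, not Anthropic’s actual method (which learns features from a real model’s internal activations): estimate a direction in activation space associated with a concept, then add or subtract that direction to turn the concept up or down. All numbers and names below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # size of our toy "activation" vectors

# Invent a hidden concept direction, then simulate activations recorded
# while the model handles prompts that do / don't involve the concept.
concept = rng.normal(size=dim)
concept /= np.linalg.norm(concept)
with_concept = rng.normal(size=(200, dim)) + 3.0 * concept
without_concept = rng.normal(size=(200, dim))

# Estimate the "feature" as the difference of the mean activations.
feature = with_concept.mean(axis=0) - without_concept.mean(axis=0)
feature /= np.linalg.norm(feature)

def dial(activation: np.ndarray, strength: float) -> np.ndarray:
    """Turn the feature up (positive strength) or down (negative)."""
    return activation + strength * feature

# A new activation in which the concept is present...
x = rng.normal(size=dim) + 3.0 * concept
score = float(x @ feature)
print("feature score before:", round(score, 2))  # clearly positive
print("feature score after: ", round(float(dial(x, -score) @ feature), 2))  # ~0: suppressed
```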
What next?
Last week, after its big 4o announcement, OpenAI also announced that co-founder Ilya Sutskever was leaving his role as Chief Scientist. Part of that role was to lead the ‘alignment’ team, which is what AI companies tend to call their safety teams – alignment, that is, with human interests and survival(!)
There has since been a lot of speculation that one of the reasons he left was that AI safety was not high enough on OpenAI’s list of priorities. The speculation was not helped by the fact that another very senior member of the alignment team left at the same time as Sutskever.
It’s interesting that the work at Anthropic to understand how LLMs ‘think’ sits with the safety team: it indicates they’re trying to predict how the next generation of AI models might behave, and how to mitigate the risks.
Contrast that with Sutskever’s departure from OpenAI at a time when the company is focused on building 4o and GPT-5, add Sam Altman’s treatment of Scarlett Johansson, and you have a worrying indication of a company far more concerned with money than with safety or the rule of law.
Xi-PT and why sovereign AI matters
What?
China has developed a large language model (LLM) called "Chat Xi PT" that is trained on President Xi Jinping's political philosophy.
The model aims to control how AI informs Chinese internet users and may be released for wider use in the future. The LLM can do all the things we expect an AI chatbot to do – answer questions, create reports, summarise information, and translate between Chinese and English.
Chinese officials have made extensive efforts to disseminate Xi's ideas and have mandated that generative AI providers embody core socialist values and avoid subverting state power.
Source: Perplexity
So what?
The Chinese version of ChatGPT is another illustration of how generative AI = modern power. China had to develop and control its own artificial intelligence systems because it doesn’t want its people using US-owned AI products like ChatGPT or Anthropic’s Claude.
What next?
If technologies like ChatGPT are taken away because of political schism, or if your AI service provider suddenly hikes prices from £20 to £500 a month, the global business impact could be profound – much like Germany’s economy grappling with the fallout from its dependency on Russian gas. In this new landscape, knowledge, powered by AI, becomes as crucial as energy.
Just as countries must consider their energy sources, leaders need to think about where their organisations get their computational power. The risk of dependency on a single provider is real. Things to consider:
Diversify AI Providers: Don’t rely on a single AI service. Investigate multiple providers to mitigate risk (see the sketch after this list).
Plan for Self-Sufficiency: Explore options for becoming self-sufficient in AI capabilities.
Knowledge is (AI) Power: AI not only creates knowledge but also wields power. This power extends across commercial, geopolitical, economic, and personal realms. To stay competitive, ensure your organisation is resilient against potential disruptions in the AI supply chain.
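To make the ‘diversify’ point concrete, here is a minimal Python sketch of a fallback pattern: try one provider and, if it fails, move on to the next. The provider functions are hypothetical stand-ins; in practice each would wrap a real SDK call (OpenAI, Anthropic, a self-hosted model, and so on).

```python
from typing import Callable

def ask_provider_a(prompt: str) -> str:
    # Hypothetical stand-in for, e.g., an OpenAI API call.
    raise ConnectionError("provider A unavailable")

def ask_provider_b(prompt: str) -> str:
    # Hypothetical stand-in for, e.g., an Anthropic API call.
    return f"[provider B] answer to: {prompt}"

def ask_with_fallback(prompt: str, providers: list[Callable[[str], str]]) -> str:
    """Try each provider in order; return the first successful answer."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:  # outages, rate limits, price shocks...
            last_error = err
    raise RuntimeError("all AI providers failed") from last_error

if __name__ == "__main__":
    print(ask_with_fallback("Summarise this week's AI news.",
                            [ask_provider_a, ask_provider_b]))
```

The point of the pattern is organisational as much as technical: if switching providers is a one-line change, a political or commercial shock to any single supplier becomes an inconvenience rather than a crisis.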
Thank you for reading.
The Brilliant Noise team