
Metadata

Highlights

  • Safety Island has changed governments. In a result that shocked precisely no one, the British public evicted AI enthusiast and Summit organizer-in-chief Rishi Sunak, returning the center-left Labour Party to government after a 14-year hiatus. The Labour Party have kept their cards pretty close to their chest, so it’s hard to say exactly what they’re planning for the tech sector, but their manifesto gives us a few clues. Ultimately, the previous government managed to keep tech non-political and there are few disagreements of substance between the main parties. (View Highlight)
  • It is likely that the UK will pass limited legislation to regulate the most powerful foundation models, but stop short of an EU-style general AI regulation. At the same time, it’s no secret that new prime minister Keir Starmer lacks his predecessor’s interest in AI. The AI Summit and the AI Safety Institute came into being as a result of Sunak’s personal patronage. It remains to be seen if the new government will give such initiatives the resources and focus they need to have an impact. (View Highlight)
  • Along with quantum and semiconductors, the rule looks set to restrict investment in Chinese AI companies working on a broad range of applications deemed detrimental to national security, including defense, surveillance, and audio, image, or video recognition. They are also exploring the introduction of a compute threshold and specific restrictions on companies that primarily use biological sequence data. (View Highlight)
  • At this stage, measures like this serve more as a warning shot to both investors and the tech sector. While certain US VCs probably do have questions to answer about their past China investments, major firms have stayed clear of the country’s AI sector for the past couple of years or spun off their China businesses. Rules like this remind the tech sector more widely - whether it’s investors, foundation model providers, or hardware manufacturers - that the restrictions are going to keep coming. It’s likely not a coincidence that the timing of this notice coincided with OpenAI’s warning that it would begin blocking Chinese users from accessing ChatGPT. (View Highlight)
  • Another dispute that only appears to be deepening is the AI copyright war. While OpenAI continues to strike deals with media publishers to avoid ugliness, the music industry has opted for violence. The Recording Industry Association of America (RIAA) announced that it was suing music generation services Suno and Udio for massive infringement of copyright. Pointing to close similarities between artists’ work and generated music, the RIAA argues that large volumes of copyrighted music were used as training data by the companies. While neither company is forthcoming on the makeup of its training data, neither has explicitly denied using copyrighted material. Expect much heated argument over fair use definitions. (View Highlight)
  • While the music lawsuit was always a fight waiting to happen, another front has opened up, with media organizations rounding on buzzy AI search engine Perplexity. The starting gun was fired by Forbes, which claimed the company’s new Perplexity Pages content curation feature was plagiarizing a raft of media outlets. They pointed to how passages had been lifted word-for-word, along with custom illustrations, from their reporting on a drone project, with limited citation. Said ‘plagiarized’ post was also turned into a podcast and a YouTube video. Perplexity said that the product was in its early days and that they would improve attribution. Forbes has threatened to sue the company. (View Highlight)
  • Wired rowed in behind Forbes, pointing to similar incidents with its own content. More concerningly, it presented evidence that the company was deliberately circumventing attempts by websites to block its crawler via their robots.txt files, using a pool of secret IP addresses. Perplexity blamed an unnamed third-party provider, refused to confirm that it would stop doing this, and noted that robots.txt is “not a legal framework”. While technically true, this does strike us as a violation of the internet’s social contract (a minimal sketch of the robots.txt check a well-behaved crawler performs appears after these highlights). To the authors’ likely amusement, Wired was then able to point to Perplexity apparently plagiarizing its reporting on its own alleged plagiarism… (View Highlight)
  • The legal rows continue as we move into hardware. In a new report, France’s competition authority has expressed concern about the market power of certain actors in the generative AI space. Like similar reports from other competition authorities, they point to overlapping investments and alleged conflicts of interest. First in their crosshairs is NVIDIA, which looks set to face antitrust charges. While we don’t know the details yet, the competition authority’s report specifically warned of a potential conflict of interest around NVIDIA’s investment in CoreWeave and expressed concern about CUDA’s GPU lock-in (sidebar: we’ve written about this in the Press recently). This is unlikely to quell complaints about European authorities’ prioritization of regulation over innovation - building the most popular GPUs probably shouldn’t be a criminal offense… (View Highlight)
  • Another geography keen to see an end to NVIDIA dependence is China, but all is not well in the country’s domestic semiconductor efforts. The 2023 State of AI Report documented China’s claimed breakthrough in sanctions-busting chips, but Noah Smith has documented how Huawei’s A100 copycat appears to have failed, with 80% of those produced so far malfunctioning. (View Highlight)
  • A full 80% of the Ascend 910B chips designed for AI training are defective, and SMIC is struggling to manufacture more than small batches. Huawei executives have all but admitted defeat. This, of course, does nothing about the large stockpiles of NVIDIA hardware that big Chinese labs have already accumulated or continue to smuggle into the country. The former can’t be solved; the latter probably can be. However, the argument from sanctions skeptics that the measures would provide a significant boost to the domestic chip-making industry doesn’t appear to have aged well. Instead, it looks like Chinese companies are continuing to rely on sanctions-compatible chips, despite the game of Whac-A-Mole manufacturers are playing with the Commerce Department. NVIDIA looks set to make $12B on the delivery of 1M of its new H20 chips to the Chinese market. (View Highlight)
  • It’s not just NVIDIA enjoying the moment: things are also looking up for AI infrastructure providers. On top of a multi-billion dollar deal with xAI, Oracle is now helping OpenAI meet its compute needs. While Microsoft will continue to provide compute for pre-training, Oracle’s cloud infrastructure will now support inference. (View Highlight)
  • All of this work comes at a cost. Google’s 2024 Environmental Report contained the stark admission that the company will struggle to meet its net zero goals. In fact, the company’s greenhouse gas emissions have jumped 48% since 2019, primarily as a result of its AI work. We’re likely to see the clash between net zero commitments made in haste a few years ago and the physical requirements of the AI boom emerge as a theme in the coming months. (View Highlight)
  • Things got better still with the news that Apple is to integrate ChatGPT into both Siri and its system-wide Writing Tools. Apple will also gain a board observer role, elevating it to the same status as Microsoft and leading to some potentially interesting meetings. Apple is still in talks with Anthropic and other companies about potential integrations, but reportedly turned Meta down on privacy grounds. Given the two companies’ long-standing feud on the subject, the news doesn’t come as a surprise. (View Highlight)
  • While Apple’s in-house AI team has released a raft of papers over the last few months (more on that later) detailing their progress on efficient LLMs that could run on-device, the company has so far struggled to productize this work. Its grab-bag of writing and emoji generation tools is yet to set the world on fire. A salutary reminder that even when you’re one of the world’s most valuable companies, the journey from good science to good product is still extremely challenging. (View Highlight)
  • One AI product that won’t be seeing the light of day anytime soon is OpenAI’s voice assistant (of Scarlett Johansson kerfuffle fame), with the company saying that more time is needed for safety testing. This kind of responsible release strategy will be music to the ears of departed co-founder and AI safety devotee Ilya Sutskever, who has re-entered the arena along with former Apple AI lead Daniel Gross and former OpenAI engineer Daniel Levy to launch Safe Superintelligence (SSI). Ilya and the two Daniels are promising the “world’s first straight-shot SSI lab” that will be “insulated from short-term commercial pressures”. (View Highlight)
  • The founding team’s star power means initial fundraising is unlikely to be a challenge, but the new venture’s backers will essentially be gambling that i) monetizable superintelligence really is within reach in the near future, ii) a team starting from scratch can catch up with frontier labs capitalized to the tune of billions of dollars, and iii) the emphasis on safety won’t be an impediment to fast progress. They’ll be taking all of these risks while accepting there will be no attempt by the company to generate revenue anytime soon. Brave. Then again, some readers will be old enough to remember the era when DeepMind was free to work on long-term research without having to worry about revenue… (View Highlight)
  • Open-Endedness is Essential for Artificial Superhuman Intelligence, Google DeepMind. This position paper makes the case that open-ended models - those able to produce novel and learnable artifacts - will be essential to reach some form of AGI. The authors argue that current foundation models, trained on static datasets, are not open-ended. They outline potential research directions for developing open-ended foundation models, including reinforcement learning, self-improvement, task generation, and evolutionary algorithms (a toy sketch of the novelty-plus-learnability idea follows below). (View Highlight)
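
The Wired/Perplexity highlight above turns on robots.txt, which is a voluntary convention rather than a legal framework. Purely as an illustration (this is not Perplexity’s code; the user-agent string and URL below are made-up placeholders), here is a minimal Python sketch, using only the standard library’s urllib.robotparser, of the check a well-behaved crawler performs before fetching a page:

```python
# Hypothetical sketch of a compliant crawler's robots.txt check.
# The user-agent string and URL are illustrative placeholders.
from urllib import robotparser
from urllib.parse import urlparse


def is_fetch_allowed(page_url: str, user_agent: str = "ExampleBot") -> bool:
    """Return True if the site's robots.txt permits this user-agent to fetch the page."""
    parsed = urlparse(page_url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"

    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # download and parse the site's robots.txt
    return parser.can_fetch(user_agent, page_url)


if __name__ == "__main__":
    url = "https://www.example.com/some-article"
    if is_fetch_allowed(url):
        print("robots.txt permits fetching this page")
    else:
        print("robots.txt disallows fetching; a well-behaved crawler stops here")
```

Nothing in this check is enforced by the protocol itself, which is why the dispute is about breaking a norm rather than breaking a law.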
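
The open-endedness paper itself contains no code. Purely as a toy illustration of its novelty-plus-learnability framing (everything below - the integer-sequence artifacts, the distance-based novelty score, the constant-step observer, and the thresholds - is invented for this sketch and is not the authors’ method), here is a minimal evolutionary loop that only keeps artifacts that are both new relative to an archive and structured enough for a simple observer to learn:

```python
# Toy, hypothetical illustration of the novelty + learnability idea.
# Artifacts are integer sequences; "novelty" is distance from an archive of past
# artifacts; "learnability" asks whether a constant-step predictor can model the
# artifact's structure. None of this is the paper's actual method.
import random


def novelty(artifact, archive):
    """Novelty = L1 distance to the nearest artifact already in the archive."""
    return min(sum(abs(a - b) for a, b in zip(artifact, other)) for other in archive)


def learnability(artifact):
    """Learnability = how well a constant-step rule predicts successive elements."""
    steps = [b - a for a, b in zip(artifact, artifact[1:])]
    mean_step = sum(steps) / len(steps)
    prediction_error = sum(abs(s - mean_step) for s in steps) / len(steps)
    return 1.0 / (1.0 + prediction_error)  # 1.0 = perfectly predictable


def mutate(artifact):
    """Evolutionary-style variation: perturb one element of a parent artifact."""
    child = list(artifact)
    child[random.randrange(len(child))] += random.choice([-3, -2, 2, 3])
    return child


def open_ended_loop(generations=200, length=8):
    """Grow an archive of artifacts that are both novel and still learnable."""
    archive = [list(range(length))]  # seed with a trivially learnable artifact
    for _ in range(generations):
        candidate = mutate(random.choice(archive))
        if novelty(candidate, archive) >= 2 and learnability(candidate) > 0.5:
            archive.append(candidate)
    return archive


if __name__ == "__main__":
    print(f"archive grew to {len(open_ended_loop())} artifacts")
```

A real open-ended system would swap these toy components for foundation-model-scale generators and observers; the point of the sketch is only that selection happens on both axes at once.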