Category: In English

  • The Human Guide to AI: Top Ten × 2 Celebration

    Today my book ”AI för nybörjare 2.0” reached 10th place on Sweden’s most popular site for price comparisons, competing with almost 1.2 million other books. (Edit: A few days later it reached 5th place!) Version 1 of the book reached 7th place on the same list, and since 2.0 was rewritten from scratch it wasn’t at…

  • Reflections on ”AI as Normal Technology”

    I’m experimenting with a new format in this post. I asked Claude to summarize my notes, being transparent with who is actually writing. I’ve edited the text from Claude slightly, but it is mainly AI-written. This post summarizes Johan Falk’s reading notes on the essay ”AI as Normal Technology” published by the Knight First Amendment…

  • Deep Research in Action: Evaluating AI’s Ability to Analyze the Future

    On February 2nd, OpenAI introduced Deep Research, a tool designed to take complex questions, gather relevant information, and generate in-depth reports. Powered by their latest model, o3, it promises to deliver well-researched insights within minutes or hours. But how well does it actually perform? I decided to put it to the test. The reports from…

  • Highlighting AI Performance with a New Scale

    When discussing how well an AI model performs on benchmarks, it’s common to talk about percentages or percentage points. However, these figures often obscure how significant the difference between two results really is, especially near the boundaries of 0% and 100%. In this post, I want to introduce an alternative: using a scale based on…

  • Thoughts on o1, Two Weeks Later

    In short: a new type of model requires new types of tasks. It took me quite some time to conclude that o1 actually is a significant improvement over the GPT-4 class models (including Claude 3.5 Sonnet). This is, I think, because when I give o1 the same type of tasks and questions that I give…

  • A few thoughts on o1: is it a hybrid?

    It’s been a bit more than a day since OpenAI released o1 (preview and mini). I have tested a bit, read quite a bit and watched a bit. I’m left with some questions. This short blog post summarizes my initial thoughts and questions. Update 2024-09-16: o1 is not a hybrid. See link under ”follow-ups” further…

  • Who Believes AI is Conscious?

    This past week I’ve listened to three podcast episodes focused on consciousness, particularly on whether artificial intelligence can be conscious (or perhaps already is). From an ethical perspective this is a highly relevant question, as it concerns whether millions or even billions of AI instances can experience joy and suffering. From a practical perspective, though,…

  • Ilya Sutskever Starts New AI Lab – What Does It Mean?

    On June 19, Ilya Sutskever, along with Daniel Gross and Daniel Levy, announced that they are starting Safe Superintelligence Inc. What does this mean, and what does it not mean? What Did Ilya See? The background to this initiative is partly the firing and re-hiring of Sam Altman as CEO of OpenAI, back in November 2023.…

  • Teams of AI Agents Can Find and Exploit New Cyber Vulnerabilities

    New research shows that AIs can be used to find and exploit previously unknown cyber vulnerabilities. While it was already known that AI could generate code to exploit known vulnerabilities based on descriptions, this is the first documented instance of AI discovering new vulnerabilities. The research, conducted by the University of Illinois Urbana-Champaign and funded…