Topics This Week in AI-Learn Insights
There have been a lot of new developments in AI. Here are the topics I will be covering this week:
How I Review Research Papers Using Generative AI. Last week I covered Steps 1 and 2. This week I will go over Steps 3 and 4. My goal is to introduce you to techniques for using multiple LLMs in your own work, for research or otherwise, including working with open-source models and running models locally on your computer.
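To give a flavor of what "running models locally" can look like before the post arrives, here is a minimal, illustrative sketch rather than the exact workflow I will cover: it assumes Ollama is installed and running at its default local address, with a couple of open-source models (for example llama3 and mistral) already pulled, and it simply sends the same prompt to each model so you can compare their answers.

```python
# Illustrative sketch only: query two locally hosted open-source models with the
# same prompt and compare replies. Assumes Ollama is running at its default
# address (http://localhost:11434) and the named models were pulled beforehand
# with `ollama pull <model>`.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally running model and return its reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

question = "Summarize the key contribution of this paper in two sentences."
for model in ("llama3", "mistral"):  # any locally pulled models will do
    print(f"--- {model} ---")
    print(ask_local_model(model, question))
```

The same pattern extends to comparing a local open-source model against a hosted one, which is the spirit of the multi-LLM review workflow the post will walk through.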
The Stargate Project: The Bridge to Paradise or Nowhere? The Stargate Project was introduced with great fanfare by President Trump. It’s an “American artificial intelligence (AI) joint venture created by OpenAI, SoftBank, Oracle, and investment firm MGX.[1] The venture plans on investing up to US$500 billion in AI infrastructure in the United States by 2029.” The project’s website proclaims: “All of us look forward to continuing to build and develop AI—and in particular AGI—for the benefit of all of humanity. We believe that this new step is critical on the path, and will enable creative people to figure out how to use AI to elevate humanity.” Gary Marcus, an AI sceptic, has countered that “OpenAI may well become the WeWork of AI.” Is Stargate the bridge to Paradise or the bridge to Nowhere?
Why are China’s DeepSeek Models the Talk of the Town? The talk of the town in Davos was Trump and AI. But the Silicon Valley Bros and Wall Street Tech Titans were overshadowed by DeepSeek. Why? What are they afraid of? Why has the software development community fallen in love with DeepSeek? Ironically, could DeepSeek be an opening salvo in the movement to democratize AI?
The Risk of Algorithmic Alienness: AI Models Think Differently. The current discourse around AI safety often fixates on familiar problems like hallucinations and bias — issues that feel manageable because they mirror human fallibility. When an AI model hallucinates, producing false information, we instinctively compare it to human error: "Everyone makes mistakes," we say, or "This is just a temporary limitation we'll overcome." Similarly, we frame AI bias as an extension of human prejudice, something we can identify and correct. But this comparison fundamentally misses a more profound challenge: AI systems process information and reach conclusions in ways that are deeply alien to human cognition. This "algorithmic alienness" manifests in several critical ways, and the underlying difference in thinking patterns creates what we might call "unpredictable unreliability" — a system that works brilliantly until it suddenly doesn't, often in ways we couldn't anticipate. Unlike simple errors or biases that can be systematically identified and corrected, this unreliability stems from the core architecture of how AI systems process information. My first post on risk will introduce the concept of algorithmic alienness.