
DeepSeek-R1-0528: The new DeepSeek model will bankrupt OpenAI

The 2025 trendsetter in generative AI, DeepSeek, is back with a new version of its flagship model, DeepSeek R1. They haven’t released DeepSeek R2 this time; instead, they’ve released an updated version of DeepSeek R1, DeepSeek R1–0528, or as I like to call it, the “quiet killer.”

Video: https://www.youtube.com/watch?v=04bKnmlmmHc


While some might call it a minor upgrade, this thing’s got major attitude. We’re talking smarter reasoning, smoother code generation, and performance levels that rival the likes of Gemini 2.5 Pro, Claude Sonnet, and even OpenAI’s o3/o4-mini in some scenarios.

And guess what? It’s still open-source.

What Even Is DeepSeek R1–0528?

Think of it as the next-gen sequel to the already impressive DeepSeek R1. This new drop — R1–0528 — isn’t just a patch. Built to go head-to-head with premium, closed models, it brings high-tier reasoning and code chops to the open-source community.

Whether you’re a dev building agents, a researcher doing LLM experiments, or a startup trying to cut down on API bills — this model’s got something for you.

So, What’s New?

Here’s the TL;DR on what makes DeepSeek R1–0528 spicy:

  • Massive Scale: A 671B-parameter Mixture-of-Experts model (roughly 37B parameters active per token).
  • Open Source (Still): No tokens, no rate limits, no drama. Clone, fine-tune, and deploy at will.
  • Way Better Reasoning: It now flexes harder in logical problem-solving, even in nuanced, multi-step tasks.
  • Cleaner CodeGen: Its ability to spit out usable code is now sharper, more consistent, and closer to closed-source biggies like Claude and GPT-4.
  • Longer Attention Span: Handles complex prompts more gracefully. It doesn’t just hallucinate less — it thinks better.
  • Improved Reliability: Consistent output even with ambiguous or vague queries. A big win if you’re chaining tasks or building agents.

Benchmark Performance

On composite benchmarks like MMLU, GSM8K, BBH, and HumanEval, DeepSeek R1–0528 pulls off a median score of 69.45 — and that’s huge for an open-source model. It surpasses Gemini 2.5 Pro and even Claude Sonnet 4 in several areas, especially when it comes to value for money. And don’t forget: it’s open source.

Public Review

It’s not just the benchmarks; public reviews look good as well:

  • Coding Prowess: One user mentioned, “I just used DeepSeek: R1 0528 to address several ongoing coding challenges in RooCode. This model performed exceptionally well, resolving all issues seamlessly.”
  • Creative Writing: Another user noted, “Well on a side note it does much better creative writing than both new anthropic models.”
  • Desire for Detailed Feedback: Some users expressed a need for more detailed feedback. For instance, one commented, “This is my biggest gripe with posts like this. I wish people would post the actual chats or prompts. Simply saying ‘it does better than Gemini’ tells me nothing.”

⚠️ Areas of Concern

While the model shows promise, there are some concerns:

  • Hallucinations: Earlier versions of DeepSeek R1 were noted to hallucinate, especially when generating quotes or stories. One user observed, “40% of the quotes and stories deepseek-r1 included were unverifiable.”
  • Bias: An analysis highlighted that DeepSeek R1 might have biases baked into its responses, especially favoring certain narratives.
  • Performance Variability: Some users found that smaller versions of the model didn’t perform as well. “Below 14B doesn’t work. 14B kinda works. 32B and 70B actually work.”

How to Use DeepSeek R1–0528 for Free?

The model weights are available on Hugging Face, if you have the hardware to load them:

deepseek-ai/DeepSeek-R1-0528 at main
huggingface.co/deepseek-ai/DeepSeek-R1-0528
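If you want to run the weights yourself, a minimal loading sketch with the Hugging Face transformers library might look like the following. This is not an official recipe: it assumes you have enough GPU memory for the full 671B-parameter MoE checkpoint (realistically a multi-GPU server), and the prompt and generation settings are purely illustrative. Only the repo id comes from the card above.

```python
# Minimal sketch (not an official recipe): loading DeepSeek-R1-0528 with
# Hugging Face transformers. The full model is a ~671B-parameter MoE, so this
# realistically needs a multi-GPU server.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528"  # repo id from the Hugging Face card above

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the dtype the weights are stored in
    device_map="auto",       # shard layers across whatever GPUs are available
    trust_remote_code=True,  # the repo ships custom DeepSeek modeling code
)

# Illustrative prompt; chat formatting is handled by the repo's chat template.
messages = [{"role": "user", "content": "Summarize chain-of-thought prompting in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The 14B/32B/70B figures quoted in the feedback above refer to the distilled variants, which are what most people can realistically run on a single GPU; the full model is easier to consume through a hosted API.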

The free API can be accessed from OpenRouter.

R1 0528 (free) – API, Providers, Stats

May 28th update to the original DeepSeek R1. Performance on par with OpenAI o1…

openrouter.ai
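For the API route, OpenRouter exposes an OpenAI-compatible endpoint, so the standard openai Python client works once you point base_url at it. A minimal sketch, assuming the free-tier model slug is deepseek/deepseek-r1-0528:free (confirm the exact id on the OpenRouter page above) and that your OpenRouter API key is in the OPENROUTER_API_KEY environment variable:

```python
# Minimal sketch: calling the free R1-0528 endpoint through OpenRouter's
# OpenAI-compatible API. The model slug below is an assumption; confirm the
# exact id on the OpenRouter page linked above.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],  # your OpenRouter API key
)

response = client.chat.completions.create(
    model="deepseek/deepseek-r1-0528:free",    # assumed slug for the free tier
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a palindrome.",
        }
    ],
)

print(response.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, existing tooling built on the openai client can be pointed at R1–0528 by changing only the base_url and the model name.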

Conclusion

DeepSeek R1–0528 is a strong step forward in open-source AI. It offers impressive reasoning, solid code generation, and reliable performance — all without the cost or restrictions of many closed models.

While it’s not perfect (especially at smaller sizes), the larger versions perform at a level that rivals top commercial models. And the fact that it’s free and openly available makes it a valuable tool for developers, researchers, and startups alike.

If you’re exploring alternatives to expensive APIs, this model is definitely worth trying.

Source: www.medium.com
