DeepSeek R1 is open source and ready to try.
This AI is powerful—and affordable. Credit: DeepSeek
There’s a new AI competitor, and it’s catching attention for good reason.
On Monday, Chinese AI company DeepSeek launched an open-source large language model called DeepSeek R1.
According to DeepSeek, R1 outperforms other well-known large language models, including OpenAI's, on several key benchmarks, excelling in math, coding, and reasoning tasks.
DeepSeek R1 builds on its predecessor, R1 Zero, which skipped a common training step called supervised fine-tuning. While Zero was strong in some areas, it struggled with readability and language clarity. R1 addresses these issues with enhanced training methods, including multi-stage training and the use of cold-start data, followed by reinforcement learning.

If you're not into the technical details, here's what matters: R1 is open source, meaning experts can review it for privacy and security concerns. It's also free to use as a web app, and its API access is remarkably affordable: just $0.14 per million input tokens, compared with OpenAI's $7.50 for a comparable tier of reasoning power.
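To put that pricing gap in perspective, here's a quick back-of-the-envelope calculation in Python. The per-token rates are the ones quoted above; the ten-million-token workload is a purely hypothetical example:

```python
# Per-million-input-token rates as quoted in the article (USD).
DEEPSEEK_RATE = 0.14
OPENAI_RATE = 7.50

def input_cost(tokens: int, rate_per_million: float) -> float:
    """Return the USD cost of processing `tokens` input tokens."""
    return tokens / 1_000_000 * rate_per_million

# Hypothetical workload: ten million input tokens in a month.
tokens = 10_000_000
deepseek = input_cost(tokens, DEEPSEEK_RATE)
openai = input_cost(tokens, OPENAI_RATE)

print(f"DeepSeek R1: ${deepseek:.2f}")  # $1.40
print(f"OpenAI:      ${openai:.2f}")    # $75.00
print(f"Savings:     {openai / deepseek:.1f}x")  # ~53.6x
```

At these rates, the same workload costs more than fifty times less on DeepSeek's API.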
Most impressively, this model is highly capable. For example, I challenged R1 to create a complex web app that pulls public data and displays travel and weather information for tourists. It generated functional HTML code instantly and improved it further based on my feedback, optimizing along the way.
I also asked it for quick tips to improve my chess skills. It provided a well-organized list of helpful advice, though my laziness kept me from actually improving.
Finally, I put R1 to the test by asking it to explain its intelligence in three sentences. It delivered, but the answers were so advanced that I couldn’t fully grasp them. Watching its reasoning process unfold on screen was just as fascinating as the response itself.
Credit: Stan Schroeder / Mashable / DeepSeek

As noted by ZDNET, DeepSeek R1 was trained at a significantly lower cost than some of its rivals, and on less powerful hardware than what's typically available to U.S.-based AI companies. This shows that a highly capable AI model doesn't need to be expensive to train, or to use.