The Medium post goes over various flavors of distillation, including response-based, feature-based, and relation-based distillation. It also covers two fundamentally different modes of distillation: offline and online distillation.
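To make the response-based flavor concrete, here is a minimal, illustrative sketch of logit distillation run in the offline mode (frozen teacher, trainable student). It is not taken from the Medium post; the teacher, student, batch, and optimizer names are hypothetical placeholders for PyTorch-style objects.

```python
# Minimal sketch of response-based (logit) distillation in offline mode.
# Assumes PyTorch; `teacher`, `student`, `batch`, `optimizer` are hypothetical.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target KL term (teacher guidance) with hard-label cross-entropy."""
    # Soften both distributions with temperature T; KL pulls the student toward the teacher.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Standard supervised loss on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

def train_step(student, teacher, batch, optimizer):
    """One offline-distillation step: the teacher only supplies targets and is never updated."""
    inputs, labels = batch
    with torch.no_grad():
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In online distillation, by contrast, teacher and student would be trained together in the same loop rather than the teacher being frozen as above.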
OpenAI thinks DeepSeek may have used its AI outputs inappropriately, highlighting ongoing disputes over copyright, fair use, and training data.
David Sacks says OpenAI has evidence that Chinese company DeepSeek used a technique called "distillation" to build a rival model.
DeepSeek’s success learning from bigger AI models raises questions about the billions being spent on the most advanced technology.
Since Chinese artificial intelligence (AI) start-up DeepSeek rattled Silicon Valley and Wall Street with its cost-effective models, the company has been accused of data theft through a practice that is common across the industry.
The DeepSeek drama may have been briefly eclipsed by, you know, everything in Washington (which, if you can believe it, got even crazier Wednesday). But rest assured that over in Silicon Valley, there has been nonstop …
The San Francisco start-up claims that its Chinese rival may have used data generated by OpenAI technologies to build new systems.
After DeepSeek AI shocked the world and tanked the market, OpenAI says it has evidence that distillation of ChatGPT outputs was used to train DeepSeek's model.
Experts say AI model distillation is likely widespread and hard to detect, but DeepSeek has not admitted to using it on its full models.
OpenAI suspects Chinese AI firm DeepSeek is using ChatGPT data to develop a competing model, raising concerns over AI ethics.
Did DeepSeek violate OpenAI's IP rights? An ironic question given OpenAI's past with IP rights. What can we learn from this classic playbook to protect a business?