DeepSeek saving Mind2?

Seeing how DeepSeek reportedly managed to train a comparable open-source AI at about 1/1000 of the cost of OpenAI's closed models, it seems like Mind2 will be a lot more capable than expected. Can anyone with access to the hardware share which distilled model can run on it?

4 Likes

We will see what it can do when we get it, but according to the Discord, the first models it will initially ship with are:
Llama 3.1 8B, Llama 3.2 3B, and Llama 3.3 70B (via AWS Bedrock)

2 Likes

Yeah, you can already run the distilled models with Ollama, even on a phone. Full R1 needs around a terabyte of RAM, or multiple high-end Macs chained together. Still, the 7B-14B distills are supposedly outperforming some commercial paid models on certain tasks, so with dedicated hardware running one of those at decent speed, Mind2 should be in a pretty good place.
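For anyone who wants to try this today, here's a minimal sketch of how you'd do it with Ollama, assuming Ollama is installed and pulling from its public model library (the exact tags and RAM figures below are rough and may change, so check the library page):

```shell
# Pull one of the DeepSeek R1 distills from the Ollama library.
# Rough rule of thumb for 4-bit quants: ~1 GB of RAM per billion
# parameters, plus some overhead for context.
ollama pull deepseek-r1:7b     # Qwen-based distill, fits in ~8 GB RAM
ollama pull deepseek-r1:8b     # Llama 3.1 8B-based distill

# Run an interactive prompt against the 8B distill.
ollama run deepseek-r1:8b "Explain step by step why the sky is blue."
```

The 7B/8B distills are the sweet spot for consumer hardware; the 32B and 70B tags exist too but need a lot more memory than a phone or small dedicated device is likely to have.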

1 Like