MiniMax M2.7: “Open Source” (With Strings Attached)

MiniMax just dropped M2.7, and the internet’s having a meltdown. Again.

They released the weights. They posted benchmarks. They called it “open source.” Then everyone realized it wasn’t actually open source, and the drama began.

Here’s what actually happened: MiniMax released M2.7 under a non-commercial license. You can download it, run it, study it, fine-tune it—but if you want to use it for anything that makes money, you need to ask permission first. And they’ll probably say yes, if you pay them.

This isn’t complicated. It’s just not what people expected.

The community’s pissed because MiniMax keeps tightening the screws. M2 was MIT-licensed. M2.1 and M2.5 had modified terms. Now M2.7 is fully non-commercial. Each release adds more restrictions, and each time people act surprised.

Some defend MiniMax. They say the community abused the openness—reselling models, stripping attribution, building competing products. They say MiniMax deserves to protect its work after pouring resources into it. 0xSero's thread basically argues we had this coming.

Others say calling it “open source” is bait-and-switch. That MiniMax is riding the goodwill of the open-source movement while building walls around their IP. That they’re just another company pretending to be community-friendly until they’re not.

Both sides have a point.

The technical reality is straightforward. M2.7 is a 230B-parameter mixture-of-experts (MoE) model that performs exceptionally well on coding and agentic tasks. It's competitive with much larger models. The weights are real, they're downloadable, and you can do plenty with them for free.

The licensing reality is also straightforward. It's not open source. It's open weights with a non-commercial license. That's a meaningful difference. Open source, in the accepted sense, means the license lets you use, modify, and redistribute the software for any purpose, including commercially. Open weights means you can download the model and run it yourself, but what you're allowed to do with it may be restricted.

MiniMax isn’t hiding this. Their license is explicit. The problem is they’re calling it “open source” when it isn’t, and that’s where the trust breaks down.

This reflects a broader pattern in AI. Labs release impressive models, generate buzz by calling them open, then gradually restrict usage as they figure out monetization. It’s happened with other Chinese labs, it’s happened with Western labs, and it’ll keep happening.

The question isn’t whether MiniMax is right or wrong. The question is whether you trust them to be transparent about what they’re actually giving you.

I’m inclined to say: if it’s open source, call it open source. If it’s open weights with restrictions, call it that. Don’t use the goodwill of the open-source movement to market a model you’re planning to gatekeep.

That doesn’t mean the model isn’t valuable. For researchers, hobbyists, and non-commercial projects, M2.7 is genuinely useful. For commercial applications, it’s a conversation starter with MiniMax’s sales team.

The real damage isn’t to users—it’s to the ecosystem. Every time a lab does this, it makes the next lab less likely to release anything openly. It feeds the narrative that open releases are just marketing plays, not genuine contributions.

MiniMax built something impressive. They’re entitled to monetize it however they want. But calling it “open source” when it’s not—that’s the part that erodes trust. And in AI, trust is the only currency that matters.

So yes, download M2.7 if you want to tinker. Study it, learn from it, build something cool with it. Just know what you’re getting: a powerful model with strings attached, wrapped in marketing that overpromises.

That’s the deal. Take it or leave it.