Windsurf Wave 4
2025-03-05
- Previews: Iterate on your apps rapidly by easily sending elements and console errors back to Cascade as context
- Tab-to-import: Another addition to our passive predictive experience
- Linter integration: Cascade checks its outputs against your linters
- Suggested actions: After responding, Cascade suggests what your next action should be
- MCP discoverability: Simpler methods of identifying and pulling in helpful MCP servers
- Drag & drop files: Attach editor tabs or files from your file explorer to Cascade
- Model options admin control: For enterprises, specify which models your org can use
- Claude 3.7 Sonnet Improvements: Less aggressive tool calling
- Referrals: Give the gift of Windsurf (and get rewards in return!)
Previews for Vibe coding
Instead of just showing you your in-progress application when you “locally deploy” it, we add listeners and UX so that Cascade knows exactly how you want to iterate:
- Point and click on the component that you want to make changes to
- Button to automatically pull console errors
This context is automatically passed back to your Windsurf Editor.
Previews should work with most web projects, React or otherwise, but not non-HTML websites such as those using WebGL or full-canvas screens. As of now, Previews are optimized for Chromium-based browsers and within the IDE, but should also support Safari and Firefox on most operating systems.
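Conceptually, the “pull console errors” button works by listening in on the preview’s console output. The sketch below is only an illustration of that idea, not Windsurf’s actual implementation:

```javascript
// Minimal sketch (hypothetical, not Windsurf's real code): a preview layer
// can wrap console.error to collect messages for forwarding as context.
const collectedErrors = [];
const originalError = console.error;

console.error = (...args) => {
  // Record a plain-text copy of the error for the editor to pick up...
  collectedErrors.push(args.map(String).join(" "));
  // ...while still logging it normally.
  originalError(...args);
};

// Any error the app logs is now captured as context:
console.error("Uncaught TypeError: x is not a function");
```

With something like this in place, the preview can hand `collectedErrors` back to Cascade instead of making you copy stack traces by hand.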
Tab to Import
Pressing the “tab” key adds the import for a new dependency to the top of the file when that dependency is first used in the file.
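As an illustration of the kind of edit this performs (the `debounce` symbol and "lodash" import below are hypothetical examples; the real feature edits your open file in place):

```javascript
// Illustrative sketch of a tab-to-import edit, not Windsurf API.
// The file body references `debounce` with no matching import:
const fileBody = `const onResize = debounce(update, 200);`;

// Pressing tab would prepend the missing import at the top of the file,
// producing something like:
const fileAfterTab = `import { debounce } from "lodash";\n` + fileBody;

console.log(fileAfterTab.split("\n")[0]); // the newly added import line
```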
Linter integration into Cascade
If Cascade generates code with linter errors and then takes a step to fix those errors, we will, to the best of our ability, not charge flow action credits for the fix. This may be imperfect, but that’s not a reason to not try to do the right thing!
Suggested actions
Sometimes there are multiple reasonable next steps after Cascade responds. Now, Cascade can suggest what these next steps might be so that you can stay even more tightly in the flow.
MCP Discoverability
In Wave 3, we launched our MCP integration, and it was generally well received, except for one minor nagging question - “what exactly is MCP and where can I find these MCP servers you speak of?” So we built a lot of educational content such as this, this, and this, but we wanted to bring a new level of discoverability for useful MCP servers directly in the product.
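For those still wondering what wiring up an MCP server looks like, it is typically just a small JSON configuration entry. The snippet below uses the MCP convention and the reference GitHub server as an example; the exact file location and schema may differ between Windsurf versions:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```

Discoverability in-product means you should rarely need to hand-edit this kind of config yourself.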
Drag & drop files
Either drag in editor tabs or files from your file explorer to the Cascade input box, and we’ll attach the information to the prompt as context:
Admin Control for Model Options
Now, administrators of Codeium Teams and Enterprise accounts can centrally set the availability of different models for the organization, another step in being the enterprise-ready AI platform:
Claude 3.7 Sonnet Improvements
Claude 3.7 Sonnet and Claude 3.7 Sonnet (thinking) were often a bit trigger happy with tool calling, leading to much-faster-than-anticipated usage of flow action credits. We’ve done a lot of work since the model release to reduce that tendency while still maximizing the underlying strengths of this foundation model.
Rolling out a referral program
If you are a paying user, you can now go to https://codeium.com/refer to find a personalized referral link. When someone who uses your link subscribes to a paid plan, both of you receive 500 free flex credits. There is no limit to how many successful referrals you can get credit for, so refer away!
Other things since Wave 3
Other things that have happened since Wave 3 that were not part of any Wave:
Unlimited Deepseek-v3
That’s right, we made DeepSeek-v3 cost zero user prompt credits and zero flow action credits on all paid plans. We still don’t believe that it is nearly as good as Claude 3.5 Sonnet and others for the very specific tool-calling task that we utilize these models for, and it is probably slightly worse than Cascade Base right now, but if we can make things free, we will!
Claude 3.7 Sonnet, Claude 3.7 Sonnet (thinking), GPT 4.5
We are big fans of model optionality for our users, as it is unlikely that any single model will be objectively better than all others across every use case. So, as a number of powerful models have been released by the foundational labs in the last few weeks, we’ve made sure to rapidly bring them to Windsurf. Again, we use these models for very specific tool-reasoning tasks, so public benchmarks may not reflect how well these models work for Cascade specifically, but this is also just a moment in time - any of these can get really good at any moment.