I'm Building the BEST Open Source AI Coding Assistant with YOUR Help
Science & Technology
Introduction
Recently, I embarked on an exciting journey by diving into Bolt.new, one of the leading AI code generators available today. After experimenting with its functionality, I decided to create my own version of Bolt.new, incorporating several much-needed improvements. The standout feature of my fork is the ability to select the large language model (LLM) used for code generation. Unlike the original Bolt.new, which restricts users to a single model, my version supports multiple providers, including local LLMs through tools like Ollama. That means you can generate as much code as you want, completely for free!
Last week, I released a video showcasing my fork of Bolt.new, which surprisingly went viral, becoming the third most-watched video on my channel. The engagement it received from all of you was overwhelming and truly unmatched, fueling my passion for this project. A multitude of viewers provided suggestions, feedback, and even contributions to my fork; I’ve made it my mission to incorporate all of that input moving forward. I'm genuinely thrilled that we are fostering a community around this project.
As we navigate this path together, I acknowledge that several other AI code generators exist and that my fork isn't perfect. We have a fantastic opportunity to improve it collaboratively, drawing inspiration from other tools and working toward something genuinely outstanding. Who knows? This initiative could evolve into something greater than a mere fork of Bolt.new.
Recent Updates and Features
In this article, I’ll share some of the improvements already made to my fork and discuss future work.
Quick Demo of the Fork
Let’s start with a brief demo. In my version of Bolt.new, you’ll notice a couple of dropdown menus that are absent from the standard edition. Here, you can select your preferred LLM provider, like OpenAI, along with the model for code generation. Users can input prompts for code generation or choose from predefined templates. Upon selection, a chat interface appears on the left side, while the code widget on the right displays the generated files. It can even execute npm commands to set everything up and provide a preview of the app.
After just a few seconds, a simple to-do app was generated, letting you browse the source code and test the functionality seamlessly, all thanks to the capabilities of Bolt.new and the enhancements in my fork.
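To illustrate how a provider dropdown like the one in the demo could route requests, here is a minimal sketch. The mapping and function names are illustrative, not taken from the fork's actual code; the base URLs are the public OpenAI and OpenRouter endpoints plus Ollama's local OpenAI-compatible endpoint.

```typescript
// Hypothetical sketch: resolving the provider chosen in the dropdown to the
// base URL its OpenAI-style chat API is served from.
const providerBaseUrls: Record<string, string> = {
  OpenAI: "https://api.openai.com/v1",
  OpenRouter: "https://openrouter.ai/api/v1",
  Ollama: "http://localhost:11434/v1", // Ollama's OpenAI-compatible endpoint
};

function resolveBaseUrl(provider: string): string {
  const url = providerBaseUrls[provider];
  if (!url) throw new Error(`Unknown provider: ${provider}`);
  return url;
}
```

Because all three providers speak an OpenAI-compatible chat API, swapping the base URL is most of the work of adding a new provider.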
Improvements Made
The GitHub repository for my fork of Bolt.new is currently 14 commits ahead of the original version, indicating continual progress. Together as a community, we've implemented several requested features:
- Integration with OpenRouter: This allows access to a wide range of models available on the OpenRouter platform.
- Added Gemini Support: Users can now utilize Gemini 1.5 Flash and Pro models.
- Dynamic Model Retrieval via Ollama: The list of available local models is now populated dynamically, ensuring users only see models they have actually downloaded.
- Download as Zip: A highly requested feature, users can now download generated code in a zip file, streamlining the transition from Bolt.new to local environments.
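The dynamic model retrieval above relies on Ollama's local REST API, whose `/api/tags` endpoint lists only the models the user has pulled. A sketch of how the dropdown could be populated (function names here are illustrative, not the fork's actual code):

```typescript
// Sketch of populating a model dropdown from Ollama's /api/tags endpoint.
// The endpoint returns a JSON object with a `models` array; each entry has
// a `name` field such as "llama3.1:8b".
interface OllamaTagsResponse {
  models: { name: string }[];
}

function extractModelNames(response: OllamaTagsResponse): string[] {
  return response.models.map((m) => m.name);
}

// Usage (assumes Ollama is running on its default port 11434):
async function fetchLocalModels(): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  return extractModelNames((await res.json()) as OllamaTagsResponse);
}
```

Keeping the parsing step separate from the fetch makes the dropdown logic easy to test without a running Ollama instance.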
Future Work
Looking forward, I intend to work on several suggestions gathered from the community, including:
- LM Studio Integration: Many users prefer using LM Studio for local models.
- DeepSeek API Integration: Especially useful for accessing powerful models like their 236-billion-parameter coder model.
- Improved Prompts for Smaller Models: Ensuring better performance from smaller models during code generation.
- Image Attachment to Prompts: This feature currently exists in the paid version but is missing in the open-source iteration and could be beneficial.
- Running Agents for Code Generation: Employing multiple LLMs to enhance code generation capabilities.
- Direct GitHub Publishing: Enabling users to publish their projects seamlessly once completed.
- Importing Projects: Allowing users to import existing projects into Bolt.new for continuous development.
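Of the planned items, LM Studio integration is likely the most straightforward, since LM Studio serves an OpenAI-compatible API from its local server (by default at `http://localhost:1234/v1`). A minimal sketch, with the helper name being my own illustration rather than planned code:

```typescript
// Hypothetical sketch: LM Studio exposes an OpenAI-compatible chat endpoint,
// so an integration can reuse an OpenAI-style request body and simply point
// it at LM Studio's local base URL.
const LM_STUDIO_BASE_URL = "http://localhost:1234/v1";

function makeChatRequestBody(model: string, prompt: string) {
  return {
    model,
    messages: [{ role: "user" as const, content: prompt }],
  };
}
```

The same request body could then be POSTed to `${LM_STUDIO_BASE_URL}/chat/completions`, mirroring how the other OpenAI-compatible providers are handled.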
This collective endeavor embodies a fantastic opportunity for us to learn how to leverage AI and create amazing tools. I invite you all to get involved—your contributions can significantly enhance what we're building together.
Keywords
- Bolt.new
- AI code generation
- OpenRouter
- Gemini
- Ollama
- Dynamic models
- Community contributions
- Future work
- Open-source
FAQ
Q1: What is Bolt.new?
A1: Bolt.new is an AI code generator that allows users to generate coding projects based on prompts.
Q2: What improvements does your fork offer over the original Bolt.new?
A2: My fork allows users to select their choice of LLMs, including local models, integrates various model providers, and simplifies the code download process.
Q3: How can I contribute to the project?
A3: You can contribute by providing feedback or even submitting pull requests for new features you’d like to see.
Q4: What features are planned for future updates?
A4: Planned features include integrations with other APIs, improved prompting for smaller models, and enabling running multiple agents for enhanced code generation.
Q5: Is there a cost associated with using your fork of Bolt.new?
A5: No, my fork is completely open-source and free to use, allowing anyone to access and contribute to the project.