Meet AnyCoder, a new Kimi K2-powered tool for fast prototyping and deploying web apps https://venturebeat.com/programming-development/meet-anycoder-a-new-kimi-k2-powered-tool-for-fast-prototyping-and-deploying-web-apps/ #AI #OpenSource #VibeCoding
Grinding down open source maintainers with AI • Terence Eden
「 The emotional manipulation starts in the first line - telling me how frustrated the user is.
It turns the blame on me for providing poor guidance.
Then the criticism of the tool.
Next, a request that I do work.
Finally some more emotional baggage for me to carry. 」
https://shkspr.mobi/blog/2025/07/grinding-down-open-source-maintainers-with-ai/
Self-Host Weekly (18 July 2025)
Commentary on #vibecoding, software launches and updates, a spotlight on #Subtrackr, and more in this week's #newsletter recap!
What Actually Happens When Programmers Use AI Is Hilarious, According to a New Study
「 In the study, 16 programmers were given roughly 250 coding tasks and asked to either use no AI assistance, or employ what METR characterized as "early-2025 AI tools" like Anthropic's Claude and Cursor Pro. The results were surprising, and perhaps profound: the programmers actually spent 19 percent more time when using AI than when forgoing it 」
Not sure where Windsurf is heading? Check out these alternatives:
You cannot call yourself a Luddite without examining what technology can do for you / how it can be subverted.
Fighting technology that is used to replace workers and degrade output quality goes hand in hand with studying how technology can be used to increase quality and satisfaction in one's craft, hopefully making one more resilient to workplace dynamics.
Luddites embraced technology like machines used to assess gauge count (and thus regulate fabric quality), mechanical hand looms, power-driven maintenance tools (sharpening wheels, etc…)
I assure you that LLMs are fantastic assistants for producing higher-quality software, and I try to share as much as I can through streams (because I still haven't found a good way to stay focused on writing long enough and prefer doing).
Amazon has entered the vibe coding space.
AI slows down open source developers. Peter Naur can teach us why.
「 when open source developers working in codebases that they are deeply familiar with use AI tools to complete a task, then they take longer to complete that task compared to other tasks where they are barred from using AI tools 」
“Software is not about coding, it’s about the bigger picture, iteration, building the right thing”
Enters #vibecoding
“No, not like that”
Here's the full 4h stream using #llms #llm #chatgpt #vibecoding to get reacquainted with an old codebase and a somewhat tricky problem. Some things went well, some didn't; this is what it looks like in real life.
Maybe I'm deluding myself, but this doesn't even remotely resemble how I used to work or what I was able to achieve pre-LLMs.
I also don't think the result is slop-infested, low-quality software, or that my skills are deteriorating. They are changing, for sure.
https://www.youtube.com/watch?v=BC3zEvWhURU
One thing is for sure, I only talk about things that I think I can back up with concrete evidence, and I practice what I talk about. None of this is hype driven.
live now, building a DSP controller web UI + some protocol reverse engineering and other shenanigans.
So I was saying in my deleted toot…
I stream 3-4 times a week showing unedited unscripted live #llm assisted programming. I do mostly terminal UI, infrastructure code, backend code, systems, often embedded code as well, sometimes UI.
I believe in #vibecoding as a modality for software development and use it quite a bit; I try to use the AI as much as possible.
I've been a software developer for 25+ years now. I've built embedded software, backend software, web software, and e-commerce.
I strongly believe in producing not just quality software products, but quality software systems and codebases, and that's what I try to do with #LLMs as well.
I'm happy to take on any request for something I can build on stream because I want to show what these systems are capable of in the hands of an experienced (hopefully) software developer, so that we can have grounded discussions around #llm programming, because otherwise what are we doing?
Streams at http://youtube.com/@program-with-ai, one starting imminently: building a UI for a janky, partially reverse-engineered DSP controller.
In 10 minutes I'll be streaming building controller UI software for a badly documented, halfway reverse-engineered protocol for a DSP control board that has brittle firmware. Some will be #vibecoding, some will be agentic coding, some will be more manual #llm-assisted coding, maybe there will (I doubt it) even be some manual coding.
This is not just “another react app” (why that is apparently a derogatory term used when speaking about #llms eludes me, but that’s another topic).
This is really getting out of hand.
We all know that #NoCode sucked. We all know that block programming was never good for anything other than learning and prototyping.
But hey, it looks like #AI and #VibeCoding fare a bit better, right? They could actually be the killer of draggable blocks that we were all looking for.
But of course AI makes a lot of mistakes as well: it hallucinates APIs and configurations that don't exist, and it will probably break production if you don't double-check its code, leaving you alone to clean up the mess.
So how do NoCode companies try to pivot in the age of LowCode/VibeCode?
Simple, mesh VibeCoding and NoCode together!
Companies like Bubble are apparently pivoting away from building their Lego Mindstorms IDEs to building products that leverage AI to solve coding problems. But instead of generating Python, Rust, or JS, these models generate output in the NoCode intermediate language developed by Bubble, which in turn gets translated into real code. The advantage, according to proponents, is that output in their NoCode framework is easier for a non-programmer to understand (compared to some undecipherable Python, I guess).
Like, how many layers of abstraction, points of failure, corporate lock-ins and unaccountable production errors are you as a business likely to accept, just to avoid paying a salary to an engineer, and just because you really want your sales people to deploy data pipelines instead?
Technology only exists within the context it is created and used. What I find missing from pretty much all discussions of #llm and #vibecoding is how to use it for progressive purposes. We use computers, phones, the internet and while not being shy of discussing their shortcomings, we all know how we can use them according to our values.
One of my strongest beliefs is that everybody deserves devices that do the thing they want to use them for. Software is entirely transient; its entire purpose is to make little transistors do things that humans want them to do. Not only do #llms make that eminently possible (everybody can vibe a reasonably useful personal app without struggling with today's tools), but in the hands of capable developers they allow incredible things when building software for local communities or individuals. No longer is a web UI built while talking at a coffee shop impossible. No longer are nice install READMEs, little walkthrough documents, and a plethora of really important quality-of-life features a chore.
#vibecoding is powerful, especially when you build local, self-contained little apps. One of my favourite targets is self-contained HTML + minimal JS, with local storage, print views, and export/import to CSV.
I strongly encourage people here to think outside of the narrative of OpenAI and Anthropic and consider what a language model can do for them and their direct community.
For example, I just built a little app to share and coordinate our schedule to visit a festival with friends, so that we can select our concerts and when to take the ferry and where to meet. It took me 10 minutes. It’s a single html file and we can share a json of our prefs and then print out a paper version…
it’s so personalized I can’t share it here but here’s a similar version.
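The self-contained HTML + local storage + CSV pattern described above can be sketched in a few lines of plain JS. This is only an illustration of the idea, not the actual festival app: the `save`/`load` helpers, the storage key, and the field names are all hypothetical, and the CSV handling ignores quoting edge cases.

```javascript
// Serialize an array of flat row objects to simple CSV (no quoting/escaping).
function toCSV(rows) {
  const header = Object.keys(rows[0]);
  const lines = rows.map(r => header.map(k => String(r[k])).join(","));
  return [header.join(","), ...lines].join("\n");
}

// Parse the same simple CSV back into row objects.
function fromCSV(text) {
  const [head, ...body] = text.split("\n");
  const keys = head.split(",");
  return body.map(line => {
    const cells = line.split(",");
    return Object.fromEntries(keys.map((k, i) => [k, cells[i]]));
  });
}

// In a browser this would just be window.localStorage; a tiny in-memory
// shim keeps the sketch runnable outside one.
const storage = typeof localStorage !== "undefined"
  ? localStorage
  : { _m: {}, setItem(k, v) { this._m[k] = v; }, getItem(k) { return this._m[k]; } };

function save(key, rows) { storage.setItem(key, toCSV(rows)); }
function load(key) { return fromCSV(storage.getItem(key)); }

// Round-trip a toy schedule: persist it, read it back.
save("prefs", [{ act: "Band A", time: "19:00" }, { act: "Band B", time: "21:00" }]);
console.log(load("prefs"));
```

Because the persisted format is CSV text, the same string doubles as the share/export payload and prints cleanly, which is what makes the single-file approach work.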
The setup, by the way: VS Code, Roo Code, and generation of PHP and R code.
Result:
With vibe coding (i.e. classic prompting, copy & paste, some patience, and keeping an overview) I got the better results.
I've been using Claude Code, and I like it. It's produced decent code and configuration files and everything, but so far I've only used it for "evergreen", fully vibe-coded projects; that is, having Claude start from scratch.
Meanwhile, I *have* used Cursor on existing projects to add features, fix bugs, and add tests. And I found that to work pretty well too.
The problem I have is that with Cursor, I can see the diffs of the code in my editor, step by step, and approve or deny individual changes.
With Claude, it seems like it just prints a diff in the console and I have to accept or reject the whole thing there, with no context of the rest of my project, and no ability to tweak it.
Am I just doing something wrong? Is this the reason to stick to Cursor?
Looking for insights.
No, just no.
If you do this then you're a shit programmer.
And I say this as someone who uses #AI at work, but I have 27 years of industry experience, so I can confidently say I know what I'm doing. AI can be a useful tool, like any other. But relying on it when your own capability doesn't exceed it is a recipe for disaster.
(No "AI is killing the planet" comments please. This post isn't about that).
I keep a page (https://reillyspitzfaden.com/wiki/reading/ai-criticism/ ) on my site with research and writing that's critical of AI, and you can bet the METR study is listed there now.
This reminds me of @baldur's discussion (https://www.baldurbjarnason.com/2025/followup-on-trusting-your-own-judgement/) of a similar issue:
“It’s next to impossible for individuals to assess the benefit or harm of chatbots and agents through self-experimentation. These tools trigger a number of biases and effects that cloud our judgement. Generative models also have a volatility of results and uneven distribution of harms, similar to pharmaceuticals, that means it’s impossible to discover for yourself what their societal or even organisational impact will be.”