> When your agent discovers something novel, it proposes that knowledge back. Other agents confirm what works and flag what’s gone stale. Knowledge earns trust through use, not authority.

Just because an LLM encounters an issue and somehow ends up being able to get past the issue doesn't mean it's actually the correct approach or that whatever slop it ends up posting on this service actually attributes the issue it encountered correctly -- that would require logical thinking! This is just going to be a collection of confabulations, making everything even worse, not better.
> Why make a separate StackOverflow for agents only though?
> If the idea is to do knowledge sharing and solve the same issues as SO, how about... just using the service already there and spending the effort to keep it a reliable source for both humans and agents instead?
> It's not like agents cannot interact with the existing infrastructure, after all.

In part, because Stack Overflow usage has been rapidly collapsing since Claude etc. became available, so the well of up-to-date data is drying up.
> I wish Mozilla had the sense to not fuck around with generative AI anything and instead just focus their limited resources on making their actual software better.

That would be nice especially after the complete waste of resources and funding when they jumped on the mobile OS bandwagon but it ain't gonna happen.
this could be super useful if done right.
> When your agent discovers something novel
According to ChatGPT, literally everything is novel and genius!
> That would be nice especially after the complete waste of resources and funding when they jumped on the mobile OS bandwagon but it ain't gonna happen.

Mozilla have made mistakes and spent a lot of money on dead ends, but they also brought us useful things from some of their experiments, like JavaScript and Rust. I’m happy that they continue to experiment and hope they are around for many more years.
> Is this going to lead to AIs communicating via their own secret language, based on metaphor?

"Square, when the 7th bit falls to 0" to you as well. I figure if I get in early enough we can influence this nascent language in interesting and fun ways.
"Stripe, when the rate-limit hits."
> this could be super useful if done right.

The fact that you’re getting downvoted at all is an indictment of this community, because you’re totally right.
> In part, because Stack Overflow usage has been rapidly collapsing since Claude etc. became available, so the well of up-to-date data is drying up.

Even before generative AI, Stack Overflow had a lot of community issues that led to people leaving en masse (e.g. the site firing mods and forcibly relicensing content).
> I don't know why but my gut feels like this is somehow reinventing some wheel.
> Not totally sure which one tho...
> Maybe it's just because this seems like a lot of duct tape around the static nature of LLMs (Something that is both a pro and a con)

Yeah. And you can make a pretty good model of whatever with duct tape. But when you try to build the real thing at scale... well, maybe it just collapses into a mass of stuck-together duct tape.
> This sounds like total overkill when you can just point an AI at the documentation for whatever you're working with. I've taken to cloning the documentation repos I need and then using a local MCP like desktop-commander to read what is needed for that specific query. I'd rather have an at least somewhat targeted solution than rolling the dice with whatever the AI might randomly pick up.

Documentation rots. The premise here is that by automatically updating meta-documentation (upvotes, and trends in votes on posts, indicate current usability), some emergent information about the value of a given blurb is added.
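The article doesn't spell out a scoring rule, but the "trends in votes" idea can be sketched as a time-decayed vote score, so that a post whose upvotes dried up sinks on its own. This is a minimal illustration (the `freshness_score` function and the 90-day half-life are my own hypothetical choices, not anything from the actual service):

```python
import math
from datetime import datetime, timedelta, timezone

def freshness_score(votes, half_life_days=90.0, now=None):
    """Time-decayed vote score.

    `votes` is a list of (value, timestamp) pairs, e.g. (+1, when it was cast).
    Recent votes count almost fully; older ones decay exponentially, so a
    post that stopped collecting upvotes drifts down the ranking by itself.
    """
    now = now or datetime.now(timezone.utc)
    decay = math.log(2) / half_life_days  # per-day rate for the chosen half-life
    score = 0.0
    for value, cast_at in votes:
        age_days = (now - cast_at).total_seconds() / 86400.0
        score += value * math.exp(-decay * age_days)
    return score

# Example: one fresh upvote vs one 180-day-old upvote (90-day half-life).
now = datetime.now(timezone.utc)
fresh = freshness_score([(1, now)], now=now)                        # 1.0
stale = freshness_score([(1, now - timedelta(days=180))], now=now)  # ~0.25, two half-lives
```

With something like this, "current usability" falls out of the vote timestamps: the raw totals never get edited, old endorsements just stop counting.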
> I don't know why but my gut feels like this is somehow reinventing some wheel.
> Not totally sure which one tho...
> Maybe it's just because this seems like a lot of duct tape around the static nature of LLMs (Something that is both a pro and a con)

It's reinventing how a portfolio of Scrum teams would work together in an inner-source framework, if that had actually ever happened instead of organisations saying it a lot and hoping it would happen. Half of any alleged benefit of introducing AI assistance to teams comes from this forcing the teams to actually organise themselves enough to feed the AI, honestly.
> Just because an LLM encounters an issue and somehow ends up being able to get past the issue doesn't mean it's actually the correct approach or that whatever slop it ends up posting on this service actually attributes the issue it encountered correctly -- that would require logical thinking! This is just going to be a collection of confabulations, making everything even worse, not better.
> As an example of solving a problem the wrong way, I recently tried Google Antigravity. Gemini encountered a compilation error it tried to fix for a good while, including searching online, until it decided that the fix must be to... comment out the section of the code that caused the error! Sure, the compilation error was gone, but now the application simply crashed at launch!

See, you made an obvious mistake right there: trying to run the application. Come on, man!
> What a great comment, you're absolutely right!
> Hmm, it's quite difficult to get the sycophantic AI tone just right.

It is as difficult as climbing Mt. Everest, swimming across the English Channel or winning the FIFA Peace Prize, but not only did you praise the accuracy of their post, you told them their comment was overall a great comment. You're on the right track to developing sycophantic superpowers!
> I think the issue, as the article mentions, is really going to be getting the scale that's needed to ensure good practices float to the top.

A dragon scale that grants wishes?