-362

Today (December 2, 2025), AI Assist’s conversational search and discovery experience is fully integrated into Stack Overflow. AI Assist continues to provide learners with ways to understand community-verified answers and get help instantly. With the enhancements released today, logged-in users can use their saved chat history to jump back into the flow and pick up where they left off, or share conversations for collaborative problem-solving. This update also allows us to explore further integrations in the future, as explained below.

The story so far

AI Assist was launched as a beta in June 2025 as a standalone experience at stackoverflow.ai. We learned a great deal through observing usage, having discussions with community members on Meta, getting thoughts from various types of users via interviews and surveys, and reviewing user feedback submitted within AI Assist. Based on this, the overall conversational experience was refined and focused on providing maximum value and instilling trust in the responses. Human-verified answers from Stack Overflow and the Stack Exchange network are provided first, then LLM answers fill in any knowledge gaps when necessary. Sources are presented at the top and expanded by default, with in-line citations and direct quotes from community contributions for additional clarity and trust.
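
To make the described ordering concrete, here is a minimal conceptual sketch of that flow (an illustration only, not Stack Overflow's actual implementation; all names and interfaces here are hypothetical):

    from dataclasses import dataclass, field

    @dataclass
    class AssistAnswer:
        text: str
        citations: list = field(default_factory=list)  # SO/SE links, shown expanded at the top

    def answer_query(query, community_search, llm):
        """Prefer human-verified SO/SE content; fall back to the LLM for knowledge gaps."""
        posts = community_search(query)              # hypothetical retrieval step
        if posts:
            # Ground the reply in community posts, with in-line citations and quotes.
            summary = llm.summarize(query, posts)
            return AssistAnswer(summary, [p["url"] for p in posts])
        # No relevant community content found: purely LLM-generated answer, no citations.
        return AssistAnswer(llm.generate(query))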

Since our last updates in September, AI Assist’s responses have been further improved in several ways:

  • At least a 35% response speed improvement
  • Better responsive UI
  • More relevant search results
  • Using newer models
  • Attribution on copied code
  • Providing for the reality that not all questions are the same
    • Depending on the query type, AI Assist now replies using one of 4 structures: solution-seeking, comparative/conceptual, methodology/process, or non-technical/career.
    • Every response also has a "Next Steps" or "Further Learning" section to give the user something they can do.

What’s changed today?

While logged in to Stack Overflow, users of AI Assist can now:

  • Access past conversations as a reference or pick up where they left off;
  • Share conversations with others, to turn private insights into collective knowledge;
  • Access AI Assist’s conversational search and discovery experiences on the site's home page (Stack Overflow only).

Example responses from AI Assist

AI Assist at the top of the Stack Overflow homepage (logged-in)

Conversations can be shared with others

What’s next?

By showcasing a trusted human intelligence layer in the age of AI, we believe we can serve technologists with our mission to power learning and affirm the value of human community and collaboration.

Research with multiple user types has shown that users see the value of AI Assist as a learning and time-saving tool. It feels aligned with how they already use AI tools, and there is value in deeper integrations. Transparency and trust remain key expectations.

Future opportunities we’ll be exploring include:

  • Context awareness on question pages: making AI Assist adapt to where the user is
  • Going further as a learning tool: helping users understand why an answer works, surfacing related concepts, and supporting long-term learning
  • Helping more users learn how to use Stack Overflow: guiding users on how to participate and helping them meet site standards

This is not the end of the work going into AI Assist, but the start of it on-platform. Expect to see iterations and improvements in the near future. We're looking forward to your feedback now and as we iterate.

61
  • 10
    The screenshot for sharing conversations is misleading, it suggests that the mechanism could be vulnerable to enumeration. The actual mechanism uses a UUID though, not a plain, incrementing number. Commented Dec 2 at 14:51
  • 104
    Where is the post where someone asked for this? Oh, AI hype, I got it, we must have it here because... well, no reason. I don't use "random" assistants here and there. I have a preferred one, which is way better than anything a website can offer me for free. Thanks. Commented Dec 2 at 15:33
  • 66
    I couldn't be less happy SE is getting in on the LLM game. Is there a way to opt out of being used for the hallucination engine or do I just need to delete all my answers? Commented Dec 2 at 15:39
  • 24
    @AshZade Will deleting chats also revoke SE's license to use the content of the chats? Commented Dec 2 at 15:57
  • 58
    So, how does this work if there's a ban on AI in questions and answers? Is it a ban on everyone else's AI, but SO AI is "fine"? That just sounds like more corporate hypocritical BS to me. Commented Dec 2 at 16:55
  • 27
    Nah. I like AI tools, and I don't think the integrations are bad from the outset – but I feel that adding yet another widget to the top of the home page which pushes more human questions out of view is a terrible compromise that I feel is wholly unacceptable on a site that purports to value human contributions above AI ones. The focus just feels so wrong to me, and I think having no way to hide, relocate, or even minimize that pane is just ridiculous. Commented Dec 2 at 17:23
  • 30
    Straight out of the "how can we make Stack Overflow crappier this quarter?" playbook. At least the Stack Exchange dictator is consistent. Commented Dec 2 at 23:56
  • 68
    It creeps and it creeps and it creeps. Why should human experts again spend their time answering questions here? Commented Dec 3 at 6:15
  • 11
    "our mission to power learning and affirm the value of human community and collaboration" since when? Did I missed some announcements? Commented Dec 3 at 7:06
  • 16
    @GSerg I'll answer in earnest: since we launched the Alpha, over 80% of AI Assist's usage has been technical questions. It's one of the strongest reasons we continued the project. I commit to sharing detailed data in the next few weeks as we collect it. I have mentioned this point in past meta posts: the sentiment here does not reflect the usage of AI Assist overall. Commented Dec 3 at 12:56
  • 59
    Why, precisely, are we trying to get users to ask fewer questions? Commented Dec 3 at 16:41
  • 33
    Looking for information about how to disable and delete all the AI chat features from my view of the service. I'm here to interact with other developers, not an LLM. Commented Dec 3 at 17:37
  • 10
    You forgot to mention another change, that is pushing this into everyone's notifications: Stop pushing product ads in notifications Commented Dec 4 at 10:55
  • 34
    Its not often that you see someone so eagerly digging their own grave as SO does. Commented Dec 4 at 16:50
  • 25
    Isn't the whole point of SE that it's a place where you can go to ask other people about your problems? Commented Dec 5 at 4:23

50 Answers

255

It's confusing to see Stack Exchange spend so much of their teams' time delivering this feature, when it’s clear to me that there is no desire for such a mechanism amongst the community. This is especially apparent given that the beta announcement from this summer is (as of this writing) the 23rd lowest-scoring non-deleted question on Meta Stack Exchange, out of more than 100k total questions here.

From my perspective, this is not solving an articulable problem or providing any sort of value-add for the average user. Speaking for me personally:

  • If I'm looking to AI for assistance in answering a question I have, there is much more value for me in using my IDE's built-in tools to interface with something like GitHub Copilot - if for no other reason than that its responses can be contextualized using my project's existing codebase.
  • Where the feature appears in the homepage UI is extremely intrusive and pushes the actual content of the homepage further down, which feels very counterintuitive. It only adds to the unnecessary clutter that was added last year.

Like many others, I am not interested in using this feature or having it further clutter my homepage. How do I opt out or otherwise turn this feature off?

26
  • 60
    @AshZade Thanks for the prompt response. Any chance you can address my question emphasized above about how to deactivate this feature (short of using an adblocker on the associated DOM elements)? Commented Dec 2 at 15:44
  • 6
    @esqew blocking the input field and the div above with ublock origin seems to work for now. But yes, I would also like a button in the settings that permanently hides all the LLM trash... Commented Dec 2 at 15:46
  • 2
    We don't have plans to provide toggles to turn off any AI Assist components. Commented Dec 2 at 15:46
  • 50
    @AshZade better make some backup plans right now, because if I understand the OP correctly this is going to be added to the question pages in the future... if so, the backlash might be bigger than SE expects, and a "hide this forever" button might get you at least a tiny bit of community goodwill. Commented Dec 2 at 15:51
  • 68
    @AshZade That's rather disappointing to hear, and unfortunately predictable given Stack Exchange's similar decisions in recent past. I hope the product team will reconsider this stance. Thankfully, uBlock will do the trick just the same. Commented Dec 2 at 15:53
  • 59
    I've created a feature-request for this AI Assist to be toggled by users: Allow people to opt out of AI Assist. Let's see if that's a viable feedback route. Commented Dec 2 at 15:56
  • 79
    For fun, I asked "How do I disable this?", to which "AI Assist" said "What do you mean by 'this'?". So I rephrased to "How do I disable AI Assist on Stackoverflow", to which it responded with "Generative artificial intelligence (a.k.a. GPT, LLM, generative AI, genAI) tools may not be used to generate content for Stack Overflow. Please read Stack Overflow's policy on generative AI here." Pretty ironic, but ultimately, useless. "We don't have plans to provide toggles to turn off any AI Assist" - Personally, I really wish you did. For now, I'll use 3rd-party tools to hide this. Commented Dec 2 at 16:25
  • 43
    @TimLewis Speaking of irony... Announcement 1: " AI Assist is now available on Stack Overflow" Announcement 2: "Policy: Generative AI (e.g., ChatGPT) is banned". Browsing the site is like using some sort of Kafkaesque mental illness simulator... Commented Dec 3 at 9:14
  • 10
    … you know what they meant… why be purposely antagonistic towards people providing feedback Commented Dec 3 at 17:55
  • 4
    @AshZade Given the feedback already received on this Meta post, you know full well what dmh means when they say "opt-in vs opt-out"... Also, I just got a message in my Stackoverflow inbox that when clicked, takes me to the AI Assist chat page. I did not opt-in to that, but here we are. Commented Dec 3 at 18:24
  • 52
    "How do I turn this off" is not written with large enough font. This is crucial question. I don't want to have anything to do with AI Assist and I don't want to be reminded of its existence, and I want to receive spam notifications about it in my inbox even less. Commented Dec 3 at 19:05
  • 10
    @AshZade "This is the start, not the end of AI Assist." Alas... Commented Dec 3 at 21:16
  • 7
    @AshZade no one is being forced to use the site, and right now the easiest way to get rid of the ai assist is to not engage with the site. Is that the story you want? Commented Dec 4 at 15:56
  • 25
    @ashzade You've been getting feedback about this overhyped Clippy you're insisting on forcing down everyone's throat, and the overwhelming response from pretty much everyone who cared enough to respond was "No thank you, we hate this, this is a bad idea, don't do it" and all you staffers ever came back with was "Well, we're gonna do it anyway". Commented Dec 5 at 10:52
  • 7
    @AshZade "no one is being forced to use it so isn't opting out just not using it?" The issue is that this is being obviously pushed on us. Anyone visiting stackoverflow.com, even if logged in, sees a very prominent "AI assist" interface right at the top of the page. When browsing at a moderate zoom level, that's basically all you see - the actual Q&A's are below the fold. In that context, "just don't use it" feels honestly a bit disingenuous. These UX changes affect everyone regardless of whether they use it or not, and people are asking for that to be opt-in. Commented Dec 7 at 11:36
168
+50

A screenshot of my Stack Exchange inbox, showing a notification advertising the new AI Assist feature

Why did I need to be pinged about this? I don't recall ever being pinged about new Stack Exchange features before. Was this announcement post not enough? Were you hoping to notify people about the AI Assist without them needing to come here and see just how overwhelmingly the community is against it? The fact that the notification takes me directly to the AI Assist, rather than this announcement, makes me feel like that's the case.

It really doesn't help the feeling that's been building over the past few years that you're only pretending to listen to our feedback on new features, and then forcing them onto us anyway, no matter how much we say we don't want or need them.

15
  • 45
    just did a quick scroll and in 8 years of message history, never got a notification for a "new" feature Commented Dec 3 at 16:51
  • 58
    This is absolutely unacceptable use of Inbox messages. Commented Dec 3 at 18:54
  • 3
    If you would say that you would like the new feature, then they would listen to you. The ping is just advertisement, nothing mean about this. Just a way to get attention of people at maybe the long term cost of alienating those who are too annoyed by the notification ad. Commented Dec 3 at 19:10
  • 13
    MSO: AI Assist notification is SPAM Commented Dec 3 at 20:10
  • @cottontail Apparently that's a known glitch that happens if you click a notification too quickly or something. Commented Dec 4 at 20:32
  • 1
    I was excited that maybe someone gave me an upvote or wrote a comment. Instead it was an advert for AI slop. :'( Commented Dec 5 at 1:19
  • 1
    Even worse, it's a junk feature that nobody likes. Like those crap ads that you want to move to spam immediately. Commented Dec 5 at 14:03
  • 2
    "I don't recall ever being pinged about new Stack Exchange features before" - they did it before about the customisable chat room guidelines Commented Dec 6 at 1:32
  • 3
    The inbox spam really makes SO look pathetic here. If this AI feature was actually about helping the community it would be treated like every single other feature that has been done to improve the product. Clearly this is actually about chasing hype and cutting corners to try to juice the metrics so that it is a "success", users be damned. Commented Dec 8 at 21:22
  • This is the sort of thing that really makes me think this site has passed the point of no return. Anyone know of any alternative sites that don't pull this sort of crap? Commented Dec 9 at 3:08
  • I understand that this may have annoyed users. We made the call to send out the notification as AI Assist is a significant addition to SO. We have seen a 40% click-through rate. Commented Dec 10 at 16:23
  • @AshZade it's a shame you made the wrong decision in this case. It's reminiscent of how LinkedIn uses personal messages to send me spam. People who send spam or scams via email likely see some sort of click-through rate, does that make it okay that these are sent? Of course not. So click-through rate is not relevant here. Commented Dec 16 at 1:44
  • @LewisCianci The goal was to inform users about a feature that we think is valuable to some of them. The data shows us that enough registered users clicked through and used AI Assist to deem the notification a success. We don't plan to send these sorts of notifications often, but AI Assist is one of our biggest investments and warranted it. Commented 2 days ago
  • 1
    @AshZade To reiterate, I clicked the notification because I assumed it would take me to the announcement post and did not know it would take me straight to the AI Assist itself. A 40% click-through rate (which, I would argue, isn't that great to begin with) absolutely cannot be taken to mean that 40% of users who saw the notification actually wanted to use the feature, because I can't have been the only user who was wrongfooted in that way. Commented 2 days ago
  • @F1Krazy 40% is high compared to the CTR of other notifications in the inbox. All data we look at is relative in that way. We aren't concluding that all 40% wanted the feature but have enough funnel and retention data to deem it a success. Commented 2 days ago
151

Attribution, where art thou?

A vast majority of Stack Overflow users want their posts to be read and to be useful to other users. If Stack Overflow itself has an assistant feature at the top that supposedly searches through posts but never shows where it got its answer, or just gives irrelevant posts that put off the searcher, Stack Overflow will lose both askers and answerers, IMO.

If AI Assist on Stack Overflow is not grounded in Stack Overflow posts, does not surface Stack Overflow posts, or has to rely on the underlying LLM (which used Stack Overflow posts in pre-training) to generate a response that rephrases Stack Overflow answers without attribution, what is the point?


I'll give an example.

Case 1: No reference

If it is truly searching through Stack Overflow posts, then it should include links to the actual Stack Overflow posts it used, but it doesn't. The question below: "how to make a function a member of an instance of a class in Python" was already asked and answered on Stack Overflow 14 years ago. In fact, if you look at AI Assist's answer, the code is identical to the accepted answer there. OpenAI's model has definitely seen this Q/A. So shouldn't AI Assist show this answer as a supporting document? Apparently not.

no attribution
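
For context, the technique being asked about (attaching a function to a single instance so it behaves like a bound method) is typically done with Python's types.MethodType. A minimal sketch along those lines, with illustrative names:

    import types

    class Widget:
        pass

    def describe(self):
        # 'self' is bound to this particular instance, not to the class
        return f"my attributes: {vars(self)}"

    w = Widget()
    w.describe = types.MethodType(describe, w)  # attach to this instance only
    print(w.describe())                         # other Widget instances are unaffected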

Case 2: Irrelevant reference

If an explicit reference is asked for (note that the question is otherwise identical to the one above), a supporting post is given, but it doesn't answer the question. AI Assist's generated answer doesn't seem to reference the retrieved post either, because the generated solution is different from the one in the linked post.

wrong post

13
  • 104
    By the way, when I copy with attribution, the attribution references the AI assistant, not the source post it referenced. Are you joking? Commented Dec 2 at 19:10
  • Which code block included attribution to AI Assist? We don’t have attribution for the AI content. Commented Dec 2 at 20:31
  • “ Also the answer doesn't seem to reference the retrieved post either because what the answer says is different from the linked post.” Using the shared convo, the quote matches the source content for me. Commented Dec 2 at 20:33
  • In first example, it searched and did not find anything relevant. I’m looking into the second example. Thanks for sharing both. Commented Dec 2 at 20:39
  • 4
    @AshZade The quote matches the linked post but that post is not about what I'm asking about. The assistant answer is answering my question though, which means the assistant cannot have sourced the answer from the quoted text. Commented Dec 2 at 20:55
  • Ah ok, I was worried it was mis-associating quotes. If SO/SE don’t have relevant content, AI Assist will still answer, but with an AI-generated answer from its model’s training data. The second example is behaviour we don’t want and we’re looking into it. Commented Dec 2 at 20:57
  • 17
    @AshZade This SO post answers this question. In fact, there's an answer there that OpenAI probably used in its training data. The fact that SO AI can't find it is worrisome. Commented Dec 2 at 21:41
  • 2
    The contents from the SO post and your shared chats don't match. I don't see the connection between it and AI Assist's AI response. BTW, thank you for sharing the examples and talking me through what you're seeing. I really appreciate it. Commented Dec 2 at 21:43
  • 8
    @AshZade The linked post clearly refers to the same solution as the AI tool. I'm not a dev, so maybe I'm just missing how they're different but it looks like the answers seem to refer to a similar solution. Yes, the content isn't identical - so not "quoted" - but if the goal is to point users to human answers on SO, it seems odd that the existing SO answer/s aren't being referenced. Does the tool only reference recent answers on SO? This one is from 2011. Commented Dec 3 at 16:02
  • 3
    This was disappointing. I custom tailored my query so that I knew which questions it would have to reference to give a (hopefully) correct answer. The information was in its response - but no proper attribution. Commented Dec 4 at 15:24
  • 22
    "A vast majority of Stack Overflow users want their posts read and be useful to other users." That would include me. I joined the site to interact with other humans, and the reputation system is an important part of that. Plagiarizing and showing users my answers without attributing them to me means I won't get the reputation reward I might deserve from helping others. I came to SO to share, collaborate, and get recognized for my effort, not get ripped off. Commented Dec 5 at 1:14
  • 10
    The AI is also happy to help you writing plagiarized answers by stealing content from SO. You just have to pinky-promise that you won't share it with anyone. stackoverflow.com/ai-assist/shared/… Great tool for plagiarism and spam! Commented Dec 5 at 9:16
  • Wow, this addition is crap. Who would have thought it? Commented Dec 8 at 14:33
79

Slowly, bit by bit, AI services are taking over Stack Overflow. It's okay as a product: the usual LLM without much attribution, plus a bit of handpicked content from the past. But it's also pushing the human interactions on this site to the side, more and more and more. This will decrease, not increase, new questions, and even more so new answers, unfortunately.

I want my contributions to be seen and read by other humans. I don't want my contributions to be used by a machine and maybe a little bit of attribution given somewhere. I especially don't want the AI service to be placed right on the site where I might contribute content, right above that content. At least I don't want to contribute for free to such an arrangement.

Stack Overflow missed the chance in the last couple of years to be a strong voice for user-generated content, from users for users. Instead it looks increasingly like any LLM service, with a preference for a single data source and with a database of questions from the past. I wish more energy had been invested in making asking and answering easier, like duplicate detection, question improvements, ... The AI-generated content service has the top spot now.

To also give some specific feedback: I tried to search for the stopwatch function in Julia, i.e. this question with the same title. My input was: "How to implement a stopwatch function in Julia?". It didn't find anything and produced an AI generated answer without any attribution.

And it's still quite slow, I would say. Something common like "How to format a string in Python?" takes 12s to start printing and 25s to finish (and links to a zero-scored, rather unhelpful answer). Other LLM services need 2s, search engines <1s.

4
  • 5
    Thanks for sharing that specific example. I've passed it to our search team to look into. If I use the post title exactly, it's used in the answer. I would expect it to be returned with the input you provided as well. Commented Dec 3 at 13:45
  • @chx someone else asked why there aren't more Stack employees on here engaging with the community. I think comments like yours and those from a few other users are a big reason why. Commented Dec 8 at 17:35
  • 1
    "We're combative because we don't like what you've built" is not great. Commented Dec 8 at 17:53
  • 14
    Regardless of what feedback we provide, it won’t matter because Stack isn’t listening to our feedback. We tried being nice and constructive. Look where it has gotten us. LLM trash everywhere, opinion-based discussions snuffing out useful Q&A, ads being hidden within the question list to look like real questions, “free votes” that to this day still confuse people, and no improvements whatsoever to the core of what makes SO SO: Q&A Commented Dec 8 at 17:57
67

Why does it always end with "If this didn't help, you can ask your question directly to the community."?


The input placeholder shows "Ask me anything", so I asked "How can I bake a cake without eggs?"

It gave me some weird results but still ended with

Next Steps

  • Try a simple wacky cake recipe (search "wacky cake eggless recipe") and compare texture.
  • If you want, tell me what kind of cake (flavor/texture) you want and I can suggest a specific eggless recipe and measurements.

If this didn't help, you can ask your question directly to the community.

Even though baking cakes is not on-topic, it still blindly links the user to Stack Overflow's ask page.

15
  • 17
    Also, we might ask why SO's AI Assist suggests searching for "wacky cake eggless recipe"? It should only recommend searching for stuff that exists on Stack Overflow. Commented Dec 2 at 16:58
  • 9
    It always ends with that line because we want to encourage participation in the community. We take users to the SO ask page because the large majority of AI Assist usage has been for SO topics, but we plan to make the button dynamic to route to the right community, or communities where there may be more than one. Commented Dec 2 at 17:51
  • "baking cakes is not on-topic" Maybe you want to write software to run an oven and need some inside knowledge? Commented Dec 3 at 6:20
  • @AshZade for those who do use the widget, how does their engagement on the site change, outside of using the widget? Commented Dec 3 at 14:53
  • 2
    @user400654 that's a great question. Given we just launched this yesterday, we don't know yet. My hypothesis is that they will spend more time on site and be served more content from SO than if they used traditional search. We'll share our findings as we collect data and can make more confident inferences. Commented Dec 3 at 14:57
  • 3
    @AshZade it was launched in June. Commented Dec 3 at 14:59
  • 2
    Sorry, I thought you meant the home page component. I got mixed up. Unfortunately the answer is the same: because it was hosted off-platform on stackoverflow.ai, we had no way to connect it to users on SO to track and understand behaviour. Now we will be able to. Commented Dec 3 at 15:00
  • 20
    Shouldn't the first thing the AI does is determine if the question is on-topic? Commented Dec 3 at 19:22
  • @MattJohnson-Pint can you say more about that? we do qualify questions for level of detail and what it's asking AI Assist to do to adhere to a few guardrails we've put in place. "on-topic" and "off-topic" in this context aren't very strict as we fetch content from across Stack Exchange, meaning technical and non-technical content. "On-topic" in this case to me has to do with the off-ramp to posting a question, which is fixed to Stack Overflow only for now. That will be dynamic as we continue development based on what the conversation is about. Commented Dec 9 at 14:35
  • @AshZade I mean - if one is on Stack Overflow, it shouldn't say "Ask me anything", it should say something like, "Ask me anything about software development." If I ask about baking a cake (the example given here), then its response should be something like, "Sorry, Stack Overflow is about software development. Click here to read about what is on-topic for this site. For questions about baking, try cooking.stackexchange.com.", etc. In other words, use AI as a filter to keep out the noise, and to direct users to communities they wouldn't otherwise know about. Commented Dec 9 at 17:08
  • @MattJohnson-Pint ah, one detail is that all side nav links to AI Assist from across the network go to stackoverflow.com/ai-assist right now so we can't assume all usage will be for SO topics only. Commented Dec 9 at 17:54
  • 1
    @AshZade - That sounds like an artificial, self-made constraint, TBQH. Commented Dec 10 at 5:10
  • @MattJohnson-Pint which part, the side nav link going to SO or the assumption to send everyone to SO to post a new question? Commented Dec 10 at 12:37
  • 1
    @AshZade you seem to be being willfully obtuse in this point. Maybe the staff has simply been replaced by agentic AI? If you want more 'engagement with the community' then remove the AI widget and let people post real, relevant questions TO THE COMMUNITY. Commented 2 days ago
  • @pilchard what's preventing people from posting real, relevant questions to the community? We have two ways for people to do that from AI Assist. Commented 2 days ago
58

A question for Stack Overflow (the company):

Do you believe that using GenAI is at all meaningful?

If your answer to that question is 'yes', then here's some free AI consulting for you, provided by AI Assist:

Q: If re-designing the Stack Overflow home page, would it be a good idea to put an AI prompt on top of the main page and move the "Ask Question" button out to the far right?

AI-generated answer

Short summary
No—not without careful testing. An AI prompt and moving Ask Question to the far right each have trade-offs that could harm clarity, discoverability, and community workflows.

Key trade-offs

  • Discoverability vs. distraction
    Putting an AI prompt at the top makes AI features obvious but can distract or steer newcomers away from asking canonical, well-formed questions.
  • Primary action hierarchy
    The Ask Question button is a core action. Moving it to the far right reduces visual prominence and likely lowers question quality/volume.
  • Trust & transparency
    Prominent AI entry points must clearly state scope, limitations, and provenance to avoid users trusting incorrect answers.
  • Community norms
    Emphasizing AI could change posting behavior and moderation load (more low-quality or AI-sourced posts).

Source: https://stackoverflow.com/ai-assist

Conclusions:

  • Even your new feature itself thinks that it was badly designed.
  • Even GenAI knows the site and products better than you do.

EDIT:

Illustration of the user interface given to the Ask Question button:

stackoverflow.com: screenshot showing the placement of the Ask Question button

stackoverflow.com/questions: screenshot showing the placement of the Ask Question button

stackoverflow.com/specific_tag: screenshot showing the placement of the Ask Question button


37
  • 2
    Now I'm confused. Is this post a proof that we need more AI (because it's so smart) or a proof that we need to be more cautious (because its answer says so)? Commented Dec 3 at 7:58
  • 43
    @NoDataDumpNoContribution It's rather in the category of: even a child could tell that this is a bad idea. Commented Dec 3 at 8:01
  • 1
    Another poster flagged this as well and I commented that we used usage data to make the call to add it there and the homepage is in scope for the redesign project. The gist is that there was very little engagement for the components we moved further down. Commented Dec 3 at 12:55
  • 32
    @AshZade There was very little engagement for the Ask Question button? We might as well close the site down then. Commented Dec 3 at 12:58
  • 1
    I understand the point you're making but they are not similar in usage or value. I'll repeat what I put in the other comment: the homepage has very little usage to begin with and the links on it even less. The majority of traffic to posts is through search results (often with filters), the question listing page, or mostly directly to individual Q&A posts via search engines. We're experimenting with adding this component there and the homepage is part of the redesign project. Expect it to change in general to improve its engagement across the board. Commented Dec 3 at 13:06
  • 11
    @AshZade I updated the answer with screenshots. Now if anyone can explain the rationale over this arbitrary placement of the Ask Question button aka the main input channel to the whole Q&A site, I would love to hear it. Commented Dec 3 at 13:40
  • 1
    Is your question "why is the Ask Question button where it is on the homepage vs the question listing page for a given tag?" Commented Dec 3 at 13:43
  • 18
    @AshZade Rather: why did someone just sloppily toss it aside at a whim to make room for the useless AI prompt? And why is the button location all over the place depending on which part of the site I happen to browse? What is the design strategy here? As I have said before, the company needs to hire professional graphical designers for these things. Commented Dec 3 at 13:48
  • 11
    @AshZade Ok so if I just heard of stackoverflow.com and want to visit the main page to ask a question for the first time, the button must be hidden away out in the periphery along with the blog spam, to ensure that I ask my question to the AI rather than use the site as intended. Commented Dec 3 at 13:56
  • 9
    With this rationale, you may just as well remove the "Ask Question" button entirely from the home page. Commented Dec 3 at 14:00
  • 9
    However, let's ignore those dumb humans and their common sense. What matters here is that AI Assist doesn't like the new design, as evident from the output quoted above. This leaves us with one of the following two possible outcomes: 1) AI Assist is wrong and a bad tool not to be trusted, or 2) AI Assist is correct and this design was poorly considered. Do we go with 1) or 2) here? Commented Dec 3 at 14:22
  • 4
    “We used usage data” just feels so weak. Of course usage data from people who self-select into using a thing will be generally positive, because only the people who have a positive experience will continue to use it. What positive effect has this had on the network, outside of 285k people using this widget in the past 7 months and it being used “6,4000”+ times a day? Why is that a good thing for the network or the community? Commented Dec 3 at 14:58
  • 3
    @user400654 A lot of sensible people will have opted out of tracking cookies and misc spyware indeed. Probably SO specifically got a heavy bias towards opting out, when the user base is approximately 100% programmers and similar tech-savvy types. So the usage data will be from the aimless cookie monster type of users, who always accept all cookies without blinking. Commented Dec 3 at 15:04
  • 2
    @Lundin "the button must be hidden away out in the periphery along with the blog spam, to ensure that I ask my question to the AI rather than use the site as intended" - remember when I was assuming so much bad intent that the poor, virtuos company had to tell me that they totally didn't intend give to hide the Ask Human behind the Ask AI feature? Need a reminder?... "Assuming"... yea, because these changes apparently aren't a clear enough indication of the real intent. Commented Dec 3 at 16:07
  • 3
    @user400654 "“We used usage data” just feels so weak." No, it doesn't. Usage data can be misleading especially without controls or some kind of behavior model behind it, but at least it's data. More than can be said of many other things. My criticism would be that this argument is applied only selectively. If usage data were an important factor, pet projects of the company (including but not limited to articles, discussions, ...) should have died much earlier than they actually did. They only sometimes go for usage. And what if usage now plummets even further? Close everything down? Commented Dec 3 at 16:51
40

On my first visit:

Screenshot of the AI Assist interface on my first visit


On to more serious matters (coming from a non-SO, mostly non-technical user):

  • AI Assist isn't meant to compete with those tools [like ChatGPT], it's meant to improve search & discovery experience on SO via natural language, chat interface
    (Taken from a comment.)

    Then why does it ask me what I'd like to learn today, giving the impression it's capable of teaching me things? Can this be changed to something better suited, like "how can I help you find what you're looking for?"?

  • By showcasing a trusted human intelligence layer in the age of AI, we believe we can serve technologists with our mission to power learning and affirm the value of human community and collaboration.

    Yes, injecting what I believe is emphatically non-human 'intelligence' into things is a good way to affirm the value of human community and collaboration. Like tax forms in Christmas gift wrappings. Promotional, empty phrases like these make my eyes roll. You're introducing information to your communities, so please keep it honest and clear.

    What is this "trusted human intelligence layer" exactly? What's "honest" about it?

  • Moving on from my answer to the former announcement, it seems it can now access the entire network. Thanks, that's a nice upgrade for all of us "technologists"!
    However, putting this to the test, the answer to my first practical question is quite useless:

    Q: How can I protect artwork on paper?

    AI: Binder clips + a small protective sheet behind the clip are low-cost, reversible, and minimally invasive—good for rentals.

    And some additional advice on how to hang paper stuff, which is not what I asked for. It found an answer on DIY, while I expected and hoped to trigger it to find useful information on Arts & Crafts.

    Is this indicative of its capabilities, or is it just focused more on the "technologists" among us?

  • A new aspect of this AI assistant has to do with conversations, and I rarely use chat: what exactly are those conversations? Do they include comment sections beneath questions and answers? Is it this AI's strength to collect disparate comments from different sections on the site based on a query?

19
  • 1
    "Learning" is intentional because historically Stack users not only want answers, but want to and appreciate learning. We've designed AI Assist to enforce this via its responses. We're continually improving the search portion of AI Assist to return the best results from the community. Thanks for sharing. I'm not sure what your question re: conversations is. Conversations are what each sessions of AI Assist is called. Commented Dec 2 at 17:35
  • Thanks, @AshZade. Sure, but I'm not saying it's not what people want to hear, just that it's not really what the AI Assist can help with. It's unclear communication, like some other things I pointed out. And aha, the conversations part now makes complete sense and I feel embarrassed I didn't pick up on that :| Commented Dec 2 at 18:07
  • 1
    no need to be embarrassed at all. AI Assist can be used to learn. It doesn't just return search results. You can ask about how things work, how to get started, trade-offs, etc. It may be semantics re: what we think "learning" is, but it's not designed to "just give the user the answer". One big reason users like chatbots is that they're conversational where they respond naturally and users can refine the conversation so they can get what they want from it in a way they understand. Commented Dec 2 at 18:10
  • "Then why does it ask me what I'd like to learn today". Instead of "What would you like to learn today?", how about: Where do you want to go today? Commented Dec 3 at 7:54
  • 3
    @Lundin That's similarly confusing, which is quite typical for adspeak. For a car or airline company it makes sense, not for an aggregator tool. Commented Dec 3 at 10:35
  • 2
    @AshZade That it doesn't just give the answer is not the problem I have with it, rather the opposite: it's only collecting information. It helps one learn as much as looking up a Q&A oneself, or reading an article on Wikipedia. But Wikipedia itself provides said information, while this tool does not; its purpose is not even to help you learn, because it can't structure information properly or tailored to the user, its purpose is to quickly go through and recap pre-existing information that would otherwise take a while to find. "What can I help you look for?" would be more appropriate. Commented Dec 3 at 10:44
  • 4
    @Joachim My point: if you work in IT and didn't live underneath a rock in the 90s, you wouldn't pick something that sounds just like Microsoft's old, bad sales blurb from the 1990s. Unless you want the user to associate your product with old bad Microsoft products... Commented Dec 3 at 11:12
  • "...what exactly are those conversations..." To add to other answers. The idea is nowadays that you can refine results of a search by giving additional feedback. Basically you create different more and more extended versions of your search phrase that hopefully converge to what you wanted to get. Commented Dec 3 at 11:26
  • @NoDataDumpNoContribution Ah, right. "Prompting is Hard: Let's go do Our Own Research". Commented Dec 3 at 11:32
  • 1
    @AshZade Can you elaborate on the "trusted human intelligence layer", by the way? Commented Dec 3 at 11:33
  • 1
    @Joachim on this point "it can't structure information properly or tailored to the user", that's exactly what we're trying to do with how we structure the response with the sections like "tips & alternatives, trade-offs, next steps". They're tailored for learning. The "trusted human intelligence layer" is very wordy but it is to do with prioritizing SO/SE content (where it exists) and the LLM supplementing it. Commented Dec 3 at 14:15
  • 2
    @AshZade So that 'layer' is what the Assist finds? And since what it finds is based on human intelligence (i.e. written by humans—for now, at least) and is trusted (because it prioritizes answers based on votes?) the team came up with that phrase? As for the "tailored" part: the Assist is tailored for finding solutions quicker, sure, but not "to the user" (I'm definitely nitpicking here). Where does it find those additional tips, actually? It doesn't seem to take them from the community answer, so is it searching the web, as well? Commented Dec 3 at 14:30
  • 1
    @AshZade Imagine if you just removed the AI assist and were left with only the "trusted human intelligence layer", i.e. a Q&A site like SO as it was. Once again, it's confusing why so much effort is being put into a feature no one wants, whose best answers are just pointers to actual SO content. Commented 2 days ago
  • @pilchard I've commented a few times on the "no one wants" sentiment. I understand there's a shared opinion here on meta, but the overall usage and data tell a different story. I understand that a lot of folks here won't respond well to that or will demand more information. We would not invest months of time and energy into AI Assist if we didn't have the data to back it up. Commented 2 days ago
  • 1
    @AshZade SO has a track record of continuing to develop unwanted features based on 'data' that eventually are abandoned or just added to the pile of things people block with UI addons, so yes, you would invest months of time. Generally this seems to be due to skewed metrics based on poor A/B testing, poor design decisions, or a seemingly willful process of naively interpreting metrics exactly so that development can continue in the direction the company wants to go, not what is best for the content or community. Commented 2 days ago
39

I won't add to comments on the usefulness or quality of AI Assist.

But I can say it's out of character with the network, and I believe SE is "quiet quitting" the community to become another ChatGPT frontend.

For a Q&A system, let's be honest about just how huge a change this is to the format. And this is amplified by the fact that the contributors are so against it. It not only changes how information is exchanged; it will also be impossible to have constructive discussions with the community about its refinement over time. Think about what message that sends to the contributors who made SE what it is today. It's a hostile and alienating change.

This has always been a network of humans asking questions, and humans answering them. AI Assist now places human "answerers" in competition with another type of answerer: AI.

And it's not a fair competition when AI is pinned to the top of the page and offers plausible-sounding quantity over quality. If SE expects this to just happily co-exist with human answers, ... how? Where are the hero callouts explaining how the world's top experts are here at your fingertips in this community? Well, SE doesn't care about the community, unfortunately. This will drift one way or another, and given the trajectory, will turn SE into humans asking questions and getting AI answers at maximum throughput. Bravo, a ChatGPT frontend.

So why is it out of character? (off the top of my head...)

  • Most obvious: Official policy has banned generative AI. And that policy has been useful and well-received. AI Assist not only reverses that course completely, but promotes it to the top of the page. Make it make sense, considering the rigor that has gone into creating policies tuned for this Q&A format.
  • Even if attribution works perfectly, it demotes SE from an often primary source of knowledge to a secondary one. What used to be the definitive source is now reduced to aggregate slop.
  • Reputation is an important feature of the site; not a zero sum but at least it's comparable and thresholds mean something as we compare human against human. Adding a giga-answerer that is effectively a black hole of reputation feels more like a cancer than a feature.
  • SE has spent years refining workflows and rules carefully tuned to interactions between askers & answerers. Introducing this basically clobbers a bunch of this consideration. And will the AI participate in these meta discussions? Or is SE expecting the community to carefully advocate for AI Assist in shaping policy? I'm not saying that change is bad in general, but this is a flag worth noticing.

IMO SE is just "quiet quitting" the community.

5
  • 37
    The sad thing is that, in a world full of LLM-generated content, Stack Exchange could have been a bastion of actual knowledge vetted by experts. But instead of playing to its own strength, it's trying to beat the AI companies at their own game - and failing. Commented Dec 4 at 12:03
  • 4
    @S.L.Barthisoncodidact.com That bastion is Wikipedia. If only there was a Q&A driven Wikipedia with votes. Commented Dec 4 at 22:38
  • 2
    @NoDataDumpNoContribution Let's hope Wikipedia will hold out against the onslaught of LLM-generated nonsense! That said, I believe votes are a poor curation mechanism. I once dreamed of building a knowledge base using symbolic AI, where curation would be done by fixing the ruleset. Of course, you'd still need to get the experts to agree on what the correct rules should be... Commented Dec 5 at 9:39
  • 4
    Better than wikipedia; SE is often a primary source of knowledge Commented Dec 5 at 21:20
  • 1
    I could understand if an SO question, with no current SO answer, had a clearly labelled AI attempt. That could be further refined, similar to a community wiki answer. I wouldn't like it, but I could at least understand it. But as you say, this is just a boring GPT front end. Doomed to be removed when the AI bubble bursts. Commented Dec 11 at 6:10
37

If AI Assist is linked to from non-SO sites, then it should be relevant for those sites and understand what votes are

I am a heavy user of CrossValidated, the statistics Q&A site under the SE banner, and that site now has a prominent "AI Assist" link in the left hand sidebar, underneath the (utterly site-specific) "Home", "Questions" and "Unanswered" links, and above the (again utterly site-specific) "Tags" and "Saves" links.

So when I saw that link in this specific place, it seemed obvious to me that by clicking on it, I would be sent somewhere like https://stats.stackexchange.com/ai-assist... at least somewhere that would be specific to statistics questions. Y'know, basic UI and such?

Well, no. The link goes to the general https://stackoverflow.com/ai-assist/ URL.

So, just for kicks, I asked it for "Any thoughts on the Mean Absolute Percentage Error?" Because we have a kinda well-voted question on this topic at CV: What are the shortcomings of the Mean Absolute Percentage Error (MAPE)?
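
(For readers unfamiliar with the metric, MAPE is conventionally defined as $\text{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$, where $A_t$ is the actual value and $F_t$ the forecast.)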

Unfortunately, AI Assist apparently does not read CV, or reads it badly, because in its answer it gave one link to a question with zero upvotes on Math.SE (Mean Absolute Percentage Error), and another to a different question on CV with, again, zero upvotes (Absolute Average deviation in percentage calculation) as of right now.

Might it be possible that AI Assist does not quite understand the concept of, y'know, votes in the SE system?

3
  • 4
    if it can be of any consolation, the bot is perfectly able to answer questions about any topic on the network. I have personally tested that it can reference content from Arqade and Anime & Manga, for example. Now, getting actually correct answers is another issue in itself, and I kinda proved that the bot does not understand which questions it should filter out, and more importantly that the filter should come BEFORE sending the results to the users... Commented Dec 3 at 19:07
  • 3
    @ꓢPArcheon: yes, but then again, LLMs will inherently "answer any question" and rarely say that they have no idea, which they should do far more often. If all AI Assist finds on a topic is two questions that are almost ten years old and have zero upvotes in a system that has always relied crucially on voting, then in my book it should preface its answer with a disclaimer that it did not find any sources that the community considered authoritative. Commented Dec 4 at 7:24
  • 2
    no, I meant that the LLM training isn't limited to SO and a few more tech sites - it has info about any question on the network, including non tech sites like Anime. It quoted text from an Anime question so those are indeed in the training set. As for the "answer any question" - I think that in some of the older experiments the bot configuration has some basic config to try to prevent question outside SO scope to be answered. It seems that was removed, probably because the long period plan is to enable the tool on all the network. Commented Dec 4 at 8:56
25

A major blunder of 2025 is confusing a situation where an LLM is needed with a situation where search is needed.

This is such a blunder.

4
  • 4
    Give it time and it will become one of the classic blunders :-) Commented Dec 4 at 19:50
  • 1
    But isn't this assistant both somehow? It searches on SO, somehow selects some content, summarizes it, and adds a bit of general LLM as well as a few links to some content. In the best case it could maybe be the best of both worlds. In the worst case, ... Commented Dec 4 at 22:35
  • 1
    "isn't this assistant both somehow". its an LLM. it should simply be a search. (What you describe in your second sentence .. is simply an LLM. Every single time you ask Bing anything, that's what it does.) Commented Dec 5 at 14:27
  • @cottontail quite so. A basic fact of the "software era" is that on both small and large scales, astonishing, almost unbelievable, "blunders" are made in software. ("biggest ever software blunders" always interesting reading!) Commented Dec 5 at 14:34
21

Attribution on copied code

What does this mean?

14
  • 2
    When we include SO/SE content that includes code, the "copy" button includes attribution as code comments. It's the same feature that was released about a month ago meta.stackexchange.com/questions/414573/… Commented Dec 2 at 15:19
  • 49
    @AshZade I hope you are aware that this is not sufficient as proper attribution when presenting the code. Not all users will copy the code in the first place, or use the copy button if they do so. Any code provided by SO users must be properly attributed directly in the LLM output. Providing attribution only in one specific use case (copying) done in a specific way is nowhere near sufficient. Commented Dec 2 at 15:23
  • 3
    @AshZade Attribution to what? The answer that the code came from? What about code that is not taken verbatim from a post, but has been slightly modified by the LLM: who's that attributed to? Commented Dec 2 at 15:30
  • 2
    @wizzwizz4 the source: url, poster, and licensing. Give it a try. We don't modify any content coming from SO/SE at all. Commented Dec 2 at 15:32
  • 3
    @wizzwizz4 "What about code that is not taken verbatim from a post" - not sure if that can happen in the case the LLM finds something on SO. But for the "AI" mode it defaults to when it doesn't find something suitable on SO, the answer is simple: there is no attribution at all, even if the answer is based on training material from SO or even if it directly regurgitates SO content. Commented Dec 2 at 15:35
  • 2
    @AshZade l4mpi's latest comment provides a clearer explanation of the background for my follow-up question. Commented Dec 2 at 15:37
  • 4
    Ah, it's about attribution from the LLM output? We only attribute content we retrieve from SO/SE. We're continually evaluating LLMs and their capabilities, and will implement attribution from their outputs when they provide it. Commented Dec 2 at 15:40
  • 32
    @AshZade quoting Prashant: "Attribution is non-negotiable", "All products based on models that consume public Stack data must provide attribution back to the highest-relevance posts that influenced the summary given by the model". I guess now that this left "alpha" and is on the main page, these quotes are proven to be a lie? Because the "AI" mode is using a LLM that incorporates SO content but does NOT provide any attribution. Commented Dec 2 at 15:43
  • 8
    @AshZade They never will provide it. I've been trying to talk to various decision-makers about alternative approaches that can (which could slot in as alternative backends with the same front-end), but the meetings and discussions always fall through. Commented Dec 2 at 15:43
  • 1
    @i4mpi it’s not a lie, it’s just misleading. They aren’t stating they will properly attribute sources, they are stating they will attribute “highest-relevance” posts, aka not actually attribution. It’s all fake and built purely for Stack's benefit, not ours. Commented Dec 2 at 16:11
  • 2
    @user400654 "Not actually attribution" is not attribution, therefore contradicts any claim of attribution. Not all false statements are lies, but it's certainly false (not merely misleading). Commented Dec 2 at 16:12
  • 5
    @user400654 I would argue "products based on models that consume public Stack data must provide attribution back to the highest-relevance posts" is pretty clear that some form of attribution absolutely must be provided. I interpret the "highest-relevance" part to mean if the LLM generates an answer based on 100 SE posts it could get away with only listing the top 3 and not all 100. But the current LLM-generated answers from the SO assistant do not contain any attribution even though it is public knowledge that OpenAI models incorporate SE data. Commented Dec 2 at 16:21
  • 3
    @i4mpi yes, that was probably their goal, for people to believe it meant more than it does. They know they can’t actually provide real attribution. Commented Dec 2 at 16:24
  • 1
    "What does this mean?" Mostly it means no attribution on anything else. Commented Dec 3 at 6:15
21

I've seen it said multiple times that the primary goal of this chat is to improve discoverability of existing answers, but if that's the case, why does the homepage now have two "search" boxes next to each other?

New homepage layout with the search and AI chat fields visible

I'm not sure I'd want the original search replaced entirely, but surely there's a better place for this that avoids pushing the actual content even further down the homepage.

6
  • 3
    Good observation! The search is used for specific tasks that include filters. Given we don't support that in AI Assist, we didn't want to degrade the search experience for those users. I don't see us removing traditional search but we will work on making the dual experiences less confusing. Commented Dec 2 at 16:17
  • how much do people browse posts via the listing in the homepage? as far as I know, it's a common joke that nobody knows the homepage even exists. Commented Dec 3 at 5:24
  • The upper one is obviously for "products search" and the lower one is for what you want to learn. Asking questions or searching the site for posts are no longer options you can pick. Commented Dec 3 at 9:04
  • 1
    @starball I do all the time. The secret is to have a LOT of favorite tags. Then one or two top questions on the home page become somewhat interesting. Commented Dec 3 at 9:06
  • 1
    @starball The homepage is still my go-to page for skimming through new(ish) questions, though it has certainly become less and less useful over the past year or two. Commented Dec 3 at 9:51
  • 1
    @starball the homepage is under-utilized and will be improved as part of the redesign project. Commented Dec 3 at 14:12
20

Super - it is just as broken as other LLMs on this question, which I also asked humans about:

(screenshot)

The human answer:

(screenshot)

4
  • 18
    Of course it's as broken as ChatGPT - it is ChatGPT. A chat bot which knows nothing about programming. Or how to train itself on open text documentation, apparently... Commented Dec 4 at 11:17
  • According to more feedback on the linked question, -Otime actually exists for armcc, but it's for faster execution time, not compilation time and not for clang/llvm as the AI Assist is claiming. Commented Dec 5 at 6:05
  • That's Keil though - a completely different compiler. Commented Dec 5 at 7:14
    God, that AI answer is so annoying and corporate-coded. Commented yesterday
19

Asking the AI agent for sources both works and doesn't work:

(screenshot)

Ideally, sources should be displayed automatically at the end of the response, so that users can see the context of the information (which has value for validating any AI response) and so that some measure of attribution is given. It would be really nice if there were a way for the user to say the information was useful, and for that to lead to the answer(s) receiving an up-vote in response. This way, the users providing the value will feel valued.

1
  • 2
    Thanks for filing this. If there are sources from SO/SE, they will be in the response. We need to fix the response to inputs asking for sources as it's not functionality that's supported. Commented Dec 9 at 13:23
16

Why?

How is it better than, say, ChatGPT?

Honestly I doubt it. I am sure this thing is dumb, slow, made by rookies and will quickly fall behind in competition with OpenAI, xAI, etc.

As for the announcement, I don't really care how this version is better than the previous one, nor am I keen to see how you are re-implementing basic AI chat features (sharing conversations? omg, so cool!) here. What exactly is the selling point of this feature, wasting precious dev time on this buggy and increasingly "experimental" website?


I visit this site mostly:

  • to ask a question
  • to entertain myself: answering easy questions myself, learning, checking hot meta.

I rarely google for answers myself and never, ever search for anything using the search here.

I am happy with the answers provided by, say, Grok. It uses SO under the hood, but also reddit, msdn, any link I ask it to check, etc.

I am looking forward to using AI agents rather than chats.

Why should I use the AI Assistant?

Or is this feature not for me, but... for whom?

9
  • 3
    I agree entirely that I don't want to see this on this site. But regarding falling behind OpenAI etc... this IS OpenAI, and it even says "Powered with the help of OpenAI" right on the page. To my understanding, it is basically a bit of custom software that first tries to find relevant stuff on SO and then generates an answer around that, and if that fails it simply falls back to an existing OpenAI LLM (IIRC in one of the updates it was mentioned they used a somewhat older model for that, not sure if it was updated in the meantime). Commented Dec 2 at 16:03
  • 2
    AI Assist isn't meant to compete with those tools; it's meant to improve the search & discovery experience on SO via a natural-language chat interface. I mentioned the scenarios it's designed for in another comment. Commented Dec 2 at 16:03
  • 3
    "How is it better than say ChatGPT?" It is ChatGPT. But worse - you don't get to use the latest model and it sometimes goes muppet because of jail conditions. Unlike actual ChatGPT. Commented Dec 2 at 16:04
  • 2
    @l4mpi we use a variety of models depending on the step in generating a response and we're using the latest model for the most important step. Commented Dec 2 at 16:04
  • 2
    @AshZade not sure what the "most important" step of the hallucinating BS generator is supposed to be, but what I'm referring to is the "AI" mode when nothing on SO is found. I remember that some SO staff, maybe even you, stated in one of the updates after the initial launch that this part used an older model in response to someone asking why it didn't have up to date content or got the year wrong. As I said this might have been changed in the meantime, but if so, I don't think this was communicated by SE. Commented Dec 2 at 16:08
  • 2
    We changed it in September. It's the first point here: stackoverflow.ai - rebuilt for attribution Commented Dec 2 at 16:10
  • 2
    @AshZade ok, I must have missed that because it was only added as an edit 3 weeks after it was posted. That will easily be missed by anyone not living on meta.SE who only sees the post as a featured entry in the sidebar. Commented Dec 2 at 16:13
  • 1
    I agree that we need a better changelog system given the number of iterations we do. Commented Dec 2 at 16:17
  • "Or is this feature not for me, but .. for who?" There might be other users who think differently. Actually, there always are. Commented Dec 3 at 6:18
16

Lack of guidance on reporting problematic chats shared with them

I created a chat, made a sharing link, and opened that link in a private window (i.e., signed out). While the conversation was fine, if it was problematic in a T&S sense (e.g., CoC violations), I don't see an obvious way to report it.

Random folks may not know of the Contact form, which wasn't linked there. There's Provide Feedback > T&S, but it's not obvious what that does (adds a tag in a database? files a support ticket to T&S? something else?).

Can you add guidance for signed-out users who received a sharing link for what to do if it's offensive/problematic? One path is just add a link to the Contact form.

2
  • 4
    Oh great catch! We have a sentence and links below the input field but because we don't show it on shared chats, there isn't a direct way to get to the Support page. We're on it! Thanks again. Commented Dec 2 at 17:56
  • 1
    Update on this: we now include the text at the bottom to contact support and how to reference a shared conversation. Thanks again for posting this. Commented Dec 4 at 14:48
15

When listing retrieved posts, please use the icon of the SE site where the retrieved post comes from instead of the generic SE icon:

4
  • 3
    Also: the screenshot in the question lists the site name, which would be nice too. Rep there also shows as (for example) 11829 - it might be nice if it was 11.8k instead. Commented Dec 2 at 22:58
  • 1
    @cocomac agreed, good point. Commented Dec 2 at 23:00
  • 3
    Great ideas, thank you both. We're on it! Commented Dec 3 at 14:19
  • 4
    ... I gave y'all this feedback months ago when you put this out on the Mod Team... how has this not been fixed yet? Commented Dec 3 at 16:10
13

  • What is "log in"? I am logged into SO.
  • What is this white rectangle constanly popping up for a short moment with busy indicated and then disappearing?

(screenshot)

3
  • 7
    Thanks for sharing this - we seem to have a bug affecting a subset of users where the connection is resetting. Our team is looking into it. Commented Dec 2 at 16:01
  • 2
    We’ve pushed a fix for the constant refreshing. Now we’re working on fixing the lack of authentication for a small subset of users. Commented Dec 2 at 20:34
  • 4
    This is now fixed. Sorry about that! Commented Dec 3 at 14:12
11

Community best practices for removing the AI Assist chatbox

The AI Assist box on the Stack Overflow homepage is unwanted by many users, who would like a way to remove it. In the comments of this question, @AshZade (a staff member) said "We don't have plans to provide toggles [to] turn off any AI Assist components." Therefore, we should have community best practices for removing the AI Assist chatbox.

Currently, I am using the uBlock Origin extension for Firefox with the following custom filters:

stackoverflow.com##.mb12.g8.d-flex
stackoverflow.com##.mb24.bg-white.w100.wmx7

This is fragile - a small change in the layout of the homepage would make the filters stop working. Of course, it's better than letting the AI chatbox remain.

I'm marking this answer as community wiki - please edit in more reliable ways of removing the AI Assist chatbox.
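
A possibly more stable filter (an untested sketch, based on the selector suggested in the comments; it assumes uBlock Origin's :has() procedural selector and that the widget keeps the #ask-stack-initial-question element id):

stackoverflow.com###mainbar > div:has(#ask-stack-initial-question)

Because it targets an id rather than layout utility classes, it should survive cosmetic class changes, though it will still break if the id is renamed.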


Alternative options:

  • Log out (@user400654) - the AI Assist box is only shown to logged-in users. Note that in the mobile view, the StackExchange icon with the log out option is not shown, so switch to desktop view to log out.
2
  • 1
    Logging out is quite effective Commented Dec 6 at 1:15
  • 2
    I hid it via an injected style, so I'm not sure how to format it for uBlock, but this is probably a more-stable selector if uBlock supports it: #mainbar > div:has(#ask-stack-initial-question) Commented Dec 7 at 18:55
10

I compared the kimi.com results to the Stack Overflow results for the query "How do I concatenate strings in python?"

Response from Stack Overflow's AI Assist - I have to share the entire conversation, so click the link to see it.

Response from Kimi + its "Thinking" section is pasted below.

Observations

  1. Kimi is much faster - even opening the shared conversation link from AI Assist had an annoying pause to it.
  2. Kimi does a better job at laying out all the different options with some explanation of when it makes sense to use them, where the AI assist result was hard to skim for a quick answer.
  3. AI Assist returns results that have recommendations from humans, but the direct quotes are addressing a specific question that is not my question, so it's more difficult to read and understand their relevance. For example "This explains your TypeError..." I didn't ask anything about a type error. The usefulness of the quoted answers suffers when they are presented without the context of the question.
  4. Stack Overflow's next steps are confusing. It's telling me "Replace commas with + or use f-strings for small concatenations." and nothing in my prompt has anything to do with commas.

Conclusions

I don't see any reason to use this flavor of AI for questions like "How do I concatenate strings in Python?" It's slow and the results have a lot of stuff in them irrelevant to my prompt. I would much rather use it as a search agent for Stack Exchange sites. The information in the AI Assist results may be better in some respects, but it is presented so poorly it is hard to absorb.


Thinking complete

The user is asking how to concatenate strings in Python. This is a straightforward programming question. I should provide clear, practical examples of different methods for string concatenation in Python. Let me think about the main methods:

  • Using the + operator
  • Using the += operator
  • Using the join() method (especially for lists of strings)

String Concatenation in Python

Here are the main methods to concatenate strings in Python, from most common to more specialized:

1. Using the + Operator (Most Common)

Works for joining a small number of strings.

first = "Hello"
last = "World"
full = first + " " + last  # "Hello World"

# Can also use with literals
greeting = "Hi" + " " + "there"  # "Hi there"

Important: All operands must be strings. This will fail:

# ❌ Error: age is an integer
message = "Age: " + 30  # TypeError

Fix it by converting to string:

# ✅ Works
message = "Age: " + str(30)  # "Age: 30"

2. += Operator (In-place Concatenation)

Adds to an existing string variable.

text = "Hello"
text += " "
text += "World"  # "Hello World"

3. join() Method (Best for Lists/Tuples)

Most efficient for concatenating many strings or an iterable.

words = ["Python", "is", "awesome"]
sentence = " ".join(words)  # "Python is awesome"

# With a separator
csv = ",".join(["a", "b", "c"])  # "a,b,c"
# No separator
combined = "".join(["a", "b", "c"])  # "abc"

Performance: Much faster than + in loops.

4. F-strings (Python 3.6+)

Cleanest for mixing strings and variables. Also does automatic conversion.

name = "Kimi"
age = 30
message = f"Hello, {name}. You are {age} years old."
# "Hello, Kimi. You are 30 years old."

Can include expressions:

result = f"2 + 2 = {2 + 2}"  # "2 + 2 = 4"

5. format() Method

Works in older Python versions.

name = "Kimi"
age = 30
message = "Hello, {}. You are {} years old.".format(name, age)

6. % Formatting (Legacy)

Old-style formatting, still seen in some codebases.

name = "Kimi"
age = 30
message = "Hello, %s. You are %d years old." % (name, age)

Best Practices

Recommended method by scenario:

  • Few strings (2-3): + or f-strings
  • Many strings in a loop: join()
  • Mixing variables/literals: f-strings
  • Building a large string gradually: join() or list + join()
  • Python < 3.6 compatibility: format()

Performance Tip: Never concatenate in a loop with +. Use a list and join():

# ❌ Bad - creates new string each iteration
result = ""
for s in large_list:
    result += s

# ✅ Good - much faster
result = "".join(large_list)

3
  • 1
    Thanks for sharing the convo and comparison. We'll continue refining for relevance and speed. Commented Dec 2 at 15:27
  • 9
    @AshZade To be clear - I didn't downvote this announcement. I think the feature has greatly improved from how it started out. I mean this as constructive feedback, not as a "just give up already". I'm doing my version of "multi-tasking" (doing multiple things poorly) and didn't do much word-smithing, so if something comes across as harsh, it wasn't intended that way. Commented Dec 2 at 16:28
  • 4
    @ColleenV I appreciate it! Thank you for thinking of my/our feelings :) Commented Dec 2 at 17:32
10

How long are chats stored?

I’m curious about both how long they remain viewable to users and how long they’re kept server-side. The UI only says “recent” chats, but I’m not sure whether “recent” means one day, one year, the last 50 chats, or something else.

(screenshot)

IMHO keeping all chats while enabling the user to delete some or all of their chats is the best option.

2
  • 4
    We don't have a strict limit yet but are working on adding the ability to delete chats and potentially other organizational tools. Commented Dec 3 at 14:07
  • 2
    @AshZade great, thanks! imho keeping all chats while adding the abilities to delete some or all chats is the best option. Commented Dec 3 at 18:46
9

Certain prompts take me to an unusable page:

(screenshot)

In the above screenshot, the "up arrow" button is disabled and there's nothing I can do. I can type in the "Ask me anything" box, but I cannot submit my input.

Other prompts yield complete nonsense:

(screenshot)

4
  • 3
    The behaviour in the first screenshot is triggered by our trust & safety rules. We've erred on the side of caution for GA but will review the messages that trigger it and tune the thresholds. The second screenshot is unexpected, looking into it. Thanks for sharing! Commented Dec 3 at 13:41
  • 8
    @AshZade - In the first screenshot, if I've triggered trust & safety rules, I don't think there should be a box that says "Ask me anything" with a non-functioning button. Wouldn't it be better to just leave it at the red box? It's confusing to offer an input field that doesn't work. Commented Dec 3 at 13:49
  • 2
    Great feedback, passing it on to the team. Commented Dec 3 at 13:51
  • 8
    This very feedback was already given during previous experiments. One of the most annoying things with the AI is that it will randomly break down and cry - and from there on the chat isn't recoverable. For example, this often happens when you spot obvious errors in the reply and then question how the output can be correct. Commented Dec 3 at 14:26
9

The name of this "feature" (which users didn't ask for, as was pointed out elsewhere) is wrong.

It should be called LLM assist and not "AI assist".

There is no intelligence in the statistical prediction that LLMs are used for, so it is wrong and misleading to call it "artificial intelligence" or "AI assist".

1
  • 2
    I think we're past the point of no-return with regard to misusing the term "AI", we'll just have to live with it being synonymous with LLMs and generative images for now. Commented Dec 8 at 16:56
9

Useful feature for some, surely. But to echo requests in other comments, some of us would like a way to turn this feature off or remove it from the top of our Q&A site pages.

But I mainly came here to point out this "clever" use of Terms and Conditions:

You retain full ownership of your AI Inputs, as applicable, but you grant to Stack Overflow a worldwide, perpetual, irrevocable, non-exclusive, sublicensable, right to use, copy, modify, distribute, and store the AI Inputs.

It really sounds like you own it and we only get to pretend we do. There should be a way to opt out of granting you full license to use our inputs.

At what point do you decide to tell it like it is, rather than hide behind clever manipulations of terms of service?

"You pay your mortgage and retain the deed to your home and full ownership, but you grant us a worldwide, perpetual, irrevocable, non-exclusive, right to use, cohabitate, modify, lease, and use for storage". Who really owns it?

4
  • 3
    The way to opt-out of granting the license to use your input is to not use the tool. If you use their tool, you agree to the terms. Similarly, there is no way to contribute content to the Q&A sites without licensing it to the company. This is pretty much boilerplate for online services these days. Not saying that’s good; it’s just how it is. Commented Dec 6 at 22:54
  • "Who really owns it?" Legally you. But you let them use the data. So they are on the same level as you in some regards. Something they could not do is giving others the same conditions. You could. Because you are the owner. And ColleenV is right. The only way out is not using the service. Commented Dec 7 at 10:04
    @NoDataDumpNoContribution only they can grant others a license, because of the sublicense clause. The only things they can't do are transfer the ownership to someone else, or sue someone for noncompliance with the license, unless it's a direct license from them. (If they license it to someone and then that person licenses it to someone who is noncompliant, they can't sue.) Commented Dec 7 at 13:08
    Well, I certainly won't, but the main points are that they should provide options to completely hide the chat from the top of the screen, or at least collapse it and get it out of the way. And they know full well that under 1% of people will read and understand those terms, and will put things in that they wouldn't want added to a training database and shared and sublicensed. I'm sure sooner than later they'll hear some requests and grant the ability to completely hide it, which should have been an option at release. Commented Dec 8 at 17:44
9

If developers are so predisposed to asking questions of an LLM nowadays rather than Stack Overflow, why do we need to present, on platform, our own version of an LLM that performs worse than effectively all the existing LLMs devs use, and pretend it is a better search when it isn't even good at searching? (It just took 45 seconds to return an on-site result that included two old, outdated answers and then a third answer with alternatives from the LLM that was correct.)

What is this even solving? How are people putting up with how slow this is enough to use it?

The whole purpose of anything released for Stack Overflow, the site, should be that it is a better solution than you can get elsewhere. But all this seems to do is take forever to load and then show off just how outdated the network's content is.

4
  • 2
    I've commented in a few places on who this is for and how we're not competing with other AI tools for all the things they can do. Here's one example: meta.stackexchange.com/a/415121/1258352 Commented Dec 8 at 16:56
  • 10
    I mean, it’s pretty clear that it isn’t competing with anything, it’s worse than all of them and is terrible at search. Can we skip ahead to the part where we actually work on site search? Commented Dec 8 at 17:16
  • @AshZade where in that link do you address competing with other AI tools? Commented Dec 9 at 9:16
  • 1
    @tenfour the first comment is one example but I talk about it across many comments on the post. Commented Dec 9 at 13:24
8

Improve styling on small viewport (mobile)

Please spend some time making this work on small viewport and mobile devices.


Mobile screenshot 1


  • Button should be centered

  • AI Assist title wraps

  • The following message can't be seen on touch devices

    No matching posts were found on the Stack Exchange Network, so this answer was generated entirely by AI

    It should also have a cursor: pointer to indicate there is some information behind it


Mobile screenshot 2


  • Buttons should be on a single line
  • Please reduce the amount of useless white space

General mobile points

  • After the LLM finishes the answer, the input box automatically gets focused, causing the keyboard to pop up and fill half your screen. Since the answer just finished, there's no need to focus the input field.
2
  • 3
    Going to pass this on to the team. Thank you again for reporting the things you're seeing. It means a lot. Commented Dec 3 at 15:07
  • 2
    We've deployed improvements for the mobile layout. Thanks again for submitting this. Commented Dec 8 at 16:49
8

A couple things.

First, can you share some information about how the LLM is being trained? Presumably some sort of RAG is being used to ensure that the output is as up-to-date as possible with questions that are being posted. Likewise, I would like to presume that a degree of tuning has taken place to ensure that the output is tailored to the audience and typical problem domain as opposed to most LLMs (i.e., "training on all of the things").
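
(For readers who haven't met the term: below is a minimal, purely illustrative Python sketch of what such a retrieval-augmented flow generally looks like - retrieve candidate posts, then generate an answer grounded in them, falling back to the bare model when nothing is found. The helper functions are hypothetical stand-ins, not Stack Overflow's implementation.)

# Purely illustrative sketch of a retrieval-augmented (RAG) flow; nothing here
# reflects Stack Overflow's actual pipeline, and the helpers are stand-ins.

def search_posts(query: str) -> list:
    # A real system would query a search index over Stack Exchange posts and
    # return the most relevant excerpts; this stub just returns canned text.
    return ["excerpt from a relevant answer", "excerpt from another answer"]

def generate_answer(query: str, context: list) -> str:
    # A real system would call an LLM with the retrieved excerpts included in
    # the prompt, so the answer (and its citations) are grounded in them.
    prompt = "Question: " + query + "\n\nRelevant posts:\n" + "\n".join(context)
    return "(answer generated from a prompt of %d characters)" % len(prompt)

def answer(query: str) -> str:
    context = search_posts(query)
    if context:
        return generate_answer(query, context)  # grounded in retrieved posts
    return generate_answer(query, [])           # fallback: plain LLM, no sources

print(answer("How do I concatenate strings in Python?"))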

Second, my typical use case for Stack Overflow is trying to find answers to fairly edge-case scenarios, and the LLM does not appear to be very well tuned for that, given the following scenario of testing for functionally zero in C++ with this question in mind (granted, search also fails here as well). Which is to say, who exactly is this for? Given that it's framed as learning something, I would have to presume it's biased towards the most commonly asked questions - which is fair - but that doesn't do much to improve the discoverability of long-tail programming questions.

I get the impression that Stack Overflow is trying to continue to improve on its long-standing reputation of not being for beginners (Reddit r/learnprogramming link). This is a commendable goal to be sure, but how the LLM is being integrated into the homepage is likely serving to alienate long-time users - query how many of the 2497 beta badge holders are still active - and the same goes for more experienced programmers looking for answers to the long-tail questions.

3
  • In the sample question I asked sources for, one answer was 9 years old and had 2 upvotes, the other was 4 years old and not voted for at all. Commented Dec 3 at 16:27
  • StackOverflow search is probably not the right thing to compare to. A traditional search engine might be something as a benchmark. Commented Dec 3 at 19:07
  • @NoDataDumpNoContribution True, but a lot of the search engines don't work quite as well as they used to before LLMs got bolted on everything and presumably SO is concerned with maintaining on site engagement for advertising revenue. Commented Dec 3 at 20:00
7

About a year ago I asked this - "Is Stack Exchange planning to force users to ask their questions to an LLM before allowing posting on the site?":

Inspired by this question, I was skimming through the recent blog post once again overhyping AI as The Answer To Life, The Universe And Everything Else.

Leaving aside the robo-review scenarios, I noticed one more weird implication in this picture:

"Stack Overflow Ecosystem: first ask to our LLM model and only after that be allowed to post

Here we can see what Stack Exchange describes as their "Ecosystem". In this scenario the company describes a quite specific flow.

  • A question is asked to the LLM.
  • If the LLM can answer (or can at least hallucinate an answer while tricking the user into thinking their problem is solved) then the process ends here.
  • Only if the LLM isn't able to provide an adequate answer is the question then posted for the community to answer.

Now, this looks quite similar to "Requiring users to check ChatGPT before posting their question", something that someone already tried to propose and was already met with quite a bit of backlash by the Meta community.

Is this the future the company is planning? Or is the image only meant to illustrate how some future additional tool will integrate with the site?

And obviously I got employee after employee swearing you would never dare, and that I am the bad one for assuming bad intent, etc. etc. etc....

Fast forward to today...

(screenshot)

As Lundin pointed out, this new "layout" seems to purposely "hide" the Ask Question button, while giving plenty of space to the new no-one-asked-for AI-based feature.

I already know that some white knight will point out that "you can still ask questions" and "this is not the same" but... please, spare me the devil's advocacy here. You know perfectly well what that question meant, and you knew back then too.

So, I ask you again, more bluntly this time.

Do you still claim it is not your purpose to put your poor AI-based solution above human-provided answers? Do you claim that hiding the Ask button while giving your new AI toy the central focus was just a coincidence? Do you claim you would not prefer users to first use your AI parrot, and only after that proved ineffective, finally move on to posting an actual question?

And more importantly... do you think we are blind to your actual goals?

10
  • 7
    Like... I get disliking the tool but the Ask Question button hasn't moved. The box is obnoxious and patronizing in how it's addressing the user but it's not in any way requiring you to use the tool before asking and the company hasn't even moved, let alone hidden, the ask question button. Commented Dec 3 at 16:07
  • 2
    @Catija claiming that was not true back then, claiming it is not true now. Picture me surprised... not. Riddle me this, Catija: how come Lundin came to the same conclusion - "the button must be hidden away out in the periphery along with the blog spam, to ensure that I ask my question to the AI rather than use the site as intended."? I suggest you go downvoting that too. Commented Dec 3 at 16:12
  • 6
    I think people have a lot of strong feelings about this and that's valid. I think that claiming that they are "hiding the Ask Button" while it's clearly visible in your own screenshot essentially invalidates everything you're saying in this post, meaning you're making yourself out to be unreasonable. If that's the choice you want to make, go for it - but you're not doing yourself any favors. Lundin's answer makes an argument by showing before/after, even if it had the AI Assist do it. It's a fair question and analysis without making these hyperbolic statements. Commented Dec 3 at 16:23
  • 2
    see previous comment @Catija. Feel free to defend the "poor billionaire company" as you wish, but I think it is quite clear to anyone who doesn't refuse evidence what is given focus in the new design and what is pushed to the side. A small clue: users tend to look at the center of the screen first. Commented Dec 3 at 19:12
  • 1
    I’m not sure what we can say to convince you that we don’t want fewer questions posted. We have a prompt after every response in AI Assist to post one if the response doesn’t help. We have an Ask Question button at the top right of the screen at all times, even if you scroll. Commented Dec 3 at 23:24
  • 1
    @AshZade The Ask button doesn't scroll. Maybe that's in a design y'all are developing but I don't see it hovering as I scroll the page down. Commented Dec 4 at 0:01
    @Catija as far as I can tell, it does indeed not scroll, but only because the whole page doesn't scroll... only the middle section does. Once below a certain window width, the Ask Question button goes away entirely (unless I'm missing it somewhere), but otherwise it sits stuck at the top right in a column by itself. If I wasn't looking for it, I wouldn't have recognized it as a button. Commented Dec 4 at 0:31
  • @user400654 Maybe I'm misunderstanding. The position of the Ask button (on SO) seems to be fixed in the upper right corner (full screen layout) such that when the page scrolls down, the button scrolls away, out of view. This contrasts with the top bar, which floats so that it's always visible. Ash's comment seems to state that the ask button is always there at the top right of the screen and it never scrolls away, much like the top bar. But as far as I can tell, that's not the case. When I scroll down, there is no ask button on the page any more. Commented Dec 4 at 17:37
    Are we talking about the same page? I’m talking about the page you get when you click on AI Assist. Commented Dec 4 at 17:48
  • @user400654 I'm talking about the homepage of SO, which is the page the answer's screenshot shows. Commented Dec 5 at 15:50
7

I re-ran my test from the "rebuilt for attribution" post (i.e. will the tool find an answer I know exists on Space Exploration StackExchange). Overall, this version seems better. It did find what I expected it to. It then ad-libbed...badly. Judge for yourself.

Overall, much worse for my needs than typing "RPOP space" into the existing StackExchange search bar (or, even better, using Google search instead, even in these dark times).

Good

The attribution to the post is correct, the quote seems correct (I checked and did not immediately detect discrepancies), and the relevant part of the post was quoted (the post was more expansive than just the question I asked "AI Assist").

Bad

  • The generated summary is reminiscent of things I used to write as a schoolchild when asked to summarize something I didn't understand in my own words. The phrasing is awkward (the parenthetical in the first sentence doesn't really follow from the words that precede it) and the second sentence is arguably incorrect, though I can see why it would be put that way.

    I think it just looks really, really bad when sat right next to a summary written by the world's foremost expert on the topic (at the time, at least).

    I'd expect there to be other topics throughout the network where exactly that would happen as well: you've got a direct quote from The Person, who graciously donated their time, inside a blockquote with slightly gray text, next to a higher-contrast (flat-black?) inaccurate restatement. It's almost like something out of a satirical documentary where the laugh in the scene is getting the expert to wonder why the hell they've allowed themselves to be put in this situation next to a buffoon.

  • The "key trade offs" section that follows is entirely bollocks. Honestly every time I look back at it I get angry. What weird text to invent. What an odd thing to put focus on.

  • Not that I honestly expect it from what is ultimately a glorified next-token machine, but the original post has (as it should!) a link back to the primary source. Those links are omitted in the blockquote, and though the primary source is correctly mentioned in "Further Learning", it's much worse attribution than I'd given in the sourced Q+A. Hyperlinks have existed for about as long as I have. We should use them.

  • I understand that this experiment is deployed on Stack Overflow right now, and maybe I'm misusing it by specifically asking it to look at other parts of the network (prompted by seeing a comment that said it did have training on that), but it digging up posts from Space Exploration SE and then inviting people to ask follow-up questions on Stack Overflow in the last (boilerplate?) sentence seems exemplary of the State of the Thing.

If this didn't help, you can ask your question directly to the community.

7


The close button from the Share menu does not work:

(screenshot)

3
  • 1
    🙌 Filed the bug Commented Dec 3 at 15:14
  • Also reported here on MSO: Close button doesn't close the Share chat dialog in AI Assist Commented Dec 4 at 15:35
  • 3
    A fix for this has been deployed. Thanks again for submitting it! Commented Dec 8 at 16:44
