AI Tutor Permissions: alerts for multi-section assign dialog #70681
Conversation
facilitated_workshops.sort_by {|ws| ws.sessions.first.start}
end

def ai_chat_tools_dependency
I'm a bit concerned about this being a hefty query (it'd be an ActiveRecord query per unit in all the versions of the course here, right? For each course offering in the catalog when the catalog loads? :\ ), but I don't have a good mental model for why we need it on course_offering vs. only unit_group. When we're assigning from the catalog, we are only technically assigning one particular version, is that true?
Maybe we can talk this one through on a quick call so I can wrap my head around it.
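Roughly the query pattern I'm worried about, sketched with hypothetical method and association names rather than the actual implementation:

```ruby
# Hedged illustration of the concern: when the catalog serializes every course
# offering, each offering walks all of its versions, and each version's unit
# check is at least one more query.
def ai_chat_tools_dependency
  course_versions.any? do |version|
    version.units.any?(&:uses_ai_chat_tools?) # extra query (or more) per version
  end
end
```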
Thinking out loud... I see 4 initial general options:

1. Live with it on this PR and update the query count tests. This feels like only a very temporary option, but it may be worth considering given the tight timelines for broad tutor?
2. Maybe there's a way with scopes to cut down on the heftiness of the query? This could make it better, but it still requires hitting the db additional times.
3. Add boolean flags on course_offering or course_version that we update when units change. We'd read the booleans directly rather than hitting the db with more queries, but it would require work to sync up with unit editing and probably an initial backfill script (rough sketch after this list).
4. Compute once and cache the results. I don't know a ton about our current caching strategies, so I can't speak to how difficult or complex this would be without additional investigation.
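Very roughly what I'm picturing for option 3, sketched with hypothetical column and association names rather than our actual schema; it would still need the sync-on-edit hook and backfill mentioned above:

```ruby
# Hedged sketch of option 3: a denormalized boolean on course_version that we
# recompute whenever the version's units change, so catalog reads never touch
# the units table. "requires_ai_chat_tools" and "uses_ai_chat_tools?" are
# hypothetical names for illustration only.
class CourseVersion < ApplicationRecord
  def refresh_ai_chat_tools_flag!
    update!(requires_ai_chat_tools: units.any?(&:uses_ai_chat_tools?))
  end
end
```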
#2 is what I did for unit_groups - I originally had something similar to this with units.any? and then updated it to use scopes. I think that's harder here because we're looking through all the versions of the course - is there a way to get around that? We're only ever assigning one specific version of a course, and if we know that version we should know the unit_group to query.
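For reference, roughly the shape of the unit_group scope approach for a single known version, sketched with hypothetical scope and column names rather than the exact code:

```ruby
# Hedged sketch: one EXISTS-style query against the known unit_group instead of
# loading every unit across every version of the course.
class Unit < ApplicationRecord
  scope :with_ai_chat_tools, -> {where(uses_ai_chat_tools: true)}
end

class UnitGroup < ApplicationRecord
  def ai_chat_tools?
    units.with_ai_chat_tools.exists?
  end
end
```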
It's also maybe not that big a deal - maybe we can look at these query count tests. Do they time the query too? A little bit of testing might find that it's super fast anyway, in which case it's not an issue.
text: 'Learn more',
openInNewTab: true,
}}
icon={{iconName: 'ai-bot-solid', iconFamily: 'kit'}}
The icon color is a bit too light, but my in-progress PR has a third very similar alert in it, so I will refactor the three into one component that just takes the text as a prop and applies the style override to make the icon match the designs. This comment is mostly a note to self :P
Nokondi left a comment
I don't have anything to add beyond what Cassi has mentioned. Once you address those issues, I think this is ready to go.
end

- test_user_gets_response_for :show, response: :success, user: :teacher, params: -> {{course_name: @unit_group_regular.name}}, queries: 11
+ test_user_gets_response_for :show, response: :success, user: :teacher, params: -> {{course_name: @unit_group_regular.name}}, queries: 13
@cnbrenci here are the controller tests that I had to update - it increases the count by 2 for a few cases. Ideally we'd implement a solution to bring these counts back down, especially for the signed-out and student cases, since the prop is only used for the alerts (at least at the moment - one idea here), which are only seen by teachers. I can't think of a quick and tidy way to do this, though, so maybe we get this in at least to unblock you from combining our work and then talk through the right scope for an optimization? What do you think?
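One rough shape that optimization could take, sketched with hypothetical helper names rather than the real controller/serializer code: skip the lookup entirely unless the viewer can actually see the assign-dialog alerts.

```ruby
# Hedged sketch: only teachers see the multi-section assign alerts, so
# signed-out and student requests never pay for the extra queries.
def ai_chat_tools_dependency_for(user)
  return nil unless user&.teacher?

  compute_ai_chat_tools_dependency # the existing per-version lookup
end
```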
cnbrenci left a comment
Thanks a ton for picking up this work! 🙏🚀
Figma
This PR adds warnings that appear in the MultipleSectionsAssignerDialog. If the selected course requires the use of AI chat tools to complete, we show this alert:

If the selected course has AI chat tools available, but does not require them for course completion, we show this alert:

For courses that do not have AI chat tools, there's no change:
