aidial_assistant/application/assistant_application.py (3 lines):
- line 200: # TODO: Add max_addons_dialogue_tokens as a request parameter
- line 245: # TODO: else compare the history size to the max prompt tokens of the underlying model
- line 280: # TODO: Add max_addons_dialogue_tokens as a request parameter

aidial_assistant/model/model_client.py (2 lines):
- line 127: # TODO: Use a dedicated endpoint for counting tokens.
- line 150: # TODO: Use a dedicated endpoint for discarded_messages.

aidial_assistant/chain/command_chain.py (1 line):
- line 128: # TODO: Limit the error message size. The error message should not exceed reserved assistant overheads.