Upgrade to llama-3.3-70b-instruct #357
Conversation
Reverted to undo unnecessary changes
Walkthrough
This update changes the default model version for the Tenstorrent provider from "Meta-Llama-3.1-70B-Instruct" to "Meta-Llama-3.3-70B-Instruct" in both the chat output JSON and the text generation arguments. The chat output JSON is also updated with new content, a new interaction instance, and revised token counts reflecting the updated model and response.
Actionable comments posted: 0
🧹 Nitpick comments (1)
edenai_apis/apis/tenstorrent/outputs/llm/chat_output.json (1)
Line 11: Review sample content for unintended tokens
The assistant's response text includes words like "Robots" and "hyperlink", which look like artifacts or placeholders. Please verify and clean up any stray tokens in the sample content.
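One quick way to act on this nitpick is to scan the sample file for the flagged words. The helper below is hypothetical (not part of the repo); only the file path and the two flagged tokens come from this review:

```python
# Hypothetical helper: scan the sample output for the stray tokens
# the review flags. Not part of edenai_apis itself.
FLAGGED = {"Robots", "hyperlink"}

with open("edenai_apis/apis/tenstorrent/outputs/llm/chat_output.json") as f:
    text = f.read()

found = sorted(word for word in FLAGGED if word in text)
print("stray tokens found:", found or "none")
```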
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- edenai_apis/apis/tenstorrent/outputs/llm/chat_output.json (3 hunks)
- edenai_apis/features/text/generation/generation_args.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test
🔇 Additional comments (6)
edenai_apis/features/text/generation/generation_args.py (1)
Line 16: Update default Tenstorrent model version
The default model identifier for the Tenstorrent provider has been correctly updated to "tenstorrent/Meta-Llama-3.3-70B-Instruct", aligning with the PR objective to bump from version 3.1 to 3.3.
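For illustration, here is a minimal sketch of what the updated default could look like in generation_args.py. The surrounding argument structure is assumed, not copied from the actual file; only the model identifier string comes from this PR:

```python
# Sketch of provider defaults for text generation (structure assumed;
# only the Tenstorrent model string reflects this PR's change).
def generation_arguments() -> dict:
    return {
        "text": "Tell me about large language models.",  # illustrative prompt
        "temperature": 0.7,
        "max_tokens": 256,
        # Tenstorrent default bumped from Meta-Llama-3.1 to 3.3:
        "settings": {
            "tenstorrent": "tenstorrent/Meta-Llama-3.3-70B-Instruct",
        },
    }
```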
edenai_apis/apis/tenstorrent/outputs/llm/chat_output.json (5)
Line 2: Sample JSON ID update
The id field has been refreshed to a new unique identifier for this interaction sample. This is expected when regenerating example outputs.
Line 4: Refresh created timestamp
The created timestamp has been updated to the current epoch time, reflecting the new sample's generation time.
Line 5: Align model version in sample
The model field now matches the updated "tenstorrent/Meta-Llama-3.3-70B-Instruct" identifier, ensuring consistency between the code defaults and the sample output.
Line 23: Update prompt token count
The prompt_tokens value in completion_tokens_details has been increased from 264 to 340 to match the new prompt length; this aligns with the updated sample.
Line 40: Adjust total token usage
The top-level total_tokens count is now 406, reflecting the sum of the revised prompt and completion token usage.
This PR upgrades the model used for chat and generation to llama-3.3-70b-instruct.
Summary by CodeRabbit
- The default Tenstorrent model for chat and text generation is upgraded from Meta-Llama-3.1-70B-Instruct to Meta-Llama-3.3-70B-Instruct, with refreshed sample output and revised token counts.