SD2 1334 add deepseek with deepseek api #348

Merged
merged 16 commits into master
Apr 2, 2025

Conversation

juandavidcruzgomez
Contributor

@juandavidcruzgomez juandavidcruzgomez commented Mar 31, 2025

  • Change deepseek to use the deepseek provider (not together_ai)
  • Remove llm engine model autoload
  • Remove the llm engine autoloaded models file (models.json)
  • Remove some prints

Summary by CodeRabbit

  • Chores
    • Reduced unnecessary log outputs across several API operations for a cleaner runtime experience.
    • Removed outdated configuration elements and registration routines for language models.
    • Adjusted file inclusions in packaging to support new features and reorganize existing ones.
  • Refactor
    • Improved provider integrations by dynamically assigning provider names and streamlining parameter formatting for enhanced consistency.
  • New Features
    • Introduced a new JSON output structure for chat completions from language models.
  • Bug Fixes
    • Updated method signatures to enforce stricter input requirements for certain parameters in the Picsart API.


coderabbitai bot commented Mar 31, 2025

Walkthrough

This update removes several debugging print statements from multiple API modules and helper functions. In addition, the Deepseek API now employs a dynamic provider name and simplifies model parameter formatting. The changes also include the complete removal of the model registration function in the Litellm client and the deletion of a configuration file containing model parameters. No alterations were made to public entity declarations across these modifications.

Changes

File(s) Summary of Change
edenai_apis/apis/{amazon,api4ai,base64,google,microsoft,nyckel,openai,twelvelabs}/... and edenai_apis/tests/outputs.py Removed various print statements used for debugging and output logging.
edenai_apis/apis/deepseek/deepseek_api.py Refactored API initialization to use a dynamic provider name (self.provider_name) and simplified model parameter formatting in chat methods.
edenai_apis/llmengine/clients/litellm_client/__init__.py Removed the register_litellm_models function along with related import and logging code.
edenai_apis/llmengine/.../models.json Deleted the configuration file containing parameters for multiple language models.
.github/workflows/test.yml Updated commands for installing dependencies and running tests in the GitHub Actions workflow to use Poetry.
pyproject.toml Moved testing dependencies to a separate development group in the configuration file.
MANIFEST.in Adjusted file inclusions for packaging, adding support for LLM features and reorganizing text and OCR-related files.
edenai_apis/apis/microsoft/outputs/llm/chat_output.json Added a new JSON file representing the output structure from a chat completion with a language model.
edenai_apis/apis/picsart/picsart_api.py Updated method signature for image__background_removal to enforce stricter input requirements for the file parameter.

Poem

I'm a little rabbit, coding with delight,
Hoppin' past the debug prints that cluttered night.
Dynamic names and cleaner views now pave the way,
Models and logs have found a brighter day.
With a twitch of my nose and tail in a spin,
I cheer for smoother code—let the clean-up begin! 🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Lite

📥 Commits

Reviewing files that changed from the base of the PR and between ddae827 and 90d5d06.

📒 Files selected for processing (3)
  • .github/workflows/test.yml (1 hunks)
  • edenai_apis/tests/outputs.py (1 hunks)
  • pyproject.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • .github/workflows/test.yml
  • edenai_apis/tests/outputs.py
  • pyproject.toml



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
edenai_apis/apis/deepseek/deepseek_api.py (2)

50-50: Simplified model parameter but f-string is redundant.

The model parameter formatting has been simplified, which aligns with the PR objectives. However, the f-string formatting f"{model}" is redundant since it's equivalent to just using model.

-            model=f"{model}",
+            model=model,

104-104: Same redundant f-string formatting.

Similarly to the previous comment, the f-string formatting is unnecessary here and could be simplified.

-            model=f"{model}",
+            model=model,
edenai_apis/tests/outputs.py (2)

15-17: Duplicate Import Warning: Remove Redundancy
The function validate_all_provider_constraints is imported twice: once at lines 6–8 and again at lines 15–17. The second import can be removed for clarity and maintainability.

Proposed change:

-from edenai_apis.utils.constraints import (
-    validate_all_provider_constraints,
-)
🧰 Tools
🪛 Ruff (0.8.2)

16-16: Redefinition of unused validate_all_provider_constraints from line 7

Remove definition: validate_all_provider_constraints

(F811)


28-43: Refinement Suggestion: Prefer Logging over Print in Test Helper
Within the fake_cron_check function (especially at line 36), consider using a logging mechanism (e.g., Python's logging module) rather than print statements. This approach enables better control over verbosity—particularly in CI environments and when running large test suites.
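
A minimal sketch of that idea, assuming a helper named fake_cron_check with illustrative parameters (the real signature in edenai_apis/tests/outputs.py may differ):

import logging

logger = logging.getLogger(__name__)


def fake_cron_check(provider: str, feature: str, subfeature: str) -> bool:
    # logger.debug is silent unless the run enables DEBUG, so it does not
    # clutter CI output the way a bare print would.
    logger.debug("cron check skipped for %s/%s/%s", provider, feature, subfeature)
    return True

Verbosity can then be controlled per run, for example with logging.basicConfig(level=logging.DEBUG) or pytest's --log-cli-level=DEBUG option.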

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f5ab9fb and e06f382.

📒 Files selected for processing (12)
  • edenai_apis/apis/amazon/amazon_audio_api.py (0 hunks)
  • edenai_apis/apis/api4ai/api4ai_api.py (0 hunks)
  • edenai_apis/apis/base64/base64_api.py (0 hunks)
  • edenai_apis/apis/deepseek/deepseek_api.py (3 hunks)
  • edenai_apis/apis/google/google_translation_api.py (0 hunks)
  • edenai_apis/apis/microsoft/microsoft_image_api.py (0 hunks)
  • edenai_apis/apis/nyckel/nyckel_api.py (0 hunks)
  • edenai_apis/apis/openai/openai_api.py (1 hunks)
  • edenai_apis/apis/twelvelabs/helpers.py (0 hunks)
  • edenai_apis/llmengine/clients/litellm_client/__init__.py (0 hunks)
  • edenai_apis/llmengine/clients/llm_models/models.json (0 hunks)
  • edenai_apis/tests/outputs.py (1 hunks)
💤 Files with no reviewable changes (9)
  • edenai_apis/apis/base64/base64_api.py
  • edenai_apis/apis/microsoft/microsoft_image_api.py
  • edenai_apis/apis/api4ai/api4ai_api.py
  • edenai_apis/apis/amazon/amazon_audio_api.py
  • edenai_apis/apis/nyckel/nyckel_api.py
  • edenai_apis/apis/google/google_translation_api.py
  • edenai_apis/llmengine/clients/litellm_client/__init__.py
  • edenai_apis/apis/twelvelabs/helpers.py
  • edenai_apis/llmengine/clients/llm_models/models.json
🧰 Additional context used
🧬 Code Definitions (1)
edenai_apis/apis/deepseek/deepseek_api.py (1)
edenai_apis/llmengine/llm_engine.py (1)
  • LLMEngine (68-851)
🔇 Additional comments (4)
edenai_apis/apis/deepseek/deepseek_api.py (2)

23-23: Good use of class attribute for provider name.

This change removes the hardcoded provider name and uses the class attribute self.provider_name instead, which improves maintainability and consistency throughout the code.


27-27: Good consistency with provider naming.

Similar to the change above, using self.provider_name here ensures consistency between how the provider is referenced when loading settings and initializing the LLM client.
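
For illustration, a simplified sketch of the pattern (not the actual EdenAI code; the api_keys structure and every name other than provider_name is an assumption):

class DeepseekApi:
    provider_name = "deepseek"

    def __init__(self, api_keys: dict):
        # Settings are looked up through the class attribute rather than a
        # hardcoded "deepseek" string, so a rename touches only one line.
        self.api_settings = api_keys.get(self.provider_name, {})
        self.api_key = self.api_settings.get("api_key")


# Hypothetical usage:
api = DeepseekApi({"deepseek": {"api_key": "sk-..."}})

The same attribute can then be passed to the LLM client initialization mentioned above, keeping settings loading and client construction consistent.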

edenai_apis/tests/outputs.py (1)

61-62: Good Cleanup: Removal of Debug Print for args
The removal of the debug print statement that previously printed the args variable after validation is a positive change, cleaning up the test output.

edenai_apis/apis/openai/openai_api.py (1)

34-34: Removal of Debug Print Statement

The removal of the debugging print (likely printing self.api_settings) is a positive change because it prevents sensitive configuration details from being logged in production. If logging remains necessary, consider leveraging a proper logging framework configured with the appropriate log levels.

Comment on lines 35 to 37
self.api_key = self.api_settings.get("api_key")
self.headers = {"Authorization": f"Bearer {self.api_key}"}
self.api_key = self.api_settings["api_key"]

💡 Verification agent

❓ Verification inconclusive


Duplicate API Key Assignment Detected

  • The constructor in edenai_apis/apis/openai/openai_api.py assigns the API key twice:
    • First, via .get("api_key") on line 35, which may return None if the key is missing.
    • Then, directly through indexing on line 37, which overwrites the initial assignment.
  • Additionally, the temporary header construction on line 36 is redundant since a definitive header is built later (lines 41–45).

To improve clarity and prevent potential misconfigurations, consolidate the API key extraction with proper error handling. For example:

-        self.api_key = self.api_settings.get("api_key")
-        self.headers = {"Authorization": f"Bearer {self.api_key}"}
-        self.api_key = self.api_settings["api_key"]
+        self.api_key = self.api_settings.get("api_key")
+        if self.api_key is None:
+            raise ValueError("API key is missing in the settings.")
+        openai.api_key = self.api_key

After this refactoring, proceed with constructing the definitive headers at line 41.

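As a self-contained sketch of what the consolidated constructor could look like (the class body and header contents are simplified assumptions, not the actual file):

class OpenaiApi:
    def __init__(self, api_settings: dict):
        self.api_settings = api_settings
        # Single guarded extraction instead of two competing assignments.
        self.api_key = self.api_settings.get("api_key")
        if self.api_key is None:
            raise ValueError("API key is missing in the settings.")
        # Definitive headers are built once, after the key is known to exist.
        self.headers = {"Authorization": f"Bearer {self.api_key}"}

This keeps the failure mode explicit: a missing key raises immediately instead of silently producing an Authorization header containing "Bearer None".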


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
.github/workflows/test.yml (1)

47-48: Ensure Dependency Lock Consistency

Removing the existing poetry.lock file and regenerating it with poetry lock ensures that the build uses the latest dependency specifications. However, please verify that this behavior aligns with your project's goals for reproducible builds. Regenerating the lock file on every CI run might lead to non-deterministic dependency versions over time. Consider caching strategies or conditional regeneration if strict reproducibility is required.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e06f382 and 56875be.

📒 Files selected for processing (1)
  • .github/workflows/test.yml (1 hunks)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
edenai_apis/apis/picsart/picsart_api.py (1)

30-31: Type signature improvements, but consider implementation alignment.

The parameter type changes from Optional[str] = None to more specific types (str for file and str = "" for file_url) are good improvements that better express the API's requirements. However, there's a potential inconsistency:

  1. The type hint for file now indicates it's required (non-optional), but the implementation at lines 49-55 still checks if file and not file_url, which suggests file could be falsy.

  2. Consider either:

    • Adding validation at the beginning to ensure file is provided if it's truly required
    • Or updating the implementation to match the type signature's intent
def image__background_removal(
    self,
    file: str,
    file_url: str = "",
    provider_params: Optional[Dict[str, Any]] = None,
    **kwargs,
) -> ResponseType[BackgroundRemovalDataClass]:
    """
    Calls the Picsart Remove Background API.
    ...
    """
+    # Validate required parameter if file is truly required
+    if not file and not file_url:
+        raise ProviderException("No file or file_url provided")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Lite

📥 Commits

Reviewing files that changed from the base of the PR and between 25936be and ddae827.

📒 Files selected for processing (1)
  • edenai_apis/apis/picsart/picsart_api.py (1 hunks)

@juandavidcruzgomez juandavidcruzgomez merged commit 9110555 into master Apr 2, 2025
4 checks passed