Moderation for no-llmengine providers (image generation) #337

Conversation
Walkthrough

The changes add a new `@moderate` decorator, imported from the moderation utilities module and applied to the `image__generation` methods of five providers, enabling input moderation checks on text prompts before image generation.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ImageGenMethod
    participant Moderate
    Client->>ImageGenMethod: Call image__generation(text, resolution, num_images, model, **kwargs)
    ImageGenMethod->>Moderate: Perform moderation check (@moderate)
    Moderate-->>ImageGenMethod: Return validation result
    ImageGenMethod->>Client: Return generated image(s)
```
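To make the flow concrete, here is a minimal sketch of how the decorator is applied to a provider method (the class and method body are simplified placeholders; only the import path, the decorator, and the `image__generation` signature come from the reviewed files):

```python
from typing import Literal, Optional

# Import path as referenced later in this review
from edenai_apis.llmengine.utils.moderation import moderate


class ExampleProviderApi:  # hypothetical provider class, for illustration only
    @moderate  # the moderation check runs on the text prompt before the body executes
    def image__generation(
        self,
        text: str,
        resolution: Literal["256x256", "512x512", "1024x1024"],
        num_images: int = 1,
        model: Optional[str] = None,
        **kwargs,
    ):
        # Provider-specific generation call would go here.
        return {"items": []}
```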
Actionable comments posted: 0
🧹 Nitpick comments (1)
edenai_apis/apis/amazon/amazon_image_api.py (1)

463-516: Consider handling moderation rejection feedback to the user.

The decorator will likely block prohibited content, but there's no visible mechanism to inform users why their request might be rejected.
Consider adding error handling that catches moderation rejections and provides user-friendly feedback:
```diff
 @moderate
 def image__generation(
     self,
     text: str,
     resolution: Literal["256x256", "512x512", "1024x1024"],
     num_images: int = 1,
     model: Optional[str] = None,
     **kwargs,
 ) -> ResponseType[GenerationDataClass]:
     # Headers for the HTTP request
     accept_header = "application/json"
     content_type_header = "application/json"
+    try:
         # Body of the HTTP request
         height, width = resolution.split("x")
         model_name, quality = model.split("_")
         request_body = json.dumps(
             {
                 "taskType": "TEXT_IMAGE",
                 "textToImageParams": {"text": text},
                 "imageGenerationConfig": {
                     "numberOfImages": num_images,
                     "quality": quality,
                     "height": int(height),
                     "width": int(width),
                     # "cfgScale": float,
                     # "seed": int
                 },
             }
         )
         # Parameters for the HTTP request
         request_params = {
             "body": request_body,
             "modelId": f"amazon.{model_name}",
             "accept": accept_header,
             "contentType": content_type_header,
         }
         response = handle_amazon_call(
             self.clients["bedrock"].invoke_model, **request_params
         )
         response_body = json.loads(response.get("body").read())
         generated_images = []
         for image in response_body["images"]:
             base64_bytes = image.encode("ascii")
             image_bytes = BytesIO(base64.b64decode(base64_bytes))
             resource_url = upload_file_bytes_to_s3(image_bytes, ".png", USER_PROCESS)
             generated_images.append(
                 GeneratedImageDataClass(image=image, image_resource_url=resource_url)
             )
         return ResponseType[GenerationDataClass](
             original_response=response_body,
             standardized_response=GenerationDataClass(items=generated_images),
         )
+    except ProviderException as e:
+        if "moderation" in str(e).lower():
+            # Make moderation failures more user-friendly
+            raise ProviderException(
+                "Your request contains content that violates our content policy. Please modify your prompt and try again.",
+                code=400,
+            ) from e
+        raise
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- edenai_apis/apis/amazon/amazon_image_api.py (2 hunks)
- edenai_apis/apis/deepai/deepai_api.py (2 hunks)
- edenai_apis/apis/leonardo/leonardo_api.py (2 hunks)
- edenai_apis/apis/replicate/replicate_api.py (2 hunks)
- edenai_apis/apis/stabilityai/stabilityai_api.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test
🔇 Additional comments (12)
edenai_apis/apis/stabilityai/stabilityai_api.py (2)

24-24: Appropriate import added for moderation.

The import of the `moderate` decorator from the moderation utilities module is correctly added.

41-41: Good addition of the moderation decorator.

The `@moderate` decorator is appropriately applied to the `image__generation` method, which will enable input moderation checks on the text prompts before image generation.

edenai_apis/apis/replicate/replicate_api.py (2)
26-26: Appropriate import added for moderation.

The import of the `moderate` decorator from the moderation utilities module is correctly added.

166-166: Good addition of the moderation decorator.

The `@moderate` decorator is appropriately applied to the `image__generation` method, which will enable input moderation checks on the text prompts before image generation.

edenai_apis/apis/leonardo/leonardo_api.py (2)
17-17: Appropriate import added for moderation.

The import of the `moderate` decorator from the moderation utilities module is correctly added.

98-98: Good addition of the moderation decorator.

The `@moderate` decorator is appropriately applied to the `image__generation` method, which will enable input moderation checks on the text prompts before image generation.

edenai_apis/apis/amazon/amazon_image_api.py (3)
5-5: Appropriate import added for moderation.

The import of the `moderate` decorator from the moderation utilities module is correctly added.

462-462: Good addition of the moderation decorator.

The `@moderate` decorator is appropriately applied to the `image__generation` method, which will enable input moderation checks on the text prompts before image generation.
460-470: ❓ Verification inconclusive

Verify implementation of the moderate decorator.

The implementation of the `moderate` decorator is not visible in the provided files. It's important to ensure that it handles both synchronous and asynchronous methods properly. Let's check the implementation of the `moderate` decorator:

🏁 Script executed:

```shell
#!/bin/bash
# Look for the moderate decorator implementation
rg -A 10 -B 5 "def moderate" --glob "*.py"
```

Length of output: 4011
Action: Review the `moderate` decorator's sync vs. async support

After verifying the decorator's implementation in `edenai_apis/llmengine/utils/moderation.py`, note that the `moderate` decorator wraps the target function synchronously. In the wrapper, when `moderate_content` is flagged, it calls `check_content_moderation`, which internally uses `async_to_sync` to invoke asynchronous moderation logic. This setup works correctly for synchronous methods (such as the `image__generation` method in question).

However, be aware that if there's an expectation to decorate methods defined as asynchronous (i.e., using `async def`), the current synchronous wrapper won't await the coroutine. In such cases, consider implementing (or using) an asynchronous version of the decorator so that async methods are handled properly.
edenai_apis/apis/deepai/deepai_api.py (3)

16-16: Good addition of moderation capability.

The import adds access to the moderation utility, which will help filter potentially harmful content in image generation.

31-39: Well-implemented moderation pattern.

The `@moderate` decorator is cleanly applied to the image generation method without altering its signature or implementation logic. This is a good approach that separates concerns between core functionality and content moderation.

This implementation pattern is consistent with the changes to other API providers mentioned in the summary, ensuring uniform moderation across different image generation endpoints.
31-75: ❓ Verification inconclusive

Verify error handling with moderation.

The moderation decorator looks good, but it's worth verifying how errors from the moderation process are handled, especially in relation to the existing error handling in this method (lines 49-56 and 60-63).

🏁 Script executed:

```shell
#!/bin/bash
# Check how the moderate decorator handles errors and how it interacts with the existing error handling

# Look for the moderate implementation to understand its error handling
rg -A 10 "def moderate" --include="*.py"

# Check if there are any tests for moderation with DeepAI
rg "test.*DeepAI.*moderat" --include="*.py"
```

Length of output: 972
Verify moderation error handling integration

The decorator implementation appears to handle errors separately from the method's explicit error checks (lines 49-56 and 60-63). However, our automated search for the `moderate` decorator and its corresponding tests encountered issues with the search flags, so the exact behavior wasn't confirmed automatically. Please verify manually that errors raised by the moderation process are properly captured and handled in the overall response flow. Also, ensure that any errors originating from DeepAI's API (via the decorator or internal checks) do not conflict or cause unexpected behavior.

- Check the `moderate` decorator implementation to confirm its error propagation.
- Validate that unit tests exist (or add them) to cover scenarios where moderation errors occur.
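As a starting point, a hedged sketch of such a test with pytest (the patched target and the raised exception type are assumptions that must be checked against the real `moderation.py`):

```python
import pytest

# Assumed import path, based on the module referenced earlier in this review
from edenai_apis.llmengine.utils import moderation


class FakeApi:
    @moderation.moderate
    def image__generation(self, text, **kwargs):
        return "generated"


def test_moderation_rejection_propagates(monkeypatch):
    # Hypothetical: force the moderation check to reject every prompt.
    # Assumes the wrapper looks up check_content_moderation on the module
    # at call time; adjust the target if it binds the function differently.
    def reject(*args, **kwargs):
        raise Exception("moderation: content flagged")

    monkeypatch.setattr(moderation, "check_content_moderation", reject)

    with pytest.raises(Exception, match="moderation"):
        FakeApi().image__generation("some flagged prompt")
```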
🧰 Tools

🪛 Ruff (0.8.2)

52-52: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling. (B904)
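For reference, the pattern B904 asks for looks like this (a generic illustration, not the actual deepai code):

```python
def risky_call():
    raise ValueError("boom")


# Flagged by B904: the original cause is only implicitly chained
def wrapped_bad():
    try:
        risky_call()
    except ValueError:
        raise RuntimeError("wrapped error")


# Preferred: chain explicitly so tracebacks distinguish the original
# error from errors raised during exception handling
def wrapped_good():
    try:
        risky_call()
    except ValueError as err:
        raise RuntimeError("wrapped error") from err
```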