I converted GGUF files from the PyTorch files of SparseLLM/ReluLLaMA-7B and the predictor files from PowerInfer/ReluLLaMA-7B-Predictor (https://huggingface.co/PowerInfer/ReluLLaMA-7B-Predictor/tree/main). When I run the GGUF files I created by following the README instructions, the model generates strange, illogical output text. However, when I run the PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF model (https://huggingface.co/PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF), this issue does not arise. Could you give me some insight into this problem? Attached below are captures of the output text produced by the GGUF files I generated.
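As a sanity check, something like the following sketch could compare tensor metadata between my converted file and the official one to spot missing tensors or mismatched shapes/quantization types. It assumes the `gguf` Python package that ships with llama.cpp (`pip install gguf`); the file paths are placeholders for my local files, not the actual names.

```python
# Compare tensor names, shapes, and quantization types between two GGUF files
# using the `gguf` Python package (from llama.cpp). Paths are placeholders.
from gguf import GGUFReader

def tensor_summary(path):
    reader = GGUFReader(path)
    # Map tensor name -> (shape, quantization type name)
    return {
        t.name: (tuple(int(d) for d in t.shape), t.tensor_type.name)
        for t in reader.tensors
    }

mine = tensor_summary("./llama-7b-relu.mine.powerinfer.gguf")          # placeholder path
official = tensor_summary("./llama-7b-relu.official.powerinfer.gguf")  # placeholder path

# Tensors present in one file but not the other (e.g. missing predictor tensors)
print("only in mine:    ", sorted(set(mine) - set(official)))
print("only in official:", sorted(set(official) - set(mine)))

# Tensors whose shape or quantization type differs
for name in sorted(set(mine) & set(official)):
    if mine[name] != official[name]:
        print(name, mine[name], "vs", official[name])
```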