benchmark_app -d NPU -t 1 -m /home/intel/tusimple_res18.onnx fails with "LLVM ERROR: Failed to infer result type(s)"
(ov-llm-bench-env) yufei@ubuntu:~$ benchmark_app -d NPU -t 1 -m "/home/intel/tusimple_res18.onnx"
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2025.2.0-19070-2d384751f7c
[ INFO ]
[ INFO ] Device info:
[ INFO ] NPU
[ INFO ] Build ................................. 2025.2.0-19070-2d384751f7c
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(NPU) performance hint will be set to PerformanceMode.THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 181.05 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Model inputs:
[ INFO ] input (node: input) : f32 / [...] / [22,3,320,800]
[ INFO ] Model outputs:
[ INFO ] 2843 (node: 2843) : f32 / [...] / [22,192,78]
[Step 5/11] Resizing model to match image sizes and given batch
[ INFO ] Model batch size: 22
[Step 6/11] Configuring input of the model
[ INFO ] Model inputs:
[ INFO ] input (node: input) : u8 / [N,C,H,W] / [22,3,320,800]
[ INFO ] Model outputs:
[ INFO ] 2843 (node: 2843) : f32 / [...] / [22,192,78]
[Step 7/11] Loading the model to the device
[ERROR] 10:16:26.252 [vpux-compiler] Got Diagnostic at loc(fused<{name = "/heads/Slice_16", type = "StridedSlice"}>["/heads/Slice_16"]) : Sequence lengths input size 1 is not equal to batch axis dimension of data input 22
loc(fused<{name = "/heads/Slice_16", type = "StridedSlice"}>["/heads/Slice_16"]): error: Sequence lengths input size 1 is not equal to batch axis dimension of data input 22
LLVM ERROR: Failed to infer result type(s).
Aborted (core dumped)
Issue submission checklist
I'm reporting an issue. It's not a question.
I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
There is reproducer code and related data files such as images, videos, models, etc.
OpenVINO Version
2025.1
Operating System
Ubuntu 20.04 (LTS)
Device used for inference
NPU
Framework
ONNX
Model used
CLRNet
Issue description
benchmark_app -d NPU -t 1 -m /home/intel/tusimple_res18.onnx
fails while loading the model to the NPU with "LLVM ERROR: Failed to infer result type(s)." followed by a core dump.
NPU Driver Version: https://github.com/intel/linux-npu-driver/releases/tag/v1.16.0
Model Path: https://intel-my.sharepoint.com/:u:/p/yufei_wu/Eayj54X4nhhDoFBNRWY14bYB7Rvv9RCBg4deJMtkEaO7jg?e=wct2ZL
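For reference, the failing step can also be driven directly from the OpenVINO Python API. This is only a minimal sketch (it assumes the same model path and the standard openvino package) that should hit the same vpux-compiler diagnostic at the compile step, mirroring Step 7/11 of benchmark_app:

import openvino as ov

core = ov.Core()
# Read the same ONNX model passed to benchmark_app
model = core.read_model("/home/intel/tusimple_res18.onnx")
# Compiling for the NPU device is the step that fails (Step 7/11 in benchmark_app)
compiled_model = core.compile_model(model, "NPU")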
Step-by-step reproduction
benchmark_app -d NPU -t 1 -m /home/intel/tusimple_res18.onnx
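Since the diagnostic compares a sequence-lengths size of 1 against the batch axis dimension of 22, a related check (sketch only, assuming the input name "input" reported in the log; not a fix) is to reshape the model to batch 1 before compiling and see whether /heads/Slice_16 still fails:

import openvino as ov

core = ov.Core()
model = core.read_model("/home/intel/tusimple_res18.onnx")
# Reshape the single input "input" from [22,3,320,800] to batch 1 (diagnostic only)
model.reshape({"input": [1, 3, 320, 800]})
compiled_model = core.compile_model(model, "NPU")

If batch 1 compiles cleanly, the failure would appear to be tied to how the batch dimension reaches /heads/Slice_16.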
Relevant log output
See the full benchmark_app output above.