Mask RT-DETR exported model fails to load with TRT in C++ deployment #9152
The error message "Conversion to JSON format is not supported" indicates an issue with the result type returned by the model, specifically the SegmentationResult. This suggests that FastDeploy might not be able to serialize the output in the expected JSON format.

Hi, based on the error message you provided, there are several issues:
Hello, I exported the model following the tutorial, and it runs fine in CUDA and CPU modes. The docs mention using the --trt flag when exporting RT-DETR series models for TRT; did the Mask-RT-DETR export code forget to handle this part? The filename corresponds to cuda12.3_cudnn9.0.0_trt8.6.1.6.
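For reference, a hedged sketch of what a TRT-aware export invocation looks like in PaddleDetection. The config and weights paths below are illustrative assumptions, and the exact flag spelling (`trt=True` passed via `-o`, referred to as the TRT/--trt option in this thread) should be checked against the RT-DETR export docs:

```shell
# Illustrative sketch only: config/weights paths are assumptions.
# Export Mask-RT-DETR with the TRT option enabled, as the RT-DETR
# docs recommend for TensorRT deployment.
python tools/export_model.py \
    -c configs/mask_rtdetr/mask_rtdetr_hgnetv2_l_6x_coco.yml \
    -o weights=output/mask_rtdetr_hgnetv2_l_6x_coco/model_final.pdparams \
    trt=True
```

If the Mask-RT-DETR export path ignores this option, attributes such as `dim` may be written with int64 element types that the TensorRT op teller does not accept, which would match the error below.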
Issue Confirmation (Search before asking)

Bug Component

Deploy

Describe the Bug
- Windows
- C++
- Export environment: Paddle 3.0b1 / PaddleDetection develop
- Inference environment: Paddle Inference 3.0.0 beta1

Loading the model on CPU: works.
Loading the model on CUDA: works.
Loading the model with TRT fails with the following error:
```
C++ Traceback (most recent call last):
Not support stack backtrace yet.
Error Message Summary:
InvalidArgumentError: paddle::get failed, cannot get value (desc.GetAttr("dim")) by type class std::vector<int,class std::allocator<int> >, its type is class std::vector<__int64,class std::allocator<__int64> >. (at C:\home\workspace\Paddle\paddle\fluid\inference\tensorrt\op_teller.cc:2329)
```
Environment

- Windows
- C++
- Export environment: Paddle 3.0b1 / PaddleDetection develop
- Inference environment: Paddle Inference 3.0.0 beta1

Bug description confirmation

Are you willing to submit a PR?