Replies: 2 comments 1 reply
-
In your code, you have enable_mkldnn=True. This parameter enables MKL-DNN (oneDNN), which accelerates inference on CPUs only. Since you are using TensorRT (which runs on GPUs), you should set enable_mkldnn=False to avoid conflicts between the two acceleration paths.
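A minimal sketch of that advice as a helper, assuming you want to resolve the flags programmatically before constructing PaddleOCR (the function name resolve_accel_flags is illustrative, not part of the PaddleOCR API):

```python
def resolve_accel_flags(use_tensorrt: bool, enable_mkldnn: bool) -> dict:
    """Return PaddleOCR acceleration kwargs with the CPU/GPU conflict resolved.

    MKL-DNN (oneDNN) accelerates CPU inference only, so it is forced off
    whenever TensorRT (a GPU backend) is requested.
    """
    if use_tensorrt and enable_mkldnn:
        enable_mkldnn = False  # TensorRT implies GPU inference; MKL-DNN is CPU-only
    return {
        "use_gpu": use_tensorrt,       # TensorRT requires the GPU build
        "use_tensorrt": use_tensorrt,
        "enable_mkldnn": enable_mkldnn,
    }

# Applied to the flags from the question:
kwargs = resolve_accel_flags(use_tensorrt=True, enable_mkldnn=True)
# kwargs["enable_mkldnn"] is now False
# ocr = PaddleOCR(use_angle_cls=True, ocr_version="PP-OCRv4", lang="en", **kwargs)
```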
-
Choose CPU/GPU: If your computer has an NVIDIA® GPU, please make sure the following conditions are met and install the GPU version of PaddlePaddle:
- CUDA toolkit 11.8 with cuDNN v8.6.0 (for Paddle TensorRT deployment, TensorRT 8.5.1.7)
- CUDA toolkit 12.3 with cuDNN v9.0.0 (for Paddle TensorRT deployment, TensorRT 8.6.1.6)
- GPU with CUDA capability over 6.0
You can refer to the NVIDIA official documents for the installation process and configuration of CUDA, cuDNN, and TensorRT. Please refer to: https://www.paddlepaddle.org.cn/en/install/quick?docurl=/documentation/docs/en/develop/install/pip/windows-pip_en.html
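The compatibility matrix above can be captured as a small lookup table (a sketch: PADDLE_GPU_REQUIREMENTS and required_stack are illustrative names, and the versions are the ones stated above; check the linked install page for the latest matrix):

```python
# CUDA toolkit -> matching cuDNN / TensorRT versions, per the install guide above
PADDLE_GPU_REQUIREMENTS = {
    "11.8": {"cudnn": "8.6.0", "tensorrt": "8.5.1.7"},
    "12.3": {"cudnn": "9.0.0", "tensorrt": "8.6.1.6"},
}
MIN_CUDA_CAPABILITY = 6.0  # GPU must have CUDA capability over 6.0

def required_stack(cuda_version: str) -> dict:
    """Look up the cuDNN/TensorRT versions matching a CUDA toolkit release."""
    try:
        return PADDLE_GPU_REQUIREMENTS[cuda_version]
    except KeyError:
        raise ValueError(f"Unsupported CUDA toolkit for this guide: {cuda_version}")

# Example: deploying with CUDA 11.8 requires cuDNN 8.6.0 and TensorRT 8.5.1.7
stack = required_stack("11.8")
```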
-
I tried PaddleOCR with TensorRT but I got this error.
Here is my code:
ocr = PaddleOCR(use_angle_cls=True,
                ocr_version="PP-OCRv4",
                lang="en",
                use_gpu=True,
                enable_mkldnn=True,  # I am not sure what this param is used for?
                use_tensorrt=True)
Any help?