Category: tensorrt

I’m trying to run TensorRT inference in C++. Sometimes the code crashes when building a new engine or loading the engine from a file. It happens only occasionally (sometimes it runs without any problem). I follow the steps below to prepare the network:

```cpp
initLibNvInferPlugins(&gLogger.getTRTLogger(), "");
if (mParams.loadEngine.size() > 0) {
    std::vector<char> trtModelStream;
    size_t size{0};
    std::ifstream ..
```

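Crashes that happen only occasionally when loading an engine are often unchecked file I/O rather than TensorRT itself: if the stream fails to open, `tellg()` returns -1, or the read is short, the buffer later handed to `IRuntime::deserializeCudaEngine` is garbage. A minimal defensive sketch (the helper name `readEngineFile` is mine, not TensorRT API):

```cpp
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical helper: read a serialized engine file into a vector,
// failing loudly instead of crashing later on a bad stream or short read.
std::vector<char> readEngineFile(const std::string &path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file) {
        throw std::runtime_error("cannot open engine file: " + path);
    }
    const std::streamsize size = file.tellg();
    if (size <= 0) {
        throw std::runtime_error("engine file is empty or unreadable: " + path);
    }
    file.seekg(0, std::ios::beg);
    std::vector<char> buffer(static_cast<size_t>(size));
    if (!file.read(buffer.data(), size)) {
        throw std::runtime_error("short read on engine file: " + path);
    }
    return buffer;
}
```

The returned `buffer.data()` / `buffer.size()` pair is what you would then pass to `deserializeCudaEngine`.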

I have an issue while trying to run my code based on a yolov4.engine that I generated from my ONNX file, and I get this error:

```
[E] [TRT] INVALID_ARGUMENT: Cannot find binding of given name: num_detections
[E] [TRT] INVALID_ARGUMENT: Cannot find binding of given name: nmsed_boxes
[E] [TRT] INVALID_ARGUMENT: Cannot find binding of given name: nmsed_scores ..
```

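"Cannot find binding of given name" means `getBindingIndex` returned -1 for those names, which typically happens when the engine was exported without the NMS plugin that produces the `num_detections` / `nmsed_*` outputs. A useful first step is to print the names the engine actually exposes; with a real engine you would collect them via `engine->getNbBindings()` and `engine->getBindingName(i)`. A TensorRT-free sketch of the lookup (`findBinding` is a hypothetical helper mirroring what `getBindingIndex` does):

```cpp
#include <string>
#include <vector>

// Given the binding names an engine actually exposes, return the index of
// the wanted name, or -1 if it is absent (the same meaning as the -1
// returned by ICudaEngine::getBindingIndex).
int findBinding(const std::vector<std::string> &bindings, const std::string &wanted) {
    for (size_t i = 0; i < bindings.size(); ++i) {
        if (bindings[i] == wanted) return static_cast<int>(i);
    }
    return -1;
}
```

If the printed binding list shows raw head outputs (e.g. `boxes`, `confs`) instead of `nmsed_*` tensors, the NMS stage was not baked into the engine and has to be done in post-processing instead.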

When trying to use CMake I keep getting this error during generation (there are no issues with configuring). I can clearly see the file at the stated directory, and the path is written correctly:

```
CMake Error at CMakeLists.txt:52 (add_executable):
  Cannot find source file:
    ${MY_PROJECT_SOURCE_DIR}/yolov5.cpp
  Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm ..
```

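One common cause, assuming the variable was meant to come from the `project()` call: CMake variables are case-sensitive, and `project(my_project)` defines `my_project_SOURCE_DIR` (case preserved), not `MY_PROJECT_SOURCE_DIR`. The latter then expands to an empty string and `add_executable` looks for `/yolov5.cpp`. A small diagnostic/fix sketch:

```cmake
# Print what the variable actually expands to before using it.
message(STATUS "MY_PROJECT_SOURCE_DIR = '${MY_PROJECT_SOURCE_DIR}'")
message(STATUS "PROJECT_SOURCE_DIR    = '${PROJECT_SOURCE_DIR}'")

# The built-in variable avoids the case/name mismatch entirely.
add_executable(yolov5 ${PROJECT_SOURCE_DIR}/yolov5.cpp)
```

If the first `message` prints an empty string, the error message's path (`/yolov5.cpp` with nothing before it) is the giveaway that the variable never held the directory.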

I am trying to deserialize a TensorRT engine with custom plugins that I built in C++ on Windows. I am reading the binary (engine) file like this:

```cpp
char *gieModelStream{nullptr};
size_t size{0};
std::ifstream file(filename, std::ios::binary);
if (file.good()) {
    file.seekg(0, file.end);
    size = file.tellg();
    file.seekg(0, file.beg);
    gieModelStream = new char[size];
    file.read(gieModelStream, size);
    file.close();
}
```

and then deserializing it like ..

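Two pitfalls in that snippet besides the deserialization itself: the `new char[size]` buffer is never `delete[]`d, and for engines containing custom plugins the plugin creators must be registered (e.g. via `initLibNvInferPlugins` or `REGISTER_TENSORRT_PLUGIN`) before `deserializeCudaEngine` is called, or deserialization fails. A sketch replacing the raw buffer with an owned container (`readBlob` is my name, not TensorRT API):

```cpp
#include <fstream>
#include <iterator>
#include <string>

// Read the whole serialized engine into an owned, automatically-freed
// buffer instead of a raw new[] that is easy to leak.
std::string readBlob(const std::string &filename) {
    std::ifstream file(filename, std::ios::binary);
    return std::string(std::istreambuf_iterator<char>(file),
                       std::istreambuf_iterator<char>());
    // Then (after registering plugins):
    // runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}
```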

I have Python code running with TensorRT in Docker container 20.03, which has CUDA 10.2 and TensorRT 7.0.0:

```python
from __future__ import print_function
import warnings
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from imutils.paths import list_images
from keras import backend as K
from keras.callbacks import CSVLogger
from keras.layers import *
from ..
```


I need to deploy a yolov4 inference model, and I want to use onnxruntime with the TensorRT backend. I don’t know how to post-process the yolov4 detection results in C++. I have a sample written in Python, but I cannot find a C++ sample: https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov4 Is there a sample showing how to process yolov4 onnx ..

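The post-processing in the linked Python sample boils down to decoding the network outputs into boxes, filtering by confidence, and running non-maximum suppression. The decoding step depends on the exact ONNX export, but the filter + NMS stage is generic; a C++ sketch of that stage (the `Box` struct and thresholds are my assumptions, not the sample's API):

```cpp
#include <algorithm>
#include <vector>

// Candidate detection in corner format with a confidence score and class id.
struct Box { float x1, y1, x2, y2, score; int cls; };

// Intersection-over-union of two axis-aligned boxes.
float iou(const Box &a, const Box &b) {
    float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
    float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
    float inter = std::max(0.0f, ix2 - ix1) * std::max(0.0f, iy2 - iy1);
    float areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
    float areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
    float uni = areaA + areaB - inter;
    return uni > 0.0f ? inter / uni : 0.0f;
}

// Confidence filter followed by greedy per-class NMS, highest score first.
std::vector<Box> nms(std::vector<Box> boxes, float scoreThresh, float iouThresh) {
    boxes.erase(std::remove_if(boxes.begin(), boxes.end(),
                    [&](const Box &b) { return b.score < scoreThresh; }),
                boxes.end());
    std::sort(boxes.begin(), boxes.end(),
              [](const Box &a, const Box &b) { return a.score > b.score; });
    std::vector<Box> kept;
    for (const Box &cand : boxes) {
        bool suppressed = false;
        for (const Box &k : kept) {
            if (cand.cls == k.cls && iou(cand, k) > iouThresh) {
                suppressed = true;
                break;
            }
        }
        if (!suppressed) kept.push_back(cand);
    }
    return kept;
}
```

Greedy NMS is what the Python sample family uses conceptually; only the box decoding in front of it is model-export-specific.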

I’m setting up a new Windows 10 machine with an NVIDIA RTX 2080 Super installed, and I’ve unzipped and installed TensorRT-7.2.2.3.Windows10.x86_64.cuda-11.1.cudnn8.0.zip and cudnn-11.1-windows-x64-v8.0.5.39.zip into the LIBPATH. The project compiles fine, but attempting to run it fails because the nvrtc64_111_0.dll dependency cannot be resolved. Searching the C: drive for this file, or even for nvrtc64*.dll, finds nvrtc64_112_0.dll in ..


I’m a newbie to GPU programming with the CUDA Toolkit, and I have to write some code offering the functionality mentioned in the title. I’d like to paste the code to show exactly what I want to do:

```cpp
void CTrtModelWrapper::forward(void **bindings,
                               unsigned height,
                               unsigned width,
                               short channel,
                               ColorSpaceFmt colorFmt,
                               PixelDataType pixelType) {
    uint16_t *devInRawBuffer_ptr ..
```

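Whatever the rest of `forward` does, the device buffer behind each binding has to be sized from the tensor shape and element type before any `cudaMalloc` / `cudaMemcpy`. A CUDA-free sketch of that arithmetic (this `PixelDataType` enum is a stand-in for the asker's, whose definition isn't shown):

```cpp
#include <cstddef>

// Hypothetical mirror of the asker's pixel-type enum.
enum class PixelDataType { kUint8, kUint16, kFloat32 };

// Bytes per element for each pixel type.
std::size_t bytesPerPixel(PixelDataType t) {
    switch (t) {
        case PixelDataType::kUint8:   return 1;
        case PixelDataType::kUint16:  return 2;  // matches the uint16_t raw buffer
        case PixelDataType::kFloat32: return 4;
    }
    return 0;
}

// Total size of one image buffer; this is the byte count you would pass
// to cudaMalloc and cudaMemcpy for the corresponding device binding.
std::size_t bufferSizeBytes(unsigned height, unsigned width, unsigned channels,
                            PixelDataType t) {
    return static_cast<std::size_t>(height) * width * channels * bytesPerPixel(t);
}
```

Widening to `std::size_t` before multiplying matters: a 4K multi-channel buffer can overflow a 32-bit `unsigned` product.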