-
Hello,

When I run my 1-channel model in my PC application (a Qt application) using ncnn, the results are fine. On Android, I have a shader that converts the Bitmap from BGRA (little endian) to a Y channel, which I feed to the ncnn model like this:

    void *data = (void *)env->GetDirectBufferAddress(image_buffer);

I then return the results, and after the results are returned my code converts Y back to BGRA to draw in the app. You can see that the whole white background looks like a net with holes.

Note: I am using the same shader and the same model with TensorFlow Lite in the same application, and the results are fine there.

P.S. I tried a 3-channel model as well, when I feed the data using

Please guide me. Thank you in advance.
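For context, here is a minimal sketch of how a tightly packed single-channel Y buffer is usually handed to ncnn on the JNI side. The blob names "data" and "output", and the normalization values, are placeholders for whatever the actual model expects; this is not the poster's code.

    #include <jni.h>
    #include <net.h>  // ncnn

    // image_buffer: direct ByteBuffer holding w*h bytes of Y (grayscale) data
    void run_gray_model(JNIEnv* env, jobject image_buffer, int w, int h, ncnn::Net& net)
    {
        const unsigned char* y = (const unsigned char*)env->GetDirectBufferAddress(image_buffer);

        // Copy the packed Y plane into an ncnn::Mat; PIXEL_GRAY keeps it single-channel.
        ncnn::Mat in = ncnn::Mat::from_pixels(y, ncnn::Mat::PIXEL_GRAY, w, h);

        // Placeholder normalization -- use whatever the training pipeline used.
        const float mean[1] = {0.f};
        const float norm[1] = {1 / 255.f};
        in.substract_mean_normalize(mean, norm);

        ncnn::Extractor ex = net.create_extractor();
        ex.input("data", in);       // placeholder input blob name
        ncnn::Mat out;
        ex.extract("output", out);  // placeholder output blob name
    }

If the Y plane has row padding, the from_pixels overload that takes an explicit stride can be used instead of the packed variant.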
-
This fixed the issue:

    opt.use_fp16_packed = false;
    opt.use_fp16_storage = false;
    opt.use_fp16_arithmetic = false;
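In ncnn these flags live on ncnn::Option, exposed as the net's opt member. A minimal sketch of applying the fix, assuming the options are set before the model is loaded (paths are placeholders):

    #include <net.h>

    ncnn::Net net;
    net.opt.use_fp16_packed = false;
    net.opt.use_fp16_storage = false;
    net.opt.use_fp16_arithmetic = false;  // disable fp16 paths, as in the fix above

    net.load_param("model.param");  // placeholder paths
    net.load_model("model.bin");

Setting the options before load_param/load_model matters, since the layers pick up the option values when the model is loaded.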
-
I made several attempts and ended up not getting any further, but I really wanted to test my trained model running on Android with ncnn. On Windows I exported a YOLO11 model that was trained in Colab. To test the converted model (YOLO --> ncnn) I ran a prediction on an image with two objects:

    C:\Users\guips\Downloads>yolo predict model=C:\Users\guips\Downloads\best_ncnn_model source="C:\Users\guips\Downloads\brasil\test129.jpg" save_txt=true
    image 1/1 C:\Users\guips\Downloads\brasil\test129.jpg: 640x640 2 Car, 103.0ms

When I try to run the model on the Android emulator, I get:

    E/libEGL (14031): called unimplemented OpenGL ES API
    Build fingerprint: 'google/sdk_gphone64_x86_64/emu64xa:16/BP22.250221.010/13193326:user/release-keys'

These are the first lines of the .param file and the last one:

    7767517

Last layer:

    Split splitncnn_27 1 2 313 314 315

I thank you in advance for your attention and help.
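For reference, a minimal sketch of loading an exported ncnn model with the C++ API on Android, with Vulkan compute disabled (GPU support on emulator images is limited). The file names model.ncnn.param / model.ncnn.bin and the blob names in0 / out0 are assumptions about a typical pnnx/Ultralytics export and should be checked against the actual .param file; this is not a fix for the crash above.

    #include <net.h>

    // Loads the model and runs one inference; paths and blob names are assumptions (see above).
    bool load_and_run(const char* param_path, const char* bin_path, const ncnn::Mat& in, ncnn::Mat& out)
    {
        ncnn::Net net;
        net.opt.use_vulkan_compute = false;  // stay on the CPU path

        if (net.load_param(param_path) != 0) return false;  // e.g. ".../model.ncnn.param"
        if (net.load_model(bin_path) != 0) return false;    // e.g. ".../model.ncnn.bin"

        ncnn::Extractor ex = net.create_extractor();
        ex.input("in0", in);                   // assumed input blob name from the .param file
        return ex.extract("out0", out) == 0;   // assumed output blob name
    }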