Computer vision system that automatically extracts chess board positions and piece placements from images or real-time video, converting them into standard FEN (Forsyth-Edwards Notation) format. This project uses YOLO object detection and image processing techniques to recognize chess pieces and their positions on the board.
There are two versions of this project:
- Extract the chessboard and pieces from an image (.jpeg, .png, etc.).
- Extract the chessboard and pieces using an OAK-D Lite Camera (utilizing DepthAI for real-time video and inference).
A C++ version is also available in cpp-version/
For more projects, you can check my personal blog: https://visionbrick.com/
Version 1: Conversion from an image
Version 2: Real-time conversion with an OAK-D Lite camera (DepthAI)
There are two files for converting images to FEN format: the first uses a square-filling algorithm, and the second uses perspective transformation.
- The square-filling algorithm works better with non-angled (straight-on) images.
- Perspective transformation works better with images taken from different angles (see the sketch below).
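As a rough illustration of the perspective-transformation idea, here is a minimal sketch using OpenCV. The corner coordinates and file names are placeholders, and the actual scripts in this repo may differ in detail.

```python
import cv2
import numpy as np

# Hypothetical corner points of the detected chessboard in the source image,
# ordered top-left, top-right, bottom-right, bottom-left (placeholder values).
board_corners = np.float32([[112, 84], [598, 96], [612, 570], [98, 556]])

# Warp the board to a square 800x800 top-down view so each of the 64 squares
# occupies a 100x100 tile that is easy to index.
target_size = 800
target_corners = np.float32([[0, 0], [target_size, 0],
                             [target_size, target_size], [0, target_size]])

image = cv2.imread("test-images/example.jpg")  # placeholder test image path
matrix = cv2.getPerspectiveTransform(board_corners, target_corners)
top_down = cv2.warpPerspective(image, matrix, (target_size, target_size))

cv2.imwrite("warped-board.jpg", top_down)
```

Once the board is warped to a fixed grid, each square occupies a known tile, so a piece detection can be mapped to a board coordinate by checking which tile its box center falls into; that is the general idea behind assigning detections to squares.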
pip install -r requirements.txt
There is an additional step for installing PyTorch with GPU support. Please check the end of the requirements-gpu.txt file.
pip install -r requirements-gpu.txt
Note: The GPU version requires NVIDIA CUDA toolkit to be installed on your system. If you're unsure, start with the CPU version.
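After installing the GPU requirements, a quick way to confirm that PyTorch can see your GPU is the snippet below. This is a general PyTorch check, not something specific to this project.

```python
import torch

# Prints True if a CUDA-capable GPU is visible to PyTorch, otherwise False.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```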
For a containerized environment, you can use Docker to run the project without installing dependencies directly on your system.
Note: The Docker setup currently uses the CPU version (requirements.txt) only. GPU support is not included in the Docker configuration.
docker build -t dynamic-chess-board-demo:latest .
docker run --rm -it -p 8888:8888 -v ${PWD}:/app -w /app dynamic-chess-board-demo:latest jupyter lab --ip=0.0.0.0 --no-browser --allow-root
docker run --rm -it -v ${PWD}:/app -w /app dynamic-chess-board-demo:latest
- square_filling.py: Script for conversion using the square-filling algorithm.
- square_filling-step-by-step.ipynb: Jupyter notebook visualizing the entire process step by step.
- perspective_transformation.py: Script for conversion using perspective transformation.
- perspective_transformation-step-by-step.ipynb: Jupyter notebook visualizing the entire process step by step.
- chess-model-yolov8m.pt: Trained YOLOv8 model for chess piece detection (see the usage sketch after this list).
- extracted-data: Contains the results (converted image) and all extracted information (coordinates, board, etc.).
- test-images: Collection of images for testing purposes.
- example-results: Contains various images along with their corresponding results.
- Depthai-chess (folder): Real-time camera version built on the DepthAI library. It has not been updated to use the newer, improved algorithms used elsewhere in this project, but it still works; an update is planned.
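The bundled weights can be loaded with the Ultralytics YOLO API roughly as shown below. This is a hedged sketch: the image path and confidence threshold are placeholders, and the repo scripts may post-process detections differently.

```python
from ultralytics import YOLO

# Load the trained chess-piece detector shipped with the repo.
model = YOLO("chess-model-yolov8m.pt")

# Run inference on one of the bundled test images (placeholder file name).
results = model("test-images/example.jpg", conf=0.25)

# Each detection gives a bounding box, a confidence score, and a class id
# that maps to a piece type (e.g. white pawn, black knight, ...).
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    class_name = model.names[int(box.cls[0])]
    print(f"{class_name}: ({x1:.0f}, {y1:.0f}) -> ({x2:.0f}, {y2:.0f}), "
          f"conf={float(box.conf[0]):.2f}")
```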
Important Note: I didn't train the model extensively, because the first phase of this project focused on extracting the board and pieces dynamically, with changing positions and different boards. As a result, the model cannot predict all the pieces correctly, but the detected positions are nearly perfect. You can train better models and use them with this code; I plan to train better models in the future.
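If you want to train a stronger model and drop it into this code, the Ultralytics training API can be used roughly as follows. The dataset YAML, epoch count, and image size below are placeholders, not the values used for the bundled weights.

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint and fine-tune it on a chess-piece dataset.
model = YOLO("yolov8m.pt")

# "chess-dataset.yaml" is a hypothetical dataset description file listing the
# train/val image folders and the piece class names.
model.train(data="chess-dataset.yaml", epochs=100, imgsz=640)

# The resulting best.pt weights can then replace chess-model-yolov8m.pt.
```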
For more examples, check the example-results folder.
Example image converted using the square-filling algorithm.
Example of conversion using perspective transformation.
Example of the square-filling algorithm applied to an image.