# Model Card for Highsky7/YOLOTL
## Model Details

### Model Description
- Developed by: Highsky7
- Model type: Image segmentation
- License: MIT
### Model Sources

- Repository: https://github.com/Highsky7/YOLOTL
## Uses
This model is a lane recognition model created for use by Konkuk University Team 2 in The 4th International University Student EV Autonomous Driving Competition (2025-07-11). The accompanying `roboflow_final.py` code is a ROS-based driving node that uses this model to control the vehicle's steering angle in real time.
### Direct Use
This model can be used directly via the `ultralytics` library.
First, install the necessary libraries:
```bash
pip install ultralytics opencv-python huggingface_hub
```

The following is example code to download the model weights from Hugging Face and perform lane recognition on a single image.
```python
from ultralytics import YOLO
from huggingface_hub import hf_hub_download
import cv2

# Download the trained weights from the Hugging Face repository.
# The weight filename is an assumption; adjust it to the actual file in this repo.
weights_path = hf_hub_download(repo_id="Highsky7/YOLOTL", filename="best.pt")
model = YOLO(weights_path)

image_path = "path/to/your/test_image.jpg"
frame = cv2.imread(image_path)

# Run segmentation with a 0.6 confidence threshold
results = model(frame, conf=0.6)

# Overlay the predicted lane masks on the input image
result_plot = results[0].plot()
cv2.imshow("Lane Detection Result", result_plot)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
### Downstream Use
The output of this model (lane masks) can be used as a key input for a larger autonomous driving system. For example, the `roboflow_final.py` code performs the following downstream tasks:

- Generates a drivable center path based on the detected left/right lanes.
- Calculates a dynamic lookahead point based on the generated path and the vehicle's speed (throttle).
- Determines the steering angle using the Pure Pursuit algorithm, based on the lookahead point and the vehicle's current position (see the sketch after this list).
- Interfaces with other modules, such as triggering a forced RRT algorithm when an obstacle (cone) is detected on the driving path.
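For reference, the Pure Pursuit step can be sketched as below. This is a minimal illustration of the classic steering law, assuming a vehicle frame with x forward and y left; the function name and the example wheelbase value are hypothetical, and the actual logic in `roboflow_final.py` may differ.

```python
import math

def pure_pursuit_steering(lookahead_x, lookahead_y, wheelbase):
    """Steering angle from a lookahead point in the vehicle frame.

    Minimal sketch of the classic Pure Pursuit law, not the exact
    implementation in roboflow_final.py.
    """
    # Angle between the vehicle heading and the lookahead point
    alpha = math.atan2(lookahead_y, lookahead_x)
    # Straight-line distance to the lookahead point
    ld = math.hypot(lookahead_x, lookahead_y)
    # Pure Pursuit law: delta = atan(2 * L * sin(alpha) / ld)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)

# Example: a lookahead point 3 m ahead and 0.5 m to the left, 1.0 m wheelbase
print(pure_pursuit_steering(3.0, 0.5, 1.0))
```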
### Out-of-Scope Use
This model was designed for a specific purpose and environment. Its use in the following situations may be inappropriate or dangerous:

- General Road Driving: Performance is not guaranteed on public roads with different lane markings, lighting, and weather conditions than those of the competition track.
- Fully Autonomous System: This model only recognizes lanes. It cannot be used to build a complete autonomous system on its own, as it does not detect pedestrians, other vehicles, traffic lights, or signs.
- Changes in Camera Setup: The model's Bird's-Eye-View (BEV) transformation logic is calibrated for a specific camera position and angle, stored in the `bev_params_y_5.npz` file. If the camera's mounting position or angle is changed, the coordinate transformation will be inaccurate, severely degrading model performance.
## Bias, Risks, and Limitations
This model was designed for a specific purpose and environment, and thus has the following biases and limitations:
- Data Bias: The model was trained exclusively on data filmed on 'The International University Student EV Autonomous Driving Competition' track. Consequently, its performance may degrade significantly on public roads with different lane shapes, colors, or lighting conditions.
- Environmental Dependency: It is biased towards clear weather and specific lighting conditions. Its lane recognition accuracy may decrease in rain, darkness, or strong backlight.
- Hardware Dependency: The model's core logic, the Bird's-Eye-View (BEV) transform, is highly dependent on a fixed camera setup (position and angle) defined in `bev_params_y_5.npz`. Any change to the camera's position or angle will invalidate the coordinate system and cause the model to fail.
### Recommendations
To mitigate these risks and limitations, we recommend the following:
- Restricted Use: This model is intended for use in environments similar to the training data (i.e., the competition track). It is not suitable for general road driving or other projects.
- BEV Parameter Recalibration: If the vehicle's camera is reinstalled or its position is altered, you must recalibrate the parameters for the BEV transformation (see the sketch after this list).
- Safety Mechanisms: When applying this model to a physical vehicle, it is crucial to implement safety mechanisms such as a manual override system or an emergency stop system to handle prediction failures.
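For reference, recalibrating and saving the BEV transform with OpenCV can look like the minimal sketch below. The four source/destination point pairs and the `.npz` key name are assumptions for illustration; match them to your own calibration image and to whatever `roboflow_final.py` actually loads.

```python
import cv2
import numpy as np

# Pixel coordinates of four known track points in the camera image (assumed values)
src_points = np.float32([[420, 300], [860, 300], [1180, 700], [100, 700]])
# Their corresponding positions on the top-down (BEV) plane (assumed values)
dst_points = np.float32([[0, 0], [640, 0], [640, 640], [0, 640]])

# Homography mapping the camera view to the bird's-eye view
M = cv2.getPerspectiveTransform(src_points, dst_points)

# Save under the file the driving node reads; the key name "M" is an assumption
np.savez("bev_params_y_5.npz", M=M)
```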
## Training Details

### Training Data
- Dataset: A custom dataset of driving images captured on 'The International University Student EV Autonomous Driving Competition' track.
- Labeling: The left and right lane areas in the images were labeled with segmentation masks.
### Training Procedure

- Preprocessing: Original images were transformed into top-down Bird's-Eye-View (BEV) images using fixed parameters (`bev_params_y_5.npz`) before being used for training. This helps the model recognize lanes from a top-down perspective, facilitating distance calculation and path planning.
- Training Hyperparameters (see the sketch after this list):
  - model: YOLOv8 (segmentation model)
  - img_size: 640
  - conf_thres: 0.6
  - iou_thres: 0.5
  - epochs: [Enter the number of epochs used for training here]
  - batch_size: [Enter the batch size used for training here]
  - optimizer: [Enter the optimizer used, e.g., AdamW]
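For reference, a training run with the hyperparameters above via the `ultralytics` API could look like the minimal sketch below. The dataset YAML, the base weights variant, and the epoch/batch values are assumptions (the card leaves the latter as placeholders).

```python
from ultralytics import YOLO

# Start from a YOLOv8 segmentation checkpoint (the "n" variant is an assumption)
model = YOLO("yolov8n-seg.pt")

model.train(
    data="lane_dataset.yaml",  # hypothetical dataset config for the lane classes
    imgsz=640,                 # matches the img_size listed above
    epochs=100,                # placeholder; the card does not state the value
    batch=16,                  # placeholder; the card does not state the value
)
```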
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data
A separate dataset, captured from the same track environment but not used in training, was used for evaluation.
#### Metrics
Intersection over Union (IoU) was used as the primary metric to measure the overlap between the predicted lane masks and the ground truth masks.
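For reference, IoU between a predicted and a ground-truth binary mask can be computed as in the minimal sketch below (the function name is illustrative):

```python
import numpy as np

def mask_iou(pred_mask, gt_mask):
    """IoU between two binary lane masks; returns 0.0 when the union is empty."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union > 0 else 0.0
```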
### Results

- mIoU (mean IoU): [Enter your final mIoU score on the test dataset here]
## Technical Specifications

### Model Architecture and Objective
- Architecture: An Instance Segmentation model based on the YOLOv8 architecture.
- Objective: To detect lanes as objects within an image and to accurately segment (mask) the pixel area corresponding to each lane.
### Compute Infrastructure

- Hardware: [Enter the GPU (e.g., NVIDIA RTX 3080) or CPU used for training and inference here]
- Software: PyTorch, ultralytics, OpenCV, NumPy, ROS
## Citation
If you find this model or code useful, please consider citing it as follows:
```bibtex
@misc{YourTeamName_YOLOTL_2025,
  author       = {[Your Team Name or Author Names]},
  title        = {YOLOv8 based Lane Segmentation Model for EV Autonomous Driving Competition},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face repository},
  howpublished = {\url{[Paste the Hugging Face URL of your model here]}},
}
```
## Model Card Authors
Seungmin Lee
## Model Card Contact
- Email: [email protected]
- GitHub: https://github.com/Highsky7