---
license: cc-by-nc-sa-4.0
language:
  - tr
  - en
tags:
  - brain
  - eeg
  - graph
  - epilepsy
  - detection
  - image-processing
  - neuroscience
  - multimodal
base_model:
  - eyupipler/bai
  - Neurazum/Vbai-DPA-2.0
pipeline_tag: video-text-to-text
---


# Xbai-Epilepsy 1.0 Sürümü (TR)

## Tanım

Xbai-Epilepsy 1.0, doğrudan EEG verisi alma ve EEG grafiğini gerçek zamanlı olarak ekran üzerinden izleme temelinde eğitilmiş multimodal bir modeldir. Hastanın epilepsi nöbeti geçirip geçirmediğini tespit etmek amacıyla eğitilmiştir. "bai" serisinin epilepsi versiyonu ile "Vbai" serisinin görüntü işleme yeteneği birleştirilmiştir. Böylelikle daha stabil bir doğruluk oranı yakalanmış ve kullanım kolaylığı sağlanmıştır.

## Kitle / Hedef

Öncelikli olarak hastaneler, sağlık ve bilim merkezleri olmak üzere herkes için kişisel kullanıma açıktır.

## Sınıflar

- Epilepsi Nöbeti Yok: Kişi nöbet anında değildir.
- Epilepsi Nöbeti Tespit Edildi!: Kişi nöbet geçiriyordur veya geçirmek üzeredir.

----------------------------------------

# Xbai-Epilepsy 1.0 Version (EN)

## Description

Xbai-Epilepsy 1.0 is a multimodal model trained to acquire EEG data directly and to monitor the EEG graph on screen in real time. It is trained to detect whether the patient is having an epileptic seizure. The epilepsy version of the "bai" series is combined with the image-processing capability of the "Vbai" series, which yields a more stable accuracy rate and easier use.
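The card does not spell out how the two branches are combined inside the released model. As a minimal, hypothetical illustration only, a late-fusion scheme would average each branch's softmax output (the actual model may instead fuse features within a single network; `fuse_predictions` and the class ordering are assumptions):

```python
import numpy as np

# Hypothetical late fusion: blend the class probabilities produced by the
# EEG branch ("bai") and the image branch ("Vbai"). Index 1 is assumed to
# be the seizure class.
def fuse_predictions(eeg_probs, image_probs, eeg_weight=0.5):
    eeg_probs = np.asarray(eeg_probs, dtype=float)
    image_probs = np.asarray(image_probs, dtype=float)
    fused = eeg_weight * eeg_probs + (1.0 - eeg_weight) * image_probs
    return fused / fused.sum()  # renormalize to a probability vector

fused = fuse_predictions([0.3, 0.7], [0.1, 0.9])
print(fused)                  # [0.2 0.8]
print(int(np.argmax(fused)))  # 1 -> seizure class
```

Weighting the EEG branch higher (e.g. `eeg_weight=0.7`) would favor the signal modality when the rendered graph is noisy.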

## Audience / Target

It is open to personal use by everyone, primarily hospitals and health and science centres.

## Classes

- No Epileptic Seizure: The person is not in a seizure.
- Epilepsy Seizure Detected: The person is having, or is about to have, a seizure.
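Since the model's output head has two softmax units, predictions map onto these classes by index. A small helper sketch (the class ordering is assumed from the list above):

```python
import numpy as np

# Index 0 and 1 are assumed to follow the class list above.
CLASSES = ["No Epileptic Seizure", "Epilepsy Seizure Detected"]

def label_prediction(probs):
    """Map a 2-way softmax output to a class name and its confidence."""
    idx = int(np.argmax(probs))
    return CLASSES[idx], float(probs[idx])

print(label_prediction(np.array([0.12, 0.88])))
# ('Epilepsy Seizure Detected', 0.88)
```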

----------------------------------------

# Kullanım / Usage

```python
import ast
import io

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.models import load_model
from PIL import Image


def build_cnn(input_shape=(224, 224, 3)):
    # Image branch: three Conv/MaxPool stages followed by a dense head.
    cnn_input = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(cnn_input)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(64, (3, 3), activation='relu', padding='same')(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation='relu')(x)
    x = layers.Dropout(0.5)(x)
    cnn_output = layers.Dense(2, activation='softmax')(x)
    return models.Model(inputs=cnn_input, outputs=cnn_output)


def build_lstm(input_shape=(178, 2)):
    # EEG branch: a single LSTM layer over each 178-sample window.
    lstm_input = layers.Input(shape=input_shape)
    x = layers.LSTM(64, activation='relu')(lstm_input)
    lstm_output = layers.Dense(2, activation='softmax')(x)
    return models.Model(inputs=lstm_input, outputs=lstm_output)


def bytes_to_image_array(byte_data):
    # Decode raw PNG/JPEG bytes into a (224, 224, 3) RGB array.
    image = Image.open(io.BytesIO(byte_data))
    image = image.resize((224, 224)).convert('RGB')
    image = np.array(image)
    if image.shape != (224, 224, 3):
        raise ValueError(f"Unexpected image shape: {image.shape}")
    return image


model_path = 'model/path'
multimodal_model = load_model(model_path)
print("Model loaded successfully.")

multimodal_model.summary()

data_path = 'image/parquet/data/path'
data = pd.read_parquet(data_path)

try:
    X_images = np.array([bytes_to_image_array(img['bytes']) for img in data['image']])
except ValueError as e:
    print(e)

eeg_data_path = 'epileptic/seizure/data/path'
eeg_data = pd.read_csv(eeg_data_path)

eeg_data.columns = eeg_data.columns.str.strip()
# The 'data' column stores each EEG window as a stringified list;
# ast.literal_eval parses it without the risks of eval.
eeg_data['data'] = eeg_data['data'].apply(ast.literal_eval)
X_time_series = np.array(eeg_data['data'].to_list())
y = eeg_data['label'].values

X_images = X_images / 255.0  # scale pixel values to [0, 1]

X_test_images = X_images[:32]
X_test_time_series = X_time_series[:32]
y_test = y[:32]

X_test_time_series = np.expand_dims(X_test_time_series, axis=-1)  # (None, 178, 1)
X_test_time_series = np.repeat(X_test_time_series, 2, axis=-1)    # (None, 178, 2)

test_loss, test_accuracy = multimodal_model.evaluate([X_test_images, X_test_time_series], y_test)
print(f"Test Loss: {test_loss:.4f}")
print(f"Test Accuracy: {100 * test_accuracy:.2f}%")

y_pred = multimodal_model.predict([X_test_images, X_test_time_series])

for i in range(len(y_test)):
    print(f"Sample {i+1}: Real Label: {y_test[i]}, Prediction: {np.argmax(y_pred[i])}")
```
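The parquet dataset above already stores rendered EEG graphs, but for live, screen-style monitoring an incoming EEG window must first be rasterized into the image branch's input format. A minimal sketch using Pillow (the function name and plotting style are illustrative, not the released pipeline):

```python
import numpy as np
from PIL import Image, ImageDraw

def eeg_trace_to_image(trace, size=(224, 224)):
    """Draw a 1-D EEG window as a black-on-white trace, scaled to [0, 1]."""
    t = np.asarray(trace, dtype=float)
    # Normalize amplitudes to [0, 1]; the epsilon guards against flat traces.
    t = (t - t.min()) / (t.max() - t.min() + 1e-8)
    w, h = size
    xs = np.linspace(0, w - 1, len(t))
    ys = (1.0 - t) * (h - 1)  # image y-axis grows downward
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    draw.line(list(zip(xs.tolist(), ys.tolist())), fill="black", width=2)
    return np.asarray(img, dtype=np.float32) / 255.0

# A synthetic 178-sample window, matching the LSTM branch's window length.
frame = eeg_trace_to_image(np.sin(np.linspace(0, 20, 178)))
print(frame.shape)  # (224, 224, 3)
```

The resulting array is already normalized, so it can be batched and fed to the image input alongside the raw window on the time-series input.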

## Python Sürümü / Python Version

3.9 – 3.13