Classification of epilepsy with Keras: a Python case


This article is published on the public account of brain-computer interface learner Rose: Brain-Computer Interface Community (WeChat ID: Brain_Computer). QQ group: 903290195

Introduction to epilepsy

Epilepsy is a chronic brain dysfunction syndrome with many possible causes, and the most common brain disease after cerebrovascular disease. Seizures are directly caused by recurrent, sudden, excessive discharges of neurons in the brain, which produce intermittent central nervous system dysfunction. Clinically, epilepsy often manifests as sudden loss of consciousness, generalized convulsions, and mental disturbances. It causes great pain and physical and mental harm to patients, can be life-threatening in severe cases, and can impair the physical and mental development of child patients.

EEG is an important tool for studying the characteristics of epileptic seizures. It is a noninvasive biophysical examination, and the information it provides is not available from other physiological methods. EEG analysis mainly aims to detect abnormal discharge activity of the brain, including spike waves, sharp waves, and spike-and-slow-wave complexes. At present, clinicians visually inspect patients' EEG based on experience. This work is not only very time-consuming but also subjective: different experts may reach different judgments on the same recording, which increases the misdiagnosis rate. Therefore, automatic detection, recognition, and prediction techniques for the timely and accurate diagnosis of epileptic EEG, the localization of epileptic foci, and the reduction of EEG data storage are an important part of epileptic EEG research [1].

Dataset

Dataset: Epileptic Seizure Recognition dataset
Download address:
https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition

The dataset contains 11,500 samples of 178 data points each (178 data points = 1 second of EEG recording), and 11,500 targets with 5 categories: 1 denotes a seizure waveform, and 2 to 5 denote non-seizure waveforms.

Keras deep learning case

The code is adapted from:
http://dy.163.com/v2/article/detail/EEC68EH5054281P3.html

# Import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

from keras.models import Sequential 
from keras import layers 
from keras import regularizers
from sklearn.model_selection import train_test_split 
from sklearn.metrics import roc_curve, auc


# Load dataset
data = "data.csv"
df = pd.read_csv(data, header=0, index_col=0)
"""
//View the head and information of the dataset
"""
print(df.head())
print(df.info())

"""
Set label:
Converting target variables to epilepsy (column y encoded as 1) and non epilepsy (2-5)

Set the target variable of epilepsy to 1 and others to label 0
"""
df["seizure"] = 0 
for i in range(11500): 
    if df["y"][i] == 1: 
        df["seizure"][i] = 1 
    else:
        df["seizure"][i] = 0
# Plot one sample (row 11496) to inspect the waveform
plt.plot(range(178), df.iloc[11496,0:178]) 
plt.show()

"""
Data will be prepared in a form acceptable to neural networks.
First, analyze the data,
Then standardize the values,
Finally, create the target array
"""
# Create df1 holding only the waveform data points
df1 = df.drop(["seizure", "y"], axis=1)
# 1. Build the 11500 x 178 two-dimensional array
wave = df1.to_numpy(dtype=float)

#Print array shapes
print(wave.shape) 
#2. Standardized data
"""
Standardize the data so that the average value is 0 and the standard deviation is 1
"""
mean = wave.mean(axis=0) 
wave -= mean 
std = wave.std(axis=0) 
wave /= std 
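Note that the post standardizes the whole dataset before splitting, so the test set's statistics leak into the mean and standard deviation. A stricter (hypothetical) variant fits the statistics on the training split only and applies them to both splits; a minimal sketch on synthetic data standing in for `wave` and `target`:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the real `wave` (11500 x 178) and `target` arrays
rng = np.random.default_rng(0)
wave_demo = rng.normal(loc=5.0, scale=2.0, size=(200, 178))
target_demo = rng.integers(0, 2, size=200)

x_tr, x_te, y_tr, y_te = train_test_split(
    wave_demo, target_demo, test_size=0.2, random_state=42)

# Fit the statistics on the training split only, then apply them to both splits
mean = x_tr.mean(axis=0)
std = x_tr.std(axis=0)
x_tr = (x_tr - mean) / std
x_te = (x_te - mean) / std

print(x_tr.mean())  # ~0: training features are centered
```

With only 11,500 fairly homogeneous samples the difference is usually small, but the train-only version is the safer habit.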
#3. Create target array
target = df["seizure"].values

Output: (11500, 178)

"""
//Create model
"""
model = Sequential() 
model.add(layers.Dense(64, activation="relu", kernel_regularizer=regularizers.l1(0.001), input_shape = (178,))) 
model.add(layers.Dropout(0.5))
model.add(layers.Dense(64, activation="relu", kernel_regularizer=regularizers.l1(0.001))) 
model.add(layers.Dropout(0.5)) 
model.add(layers.Dense(1, activation="sigmoid")) 
model.summary()
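The parameter counts that `model.summary()` reports can be verified by hand: a Dense layer has inputs × units weights plus one bias per unit.

```python
# Dense layer parameters = inputs * units + units (biases)
dense_1 = 178 * 64 + 64   # first hidden layer
dense_2 = 64 * 64 + 64    # second hidden layer
dense_3 = 64 * 1 + 1      # sigmoid output layer
print(dense_1, dense_2, dense_3, dense_1 + dense_2 + dense_3)
# 11456 4160 65 15681
```

The dropout layers add no parameters, so the total of 15,681 matches the summary below.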


"""
//Using the train ﹣ test ﹣ split function of sklearn, 20% of all data is regarded as the test set and the rest as the training set
"""
x_train, x_test, y_train, y_test = train_test_split(wave, target, test_size=0.2, random_state=42)
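Since seizure samples are only 1 in 5 of the data (2,300 of 11,500), passing `stratify` to `train_test_split` keeps the class ratio identical in both splits; the original post does not do this, but it is a common refinement. A sketch on synthetic labels with the same 1:4 imbalance:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic imbalanced labels mimicking the 1:4 seizure ratio
y = np.array([1] * 20 + [0] * 80)
X = np.zeros((100, 178))

# stratify=y keeps the class ratio identical in the train and test splits
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
print(y_te.mean())  # 0.2 -- same seizure fraction as the full set
```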

# Compile the model
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["acc"])


"""
//Training model
epoch For 100,
batch_size For 128,
//Set 20% of data set as validation set
"""
history = model.fit(x_train, y_train, epochs=100, batch_size=128, validation_split=0.2, verbose=2)
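The returned `history` object records per-epoch metrics in `history.history`, which is handy for spotting overfitting as a gap between the training and validation curves. A sketch using a stand-in dict with the same structure (the real values come from the fit above):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for headless use
import matplotlib.pyplot as plt

# Stand-in with the structure of history.history from model.fit
hist = {"loss": [1.96, 1.58, 1.29], "val_loss": [1.68, 1.36, 1.11]}

epochs = range(1, len(hist["loss"]) + 1)
plt.plot(epochs, hist["loss"], label="training loss")
plt.plot(epochs, hist["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.savefig("loss.png")
```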


# Predict on the test set
y_pred = model.predict(x_test).ravel()
# Calculate ROC
fpr_keras, tpr_keras, thresholds_keras = roc_curve(y_test, y_pred) 
# Calculate AUC
AUC = auc(fpr_keras, tpr_keras)
# Draw ROC curve
plt.plot(fpr_keras, tpr_keras, label='Keras model (area = {:.3f})'.format(AUC)) 
plt.xlabel('False Positive Rate') 
plt.ylabel('True Positive Rate') 
plt.title('ROC curve') 
plt.legend(loc='best') 
plt.show()
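The ROC curve also yields a decision threshold for the sigmoid output. One common heuristic (not used in the original post) is Youden's J statistic: pick the threshold that maximizes TPR − FPR. A self-contained sketch on synthetic scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic ground truth and predicted probabilities
# (in the post these would be y_test and y_pred)
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])
y_score = np.array([0.1, 0.3, 0.4, 0.8, 0.7, 0.9, 0.2, 0.6])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
# Youden's J: threshold where TPR - FPR is largest
best = thresholds[np.argmax(tpr - fpr)]
print(best)  # 0.6 -- the scores are perfectly separable here
```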
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 64)                11456
_________________________________________________________________
dropout_1 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 64)                4160
_________________________________________________________________
dropout_2 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 65
=================================================================
Total params: 15,681
Trainable params: 15,681
Non-trainable params: 0
_________________________________________________________________
Train on 7360 samples, validate on 1840 samples
Epoch 1/100
 - 0s - loss: 1.9573 - acc: 0.7432 - val_loss: 1.6758 - val_acc: 0.9098
Epoch 2/100
 - 0s - loss: 1.5837 - acc: 0.8760 - val_loss: 1.3641 - val_acc: 0.9332
Epoch 3/100
 - 0s - loss: 1.2899 - acc: 0.9201 - val_loss: 1.1060 - val_acc: 0.9424
Epoch 4/100
 - 0s - loss: 1.0525 - acc: 0.9404 - val_loss: 0.9179 - val_acc: 0.9446
Epoch 5/100
 - 0s - loss: 0.8831 - acc: 0.9466 - val_loss: 0.7754 - val_acc: 0.9484
Epoch 6/100
 - 0s - loss: 0.7291 - acc: 0.9552 - val_loss: 0.6513 - val_acc: 0.9538
Epoch 7/100
 - 0s - loss: 0.6149 - acc: 0.9572 - val_loss: 0.5541 - val_acc: 0.9495
Epoch 8/100
 - 0s - loss: 0.5232 - acc: 0.9558 - val_loss: 0.4717 - val_acc: 0.9484
Epoch 9/100
 - 0s - loss: 0.4443 - acc: 0.9595 - val_loss: 0.4118 - val_acc: 0.9489
Epoch 10/100
 - 0s - loss: 0.3921 - acc: 0.9590 - val_loss: 0.3667 - val_acc: 0.9554
Epoch 11/100
 - 0s - loss: 0.3579 - acc: 0.9553 - val_loss: 0.3348 - val_acc: 0.9565
Epoch 12/100
 - 0s - loss: 0.3302 - acc: 0.9572 - val_loss: 0.3209 - val_acc: 0.9473
Epoch 13/100
 - 0s - loss: 0.3154 - acc: 0.9546 - val_loss: 0.2988 - val_acc: 0.9560
Epoch 14/100
 - 0s - loss: 0.2956 - acc: 0.9596 - val_loss: 0.2899 - val_acc: 0.9500
Epoch 15/100
 - 0s - loss: 0.2907 - acc: 0.9565 - val_loss: 0.2786 - val_acc: 0.9500
Epoch 16/100
 - 0s - loss: 0.2794 - acc: 0.9607 - val_loss: 0.2665 - val_acc: 0.9560
Epoch 17/100
 - 0s - loss: 0.2712 - acc: 0.9588 - val_loss: 0.2636 - val_acc: 0.9598
Epoch 18/100
 - 0s - loss: 0.2665 - acc: 0.9603 - val_loss: 0.2532 - val_acc: 0.9533
Epoch 19/100
 - 0s - loss: 0.2659 - acc: 0.9569 - val_loss: 0.2473 - val_acc: 0.9538
Epoch 20/100
 - 0s - loss: 0.2569 - acc: 0.9591 - val_loss: 0.2451 - val_acc: 0.9614
Epoch 21/100
 - 0s - loss: 0.2464 - acc: 0.9614 - val_loss: 0.2402 - val_acc: 0.9625
Epoch 22/100
 - 0s - loss: 0.2470 - acc: 0.9598 - val_loss: 0.2453 - val_acc: 0.9538
Epoch 23/100
 - 0s - loss: 0.2498 - acc: 0.9601 - val_loss: 0.2408 - val_acc: 0.9538
Epoch 24/100
 - 0s - loss: 0.2433 - acc: 0.9587 - val_loss: 0.2421 - val_acc: 0.9505
Epoch 25/100
 - 0s - loss: 0.2406 - acc: 0.9613 - val_loss: 0.2307 - val_acc: 0.9538
Epoch 26/100
 - 0s - loss: 0.2372 - acc: 0.9601 - val_loss: 0.2301 - val_acc: 0.9538
Epoch 27/100
 - 0s - loss: 0.2294 - acc: 0.9615 - val_loss: 0.2287 - val_acc: 0.9598
Epoch 28/100
 - 0s - loss: 0.2349 - acc: 0.9613 - val_loss: 0.2255 - val_acc: 0.9571
Epoch 29/100
 - 0s - loss: 0.2326 - acc: 0.9579 - val_loss: 0.2206 - val_acc: 0.9554
Epoch 30/100
 - 0s - loss: 0.2257 - acc: 0.9614 - val_loss: 0.2180 - val_acc: 0.9571
Epoch 31/100
 - 0s - loss: 0.2258 - acc: 0.9618 - val_loss: 0.2200 - val_acc: 0.9609
Epoch 32/100
 - 0s - loss: 0.2236 - acc: 0.9611 - val_loss: 0.2213 - val_acc: 0.9538
Epoch 33/100
 - 0s - loss: 0.2201 - acc: 0.9622 - val_loss: 0.2112 - val_acc: 0.9587
Epoch 34/100
 - 0s - loss: 0.2253 - acc: 0.9617 - val_loss: 0.2159 - val_acc: 0.9549
Epoch 35/100
 - 0s - loss: 0.2207 - acc: 0.9629 - val_loss: 0.2114 - val_acc: 0.9598
Epoch 36/100
 - 0s - loss: 0.2228 - acc: 0.9606 - val_loss: 0.2136 - val_acc: 0.9592
Epoch 37/100
 - 0s - loss: 0.2163 - acc: 0.9617 - val_loss: 0.2098 - val_acc: 0.9620
Epoch 38/100
 - 0s - loss: 0.2167 - acc: 0.9621 - val_loss: 0.2179 - val_acc: 0.9560
Epoch 39/100
 - 0s - loss: 0.2137 - acc: 0.9611 - val_loss: 0.2120 - val_acc: 0.9576
Epoch 40/100
 - 0s - loss: 0.2093 - acc: 0.9636 - val_loss: 0.2003 - val_acc: 0.9658
Epoch 41/100
 - 0s - loss: 0.2155 - acc: 0.9621 - val_loss: 0.2016 - val_acc: 0.9625
Epoch 42/100
 - 0s - loss: 0.2076 - acc: 0.9652 - val_loss: 0.1994 - val_acc: 0.9598
Epoch 43/100
 - 0s - loss: 0.2128 - acc: 0.9626 - val_loss: 0.2053 - val_acc: 0.9587
Epoch 44/100
 - 0s - loss: 0.2071 - acc: 0.9643 - val_loss: 0.1974 - val_acc: 0.9630
Epoch 45/100
 - 0s - loss: 0.2078 - acc: 0.9637 - val_loss: 0.2047 - val_acc: 0.9592
Epoch 46/100
 - 0s - loss: 0.2130 - acc: 0.9615 - val_loss: 0.2089 - val_acc: 0.9538
Epoch 47/100
 - 0s - loss: 0.2113 - acc: 0.9617 - val_loss: 0.2007 - val_acc: 0.9582
Epoch 48/100
 - 0s - loss: 0.2072 - acc: 0.9656 - val_loss: 0.2026 - val_acc: 0.9538
Epoch 49/100
 - 0s - loss: 0.2055 - acc: 0.9636 - val_loss: 0.2013 - val_acc: 0.9565
Epoch 50/100
 - 0s - loss: 0.2089 - acc: 0.9610 - val_loss: 0.1974 - val_acc: 0.9582
Epoch 51/100
 - 0s - loss: 0.2033 - acc: 0.9632 - val_loss: 0.1946 - val_acc: 0.9587
Epoch 52/100
 - 0s - loss: 0.2075 - acc: 0.9626 - val_loss: 0.1995 - val_acc: 0.9625
Epoch 53/100
 - 0s - loss: 0.2030 - acc: 0.9635 - val_loss: 0.1948 - val_acc: 0.9603
Epoch 54/100
 - 0s - loss: 0.2038 - acc: 0.9641 - val_loss: 0.1939 - val_acc: 0.9679
Epoch 55/100
 - 0s - loss: 0.2048 - acc: 0.9636 - val_loss: 0.1950 - val_acc: 0.9592
Epoch 56/100
 - 0s - loss: 0.2037 - acc: 0.9637 - val_loss: 0.1917 - val_acc: 0.9636
Epoch 57/100
 - 0s - loss: 0.2014 - acc: 0.9647 - val_loss: 0.1909 - val_acc: 0.9620
Epoch 58/100
 - 0s - loss: 0.1979 - acc: 0.9651 - val_loss: 0.1896 - val_acc: 0.9614
Epoch 59/100
 - 0s - loss: 0.2068 - acc: 0.9629 - val_loss: 0.1909 - val_acc: 0.9609
Epoch 60/100
 - 0s - loss: 0.1990 - acc: 0.9633 - val_loss: 0.1908 - val_acc: 0.9614
Epoch 61/100
 - 0s - loss: 0.1921 - acc: 0.9666 - val_loss: 0.1904 - val_acc: 0.9620
Epoch 62/100
 - 0s - loss: 0.2018 - acc: 0.9629 - val_loss: 0.1896 - val_acc: 0.9614
Epoch 63/100
 - 0s - loss: 0.2041 - acc: 0.9620 - val_loss: 0.1917 - val_acc: 0.9625
Epoch 64/100
 - 0s - loss: 0.2000 - acc: 0.9652 - val_loss: 0.1891 - val_acc: 0.9620
Epoch 65/100
 - 0s - loss: 0.1967 - acc: 0.9656 - val_loss: 0.1916 - val_acc: 0.9609
Epoch 66/100
 - 0s - loss: 0.1961 - acc: 0.9639 - val_loss: 0.1854 - val_acc: 0.9641
Epoch 67/100
 - 0s - loss: 0.1969 - acc: 0.9648 - val_loss: 0.1887 - val_acc: 0.9592
Epoch 68/100
 - 0s - loss: 0.1990 - acc: 0.9630 - val_loss: 0.1874 - val_acc: 0.9636
Epoch 69/100
 - 0s - loss: 0.1923 - acc: 0.9662 - val_loss: 0.1893 - val_acc: 0.9614
Epoch 70/100
 - 0s - loss: 0.1925 - acc: 0.9645 - val_loss: 0.1853 - val_acc: 0.9641
Epoch 71/100
 - 0s - loss: 0.1948 - acc: 0.9622 - val_loss: 0.1905 - val_acc: 0.9592
Epoch 72/100
 - 0s - loss: 0.1994 - acc: 0.9628 - val_loss: 0.1852 - val_acc: 0.9641
Epoch 73/100
 - 0s - loss: 0.1953 - acc: 0.9651 - val_loss: 0.1834 - val_acc: 0.9641
Epoch 74/100
 - 0s - loss: 0.1888 - acc: 0.9670 - val_loss: 0.1816 - val_acc: 0.9620
Epoch 75/100
 - 0s - loss: 0.1933 - acc: 0.9659 - val_loss: 0.1860 - val_acc: 0.9620
Epoch 76/100
 - 0s - loss: 0.1917 - acc: 0.9635 - val_loss: 0.1828 - val_acc: 0.9625
Epoch 77/100
 - 0s - loss: 0.1907 - acc: 0.9677 - val_loss: 0.1828 - val_acc: 0.9603
Epoch 78/100
 - 0s - loss: 0.1990 - acc: 0.9637 - val_loss: 0.1805 - val_acc: 0.9652
Epoch 79/100
 - 0s - loss: 0.1934 - acc: 0.9652 - val_loss: 0.1864 - val_acc: 0.9614
Epoch 80/100
 - 0s - loss: 0.1870 - acc: 0.9667 - val_loss: 0.1808 - val_acc: 0.9674
Epoch 81/100
 - 0s - loss: 0.1901 - acc: 0.9660 - val_loss: 0.1825 - val_acc: 0.9625
Epoch 82/100
 - 0s - loss: 0.1880 - acc: 0.9649 - val_loss: 0.1871 - val_acc: 0.9663
Epoch 83/100
 - 0s - loss: 0.1901 - acc: 0.9677 - val_loss: 0.1808 - val_acc: 0.9620
Epoch 84/100
 - 0s - loss: 0.1941 - acc: 0.9620 - val_loss: 0.1853 - val_acc: 0.9647
Epoch 85/100
 - 0s - loss: 0.1867 - acc: 0.9674 - val_loss: 0.1825 - val_acc: 0.9620
Epoch 86/100
 - 0s - loss: 0.1940 - acc: 0.9651 - val_loss: 0.1877 - val_acc: 0.9576
Epoch 87/100
 - 0s - loss: 0.1913 - acc: 0.9633 - val_loss: 0.1817 - val_acc: 0.9620
Epoch 88/100
 - 0s - loss: 0.1940 - acc: 0.9649 - val_loss: 0.1834 - val_acc: 0.9636
Epoch 89/100
 - 0s - loss: 0.1886 - acc: 0.9656 - val_loss: 0.1844 - val_acc: 0.9625
Epoch 90/100
 - 0s - loss: 0.1835 - acc: 0.9677 - val_loss: 0.1899 - val_acc: 0.9641
Epoch 91/100
 - 0s - loss: 0.1884 - acc: 0.9674 - val_loss: 0.1894 - val_acc: 0.9587
Epoch 92/100
 - 0s - loss: 0.1855 - acc: 0.9675 - val_loss: 0.1894 - val_acc: 0.9582
Epoch 93/100
 - 0s - loss: 0.1864 - acc: 0.9655 - val_loss: 0.1808 - val_acc: 0.9641
Epoch 94/100
 - 0s - loss: 0.1878 - acc: 0.9671 - val_loss: 0.1865 - val_acc: 0.9609
Epoch 95/100
 - 0s - loss: 0.1901 - acc: 0.9662 - val_loss: 0.1859 - val_acc: 0.9641
Epoch 96/100
 - 0s - loss: 0.1836 - acc: 0.9670 - val_loss: 0.1823 - val_acc: 0.9647
Epoch 97/100
 - 0s - loss: 0.1876 - acc: 0.9664 - val_loss: 0.1799 - val_acc: 0.9668
Epoch 98/100
 - 0s - loss: 0.1854 - acc: 0.9675 - val_loss: 0.1912 - val_acc: 0.9565
Epoch 99/100
 - 0s - loss: 0.1881 - acc: 0.9673 - val_loss: 0.1801 - val_acc: 0.9668
Epoch 100/100
 - 0s - loss: 0.1821 - acc: 0.9674 - val_loss: 0.1758 - val_acc: 0.9701

References and code:
[1] Study on EEG synchronization analysis and epileptic seizure prediction methods
[2] http://dy.163.com/v2/article/detail/EEC68EH5054281P3.html




Posted on Sun, 02 Feb 2020 01:50:46 -0500 by TimTimTimma