Demo Tutorials
This document provides step-by-step guides for building demos with the IDUN Guardian system.
Hands-Free Spotify Control
This demo shows how to control Spotify hands-free using jaw clench and eye movement detections with our real-time classifiers.
Key Features:
Play, pause, and navigate playlists on Spotify through simple, intuitive jaw and eye gestures.
Works seamlessly with our real-time classifier system for a hands-free music experience.
Prerequisites and Setup
Spotify Developer Setup:
Go to the Spotify Developer Dashboard and log in.
Create a new app. In the app settings, add a Redirect URI, e.g., http://localhost:8888/callback, for OAuth. Enable the “Web API” and “Web Playback SDK” permissions under scopes.
Retrieve your Client ID and Client Secret from the app settings for use in authentication.
Environment Setup: Make sure you have the IDUN Guardian SDK and the spotipy library installed. You can install them using:
pip install idun-guardian-sdk
pip install spotipy
Steps
1. Recording parameters: set the recording parameters in the script (API key and device address).
my_api_token = "idun_XXXX..."
device_address = "XX:XX:XX:XX:XX:XX"
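The snippets in this demo also assume a few imports and two recording constants. Here is a minimal sketch; the constant values below are illustrative placeholders, not values from the original demo:

import asyncio

import spotipy
from spotipy.oauth2 import SpotifyOAuth
from idun_guardian_sdk import GuardianClient

RECORDING_TIMER = 60 * 10  # example: record for 10 minutes (adjust as needed)
LED_SLEEP = False          # example value; controls the device LED behaviour during recording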
2. Add Your Spotify Credentials: Replace the placeholders in the script with your Spotify credentials as shown below:
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    redirect_uri="http://localhost:8888/callback",
    scope="user-modify-playback-state user-read-playback-state"
))
3. Define Music Control Functions: Use the Spotify Web API to create functions for music control actions, such as play, pause, next track, and volume control.
def toggle_music():
    """ Pause or start the music playback """
    playback = sp.current_playback()
    if playback and playback['is_playing']:
        sp.pause_playback()
    else:
        sp.start_playback()

def next_track():
    """ Play the next track """
    sp.next_track()

def previous_track():
    """ Play the previous track """
    sp.previous_track()
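Note that the Spotify Web API only controls an already active playback device, so start_playback may fail if nothing is playing anywhere. If that happens, you can target a device explicitly; here is a minimal sketch using spotipy's devices endpoint (the helper function name is ours, not part of the original demo):

def start_on_first_device():
    """ Start playback on the first device Spotify reports as available. """
    devices = sp.devices().get("devices", [])
    if devices:
        sp.start_playback(device_id=devices[0]["id"])
    else:
        print("No Spotify playback device found; open Spotify on a device first.")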
4. Create a Handler for Real-Time Predictions: Implement a handler function to connect classifier outputs (jaw clench and eye movements) to your music control functions. This function will interpret real-time predictions and map them to Spotify control actions.
def pred_handler(event):
    """ Handler for the live prediction data """
    # Jaw clench -> toggle playback
    if event.message["predictionType"] == "JAW_CLENCH":
        print("Jaw clench detected", flush=True)
        toggle_music()
    # HEOG
    if event.message["predictionType"] == "BIN_HEOG":
        heog = event.message["result"]['heog']
        if heog == -1:
            print("Left HEOG detected", flush=True)
            previous_track()
        if heog == 1:
            print("Right HEOG detected", flush=True)
            next_track()
5. Connect to the IDUN Guardian Device and start data streaming: Create a Guardian client instance, subscribe to real-time predictions, and start recording.
if __name__ == "__main__":
    # Create client
    client = GuardianClient(api_token=my_api_token, address=device_address)

    # Subscribe to live predictions
    client.subscribe_realtime_predictions(jaw_clench=True, fft=False, bin_heog=True, handler=pred_handler)

    # Start the recording
    asyncio.run(
        client.start_recording(
            recording_timer=RECORDING_TIMER,
            led_sleep=LED_SLEEP,
            calc_latency=False
        )
    )
6. Run the Demo: Open Spotify, run the script, and control your music hands-free!
Source Code
Control Philips Hue Lights with our Eye Movement Classifier
This demo shows how to turn Philips Hue lights on and off by looking left and right.
Key Features:
Turn on and off Philips Hue lights by looking left and right.
Works seamlessly with our real-time classifier system to control Philips Hue state using brain power.
Prerequisites and Setup
Install the `IDUN Guardian SDK` and the Philips Hue library `phue`:
pip install idun-guardian-sdk
pip install phue
Philips Hue Bridge Setup:
Connect your Philips Hue Bridge to your network and find its IP address.
Press the button on the bridge to pair it with your network.
Retrieve the bridge IP address.
Get the bridge username:
from phue import Bridge

# Use the bridge IP address found in the previous step, and press the link
# button on the bridge shortly before running this so it can register a new user.
bridge = Bridge("YOUR_BRIDGE_IP")

try:
    bridge.connect()
    print(f"Connected! Save this username for future use: {bridge.username}")
except Exception as e:
    print(f"Error connecting to the Bridge: {e}")
Steps
Add Your Philips Hue Bridge Credentials:
BRIDGE_IP = 'YOUR_BRIDGE_IP'
BRIDGE_USERNAME = 'YOUR_BRIDGE_USERNAME'
Connect to the Philips Hue Bridge:
bridge = Bridge(BRIDGE_IP, username=BRIDGE_USERNAME)
bridge.connect()
Define Light Control Functions:
def handle_left_eye_movement():
    lamp = "Ceiling Lamp 1"
    bridge.set_light(lamp, 'on', True)
    time.sleep(2)
    bridge.set_light(lamp, 'on', False)

def handle_right_eye_movement():
    lamp = "Ceiling Lamp 2"
    bridge.set_light(lamp, 'on', True)
    time.sleep(2)
    bridge.set_light(lamp, 'on', False)
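The lamp names above ("Ceiling Lamp 1", "Ceiling Lamp 2") must match the names configured in your Hue app. If you are not sure what they are, a quick sketch to list the lights known to your bridge:

# List the lights the bridge knows about so you can copy the exact names
for light in bridge.lights:
    print(light.name, "-", "on" if light.on else "off")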
Create a Handler for Real-Time Predictions:
def handle_eye_movement(data):
    prediction = data.message['result']['heog']
    if prediction == -1:
        handle_left_eye_movement()
    if prediction == 1:
        handle_right_eye_movement()
Connect to the IDUN Guardian Device and start data streaming:
if __name__ == '__main__':
    client = GuardianClient()
    client.subscribe_realtime_predictions(bin_heog=True, handler=handle_eye_movement)
    asyncio.run(client.start_recording(recording_timer=RECORDING_TIMER))
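For completeness, this script assumes imports and a recording duration along these lines (a sketch; the constant value is a placeholder). Note that GuardianClient() is created here without arguments, so the API token and device address must be supplied another way; you can also pass them explicitly as in the Spotify demo (GuardianClient(api_token=my_api_token, address=device_address)).

import time
import asyncio

from phue import Bridge
from idun_guardian_sdk import GuardianClient

RECORDING_TIMER = 60 * 5  # example: record for 5 minutes (adjust as needed)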
Source Code
Visualize Real-Time Data and Predictions from the IDUN Guardian
This demo shows how to build a real-time visualizer for EEG and IMU data streams and real-time predictions from the IDUN Guardian device.
Key Features:
Visualizing live EEG and IMU data streams
Visualizing real-time predictions (quality score, jaw clench, eye movements, FFT)
Prerequisites and Setup
Python >= 3.9: download and install a compatible Python version from the official website.
Pipenv: install Pipenv using the following command:
pip install pipenv
Create the virtual environment: in the visualizer-demo folder (download link below), run the following commands to create a virtual environment and install the required packages:
mkdir .venv
pipenv install
Activate the virtual environment to run the scripts:
pipenv shell
You can exit the virtual environment using:
exit
Steps
Visualizer class:
To create a live data visualizer, we will build a PyQt6 QWidget class that displays the EEG and IMU data streams and the real-time predictions from the IDUN Guardian device. For the classifiers, we use images to represent the detected states (jaw clench, left and right HEOG).
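The class below assumes imports roughly like the following, plus the module-level constants defined in step 5 further down (a sketch; the exact module layout may differ in the downloadable source):

from collections import deque

import matplotlib.pyplot as plt
from matplotlib.backends.backend_qtagg import FigureCanvasQTAgg as FigureCanvas
from PyQt6 import QtCore, QtWidgets
from PyQt6.QtGui import QPixmap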
1. Initialize the visualizer: initialize the class attributes with zero-filled buffers for the EEG and IMU data, and set up the timer that refreshes the plots.
class EEGVisualizer(QtWidgets.QWidget):
    """
    Class for real-time visualization of EEG data.

    Attributes:
        plot_FFT (bool): Whether to plot the FFT data.
        signal_data (deque): Buffer for EEG signal data.
        fft_keys (list): Keys for the FFT data.
        fft_dict (dict): Buffer for FFT data.
        imu_keys (list): Keys for the IMU data.
        imu_dict (dict): Buffer for IMU data.
        timer (QTimer): Timer for updating the visualization.
        jaw_clench_detected (bool): Whether a jaw clench was detected.
        heog_left_detected (bool): Whether a left HEOG was detected.
        heog_right_detected (bool): Whether a right HEOG was detected.
        jaw_clench_hold_timer (int): Timer for holding the jaw clench detection.
        heog_left_hold_timer (int): Timer for holding the left HEOG detection.
        heog_right_hold_timer (int): Timer for holding the right HEOG detection.
    """

    def __init__(self, plot_FFT=False):
        """
        Initialize the EEGVisualizer object.

        Args:
            plot_FFT (bool): Whether to plot the FFT data.
        """
        super().__init__()

        # Parameters
        self.plot_FFT = plot_FFT

        # Initialize variables for real-time data
        self.signal_data = deque(maxlen=window_size_signal)  # buffer for EEG signal data
        self.signal_data.extend([0 for _ in range(window_size_signal)])
        self.jaw_clench_detected = False
        self.heog_left_detected = False
        self.heog_right_detected = False

        # Quality score
        self.quality_score = deque(maxlen=window_size_fft)
        self.quality_score.extend([0 for _ in range(window_size_fft)])

        # FFT data
        if plot_FFT:
            self.fft_keys = ['Delta', 'Theta', 'Alpha', 'Beta', 'Gamma']
            self.fft_dict = {key: deque(maxlen=window_size_fft) for key in self.fft_keys}
            for key in self.fft_keys:
                self.fft_dict[key].extend([0 for _ in range(window_size_fft)])

        # IMU data
        self.acc_keys = ['acc_x', 'acc_y', 'acc_z']
        self.magn_keys = ['magn_x', 'magn_y', 'magn_z']
        self.gyro_keys = ['gyro_x', 'gyro_y', 'gyro_z']
        self.imu_keys = self.acc_keys + self.magn_keys + self.gyro_keys
        self.imu_dict = {key: deque(maxlen=window_size_imu) for key in self.imu_keys}
        for key in self.imu_keys:
            self.imu_dict[key].extend([0 for _ in range(window_size_imu)])

        # Set up timer for updating the visualization
        self.timer = QtCore.QTimer()
        self.timer.setInterval(50)  # Update interval in milliseconds
        self.timer.timeout.connect(self.update_plot)
        self.timer.start()

        # Timers to keep track of when to reset each detection
        self.jaw_clench_hold_timer = 0
        self.heog_left_hold_timer = 0
        self.heog_right_hold_timer = 0

        self.initUI()
2. Set up the UI: create the user interface with the EEG and IMU data plots and the classifiers' images.
def initUI(self):
    """ Set up the user interface. """
    self.setStyleSheet("background-color: white; color: black;")
    main_layout = QtWidgets.QVBoxLayout()  # Main vertical layout for the entire window

    # First vertical box
    top_layout = QtWidgets.QHBoxLayout()
    left_layout = QtWidgets.QVBoxLayout()
    right_layout = QtWidgets.QVBoxLayout()

    # Set a title for left and right sections
    title_style_sections = """
        QLabel {
            font-size: 20px;
            font-weight: bold;
            color: #404040;
        }
    """

    #---------- LEFT: EEG ----------------------------------------------------------------
    title_signal = QtWidgets.QLabel("EEG Data")
    title_signal.setStyleSheet(title_style_sections)
    title_signal.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
    left_layout.addWidget(title_signal)

    plot_layout = QtWidgets.QVBoxLayout()
    if self.plot_FFT:
        self.figure, (self.ax_signal, self.ax_quality, self.ax_fft) = plt.subplots(3, 1, figsize=(12, 12))
    else:
        self.figure, (self.ax_signal, self.ax_quality) = plt.subplots(2, 1, figsize=(12, 8))
    self.canvas = FigureCanvas(self.figure)
    self.figure.tight_layout(pad=3.0)

    # EEG signal
    self.ax_signal.set_title("Filtered EEG Signal (µV)")
    self.line_signal, = self.ax_signal.plot(self.signal_data, color='#14786e', linewidth=1)
    self.ax_signal.set_ylim(-100, 100)
    self.ax_signal.set_xticks([])

    # Quality score
    self.ax_quality.set_title("Quality Score (%)")
    self.line_quality, = self.ax_quality.plot(self.quality_score, color='black', linewidth=1)
    self.ax_quality.set_ylim(-2, 102)
    self.ax_quality.set_xticks([])

    # FFT
    if self.plot_FFT:
        self.ax_fft.set_title("FFT (z-scores)")
        self.fft_lines = {}
        for key in self.fft_keys:
            self.fft_lines[key], = self.ax_fft.plot(self.fft_dict[key], label=f"{key} {brainwave_bands[key]}", color=band_colors[key], linewidth=1.5)
        self.ax_fft.set_ylim(-10, 10)
        self.ax_fft.set_xticks([])
        self.ax_fft.legend(loc='upper left')

    plot_layout.addWidget(self.canvas)
    left_layout.addLayout(plot_layout)
    top_layout.addLayout(left_layout)

    #---------- RIGHT: IMU ----------------------------------------------------------------
    title_imu = QtWidgets.QLabel("IMU Data")
    title_imu.setStyleSheet(title_style_sections)
    title_imu.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
    right_layout.addWidget(title_imu)

    self.figure_imu, (self.ax_acc, self.ax_magn, self.ax_gyro) = plt.subplots(3, 1, figsize=(12, 12))
    self.canvas_imu = FigureCanvas(self.figure_imu)
    self.figure_imu.tight_layout(pad=3.0)

    # Accelerometer
    self.ax_acc.set_title("Accelerometer Data (m/s²)")
    self.acc_lines = {}
    for key in self.acc_keys:
        axis = key.split('_')[1]
        self.acc_lines[key], = self.ax_acc.plot(self.imu_dict[key], label=axis, linewidth=1, color=imu_colors[axis])
    self.ax_acc.set_ylim(-20, 15)
    self.ax_acc.set_xticks([])
    self.ax_acc.legend(loc='upper left')

    # Magnetometer
    self.ax_magn.set_title("Magnetometer Data (µT)")
    self.magn_lines = {}
    for key in self.magn_keys:
        axis = key.split('_')[1]
        self.magn_lines[key], = self.ax_magn.plot(self.imu_dict[key], label=axis, linewidth=1, color=imu_colors[axis])
    self.ax_magn.set_ylim(-2, 2)
    self.ax_magn.set_xticks([])
    self.ax_magn.legend(loc='upper left')

    # Gyroscope
    self.ax_gyro.set_title("Gyroscope Data (°/s)")
    self.gyro_lines = {}
    for key in self.gyro_keys:
        axis = key.split('_')[1]
        self.gyro_lines[key], = self.ax_gyro.plot(self.imu_dict[key], label=axis, linewidth=1, color=imu_colors[axis])
    self.ax_gyro.set_ylim(-7, 7)
    self.ax_gyro.set_xticks([])
    self.ax_gyro.legend(loc='upper left')

    right_layout.addWidget(self.canvas_imu)
    top_layout.addLayout(right_layout)
    main_layout.addLayout(top_layout)

    #------------ Classifiers layout ------------------------------------------------------
    classifier_layout = QtWidgets.QVBoxLayout()
    image_layout = QtWidgets.QGridLayout()

    title_style = """
        QLabel {
            font-size: 20px;
            font-weight: bold;
            color: #404040;
        }
    """
    title_jaw_clench = QtWidgets.QLabel("Jaw Clench")
    title_eye_movements = QtWidgets.QLabel("Left HEOG")
    title_eye_movements_right = QtWidgets.QLabel("Right HEOG")
    title_jaw_clench.setStyleSheet(title_style)
    title_eye_movements.setStyleSheet(title_style)
    title_eye_movements_right.setStyleSheet(title_style)
    image_layout.addWidget(title_eye_movements, 0, 0, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)
    image_layout.addWidget(title_jaw_clench, 0, 1, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)
    image_layout.addWidget(title_eye_movements_right, 0, 2, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)

    self.jaw_clench_image = QtWidgets.QLabel()
    self.heog_left_image = QtWidgets.QLabel()
    self.heog_right_image = QtWidgets.QLabel()
    resting_jaw_clench_image = QPixmap("front_end/images/circle_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
    resting_heog_left_image = QPixmap("front_end/images/left_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
    resting_heog_right_image = QPixmap("front_end/images/right_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
    self.jaw_clench_image.setPixmap(resting_jaw_clench_image)
    self.heog_left_image.setPixmap(resting_heog_left_image)
    self.heog_right_image.setPixmap(resting_heog_right_image)
    image_layout.addWidget(self.heog_left_image, 1, 0, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)
    image_layout.addWidget(self.jaw_clench_image, 1, 1, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)
    image_layout.addWidget(self.heog_right_image, 1, 2, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)

    classifier_layout.addLayout(image_layout)
    main_layout.addLayout(classifier_layout)
    self.setLayout(main_layout)
3. Create functions to update the data buffers: the first function handles the live insights (EEG and IMU) while the second handles the predictions (FFT and classifiers). These functions will then be called from the main script to update the visualizer with the live data.
def update_signals_visuals(self, signal, imu_data):
    """
    Update the EEG signal and IMU data buffers using live insights.

    Args:
        signal (list): New EEG signal data.
        imu_data (dict): New IMU data.
    """
    # Update EEG data
    self.signal_data.extend(signal)

    # Update IMU data
    for sample in imu_data:
        for key in self.imu_keys:
            self.imu_dict[key].append(sample[key])

def update_predictions_visuals(self, quality_score, new_jaw_clench, new_heog_left, new_heog_right, fft_dict):
    """
    Update the EEG prediction data using real-time predictions.

    Args:
        quality_score (float): New quality score.
        new_jaw_clench (bool): New jaw clench detection.
        new_heog_left (bool): New left HEOG detection.
        new_heog_right (bool): New right HEOG detection.
        fft_dict (dict): New FFT z-scores data.
    """
    # Get current time for classifier hold timers
    current_time = QtCore.QTime.currentTime().msecsSinceStartOfDay()

    # Update detection hold timers
    if new_jaw_clench:
        self.jaw_clench_detected = new_jaw_clench
        self.jaw_clench_hold_timer = current_time + hold_duration
    if new_heog_left:
        self.heog_left_detected = new_heog_left
        self.heog_left_hold_timer = current_time + hold_duration
    if new_heog_right:
        self.heog_right_detected = new_heog_right
        self.heog_right_hold_timer = current_time + hold_duration

    # Update quality score
    if quality_score is not None:
        self.quality_score.append(quality_score)

    # Update FFT data
    if fft_dict != {}:
        for key in self.fft_keys:
            if fft_dict[key] is not None:
                self.fft_dict[key].append(fft_dict[key])
4. Create a function to update the plots: update the EEG and IMU plots with the current data buffers. This function is run automatically by the timer at a fixed interval and is responsible for the live updates of the visualizer.
def update_plot(self):
    """
    Update the real-time EEG plots.
    Automatically called by the timer.
    """
    # Update flags
    current_time = QtCore.QTime.currentTime().msecsSinceStartOfDay()
    if current_time < self.jaw_clench_hold_timer:
        self.jaw_clench_detected = True
    else:
        self.jaw_clench_detected = False
    if current_time < self.heog_left_hold_timer:
        self.heog_left_detected = True
    else:
        self.heog_left_detected = False
    if current_time < self.heog_right_hold_timer:
        self.heog_right_detected = True
    else:
        self.heog_right_detected = False

    # Update EEG data
    self.line_signal.set_ydata(self.signal_data)
    if self.plot_FFT:
        for key in self.fft_keys:
            self.fft_lines[key].set_ydata(self.fft_dict[key])
    self.line_quality.set_ydata(self.quality_score)
    self.canvas.draw()

    # Update IMU data
    for key in self.acc_keys:
        self.acc_lines[key].set_ydata(self.imu_dict[key])
    for key in self.magn_keys:
        self.magn_lines[key].set_ydata(self.imu_dict[key])
    for key in self.gyro_keys:
        self.gyro_lines[key].set_ydata(self.imu_dict[key])
    self.canvas_imu.draw()

    # Jaw clench
    if self.jaw_clench_detected:
        active_jaw_clench_image = QPixmap("front_end/images/jaw_clench.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.jaw_clench_image.setPixmap(active_jaw_clench_image)
    else:
        resting_jaw_clench_image = QPixmap("front_end/images/circle_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.jaw_clench_image.setPixmap(resting_jaw_clench_image)

    # HEOG
    if self.heog_left_detected:
        active_heog_left_image = QPixmap("front_end/images/HEOG_left.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.heog_left_image.setPixmap(active_heog_left_image)
    else:
        resting_heog_left_image = QPixmap("front_end/images/left_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.heog_left_image.setPixmap(resting_heog_left_image)

    if self.heog_right_detected:
        active_heog_right_image = QPixmap("front_end/images/HEOG_right.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.heog_right_image.setPixmap(active_heog_right_image)
    else:
        resting_heog_right_image = QPixmap("front_end/images/right_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.heog_right_image.setPixmap(resting_heog_right_image)
5. Define the variables used by the class: set the window size for the EEG and IMU data buffers, the hold duration for the classifiers, and the colors for the EEG bands and IMU axes. The window sizes are adapted to the different data stream frequencies to ensure a smooth visualization.
# Buffer sizes (adapted to the different data stream frequencies)
window_size = 30
window_size_fft = 15
window_size_signal = window_size * 20 * 7
window_size_imu = window_size_signal // 10

# Classifier images and detection hold duration
image_size = 100
hold_duration = 500  # milliseconds

# Frequency bands (Hz)
brainwave_bands = {
    'Delta': (0.5, 4),
    'Theta': (4, 8),
    'Alpha': (8, 12),
    'Sigma': (12, 15),
    'Beta': (15, 30),
    'Gamma': (30, 35)
}

band_colors = {
    'Delta': '#A4C2F4',  # Pastel blue
    'Theta': '#76D7C4',  # Pastel green
    'Alpha': '#B6D7A8',  # Pastel turquoise
    'Sigma': '#F9CB9C',  # Pastel peach
    'Beta': '#F6A5A5',   # Pastel red
    'Gamma': '#DDA0DD'   # Pastel purple
}

imu_colors = {
    'x': 'r',  # Red
    'y': 'g',  # Green
    'z': 'b'   # Blue
}
Main script:
To run the demo, we will create a demo script that handles the recording and the visualizer update.
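The main-script snippets below assume imports and constants along these lines (a sketch: the EEGVisualizer module name is a hypothetical placeholder, the constant values are examples, and GuardianClient and FileTypes are assumed to be exposed at the idun_guardian_sdk package root as in the SDK examples):

import sys
import asyncio
import threading

import seaborn as sns
from PyQt6 import QtCore, QtWidgets
from idun_guardian_sdk import GuardianClient, FileTypes

from visualizer import EEGVisualizer  # hypothetical module name for the class defined above

RECORDING_TIMER = 60 * 10  # example: record for 10 minutes
LED_SLEEP = False          # example value; controls the device LED during recording
fft_keys = ['Delta', 'Theta', 'Alpha', 'Beta', 'Gamma']  # should match the visualizer's FFT keys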
Recording parameters: set the recording parameters in the script (API key, device address, and recording timer).
my_api_token = "idun_XXXX..."
device_address = "XX:XX:XX:XX:XX:XX"
Create the visualizer instance: create a PyQt6 app and an instance of the visualizer class.
app = QtWidgets.QApplication(sys.argv)
visualizer = EEGVisualizer(plot_FFT=True)
sns.set(style="whitegrid")
Define the handler functions: create two different handler functions to receive real-time predictions and data streams and update the visualizer.
def signal_handler(event):
    """ Handler for the live signal data (EEG and IMU) """
    global visualizer

    # EEG Signal
    data = event.message
    eeg_samples = [sample['ch1'] for sample in data['filtered_eeg']]

    # IMU data
    imu_data = data['imu']

    # Update the visualizer
    visualizer.update_signals_visuals(eeg_samples, imu_data)

def pred_handler(event):
    """ Handler for the live prediction data (Jaw clench, HEOG, FFT) """
    global visualizer

    # Initialize the classifiers values
    jaw_clench = False
    heog_left = False
    heog_right = False
    fft_dict = {}
    quality_score = None

    # Quality score
    if event.message["predictionType"] == "QUALITY_SCORE":
        quality_score = event.message["result"]['quality_score']

    # Jaw clench
    if event.message["predictionType"] == "JAW_CLENCH":
        jaw_clench = True

    # HEOG
    if event.message["predictionType"] == "BIN_HEOG":
        heog = event.message["result"]['heog']
        if heog == -1:
            heog_left = True
        if heog == 1:
            heog_right = True

    # FFT
    if event.message["predictionType"] == "FFT":
        if len(event.message["result"]["z-scores"]) == 0:
            fft_dict = {key: None for key in fft_keys}
        else:
            fft_dict = event.message["result"]["z-scores"][0]

    # Update the visualizer
    visualizer.update_predictions_visuals(quality_score, jaw_clench, heog_left, heog_right, fft_dict)
Define an async function to handle recording: create a function that starts the recording and, once it ends, performs the post-recording actions (fetch the recording ID, update tags and display name, and download the EEG file).
recording_done = False

async def do_recording(client):
    """ Asynchronous function to handle the recording process. """
    global recording_done

    # Start the recording
    await client.start_recording(recording_timer=RECORDING_TIMER, led_sleep=LED_SLEEP, calc_latency=False)

    # Other actions after recording ends
    rec_id = client.get_recording_id()
    print("RecordingId", rec_id)
    client.update_recording_tags(recording_id=rec_id, tags=["tag1", "tag2"])
    client.update_recording_display_name(recording_id=rec_id, display_name="todays_recordings")
    client.download_file(recording_id=rec_id, file_type=FileTypes.EEG)

    # Close the app
    recording_done = True
Create a function to end the visualization: create a function to check if the recording is complete and stop the visualizer.
def check_for_completion():
    """ Periodically check if the recording is done and close the app if so. """
    global app, visualizer
    if recording_done:
        visualizer.close()
        app.quit()
    else:
        QtCore.QTimer.singleShot(100, check_for_completion)
Run the demo: in the main block, create the Guardian client, subscribe to live insights and predictions, start the recording, and run the PyQt6 app.
if __name__ == "__main__":
    # Start the real-time visualization
    visualizer.show()

    # Create the client and subscribe to live insights
    client = GuardianClient(api_token=my_api_token, address=device_address)
    client.subscribe_live_insights(raw_eeg=False, filtered_eeg=True, imu=True, handler=signal_handler)
    client.subscribe_realtime_predictions(jaw_clench=True, fft=True, bin_heog=True, quality_score=True, handler=pred_handler)

    # Start a separate thread for the async recording
    recording_thread = threading.Thread(target=lambda: asyncio.run(do_recording(client)), daemon=True)
    recording_thread.start()

    # Periodically check if the recording is done and close the app if so
    QtCore.QTimer.singleShot(100, check_for_completion)

    # Exit the app
    sys.exit(app.exec())
Source Code
Brain-Art Demo in TouchDesigner with the IDUN Guardian
This demo shows how to control a TouchDesigner project with data streams from the IDUN Guardian device.
Key Features:
Visualizing live EEG and IMU data streams as well as real-time predictions (jaw clench, eye movements, FFT)
Controlling a TouchDesigner project with brain power
Prerequisites and Setup
Python >= 3.9: download and install a compatible Python version from the official website.
Pipenv: install Pipenv using the following command:
pip install pipenv
Create the virtual environment: in the brain-art-demo folder (download link below), run the following commands to create a virtual environment and install the required packages:
mkdir .venv
pipenv install
Activate the virtual environment to run the scripts:
pipenv shell
You can exit the virtual environment using:
exit
Steps
Visualizer class:
For the live data visualizer, we reuse the EEGVisualizer class created in the previous demo: Visualize Real-Time Data and Predictions from the IDUN Guardian
TouchDesigner project:
The brain art is generated using TouchDesigner's built-in particlesGPU tool, a GPU-accelerated particle system powered by GLSL shaders for real-time particle simulation and rendering. Particle attributes such as size, lifetime, quantity, and applied forces (e.g., turbulence) are adjustable within custom-defined ranges. Combined with a feedback loop, this setup produces randomized and continuously evolving outputs. Some of these parameters are directly influenced by the incoming EEG signals, which are received via UDP, visualizing the user's real-time brain activity to create a unique piece of art.
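Concretely, each UDP message is a small JSON object produced by the sender function shown later in this demo; an illustrative payload (band names and values here are examples only) looks like this:

{"Delta": 0.3, "Theta": -0.2, "Alpha": 1.4, "Beta": 0.5, "Gamma": -0.8,
 "jaw_clench": 0, "left_heog": 1, "right_heog": 0}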
The connected parameters include:

| Force | Parameter | Change |
|---|---|---|
| FFT | Radius | Radius of brain art |
| Alpha | Amount | Force outward strength |
| Theta and Jaw-clench | External X | Force direction in x |
| Left/Right eye movement | Blursize | Size of the orb |
| Gamma | Color Profiles | (see below) |
Depending on how relaxed the user is (determined by the strength of Alpha), the art switches between two color tables whose RGB values correspond to FFT bands: Cold (Relaxed) when Alpha is high, Warm (Engaged) when Alpha is low.
| RGB | FFT (relaxed) | FFT (engaged) |
|---|---|---|
| Red | Beta | Gamma |
| Green | Sigma | Sigma |
| Blue | Alpha | Theta |
To use the BrainArt demo, follow these steps:
Download TouchDesigner and create an account.
Open the provided TouchDesigner file (BrainArt.toe). Alternatively, create a new project and design your own brain art.
Start the animation, which will automatically visualize the BrainArt when the demo script is running.
Click the arrow in the top-left corner to open the brain-art window.
Main script:
To run the demo, we will create a script that handles the recording, updates the visualizer, and sends data to a UDP port to control TouchDesigner. We will use the same demo script as in Visualize Real-Time Data and Predictions from the IDUN Guardian, with the following additional steps:
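On top of the imports from the previous demo, the snippets below also assume json and numpy are available:

import json

import numpy as np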
1. Socket setup: create a socket to send data to the TouchDesigner project via UDP.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
2. UDP settings: define the IP address and port number for the UDP connection.
UDP_IP = "239.0.0.1"
UDP_PORT = 10123
This IP address is a multicast address that can be used to send data to multiple devices on the same network. The address and port must match the ones configured in the TouchDesigner project; if you change them here, change them there as well.
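If you want to check that data is flowing before wiring up TouchDesigner, a small standalone receiver like this sketch (not part of the original demo) can print the incoming messages:

import json
import socket
import struct

UDP_IP = "239.0.0.1"
UDP_PORT = 10123

# Join the multicast group and print every decoded message
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", UDP_PORT))
mreq = struct.pack("4sl", socket.inet_aton(UDP_IP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, _ = sock.recvfrom(4096)
    print(json.loads(data.decode("utf-8")))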
3. Send data to the UDP port: create a function that sends the EEG data to the UDP port. The data includes the FFT bands and the classifier outputs (jaw clench, left and right HEOG).
def send_data_to_udp_port(fft_dict):
    """
    Send the data to the UDP port.

    Args:
        - fft_dict: dictionary with the frequency bands powers
    """
    # Convert nans to zeros
    for key in fft_keys:
        if fft_dict[key] is np.nan:
            fft_dict[key] = 0.0

    # Add classifiers to the dictionary
    full_dict = {**fft_dict,
                 "jaw_clench": int(jaw_clench_detected_keep),
                 "left_heog": int(left_heog_detected_keep),
                 "right_heog": int(right_heog_detected_keep)}

    data_bytes = json.dumps(full_dict).encode("utf-8")
    sock.sendto(data_bytes, (UDP_IP, UDP_PORT))
Then update the prediction handler to call this function with the real-time data:
# Send data to UDP port
if fft_dict != {}:
    send_data_to_udp_port(fft_dict)
4. Store temporary values for the classifiers: create global variables to hold the classifier detections (jaw clench, left and right HEOG). Because data is only sent to the UDP port when an FFT result arrives (about once per second), we need to keep the classifier predictions in memory until the next FFT is received.
jaw_clench_detected_keep = False
left_heog_detected_keep = False
right_heog_detected_keep = False
Then in the prediction handler, update the temporary values:
# Save predictions until next FFT
# (declare these names as global at the top of pred_handler so the assignments
# update the module-level variables used by send_data_to_udp_port)
if jaw_clench:
    jaw_clench_detected_keep = True
if heog_left:
    left_heog_detected_keep = True
if heog_right:
    right_heog_detected_keep = True
Finally, modify the prediction handler so that the temporary values are reset once the data has been sent to the UDP port:
# Send data to UDP port
if fft_dict != {}:
    send_data_to_udp_port(fft_dict)
    jaw_clench_detected_keep, left_heog_detected_keep, right_heog_detected_keep = False, False, False
5. Run the demo: in the pipenv shell, run brain-art-demo.py. Note that you won't see any output until FFT data starts arriving from the Guardian device, which generally takes a few minutes.