Demo Scripts
This document provides step-by-step guides for building exciting demos with the IDUN Guardian system.
Contents
Warning
The real-time classifiers used in these demos are currently under development, and a new version is scheduled for release in early 2025. Until then, the current classifiers may behave differently from what is shown in the demo videos.
Hands-Free Spotify Control
This demo shows how to control Spotify hands-free using jaw clench and eye movement detections from our real-time classifiers.
Key Features:
Play, pause, and navigate playlists on Spotify through simple, intuitive jaw and eye gestures.
Works seamlessly with our real-time classifier system for a hands-free music experience.
Prerequisites and Setup
Spotify Developer Setup:
Go to the Spotify Developer Dashboard and log in.
Create a new app. In the app settings, add a Redirect URI, e.g., http://localhost:8888/callback, for OAuth. Enable the “Web API” and “Web Playback SDK” permissions under scopes.
Retrieve your Client ID and Client Secret from the app settings for use in authentication.
Environment Setup: Make sure you have the IDUN Guardian SDK and the spotipy library installed. You can install them using:
pip install idun-guardian-sdk
pip install spotipy
Steps
1. Recording parameters: set the recording parameters in the script (API key and device address).
my_api_token = "idun_XXXX..."
device_address = "XX:XX:XX:XX:XX:XX"
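The recording step at the end of this script also uses RECORDING_TIMER and LED_SLEEP constants that are not shown above. A minimal sketch with assumed example values (the timer is assumed to be in seconds; adjust both to your setup):

RECORDING_TIMER = 60 * 10  # assumed example: record for 10 minutes
LED_SLEEP = True           # assumed: put the device LED to sleep during the recording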
2. Add Your Spotify Credentials: Replace the placeholders in the script with your Spotify credentials as shown below:
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    redirect_uri="http://localhost:8888/callback",
    scope="user-modify-playback-state user-read-playback-state"
))
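As an alternative to hardcoding secrets, spotipy's SpotifyOAuth can also pick up credentials from its standard environment variables; a minimal sketch, assuming you export them in your shell first:

# export SPOTIPY_CLIENT_ID="YOUR_CLIENT_ID"
# export SPOTIPY_CLIENT_SECRET="YOUR_CLIENT_SECRET"
# export SPOTIPY_REDIRECT_URI="http://localhost:8888/callback"
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="user-modify-playback-state user-read-playback-state"
))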
3. Define Music Control Functions: Use the Spotify Web API to create functions for music control actions, such as play, pause, next track, and volume control.
def toggle_music():
    """
    Pause or start the music playback
    """
    playback = sp.current_playback()
    if playback and playback['is_playing']:
        sp.pause_playback()
    else:
        sp.start_playback()


def next_track():
    """
    Play the next track
    """
    sp.next_track()


def previous_track():
    """
    Play the previous track
    """
    sp.previous_track()
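Note that sp.start_playback() fails when Spotify has no active device (for example, the desktop app is installed but idle). The variant below is a sketch of one way to handle this, falling back to the first device reported by sp.devices():

def toggle_music():
    """
    Pause or start playback, falling back to the first device Spotify
    reports when there is no active playback context.
    """
    playback = sp.current_playback()
    if playback and playback['is_playing']:
        sp.pause_playback()
        return
    if playback:
        # A device is active but paused: simply resume
        sp.start_playback()
        return
    devices = sp.devices().get('devices', [])
    if devices:
        sp.start_playback(device_id=devices[0]['id'])
    else:
        print("No available Spotify devices found", flush=True)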
4. Create a Handler for Real-Time Predictions: Implement a handler function to connect classifier outputs (jaw clench and eye movements) to your music control functions. This function will interpret real-time predictions and map them to Spotify control actions.
def pred_handler(event):
    """
    Handler for the live prediction data
    """
    # Jaw clench -> toggle playlist
    if any('predictionResponse' in key for key in event.message):
        jaw_clenches = event.message["predictionResponse"]
        binary_jaw_clenches = [0 if jaw_clench == 'Nothing' else 1 for jaw_clench in jaw_clenches]
        if any([pred > 0 for pred in binary_jaw_clenches]):
            print('Jaw clench detected', flush=True)
            toggle_music()

    # HEOG -> switch track
    if any("heogClassificationsResponse" in key for key in event.message):
        heog = event.message["heogClassificationsResponse"]
        if any([pred == 1 for pred in heog]):
            print('Left HEOG detected', flush=True)
            previous_track()
        if any([pred == 2 for pred in heog]):
            print('Right HEOG detected', flush=True)
            next_track()
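Predictions arrive in batches, so a single long jaw clench can trigger toggle_music() several times in quick succession. A simple optional refinement is a cooldown; the sketch below assumes a 2-second minimum gap between triggered actions:

import time

_last_action_time = 0.0
COOLDOWN_S = 2.0  # assumed minimum gap between triggered actions

def run_with_cooldown(action):
    """Run `action` only if enough time has passed since the last triggered action."""
    global _last_action_time
    now = time.monotonic()
    if now - _last_action_time >= COOLDOWN_S:
        _last_action_time = now
        action()

Inside pred_handler, you would then call run_with_cooldown(toggle_music) instead of toggle_music() directly, and likewise for the track-switching calls.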
5. Connect to the IDUN Guardian Device and start data streaming: Create a Guardian client instance, subscribe to real-time predictions, and start recording.
if __name__ == "__main__":
    # Create client
    client = GuardianClient(api_token=my_api_token, address=device_address)

    # Subscribe to live predictions
    client.subscribe_realtime_predictions(jaw_clench=True, fft=False, bin_heog=True, handler=pred_handler)

    # Start the recording
    asyncio.run(
        client.start_recording(
            recording_timer=RECORDING_TIMER,
            led_sleep=LED_SLEEP,
            calc_latency=False
        )
    )
6. Run the Demo: Open Spotify, run the script, and control your music hands-free!
Source Code
Control Philips Hue Lights with our Eye Movement Classifier
This demo shows how to turn Philips Hue lights on and off by looking left and right.
Key Features:
Turn on and off Philips Hue lights by looking left and right.
Works seamlessly with our real-time classifier system to control Philips Hue state using brain power.
Prerequisites and Setup
Install the `IDUN Guardian SDK` and the Philips Hue library `phue`:
pip install idun-guardian-sdk
pip install phue
Philips Hue Bridge Setup:
Connect your Philips Hue Bridge to your network and find its IP address.
Press the link button on the bridge so that new applications are allowed to register with it.
Retrieve the bridge IP address (if you don't know it, see the discovery sketch after the snippet below).
Get the bridge username:
from phue import Bridge

# Press the link button on the bridge, then connect without a username to register one
bridge = Bridge('YOUR_BRIDGE_IP')
try:
    bridge.connect()
    print(f"Connected! Save this username for future use: {bridge.username}")
except Exception as e:
    print(f"Error connecting to the Bridge: {e}")
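If you don't yet know your bridge's IP address, one common way to find it is Philips' public discovery endpoint; a minimal sketch using the requests library (an extra dependency, not required by the demo itself):

import requests

# Ask Philips' discovery service which bridges it knows for your public IP
response = requests.get("https://discovery.meethue.com/", timeout=5)
for bridge_info in response.json():
    print("Found bridge at:", bridge_info["internalipaddress"])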
Steps
Add Your Philips Hue Bridge Credentials:
BRIDGE_IP = 'YOUR_BRIDGE_IP'
BRIDGE_USERNAME = 'YOUR_BRIDGE_USERNAME'
Connect to the Philips Hue Bridge:
bridge = Bridge(BRIDGE_IP, username=BRIDGE_USERNAME)
bridge.connect()
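The control functions in the next step refer to lights by name ("Ceiling Lamp 1" and "Ceiling Lamp 2"). To see which names are available on your own bridge, you can list them first; a short sketch:

# Print the name of every light registered on the bridge
for light in bridge.lights:
    print(light.name)

Substitute your own light names in the functions below.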
Define Light Control Functions:
def handle_left_eye_movement():
    lamp = "Ceiling Lamp 1"
    bridge.set_light(lamp, 'on', True)
    time.sleep(2)
    bridge.set_light(lamp, 'on', False)


def handle_right_eye_movement():
    lamp = "Ceiling Lamp 2"
    bridge.set_light(lamp, 'on', True)
    time.sleep(2)
    bridge.set_light(lamp, 'on', False)
Create a Handler for Real-Time Predictions:
def handle_eye_movement(data):
    prediction = data.message
    for p in prediction['heogClassificationsResponse']:
        if p == 1:
            handle_left_eye_movement()
        if p == 2:
            handle_right_eye_movement()
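One caveat: the time.sleep(2) call inside the light-control functions blocks the handler while it waits, which can delay processing of subsequent predictions. A sketch of an optional non-blocking variant that offloads each lamp action to a short-lived thread:

import threading

def handle_eye_movement(data):
    """Non-blocking variant: run each lamp action in its own thread."""
    prediction = data.message
    for p in prediction['heogClassificationsResponse']:
        if p == 1:
            threading.Thread(target=handle_left_eye_movement, daemon=True).start()
        if p == 2:
            threading.Thread(target=handle_right_eye_movement, daemon=True).start()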
Connect to the IDUN Guardian Device and start data streaming:
if __name__ == '__main__':
    client = GuardianClient()
    client.subscribe_realtime_predictions(bin_heog=True, handler=handle_eye_movement)
    asyncio.run(client.start_recording(recording_timer=RECORDING_TIMER))
Source Code
Visualize Real-Time Data and Predictions from the IDUN Guardian
This demo shows how to build a real-time visualizer for the EEG and IMU data streams and the real-time predictions from the IDUN Guardian device.
Key Features:
Visualizing live EEG and IMU data streams
Visualizing real-time predictions (jaw clench, eye movements, FFT)
Prerequisites and Setup
Python >= 3.9: download and install a compatible Python version from the official website.
Pipenv: install Pipenv using the following command:
pip install pipenv
Create the virtual environment: in the visualizer-demo folder (download link below), run the following commands to create a virtual environment and install the required packages:
mkdir .venv
pipenv install
Activate the virtual environment to run the scripts:
pipenv shell
You can exit the virtual environment using:
exit
Steps
Visualizer class:
To create a live data visualizer, we will build a PyQt6 QWidget class that displays the EEG and IMU data streams and the real-time predictions from the IDUN Guardian device. For the classifiers, we use images to represent the detected states (jaw clench, left and right HEOG).
1. Initialize the visualizer: initialize the visualizer with zero-filled buffers for the EEG and IMU data, and set up the timer that updates the plots.
class EEGVisualizer(QtWidgets.QWidget):
    """
    Class for real-time visualization of EEG data.

    Attributes:
        plot_FFT (bool): Whether to plot the FFT data.
        signal_data (deque): Buffer for EEG signal data.
        fft_keys (list): Keys for the FFT data.
        fft_dict (dict): Buffer for FFT data.
        imu_keys (list): Keys for the IMU data.
        imu_dict (dict): Buffer for IMU data.
        timer (QTimer): Timer for updating the visualization.
        jaw_clench_detected (bool): Whether a jaw clench was detected.
        heog_left_detected (bool): Whether a left HEOG was detected.
        heog_right_detected (bool): Whether a right HEOG was detected.
        jaw_clench_hold_timer (int): Timer for holding the jaw clench detection.
        heog_left_hold_timer (int): Timer for holding the left HEOG detection.
        heog_right_hold_timer (int): Timer for holding the right HEOG detection.
    """

    def __init__(self, plot_FFT=False):
        """
        Initialize the EEGVisualizer object.

        Args:
            plot_FFT (bool): Whether to plot the FFT data.
        """
        super().__init__()

        # Parameters
        self.plot_FFT = plot_FFT

        # Initialize variables for real-time data
        self.signal_data = deque(maxlen=window_size_signal)  # buffer for EEG signal data
        self.signal_data.extend([0 for _ in range(window_size_signal)])
        self.jaw_clench_detected = False
        self.heog_left_detected = False
        self.heog_right_detected = False

        # FFT data
        if plot_FFT:
            self.fft_keys = ['Delta', 'Theta', 'Alpha', 'Beta', 'Gamma']
            self.fft_dict = {key: deque(maxlen=window_size_fft) for key in self.fft_keys}
            for key in self.fft_keys:
                self.fft_dict[key].extend([0 for _ in range(window_size_fft)])

        # IMU data
        self.acc_keys = ['acc_x', 'acc_y', 'acc_z']
        self.magn_keys = ['magn_x', 'magn_y', 'magn_z']
        self.gyro_keys = ['gyro_x', 'gyro_y', 'gyro_z']
        self.imu_keys = self.acc_keys + self.magn_keys + self.gyro_keys
        self.imu_dict = {key: deque(maxlen=window_size_imu) for key in self.imu_keys}
        for key in self.imu_keys:
            self.imu_dict[key].extend([0 for _ in range(window_size_imu)])

        # Set up timer for updating the visualization
        self.timer = QtCore.QTimer()
        self.timer.setInterval(50)  # Update interval in milliseconds
        self.timer.timeout.connect(self.update_plot)
        self.timer.start()

        # Timers to keep track of when to reset each detection
        self.jaw_clench_hold_timer = 0
        self.heog_left_hold_timer = 0
        self.heog_right_hold_timer = 0

        self.initUI()
2. Set up the UI: create the user interface with the EEG and IMU data plots and the classifiers' images.
def initUI(self):
    """
    Set up the user interface.
    """
    self.setStyleSheet("background-color: white; color: black;")
    main_layout = QtWidgets.QHBoxLayout()  # Main horizontal layout for the entire window

    # Set a title for left and right sections
    title_style_sections = """
        QLabel {
            font-size: 20px;    /* Change font size */
            font-weight: bold;  /* Make the text bold */
            color: #404040;     /* Change text color if needed */
        }
    """

    #---------- LEFT: EEG ----------------------------------------------------------------
    left_layout = QtWidgets.QVBoxLayout()  # Vertical layout for the left side of the window
    title_signal = QtWidgets.QLabel("EEG Data")
    title_signal.setStyleSheet(title_style_sections)
    title_signal.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)

    # Add title label to layout
    left_layout.addWidget(title_signal)

    #---------- Horizontal layout to hold the time-series plots
    top_layout = QtWidgets.QHBoxLayout()
    plot_layout = QtWidgets.QVBoxLayout()
    if self.plot_FFT:
        self.figure, (self.ax_signal, self.ax_fft) = plt.subplots(2, 1, figsize=(12, 12))
    else:
        self.figure, self.ax_signal = plt.subplots(1, 1, figsize=(12, 8))
    self.canvas = FigureCanvas(self.figure)
    self.figure.tight_layout(pad=5.0)

    # Filtered signal
    self.ax_signal.set_title("Filtered EEG Signal (µV)")
    self.line_signal, = self.ax_signal.plot(self.signal_data, color='#14786e', linewidth=1)
    self.ax_signal.set_ylim(-100, 100)
    self.ax_signal.set_xticks([])

    # FFT plot
    if self.plot_FFT:
        self.ax_fft.set_title("FFT (z-scores)")
        self.fft_lines = {}
        for key in self.fft_keys:
            self.fft_lines[key], = self.ax_fft.plot(self.fft_dict[key], label=f"{key} {brainwave_bands[key]}", color=band_colors[key], linewidth=1.5)
        self.ax_fft.set_ylim(-10, 10)
        self.ax_fft.set_xticks([])
        self.ax_fft.legend(loc='upper left')

    # Add canvas with the plots to the plot layout
    plot_layout.addWidget(self.canvas)
    top_layout.addLayout(plot_layout)
    left_layout.addLayout(top_layout)

    #------------ Classifiers layout
    # Create a grid layout for the images (jaw clench, left and right HEOG images)
    image_layout = QtWidgets.QGridLayout()

    # Titles
    title_style = """
        QLabel {
            font-size: 20px;    /* Change font size */
            font-weight: bold;  /* Make the text bold */
            color: #404040;     /* Change text color if needed */
        }
    """
    title_jaw_clench = QtWidgets.QLabel("Jaw Clench")
    title_eye_movements = QtWidgets.QLabel("Left HEOG")
    title_eye_movements_right = QtWidgets.QLabel("Right HEOG")
    title_jaw_clench.setStyleSheet(title_style)
    title_eye_movements.setStyleSheet(title_style)
    title_eye_movements_right.setStyleSheet(title_style)

    # Add title labels to layout
    image_layout.addWidget(title_eye_movements, 0, 0, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)
    image_layout.addWidget(title_jaw_clench, 0, 1, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)
    image_layout.addWidget(title_eye_movements_right, 0, 2, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)

    # Initialize image labels for jaw clench and HEOG states
    self.jaw_clench_image = QtWidgets.QLabel()
    self.heog_left_image = QtWidgets.QLabel()
    self.heog_right_image = QtWidgets.QLabel()

    # Set default images (resting state)
    resting_jaw_clench_image = QPixmap("front_end/images/circle_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
    resting_heog_left_image = QPixmap("front_end/images/left_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
    resting_heog_right_image = QPixmap("front_end/images/right_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
    self.jaw_clench_image.setPixmap(resting_jaw_clench_image)
    self.heog_left_image.setPixmap(resting_heog_left_image)
    self.heog_right_image.setPixmap(resting_heog_right_image)

    # Add images to layout
    image_layout.addWidget(self.heog_left_image, 1, 0, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)
    image_layout.addWidget(self.jaw_clench_image, 1, 1, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)
    image_layout.addWidget(self.heog_right_image, 1, 2, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)

    # Add image layout to the main layout (below the plots)
    left_layout.addLayout(image_layout)

    #--------- Add left layout to the main layout
    left_layout.setContentsMargins(20, 20, 20, 20)
    main_layout.addLayout(left_layout)

    #---------- RIGHT: IMU ----------------------------------------------------------------
    right_layout = QtWidgets.QVBoxLayout()  # Vertical layout for the right side of the window

    # Title for the IMU section
    title_imu = QtWidgets.QLabel("IMU Data")
    title_imu.setStyleSheet(title_style_sections)
    title_imu.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)

    # Add title label to layout
    right_layout.addWidget(title_imu)

    # Create a figure and axis for the IMU data
    self.figure_imu, (self.ax_acc, self.ax_magn, self.ax_gyro) = plt.subplots(3, 1, figsize=(12, 12))
    self.canvas_imu = FigureCanvas(self.figure_imu)
    self.figure_imu.tight_layout(pad=3.0)

    # Accelerometer plot
    self.ax_acc.set_title("Accelerometer Data (m/s²)")
    self.acc_lines = {}
    for key in self.acc_keys:
        axis = key.split('_')[1]
        self.acc_lines[key], = self.ax_acc.plot(self.imu_dict[key], label=axis, linewidth=1, color=imu_colors[axis])
    self.ax_acc.set_ylim(-20, 15)
    self.ax_acc.set_xticks([])
    self.ax_acc.legend(loc='upper left')

    # Magnetometer plot
    self.ax_magn.set_title("Magnetometer Data (µT)")
    self.magn_lines = {}
    for key in self.magn_keys:
        axis = key.split('_')[1]
        self.magn_lines[key], = self.ax_magn.plot(self.imu_dict[key], label=axis, linewidth=1, color=imu_colors[axis])
    self.ax_magn.set_ylim(-2, 2)
    self.ax_magn.set_xticks([])
    self.ax_magn.legend(loc='upper left')

    # Gyroscope plot
    self.ax_gyro.set_title("Gyroscope Data (°/s)")
    self.gyro_lines = {}
    for key in self.gyro_keys:
        axis = key.split('_')[1]
        self.gyro_lines[key], = self.ax_gyro.plot(self.imu_dict[key], label=axis, linewidth=1, color=imu_colors[axis])
    self.ax_gyro.set_ylim(-7, 7)
    self.ax_gyro.set_xticks([])
    self.ax_gyro.legend(loc='upper left')

    # Add canvas with the plots to the right layout
    right_layout.addWidget(self.canvas_imu)
    right_layout.setContentsMargins(20, 20, 20, 20)
    main_layout.addLayout(right_layout)

    #-------------------------
    # Set the main layout for the window
    self.setLayout(main_layout)
3. Create functions to update the data buffers: the first function handles the live insights (EEG and IMU) while the second handles the predictions (FFT and classifiers). These functions will then be called from the main script to update the visualizer with the live data.
def update_signals_visuals(self, signal, imu_data):
    """
    Update the EEG signal and IMU data buffers using live insights.

    Args:
        signal (list): New EEG signal data.
        imu_data (dict): New IMU data.
    """
    # Update EEG data
    self.signal_data.extend(signal)

    # Update IMU data
    for sample in imu_data:
        for key in self.imu_keys:
            self.imu_dict[key].append(sample[key])


def update_predictions_visuals(self, new_jaw_clench, new_heog_left, new_heog_right, fft_dict):
    """
    Update the EEG prediction data using real-time predictions.

    Args:
        new_jaw_clench (bool): New jaw clench detection.
        new_heog_left (bool): New left HEOG detection.
        new_heog_right (bool): New right HEOG detection.
        fft_dict (dict): New FFT z-scores data.
    """
    # Get current time for classifier hold timers
    current_time = QtCore.QTime.currentTime().msecsSinceStartOfDay()

    # Update detection hold timers
    if new_jaw_clench:
        self.jaw_clench_hold_timer = current_time + hold_duration
    elif current_time < self.jaw_clench_hold_timer:
        new_jaw_clench = True

    if new_heog_left:
        self.heog_left_hold_timer = current_time + hold_duration
    elif current_time < self.heog_left_hold_timer:
        new_heog_left = True

    if new_heog_right:
        self.heog_right_hold_timer = current_time + hold_duration
    elif current_time < self.heog_right_hold_timer:
        new_heog_right = True

    # Update detection flags
    self.jaw_clench_detected = new_jaw_clench
    self.heog_left_detected = new_heog_left
    self.heog_right_detected = new_heog_right

    # Update FFT data
    if fft_dict != {}:
        for key in self.fft_keys:
            if fft_dict[key] is not None:
                self.fft_dict[key].append(fft_dict[key])
4. Create a function to update the plots: update the EEG and IMU plots with the current data buffers. This function is run automatically by the timer at a fixed interval and is responsible for the live updates of the visualizer.
def update_plot(self):
    """
    Update the real-time EEG plots.
    Automatically called by the timer.
    """
    # Update EEG data
    self.line_signal.set_ydata(self.signal_data)
    if self.plot_FFT:
        for key in self.fft_keys:
            self.fft_lines[key].set_ydata(self.fft_dict[key])
    self.canvas.draw()

    # Update IMU data
    for key in self.acc_keys:
        self.acc_lines[key].set_ydata(self.imu_dict[key])
    for key in self.magn_keys:
        self.magn_lines[key].set_ydata(self.imu_dict[key])
    for key in self.gyro_keys:
        self.gyro_lines[key].set_ydata(self.imu_dict[key])
    self.canvas_imu.draw()

    # Jaw clench
    if self.jaw_clench_detected:
        active_jaw_clench_image = QPixmap("front_end/images/jaw_clench.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.jaw_clench_image.setPixmap(active_jaw_clench_image)
    else:
        resting_jaw_clench_image = QPixmap("front_end/images/circle_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.jaw_clench_image.setPixmap(resting_jaw_clench_image)

    # HEOG
    if self.heog_left_detected:
        active_heog_left_image = QPixmap("front_end/images/HEOG_left.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.heog_left_image.setPixmap(active_heog_left_image)
    else:
        resting_heog_left_image = QPixmap("front_end/images/left_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.heog_left_image.setPixmap(resting_heog_left_image)

    if self.heog_right_detected:
        active_heog_right_image = QPixmap("front_end/images/HEOG_right.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.heog_right_image.setPixmap(active_heog_right_image)
    else:
        resting_heog_right_image = QPixmap("front_end/images/right_grey.png").scaled(image_size, image_size, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation)
        self.heog_right_image.setPixmap(resting_heog_right_image)
5. Define the variables used by the class: set the window size for the EEG and IMU data buffers, the hold duration for the classifiers, and the colors for the EEG bands and IMU axes. The window sizes are adapted to the different data stream frequencies to ensure a smooth visualization.
window_size = 30
window_size_fft = 15
window_size_signal = window_size * 20 * 7
window_size_imu = window_size_signal // 10
image_size = 100
hold_duration = 500

brainwave_bands = {
    'Delta': (0.5, 4),
    'Theta': (4, 8),
    'Alpha': (8, 12),
    'Sigma': (12, 15),
    'Beta': (15, 30),
    'Gamma': (30, 35)
}

band_colors = {
    'Delta': '#A4C2F4',  # Pastel blue
    'Theta': '#76D7C4',  # Pastel green
    'Alpha': '#B6D7A8',  # Pastel turquoise
    'Sigma': '#F9CB9C',  # Pastel peach
    'Beta': '#F6A5A5',   # Pastel red
    'Gamma': '#DDA0DD'   # Pastel purple
}

imu_colors = {
    'x': 'r',  # Red
    'y': 'g',  # Green
    'z': 'b'   # Blue
}
Main script:
To run the demo, we create a script that handles the recording and updates the visualizer.
Recording parameters: set the recording parameters in the script (API key, device address, and recording timer).
my_api_token = "idun_XXXX..."
device_address = "XX:XX:XX:XX:XX:XX"
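The recording timer mentioned above (and the LED_SLEEP flag used later in do_recording) are not shown in this snippet; assumed example values, with the timer in seconds:

RECORDING_TIMER = 60 * 10  # assumed example: record for 10 minutes
LED_SLEEP = True           # assumed: put the device LED to sleep while recording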
Create the visualizer instance: create a PyQt6 app and an instance of the visualizer class.
app = QtWidgets.QApplication(sys.argv)
visualizer = EEGVisualizer(plot_FFT=True)
sns.set(style="whitegrid")
Define the handler functions: create two different handler functions to receive real-time predictions and data streams and update the visualizer.
# FFT band keys, matching EEGVisualizer.fft_keys
fft_keys = ['Delta', 'Theta', 'Alpha', 'Beta', 'Gamma']


def signal_handler(event):
    """
    Handler for the live signal data (EEG and IMU)
    """
    global visualizer

    # EEG Signal
    data = event.message
    eeg_samples = [sample['ch1'] for sample in data['filtered_eeg']]

    # IMU data
    imu_data = data['imu']

    # Update the visualizer
    visualizer.update_signals_visuals(eeg_samples, imu_data)


def pred_handler(event):
    """
    Handler for the live prediction data (Jaw clench, HEOG, FFT)
    """
    global visualizer

    # Initialize the classifiers values
    jaw_clench = False
    heog_left = False
    heog_right = False
    fft_dict = {}

    # Jaw clench
    if any('predictionResponse' in key for key in event.message):
        jaw_clenches = event.message["predictionResponse"]
        binary_jaw_clenches = [0 if jaw_clench == 'Nothing' else 1 for jaw_clench in jaw_clenches]
        if any([pred > 0 for pred in binary_jaw_clenches]):
            jaw_clench = True

    # HEOG
    if any("heogClassificationsResponse" in key for key in event.message):
        heog = event.message["heogClassificationsResponse"]
        if any([pred == 1 for pred in heog]):
            heog_left = True
        if any([pred == 2 for pred in heog]):
            heog_right = True

    # FFT
    if any("stateless_z_scores" in key for key in event.message):
        if len(event.message["stateless_z_scores"]) == 0:
            fft_dict = {key: None for key in fft_keys}
        else:
            fft_dict = event.message["stateless_z_scores"][0]

    # Update the visualizer
    visualizer.update_predictions_visuals(jaw_clench, heog_left, heog_right, fft_dict)
Define an async function to handle the recording: create a function that starts the recording and then performs the post-recording actions (printing the recording ID, updating tags and display name, and downloading the EEG file).
recording_done = False

async def do_recording(client):
    """
    Asynchronous function to handle the recording process.
    """
    global recording_done

    # Start the recording
    await client.start_recording(recording_timer=RECORDING_TIMER, led_sleep=LED_SLEEP, calc_latency=False)

    # Other actions after recording ends
    rec_id = client.get_recording_id()
    print("RecordingId", rec_id)
    client.update_recording_tags(recording_id=rec_id, tags=["tag1", "tag2"])
    client.update_recording_display_name(recording_id=rec_id, display_name="todays_recordings")
    client.download_file(recording_id=rec_id, file_type=FileTypes.EEG)

    # Close the app
    recording_done = True
Create a function to end the visualization: periodically check whether the recording is complete and, if so, close the visualizer and quit the app.
def check_for_completion():
    """
    Periodically check if the recording is done and close the app if so.
    """
    global app, visualizer
    if recording_done:
        visualizer.close()
        app.quit()
    else:
        QtCore.QTimer.singleShot(100, check_for_completion)
Run the demo: in the main block, create the Guardian client, subscribe to live insights and predictions, start the recording, and run the PyQt6 app.
if __name__ == "__main__":
    # Start the real-time visualization
    visualizer.show()

    # Create the client and subscribe to live insights
    client = GuardianClient(api_token=my_api_token, address=device_address)
    client.subscribe_live_insights(raw_eeg=False, filtered_eeg=True, imu=True, handler=signal_handler)
    client.subscribe_realtime_predictions(jaw_clench=True, fft=True, bin_heog=True, handler=pred_handler)

    # Start a separate thread for the async recording
    recording_thread = threading.Thread(target=lambda: asyncio.run(do_recording(client)), daemon=True)
    recording_thread.start()

    # Periodically check if the recording is done and close the app if so
    QtCore.QTimer.singleShot(100, check_for_completion)

    # Exit the app
    sys.exit(app.exec())