
Dataset of Semi-Autonomous Continuous Robotic Arm Control Using an Augmented Reality Brain-Computer Interface

A detailed description of the study is available here. Please cite the following article when using this data.

Kirill Kokorin, et al. Semi-Autonomous Continuous Robotic Arm Control Using an Augmented Reality Brain-Computer Interface. TechRxiv. May 22, 2024.

Overview

The experiment involved 18 healthy participants using an augmented reality (AR) brain-computer interface (BCI) to continuously control a robotic arm. The system placed five flashing stimuli in a cross pattern around the robot end-effector and decoded which stimulus the participant was attending to based on steady-state visually evoked potentials (SSVEPs). Attending to any of the four outer stimuli caused the robot to move in the corresponding direction, while the middle stimulus commanded forward motion. Each session consisted of one observation block followed by four reaching blocks, with 1-3 min rest periods between blocks. This study was approved by the University of Melbourne Human Research Ethics Committee (ID: 20853).

Observation Task

Participants completed 25 trials in which they observed the arm move in each of the five directions while attending to the corresponding stimulus. Each trial comprised a 2-3 s prompt, a 3.6 s go period, and a 2 s rest period.

Reaching Task

Participants completed four blocks of 12 reaching trials in an ABBA structure, using direct control (DC) or shared control (SC). Participants completed an additional training block of four practice trials when using a control mode for the first time. For each participant, objects were placed in a random configuration in four of nine possible positions. In each trial, a different object was designated as the goal, which the participant had to touch with the end-effector. A trial failed if the end-effector collided with the workspace or exceeded its limits, touched the wrong object, or the trial exceeded 38.5 s. The end-effector starting position was randomised for each participant.

Brain-Computer Interface

The stimuli flickered at frequencies of 7, 8, 9, 11 and 13 Hz and were displayed on a HoloLens 2 (Microsoft Inc., USA) with a 60 Hz refresh rate. The frequency layout was randomised for each participant. Electroencephalography (EEG) data was recorded using a g.USBamp amplifier and 16 active wet g.Scarabeo electrodes (g.tec medical engineering GmbH, Austria). Every 0.2 s, the system decoded which stimulus the participant was attending to from the last 1 s of data, band-pass filtered between 1 and 40 Hz, using canonical correlation analysis.
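
As a rough illustration of this decoding step, the sketch below scores a 1 s EEG window against sine/cosine reference signals at each stimulus frequency using canonical correlation analysis (scikit-learn's CCA) and picks the frequency with the largest correlation. The function names, the number of harmonics and the random example window are illustrative assumptions, not taken from the experiment software.

    # Minimal CCA-based SSVEP frequency decoding sketch (illustrative only).
    import numpy as np
    from sklearn.cross_decomposition import CCA

    FS = 256                         # EEG sampling rate (Hz)
    STIM_FREQS = [7, 8, 9, 11, 13]   # stimulus flicker frequencies (Hz)

    def cca_correlation(eeg_window, freq, n_harmonics=2):
        """Canonical correlation between an EEG window and sin/cos references at freq."""
        n_samples = eeg_window.shape[0]
        t = np.arange(n_samples) / FS
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * freq * t))
            refs.append(np.cos(2 * np.pi * h * freq * t))
        Y = np.column_stack(refs)
        cca = CCA(n_components=1)
        cca.fit(eeg_window, Y)
        x_c, y_c = cca.transform(eeg_window, Y)
        return np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]

    def decode_direction(eeg_window):
        """Return the index of the stimulus frequency with the largest correlation."""
        scores = [cca_correlation(eeg_window, f) for f in STIM_FREQS]
        return int(np.argmax(scores)), scores

    # Example with random noise standing in for a (256 samples x 9 channels) filtered window.
    window = np.random.randn(FS, 9)
    direction, scores = decode_direction(window)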

Control Modes

The participants used the system to control an anthropomorphic robotic arm (Reachy, Pollen Robotics, France). In direct control trials, the direction corresponding to the decoded stimulus was converted to a velocity vector that controlled robot translation. In shared control trials, this vector was linearly combined with an assistance signal directed towards the object that the system predicted the user wanted to reach. The ratio of user to autonomous control was based on how far the end-effector was from the predicted object (the robot confidence).
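
The sketch below illustrates this blending. The exact mapping from end-effector distance to confidence, and from confidence to the assistance ratio, is not specified here, so the linear ramp, the cut-off distance and the assumption that the assistance ratio equals the confidence are illustrative only.

    # Illustrative shared-control blending of user and assistance velocity vectors.
    import numpy as np

    def blend_commands(u_user, ee_pos, obj_pos, max_dist=0.5):
        """Linearly combine the user command with an assistance vector toward the predicted object.

        u_user:   velocity vector decoded from the attended stimulus
        ee_pos:   current end-effector position (m)
        obj_pos:  position of the predicted goal object (m)
        max_dist: assumed distance (m) at which confidence falls to zero
        """
        to_obj = np.asarray(obj_pos, dtype=float) - np.asarray(ee_pos, dtype=float)
        dist = np.linalg.norm(to_obj)
        u_robot = to_obj / dist if dist > 0 else np.zeros(3)   # assistance direction
        conf = max(0.0, 1.0 - dist / max_dist)                 # confidence grows as the arm nears the object
        alpha = conf                                           # assumed: assistance ratio equals confidence
        u_cmb = (1 - alpha) * np.asarray(u_user, dtype=float) + alpha * u_robot
        return u_cmb, conf, alpha

    # Example: the user commands one direction while the predicted object lies elsewhere.
    u_cmb, conf, alpha = blend_commands([0.0, 1.0, 0.0], [0.3, 0.0, 0.2], [0.5, 0.0, 0.2])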

Data

Participants.csv contains, for each of the 18 participants, their ID, age, sex, hours of previous BCI experience, level of fatigue, and which control mode they completed first.

Trial_details.csv contains the details of trials that were incorrectly labelled in the .xdf files or had to be repeated due to equipment issues. Each trial is described by the participant ID, trial number, object number and trial result recorded in the .xdf file, along with which of these fields needs to be updated.

P#_S#_R#.xdf files contain the recording for each session, labelled by participant ID, session number and run number. Each participant completed one session, with the recording split across 2-4 runs/files. Each run is made up of Data and Events; a loading sketch is given after the event list below.

Data: EEG data recorded at 256 Hz for 16 channels corresponding to (in order) O1, Oz, O2, PO7, PO3, POz, PO4, PO8, Pz, CPz, C1, Cz, C2, FC1, FCz, FC2. Only the first nine channels were used for online decoding.

Events:

  • 'start session': start of a run.
  • 'start run: mode': start of a DC, SC or observation block.
  • 'P# freqs: f1, f2, f3, f4, f5': participant ID and frequencies corresponding to up, down, left, right and forward.
  • 'init:goal', 'go:goal', 'rest:goal': start of a trial segment and the associated goal (object# in reaching trials and direction in observation trials).
  • 'success:goal' or 'fail:goal': trial result.
  • 'X:x, y, z pred:d pred_obj:obj# conf:c alpha:a u_robot:ux, uy, uz u_cmb:ux, uy, uz': system output made up of end-effector coordinates, decoded direction, predicted object, robot confidence, assistance ratio, robot assistance vector and combined control vector.
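
The runs can be read in Python with pyxdf. The sketch below assumes the EEG samples and the event markers listed above are stored as separate XDF streams; since the stream names are not restated here, streams are told apart by whether they have a non-zero nominal sampling rate.

    # Minimal sketch of loading one run with pyxdf (stream names assumed, not restated here).
    import pyxdf

    CHANNELS = ["O1", "Oz", "O2", "PO7", "PO3", "POz", "PO4", "PO8",
                "Pz", "CPz", "C1", "Cz", "C2", "FC1", "FCz", "FC2"]

    streams, header = pyxdf.load_xdf("P1_S1_R1.xdf")

    for stream in streams:
        name = stream["info"]["name"][0]
        srate = float(stream["info"]["nominal_srate"][0])
        if srate > 0:
            eeg = stream["time_series"]        # (n_samples x 16) EEG array at 256 Hz
            eeg_times = stream["time_stamps"]  # per-sample timestamps
            print(f"EEG stream '{name}': shape {eeg.shape}, channel order {CHANNELS}")
        else:
            markers = [m[0] for m in stream["time_series"]]   # e.g. 'go:goal', 'success:goal'
            marker_times = stream["time_stamps"]
            print(f"Marker stream '{name}': {len(markers)} events")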

processed.csv contains the extracted events for each time step across all sessions. The events are organised in rows with the following columns (a usage sketch follows the column list):

  • p_id: participant ID.
  • block_i: block index.
  • block: block type (observation, DC or SC reaching).
  • trial: trial index.
  • ts: time step (in samples).
  • goal: trial goal (direction or object index).
  • reached: reached object index.
  • success: trial success or failure in reaching trials, and whether the time step prediction was correct in observation trials.
  • x,y,z: end-effector coordinates (m).
  • pred: decoded direction.
  • pred_obj: SC object prediction.
  • conf: SC prediction confidence (based on distance to predicted object).
  • alpha: proportion of autonomous assistance.
  • u_robot_x, y, z: assistance vector.
  • u_cmb_x, y, z: combined user and assistance vector that controls the robot.
  • dL: distance travelled since last time step (mm).
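
As a usage example, the sketch below loads processed.csv with pandas and computes a per-participant success rate for each reaching control mode. The exact labels stored in the block column are assumed here and may need adjusting.

    # Illustrative summary of reaching performance from processed.csv.
    import pandas as pd

    df = pd.read_csv("processed.csv")

    # Keep reaching blocks only; the block labels "DC" and "SC" are assumed.
    reaching = df[df["block"].isin(["DC", "SC"])]

    # Reduce each trial to a single row (its final time step).
    trials = reaching.groupby(["p_id", "block", "block_i", "trial"]).last().reset_index()

    # Success rate per participant and control mode (assumes success is stored as 0/1 or True/False).
    success_rate = trials.groupby(["p_id", "block"])["success"].mean()
    print(success_rate)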

Software

Software for running the experiment and processing the runs is available here.

Contact

If you have any questions, please contact Kirill at kkokorin@student.unimelb.edu.au.

Funding

ARC Training Centre in Cognitive Computing for Medical Technologies

Australian Research Council
