Using gaze data from Cloud#

The LSL stream contains gaze data at a resolution of ~66 Hz. You can get a higher sampling rate by using the gaze data downloaded from Pupil Cloud. To do this, you need to align the timestamps collected during LSL streaming with the timestamps recorded in Cloud.

We have built a command line interface tool that allows you to perform a linear model fit, mapping time recorded with LSL to time in Pupil Cloud, or vice versa.

The mapping uses events generated by the LSL relay.

Setup#

  1. Install the LSL relay, including the extra dependencies, via:

    pip install lsl-relay[pupil_cloud_alignment]
    
  2. Start the LSL relay for your Pupil device.

  3. In your LSL recording software (e.g. LabRecorder), select both the gaze data stream and the event stream.

  4. Start the LSL recording through your software of choice.

  5. In your Pupil Labs Companion App, tap the red “record” button and make sure the recording is running.

  6. Run your experiment/data collection.

  7. Stop the recording in the Pupil Labs Companion App.

  8. Stop the LSL recording.

  9. Wait until the gaze data has been uploaded to Pupil Cloud and the 200 Hz gaze data has been computed.

  10. Export the gaze data from Pupil Cloud by right-clicking on the recording and selecting Downloads -> Download Recording.

  11. Unzip the export from Pupil Cloud, place all files (XDF files and Cloud exports) in the directory where you want them, and run lsl_relay_time_alignment from your terminal.

Important

If your recording is short (less than 1 minute), you should increase the frequency at which lsl.timesync events are generated. As a rule of thumb, aim for at least 3 events sent throughout the recording, including the recording.begin and recording.end events, which are generated by starting and stopping the recording in the Pupil Labs Companion App.

You can change the frequency at which lsl.timesync events are sent by setting the --time_sync_interval argument.

Start the post-hoc time alignment#

The time alignment is started by executing:

lsl_relay_time_alignment <path_to_xdf> <paths_to_exports>

It requires the following positional arguments:

  • path_to_xdf is the path to the XDF file containing your LSL recording. The XDF file must contain event streams for each recorded device.

  • paths_to_exports is one or more paths pointing to the directories of raw data exports from Pupil Cloud. The raw data exports can be made for entire projects, or for each recording individually. Each export must contain the events.csv file.

Output of the post-hoc time alignment#

The post-hoc time alignment outputs a JSON file, time_alignment_parameters.json, in the same directory as the events.csv file from the Pupil Cloud export. The JSON file contains three fields:

  • cloud_to_lsl contains an intercept and a slope for mapping from Cloud time to LSL time

  • lsl_to_cloud also contains an intercept and a slope, but for mapping from LSL time to Cloud time

  • info contains the type of model that was fitted and the version of the time alignment.

Example:

{
    "cloud_to_lsl": {
        "intercept": -1657137430.0848174,
        "slope": 1.000021399510087
    },
    "lsl_to_cloud": {
        "intercept": 1657101968.9145195,
        "slope": 0.9999786009468196
    },
    "info": {
        "model_type": "LinearRegression",
        "version": 1
    }
}
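The two parameter sets are fitted independently, but they are (approximately) inverses of each other: mapping a timestamp from Cloud time to LSL time and back should recover the original value up to a tiny residual. A quick sanity check using the example values above:

```python
# Parameter values copied from the example json above.
cloud_to_lsl = {"intercept": -1657137430.0848174, "slope": 1.000021399510087}
lsl_to_cloud = {"intercept": 1657101968.9145195, "slope": 0.9999786009468196}

# An arbitrary timestamp in Cloud time [s].
t_cloud = 1657137500.0

# Map Cloud -> LSL, then LSL -> Cloud.
t_lsl = cloud_to_lsl["intercept"] + t_cloud * cloud_to_lsl["slope"]
t_roundtrip = lsl_to_cloud["intercept"] + t_lsl * lsl_to_cloud["slope"]

# The residual is tiny, since both fits describe the same clock relationship.
print(abs(t_roundtrip - t_cloud))
```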

Use the parameters from the linear model to map between Cloud time and LSL time#

Import the required dependencies before running the example code below:

import json
import pandas as pd

Load the Pupil-Cloud-exported gaze data and convert the timestamps from nanoseconds to seconds:

column_timestamp = "timestamp [s]"
cloud_gaze_data = pd.read_csv("./recordings/gaze.csv")
cloud_gaze_data[column_timestamp] = cloud_gaze_data["timestamp [ns]"] * 1e-9

Import the parameters from the json file:

with open("./recordings/time_alignment_parameters.json") as file:
    parameter_dict = json.load(file)

Define a simple linear model:

def perform_linear_mapping(input_data, parameters):
    return parameters["intercept"] + input_data * parameters["slope"]

Apply the linear model to transform timestamps from Pupil Cloud to LSL time domain:

cloud_gaze_data["lsl_time [s]"] = perform_linear_mapping(
    cloud_gaze_data[column_timestamp], parameter_dict["cloud_to_lsl"]
)

Save the time-aligned data as CSV file:

cloud_gaze_data.to_csv("./recordings/time_aligned_gaze.csv")

The new column 'lsl_time [s]' contains the LSL-compatible timestamps.
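Mapping in the opposite direction works the same way with the lsl_to_cloud parameters, e.g. to express a timestamp recorded via LSL in Pupil Cloud time. For illustration, the parameters and the timestamp below are inlined from the example json above rather than loaded from the file:

```python
# "lsl_to_cloud" parameters, inlined from the example json above.
lsl_to_cloud = {"intercept": 1657101968.9145195, "slope": 0.9999786009468196}

def perform_linear_mapping(input_data, parameters):
    return parameters["intercept"] + input_data * parameters["slope"]

# Hypothetical timestamp recorded in the LSL time domain [s].
lsl_event_time = 35531.85
cloud_event_time = perform_linear_mapping(lsl_event_time, lsl_to_cloud)
```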

Time alignment under the hood#

The post-hoc time alignment works by matching events from the LSL stream to events recorded by the Companion device (available through Pupil Cloud).

Each time a relay is started, it automatically sends events to the Companion device. Each event has a unique name containing a session ID, which is generated when the relay starts, and an event counter. As soon as the Companion device registers these events, they are timestamped and streamed back to LSL.

During the time alignment, we look for the unique session ID in the stream header of the LSL-generated XDF file and in the event names available in Cloud. When we find a matching pair, we compute a linear regression (sklearn's linear_model.LinearRegression) between the timestamps of these events, in both directions.
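As an illustration, the fitting step can be sketched as follows. This is a simplified stand-in for the tool's actual implementation: it uses a hand-rolled ordinary-least-squares fit in place of sklearn's LinearRegression, and the matched event timestamps are made-up values:

```python
import numpy as np

# Hypothetical matched event timestamps [s]: the same lsl.timesync events
# as seen in the LSL stream and in the Pupil Cloud export.
lsl_times = np.array([12.0, 72.5, 133.1, 193.6])
cloud_times = np.array([1657101981.0, 1657102041.5, 1657102102.1, 1657102162.6])

def fit_linear(x, y):
    # Ordinary least squares for y = intercept + slope * x,
    # computed on mean-centered data for numerical stability.
    dx = x - x.mean()
    dy = y - y.mean()
    slope = np.sum(dx * dy) / np.sum(dx * dx)
    intercept = y.mean() - slope * x.mean()
    return {"intercept": intercept, "slope": slope}

# Fit both directions, as the tool does.
lsl_to_cloud = fit_linear(lsl_times, cloud_times)
cloud_to_lsl = fit_linear(cloud_times, lsl_times)
```

With real data the two slopes differ slightly from 1.0, reflecting clock drift between the Companion device and the LSL host.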