Frequently Asked Questions

CogPilot Data Challenge 2.0

What is the overall goal of the CogPilot Data Challenge 2.0?

This data challenge focuses on developing AI approaches that turn multimodal physiological measures into accurate quantitative assessments of cognitive state and cognitive workload. Such technology is applicable to ground training, in-flight training, and any cognitively demanding task where training outcomes could be optimized.

What are the modeling tasks for the CogPilot Data Challenge 2.0?

There are two predictive modeling tasks for you to complete in this data challenge. Given the physiological data measured from a subject during a run:

  • Challenge Task 1: Predict the difficulty level of the run (there are 4 difficulty levels, making this a multiclass classification task)
  • Challenge Task 2: Predict the performance error of the run

Is there a targeted deployment platform for a final solution?

Not for this data challenge. However, solutions that leverage open-source resources are preferred.

What are the options for me to post my questions and get answers?

You can always reach the challenge organizers by sending an email to: cogpilot@mit.edu.

You can also join the CogPilot Data Challenge 2.0 Slack channel and post your questions and comments there. It’s a community for participants to discuss data challenge related topics.

Registration

Who is the target audience for the CogPilot Data Challenge 2.0?

Anyone interested in the intersection of AI, human performance, physiology, cognition, and flying! All participants are welcome.

When will the CogPilot Data Challenge 2.0 take place and what is the time commitment of participants?

The data challenge will begin in the middle of October and continue until Spring 2023. Registration will remain open for the entire Data Challenge.

Is the CogPilot Data Challenge 2.0 in-person or virtual?

This challenge will be hosted fully virtually.

If I want to participate as a team, does each member register individually?

Please have all members register individually and list their team's name.

Is there a limit to team size?

No, but a team of 4-8 members is usually recommended.

Data Set

How do I access the dataset?

The dataset is freely available on Physionet: https://doi.org/10.13026/azwa-ge48.

How do we learn more about the challenge data, including the data collection set-up, recording, modalities, and preprocessing?

Please check out the reference folder that comes with the challenge dataset download. It contains a wealth of information for this data challenge.

Is this data labeled?

Yes. The dataset includes a PerfMetrics.csv file that provides, for each run, both the difficulty level label (“Difficulty”) for Challenge Task #1 and the total flight performance error (“Cumulative_Total_Error”) for Challenge Task #2.
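As a sketch, the labels for both tasks can be pulled into a simple lookup table. The snippet below uses a toy inline CSV; any column names beyond the two documented ones ("Difficulty", "Cumulative_Total_Error") are assumptions for illustration.

```python
import csv
import io

# Toy stand-in for PerfMetrics.csv. The "Subject" and "Run" column names
# are hypothetical; only "Difficulty" and "Cumulative_Total_Error" are
# documented in the challenge materials.
perf_metrics_csv = """\
Subject,Run,Difficulty,Cumulative_Total_Error
001,1,2,153.7
001,2,4,402.1
"""

# Build a {(subject, run): (difficulty, error)} lookup covering both tasks.
labels = {}
for row in csv.DictReader(io.StringIO(perf_metrics_csv)):
    key = (row["Subject"], int(row["Run"]))
    labels[key] = (int(row["Difficulty"]), float(row["Cumulative_Total_Error"]))

print(labels[("001", 1)])  # difficulty label and performance error for run 1
```

In practice you would read the real PerfMetrics.csv from the dataset download instead of the inline string.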

Can you provide a process flow with approximate times that covers what the trainees go through before data recording starts until after data recording stops?

Trainees are first outfitted with a suite of wearable sensors. Then they perform a 5-10 minute practice run on the easiest difficulty level to get familiarized with the scenario.

The experimenter loads each scenario to the same starting location, aircraft state, and aircraft attitude, with the simulation “paused” so there is no aircraft movement. The sequence of difficulties is known to the experimenter but unknown and "random" to the subject. The only differences between levels of difficulty are changes in weather (visibility, wind, turbulence, and height of cloud ceilings). The experimenter then starts the data recording, which begins slightly before the start of the flight: the simulation is paused, data logging begins, the simulation is un-paused, and the participant begins the run. Plotting the aircraft airspeed shows that the initial few points have zero velocity, jumping to ~115 knots once the simulation is un-paused.

After the trainee lands the aircraft (or crashes), the experimenter asks the trainee to take their hands off the controls and then ends the data recording. At the end of each run, the trainee completes a Bedford workload assessment to report the level of subjective workload they experienced. The next run is then loaded. Please see the CogPilot datasheet contained in the dataset for more detailed information.
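As a minimal sketch, the un-pause point could be located from the airspeed jump described above. The samples and threshold here are illustrative, not taken from the dataset.

```python
# Hypothetical airspeed samples (knots): zero while the simulation is
# paused, jumping to ~115 knots once it is un-paused.
airspeed = [0.0, 0.0, 0.0, 114.8, 115.2, 116.0]

def find_unpause_index(speeds, threshold=50.0):
    """Return the index of the first sample above `threshold` knots,
    or None if the aircraft never moves."""
    for i, v in enumerate(speeds):
        if v > threshold:
            return i
    return None

print(find_unpause_index(airspeed))  # -> 3
```

Trimming each run's data to start at this index is one way to exclude the paused pre-flight samples from model inputs.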

What is included in the “Rest” periods?

There is a 5-minute Rest period before and after the 12 runs. During the rest periods, trainees sit quietly with all sensors recording data. The rest periods may provide information about an individual's physiological baseline.

How long is each run?

Runs are approximately 7 to 10 minutes, depending on flight speed and flying errors. Sometimes a novice subject may crash during the virtual flight, making the run shorter. Each of the 12 runs is flown at one of the four difficulty levels.

Are the recordings synchronized?

For each modality, the data and time vectors are aligned, and the linking time point is listed in the first column. The time in the first column is universal across modalities. However, the first timestamp in two different modality files (e.g., Subject001_EDAfile vs. Subject001_EMGfile) may not be the same: one modality stream might start slightly ahead of the other. The timestamps are the ground truth for identifying and correcting this slight offset.
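A minimal sketch of using the shared clock to handle the differing start times, with hypothetical timestamp vectors for two modalities:

```python
# Hypothetical timestamp vectors (seconds on the shared clock) for two
# modalities whose recordings start at slightly different times.
eda_t = [10.00, 10.25, 10.50, 10.75]
emg_t = [10.40, 10.45, 10.50, 10.55]

# Trim both streams to their overlapping time window; because the clock is
# universal across modalities, the window bounds apply to both files.
start = max(eda_t[0], emg_t[0])
stop = min(eda_t[-1], emg_t[-1])
eda_aligned = [t for t in eda_t if start <= t <= stop]
emg_aligned = [t for t in emg_t if start <= t <= stop]
print(eda_aligned, emg_aligned)
```

Real modalities also have different sampling rates, so after trimming you may additionally want to resample or interpolate onto a common time grid.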

For eye tracking, is there an association between the X and Y axis and what instrument the pilot would be looking at?

No. Because of head movement and rotation, the position of the instrument panel in VR space changes relative to the participant's eye-tracking gaze coordinates.

Is a "Trial" the same as a "Run" or are there multiple trials per run?

We use Trial and Run interchangeably. In the data files, we use "runs".

Should we hold out subjects for evaluation?

The entire dataset available for download is for you to develop your AI models. You can partition the data however you like for the purposes of model development; a typical approach is to use cross-validation. We have an independent dataset outside of the downloadable dataset for evaluation.
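One common partitioning choice for this kind of physiological data is a subject-wise split, so the same subject never appears in both training and validation folds. A minimal leave-one-subject-out sketch (the subject IDs and run counts are illustrative):

```python
# Hypothetical (subject, run) pairs standing in for the downloadable dataset.
runs = [("S01", 1), ("S01", 2), ("S02", 1), ("S03", 1)]

def leave_one_subject_out(samples):
    """Yield (train, validation) partitions, holding out one subject at a time
    so no subject contributes to both sides of a split."""
    subjects = sorted({s for s, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        val = [x for x in samples if x[0] == held_out]
        yield train, val

for train, val in leave_one_subject_out(runs):
    print(len(train), len(val))
```

This mirrors the evaluation setting described above, where the held-out evaluation data comes from outside the downloadable dataset.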

Model Development

Is there a preference for programming language or technology stack?

The starter code has been written in Python. However, you are welcome to use any language that you are comfortable with.

Can you suggest an ML course to help us get up to speed on the key concepts of ML?

While there are good ML courses online, here we recommend an ML course taught by one of the MIT co-PIs of the CogPilot project. Course Link: https://tamarabroderick.com/ml.html.

Model Evaluation

How are submitted entries evaluated?

Please refer to the Data Challenge Description Tab for details.

Miscellaneous

How can I participate in the data collection?

If you'd like to be a subject, please email cogpilot@mit.edu (data collections occur on the MIT campus in Cambridge, MA).

Can we run this simulator ourselves and is it the same as the publicly available one on Steam?

We use the Professional version of the X-Plane 11 simulation engine at https://store.steampowered.com/app/269950/XPlane_11/. We use the HTC Vive Pro Eye headset, but X-Plane 11 should be compatible with other types of VR headsets. We use a premium (paid) model of the T-6A Texan II aircraft, which is the aircraft used in the first flying phase of Air Force pilot training. This model is developed by FliteAdvantage and is available for the public to purchase here: https://www.fliteadvantage.com/product/t6a-virtual-model/.