AIforEarthChallenge2024

Challenge closed

Rules

For your submitted solution to be eligible, it must adhere to the following rules.

Challenge evaluation

Notebook requirements:

  • To keep evaluation transparent and streamlined, we ask that you incorporate the evaluation code for each task directly in your notebook. The sample submission notebook includes the evaluation code you should include for each task and clarifies the inputs and outputs each task requires. When evaluating your code, we will only change the area of interest and the date(s).
  • The submission notebook should summarize all evaluation scores in a dictionary, which will be output as ONE final CSV. The dictionary keys should be named as laid out in the sample submission notebook (see the sketch after this list). To summarize the inputs and outputs required for each task:
    • Task 1 - Inputs: geojson + year; Output: F1 score.
    • Task 2 - Inputs: geojson + year; Output: average number of simulated rounds to reach 95% accuracy.
    • Task 3 - Inputs: geojson + year; Output: RMSE.
    • Task 4 - Inputs: geojson + year; Output: RMSE.
    • Task 5 - Inputs: geojson + 2 dates; Output: F1 score.
    • Task 6 - Inputs: geojson + 2 dates; Output: RMSE.
    • Task 7 - Inputs: path to HLS cloud gap test dataset (uses AOI + 3 dates); Output: MAE.
  • We will only adjust the region and time. If the code fails to run or takes more than 5 hours to complete, the team will be given 24 hours to submit a corrected version.
    • The notebook will be scored on a location and dates disclosed 2 days after the submission deadline.
    • Note that task #4 requires two outputs: the first prediction is for wheat and the second for maize.
    • Note that tasks #5 and #6 require two dates as input rather than just one year.
    • Task #7 will have three dates as input with the goal of recreating the masked (cloudy) images on those dates. The specifics of the masking process can be found here. Task #7 requires three outputs, corresponding to the predictions for the three dates in order.
    • Review the example pseudocode submission notebook here and the template submission notebook here. Please include the evaluation code provided in the template submission notebook for each of your tasks.
  • Code should scale to produce results over a given region smaller than 2,000 km².
  • Your whole notebook must run in less than 5 hours on a g5.xlarge AWS EC2 instance.
    • In the submitted notebook, please ensure that no model is retrained; download your models from an S3 bucket or other publicly available link so that we are only running inference for each of the 7 tasks.
  • Please create a variable called ‘eligibility’ in your notebook. Assign it a value of 1 if your submission is prize eligible (i.e. it uses fully open data and openly licensed models) or a value of 0 if it is only ranking eligible (i.e. it uses closed data or models that are not under an open license). Review the Prize section to determine whether your submission is prize or ranking eligible.
  • As of right now, submissions will be evaluated asynchronously, so you may enter multiple submissions before the deadline; please allow some time for your score to be returned after submission.
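
The following is a minimal sketch of the expected notebook structure, not official challenge code: the dictionary keys, the weights URL, and the score values are hypothetical placeholders, and the real values should come from the evaluation code in the template submission notebook.

```python
import csv
import urllib.request

# Eligibility flag required by the rules:
# 1 = prize eligible (fully open data, openly licensed models), 0 = ranking eligible only.
eligibility = 1

# Download pretrained weights instead of retraining (hypothetical placeholder URL).
WEIGHTS_URL = "https://example-bucket.s3.amazonaws.com/model_weights.pt"
urllib.request.urlretrieve(WEIGHTS_URL, "model_weights.pt")

# In the real notebook, each value below is produced by that task's evaluation code
# (F1, rounds to 95% accuracy, RMSE, or MAE); the zeros are placeholders.
scores = {
    "task1_f1": 0.0,
    "task2_rounds_to_95": 0.0,
    "task3_rmse": 0.0,
    "task4_rmse_wheat": 0.0,
    "task4_rmse_maize": 0.0,
    "task5_f1": 0.0,
    "task6_rmse": 0.0,
    "task7_mae": 0.0,
    "eligibility": eligibility,
}

# Summarize all evaluation scores in ONE final CSV.
with open("submission_scores.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(scores.keys())
    writer.writerow(scores.values())
```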

The evaluation metric will be the F1 score for classification and object detection tasks, RMSE for regression tasks, and MAE for the generative task. Overall ranking will be determined by the average of the entrant's rank on each task.
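
As an illustration of the ranking rule only (not the official scoring code), per-task ranks could be averaged per entrant as follows, with the rank values invented for the example:

```python
# Hypothetical per-task ranks for two entrants across the 7 tasks.
task_ranks = {
    "team_a": [1, 3, 2, 4, 1, 2, 3],
    "team_b": [2, 1, 4, 1, 3, 1, 2],
}

# Average rank per entrant; the lowest average rank places highest overall.
average_rank = {team: sum(ranks) / len(ranks) for team, ranks in task_ranks.items()}
print(sorted(average_rank.items(), key=lambda item: item[1]))
```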

The second task, aquaculture detection, will be assessed with a simulation of human-in-the-loop learning, in which 1) the first 10 similarity search results are returned, 2) positive and negative results are labeled, and 3) the search is performed once again, with these final results used for the evaluation metric.
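
As a rough sketch of that simulated loop (not the official evaluation code), with `similarity_search` and `true_label` as hypothetical stand-ins for the entrant's search model and the reference labels:

```python
def simulate_rounds(similarity_search, true_label, target_accuracy=0.95, max_rounds=50):
    """Return the number of simulated labeling rounds needed to reach the target accuracy."""
    labeled = {}  # item id -> True (positive) / False (negative)
    for round_idx in range(1, max_rounds + 1):
        # 1) return the first 10 similarity search results, given the labels collected so far
        results = similarity_search(labeled, k=10)
        # 2) label the positive and negative results, simulating the human annotator
        for item in results:
            labeled[item] = true_label(item)
        # 3) score this round: fraction of returned results that are true positives
        accuracy = sum(true_label(item) for item in results) / len(results)
        if accuracy >= target_accuracy:
            return round_idx
    return max_rounds
```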

On the test set: we do not hold out hidden test sets for global historical datasets. We consider “overfitting” to the whole Earth across all tasks with a single model a valid strategy.

Any fully open (including for commercial use) data can be used for model training and finetuning, in addition to the data references provided.

Note: We are considering expanding the criteria to a “few shot challenge” where we allow the model to iterate with (automated) feedback on the results, to mimic how a model would be used operationally.

Creating and submitting a solution

Participants must register via the AI4EO platform and submit their notebooks for evaluation. The submitted code will be run using a continuous integration (CI) system. The evaluation code will be released openly as soon as possible.

Timing and duration

The #AIforEarthChallenge2024 will take place from 17 May 2024 to 9 September 2024, 16:00 CET.

Participation

Participants can submit multiple entries individually or collaborate in teams or as a company.

We encourage participants to showcase the value of their models, regardless of their licensing.

Co-organizers can still participate in the challenge, as they will not be privy to any additional evaluation information (i.e. organization happens in a public forum and the evaluation code is shared with everyone). Please email challenge@madewithclay.org if you’re interested in co-organizing.

If financial resources for model training pose a barrier for you to join the challenge, please reach out to us for support in accessing available resources.

AI4EO is carried out under a programme of, and funded by the European Space Agency (ESA).

Disclaimer: The views expressed on this site shall not be construed to reflect the official opinion of ESA.
