Challenge Overview

Problem Statement


Prize Distribution

  Prize          USD
  1st            $8,500
  2nd            $6,500
  3rd            $4,500
  4th            $3,500
  5th            $2,500
  Total Prizes   $25,500

Summary

NASA has an experimental Radio-Frequency Identification (RFID) tracking system on board the International Space Station (ISS) that can provide the location of tagged items with average, standard deviation, and maximum errors of 1.5, 0.5, and 3 meters, respectively. We would like to see how much improvement can be obtained using other algorithms that mine the archived RFID data.

Background and motivation

Tracking items in space habitats can be more challenging than it might at first seem. The environment is predominantly closed, with the exception of visiting vehicles that deliver new cargo and the jettison of trash or return of some items. However, a number of factors complicate tracking, including crews that change out at 6-month intervals, laboratory space that doubles as living space, cargo transfer bags (CTBs) that are nearly identical in appearance, and limited stowage space.

To address cargo tracking issues in future deep space missions, NASA has initiated the REALM (RFID-Enabled Autonomous Logistics Management) experiments on the International Space Station. The first phase, REALM-1, involves an RFID reader system with 24 fixed antennas in 3 of the ISS modules, which are about 3.5m in diameter and range in length from 6-8m. These 3 modules are referred to herein as "instrumented modules". There are about 3,200 RFID tags on a variety of items, as well as about 100 marker tags that are placed on the ISS internal structure and serve for calibration or machine learning. Many of the individual tagged items are contained within CTBs, which are also tagged. The raw RFID data contains the tag identification code, estimates of the signal strength and phase received by the reader (or interrogator), the reader and antenna on which the tag was read, and a few other parameters. All RFID data is downlinked and archived on the ground. Scattering of the RFID signals in the confines of the ISS complicates triangulation methods, although a location accuracy of about 1.5m (average) has been obtained.

Objective

Your task will be to detect the location of RFID-tagged items within the ISS as accurately as possible. The locations your algorithm returns will be compared to ground truth data, and the quality of your solution will be judged by how closely your predictions match the expected results. See Scoring for details.

Input Data

All files described in this section are available in 3 zip files.

ISS shape

The solution domain for the contest is indicated in Figure 1 and includes modules that are instrumented with RFID readers and antennas as well as some that are not instrumented. The instrumented modules are highlighted in blue in Figure 1. Modules shaded purple are not instrumented, but are considered "in bounds" for the purpose of the contest. That is, some CTBs will be stowed in non-instrumented modules during static events (see below). Modules shaded green in Figure 1 are considered invalid in the sense that targeted cargo will not be stowed there during the static data events.

ISS.pdf contains the approximate dimensions of the relevant ISS modules. Note that in this contest all distances are measured in inches. Also note there are slight differences in naming compared to Figure 1, e.g. "Node 1" is shown as "N1" in ISS.pdf, "LAB" is shown as "US Lab", etc. A detailed 3D model of the ISS is available in .blend and .obj formats.

Antenna locations

antenna_locations.txt contains the positions of the 24 RFID reader antennas. These positions are fixed in this contest.

Tagged items

The contest features 3 kinds of tagged items: marker tags (tags with fixed and relatively accurate positions), community tags (tags stored at ISS stowage racks) and target CTBs (tags stored in or attached to a limited number of cargo transfer bags especially selected for this contest). More details of these 3 kinds follow.

Marker tags

marker_tag_locations.txt contains data of RFID tags that are affixed to the ISS structure, and thus have fixed positions. The accuracy of the marker tag positions is within 6 inches. Column epc_id contains the unique identifier of these items (EPC: Electronic Product Code). Marker tags are of two different physical types, Metalcraft and Squiggle, denoted by 1 and 0, respectively, in the "tag type" field. All other tags (in communities and CTBs) are of Squiggle type. The signal strength of Metalcraft tags is approximately 6 dB lower than that of the Squiggle tags. Raw RF signal coming from marker tags can be used as training data, but some marker tags will be used for testing as well.

Community locations and community tags

community_locations.txt contains the center positions of those stowage racks of the ISS that are available for training. These positions are fixed in this contest. Note that these positions are only approximate, and the inaccuracy varies. The dimensions of the stowage locations greatly exceed the accuracy with which the centers are listed in community_locations.txt.

  • racks: 2m high x 1m wide x 0.86m deep
  • PMA-1: 50" diameter at one end x 30" diameter at the other end, length ~ 6ft. Cargo here lines the perimeter 1 CTB layer deep.
  • End cones - there are several of these regions. They are about 4m in diameter and 1m deep. The volume there tends to get filled with CTBs.
  • PMM - by far the largest stowage area, but not instrumented. Only a small number of the RFID tags are read from this module.

communities.txt contains the list of tagged items stored at each of the community locations. The file contains sections in this format:

COMMUNITY_ID
[EPC_ID,...]

Note that this data is known to contain errors; it is approximately 90-95% accurate.
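A minimal Java parsing sketch for the section format above. It assumes each section is exactly two lines (the community ID, then the bracketed member list) with no blank lines in between; if the actual file interleaves blank lines, the loop would need to skip them.

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CommunityParser {
  // Parses communities.txt into community id -> member EPC ids.
  public static Map<String, List<String>> parse(Path file) throws Exception {
    Map<String, List<String>> members = new LinkedHashMap<>();
    List<String> lines = Files.readAllLines(file);
    for (int i = 0; i + 1 < lines.size(); i += 2) {
      String id = lines.get(i).trim();
      String body = lines.get(i + 1).trim();
      body = body.substring(1, body.length() - 1); // strip the surrounding [ ]
      members.put(id, Arrays.asList(body.split(",")));
    }
    return members;
  }
}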

Target CTBs

There are tags attached to the exterior of 6 target CTBs as well as to some of the items within these CTBs. Some CTBs have 1 external tag, while others have 2 external tags on orthogonal faces, each with the same EPC ID code. Each of the 6 CTBs is placed either in specific stowage areas of the ISS or in open regions.

The CTBs were placed in specific locations for a period that includes crew sleep, and it is anticipated that data acquired during crew sleep is less likely to be perturbed by crew movements. These events are referred to as static data events, which encompass specific times at which estimates will be evaluated. At the start of the next crew day, some or all of the 6 target CTBs may be temporarily relocated prior to the next static data event. The next static data event is staged with the crew moving the 6 CTBs to new locations prior to crew sleep, and data is again acquired during the crew sleep period. This was repeated each day during the data collection period.

ctb_locations.txt and ctb_tags.txt contain the known positions and the tag distribution per target CTB, in the same format as above for communities. The uncertainty of the target CTB positions is 1 foot (12 inches). Note that there may be overlap between the community tags and CTB tags: the same tag that is listed as a community member during one static data event may be placed into a CTB during another static data event. Data from 2 of the 6 CTBs is available for training: one has public data spanning 3 static data events, the other only one static data event.

Raw RF data

rfid_raw_readings_train.txt (4.8 GB unpacked) contains timestamped raw RF signal originating from RFID-tagged items (all 3 kinds listed above), as read by all antennas. The format of this file is the following:

date,epc_id,antenna_id,rssi,freq,phase,power,cnt
2018-07-16 04:03:31.78,5154,27,-48,910750,53,3000,1
2018-07-16 04:03:31.797,61890,27,-57,910750,87,3000,2
...
  • date: date of the reading
  • epc_id: tag id; i.e., the EPC that was read
  • antenna_id: ID of the antenna, corresponds to the "id" column in antenna_locations.txt
  • rssi: RSSI value read in dBm
  • freq: frequency on which the reading was made (kHz)
  • phase: average phase of the signal (degrees)
  • power: signal power from the reader (centi-dBm; the maximum is 3000 centi-dBm, or 30 dBm. Signals are attenuated such that the power at the antennas is approximately 27 dBm.)
  • cnt: number of reads of the epc on the specified antenna. The frequency, phase, and power apply to only one of the reads.
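As an illustration, here is a minimal Java sketch that streams the raw readings file and parses the fields listed above. The file name and column layout are taken from this section; treating EPC ids as numeric follows the sample lines and is an assumption, and the feature accumulation is left as a placeholder.

import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RawReadingParser {
  public static void main(String[] args) throws Exception {
    try (BufferedReader in = Files.newBufferedReader(Paths.get("rfid_raw_readings_train.txt"))) {
      in.readLine(); // skip the header line
      String line;
      while ((line = in.readLine()) != null) {
        String[] f = line.split(",");
        String date   = f[0];                   // timestamp of the reading
        long epcId    = Long.parseLong(f[1]);   // tag EPC id (numeric in the samples)
        int antennaId = Integer.parseInt(f[2]); // antenna id
        int rssi      = Integer.parseInt(f[3]); // dBm
        int freqKhz   = Integer.parseInt(f[4]); // kHz
        int phaseDeg  = Integer.parseInt(f[5]); // degrees
        int powerCdbm = Integer.parseInt(f[6]); // centi-dBm
        int cnt       = Integer.parseInt(f[7]); // reads covered by this record
        // ... accumulate per-tag / per-antenna features here
      }
    }
  }
}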

Tasks

A task has two components:

  • Raw RF measurement data from a short period of time (between 15 minutes and 4 hours), containing signal from all tags present on the ISS, received by all antennas. This data is available in the same format as the training data described above, in files named rfid_raw_<task-id>.txt.
  • A list of tag IDs given in files named <task-id>.txt. These IDs are not present in the training data; your task will be to predict the location of these tags, using only the training data and the measurement data that belongs to the same task. Note that the same ID may appear in other tasks as well, but you cannot assume that the locations of the tags remain the same across tasks. However, tags remain at the same location during the time range the task covers. The tags whose locations must be predicted may belong to any of the 3 kinds of tags.

There are 20 tasks in this challenge, with IDs task-01, ..., task-20.
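A sketch of loading the per-task tag lists, assuming one tag ID per line in each <task-id>.txt file (the exact line format of these files is not specified above):

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TaskLoader {
  // Loads the tag ids to be located for each of the 20 tasks.
  public static Map<String, List<String>> load(Path dir) throws Exception {
    Map<String, List<String>> idsByTask = new LinkedHashMap<>();
    for (int i = 1; i <= 20; i++) {
      String taskId = String.format("task-%02d", i); // task-01 ... task-20
      idsByTask.put(taskId, Files.readAllLines(dir.resolve(taskId + ".txt")));
    }
    return idsByTask;
  }
}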

System Description and Configuration

The REALM readers each transmit approximately 1/2 W signals that are compatible with EPCglobal Generation 2. The readers frequency hop according to a frequency hopping spread spectrum protocol that is defined in Gen2_Protocol_Standard. Each reader cycles through 4 connected antennas, dwelling on each for a period of 1 second. After the 4 antennas have been sampled, there is a very brief off period of approximately 25 ms before the cycle repeats. The readers are configured to operate in Session 1, as defined in Gen2_Protocol_Standard.

The REALM system ceases transmission for periods typically lasting 4-4.5 hours each day, although occasionally these periods are longer. These will manifest as periods without data.
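For reference, the nominal timing constants implied by the two paragraphs above, as a Java sketch. The phase of each reader's cycle is not given, so mapping a timestamp to an antenna slot is illustrative only and would have to be calibrated against the data:

public final class ReaderTiming {
  public static final double DWELL_SECONDS = 1.0;   // per-antenna dwell
  public static final double OFF_SECONDS = 0.025;   // gap after each 4-antenna sweep
  public static final double CYCLE_SECONDS = 4 * DWELL_SECONDS + OFF_SECONDS; // ~4.025 s

  // Antenna slot (0-3) a reader dwells on at time t seconds, assuming the
  // cycle starts at t = 0; the brief off period is clamped to slot 3.
  public static int antennaSlot(double t) {
    double inCycle = t % CYCLE_SECONDS;
    return Math.min(3, (int) (inCycle / DWELL_SECONDS));
  }
}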

There are multiple reasons why any specific tag may not be read during a 1 second antenna dwell time:

  • The tag is obscured by the crew or other cargo.
  • The tag has been moved.
  • The tag did not reach the top of the queue in the reader inventory round during the 1 second dwell interval.
  • The tag is not in read range of the specific antenna.

It should be noted that tags can typically be read in non-instrumented modules that directly connect to the instrumented modules. However, the coverage areas in the non-instrumented modules are not well established at this time.

Output Files

Your output must be a single text file (with a .txt extension) that contains the location predictions for each tag listed in all 20 tasks.

The file should contain comma-separated lines formatted as follows:

task-id,epc_id,x,y,z,confidence_radius

Make sure you measure x, y, z and confidence_radius in inches.

A sample line:

task-01,9876,-192.0,55.0,12.0,25
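A minimal writer sketch matching this format. All numeric values are in inches; the one-decimal formatting mirrors the sample line and is an assumption (any formatting that parses as a number should be equivalent).

import java.io.PrintWriter;

public class SubmissionWriter {
  // Appends one prediction line in the format shown above.
  public static void writeLine(PrintWriter out, String taskId, String epcId,
                               double x, double y, double z, double confRadius) {
    out.printf("%s,%s,%.1f,%.1f,%.1f,%.1f%n", taskId, epcId, x, y, z, confRadius);
  }
}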

Your output must only contain algorithmically generated location predictions. It is strictly forbidden to include manually created predictions, or answers that - although initially machine generated - are modified in any way by a human.

Functions

This match uses the result submission style, i.e. you will run your solution locally using the provided files as input, and produce a text file that contains your answer.

In order for your solution to be evaluated by Topcoder's marathon system, you must implement a class named IssRfidLocator, which implements a single function: getAnswerURL(). Your function will return a String corresponding to the URL of your submission file. You may upload your files to a cloud hosting service such as Dropbox or Google Drive, which can provide a direct link to the file.

To create a direct sharing link in Dropbox, right click on the uploaded file and select Share. You should be able to copy a link to this specific file, which ends with the suffix "?dl=0". This URL will point directly to your file if you change this suffix to "?dl=1". You can then use this link in your getAnswerURL() function.

If you use Google Drive to share the link, then please use the following format: "https://drive.google.com/uc?export=download&id=" + id

Note that Google has a file size limit of 25MB and can't provide direct links to files larger than this. (For larger files the link opens a warning message saying that automatic virus checking of the file is not done.)

You can use any other way to share your result file, but make sure the link you provide opens the filestream directly, and is available for anyone with the link (not only the file owner), to allow the automated tester to download and evaluate it.

An example of the code you have to submit, using Java:

public class IssRfidLocator {
  public String getAnswerURL() {
    //Replace the returned String with your submission file's URL
    return "https://drive.google.com/uc?export=download&id=XYZ";
  }
}

Keep in mind that if you achieve a score in the top 5, the complete code that generated your results will be verified at the end of the contest, as described in the "Requirements to Win a Prize" section: participants will be required to provide fully automated executable software to allow for independent verification of the performance of the algorithm and the quality of the output data.

Scoring

A full submission will be processed by the Topcoder Marathon test system, which will download, validate and evaluate your submission file.

Any malformed or inaccessible file, or one that doesn't contain the expected number of lines will receive a zero score.

If your submission is valid, your solution will be scored using the following algorithm.

First, item-level scores are calculated for each {task-id, epc_id} pair in the test set. Let {xt, yt, zt} be the coordinates of the tag's true location, {x, y, z} the coordinates you returned, and R the confidence radius you returned.

Then let E be the Euclidean distance between the true and predicted locations (measured in inches): E = sqrt((xt-x)^2 + (yt-y)^2 + (zt-z)^2)

Your score for this item is 0 if E > R. Otherwise

score = min(Rmin / R, 1),

where Rmin is a tag-specific, fixed minimum distance. The value of Rmin is

  • 48 inches (4 feet) for marker tags,
  • 48 inches (4 feet) for tags stored at one of the community locations,
  • 12 inches (1 foot) for tags attached to or placed within target CTBs.

Note that Rmin is not disclosed for each tag; it is known only by the scoring algorithm.

Then the overall score will be calculated as a weighted average of the item-level scores. Weights are assigned so that the target CTBs contribute 4 times as much to the score as the other 2 kinds of tags:

FinalScore = (marker_score + community_score + 4 * ctb_score) / 6,

where

marker_score is the average of item-level scores of marker tags,

community_score is a weighted average of item-level scores of the tags stored at community locations, where weights are inversely proportional to the number of items in the community,

ctb_score is a weighted average of item-level scores of the tags stored in target CTBs, where weights are inversely proportional to the number of items in the CTB.

In all 3 components of the score (marker_score, community_score, ctb_score) the average is taken across all tasks. For example if task-01 contains 10 marker tags, task-02 contains 20 marker tags and no other tasks contain marker tags then marker_score is the average of 30 item level scores.

A special case: tags that produce no signal at all during the measurement period of a task will receive a weight of 0. (Typically these are community tags but may belong to the other 2 kinds of tags as well.) Note that you still need to include them in your prediction file but the location you give will have no effect on your score.

Finally, for display purposes your score will be multiplied by 1,000,000.
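The scoring rule above can be summarized in a short Java sketch, for self-evaluation on training data. Note that the per-tag Rmin values are private to the official scorer; using the published category minimums (48 or 12 inches) here would be an assumption for illustration only.

import java.util.List;

public class ScoreSketch {
  // Item-level score: 0 if the true location lies outside the returned
  // confidence radius r, otherwise min(rMin / r, 1).
  static double itemScore(double[] t, double[] p, double r, double rMin) {
    double dx = t[0] - p[0], dy = t[1] - p[1], dz = t[2] - p[2];
    double e = Math.sqrt(dx * dx + dy * dy + dz * dz); // Euclidean error, inches
    return e > r ? 0.0 : Math.min(rMin / r, 1.0);
  }

  // Weighted average with weights inversely proportional to group size,
  // as used for community_score and ctb_score.
  static double weightedAverage(List<Double> scores, List<Integer> groupSizes) {
    double num = 0, den = 0;
    for (int i = 0; i < scores.size(); i++) {
      double w = 1.0 / groupSizes.get(i);
      num += w * scores.get(i);
      den += w;
    }
    return den == 0 ? 0 : num / den;
  }

  // Overall score, scaled by 1,000,000 for display.
  static double finalScore(double marker, double community, double ctb) {
    return 1_000_000.0 * (marker + community + 4.0 * ctb) / 6.0;
  }
}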

Example submissions can be used to verify that your chosen approach to upload submissions works and also that your implementation of the scoring logic is correct. The tester will verify that the returned String contains a valid URL and that its content is accessible, i.e. that the tester is able to download the file from the returned URL. If your file is valid, it will be evaluated, and detailed score values will be available in the test results. The example evaluation is based on a small subset of the training data, see rfid_raw_task-example.txt available in tasks.zip.

Example submissions must contain 5 lines of text, corresponding to the 5 tags listed in task-example.txt. Though recommended, it is not mandatory to create example submissions. The scores you achieve on example submissions have no effect on your provisional or final ranking. Example submissions can be created using the "Test Examples" button on TopCoder's submission uploader interface.

Full submissions must contain in a single file the location predictions that your algorithm made for all {task-id, epc_id} pairs in the test set. Full submissions can be created using the "Submit" button on TopCoder's submission uploader interface.

Final Scoring

The top 10 competitors according to the provisional scores will be invited to the final testing round. The details of the final testing are described in a separate document. The main requirement is that, within 5 days after the end of the provisional phase, you create a dockerized version of your system that we can run in a uniform way, using new tasks.

Your solution will be subjected to three tests:

First, your solution will be validated, i.e. we will check whether it produces the same output file as your last submission, using the same input files used in this contest. Note that this means your solution must not be improved further after the provisional submission phase ends. (We are aware that it is not always possible to reproduce the exact same results. E.g. if you do online training then the difference in the training environments may result in a different number of iterations, meaning different models. Also, you may have no control over random number generation in certain 3rd party libraries. In any case, the results must be statistically similar, and in case of differences you must have a convincing explanation why the same result cannot be reproduced.)

Second, your solution will be tested against a new set of tasks.

Third, the resulting output from the steps above will be validated and scored. The final rankings will be based on this score alone.

Competitors who fail to provide their solution as expected will receive a zero score in this final scoring phase, and will not be eligible to win prizes.

General Notes

  • This match is NOT rated.
  • Teaming is allowed. Topcoder members are permitted to form teams for this competition. After forming a team, Topcoder members of the same team are permitted to collaborate with other members of their team. To form a team, a Topcoder member may recruit other Topcoder members, and register the team by completing this Topcoder Teaming Form. Each team must declare a Captain. All participants in a team must be registered Topcoder members in good standing. All participants in a team must individually register for this Competition and accept its Terms and Conditions prior to joining the team. Team Captains must apportion prize distribution percentages for each teammate on the Teaming Form. The sum of all prize portions must equal 100%. The minimum permitted size of a team is 1 member; the maximum permitted team size is 5 members. Only team Captains may submit a solution to the Competition. Topcoder members participating in a team will not receive a rating for this Competition. Notwithstanding Topcoder rules and conditions to the contrary, solutions submitted by any Topcoder member who is a member of a team but is not the Captain of the team may be deleted and are ineligible for award. The deadline for forming teams is 11:59pm ET on the 14th day following the date that Registration and Submission opens as shown on the Challenge Details page. Topcoder will prepare a Teaming Agreement for each team that has completed the Topcoder Teaming Form, and distribute it to each member of the team. Teaming Agreements must be electronically signed by each team member to be considered valid. All Teaming Agreements are void unless electronically signed by all team members by 11:59pm ET of the 21st day following the date that Registration and Submission opens as shown on the Challenge Details page. Any Teaming Agreement received after this period is void. Teaming Agreements may not be changed in any way after signature.
  • The registered teams will be listed in the contest forum thread titled "Registered Teams".
  • Organizations such as companies may compete as one competitor if they are registered as a team and follow all Topcoder rules.
  • Relinquish - Topcoder is allowing registered competitors or teams to "relinquish". Relinquishing means the member will compete, and we will score their solution, but they will not be eligible for a prize. Once a person or team relinquishes, we post their name to a forum thread labeled "Relinquished Competitors". Relinquishers must submit their implementation code and methods to maintain leaderboard status.
  • In this match you may use any programming language and libraries, including commercial solutions, provided Topcoder is able to run it free of any charge. You may use open source languages and libraries provided they are equally free for your use, use by another competitor, or use by the client. If your solution requires licenses, you must have these licenses and be able to legally install them in a testing VM (see "Requirements to Win a Prize" section). Submissions will be deleted/destroyed after they are confirmed. Topcoder will not purchase licenses to run your code. Prior to submission, please make absolutely sure your submission can be run by Topcoder free of cost, and with all necessary licenses pre-installed in your solution. Topcoder is not required to contact submitters for additional instructions if the code does not run. If we are unable to run your solution due to license problems, including any requirement to download a license, your submission might be rejected. Be sure to contact us right away if you have concerns about this requirement.
  • If your solution includes licensed software (e.g. commercial software, open source software, etc), you must include the full license agreements with your submission. Include your licenses in a folder labeled "Licenses". Within the same folder, include a text file labeled "README" that explains the purpose of each licensed software package as it is used in your solution.
  • External data sets and pre-trained models are allowed for use in the competition provided the following are satisfied:
    • The external data and pre-trained models are unencumbered with legal restrictions that conflict with its use in the competition.
    • The data source or data used to train the pre-trained models is defined in the submission description.
  • Use the match forum to ask general questions or report problems, but please do not post comments and questions that reveal information about possible solution techniques.

Requirements to Win a Prize

In order to receive a final prize, you must do all the following:

Achieve a score in the top 5 according to final test results. See the "Final scoring" section above.

Once the final scores are posted and winners are announced, the prize winner candidates have 7 days to submit a report outlining their final algorithm, explaining the logic behind it and the steps of their approach. You will receive a template that helps you create your final report.

If you place in a prize winning rank but fail to do any of the above, then you will not receive a prize, and it will be awarded to the contestant with the next best performance who did all of the above.

Additional Eligibility

NASA Employees are prohibited by Federal statutes and regulations from receiving an award under this Challenge. NASA Employees are still encouraged to submit a solution. If you are a NASA Employee and wish to submit a solution please contact Topcoder who will connect you with the NASA Challenge owner. If your solution meets the requirements of the Challenge, any attributable information will be removed from your submission and your solution will be evaluated with other solutions found to meet the Challenge criteria. Based on your solution, you may be eligible for an award under the NASA Awards and Recognition Program or other Government Award and Recognition Program if you meet the criteria of both this Challenge and the applicable Awards and Recognition Program. If you are an Employee of another Federal Agency, contact your Agency's Office of General Counsel regarding your ability to participate in this Challenge.

If you are a Government contractor or are employed by one, your participation in this challenge may also be restricted. If you or your employer receives Government funding for similar projects, neither you nor your employer is eligible for award under this Challenge. Additionally, the U.S. Government may have Intellectual Property Rights in your solution if your solution was made under a Government Contract, Grant or Cooperative Agreement. Under such conditions, you may not be eligible for award.

If you work for a Government Contractor and this solution was made either under Government Contract, Grant or Cooperative Agreement or while performing work for the employer, you should seek legal advice from your employer's General Counsel on your conditions of employment which may affect your ability to submit a solution to this Challenge and/or to accept award.


Definition

    
Class:IssRfidLocator
Method:getAnswerURL
Parameters:
Returns:String
Method signature:String getAnswerURL()
(be sure your method is public)
    

Examples

0)
    
"1"
Returns: "Test case 1"

This problem statement is the exclusive and proprietary property of TopCoder, Inc. Any unauthorized use or reproduction of this information without the prior written consent of TopCoder, Inc. is strictly prohibited. (c)2020, TopCoder, Inc. All rights reserved.