We strongly believe in openness, and that only open data is trustworthy data. All Safecast data is published into the Public Domain under a CC0 designation. We have always encouraged other citizens’ groups as well as government agencies to implement open data policies and methodologies from the outset. Most don’t, so we’d like to set an example.
We’ve made our data available through several routes:
The main route is directly via our API page at api.safecast.org.
It’s not necessary to sign up to query or download data, so this can be done anonymously. A number of search/filter parameters let you filter by time, location, which device the data came from, the user ID from which it was uploaded, and more. It’s also possible to download the entire dataset at once as a large CSV file. It’s fairly simple for others to link to this database and generate their own maps, graphs, etc.; no permission is necessary to do it. These are the kinds of “best practices” we’ve shared with other groups at our workshops.
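To make the filtering concrete, here is a minimal sketch of an anonymous query against the API. The endpoint path and parameter names (`measurements.json`, `captured_after`, `captured_before`, `device_id`, `user_id`) are illustrative assumptions modeled on the filters described above; check api.safecast.org for the authoritative reference.

```python
from urllib.parse import urlencode

# Base endpoint for measurement queries -- an assumption; see api.safecast.org.
BASE = "https://api.safecast.org/measurements.json"

def build_query(**filters):
    """Return a GET URL for the measurements endpoint with the given filters.

    No API key or sign-up is required, so the URL can be fetched anonymously
    with any HTTP client.
    """
    return BASE + "?" + urlencode(filters)

# Example: measurements from one device and uploader within a time window.
# Parameter names are hypothetical placeholders for the real filter set.
url = build_query(
    captured_after="2011-03-11T00:00:00Z",   # start of time window
    captured_before="2011-12-31T23:59:59Z",  # end of time window
    device_id=47,                            # which device produced the data
    user_id=1,                               # uploader's user ID
)
print(url)
```

The same URL-building pattern works for any combination of filters; fetching it returns JSON that can feed directly into third-party maps or graphs.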
Right-clicking at any point on our web map gives a “Query Safecast API here” command. Depending on the zoom level, it will return all of the nearby measurements (usually 100 or so), showing the who, what, where, and when of each. You can refine the search from there by radius in meters, time period, and so on, if desired.
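The right-click query above is essentially a point-radius lookup, which could be sketched like this. The `latitude`/`longitude`/`distance` parameter names and the sample record fields are assumptions for illustration, not the definitive API contract, and the sample response is made-up data.

```python
import json
from urllib.parse import urlencode

# Point-radius query like the map's right-click command performs.
# Parameter names are assumed; distance is taken to be the radius in meters.
params = {
    "latitude": 37.4218,    # decimal degrees
    "longitude": 141.0329,
    "distance": 500,        # search radius in meters (assumed unit)
}
url = "https://api.safecast.org/measurements.json?" + urlencode(params)

# The response is a JSON array of measurements, each carrying the
# who/what/where/when. This record is a fabricated example for parsing only.
sample = json.loads(
    '[{"value": 0.12, "unit": "usv",'
    ' "latitude": 37.4219, "longitude": 141.0330,'
    ' "captured_at": "2015-06-01T09:30:00Z", "user_id": 1}]'
)
for m in sample:
    print(f'{m["captured_at"]}: {m["value"]} {m["unit"]} (user {m["user_id"]})')
```

Narrowing the `distance` value or adding time filters to `params` is how the search gets refined from the initial ~100 nearby results.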
Clicking “About” under the Safecast logo on the map leads to more information, including our “data processing methodology.” This has very detailed information about how the data is processed, visualized, and so on. For us it’s important to have this easily accessible and out in the open, to quickly answer technical questions people might have. It also makes it easier for people to suggest improvements. We encourage other data sources to provide this kind of detailed information as well, so we can all evaluate the strengths and weaknesses of what they’re presenting.
The Safecast API is a constantly evolving piece of work [repo here on github] which has had many contributors, and it takes constant effort to keep it running smoothly. A while back we realized that in order to implement some very desirable new features we hadn’t anticipated at the beginning, like allowing people to easily see how levels in a particular area have changed over time, we needed to rebuild a lot of it from the ground up. An active volunteer team is working on this at the moment, running experimental versions of the entire database system in virtual machines. We haven’t yet seen any other groups gathering and presenting radiation data who are prepared to devote this kind of thought and energy to the data side of their activity. Most seem to use whatever looks easiest, and soon discover they have a lot of data and no way to share it openly, or to allow others to independently evaluate its validity. We’d be happy if others learned from our experience, however, and are willing to help them get up to speed.