On the Beyoncé tweet map
Kenneth Field, Senior Cartographic Product Engineer at Esri and (to some, at least) controversial blogger, felt compelled to write a critique of Simon Rogers' Beyoncé Twitter map (made on the occasion of an album release), which Time Magazine hyped, punningly, as being “flawless” (to be very clear: the “flawless” attribute does not originate with the map author, and I won't get hung up on it, since it's a pun for the pun's sake).
I share many of Kenneth's points in this case (especially regarding data quality). I'd thus like to chime in on some of them, as well as add a few of my own:
1. I agree that we see quite a number of insubstantial, link-bait, “map-listicle” maps online (“These 20 maps will …” – knock your socks off, or similar). While I can definitely be a sucker for nice visualizations (i.e. I can perfectly well enjoy them as entertainment), it sometimes saddens me a bit that more consideration isn't given to properly crafting and publishing maps (e.g. basics like specifying data sources, the limitations of the data, the authors, and an appropriate projection). In the case of this map, for example, the slider shows a time and the subtitle states that tweets from December 12–13 were analysed, but we don't even know which timezone that time refers to. I tried, but found it quite hard to guess the local times in the map as it animates.
2. The mix of dark basemap and bright purple in the tweet map works well to stimulate the (my) eye, and I really like the animated nature of this map, since the temporal dimension adds a lot to the way the spatial patterns unfold. That said, to me the map looks far too nervous when animated. I would have wished for better temporal smoothing, and potentially also more spatial smoothing or even aggregation.
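To make the temporal smoothing idea concrete, here is a minimal sketch (my own illustration, not how the actual map was built): assuming the tweets have already been binned into per-frame counts, a centred moving average damps the frame-to-frame flicker of an animated map.

```python
import numpy as np

def smooth_counts(counts, window=5):
    """Centred moving average over time bins (window should be odd)."""
    kernel = np.ones(window) / window
    return np.convolve(counts, kernel, mode="same")

# Toy per-frame tweet counts that flicker heavily between frames:
noisy = np.array([0, 8, 1, 9, 2, 10, 1, 7, 0], dtype=float)
smoothed = smooth_counts(noisy)
# Adjacent frames of `smoothed` differ far less than those of `noisy`,
# which is exactly the "less nervous" animation I'd wish for.
```

One could of course go further (exponential decay, kernel smoothing along the time axis), but even this crude window would already calm the animation down.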
3. The parameters for the “density” raster, or its mapping into colour space, are not well chosen, in my opinion. Looking at the eastern US in the image above, for example, reminds me of what photographers call blown-out or burnt-out highlights.
4. The reason I put “density” in quotes is that, to the untrained eye, the visualization looks like what we geographers call a kernel density estimation (KDE), but it isn't one (at least not a good one). I'm not sure there is a term for this method, which paints a semi-opaque disk for every occurrence of a phenomenon and then alpha-blends overlapping disks – faux density surface? (I know, you can classify it as a KDE with a uniform kernel, but we overwhelmingly apply density estimation (hint!) to account for uncertainty, and thus use a smoothing kernel. It is also visually far more appealing that way, in my opinion. I remember using the faux density surface technique myself when I tinkered with Processing, since it is cheap to implement. It would be great if the CartoDB folks added the capability for smooth KDEs, though (I haven't checked whether it already exists).)
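To illustrate the difference, here is a minimal one-dimensional sketch in Python/NumPy (toy data of my own, not the tweet data): the uniform “disk” kernel produces the stepped, faux-density look, while a Gaussian kernel of the same bandwidth yields the smooth surface I'm arguing for.

```python
import numpy as np

rng = np.random.default_rng(42)
points = rng.normal(loc=0.0, scale=1.0, size=200)  # toy event locations
grid = np.linspace(-4, 4, 400)
bandwidth = 0.3

# Uniform kernel ("faux density"): each point contributes a flat disk.
uniform = np.array(
    [np.sum(np.abs(x - points) <= bandwidth) for x in grid]
) / (2 * bandwidth * len(points))

# Gaussian kernel: each point contributes a smooth bump instead.
gaussian = np.mean(
    np.exp(-0.5 * ((grid[:, None] - points[None, :]) / bandwidth) ** 2),
    axis=1,
) / (bandwidth * np.sqrt(2 * np.pi))

# Both estimates integrate to ~1, but `uniform` is piecewise-flat with
# hard jumps, while `gaussian` varies smoothly across the grid.
```

In two dimensions the same contrast holds: the disk kernel gives hard-edged, overlapping circles, the Gaussian kernel a continuous surface.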
5. Since the map is in Web Mercator, and presumably the uncertainty attached to the location of tweets does not vary much spatially (at least not systematically), we should technically see the tweet footprints varying substantially in size (and shape) with latitude.
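The magnitude of that effect is easy to quantify: on a (spherical) Mercator map, ground distances are stretched by a factor of 1/cos(latitude), so a footprint of fixed ground radius should be drawn correspondingly larger away from the equator. A quick sketch:

```python
import math

def mercator_scale(lat_deg):
    """Point scale factor of the spherical Mercator projection."""
    return 1.0 / math.cos(math.radians(lat_deg))

# A footprint at 60° N (roughly Helsinki) should be drawn about twice
# as large as the identical footprint on the equator, and one at 80° N
# nearly six times as large.
scale_equator = mercator_scale(0)    # 1.0
scale_helsinki = mercator_scale(60)  # ~2.0
```

Drawing every tweet as the same-sized disk regardless of latitude therefore silently misrepresents positional uncertainty the further north (or south) you look.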
6. Also, because of this, it would be worth considering either a scale reference or some kind of normalisation.
I'm fully aware that, as a professional geographer, I may well be strongly over-thinking some of these issues. However, I'm still optimistic that the occasional map critique might improve our discipline a bit here and there – and with it the understanding of its new adopters, as well as the appreciation of the wider audience.
(edit: fixed typos)