Where the time goes

One sign of a large project can be the degree of arcane hoop-jumping foisted on project members. Right now, I’ve a few projects underway where I’m supposed to track my time. In at least one case, the overhead of tracking my time will amount to more time than the project work I’m actually tracking. If you see what I mean. But here we are.

For the most part I’m rather enjoying using Toggl, which syncs nicely between web, desktop and mobile apps. It also nags me quite successfully, without being too smug about it. However, entering data is a little clunkier than I’d like, and the visual design and typography feel to me just a little… off, somehow. Like the app should be doing just a little more to render my recent history clearly? I’m not sure.

I’ll very likely stick with Toggl, but Brett Terpstra’s command-line, plain-text system, doing, has caught my attention. I’m working partly on an Ubuntu laptop these days (more about that another time, perhaps, but the short version is: meh, but it was cheap) and tearing core tools out of the Mac/iOS ecosystem has its attractions. Presumably I could stick a doing log file in Dropbox and access it from whatever system I happen to be in front of, but these sorts of shell tools aren’t very usable from my phone, so there’s little net benefit right now. This is also why I haven’t (yet?) moved from Things to something more like .taskpaper format files. Also because Things is delightful and fabulous and sync works in exactly the way Dropbox sync all too often doesn’t.
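The appeal of Terpstra’s approach is that the log is just text, so anything on any system can append to it. A rough sketch of the general idea in Python – my own invention, to be clear, not doing’s actual file format or location:

    # Sketch of a plain-text time log in a Dropbox-synced file.
    # The path and entry format here are invented for illustration;
    # they are not what doing actually uses.
    from datetime import datetime
    from pathlib import Path

    LOG = Path.home() / "Dropbox" / "doing.md"

    def log_entry(text):
        """Append a timestamped entry; Dropbox handles the syncing."""
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
        with LOG.open("a") as f:
            f.write(f" - {stamp} | {text}\n")

    log_entry("Writing about time tracking @blogging")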

Still: doing: interesting.

Aperture vs. Lightroom

Stephen Hackett has a history of Apple’s photo management application Aperture.

“No doubt the program struggled to shake its early reputation. The performance woes and underwhelming feature set in the first version tainted people’s opinions in a way that was hard for Apple to shake.”

I have no doubt that this is the case. But I also know that by the time version 3 rolled around, Aperture felt fast in use. Once the import and preview generation cycle had completed, the triage of a large run of shots was invariably snappy. Picking selects, discarding the remainder, tweaking RAW processing and filing images into destination folders was plain fast.

Fast to the point where I need to spend some quality time with Lightroom on my work iMac, trying to work out why its Library mode feels so darn clunky even though I’m running it on vastly superior hardware. It’s partly the weird semi-skeuomorphic display which wants to mimic 35mm slides, complete with their massive surrounds, and hence shows me bizarrely few images even on a 5K display. But it’s also the lag in flicking from one image to the next, which wasn’t a problem I had with Aperture. Even worse is scrolling through the library. How come my phone can handle scrolling through 20,000 images smoothly, but Lightroom can’t?

Perhaps I need to investigate Lightroom CC again. Is it possible to stop the newer app from uploading everything to Adobe’s cloud, yet? Because apart from ‘not being able to justify the inherent data security risk’, that seemed to have promise.

Filtering fake news

YouTube identifies music and video based on an internal system called ‘Content ID’. Google, Apple and many others have systems for recognising related images (you can use one of them directly within Google image search, by uploading an image to search against, or you can ask your iPhone to show you pictures of trees). I don’t wish to suggest that ‘finding things like an arbitrary image or video’ is a solved problem, but it’s clearly at least partially addressed.
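The general trick is reducing an image to a compact fingerprint that survives re-encoding and resizing. I’ve no idea what Content ID or Apple’s systems do internally, but the open-source imagehash library for Python demonstrates the idea in a few lines (the threshold below is a guess on my part):

    # Perceptual hashing: visually similar images produce hashes that
    # differ in only a few bits. Illustrative only; real systems are
    # far more sophisticated. Requires Pillow and imagehash.
    from PIL import Image
    import imagehash

    original = imagehash.phash(Image.open("original.jpg"))
    candidate = imagehash.phash(Image.open("recycled-meme.jpg"))

    # Subtracting two hashes gives the Hamming distance between them.
    if original - candidate <= 8:  # threshold chosen arbitrarily
        print("Probably the same image, re-encoded or resized")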

Meanwhile, Snopes does an excellent job of checking and verifying (or debunking) stories which are doing the rounds of social media. PolitiFact won a Pulitzer. A round-up of fact-checking sites by The Daily Dot adds FactCheck.org, Media Matters, and others.

So… suppose you’re Facebook, looking at the wasteland over which you preside. Wouldn’t you want to do something like the following (there’s a rough code sketch after the list)?

  1. Parse the message a user is about to post, looking for links or embedded media and extracting some sort of identifying fingerprint for that object.
  2. Check that content key against a modest number of sources, querying for a coarse trust score.
  3. Reflect that score back to the user prior to publication, with a link to the source article. For example: “You’re about to republish this image. Snopes thinks it’s likely a fake. Read more here [link]”.
  4. Allow the user to publish anyway, should they so choose.
  5. Perhaps also (and optionally) badge likely-fake items which appear in the user’s feed.
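In code, steps 1–4 might look something like this. To be clear, everything below is hypothetical: the fact-checking service, its URL, its response format and the trust threshold are all my inventions, standing in for APIs that don’t (yet) exist:

    # Hypothetical sketch of steps 1-4. The fact-check service, its
    # endpoint, response format and threshold are all invented here.
    import re
    import hashlib
    import requests

    URL_PATTERN = re.compile(r"https?://\S+")

    def content_keys(post_text):
        """Step 1: find links in a draft post and derive a key for each.
        A URL hash is a stand-in for a real media fingerprint."""
        return {url: hashlib.sha256(url.encode()).hexdigest()
                for url in URL_PATTERN.findall(post_text)}

    def trust_score(key):
        """Step 2: query a (hypothetical) fact-checking service."""
        resp = requests.get("https://factcheck.example.com/score",
                            params={"key": key}, timeout=2)
        return resp.json()  # e.g. {"score": 0.1, "source": "https://snopes.com/..."}

    def vet_post(post_text):
        """Steps 3 and 4: warn the user, then let them publish anyway."""
        for url, key in content_keys(post_text).items():
            verdict = trust_score(key)
            if verdict["score"] < 0.5:
                print(f"You're about to republish {url}. "
                      f"It's likely a fake. Read more: {verdict['source']}")
        return True  # publication always proceeds; the warning is advisory

Step 5 would be the same lookup applied when rendering a feed, rather than at publish time.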

Would this open up a writhing pit of snakes about authority, editorial judgement and censorship? Sure. But Facebook and Twitter are already writhing snake pits. It’s surely not beyond the wit of company execs to present this sort of approach as providing tools for users, and anyway, they already do most of what I’m suggesting: post a commercial audio recording, and YouTube or Facebook will flag it as such and (in the former’s case, at least) divert advertising revenue to the copyright holder.

That is: similar systems are already in place to protect copyright holders. What I’m asking here is for some of the same sorts of tools to be surfaced in the interests of asserting and maintaining moral rights. Such as my moral right not to be subjected to an endless stream of recycled crap, or our collective moral right not to accidentally render ourselves extinct by doing something profoundly stupid, just because somebody worked out how to make (transitory, as it turned out) money out of the process.

Put it this way: I think most of the people I follow would check their posts for validity, if only it was easy for them. So let’s do the easy bit.

The hard part, as best I can tell, is funding Snopes et al. to maintain the necessary APIs. It’s in music publishers’ interests to maintain databases of the songs over which they claim rights, because there’s a revenue stream to be had from the playing of those tracks. But… oh wait! Facebook is raking in advertising revenue. Ding!

In the end, the question boils down to: how much money is Facebook willing to spend on cleaning up their system? Their current dead-tree media buy is meaningless unless they’re actually building tools which help drain the swamp they’ve created. The objective here shouldn’t be rebuilding our trust in Facebook; it should be providing the tools which help us trust the media we’re seeing, on a continuous basis.

I don’t think one can do that by asserting what’s ‘trustworthy’; there are too many value judgements involved. But one could provide access to datasets of what’s clearly bobbins – even for conflicting values of bobbins – and tools to apply those to our media streams.

I’ll trust Facebook when they give me tools to recognise and deal with the problem of fake news, not when they stick a poster on my bus stop asserting how much they care about the issue.

Penn Jillette, In Conversation

Penn Jillette, In Conversation:

“there’s a secret that I would like to take credit for uncovering: The audience is smart. That’s all. Our goal when we started was ‘Let’s do a magic show for people smarter than us.’ No other magicians have ever said that sentence.”

Great interview. One of my biggest regrets about Demo: The Movie (and there are many) is that we ran out of time trying to arrange an interview with Jillette.

Push Off

It’s not that we need finer-grained control over our push notifications. It’s that we need the algorithms to be a whole lot less artificially stupid. This, for example, is not something that needed to interrupt my weekend.

The whole point of newspapers like the Washington Post is editorial taste: their ability to filter the world and find the bits of it which are important. It’s not clear to me why outlets like the Post would risk handing that judgement over to algorithmic control.

Which will be the first newspaper to publicly pledge they’ll always keep a human in the loop?