Posts tagged as interaction

  • I was really taken with this, which @scy (Tim Weber) posted on Mastodon the other day:

    The Mastodon post is very clear, so to quickly summarize: the device has crashed, and it has shared its stack trace optically, using its LED button matrix. To pass that stack trace to the development team, the end-user only has to post a photograph of it to the development Discord, where an image-analysing bot decodes it and shares the stack trace’s line references as hexadecimal. Neat, end-to-end stack-tracing, bridging the gap between a hardware device and the people who develop its firmware.

    This feature is particular to the “community firmware” available for the Deluge. Designed by Synthstrom Audible in New Zealand, the Deluge is a self-contained music device - a ‘groovebox’ - that was originally released in 2016. In 2023, Synthstrom released the device’s firmware as open source, and a community has emerged working on a parallel firmware to the ‘factory’ one, with many features and improvements. Synthstrom can concentrate on their core product, and on maintaining and improving the hardware; the most malleable part of the device - the firmware - is handed to the community to evolve.

    Why Share This?

    Firstly, as the comments in @scy’s original post point out: this is very cool. The crash itself goes from being an irritation to a shareable artefact - look at what this thing can do! I am pretty sure that if I owned one of these and it crashed, I would share a similar post with friends.

    I particularly like it, though, for its transparency. When devices with embedded firmware fail, it can be difficult to work out why, or what has happened - and if the device has no network connection (usually completely understandably!) there’s rarely a way of sharing that with the developer. By contrast, this failure state makes it clear to the user that something has failed - and that it’s OK to share that fact, because the developer is actively asking to be told.

    One aspect of the Deluge makes this reporting even more challenging: whilst the Deluge now ships with an OLED screen, original Deluges only have the RGB matrix and a small seven-segment numerical display for output. There’s no way of displaying any text on these earlier models!

    But by encoding the stack trace as four 32-bit numbers, it can be displayed in binary on the RGB matrix. Sharing the stack trace is left to the owner - and can be done with a ubiquitous phone camera. Finally, decoding the stack trace is the responsibility of the bot in the Discord channel - no need to involve the end-user in that step. The part of this process that runs on the embedded firmware is the least complex part; networking is left to the user’s phone, and the more computationally demanding decoding is left to server-side code.
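    To make the mechanics concrete, here’s a minimal sketch of that encoding in Python. It is purely illustrative - not the actual community-firmware code - and the grid layout (two sixteen-pad rows per 32-bit value), the function names, and the sample addresses are all my assumptions:

        # Illustrative sketch: pack four 32-bit stack-trace addresses into a
        # 16x8 grid of on/off pads, then decode the grid back into hex, as
        # the Discord bot might do after reading a photograph.

        GRID_W, GRID_H = 16, 8  # assumed pad-matrix dimensions

        def addresses_to_grid(addresses):
            """Render four 32-bit values as lit/unlit pads, two rows each."""
            assert len(addresses) == 4
            grid = [[False] * GRID_W for _ in range(GRID_H)]
            for i, addr in enumerate(addresses):
                for bit in range(32):
                    row = i * 2 + bit // GRID_W  # two 16-pad rows per value
                    col = bit % GRID_W
                    grid[row][col] = bool((addr >> (31 - bit)) & 1)  # MSB first
            return grid

        def grid_to_addresses(grid):
            """The inverse: read pad states back into four hex strings."""
            addresses = []
            for i in range(4):
                value = 0
                for bit in range(32):
                    value = (value << 1) | int(grid[i * 2 + bit // GRID_W][bit % GRID_W])
                addresses.append(f"0x{value:08X}")
            return addresses

        if __name__ == "__main__":
            trace = [0x0800F3A2, 0x08012C10, 0x0801B044, 0x08001FF6]  # made-up addresses
            grid = addresses_to_grid(trace)
            for row in grid:
                print("".join("#" if lit else "." for lit in row))
            print(grid_to_addresses(grid))  # round-trips to the hex above

    The split the sketch makes visible is the point: the firmware’s half of the protocol is trivial, and all the fiddly work - camera, networking, image analysis - happens elsewhere.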

    This feature is also an appropriate choice for the product. Sharing the stack trace to Discord might be a little too involved (or “nerdy”) for some devices, but for an enthusiast product like the Deluge, with an involved and supportive community, it feels like a great choice.

    And it gives the user agency in the emotional journey of a crash. Most of us have clicked “send” on stack traces going to Apple, Microsoft, or Adobe, and then perhaps just given up on the idea that anyone ever sees our crash logs. But here the user is an active participant in the journey: if they choose to share the stack trace, they can see that it has been received and decoded in the Discord channel. And now that they’ve shared it to a social space, they have created an artefact to hang future conversation about the crash off - one where they may even learn about eventual fixes.

    The feature is neat, and a talking point - but it’s also interesting to see how a moment of software failure can be captured without a network connection on the device, shared, and ultimately socialised. (It also beats the IBM Power-On Self-Test beeps by a long stretch…)

    One Thing is an occasional series where I write in depth about a recent link, and what I find specifically interesting about it.

  • I spent a few days this week working with the Good Night Lamp team, on some interaction design explorations. A couple of days of talking, thinking, and sketching with Adrian and Alex led to some writing, wireframes, storyboards, and animatics.

    Alex asked me to write a bit more about the work, for the Good Night Lamp blog, and there’s now a post over there about it.

    Out of all this work, common strands emerged; in particular, a focus on the vocabulary of the product. One of the things I find most important to pin down early in projects – and which design exploration like this helps with a lot – is the naming of things. How are core product concepts communicated to an end-user? How are they explained? Making sure nomenclature is clear, understandable, and doesn’t raise the wrong associations in a user’s mind is, for me, a really core part of product design. Even though many of the core concepts of GNL were clear in our heads, sitting down and drawing things out in detail forced me to work out what to call things, often bringing Alex and Adrian back to my screen to discuss those ideas.

    This kind of design work initially appears very tactical. It focuses on small areas almost in isolation from one another, exploring the edges and seams of the product. But by forcing oneself to confirm what things are called, and which interactions or graphical language recur throughout the product, it becomes a much more strategic form of design, one which impacts many areas of a product.

    You can read more at the Good Night Lamp site. It was a pleasure working with Adrian and Alex. If this kind of work is something you’re looking for, do get in touch.