• Week 380

    17 April 2020

    Where were we?

    I last wrote notes around Week 374. My Makefile tells me this is about Week 380. I think that’s correct. If not, well, time has slightly lost some of its meaning in recent weeks, and who’s counting? Week 380 it is.

    In week 375, I finally finished my career review, and wrote that up. That was a prelude to more formally seeking out new projects and clients. Of course, what then happened was COVID-19 made it quite clear that we were not proceeding as normal for a bit. I left the studio and went to work from home, and started trying to investigate new work from there.

    That search was not, at that point, particularly fast-moving; coupled with the strangeness of lockdown, everything slowed down a little. It was going to be a challenge to write ongoing weeknotes where I invented euphemisms for “not very busy over here”, so I went a little quiet.

    When I write down what I’ve been up to since then, though, it’s a decent amount:

    • I made voipcards. This was, initially, one of my one-line-gags turned into a small project. It was also a useful tool to prod at learning a bit more Svelte, to practice building PWAs, and to occupy my mind. It turned out quite popular on the internet, which led to me writing about the problems of solutionising, and why I’m still not sure it’s that good a project.
    • I released the 2.0.x firmware for 16n, and wrote about that here. This had been on the shelf for a different project - Mayhill - for a while, and I realised it could easily be ported to 16n. The big feature this introduces is configuring your hardware from the browser, over USB MIDI. Really pleased with how this turned out, and the community feedback has been great.
    • I built up a personal electronics project over a few days, which turned out rather well after some fettling. Fun things I learned here included using naked board substrate as a transparent surface for LEDs to shine through, how to drive 256 LEDs off only 64 channels (four sixteen-LED drivers chained over I2C), and then tweaking the update rate of the LEDs so they don’t flicker on cameras as well as to the naked eye. (That involved moving the update rate to an integer multiple of 30 frames a second…)
    • spent some time volunteering on Makerversity’s PPE effort - primarily, on a project tangential to the 3D printing they’ve been launching, where I’ve been lending some support on digital logistics, as well as comms and lightweight project management.
    • pitched a bit, and had phone calls and chats.
    • did quite a lot of business-related admin.
    • set up two new projects. Let’s call them Bradnor and Easington. I signed the contracts for Bradnor, a short project around IoT infrastructure, last week, and we’re nearly there with Easington.

    That felt like enough to finally write about. And now I’m back in the saddle, it should be harder to break the chain next week. As ever: onwards.

  • I recently released the 2.0.0 firmware for 16n, the open-source fader controller I maintain and support. This update, though substantial, is focused on one thing: improving the end-user experience around customisation. It allows users to customise the settings of their 16n using only a web browser. Not only that, but their settings will now persist between firmware updates.

    I wanted to unpick the interaction going on here - why I built it and, in particular, how it works - because I find it highly interesting and more than a little strange: tight coupling between a computer browser and a hardware device.

    To demonstrate, here’s a video where I explain the update for new users:

    Background

    16n is designed around a 32-bit microcontroller - Paul Stoffregen’s Teensy - which can be programmed via the popular Arduino IDE. Prior to version 2.0.0, all configuration took place inside a single file that the end-user could edit. To alter how their device behaved, they had to edit some settings inside config.h, and then recompile the firmware and “flash” it onto the device.

    This is a complex demand to make of a user. And whilst the 16n was always envisaged as a DIY device, many people attracted to it who might not have been able to make their own have, entirely understandably, bought one from other makers. As the project took off, “compile your own firmware” became a less attractive solution - not to mention one that was harder to support.

    It had long seemed to me that the configuration of the device should be quite a straightforward task for the user; certainly not something that required recompiling the firmware to change. I’d been planning such a move for a successor device to 16n, and whilst that project was a bit stalled, the editor workflow was solid and fully working. I realised that I could backport the editor experience to the current device - and thus the foundation for 2.0.0 was laid.

    MIDI in the browser

    The browser communicates with 16n using MIDI, a relatively ancient serial protocol designed for interconnecting electronic musical instruments (on which more later). And it does this thanks to WebMIDI, a draft specification for a browser API for sending and receiving MIDI. Currently, it’s a bit patchily supported - but there’s good support inside Chrome, as well as Edge and Opera (so it’s not just a single-browser product). And it’s also viable inside Electron, making a cross-platform, standalone editor app possible.
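
    To give a flavour of the API, here’s roughly what asking the browser for MIDI access looks like - a minimal sketch, not the 16n editor’s actual code:

        // Ask the browser for MIDI access, including SysEx permission;
        // Chrome will prompt the user before granting it.
        navigator.requestMIDIAccess({ sysex: true }).then((midi) => {
          // Enumerate the currently connected inputs.
          for (const input of midi.inputs.values()) {
            console.log(`MIDI in: ${input.name}`);
          }
          // Be told when devices are plugged in or unplugged.
          midi.onstatechange = (event) => {
            console.log(`${event.port.name} is now ${event.port.state}`);
          };
        }).catch(() => {
          console.log("WebMIDI unavailable, or permission denied.");
        });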

    Before I can explain what’s going on, it’s worth quickly reviewing what MIDI is and what it supports.

    MIDI: a crash course

    MIDI - Musical Instrument Digital Interface - describes several things:

    • a protocol for serial communication
    • an electrical specification for that serial communication
    • a set of connectors (5-pin DIN) and how they’re wired to support that.

    It is old. The first MIDI instruments were produced around 1981-1982, and their implementation still works today. It is somewhat simple, but really robust and reliable: it is used in thousands of studios, live shows and bedrooms around the world, to make electronic instruments talk to one another. The component with the most longevity is the protocol itself, which is now often transmitted over a USB connection (as opposed to the MIDI-specific DIN-connections of yore). “MIDI” has, for many younger musicians, just come to describe note-data (as opposed to audio-data) in electronic music programs, or what looks like a USB connection; five-pin DIN cables are a distant memory.

    The serial protocol consists of a set of messages that get sent between MIDI devices. There are relatively few messages. They fall into a few categories, the most obvious of which are:

    • timing messages, to indicate the tempo or pulse of a piece of music (a bit like a metronome), and whether instruments should be started, stopped, or reset to the beginning.
    • note data: when a note is ‘on’ or ‘off’, and if it’s on, what velocity it’s been played at (the spec is designed around keyboard instruments)
    • other non-note controls - whether a sustain pedal is pushed, or a pitch wheel bent, or if one of 128 “continuous controllers” - essentially, knobs or sliders - has been set to a particular value.

    16n itself transmits “continuous controllers” - CCs - from each of its sliders, for instance.
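
    Most MIDI messages are tiny. A CC message, for instance, is just three bytes; sending one with WebMIDI looks something like this (a sketch - the controller number and value are arbitrary):

        navigator.requestMIDIAccess().then((midi) => {
          const output = midi.outputs.values().next().value; // first connected output
          // "Set CC 34 on channel 1 to 100": status byte (0xB0 = CC, channel 1),
          // then controller number, then value.
          output.send([0xB0, 34, 100]);
        });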

    There’s also a separate category of message called System Exclusive: messages for which an instrument manufacturer defines their own implementation at the device end. One of the most common uses for ‘SysEx’ data was transmitting and receiving the “patch data” of a synthesizer - all the settings to define a specific sound. SysEx could be used to back up sound programs, or transmit them to a device, and this meant musicians could keep many more sounds to hand than their instrument could store. SysEx was also used by early samplers as a slow way of transmitting sample data - you could send an audio file from a computer, slowly, down a MIDI cable. And it could also be used to enable computer-based “editors”, whereby a patch could be edited on a large screen, and then transmitted to the device as it was edited.

    Each SysEx message begins with a few bytes for a manufacturer to identify themselves (so as not to send it to any other devices on the MIDI chain), a byte to define a message number, and then a stream of data. What that data is is up to the manufacturer - and usually described somewhere in the back pages of the manual.
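
    In other words, a SysEx message is just a framed blob of 7-bit data bytes. Sketched out in JavaScript - with a placeholder manufacturer ID and message number, not 16n’s real ones:

        const SYSEX_START = 0xF0;
        const SYSEX_END = 0xF7;
        const MANUFACTURER = [0x7D, 0x00, 0x00]; // 0x7D is reserved for non-commercial use
        const MESSAGE_NUMBER = 0x01;             // what this means is up to the manufacturer
        const payload = [0x02, 0x03, 0x04];      // data bytes, each 0-127

        // The full message, ready to pass to a WebMIDI output's send():
        const message = [SYSEX_START, ...MANUFACTURER, MESSAGE_NUMBER, ...payload, SYSEX_END];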

    Like those patch editors of the past - the ones built for synthesizers like the Yamaha DX7 - the 16n editor uses MIDI SysEx to send data to and from the hardware.

    How the 16n editor uses SysEx

    With all the background laid out, it’s perhaps easiest just to describe the flow of data in the 16n editor’s code.

    • When a user opens the editor in a web browser, the browser waits for a MIDI interface called 16n to connect. It hears about this via a callback from the WebMIDI API.
    • When it finds one, it starts polling that connection with a message meaning give me your config!
    • If a 16n sees a message requesting its config, it takes its current configuration, and emits it as a stream of hex inside a SysEx message: here is my config.
    • The editor app can then stop polling, and instantiate a Configuration object from that data, which in turn will spin up the reactive Svelte UI.
    • Once a user has made some edits in the browser, they choose to transmit the config to the device: this again transmits over SysEx, saying to the device: here is a new config for you.
    • The 16n receives the config, stores it in its EEPROM, and then sets itself to use that config.
    • If a 16n interface disconnects, the WebMIDI API sends another callback, and the configuration interface dismantles itself.

    Each message in italics is a different message ID, but that’s the limit of the SysEx vocabulary for 16n: transmitting current state, receiving a new one, and being prompted to send current state.
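
    Condensed into code, the editor’s side of that handshake might look something like this - an illustrative sketch with invented message numbers and helper names, rather than the editor’s actual source:

        const REQUEST_CONFIG = 0x01; // "give me your config!"
        const CONFIG_DATA = 0x02;    // "here is my config"

        function sendSysex(output, messageNumber, data = []) {
          output.send([0xF0, 0x7D, 0x00, 0x00, messageNumber, ...data, 0xF7]);
        }

        function startHandshake(input, output) {
          // Poll the device until it answers with its config.
          const timer = setInterval(() => sendSysex(output, REQUEST_CONFIG), 300);

          input.onmidimessage = (event) => {
            const [status, ...rest] = event.data;
            if (status === 0xF0 && rest[3] === CONFIG_DATA) {
              clearInterval(timer);
              const configBytes = rest.slice(4, -1); // strip the SysEx framing
              buildConfigurationUI(configBytes);     // hypothetical: spins up the Svelte UI
            }
          };
        }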

    With all that in place, the changes to the firmware are relatively few. Firstly, at first boot, it checks if the EEPROM looks blank; if it does, 16n will store a “default” configuration into EEPROM before reading it. Then, there’s just some extra code to listen for SysEx data, process and store it on arrival, and to transmit it when asked for.

    What this means for users

    Initially, this is a “breaking change”: at first install, a 16n will go back to a ‘default’ configuration. Except it’s then very quick to re-edit the config in a browser to what it should be, and transmit it. And from that point on, any configuration will persist between firmware upgrades. Also, users can store JSON backups of their configuration(s) on their computer, making it easy to swap between configs, or as a safeguard against user error.
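
    Saving such a backup from the browser needs very little code - something like this sketch, where the filename and the shape of the config object are my invention:

        function downloadConfig(config) {
          // Serialise the current configuration and offer it as a file download.
          const blob = new Blob([JSON.stringify(config, null, 2)], { type: "application/json" });
          const link = document.createElement("a");
          link.href = URL.createObjectURL(blob);
          link.download = "16n-config.json";
          link.click();
          URL.revokeObjectURL(link.href);
        }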

    The new firmware also makes it much easier to distribute the firmware as a binary, which is easier to install: run the loader program, drag the hex file on, and that’s that. No compilation. The source code is still available if users want it, but there’s no need to install the Arduino IDE to modify a 16n’s settings.

    As well as the settings for what MIDI channel and CC each fader transmits, the editor lets users set configuration flags such as whether the LED should blink on data transmission, or how I2C should be configured. We’ve still got some bytes free to play with there, so future configuration options that should be user-settable can also be exposed like this.

    The 16n editor showing an available firmware update

    Finally, because I store the firmware version inside the firmware itself, the device can communicate this to the editor, and thus the editor can alert the user when there’s a new firmware to download, which should help everybody stay up-to-date: particularly important with a device that has such diverse users and manufacturers.
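
    The comparison itself can be as simple as walking two dotted version strings - a sketch of the idea, not necessarily how the editor implements it:

        // Returns true if `latest` is newer than the version the device reported.
        function isUpdateAvailable(device, latest) {
          const a = device.split(".").map(Number);
          const b = latest.split(".").map(Number);
          for (let i = 0; i < Math.max(a.length, b.length); i++) {
            if ((b[i] || 0) > (a[i] || 0)) return true;
            if ((b[i] || 0) < (a[i] || 0)) return false;
          }
          return false;
        }

        isUpdateAvailable("2.0.0", "2.0.1"); // => true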


    None of this is a particularly new pattern. Novation, for instance, are using this approach to transfer patch settings, samples, and even firmware updates to their recent synthesizers via their Components tool. It’s a very user-friendly way of approaching this task, though: it’s reliable, uses a tool that’s easy to hand, and because the browser can read configurations from the physical device, you can adjust your settings on any computer to hand, not just your own.

    I also think that by making configuration an easier task, more people will be willing to play with or explore configuration options for their device.

    The point of this post isn’t just to talk about the code and technology that makes this interaction possible, though; it’s also to look at what it feels like to use, the benefits for users, and the interactions that might be possible. It’s an unusual interaction - perhaps jarring, or surprising - to configure an object by firing up a web browser and “speaking” directly to it. No wifi to set up, no hub application, no shared password. A cable between two objects, and then a tool - the browser - that usually talks to the wireless world, not the wired. WebUSB enables similarly weird, tangible interactions to the one I’ve described here, but with a more flexible API.

    I think this is an unusual, interesting and empowering interaction, and certainly something I’ll consider for any future connected devices: making configuration as simple and welcoming as possible, using tools a user already understands.

  • A bit over a week ago, I made a small tool - or toy, depending on your perspective, or the time of day - called VOIPcards. I demonstrated it on my public Twitter account:

    It was made after my friend Alice showed pictures of the backwards post-it notes she’d hold up to her videoconference. I thought about making a tool for having on-demand backwards flashcards for video calls. It was a small toy to make, and thus a chance to practice some modern development practices, make a PWA, and occupy myself by making something during Interesting Times.

    Since then, a lot of people have liked it, or shared it, or been generally enthusiastic. Several have submitted patches and, most notably, translations, to it. And I’ve added some new features: white on black text, choice of skin-tone for emojis, and settings that persist between sessions.
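
    Persisting those settings between sessions needs nothing fancier than the browser’s localStorage - a minimal sketch, with the key and the settings shape invented for illustration:

        const KEY = "voipcards-settings";

        function saveSettings(settings) {
          localStorage.setItem(KEY, JSON.stringify(settings));
        }

        function loadSettings() {
          // Fall back to defaults on first run.
          return JSON.parse(localStorage.getItem(KEY)) || { whiteOnBlack: false, skinTone: 0 };
        }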

    I’m not sure it’s any good, though.

    I don’t think it’s bad, and if it’s making a difference to your remote practice, that’s great. But I don’t think it’s the right tool for what it sets out to do.

    And here’s the thing: it wasn’t meant to be. In some ways, the point of VOIPcards is as much a provocation as it is a thing for you to use. It says: here are things people sometimes need to say. Here are things people sometimes need to do, to support colleagues on a call. Here are things people need to do because it’s fun.

    This is why I think of it as between a tool and a toy: it’s fun to use for a bit, it’s a provocation as to the kind of things we need alongside streaming video, and if you put it down when you’re bored (and your behaviour may have changed) that is fine.

    The single most important card in the deck is a tie between “You have been talking a long time” and “Someone else would like to speak”. These are useful and important statements to make in face-to-face meetings, but they’re doubly important when there’s twelve of you on a Zoom call. Sometimes, the person with better video quality noticing that someone wants to speak, and amplifying that demand, is good.

    If what you come away from VOIPcards with is not a tool to use, but a better way of thinking about your communication processes, that’s probably more important than using a fun app.

    But: equally, if you do find it useful, this isn’t a slight. That’s great! I’m glad it works for you.

    I think the reason it’s popular is that people respond to the idea of it. The idea of the product has immediate appeal - perhaps more so than the reality of it. And that appeal is so immediate, so instant, that it makes me distrust it. Good ideas don’t just land instantly: they also stand up to scrutiny, and I’m really not sure VOIPcards does. At the same time, there’s value in the idea because of what it makes people think, and how it makes them subsequently behave. And I think some of that value really does come down to it being real. A product you can try, fiddle with, demonstrate, lands stronger than a back-of-a-napkin idea - even if it turns out to be not much more than the idea.

    Another obvious smell for me is that I don’t use the product. I enjoyed making it, and I was definitely thinking about other people’s needs - however imaginary - when making it. But it’s not for me, which makes it hard to make sensible decisions about it.

    (What do I do instead? Largely, hand gestures and big facial expressions: putting a hand up to speak, holding a palm up to apologise for speaking over someone, lots of thumbs-ups. It puts me in mind of the way Daniel Abraham and Ty Franck describe the way the “Belters” - first-generation space dwellers - communicate in their Expanse novels. Belters talk with lots of broad hand-and-body gestures, rather than facial ones, because the culture developed communication techniques that worked whilst wearing a spacesuit. No-one can see an eyeroll through a visor, but everyone can see theatrical shrugs and sweeping hand gestures. I liked that. It feels like we’re all Belters on voice chat. Subtlety goes out the window and instead, a big hand giving a thumbs-up into a camera is a nice way to indicate assent without cutting into somebody’s audio.)

    When I’m being most negative about VOIPcards, it is because they feel like solutionising - inventing a solution for a hypothetical problem. In this case, though, the problem is definitely something everybody has felt at some point. But this solution is perhaps too immediate, came too much from the “implementation” end of the brain to be the robust, appropriate answer to said problem.

    There’s a lot of solutionising around right now, and I’m largely wary of it all. The right skills at the moment are not leaping to solutions, working out what you can offer others, or guessing at what might happen and how you might respond to it. I think that the right skills to have - and the right tone to strike right now - are to be responsive, and resilient. Dealing with the unexpected, the unknown unknowns. Not solving the problems you can easily imagine, but getting ready to solve the ones you can’t.

    Still: there is also value in making things to make other people think, rather than do. The win isn’t necessarily the product, but the behaviour it inspires. If what people take away from the cards is some time spent thinking more carefully about their communications, rather than yet another tool to use: that’s a win for me.

    (You can try VOIPcards here. It works best on a mobile phone, and you can install it to your homescreen as an app.)

  • What I do

    9 March 2020

    I am a technologist and designer.

    I make things with technology, and I understand and think about technology by making things with it.

    A lot of my work starts with uncertainty: an unknown field, or a new challenge, and the question: what is possible? What is desirable? From there, the work to build a product leads towards certainty: functioning code, a product to be used. I have worked on a lot of shipped, production code, but my best work is not only manufacture and delivery; it also encompasses research, exploration, technical discovery, and explanation.

    Research could involve investigating prior art, or competitors, or using technical documentation to understand what the edges of tools or APIs are - beginning to map out the possible.

    Exploration involves sketching and prototyping, at low and high fidelity, to narrow down the possible. That might also include specific technical discovery - understanding what is really possible by making things, compared to just reading the documentation; a form of thinking through making.

    Delivery involves producing production code on both the client and server side (to use web-like terminology), as well as co-ordinating or leading a development team to do so, through estimation and organisation.

    And finally, explanation is synthesis and sense-making: not only doing the work, but explaining the work. That could be ongoing documentation or reporting, contextualising the work in a wider landscape, or demonstrating and documenting the project when complete.

    How does this work happen?

    I work best - and frequently - within small teams of practice. That doesn’t necessarily mean small organisations, though - I’ve worked for quite large organisations too. What’s important is that the functional unit is small, self-contained, and multidisciplinary. I have built or led small teams of practice to achieve an outcome.

    I’ve delivered work with a variety of processes. My favourite processes often resemble design practice more than, say, formal engineering: they are lightly-held, built as much around trust as around formal rules. Such processes are constraints, and they serve to help a team work together, but they are also flexible and adaptable.

    A colleague recently described some of the team’s work on a recent project as “a lot of high-quality decisions made quickly”, and I take pride in that description; I hope to bring that sensibility to all my work.

    You can find out more about past projects. If the above sounds of value to you, do get in touch with me; I am available for hire.

  • Week 374

    2 March 2020

    I’ve been freelance for over seven years. In that time, my work has slowly changed a little in its nature, along with my professional interests, and the shape of the wider market. In a quiet week, following just-about wrapping a few projects, it felt time for a more formal review of my work to date.

    I loosely followed Matt’s notes on his own career review as a starting point: looking at all my past work, and analysing it. What did the successes have in common? What would I like to do more of? Are there trends? As usual, it’s easy as a lone practitioner to assume that it’s all just gig-after-gig, but that’s not true.

    Some focused time with pen and paper let me look clearly at my work. It turns out that across the wide range of projects, there really are trends, and there is definitely expertise built over time that is worth sharing. So it was definitely useful to take that time to do this properly.

    I then performed an extra step after Matt’s original list of questions: I looked at the how of each project. How did those circumstances occur? How did the work happen? Who was involved? Who was committed? That helped me gain a better focus on what circumstances are most successful for me, and how I enable the work to happen.

    Having done all that, there was a small piece of writing work to synthesize and explain everything. At which point, I promptly got a cold - not COVID-19, I should add - which knocked out the end of my week. So finishing up the review work would have to begin in week 375. Still, the groundwork was laid this week, and it was a good way to use some time off project work.

  • Week 373

    23 February 2020

    This week I:

    • spent a half day on Monday wrapping up a polish pass on Willsneck with its designer. I then gave a short talk about Willsneck to the direct client on Thursday morning. Primarily, I talked about the deployment strategy we were using, and why it was a good fit for the project and client. It was good to validate the thinking we’d been doing as a team, and also to communicate that what looked like off-the-cuff decisions did still have a bunch of thought behind them. There were good questions and some nice feedback, so that was satisfying.
    • spent Tuesday workshopping with Tim. Early investigations into something, digging into what really interested him (and me) about a particular idea, and then some due diligence and research into prior art.
    • continued from my end-to-end breakthrough on Mayhill with further iteration. The firmware feels pretty complete; there’s one error at the level of hardware design that will need resolving before I can confirm that, but otherwise, I’m pleased. I also continued to iterate on the software end of things, adding features to the browser-based editor. I looked into the state of the hardware, but then got a bit downcast as I realised the effort required to take it beyond the workbench. With its own fast microcontroller in, it likely falls under FCC regulation of “unintentional radiators”, which puts another hurdle in front of selling or shipping it. Something to think about, but meant I largely parked thinking about it for a bit.
    • finally, snatched a victory from the work on Mayhill by realising there’s nothing to stop it working with 16n, so started working on a version 2.0.0 firmware for that, which should be ready soon. Early feedback from the community has been highly positive, so it’ll be good to ship that to as many people as possible.
  • Week 372

    16 February 2020

    Lots of work on Willsneck this week, to bring it in to land. We wrapped everything up by Thursday PM, and I’ve got a half-day on this left over to finalise the production deployment. Happy with how it’s turned out, I think.

    Hallin got merged into master by the client, so I’m looking forward to hearing how they get on with it in production.

    I had a good chat with Christian and Miguel from Schema who were in town, having been introduced by a friend. Always nice to meet new people, and interesting to chat to them about designing for data, working with clients on that problem, and the tools used to do it. Thanks to Steve for putting us in touch!

    Finally, I spent some of Friday working on Mayhill, and had a really big breakthrough. That breakthrough is what we called end-to-end at Berg: a complete workflow up and running, even if every component is a bit ‘version one’. I began by working on overhauling some of the browser-based UI: making it look a lot tidier, and refactoring a lot of the large App.svelte file into smaller components, including a big wrapper component for handling the MIDI context. Then, I wrote the JavaScript to translate edits in the browser to SysEx data. Finally, back in firmware-land, I wrote the firmware code to intercept that stream of data and write it to flash memory. There were other small firmware kinks to iron out, too.

    By the end of the day, I had a system where you could open an editor in your browser, connect a physical object (that I’d both designed the electronics for and coded the firmware on) and see it appear automatically, and reconfigure it in the browser app. The browser app could transmit edits back to the hardware, which would persist those changes. Hugely satisfying: what’s largely left on this is polish, now, and working on the “1.0” hardware (rather than this ‘development board’) that I built for myself.

    Still not quite sure how I feel about Svelte. I like the compiled-up-front approach, especially for something that couldn’t really be done many other ways; I’m not quite sure about its idiosyncratic syntax, and whilst I quite like its two-way responsiveness, it leads to a propensity to get yourself in a tangle. Still, it’s enabled all the things I’ve needed to do so far in a reasonably straightforward manner, and I’m a big fan of its Vue-style single-file components, so it’ll do for now. One thing in its favour is that it was easy enough to pick up after months away: most of it, most of the time, is just browser-based technologies.

    A good week, then: mainly code for clients, some more esoteric code for myself with a serious breakthrough, and some good conversations to round it out.

  • Week 371

    11 February 2020

    I’m writing these notes awfully late, so let’s rattle through them:

    • shipped all the final changes to the client on Hallin. They seem pleased, so hoping to wrap this next week.
    • kicked off a second phase of work on Willsneck. This was primarily focused on front-end design changes: new markup and CSS, and content updates. There was still some more unusual code to implement, though. Some of the content on the site is extracted from JSON files using Hugo’s ability to use JSON data in templates. These JSON files are derived from online sources, and needed to be regularly updated. How to do that with a static site? It turns out it’s now quite straightforward. We’re already deploying the site using GitHub Actions on every push to master. Actions also supports cron-like functionality, with scheduled actions. I wrote some scripts to download and process the relevant JSON files (see the sketch after this list), and then wrote some Actions workflows that, once a day, would run the script, commit the results back to master, and deploy the site. Really happy with this: I’m sure you can also do similar with CI, but GitHub Actions are really lightweight and straightforward out of the box. Might use this pattern again in future.
    • met up with Gabi from Hyper Island, who have moved their London office into Makerversity - just around the corner from me. We debriefed on the module I’d worked on, and caught up more generally.
    • did a bit more writing on Ninebarrow. Slowly moving forward; still painful.
    • finally, booked all my accommodation for Loop this year. Really looking forward to this again: a neat combination of being personally interesting and enjoyable, and a good source of inspiration and thinking-time for my work-brain. Can’t wait.
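
    The fetch-and-process scripts mentioned in the Willsneck item above are small things; something along these lines (the URL and filenames are invented), run daily by a scheduled Actions workflow before the commit-and-deploy steps:

        // fetch-data.js - run daily by a scheduled GitHub Actions workflow.
        // Downloads a JSON source, trims it to the fields the Hugo templates
        // use, and writes it into the site's data directory.
        const fs = require("fs");
        const https = require("https");

        https.get("https://example.com/source.json", (res) => {
          let body = "";
          res.on("data", (chunk) => (body += chunk));
          res.on("end", () => {
            const records = JSON.parse(body);
            const trimmed = records.map(({ title, date }) => ({ title, date }));
            fs.writeFileSync("data/records.json", JSON.stringify(trimmed, null, 2));
          });
        });
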
  • Week 370

    2 February 2020

    Back to code, mainly, in week 370.

    I fixed up all the major issues the client had requested fixing on Hallin - two fairly chunky bugs I needed to take apart a little to fix, and two minor tweaks. With those resolved, the client’s tech lead gave my branch a thorough code review. They were very happy with the way I’d dived into their codebase, and most of the feedback came down to notes on minor formatting issues, and on code that was perhaps not so legible at first look. A few quick commits took care of some inconsistent formatting. More important was a second pass on the code that wasn’t so clear. That meant simplifying conditionals, reducing the fragility of a few parts, and tidying things that hadn’t seemed overcomplex when I was writing them. I also extracted some highly specific code into something more general - but not too general - that would set a good precedent for any future refactoring of related tasks. As ever, the integration/feature tests acted as an excellent safety net, and I wrapped up the code review in an afternoon.

    There was some brief discussion around a pacey second phase of Willsneck that will kick off in week 371. This time around, we have a firmer deadline, but also are much firmer in what needs delivering in that timeframe, so I sat down with the designer to go over what changes needed to be done, and wrote up a thorough document to cover my estimates and highlight anything I thought was a risk. I shall dive into that code on Monday.

    I spent some time on Wednesday continuing to work on the writing project that is Ninebarrow. I am making progress - not hugely quickly, but progress nonetheless. It is already proving more challenging than I expected, partly because I cannot quite write as fast as my brain can go, and so I begin to start doubting or questioning what I’m doing whilst in the process of doing it. Shutting down that critical voice long enough to work is going to be something I’ll have to practice!

    I launched the Futurelearn courses that were previously known as Longridge, and wrote them up here. I’m pleased that they’re now out in the world. Next week, I’ll check in on how the learners are getting on in their discussion and comments threads.

    And finally, I paid my tax bill. Thank god that’s done.

    In the second half of 2019, I worked on a project I called Longridge. This project was to write three online courses in a series called An Introduction to Coding And Design, for a programme of courses from the Institute of Coding launching in 2020. I worked with both Futurelearn - the MOOC platform the courses are hosted upon - and the University of Leeds to write and deliver the courses.

    Those courses are now live at Futurelearn, as of the 27th January 2020!

    The courses are designed as two-week introductions to topics around programming and design for beginners interested in getting into technology, perhaps as a career.

    I’ve written up the project in much more detail here; you can read my summary of the work here. I cover some of the reasoning behind the syllabus, the choice of topics, and the delivery. And, most importantly, I thank the collaborators who worked with me throughout the process, and collaborated on the courses.

    Find out more.