Week 381
27 April 2020

I got my head down on an initial phase of Bradnor this week.
Bradnor is a small project to build out a new back-end for an existing IoT platform. I’m replacing the storage and administration end of affairs: the data gathering and transit layer isn’t changing. The week saw me investigating existing code, evaluating options for replacing parts of it, and deploying the code to newly provisioned infrastructure.
My goal for the end of the week was to get data piped from devices into a database. Once that data was safely piped and stored somewhere, we could then build upon it next week with various visualisation tools and management APIs. But first, we just had to put it somewhere.
I framed the work to the client as “research and development”. Not, perhaps, in the traditional sense of an R&D project - here, the task was known, and the problem-space well defined. But I was still going to have to research options for this greenfield project, sometimes by writing software or testing third-party services, and then present that work back so a path could be chosen. Researching what could be done, and only then developing the thing that needed doing.
That meant the first chunk of work was reading documentation, tinkering with small tests, and a lot of synthesis and writing to present back to the client.
Once we’d agreed on an approach, I started building out the skeleton of an application to get us to parity with the existing infrastructure. That first involved reproducing in Ruby some data-processing originally written in JavaScript, and then getting live data flowing through my code and into our datastore.
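To give a flavour of the sort of port involved - the class, fields, and payload shape here are hypothetical illustrations, not the project’s actual code - the JavaScript processing step roughly translates into a small Ruby object that turns a raw device payload into something ready for storage:

```ruby
# Hypothetical sketch: decoding a raw device payload into a reading
# ready for storage. Field names and payload shape are illustrative only.
require "json"
require "time"

class ReadingDecoder
  def decode(raw_payload)
    data = JSON.parse(raw_payload)
    {
      device_id:   data.fetch("device_id"),
      recorded_at: Time.parse(data.fetch("timestamp")),
      temperature: data.fetch("temperature").to_f,
      humidity:    data.fetch("humidity").to_f,
    }
  end
end

# reading = ReadingDecoder.new.decode(message_body)
# ReadingsRepository.insert(reading)  # hypothetical persistence call
```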
Finally, I provisioned some suitable hosting. It’d be possible to move everything to a large chain of small cloud-based services - queues, on-demand functions, datastores - but we chose, for now, to use a simple PaaS for the application code, and a managed database instance for our storage.
The data is perhaps the most valuable part of the product, so I felt it was worth not pretending we have time to be our own DBAs, and instead investing in someone else scaling it, managing it, and maintaining backups. A traditional application structure, but one that would do the job for now (especially with sensible backgrounding of tasks, thanks to Que).
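Que keeps background work as plain Ruby classes backed by Postgres. A minimal sketch of how a processing job might look in this setup - the job name, its argument, and the model it touches are assumptions, not the project’s real code:

```ruby
# Minimal Que job sketch (hypothetical): process a stored reading
# outside the ingest path. Que stores its queue in Postgres.
require "que"

class ProcessReadingJob < Que::Job
  def run(reading_id)
    # Hypothetical processing step; assumes an ActiveRecord-style model.
    reading = Reading.find(reading_id)
    reading.update(processed: true)
  end
end

# Enqueued from the ingest path:
# ProcessReadingJob.enqueue(reading.id)
```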
There’s always a trade-off between expending effort on application code versus application infrastructure: do you spend time arranging an array of services, but ultimately writing less code, or do you invest in code and extract to services later?
I tend to prefer starting with monolithic code, and then extracting to services later. That seemed especially apt here, given the code was already a greenfield rewrite, and as such, I was still wrapping my head around the needs of the domain and the other platforms it was built out of. By keeping the infrastructure relatively simple - and knowable - I hoped the next most obvious changes to make to it would emerge in time.
So I focused on getting data from devices, through the pipeline, and into storage - and getting this deployed by the end of the week. With this solid base in place, I could spend week 382 focusing on fleet management, data-visualisation, and external API access - as well as contemplating a roadmap for future upgrades, and perhaps taking better advantage of cloud services.
By the end of the week, we had code running, data flowing into it before being processed and stored, a deployment pipeline set up, and a hefty amount of documentation of both the problem space being explored, and the work that I had produced. A good week’s work, and a good foundation for week 382.