Technical derisking strategies for research and development

The problem with technical weirdness
In the popular imagination, the craziest engineering happens behind the locked doors of Research and Development departments. There lie the flying cars, humanoid robots, and magical cures – the output of Willy Wonka’s imagination. Engineers compete to dream up the most eccentric and creative solutions, business reality be damned.
Of course, in today’s world, R&D is really the process of removing technical weirdness, not pouring it on. With few exceptions, corporate management’s imperative is to funnel useful technology into product pipelines and other revenue-generating activities – not to stockpile neat science for its own sake. Doing this requires stripping away the scariest ways for an innovation to fail, preparing to turn up the heat on detailed engineering, manufacturing, marketing, and so on. If a company cannot do this, R&D is dead weight, and creating new technology will be left to others.
Identifying and eliminating technical risk quickly and cheaply is therefore essential when working in a corporate R&D team. Here are a few perspectives and strategies.
Monkey first
Google[x] is perhaps the world’s best-known corporate R&D program. Despite (presumably) generous funding and an exceptionally broad domain of projects, their management colorfully broadcasts the value they place on technical derisking. Their mantra: to train a monkey to recite poetry on a pedestal, don’t start by making a pedestal. Train the monkey first. It’s the riskier part, and if the monkey can’t recite poetry, who cares about the pedestal?
So for R&D projects, it’s critical to think through and prioritize the main technical risks. Don’t prioritize the things that are technically low-risk, even if they are fun and show off the team’s core expertise. If the scientific principles involved are new, or still not completely understood, it’s probably high risk. If a feature is cosmetic, amenable to off-the-shelf solutions, or a problem of fine-tuning or system integration, it’s probably lower risk. Properly executed R&D projects address the biggest risks first, proceed if they pass, and cancel if they fail.
Of course, what is technically risky is subjective, and informed scientific specialists may debate it. But it can also be helpful to weigh risks by comparing to analogous engineering problems: if a risk closely resembles one with a routine, off-the-shelf solution, it is probably not high priority.
The temptation to start with low-priority items is particularly high when the decision-makers aren’t experts in the underlying science – upper managers and investors, for example. So in practice, it’s best to write down the key technical risks, then argue about and revise them frequently. For all R&D projects now, I start my review meetings with a slide that bullet-points the 3-6 main technical risks, each with a green/yellow/red rating of progress. The goal of early development is to move these risks from red to green quickly – a visual shorthand that non-experts can clearly associate with progress.
For entire projects, I’m also phasing in a tool that I call a Failchain. This is a document that resembles a watered-down FMEA (design failure modes and effects analysis) – it allows us to imagine, target, and track technical risks throughout a project. What’s neat is that it represents complicated R&D projects as a collection of risks, just as design documents might break down a complicated assembly as a collection of parts.
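To make this concrete, here is a minimal sketch of what a Failchain might look like as a data structure, in Python. The field names and the monkey/pedestal entries are my own illustration, not a formal FMEA schema:

    from dataclasses import dataclass
    from enum import Enum

    class Status(Enum):
        RED = "red"        # no passing test yet; risk fully open
        YELLOW = "yellow"  # partial evidence; test in progress
        GREEN = "green"    # retired by a passing test

    @dataclass
    class Risk:
        description: str   # what could kill the project
        severity: int      # 1 (annoying) to 5 (project-ending)
        test: str          # the experiment that would retire this risk
        status: Status = Status.RED

    def open_risks(failchain):
        """Highest-severity unresolved risks first: work these next."""
        return sorted(
            (r for r in failchain if r.status is not Status.GREEN),
            key=lambda r: r.severity,
            reverse=True,
        )

    failchain = [
        Risk("Monkey cannot learn poetry at all", 5,
             "Train one monkey on one stanza"),
        Risk("Pedestal wobbles under load", 1,
             "Static load test", Status.GREEN),
    ]
    for r in open_risks(failchain):
        print(f"[{r.status.value}] severity {r.severity}: {r.description}")

The review slide then falls out for free: sort by severity, color by status, and the red items at the top of the list are what the team attacks next.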
Don’t fall in love: design-build-test
We overattribute creative success to flashes of brilliance, and forget about the long journey needed to make those ideas actually work. Here, that journey means many cycles of failure and correction. Don’t fall too deeply in love with brilliant ideas, especially your own – testing will inevitably make it obvious that your brilliant idea needs to change. Instead, plan projects as iterations of design, building, and testing. This will reveal critical failure modes early, giving you time to correct them. The process is complete when an idea passes a collection of tests that demonstrates its risk is low enough to advance out of R&D.
Where this really becomes a problem is in design projects, especially those in SolidWorks and other media that make your work look pretty. I’ve been guilty of it myself – what starts as a simple project becomes my personal masterpiece. I add nonessential features, design for cosmetics over utility, add flowy lines and delicately rounded corners. And I run way behind schedule. As the deadline approaches, I have a beautiful digital model with little evidence that the key principles actually meet engineering requirements. I make Juicero.
To combat this, wherever possible, I adopt from programming what I call F5 engineering – as in the age-old guilty pleasure of hitting the F5 keyboard shortcut (“compile and run”) even though you know a program will crash. When you embrace that failure is inevitable, you pay more attention to where and how the engineering fails. This will either verify your suspicions, or reveal new ways of breaking – both of which are much more actionable early in the design process. And once you’ve seen your beautiful creation crash and burn, you gain new clarity and urgency on fixing it. Of course, whether hardware or software, test in a controlled environment where you can catch the failure safely and cheaply. And mind how others interpret your own work product crashing in spectacular fashion – keep non-researchers clear of the proverbial (or not) shrapnel.
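In software, you can even institutionalize this habit. Here is a minimal sketch in Python using pytest’s xfail marker; deploy_monkey is a hypothetical stand-in for whatever your riskiest subsystem happens to be:

    import pytest

    def deploy_monkey(poem: str) -> str:
        """The riskiest subsystem, deliberately exercised before it works."""
        raise NotImplementedError("monkey training not started")

    @pytest.mark.xfail(reason="monkey training is the open technical risk",
                       strict=True)
    def test_monkey_recites_poetry():
        # Run the scariest path from day one. While it fails, every test run
        # reports exactly where the project still breaks; once it passes,
        # strict=True makes the unexpected pass loud – a cue to re-plan.
        assert deploy_monkey("Jabberwocky") == "Jabberwocky"

The point isn’t the framework; it’s that the riskiest behavior gets exercised on every run, so its failures stay visible instead of deferred.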
Where there is personal creativity, there is pride. Two inventors will each think their own solutions are perfect. But the finish line is testing, not a brilliant solution. So have both inventors perfect their own solutions, but also perfect the team’s approach to testing. One side’s test will expose weaknesses in the other’s solution, and vice versa. You end up with numerous failed but insightful tests, two incomplete inventions, and a strong start to a third, superior invention that appropriates the best ideas from both.
R&D is the art of reconciling imagination with reality, so embrace reality, too. When it exists only on paper, or in CAD, an invention is only a tiny fraction of the total project. Make early prototypes modular and hackable, not integrated, polished masterpieces. And when overseeing R&D projects, embrace the fact that most of the time and expense will be spent on design-build-test cycles.
First principles and their proper place
In a prior job, my boss and I tried to convince a large camera manufacturer to sponsor development of a component based on our technology. This would fit into camera modules that they would sell, for the first time, in consumer and smartphone markets. This company was no stranger to risky R&D – their position was and remains dominant because they invested many years in new semiconductor materials and foundry processes, from academic research on up. We had great respect for their engineering values, and were prepared to show very detailed plans and simulations to make our case. But the simplicity of their questions surprised us.
To convince his senior management to fund the project, our contact needed three things from us: a spreadsheet estimate of manufacturing cost, a “duct-tape” proof-of-concept experiment, and a quantitative model so simple that you could solve it on a napkin. The first two were expected, and we provided them. The third was not – the underlying physics of what we were doing was rather complicated, involving electrodynamics and multiphase fluids better suited to finite element simulation. It was much harder to convey with high school-level physics on a napkin.
His reasoning, however, was sound: modeling it with high school physics, even if not terribly precise, established the best- and worst-case outcomes of the technology. If it needed to break such basic laws of physics to work, it was probably doomed. Conversely, the same model could tell us how far we could scale the approach – an important consideration, since his product program aimed to increase camera speed, decrease size, and minimize power consumption over a roadmap of several years.
In the end, I found a way to compute two forces from our lab measurements and plug them into Newton’s second law (F = ma). Lo and behold, our long-term claims violated this high-school science principle. Our solution clearly wouldn’t work – but luckily it inspired a new solution that, a week later, did work. Now, I always try to reduce engineering problems to the simplest principles possible: free-body diagrams, Ohm’s law, the Bernoulli and thin-lens equations, energy conservation, bandwidth and rate limitations, and so on. They rarely give a precise answer, but they provide some protection against delusionally complicated reasoning, and some insight into how far a technology can go before hitting a fundamental limit.
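As a sketch of what such a napkin check looks like in code – all numbers below are invented for illustration, not from the actual project:

    # Napkin model: can the measured net force deliver the roadmap's motion?
    mass = 2.0e-6           # moving element, kg (from the CAD model)
    drive_force = 1.5e-3    # actuation force, N (lab measurement)
    drag_force = 0.4e-3     # opposing force, N (lab measurement)

    # Newton's second law: a = F_net / m
    available = (drive_force - drag_force) / mass  # m/s^2

    # Roadmap target: traverse 0.5 mm in 1 ms, starting from rest.
    # Constant-acceleration kinematics: d = a*t^2/2  ->  a = 2*d/t^2
    distance, duration = 0.5e-3, 1.0e-3
    required = 2 * distance / duration**2

    print(f"available: {available:.3g} m/s^2, required: {required:.3g} m/s^2")
    print("plausible" if available >= required else "violates the F = ma budget")

It will never match a finite element simulation, but it brackets the physics: if even the best-case napkin number falls short, no amount of detailed engineering will rescue the claim.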
Manual first, then automatic
Not too long ago, any automation engineering project was a risky endeavor. Not so today – the availability of computer control, cheap precision motors, machine vision, and network/cloud/IoT resources means that automation is often something that comes together well outside the doors of the R&D lab. Of course, exploiting these resources to automate otherwise unremarkable things is a common pitch for many a startup: self-programming thermostats, pick-and-place PC board assembly, internet-connected door locks, and so on.
But automation can be a distraction, delaying the reckoning with the truly risky aspects of a project. It can become the fancy pedestal. And prematurely tackling automation can waste time and money: when their inputs vary, many processes are much more efficient by hand, as Tesla’s recent flufferbot parable illustrates. As such, it is often wise to bump the automation aspect of an R&D project to low priority, and to derisk the high-priority part (the dancing monkey) by hand. In practice, this can draw some puzzled looks from your peers: “If you’re trying to build a robot to do X, why are you spending so much time doing X with your hands?” The answer is that doing X the first time is much harder than doing it a thousand times thereafter.
Sharing a formal analysis and prioritization of key technical risks helps this conversation. So does presenting an analogous problem where the automation part is essentially the same, but already solved. For example, in the life sciences, hundreds of different products converge on liquid handling robots with remarkably similar motion control, fluidic components, consumables, and programming interfaces. As such, programming sophisticated multistep assays or preparing massive molecular libraries no longer demands a high-stakes R&D effort. The technical risk lies in the chemistry and biology, not the automation engineering.
Conclusions
These are strategies for technical derisking that I hope transcend industry, company size, and scientific field. Distinguished researchers and inventors typically run R&D organizations, but managing technical risk is really a profession of its own. Managers who do it well often butt heads with scientists’ instincts to do the cool and novel thing – but that discipline is what enables the weird and impressive things in the first place. Where corporate managers question the value of investing in R&D at all, strategies such as these are essential to winning over skeptics and making new science work.