Earlier this month, Australia hit $100 billion in farmgate output, ahead of the 2030 goal set by the National Farmers' Federation back in 2018. This is a milestone worth celebrating.
The same week, I spoke at the ABARES Outlook conference where the latest data shows agricultural productivity growth slowing from 2.2% annually before 2000 to 0.7% since, with a growing productivity hit from climate change.
Last week, the Strategic Examination of R&D (SERD) report landed, calling for "bold reform" of Australia's R&D system. The report recommends establishing a National Innovation Pillar for Agriculture and Food, better coordination across R&D, and changes to incentive structures.
In Australia, we invest nearly $3 billion annually in agricultural RD&E. Our world-class research capabilities have undoubtedly helped the sector to achieve the impressive results we’ve seen to date. But the productivity slowdown tells us something: despite all that investment, we're not getting the impact we need. And I worry that we’re not set up for the future.
At Tenacious, we've worked with researchers and research funders, startups, and small and large agribusinesses across the value chain. What we've seen is that the challenge isn't a lack of good research or smart people; it's that the system isn't set up to turn research into impact.
To do better, we must think of RD&E as a system.
Right now, we measure activities instead of outcomes. Those wrong metrics create wrong incentives throughout the chain, from what gets funded to who owns commercialization to what kinds of risks we'll take. And when incentives are misaligned, even $3 billion can't guarantee impact.
Here are three questions we’ve been asking ourselves and our clients to help get to the root of the problem.
Right now, I see several cases where we're not measuring what matters. First, the $100 billion target itself measures total production value, not whether farms are more profitable or resilient. Higher commodity prices or favorable weather could have grown farmgate output from $60B in 2018 to $100B today, without improving underlying productivity at all.
Another example is the commonly-cited figure that we achieve an 8:1 return on our agricultural R&D investments. As far as I can tell, this comes from a 2023 ABARES report that measures the correlation between R&D spending and sector-level gross value added over time. When both trend upward, the methodology assumes R&D caused the growth. Unfortunately, correlation isn't causation: we can't prove the R&D drove the results without measuring what actually happened on farms.
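The problem with trending series can be sketched in a few lines: two series that each grow for unrelated reasons will correlate strongly even though neither causes the other. The numbers below are invented for illustration (they are not the actual ABARES figures), and both series are constructed to grow at roughly 3% a year from independent random noise.

```python
# Illustrative only: two series that both trend upward over time
# correlate strongly even when neither drives the other.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(2000, 2024)

# Hypothetical series, both growing ~3%/yr for unrelated reasons
# ($ billions; made-up starting values, not real data).
rd_spend = 2.0 * 1.03 ** (years - 2000) + rng.normal(0, 0.05, years.size)
gross_value = 60.0 * 1.03 ** (years - 2000) + rng.normal(0, 1.5, years.size)

corr = np.corrcoef(rd_spend, gross_value)[0, 1]
print(f"correlation: {corr:.2f}")  # near 1, despite no causal link
```

A regression over series like these will always "find" a strong relationship; only farm-level evidence of what was adopted, and what it changed, can separate cause from coincidence.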
Finally, many ag research organizations point to cost-benefit analyses of their R&D activities. While these show we're asking the right questions, in practice they often do more to justify the spend we already want to make than to help us prioritize between different R&D investments or enable more adoption.
How might we instead measure more of what actually matters? Did research get adopted? By how many farmers? Did it improve their profitability? Did it move the needle on productivity?
Andrew Whitelaw highlights this well, pointing out that hitting $100 billion doesn't tell us if farms are better off, and suggesting we track metrics like Net Farm Margin or the percent of production value that farmers retain.
We need to apply this kind of thinking across our R&D system. Not just "did we spend the money?" but "did it create the impact we intended?"
It's well documented that Australia struggles with commercialization, in agriculture and beyond. In agri-food specifically, many of the attempts over the past five or six years to address this have bolted startup- and venture-capital-style approaches (e.g., accelerators, venture capital funds) onto the end of the R&D process.
While startups are a key tool in the toolkit, they're far from a silver bullet, and the traditional VC model for funding them has challenges that are exacerbated when we apply additional constraints like geographic and industry-specific requirements.
We're making progress on this. For example, Hort Innovation has launched programs to bring established, offshore technology to Australia and rewritten its IP policy to be more compatible with downstream investor expectations. We're also proud of our "more than venture capital" commercialization partnership with Wine Australia, building on our "more than demo farms" adoption work. And Beanstalk's Drought Venture Studio addresses skill and funding gaps, helping more research turn into products and services that can help Aussie farmers.
But there's more to explore. What if we enabled existing SMEs to solve industry challenges directly, rather than waiting for startups to form? How might we incentivize offshore solutions, whether startups or OEMs, to bring their technology to Australia and adapt it locally? What would it take to explore business model innovations that break down commercialization or adoption barriers before the technology even reaches farms?
The toolkit is broader than "fund research, then hope a startup commercializes it." We need to use more of what's available.
I have heard versions of this conversation too many times:
Research funder: "We can get more match funding, so your levy dollar can go further."
Grower: "Great, but I don't see the value I'm getting from the levy already."
Research funder: "But we can match it to grow the pie and do more."
We don’t measure investors by how many investments they make, or basketball players by how many shots they take. When success in the eyes of research funders is dollars deployed and activities achieved, we have an incentive problem.
Unfortunately, the incentive challenge goes deeper. In an (oversimplified) model of research → development → extension, each function is often performed by a different department or even organization, because each stage requires different skills. The issue is that no one has an incentive to move a project through each phase. In fact, if you're sitting in R, D, or E, you actively DON'T want a project to move out of your phase, because that means less funding for your activities.
Another example is that culturally, we’ve made it acceptable to take scientific or technical risk, but we’re terrified of execution and market risk. If a big research project (or even a multi-million dollar CRC) fails to deliver because the science didn’t work, we chalk it up to the necessary perils of blue sky research. Yet, the idea of “picking a winner” that then fails is too horrifying to contemplate. In both cases, the money was lost and no impact was achieved. Why do we view them so differently?
Until we address the misaligned incentives, we'll keep producing research that never delivers.
The SERD report rightly calls for bold reform, better coordination, and changes to incentive structures. These recommendations create an important opening for change.
But national coordination can't replace the work that needs to happen within each commodity system. RDCs are critical precisely because they're close to their stakeholders. That proximity is a strength we can't afford to lose.
The question is whether research funders and researchers can treat RD&E as an end-to-end system: addressing what they measure, how they commercialize, and what incentives drive behavior from research through to adoption.
The productivity slowdown tells us we need to do better. The $18 billion in global agtech failures tells us what doesn't work. We have $3 billion in opportunity and the foundations to build on. Now we need the systems thinking to deliver the impact farmers need.
