At the InfoAg Conference last week, a common theme throughout discussions on the trade show floor, as well as during sessions, suggested the ag industry is on the cusp of truly beginning to put its data to work.
Several years ago, the main topic of discussion at the conference was drones and high-resolution imagery. Every other booth had some sort of drone or imagery product that promised to change farming. There was plenty of hype, but today imagery in several forms has become a readily available data layer that farmers and their advisors use to make more informed and timely decisions.
The same shift seemed to be happening with data at this year's conference. Many exhibitors offered a data analysis product. Microsoft was featured in the opening session, discussing how its FarmBeats project is bringing enhanced data collection, connectivity, and decision-making tools to smallholder farmers in several developing countries. Multiple sessions mentioned AgGateway's ADAPT open-source effort and the progress being made toward solving interoperability issues in modern farming. These sessions all highlighted both the increasing amount of information coming in and the falling barriers, such as incompatible file formats, to using it.
There is also a growing body of research outlining the benefits and returns possible when multiple data sets are connected in a timely manner to better inform in-season decisions or to provide improved traceability and insight at a sub-field level. For example, research at North Carolina State University mapped cotton quality at the gin all the way back to the area of the field each bale came from. It was not a simple process: the data lived in multiple formats and systems, and extra hardware was needed to read RFID tags, but the project proved it could be done.
This might not sound like much to someone growing corn or soybeans, but the process could be applied across multiple crops. What if farmers were able to better understand the conditions that produced wheat with higher protein, or corn with a higher test weight? This research is laying the groundwork for tracing quality readings at the elevator, or in this case the gin, back to a specific area of the field. Variations in soil type, management practices, and weather patterns were linked together at a sub-field level, showing which combinations created better results. That insight can show how to bring larger portions of the field up to better quality and improve margins for the farmer.
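At its core, this kind of traceability is a data join: quality results recorded at the gin or elevator, keyed by a bale or load identifier, merged back onto harvest records that know which sub-field zone each load came from. The sketch below is purely illustrative; every field name, ID, and value is invented, and real controller and gin data formats vary widely.

```python
# Hypothetical sketch: joining quality readings captured at the gin back to
# sub-field zones via harvest identifiers (e.g., an RFID-tagged bale ID).
# All record layouts, IDs, and values here are invented for illustration.

# Quality results keyed by bale ID, as might be read at the gin.
quality_by_bale = {
    "BALE-001": {"micronaire": 4.6, "strength_g_tex": 30.1},
    "BALE-002": {"micronaire": 3.9, "strength_g_tex": 27.4},
}

# Harvest logs linking each bale back to the sub-field zone it came from.
harvest_log = [
    {"bale_id": "BALE-001", "zone": "north-sand", "soil": "sandy loam"},
    {"bale_id": "BALE-002", "zone": "south-clay", "soil": "clay"},
]

def link_quality_to_zones(harvest_log, quality_by_bale):
    """Merge gin quality results onto harvest records by bale ID."""
    linked = []
    for rec in harvest_log:
        quality = quality_by_bale.get(rec["bale_id"])
        if quality is not None:
            # Combine the zone/management record with its quality reading.
            linked.append({**rec, **quality})
    return linked

linked = link_quality_to_zones(harvest_log, quality_by_bale)
for row in linked:
    print(row["zone"], row["micronaire"])
```

Once quality is attached to zones, the same records can be layered with soil, management, and weather data to look for the combinations that produced better results.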
The excitement about using all the data being collected, and analyzing it with new tools powered by machine learning, feels like it is starting to move from early adopters experimenting with new tools to a broader swath of the market demanding proven, tested solutions with a tangible return. But one underlying current in the discussions was the need for farmers to be in control of all the data generated in their operations and to become better stewards of it.
Multiple panel discussions hit on the need for farmers to start treating the data generated on their operations as a valuable asset, similar to their banking information. One example cited was the growing pressure from consumers to understand where their food comes from and how it was grown; it isn't difficult to imagine a situation in a couple of years where farmers must prove they have been making the right decisions in their fields, not just for regulatory compliance but also for the environmental impact of different management practices. If that data sits in a desk drawer full of thumb drives, or on a machine terminal that was traded in with the machine, providing this proof could be difficult. Even if the data has been uploaded into a farm management information system (FMIS), it is often cleaned and processed during ingestion to fit that system. The original controller files may be gone if they were not backed up somewhere else, and information valuable to a different system may be lost because it was not needed for that application.
This thread carried through to the Ag Data Storage and Control session hosted by the Agricultural Data Coalition. The panel discussed a wide range of topics that clearly outlined the need for farmers to be in the driver's seat when it comes to their data. There is a growing list of reasons why they need to maintain an original record of machine controller files in a "data deposit box" they can return to for historical records. It was also clear that digital assets differ from the physical assets farmers already value and protect. The value in farm-generated data is often realized only when it is shared with an advisor or merged with another dataset, yet the original file does not have to be consumed in the process. It can be saved and referenced again and again to keep providing value to the operation, or to contribute to broader learning through research projects and other initiatives.
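One minimal way to picture a "data deposit box" is an archive that keeps an untouched copy of each raw controller file along with a checksum, so the original can be re-shared or re-processed later without ever being altered. The sketch below is an assumption about how such a vault might work, not a description of any actual product; the paths and naming scheme are invented.

```python
# Hypothetical sketch of a "data deposit box": keep an untouched copy of each
# original controller file alongside a SHA-256 checksum, so the raw record can
# be shared or re-processed later without consuming the original.
# The vault layout and naming scheme are assumptions for illustration.
import hashlib
import shutil
from pathlib import Path

def deposit(original: Path, vault: Path) -> str:
    """Copy a raw controller file into the vault; return its SHA-256 digest."""
    vault.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(original.read_bytes()).hexdigest()
    # Prefix the stored name with the digest so duplicates are detected
    # and an archived file is never silently overwritten.
    target = vault / f"{digest}_{original.name}"
    if not target.exists():
        shutil.copy2(original, target)
    return digest
```

Because the archived copy is content-addressed by its hash, the same digest can be handed to an advisor or a research project as a stable reference, while the farmer keeps the original file.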
InfoAg 2019 was a great conference, with some of the best and brightest in precision ag getting together for a few days in St. Louis to network and learn from each other. Hopefully, in three or four years we can look back at this year's conference and mark it as the turning point when data utilization moved from early adopters to the mainstream, and when farmers took control of their data and unlocked new levels of productivity through better-informed decisions based on their accumulated experience and data shared with trusted advisors.