Why prioritising customers is critical to AI product development

By Shadi Rostami, SVP of Engineering at Amplitude.

Monday, 29th April 2024

The rise of AI has led to a seismic shift in product development. In the past, teams often spent more than six months fine-tuning models, then the go-to method for accomplishing specific tasks. Now, thanks to AI and the latest generation of large language models (LLMs), engineers can use prompt engineering to complete those same tasks in minutes. While these advancements are exciting, product and engineering teams need to remember that AI is not a strategy; it’s a tool that supports a strategy.

Building AI for its own sake only results in companies wasting time and resources rushing out products that users quickly abandon or never use at all. Instead, product and engineering teams must adopt a customer-centric approach to build and launch successful AI products. Here are a few product-building principles you can adopt to keep customer needs as the focal point.

Put Privacy First 

If customers are going to test out a new product, let alone commit to it, they need to trust the company that built it. At the same time, companies must collect user data to create great AI experiences, and these two needs are in tension. To resolve this, companies need to clearly outline the checks and balances that keep customer data secure and ensure it is never sold. That begins with adopting a privacy-first mindset.

Consider how your entire business model aligns with this principle. Companies must closely examine the data used to develop AI products and assess its privacy implications. For instance, sending anonymised metadata to third-party AI providers may be acceptable, but transferring personally identifiable information (PII) should be strictly avoided.
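
As a rough illustration of where that line might sit, the sketch below shows one way a team could scrub events before anything reaches an external provider. The field names, the PII denylist, and the provider call are hypothetical assumptions for illustration only, not a reference to any particular vendor's API.

```python
# Minimal sketch of a privacy gate in front of a third-party AI call.
# Field names, the denylist, and the downstream client are illustrative
# assumptions, not any specific provider's API.
import hashlib

PII_FIELDS = {"email", "name", "phone", "ip_address"}  # assumed denylist

def scrub_event(event: dict) -> dict:
    """Drop PII outright and pseudonymise the user identifier."""
    clean = {k: v for k, v in event.items() if k not in PII_FIELDS}
    if "user_id" in clean:
        # One-way hash so behaviour can still be analysed without exposing identity.
        clean["user_id"] = hashlib.sha256(str(clean["user_id"]).encode()).hexdigest()
    return clean

event = {
    "user_id": 42,
    "email": "jane@example.com",   # never leaves the trust boundary
    "feature": "search",
    "latency_ms": 180,
}
payload = scrub_event(event)        # anonymised metadata only
# hypothetical_provider_client.send(payload)
```

The design choice worth noting is that the gate sits at the boundary: anything not explicitly safe is stripped or pseudonymised before it leaves, rather than relying on the provider to discard it.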

After establishing proper privacy protocols and tools, conduct regular audits to verify continued compliance. By pairing consistent compliance checks with a privacy-first mindset, you maintain trust with your customers. Ultimately, this trust gives companies a better chance of gaining lasting user adoption.

Continually Improve Governance

In a recent survey, 45% of CDOs ranked clear and effective data governance policies as a top priority. This isn’t surprising. Proper data governance is essential to developing accurate and reliable AI models. Even so, data can be tricky to manage. Organisations must define clear policies and processes for handling and managing data from the very beginning in order to train the most accurate AI models.

One area organisations struggle with is data discoverability: understanding who needs access to what data and how to appropriately grant permissions to those internal teams. What deepens this challenge is the number of factors that affect discoverability, such as inconsistent naming conventions, unrecorded data transformations, and uncontrolled data copying. If engineers cannot find or access the data needed to build and fine-tune models, the product will never improve. To address this, organisations should enforce standardised data policies with clear processes for naming, moving, transforming, and storing data. Even then, data will become at least somewhat disorganised over time, so governance must be an ongoing, iterative process that teams commit to for the sake of the models and, critically, the customer outcomes.
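
One lightweight way to make such policies stick is to automate the checks. The sketch below assumes a hypothetical naming convention and dataset list purely for illustration; the point is that violations surface continuously rather than during a painful manual audit.

```python
# Illustrative governance check: the naming convention and the dataset
# names are assumptions for this sketch, not an established standard.
import re

# Assumed convention: <domain>_<entity>_v<version>, e.g. "billing_invoices_v2"
NAMING_RULE = re.compile(r"^[a-z]+(_[a-z]+)+_v\d+$")

def audit_dataset_names(names: list[str]) -> list[str]:
    """Return the dataset names that violate the naming convention."""
    return [n for n in names if not NAMING_RULE.match(n)]

datasets = ["billing_invoices_v2", "TempCopy-Final", "marketing_leads_v1"]
print(audit_dataset_names(datasets))  # ['TempCopy-Final'] -- a copy nobody can discover later
```

Run as part of CI or a scheduled job, a check like this keeps naming drift visible instead of letting undiscoverable copies accumulate.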

Create Choice for Users

It wouldn’t be a true customer-centric AI strategy without transparent practices and user choice. While not discussed as much as data privacy and governance, this is a critical piece of gaining, and keeping, customer trust. Put simply, it’s not enough to make blanket statements on how AI is being used. Instead, call out where AI is showing up in user experiences throughout the entire product journey. And to take it one step further, give users the choice to opt in or out at every step. This allows individuals to make informed decisions that align with their specific needs. These selections don’t need to be all or nothing: users who don’t want AI involved in one particular feature shouldn’t have to turn AI off everywhere.

Instead, provide customers with options like a sliding scale, where they have full control. Of course, the more data you can collect, the more you can continually improve the customer experience, so find where you can strike the right balance. If users opt in, they reap the benefits of a fine-tuned model that harnesses the collective data of all participants.
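
As a rough sketch of what that sliding scale could look like in code, the example below models per-feature consent tiers. The tier names and feature names are hypothetical, not a description of any specific product's settings.

```python
# Sketch of per-feature AI consent instead of a single global switch.
# Tiers and feature names are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class ConsentLevel(Enum):
    OFF = 0          # no AI involvement for this feature
    ANONYMISED = 1   # AI may use anonymised metadata only
    FULL = 2         # user opts in and contributes to model improvement

@dataclass
class AIPreferences:
    default: ConsentLevel = ConsentLevel.ANONYMISED
    per_feature: dict = field(default_factory=dict)

    def level_for(self, feature: str) -> ConsentLevel:
        """Fall back to the user's default when no per-feature choice exists."""
        return self.per_feature.get(feature, self.default)

prefs = AIPreferences()
prefs.per_feature["smart_summaries"] = ConsentLevel.OFF   # opt out of one feature
prefs.per_feature["search_ranking"] = ConsentLevel.FULL   # opt fully in to another

print(prefs.level_for("smart_summaries"))  # ConsentLevel.OFF
print(prefs.level_for("anomaly_alerts"))   # falls back to ConsentLevel.ANONYMISED
```

The benefit of modelling consent per feature is that every AI-powered surface can ask the preference store before acting, so a user's choice in one place never silently overrides their choice elsewhere.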

In the product world, we’re known to build, iterate, and ship fast. But we cannot lose sight of the end users. By adopting a customer-centric approach to AI and aligning on clear product-building principles, companies can balance privacy and governance with the flow of data that improves AI models. Ultimately, customers will trust brands that are transparent about their practices, clearly show where AI is used, and help customers determine how much, or how little, they want to adopt. Those who successfully strike this balance will lead AI’s transformative wave.
