
We’re measuring AI all wrong—and missing what matters most



There is a strange irony in how we evaluate artificial intelligence: We have created systems to imitate and enhance human capabilities, but we measure their success using metrics that capture everything except what really matters to people.

Tech industry dashboards overflow with impressive AI numbers: processing speed, parameter counts, benchmark scores, user growth rates. Silicon Valley's best minds work endlessly to push these metrics higher. But in the maze of these measurements, we lose sight of a basic fact: the most sophisticated AI in the world is worthless if it does not meaningfully improve human lives.

Consider the story of early search engines. Before Google, companies competed on the sheer number of web pages they indexed. Google won not because it had the biggest database, but because it understood something deeper about human behavior: relevance and trustworthiness matter more than raw volume.

AI that builds trust

The AI landscape now feels strikingly similar, with companies racing to produce ever-larger models while potentially losing sight of the human-centered design elements that actually drive adoption and impact.

The path to better AI evaluation begins with trust. Emerging research shows that users engage more deeply and persistently with AI systems that clearly explain their reasoning, even when those systems occasionally fall short. This makes intuitive sense: trust, whether in technology or in people, grows from transparency and reliability rather than raw performance.

But trust is only the foundation. The most effective AI systems build real emotional connections with users by showing a genuine understanding of human psychology. Research reveals a compelling pattern: when AI systems adapt to users' psychological needs rather than just performing tasks, they become an integral part of people's day-to-day lives. It's not about programming superficial kindness; it's about creating systems that truly understand and respond to human experience.

Trust matters more than technical prowess when it comes to AI adoption. A groundbreaking study of AI chatbots involving about 1,100 consumers found that people are willing to forgive service failures and maintain brand loyalty based not on how fast an AI solves their problem, but on whether they trust that the system is trying to help them.

AI that gets you

Researchers have identified three key elements that build this trust: first, the AI needs to demonstrate a genuine ability to understand and address the issue. Second, it needs to show benevolence, a sincere desire to help. Third, it must maintain integrity through consistent, honest interactions. When AI chatbots embody these attributes, customers are more likely to forgive service problems and more likely to recommend the experience to others.

How do you create an AI system that people trust? The study found that simple things made a big difference: anthropomorphizing the AI, programming it to express empathy in its responses ("I understand how frustrating this must be"), and being transparent about data privacy. In one example, a customer dealing with a delayed delivery was more likely to remain loyal when a chatbot named Russell acknowledged the failure and clearly explained both the problem and the solution, compared with an unnamed bot that only stated the facts.

This perspective challenges the common assumption that AI just needs to be fast and accurate. In health care, financial services, and customer support, the most successful generative AI systems are not the most sophisticated; they are the ones that build genuine relationships with users. They take time to explain their reasoning, acknowledge concerns, and show real care for user needs.

And yet traditional metrics do not capture these important performance dimensions. We need frameworks that evaluate AI systems not only on their technical ability but on their capacity to create psychological safety, foster genuine rapport, and, most importantly, help users achieve their goals.

New metrics for AI

At Cleo, where we focus on improving financial health through an AI assistant, we have been exploring these new dimensions. That can mean measuring factors such as user confidence and the depth and quality of user interactions, as well as looking at the entire conversation journey. What matters to us is understanding whether Cleo, our AI financial assistant, actually helps a user achieve what they set out to do in any given interaction.

A more nuanced evaluation framework does not mean abandoning performance metrics; they remain important indicators of commercial and technical success. But they need to be balanced with deeper measures of human impact. That's not always easy. One challenge with these metrics is their subjectivity, meaning reasonable people may disagree about what good looks like. They are still worth pursuing.
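To make the idea concrete, here is a minimal sketch of what blending traditional performance signals with human-impact measures might look like. This is purely illustrative: the field names, weights, and thresholds are assumptions for the example, not Cleo's actual framework.

```python
from dataclasses import dataclass

@dataclass
class ConversationReview:
    """One reviewed AI conversation (hypothetical schema)."""
    task_completed: bool      # did the user achieve their goal?
    latency_seconds: float    # traditional performance signal
    trust_rating: float       # user-reported, 0.0-1.0 (subjective)
    rapport_rating: float     # user-reported, 0.0-1.0 (subjective)

def composite_score(r: ConversationReview) -> float:
    """Blend technical and human-impact signals into one 0-1 score.

    The weights are arbitrary for illustration; the point is that
    human-impact measures count at least as much as raw speed.
    """
    performance = 1.0 if r.latency_seconds < 2.0 else 0.5
    goal = 1.0 if r.task_completed else 0.0
    return (0.2 * performance + 0.4 * goal
            + 0.2 * r.trust_rating + 0.2 * r.rapport_rating)

review = ConversationReview(task_completed=True, latency_seconds=1.2,
                            trust_rating=0.9, rapport_rating=0.8)
print(round(composite_score(review), 2))  # 0.94
```

Because the trust and rapport inputs come from user reports, two reviewers can reasonably disagree about them, which is exactly the subjectivity challenge described above.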

As AI becomes more deeply woven into the fabric of daily life, the companies that understand this shift will be the ones that succeed. The metrics that got us here are not enough for where we are going. It's time to start measuring what really matters: not just how well AI performs, but how well it helps people thrive.

Opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

Read more:

  • Genesys CEO: How empathetic AI can scale our humanity in times of economic uncertainty
  • When AI builds AI: The next great inventors may not be human
  • AI cost drops change what's possible – with huge implications for tech startups
  • I spent many years helping female founders access capital. Now that they have AI, they may not need it

This story was originally featured on Fortune.com
