AI Appreciation Day: The Path Forward for GenAI

Transparency is essential for AI applications, as it enables users to comprehend how AI systems reach their conclusions


As the Middle East rapidly embraces generative AI across various sectors, the fundamental challenge confronting businesses is not just leveraging the technology, but ensuring it is trustworthy.

The question that perplexes many is: how do we build generative AI applications that provide accurate responses without succumbing to “hallucinations” (making up facts) or producing misleading or incorrect outputs?

The solution may lie in an older, more established technology: search engines. By analysing how search engines manage to provide reliable answers, businesses can learn valuable lessons about building trustworthy AI applications. This is particularly vital because generative AI’s potential to enhance efficiency, productivity, and customer service in the region is immense, but it can only be realised if enterprises can be sure their generative AI apps provide reliable and accurate information.

The search engine precedent

Search engines excel at filtering through massive amounts of data to identify the most relevant and credible information. They employ sophisticated algorithms to assess the quality of links and prioritise content from authoritative sources, such as corporate training manuals and human resources databases, while steering clear of less dependable data. This approach can be mirrored in generative AI to ensure accuracy and reliability.

For instance, when a user queries a search engine about a local event in Dubai, the engine uses location-based filtering and historical data relevancy to provide the most appropriate answer. Similarly, generative AI can be programmed to prioritise data from recognised and reputable Middle Eastern sources, such as government publications and accredited academic research, ensuring the regional relevance and accuracy of the information.

Shoring up dependability

While foundational large language models (LLMs) have made significant advances in language understanding and response generation, they are not inherently dependable. These models often learn from a vast range of internet sources, not all of which are reliable. This is why even reputable AI models can sometimes make up facts and provide incorrect answers.

A key takeaway is that developers ought to treat LLMs as conversational partners, not authoritative sources of truth. Although LLMs are adept at interpreting language and generating responses, they should not be relied upon as the definitive source of information. To mitigate the risk of inaccurate data, many companies ground their LLMs in their own internal data and vetted third-party data sets. By implementing search engine-like ranking methods and prioritising reputable data sources, businesses can significantly enhance the reliability of their AI-driven applications.
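
To make this concrete, here is a minimal sketch of that retrieval-and-ranking idea in Python. The source names, trust weights, and keyword-overlap scoring are illustrative assumptions, not any vendor’s API; a production system would use a proper search index or embedding model and its own vetted source list.

```python
# A minimal sketch of retrieval with source-authority weighting.
# Everything here is an illustrative assumption, not a specific
# vendor's API.

from dataclasses import dataclass


@dataclass
class Document:
    text: str
    source: str  # e.g. "hr_database", "public_forum"


# Hypothetical trust weights: vetted internal sources rank higher.
SOURCE_AUTHORITY = {
    "hr_database": 1.0,
    "training_manual": 0.9,
    "public_forum": 0.3,
}


def relevance(query: str, doc: Document) -> float:
    """Toy keyword-overlap score; a real system would use a
    search index or embeddings."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.text.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)


def rank(query: str, docs: list[Document], top_k: int = 3) -> list[Document]:
    """Rank by relevance multiplied by source authority, mirroring
    how search engines prioritise credible links."""
    scored = [(relevance(query, d) * SOURCE_AUTHORITY.get(d.source, 0.1), d)
              for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]


def build_prompt(query: str, docs: list[Document]) -> str:
    """Ground the LLM in retrieved text so it answers from vetted
    data instead of inventing facts."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in docs)
    return ("Answer using ONLY the sources below. If they do not "
            f"contain the answer, say so.\n\nSources:\n{context}\n\n"
            f"Question: {query}")


if __name__ == "__main__":
    corpus = [
        Document("Annual leave is 25 working days.", "hr_database"),
        Document("I think leave is like 30 days?", "public_forum"),
    ]
    question = "How many days of annual leave?"
    print(build_prompt(question, rank(question, corpus)))
```

The design point is simply that relevance alone is not enough: each candidate passage is also weighted by how much its source is trusted, so a vetted HR database outranks a public forum even when both mention the query terms.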

Context

Moreover, search engines have become adept at handling ambiguous queries by understanding the context. For example, a search term like “swift” can have multiple meanings – the author, the programming language, the banking system, the pop sensation, and so on. Search engines look at factors like geographic location and other terms in the search query to determine the user’s intent and provide the most relevant answer.

However, when a search engine is unable to deliver the correct answer due to insufficient context or the absence of a relevant webpage, it will often attempt to respond regardless.

For instance, queries such as “What will the economy be like 100 years from now?” or “Who will win the Saudi Pro League next season?” may not have reliable answers. Despite this, search engines operate on the principle of attempting to provide an answer in nearly all situations, even when their confidence in the accuracy of the response is low. The crucial difference is that the resources they point to are not made up.
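
A sketch of the complementary rule for generative AI, assuming a retrieval step has already scored candidate passages: unlike a search engine, which attempts an answer in nearly all cases, a GenAI application can simply decline when nothing clears a confidence threshold. The threshold value and function name here are hypothetical.

```python
# Decline-when-ungrounded: refuse rather than fabricate when no
# retrieved source is confident enough. MIN_CONFIDENCE is an
# illustrative assumption, not a standard value.

MIN_CONFIDENCE = 0.5


def grounded_answer(query: str, ranked: list[tuple[float, str]]) -> str:
    """ranked holds (score, passage) pairs from retrieval, best first."""
    if not ranked or ranked[0][0] < MIN_CONFIDENCE:
        return f"I could not find a reliable source for: {query}"
    _score, passage = ranked[0]
    return f"Based on available sources: {passage}"


# A speculative question retrieves nothing credible, so the app
# declines instead of producing a confident-sounding fabrication.
print(grounded_answer("Who will win the Saudi Pro League next season?", []))
```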

Transparency

Hence, transparency should be a necessary feature of AI applications. Users need to understand how AI systems arrive at their conclusions.

In the Middle East, where there is a high emphasis on technological innovation and integrity, explaining AI decisions becomes not only a technical requirement, but also a matter of trust and ethical responsibility.

Generative AI applications, therefore, must be designed to explain their ‘work’. Just as high school teachers tell their students to show their work and cite sources, generative AI applications must do the same. They should be able to cite sources and explain their reasoning, offering users a clear understanding of why a particular response was given. This level of transparency helps build confidence in AI applications and ensures that they are seen as reliable tools rather than opaque chatbots.
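
As an illustration, here is a sketch of what ‘showing your work’ can look like at the application layer, assuming a simple, hypothetical response schema rather than any standard format:

```python
# A "show your work" response format: the answer travels together
# with the sources it drew on, so users can verify it. Field names
# are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class Citation:
    source: str   # e.g. a document ID or URL
    excerpt: str  # the passage the answer relies on


@dataclass
class GroundedAnswer:
    answer: str
    citations: list[Citation] = field(default_factory=list)

    def render(self) -> str:
        """Format the answer with numbered citations, the way a
        student is asked to cite sources for an essay."""
        lines = [self.answer, "", "Sources:"]
        for i, c in enumerate(self.citations, start=1):
            lines.append(f'  [{i}] {c.source}: "{c.excerpt}"')
        return "\n".join(lines)


if __name__ == "__main__":
    resp = GroundedAnswer(
        answer="Annual leave is 25 working days [1].",
        citations=[Citation("hr_database/leave_policy",
                            "Annual leave is 25 working days.")],
    )
    print(resp.render())
```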

AI in the Middle East

As the region continues to advance technologically, the integration of AI into business and everyday life presents exciting opportunities.

However, these opportunities come with the responsibility to ensure that AI systems are not only effective but also trustworthy and transparent. By learning from the successes of search technology and adapting these lessons to the unique cultural and business environment of the Middle East, we can unlock the full potential of generative AI.

In this journey, the goal is clear: to develop AI applications that act not just as tools, but as trusted advisors, capable of driving the future of business and innovation in the Middle East.

The author, Mohamed Zouari, is General Manager – Middle East, Africa & Turkey at Snowflake.
