The great GenAI trade-off: Balancing responsiveness with responsibility

GenAI can expose the business to many risks, ranging from privacy violations to the data-related biases that can arise when building and using machine-learning models

Heisenberg’s Uncertainty Principle tells us there is a trade-off between certainties of measurement: the more accurately we measure a particle’s position, the less accurately we can measure its momentum, and vice versa.

Artificial intelligence (AI) faces a similar balancing act between being responsive and being responsible. Business leaders want insights before competitors come to the same conclusions. But they must also satisfy customers, employees, investors and regulators on the issue of responsible AI.

Unlocking the benefits of generative AI (GenAI) can expose the business to many risks, ranging from privacy violations to the data-related biases that can arise when building and using machine-learning models. How does the CIO resolve this trade-off?

GenAI: Within and without

Like any new technology, GenAI brings challenges and risks. Brand image is at stake, so it is to the advantage of every adopter to weigh the risks to security and customer privacy before taking any steps toward adoption.

GenAI’s risks stem from its dual role as both an inward- and outward-facing tool. Internally it has been a tool for cost optimisation; increasingly, organisations are going beyond that to transform external customer interactions.

If CIOs can get the inward-outward balance right, they can reap the rewards from their GenAI investments and deliver new value for the business. To accomplish rapid response while adequately managing risk, data governance will be critical, as will a coherent AI vision that accounts for the current tendency to overestimate GenAI’s short-term impacts and underestimate its longer-term potential.

This mistake is not unique to GenAI; it has become something of a tradition with emerging technologies. With GenAI, however, it becomes a potentially critical error. If we rush to reinvent processes for the sake of efficiency, say, we may overlook vital issues of data integrity.

Given the potential risk of a data breach, CIOs seeking to maximise the value of GenAI at speed and scale must never take their eyes off the importance of data security.

Elastic governance

To properly address the great GenAI trade-off between responsiveness and responsibility, enterprises must begin and end with governance. CIOs must stop seeing data and AI governance as manuals that are written once and rigidly followed thereafter. Policy must be elastic, subject to change as, for example, the CIO’s relationship with risk officers changes.
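
To make that elasticity concrete, here is a minimal Python sketch of governance policy expressed as versioned data rather than a fixed manual, so rules can tighten or loosen without rewriting application code. The class, fields, dates and rules are hypothetical illustrations, not a reference to any specific framework.

# A hypothetical, versioned GenAI policy object; the rules change, the code does not.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GenAIPolicy:
    version: str
    review_due: date                          # the policy is revisited on a fixed cadence
    allowed_data_classes: set = field(default_factory=set)
    blocked_use_cases: set = field(default_factory=set)

    def permits(self, use_case: str, data_class: str) -> bool:
        """Allow a use case only if it is not blocked and its data class is approved."""
        return (use_case not in self.blocked_use_cases
                and data_class in self.allowed_data_classes)

# A later revision tightens the rules without touching application code.
policy_v1 = GenAIPolicy("1.0", date(2025, 6, 30), {"public", "internal"}, set())
policy_v2 = GenAIPolicy("1.1", date(2025, 9, 30), {"public"}, {"customer_profiling"})
print(policy_v1.permits("customer_profiling", "internal"))  # True
print(policy_v2.permits("customer_profiling", "internal"))  # False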

Culture, including the organisation’s approach to management, must also change to fulfil the goal of maximising gains from GenAI while protecting the business, its people and its customers.

One way of plugging AI skills gaps in the GCC is to upskill from within. GenAI represents a democratisation of development, much as low-code and no-code platforms aided the rise of the citizen developer, and that democratisation brings governance challenges of its own.

CIOs may need to reimagine their role in the organisation. They are now much more than infrastructure stewards; they are data custodians who must pay due attention to both security and the potential for innovation. The CIO must now lead on data governance, which starts with the ability to identify the location of data in real time. Tech leaders must comprehensively map and monitor every data source inside or outside the business.

This issue may be the greatest challenge in corporate IT today. While visibility has never been more critical, it has also never been more difficult. Data moves to serve the needs of providers and their customers. And if tracking its current location is a challenge for the CIO, this presents a risk when it comes to protecting that data and maintaining its quality.
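
As one illustration of the visibility problem, the following Python sketch keeps a simple inventory of data sources and flags those whose location is unknown or whose metadata has gone stale. The source names, fields and thresholds are assumptions for illustration, not a reference to any particular catalogue product.

from datetime import datetime, timedelta, timezone

# A hypothetical inventory of data sources with their last-known locations.
data_sources = [
    {"name": "crm_contacts",  "location": "eu-cloud-region-1", "last_seen": datetime.now(timezone.utc)},
    {"name": "support_chats", "location": "unknown",           "last_seen": datetime.now(timezone.utc) - timedelta(days=14)},
]

def flag_visibility_gaps(sources, max_age=timedelta(days=7)):
    """Flag sources whose location is unknown or whose metadata is older than max_age."""
    now = datetime.now(timezone.utc)
    return [s["name"] for s in sources
            if s["location"] == "unknown" or now - s["last_seen"] > max_age]

print(flag_visibility_gaps(data_sources))  # ['support_chats']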

Verification and refinement are significant elements of responsible AI, so tech leaders must define and develop a forward-looking data strategy that is tightly coupled to business goals, but also imposes strict frameworks that ensure quality, privacy and security.
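
A strict framework of that kind might include automated quality and privacy gates in front of any GenAI pipeline. The Python sketch below is one minimal illustration; the checks, patterns and thresholds are hypothetical and would need to reflect an organisation’s own rules.

import re

# Hypothetical patterns for obvious personal data (illustrative, not exhaustive).
PII_PATTERNS = [re.compile(r"\b\d{16}\b"),          # card-like numbers
                re.compile(r"[\w.]+@[\w.]+\.\w+")]  # e-mail addresses

def gate_records(records, required_fields=("id", "text"), max_missing=0.05):
    """Reject a batch if too many fields are missing or obvious personal data is present."""
    if not records:
        return False, "empty batch"
    missing = sum(1 for r in records for f in required_fields if not r.get(f))
    if missing / (len(records) * len(required_fields)) > max_missing:
        return False, "completeness threshold breached"
    for r in records:
        if any(p.search(str(r.get("text", ""))) for p in PII_PATTERNS):
            return False, "possible personal data detected"
    return True, "ok"

print(gate_records([{"id": 1, "text": "contact me at a@b.com"}]))  # (False, 'possible personal data detected')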

Learning from others

Sustainable success in AI requires a cultural shift. The CIO will be both AI champion and AI teacher, encouraging colleagues to look at data differently by highlighting the connection between data and business value.

By promoting data fluency, the CIO becomes a value creator. They will also engage with partners and peers to shape the market. They will make use of existing networks while creating new ones. Through this continuous learning, CIOs will improve their understanding of GenAI and what their priorities should be to maximise its value-add.

Within these knowledge-exchange networks, CIOs will find GenAI vendors who will come under increasing pressure to be transparent about how their algorithms operate, going so far as to allow their customers to inspect them. That transparency must extend to clear explanations of where data is stored, sent, and processed. And vendors will be expected to comment openly on how data can be managed within the complex and fast-moving regulatory environments we find in regions such as the GCC.

GenAI represents an opportunity and a challenge for CIOs. Leaders must go back to the drawing board on data governance and put it front and centre. AI itself can play a role in that governance, continuously monitoring for potential vulnerabilities and operating autonomously to remedy them. GenAI appears to face few limits on the use cases it can serve; it will be up to the CIO to impose such limits in consultation with colleagues, deciding where to deploy it, which roles should be reshaped and which business processes should be optimised.
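
A continuous monitor-and-remediate loop of the kind described above could be as simple as the Python sketch below, which scans an inventory for governance findings and applies a first-response action. The scan rule, finding format and remediation step are placeholders for illustration only.

# Hypothetical scan: report sources whose location is unknown.
def scan_for_findings(inventory):
    return [{"source": s["name"], "issue": "unknown_location"}
            for s in inventory if s["location"] == "unknown"]

# Hypothetical autonomous first response: quarantine the source and log the action.
def remediate(finding):
    print(f"Quarantining {finding['source']} pending review ({finding['issue']})")

inventory = [{"name": "support_chats", "location": "unknown"}]
for finding in scan_for_findings(inventory):
    remediate(finding)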

The stakes are high. Mastering the responsiveness-responsibility trade-off means competitiveness and prosperity. Getting it wrong could mean the end of a brand.

The writer is the head of AI Innovation, EMEA, ServiceNow.
