Tuesday, June 10, 2025

How Enterprises Can Meet AI Literacy Requirements Before They’re Forced




The rapid adoption of Generative AI (GenAI) in businesses highlights its potential to make job roles more strategic and unlock productivity gains. However, fast-paced adoption can bring risk, especially for organisations that lack AI literacy and governance frameworks.

With new AI tools trending in the news, workers naturally want to try them out for themselves and explore ways to be more productive. But without prior training and education around organisational governance and best practices, innocent experimentation could result in significant risk to the business. For example, inputting sensitive client information into ChatGPT could be a breach of contract in some circumstances.

This kind of scenario highlights the urgent need for increased emphasis on AI literacy in both government frameworks and public-private collaborations. I worry that many enterprises that have been bullish about GenAI since ChatGPT’s launch may have neglected AI literacy. Why? High education costs, misaligned incentives and a lack of proven learning tools have made AI literacy initiatives a challenge. However, emerging regulation will likely push organisations to balance AI’s benefits with responsible – and informed – use.

AI Literacy is Central to Responsible Use

In global AI regulation, the EU AI Act is a pacesetter. As of February 2025, all businesses developing, integrating, or deploying AI systems in the EU have been obliged to take measures that ensure staff have a sufficient level of AI literacy. The Act defines AI literacy as the skills, knowledge and understanding required to facilitate the informed deployment of AI systems.

A focus on AI literacy makes sense given that misuse of the technology, even when due to a lack of knowledge, could trigger strict consequences under the Act. A well-meaning HR professional could, for example, employ AI tools to streamline decisions on hiring or promotions. While this use could improve the employee's efficiency, the EU AI Act may class it as high-risk and penalise the company if regulators find the appropriate controls aren't in place.

AI literacy will be a factor in any potential penalty for breaching the EU AI Act's rules, affecting enterprises based in the EU and global organisations with EU-based staff. Organisations that don't fall into either category could still be forced to think about AI literacy due to local regulatory requirements. Take the US as an example. While it looks unlikely that the US federal government will introduce a comprehensive regulatory framework for AI anytime soon, many state legislatures are doing so, and some are considering how to prioritise AI literacy.

As AI advances, we can expect more regulation. Investing to develop a strong base of AI literacy is therefore a smart play for global enterprises. This is more than a compliance exercise. AI literacy is an essential foundation of responsible AI practice. Selecting well-balanced, nuanced initiatives is critical to making AI literacy efforts a success.

No Silver Bullets

Driving up levels of AI literacy is, of course, a challenge for society at large. The need to rapidly improve AI literacy across the workforce is so great that we must begin to consider how to integrate AI education into schools and other training and education platforms. Doing so will ensure that future generations are well-prepared for AI's transformation of society in the long term.

That being said, organisations have a vested interest in building up AI literacy and can be effective in doing so. Pursuing AI literacy without first addressing data literacy, however, is jumping the gun. AI literacy starts with data literacy. Here's why: AI is only as good as the data used for its training and its inputs. Without understanding the fundamentals of working with data, workers are unlikely to maximise AI's potential to achieve transformative results.

To improve foundational data skills broadly across workforces, employers need initiatives that address the varying needs of their workforce and adapt training to technical capabilities. Hands-on training opportunities and on-demand resources for continuous learning help to deliver engaging education on the fundamentals of data science and working with data. I’ve seen first-hand how gamifying education with data challenges and datathons is an excellent method to teach data analytics through experience.

To scale data literacy across a workforce, leaders need to think beyond technical familiarity and appreciate the value of soft skills in analytical work. Creativity allows employees to identify more innovative ways to use data. Critical thinking is essential in evaluating analytics outputs. Collaboration skills enable team members to work with data with empathy. In the era of AI, technical skills aren't a prerequisite to work with data. That's an important mindset shift enterprises need to enact.

To put data literacy into practice, and scale up its uses, enterprises need to equip employees with the tools to prep and clean data. This is particularly important as employees increasingly work with AI systems: outputs can only be reliable and accurate if high-quality data is fed in. Tailored training linked to key practical AI use cases that utilise internal data can also help. Data prep tools are a critical part of the emerging AI operations discipline in enterprises, and they will be an influencing factor in the overall success of AI rollouts.
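To make the data prep step concrete, here is a minimal sketch of the kind of cleaning that typically precedes feeding data into an AI system. It assumes a pandas workflow; the dataset and column names are hypothetical, not drawn from any specific enterprise.

```python
import pandas as pd

# Hypothetical raw dataset showing quality issues that routinely
# degrade AI inputs: duplicate rows, missing values, inconsistent text.
raw = pd.DataFrame({
    "customer_id": [101, 101, 102, 103, None],
    "region": ["EMEA", "EMEA", " emea ", "APAC", "APAC"],
    "spend": [1200.0, 1200.0, None, 300.0, 450.0],
})

def prep_for_ai(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleaning before data reaches an AI system."""
    df = df.drop_duplicates()                # remove exact duplicate rows
    df = df.dropna(subset=["customer_id"])   # require a valid record key
    df["region"] = df["region"].str.strip().str.upper()     # normalise text
    df["spend"] = df["spend"].fillna(df["spend"].median())  # impute numerics
    return df.reset_index(drop=True)

clean = prep_for_ai(raw)
print(clean)
```

Each line addresses one failure mode; in practice this logic would live in a governed pipeline rather than ad hoc scripts, which is exactly where the AI operations discipline comes in.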

Building on Data Knowledge to Establish AI Literacy

With a base layer of data literacy and the supporting data stack, enterprises can then crack on with nurturing AI literacy within the workforce. The single most important lesson to get across in these efforts is likely to be the simplest: consult a CIO or equivalent before downloading that killer new AI app. Such downloads are the most likely driver of AI risk, but an organisation-wide AI governance programme that provides clear guidance on approved AI uses and applications can help mitigate this. Governance programmes can also establish an intake process to evaluate and approve AI applications, plus offer employees clear communication channels to seek answers about what’s appropriate and what’s not.
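The intake-and-approval process described above can be sketched, in a deliberately simplified form, as an allowlist check. Every tool name, status, and message here is hypothetical; a real governance programme would back this with a reviewed registry and proper communication channels.

```python
# Hypothetical registry an AI governance programme might maintain:
# tools that have passed the intake process, with conditions of use.
APPROVED = {"CopilotEnterprise": "approved for code assistance, no client data"}
UNDER_REVIEW = {"ShinyNewGenAI"}

def check_tool(name: str) -> str:
    """Tell an employee whether a tool is cleared for use."""
    if name in APPROVED:
        return f"{name}: {APPROVED[name]}"
    if name in UNDER_REVIEW:
        return f"{name}: under review - do not use with company data yet"
    return f"{name}: not reviewed - submit an intake request to the CIO team"

print(check_tool("ShinyNewGenAI"))
```

The point of the sketch is the default: anything not explicitly approved routes the employee to the intake process rather than leaving them to guess.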

There’s also an opportunity to enhance programmes by acknowledging employee interest in the latest ‘hot’ GenAI tools. Instead of resisting and using strict blockers, enterprises can embrace the interest by offering experiential learning opportunities that improve literacy through actual engagement with the tools. This kind of activity sensitises employees to the risks they need to understand and the steps they need to take within their organisations to make sure they’re not inadvertently aggravating those risks.

The Makings of an AI Success Story

With clear governance frameworks and a base knowledge of data’s foundations, employees are much more likely to use AI responsibly. As a result, risky behaviour that could fall foul of regulation is much less likely.

Importantly, regulation is not the only driver for prioritising AI literacy. Working with data has to be democratised beyond technical workers for AI to be a success and deliver ROI. Establishing AI literacy is crucial to bringing more employees on board and equipping them to work with the technology. With the nuances stressed in this article, AI literacy offers a superior alternative to AI rollouts based on strict oversight and blanket rules. Instead, organisations can capitalise on employee interest in ‘hot’ AI tools with education and training that sticks. Supported by the required infrastructure to get the most from data for AI outputs, these organisations will be well on their way to accelerated AI success.

Tommy Ross, Head of Global Public Policy, Alteryx

 



