It’s still early days for AI, but the speed at which these technologies are being adopted is, frankly, astounding. AI is rapidly becoming part of the "business furniture." Used responsibly, like any other tool, it can be a massive force for good and drive real, positive change.
However, the key word here is responsibility. Big tech firms are racing to push these features out the door, and their approach can often fairly be described as irresponsible. That means the burden falls on us, as advisors and users, to hold them to account and to ensure we aren't contributing to poor behaviour.
At Aperic, we’ve developed a set of guidelines for AI use, built around four pillars. We apply them whenever we evaluate a technology product featuring AI functions (which, let’s face it, is most things these days).
The Aperic AI Evaluation Framework
To keep things simple and ethical, we ask four fundamental questions:
1. Is it Safe?
Essentially, we want to ensure AI tools do not cause harm. We look to the EU AI Act for guidance here. A "safe" tool should:
- Not create or facilitate illegal content.
- Not manipulate people or target vulnerable groups (especially children).
- Not contribute to social scoring or the invasive classification of people.
A note on surveillance: Facial recognition is a major red line. For example, the Metropolitan Police's use of this tech remains a high-profile point of scrutiny regarding privacy and bias. Learn more about the Met's approach here.
2. Does it Positively Impact People?
AI should empower humans, not just replace them. This requires investment in staff training to ensure tools are used well and to prevent misuse. We also ask:
- Attribution: Does the model reward the people who created the training data?
- Intellectual Property: Is personal IP being "borrowed" to train the model without consent or compensation?
- Implementation: Is the tooling there to improve personal efficiency?
3. Is it Environmentally Responsible?
The environmental impact of AI needs to be understood so businesses can adopt it responsibly. If you run a business, AI's footprint should be a core part of your overall environmental and sustainability policies.
4. Is it Transparent?
Training data should be disclosed explicitly, so customers know exactly what data is being used and where it came from.
- Opt-outs: Users should have a clear, easy way to opt out of having their data used for training.
- The "Grey Area": This is particularly dangerous for Generalised Pre-trained Models (GPTs). While training on your own private data is fine, benefiting from a base model that used uncompensated public data remains an ethical grey area.
The Scorecard: Putting it into Practice
How do two of the most common AI products stack up against these rules?
Case Study 1: Grok (xAI / X)
Grok has had a rocky road, and according to our framework it’s a tough sell.
- Safe? (No): MPs and education leaders have expressed deep alarm over Grok's ability to generate harmful and inappropriate images.
- Human Impact? (Negative): Serious concerns have been raised regarding the model's output, including instances of antisemitism and praising historical dictators.
- Environmental? (No): Video evidence has emerged showing significant pollution plumes from xAI data centres.
- Transparent? (No): Grok-2 moved to an "auto-opt-in" for all X users' data, ignoring GDPR consent rules.
Final Score: 0/4
Case Study 2: ChatGPT (OpenAI)
ChatGPT is the industry leader, and while it performs better, there is still plenty of room for improvement.
- Safe? (Half-Point): On balance, it’s about as safe as any technology you share data with. The NCSC views it as a manageable risk for general use.
- Human Impact? (Half-Point): It is undeniably useful and is helping people across a range of job roles, but as an opt-in tool it earns only a half-point for utility.
- Environmental? (Half-Point): There is a massive impact on energy and water. However, frameworks for reporting these impacts are being developed, and since OpenAI's models run on Azure they benefit from Microsoft's carbon tracking, which shows at least some effort toward accountability.
- Transparent? (Zero): While OpenAI is open about how they develop models, we can't award points for "fair use" training that consumes the world's collective knowledge for a paid service without compensating creators.
Final Score: 1.5/4
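For completeness, here's how the two scorecards above tally up under the same illustrative scheme (again, the variable names are ours, not official ratings):

```python
# Scores from the two case studies above: 1.0 = pass, 0.5 = partial, 0.0 = fail.
grok = {"safe": 0.0, "human_impact": 0.0, "environmental": 0.0, "transparent": 0.0}
chatgpt = {"safe": 0.5, "human_impact": 0.5, "environmental": 0.5, "transparent": 0.0}

for name, scores in [("Grok", grok), ("ChatGPT", chatgpt)]:
    print(f"{name}: {sum(scores.values()):g}/4")
# Grok: 0/4
# ChatGPT: 1.5/4
```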
Moving Forward
A lot of these concerns are now being codified into law via the EU AI Act. If you are using or considering AI in your business, we strongly encourage you to read up on this framework.
Read the full EU AI Act Summary here
And on a more philosophical note, I’ll leave you with some wise and relevant words from Charlie Chaplin to mull over: