It’s still early days for AI, but the speed at which these technologies are being adopted is, frankly, astounding. AI is rapidly becoming part of the "business furniture." Used responsibly, like any other tool, it can be a massive force for good and drive real, positive change.

However, the key word here is responsibility. While big tech firms race to push these features out the door, their approach is often anything but responsible. That means the burden falls on us, as advisors and users, to hold them to account and to ensure we aren't contributing to poor behaviour.

At Aperic, we’ve developed a set of guidelines for AI use, built around four pillars. We apply them whenever we evaluate a technology product featuring AI functions (which, let’s face it, is most things these days).

The Aperic AI Evaluation Framework

To keep things simple and ethical, we ask four fundamental questions:

1. Is it Safe?

Essentially, we want to ensure AI tools do not cause harm. We look to the EU AI Act for guidance here. A "safe" tool should:

  • Not create or facilitate illegal content.
  • Not manipulate people or target vulnerable groups (especially children).
  • Not contribute to social scoring or the invasive classification of people.

A note on surveillance: Facial recognition is a major red line. For example, the Metropolitan Police's use of this tech remains a high-profile point of scrutiny regarding privacy and bias. Learn more about the Met's approach here.

2. Does it Positively Impact People?

AI should empower humans, not just replace them. This requires investment in staff training to ensure tools are used well and to prevent misuse. We also ask:

  • Attribution: Does the model reward the people who created the training data?
  • Intellectual Property: Is personal IP being "borrowed" to train the model without consent or compensation?
  • Implementation: Is the tooling there to improve personal efficiency?

3. Is it Environmentally Responsible?

The environmental impact of AI needs to be understood so that businesses can adopt it responsibly. Assessing that impact should be a core part of your overall environmental and sustainability policies.

4. Is it Transparent?

Training data should be disclosed explicitly, so customers know exactly what data is being used and where it came from.

  • Opt-outs: Users should have a clear, easy way to opt out of having their data used for training (one practical mechanism is sketched below).
  • The "Grey Area": This is particularly dangerous for Generalised Pre-trained Models (GPTs). While training on your own private data is fine, benefiting from a base model that used uncompensated public data remains an ethical grey area.

The Scorecard: Putting it into Practice

How do two of the most prominent AI products stack up against these rules?

Case Study 1: Grok (xAI / X)

Grok has had a rocky road, and according to our framework it’s a tough sell.

Final Score: 0/4

Case Study 2: ChatGPT (OpenAI)

ChatGPT is the industry leader, and while it performs better, there is still plenty of room for improvement.

Final Score: 2/4
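If you want to run the same exercise on your own tooling, the scorecard is mechanically simple: four yes/no answers, summed. Here is a minimal sketch in Python; the class name and the per-pillar answers are illustrative placeholders, since we publish only each product's total above.

```python
from dataclasses import dataclass

@dataclass
class AIScorecard:
    """The four Aperic pillars, each answered yes (True) or no (False)."""
    safe: bool
    positive_for_people: bool
    environmentally_responsible: bool
    transparent: bool

    def final_score(self) -> str:
        passed = sum([self.safe, self.positive_for_people,
                      self.environmentally_responsible, self.transparent])
        return f"{passed}/4"

# Illustrative answers only: we publish totals above, not per-pillar results.
candidate = AIScorecard(safe=True, positive_for_people=False,
                        environmentally_responsible=False, transparent=True)
print(candidate.final_score())  # prints "2/4"
```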

Moving Forward

A lot of these concerns are now being codified into law via the EU AI Act. If you are using or considering AI in your business, we highly encourage you to read up on the Act.

Read the full EU AI Act Summary here

And on a more philosophical note, I’ll leave you with some wise and relevant words from Charlie Chaplin to mull over:

We have developed speed, but we have shut ourselves in. Machinery that gives abundance has left us in want. Our knowledge has made us cynical. Our cleverness, hard and unkind. We think too much and feel too little. More than machinery we need humanity. More than cleverness we need kindness and gentleness. Without these qualities, life will be violent and all will be lost.