AI Policy

This page was last updated on 19th June 2025

General principles

  • We will not send personal or confidential data into any AI model unless the provider can demonstrate that private data is segregated or can be deleted.
  • Except in some very limited cases (eg. some code generation), we will not use services that create content from scratch where the service’s underlying content database was scraped without authorisation.
  • We will avoid services that employ Machine Learning (ML) where a non-ML alternative exists that gives similar results while using fewer resources.
  • We will ensure all use of ML tools is done in a considered manner, with a review of each use case.

The term “AI”

  • Using the generic term “AI” can be misleading because it lumps together technologies which might be very different in terms of how they work and the effect they have on the world.
  • Using “AI” generically also conflates specific AI applications (eg. “Large Language Models”) with Artificial General Intelligence.
  • To be precise, we aim to use the name of the specific technology (eg. “Large Language Models”) over the umbrella term “AI”.

What we aim to take into account for each AI technology

  • The intellectual property and copyright status of the AI’s inputs.
  • The privacy and wishes of the people who have been involved in generating the inputs for the AI.
  • The systemic biases that are likely to be present in the input.
  • The environmental cost of training the AI technology, storing the data and applying the model. For example, one estimate puts the cost of serving each ChatGPT query at 36 cents, far higher than the cost of most non-LLM alternatives such as a Google search.
  • The limitations (eg. in terms of accuracy) of the outputs.
  • The impact of a specific technology on the industry (eg. a client’s vertical), and related industries.

Machine Learning in ad optimisation algorithms (“ad tech AI”)

  • Ad platforms such as Google, Meta, LinkedIn, Microsoft, TikTok, Pinterest, Spotify etc. are employing a range of ML algorithms in order to automatically optimise ad campaigns to deliver the campaign objectives efficiently.
  • Their impact on the industry has generally been to free up agencies and in-house team members to work more on strategy rather than real-time bidding. Some jobs have likely been lost as a result, but given current costs per click it is not financially viable to run most campaigns manually.
  • Where these algorithms work well, they can deliver a lot of additional ROI for our clients without any additional financial cost.
  • We aim to use these on a case by case basis, being aware that for some experimental campaign types or objectives these still may not deliver the value they claim to.
  • Because these algorithms try to tie website activity (e.g. purchases) to platform activity (e.g. what Instagram ad someone clicked on), they will inherently require substantial tracking of user behaviour. We aim to advise each client to the best of our ability about the setup that would best balance user privacy and their marketing objectives.

Media Generation (“generative AI”)

  • There are multiple services that take your own image/video/audio inputs and perform useful functions on them, such as cleanup, noise reduction, improving salience, scaling the outputs etc. Because these are based on an organisation’s own IP, we believe these are the easiest to use ethically and will advocate for their use where appropriate.
  • These tools will still have quality considerations (eg. the uncanny valley) so we advocate for a human reviewer to be part of the process.
  • On the other hand, services that create images/video/audio from scratch are based on large datasets. If they only use data where the creator has meaningfully opted in, there is still some value so we will cautiously recommend their use in specific cases.
  • Based on our understanding of the major media generation services, they currently derive their data from unauthorised scraping of IP and as such we generally recommend against their use, especially since alternatives are available. We will advise clients against them in that capacity and will not use them for our own materials.

Large Language Models (LLMs) – chatbots, ChatGPT etc

  • As with media generation, where an LLM’s sources have meaningfully opted in we will recommend its use where appropriate.
  • Based on our understanding of the major LLMs, these also currently derive their data from unauthorised scraping, so we are very cautious about their use, depending on the specific case. We will advise clients about such tools in that capacity, and will not use them for our own materials if they have not been derived from opt-in data.
  • With the advent of various robots.txt opt-outs on websites, we believe that once awareness and uptake increase, these LLMs may be fine to use in the general case (assuming they respect the robots.txt directives).
  • There are many niche LLMs designed for a specific topic or use case which do not have these limitations.
  • We are also aware of the wide range in quality of LLM output depending on the topic and genre/format. Misinformation, confabulation, misattribution and removal of context are substantial issues for much of LLM output.
  • Because of this, we always recommend adding a human review step to the outputs. Accordingly, we recommend against using LLM output in automated content creation systems, or in chatbots for areas where miscommunication creates a high risk.
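For reference, the robots.txt opt-outs mentioned above work by naming an AI vendor’s crawler in a site’s robots.txt file and disallowing it. A minimal sketch is below; the user-agent tokens shown (GPTBot for OpenAI, Google-Extended for Google’s AI training) are the ones published by those vendors at the time of writing, and site owners should check the vendors’ current documentation before relying on them:

```
# Opt out of OpenAI's training crawler across the whole site
User-agent: GPTBot
Disallow: /

# Opt out of Google's AI-training crawler (does not affect Google Search indexing)
User-agent: Google-Extended
Disallow: /

# All other crawlers remain unaffected
User-agent: *
Allow: /
```

Note that robots.txt is a voluntary convention rather than an enforcement mechanism, which is why our recommendation depends on whether a given LLM provider actually respects these directives.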