Are you a researcher or leader tired of AI being forced down your throat?
Tired of the carte blanche ‘USE IT’ without any critique?
You are not alone. I feel exactly the same.
We aren’t anti-AI. But we aren’t totally pro-AI either. We can see the benefits, but no one is talking about how to use it ethically! I hear you - this article aims to fill that gap.
I hope to help the AI-agnostic who want a balanced, practical view of its benefits and pitfalls, and of how and when to use it.
Let’s dive in.
AI is Not an Automatic Panacea - We Need a Balanced Approach
Take one scroll through social or mainstream media, and AI is heralded as the panacea for all things. ‘There’s an AI for that’ seems to have replaced ‘there’s an app for that’. But IS AI the answer, in our incredibly messy, very human world? And what is its role in research communication, leadership and impact?
AI is part of our future and there is a place for it. But we need that place to be critiqued, weighed for ethics and to serve humans, not just churn out ‘workslop’.

The Pitfalls - Ethics, Environment, and Economics
From a research and public sector perspective, there are so many complexities that need to be navigated.
- Environmental impact - In 2024, data centres accounted for about 1.5% of the world’s electricity consumption. But demand is projected to double to 945 TWh by 2030, potentially exceeding Japan’s current total electricity consumption. Cooling data centres also poses a water availability risk - by 2027, global AI training and use are projected to account for 4.2–6.6 billion cubic metres of water withdrawal. There’s also a drain on mineral and critical resource mining to build chips, and a risk to marine ecosystems from underwater data centres. (LSE: https://www.lse.ac.uk/granthaminstitute/explainers/what-direct-risks-does-ai-pose-to-the-climate-and-environment/)
- Ethical impact - when dealing with research and personal data, what can we upload to AI and what can’t we? Data protection is paramount - we must be hyper-aware of what data is saved to train AI models and what that means for the people behind the data.
- Intellectual property - AI scrapes every available source on the internet to generate its answers, with little regard for IP. How can you ensure that your AI-generated work ISN’T infringing the copyright of someone else’s work? This also raises the issue of…
- Citation - how do we credit, cite and be transparent about AI use in the first place?
- Bias - AI reproduces content and imagery that replicates human bias. This means that generated content replicates the white, Eurocentric, male, heterosexual, privileged view of the world, and its stereotypes. Surely we need to be more critical than that? Our responsibility is to challenge inequality, not reproduce it.
- AI hallucinations - AI can plain make stuff up! Spurious citations, incorrect facts and figures, claims from uploaded documents that aren’t true. Without a human checking the work, we can’t guarantee the accuracy of AI outputs.
- The impact on human cognition - AI has the potential to make us lazy and to erode our own thinking abilities. A dumbed-down human race - is that what we want to build?

But we can’t deny the potential benefits of AI either:
- Productivity - using AI for repetitive, admin, and busy work tasks frees us up to do more deep, thinking work. Improving efficiency could have exponential potential to boost our productivity, research outcomes and business bottom line.
- Accessibility - AI opens up skills, knowledge and outputs previously too expensive, niche or reserved for experts. For communities, organisations and businesses with low budgets and poor access to these skills, AI could democratise expertise.
- Speed - linked to productivity, AI just does stuff fast! This doesn’t mean it does it right, but doing things quickly is an AI superpower.
AI is a potential force for good that democratises access to skills and creativity that would otherwise be inaccessible to some. For example, working with agencies like ours at Nifty is gatekept by budget, time, and even knowing we exist. There are projects with no budget, short-term design needs for one-off use cases, or limited access to people like us, where AI could help. I still argue, however, that AI is only as good as the human inputting the prompt.
Authentic intelligence - the ‘third way’ for AI
So what’s the answer - ditch AI altogether and become an analogue hermit, or become an AI evangelist?
I propose a considered alternative: authentic intelligence. A human, ethical approach to using AI. I believe AI has a place in our research and leadership landscape. But a considered, deliberate place, not a ‘use AI for anything’ approach. Authentic intelligence prioritises human thinking. It places AI firmly as a tool, used discerningly for execution of repetitive tasks, tasks that refine or communicate YOUR thinking, NOT thinking for you in itself.
Setting your AI boundaries
It also means setting your own human boundaries on AI. When and what will you specifically use it for? What agents or models will you use based on their data use, IP statements and protections? How will you cite it?
For example, here are some of my ‘rules’:
- I never start in AI. I always do my own thinking, research or writing first.
- I always turn on temporary chat, so my interactions can’t be stored to train the model.
- I only use it to check or refine my own words - e.g. to ensure grammar, accessibility, spelling or succinctness.
- I never upload raw data or research not yet published or public.
- I do not use AI visuals - as an owner of a design agency and a champion of human, hand drawn illustrations made together with other humans, this feels a complete value clash for me.
An ethical matrix for AI use
When I train academics, communities and leaders in visual communication, there are AI visualisation tools I recommend. But here’s my ethical matrix for doing so.
- I always teach the WHY, WHAT and ANALOGUE HOW before I discuss AI tools. This means that the human inputting the prompts has a clear theoretical understanding of what they are making through AI and why.
- I always get them to write their story first. Good visual communication only comes from a clear narrative. That starts with a human. We teach a repeatable framework for storytelling for visual comms that they have to do themselves first!
- They have to sketch out their own vision for their visual communication before touching any digital tool - not just AI.
- I always make learners aware of the pitfalls outlined above before they use any AI tool.
- I set parameters on use cases: AI can be useful for visual communication that will have a short shelf life (i.e. only used a few times, like a one-off presentation); that does not require co-designing with communities or stakeholders, because you are representing your own personal work; or where all other visual experts are unavailable to you due to time, budget or access.
- Whatever AI tool you use - be transparent that it has been used.

To build your ethical matrix, consider:
- What human understanding in your subject area is the foundation needed to use AI purposefully?
- What are appropriate and ethical use cases for AI tools? Be specific.
- When is it appropriate to use and when isn’t it?
- How will you cite AI as a tool in your thinking for the final product?
Take a few minutes to write your own personal AI statement.
TL;DR - AI as tool, not thinker
In sum: AI - like any tool - has to be used appropriately and critically. There are environmental, ethical, legal, and social concerns around AI that we need to consider. ‘Authentic intelligence’ centres human thinking and decision making at the heart of AI use. AI is positioned as a tool to refine and communicate human thinking, not to generate or replace it. When used authentically, AI could be a tool to open access to skills and creativity that was previously reserved for a privileged few.
AI is here to stay - but humans must stay critical about how we treat this house guest.
- Laura, Founder and Director