With AI quickly becoming more integrated into our daily lives, from chatbots and suggested messages to songs and images, it’s okay to feel uncertain about how to spot it, use it safely, and understand the impact it may have on things like future careers. We will explore all of this in a series of blogs – catch up on the first one here!
As No5’s Communications Officer, I know I feel this way: AI has become so big so quickly, and I don’t think we are talking about, or being educated on, how it is used and shared nearly enough. Over the last few months, I have attended various training sessions and webinars to build my own knowledge, and I wanted to share my learnings, thoughts and experiences as a young person with other young people. I hope that by sharing what I’ve learned, I can help others better understand AI and the potential impacts it may have on our lives, and feel safe and comfortable when using it.
Using AI Safely
It’s important to note that there is both ethical and unethical AI software out there. The unethical ones do not have our best interests at heart and may fail to recognise, or may even actively promote, harmful or unlawful behaviour.
Chatbots are probably one of the top ways we interact with AI – from ‘My AI’ in Snapchat and Microsoft’s Copilot to the many customer service bots on websites (which have replaced the ability to speak directly to an actual human) – they are everywhere! They can be a great tool for finding a quick answer to a question or getting help locating a lost parcel, but they can sometimes feel frustrating to use, as they don’t always understand or offer what we are looking for.
Different versions of chatbots exist too, such as companion bots, whose core purpose is to act as a friend to the user. As explored above, AI should be used as a tool; it shouldn’t replace humans or act as our friend. As a technology that is essentially an algorithm, it does not have the same ability to understand, feel or process emotions the way we do, and so the responses it provides could be harmful, especially at times when our mental health is low.
In September 2024, VoiceBox published an article looking at AI companion bots and the risks they may pose to young users, particularly when these platforms send often-unsolicited explicit messages and images to underage users because of a lack of robust age verification. It goes to show that even now, many AI companion bots do not have the safety features or research behind them to ensure you can be safe when using them. This is something to be aware of, or to avoid altogether.
Because AI continues to learn from the information we provide it, we should be mindful of what we share.
It might be helpful to think:
- “Is it something I would want shared publicly?”
- “Is it something I feel comfortable being shared into the algorithm?”
- “Is it something that AI needs to know to help me, and could I make it more general?” – e.g. “can you give me a book recommendation for a young person aged 13-15 in the UK” rather than “can you give me a book recommendation for a boy aged 14 living in Central Reading”.
AI Generated Content
Whilst AI generated content may be funny to watch or listen to and entertaining to read, it can also be harmful and spread disinformation (incorrect information that is intentionally created) like wildfire.
Content created by AI is made by taking content (writing, photos, voice recordings, videos, art, etc.) that already exists on the internet and pulling it together to create something new. A common way you may see this being used harmfully, and to spread disinformation, is via deepfakes – often targeting well-known public figures. With this type of content on the rise, it is increasingly important to check information before taking it as ‘fact’.
Some ways you can check whether something is true:
- Check the verified social media accounts / websites of the public figure
- Use a fact-checking website like Full Fact to see if they can verify the information
But how does AI ‘create’ content?
Content you feed AI through filters or software like ChatGPT will often go into its algorithm to aid its ‘learning’, and may be used again without your knowledge. Many social media platforms, such as those run by Meta, have now updated their terms and conditions so that they can use the posts you share publicly to train and inform their AI. This is why it is a good idea to always keep in mind what you are sharing, and whether you would want it to be spread or used without your knowledge.
One key thing to remember is that AI is continuing to grow and develop. We are still in the early stages of modern AI, so the way we use and engage with it will also grow and develop. Ultimately, you need to feel safe and comfortable in the way you use and engage with AI – even if this is different from those around you.
