When people say “AI,” they usually mean a chatbot. These chatbots have been transformative, empowering people to work better, understand more, and even find new insights or creative paths. However, it is a concern that interactions with AI chatbots have led some people into conspiracy-theory rabbit holes, emotional or mental breakdowns, self-harm, and sometimes worse. These cases are tragic and should not happen, and they have prompted the companies behind chatbots to reassess the guardrails on their products.
For us consumers, it’s important to know what features to assess when picking a chatbot. To gain a better understanding, I talked to Josh Aquino, Head of Communications for Microsoft in the Philippines, whose work involves building AI literacy and fluency and capacity building for different communities and stakeholders. He says, “I’m a big fan of this technology and I believe in what it can do as a force for good across really every domain, culture, and society.”
While he represents Microsoft, Josh is familiar with the other chatbots, and this piece doesn’t endorse any specific one. Choosing your chatbot will be a matter of taste and possibly ethics, so there’s no “right answer.” I personally jump around a few different ones depending on what I want to do.
However, it is important to note that not all chatbots are created equal. While we do want to guard against harmful biases, there are also good biases that are part of model design and fine-tuning, such as biasing the chatbot toward trust and security.
Green Flags
Josh says, “Key green flags for me include clarity, consistency, and care in their responses. Specifically, it’s reassuring when an AI resists harmful or misleading requests, remains transparent, shows respect, and provides accurate answers that prioritize user well-being and safety.” That is to say, the chatbot will give you answers that are as accurate as possible while avoiding problematic, sketchy, or biased takes even if you ask for them. It will stay within a safe range of conversation and will say if and when it cannot engage with certain ideas or topics. Not only does it have limits, it will spell them out so you know where they are.
The next green flag for Josh is data privacy. “I want my data used solely to enhance my user experience, with minimal data collection. I also appreciate contextual intelligence, where the AI remembers relevant details and adapts to my style or goals.”
While data privacy is a bigger conversation, it’s important to consider here what data you are comfortable with the chatbot collecting (read the terms of service) and how much you want it to know about you. It can be helpful for the chatbot to learn how you want it to “talk” to you, such as in terms of tone, and even your background so it can tie in past conversations. This is another point of preference.
The last big green flag Josh brings up is one I also focus on a lot: “human agency should be at the center, allowing users to understand how AI works, adjust settings, and opt out when needed. For example, when a chatbot, like Microsoft Copilot, demonstrates these values, it’s a strong indication of a reliable and ethical technology partner.”
Other green flags we talked about were the quality of answers and whether the chatbot cites the sources for its answers, which allows you to easily check for hallucinations.
Red Flags
Parallel to our green flag on data collection, Josh has more guidance on data privacy: “what does the AI application collect from your usage, and how will it be used? …some use this data to train their underlying models. While it might be fine if you’re only asking for weather updates, it’s different if you’re uploading sensitive information like intellectual property or health data.”
Another is about the information it gives you. Is it clear about its data sources? Does it respect others’ privacy and intellectual property? “When it’s not transparent about its limitations, that is a red flag.” You will probably want a chatbot that cites its sources when it gives you answers. Also check whether it exhibits any biases that are harmful or discriminatory.
Lastly, does it ever give any output that might put you or others at risk? Josh says, “For instance, if an AI app tells you how to make a weapon, you should steer clear of it.” A related concern is whether the AI might entertain unhealthy ideation or encourage other harmful thoughts.
AI and chatbots have so much potential to offer us. We hope that through these green and red flags, you can begin to explore and unlock that potential with confidence that you’re doing it safely.