By Yanro Judd C. Ferrer
In Part 1, we looked at how livestream chats are more than scrolling text—they’re co-created spaces where people and technology work together to produce culture, emotion, and connection. But what are the policy implications?
Situations where technologies fail give us a first glimpse of how design decisions shape social relationships online: auto-moderators blocking harmless messages because of misconfigured filters, emojis or GIFs failing to load on older devices, or unstable internet connections dropping people out of the stream. Each of these can exclude people not just from the joke or the conversation, but from the community altogether. What looks like a technical glitch is actually a social one. It changes who gets to speak, what gets noticed, and how belonging is defined.
The problems can get even bigger. Chatters can flood the chatbox with harassment, bypassing filters faster than moderators can respond. The same tools that usually keep things playful—emojis, alerts, chatbots—can just as easily magnify chaos. The speed of moderation depends on automated systems coupled with human decisions, and human discernment moves at a much slower pace.
These “minor” incidents are good indicators of how technologies are built—and how people re-imagine them for their own social needs. They reveal where design choices enable participation and where they create barriers. For policymakers, this means that regulating digital spaces cannot stop at policing viral content or personalities. Instead, it requires a framework that treats design as governance—recognizing that the smallest choices in code and interface shape who gets to belong, who gets silenced, and how communities sustain themselves online.
Small design choices shape how users interact—and, ultimately, who gets heard. Sometimes it’s minor: someone on an older phone can’t see the latest emojis on Messenger, and what was meant as a joke or cheer becomes a blank box. You miss the joke, and too bad for you. But what if these interactions mean something more? What if filters or broken tools silence not just moments of humor, but whole identities? On YouTube Live, for example, auto-moderators sometimes block words like “queer” even when used positively—erasing the very communities that need visibility. These aren’t just technical glitches. They’re design choices that reshape interaction.
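To see why context matters, consider a minimal sketch, in Python, of the kind of context-blind keyword filter that auto-moderators often approximate. The blocklist and the sample messages are hypothetical, invented purely for illustration; they are not any platform’s actual rules.

```python
# Hypothetical, simplified sketch of a context-blind keyword filter.
# The blocklist and sample messages are invented for illustration only.

BLOCKED_TERMS = {"queer", "scam"}  # assumed blocklist, not a real platform's configuration

def naive_filter(message: str) -> bool:
    """Return True if the message would be hidden.

    The check is purely lexical: it asks whether a listed word appears,
    never how that word is being used.
    """
    words = {word.strip(".,!?").lower() for word in message.split()}
    return bool(words & BLOCKED_TERMS)

messages = [
    "So proud of this queer community, this stream feels like home!",
    "Buy cheap followers here, definitely not a scam!",
    "That clutch play deserves every emote in the chat!",
]

for msg in messages:
    status = "hidden" if naive_filter(msg) else "visible"
    print(f"[{status}] {msg}")
```

Run as written, the filter hides the affirming message right alongside the spam, because nothing in the code distinguishes celebration from abuse. That, in miniature, is the design decision doing the governing.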
The same holds true in the Philippine context. Livestream selling on Facebook has become part of everyday commerce, but the chat system is not neutral—it decides whose comments get highlighted and whose voices remain buried. Political events streamed online carry the same risks: auto-moderation may filter dissenting voices, or bandwidth issues may exclude poorer communities from participating at all. In these cases, access and participation are not just about who has internet, but about how the tools themselves decide who gets seen and heard.
When we talk about livestreaming, the spotlight often falls on influencers and viral personalities. But the policy implications lie beyond them. The same technological designs that govern celebrity streams—auto-moderation, reaction buttons, chat visibility—also govern everyday interactions in smaller, community-driven spaces. If a platform’s design makes it harder for first-time users to find the chat box, or if filters silence certain words without context, then the result is exclusion. These aren’t quirks of the system. They are governance decisions embedded in the code.
This is why policymakers, educators, and communities need to move beyond chasing “bad actors” or scandals. Focusing only on influencers, viral controversies, or harmful content misses the deeper issue. The culture of the internet is being shaped not just by what people say, but by the technologies that decide how—and whether—they can say it at all. Paying attention to these hidden design choices is not only about improving platforms—it’s about ensuring that digital spaces remain inclusive, accessible, and democratic.