Operational security: 3 considerations to keep in mind

We've learned about security tools, and we've looked at some non-digital aspects of security. Now, let's take a look at operational security: the behaviors and habits we should consider adopting to further boost our security.
Consideration 1: Balance advocacy, visibility, and security
Some of the security measures we implement are visible to others: people can see, for instance, if you use Signal with your personal number or if you publish your PGP key. If you stand out because of your security measures, others might suspect that you have something to hide or are worth investigating or surveilling. Make sure that your security posture is reasonable but does not feel excessive. If, for example, everybody around you has the MySejahtera application on their phone and you very publicly refuse to install it, citing security risks, this could draw unwelcome attention to you. Installing the application might be the better option, but it is worth considering other ways to secure your device even with the application installed: it could be registered with a different number, or kept entirely on a secondary phone.
It's usually better to use popular tools than lesser-known ones, as popular tools let you blend into the crowd. If, for example, everybody in your community uses WhatsApp, but you insist on communicating only through trusted contacts on Signal and encourage only those trusted contacts, and nobody else, to install Signal, there is a risk that anybody with Signal on their phone will be seen as unusual or suspicious. (It's a much better idea to encourage everybody to install Signal, whether or not they do security-sensitive work and projects with you. That way, those who need the extra security will stand out less.) It's usually best to be assertive about your community's security rules while avoiding discussing them too much outside the community.
LGBTQ organizers in Malaysia often have to balance being visible with keeping a low profile, because the sensitive nature of LGBTQ work can make it risky and challenging. Sometimes, people who stand out are better protected because their adversaries worry about repercussions. Someone with a big following on social media who posts regularly about their locations and movements may be harder to target, because the police know they would get in trouble if they tried to arrest or silence a popular community leader. This usually applies to people with very public personas, such as community leaders, journalists, or religious leaders. Others are better served by being less public; some community-based organizations, for instance, find it safer to stay less visible. When deciding how visible to be, we should consider our risk assessment and our capacity to address threats.
Consideration 2: Collect data carefully
Healthcare institutions often need to analyze a lot of data in a confidential manner.
The less sensitive and personal data you hold about others, the better. Some communities, for example, only use pseudonyms and never learn people's official names. Similarly, they might refer to projects or locations by code words. If such a community's devices or documents were searched, police or security forces could not easily figure out which pseudonym matches which person. If you do not absolutely need a piece of personal data (or data that could be used to identify a specific person or small group of people), do not collect it. If you do need this data, delete it as soon as it's no longer needed. This reduces the chance of anybody's personal details leaking out, either accidentally (for example, if someone leaves an unlocked phone or open notifications screen somewhere) or through malicious action (for example, if someone breaks into your accounts or security services force you to hand over data).
At the same time, we often need data in order to better understand our community's needs, monitor trends, or make decisions. If we want to understand the state of mental health or homelessness within our community, for example, we will need to collect and analyze some data. When you work with data on vulnerable communities, doing so responsibly is key. In an ideal world, you would work with an external researcher who is experienced in sensitive data collection to make sure that the whole process preserves people's privacy. There are also some great resources we recommend, including the Responsible Data Handbook.

If you're working with vulnerable, at-risk, or sensitive groups, it's very important to effectively anonymize the data you collect. This means that nobody should be able to reverse engineer the data to figure out people's identities. If you write that you interviewed an anonymous witness who is a 15-year-old with a cat who enjoys football and lives in a particular village, people could easily figure out who this is, even if you didn't give their name. Anonymization can be even harder with bigger datasets. The UK government has created a fantastic, if quite detailed, guide to data anonymization. Oxfam has also built a great training pack for people who will be working with sensitive data. If you are planning to set up any research or other effort that touches on sensitive or vulnerable data, read those guides carefully and take your time to design a framework that effectively preserves people's anonymity.
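To make the idea a little more concrete, here is a minimal sketch in Python of the kind of transformation anonymization involves: dropping direct identifiers entirely and generalizing quasi-identifiers such as an exact age or a specific village. The field names and age bands here are hypothetical, and this is deliberately simplified; for real datasets, follow the guides above rather than treating this as a complete solution.

```python
# A simplified sketch, not a complete anonymization tool. Assumes interview
# records are plain Python dictionaries; all field names are hypothetical.

def anonymize(records):
    """Drop direct identifiers and generalize quasi-identifiers."""
    anonymized = []
    for record in records:
        anonymized.append({
            # Direct identifiers (name, phone number) are never copied over.
            # An exact age becomes a broad age band.
            "age_band": "under 18" if record["age"] < 18 else "18 or over",
            # A specific village becomes a wider district.
            "district": record["district"],
            # Keep only the fields the analysis actually needs.
            "summary": record["summary"],
        })
    return anonymized

# Example: the name, phone number, exact age, and village never appear
# in the output that gets stored or shared.
notes = [
    {"name": "A.", "phone": "+60 12-000 0000", "age": 15,
     "village": "Kampung X", "district": "District Y",
     "summary": "Interview notes go here."},
]
print(anonymize(notes))
```

A sketch like this only helps, of course, if the original identifying records are deleted as soon as they are no longer needed, or stored far more securely than the anonymized copy.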
Don't forget about data that you are legally required to keep. Destroying data you should have kept and archived, such as financial statements, breaks the law and could lead to a lot of scrutiny and harassment from police, courts, and security forces.
In rare cases, spreading data more widely within the community can keep you safe. Let's say that one community member is investigating government or police corruption. If they share their data or drafts with many others in the community, and the police know this, then the police cannot stop the investigation by arresting a single individual.
Consideration 3: Use multiple layers of security
Throughout this security guide, we usually recommend using more than one security mechanism. That way, if one of them fails, the others might still protect you. Examples include:
- We assume that your password will never be leaked. But even if it is somehow leaked or captured, two-factor authentication will protect you.
- We assume that you will recognize phishing emails and fake pages and not enter your credentials there. But even if you miss one, a physical security key (such as the Titan key), set up as your second method of authentication, will protect you.
- We assume that your devices are protected by a strong password or passcode. But even if somebody gained access to them, for example by taking an unlocked device from your hand or by forcing you to hand it over, sensitive data would still be somewhat protected if it auto-deletes through disappearing messages or if your contacts are saved under pseudonyms.
All of this is also a key reason we tell you to collect as little data as possible. We assume and hope that none of it will ever leave your devices or cloud accounts, but if a security mechanism fails or you are searched, having collected less data limits the damage any leak can cause.