In the discussion we had at ICCM on crisis mapping security, we talked about the scenarios in which security issues can arise for a crisis mapping project.
In my view there are four:
- The case of a repressive regime, where the people managing the project are either activists or related to activists
- The case of a repressive regime where the people managing the project are not activists, or are so-called “improvised activists”
- The case of a humanitarian emergency where there are security concerns related to either the presence of militias or a repressive regime
- The case of a humanitarian emergency in general, where security is very much linked to the delivery of humanitarian aid and the do no harm principle (which, indeed, should inform all the other cases as well).
Case 1 – repressive regime and activism
This was the example I talked about in my previous blog post. In this case the security issues that arise are not primarily about protecting the people managing the project: they normally know the risks and are willing to take them. As long as we make sure they are informed about all the possibilities, it is ultimately their call to decide what to do and how. There is, however, a very important issue to be faced here: when the activists involve other people in the project, how much knowledge about the possible risks is shared with those others? The example that can be made here is Tahrir Square: the Egyptians who organized the first demonstrations were activists, many of them with a history of arrests, torture and so on. But after a while a lot of “common citizens” joined the demonstrations: what was their knowledge of the risks? How informed was the decision they took?
All in all, I think there are two important things to keep in mind when approaching a case like this one:
- Activists normally make informed decisions and know the risks much better than we do. We have no right to decide for them whether something is worth it or not. I come from a family where my father spent 5 years in jail fighting a repressive regime; I would never dare to think he did it because he did not know the risks. On the contrary, he did it because he knew them, and he decided to accept them.
- The crowd, if we want to call it that, may be getting into the process without knowing what the risks are. There is no way for us to prevent this apart from spreading knowledge about cybersecurity as widely as possible. And by spreading I mean producing documentation, using simple language, and having software companies and online networks educate and inform people about what it is they are using and what the vulnerabilities are. Information is here more important than food and water.
Case 2: repressive regime and “improvised activists”
I worked on a case like this some time ago, where the people involved wanted to do a crisis mapping deployment under a very repressive regime and had little or no knowledge of the environment they were operating in. Since we were providing support from abroad, we had to use our knowledge to inform them. All in all, the big lesson learned here was that our knowledge of the situation was not enough, and the risk for them was too high. We got under incredible stress, they got very scared and the deployment was closed. The risks that all of us ran were really high, and we realized there was no way for us to understand the situation better, since we were not there, and no way for them to learn in such a short time frame without risking being killed, tortured or worse. My takeaway from those cases is this: first you get the knowledge, and only then you deploy. There is no such thing as learning as you go in those cases, because the risks are too high.
Case 3: repressive regime/militias and humanitarian emergency
This was the case of our deployments in Pakistan and Libya. This is a very complicated situation since we are talking about several actors, with different degrees of risk associated with each, and different possible outcomes depending on the actor, the beneficiary and the issue. I still think it is very hard to draw general lessons from these kinds of situations, since so much depends on the specific case. In addition, the issue here is closely linked to the concepts of open data and privacy: how do you provide useful information to both humanitarians and affected communities while making sure that you do not endanger them and that you respect the do no harm principle?
These types of deployments are the ones that will have to be evaluated extremely carefully, using local or trusted networks, doing a careful risk assessment for each actor involved and making sure that links and connections with key actors are in place. My 2 cents on these types of deployments are the following:
- Treat different actors in different ways: not all information is sensitive or useful for everyone, so create different channels, protect them accordingly and deliver different information to different people.
- No information does not mean no risks. Not knowing can be as deadly as giving the wrong information to the wrong person, so let's not panic, but instead find ways to make sure the information flows are built in a way that allows vital information to get to the right people.
- Do a very careful assessment of what information people on the ground – humanitarians, the local population or hostile actors – already have or do not have, what their information channels are and how they use them. People rely on what they know to gather and get information out, and if you know their channels, you know their possibilities.
Case 4: humanitarian emergency and the do no harm principle
In a recent working group in Geneva, a representative of the ICRC gave a very good presentation about the do no harm principle and how we could apply it to crisis mapping. I think this is a great starting point – learn from those who have mastered it – and I have given it a lot of thought lately.
In the SBTF, for example, we have already designed our code of conduct on the basis of the ICRC code of conduct, but the issue goes deeper, into the actual implementation of the framework when it comes to applying the do no harm principle. In this regard the SBTF has already started a discussion about how to do this better, and you will soon see some results of those discussions on our blog. The most important thing here is that the do no harm principle is, and should always be, the main thing to keep in mind when doing a crisis mapping deployment, especially if there is communication with disaster-affected communities involved.
On the other side, I am intrigued by how we can make sure we always act under this framework when, lots of times, we know we do not know. The real risk is this: since we do not yet know all the actual implications of crisis mapping deployments – this field is still growing and developing – how do we balance the do no harm principle with the urgency to do something, and with the actual benefit of a crisis mapping deployment? The more I think about it, the more it looks to me like a cat eating its own tail: should we do nothing because the risks of harm are too high, or should we try, knowing that the more we try the more we risk, but also that the more we try the more we learn?
In these kinds of situations there are also so-called secondary effects to take into account. While there are risks associated with publishing reports from people on the ground, for example, or with making certain information publicly available, there are also other risks tied to those choices that we often do not take into consideration. One example: if the crisis mapping deployment is available online, a repressive regime may be tempted to block the Internet, thereby also endangering many other situations and humanitarian operations that need the Internet to work effectively. Another example: if the crisis mapping deployment is collecting information via SMS or social networks, the groups in the population that do not have access to those means may be cut out of the system, and their problems or needs may be completely missed or underestimated because they are not able to express them through those channels. Secondary effects can be multiple and various, and it is extremely difficult to understand when and where they are taking place and what to do to avoid them.
In conclusion: I am sorry if readers did not find very good answers in this blog post. The intention was indeed not to give answers but to keep talking about the issue, hoping that a constructive debate could lead to some interesting discussions on real solutions. As a final point, I would like to highlight that there is no advantage in the endless battle between Muggles and Crowdsourcers on the security issue if that battle is only framed in black and white.
The issue of security is there and always will be. Practice and constructive debate on the practical implementation of cybersecurity measures is, in my view, the only way to face it. We can't go back; we cannot prevent people from using crisis mapping under repressive regimes or in humanitarian crises. But we can inform them, we can share lessons learned, and we can make our failures and our knowledge as open as possible. Free, open knowledge about security is the best weapon we have to keep others, and ourselves, from making the same mistakes and endangering people in those situations. I am happy to do that, so if you want to do a crisis mapping deployment in one of those situations, feel free to shoot me an email. I may not have all the answers, but I will be happy to share what I have learned... for free. :-)