The Hague Summit for Accountability in the Digital Age
November 6-7, 2019 – Peace Palace – The Hague
The Hague Summit 2019
The Hague Summit will focus on safeguarding the role of the internet as a tool for personal, professional, and social engagement. I4ADA is taking concrete steps to increase access to knowledge, build evidence-based trust, and promote measures that foster accountability. The goal is to facilitate transparency and a common understanding, and thus to promote a maximum sustainable net benefit for people and societies worldwide.
Below, Internet co-inventor Vint Cerf endorses the I4ADA Summit 2019 and explains the increasing relevance of accountability in the digital age.
The Hague Summit will be a two-day conference bringing together a global multi-stakeholder community from national and local governments, international policymakers, civil society, NGOs, the ICT industry and platforms, as well as other relevant organizations and institutes. The delegates’ conclusions and recommendations will contribute to shaping a global path towards responsible policy. Together, our aim is to foster accountability in and for the digital age.
Introducing the speakers
See a full overview of all speakers at The Hague Summit for Accountability in the Digital Age
Official Program of The Hague Summit for Accountability in the Digital Age
Accountability in the Digital Age
Panel discussion on Accountability & (Social) Media and Journalism
The notion of checks and balances lies at the heart of most Western democracies. These checks and balances have largely applied to the branches of government, keeping in check large institutions with control over their societies. Less thought, however, has been given to checking private actors, such as tech monopolies with increasing power over our lives. It is crucial to consider how to proceed with regard to the balance of power in a democracy.[i]
Digital media forms have allowed citizens all over the world to connect, and to help each other hold political powers to account. But in recent years, the positive effect of this democratization of information seems to have flipped, as governments and organizations struggle with issues like the spread of ‘fake news’ and propaganda and the safeguarding of citizens’ personal privacy and safety.
In a bid to ensure powerful private actors such as Facebook work to improve in this regard, various methods have been suggested. For example, in April 2019 a UK white paper indicated that it was time to move on from industry self-regulation and to start enforcing standards for content regulation by holding individuals in such companies liable.[ii] Meanwhile, Facebook’s Mark Zuckerberg has said he wants governments to take a more active role in regulating data use, privacy, and election integrity, calling for a “globally harmonized framework.”[iii]
Supporters remain in favor of far-reaching freedom of speech online, while critics argue that “rather than uniting and informing, social media deepens social and political divisions and erodes trust in the democratic process.”[iv] Positive outcomes are visible, for example in facilitating social movements or democratic uprisings, but it is important to consider whether certain platforms or organizations have systemic flaws that pose a threat to the world’s democratic institutions, and whether we have the ways and means to hold them accountable.
Panel discussion on Accountability & Artificial Intelligence
In just the past few years, the world has seen a sharp spike in attention to the governance of AI. Around twenty countries have adopted national AI strategies; other examples include France and Canada launching an International Panel on AI, and the US Defense Innovation Board devising ethical principles for the Pentagon’s use of military AI.[v]
Within AI, there is a difference between Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI). The former refers to machine intelligence equaling or even exceeding human intelligence but only for specific tasks, such as IBM’s chess-playing computer, Google’s AlphaGo, or the robot pilot that recently passed its flying test in the US. AGI concerns machine intelligence that matches human performance across any task.[vi]
At the moment, private actors and companies “are creating AI-based solutions to everything from grading students to assessing immigrants for criminality,” often bound by little more than their own ethical statements.[vii] And this issue does not only prevail in the private sector. As predictive algorithms become normalized in law enforcement, they will most likely also move into the (inter)national security spheres.[viii]
Our legal system is largely geared towards human agents. Think, for example, of how criminal law requires there to be active intent behind actions, which results in a certain punishment. At the same time, however, technology will continue to move towards, and will eventually go beyond, Artificial General Intelligence. Complex systems will increasingly make decisions that are difficult to predict beforehand or explain afterward. One of the biggest issues to arise in the past few years of increasingly autonomous, ‘black box’ decision-making in technology is therefore who can or should be held responsible when AI causes harm, an issue often referred to as an ‘accountability gap’.
The three main problems that may arise from this accountability gap, if not addressed soon enough, are causality, justice, and compensation.[ix] Firstly, it is hard to make a causal link from harm back to a specific person or organization if the harm was due to a computer decision. Secondly, the legal system’s penalties are mostly geared towards punishing humans (or entities that are made up of humans). Fines or prison sentences are not a deterrent for algorithms, but without effective penalties it is difficult to ensure justice is served. Lastly, and in relation to the former two problems: who is to pay compensation to victims of accidents caused by autonomous AI?
Panel discussion on Accountability & Cyber Security and Cyber Peace
Until fairly recently, most governments saw the digital sphere as separate from physical threats to national safety and sovereignty. However, the advent of cyber-physical systems, complex digital networks, and the internet of things, along with increasingly sophisticated attack methods, has turned cyberattacks into a threat to physical safety and national security as well.[x]
Algorithms have rapidly increased in importance for national security, now protecting nations from attempted cyberattacks around the clock. The value of these algorithms lies in their ability to respond to attacks immediately. As they become responsible not only for the digital sphere but also for the interconnected patchwork of critical infrastructure, there must be sufficient and timely consideration of how to ensure that algorithm ‘behavior’ is predictable and explainable enough to be trusted with ever more of our peace and security. The more we rely on self-learning systems to protect us, the sooner we must find alternative frameworks for transparency and accountability.[xi]
The difficult process of setting the agenda for, and agreeing upon, international norms for cybersecurity has been ongoing for a while, but even once norms are established, accountability remains the next challenge. Aside from this, the relatively low speed at which countries within international coalitions or institutions manage to agree upon norms has left many non-state actors unshielded in the meantime. Many companies have therefore started building their own agreements to ensure a level of mutual cybersecurity, in the form of both normative and operational alliances.[xii]
[i] Deeks, Ashley. “Facebook Unbound?” Virginia Law Review Online 105 (February 2019): 1–17.
[ii] Stewart, Heather, and Alex Hern. “Social Media Bosses Could Be Liable for Harmful Content, Leaked UK Plan Reveals.” The Guardian, April 4, 2019, sec. Media. https://www.theguardian.com/technology/2019/apr/04/social-media-bosses-could-be-liable-for-harmful-content-leaked-uk-plan-reveals.
[iii] Press Association. “Mark Zuckerberg Calls for Stronger Regulation of Internet.” The Guardian, March 30, 2019, sec. Media. https://www.theguardian.com/technology/2019/mar/30/mark-zuckerberg-calls-for-stronger-regulation-of-internet.
[iv] Intelligence Squared Debates: Social Media Is Good for Democracy. IQ2US, 2018. https://www.intelligencesquaredus.org/debates/social-media-good-democracy-0.
[v] Simonite, Tom. “Canada, France Plan Global Panel to Study the Effects of AI.” Wired, June 12, 2018. https://www.wired.com/story/canada-france-plan-global-panel-study-ai/; Tucker, Patrick. “Pentagon Seeks a List of Ethical Principles for Using AI in War.” Defense One, January 4, 2019. https://www.defenseone.com/technology/2019/01/pentagon-seeks-list-ethical-principles-using-ai-war/153940/; Zwetsloot, Remco, and Allan Dafoe. “Thinking About Risks From AI: Accidents, Misuse and Structure.” Lawfare, February 11, 2019. https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure.
[vi] De Spiegeleire, Stephan, Matthijs Maas, and Tim Sweijs. “Artificial Intelligence and the Future of Defense: Strategic Implications for Small- and Medium-Sized Force Providers.” The Hague: The Hague Centre for Strategic Studies, 2017. https://www.hcss.nl/sites/default/files/files/reports/Artificial%20Intelligence%20and%20the%20Future%20of%20Defense.pdf.
[vii] Coldewey, Devin. “AI Desperately Needs Regulation and Public Accountability, Experts Say.” TechCrunch, December 7, 2018. http://social.techcrunch.com/2018/12/07/ai-desperately-needs-regulation-and-public-accountability-experts-say/.
[ix] Bartlett, Matt. “Solving the AI Accountability Gap.” Medium, April 5, 2019. https://towardsdatascience.com/solving-the-ai-accountability-gap-dd35698249fe.
[x] Dobrygowski, Daniel. “Why Companies Are Forming Cybersecurity Alliances.” Harvard Business Review, September 11, 2019. https://hbr.org/2019/09/why-companies-are-forming-cybersecurity-alliances.
[xi] Deeks, Ashley, Noam Lubell, and Daragh Murray. “Machine Learning, Artificial Intelligence, and the Use of Force by States.” Journal of National Security Law & Policy, Virginia Public Law and Legal Theory Research Paper, 10 (November 16, 2018). https://papers.ssrn.com/abstract=3285879.
[xii] Dobrygowski, Daniel. “Why Companies Are Forming Cybersecurity Alliances.” Harvard Business Review, September 11, 2019. https://hbr.org/2019/09/why-companies-are-forming-cybersecurity-alliances.
Partners of the 2019 Summit
The Hague Summit for Accountability in the Digital Age is powered by:
International Media Partner
Local Media Partners
Join the Institute for Accountability in the Digital Age
Our work is only possible with the continued financial support of donors and partners. If you want to become a supporter of the I4ADA, please contact us.
Institute for Accountability in the Digital Age (I4ADA)
Postal address I4ADA
Lange Voorhout 1
2514 EA The Hague
P: +31 (0)70 3184840