Knowledge is power! The public needs to be informed about both the benefits and the harms of Artificial Intelligence (AI). Core elements of a human rights perspective include empowerment, participation, equality, and respect for diversity (Foundation for Human Rights, 2018). This subject matter touches many fields, ranging from law, communications, business and trade, health and fitness, and transportation to psychology. The Trident Foundation’s research is designed to facilitate the continuing development of Public Legal Education and Information (PLEI). The Department of Justice Canada (2015) recognises PLEI’s application to family and related law at the provincial level. This approach could be extended to inform the public about cybersecurity in national and international contexts. These curricula would be offered in public virtual spaces, including on this website.
PLEI could incorporate philosophies of cybersecurity, national and international sources, and tensions between rights, including security and liberty, and privacy and data access. PLEI can form holistic curricula ranging from physical and network security to psychological safety and social and cultural practices. This would include identifying physical and human security limitations, vulnerabilities, and strengths. PLEI materials need to be secured both virtually and physically, which is problematic when technology, including AI, can be misapplied. PLEI best practices may shape regulatory compliance programs, crisis response, and preventative measures. In sum, PLEI has the potential to be a capacity builder in this field.
Cybersecurity and AI are intersecting fields shaped by law, regulations, and the PLEI resources and practices found in different jurisdictions and locations. PLEI falls short of providing legal advice; however, its development supports the public interest in limiting cybersecurity breaches. The Trident Foundation’s continuing research cannot be relied upon as authoritative legal advice or as a basis for decisions. Information, communications, trade, navigational systems, and infrastructure such as petroleum and maritime facilities are susceptible to breaches (Norwegian Institute of International Affairs, 2018). Lifelong learning, educational services, and systems have the potential both to limit and to respond to gaps in research and to threats. PLEI supports an informed public in learning about the law and legal processes. In this research, the public is placed at centre focus. PLEI can help to mitigate the effects of breaches. Human experiences and perspectives shape how PLEI is interpreted, and PLEI may be utilised differently by various interests. PLEI can act as a mediator between stakeholders, helping to frame legal issues before they file suit. Media and political cultures also contribute to differences in framing and interpreting data.
There are a number of risks with AI, including software tampering and physical attacks, with environmental and psychological aspects. A broad range of unfriendly parties continue to locate materials to create disruptive or harmful technology, which can be both inexpensive and accessible. The public could consider the following checklist. What:
• can stakeholders reasonably anticipate would happen with organizational technology and personnel?
• are operators’ performance benchmarks, and are they effective?
• reasonable efforts have they made to detect and investigate possible intrusions, and to promote quality assurance?
• fail-safe measures have been introduced, such as for the storage of sensitive and potentially dangerous materials?
• communication systems including emergency alerts are in place?
• reputation risk management strategies have been implemented?
• inclusive reporting and conflict management options are available?
• domestic and international possibilities are available should there be conflicts of laws?
• criteria have been applied for the purposes of public safety and education?
• outcome measures are in place for matters such as insurance, finances, and sustainability?
In the 21st century, the potential for misuse of power and authority, including terrorism and cyberterrorism, is heightened by AI. There are gaps and ambiguities in legal sources ranging from domestic law to the Chicago, Rome, and Vienna (as an interpretive instrument) treaties, as well as other treaties, conventions, customary law, general principles, and jus cogens. As terrorism and cyberterrorism become linked to AI, questions about which laws should apply are likely to become more pronounced. For example, should these laws be based on product liability and tort, air quality, military, civil, criminal, domestic, or international law, some other body of law, or a blend of these?
The links above are to external websites.
The Trident Foundation is only a phone call or an email away!