
AI vs. AI: Forecasting the Ethical Dilemma Circling Law, Business and Technology

Cybersecurity, Data Privacy, Financial

By: Rebecca L. Rakoski, Esquire, and Patrick D. Isbill, Esquire

Should private, non-governmental companies be able to weaponize sophisticated, well-developed cybersecurity defenses to strike back at the cause of their own cyberattack? A cyber counterpunch of sorts, or “hack back,” continues to raise layered ethical and legal questions for technologists and cybersecurity professionals alike. It is also an especially complicated question for governments, one with no direct answer yet. Insert artificial intelligence (AI) into the equation and the complications increase exponentially. The keyword for lawmakers is, of course, cause: something that, if poorly understood, often ends up undefinable, unidentifiable, and largely consequential.

The Study on Cyber-Attack Response Options Act

Introduced last year, the Study on Cyber-Attack Response Options Act is a bill directing the Department of Homeland Security to study, and report its findings on, the potential benefits and risks of amending “the Computer Fraud and Abuse Act to allow private entities to respond to an unlawful network breach, subject to federal regulation and oversight.” Many industry analysts and observers have derided the acceptance of the private sector onto the cyberwarfare stage as too risky, while others maintain such an introduction should at least be studied, particularly in light of the well-publicized cyberattacks on industry giants like SolarWinds (a supply-chain compromise) and the ransomware attacks on Colonial Pipeline and JBS Foods. SolarWinds garnered added attention from legal watchers in the months following its cyberattack when a group of investors filed a lawsuit that specifically named its former CEO and its then-CISO.

The text of this bill, referred to the Committee on Homeland Security and Governmental Affairs, states that the report shall “address any impact on national security and foreign affairs” and include recommendations not limited to “which Federal agency or agencies may authorize proportional actions by private entities” and “what actions would be permissible,” as well as “what level of certainty regarding the identity of the attacker is needed before such actions would be authorized.” Those attacks, individually or collectively, may at a minimum have opened the door further for federal lawmakers to explore what a framework would look like if private, non-governmental companies were indeed permitted to respond to such crippling cyberattacks.

But make no mistake, the idea of corporate self-defense akin to counter-cyberwarfare, like the complex legal and ethical questions typically found on first-year law school exams, is moored to the slipperiest of slopes. All the ingredients for a highly consequential ethical dilemma are present, and the combatants may not even be people; instead, the AI systems of the attacked organization may be pitted against those of an unknown attacking entity. Any guidelines will surely have to determine, at a base level, (1) whether such a process can be implemented, and with what safeguards, and (2) whether that process can then respond proportionally to a cyberattack in a way that deters future attacks.

Private Sector Loading Up AI-Based Defenses to “Hack Back”

Under United States law, it is illegal for a private organization to “hack back.” A limited number of federal agencies are nevertheless permitted under federal law to take offensive cybersecurity action, through either the military or law enforcement. The general concept of such a response for private companies, if granted that ability under the law, runs as follows: a threat actor infiltrates an organization’s systems. Those systems are presumably patrolled and defended by a cybersecurity and data privacy program, foreseeably powered by AI, that either neutralizes or warehouses the threat upon detection, though one presumably less advanced and precise than the applications at the highest levels of government. What happens next is where things get murky. Upon neutralization, the private company’s presumably AI-based system then initiates a counterattack strategy to “trace the point of origin” of the threat actor and deploy a cyber counterstrike almost immediately thereafter, one likely as devastating as the original attack or more so, depending on the skill of the original threat actor.
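To make that sequence concrete, the sketch below models only the defensive half of the pipeline just described: detect, neutralize or warehouse, then attempt attribution. It is a minimal illustration, not any vendor’s product or API; every class and function name is a hypothetical assumption, and the counterstrike step is deliberately absent because, as noted, it remains illegal for private entities.

```python
# Hypothetical sketch of the detect -> neutralize/warehouse -> trace flow
# described above. All names are illustrative assumptions, not a real
# product's API. The "hack back" step is intentionally omitted: it is
# illegal for private entities under current U.S. law.

from dataclasses import dataclass
from enum import Enum, auto


class Disposition(Enum):
    NEUTRALIZED = auto()  # threat removed from the environment
    WAREHOUSED = auto()   # threat isolated ("warehoused") for analysis
    ESCALATED = auto()    # handed to human analysts or law enforcement


@dataclass
class ThreatEvent:
    indicator: str     # e.g., a file hash or network signature
    source_ip: str     # apparent, not proven, point of origin
    confidence: float  # model's confidence the event is malicious, 0..1


def quarantine(indicator: str) -> None:
    print(f"quarantined artifact {indicator}")  # placeholder action


def trace_point_of_origin(event: ThreatEvent) -> str:
    # Attribution is best-effort; an IP address is not an identity.
    return event.source_ip


def respond(event: ThreatEvent) -> Disposition:
    """Defensive response only: contain first, attribute later."""
    if event.confidence >= 0.9:
        quarantine(event.indicator)
        origin = trace_point_of_origin(event)
        print(f"reporting suspected origin {origin} to authorities")
        return Disposition.NEUTRALIZED
    if event.confidence >= 0.5:
        quarantine(event.indicator)  # hold for deeper analysis
        return Disposition.WAREHOUSED
    return Disposition.ESCALATED     # too uncertain for automated action


if __name__ == "__main__":
    event = ThreatEvent("deadbeef1234", "203.0.113.7", confidence=0.95)
    print(respond(event))
```

Even in this purely defensive framing, the article’s central problem of cause is visible in code: attribution returns only an apparent origin, and the confidence thresholds that gate each action are numbers a private entity would have to choose, and defend, for itself.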

Where the ethical dilemma arises is in the subjective definition of a “hack back.” Left to the private organization’s decision makers, an immediate judgment of proportionality will be hard to quantify. These choices could also arguably be placed outside executive leadership and squarely into the hands of, say, in-house or general counsel, triggering serious ethical questions around the duty of loyalty, conflicts of interest, and assignment of liability. Is general counsel prepared to fully execute his or her ethical obligation to the company to protect its integrity at the moment of a fast-moving attack, in possible contravention of the law? Retaliation, by its very nature, is immediately fraught with degrees of ethical contradiction and legal headaches, not to mention varying shades of certainty. For technologists, the argument centers less on capability and more on ethical judgment: whether the information aggregated about an attack’s origin, and the technology calibrated to act on it, are of sufficient quality to initiate a clear-cut, measured counterattack. All of it likely, in sum, to be efficiently executed at some point by AI against AI.

Unlike the federal government, with its reach and, more to the point, its subpoena power, the private sector often cannot accurately assign cause to an entity, government, or identifiable individual, and that may be the biggest question of all. After all, the first principle of counterattack strategy is accurately locating the target. Add much-discussed AI-based defenses into the mix, or at least defenses enhanced by quantum computing, and the answers become even more clouded. Further congressional study must also consider an AI overlay, and for obvious reason: disruption to geopolitics and national security and intelligence is a real threat, and errant information could trigger worldwide consequences.

As it stands, the issue of “hacking back” is nothing new; the idea has been kicked around before. But like an ocean current, it keeps circulating back and building, partly because the urgency is growing and partly because the technology is too. Most would agree, though, that a scattershot approach is detrimental and could end up doing greater harm, even if unintended, as companies look to justify instinctively counterpunching in defense.

Ethical Questions Persist

Legal and ethical questions are in the mix as part of the proposed congressional study, including whether tech companies could be held liable for damages resulting from the design and implementation of their potentially AI-based programs, and concern over autonomous control. The discussion easily moves to products liability and the extent to which laws should hold designers, as well as service providers, liable for decisions executed by the AI application. Some emerging-tech analysts raise the lasting ethical question of control: whether the AI itself would have to be named as a party to any criminal and/or injury cause of action.

Business and the law are not immune from these ethical questions either. Corporate governance questions about AI-triggered cybersecurity and data privacy defenses have been hovering over boardrooms for several years now. The pace and proliferation of AI-based integration will likely increase in the coming years, given the high level of interest already in its effectiveness. Questions of accountability lie ahead for the private sector, though: the degree to which AI will serve as guidance, direction, or a combination of both, and whether industries are prepared for a significant expansion of corporate liability and all its fallout. The law will have to chart its own course as well, confronting a rather sharp ethical question: whether to recognize an exception of sorts for corporate self-defense, knowing that the defense is AI-based, or eventually will be, and therefore by definition not fully predictable, and perhaps not even capable of being held liable as an entity by itself.

Data privacy eventually gained accepted legal guardrails, even though the law initially had to run to catch up; the ethical and actionable use of AI will afford no such grace period. That gap may go a long way toward sparking corporate vigilantism. The law, as well as lawmakers, should draw on past and present lessons from data privacy and apply them to what is predictably coming next in the evolution of corporate security and privacy programs.

For now, companies can go a long way toward minimizing cyber threats by first solidifying a defensive approach instead of jumping headfirst into an offensive-minded strategy: shoring up points of attack, training, and using AI to lead system defenses that make use of segmentation and/or microsegmentation, for example (sketched below). The idea of allowing the private sector to defend itself and then retaliate after a cyberattack is not yet gaining steam so much as interest. That may, however, be a distinction without a difference, especially with rising corporate pressures ranging from the costs of a cyberattack to the threat of regulatory compliance enforcement.
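For readers unfamiliar with the term, microsegmentation is essentially a default-deny posture between internal workloads: traffic flows only where an explicit rule allows it, so a compromised system cannot freely move laterally. The sketch below is a minimal, hypothetical illustration of that policy logic; the segment names and rules are invented, and real deployments express this in firewall, hypervisor, or cloud network policy rather than application code.

```python
# Minimal, hypothetical sketch of microsegmentation as a default-deny
# policy between workload segments. Segment names and rules are invented
# for illustration only.

ALLOWED_FLOWS = {
    # (source segment, destination segment): allowed destination ports
    ("web", "app"): {8443},
    ("app", "db"): {5432},
    # Note: no rule allows "web" -> "db"; lateral movement is blocked.
}


def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default deny: traffic passes only if an explicit rule permits it."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())


if __name__ == "__main__":
    assert is_allowed("web", "app", 8443)      # sanctioned path
    assert not is_allowed("web", "db", 5432)   # blocked lateral movement
    assert not is_allowed("app", "db", 22)     # blocked unsanctioned port
    print("policy checks passed")
```

The design choice worth noting is the default: anything not expressly permitted is denied, which is what contains an intruder to the segment breached rather than the whole network.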

All in, an AI-versus-AI offensive strategy is still ultimately an outside-in plan of action that in some cases overlooks what is really important: core company assets. Adopting an inside-out, defensive strategy instead layers security around those top-priority assets, allowing a company to actively take control of its corporate cyber infrastructure. So rather than “hack back,” a company can “take back” its network by defensively shielding its core assets from attack, relying on deliberate controls that favor strategic solutions over forced, reflexive reaction. AI will ultimately play a role in this equation, but perhaps many of these ethical questions can be either settled or avoided entirely by a change of mindset. Just as questions about an offensive-minded security and privacy program demand a definition of cause and proportionality, we may be best served by considering direction first.

Conclusion

Defensive technical solutions designed for a cyberattack can help prevent, and in some cases mitigate, the potential damage of such an event without getting tangled in geopolitical and/or national security matters. Anticipatory awareness in most business settings is the key to functional leadership. Strong offensive counterattacks may be useful deterrents, and maybe a foregone conclusion, but only if first primed by well-thought-out defensive measures like an incident response plan and training.

So rather than simply bracing for the attack, today’s corporate defense systems could mature to absorb the breach outright and compartmentalize it in an area where damage is either neutralized or at least minimized. A counter strategy could then be launched swiftly to mitigate any damage, restore operations if necessary, and use the capabilities of AI to learn the mode of attack and create an immediate patch that negates any subsequent vulnerability. A clearly defined purpose must, however, be the premise for any strategy. When a security and privacy program is “designed” with purpose in mind, it can deftly resolve a cyber event with greater practical efficiency, and, needless to say, with fewer of the ethical headaches.
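As a rough illustration of that absorb-and-compartmentalize idea, the hypothetical sketch below isolates a breach in a containment zone and then “learns” from it to close the entry point. Every name in it is invented for this article; a production system would involve real isolation (network quarantine, snapshotting) and real patch deployment, not in-memory bookkeeping.

```python
# Hypothetical sketch of the "absorb and compartmentalize" flow described
# above: contain the breach in an isolated zone, then learn from it to
# patch the entry point. All names are illustrative, not a real product.

from dataclasses import dataclass, field


@dataclass
class Breach:
    entry_point: str                          # e.g., "vpn-gateway-2"
    artifacts: list = field(default_factory=list)


@dataclass
class ContainmentZone:
    """Isolated segment where an absorbed breach can do no further harm."""
    absorbed: list = field(default_factory=list)

    def absorb(self, breach: Breach) -> None:
        self.absorbed.append(breach)  # compartmentalize, don't retaliate


def learn_and_patch(breach: Breach, patched: set) -> None:
    """Derive a fix for the mode of attack and close the vulnerability."""
    patched.add(breach.entry_point)   # stand-in for deploying a real patch


if __name__ == "__main__":
    zone = ContainmentZone()
    patched = set()
    breach = Breach(entry_point="vpn-gateway-2", artifacts=["dropper.bin"])
    zone.absorb(breach)               # damage neutralized or minimized
    learn_and_patch(breach, patched)  # immediate patch against recurrence
    assert "vpn-gateway-2" in patched
    print("breach absorbed and entry point patched")
```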

Reprinted with permission from the May 13, 2022, issue of the New York Law Journal. Further duplication without permission is prohibited. All rights reserved. © 2022 ALM Media Properties, LLC.

This article does not constitute legal advice or create an attorney-client relationship. The information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.


