Artificial Intelligence (AI) has created a plethora of ethical considerations – ranging from how it is used, to its impartiality, to its ability to operate autonomously. At the recent ISC2 AI Spotlight virtual event, a panel drawing on insights from academia, law and industry discussed these and other challenging ethical AI considerations.

AI presents many challenges, among which is the key topic of ethics. Much has already been said and written about ethics; in fact, as moderator Brandon Dunlap commented at the ISC2 Spotlight on AI in July 2025: “We could have done a full two days about that”.

The panel session, The Ethics of AI – Carving a Principled Path for Your Organization, featured Olivia Philips, vice president of the U.S. chapter of the Global Council for Responsible AI; Robert Kang, CISSP, adjunct professor at Loyola Law School; and Dr. Claudio Cilli, a professor at the University of Rome.

Dunlap opened the session by asking Philips for some examples of the ethical dilemmas faced by the Council. “Using it – AI – responsibly” was top of the list, she said; “We have seen a lot of people who are not using AI ethically because it makes their job easier. It makes their responsibility easier, but they're not thinking of the ethics behind it. There needs to be legal ramifications behind that where, yes, you can do this and no, you can't do this”.

Cilli pointed out, in response, that there are two sides to ethics in AI and that dealing with the behavior of the AI – rather than of the people – is the hard part. “AI technology is a relatively new technology, although the research is very old. People don't know exactly how to manage this powerful tool. The legislation and the governments are trying to prevent people from bad use – not working to regulate how people can use it, but trying to regulate the behavior of the AI, which is massively impossible”.

AI and Legal Decision-Making

The host then turned to Kang, the academic lawyer of the panel, to ask for his experience of how AI is used in his realm. “I was at a conference recently where a prosecutor from a particular major U.S. law enforcement organization indicated that management absolutely prohibits the use of AI”, he said, but went on: “In my personal opinion, people are using AI. It's just that under those rules, under that edict, they're using their personal accounts to do what is probably going to be law enforcement business. That's not good”. Outright prohibition did not sit well with Kang, who analogized wryly that: “I would not tell an engineering student: you cannot use a scientific calculator”.

Ethical Environmental Impact

Moving on, a novel question was raised: the ethics around the massive power consumption and associated environmental impact of the data centers in which AI platforms are hosted. Philips mused on this one: “From an environmental standpoint I think there is a risk, because we're going to have to dig up the Earth. We're going to have to probably take some trees down”, she said. “The other thing is electric: how is that going to affect anything surrounding it? Is it going to affect humans? Is it going to affect … the bees and the pollination?”. Cilli concurred that the impact was a concern, but also saw some positives, pointing out that there is “A great opportunity, to increase the research to solve the energy problem, the energy demand … everything should be seen as an opportunity”. Kang had a particularly novel example, which he termed a “Trivia Pursuit” point: namely that the data center cooling needed to answer 10 to 15 ChatGPT questions consumes the equivalent of a half-liter bottle of water.

Moving back to a legal viewpoint, the question was raised: how do we bridge the gap between what is ethical and what is legal? Kang’s view was one of, in his words: “same wine, different glass”. He noted: “I think this question of what is ethical and what is legal is not something new to AI. It's a battle that has gone on ever since laws have existed”. In his role, he said: “What I will do is give three options with varying levels of operational, legal and even ethical implications. Then, we work with the client and say ‘Hey, let's figure out which of these options work best for the organization, consistent with our business goals and with our risk tolerance’”.

Reality or Fiction?

Is our approach even based on reality, came the next question: “Are we engineering policy and even ethics around the marketing fiction rather than perhaps the operational fact of this matter?” Philips was of the view that: “I think we're looking at it in not the right format,” pointing out that: “AI keeps on changing every 24 hours and new things keep coming up. So, we have to reevaluate … it's constantly changing and you have to keep your eyes on a swivel, because you just don't know what's coming out next”. She also referred to the well-known threat of AI as a source of sensitive data leakage: “AI has become the new insider threat for a company. It's basically a new user with all this access and can manipulate all this information, and it has root privileges. But how do you quarantine all that and make it safe for everybody?”

So, should we engineer AI for what we expect to come in the future or what we have today? Philips: “I think they need to not be the rabbit, but be the turtle. You need to come back, reevaluate and then be like: OK, this is where I am today, this is where I want to go and this is how I'm going to get there. Is it going to be a straight line? Absolutely not”. She continued: “You’ve got to take one piece at a time, or you're just going to overwhelm yourself or the organization”.

How do we guide our organizations down the right path, queried Dunlap. Data science, said Cilli: “The first thing is to start learning what the data science is, of course, because these systems need data. Without it, the AI cannot work. We need to understand exactly what data – and the implication of the data – does to the data science in the AI”. Trust also got a mention, with Philips asking rhetorically: “How can we improve trusting AI? Because at the end of the day, you have to trust it and if you don't trust it, you're not going to utilize it to its full capacity, which can definitely help your organization”. From his legal angle, Kang’s view on frameworks was that there are two approaches, one voluntary (for example, standards under development by ISO and others, whose adoption is not compulsory) and the other mandated (primarily by government regulation). He had concerns about the second option, fearing that: “In many instances, government regulations, when it comes to a new technology, can sometimes be a little heavy handed”.

Kang also advised organizations to educate their top-level executives in the basics of AI. “I encourage you, if you don't already, to provide or encourage the company to provide a certain level-setting ‘AI 101’ at the executive level, to help them develop both thoughtful internal policies, but also thoughtful investment decisions as well”.

Takeaways from the Panel

Dunlap asked the panel how organizations should start and how they will mark progress. Tricky, said Cilli: “I don't think it's possible at this time to establish a sort of roadmap to introduce AI in a specific organization, because all are different. They have a different approach, different business model”. He did note, though, that in its basic form, AI is just software. “AI is not human, it is a piece of software, it is a piece of code. It's important to remember this. The software may have bugs, like all software, so it is not possible to generalize … to introduce AI, the recommendation is to start by studying the implications – exactly what AI is and exactly what it can do”.

Kang was complimentary of the CISSP certification, noting that: “A lot of the elements of the CISSP provide for developing risk management programs. They [translate] very well over to the AI space”. He suggested “building out a governance program that has stakeholders across the enterprise able to come in and share their voices. We will get that diversity of thought, which is helpful”. Philips, prompted by Dunlap to consider a positive, closed by saying: “We should establish care, policies and governance frameworks, ensuring that AI security, posture management and detection is available. We must manage the risk throughout the AI development and deployment lifecycle, not just towards the end, but the entire lifecycle. This is a big one – train and educate your users”.
