Chapter 1: Executive summary
Chapter 2: The reality of AI in Australia
Chapter 3: The opportunities for AI in pharma marketing
Chapter 4: Managing risk in AI adoption
Chapter 5: Preparing for an AI-first future
Chapter 4: Managing risk in AI adoption
Alongside opportunity sits responsibility. The pharmaceutical industry’s instinct to start with risk is not accidental; it is foundational to protecting patients.
As Britland put it, “We’re just a very conservative industry. Our first thing to do, which is good in a way, is ask what the risks are. Because we don’t want to compromise patients.”
That mindset shapes how AI must be approached. Rather than seeing risk aversion as a barrier, it provides a framework for responsible adoption that prioritises safety, evidence and accountability from the outset.
Data privacy, enterprise tools and third-party risk
One of the most immediate concerns with AI is how data is handled. Britland said many people hold assumptions about the security risks of AI that are, in many cases, incorrect.
“There’s a misperception that the risk of data breach is higher than if you use things like OneDrive,” he said, pointing out that many enterprise AI platforms offer equivalent levels of protection.
In many cases, fear stems less from evidence and more from unfamiliarity: “People don’t know what they don’t know, and they fear what they don’t know.” Slaven agreed.
“Every pharma company should be using enterprise-level tools so that it’s just like using Google Drive or SharePoint. The information stays private within your organisational structure. It’s not training the model.”
Approved enterprise AI environments allow teams to benefit from AI capabilities without exposing sensitive data or intellectual property. It’s important to give people access to the tools they need, “so they don’t go and use external tools that don’t have that type of privacy and security,” she said.
Without clear guidance, staff may unintentionally introduce risk by turning to consumer AI platforms that fall outside corporate controls. How third parties use AI also needs consideration.
“We need to make sure they're aware of our contract clauses, that third parties and vendors are aware that they should not be using AI to ingest any of our company data,” Slaven highlighted.
As AI evolves, these “changing goalposts” and expectations need constant review.
Human accountability and the “human in the loop”
While technology can mitigate risk, accountability cannot be automated. Across all use cases, one principle remains non-negotiable: human accountability. Slaven was clear: “it’s always going to be human in the loop.”
While AI can enable and accelerate work, “people need to be always 100 per cent accountable for the work that they provide or do using AI.” Outputs must be “checked, validated and vetted by them as the owner.”
This responsibility extends to understanding why AI produces certain outputs. Slaven described situations where teams blamed AI for errors, only to discover the issue lay in the inputs.
“Sometimes what we’ve done is we’ve added 30 documents, and within one of those sources was an opinion piece and that opinion piece was what it pulled out.”
This highlights the need for ongoing training and education, and for careful configuration of the tool itself: setting strong parameters and guardrails, and ensuring that AI systems rely only on high-quality knowledge sources.
The level of control required also varies by context.
As Slaven noted, “if it’s GMP and GxP related, that’s very different to how we would use it if I was just creating workshop material or using it to summarise a clinical paper.”
Risk management must therefore be proportional to the task.
Compliance, content and regulatory safeguards
Compliance doesn’t disappear with AI adoption. Slaven identified output risk as the primary concern, particularly for customer-facing materials.
“All the materials that are going out customer-facing still need to be referenced, checked. We’re still going through all the other normal compliance checks that we go through for approval.”
Strong guardrails are especially important around AI-generated images and videos, which should not be used externally without rigorous review.
Rather than weakening compliance, Slaven argued AI could strengthen it.
“If we can actually use AI to help us navigate through the compliance risk and identify compliance risk, I actually see it as a real positive.”
Ron Eames, Regional Counsel at iNova Pharmaceuticals, agreed.
Even a year or two ago, getting an AI tool through pharmaceutical compliance was almost impossible. However, as he highlighted at the NEXT Pharma Summit, AI is now being developed as a compliance tool.
Grounding AI in a verified database of sources, using retrieval-augmented generation, can help reduce the incidence of hallucination.
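To make the grounding idea concrete, here is a minimal illustrative sketch of retrieval-augmented generation: a query retrieves excerpts from a verified source set, and the prompt instructs the model to answer only from those excerpts. This is not a description of any specific vendor tool; the source IDs, texts and keyword-overlap retrieval are hypothetical simplifications.

```python
# Minimal RAG sketch: answers are grounded in a verified source set
# rather than the model's open-ended knowledge.
# All source IDs and texts below are hypothetical examples.

VERIFIED_SOURCES = [
    {"id": "PI-2024-01", "text": "Product X is indicated for adults with condition Y."},
    {"id": "TGA-GUID-07", "text": "Promotional claims must be consistent with the approved product information."},
    {"id": "SOP-MED-12", "text": "All customer-facing materials require medical and regulatory review."},
]

def retrieve(query: str, k: int = 2) -> list:
    """Rank verified sources by naive keyword overlap with the query.
    (A real system would use embeddings; word overlap keeps the sketch self-contained.)"""
    q_words = set(query.lower().split())
    scored = sorted(
        VERIFIED_SOURCES,
        key=lambda s: len(q_words & set(s["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that restricts the model to the retrieved, verified excerpts."""
    excerpts = "\n".join(f"[{s['id']}] {s['text']}" for s in retrieve(query))
    return (
        "Answer using ONLY the verified excerpts below; "
        "cite the source id for each claim.\n\n"
        f"{excerpts}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt("What do customer-facing materials need?")
print(prompt)
```

Because every claim must trace back to a cited excerpt from an approved knowledge base, outputs that cannot be grounded are easier to catch in review.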
“We're also seeing things like AI impact assessments that organisations are carrying out to measure the risk and the harm and to document how they're guarding against that,” he said.
Within agentic systems, compliance tools such as AI juries and verification agents use scorecards to assess the decision quality of systems and guard against misalignment.
“These tools are building trust, and compliance is no longer being seen as a blocker or a barrier to adoption. It's actually a trust enabler,” he said.
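The scorecard pattern described above can be sketched in a few lines: a draft output is run through a set of compliance checks, and release is gated on every check passing. The specific checks, names and thresholds below are hypothetical illustrations, not any organisation's actual rules.

```python
# Illustrative scorecard-style verification agent: a draft output is
# scored against simple compliance checks before release.
# The checks here are hypothetical examples only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    passed: Callable[[str], bool]  # returns True if the draft satisfies this check

CHECKS = [
    Check("cites a source", lambda text: "[ref:" in text),
    Check("no absolute efficacy claims",
          lambda text: not any(w in text.lower() for w in ("cures", "guaranteed"))),
    Check("discloses AI assistance", lambda text: "ai-assisted" in text.lower()),
]

def score(draft: str) -> dict:
    """Return a scorecard: per-check results plus an overall verdict
    (approved only when every check passes)."""
    results = {c.name: c.passed(draft) for c in CHECKS}
    return {"checks": results, "approved": all(results.values())}

card = score("Product X reduced symptoms in trial Z [ref:12]. AI-assisted draft.")
print(card)
```

A gate like this does not replace human review; it surfaces a documented, auditable record of which checks a draft passed, which is the "trust enabler" role Eames describes.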
Building AI compliance tools in from the start can also speed up project timelines.
“Traditional pharmaceutical IT deployments that once took nine to twelve months are now being completed in under 90 days, without reducing regulatory scrutiny,” he said.
A recent deployment was completed within four weeks.
“This wasn’t about skipping safety checks. It worked because compliance, legal, medical and tech teams collaborated from day one on the design of the tool. Each stakeholder assessed potential risks early: privacy, regulatory, consumer protection, etc., ensuring speed was matched by functional accountability.”
“We are the most compliant, regulated, risk-averse,” Slaven said. That’s not going to change with AI.
But it’s about making sure people aren’t afraid, “but they use it with good guardrails and good validation,” she said.
Misinformation: the risk that outweighs all others
While data security and compliance dominate internal discussions, Britland identified misinformation as the most serious external threat.
“The biggest risk for me, which is bigger than misinformation going to healthcare professionals, is the amount of misinformation that is out there when people use AI improperly,” he said.
Deep fakes, opinion presented as fact, and highly convincing false content pose a genuine risk to public health.
“People buy it. People believe it. It is so real.”
Addressing this challenge will require collaboration beyond pharma.
“We need to come together as a healthcare industry, to bring government, industry, clinicians, patients, people like Google together because misinformation over the next few years is going to be horrible,” he said.
Yet Britland remained clear-eyed about the upside. AI also has the potential to close the long-standing gap between evidence and practice: “There’s still such a gap between the science and practice, and I think AI is going to fill that gap hugely.”
In the end, the experts agreed that while risks must be actively managed, they should not paralyse progress.
“I think the benefits are going to completely outweigh the risks,” Britland said.
The challenge for the pharmaceutical industry is to move forward deliberately. The accountability, education and governance the industry needs shouldn’t come at the cost of momentum or confidence.
Chapter 5: Preparing for an AI-first future

