
Conversations that changed how we think about AI adoption

Whenever YLD engineers attend conferences or engineering events with insightful talks, they come away with a great deal of new learning and value. But beyond the talks themselves, it's the candid watercooler conversations that often provide equally valuable insights, precisely because unfiltered thoughts and opinions flow freely.

Recently, at an AI-focused conference, one of YLD's tenured engineers had the chance to engage in unrushed, thoughtful conversations with a few engineering leaders that offered a grounded perspective on where the industry stands today, and where it is realistically heading in terms of AI adoption and utilisation.

Across the day, three memorable conversations stood out. All sat under the broad AI umbrella, yet each highlighted a distinct theme:

  • Ongoing concerns around data security
  • The real-world pace of AI adoption beyond the hype
  • The evolving role AI is playing in strengthening security practices

Data security concerns and responsible AI adoption

The first conversation was with a thoughtful CTO. He was enthusiastic about the productivity gains AI promises, especially for senior engineers, but what resonated most strongly was his caution about what those gains mean when internal data, business logic, or debugging context enters AI tools.

His concerns were not trivial, and they reflect growing anxieties across many organisations today:

  • Data boundaries: When developers copy-paste business logic, private algorithms, or even debugging traces into (open) AI-powered tools, they risk unintentionally exposing sensitive data such as names, internal strategy, or intellectual property. Without strict boundaries, what begins as a helpful prompt can become a data leak (a minimal sketch of one such boundary follows this list).
  • Third-party SaaS tools: Many AI solutions are offered as SaaS - external, hosted, often multi-tenant. Once proprietary data crosses into those environments, companies lose control over how it's stored, processed, or retained. The worry is whether these models are simply "seeing" the data, or also retaining it for retraining or future output reuse. This is not just hypothetical: several experts warn that "AI-driven data exposure" is among the top risks when businesses adopt GenAI tools too fast and without caution.
  • Regulatory compliance: For industries under strict data-protection rules such as GDPR, HIPAA, or PCI-DSS, even a simple debugging session could accidentally expose personally identifiable information or other regulated data. Given AI's opaque "black box" nature, it becomes very difficult to trace what happens to that data once it enters the model.
  • Operational friction and costs: The CTO's final concern was the cost of building a robust, internally hosted, fully isolated AI stack with proper data segregation, compliance, encryption, and governance. Building this feels excessive relative to current needs, and many smaller or mid-sized companies simply don't have the bandwidth or resources to build that infrastructure right now.
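
To make the first of those concerns concrete, here is a minimal sketch of a pre-prompt redaction boundary. The patterns, names, and token formats below are hypothetical; a production setup would pair a vetted DLP tool or entity recogniser with a check like this rather than rely on regexes alone:

```python
import re

# Hypothetical patterns for illustration only; real deployments would
# use a vetted DLP library or NER model, not regexes alone.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|pk|key)-[A-Za-z0-9]{16,}"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scrub(prompt: str) -> str:
    """Replace likely-sensitive substrings with labelled placeholders
    before the prompt ever leaves the company boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

raw = "Login fails for jane.doe@acme.com, token sk-abc123def456ghi789jk, host 10.0.3.17"
print(scrub(raw))
# Login fails for <EMAIL>, token <API_KEY>, host <IP_ADDRESS>
```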

While he was broadly pro-AI, his message was clear: the industry is moving too fast and without adequate guardrails. This caution reflects a growing sentiment among senior leaders: enthusiasm for AI must be tempered by a clear understanding of how data is processed, stored, and protected across its lifecycle.

This balance between innovation and responsibility shouldn't be underestimated. For organisations embracing AI, the key takeaway is that adoption isn't just a technical challenge. It is a governance, compliance, and culture challenge as well. Before widespread deployment, companies need clear policies, impact assessments, data retention and deletion guidelines, and transparency about how upstream data is treated.

Avoiding the “shiny object” trap

A second conversation, with another technology leader, centred on the dangers of hype-driven AI adoption and the impulse to rush into AI simply because it is new and visible.

He stressed that organisations should not adopt AI tools merely to keep up or because "everyone else is doing it." Instead, they should carefully evaluate:

  • Whether current business performance actually justifies AI adoption
  • Which risks, including privacy, compliance, or operational concerns, they are genuinely prepared to take on
  • Whether the organisation has the capacity and discipline to implement guardrails before granting tools access to sensitive data

He acknowledged the genuine upside: AI can compress a developer task that might take hours into a matter of minutes, driving meaningful productivity gains. However, he argued that gains like these shouldn't overshadow serious concerns around privacy, compliance, or long-term exposure.

There is a historical parallel here. With "big data," companies collected too much too fast, and then regulations like GDPR forced a reckoning. AI could follow a similar trajectory if governance is not built in from the start.

Finding the right balance between excitement and caution is itself a marker of organisational maturity. The companies that succeed with AI over the long haul won't be those that blindly embrace every new tool, but those that treat AI as a strategic capability, integrated with their risk model, compliance posture, and governance practices.

AI as a driver of proactive security

The third perspective to emerge from the day was more optimistic: the idea of using AI not just as a productivity booster, but as a permanent enhancer and enabler of security itself.

Traditionally, security in many firms is reactive and periodic: an audit once a quarter, a pen-test once a year, or external ethical hackers reviewing code at significant cost. A single manual security inspection might run to £30-40k, and the results are outdated the moment new code is shipped.

The global financial cost of cybercrime is estimated at around $500 billion every year, and that figure reflects a world where most vulnerabilities go undetected for years, sometimes decades.

One of the conversations at the conference pointed toward exactly this. Anthropic announced Project Glasswing, a cross-industry initiative built around a new frontier model called Claude Mythos Preview, with partners including AWS, Microsoft, Google, Cisco, and the Linux Foundation.

In early testing, the model found a 27-year-old vulnerability in OpenBSD and a 16-year-old flaw in FFmpeg, the latter sitting in a line of code that automated tools had scanned past five million times without flagging it. This wasn't niche or experimental software: these were flaws sitting inside code that powers much of the world's critical infrastructure, undetected for years.

AI, by contrast, offers the potential for continuous, real-time security enforcement, shifting the posture from reactive to proactive. There are several dimensions to this shift:

  • Vulnerability detection at scale and speed: AI can scan vast codebases and infrastructure configurations faster than any human, and detect not only known issues, but subtle patterns and context-based vulnerabilities that might escape traditional static analysis.
  • Reduced false positives and alert fatigue: Because AI can analyse more data and more context, and learn over time, it can become more accurate, reducing the time wasted on noise and helping security teams focus on real, high-priority issues.
  • Continuous monitoring and automated response: Once configured, AI-driven security tooling can operate 24/7, monitoring infrastructure, flagging misconfigurations, and applying patches or raising alerts, all without manual intervention (see the sketch after this list).
  • Better visibility across third-party dependencies: AI tools can analyse vendor risk, supply-chain dependencies, and ecosystem exposure, often faster and more consistently than manual audits.
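
As an illustration of what "continuous" might look like in practice, here is a hedged sketch of an AI-assisted review step wired into CI. The `review_diff` heuristics are a stand-in for a real model call, and the finding format is invented for this example:

```python
"""A sketch of a continuous, AI-assisted review step for CI.

`review_diff` is a stand-in: here it flags two well-known risky
patterns so the script runs end to end, where a real version would
send each diff to a model endpoint and parse structured findings."""
import json
import subprocess

def changed_files(base: str = "origin/main") -> list[str]:
    # Only re-review what changed since the base branch.
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def review_diff(path: str) -> list[dict]:
    # Placeholder heuristics standing in for a model call.
    risky = {"eval(": "high", "pickle.loads(": "high", "verify=False": "medium"}
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            for pattern, severity in risky.items():
                if pattern in line:
                    findings.append({"file": path, "line": lineno,
                                     "pattern": pattern, "severity": severity})
    return findings

def main() -> int:
    findings = [f for path in changed_files() for f in review_diff(path)]
    print(json.dumps(findings, indent=2))
    # Fail the pipeline on high-severity findings; a human still
    # triages every finding before any fix is merged.
    return 1 if any(f["severity"] == "high" for f in findings) else 0

if __name__ == "__main__":
    raise SystemExit(main())
```

The design point is the loop, not the heuristics: every push gets reviewed, findings arrive in a structured form, and high-severity issues block the pipeline until a human has looked at them.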

One of the things that stood out from the Project Glasswing announcement was the democratisation angle. Security expertise has long been a luxury: if you had a large, well-resourced team, you were protected; if you were an open source maintainer whose software happened to power critical infrastructure, you were largely on your own. AI has the potential to change that, and it's overdue.

This paradigm shift, from discrete, occasional audits to continuous, AI-powered security hygiene, is one of the most practical, innovative uses of AI in organisations today.

It doesn’t eliminate the need for human expertise, but it augments human capability, enabling security teams to work smarter, not harder. 

Given how rapidly code and infrastructure evolve today, AI will become as fundamental to security as version control is to development.

Why this matters and what organisations should do next

Drawing together the threads from these conversations, a few clear implications emerge for organisations shaping their AI roadmaps. The journey should be treated not as a purely technical upgrade, but as a strategic transformation. In practice, that means the following:

1. Build AI governance from day one: AI adoption shouldn't begin with random experiments or unmonitored tool usage. It should start with a clear "AI acceptable use" charter: guidelines on what data can be used, how it must be handled, who can access what, and how long data can be retained. Privacy and compliance shouldn't be afterthoughts. They need to be foundational.
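
As a small illustration, such a charter can be encoded as data and checked programmatically rather than living solely in a PDF. Everything below, the classification names, the retention table, and the rule itself, is hypothetical:

```python
from dataclasses import dataclass

# Illustrative classifications and rules; a real charter would map to
# the organisation's own data taxonomy and regulatory obligations.
ALLOWED_IN_EXTERNAL_TOOLS = {"public", "internal-docs"}
# Retention guideline per class: None = indefinite, 0 = never persist.
RETENTION_DAYS = {"public": None, "internal-docs": 90, "customer-pii": 0}

@dataclass
class PromptContext:
    classification: str     # e.g. "public", "internal-docs", "customer-pii"
    tool_is_external: bool  # hosted SaaS vs. internally isolated stack

def permitted(ctx: PromptContext) -> bool:
    """The charter as a check: external tools only ever see
    explicitly approved data classes."""
    if ctx.tool_is_external:
        return ctx.classification in ALLOWED_IN_EXTERNAL_TOOLS
    return True  # internal, isolated tooling may see anything

print(permitted(PromptContext("customer-pii", tool_is_external=True)))   # False
print(permitted(PromptContext("internal-docs", tool_is_external=True)))  # True
```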

2. Prioritise data security and privacy before scaling: It can be tempting to let every team experiment, but that’s a recipe for leaks and regulatory headaches. The best approach would be to start with small, well‑scoped pilots using internally controlled, isolated infrastructure or trusted vendor solutions with strong data guarantees.

3. Use AI to power security: If you're going to bring AI on board anyway, let it help with security too. Use AI tools to constantly monitor, analyse, and flag risks across your code, configurations, third-party dependencies, and user activity. Pair it with human oversight: AI can surface issues and automate monitoring, while humans validate, make decisions, and take action.

4. Maintain human-in-the-loop and compliance controls: AI is powerful but imperfect, so organisations should always treat AI output as a suggestion, not ground truth. Models may hallucinate, misclassify, or leak sensitive data, and there are further risks like data poisoning, prompt injection, model inversion, and training-data leaks. To tackle these imperfections, human review, especially in critical contexts such as compliance, sensitive data handling, and security decisions, should be non-negotiable.
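
Here is one minimal sketch of what "suggestion, not ground truth" can mean in code, with invented names throughout: AI findings enter a queue, and nothing is applied without an attributable human decision:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """An AI finding held in a queue until a human signs off."""
    summary: str
    proposed_action: str
    approved: bool = False
    reviewer: str | None = None

def apply_if_approved(s: Suggestion) -> None:
    # AI output is a suggestion, never ground truth: nothing
    # executes without an explicit, attributable reviewer decision.
    if not s.approved or s.reviewer is None:
        print(f"Held for review: {s.summary}")
        return
    print(f"Applying (approved by {s.reviewer}): {s.proposed_action}")

s = Suggestion("Possible SQL injection in /orders", "Switch to a parameterised query")
apply_if_approved(s)                            # held for review
s.approved, s.reviewer = True, "security-lead"
apply_if_approved(s)                            # applied, with an audit trail
```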

5. Establish a culture of AI literacy and accountability: Many of the risks come not from the technology itself, but from how people use it, such as copying entire production logs into ChatGPT to debug. Organisations should invest in training, awareness, and the discipline to treat AI tools as first-class citizens in their security model.

Looking ahead: What the next 18–24 months might bring

Based on the conversations from the conference and broader industry observation, several developments seem likely over the near term:

  • Stronger regulatory pressure and compliance frameworks: As more enterprises adopt AI, regulators will scrutinise how AI tools store and process data, especially in regions with strict privacy laws like the EU/UK. Expect mandatory Data Protection Impact Assessments (DPIAs) and stricter audit requirements.
  • More hybrid approaches: Large organisations will likely keep small-scale experimentation on cloud-based SaaS tools, while mission-critical workloads and sensitive data move to internal, on-prem, or "confidential computing" infrastructure.
  • AI as the backbone of security operations: With threat actors increasingly using AI themselves, organisations that don’t adopt AI-driven security may soon lag behind. AI-powered detection, response, threat modelling and supply-chain risk analysis will become standard parts of security operations.

Stay excited, but stay cautious

These conversations reinforced a broader belief at YLD that the evolution of AI is not limited to building smarter products. It is also reinventing how organisations think about data, security, and responsibility.

There is genuine optimism about AI's potential as a continuous, always-on security layer. When done well, with sound governance, responsible use, and empowered security teams, the gains in speed and resilience could be significant.

At the same time, the risks are real and sometimes underestimated. From unintentional data leaks, to regulatory exposure, to complex long-term legal and compliance burdens, organisations need to move with their eyes wide open.

AI adoption should be treated as a strategic, long-term commitment that touches engineering, security, compliance, and culture in equal measure. The organisations that succeed will be those that marry AI's power with disciplined governance, meaningful human oversight, and a forward-looking security mindset.

If you’re exploring the potential of AI models or need support across AI engineering, MLOps, or data science, get in touch.

This article was originally published in YLD Blog on Medium.