Interview
5 min read

AI and Automation for Business: Insights from Vera CTO Richard

Published January 16, 2025


Richard Davies (Vera CTO) on how AI and automation are changing business operations for startups and SMBs, and why experts will always matter.

Listen to the podcast now on YouTube.

Richard Davies is the CTO of Vera, bringing years of experience in AI, software engineering, and automation for SMBs. He’s an expert in applying machine learning to business operations, and also a former UK breakdancing champion. In this podcast he spoke to Luke Vickers, host of Evolution Inspires.

Some takeaways:

  • Why AI isn’t replacing jobs, but automating repetitive tasks within roles
  • The most effective ways SMBs and their finance partners can use AI today
  • The limits of large language models and why human expertise is still critical
  • How automation can improve client onboarding and financial reporting
  • The future of AI integration in everyday software
  • Why ethical AI development and transparency are essential for long-term impact
  • What’s next for AI’s role in business growth

Find out more about Vera’s AI-powered services for startups, SMBs, and their finance partners.

How do you think AI is reshaping our society right now?

Richard: Most people don’t know that AI has been around for about seven decades. The term “artificial intelligence” was first coined in 1956 by John McCarthy at the Dartmouth Conference.

So technically, it’s not new.

But in the last couple of years—since November 2022—it has gone viral. AI has become synonymous with large language models, vision language models, and multimodal models. These include tools like OpenAI’s ChatGPT and Anthropic’s Claude.

But AI is a much broader field, and it’s been around for a long time. I think we first need to look at the types of problems large language models are meant to solve.

So how do large language models actually work?

Richard: From my perspective—and I know others share this view—large language models can be thought of as “fuzzy information retrieval systems.”

You enter some text, and the model responds by generating tokens—essentially the words or phrases that follow—based on patterns it has learned. It seems like these models can solve problems or reason like humans.

But that’s only because they’ve been trained on such an enormous amount of data.
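To make the next-token idea concrete, here is a minimal sketch: not how production models work internally, but the same principle at toy scale. It counts which words follow which in a tiny corpus, then generates text by sampling from those counts. Everything here (the corpus, the bigram counts) is an illustrative stand-in for the patterns a real model learns over billions of documents.

```python
import random
from collections import Counter, defaultdict

# A toy "training corpus" standing in for the enormous datasets Richard mentions.
corpus = (
    "the model predicts the next token . "
    "the model learns patterns from data . "
    "the next token follows the patterns ."
).split()

# Count which word follows which: a crude stand-in for learned patterns.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit likely next words one at a time, the way an LLM emits tokens."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        words, weights = zip(*candidates.items())
        word = random.choices(words, weights=weights)[0]  # "fuzzy retrieval"
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the model learns patterns from data . the next"
```

Nothing in the sketch “understands” the text; the output can only recombine patterns present in the training data, which is the limitation Richard returns to below.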

What should businesses consider when using AI?

Richard: Whether the problems they’re addressing actually appear in the models’ training data. These models have been trained on essentially everything from the internet, and I assume also on many books, including copyrighted materials and older books that are in the public domain.

They’re good at predicting the next word or token. But they don’t “reason” the way humans do. You can see this in action when you give a language model a common problem—it can solve it.

But when you present an “edge case” or “black swan event”—something entirely new that it hasn’t seen in its training data—it won’t be able to solve that problem.

So, if we think of it as a fuzzy information retrieval system, where is it most useful in society right now? Content generation and translation are two key areas. It’s also useful for tasks like classification and sentiment analysis.

For knowledge workers, AI can create content and categorize information. However, since machine learning models aim to minimize error on their training data, they don’t eliminate errors entirely; they approximate solutions.
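Of the tasks Richard lists, sentiment analysis is probably the quickest to try hands-on. Here is a minimal sketch using the open-source Hugging Face transformers library; the example reviews are invented, and the default pipeline model is whatever the library ships at the time you run it.

```python
# pip install transformers torch
from transformers import pipeline

# Loads a small pretrained sentiment model on first use.
classifier = pipeline("sentiment-analysis")

reviews = [
    "Onboarding was painless and the monthly reports are clear.",
    "Invoices keep getting miscategorized and support is slow.",
]

for review, result in zip(reviews, classifier(reviews)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8} ({result['score']:.2f}) {review}")
```

Note the score: it is the model’s confidence, not a guarantee of correctness, which is exactly the “approximate solutions” caveat Richard just made.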

So you're saying that human oversight is still critical when using AI?

Richard: Yes. Absolutely. While these models can provide a general answer, they may give incorrect responses in specific, nuanced situations.

For example: If you ask the model to write a document, it can draft something useful. But there’s no way to automatically verify the accuracy of its knowledge—it needs to be reviewed by a human.

This is why knowledge-based tasks require human oversight. A draft created by AI may serve as a strong starting point, but a person still needs to review, refine and adjust it.
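That review step can also be enforced in software rather than left to habit. Below is a minimal sketch of such a human-in-the-loop gate; every name here is hypothetical, and the model call is stubbed out.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False

def ai_draft(prompt: str) -> Draft:
    # Stub: a real system would call a language model here.
    return Draft(text=f"[model-generated draft for: {prompt}]")

def publish(draft: Draft) -> None:
    # The gate: nothing ships without explicit human sign-off.
    if not draft.approved:
        raise PermissionError("Draft has not been reviewed by a human.")
    print("Published:", draft.text)

draft = ai_draft("Q3 financial summary for the client")
draft.text += " (figures checked against the ledger)"  # human refinement
draft.approved = True                                  # human sign-off
publish(draft)
```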

Are there areas where they don't need human oversight?

Richard: Well, in places like factories, the scenario is different. We’ve seen a lot of manufacturing work outsourced to countries like India, but I believe we’ll start seeing factories return to their home countries, driven by advancements in vision-based AI models.

In knowledge-based work, there’s no clear way to automatically verify the accuracy of information. But in factories, where tasks like product classification and inspection occur, you can verify results through measurements—weight, thickness, consistency, even X-rays. This allows AI to perform quality checks in ways that can be validated.

In manufacturing, AI can automate tasks that require consistent evaluation, while humans still oversee and approve the work in knowledge-intensive tasks. Over time, I believe factories will become more automated, but there will still be plenty of human oversight to ensure quality.
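Part of why factory tasks automate well is that measurement-based verification is easy to express in code. A minimal sketch follows, with tolerance values and sensor readings that are purely hypothetical.

```python
# Acceptable range for each measurable property (hypothetical values).
TOLERANCES = {
    "weight_g":     (95.0, 105.0),
    "thickness_mm": (1.9, 2.1),
}

def inspect(part: dict) -> list:
    """Return failed checks; an empty list means the part passes."""
    failures = []
    for prop, (low, high) in TOLERANCES.items():
        value = part.get(prop)
        if value is None or not (low <= value <= high):
            failures.append(f"{prop}={value} outside [{low}, {high}]")
    return failures

# Readings would come from scales, calipers, or X-ray sensors on the line.
part = {"weight_g": 98.2, "thickness_mm": 2.3}
print(inspect(part))  # ['thickness_mm=2.3 outside [1.9, 2.1]']
```

Contrast this with the document-drafting case above: here every judgement, whether made by a person or a model, can be checked against a number.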

Luke: I completely get it. And you mentioned that AI has been around for a long time. If I asked my mom what AI is, she’d probably say it’s some kind of household robot trying to take over the world! But today, it’s tools like ChatGPT that are changing the way we work. You also mentioned that true advancements might still take another 100 years.

Are there areas of society where AI won’t have much impact?

Richard: The first neural network algorithms were created in the 1940s by McCulloch and Pitts. Large language models, like ChatGPT, are built on neural networks, but instead of training on a small dataset, we’ve scaled training to the entire internet.

In the last couple of years, the jump from GPT-3 to GPT-4 was huge. But now, I believe we’re starting to see diminishing returns. I predict that large language models will soon plateau. Most companies will have access to similar large language models, and progress will slow down.

To be clear, I don't think that means innovation will stop.

It just means that to advance further, we’ll need a completely new approach or architecture. For example, the Transformer architecture introduced in 2017 enabled breakthroughs like ChatGPT. I think we’ll eventually see something new that pushes us beyond what language models can currently do.

Where do you think we are now?

Richard: Right now, everyone is scrambling to learn how to use AI. In five or ten years, AI will be so embedded in software that people won’t even think about it anymore. When you use your phone, you won’t say, “This app is powered by AI”—it’ll just be there, seamlessly integrated into the experience.

Historically, there have been cycles of hype around artificial intelligence. In the 1980s, for example, expert systems generated a lot of excitement before interest eventually cooled off. Today, the focus is on task automation, not job replacement.

I’ve seen predictions that up to 50% of jobs will be automated. I don’t think that’s realistic. Based on what I’ve seen, I’d estimate that maybe 3% to 4% of jobs could be fully automated.

Most of the automation will focus on specific tasks within jobs, not entire roles.

At Vera, we analyze operations roles and ask, “Which tasks can be automated to add the most value?” We’re not trying to replace people—just make their work more efficient.

What about the future of autonomous vehicles and robots?

Richard: They face a significant challenge: edge cases. On a highway, things are predictable. You drive straight, change lanes, or take an exit. But in a city like London, driving is far less predictable.

Someone might dart across the road unexpectedly, and that’s an edge case AI struggles with because it hasn’t encountered enough similar scenarios in its training data. Take Tesla’s robots, for example.

Right now, their test robots are teleoperated—there’s a human in the background controlling them remotely. Fully autonomous robots in uncontrolled environments are still a long way off. In highly controlled spaces like factories, robots can work autonomously. But as soon as you introduce humans and unpredictable elements, things get complicated.

Do you think we need ethical guidelines or regulations for AI development?

Richard: Yes, I do. At the end of the day, artificial intelligence is software. It performs complex tasks, but it’s still software. We need to approach AI development like software engineering.

I believe there should be formal certifications for engineers working in AI—similar to the certifications doctors or civil engineers need.

We shouldn’t limit AI to people with formal degrees, though. We need to keep the field open to self-taught engineers and independent learners. But there should be an accreditation process that includes training on ethics, bias mitigation, and regulatory compliance.

Companies that fine-tune language models also need to be transparent. Pre-training data can include bias from internet users.

Some companies may also introduce political or corporate biases during fine-tuning. Regulations are needed to prevent that from happening.

Can AI help reduce carbon footprints and promote sustainability?

Richard: Right now, the environmental impact of AI is concerning. Training large language models requires enormous amounts of energy and water to cool data centers. Research shows that sending just 20 to 50 messages through ChatGPT can use up to 500 milliliters of water for cooling.

But I believe that progress is being made. OpenAI has reduced its costs tenfold, and innovations are making data centers more efficient. In the long term, I believe AI can have a positive impact—helping automate sustainable farming, optimize energy consumption, and more. But in the short term, the energy and resource costs are high.

Luke: There’s a lot of discussion around AI and job displacement.

Do you think AI will create more jobs or destroy them?

Richard: It’s unrealistic to assume that most jobs will be fully automated. Knowledge-based work will always require human review. Even if AI gives you a correct answer, you still need to understand the context and validate it.

At least, that's my position and the ethos we embody at Vera.

For example: during the Industrial Revolution, automation didn’t replace people—it increased productivity and created new industries. I believe AI will follow a similar path. There may be temporary disruptions, but in the long run, it will create more jobs, especially in areas like AI management, strategy, and innovation.

In the short term, people will need to learn how to use AI tools effectively. But once they do, productivity will increase, and new opportunities will emerge.

At Vera, our mission aligns with the future we’re rapidly moving toward: a world where AI takes on the repetitive tasks that once slowed us down, freeing people to focus on strategic work and make a greater impact on what truly matters.

Want to Learn More?

Listen to the full podcast for an in-depth conversation about AI’s role in shaping the future of work, business operations and society.

Find your focus.
Hire Vera.