AI Explained: How to increase revenues and lower costs with AI?

Maarten Ectors
9 min read · Jan 8, 2025


A guide for non-technical business managers

Artificial Intelligence can seem like a magical technology that can do everything. Without going into the technical details, how can a business manager understand how to use AI to increase revenues and lower costs, and, more importantly, which areas to avoid because they will only become a money pit or worse?

ChatGPT, GenAI, LLMs and where you should be careful

Let’s start by explaining what AI is. ChatGPT covers only a limited set of use cases. The core technology behind ChatGPT is called a large language model, or LLM. LLMs acquire knowledge during training, mostly from text but sometimes also from images or other document formats. When you ask a question, it is first translated into a representation the LLM understands [i.e. tokens and embeddings], and the model’s trained knowledge is then used to generate a response. “Trained” is the key word, because most LLMs have a cut-off date after which no new information is learned. If you ask about something that happened after this date, the information is simply not present and the LLM might produce an incorrect answer, often referred to as a hallucination. The data used to train the model matters just as much. If the model has only seen male pilots and female stewardesses, it will produce potentially sexist answers; the same goes for skin colour and other potentially discriminatory replies. An LLM also answers whatever it is asked, so if it learnt how to make a bomb, it will give terrorists all the step-by-step instructions. Finally, LLMs often have memory and can store previous questions to use in future answers, which means uploading confidential information is not a good idea. You don’t want a banking support LLM to hand a list of customers who are about to go bankrupt to anybody who asks.
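
To make “translated into a language the LLM understands” concrete, here is a deliberately toy sketch of tokenization: words are mapped to integer IDs before the model ever sees them. The vocabulary and sentences are made up for illustration; real LLMs use subword tokenizers with vocabularies of tens of thousands of tokens, not whole words.

```python
def build_vocab(corpus):
    """Assign each distinct word an integer ID, in order of first appearance.
    Real LLMs use subword tokenizers (e.g. byte-pair encoding) instead."""
    vocab = {}
    for word in corpus.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenize(text, vocab):
    # Unknown words map to -1 here; real tokenizers split them into subwords.
    return [vocab.get(word, -1) for word in text.lower().split()]

vocab = build_vocab("the cat sat on the mat")
print(tokenize("the mat sat", vocab))  # [0, 4, 2]
```

The model only ever works with these numbers, which is also why a word it never saw during training (our `-1` case) is a problem it has to work around.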

To get around core LLM limitations such as knowledge cut-off dates [i.e. no recent insights], lack of access to private data and hallucinations, two main solutions are used. One is called RAG, or retrieval-augmented generation, in which a special type of database [i.e. a vector database] stores internal or non-public information and provides the LLM with relevant insights. Imagine you have thousands of previous customer emails, product documents, support tickets, … The LLM can then go and check them. So if a customer asks a question, the answer could come from a product document, a reply to a previous customer email or a support ticket from another customer. Hallucinations are still possible, so don’t assume this will work 100% of the time. The other solution is to connect outside systems through an API [i.e. application programming interface] or programming code. If you ask ChatGPT for the weather in your city, it will not have learnt that. Instead it will call a weather service API, ask for the forecast and present the answer. LLMs can also generate code, so in theory they can write database queries and retrieve information directly from a database. This can put a heavy load on the database, however, and needs a careful approach. In general, LLMs can generate not only English but, after training, any human or programming language.
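
The RAG idea can be sketched in a few lines: rank stored documents by similarity to the question, then paste the best match into the prompt. Everything here is a toy stand-in — the “embedding” is just a word-count vector and the product documents are invented — whereas a real system would use a trained embedding model and a vector database.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words vector. Real RAG systems use a
    trained embedding model that captures meaning, not word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Similarity between two vectors (1.0 = identical direction)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical internal knowledge base.
documents = [
    "The X200 router supports a maximum of 50 connected devices.",
    "Refunds are processed within 14 days of receiving the returned item.",
    "Our support desk is open Monday to Friday, 9am to 5pm.",
]

def retrieve(question, docs, top_k=1):
    """Return the stored documents most similar to the question."""
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(question, docs):
    """Paste retrieved snippets into the prompt so the LLM answers from
    them instead of from its (possibly stale) training data."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many devices can the X200 handle?", documents))
```

The final prompt would then be sent to the LLM; the retrieval step is what grounds the answer in your own data.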

You will hear the term GenAI, or generative AI. This means AI that is able to generate things: text, programming code, images, videos, … LLMs often take the instructions and either generate the text, code, translations, … themselves or hand the task to other systems specialised in it.

A more advanced form of LLMs are AI agents. They use the language understanding of LLMs but can also use APIs to do things. If you need to make a reservation to go to the movies, they can ask you simple questions, e.g. how many people are going and when you want to go. They can even check your diary to suggest a good moment, look up available movies, IMDB ratings and your past preferences, and make a suggestion. Finally they can book the tickets for you. AI agents are still cutting/bleeding edge, so lots of hype about what they will be able to do is being published, but reality is still trying to catch up.
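
The agent pattern boils down to: the LLM decides which tool to call and with what arguments, and plain code executes the call. In this toy sketch the “plan” is hard-coded to stand in for the model’s output, and both tools (the diary lookup and the booking call) are hypothetical placeholders for real calendar and ticketing APIs.

```python
def check_diary(date):
    # Hypothetical calendar lookup; a real agent would call a calendar API.
    busy = {"2025-01-10": ["dinner 19:00"]}
    return busy.get(date, [])

def book_tickets(movie, date, seats):
    # Hypothetical booking call; a real agent would hit a ticketing API.
    return f"Booked {seats} seats for {movie} on {date}"

# The tools the agent is allowed to use.
TOOLS = {"check_diary": check_diary, "book_tickets": book_tickets}

def run_agent(plan):
    """Execute each tool call the model asked for, in order."""
    results = []
    for step in plan:
        tool = TOOLS[step["tool"]]
        results.append(tool(**step["args"]))
    return results

# In a real agent, an LLM would produce this plan from your request.
plan = [
    {"tool": "check_diary", "args": {"date": "2025-01-11"}},
    {"tool": "book_tickets",
     "args": {"movie": "Dune", "date": "2025-01-11", "seats": 2}},
]
print(run_agent(plan))
```

The hard (and hyped) part is the planning step, which this sketch skips entirely; the execution loop itself is ordinary software.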

So when, and for what, should you use LLMs to get better business results?

LLMs are great at responding to human queries, so anywhere faster knowledge distribution can be translated into lower costs or higher revenues, they are a great option to explore. Think about customer support. If you currently work with human support agents, you can let them ask questions of an internal LLM which has access to systems holding customer or product data. More advanced solutions could have the LLM make a suggestion and the human support agent validate or overwrite it. This keeps hallucinations away from customers and allows the overwrites to be used to fine-tune and improve the LLM responses over time. When the LLM answers can be trusted [for 99.999% of cases, or whatever your risk threshold is], they could offer direct support replies to customers, either for any query or for certain categories of inquiries. LLMs can also be used to qualify incoming requests and assign them to the right department. You could even integrate internal systems so the LLMs become like AI agents and can take action, e.g. enable a new service for a customer. This last example shows how LLMs can enable faster revenue growth, especially if they can help customers with product recommendations as well.
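
The triage idea above — auto-route when confident, escalate to a human when not — can be sketched as follows. The keyword matching is a deliberately crude stand-in for the classification an LLM would do; the department names and keywords are invented for illustration.

```python
# Toy triage: score an incoming message against department keywords and
# only auto-route when confidence is high; otherwise escalate to a human.
# A real system would ask an LLM to classify, not match keywords.
DEPARTMENTS = {
    "billing": {"invoice", "refund", "charge", "payment"},
    "technical": {"error", "crash", "login", "bug"},
}

def triage(message, min_hits=2):
    words = set(message.lower().split())
    scores = {dept: len(words & keywords)
              for dept, keywords in DEPARTMENTS.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= min_hits:
        return best            # confident: auto-assign
    return "human_review"      # not confident: a person decides

print(triage("I was charged twice, please refund the payment"))
```

The `min_hits` threshold plays the same role as the 99.999% risk threshold in the text: below it, a human stays in the loop.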

Customer support is not the only great use case; LLMs can be used in sales and marketing as well. They can help sales teams respond to RFP questions when working on complex bids. Product documentation, financial information, pricing documents, … can all be made available to an LLM, which can take an RFP document and fill it out in seconds. The sales team still has to validate that there are no hallucinations, but responses can be greatly accelerated and the consistency and quality of answers improved. LLMs can generate personalised emails and messages for (potential) customers to encourage them to buy more or try a new product. Programmatic advertising can be fine-tuned automatically to run a more effective marketing campaign.

Lots of other areas of the business can save costs and be made more productive with LLMs. LLMs can offer general purpose solutions that can be applied in multiple areas of your business:

  • Data reporting: instead of having specialised reporting tools for different systems, employees can ask questions in plain language and the LLM can go into the different systems and present an answer.
  • Translation: any documents which need translating can be translated automatically.
  • Summarising: LLMs are great at extracting the core information out of long documents or meeting notes.
  • Search: LLMs can respond to complex questions, often also finding similarities between documents or images.
  • Design: any images, videos, sound/music but also 3D models can be generated based on text instructions.
  • Copilot: LLMs can assist customers. If your online products allow customers to personalise your offering, then a copilot can assist them and increase your revenues.

Lots of industry specific or departmental use cases are possible as well:

  • Legal: LLMs can read contracts and flag issues, or generate them [so a lawyer still has to review them!].
  • HR: LLMs can write job specs and review candidate CVs.
  • Finance/Risk/Compliance/Investment: LLMs can retrieve financial data and generate reports. They can read regulatory filings. LLMs can read investment and market reports and flag risks or even make investments. They can read social networks and understand trends and the overall mood.
  • IT: Programming code, finding bugs, generating automated tests,…
  • Many many more use cases are possible.

Beyond LLMs

AI is more than LLMs. As said before, AI can be used to generate images, sounds, documents and, more recently, videos. This means that not only can an LLM generate the text of a PowerPoint, AI can also generate the images, music, voice-over and soon videos. Editing images and videos is possible too. Transcribing speech to text, i.e. speech recognition, allows meeting minutes to be written automatically. Text-to-speech [TTS] enables voice to be added to many business use cases. Combining speech recognition and TTS allows even customer phone interactions to be automated, or people to simply talk to AI. Combining this with LLMs and AI agents will make human-to-machine interaction a lot more natural. We will soon be talking to our virtual AI assistants. Understanding what is in an image or video and describing it in text can be useful for lots of automation as well.

LLMs are not always good at understanding numbers, and in general they are not yet good at maths. Generating images and videos with text in them can also still be problematic. DALL-E was famous for generating people with six or more fingers on each hand. Today, generating images with lots of text in them can still go wrong, and we are likely to see lots of AI videos with issues in 2025.

To address these and other issues let’s explore some other AI technologies:

Data prediction and time series

Data prediction, and especially time series prediction, forecasts future sales, inventory levels, demand, … and other business metrics. Time series predictions can be really helpful to save costs by not over-ordering, or to make sure sales are not lost due to out-of-stock problems. Predicting the stock market is theoretically possible, but unless you work in high-frequency trading you are unlikely to be successful.
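
A minimal sketch of the forecasting idea, using invented monthly sales figures: predict next month as the average of recent months. This is the simplest possible baseline; production forecasting would use models such as ARIMA or gradient-boosted trees that handle trends and seasonality.

```python
def moving_average_forecast(history, window=3):
    """Forecast the next period as the mean of the last `window` periods.
    A deliberately simple baseline, not a production model."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical units sold over the last six months.
monthly_units_sold = [120, 135, 128, 140, 152, 149]

forecast = moving_average_forecast(monthly_units_sold)
print(round(forecast, 1))  # mean of 140, 152, 149 -> 147.0
```

Even this crude forecast is enough to drive a reorder rule ("order when predicted demand exceeds stock on hand"), which is where the cost saving comes from.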

Anomaly detection

A special class of time series prediction is anomaly detection, in which unusual activity can be spotted. Fraudulent transactions and machinery that needs preventative maintenance before a predicted outage are typical use cases of this class of AI.
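
The core idea can be sketched with a z-score: flag any value that sits far from the average of its series. The transaction amounts are invented, and real fraud or maintenance systems use far richer models that account for trends and seasonality; this only illustrates the "unusual versus usual" principle.

```python
import statistics

def find_anomalies(values, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the mean.
    A toy baseline; real anomaly detection models trends and seasonality."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical daily transaction counts; the 480 is the odd one out.
daily_transactions = [102, 98, 105, 99, 101, 97, 100, 480]
print(find_anomalies(daily_transactions))
```

The same pattern works for machine sensor readings: a vibration value far outside the usual band triggers a preventative-maintenance alert.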

Recommendations

We all have experienced how Amazon recommends what others liked or bought together with what we are buying or looking at. Recommendations are a key way to increase revenues when it comes to ecommerce.
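
A bare-bones version of "bought together" can be built from order history alone: count how often product pairs co-occur in the same basket and recommend the most frequent partners. The orders below are invented, and real recommenders (including Amazon's) use far more sophisticated collaborative-filtering models; this shows only the co-occurrence principle.

```python
from collections import Counter
from itertools import combinations

# Each past order is a set of product IDs (hypothetical data).
orders = [
    {"coffee", "filters", "mug"},
    {"coffee", "filters"},
    {"coffee", "mug"},
    {"tea", "mug"},
]

# Count how often each pair of products appears in the same order.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def recommend(product, top_n=2):
    """Products most often bought together with `product`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("coffee"))
```

Dropping these suggestions into a product page ("customers also bought…") is one of the most direct revenue uses of AI in e-commerce.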

Videos and Cameras

An area which is growing in importance every day is the capability of AI to understand video streams and the things that happen in them. People can be distinguished from animals. Faces can be located and recognised, and car license plates read. Human poses can be detected, e.g. a patient or visitor on the ground can trigger an alert. Fire, smoke, water, rodents, … can all be detected. Video activity recognition is an active field of innovation: how many cups of coffee barista A makes versus barista B can provide insight into worker productivity or help with process optimisation, and checking that all workers are wearing protective gear automates compliance reporting.

Robotics

Cars that drive themselves. Industrial robots creating personalised products. All of these and more will be available to most of us within a few years. A time in which robots and machines can do most human activities better than we can is no longer a question of if but when.

Don’t spend on AI without knowing the problem and measuring the outcome

The best way to waste money on AI is not to clearly define the problem you want to solve and not to measure the outcome. AI can be extremely expensive. AI hardware from Nvidia costs tens of thousands of dollars/euros/pounds per unit. The latest LLMs need hundreds, if not thousands, of such units and weeks to months of training. AI consultants are some of the most costly in the IT industry.

Business managers are advised to use a multi-step process to implement AI. Start from the business challenge. Have an expert define a solution strategy. Let a small team create a prototype. Launch a pilot. All before scaling out the AI to customers, partners and employees. This way, investment in problems that are too expensive or too hard to solve can be stopped fast and relatively cheaply. AI is a journey of learning, not a one-stop solution.

Be careful about “ChatGPT wonders”. Integrating ChatGPT into a solution is very fast and relatively cheap. Unfortunately OpenAI, the company behind ChatGPT, is still a loss-making venture and has had many issues with reliability. Prices are likely to increase, similar to how Uber subsidised growth initially but is now often more expensive than local taxis at peak moments. Privacy issues, sharing of confidential information and other data issues are real. How is your business going to offer a reliable solution if ChatGPT goes down frequently?

Hopefully this is helpful. Do you need help defining your AI strategy, creating prototypes, running pilots or rolling out solutions? Always happy to chat.

If you are interested in the more technical parts of AI, you can read my more technical blog posts or forward them to co-workers.


Written by Maarten Ectors

Maarten leads Profit Growing Innovator. The focus is on helping businesses strategically transform through innovation and outrun disruption.
