8 AI Limitations You Should Know

Are you familiar with the inherent limitations of the AI you are using? These limitations vary depending on the AI model, so it is crucial to be aware of them in order to choose the best AI for a specific task and thus avoid unintended consequences.

It is attractively easy to employ an AI, such as ChatGPT, to assist with one’s work. The revolutionary aspect lies in the fact that non-technical individuals now have access to this immensely powerful technology through a simple input field, reminiscent of Google.

All one needs to do is ask a question or provide an instruction, even in one’s own language. Within seconds, the AI can generate astonishingly well-written and high-quality content, often better and faster than what we humans can produce. However, it is essential to utilize AI thoughtfully.

In this article, I have identified 8 important AI limitations that users should be aware of:

  1. Many AI models do not protect private and sensitive information
Many AI models, including ChatGPT, utilize the data we feed them to train their algorithms. They store our chat logs and data on servers where we have no control over security or who has access. Consequently, they do not comply with the EU’s General Data Protection Regulation (GDPR). It is our responsibility as users to adhere to data protection laws, ensuring that sensitive personal information, such as customer service interactions or customer data segmentation, is not leaked when utilizing AI.

    Moreover, the use of AI can also compromise sensitive business information. It is our responsibility not to accidentally disclose critical business information to competitors through negligent use of AI. This could include using AI assistance to develop a new marketing strategy, create a polished PowerPoint presentation based on internal meeting minutes, or debug computer code for a new product.

    If your workplace does not already have an AI policy, it should certainly be prioritized.
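One practical safeguard such a policy might mandate is redacting obvious personal data before any text is sent to an AI service. The sketch below is a minimal illustration of that idea; the `redact` helper and its patterns are my own assumptions for demonstration, not part of any specific tool, and real-world PII detection needs far more than two regular expressions.

```python
import re

# Deliberately simple example patterns for two common kinds of PII.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders
    so they never leave the organisation inside a prompt."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

safe = redact("Contact Anna at anna@example.com or +45 12 34 56 78.")
print(safe)  # Contact Anna at [EMAIL] or [PHONE].
```

A filter like this can sit between employees and the AI, letting a policy be enforced technically rather than relying on everyone remembering the rules.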

  2. AI hallucinates
    The latest AI language models, such as ChatGPT, can generate high-quality content faster than any human, and for that reason, we utilize them. However, you should be aware that not everything an AI says can be trusted, even if it responds in a particularly convincing manner.

    AI models have a tendency to hallucinate: They can invent their own facts, arguments, and citations – without basis in the dataset they were trained on. Therefore, you should always validate the content an AI generates before using it, as it can harm others, or the company you work for – and thus yourself.

    Read more:
    AI Hallucinates and Invents Facts
    My All-Time Favourite AI Hallucination

  3. AI is limited by humans
We ourselves are a limiting factor when it comes to harnessing AI and getting it to deliver the answers or content we need, in the desired quality. This heavily relies on our ability to ask good questions and provide clear instructions, known as prompts. Prompt engineering is a science and an art form. Furthermore, it is undoubtedly one of the most important skills one can learn at the moment.

    Poor prompts for generating, for example, new online product descriptions or Google Ads can negatively impact revenue, and it is not guaranteed that the time saved outweighs the negative consequences.

    Therefore, educating all employees in prompt engineering is a wise and highly beneficial investment.
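As a sketch of what such training might cover: a disciplined prompt typically separates the role the AI should play, the task, the relevant context, and the desired output format. The labels and the `build_prompt` helper below are one possible convention of my own, not an official standard; models do not require these exact headings, but clear structure tends to improve results.

```python
def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt from four labelled sections."""
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    role="experienced e-commerce copywriter",
    task="Write a 50-word product description for a hiking boot.",
    context="Target audience: beginner hikers; tone: friendly.",
    output_format="One paragraph, no bullet points.",
)
print(prompt)
```

A shared template like this also makes prompts reviewable and reusable across a team, which is where much of the training investment pays off.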

  4. AI is a black box
One of the challenges with AI is that we cannot always ascertain how these models work. We do not know the data they were trained on or the size of the dataset. This has an impact on the quality of the answers an AI can produce. We are, likewise, not informed about their sources of information, making it difficult for us to evaluate the credibility of the responses.

    Furthermore, it is worth mentioning that many AI-assisted tools, which are emerging rapidly, use prompts that we cannot always see, edit or replace with our own higher-quality prompts. This makes them risky to use, especially when integrated into systems that automate entire tasks.

  5. AI has limited memory
    AI models have limited contextual memory, and although they are constantly gaining more, there is an upper limit to the amount of information they can process and the tasks they can perform. Understanding this is important in choosing the right AI model and using it effectively to obtain the highest quality of answers it can produce.

    Read more: How to Manage an AI’s Limited Memory
    Read my LinkedIn post: Why You Should Prompt in English
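One common way of working within that limit is to keep only the most recent part of a conversation inside the model's context window. The sketch below trims older messages to fit a token budget; the rough four-characters-per-token estimate is an assumption for illustration only, as real services count tokens with their own tokenizers.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (an assumption)."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages whose combined estimate fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        cost = estimate_tokens(msg)
        if total + cost > budget:
            break                       # older messages no longer fit
        kept.append(msg)
        total += cost
    return list(reversed(kept))         # restore chronological order

history = ["old question " * 50, "recent question", "latest answer"]
print(trim_history(history, budget=20))
```

The design choice here, dropping the oldest messages first, mirrors what many chat interfaces do silently: the AI has not "forgotten" on purpose; earlier context has simply fallen outside its window.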

  6. When did the AI model learn something new?
    An AI’s knowledge, relevance and reliability also depend on how frequently it is trained and learns new information. Large general models like ChatGPT can take up to six months to train, whilst smaller user-specific and specialized AI models can be trained on-demand and much faster, ranging from a few hours to weeks.

    For instance, ChatGPT has not learned anything new since April 2023, meaning it is unaware of events that have occurred since that time. It is undoubtedly a step forward that ChatGPT-4 can now search the internet and incorporate the results into its responses using Bing.

  7. AI models favor English
    ChatGPT and other major commercial AI models are trained on massive amounts of text data from sources such as the internet, which inherently favors English over other languages.

    This is particularly noticeable in regard to smaller languages and cultures. In fact, there are many small languages, such as regional, indigenous or extinct languages, which ChatGPT does not understand at all.

    This naturally has a substantial impact on the choice of AI model. In terms of language coverage, for example, ChatGPT-4 represents a significant advancement over the free version, 3.5.

  8. AI is constrained by built-in security, morality, and policy
    AI models often have inherent limitations to prevent them from being misused to harm individuals or society. This means that there are certain types of content they will refuse to assist in generating. Naturally, it is reassuring that they decline to aid in the production of chemical or biological weapons.

    However, when it comes to political, moral, or religious topics, striking the right balance can be challenging to say the least. Thus, some of these limitations may conflict with freedom of speech or research.

    The specific boundaries of each AI model are something that may only be discovered over time. One might also encounter attitudes in an AI that reflect the training material it uses, which can be mistaken for safety measures implemented by the manufacturer.

Jakob Styrup Brodersen

I have worked with data-driven online optimization for 20 years in 5 different industries. Now, I am a freelance CRO and AI consultant: I teach and advise on how to utilize the benefits of AI, and I do prompt engineering.