Contact

AI

Sep 05, 2023

Prompt Engineering: A Guide to Navigating AI’s Language

Elena Devnina

Elena Devnina

Prompt Engineering: A Guide to Navigating AI’s Language

Recently, we hosted the second webinar in our AI readiness series, ‘Get AI Ready! Prompt Engineering’ with Keir Bowden. In just an hour, Keir Bowden, Credera’s CTO and Salesforce MVP, guided us through various aspects of prompt engineering, providing practical examples along the way. 

In this blog post, we’ve summarised the key topics covered during the webinar. You can watch the recording of the webinar on our YouTube channel. Be sure to check the time codes in the description to quickly navigate to the topics that interest you the most.

Demystifying the AI Landscape: A Quick Primer 

Before diving into the heart of prompt engineering, let’s set the stage with a quick primer on AI and explain two of the most important AI-related terms: GPT and LLM.  

GPT, or Generative Pre-trained Transformers, is a model that, in essence, creates new data based on existing patterns and prompts. While it might seem to possess a touch of genuine intelligence, it actually doesn’t, its core function revolves around predicting patterns from provided data.

A Large Language Model (LLM) is a neural network that emulates the brain’s neural layer, often comprising billions of parameters, with each parameter acting as a neuron.

When provided with a prompt, the input layer is divided into tokens, typically about four characters each, although that’s not an exact science. These tokens are then distributed to individual neurons. Each neuron decides whether to allow the token to proceed or not based on a predefined threshold.

Large-Language-Model-neural-network
Large-Language-Model-neural-network

This process happens in parallel, with numerous neurons processing simultaneously. Eventually, the LLM generates an output. To determine the best response, the model examines which output has the highest probability and selects it. In essence, LLMs leverage this intricate neural network structure to understand and respond to prompts, making them powerful tools in natural language processing tasks.

Prompt Engineering: Crafting AI Interactions with Precision 

Prompt Engineering is the process of refining interactions with AI systems such as ChatGPT to produce optimal responses. It’s essentially about asking the right questions to get the responses you want. This approach holds great potential, especially in the business context, but it’s important to understand that prompt engineering is not a job! In the future, it’s likely to become a skill or duty integrated into various roles rather than a distinct profession. Admins, developers, and even marketers will incorporate prompt engineering into their tasks. And as AI models improve, this practice will become second nature, and everyone will wonder why we needed specialists for it in the first place.

Prompt-engineering-is-not-a-job
Prompt-engineering-is-not-a-job

The Anatomy of an Effective Prompt 

The secret to effective prompt engineering is knowing what makes up a good prompt. It starts with a clear and concise instruction, setting the task or question clearly. Giving examples of the kind of response you want can really help AI models understand what you’re looking for. Adding context, telling it how long you want the answer to be, and even suggesting the tone you want can make the prompt even more powerful. Another useful trick is to give the prompt a persona, like a specific job or role, to make sure the responses match that character. All these elements come together to create prompts that get you the results you’re after.

Advanced Techniques in Prompt Engineering 

There are several advanced techniques that can take prompt engineering to the next level. For example, chain of thought prompting is a powerful tool that helps you understand how AI reaches its conclusions. You simply ask it to break down complex requests into step-by-step sequences, showing you its thought process. Remember, with older models, providing examples on how to solve these complex problems can be very helpful. 

Next up are parameter adjustments, including Temperature and Top P values. They allow us to fine-tune the creativity of AI responses. To see practical examples in action, check out the webinar’s on-demand video

What’s more, AI can assist you in crafting better prompts itself. Just ask it for help. Say, “Ask me some questions” or “Generate this, then ask me a question that could improve it.” This iterative process can really help you refine your prompts. 

Another thing you can do is having the AI pick its favourites. Say, “Generate 10 possible answers and pick your favourite” and follow up with “Explain why.” This can give you better idea on how to enhance your prompts, and it’s an important tool in your prompt engineering toolkit.

Text to HTML 

But where prompt engineering really shines is when we turn text into HTML content. Through practical examples demonstrated during the webinar, you can see how ChatGPT, guided by the right prompt, can in a matter of seconds produce HTML webpage markup and styled content for landing pages. The resulting landing page might not win design awards, but it’s incredibly handy for demos and quick setups. This is a really good demonstration of ways prompt engineering can help you streamline routine tasks.

Text-to-HTML-prompting
Text-to-HTML-prompting

Prompting for Evil: Navigating Ethical Waters 

While the promise of prompt engineering is exciting, it’s essential to address potential pitfalls. Adversarial prompts, which involve injecting extra instructions into a prompt, can raise ethical concerns. Similarly, such techniques as prompt leaking, where the model interprets added text as instructions rather than part of the prompt, and jailbreaking, or tricking the model into responding to an unethical prompt, are also something to be aware of.

While platforms like Salesforce’s Einstein GPT Trust layer provide substantial protection, a deeper understanding of these concepts is still very important.

Start Exploring: Tools and Resources 

Ready to embark on your prompt engineering journey? Here are some resources to help you start.

  • ChatGPT: It’s an excellent opportunity to get to know this new technology, and the best part? It’s free! 

  • OpenAI Playground: While this one comes with a price tag, it provides greater control over your AI environment without breaking the bank. For a glimpse into how much our host, Keir Bowden, invested in showing you practical prompt engineering examples on the OpenAI playground, check out the last 10 minutes of the recording here. 

  • Anthropic Claude: Another AI model that especially excels in summarising the material. 

  • Hugging Face: AI community / machine learning platform. A valuable resource for broader AI model exploration, offering lots of information and many apps to try. 

Good luck with creating the best prompts! Stay tuned for the final chapter in our webinar trilogy, with even deeper insights into the world of AI and its ever growing possibilities. Don’t miss the announcement on our social media channels:

    Conversation Icon

    Contact Us

    Ready to achieve your vision? We're here to help.

    We'd love to start a conversation. Fill out the form and we'll connect you with the right person.

    Searching for a new career?

    View job openings