Using Large Language Models (LLMs)

See also: Understanding LLMs

There is no question that artificial intelligence (AI) has arrived. AI-based large language models (LLMs) like ChatGPT and Google’s Bard have rapidly taken the working and academic worlds by storm. Everywhere you look, organisations and individuals are talking about how they can use AI to improve their efficiency, which usually means getting things done more quickly and with less effort.

There is also no question that these models have the potential to speed up many tasks. However, they need careful supervision to ensure that the output is useful, and that you do not inadvertently plagiarise anyone or anything. This page discusses how you can use large language models to help with two particular areas: writing and brainstorming.

Using LLMs for Writing

As our page on Understanding Large Language Models makes clear, these models are extremely good at writing.

That is, fundamentally, what they are designed to do: to take information and rewrite it into a clearer and more coherent form. They do this by predicting, at each step, which word is most likely to come next. They are therefore effectively ‘parroting’ what people have previously written, and there are implications to that.

For example, you cannot simply ask an LLM to ‘write an article on [subject]’ and expect to get anything worthwhile.

You can use this prompt, but you will probably get a very superficial summary of existing knowledge on that topic. This is unlikely to be worth sharing or publishing anywhere because it will not add anything new. It will also NOT be suitable for academic output, because it will not have any references or in-depth arguments.

Many of these models also do not seem to be able to distinguish between ‘information that they have seen’ and ‘information that they have made up that looks like something that they have seen’. Any output may therefore contain errors in crucial areas of detail, including references (there is more about this in our page on Understanding LLMs). You will need to carry out a thorough fact-check before going any further.

However, there are ways in which you can use LLMs to support your writing. These include:

  • Providing a list of bullet points, and asking the model to work those up into an article

    This is very much a case of ‘garbage in, garbage out’, so you will need to think carefully about your bullet point list. However, if you provide a list that clearly sets out the main points you want to make, with a rough story arc, you are likely to get a coherent article. It will need checking and editing, but plenty of people are already doing this. It is particularly helpful if you are writing in a second language, because the grammar will be correct.

    You probably won’t be able to do this for an academic essay—at least not in a single go. You might be able to do it section by section, but you will still need to edit carefully to ensure that your essay is coherent.

  • Asking questions and prompting improvements

    Ethan Mollick, an academic at the Wharton School of the University of Pennsylvania, wrote an article about using ChatGPT to support writing. He explained that he asked ChatGPT some questions, and then prompted it several times to improve on what it had written. He included the paragraphs it produced in his article, and it is hard to tell them apart from his own writing. He was a bit coy about how much editing went into the process, but this seems like a reasonable starting point for further work.

  • Using the model as an English-language editor

    If you have written something in English, and you want it to be proof-read or edited, you have several options. The grammar and spell-check functions in word processing packages provide a first check. However, large language models may also be useful. You can ask a model like ChatGPT to check your text and suggest amendments to improve the phrasing or grammar. If the text is quite long, you may want to use the ‘compare’ function in your word processing package to check all the proposed changes. You will, of course, also need to be sure that the model has not altered your intended meaning.

Large language models can therefore be used in several ways to support your writing. However, you need to check their output for accuracy and intent. The key is to think of them as tools, not fellow-authors.



Using LLMs as a ‘Thinking Partner’ or for Brainstorming

Another area where writers and content creators may consider using large language models is as a ‘thinking partner’—effectively, to brainstorm ideas for a series of articles, or content on a particular topic.

There is no question that your own original output improves when you have more input. If you don’t have a friend or colleague with whom to brainstorm ideas, ChatGPT or Bard can help. However, much like an in-person brainstorming session, the process may take quite a long time, and it requires some skill.

You are likely to get the best output if you use some of these ideas:

  • Treat the process like a conversation

    When you are brainstorming, you tend to think around the topic. You need to do the same thing with a large language model.

    Ask it different questions, and compare the output.

    For example, suppose you wanted to create a series of blogs on a particular topic. Instead of simply asking the model to suggest a sequence, you could ask questions such as ‘How would you persuade...?’, ‘What do people think is important about ...?’, and ‘Who are the main stakeholders for...?’. This will get you thinking, but will also provide more useful material.

    You may also go down several ‘dead ends’ and ‘blind alleys’—but at least you now know what areas you don’t want to explore further.

  • Provide plenty of context

    Large language models are brilliant in many ways—but they also don’t know anything. One expert describes them as being like bright but naïve interns. This means that you need to give them as much information as possible to get good results.

    In practice, when you use more specific prompts, you get better results. It is also worth exploring the initial results by using prompts such as ‘tell me more about...’.

  • Check your language

    Language really matters to large language models. They are, after all, trying to predict the best answer based on the words you use—and they have no capacity to interpret.

    For example:

    • LLMs don’t seem to be able to refer back to previous answers reliably.

      Suppose you ask the LLM to give you a list of factors that matter to stakeholders dealing with ethical investment. If you then want to explore one point from the list, it is much better to say ‘tell me more about [topic]’ and name something from the list, than to say ‘tell me more about the sixth item on that list’. The model might tell you more about something from the last list, but it probably won’t be the sixth item. It might also tell you about something vaguely related instead.

      If you want to explore something from an earlier answer, it is often better to start again with a stand-alone question.

    • A semi-Boolean approach (“tell me about x but not y”) also doesn’t work.

      The model recognises that both x and y are important in some way, but not how they relate. It will tell you about both, without taking any account of the ‘not’.

Where to be Wary

It is worth being aware of a number of potential pitfalls in the use of LLMs for both writing and brainstorming.

  • Case studies and examples are not necessarily reliable

    If you ask it to, ChatGPT will provide examples or case studies. However, these are not necessarily reliable. Sometimes you may recognise a case from mainstream media articles, or an example may be broadly consistent with the company’s reputation. Often, though, it is hard to find a reliable source using a standard search engine. The ChatGPT site itself contains a disclaimer about the accuracy of information about individuals, companies or events. This suggests that many of these examples may be hallucinations: material that the model has made up.

    There is more about hallucination in our page on Understanding Large Language Models.

  • Editing is crucial

    Editing and checking are absolutely crucial for any output from a large language model.

    This includes fact-checking anything that you did not supply, or that you do not directly know to be true. It also means checking sources for any new claims. Where you supplied the words, you also need to check that the model has not altered your meaning. Remember, it doesn’t know anything, and it doesn’t understand anything: it is simply taking your words and predicting which words fit best with them.

  • Using an LLM is not necessarily a short-cut

    There is a tendency to think of using an LLM as a ‘quick option’ compared with writing your own article or doing your own brainstorming. This is not necessarily true. You can produce something quickly using an LLM, and on the face of it, the result may be acceptable. However, you will get something better if you engage with the process more fully and take longer over it.

    Careful thought and input from you are needed to get the best outcome.


A Final Thought

Your own thought and input are probably the most crucial ingredients when working with large language models.

They are tools to aid your work, not a substitute for your own time and energy. You certainly cannot afford to see them as co-authors or fellow-thinkers, at least not yet.

