Furnishing your documentation with Q&A software using GPT-3, embeddings, and Datasette
If you’ve spent any amount of time using large language models, you may have wondered whether GPT-3 or ChatGPT could answer questions against a specific, up-to-date collection of text or documentation. As it turns out, there is a clever method for doing just that. This article is based on research by Simon Willison.
Through some blood, sweat and coding, Willison found a shortcut for this process using Datasette, an open-source tool he created for exploring and publishing data. Datasette let him quickly test new artificial intelligence techniques by registering custom SQL functions.
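To get a feel for what a “custom SQL function” means here: under the hood, Datasette plugins register Python functions with SQLite so they can be called from SQL queries. The sketch below uses Python’s standard-library sqlite3 module to show that same mechanism; the `fake_gpt3` function is a stand-in of our own invention, where a real plugin would call the GPT-3 API.

```python
import sqlite3

# Hypothetical stand-in for a GPT-3 call; a real Datasette plugin
# would send the prompt to the OpenAI API here.
def fake_gpt3(prompt):
    return f"(answer to: {prompt})"

conn = sqlite3.connect(":memory:")
# Register the Python function as a SQL function named gpt3() --
# the same SQLite facility Datasette plugins use.
conn.create_function("gpt3", 1, fake_gpt3)

row = conn.execute("SELECT gpt3('What does shot-scraper do?')").fetchone()
print(row[0])  # → (answer to: What does shot-scraper do?)
```

Once a function like this is registered, it can be mixed freely with ordinary SQL, which is what makes Datasette handy for quickly prototyping AI experiments.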
Here’s how to do this using Simon’s Semantic Search Answers Pattern:
- Run a search (either full-text or semantic) against your documentation to find content relevant to the user's question.
- Extract the most relevant parts of those results and combine them into a single block of text.
- Create a prompt by combining that text with the statement "Given the above content, answer the following question: " and the user's question.
- Use the GPT-3 API to process the entire prompt and return a response.
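The steps above can be sketched in a few lines of Python. The search step and the GPT-3 call are stubbed out here; the sample excerpts and the word-count approximation of the 4,000-token budget are assumptions for illustration only.

```python
def build_prompt(excerpts, question, token_budget=4000):
    """Combine search hits into one block of text, then append the
    instruction and the user's question (steps 2 and 3 of the pattern)."""
    context = "\n\n".join(excerpts)
    # Crude guard against the GPT-3 token limit: treat one word as
    # roughly one token, as the article suggests.
    words = context.split()
    if len(words) > token_budget:
        context = " ".join(words[:token_budget])
    return (
        context
        + "\n\nGiven the above content, answer the following question: "
        + question
    )

# Hypothetical search results for illustration.
excerpts = [
    "shot-scraper is a command-line tool for taking screenshots.",
    "It can also execute JavaScript on a page and return the result.",
]
prompt = build_prompt(excerpts, "What does shot scraper do?")
print(prompt)
# Step 4 would send `prompt` to the GPT-3 completions API and
# return the model's response to the user.
```

The key design point is that the model never needs to have been trained on your documentation; the relevant text rides along inside the prompt itself.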
In Simon’s words: ‘This is a form of prompt engineering (and hence is prone to prompt injection attacks, so bear that in mind). You need to fit your overall prompt into the token limit for GPT-3, which is currently 4,000 tokens. A token is more-or-less the same thing as a word.’
To increase your understanding of Simon’s process, we’ve included a short example of the Q&A section he ran against his personal blog.
Q: ‘What does shot scraper do?’
A: ‘Shot-scraper is a command-line tool that helps automate the process of taking screenshots of web pages and scraping data from them using JavaScript. It can be used to take one-off screenshots or take multiple screenshots in a repeatable way by defining them in a YAML file. It can also be used to execute JavaScript on a page and return the resulting value.’
Simon’s method generated that answer automatically from his blog’s content. You can also implement the search step as a semantic search using embeddings. Here’s how Simon describes the process.
‘An embedding is a list of floating point numbers. As an example, consider a latitude/longitude location: it’s a list of two floating point numbers. You can use those numbers to find other nearby points by calculating the distances between them. Add a third number and now you can plot locations in three-dimensional space—and still calculate distances between them to find the closest points.
This idea keeps on working even as we go beyond three dimensions: you can calculate distances between vectors of any length, no matter how many dimensions they have.
So if we can represent some text in a many-multi-dimensional vector space, we can calculate distances between those vectors to find the closest matches.’
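Simon’s latitude/longitude analogy translates directly into code. The sketch below uses tiny three-dimensional vectors as toy embeddings; in practice these would come from an embedding model (such as OpenAI’s embeddings API) and have hundreds or thousands of dimensions, but the distance arithmetic is identical. The document names and numbers are made up for illustration.

```python
import math

# Toy "embeddings" -- real ones come from an embedding model and
# have many more dimensions, but distances work the same way.
documents = {
    "screenshots": [0.9, 0.1, 0.0],
    "databases":   [0.1, 0.8, 0.2],
    "cooking":     [0.0, 0.2, 0.9],
}

def distance(a, b):
    # Euclidean distance generalizes to any number of dimensions.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Assumed embedding of the user's question.
query = [0.85, 0.15, 0.05]
closest = min(documents, key=lambda name: distance(query, documents[name]))
print(closest)  # → screenshots
```

The document whose vector lies nearest the question’s vector is the best semantic match, even when the two share no words in common.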
Now, I’m not a coder, and the rest of Simon’s research goes to a place that, I’m afraid to say, makes my little journalist head spin. So, if you’d like more clarity on who Simon is and what he does, you can find his blog here.