This post was partially written by and edited using ChatGPT.
Last month the Resolver development team held its first official hack week, dedicated to exploring how OpenAI's tools could help us improve our software.
What is OpenAI?
You’ve probably heard of OpenAI since their ChatGPT program has made headlines frequently in recent months.
According to my assistant editor:
OpenAI is an artificial intelligence research laboratory consisting of a team of experts and researchers in the field of machine learning, natural language processing, robotics, and other related areas. The organization aims to develop advanced artificial intelligence in a safe and beneficial manner to benefit humanity as a whole. They conduct cutting-edge research and also develop various AI-based products and tools such as language models, chatbots, and robotics applications.
ChatGPT
How could OpenAI help us?
Prior to getting stuck in and integrating with OpenAI’s APIs, we outlined our aims:
- Improve accessibility for consumers
- Lower overhead for engineering
- Simplify dependencies on external services
These aims inspired several prototype projects.
Simplification of our data infrastructure
Within our platform, we employ an extensive suite of applications to process data. One of these applications is sentiment analysis, which detects emotions in text.
We found that ChatGPT can accurately extract sentiment from text with minimal technological barriers. As a result, we can potentially replace a significant amount of infrastructure with simple API calls.
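As a rough illustration, the whole pipeline can collapse into a single request. The sketch below uses the github.com/sashabaranov/go-openai Go client (the same library as the code sample later in this post); the prompt wording and the positive/negative/neutral label set are illustrative choices rather than a production configuration.

// Minimal sketch: sentiment analysis as a single chat completion call,
// using the github.com/sashabaranov/go-openai client. The prompt wording
// and the label set are illustrative rather than a production setup.
package main

import (
    "context"
    "fmt"
    "os"

    openai "github.com/sashabaranov/go-openai"
)

func sentiment(client *openai.Client, text string) (string, error) {
    resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
        Model: openai.GPT3Dot5Turbo,
        Messages: []openai.ChatCompletionMessage{
            {
                Role:    openai.ChatMessageRoleSystem,
                Content: "Classify the sentiment of the user's message as positive, negative or neutral. Reply with the label only.",
            },
            {Role: openai.ChatMessageRoleUser, Content: text},
        },
    })
    if err != nil {
        return "", err
    }
    return resp.Choices[0].Message.Content, nil
}

func main() {
    client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))
    label, err := sentiment(client, "My parcel arrived three weeks late and nobody replied to my emails.")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    fmt.Println(label) // e.g. "negative"
}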
Writing formal emails based on question prompts
We discovered that ChatGPT could compose formal emails to companies using user input obtained from simple forms. However, a drawback was that it often editorialised complaints by adding information not present in the provided data.
We believe that supplying additional contextual messages could constrain the language model to adhere to the given information. However, we also anticipate that the output would still require editing and approval by the end-user.
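As a rough sketch of how that constraint could be expressed (reusing the client, imports and go-openai library from the sentiment example above), the extra system message below is one way of telling the model to stick to the facts the user supplied; the form fields and prompt wording are hypothetical.

// Hypothetical sketch: drafting a formal complaint email from form input.
// The field names and prompt wording are illustrative; the system message
// is one way of discouraging the model from inventing details.
func draftEmail(client *openai.Client, company, issue, redress string) (string, error) {
    resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
        Model: openai.GPT3Dot5Turbo,
        Messages: []openai.ChatCompletionMessage{
            {
                Role:    openai.ChatMessageRoleSystem,
                Content: "Write a short, formal complaint email. Use only the facts provided by the user; do not add dates, amounts or events that are not given.",
            },
            {
                Role:    openai.ChatMessageRoleUser,
                Content: "Company: " + company + "\nIssue: " + issue + "\nRequested redress: " + redress,
            },
        },
    })
    if err != nil {
        return "", err
    }
    return resp.Choices[0].Message.Content, nil
}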
Accessibility improvement for case management
Raising and managing cases over the phone
Currently, our applications can only be accessed through a web browser, which can be restrictive for some users. Although automated phone lines have been around for a while (since the late ’70s), their rigid structure may not always accommodate the complexity of case information.
We discovered that by using OpenAI's chat completion API, we could analyse transcriptions of phone calls and map them to a JSON structure, enabling us to handle cases discussed over the phone or on video calls more efficiently. The generated JSON could then be submitted to our API, along with the original transcription to help with reviewing its accuracy.
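To make that concrete, the template the model is asked to fill in might look something like the constant below. The field names are hypothetical rather than Resolver's real API schema; it is simply the sort of value the jsonTemplate referenced in the code sample later in this post could hold.

// Hypothetical JSON template for a phone-raised case. The fields are
// illustrative only; this is the kind of value jsonTemplate could hold
// in the code sample shown later in this post.
const jsonTemplate = `{
    "caller_name": "",
    "company": "",
    "issue_summary": "",
    "requested_redress": 0.00,
    "contact_email": ""
}`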
Summarising cases
Complaint and appeal cases often keep growing until they are resolved. This results in long, hard-to-follow message threads containing a lot of important information.
ChatGPT’s natural language processing capabilities make it particularly adept at summarising large amounts of information into a succinct format. By leveraging this strength, we were able to efficiently transform complex case details into a more digestible format, allowing our customer service team to quickly and accurately assess the current state of a case.
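A summarisation call can follow the same pattern (and reuse the imports and client setup) as the earlier sketches; the prompt wording and bullet-point format below are illustrative, not the configuration we settled on.

// Illustrative sketch: summarising a long case thread for customer service.
// caseThread would be the concatenated message history; the prompt wording
// is an example only.
func summariseCase(client *openai.Client, caseThread string) (string, error) {
    resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
        Model: openai.GPT3Dot5Turbo,
        Messages: []openai.ChatCompletionMessage{
            {
                Role:    openai.ChatMessageRoleSystem,
                Content: "Summarise the following case thread in five bullet points, covering the issue, key dates, and the current state of the case. Use only information from the thread.",
            },
            {Role: openai.ChatMessageRoleUser, Content: caseThread},
        },
    })
    if err != nil {
        return "", err
    }
    return resp.Choices[0].Message.Content, nil
}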
What did we discover?
ChatGPT is a language model developed by OpenAI that uses deep learning techniques to generate human-like responses to text-based prompts. GPT stands for “Generative Pre-trained Transformer,” which refers to the model architecture and the pre-training process it undergoes to learn the patterns and structure of natural language.
ChatGPT
Subjectivity and objectivity
Upon initial experimentation, we observed that ChatGPT would often respond with the statement “As an AI language model, I do not hold personal beliefs or opinions” when posed with questions that may have had subjective answers. This limitation serves as a reminder that ChatGPT is a language model that lacks the ability to think autonomously. As a result, we shifted our focus towards using ChatGPT in a more objective manner, which led us away from considering it as a potential mediator for automated complaints.
ChatGPT excels at extracting and processing data from free-form human-written text. It accurately converts the extracted information into formats that can be processed by our internal applications. This was particularly useful in automating complaint creation from recorded telephone calls, where ChatGPT consistently extracted key issues and requested redresses from transcripts, and structured them to fit our platform’s API.
Structuring your messages
OpenAI's chat completion API includes a crucial feature: it lets you provide context alongside the messages you want processed. This additional information helps the language model handle the request more accurately.
This can be achieved by sending “system” and “user” messages to the API:
req := openai.ChatCompletionRequest{
    Model: openai.GPT3Dot5Turbo,
    Messages: []openai.ChatCompletionMessage{
        // System messages: rules and context the model should follow.
        {
            Role:    openai.ChatMessageRoleSystem,
            Content: "The initiating entity is the CALLER.",
        },
        {
            Role:    openai.ChatMessageRoleSystem,
            Content: "Financial redresses must be a number with two decimal places.",
        },
        {
            Role:    openai.ChatMessageRoleSystem,
            Content: "Please use the following message to fill out this JSON struct: " + jsonTemplate,
        },
        // User messages: the content for the model to process.
        {
            Role:    openai.ChatMessageRoleUser,
            Content: "Insert transcription here",
        },
    },
}
It appears that the “system” role influences how the language model will respond, and “user” messages are prompts for it to process.
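For completeness, sending the request built above and reading the reply looks roughly like this with the github.com/sashabaranov/go-openai client (the library the identifiers in the sample come from), reusing the imports and setup from the earlier sketches; error handling is kept minimal for brevity.

client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))
resp, err := client.CreateChatCompletion(context.Background(), req)
if err != nil {
    fmt.Fprintln(os.Stderr, err)
    return
}
// The reply should contain the filled-in JSON described by jsonTemplate.
fmt.Println(resp.Choices[0].Message.Content)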
Data security concerns
As a company, we place great emphasis on ensuring the security of our users’ data. However, using an external platform such as OpenAI poses a challenge because we can’t control how our data is handled, which conflicts with our company ethos.
While OpenAI's platform is useful for processing text, any integration into our own platform would require users to opt in. For those who decline, we would have to fall back on our existing applications, so our infrastructure would not be simplified after all.
The API’s output is only as good as your input
ChatGPT's responses depend heavily on the prompts it is given. As a language model it does not inherently possess knowledge; it has been trained to predict patterns in language from a vast amount of data, so it can only generate responses from the information and context it receives. The more precise and informative the prompts, the better the responses, which makes it crucial to frame prompts so that the model can accurately understand the task and produce output that meets the desired objectives.
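As a small, made-up illustration of what "precise and informative" means in practice, compare two ways of asking for the same summary; the second constrains both the form and the content of the answer.

// Illustrative only: the same summarisation task asked vaguely versus precisely.
const vaguePrompt = "Summarise this complaint."
const precisePrompt = "Summarise this complaint in three bullet points covering " +
    "the issue, the dates involved and the redress requested. " +
    "Do not add information that is not in the text."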
Conclusion
In our opinion, ChatGPT and OpenAI have the potential to enhance some of Resolver's processes, although end-users would need to opt in before we could use them. And while these tools can generate output with considerable accuracy, they cannot replace the value of human interaction.
Moreover, we anticipate that their output would require extensive monitoring and review to ensure it meets the company's standards. So while we recognise the potential benefits of these tools, we also acknowledge that they cannot replace human judgement in many respects.