Eight Things to Consider if You're Considering ChatGPT

Since ChatGPT was released by OpenAI last year, large language models like it have gone viral. Cheerleaders extol these AI models as the future of work, the best thing since the invention of the internet, or the invention that changes everything. Detractors point to their gaffes, failures, and “hallucinations.” Both Google and Microsoft have been embarrassed in the last several days by the outputs of their respective chatbots. Lost in this conflict between irrational exuberance and abject pessimism are the issues that come with deploying these models in small- to medium-sized businesses, with their regulated, private, confidential, or personal information.

ChatGPT and its relatives are chatbots that users can converse with. They are based on large language models trained on billions of web pages. These models learn the patterns between words in that huge volume of text so that, given a prompt, they produce fluent text that can be difficult to distinguish from human writing.
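To make that prompt-to-text pattern concrete, here is a minimal sketch that prompts a small open model locally using the Hugging Face transformers library. GPT-2 is a tiny, much older relative of the models behind ChatGPT, so the output will be far less fluent; the prompt text is purely illustrative.

```python
# Minimal sketch: prompt a small language model and let it continue the text.
# GPT-2 is a tiny relative of the models behind ChatGPT, used here only to
# show the prompt-to-completion pattern; requires the transformers package.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("The main benefit of cloud storage is", max_new_tokens=30)
print(result[0]["generated_text"])  # the prompt plus the model's continuation
```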

Here is a list of eight things to think about if you are considering ChatGPT for your business:

1. Cost. Few companies can muster the enormous computing resources needed to train and run a model like ChatGPT on their own infrastructure. The alternative is to use a vendor’s API, which can also be expensive.

2. Privacy. Using a vendor’s public API may mean sending company data outside your own infrastructure, which can violate privacy and security guarantees. Smaller language models can run on reasonably affordable computing resources, but they do not match the accuracy and fluency of the largest available models.

3. Data. Few companies have the millions of text pages needed to train a model from scratch. These models can be “tuned” to specific subject matter, but tuning still requires large volumes of text and substantial effort. Some of the most valuable text a company holds centers on the jargon, abbreviations, and proper nouns peculiar to that company. Language models trained on publicly available text will not include those terms, and so will miss much of their value.

4. Proprietary information. Companies go to great lengths to protect their proprietary information. They need to be sure it is not shared with other parties or, sometimes, even with certain people within their own organization. Publicly trained language models cannot enforce those constraints.

5. Integrity. In an advertisement, Google’s chatbot produced an erroneous answer. The chatbot deployed by Microsoft produced belligerent responses. The stock prices of both Google and Microsoft were severely impacted following their respective gaffes. These models require “guardrails” to ensure that they give accurate and appropriate responses.

6. Maintenance. The currently available models reflect a snapshot in time, but businesses continue to change and develop.  Even small updates to the training or tuning text may require large computing resources and substantial effort.

7. Compliance. Regulatory compliance presents yet another problem. The European General Data Protection Regulation (GDPR), for example, provides a “right to be forgotten.” Because a language model blends text from many records into a single set of parameters, it may be impossible to simply remove one person’s data without retraining or retuning the model.

8. Use cases. Finally, identifying specific use cases for language models can itself be a challenge. Companies are unlikely to use the models’ text generation capabilities to write business plans or other critical documents, but they may use them to improve the quality of search or to provide an automated help-desk chatbot (see the sketch after this list). Microsoft has suggested that these models might be useful for planning and summarization, and some software developers have been using ChatGPT to write snippets of code.
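As a concrete illustration of the help-desk use case in item 8, the hedged sketch below asks a vendor-hosted model to answer a customer question only from a retrieved knowledge-base passage. It assumes the openai Python package; the model name and the KB_PASSAGE text are illustrative assumptions, not a recommendation.

```python
# Sketch of a help-desk chatbot call: the system message restricts the model
# to a retrieved knowledge-base passage, one simple form of "guardrail."
# Assumes the openai package and an OPENAI_API_KEY environment variable;
# the model name and passage are illustrative.
from openai import OpenAI

client = OpenAI()

KB_PASSAGE = "Refunds are issued within 14 days of an approved return request."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Answer using only the passage below. If the answer is "
                    "not in the passage, say you do not know.\n\n" + KB_PASSAGE},
        {"role": "user", "content": "How long do refunds take?"},
    ],
)
print(response.choices[0].message.content)
```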

Beyond generating text, these models also contain useful components that can be deployed in other tasks. For example, at Egnyte we use the core representations (called “embeddings”) from language models in several applications, including document type classification and named entity recognition. We are also testing GPT models in an automated help-desk application and evaluating language model representations for text summarization.
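As a rough illustration of the embedding idea (not Egnyte’s implementation), the sketch below turns documents into fixed-length vectors with an off-the-shelf encoder and fits a simple classifier on top. The package names, model name, and the tiny training set are all assumptions for the example.

```python
# Generic sketch: sentence embeddings as features for a document type
# classifier. Not Egnyte's implementation; assumes the sentence-transformers
# and scikit-learn packages, with an illustrative two-class training set.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works

train_texts = [
    "Invoice #4411, total due net 30 days",
    "Employment agreement between the company and the employee",
    "Invoice #4412, payment due on receipt",
    "Non-disclosure agreement effective as of the signing date",
]
train_labels = ["invoice", "contract", "invoice", "contract"]

clf = LogisticRegression(max_iter=1000)
clf.fit(encoder.encode(train_texts), train_labels)

print(clf.predict(encoder.encode(["Purchase order with payment terms"])))
```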

Because of the privacy concerns outlined above, we are also investigating a proprietary “tiny language model” called CRSP (Constructive Random Semantic Projection). It does not write essays about “vacations from London,” but it preserves confidentiality, improves search results, and can be updated continuously at very low cost.
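CRSP itself is proprietary, so the sketch below shows only the generic, well-known idea its name hints at: randomly projecting high-dimensional term-count vectors into a handful of dimensions while approximately preserving similarity between documents. It uses scikit-learn and should not be read as Egnyte’s actual method.

```python
# Generic random-projection sketch (NOT Egnyte's CRSP): compress sparse
# term-count vectors into a few dimensions while roughly preserving the
# similarities between documents. Assumes scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.random_projection import SparseRandomProjection

docs = [
    "quarterly financial report",
    "annual financial report",
    "hiking trail map",
]

counts = CountVectorizer().fit_transform(docs)  # sparse term counts
vectors = SparseRandomProjection(n_components=5, random_state=0).fit_transform(counts)

# Documents that share terms tend to stay closer after projection.
print(cosine_similarity(vectors[0], vectors[1:]))
```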

Egnyte continues to monitor developments in language modeling and to develop our own models. We are constantly investigating how these models can be safely and effectively applied to support our customers.
