OpenAI is an artificial intelligence lab in San Francisco that developed a language model called GPT-3 (Generative Pre-trained Transformer 3), which generates human-like text. The API entered beta on June 11, 2020. This blog will explore some of the capabilities and drawbacks of GPT-3.
What Does This Look Like?
GPT-3 is trained to recognize and reproduce a wide range of linguistic patterns. These include, but are not limited to, solving language and syntax puzzles, answering medical queries, and converting a passage from one style to another. For example, a Twitter user asked the model to translate “everyday” language into legal jargon, turning “my landlord didn’t maintain the property” into “the defendants have permitted the real property to fall into disrepair and have failed to comply with state and local health and safety codes and regulations”. These capabilities are quite impressive, and the output reads much like a human translation.
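A transformation like this is usually driven by a few-shot prompt: you show the model one or more example pairs and let it complete the pattern. The sketch below illustrates the idea with OpenAI's Python client; the prompt wording, the second input sentence, and the sampling parameters are illustrative assumptions, not the exact prompt the Twitter user ran.

```python
import os

def build_legalese_prompt(plain_text: str) -> str:
    """Build a one-shot prompt asking the model to restate plain
    English in legal jargon. The worked example is the landlord
    sentence from the blog post above."""
    return (
        "Rewrite the following plain-English statements as formal legal language.\n\n"
        "Plain: My landlord didn't maintain the property.\n"
        "Legal: The defendants have permitted the real property to fall into "
        "disrepair and have failed to comply with state and local health and "
        "safety codes and regulations.\n\n"
        f"Plain: {plain_text}\n"
        "Legal:"
    )

# Hypothetical input sentence for illustration.
prompt = build_legalese_prompt("My neighbor's dog keeps barking all night.")

# The actual API call requires a key, so it is guarded here and the
# sketch simply prints the prompt when run offline.
if os.environ.get("OPENAI_API_KEY"):
    import openai  # pip install openai
    completion = openai.Completion.create(
        engine="davinci",   # GPT-3 base model available in the 2020 beta
        prompt=prompt,
        max_tokens=80,
        temperature=0.3,
        stop=["\n"],
    )
    print(completion.choices[0].text.strip())
else:
    print(prompt)
```

The single Plain/Legal pair is what tells the model which style to imitate; no fine-tuning is involved, which is what made demos like the Twitter example so easy to produce.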
That said, GPT-3 has notable shortcomings. Given that the program is still in an experimental phase, it is bound to have limitations, including weak biological, physical, social, and psychological reasoning; a lack of semantic understanding; and biases absorbed from its training data. For example, GPT-3 manipulates text without knowing the meaning of the words, so it has no true semantic representation, and because it learns statistical patterns from web text, it can generate output that is biased, racist, sexist, homophobic, or otherwise offensive.
Wrapping it Up
We have come a long way in developing AI models that mimic human translation. GPT-3 is not the first, and will certainly not be the last, step in the development of such machines. It has both clear strengths and real limitations, which underscores why human and machine translation services still need to work hand in hand.