(*or at least, an affordable and accessible alternative)
You provide an input text, and we complete it following its pattern.
As simple as a REST API call.
It's not GPT-3. But you can fine-tune GPT-2 with your own dataset.
No more never-ending waiting lists.
(*Approximately, 1 word = 1.4 tokens; tokens are pieces of words.
For example, the word “lower” gets broken up into the tokens “low” and “er”.
Tokens are counted for both the input prompt and the predicted text.)
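The 1-word-to-1.4-tokens rule of thumb above can be sketched as a quick cost estimator. This is only a rough illustration of the approximation, not the actual GPT-2 tokenizer, and the function name is hypothetical:

```python
def estimate_tokens(text: str) -> int:
    """Roughly estimate the token count of a text.

    Uses the rule of thumb that 1 word is about 1.4 tokens;
    real byte-pair encoding can give a different count.
    """
    words = len(text.split())
    return round(words * 1.4)


# Estimate total billable tokens: prompt plus predicted text.
prompt = "Once upon a time"
completion = "there was a kingdom by the sea"
total = estimate_tokens(prompt) + estimate_tokens(completion)
```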