GPT-2 (nonfiction)
'''GPT-2''' ('''Generative Pretrained Transformer 2''') is a language model trained on 40 GB of text scraped from web pages that were linked to from Reddit posts with a karma score of at least 3.
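As a rough illustration of this data-collection step (a sketch only; OpenAI's actual WebText pipeline has not been released), the snippet below filters outbound links by the karma score of the Reddit posts that submitted them. The <code>posts</code> structure and <code>filter_links</code> helper are hypothetical:

<syntaxhighlight lang="python">
# Hypothetical sketch of a WebText-style filter: keep only outbound links
# from Reddit posts whose karma score meets a minimum threshold.
# The `posts` data structure is illustrative, not OpenAI's actual pipeline.

MIN_KARMA = 3  # GPT-2's training data used links from posts with at least 3 karma

posts = [
    {"url": "https://example.com/article", "karma": 7},
    {"url": "https://example.org/low-quality", "karma": 1},
]

def filter_links(posts, min_karma=MIN_KARMA):
    """Return URLs from posts that meet the karma threshold."""
    return [p["url"] for p in posts if p["karma"] >= min_karma]

print(filter_links(posts))  # ['https://example.com/article']
</syntaxhighlight>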
The developers at OpenAI describe GPT-2 as "a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training."
Because generated text is sampled from the model's probability distribution over possible next tokens, the same prompt typically produces a different response each time it is entered.
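A minimal sketch of this behavior, assuming the Hugging Face <code>transformers</code> library (an assumption; the article does not name any library): enabling sampling during generation draws each token from the model's output distribution, so repeated runs on the same prompt typically differ.

<syntaxhighlight lang="python">
# Minimal sketch: sampled generation from GPT-2 via the Hugging Face
# `transformers` library (an assumption; the article names no library).
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("The robot opened the door and", return_tensors="pt")

# do_sample=True draws tokens from the model's probability distribution,
# so each run typically yields a different continuation of the same prompt.
for _ in range(2):
    output = model.generate(
        input_ids,
        max_length=30,
        do_sample=True,       # stochastic sampling rather than greedy decoding
        top_k=50,             # restrict sampling to the 50 most likely tokens
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))
</syntaxhighlight>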
== In the News ==
== Fiction cross-reference ==
== Nonfiction cross-reference ==

* [[Artificial intelligence (nonfiction)]]
* [[Computation (nonfiction)]]
* [[Machine learning (nonfiction)]]
== External links ==

[[Category:Nonfiction (nonfiction)]]
[[Category:Artificial intelligence (nonfiction)]]
[[Category:Computing (nonfiction)]]