10/08 at 04:00
We’ve created an improved version of OpenAI Codex, our AI system that translates natural language to code, and we are releasing it through our API in private beta starting today.
28/07 at 04:00
We’re releasing Triton 1.0, an open-source Python-like programming language which enables researchers with no CUDA experience to write highly efficient GPU code—most of the time on par with what an expert would be able to produce.
10/06 at 04:00
Our latest research finds we can improve language model behavior with respect to specific behavioral values by fine-tuning on a small, curated dataset.
10/05 at 04:00
We’re proud to announce that the 2021 class of OpenAI Scholars has completed our six-month mentorship program; with stipends and support from OpenAI, the scholars produced open-source research projects.
03/05 at 04:00
OpenAI is committed to developing general-purpose artificial intelligence that benefits all humanity, and we believe that achieving our goal requires expertise in public policy as well as technology. So, we’re delighted to announce that Congressman Will Hurd has joined our board of directors.
25/03 at 04:00
Over 300 applications are delivering GPT-3–powered search, conversation, text completion, and other advanced AI features through our API.
04/03 at 05:00
We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually. This may explain CLIP’s accuracy in classifying surprising visual renditions of concepts, and is also an important step toward understanding the associations and biases that CLIP and similar models learn.
25/01 at 05:00
We’ve scaled Kubernetes clusters to 7,500 nodes, producing scalable infrastructure not only for large models like GPT-3, CLIP, and DALL·E, but also for rapid, small-scale iterative research such as Scaling Laws for Neural Language Models.
05/01 at 05:00
We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the “zero-shot” capabilities of GPT-2 and GPT-3.
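At its core, this zero-shot setup reduces to comparing an image embedding against text embeddings of the candidate category names. A minimal sketch of that comparison step in plain NumPy, assuming the embeddings have already been produced by the model (the function and variable names here are illustrative, not CLIP’s actual API):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    """Pick the label whose text embedding best matches the image.

    image_emb: 1-D array, embedding of the input image (hypothetical).
    text_embs: 2-D array, one row per label, embeddings of prompts
               such as "a photo of a {label}" (hypothetical).
    labels:    list of category names, one per row of text_embs.
    """
    # CLIP scores image-text pairs by cosine similarity, so normalize
    # both sides to unit length before taking dot products.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarity of each label prompt to the image
    return labels[int(np.argmax(sims))]
```

Swapping in a different label list re-targets the same model to a new benchmark, which is what makes the approach zero-shot: no per-dataset training, only new category names.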
05/01 at 05:00
We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.
29/12 at 05:00
It’s been a year of dramatic change and growth at OpenAI.
22/09 at 04:00
OpenAI has agreed to license GPT-3 to Microsoft for its own products and services.
04/09 at 04:00
We’ve applied reinforcement learning from human feedback to train language models that are better at summarization.
09/07 at 04:00
Our third class of OpenAI Scholars presented their final projects at virtual Demo Day, showcasing their research results from the past five months.
20/06 at 04:00
We’re excited to announce that OpenAI is co-organizing two NeurIPS 2020 competitions with AIcrowd, Carnegie Mellon University, and DeepMind, using Procgen Benchmark and MineRL.
17/06 at 04:00
We find that, just as a large transformer model trained on language can generate coherent text, the exact same model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with those of top convolutional nets in the unsupervised setting.