Implicit generation and generalization methods for energy-based models
We’ve made progress towards stable and scalable training of energy-based models (EBMs), resulting in better sample quality and generalization...
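The EBM work teased above generates samples implicitly, by iteratively refining inputs against a learned energy function rather than through a feed-forward generator. A standard procedure for this is Langevin dynamics; the sketch below illustrates it on a toy quadratic energy. The `energy` function, step size, and step count are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def energy(x):
    # Hypothetical toy energy: low energy near the origin, so
    # samples from p(x) ∝ exp(-E(x)) concentrate there.
    return 0.5 * np.sum(x ** 2)

def grad_energy(x):
    # Analytic gradient of the toy energy above.
    return x

def langevin_sample(x0, steps=500, step_size=0.05, rng=None):
    """Draw an approximate sample from p(x) ∝ exp(-E(x)) via
    unadjusted Langevin dynamics: gradient descent on the energy
    plus Gaussian noise scaled to the step size."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        noise = rng.normal(size=x.shape)
        x = x - step_size * grad_energy(x) + np.sqrt(2 * step_size) * noise
    return x

# Starting far from the low-energy region, the chain drifts back toward it.
sample = langevin_sample(np.ones(2) * 5.0)
```

In practice the energy is a neural network and `grad_energy` comes from automatic differentiation; the noise term is what makes the procedure a sampler rather than plain gradient descent.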
Our class of eight scholars (out of 550 applicants) brings together collective expertise in literature, philosophy, cell biology, statistics...
We’ve created OpenAI LP, a new “capped-profit” company that allows us to rapidly increase our investments in compute and talent while includ...
We’ve created activation atlases (in collaboration with Google researchers), a new technique for visualizing what interactions between neuro...
We’re releasing Neural MMO, a massively multiagent game environment for reinforcement learning agents. Our platform supports a large, vari...
On February 2, we held our first Spinning Up Workshop as part of our new education initiative at OpenAI....
We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actua...
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance o...
Our first cohort of OpenAI Fellows has concluded, with each Fellow going from a machine learning beginner to core OpenAI contributor in the ...
We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a ...
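The gradient noise scale mentioned above compares the variance of per-example gradients to the magnitude of the mean gradient; when the noise dominates, larger batches still help, so training parallelizes well. Below is a minimal sketch of the simple form of this statistic, tr(Σ)/|g|², computed directly from a stack of per-example gradients. The synthetic gradients and helper name are illustrative assumptions; the paper uses unbiased estimators built from gradients at two batch sizes rather than explicit per-example gradients.

```python
import numpy as np

def gradient_noise_scale(per_example_grads):
    """Estimate B_simple = tr(Sigma) / |g|^2, where g is the mean
    gradient over examples and Sigma is the per-example gradient
    covariance.  A large value suggests the gradient signal is noisy
    and bigger batches would still reduce that noise."""
    G = np.asarray(per_example_grads, dtype=float)  # shape (n_examples, n_params)
    g = G.mean(axis=0)                              # mean gradient
    trace_sigma = G.var(axis=0, ddof=1).sum()       # trace of the covariance
    return trace_sigma / np.dot(g, g)

rng = np.random.default_rng(0)
# Hypothetical per-example gradients: a shared signal plus small noise,
# which should yield a small noise scale.
grads = np.array([1.0, -2.0, 0.5]) + 0.3 * rng.normal(size=(64, 3))
bns = gradient_noise_scale(grads)
```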
We’re releasing CoinRun, a training environment which provides a metric for an agent’s ability to transfer its experience to novel situation...
We’re releasing Spinning Up in Deep RL, an educational resource designed to let anyone learn to become a skilled practitioner in deep reinfo...
We’ve developed an energy-based model that can quickly learn to identify and generate instances of concepts, such as near, above, between, c...
We’ve developed Random Network Distillation (RND), a prediction-based method for encouraging reinforcement learning agents to explore their ...
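RND's exploration bonus is the error of a trained predictor network at imitating a fixed, randomly initialized target network: states the predictor has seen often are predicted well (low bonus), while novel states produce high error (high bonus). The sketch below shows that reward computation with tiny NumPy MLPs; the network sizes and helper names are illustrative assumptions, and the predictor's training loop is omitted.

```python
import numpy as np

def make_mlp(rng, sizes):
    # Small random MLP used for both the fixed target and the predictor.
    return [(rng.normal(scale=1 / np.sqrt(m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)  # nonlinearity on hidden layers only
    return x

rng = np.random.default_rng(0)
target = make_mlp(rng, [4, 16, 8])     # fixed, never trained
predictor = make_mlp(rng, [4, 16, 8])  # in RND, trained to imitate the target

def intrinsic_reward(obs):
    """RND-style novelty bonus: squared error between the predictor's
    output and the fixed random target's output on an observation."""
    err = forward(predictor, obs) - forward(target, obs)
    return float(np.mean(err ** 2))

# An untrained predictor disagrees with the target, so the bonus is positive.
r = intrinsic_reward(np.ones(4))
```

In the full method this bonus is added to the environment reward, and the predictor is updated on the agent's observations so that familiar states gradually stop paying out.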
We’re proposing an AI safety technique called iterated amplification that lets us specify complicated behaviors and goals that are beyond hu...
We are now accepting applications for our second cohort of OpenAI Scholars, a program where we provide 6–10 stipends and mentorship to indiv...
We are now accepting applications for OpenAI Fellows and Interns for 2019....