At the time, few people beyond the insular world of AI research knew about OpenAI. But as a journalist at MIT Technology Review covering the ever-expanding frontiers of artificial intelligence, I had been following it closely.
Until this year, OpenAI had been something of an outlier in AI research. It had the bizarre premise that AGI could be reached within a decade, while most experts outside OpenAI doubted it could be reached at all. To much of the field, it had an obscene amount of funding despite little direction, and it spent too much of that money marketing what other researchers frequently dismissed as unoriginal work. It was, for some, also an object of envy. As a nonprofit, it had said it had no intention of pursuing commercialization. It was a rare intellectual playground with no strings attached, a haven for fringe ideas.
But in the six months leading up to my visit, a rapid series of changes at OpenAI signaled a major shift in its trajectory. First was its confusing decision to withhold GPT-2 and then boast about it. Then came the announcement that Sam Altman, who had mysteriously departed his influential perch at YC, would step in as OpenAI's CEO, alongside the creation of its new "capped-profit" structure. I had already made my arrangements to visit the office when the company then revealed its deal with Microsoft, which gave the tech giant priority for commercializing OpenAI's technologies and locked it into exclusively using Azure, Microsoft's cloud-computing platform.
Each new announcement drew fresh controversy, intense speculation, and growing attention, beginning to reach beyond the confines of the tech industry. As my colleagues and I covered the company's progression, it was hard to grasp the full weight of what was happening. What was clear was that OpenAI was beginning to exert a meaningful influence on AI research and on how policymakers were learning to understand the technology. The lab's decision to restructure into a partially for-profit company would have ripple effects across its spheres of influence in industry and government.
So late one night, at the urging of my editor, I dashed off an email to Jack Clark, OpenAI's policy director, whom I had spoken with before: I would be in town for two weeks, and it felt like the right moment in OpenAI's history. Could I interest them in a profile? Clark passed me along to the communications head, who came back with an answer. OpenAI was indeed ready to reintroduce itself to the public. I would have three days to interview its leadership and embed myself inside the company.
Brockman and I settled into a glass-walled meeting room with the company's chief scientist, Ilya Sutskever. Sitting side by side at a long conference table, they each played their role. Brockman, the coder and doer, leaned forward, a little on edge, ready to make a good impression; Sutskever, the researcher and philosopher, settled back into his chair, relaxed and aloof.
I opened my laptop and scrolled through my questions. OpenAI's mission is to ensure beneficial AGI, I began. Why spend billions of dollars on this problem and not something else?
Brockman nodded vigorously. He was used to defending OpenAI's position. "The reason that we care so much, and that we think it's important to build, is because we think it can help solve complex problems that are just out of reach of humans," he said.