Image created by AI in Microsoft Designer, based on the prompt, "Generate an image reflecting the rise of AI in 2023 and what's next in the field."
"The biggest surprise is that AI used to be narrow. ... And when you went from one application to the other, you had to redo it all from scratch. What's really surprised us is the emergence of general tools. ... It's almost the love child of a search engine, like Google, which you can talk to about anything, with the somewhat limited but very real intelligence of AI systems."
AI2 technical director Oren Etzioni. (AI2 Photo)
That's computer scientist and entrepreneur Oren Etzioni, our guest this week on the GeekWire Podcast, reflecting on the past year in AI, and looking ahead to what's next.
Etzioni, an AI leader for many decades, is professor emeritus at the University of Washington, Allen Institute for AI (AI2) board member, AI2 Incubator technical director, and Madrona Venture Group venture partner. Among other roles, he served on the National AI Research Resource Task Force, which advised the White House on policy issues.
In the first segment of the show, my colleague John Cook and I discuss the big AI news of the week: The New York Times Co.'s lawsuit against Microsoft and OpenAI over their use of its articles in GPT-4 and other AI models.
Listen below, or subscribe to GeekWire in Apple Podcasts, Google Podcasts, Spotify or wherever you listen, and keep reading for key takeaways from Oren Etzioni.
Here are highlights from Etzioni's comments on the show, edited for clarity and length:
The evolution of AI in the past year: "It's really been a wake-up call. ... I like to say our overnight success has been decades in the making. And a lot of us have been aware of the potential of AI for a while. I don't think any of us anticipated just how quickly and how dramatically it would come, in the shape of ChatGPT and so on. But we all knew that it was coming. Now, it turns out that the rest of the world, literally, is catching up. That includes the politicians, that includes the kids, that includes the teachers. It's now changing every aspect of society."
The role of AI in our work and lives: "Copilot might not even be the right phrase. Maybe an assistant. And an assistant is often only as good as the tasks that you're able to give it. Some things we cannot delegate, or we delegate badly because they're ill-specified. ... I think we have here the potential of finding the drudgery, finding the things you don't like to do, and having AI give you a big help with those."
The "toothbrush test" for AI: "What happens next in 2024, is what somebody once called the toothbrush test. So how many times a day do you use the technology. For most of us, toothbrush is two to three times. So I think that in 2024, the toothbrush test for AI is going to explode. We're going to find that we're using it two, three, 10, 20 times a day. And I'm not even talking about its implicit use where you're doing speech recognition in your car, or the Google search engine is using it to re-rank things. I'm talking about us interacting with AI systems, with our music, and our art, or in our job. I think it's going to be easily 10 times a day on average."
The need for strong open-source models: "The consolidation of power in AI is a huge risk. And we've seen some of that with the top corporations. The countervailing forces are, number one, open-source models. A great analogy here is what we've seen in operating systems. We had Windows, which billions and billions of dollars went into. But we also had Linux, which the open-source movement championed. So I hope that we will have a Linux of language models, a Linux of AI. And I also think that the government has a role to play in making resources available."
The risks of AI-fueled misinformation: "We've already seen it in previous elections, but it's gotten cheaper, easier to do with generative AI, and I am terrified of its effect on the November election ... on the primaries, on the election itself, the potential for distrust, and so on. And I am determined to do something about it to help figure out how generative AI doesn't become the Achilles' heel of democracy."
What it will take to combat AI misinformation: "I think we need strong regulations. I think we need education. People need to understand how to critically evaluate what they're hearing, particularly over social media. ... In addition, we do need watermarking, authentication, provenance, so we know where things come from. And in addition to all that, we need the ability to detect. So when I see a video, when I hear audio, I have to be able to ask, 'Was this altered? Was this manipulated? Was this automatically generated by AI?' With those pieces, I think we have hope of a robust system. Without any of them, I think we are facing some major risks."
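To make the detection piece concrete: one widely discussed approach is the statistical "green list" watermark for language models (Kirchenbauer et al., 2023), in which the generator softly biases each token toward a pseudorandom subset of the vocabulary, and a detector later tests whether suspiciously many tokens fall in that subset. The sketch below illustrates only the detection side; the hash scheme, the GAMMA parameter, and the function names are our illustrative assumptions, not Etzioni's proposal or any production system.

# Minimal sketch of "green list" watermark detection, in the spirit of
# Kirchenbauer et al. (2023). Parameters and names are illustrative.
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Deterministically decide whether `token` is on the green list
    # seeded by the preceding token.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < int(GAMMA * 256)

def watermark_z_score(tokens: list[str]) -> float:
    # Z-score of the observed green-token count against the null
    # hypothesis that each token is green with probability GAMMA.
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

# A z-score well above ~4 would suggest the text carries the watermark;
# ordinary, unwatermarked text should score near zero.
print(watermark_z_score("this text was not generated with the watermark".split()))

A real deployment would apply the matching bias at generation time and tune the parameters, but this kind of simple statistical test is the core of the "ability to detect" that Etzioni describes.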
The prospects for AI startups: "I think some people have the perception that right now, it's hard to launch a successful startup because of the huge amount of compute power required for these massive models. I think nothing could be further from the truth. We're at a moment of disruption, and disruption creates a lot of opportunities."
Advice for aspiring computer scientists: "One, study the fundamentals. Math, statistics, the basic ideas of computer science, those have not changed. Those are the building blocks on which we're building the latest technology. ... The second thing I want to say is follow your passion. So often, people are worried or trying to game the future. Well, I should study this because I could get that job and I should do this. You're young, the world is changing quickly. Follow your passion, enjoy the educational process, enjoy learning what you need to do, and these things will take care of themselves."