Luma AI Launches Dream Machine
There’s yet another new AI tool out there. It’s called Dream Machine and it’s made by a company called Luma AI. Here’s what the company promises:
It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent, and eventful shots. Dream Machine is our first step towards building a universal imagination engine, and it is available to everyone now!
I experimented with it briefly by typing in the following phrase:
“A Hacker dancing down the street celebrating his latest hack”
This is what I got:
This is kind of interesting. I’ll share my thoughts later. But right now, here is a comment on it from Kevin Surace, Chair of Token and “Father of the Virtual Assistant”:
Right now, the current crop of video generators creates very cool, very short videos (in this case, 5 seconds). This isn’t storytelling, it’s not moviemaking or even shorts, and the characters can’t talk. It’s still in the toy category: fun to play with, but you can’t do much of value with a 5-second clip. Having been a filmmaker and an applied AI leader for 25 years, my bar is high.
Of course, anything A16Z backs gets attention, so that doesn’t hurt. But the question, again, is: what is the current usefulness of this? At this point the GPU cost of generation is high, and so is their service cost. They promise to generate 5 seconds of video in 2 minutes, but for now it’s taking more than 20 minutes; the GPU costs and load are tremendous. I suspect 99% of users won’t renew, given the limited usefulness.
A 5-second deepfake is unlikely to convince anyone, and it’s hard to get these models to use an ACTUAL living human. If someone can jailbreak them, perhaps a 5-second clip might convince someone, but these tools also have built-in technology to identify their output as AI-generated. I think the risk here is very low.
Deepfakes can hurt company and executive reputations. The biggest concern is live deepfakes on Zoom, and we will all be using wearable biometric check-ins to be sure that whoever we are talking to is the real deal.
This is a valid point. At some point these gimmicky tools will become genuinely useful, and dangerous, and we need guardrails in place before that happens, or this will not end well.
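Surace’s aside about built-in technology to identify AI-generated output refers to provenance marking. As a minimal sketch of the idea (a toy HMAC scheme with a hypothetical key, not Luma’s or any vendor’s actual mechanism, which would use standards such as C2PA content credentials and imperceptible watermarks):

```python
import hashlib
import hmac

# Toy provenance tag: the generator computes an HMAC over the output bytes
# with a key only it holds; a verifier with the same key can later confirm
# that a clip came from this generator and was not altered. The key and the
# scheme below are illustrative, not any real product's implementation.
GENERATOR_KEY = b"hypothetical-generator-signing-key"

def tag_clip(clip_bytes: bytes) -> str:
    """Produce a provenance tag to ship alongside generated media."""
    return hmac.new(GENERATOR_KEY, clip_bytes, hashlib.sha256).hexdigest()

def is_from_generator(clip_bytes: bytes, tag: str) -> bool:
    """Check whether the media carries a valid, untampered tag."""
    return hmac.compare_digest(tag_clip(clip_bytes), tag)

clip = b"\x00\x01fake-video-bytes\x02"
tag = tag_clip(clip)
print(is_from_generator(clip, tag))         # unmodified clip -> True
print(is_from_generator(clip + b"!", tag))  # tampered clip -> False
```

The catch, which Surace’s jailbreak caveat hints at, is that any such marking only helps if platforms actually check it and if it can’t be stripped from the file.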
This entry was posted on June 14, 2024 at 8:32 am and is filed under Commentary. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.