Introduction: My First AI-Powered Video Production
AI-driven video production is an exciting new frontier for many, but most of us haven't yet dived into the deep end with it. So after a handful of meetings with our client PUI Audio, where we'd excitedly discussed the potential of AI video creation, we were surprised and delighted when the CEO wanted to invest with us in a test project, just to see what could be created right now.
However, I didn't just want to see what we could do. I wanted to end up with something actually usable that would highlight PUI's innovative new product: a flexible haptic component like something you'd see as part of a VR suit in Ready Player One. Cool stuff that needed cool marketing.
Goals and Conceptualization
My goal was simple yet ambitious: to rely as much as possible on AI for the creative process and output. However, I quickly realized that AI-based video creation demands a solid understanding of this ever-evolving technology's limitations. I needed a concept that wouldn't break down under the constraints of current AI capabilities, though I wasn't sure yet what that would be. For example, traditional animation or high-detail work requiring meticulous precision often isn't the best fit for AI, and that realization shaped how I built the video's story and visuals.
Tools of the Trade: My Personal AI Stack
Trial and error was the name of the game: I had to explore a whole new set of tools I was mostly unfamiliar with, and develop a new workflow to use them efficiently. First I researched the most promising tools, then tested them to see what each could do. After a few days I settled on a curated stack of AI tools, each chosen for its specific capabilities:
- Adobe Firefly is where it all began; I used it to create the initial set of still images, setting the visual tone and style for the entire project. Firefly allowed me to generate high-quality images with a consistent concept, style, and general composition. These stills would form the backbone of the animations.
- Kive came next, helping me upscale these images to a quality level that would hold up in an animated sequence. Image quality is essential for any video project, and Kive allowed me to preserve the clarity needed for close-up details.
- Luma Dream Machine was the true powerhouse for this project, as it brought my static images to life. Rather than starting with prompts, I fed Luma the images created in Firefly and transformed them directly into animations. Interestingly, I found this far more effective than generating animations from scratch using prompts alone.
- ChatGPT played an essential supporting role, aiding in the development of the video script and refining certain prompts. Using ChatGPT was like having an assistant who could help craft ideas, smooth out the narrative, and enhance the overall storytelling aspect of the video.
- Suno was my go-to for music creation. The idea of having an AI-generated soundtrack felt natural given the project’s emphasis on AI, and Suno’s capabilities allowed me to compose something that matched the video’s tone seamlessly.
Of course, the project still required a lot of human touch. I handled the final editing personally, merging all the AI-generated elements into a cohesive piece. There was also one specific moment—a clip featuring the product bending—that needed to be animated by a human animator, as the AI couldn’t achieve the precise motion we wanted.
Overcoming Challenges and Learning the AI Landscape
Working with AI was far from straightforward. Each tool had its own quirks, and the learning curve was steeper than I anticipated. The process was one of constant trial and error, particularly with generating visuals that fit my concept. I learned that certain ideas simply don’t translate well in AI-generated video, making it essential to adapt my approach to what AI could handle effectively.
Another major challenge was the sheer volume of iterations required to achieve usable clips. File size and clip length limitations meant that I often had to generate many versions before finding one that met both the visual and technical criteria. These limitations demanded patience, but they also highlighted the importance of pre-planning and adjusting expectations based on what the technology could deliver.
Major Hurdle: The Hands and Haptics
One unique challenge was demonstrating the haptics in use, since the product is intrinsically linked to hands. As anyone familiar with AI image generation knows, hands and AI don't always get along well. This issue added an extra layer of complexity to the project, especially given that hands were central to demonstrating the client's product. I had to get creative in finding angles and animations that minimized the risk of distortion while still showing the product in action. Oddly, I found the best way to deal with this was to put the hand right up front: starting with a fully extended hand as the focus seemed to reduce melting fingers and morphing thumbs. From there, I let line-based graphics tell the story of how the haptics felt inside the glove across multiple use-case scenarios.
Unique Elements of the AI Process
One interesting discovery was that, in many cases, prompts weren't even necessary. While prompt-based creation is the popular approach, I found that my process benefited more from working with pre-generated images. This gave me more control over the final result and reduced the unpredictability often associated with AI-driven projects.
Another unique aspect was the unexpected results that sometimes emerged during generation. Each tool brought a bit of its own “personality” to the process, and while I had to sift through a lot of rough outputs, there were also some wonderfully strange, funny, and occasionally eerie visuals produced along the way—perfect material for an “outtakes” reel.
Client Feedback and Final Results
The client was thrilled with the final product, and the video turned out to be an effective and visually compelling showcase for their haptic technology. This experiment in AI-powered production not only produced a high-quality video but also served as a powerful proof-of-concept for the client’s promotional strategy. They plan to use the video for trade shows and other promotional events, confident that it will stand out due to its unique, AI-infused aesthetic.
Final Reflections: What I’d Do Differently and Future Outlook
Looking back, there are definitely things I would approach differently. While AI made some aspects of production faster and more affordable, it also introduced limitations and quirks that called for additional adjustments. For future projects, I’ll be more selective in the concepts I pursue with AI and continue experimenting to find the balance between human creativity and AI assistance.
This project gave me a new appreciation for AI tools and their role in digital creativity. To all the filmmakers, editors, writers, and creators out there who may scoff at AI or at my willingness to use it: please know, your jobs aren't going anywhere anytime soon. In fact, I'd argue you'll be protecting your job by learning how to use these new tools and how to collaborate with them.
Sharing the Process: Links to the Final Video and Outtakes
For those interested, here are links to the final video and a special “outtakes” reel featuring some of the funniest and most unexpected AI-generated clips from this project. It’s a reminder that, while AI technology has made incredible strides, human creativity is as vital as ever.