Scenes from a memory: neural audio/video generation
AI/Machine Learning • March 2019

Generating representations is the ultimate act of creativity. Recent advances in neural networks (and in processing power) have brought us the capability to perform regression against complex samples like images and audio.
In this presentation we show the underlying mechanics of media generation from latent-space representations of abstract visual ideas, real embodiments of "Platonic" concepts, using Variational Autoencoders, Generative Adversarial Networks, neural style transfer, and PixelRNN/CNN, along with current practical applications such as DeepFake.
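As a minimal sketch of the core idea behind latent-space generation (the decoder and its weights below are hypothetical stand-ins, not material from the talk): a trained decoder maps a point sampled from the latent prior to a media sample.

```python
import random

# Hypothetical toy decoder standing in for a trained VAE decoder:
# it maps a 2-D latent vector z to four "pixel" intensities.
# The weights are illustrative, not learned.
W = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5], [0.7, 0.3]]

def decode(z):
    # Linear map followed by a clamp to [0, 1], mimicking a sigmoid output layer.
    return [min(1.0, max(0.0, sum(w * x for w, x in zip(row, z)))) for row in W]

# Generation = sampling from the latent prior N(0, I) and decoding the sample.
random.seed(0)
z = [random.gauss(0.0, 1.0) for _ in range(2)]
image = decode(z)
print(image)  # four pixel intensities in [0, 1]
```

In a real VAE the decoder is a deep network and the latent prior is what makes unconditional generation possible: any draw from it lands in a region the decoder has learned to map to plausible samples.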

Language
English
Level
Advanced
Length
39 minutes
Type
conference
About the speaker
Alberto Massidda
Production Engineer, Meta
A computer engineer since 2008, specialized in mission-critical, high-traffic, highly available Linux architectures and infrastructures (before the advent of the cloud), with relevant experience in the development and management of web services. Infrastructure Lead, SRE, AI researcher, university teaching assistant, and open-source developer, he has worked at Translated, N26, and Meta, among others. Alberto has a varied range of experience, from DevOps to machine learning, and from corporate banking to the fast-changing startup world.