DAVID FATTAL is the co-founder and CEO of Leia, which has just come out of stealth with over $100 million in funding. The company was spun out of HP Labs in 2014, where Fattal discovered the application and developed it with other scientists. In this interview with Charlie Fink, Fattal explains how it works.
David Fattal, co-founder and CEO of Leia. Fattal was named Innovator of the Year 2013 by the MIT …
CHARLIE FINK: Let’s start with the amazing foundational myth about your company.
DAVID FATTAL: The full story is that we were working on a project called Optical Interconnects, which uses nanostructures and the manipulation of light on a chip, a wafer, to communicate information instead of electricity.
We were caught in a fire drill, and had to leave the lab with everything we had in our hands. We all gathered in the parking lot.
It was a bright, sunny day. The sun was acting pretty much like a laser beam, with very directional light. As it hit the surface of the wafers we saw all kinds of cool patterns emerge, caused by the directionality of the structures. We didn't even notice at first, but people around us were like, hey, that's super cool, what do you have in your hand? So that's how it all started.
So from then on we spent all our so-called “20 percent time” devoted to that project, and more and more people wanted in.
We were able to project this light field, or this hologram, from a completely transparent piece of material. So it looks magical, right? You turn it off, it’s just transparent. You turn it on, and the hologram pops up all by itself.
A slide illustrating qualities of the light field display.
CHARLIE FINK: What are the fields of view that it has at the present time?
DAVID FATTAL: The field of view is entirely configurable. Imagine that we are able to configure that light field. I want you to imagine a forest of light rays, and we can control exactly where each beam, each ray, comes from, in which direction it travels, and with what angular width. We have full control over these parameters. So we can make a very, very narrow field of view, for privacy, or we can make it very wide, or anything in between.
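The "forest of light rays" Fattal describes can be modeled in a few lines. A minimal sketch, assuming a toy `Ray` type and invented parameters (not Leia's actual API): each pixel emits a fan of rays whose origin, direction, and angular width are freely configurable, giving a narrow "privacy" field of view or a wide shared one.

```python
from dataclasses import dataclass

@dataclass
class Ray:
    """One emitted light ray: where it starts and how it spreads."""
    origin: tuple         # (x, y) position on the display surface
    direction: float      # emission angle in degrees, 0 = straight out
    angular_width: float  # beam spread in degrees

def configure_field_of_view(pixel_xy, fov_degrees, rays_per_pixel=8):
    """Spread rays_per_pixel rays evenly across the chosen field of view."""
    step = fov_degrees / rays_per_pixel
    start = -fov_degrees / 2 + step / 2
    return [Ray(pixel_xy, start + i * step, step) for i in range(rays_per_pixel)]

privacy = configure_field_of_view((0, 0), fov_degrees=10)   # narrow: private viewing
shared  = configure_field_of_view((0, 0), fov_degrees=120)  # wide: many viewers
```

The same ray budget is simply spread over a smaller or larger cone, which is the "anything in between" Fattal refers to.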
CHARLIE FINK: The demo of Leia I saw was the Disney movie, Coco, which is a 2D movie, although it was made using 3D techniques, playing on a table, which is I guess a reference design for an OEM. When I saw that on your display, even though it was a 2D movie, it appeared to be a 3D movie. Does Leia do that to all 2D content, or was it specific to a product like Coco, because it was made with 3D, even though its presentation format is 2D?
DAVID FATTAL: Mm hmm, yeah. So that's the question of the data format. Before I answer that question, we have to understand what the light field display is, right? What is coming out of the screen, so then I can tell you why you see things in 3D. So far we've talked about light field capture, and that's what popularized the light field. But obviously it's very asymmetric: once you've captured your light field, you want to be able to render it, right?
So imagine you have this very fancy camera capturing all these light rays from different directions on different pixels. Until very recently, you didn't have the opportunity to actually re-render these light rays. So conversely, you want to have a display that is able to, from a given pixel, give you different colors, different intensities of light in different directions of space. You want to be able to do that. That's what a light field display actually is.
So now imagine that in a normal display, one pixel is going to emit the same information everywhere in space, and your two eyes, and all the viewers, are seeing the same content at a given pixel. But the light field display is going to break that down into different zones, and is going to send you different colors, different information, at different angles. That's really the counterpart of light field capture: now you can re-render the light field from a flat surface, right?
That's why I had you imagine a window in front of you: the window captures all of these light rays, and that would be the camera. Conversely, if the window were able to re-emit these light rays, that would be a light field display.
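The contrast between a conventional pixel and a light field pixel can be sketched directly. This is a toy model, not Leia's hardware: the zone count, angles, and color values are invented for illustration, but the mechanism, splitting the emission cone into angular zones that each carry their own color, is the one described above.

```python
def conventional_pixel(viewing_angle):
    """A normal display pixel: same color no matter where you look from."""
    return (255, 64, 0)  # one RGB value for all viewers

def light_field_pixel(viewing_angle, zone_colors):
    """A light field pixel: the emission cone is split into angular zones,
    and each zone carries its own color/intensity."""
    n = len(zone_colors)
    # Map an angle in [-90, 90) degrees onto one of n zones.
    zone = min(int((viewing_angle + 90) / 180 * n), n - 1)
    return zone_colors[zone]

# Two eyes a few degrees apart receive different information,
# which is what produces the depth cue.
zones = [(200, 0, 0), (150, 50, 0), (100, 100, 0), (50, 150, 0)]
left_eye  = light_field_pixel(-3.0, zones)
right_eye = light_field_pixel(+3.0, zones)
```

With the conventional pixel both eyes get identical data; with the light field pixel they land in different zones and see different information, which is exactly the "different zones, different angles" point Fattal makes.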
The image that started it all.
CHARLIE FINK: So the reimaging is what makes it seem three-dimensional.
DAVID FATTAL: Yeah, correct. Correct, yeah, yeah.
CHARLIE FINK: Because the colors create different wavelengths of light.
DAVID FATTAL: Yeah, and essentially what you experience now, when you see a light field display, is that your two eyes are going to pick up different information. The correct information, as if it was coming from the real world, and the real world is in 3D, so you're picking up depth. But you're not picking up only depth, right? You're picking up, as I said, subtle variations of the lighting and so on that make texture look like texture, that make metal look like metal, that make skin look like skin, and so forth, the sparkle of a diamond.
CHARLIE FINK: Would it work for any 2D movie? Would it work for any modern movie, or any old movie, like a black-and-white movie?
DAVID FATTAL: Yeah, exactly. So the movie Coco that you saw was actually already 3D stereo, side by side, so it's two views. And we have software at Leia, to the point of the Lytro CEO about computer vision, that can actually hallucinate the missing points of view. From two points of view you're going to recreate, to the best of your ability, the missing points of view, so you're going to reconstruct the light field from very sparse information. That's the Coco movie that you saw.
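The "hallucinating missing points of view" step can be approximated with classic view interpolation: estimate per-pixel disparity between the two stereo views, then shift pixels by a fraction of that disparity to synthesize in-between views. A deliberately simplified numpy sketch (not Leia's software; real systems use learned disparity and inpaint the occluded regions this naive warp leaves behind):

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Warp the left view toward the right by alpha in [0, 1],
    using a per-pixel horizontal disparity map (in pixels)."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    cols = np.arange(w)
    for y in range(h):
        # Shift each pixel by a fraction of its disparity.
        src = np.clip((cols - alpha * disparity[y]).round().astype(int), 0, w - 1)
        out[y] = left[y, src]
    return out

# Toy example: a 4x8 image with a uniform disparity of 2 pixels.
left = np.tile(np.arange(8), (4, 1)).astype(float)
disparity = np.full((4, 8), 2.0)
middle = synthesize_view(left, disparity, alpha=0.5)  # halfway viewpoint
```

Sweeping `alpha` from 0 to 1 produces the stack of intermediate views a light field display needs from just the two that the stereo movie provides.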
We're also able to do it with simple 2D pictures. More and more now, and we'll get to that, but we showed you Holopix, which is our picture-sharing app. A lot of people are uploading 3D content, stereo and light field content, to Holopix. We use that to train a neural network, a very big neural network. So today we're at the point where we have excellent technology: for most 2D pictures, you show me a portrait shot, you show me a landscape, you show me food, it will create a light field for you. It will actually synthesize digitally these different light rays, and then when you put it on a display, you will actually see the light field image from a 2D picture.
Then what you're describing is real-time 2D video to light field, and that's the open problem our team is working on. Obviously, once you can do 2D pictures, you want to do them fast enough that you can take any kind of 2D content and create the light field, and that's what we're working on right now.
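The single-image pipeline Fattal outlines can be sketched under stated assumptions: a monocular depth estimate drives how far each pixel shifts between synthesized views, with nearer pixels shifting more (parallax). The depth map here is a hand-made ramp standing in for the neural network's output; the function names and parameters are illustrative, not Leia's.

```python
import numpy as np

def synthesize_light_field(image, depth, num_views=4, max_shift=3.0):
    """Turn one 2D image plus a per-pixel depth map (larger = closer)
    into a stack of views with parallax between them."""
    h, w = image.shape
    cols = np.arange(w)
    views = []
    for v in range(num_views):
        # Virtual camera offset in [-1, 1] across the view stack.
        offset = -1.0 + 2.0 * v / (num_views - 1)
        shift = (offset * max_shift * depth).round().astype(int)
        view = np.zeros_like(image)
        for y in range(h):
            src = np.clip(cols - shift[y], 0, w - 1)
            view[y] = image[y, src]
        views.append(view)
    return np.stack(views)

# In a real pipeline, `depth` would come from a learned monocular depth
# estimator trained on stereo/light field uploads (as with Holopix);
# here it is a simple left-to-right ramp for illustration.
img = np.tile(np.arange(8, dtype=float), (4, 1))
depth = np.linspace(0, 1, 8)[None, :].repeat(4, axis=0)
lf = synthesize_light_field(img, depth, num_views=4)
```

Doing this per frame at video rate, with learned depth and occlusion filling, is the real-time open problem described above.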
CHARLIE FINK: That is just so amazing, and this story is so mind-blowing.
DAVID FATTAL: It’s a lot of fun for the team, I can tell you.
CHARLIE FINK: Once or twice a year somebody comes to me with something that is both real and mind-blowing. Thank you.