Rony Abovitz has never been one for direct information. Over the past few years, the Magic Leap founder has confounded people with not-exactly-updates about his company’s not-exactly-vaporware mixed-reality system—especially on Twitter, where he’s been given to statements like “We are not chasing perfection – we are chasing ‘feels good, feels right’. Tuning for everyday magic.” So last week, when he dropped this teaser, many assumed it would lead to just another YouTube video of frustrating breadcrumbs.
Not this time.
On Wednesday, the obsessively secretive company finally revealed the first solid step on its journey to spatial computing. Or at least pictures of the hardware that will enable it, along with some scant details.
The Magic Leap One system comprises a head-mounted display (which the company calls Lightwear), a wearable processing unit that connects to it (Lightpack), and a handheld controller (Lighthand—kidding! It’s called Control). There’s no announced cost, no specs, no release date, just moonshot language and accompanying hero shots of what looks like a set of space-age steampunk goggles.
What’s immediately most intriguing is the headset’s form factor: it’s remarkably lightweight, relatively speaking. While you’re never going to mistake it for not-steampunk goggles, its silhouette makes it damn near invisible compared to other AR/MR systems like Microsoft’s HoloLens and the Meta 2, and even slimmer VR headsets like PlayStation VR or the Oculus Rift. The company has also confirmed to WIRED that the images it released aren’t renders, but fully functioning “PEQ,” or product equivalents.
But that raises a thorny question: Given that low-profile form factor, and the bulky, bench-mounted prototypes from which it sprang, how close will this first generation come to realizing Magic Leap’s many promises?
Benedict Evans, a partner at Andreessen Horowitz—one of many investors that have ponied up a grand total of nearly $2 billion to fund Magic Leap’s endeavors—today put the magnitude of the company’s challenge into lay-friendly perspective. “Mixed reality is a display problem, a sensor problem and a decision problem,” he tweeted. “Show an image that looks real, work out what’s in the world and where to put that image, and work out what image you should show.”
In this case, the second part comes first. AR and MR—and, in forthcoming generations, VR as well—depend on mapping a user’s physical environment in order to place virtual objects properly within it. That’s why Magic Leap One’s headset is studded with an array of embedded outward-facing sensors; while we don’t know exactly what they all are, it’s safe to assume a combination of RGB and infrared cameras, along with depth sensors. (AR headsets like the Meta 2, and even AR-capable phones like the iPhone X, have such a suite.)
Next comes Evans’ “display problem.” Magic Leap has long attributed its titular magic to a “dynamic digital light field signal.” Generally speaking, that means it captures all the data (location and direction) of light rays in a room, and then uses that to dictate how virtual objects appear and behave in a given space. That has huge repercussions for being able to render live-action VR content in navigable 3-D, the way Lytro does. But perhaps more importantly, it allows a headset to present virtual objects as though they’re close to the viewer, reducing eyestrain.
However, Magic Leap has also refused to elaborate on how it generates that signal; it simply calls its lenses “photonic wafers,” leaving even experts to speculate about how they can accomplish such an optically challenging process in a device so much smaller than bulkier headsets like the HoloLens and Meta 2.
“Their lightfield technology—that’s what no one really knows about,” says David Nelson, creative director of the mixed-reality lab at USC Institute for Creative Technologies. “Looking at that form factor, I’m a little dubious. There have been different approaches with multiple displays, layered displays that are essentially projecting toward your eye. They might be doing something like what the HoloLens does where they’re projecting onto a piece of glass that then reflects back to your eye, but the form factor for that is even hard to imagine.”
Not so, says Abovitz. “We’re not bouncing a cellphone screen through a half-silvered mirror,” he says, referring to the HoloLens’ method of splitting a light beam to project an image. “I generally don’t like to comment about other companies, but I will focus on a couple of things where we think we’re the only people in the world doing them.”
There are other methods of displaying virtual objects to the user; for instance, rays of light can be beamed directly into the eye. However, these tend to mean a reduction in field of view, the amount of visible space in which digital creations can appear. (The Rift and the HTC Vive, both VR headsets, possess a 110-degree FOV, while the HoloLens’ FOV is only 35 degrees, with plans to double that in the next version.)
In my own experience with Magic Leap—all the way back in the relative Stone Age of May 2016—I found the FOV to be somewhat limited, though Rolling Stone reports that the Magic Leap One manages something a bit more impressive, something “about the size of a VHS tape held in front of you with your arms half extended.” That’s roughly comparable to how I’d describe the Meta 2’s FOV—and given Magic Leap One’s far smaller form factor, potentially even more impressive.
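For the curious, the VHS-tape analogy can be turned into a rough number. This is strictly a back-of-the-envelope sketch: the cassette dimensions are standard, but “arms half extended” is my own guess at a viewing distance, not anything Magic Leap or Rolling Stone specified.

```python
import math

def fov_degrees(extent_cm: float, distance_cm: float) -> float:
    """Angular size, in degrees, of a flat object of the given extent
    viewed face-on from the given distance."""
    return math.degrees(2 * math.atan((extent_cm / 2) / distance_cm))

# A VHS cassette is roughly 18.7 cm wide by 10.2 cm tall.
# "Arms half extended" is assumed here to be ~35 cm from the eyes.
width_deg = fov_degrees(18.7, 35)   # horizontal field of view
height_deg = fov_degrees(10.2, 35)  # vertical field of view

print(f"~{width_deg:.0f} x {height_deg:.0f} degrees")  # ~30 x 17 degrees
```

Under those assumptions, that works out to roughly 30 degrees horizontally—in the same neighborhood as the HoloLens’ 35 degrees, which is consistent with the “somewhat limited” FOVs described above.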
Another unresolved issue is whether Magic Leap’s technology will allow users’ eyes to focus on virtual objects at different depths. This multifocal ability is at once the greatest promise of lightfield technology, and its greatest challenge. If you’re able to focus naturally on objects being presented in various parts of the room, that turns AR/VR/MR from a dip-in technology to a persistent, all-day proposition—a game-changer for industries like design and healthcare that are uniquely suited to the technology. Previous Magic Leap videos seemed to imply that it used multifocal lightfield; however, whether the effect was a result of the technology itself or the camera filming it remains unclear.
On one hand, Abovitz seems to imply that Magic Leap One can do this. “It’s a virtual lightfield output,” he tells me, “not a single plane.” But on the other, Rolling Stone was unable to confirm whether the system can support it. (I don’t recall multiple focal depths in my time with Magic Leap’s technology; it certainly wasn’t explicitly called out in any of the demos.)
“Is it multifocal lightfield? That’s probably the very first question I’d ask,” says Edward Tang, CTO of Avegant, another company developing lightfield-based mixed reality technology. “That could really affect the type of experience you can create. If it’s just a fixed-focus display, I think it’ll probably raise some eyebrows: ‘What’s so interesting about it?'” (Avegant’s own prototypes, as well as its currently shipping devkit, deliver a multifocal lightfield display; again, in my own experience, it allowed me to shift focus to multiple objects in a given demo, as well as hold virtual objects in each hand and move them both around freely.)
Display aside, there are more prosaic concerns with any device like this. “Until a major breakthrough in battery technology, a lightweight pair of AR smartglasses doing heavy duty AR is hard to power all day without a battery pack or hot swappable batteries,” says Tim Merel, managing director of AR/VR analysts Digi-Capital. “This is a non-trivial problem.”
Power management also invites potential tradeoffs, as Tang points out: “How bright do you want the display to be? What resolution?” How Magic Leap will handle those also remains unknown.
So in many ways, Magic Leap’s big hardware reveal leaves us with more questions than answers—not to mention the still outstanding issues of price and specs. And don’t expect the company to fill in those blanks at CES in January; it won’t be there. This is Magic Leap, after all.
“As we get close to launch date we’ll be very open with performance specifications,” Abovitz says. “You gotta give us some bits to keep going. We maxed out what was possible in this day and age, and that’ll be an indicator of what we plan to keep doing.” Until the system ships to early adopters sometime in 2018, what “maxed out” actually looks like—and feels like—remains to be seen.