The Future of Computing: A Deeper Integration Within Our Physical World

by Paul Reynolds

 

During the past two decades, the world of interactive software has dramatically changed. Gone are the days of video game development where you had to build everything yourself. Now, game engines and geometric modeling tools are freely available with mountains of documentation and community support, making 3D software development more approachable and accessible than ever before.

You may think these changes are good news for augmented reality (AR) and virtual reality (VR) platforms. But ultimately, it may be what holds back the revolutionary potential of these technologies.

Because so many developers are already using these game development workflows, AR and VR device platforms must support them to maximize the content available in their app stores. But by supporting only these tools, a very large population of creative people is being inadvertently excluded – and they could be the very people who discover and create uniquely valuable use cases for AR and VR. By failing to make development accessible to this wider group of creators, the industry risks a long, slow haul to any meaningful market adoption. Or worse: a fast haul toward mediocrity and obscurity.

No Shortage of Imagination

The AR/VR/3D computing industry is rife with imaginative ideas that show the unique strengths of these new technologies. Digi-Capital estimates that AR/VR revenue will reach nearly $100 billion by 2022, as the technology moves from gaming into consumer products, eCommerce, financial services, automotive, real estate, travel and other business markets.

So, why aren’t these compelling use cases coming to market yet? The main reason: the people most capable of conceiving unique use cases find current development tools inaccessible.

I’ve seen this firsthand, as an early employee of Magic Leap where I spent a lot of time thinking about and working on the future. There were hundreds of smart, creative people working on this new future of spatial and immersive computing. But only a small percentage of them – those with engineering aptitude – actually had direct access to iterate with the technology.

My company recently polled a group of professional developers; nearly half reported that authoring and creating in 3D was their biggest challenge. A lot of this is rooted in designers’ unfamiliarity with the tools currently available, as well as the fact that many of the tools designers are using weren’t made for AR/VR design.

When creative individuals are unable to work directly with the technology or lack a full understanding of it, they make assumptions and dream up ideas that cannot be accomplished with existing tools. Others may play it safe with uninspired ideas that don’t fully leverage the technology’s capabilities. There’s a delicate balance between brainstorming without limits and carefully considering what’s practical. The best way to achieve this balance is iterating with the technology to fully understand all of its constraints.

The Point of Disruption

Looking at the evolution and convergence of software development, it is easy to see how we got to this point. For many early video game companies, their most valuable asset was their internally developed tools and technology. But, in 2005, a little startup – Unity – changed everything, disrupting game development with an affordable, approachable software development environment. Unity continues to dominate the modern video game market to this day.

Lowering the barrier to entry for developing video games resulted in small teams creating innovative and experimental experiences, and in a new ability to think in interactive 3D. So it’s no surprise that, lacking any real alternative, companies embracing new technologies like AR and VR integrated these more accessible tools into their own workflows. Game engines have become the foundation for creating all sorts of non-gaming applications and experiences – a testament to the flexibility of these gaming technologies.

However, that flexibility obscures the fact that these tools, no matter how feature-rich, have limitations. And those limitations ultimately shape, and restrict, what can be built. Here, I’m referring to tools designed for video game software development: they have built-in biases toward the tasks for which they were originally designed.

What is even more concerning is that the newer tools focused on AR – Sumerian, Snap Lens Studio, AR Studio, and others – all riff on the Unity-inspired production workflow in one way or another. At this point, they only seem to be reinforcing those bad assumptions.

Our reliance on traditional software development processes and tools is making technologies inaccessible to creatives and curbing truly innovative thinking. When we consider the future of computing for AR and VR, we must more deeply integrate it within our physical world.

The new future experiences and applications are contextually aware and responsive to our intents. Achieving this level of natural interaction in software development requires combining layers of intelligence across multiple integrated, complex systems. We need methods, reusable patterns and constructs for controlling artificial intelligence (AI) and integrating services that can be managed at a higher level of abstraction than code.
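To make the idea of "a higher level of abstraction than code" concrete, here is a minimal sketch – in Python, with entirely hypothetical names and rules – of one common pattern: contextual behavior expressed as declarative data that a small runtime interprets, so a creator can author intent-to-action mappings without writing event-handling code for each one.

```python
# Hypothetical sketch: contextual behavior as declarative rules (data),
# interpreted by a tiny runtime, instead of hand-written event code.
# Rule names and events are invented for illustration only.

RULES = [
    {"when": "user_looks_at", "target": "lamp", "then": "highlight"},
    {"when": "user_says", "target": "turn on", "then": "power_on"},
]

def respond(event: str, value: str) -> list[str]:
    """Return the actions triggered by a contextual event."""
    return [rule["then"] for rule in RULES
            if rule["when"] == event and rule["target"] == value]

# A creator edits RULES (the data), not respond() (the code):
print(respond("user_looks_at", "lamp"))  # → ['highlight']
```

The point of the sketch is the division of labor: the runtime stays fixed while the rules – which a non-engineer could author through a visual tool – carry the creative decisions.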

Thinking Beyond Code for Development

If we are thinking beyond the screen for computing, why aren’t we thinking beyond the code for development?

This is not only possible but necessary if we are to realize the true potential of new digital reality platforms. Tools must be simultaneously more accessible and more powerful than the traditional software-based workflows that dominate today.

We must rethink how the new computing world will be conceived and built. Digital reality technologies, like AR and VR with 3D inputs and displays, could lead to visual workflows that are highly accessible and, as an added benefit, more supportive of real-time collaboration. In other words: more power to more people, and more opportunity to make compelling AR/VR use cases a reality.

About the Author

Paul Reynolds has been a software developer and technology consultant since 1997. In 2013, after 10 years of creating video games, Paul joined Magic Leap, where he was a Senior Director overseeing content and SDK teams. In 2016 Paul moved to Portland, OR, where a year later he founded Torch 3D, a prototyping, content creation, and collaboration platform for the AR and VR cloud.
