
National Robotics Week: Architectural Robotics

Robotics is positioned to fuel a broad array of next-generation products and applications in fields as diverse as manufacturing, healthcare, disaster relief, national security, and transportation. We are kicking off National Robotics Week with a discussion with Keith Evan Green. He is the author of Architectural Robotics, which explores how a built environment that is robotic and interactive becomes an apt home to our restless, dynamic, and increasingly digital society.

What is architectural robotics?

By “architectural robotics,” I mean cyber-physical, built environments made interactive, intelligent, and adaptable by way of embedded robotics. “Robots for living in.” In architectural robotics, computation—specifically robotics—is embedded in the very physical fabric of our everyday living environments at relatively large physical scales ranging from furniture to the metropolis.

Historically, architects engaged in designing for the built environment anticipate, in the form, function, and aesthetics of the designed artifact, how people will use it, how people will be drawn to it, and how it will respond to a range of possible, local conditions. In designing architectural robotics, however, there is a fundamental difference: the artifact is additionally a responsive system that actively engages, interacts, and even partners with inhabitants in real time under local conditions. So, unlike a conventional building that has a limited range of responses to dynamic, changing circumstances, an architectural robotic environment is intimately bound together with its inhabitants and its local conditions.


What is the mission of Clemson’s Institute for Intelligent Materials, Systems and Environments, of which you are the founding Director? Can you describe some of the projects you are currently working on?

The Clemson University Institute for Intelligent Materials, Systems and Environments (CU-iMSE) focuses its efforts on the design and evaluation of interactive and intelligent built environments. The novelty of CU-iMSE lies in its recognition of the physical, built environment, from furniture to the metropolis, as the next frontier of computing. Partnering Architecture, Electrical & Computer Engineering, and Materials Science & Engineering, CU-iMSE is home to human-centered, trans-disciplinary research-and-teaching teams sufficiently complex in composition to address problems and opportunities in an increasingly digital society. CU-iMSE strives to realize meticulous, artfully designed, cyber-physical systems that cultivate interactions between people and their surroundings, defining places of social, cultural, and psychological significance.

CU-iMSE focuses its efforts on five primary applications: healthcare, learning and creativity, working life, disaster relief, and mass urbanization. Our current research projects include LIBRARY CUBED, which aims to activate public libraries in underserved communities; home+, which expands our lab’s suite of robotic furnishings for aging in place; and COMPREHEND, which serves as a partner for inhabitants in the undertaking of complex, creative activities.

Beginning in July 2016, these efforts will transfer to my new Architectural Robotics lab at Cornell University, where I’ll become a professor in the departments of Design & Environmental Analysis and Mechanical & Aerospace Engineering.

At the core of the study of robotics is interactivity: how we interact with a technology and how it responds to its environment. What are some of the challenges facing robotic-environment interactions, especially in a learning environment?

At any instant, the precise configuration assumed by an architectural robotic environment may be determined by any one of these possible paths:

• An inhabitant selecting one of the offered, preprogrammed configurations.

• An inhabitant tuning and saving a configuration for herself.

• The robotic environment assessing the activity of its inhabitants and then configuring itself to accommodate that activity.

A critical challenge facing human-robotic environment interaction concerns the last of these: can the environment be sufficiently intelligent—nimble, really—to be a dependable, supportive, and safe partner for humans? My position is that architectural robotics should, for the indefinite future, offer the blend of interaction pathways listed above to promise the most productive and pleasing interaction possible. With respect to learning in particular, and all applications of architectural robotics more broadly, I do not see architectural robotic environments as a substitute or replacement for a teacher or any human being. This is a different kind of robotics than industrial robots on the factory floor replacing factory workers, or humanoid robots assuming the role of butler, bellhop, and similar service employees. In architectural robotics, robots and people are envisioned mostly as partners, contributing their individual strengths to complex challenges. In learning applications of architectural robotics, the assumption is that a very capable teacher is present and interacting with students, and that the robotic environment is augmenting the learning activity in ways that would not occur without it.
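To make the blend of interaction pathways above concrete, here is a minimal sketch, in Python, of how a controller might arbitrate among a preset chosen by an inhabitant, a configuration an inhabitant has tuned and saved, and a configuration inferred from sensed activity. The class and method names are hypothetical illustrations for this interview, not an existing system or API.

```python
# Hypothetical sketch of the three interaction pathways described above.
# Names and structure are illustrative assumptions, not an actual product.

from dataclasses import dataclass, field
from typing import Callable, Dict, Optional


@dataclass
class Configuration:
    """A named pose of the robotic environment (e.g., surface heights, panel angles)."""
    name: str
    parameters: Dict[str, float]


@dataclass
class EnvironmentController:
    # Pathway 1: preprogrammed configurations offered to the inhabitant.
    presets: Dict[str, Configuration]
    # Pathway 2: configurations tuned and saved by individual inhabitants.
    saved: Dict[str, Configuration] = field(default_factory=dict)
    # Pathway 3: an activity-recognition function mapping sensor data to an activity label.
    infer_activity: Optional[Callable[[dict], str]] = None

    def select_preset(self, name: str) -> Configuration:
        """Pathway 1: the inhabitant selects one of the offered configurations."""
        return self.presets[name]

    def save_tuned(self, user: str, config: Configuration) -> None:
        """Pathway 2: the inhabitant tunes a configuration and saves it for herself."""
        self.saved[user] = config

    def configure_from_sensing(self, sensor_data: dict) -> Configuration:
        """Pathway 3: the environment assesses activity and configures itself,
        falling back to a safe default when the inference is unavailable or unknown."""
        activity = self.infer_activity(sensor_data) if self.infer_activity else "default"
        return self.presets.get(activity, self.presets["default"])
```

In this sketch, the autonomous pathway always falls back to a known preset when its inference is uncertain, reflecting the point that the environment must remain a dependable and safe partner rather than an unchecked decision-maker.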

What will the workstation of the future look like?

The workstation of the future may look a lot like the workstation of the past; this is because the core of what it means to be human is not changing so fast. I expect we will grow weary of interacting with smart phones and tablets, and yearn for computation embedded in the everyday, physical fabric of our surroundings. In this way, we will inhabit a world that seamlessly flickers between the physical one we have evolved in for 200,000 years and the digital, computational one we have only recently conceived. Here, we will find ourselves as the inhabitants of ecosystems made of physical bits, digital bytes, and biology, a dynamic blend of the physical, digital, and biological realms, changing as human and ecological opportunities and needs arise.