International Journal of Robotic Engineering

Teaming up with Robots: An IMOI (Inputs-Mediators-Outputs-Inputs) Framework of Human-Robot Teamwork

Sangseok You and Lionel Robert

School of Information, University of Michigan, Ann Arbor, MI, USA

*Corresponding author

Sangseok You, School of Information, University of Michigan, Ann Arbor, MI, USA, E-mail: [email protected]

Int J Robot Eng, IJRE-2-003, (Volume 2, Issue 1), Research Article

Received: January 18, 2017
Accepted: July 14, 2017
Published: August 03, 2017

Citation: You S, Robert L (2017) Teaming up with Robots: An IMOI (Inputs-Mediators-Outputs-Inputs) Framework of Human-Robot Teamwork. Int J Robot Eng 2:003.

Copyright: © 2017 You S, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract


Despite the substantial body of literature on human-robot interaction, the ways in which humans and robots work together as a team have been relatively understudied. Current approaches to human-robot teamwork do not fully address team phenomena that involve multiple humans and multiple robots. In this paper, we propose a framework for human-robot teams based on the IMOI (Inputs-Mediators-Outputs-Inputs) framework for teamwork in human teams. The proposed framework describes the developmental process of human-robot teams, in which different characteristics of humans and robots produce team outcomes through various mediators within organizational contexts. The framework provides a theoretical guide to better understand how teams working with robots operate and how to improve various team outcomes.

Keywords


Framework, Human-robot teamwork, Robots in groups, Teams working with robots, Collaboration, Teams

Introduction


Robots are increasingly becoming a central part of teamwork [1]. For instance, search-and-rescue teams employ remote-control robots to help respond to emergencies [2]. Teams of construction workers use remote-control robots to tear down concrete walls [3]. The use of robots in the context of teamwork has the potential to transform teamwork by introducing new dynamics between humans and robots [4,5].

The importance of this topic suggests the need to develop a theoretical framework directed at better understanding teamwork with robots. A theoretical framework can help identify factors that enable or hinder the effectiveness of human-robot teams. The identification of such factors is crucial for two reasons: (1) To achieve theoretical progress in the field of teamwork with robots and (2) To gain a practical understanding of promoting outcomes in such teams.

There have been some efforts to develop a research framework for human-robot interaction [6,7]. Goodrich and Schultz [6] provided a survey of issues pertaining to human-robot interaction. They noted that robots are increasingly deployed in teams and emphasized that the level of robot autonomy shapes the interaction in teams working with robots. They also acknowledged that a unifying framework of human-robot collaboration is needed to advance the subject, but they did not propose one and left it for future work. Scholtz [7] proposed a framework for evaluating interaction by distinguishing the different roles a human can take when working with mobile autonomous robots (e.g., supervisor, operator, mechanic, peer, and bystander).

Current approaches to theorizing teamwork with robots have several shortcomings. First, there is no unifying framework that views teams as an organizational structure of multiple humans and multiple robots. Most efforts to theorize human-robot teamwork focus on interactions between a single operator or human counterpart and a robot [7,8]. Existing models and metrics for human-robot interaction fail to recognize that teams working with robots can have a more complex composition, involving multiple robots and humans, and can thus give rise to various social and psychological phenomena. Second, to the best of our knowledge, there has been no effort to theorize human-robot teamwork that views teams as dynamic and adaptive throughout their life cycle. Existing models and frameworks focus on specific aspects of human-robot collaboration, such as situational awareness [9,10] and workload [11,12]. The existing literature cannot tell us how the various characteristics of humans and robots combine through team processes to yield team outcomes, or what social, emotional, cognitive, and psychological phenomena occur during those processes.

To address these issues, we propose a theoretical framework that describes how teams working with robots operate and improve their outcomes over their developmental life cycles. The framework integrates the literature on teamwork and human-robot interaction (Figure 1) and attempts to capture the dynamic, adaptive, and developmental nature of human-robot teams. In doing so, it incorporates the inputs, mediators, and outputs of human-robot teams in an iterative process of feedback loops. We believe this framework is an initial step to motivate further theoretical development and empirical validation.

The IMOI Framework of Human-Robot Teamwork

Our framework builds on previous frameworks of teamwork in which inputs, mediators, and outputs are identified as key elements of a team's life cycle (see [13,14] for a review). Constructs among the inputs influence emergent states of teamwork with robots (i.e., mediators), eventually producing outputs. Specifically, we adopt the IMOI (Inputs-Mediators-Outputs-Inputs) framework of Ilgen, et al. [13] to represent the cyclic nature of human-robot teams, with feedback loops from outputs to subsequent inputs and mediators over the team life cycle.
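To make the structure of the framework more concrete, the sketch below shows one hypothetical way the IMOI cycle could be encoded in Python. It is purely illustrative and not part of the original framework: all class names, fields, and update rules (e.g., TeamInputs, Mediators, imoi_cycle, the numeric coefficients) are our own assumptions.

```python
# Illustrative only: a simplified, hypothetical encoding of the IMOI cycle.
# Names and coefficients are assumptions, not part of the original framework.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TeamInputs:
    # Individual-level inputs: characteristics of humans and robots
    human_traits: List[str] = field(default_factory=list)
    robot_traits: List[str] = field(default_factory=list)
    # Team-level inputs: composition and job characteristics
    team_composition: str = "homogeneous"
    task_interdependence: float = 0.5

@dataclass
class Mediators:
    shared_mental_model: float = 0.0   # cognitive mediator
    emotional_attachment: float = 0.0  # affective mediator
    coordination: float = 0.0          # behavioral mediator

@dataclass
class Outputs:
    task_work: float = 0.0   # e.g., solution quality
    teamwork: float = 0.0    # e.g., coordination effectiveness
    perceptual: float = 0.0  # e.g., satisfaction

def imoi_cycle(inputs: TeamInputs, mediators: Mediators, episodes: int) -> Outputs:
    """Run repeated task episodes; outputs feed back into mediators and inputs."""
    outputs = Outputs()
    for _ in range(episodes):
        # Inputs shape mediators (Proposition 2)
        mediators.shared_mental_model += 0.1 * inputs.task_interdependence
        mediators.coordination += 0.1 * mediators.shared_mental_model
        # Mediators shape outputs (Proposition 4)
        outputs.task_work = mediators.shared_mental_model * mediators.coordination
        outputs.teamwork = mediators.coordination
        outputs.perceptual = mediators.emotional_attachment
        # Feedback loop: outputs influence subsequent mediators and inputs (Proposition 6)
        mediators.emotional_attachment += 0.05 * outputs.task_work
        inputs.task_interdependence = min(1.0, inputs.task_interdependence + 0.02 * outputs.teamwork)
    return outputs
```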

Inputs

The inputs represent resources and properties available to teams [15]. They include individual-level inputs, such as the characteristics of individual team members and robots, and team-level inputs, such as team composition and job characteristics. The team-level inputs are influenced by the individual-level inputs, as shown by the solid arrow from the individual level to the team level on the left side of Figure 1.

Our framework includes combinations of robot and human characteristics that can produce team compositions and structures unique to human-robot teamwork. Robots in teams can be perceived to possess humanlike attributes such as gender, ethnicity, knowledge, ability, and personality [16-18], because people often ascribe agency to robots and treat them as social entities [4]. For instance, a human-robot team can be considered homogeneous when a robot is perceived to have the same attributes as other team members [19]. Similarly, a human-robot team should be viewed as diverse when a robot is perceived to have different attributes from other team members. We would expect diversity between humans and robots to have the same impact it has on teamwork in all-human teams, including decreased social integration and increased conflict [20-22]. Therefore, our framework places the same emphasis on robot characteristics as it does on human characteristics in determining team-level characteristics.

It is possible that a robot's perceived gender influences the formation of subgroups within teams working with robots. For instance, female team members may feel close to robots perceived to be female, while not holding the same positive perception of robots perceived to be male. In such a case, a team can divide into two subgroups based on the gender of its humans and robots.

Proposition 1: Individual-level characteristics of robots and humans can influence team-level characteristics of human-robot teams.

Our framework depicts inputs influencing subsequent mediators and, eventually, outputs. This relationship can occur at both the team and individual levels. For example, at the team level, task interdependence is critical to communication and coordination between humans and robots during teamwork [23]. Task interdependence between humans and robots has been shown to help teams build more accurate mental models of the task and to improve team performance [24]. At the individual level, research suggests that individuals evaluate robots more positively when the robots are perceived to have a similar personality or social identity, such as ethnicity [16,25].

Inputs at the team level can influence mediators and outcomes at the individual level. For instance, the composition of a human-robot team may determine the level of individual motivation and satisfaction of its team members. In teams that involve multiple human team members, individual effectiveness may be a function of both team-level inputs and individual-level inputs [13,26,27].

As an example, perceptions of robotic teammates can vary with the attributes of human team members and the characteristics of the task. When the task structure of teamwork with robots is competitive, male team members may perceive their robotic teammates as less friendly, which might cause anxiety about the task [28]. In this case, the gender of a robot can be as important as that of human team members under certain circumstances.

Proposition 2: Inputs influence mediators and subsequent outputs in human-robot teams.

Proposition 3: The influence of team-level inputs can occur at the individual and team levels.

Mediators

Mediators are emergent processes or states through which the effects of inputs are manifested. For individuals, mediators are often attitudes and beliefs. For teams and groups, they are typically processes that result from the interactions necessary for combining different inputs [29]. Mediators can also be viewed as outputs of the team's inputs.

Mediators of human-robot teams can be present between humans, and between humans and robots. For example, shared mental models are important cognitive mediators. Accurate mental models usually promote team performance and reduce cognitive load [30]. Shared mental models can exist between humans and robots [24], and between humans [30]. In first-responder teams, team members are often scattered across locations [2,23]. Communication among humans and robots is required to maintain accurate shared mental models of the situation [2].

Emotional attachment, defined as an affective reaction toward robots or other humans, is another mediator [31]. When team members are emotionally attached to their robots, they are likely to be more motivated to perform tasks with the robots and to perceive the work with the robots as more rewarding [18,31]. However, emotional attachment can also deter teams from deploying robots to risky situations [31]. A study of military bomb disposal teams showed that team members were reluctant to send their robots on missions deemed too dangerous, and such reluctance could affect the success and performance of mission operations [31].

Behavioral mediators matter as well: effective communication and coordination have been shown to improve team outcomes both with [32,33] and without robots [15]. Communication and coordination with robots are areas with rich empirical evidence. For instance, behavioral coordination through cross-training has been shown to be effective in achieving accurate shared mental models in teams working with robots [24]. Research also shows that robots that speak natural language are perceived to be more intelligent and friendly [34].

Proposition 4: Cognitive, affective, and behavioral mediators influence outputs.

Team-level mediators can also influence individual-level outputs. Team trust can affect the relationship between individual trust and individual performance [35]. It is also possible that mediators such as team cohesion and communication can influence whether team members want to remain on the team.

Shared mental models illustrate the impact of team-level mediators. A shared mental model can form solely among human team members, most commonly in teams whose robots are remotely controlled by human operators; research shows that accurate shared mental models among the operators are crucial to the success of the teamwork [36]. In addition, shared mental models between humans and robots should influence team performance [24]. When robots are autonomously navigating an area, it is important that the robots have an accurate map of the area and that the humans know the robots' capabilities and the boundaries of their navigation. In this case, the human-robot shared mental model can influence human teammates' workload and the effectiveness of robot behaviors.

Proposition 5: The influence of team-level mediators can occur at the individual and team levels.

Outputs

Outputs fall into three categories: task work, teamwork, and perceptual outcomes. In human-robot teams, task work can include task completion time, solution quality, and error rate, while teamwork can include communication efficiency and effectiveness, awareness, and coordination. Perceptual outcomes are attitudinal and emotional reactions, such as satisfaction.

Our framework also attempts to capture the role of time. The original IPO (Input-Process-Output) model has been criticized for focusing on a linear path from inputs through outcomes [29]. Most teams, however, undergo developmental processes and feedback loops as they mature [14]. This means that mediators and outputs can influence subsequent inputs and mediators through feedback loops (shown by the solid arrows on the right side of Figure 1). In other words, as past research on appropriation has shown, time matters [37-39]. Therefore, we should expect past interactions to play a fundamental role in the future interactions of human-robot teams.

As an example, time matters for task knowledge and skill. A human-robot team could start with little task knowledge (inputs), which could influence its shared mental models (mediators) and ultimately its initial performance (outputs). As the team repeats the task, it improves, which in turn influences the mediators and outputs of future tasks. Feedback from previous outputs is likely to affect mediators more than inputs: mediators are often subject to change based on a team's previous performance and experiences, whereas inputs, such as robot specifications and individual traits, tend to be static and less dynamic.

Proposition 6: There are feedback loops in which mediators and outputs influence subsequent mediators and inputs in a cyclic manner.

Lastly, the organizational context influences the inputs, mediators, and outputs associated with human-robot teams. Teams are often embedded in a larger organizational context, and organizations help determine both the operation and the management of human-robot teams. Organizations provide the resources that facilitate teamwork; for instance, they can provide training and support to human-robot teams [15]. Consistent training and support from the organization can be critical, particularly for human-robot teams [27]. Team members are likely to build strong social relationships with their robots through prolonged interactions over the team life cycle.

Proposition 7: The organizational contexts of human-robot teams can influence their inputs, mediators, and outputs by providing favorable conditions and resources.

Discussion


Contributions of the framework

There are three advantages to this framework. First, it acknowledges compositions of human-robot teams beyond one robot and one human. Given that many human-robot teams consist of multiple robots and their operators, both human-human and human-robot collaboration should be examined to better understand how these teams achieve their goals synergistically. Our framework incorporates not only the distinct characteristics of individual humans and robots but also the various compositions formed from those characteristics. This includes collaboration as joint action between and among humans and robots to accomplish a shared goal [32].

Second, the framework captures individual-level, team-level, and multilevel relationships. Most research focuses on the individual level and often ignores the team context. Our framework describes how team characteristics influence individual mediators and outputs. A multilevel approach is essential for investigating the impact of the team level on the individual level [40,41].

Third, our framework considers the role of time by including feedback loops, making it possible to investigate how different team compositions are converted into outputs through mediators over time. Many researchers have treated variables such as attraction to and attachment toward a robot as end points of human-robot interaction, mainly for predicting individual adoption of social robots. However, human-robot teams often repeat similar tasks and interact with the robots assigned to them throughout the team life cycle. In this case, previous performance can alter a team's perception of its robots and the ways in which mediators shape subsequent interactions.

Future work

The current theoretical framework includes a broad range of constructs that can emerge in teams working with robots. Despite its broad scope, the framework should be refined as empirical evidence accumulates. Scholars in the relevant fields, including information systems, robotics, and human-robot interaction, should make a collective effort to test the proposed relationships and phenomena through empirical studies. This would involve an iterative process in which findings from empirical studies are incorporated into the model and the revised model supplies research questions for future research. Validating and enhancing the model could be done both with teams working with robots in practice and in experimental settings. For instance, investigating the impact of organizational contexts should involve real teams working with robots, whereas experiments would allow scholars to test the impact of different compositions of humans and robots on team outcomes. We believe this framework is one of the first steps toward enhancing the literature on human-robot teamwork and better understanding these phenomena.

In addition, as empirical data accumulate, the framework can help in developing better machine-learning algorithms for robots deployed to work alongside human team members. For instance, robot engineers could be better informed when designing robot behaviors for teams with particular compositions of humans and robots. Robots could be designed to adapt to human team members' personalities and to manifest a personality similar to or different from those of the humans on the team. Machine-learning algorithms could also incorporate the categories of outcomes identified in the model, such as task work, teamwork, and perceptual outcomes. In this sense, the framework is expected to be helpful in developing algorithms for robots deployed to teams.
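As a purely illustrative sketch of this idea (not taken from the paper), the following Python example treats team composition inputs as features and the framework's three outcome categories as prediction targets. The feature names and toy numbers are assumptions; a real system would require empirical data from teams working with robots.

```python
# Hypothetical sketch: learning to predict framework outcome categories
# (task work, teamwork, perceptual) from team composition inputs.
# Feature names and values are illustrative assumptions, not empirical data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [n_humans, n_robots, perceived_similarity, task_interdependence, training_hours]
X = np.array([
    [2, 1, 0.8, 0.6, 4.0],
    [3, 2, 0.4, 0.9, 1.5],
    [4, 1, 0.6, 0.3, 2.0],
])
# Targets: the three outcome categories, scaled 0-1 (toy values)
y = np.array([
    [0.7, 0.8, 0.9],
    [0.5, 0.6, 0.4],
    [0.6, 0.5, 0.7],
])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict expected outcomes for a new team composition; a robot controller
# could use such predictions to select behaviors suited to the team.
new_team = np.array([[3, 1, 0.7, 0.5, 3.0]])
print(model.predict(new_team))
```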

Conclusion


Despite the increasing use of robots in teams, research on teams working with robots still lacks a theoretical guide for understanding how such teams operate and enhance their performance. Although the current literature on human-robot collaboration addresses some issues of teamwork between a single human and a robot, it fails to acknowledge the wide range of team phenomena that involve multiple humans and robots. To address this issue, we propose a theoretical framework for human-robot teamwork. The framework consists of inputs, mediators, and outputs across the life cycle of teams working with robots. It describes how team processes emerge from the various resources and environments of such teams and how they turn into team outcomes along multiple dimensions. The proposed framework treats teams working with robots as a developing organizational structure in which teams can evolve, learn from the interactions among team members and robots, and improve their outcomes over time. The framework is one of the first steps toward establishing a better understanding of teamwork with robots, and it will require an iterative process of validation that incorporates findings from empirical studies.

Figures




Figure 1: The IMOI Framework of Human-Robot Teamwork.




References


  1. Lionel P Robert, Sangseok You (2014) Human-robot interaction in groups: Theory, method, and design for robots in groups. Proceedings of the 18th International Conference on Supporting Group Work, (GROUP '14), 310-312.

  2. Jennifer L Burke, Robin R Murphy, Michael D Coovert, Dawn L Riddle (2004) Moonlight in Miami: Field study of human-robot interaction in the context of an urban search and rescue disaster response training exercise. Human-Computer Interaction 19: 85-116.

  3. CC Sullivan, A Sullivan (2014) Robots, drones, and printed buildings: The promise of automated construction. Building Design + Construction.

  4. Victoria Groom, Clifford Nass (2007) Can robots be teammates? Benchmarks in human-robot teams. Interaction Studies 8: 483-500.

  5. Holly A Yanco, Jill L Drury (2004) Classifying human-robot interaction: An updated taxonomy. IEEE International Conference on Systems, Man and Cybernetics 3: 2841-2846.

  6. Michael A Goodrich, Alan C Schultz (2007) Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction 1: 203-275.

  7. Jean Scholtz (2003) Theory and evaluation of human robot interactions. System Sciences, Proceedings of the 36th Annual Hawaii International Conference.

  8. Terrence Fong, David Kaber, Michael Lewis, Jean Scholtz, Alan Schultz, et al. (2004) Common metrics for human-robot interaction. IEEE 2004 International Conference on Intelligent Robots and Systems, Sendai, Japan.

  9. M Ani Hsieh, Anthony Cowley, James F Keller, Luiz Chaimowicz, Ben Grocholsky, et al. (2007) Adaptive teams of autonomous aerial and ground robots for situational awareness. Journal of Field Robotics 24: 991-1014.

  10. Mieczyslaw M Kokar, Mica R Endsley (2012) Situation awareness and cognitive modeling. IEEE Intelligent Systems 27: 91-96.

  11. Raja Parasuraman, Thomas B Sheridan, Christopher D Wickens (2008) Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs. Journal of Cognitive Engineering and Decision Making 2: 140-160.

  12. Matthew S Prewett, Ryan C Johnson, Kristin N Saboe, Linda R Elliott, Michael D Coovert (2010) Managing workload in human-robot interaction: A review of empirical studies. Computers in Human Behavior 26: 840-856.

  13. Daniel R Ilgen, John R Hollenbeck, Michael Johnson, Dustin Jundt (2005) Teams in organizations: From input-process-output models to IMOI models. Annual Review of Psychology 56: 517-543.

  14. John Mathieu, M Travis Maynard, Tammy Rapp, Lucy Gilson (2008) Team effectiveness 1997-2007: A review of recent advancements and a glimpse into the future. Journal of Management 34: 410-476.

  15. Steve WJ Kozlowski, Bradford S Bell (2003) Work groups and teams in organizations. Handbook of Psychology.

  16. Emily P Bernier, Brian Scassellati (2010) The similarity-attraction effect in human-robot interaction. Development and Learning (ICDL), 2010 IEEE 9th International Conference, 286-290.

  17. Dingjun Li, PL Patrick Rau, Ye Li (2010) A cross-cultural study: Effect of robot appearance and task. International Journal of Social Robotics 2: 175-186.

  18. Lionel P Robert, Sangseok You (2015) Subgroup formation in teams working with robots. Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, 2097-2102.

  19. Maxim Makatchev, Reid Simmons, Majd Sakr, Micheline Ziadee (2013) Expressing ethnicity through behaviors of a robot character. Proceedings of the 8th ACM/IEEE International Conference on Human-robot Interaction, 357-364.

  20. Jamie Newell, Likoebe Maruping, Cynthia Riemenschneider, Lionel Robert (2008) Leveraging e-identities: The impact of perceived diversity on team social integration and performance. ICIS 2008 Proceedings 46.

  21. Lionel P Robert, Daniel M Romero (2015) Crowd size, diversity and performance. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 1379-1382.

  22. Jaime B Windeler, Likoebe M Maruping, Lionel P Robert, Cynthia K Riemenschneider (2015) E-profiles, conflict, and shared understanding in distributed teams. Journal of the Association for Information Systems 16.

  23. Hank Jones, Pamela Hinds (2002) Extreme work teams: Using SWAT teams as a model for coordinating distributed robots. Proceedings of the 2002 ACM conference on Computer supported cooperative work, 372-381.

  24. Stefanos Nikolaidis, Julie Shah (2013) Human-robot cross-training: Computational formulation, modeling and evaluation of a human team training strategy. Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, 33-40.

  25. Friederike Eyssel, Steve Loughnan (2013) 'It don't matter if you're black or white'? Social Robotics, Springer, 422-431.

  26. Sankara-Subramanian Srinivasan, Likoebe M Maruping, Lionel P Robert (2010) Mechanisms underlying social loafing in technology teams: An empirical analysis. ICIS 2010 Proceedings 183.

  27. Sangseok You, Lionel Robert (2016) Curiosity vs. control: Impacts of training on performance of teams working with robots. Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, 449-452.

  28. Bilge Mutlu, Steven Osman, Jodi Forlizzi, Jessica Hodgins, Sara Kiesler (2006) Task structure and user attributes as elements of human-robot interaction design. Robot and Human Interactive Communication, ROMAN 2006. The 15th IEEE International Symposium, 74-79.

  29. Joseph Edward McGrath (1984) Groups: Interaction and performance. Prentice-Hall, Englewood Cliffs, NJ.

  30. Lionel P Robert, Alan R Dennis, Manju K Ahuja (2008) Social capital and knowledge integration in digitally enabled teams. Information Systems Research 19: 314-334.

  31. Julie Carpenter (2014) Just doesn't look right: Exploring the impact of humanoid robot integration into explosive ordnance disposal teams.

  32. Cynthia Breazeal, Guy Hoffman, Andrea Lockerd (2004) Teaching and working with robots as a collaboration. Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems 3: 1030-1037.

  33. Danielli A Lima, Claudiney R Tinoco, Juan MN Viedman, Gina MB Oliveira (2017) Coordination, synchronization and localization investigations in a parallel intelligent robot cellular automata model that performs foraging task. ICAART 2: 355-363.

  34. Cristen Torrey, Aaron Powers, Matthew Marge, Susan R Fussell, Sara Kiesler (2006) Effects of adaptive robot dialogue on information exchange and social relations. Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 126-133.

  35. Sirkka L Jarvenpaa, Thomas R Shaw, D Sandy Staples (2004) Toward contextualized theories of trust: The role of trust in global virtual teams. Information Systems Research 15: 250-267.

  36. Scott Ososky, David Schuster, Florian Jentsch, Stephen Fiore, Randall Shumaker, et al. (2012) The importance of shared mental models and shared situation awareness for transforming robots from tools to teammates. SPIE Defense, Security, and Sensing.

  37. Gerardine DeSanctis, Marshall Scott Poole (1994) Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science 5: 121-147.

  38. Robert M Fuller, Alan R Dennis (2009) Does fit matter? The impact of task-technology fit and appropriation on team performance in repeated tasks. Information Systems Research 20: 2-17.

  39. Sangseok You, Lionel P Robert, Soo Young Rieh (2015) The appropriation paradox: Benefits and burdens of appropriating collaboration technologies. Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, 1741-1746.

  40. Lionel P Robert, Sangseok You (2013) Are you satisfied yet? Shared leadership, trust and individual satisfaction in virtual teams.

  41. Sankara-Subramanian Srinivasan, Likoebe M Maruping, Lionel P Robert (2012) Idea generation in technology-supported teams: A multilevel motivational perspective. System Science (HICSS), 2012 45th Hawaii International Conference, 247-256.