Adaptive collaborative control

Adaptive collaborative control is the decision-making approach used in hybrid models consisting of finite-state machines with functional models as subcomponents to simulate the behavior of systems formed through the partnership of multiple agents for the execution of tasks and the development of work products. The term “collaborative control” originated in work by Fong, Thorpe, and Baur in the late 1990s and early 2000s.[1] According to Fong et al., robots must be self-reliant, aware, and adaptive in order to function under collaborative control.[2] The adjective “adaptive” is often omitted in the literature, but adaptivity is an essential element of collaborative control. The approach departs from traditional applications of control theory in teleoperation by replacing the “human as controller, robot as tool” hierarchy with humans and robots working as peers, collaborating to perform tasks and to achieve common goals.[2] Early implementations of adaptive collaborative control centered on vehicle teleoperation.[1][2][3] More recent uses cover training, analysis, and engineering applications in teleoperation between humans and multiple robots, collaboration among multiple robots, unmanned vehicle control, and fault-tolerant controller design.

Like traditional control methodologies, adaptive collaborative control accepts inputs to the system and regulates the output based on a predefined set of rules. The difference is that those rules or constraints apply only to the higher-level strategy (goals and tasks) set by humans. Lower, tactical-level decisions are more adaptive, flexible, and accommodating to varying levels of autonomy, interaction, and agent (human and/or robotic) capabilities.[2] Models under this methodology may query sources when uncertainty in a task affects the overarching strategy. That interaction produces an alternative course of action if it provides more certainty in support of the overarching strategy; if not, or if there is no response, the model continues performing as originally anticipated. Several considerations are important when implementing adaptive collaborative control for simulation. As discussed earlier, data is provided by multiple collaborators to perform the necessary tasks. This basic function requires data fusion on the part of the model and potentially a prioritization scheme for handling a continuous stream of recommendations. The degree of autonomy of the robot in human–robot interaction, and the weighting of decisional authority in robot–robot interaction, are important to the control architecture. Interface design is an important human–systems integration consideration: because humans interpret messages in varied ways, the design must ensure that the robots convey their messages correctly when interacting with humans.
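As a minimal sketch of this query-and-fallback behavior (all names, thresholds, and the confidence test are illustrative assumptions, not drawn from the cited sources), the controller below queries a collaborator when a task's uncertainty threatens the overarching strategy and otherwise continues as originally anticipated:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Advice:
    action: str          # alternative course of action suggested by the source
    confidence: float    # how certain the source is about the suggestion

def choose_action(planned_action: str,
                  uncertainty: float,
                  query_source: Callable[[str], Optional[Advice]],
                  threshold: float = 0.3) -> str:
    """Return the action to execute for the current task.

    If the model's uncertainty exceeds the threshold, query a
    collaborator (human or robot).  Adopt the advice only when it
    offers more certainty than the current plan; otherwise keep
    performing as originally anticipated.
    """
    if uncertainty <= threshold:
        return planned_action                      # tactical plan is safe
    advice = query_source(f"Uncertain about '{planned_action}'. Alternative?")
    if advice is not None and advice.confidence > 1.0 - uncertainty:
        return advice.action                       # advice improves certainty
    return planned_action                          # no (useful) response: continue
```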

History

The history of adaptive collaborative control began in 1999 with the efforts of Terrence Fong and Charles Thorpe of Carnegie Mellon University and Charles Baur of École Polytechnique Fédérale de Lausanne.[1] Fong et al. believed that existing telerobotic practices, which centered on a human point of view, were sufficient for some domains but sub-optimal for operating multiple vehicles or controlling planetary rovers.[1] Their new approach focused on a robot-centric teleoperation model that treated the human as a peer, making requests of the human in the manner a person would seek advice from an expert. In the seminal work, Fong et al. implemented the collaborative control design using a Pioneer-AT mobile robot and a UNIX workstation with wireless communications and distributed message-based computing.[1] Two years later, Fong applied collaborative control to several more applications, including the collaboration of a single human operator with multiple mobile robots for surveillance and reconnaissance.[2][4] Around the same time, Goldberg and Chen presented an adaptive collaborative control system in which some sources could malfunction.[3] Their control design maintained robust performance even when subjected to a sizeable fraction of malfunctioning sources.[3] Goldberg and Chen also expanded the definition of collaborative control to include multiple sensors and multiple control processes, in addition to human operators, as sources. A collaborative, cognitive workspace in the form of a three-dimensional representation, developed by Idaho National Laboratory to support human operators' understanding of tasks and environments, expands on Fong's seminal work, which used textual dialogue for human–robot interaction.[5] The success of the 3-D display provided evidence that shared mental models increase team success.[6][7] During the same period, Fong et al.[8] developed a three-dimensional display formed by fusing sensor data. More recently, in 2010, adaptive collaborative control was used to design a fault-tolerant control system through a Lyapunov-function-based analysis.

Initialization

The simuland for adaptive collaborative control centers on robotics. As such, adaptive collaborative control follows the tenets of control theory applied to robotics at its most basic level.[3] The states of the robot are observed at a given instant and checked against an accepted bound. If a state falls outside the bound, the estimated states of the robot at some future time are calculated using the equations of dynamics and kinematics.[9] The process of entering observation data into the model to generate initial conditions is called initialization. Initialization for adaptive collaborative control occurs differently depending on the environment: robotics-only or human–robot interaction.[10] In a robotics-only environment, initialization occurs much as described above. A robot, system, subsystem, or other non-human entity observes a state that is not in accordance with the higher-level strategy. The entities aware of this error use the appropriate equations to present a revised value for a future time step to their peers. For human–robot interaction, initialization can occur at two different levels. The first level is the one previously described: the robot notices an anomaly in its states that is inconsistent or problematic with respect to its higher-level strategy, and it queries the human for advice to resolve its dilemma. In the other case, the human has cause either to query some aspect of the robot's state (e.g., health, trajectory, speed)[2] or to present advice to the robot, which is weighed against the robot's existing tactical approach to the higher-level strategy. The main inputs for adaptive collaborative control are dialogue-based commands or values presented by either a human or a robotic element. The inputs used in the system models serve as the starting point for the collaboration. A number of ways are available to gather observational data for use in functional models. The simplest is direct human observation of the robotic system. Self-monitoring features such as built-in test (BIT) can provide regular reports on important system characteristics. A common approach is to employ sensors throughout the robotic system: teleoperated vehicles carry speedometers to indicate how fast they travel; robotic systems with stochastic or cyclic motion often employ accelerometers to record the forces exerted; and GPS sensors provide a standardized, nearly universal data type for depicting location. Multi-sensor systems have been used to gather heterogeneous observational data for applications in path planning.[11][12]
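A minimal sketch of this initialization check, under assumed names and a simple constant-velocity kinematic model (the cited works do not prescribe a specific formulation):

```python
import numpy as np

def initialize(state: np.ndarray,
               velocity: np.ndarray,
               lower: np.ndarray,
               upper: np.ndarray,
               dt: float) -> tuple[np.ndarray, bool]:
    """Check an observed state against its accepted bounds.

    If the state is within bounds it is used as-is; otherwise a
    kinematic prediction (here, constant-velocity propagation)
    supplies an estimated state at the next time step to present
    to the entity's peers.
    """
    in_bounds = bool(np.all((state >= lower) & (state <= upper)))
    if in_bounds:
        return state, False                 # no revision needed
    predicted = state + velocity * dt       # x(t + dt) ~ x(t) + v*dt
    return predicted, True                  # flag that peers should be consulted
```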

Computation

Adaptive collaborative control is most accurately modeled as a closed-loop feedback control system. Closed-loop feedback control describes the case in which the outputs produced by a system in response to an input are used to influence the present or future behavior of the system.[13] The feedback control model is governed by a set of equations used to predict the future state of the simuland and regulate its behavior. These equations, in conjunction with principles of control theory, evolve the physical operations of the simuland over time, including but not limited to dialogue, path planning, motion, monitoring, and lifting objects. These equations are often modeled as nonlinear partial differential equations over a continuous time domain. Due to their complexity, powerful computers are necessary to implement these models. A consequence of using computers is that continuous systems cannot be computed exactly; instead, numerical methods such as the Runge–Kutta methods are used to approximate the continuous models.[14] The equations are initialized from the response of one or more sources, and rates of change and outputs are calculated. These rates of change predict the states of the simuland a short time in the future; the time increment of this prediction is called a time step. The new states are applied to the model to determine new rates of change and observational data, and this process repeats until the desired number of iterations is completed. If a future state violates a constraint, or comes within a set tolerance of violating one, the simuland confers with its human counterpart, seeking advice on how to proceed. The outputs, or observational data, are used by the human operators to determine what they believe is the best course of action for the simuland. Their commands are fed with the input into the control system and assessed for effectiveness in resolving the issue. If the human commands are judged valuable, the simuland adjusts its control input to follow them. If the human's commands are judged unbeneficial, malicious, or are absent, the model pursues its own correction approach.
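As a hedged illustration of this loop (the dynamics, thresholds, and function names are placeholder assumptions, not from the cited sources), the sketch below advances a model with a classical fourth-order Runge–Kutta step and confers with the human operator whenever the predicted state comes within tolerance of a constraint:

```python
import numpy as np
from typing import Callable

def rk4_step(f: Callable, x: np.ndarray, u: float, dt: float) -> np.ndarray:
    """One classical Runge–Kutta step for dx/dt = f(x, u)."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(f, x0, u0, dt, steps, limit, tol, ask_human):
    """Closed-loop run that defers to the operator near a constraint."""
    x, u = np.asarray(x0, dtype=float), u0
    for _ in range(steps):
        x_next = rk4_step(f, x, u, dt)            # predicted state one time step ahead
        if np.linalg.norm(x_next) > limit - tol:  # violation, or within tolerance of one
            suggestion = ask_human(x_next)        # confer with the human counterpart
            if suggestion is not None:            # beneficial advice: adopt it
                u = suggestion
            else:                                 # absent/unhelpful: self-correct
                u = -0.5 * float(np.sum(x))       # placeholder correction law
        x = rk4_step(f, x, u, dt)                 # advance with the chosen control
    return x
```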

Domain and codomain

The domain of the models used to conduct adaptive collaborative control consists of commands, queries, and responses from the human operator at the finite-state-machine level. Commands from the human operator provide the agent with additional input to its decision-making process. This information is particularly beneficial when the human is a subject-matter expert, or when the human knows how to reach the overarching goal while the agent is focused on only one aspect of the problem. Queries from the human gather status information on the agent's support functions or determine progress on missions. The robot's response often serves as precursor information for issuing a command as human assistance to the agent. Responses from the human operator are initiated by queries from the agent and feed back into the system as additional input that can regulate an action or set of actions by the agent. At the functional-model level, the system translates all accepted commands from the human into control inputs used to carry out the tasks defined for the agent. Due to the autonomous nature of the simuland, input from the agent itself is fed into the machine to operate sustaining functions and to handle tasking that the human operator has ignored or answered insufficiently. The codomain of the models that utilize adaptive collaborative control consists of queries, information statements, and responses from the agent. Queries and information statements are elements of the dialogue exchange at the finite-state-machine level. Queries from the agent are the system's way of soliciting a response from a human operator; this is particularly important when the agent is physically stuck or at a logical impasse. The types of queries the agent can ask must be predefined by the modeler. The frequency and detail of a particular query depend on the expertise of the human operator, or more accurately, the expertise of the human operator as identified to the agent. When the agent responds, it sends an information statement to the human operator that briefly describes what the adaptive collaborative control system decided. At the functional-model level, the action associated with the information statement is carried out.
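This domain and codomain can be pictured as a small vocabulary of dialogue messages. The structure below is an illustrative assumption (actual systems predefine their own query types and formats), not a layout taken from the literature:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Sender(Enum):
    HUMAN = auto()
    AGENT = auto()

class Kind(Enum):
    COMMAND = auto()      # human -> agent: extra input for decision making
    QUERY = auto()        # either direction: status or advice request
    RESPONSE = auto()     # answer to a query
    INFORMATION = auto()  # agent -> human: what the controller decided

@dataclass
class Message:
    sender: Sender
    kind: Kind
    text: str

# Example exchange: the agent is at an impasse and solicits help.
dialogue = [
    Message(Sender.AGENT, Kind.QUERY, "Path blocked. Continue left or right?"),
    Message(Sender.HUMAN, Kind.RESPONSE, "left"),
    Message(Sender.AGENT, Kind.INFORMATION, "Replanning path to the left."),
]
```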

Applications

Vehicle teleoperation

Vehicle teleoperation has existed for many years. Early vehicle teleoperation systems were robotic vehicles controlled continuously by human operators. Many of these systems operated over line-of-sight RF communications and are now regarded as children's toys. Recent developments in unmanned systems have brought a measure of autonomy to the robots. Adaptive collaborative control offers a shared mode of control in which robotic vehicles and humans exchange ideas and advice about the best decisions for route following and obstacle avoidance. This shared mode of operation mitigates the problems of humans operating remotely in hazardous environments with poor communications[15][16] and the limited performance obtained when humans exercise continuous, direct control.[17] In vehicle teleoperation, robots query humans for input on decisions that affect their tasks or when safety-related issues arise. This dialogue is presented through an interface module that also lets the human operator view the impact of the dialogue and see what the robot's sensors capture, so the operator can initiate commands or inquiries as necessary.[1][2]

Fault-tolerant system

In practice, there are cases where multiple subsystems work together to achieve a common goal; this is fairly common practice in reliability engineering. When systems work together collaboratively, the reliable operation of the overarching system is an important issue.[9] Fault-tolerant strategies are combined with the subsystems to form a fault-tolerant collaborative system. A direct application is the case where two robotic manipulators work together to grasp a common object.[18] For these systems, it is important that when one subsystem becomes faulty, the healthy subsystem reconfigures itself to operate alone so that the whole system can continue its operations until the faulty subsystem is repaired. The subsystems maintain a dialogue between themselves to determine one another's status; when one subsystem begins to exhibit numerous or dangerous faults, the other subsystem takes over the operation until the faulty one is repaired.
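A minimal sketch of this status dialogue and takeover logic, with hypothetical names and a simplified health check (the cited design in [9] uses an output-regulation scheme rather than this rule):

```python
from dataclasses import dataclass

FAULT_LIMIT = 3  # assumed threshold for "numerous" faults

@dataclass
class Subsystem:
    name: str
    fault_count: int = 0
    dangerous_fault: bool = False

    def healthy(self) -> bool:
        return self.fault_count < FAULT_LIMIT and not self.dangerous_fault

def assign_operation(primary: Subsystem, secondary: Subsystem) -> Subsystem:
    """Exchange status and decide which subsystem carries the load.

    When the primary reports numerous or dangerous faults, the healthy
    secondary reconfigures to operate alone until repairs are made.
    """
    if primary.healthy():
        return primary
    if secondary.healthy():
        return secondary        # takeover until the faulty subsystem is repaired
    raise RuntimeError("both subsystems faulty; whole system cannot operate")
```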

Levels of autonomy

Four levels of autonomy have been devised to serve as a baseline for human–robot interactions that include adaptive collaborative control.[5] The four levels, ranging from fully manual to fully autonomous, are: tele mode, safe mode, shared mode, and autonomous mode.[5][19][20] Adaptive collaborative controllers typically range from shared mode to autonomous mode. The two modes of interest are described here, with a dispatch sketch following the list:

  • Shared mode – robots can relieve the operator of the burden of direct control, using reactive navigation to find a path based on their perception of the environment. Shared mode provides for a dynamic allocation of roles and responsibilities. The robot accepts varying degrees of operator intervention and supports dialogue through the use of a finite number of scripted suggestions (e.g. “Path blocked! Continue left or right?”) and other text messages that appear within the graphical user interface.
  • Autonomous mode – robots self-regulate high-level tasks such as patrol, search region, or follow path. In this mode, the only user intervention occurs at the tasking level; i.e., the robot manages all decision-making and navigation.
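A hedged sketch of how a controller might dispatch between these two modes (the mode names come from [5]; the function, parameters, and scripted prompt handling are illustrative assumptions):

```python
from enum import Enum, auto

class Mode(Enum):
    TELE = auto()
    SAFE = auto()
    SHARED = auto()
    AUTONOMOUS = auto()

def control_step(mode: Mode, path_blocked: bool, ask_operator=None) -> str:
    """Decide the next action for one control cycle."""
    if mode is Mode.SHARED and path_blocked and ask_operator is not None:
        # Scripted suggestion presented through the graphical user interface.
        answer = ask_operator("Path blocked! Continue left or right?")
        return f"turn {answer}" if answer in ("left", "right") else "replan"
    if mode is Mode.AUTONOMOUS:
        return "replan"        # the robot manages all decision-making itself
    return "await operator"    # tele and safe modes stay under direct control
```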

Limitations

Like many other control strategies, adaptive collaborative control has limits to its capabilities. Although adaptive collaborative control allows many tasks to be automated and other predefined cases to trigger queries to the human operator, unstructured decision making remains the domain of humans, especially when common sense is required.[21] In particular, robots exercise poor judgment in high-level perceptual functions such as object recognition and situation assessment.[22] A large number of tasks, or a single task that is very involved, may generate many questions, increasing the complexity of the dialogue; this in turn adds complexity to the system design.[2] To retain its adaptive nature, the flow of control and information through the simuland varies with time and events. This dynamism makes debugging, verification, and validation difficult, because it is harder to precisely identify an error condition or duplicate a failure situation.[23] This becomes particularly problematic if the system must operate in a regulated facility, such as a nuclear power plant or a wastewater facility. Issues that affect human teams also encumber adaptive collaboratively controlled systems: in both cases, team members must coordinate activities, exchange information, communicate effectively, and minimize the potential for interference.[2] Other factors that affect teams include resource distribution, timing, sequencing, progress monitoring, and procedure maintenance. Collaboration requires that all partners trust and understand one another. To do so, each collaborator needs an accurate idea of what the others are capable of doing and how they will carry out an assignment.[2] In some cases, the agent may have to weigh the responses from a human, and the human must believe in the decisions a robot makes.

References

  1. ^ a b c d e f Fong, T.; Thorpe, C.; Baur, C. (1999). "Collaborative Control: A robot-centered model for vehicle teleoperation" (PDF). Proceedings of the AAAI Spring Symposium on Agents with Adjustable Autonomy. Menlo Park, California: AAAI.
  2. ^ a b c d e f g h i j Fong, T. (2001), Collaborative Control: A Robot-Centric Model for Vehicle Teleoperation (PDF), Carnegie Mellon University
  3. ^ a b c d Goldberg, K.; Chen, B. (2001). "Collaborative control of robot motion: robustness to error" (PDF). Proceedings of the 2001 International Conference on Intelligent Robots and Systems.
  4. ^ Fong, T.; Grange, Sebastien; Thorpe, C.; Baur, C. (September 2001). "Multi-robot remote driving with collaborative control". IEEE International Workshop on Robot-Human Interactive Communication. Vol. 50. Bordeaux and Paris, France. pp. 699–704. doi:10.1109/TIE.2003.814768.
  5. ^ a b c Bruemmer, D.J.; Few, D.A.; Boring, R.L.; Marble, J.L.; Walton, M.C.; Neilsen, C.W. (July 2005). "Shared Understanding for Collaborative Control" (PDF). IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans. Vol. 35. pp. 494–504.
  6. ^ Yen, J.; Yin, J.; Ioerger, T.; Miller, M.; Xu, D.; Volz, R. (2001). "CAST: Collaborative agents for simulating teamwork" (PDF). Proceedings of 17th Joint Conference for Artificial Intelligence. pp. 135–142. Archived from the original (PDF) on April 25, 2012.
  7. ^ Cooke, N.J.; Salas, E.; Cannon-Bowers, J.; Stout, R. (2000). "Measuring team knowledge" (PDF). Human Factors. 42 (1): 151–173. doi:10.1518/001872000779656561. PMID 10917151. S2CID 8428699. Archived from the original (PDF) on April 25, 2012.
  8. ^ Fong, T. (2001). "Advanced Interfaces for Vehicle Teleoperation: Collaborative Control, Sensor Fusion Displays, and Remote Driving Tools" (PDF). Autonomous Robots. 11: 77–85. doi:10.1023/A:1011212313630. S2CID 15073729.
  9. ^ a b Al-Bayati, A.; Skaf, Z.; Wang, H. (July 2010). "Fault Tolerant Control for Two Collaborative Systems Using Output Regulation Scheme". Proceedings of the 2010 International Conference on Modeling, Identification, and Control. Okayama, Japan. pp. 791–796. ISBN 978-1-4244-8381-5.
  10. ^ Goodrich, M.; Olsen, D.; Crandall, J.; Palmer, T. (2001). "Experiments in Adjustable Autonomy" (PDF). Proceedings of the 2001 IEEE International Conference on Systems, Man, and Cybernetics. Tucson, AZ. pp. 1624–1629.
  11. ^ Meier, R.; Fong, T.; Thorpe, C.; Baur, C. (1999). "A sensor fusion based user interface for vehicle teleoperation" (PDF). Proceedings of the IEEE Field and Service Robotics. Pittsburgh, PA.
  12. ^ Terrien, G.; Fong, T.; Thorpe, C.; Baur, C. (2000). "Remote driving with a multisensor user interface" (PDF). Proceedings of the Society of Automotive Engineers International Conference on Environmental Systems. Toulouse, France.
  13. ^ Franklin, G.; Powell, J.; Emami-Naeini, A. (2006). Feedback Control of Dynamic Systems (5th ed.). Upper Saddle River, New Jersey: Pearson Prentice Hall. ISBN 978-0-13-149930-0.
  14. ^ Colley, W. (2010). "Modeling Continuous Systems". In Sokolowski, J.; Banks, C. (eds.). Modeling and Simulation Fundamentals. Hoboken, New Jersey: John Wiley & Sons. ISBN 978-0-470-48674-0.
  15. ^ Hine, B.; Hontalas, P.; Fong, T.; Piquet, L.; Nygren, E.; Kline, A. (1995). "VEVI: A Virtual Environment Teleoperations Interface for Planetary Exploration" (PDF). Society of Automotive Engineers 25th International Conference on Environmental Systems. San Diego, CA.
  16. ^ Shoemaker, C. (1990). "Low Data Rate Control of Unmanned Ground Systems". Proceedings: Association for Unmanned Vehicle Systems. Dayton, OH.
  17. ^ McGovern, D. (1988), Human Interfaces in Remote Driving, Technical Report SAND88-0562, Sandia National Laboratory, Albuquerque, NM
  18. ^ Yun, X.; Tarn, T.; Bejczy, A. (1989). "Dynamic Coordinated Control of Two Robot Manipulators". Proceedings from the 28th IEEE Conference on Decision and Control. Florida, USA. pp. 2476–2481. doi:10.1109/CDC.1989.70623.
  19. ^ Marble, J.; Bruemmer, D.; Few, D. (2003). "Lessons learned from usability tests with a collaborative cognitive workspace for human-robot teams". Proceedings from IEEE Conference on Systems, Man, and Cybernetics. Vol. 1. Waikoloa, HI. pp. 448–453. doi:10.1109/ICSMC.2003.1243856. ISBN 0-7803-7952-7.
  20. ^ Marble, J.; Bruemmer, Few; Dudenhoeffer, D. (January 2004). "Evaluation of supervisory vs. peer-peer interaction for human-robot teams". Proceedings of the 37th Annual Hawaii International Conference on Systems Science. Waikoloa, HI. p. 9. doi:10.1109/HICSS.2004.1265326. ISBN 0-7695-2056-1.
  21. ^ Clarke, R. (January 1994). "Asimov's Laws of Robotics: Implications for information technology". Computer. 27: 57–66. doi:10.1109/2.248881. S2CID 7585833.
  22. ^ Milgram, P.; Zhai, S.; Drascic, D. (1993). "Applications of Augmented Reality for Human-Robot Communication". Proceedings of International Conference on Intelligent Robots and Systems. Yokohama. pp. 1467–1472. doi:10.1109/IROS.1993.583833. ISBN 0-7803-0823-9.
  23. ^ Fong, T.; Thorpe, C.; Baur, C. (November 2001). "Collaboration, Dialogue, and Human-Robot Interaction" (PDF). Proceedings of 10th International Symposium of Robotics Research. Lorne, Victoria, Australia.