Critic Systems – Human-Computer Problem Solving Review
This is our first article on the topic of Human-Computer Problem Solving (HCPS) since we put it forward as a key area of focus in a recent article [link]. When searching for past research on the topic, we came across “critic systems” in a review article (Tianfield 2004). While this research was largely left behind in the 20th century, it aligns very well with our interests and includes many insightful concepts and strategies. We provide a brief review of the key concepts here.
The concept of critic systems was primarily developed by Gerhard Fischer and his research group at the University of Colorado, Boulder (e.g. Fischer 1991). Fischer’s group applied the concept of critic systems broadly to human-computer interaction for ill-structured design and software coding applications at a time when graphical interfaces were just emerging. Fischer noted that “The design and evaluation of the systems … led to an understanding of the theoretical aspects of critiquing”, which are presented in their publication The Role of Critiquing in Cooperative Problem Solving (Fischer 1991). Here we provide a high-level overview of the concept of a critic system.

Figure 1: The critiquing approach (Figure 1 from Fischer 1991).
Figure 1 shows a high-level view of a critiquing system. The key features of the system are as follows (a minimal code sketch of the loop follows the list):
- There are two agents, a computer and a user, working in cooperation.
- The human contributes:
- The problem solving goal or requirements: The goal specification can vary a lot depending on the type of critic system. Where the system is very specific to an application, the high-level goal is built into the system and only requirements specific to the current problem need to be provided. In more general problem solving situations the user will need to provide a more detailed specification of the goal.
- Domain expertise and general common sense about the world: This is largely in the form of the user’s long-term memory and intuition. However, it could also include the user’s access to external memory in the form of books and other knowledge that is not available to the computer.
- The computer contributes:
- A model of the user:
- A model of the user helps the computer to formulate critiques that will be understandable and useful to the user. The model could include information about the expertise of the user to ensure that the critiques are provided at the right level, e.g. student or expert. The user model can be built up through interaction with the user to personalise the system.
- A model of the user’s problem solving goal: The computer needs to be able to capture the user’s goal in a formal way that allows it to analyse the problem.
- Domain knowledge: The computer has a knowledge base of domain knowledge relevant to the problem area. The formalisation of the knowledge base will depend on the application type and area, but can include both strict and probabilistic rules and constraints that the system can use to critique the proposed solution.
- Problem Solving & Proposed Solution: The human’s primary role is to generate the initial solution and provide modified solutions after each round of feedback from the critic system. The human’s approach to problem solving will vary, but is assumed to be different from that of the computer and will likely leverage the human’s long-term memory and expert intuition developed over their career. The human will need to work with the system to provide the solution in a form that the computer can understand, though substantial focus is put on making the interface as easy as possible to use, e.g. visual or natural language.
- Critiquing and Critique:
- The critiquing process begins with an evaluation of the user-provided solution. Tianfield (2004) suggests that there are two approaches to evaluation:
- Analysis-based: the system analyses the solution against its knowledge base without developing its own solution.
- Comparison-based: the system develops its own solution from the requirements and compares the user’s solution to its own.
- Tianfield (2004) notes that critiques are generally of two types:
- statements of the detected defects, e.g., errors, risks, constraint violations, solution’s incompleteness, requirement’s vagueness, etc.
- resolutions for the defects detected by the first type.
- The critiques provided can be classified, for example by importance or as positive/negative.
- The critiques must be formulated in a way that maximises their value to the user. The system utilises the user model and user interface to achieve this.
- The process is iterative, with the human taking the critiques on board and updating the proposed solution until an acceptable solution is achieved.
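To make the loop concrete, here is a minimal Python sketch of a critiquing cycle along the lines of Figure 1. The class names, the two evaluation methods, the simple user-model rule and the example domain rule are our own illustrative assumptions, not Fischer’s or Tianfield’s formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Critique:
    """One item of feedback: a detected defect and an optional suggested resolution."""
    defect: str                 # e.g. an error, risk or constraint violation
    resolution: str = ""        # optional fix (Tianfield's second critique type)
    importance: str = "minor"   # critiques can be classified, e.g. by importance

@dataclass
class CriticSystem:
    domain_rules: list                              # callables: solution -> Critique or None
    user_model: dict = field(default_factory=dict)  # e.g. {"expertise": "student"}

    def critique_by_analysis(self, solution):
        """Analysis-based evaluation: check the solution against the knowledge base only."""
        critiques = []
        for rule in self.domain_rules:
            critique = rule(solution)
            if critique is not None:
                critiques.append(critique)
        return critiques

    def critique_by_comparison(self, solution, own_solution):
        """Comparison-based evaluation: derive the system's own solution and diff against it."""
        missing = set(own_solution) - set(solution)
        return [Critique(defect=f"missing element: {m}", importance="major")
                for m in sorted(missing)]

    def present(self, critiques):
        """Tailor critiques using the user model (here: experts are not shown minor items)."""
        if self.user_model.get("expertise") == "expert":
            return [c for c in critiques if c.importance != "minor"]
        return critiques

def cooperative_loop(propose, revise, critic, max_rounds=5):
    """The human proposes, the computer critiques, the human revises, until acceptable."""
    solution = propose()
    for _ in range(max_rounds):
        critiques = critic.present(critic.critique_by_analysis(solution))
        if not critiques:                       # acceptable solution reached
            return solution
        solution = revise(solution, critiques)  # the human decides what to take on board
    return solution

# Hypothetical wiring: a single domain rule that flags an incomplete solution.
def completeness_rule(solution):
    if "cost estimate" not in solution:
        return Critique("no cost estimate included", "add a cost estimate", "major")
    return None

critic = CriticSystem(domain_rules=[completeness_rule], user_model={"expertise": "student"})
```

The point of the sketch is the division of labour: the human supplies and revises the solution, while the computer is limited to evaluating it against its knowledge (or against its own solution) and presenting the critiques through the user model.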
A Simple Example – Spell Checking
I would like to put forward what might appear to be a trivial example – a spell checker for a word processor. I think it is a good example because 1) everyone has had exposure to a spell checker and 2) it is not a problem that has been solved by artificial intelligence: human-computer collaboration is required to correct spelling. Below is a discussion of the example using the key points from above, followed by a small code sketch:
- The human contributes:
- The problem solving goal or requirements: The goal of correct spelling is an assumed goal, built into the word processor.
- Their domain expertise: The user brings their language education and experience, along with innate language capabilities.
- The computer contributes:
- A model of the user:
- specific language, e.g. Australian English; possibly profession-specific dictionaries; a personal dictionary of words added by the user; etc.
- Smartphone applications read your historical emails, personal notes etc. to build a model of the words you regularly use. These systems attempt to guess the next word you might type and are often right, but most of the time we still need to select the right word from a group of three or more.
- Domain knowledge: dictionaries, grammatical rules, etc.
- Problem Solving & Proposed Solution:
- The human simply proposes their spelling for each word in the word processor.
- Critiquing and Critique:
- The spell checker analyses the text using the dictionary and grammatical rules.
- The computer identifies possible incorrect spellings by underlining the word and proposes corrections when the word is right-clicked.
- Note that the spell checker often doesn’t provide only one correct option. In many cases the computer is unable to automatically correct the spelling. Human language can be extremely complex and difficult to disambiguate, e.g. “Will Will will the will to Will” (my spell checker is suggesting I delete the second and third “will” right now).
- Often the computer will offer the correct spelling of the word we intended in the first iteration, or at least focus our attention on recalling the correct spelling ourselves. But sometimes we are too far off and must have a second attempt at the word before the computer is able to guess the word we want. This is the second iteration of the critiquing cycle.
- Finally, the user decides when the spelling in the document is satisfactory, and often this involves ignoring several of the computer’s critiques.
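To tie the example back to the loop above, here is a toy, analysis-based spell-checking critic. The tiny dictionary, the similarity cutoff and the function name are purely illustrative and not how any real word processor works; `difflib.get_close_matches` simply stands in for a proper correction model.

```python
from difflib import get_close_matches

# Toy dictionary standing in for the domain knowledge; a real system would merge in
# the personal and profession-specific dictionaries from the user model.
DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def critique_spelling(text, personal_words=frozenset()):
    """Flag unknown words (the defect) and propose candidate corrections (the resolution).

    As with a real spell checker, the critic often cannot settle on a single
    correction, so it returns several candidates and leaves the choice to the human.
    """
    known = DICTIONARY | set(personal_words)
    critiques = []
    for word in text.lower().split():
        if word not in known:
            suggestions = get_close_matches(word, known, n=3, cutoff=0.6)
            critiques.append((word, suggestions))
    return critiques

# First iteration of the cycle: the human proposes a spelling, the computer critiques it
# and the human accepts, rejects or retries before the next round.
print(critique_spelling("teh quikc brown dgo"))
# e.g. [('teh', ['the']), ('quikc', ['quick']), ('dgo', ['dog'])]
```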
Key Advantages of Critic Systems
Fischer covers some aspects of cooperative problem solving that are of special interest (Fischer 1991, p. 125); below is a brief summary:
- Breakdowns in cooperative problem-solving systems are not as detrimental as in expert systems: collaborative systems are able to deal with misunderstandings, unexpected problems and changes of the human’s goals.
- Background assumptions do not need to be fully articulated: collaborative systems are especially well suited to ill-structured and poorly defined problems. Humans can know the larger context, learn during the problem solving process and decide when to expand the search space.
- Semiformal system architectures are appropriate: The computer system doesn’t need to be able to interpret all of the information that it has available and can rely on the human.
- Delegation problem: Automating a task requires complete specification. Cooperative systems allow for incremental refinement and evolution of understanding of the task.
- Humans enjoy “doing” and “deciding”: humans often enjoy the process and not just the product; they want to take an active part.
Thoughts on application to Research Analysis
At the time of writing, the Research Analysis application is a knowledge base with knowledge capture and search tools. We use the application in a process similar to the critiquing approach, but this is a manual process at present. Many researchers would use a collection of databases, software tools and peer feedback in a manual process similar to the critiquing approach to solve hard problems in medical research. The challenge is to bring these manual processes together into a system that substantially increases the efficiency of the researcher without too high a barrier to adoption. Ideally the system could be applied to many niche research areas by updating the domain knowledge, but without needing to change the core software platform.
Research Analysis currently provides very basic critiquing-type functions. The researcher can enter a research claim using our semiformal language and the system will provide a list of matching claims that have been made by other researchers, with quotations and citations. By relaxing the specification of the claim (e.g. any association rather than a positive correlation), the system will provide a list of similar claims. The similar claims may include claims that contradict or question the significance of the researcher’s claim.
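A small sketch of what the matching and relaxation step could look like. The `Claim` structure, the relation names and the `RELAXATION` table are illustrative assumptions about our semiformal language, not the actual implementation.

```python
from dataclasses import dataclass

# Hypothetical mapping from specific relations to their relaxed, more general form.
RELAXATION = {"positive_correlation": "association",
              "negative_correlation": "association"}

@dataclass(frozen=True)
class Claim:
    subject: str    # e.g. "arterial stiffness"
    relation: str   # e.g. "positive_correlation"
    object: str     # e.g. "systolic blood pressure"

def matching_claims(query, knowledge_base):
    """Claims made by other researchers that match the query exactly."""
    return [c for c in knowledge_base if c == query]

def similar_claims(query, knowledge_base):
    """Relax the relation (e.g. any association rather than a positive correlation)
    and return claims on the same subject and object, including contradicting ones."""
    target = RELAXATION.get(query.relation, query.relation)
    return [c for c in knowledge_base
            if c.subject == query.subject and c.object == query.object
            and RELAXATION.get(c.relation, c.relation) == target]

kb = [Claim("arterial stiffness", "positive_correlation", "systolic blood pressure"),
      Claim("arterial stiffness", "negative_correlation", "systolic blood pressure")]
print(similar_claims(kb[0], kb))   # the contradicting claim is returned alongside the match
```

Keeping claims as simple, comparable records makes exact matching a plain equality test, while relaxation is just a lookup on the relation before comparing.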
Future functionality for Research Analysis that fits the critiquing concept:
- We are currently working on the capability to extract claims directly from abstracts or from the researcher’s natural language input. This would make the system easier to use and would also increase the system’s ability to automatically populate the knowledge base.
- The critiquing approach suggests that we could use the individual researcher’s historical publications to build a claim database that provides a model of the researcher’s knowledge and beliefs. This model could be used in the critiquing process to provide critiques that are in line with the researcher’s beliefs. This could be further extended to include the researcher’s publication database, and further again by tracking the researcher’s position on each publication, e.g. agree, disagree or neutral.
- Currently the system requires the user to manually vary the search fields to identify related claims. We are working on a reporting-type function that would provide the user with a list of claims related to their claim, including supporting claims, conflicting claims and similar claims (see the sketch after this list). Claims at different levels of the physiological hierarchy (e.g. artery to cardiovascular system), different physiological locations (e.g. artery to liver) and different related species (e.g. mice to mammal) could also be compared. The tool will use medical term synonyms and statement analysis to identify claims from other related fields that make similar claims, but with different terminology.
- In the beautiful future we plan to be able to use a combination of the user’s input, the system’s domain knowledge and problem solving tools to present new hypotheses to the researcher through recombination of the existing knowledge base of claims. Generating medical hypotheses is a classic example of an ill-structured and poorly defined problem space. Often important new hypotheses will require the breaking of accepted rules and the rejection of historically accepted scientific claims. We believe that a platform like Research Analysis could be valuable in the systematic proposal of new hypotheses based on analysis of historical claims. While we hope these suggestions will be valuable, we have no doubt that human researchers will be critical in the assessment of these hypotheses using their broad domain knowledge and worldly common sense.
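As a rough illustration of the reporting function mentioned above, the sketch below widens a search term using a part-of hierarchy and a synonym table. The hierarchy, the synonym map and the function are hypothetical stand-ins for the medical ontologies such a tool would actually draw on.

```python
# Hypothetical fragments of domain knowledge used to widen a claim search:
# a part-of hierarchy of physiological locations and a table of medical-term synonyms.
PART_OF = {"artery": "cardiovascular system",
           "vein": "cardiovascular system",
           "cardiovascular system": "organism"}
SYNONYMS = {"hypertension": {"high blood pressure", "elevated blood pressure"}}

def expand_term(term):
    """All terms a claim about `term` should be compared against: the term itself,
    its synonyms and its ancestors in the part-of hierarchy."""
    terms = {term} | SYNONYMS.get(term, set())
    parent = PART_OF.get(term)
    while parent is not None:       # walk up the hierarchy, e.g. artery -> cardiovascular system
        terms.add(parent)
        parent = PART_OF.get(parent)
    return terms

print(expand_term("artery"))        # contains 'artery', 'cardiovascular system', 'organism'
```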
Further Reading
- Terveen provides a good Overview of Human-Computer Collaboration (Terveen 1995) and in particular a summary of Critic Systems based on the presentations at a symposium on the topic.
- Miller provides an overview of expert critiquing systems in the field of practice-based medical consultation at the time (Miller 1986).
References
- Fischer, Gerhard, et al. “The role of critiquing in cooperative problem solving.” ACM Transactions on Information Systems (TOIS) 9.2 (1991): 123-151.
- Tianfield, Huaglory, and Ruwen Wang. “Critic Systems–Towards Human–Computer Collaborative Problem Solving.” Artificial Intelligence Review 22.4 (2004): 271-295.
- Miller, Perry L. “Expert critiquing systems.” Expert Critiquing Systems. Springer New York, 1986. 1-20.
- Terveen, Loren G. “Overview of human-computer collaboration.” Knowledge-Based Systems 8.2 (1995): 67-81.