Paul Lyons' pages

VPL Papers

Papers that treat VPLs as a beginners' tool:
    Tanimoto, S.L., and Glinert, E.P. (1986)  Designing Iconic Programming Systems: Representation and Learnability

Papers that contain a taxonomy of Visual Programming Languages, or survey a number of VPLs:
    Shu, N.C. (1985)  Visual Programming Languages: A Perspective and Dimensional Analysis
    Hils, D.D. (1992)  Visual Languages and Computing Survey: Data Flow Visual Programming Languages
    Green, T.R.G., and Petre, M. (1996a)  Usability Analysis of Visual Programming Environments: A "Cognitive Dimensions" Framework

Papers that are simply essential reading for anyone interested in HCI or VPLs:
    Shneiderman, B. (1983)  Direct Manipulation: A Step Beyond Programming Languages
    Green, T.R.G., and Petre, M. (1996b)  Usability Analysis of Visual Programming Environments: A "Cognitive Dimensions" Framework

Papers that describe special-purpose VPLs:
    Serot, J., Quenot, G., and Zavidovique, B. (1995)  A Visual Dataflow Programming Environment for a Real Time Parallel Vision Machine

Papers that describe general-purpose Visual Programming Languages:
    Rasure, J.R., and Williams, C.S. (1991)  An Integrated Data Flow Visual Language and Software Development Environment
    Diaz-Herrera, J.L., and Flude, R.C. (1980)  Pascal/HSD: A Graphical Programming System
    Ambler, A.L., and Burnett, M.M. (1989)  Influence of Visual Technology on the Evolution of Language Environments

Papers that consider what's wrong with conventional programming languages:
    Fix, V., Wiedenbeck, S., and Scholtz, J. (1993)  Mental Representation of Programs by Novices and Experts

Papers that compare graphical and textual notations:
    Petre, M., and Green, T.R.G. (1993)  Learning to Read Graphics: Some Evidence that "Seeing" an Information Display is an Acquired Skill
    Green, T.R.G., and Navarro, R. (1996)  Programming Plans, Imagery, and Visual Programming

Papers that have useful ideas about IDEs:
    Moher, T. (1988)  PROVIDE: A Process Visualization and Debugging Environment

Papers that describe data flow VPLs:
    Hils, D.D. (1992)  Visual Languages and Computing Survey: Data Flow Visual Programming Languages

A paper that describes evaluation criteria for VPLs (and much more):
    Green, T.R.G., and Petre, M. (1996c)  Usability Analysis of Visual Programming Environments: A "Cognitive Dimensions" Framework

Approaches to the space problem which many people believe afflicts visual programming languages:
    Lamping, J., and Rao, R. (1996)  The Hyperbolic Browser: A Focus + Context Technique for Visualizing Large Hierarchies

Interesting ideas about interfaces that might be worth adopting into VPLs:
    Kramer, A. (1996)  Translucent Patches
    Storey, M.A., Fracchia, F.D., and Muller, H.A. (1999)  Customizing a Fisheye View Algorithm to Preserve the Mental Map

Special-purpose techniques for use in VPLs:
    Ambler, A.A., and Hsia, Y-T. (1993)  Generalizing Selection in By-demonstration Programming

A paper that gives Smalltalk a visual interface builder, in the guise of a component library based on a theatre metaphor:
    Finzer, W., and Gould, L. (1984)  Programming by Rehearsal

Other papers:
    Burnett, M.M., and Ambler, A.L. (1994)  Interactive Visual Data Abstraction in a Declarative Visual Programming Language
    Graham, N.T.C., Morton, C.A., and Urnes, T. (1996)  ClockWorks: Visual Programming of Component-Based Software Architectures
    Lakin, F.  Spatial Parsing for Visual Languages
    Golin, E.J. (1991)  Parsing Visual Languages with Picture Layout Grammars
    Barfield, L. (1992)  Editing Tree Structures
    Citrin, W., Doherty, M., and Zorn, B. (1994)  The Design of a Completely Visual Object-Oriented Programming Language
    Citrin, W., Doherty, M., and Zorn, B. (1993)  Control Constructs in a Completely Visual Imperative Programming Language
    Cordy, J.R., and Graham, T.C.N. (1992)  GVL: Visual Specification of Graphical Output
    Costabile, M.F., and Missikoff, M. (1994)  Iconit: An Environment for Design and Prototyping of Iconic Interfaces
    de Carolis, B., de Rosis, F., and Errore, S. (1995)  A User-Adapted Iconic Language for the Medical Domain
    Ebrahimi, A. (1992)  VPCL: A Visual Language for Teaching and Learning Programming (A Picture is Worth a Thousand Words)
    Frei, H.P., Weller, D.L., and Williams, R. (1978)  A Graphics-Based Programming-Support System
    Frezza, S.T., and Levitan, S.P. (1993)  SPAR: A Schematic Place and Route System
    Frezza, S.T., and Levitan, S.P. (1994)  Congestion Router for Schematic Diagrams
    Frank, M.R., and Foley, J.D. (1994)  Inference Bear: Inferring Behavior from Before and After Snapshots
    Green, T.R.G., and Petre, M. (1992)  When Visual Programs are Harder to Read than Textual Programs
    Henderson, D.A. Jr., and Card, S.K. (1986)  Rooms: The Use of Multiple Virtual Workspaces to Reduce Space Contention in a Window-Based Graphical User Interface
    Jacob, R.J.K. (?)  A Visual Programming Environment for Designing User Interfaces
    Kopache, M.E., and Glinert, E.P. (1988)  C2: A Mixed Textual/Graphical Environment for C
    Lyons, P., Simmons, C., and Apperley, M. (1993)  HyperPascal: A Visual Language to Model Idea Space
    Myers, B.A. (1986)  Visual Programming, Programming by Example, and Program Visualisation: A Taxonomy
    McWhirter, J.D., and Nutt, G.J. (1993)  Escalante: An Environment for the Rapid Construction of Visual Language Applications
    Papantonakis, A., and King, P.J.H. (1995)  Syntax and Semantics of Gql, a Graphical Query Language
    Paterno, F. (1994)  A Theory of User-Interaction Objects
    Pearson, M.W., Lyons, P., and Apperley, M. (1993)  Synthesis of Digital ICs from Data Flow Diagrams
    Poswig, J., Vrankar, G., and Morara, C. (1994)  VisaVis: A Higher-Order Functional Visual Programming Language
    Ward, P.T. (1986)  The Transformation Schema: An Extension of the Data Flow Diagram to Represent Control and Timing
    Ware, C. (1993)  The Foundations of Experimental Semiotics: A Theory of Sensory and Conventional Representation
    Wasserman, A.I. (1985)  Extending State Transition Diagrams for the Specification of Human-Computer Interaction
    Lyons, P. (1999)  Programming in Several Dimensions






Data Flow Languages 

IEEE Computer, February 1982, 15-25 


Ambler, A.L., and Burnett, M.M.

Influence of Visual Technology on the Evolution of Language Environments 

 Computer, October, 1989, 9-22 


None, but the first paragraph is as follows:

With the availability of graphic workstations has come the increasing influence of visual technology on language environments. In this article we trace an evolution that began with the relatively straightforward translation of textual techniques into corresponding visual techniques, and that is now producing visual techniques that have no natural parallel using purely textual techniques. In short, the availability of visual technology is leading to the development of new approaches that are inherently visual.


Language environments = IDEs. Visual technology = multiple windows with selection, buttons, scrolling. There isn't really very much at all in the paper about the influence of Visual Technology on anything. The authors talk about VPLs with multiple views based on a single underlying representation (e.g., Pecan's abstract syntax tree), syntax-directed editing (which is only barely a VPL), incremental compilation and immediate execution, data visualisation, execution tracing and other ideas which I wouldn't have thought needed any explanation as late as 1989. However, the paper contains quite a nice summary of a number of VPLs.

One of the things that seem up-to-date for the time is the idea of graphical language editing environments as a way of creating provably correct programs (Hamilton, M. and Zeldin, S.; "Higher Order Software - A Methodology for Defining Software" IEEE Transactions on Software Engineering SE-2, 1, 1976, 9-32, and also Moriconi, M. and Hare, D.F., "The PegaSys System: Pictures as Formal Documentation of Large Programs," ACM Transactions on Programming Languages and Systems, 8, 4, Oct 1986, 524-546 {the diagrams are manually mapped onto code, and then the system formally verifies that the program is consistent with the specs. Why not compile the specs?})

The authors claim that the trend is away from the idea of applying visual transformations to textual approaches and toward the idea of naturally visual approaches. My belief was that naturally visual approaches are satisfactory for some classes of problem, most notably those with naturally visual data spaces, and that the generality of textual languages needs to be preserved - by retaining at least some textual language components, and possibly by inventing special visual language constructs which can replace powerful general textual constructs. HyperPascal's hyperlinks to declarations, and Active Templates, are attempts to achieve these goals.

Ambler, A., and Hsia, Y-T.

Generalizing Selection in By-demonstration Programming

JVLC, 4, 283-300, 1993 


By-demonstration programming attempts to generalize algorithms by observing brief example demonstrations. These example demonstrations are usually repeated sequences of selecting several objects and then applying some action to the selected objects. In generalizing these demonstrations, it is the selections which prove difficult. Approaches to generalizing selection range from strict personal interpretations to heuristic-based inference schemes. These approaches often are not visual and not by-demonstrational.

In this paper we look at various selection generalization approaches and at one particular approach, that of the language PT. PT generalizes selection using the same visual by-demonstration techniques as are used to demonstrate the rest of PT programs.


By-demonstration programming - the computer should be able to solve similar problems after observing a user work a few example problems. Ambler and Hsia identify the central problem with programming-by-demonstration systems: "... abstracting from specific example demonstrations to general algorithms." They are particularly concerned with generalising selection. "Early systems recorded exact sequences, but strict recording does not generalize." To generalize about selection of one object from several, the system might infer that the selection was on the basis of the value in the object, its geometrical position in the set of objects, its numerical position in the set of objects, or its coordinates. The system might weight the selections by some rule, refuse to guess until sufficient selections had been made to disambiguate the selection criterion, or ask the user to specify which of the possible selection criteria is appropriate. "In general, the only satisfactory result is for the programmer to analyse the resulting logic."

The paper provides a good review of approaches to programming-by-demonstration. It presents a scheme for selecting and drawing objects and groups of objects; "any manipulation of the selected concrete data is abstracted as a manipulation of all objects satisfying the selection criteria." The scheme seems to be a visualisation of functional composition. The programmer selects a group of objects (the first function) and then applies a selection criterion (the second function) to select a particular object from the group. Thereafter, when we manipulate the object, we aren't just manipulating a particular piece of data, we're describing how to operate on any data item that satisfies precisely defined selection criteria. It's a nice idea. It seems to apply to programming problems in which it is possible to display the whole of a population of data values on the screen and manipulate them in fairly concrete ways (swapping them, separating them, pairing them, and so on)
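Read this way, the generalisation amounts to composing a group selector with a selection criterion, and then abstracting the demonstrated action over everything the composition matches. The sketch below is my own construction of that reading, not code from the paper; the shapes, colours and helper names are all hypothetical:

```python
# Selection-by-demonstration read as functional composition:
# a group selector picks a population, a criterion picks members,
# and any demonstrated action is then applied to *all* matches,
# not just the concrete example object the user touched.

def select(group_selector, criterion):
    """Compose 'which objects' with 'which of those'."""
    def selection(world):
        return [obj for obj in group_selector(world) if criterion(obj)]
    return selection

def apply_action(selection, action, world):
    """The generalised manipulation: act on every matching object."""
    for obj in selection(world):
        action(obj)

# Hypothetical demonstration: the user selected the circles, picked
# out the red one, and recoloured it; the system generalises this to
# "recolour every red circle".
world = [
    {"shape": "circle", "colour": "red"},
    {"shape": "circle", "colour": "blue"},
    {"shape": "square", "colour": "red"},
]
red_circles = select(lambda w: [o for o in w if o["shape"] == "circle"],
                     lambda o: o["colour"] == "red")
apply_action(red_circles, lambda o: o.update(colour="green"), world)
```

After the call, only the red circle has been recoloured; the blue circle and the red square are untouched, however many objects the world contains.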

In some ways this approach is like HyperPascal's Active Templates. Neither approach tries to make inferences from incomplete data. On the other hand, it deals with very concrete data in a very concrete way, whereas the Active Templates approach deals with a data abstraction in an abstract way, and seems more appropriate as a component of a serious general-purpose visual programming language.


Babb II, R.G.

Parallel Processing with Large-Grain Data Flow Techniques

IEEE Computer, July 1984, 55-61 


None, but first paragraph is as follows:

Research in data flow architectures and languages, a major effort for the past 15 years, has been motivated mainly by the desire for computational speeds that exceed those possible with current computer architectures. The computational speedup offered by the data flow approach is possible because all program instructions whose input values have been previously computed can be executed simultaneously. There is no notion of a program counter or of global memory. Machine instructions are linked together in a network so that the result of each instruction execution is fed automatically into appropriate inputs of other instructions. Since no side-effects can occur as a result of instruction execution, many instructions can be active simultaneously. Although dataflow concepts are attractive for providing an alternative to traditional computer architecture and programming styles, to date few data flow machines have been built, and data flow programming languages are not widely accepted.
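The firing rule described in that paragraph - an instruction may execute as soon as all of its input values have arrived, with results fed along arcs to other instructions - can be caricatured with a toy interpreter. This is my own illustrative sketch, not anything from Babb's paper:

```python
# Toy data flow evaluator: a node fires when all its input tokens
# have arrived; its result is forwarded to successor inputs. There
# is no program counter and no global memory, only tokens on arcs.
from collections import deque

class Node:
    def __init__(self, name, op, n_inputs):
        self.name, self.op, self.n_inputs = name, op, n_inputs
        self.inputs = {}        # port index -> arrived value
        self.successors = []    # (node, port) pairs fed by our result

    def receive(self, port, value):
        self.inputs[port] = value
        return len(self.inputs) == self.n_inputs   # ready to fire?

def run(initial_tokens):
    """Propagate (node, port, value) tokens until nothing can fire."""
    ready = deque(initial_tokens)
    outputs = {}
    while ready:
        node, port, value = ready.popleft()
        if node.receive(port, value):
            result = node.op(*[node.inputs[i] for i in range(node.n_inputs)])
            outputs[node.name] = result
            for succ, succ_port in node.successors:
                ready.append((succ, succ_port, result))
    return outputs

# (a + b) * (a - b): the add and subtract nodes have no dependency
# on each other, so a real data flow machine could fire them
# simultaneously; here they simply fire in token-arrival order.
add = Node("add", lambda x, y: x + y, 2)
sub = Node("sub", lambda x, y: x - y, 2)
mul = Node("mul", lambda x, y: x * y, 2)
add.successors.append((mul, 0))
sub.successors.append((mul, 1))
a, b = 7, 3
outs = run([(add, 0, a), (add, 1, b), (sub, 0, a), (sub, 1, b)])
print(outs["mul"])   # (7 + 3) * (7 - 3) = 40
```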



Barfield, L.G.

Editing Tree Structures

Centrum voor Wiskunde en Informatica

P.O. Box 4079

1009 AB Amsterdam

The Netherlands

CS-R9264 1992


An interactive editor that operates on a projection of a tree structure can offer the user many possible commands. This document attempts to catalogue and classify, in a rigorous way, the commands that can be offered. The discussion starts with a simple tree and then moves on to more complex trees. It is assumed that the reader has some familiarity with the basics of structured editors.



Bederson, B.B., Hollan, J.D., Perlin, K, Meyer, J., Bacon, D., and Furnas, G.

Pad++: A Zoomable Graphical Sketchpad For Exploring Alternate Interface Physics

JVLC, 7, 1996, 3 - 31 


We describe Pad++, a zoomable graphical sketchpad that we are exploring as an alternative to traditional window and icon-based interfaces. We discuss the motivation for Pad++, describe the implementation and present prototype applications. In addition, we introduce an informational physics strategy for interface design and briefly contrast it with current design strategies. We envision a rich world of dynamic persistent information entities that operate according to multiple physics specifically designed to provide cognitively facile access and serve as the basis for the design of new computationally-based work materials.



Burnett, M.M., and Ambler, A.

Interactive Visual Data Abstraction in a Declarative Visual Programming Language

JVLC, 5, 1994, 29-60 


Visual data abstraction is the concept of data abstraction for visual languages. In this paper, first we discuss how the requirements of data abstraction for visual languages differ from the requirements for traditional textual languages. We then present a declarative approach to visual data abstraction in the language Forms/3. Within the context of this system, issues of particular importance to declarative visual languages are examined. These issues include enforcing information hiding through visual techniques, supporting abstraction while preserving concreteness, conceptual simplicity, and specification of a type's appearance and interactive behaviour as part of its definition. Interactive behaviour is seen to be part of the larger problem of event-handling in a declarative language. A significant feature is that all programming and execution are done in a fully-integrated visual manner, without requiring other languages or tools for any part of the programming process.





The authors attribute the lack of success of declarative VPLs to the lack of a data abstraction approach suitable for declarative VPLs. This certainly concurs with my reasons for developing Active Templates for HyperPascal.

They aim to achieve:

Type definitions must include appearance of data, and its behaviour under user interaction.

Margaret Burnett's Ph.D. project was the development of the VPL Forms/3, and the developments described in this paper are part of that language.

Object definition causes immediate object instantiation, so that the user can experiment with the object's behaviour while defining that behaviour. The emphasis seems to be on improved programming reliability through the ability to visualise a simulation of the operations on examples of data.

The paper contains a brief summary of quite a few VPLs from the point of view of their data declaration facilities (Tinkertoy, ThingLab, ActionGraphics, Hi-Visual, GRClass, ObjectWorld, Prograph, InterCONS, ConMan, Fabrik, NoPump).

Forms/3 seems to be modeled on spreadsheets. The programmer places cells on a drawing surface, and inserts formulas in the cells. Circular references are forbidden. Cells can be grouped into boxes, which are referenceable as objects - like a Pascal record or C struct. The result is a sort of visual expression a bit like HyperPascal's visual expressions, but without variable names, though cells with formulae can be referenced by name, so that there's an element of functional composition about the language, and the visual expressions may be more comprehensible than spreadsheet cells - at the cost of lower space-efficiency.
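The cells-and-formulas model can be caricatured as follows. This is a loose sketch of the general spreadsheet-like evaluation scheme, in my own notation rather than Forms/3's: each named cell holds either a literal or a formula over other cells, and circular references are rejected.

```python
# Caricature of declarative cells-and-formulas evaluation: a cell's
# value is defined by a formula over other named cells, computed on
# demand. Circular references are detected and rejected.

def evaluate(cells, name, _active=None):
    _active = _active if _active is not None else set()
    if name in _active:
        raise ValueError(f"circular reference involving {name!r}")
    formula = cells[name]
    if not callable(formula):
        return formula                      # a literal cell
    _active.add(name)
    ref = lambda other: evaluate(cells, other, _active)
    result = formula(ref)                   # formula reads other cells
    _active.discard(name)
    return result

# A 'box' of related cells, referenceable by name - loosely like a
# Pascal record whose fields are defined declaratively.
cells = {
    "width": 4,
    "height": 3,
    "area": lambda ref: ref("width") * ref("height"),
    "perimeter": lambda ref: 2 * (ref("width") + ref("height")),
}
print(evaluate(cells, "area"))        # 12
print(evaluate(cells, "perimeter"))   # 14
```

A cell such as `"bad": lambda ref: ref("bad") + 1` would raise the circular-reference error instead of looping forever.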

The language as a whole is far more general than spreadsheets, with VADTs (Visual Abstract Data Types) that are more powerful than conventional spreadsheet components.



Shi-Kuo Chang

Visual Languages: A Tutorial and Survey

IEEE Software, January, 1987, 29-39 





Citrin, W., Doherty, M., and Zorn, B.

The Design of a Completely Visual Object-Oriented Programming Language

to appear in: Visual Object-Oriented Programming: Concepts and Environments (Margaret Burnett, Adele Goldberg, Ted Lewis, eds) 1994


Object-oriented languages provide powerful programming features such as polymorphism, inheritance and dynamic dispatch. While these features allow complex programs to be written more easily, they also make debugging and understanding these programs more difficult. Object-oriented languages have relied on simple visualisation tools such as class browsers to aid programmers in understanding their programs. In this paper, we argue that a completely visual object-oriented programming language, VIPR, has significant advantages over textual object-oriented languages. We describe how VIPR represents all aspects of object-oriented programs, including objects, classes, inheritance, polymorphism, and dynamic dispatch. By completely visual, we mean that the semantics of programs written in our language can be entirely described by simple graphical rules. VIPR provides a framework for and integrates existing methods of understanding the structure and execution of visual programs. Also, we discuss the programming environment support required by VIPR and argue why the language, with this environment, will be usable by expert programmers for solving large problems.


Citrin, W., Doherty, M., and Zorn, B.

Control Constructs in a Completely Visual Imperative Programming Language

Tech Report CU-CS-672-93

(University of Colorado at Boulder

Department of Computer Science

Campus Box 430

University of Colorado

Boulder, Colorado 80309)


Visual representations of programs can facilitate program understanding by presenting aspects of programs using explicit and intuitive representations. We have designed a completely visual static and dynamic representation of an imperative programming language. Because our representation of control is completely visual, programmers of this language can understand the static and dynamic semantics of programs using the same framework. In this paper, we describe the semantics of our language, both informally and formally, focusing on support for control constructs. We also illustrate how simple programs written in this language will look both statically and dynamically. Our representation makes explicit some parts of program execution that are implicit in textual representations; thus our programs may be easier to understand.




Cordy, J.R., and Graham, T.C.N.

GVL: Visual Specification of Graphical Output

JVLC 3, 25-47, 1992 


The conceptual view model of output is based on the complete separation of the output specification of a program from the program itself, and the use of implicit synchronization to allow the data state of the program to be continuously mapped to a display view. An output specification language called GVL is used to specify the mapping from the program's data state to the display. GVL is a functional language explicitly designed for specifying output. Building from a small number of basic primitives, it provides sufficient power to describe complex graphical output. Examples shown in the paper include GVL specifications for linked list diagrams, bar charts and an address card file. In keeping with its intended application, GVL is also a visual language in which the user draws output specifications directly on the display. It is shown how problems often associated with imperative graphical languages are avoided by using the functional paradigm. A prototype implementation of GVL was used to produce all examples of graphical output in the paper.



Costabile, M.F., and Missikoff, M.

Iconit: An Environment for Design and Prototyping of Iconic Interfaces, 

 JVLC, 5, 1994, 151-174


In this paper we present an environment for the development of iconic interfaces. The environment, called Iconit, has been conceived to support the development of information systems, and in particular the design and rapid prototyping of the part of the system devoted to the interaction with end users. Iconit is essentially composed of two subsystems: the first, referred to as Iconit-D, is aimed at the design and specification of an interface, namely the development of the overall scheme of the user-application dialog and the design of the interface windows, with all their visual elements (e.g. menus, icons). The second, called Iconit-X, includes a verifier and an interpreter; the former performs syntactic checks of the defined interface, while the latter is able to execute the interface specifications, even in the absence of the application software. This is possible since the proposed approach guarantees high independence of the interface from the application. The development of an interface, performed with Iconit, is interactive and does not require any sort of coding. A first prototype of Iconit has been developed and is currently being evaluated.




Cox, P.T., and Pietrzykowski, T.

Using Pictorial Representation to combine Dataflow and Object-orientation in a language-independent programming mechanism 


de Carolis, B., de Rosis, F., and Errore, S.

A User-Adapted Iconic Language for the Medical Domain,

 Int. J. Human-Computer Studies, 43, 1995, 561-577


Although icons are presented as a universal language, some claim that cultural background, education and environment might influence the users' interpretation of their meaning. If this is true, the iconic language should be adapted to the user's characteristics. This paper presents results of a study that was aimed at designing the iconic language of a medical decision support system to be used in several European countries. The study included four main phases: listing and classification of the messages to be presented, collection of proposals about icons from representatives of potential users, preparation of candidates for evaluation and final evaluation of candidates by a sample of users. Results of this study indicate which icons are universally considered as "good" or "bad", and which ones are "controversial", that is, which are clearly preferred or clearly rejected by different interviewed subgroups. These results are also compared with results of previous studies, to single out factors which seem to condition acceptance of iconic messages. Finally, the paper describes the architecture of the interface which supports adapting icons to the user characteristics.




Diaz-Herrera, J.L., and Flude, R.C.

Pascal/HSD: A Graphical Programming System

IEEE Proceedings COMPSAC, 1980, 723-728 


New trends in programming methodology have made it necessary to look for more suitable representational tools to replace conventional ones such as flowcharts. A number of diagrammatic forms for presenting programs have been proposed in the last decade or so, but they have all been unsuitable for program development using pencil and paper. We have developed a diagrammatic language based on the programming language PASCAL to be used in conjunction with interactive computer graphics to provide an on-line programming system which implements current programming methods. A real-time animation of the compilation and interpretation of the user programs is produced in the graphics display under interactive control. This feature provides a powerful facility for teaching both programming and compiling techniques.


At first glance, Pascal/HSD (the HSD stands for Hierarchical Structured Diagrams) is very similar to HyperPascal. It uses a version of structure diagrams (turned on their sides, so that they follow the order of Pascal statements and indentation - which is an idea I've sometimes toyed with for HyperPascal). However, although the language includes sequence, iteration and choice, it does not include any form of abstraction (subprogram), or declarations. Dynamic data structures are entirely missing. These are the very things that make HyperPascal worthwhile - indeed, I didn't feel that I'd invented a decent VPL until I had invented an elegant way of incorporating declarations and dynamic data structures.

So Pascal/HSD superficially resembles HyperPascal, but it's a toy language - just the sort of language that critics of VPLs have in mind when they say that VPLs are good for beginners, but not for real programmers.


Ebrahimi, A.

VPCL: A Visual Language for Teaching and Learning Programming (A Picture is Worth a Thousand Words)

JVLC, 3, 1992, 299-317 


There is a need to incorporate visualization in programming. This visualization can be accomplished through various programming steps such as plan composition, language constructs and program execution. Several empirical studies of programmers reveal that major programming errors are related to plan composition and language constructs. These programming steps are considered in the development of a new visual environment known as VPCL. To understand and learn programming, VPCL is divided into three phases: plan observation, plan integration, and plan creation. During the plan observation or elementry (sic) level, the programming steps of a plan are rehearsed. In the intermediate level, the plans of a given problem are integrated by the user. In the advanced level, all the programming steps are developed using VPCL tools and the language constructs library. Each phase of VPCL is illustrated in detail with several examples. The effectiveness of VPCL as an instructional and developmental tool is demonstrated by the analysis of a sample empirical study.





The Tinkertoy Programming Environment



Finzer, W., and Gould, L.

Programming by Rehearsal

 Byte, June, 1984


None, but the first paragraph goes as follows:

Programming by Rehearsal is a visual programming environment that nonprogrammers can use to create educational software. It combines many of the qualities of computer-based design environments with the full power of a programming language. The emphasis in this graphical environment is on programming visually: only things that can be seen can be manipulated. The design and programming process consists of moving "performers" around on "stages" and teaching them how to interact by sending "cues" to one another. The system relies almost completely on interactive graphics and allows designers to react immediately to their emerging products by showing them, at all stages of development, exactly what their potential users will see.


A system with a stage-and-performers metaphor. Performers send each other cues. The only data in a Programming by Rehearsal program is visual. Performer primitives such as a text performer and a picture performer respond to cues such as setText and readFromKeyboard. It all sounds very much like an OO component library, with methods composed from examples. A prompter's box shows commands that are available in the current situation. The Smalltalk browser is also available, and allows the designer to select from an average of 15 cues for each of 18 primitive performers. Very much like a modern IDE for a "visual language" like Delphi. The difference is that the programmer doesn't type the commands, but selects commands from menus, and records parameters (some typing there). The paper avoids describing how to program choices (which would involve separate paths through the program, and hence at least a partial repetition of the "rehearsal") by using an example that involves a built-in function that must itself involve choice, but hides it from the programmer.


The paper is of its time. It introduces the concepts of Object-Orientation for an ignorant audience; "soft" buttons are defined; inheritance is used but not mentioned by name. It doesn't offer much to a designer familiar with modern programming techniques. It's fascinating to speculate about how much influence it might have had on the designers of "visual" languages like Visual Basic, Delphi, and Visual C++, though. Lots of the ideas they incorporate were in this system.

Fix, V., Wiedenbeck, S., and Scholtz, J.

Mental Representation of Programs by Novices and Experts

InterChi '93, pps 74-79


This paper presents five abstract characteristics of the mental representation of computer programs: hierarchical structure, explicit mapping of code to goals, foundation on recognition of recurring patterns, connection of knowledge, and grounding in the program text. An experiment is performed in which expert and novice programmers studied a Pascal program for comprehension and then answered a series of questions about it designed to show these characteristics if they existed in the mental representations formed. Evidence for all of the abstract characteristics was found in the mental representations of expert programmers. Novices' representations generally lacked the characteristics, but there was evidence that they had the beginnings, although poorly developed, of such characteristics.


A very interesting paper which suggests strategies for designing programming languages so that they facilitate the achievement, in novice users, of expert status. The paper hinges on an experiment in which experts and novices were given a small (135-line) Pascal program and asked various questions about it. The questions fall into two broad categories, those that require an understanding of the logical structure of the program, and of the interrelationships between its parts, and those that do not. Every question in the first category showed up a significant difference between novice and expert, and none of the questions in the second category did.

Specifically, the areas in which experts fared better than novices were:

matching procedure names to the procedures they call

writing descriptions of the goals of selected procedures

labelling complex code segments with a plan label*

listing names used for the same data objects in different program units

filling in names of program units in a skeleton outline of the program

matching variable names to the procedures in which they occur

*plans are standard patterns of code for handling stereotypical situations, which experts can recognize and (presumably, since the paper doesn't state this) deploy without having to analyse or invent them in detail.

Experts - say the authors - build a hierarchical model of the program. This allows them to map between the high-level goals of the program and their code representation. The model is grounded in an analysis of code segments, and an understanding of the interactions between parts of the program. Novices, on the other hand, apparently don't actually analyse the code in detail, but rely on cues such as meaningful variable names, and they don't develop an understanding of the hierarchical structure of the program.

This accords well with the ideas of top-down, structured programming, and it suggests that anything about the development environment that makes the deep structure of a program more explicit will reduce development time and errors. This paper was published while Craig Simmons was working on the first version of HyperPascal, and it seemed to vindicate many of the decisions we had made about the structure of the language; structure diagrams allow the users to create the program's hierarchical structure explicitly; hyperlinks allow the user to see a variable's declaration instantly; input and output lists associated with subroutine headers can show formal and actual parameters (cf question 4 above).

The paper finishes with the comment "...appropriate use of symbolic execution in instruction may aid novices in developing more expert-like representations". As a result of decisions made in 1997 about the appearance of expressions, it has become feasible for values of variables and subexpressions to be displayed in the expressions, and it is currently planned for HyperPascal to incorporate an instant-execution interpreter which would show the results of expression evaluations in the expressions, as the program is written. This should further reduce the gap between high-level goals and low-level code.

Frank, M.R., and Foley, J.D.

Inference Bear: Inferring Behaviour from Before and After Snapshots

Technical Report git-gvu-94-12,

Georgia Institute of Technology, Graphics, Visualization and Usability Center,

April 1994


We present Inference Bear (Inference Based on Before And After Snapshots) which lets users build functional graphical user interfaces by demonstration. Inference Bear is the first Programming By Demonstration system based on the abstract inference engine described in (Frank, M., and Foley, J., A Pure Reasoning Engine for Programming By Demonstration, Technical Report git-gvu-94-11, Georgia Institute of Technology, Graphics, Visualization and Usability Center, Atlanta, Georgia, Apr, 1994).

Amongst other things, Inference Bear lets you align, center, move, resize, create and delete user interface elements by demonstration. Its most notable feature is that it does not use domain knowledge in its inferencing.



Frei, H.P., Weller, D.L., and Williams, R.

A Graphics-based Programming-support System

(ACM) Computer Graphics (Proceedings of SIGGRAPH), 12, 3, August 1978


A programming support system using extended Nassi-Shneiderman Diagrams (NSDs) is described. The aim of the work is to develop techniques for improving the quality and reducing the cost of specifying, documenting, and producing computer programs. NSDs can be executed interpretively or compiled to produce running code; charts can be drawn on a variety of display devices. The system is being developed on top of the Picture Building System.



Frezza, S.T. and Levitan, S.P.

SPAR: A Schematic Place and Route System 

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 12, 7, 1993, pp. 956-973 


This paper presents an approach to the automatic generation of schematic diagrams from circuit descriptions. The heuristics which make up the system are based on two principles of schematics readability: Functional Identification and Traceability. SPAR's generation process is broken into five distinct phases: partitioning the netlist, placement of components on the page, global routing, local routing, and the addition of I/O modules. All phases of the generation process use a two dimensional space management technique based on virtual tile spaces. The global router is guided by a cost function consisting of both congestion and wirelength estimates. The local router uses a constraint-propagation technique to optimize the traceability of lines through congested areas. The data structures and algorithms used allow the system to support incremental additions to the schematic without complete regeneration. We describe a technique for evaluating the quality of schematic drawings and apply this to our results.



Frezza, S.T. and Levitan, S.P.

Congestion Router for Schematic Diagrams 

Technical Report TR-CE-94-03 (Dept of Electrical Engineering, University of Pittsburgh, Pittsburgh, PA 15261) 


This paper presents a new approach to routing schematic diagrams. Heuristic global and local algorithms are presented which focus on congestion and crossover issues in the development of aesthetically-pleasing diagrams. Congestion is the basis for the global, and crossover for the local router. This algorithm can work on a completed schematic, but also can be applied to schematics that are incrementally developed.


Golin, E.J.

Parsing Visual Languages with Picture Layout Grammars

JVLC, 2, 1991, 371-393 


Visual programming languages are languages for programming using visual expressions. Picture layout grammars are a mechanism for defining the syntax of visual languages. They allow the specification of both the logical structure and two-dimensional layout of a visual language. Spatial parsing is the process of analysing an input picture to determine its syntactic structure. This paper describes a parsing algorithm for visual languages defined by picture layout grammars. The algorithm is a general parser for visual languages, in that both the grammar specification and the picture are inputs to the algorithm. The result of parsing is an augmented tree expressing the underlying structure of the input picture, according to the grammar specification.


Graham, T.C.N., Morton, C.A., and Urnes, T.

ClockWorks: Visual Programming of Component-Based Software Architectures 

JVLC, 7, 1996, 175 - 196 


ClockWorks is a programming environment supporting the visual programming of object-oriented software architectures. In developing ClockWorks, we used user interface evaluation techniques, including heuristic evaluation, cognitive walkthrough and user evaluation. The development of ClockWorks was based on a task analysis of ClockWorks programmers. This task analysis revealed that programmers work incrementally. Incremental development implies the need for good support for information filtering and for easy refinement and restructuring of programs. ClockWorks has been implemented and runs on Sun workstations. All examples shown in this paper were programmed with ClockWorks.


Green, T.R.G. and Navarro, R.

Programming Plans, Imagery and Visual Programming

Submitted to INTERACT '95


Spreadsheets and visual programming languages raise a challenge for existing schema-based models of programming, which have scarcely been applied outside Pascal-like languages. Recent demonstrations of the role of mental imagery in spreadsheet programming raise another challenge to schema-based theories, which are propositional in form. We show that a recent schema-based model can be applied to visual languages and report comparisons between elicited mental structures; differences were found between languages (which is not predicted by schema theory), and the results suggest modification of schema theory rather than refutation. Programming environments should support 2D layout better.


Green and Navarro examine conventional models of how programmers conceptualise programs. They distinguish two types of model: whole mechanism and schema. One would use a whole-mechanism model to describe the programming concepts that the programmer has at her or his command. More sophisticated programmers have access to more sophisticated concepts, and organise them in more coherent ways, than novice programmers.

One would use a schema to describe the model that a programmer has of the essential steps in an algorithm. For example, a Running Total schema would comprise

Total := 0;

{start loop}

Total := Total + x
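
The paper presents the schema only as a skeleton; a minimal Pascal sketch of how it might be filled out in a real program follows (the array, loop bounds and variable names here are illustrative, not taken from the paper):

```pascal
Total := 0;                 { initialise the accumulator }
for i := 1 to N do          { the loop slot of the schema }
  Total := Total + x[i];    { accumulate each successive value }
```

The point of the schema is that an expert recognises this pattern as a single unit, rather than analysing it statement by statement.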

The authors point out that schema models are text-oriented, and that other programming paradigms exist (visual dataflow languages, spreadsheets) in which the concepts involved in whole-mechanism models and schemas are not especially relevant.

They have asked how subjects perceive the program fragments identified by schema theory in terms of identical programs written in LabVIEW, Basic and a spreadsheet. They have thus combined the two approaches, and applied them to non-textual programs. The experimental method involved exposing subjects to fragments of the code, so that the effect was rather like viewing a jigsaw piece by piece until one understood the picture. The experimenter then asked two questions about pairs of program fragments, relating to how closely the fragments were related to each other. Statistical analysis of the results showed that the mental networks constructed by the subjects to represent the cognitive structure of the programs were different for the three groups. Cognitive structures for the three different languages were not closely correlated, and the cognitive structure of the spreadsheet version of the program was related to its physical layout rather than to computational similarities.

The researchers concluded that an initial interpretation of these results would suggest that schema theory does not provide a good way to model programmers' cognitive representations of programs, since providing the subjects with schema information did not lead to the creation of a single cognitive model for the three functionally identical programs. However, they point out that the structures elicited for the spreadsheet match the structure of the objects in the spreadsheet, whereas the structures elicited for the Basic program match the program's goal structure.

This paper promised to provide some hints about how programmers model programs; such information can be useful in designing a VPL, as it gives the designer an idea of what aspects of the program should be visualised by the notation. However, the paper does not really provide much insight into such matters. The programs are too small to provide much of a challenge to a competent programmer, and therefore do not provide much guidance for the VPL designer.

Green, T.R.G. and Petre, M.

When Visual Programs are Harder to Read than Textual Programs

in: G.C. van den Veer, M.J. Tauber, S. Bagnarola and M. Antavolits (Eds) Human-Computer Interaction: Tasks and Organisation, Proc. ECCE-6 (6th European Conference on Cognitive Ergonomics), CUD: Rome, 1992


Claims for the virtues of visual programming languages have generally been strong, simple-minded statements that visual programs are inherently better than textual ones. They have paid scant attention to previous empirical literature showing difficulties in comprehending visual programs. This paper reports comparisons between the comprehensibility of textual and visual programs, drawing on the methods developed by Green (1977) for comparing detailed comprehensibility of conditional structures. The visual language studied was LabVIEW, a circuit-diagram-like language which can express conditionals either as "forwards" structures (condition implies action, with nesting) or as "backwards" structures (action is governed by conditions, with Boolean operators in place of nesting). Green (1977) found that forwards structures gave relatively better access to "circumstantial" information. These differences were supported in the present study for both text and graphics presentations. Overall, however, the visual programs were harder to comprehend than the textual ones, a strong effect which was found for every single subject, even though the subjects were either experienced LabVIEW users or else experienced users of circuit diagrams. Our explanation is that the structure of the graphics in the visual programs is, paradoxically, harder to scan than the text version.



Green, T.R.G. and Petre, M.

Usability Analysis of Visual Programming Environments: A 'Cognitive Dimensions' Framework

JVLC (1996) 7, 131 - 174 


The cognitive dimensions framework is a broad-brush evaluation technique for interactive devices and for non-interactive notations. It sets out a small vocabulary of terms designed to capture the cognitively-relevant aspects of structure, and shows how they can be traded off against each other. The purpose of this paper is to propose the framework as an evaluation technique for visual programming environments. We apply it to two commercially-available dataflow languages (with further examples from other systems) and conclude that it is effective and insightful; other HCI-based evaluation techniques focus on different aspects and would make good complements. Insofar as the examples we used are representative, current VPLs are successful in achieving a good "closeness of match," but designers need to consider the "viscosity" (resistance to local change) and the "secondary notation" (possibility of conveying extra meaning by choice of layout, colour, etc).


This is an EXCELLENT resource paper. The authors introduce the idea of five criteria for evaluating visual programming languages. They assert that the criteria are independent of each other - hence the idea of five "dimensions." This may be a little far-fetched, and I'm not sure that all of the dimensions are of equal value, but the paper's real value lies first in its literature review, which surveys a large number of highly pertinent notions related to good and bad ideas in VPL design, and secondly in the comparisons it makes between examples of functionally identical programs written in various VPLs.

The dimensions are:

Following this long section describing the cognitive dimensions - which are supposed to provide the vocabulary for broad-brush discussions of the merits or drawbacks of existing VPLs - the authors discuss the benefits of the cognitive dimensions approach for the designer. They emphasise that "changes [to the language] cannot be made arbitrarily. Fixing a problem in one dimension will usually entail a change in some other dimension. The designer can choose (at least to some degree) which other dimension will change. In a properly conceived framework, ... for any pair of dimensions, one could be altered while the second was held constant, so long as some other dimension was allowed to vary". Earlier in the paper they draw the following analogy: "heating a body to change its temperature will also change its volume, unless it is compressed, in which case the pressure will change instead." The pairs of related dimensions they identify are:

The cognitive dimensions framework has been used, apparently successfully, for the evaluation of a number of systems. It has itself been evaluated (Buckingham Shum, S. and Hammond, N., Argumentation-based design rationale: what use at what cost?, International Journal of Human-Computer Studies, 40, 603-652, 1994; and Buckingham Shum, S., Cognitive Dimensions of Design Rationale, in People and Computers VI: Proc. HCI '91 (D. Diaper and N.V. Hammond, eds.), Cambridge University Press, Cambridge, 331-344), and it has been developed as a practical system for use by designers of visual languages by Yang, S., Burnett, M.M., DeKoven, E. and Zloof, M. (Representation design benchmarks: a design-time aid for VPL navigable static representations, Dept of Computer Science Technical Report 95-06-3, Oregon State University, Corvallis). They have proposed a series of easily-measured benchmarks and demonstrated their applicability to two very different types of visual language. They also carried out a small study to test whether their benchmarks were usable by other designers and found that graduate students could comprehend the ideas and use them successfully to evaluate their designs.

The authors of this paper, however, suggest that the cognitive dimensions framework should be used in conjunction with other techniques such as GOMS, programming walkthroughs and claims analysis. None of these is easy to use for a non-specialist, but claims analysis in particular demands high levels of psychological sophistication.

Petre and Green suggest combining three approaches: (i) cognitive dimensions (natch!) for a broad-brush view of a system from a process perspective, (ii) the programming walkthrough for its knowledge-intensive analysis and (iii) GOMS to examine in detail some of the more frequent editing tasks, such as constructing and altering the graphical layout.

Conclusions drawn in this paper are that

Henderson, D.A., Jr., and Card, S.K.

Rooms: The use of Multiple Virtual Workspaces to Reduce Space Contention in a Window-Based Graphical User Interface

ACM Trans. Graphics, 5, 3, 1986, 211-243 


A key constraint on the effectiveness of window-based human-computer interfaces is that the display screen is too small for many applications. This results in "window thrashing," in which the user must expend considerable effort to keep desired windows visible. Rooms is a window manager that overcomes small screen size by exploiting the statistics of window access, dividing the user's workspace into a suite of virtual workspaces with transitions among them. Mechanisms are described for solving the problems of navigation and simultaneous access to separate information that arises from multiple workspaces.




Hils, D.D.

Visual Languages and Computing Survey: Data Flow Visual Programming Languages

JVLC, 3, 1992, 69-101 


The data flow model is a popular model on which to base a visual programming language. This paper describes alternatives available to the designer of data flow languages, describes many of the languages, discusses some strengths of the languages, and discusses some unsolved problems in the design of data flow languages.


The author of this paper is at least subconsciously aware of the equivalence between dataflow and functional languages; he talks about data flowing between filter functions, and about nodes representing functions and arcs representing the flow of data between functions. He also talks about the "pure" data flow model, in which a function fires when all its data are present (the data-driven model; the demand-driven model is discussed later in the paper), as opposed to a model which uses control flow constructs to specify (at least partially) the order of execution of functions.

He discusses

The languages themselves are based on the data flow paradigm, which means they're functionally-oriented, and consequently they don't seem to me to offer very much in the way of ideas for HyperPascal; furthermore, many of them seem to apply the visual approach at too low a level - function boxes for plus and minus make arithmetic expressions too unwieldy. HI-VISUAL doesn't go to this extreme, but it runs into the same sort of problem that Phillip Ngan found in Op-shop (a Massey University visual language for image processing) - inventing an unbounded set of meaningful icons is difficult.

A later version of HI-VISUAL is an OO VPL. Icons represent objects, and these incorporate methods. Arcs are used only for data output from function (method) execution. The user causes execution by superimposing icons.

VIVA is an interesting language, based on the metaphor of an electronic circuit - cf. PDL Electronic's VISTA

Some of these ideas are not restricted to the dataflow domain. In particular, the levels of liveness ideas have given rise to a number of thoughts about the development of the HyperPascal IDE. Combining Moher's execution history concept into a system with Tanimoto's fourth "level of liveness" could lead to a very useful development environment.

"Data flow visual programming languages have been most successful when they have a narrow, fairly specialized application domain (e.g. LabVIEW, whose application domain is collection and analysis of data from laboratory instruments), or when they are intended for use by non-programmers or novice programmers"

"Due to their underlying data flow computational model, data flow visual programming languages work best when their application domain centers on data manipulation" (image processing, computer graphics, data visualisation)

"Like all visual programming languages, data flow visual programming languages take up a great deal of screen space. This difficulty can be alleviated by procedural abstraction, calculator boxes (for entering mathematical formulas textually), and the use, not just of graphical icons, but also of text, for names of variables, data objects, functions, classes and instances. Research aimed at finding new ways to address the screen space problem would be useful."

Robert Jakob

A Visual Programming Environment for Designing User Interfaces

Chapter 3 of Chang? (latest reference he cites is 1985) 


None, but the first paragraph is as follows:

People have long used iconic representations to describe algorithms to other people; mechanical diagrams and procedural flowcharts are examples. But most computers require that algorithms be converted to linear strings of symbols in order to be executed, so algorithms written for computers have been restricted to symbolic representations. The current technology of personal graphics-based workstations will permit people to revert to a more natural visual or iconic mode to describe their algorithms to computers. While linear, symbolic computer languages have been studied and refined over the last 30 years, the challenge facing computer language designers today is to provide convenient and natural visual programming languages



Kopache, M.E., and Glinert, E.P.

C2: a mixed Textual/Graphical Environment for C

IEEE Proceedings Workshop on Visual Languages, 1988, 231-238 


A visual programming environment for a subset of the C language is described. The C2 (C-squared) environment, as it is called, runs on a personal workstation with high-resolution graphics display. Both conventional textual code entry and editing, and program composition by means of an experimental hybrid textual/graphical method, are supported and coexist side by side on the screen at all times. The built-in editor incorporates selected UNIX vi commands in conjunction with a C syntax interpreter. Hybrid textual/graphical program composition is facilitated by a BLOX-type environment in which graphical icons represent program structures and text in the icons represents user-supplied parameters attached to those structures. The two representations are coupled, so that modifications entered using either one automatically generate the appropriate update in the other. Although not all of the C language is yet supported, C2 is not a toy system. Textual files that contain C programs serve as input and output. Graphical representations serve merely as internally-generated aids to the programmer, and are not stored between runs.



Kramer, A.

Translucent Patches

JVLC, 7, 1996, 57 - 77 


This paper presents motivation, design and algorithms for using and implementing translucent, non-rectangular patches as a substitute for rectangular opaque windows. In the context of a pen-based system, the underlying metaphor is closer to a mix between the architect's tracing paper and the usage of whiteboards than to rectangular opaque paper in piles and folders on a desktop.

Translucent patches lead to a unified view of windows, subwindows and selections. They are a dynamic structuring mechanism and provide a base from which the tight and static connection between windows, their contents and applications can be dissolved. This forms one aspect of on-going work to support design activities that traditionally involve the written medium (e.g. paper and whiteboards) with computers. The central idea of that research is to allow the user to associate structure and meaning dynamically and smoothly to marks on a display surface.


Kramer's research goal "is to support design activities in which the written medium is used traditionally. Current computer applications are too rigid for this task, since they require the user to state the type of information entered a priori, before actually entering the information itself. We propose a system in which "any mark goes" and the user structures and assigns interpretations to written material as the need arises out of the design process. One important infrastructure in this quest is a dynamic structuring facility which still allows the user to keep as much context as possible."

Kramer proposes an interesting alternative to conventional windows. Where conventional windows are rectangular and opaque, Kramer's patches are polygonal and transparent. The polygonal nature of the patches doesn't seem to have an enormous effect on their nature - with the exception that they can be shaped and reshaped to just fit round objects they contain. The polygons, it should be noted, are likely to be highly irregular, as the patches are meant to be sketched by hand on a pen-based system.

The real importance of the interface stems from the transparency. It allows - in the author's opinion - a conceptual separation between the window and the information it contains. Thus, when two patches cover each other, the lower patch, and the objects it contains, can be seen palely through the upper patch, and by the use of appropriate user actions, information can be transferred between the two patches. In a further weakening of the conventional association between window and contained information, the idea that all information contained in a window "belongs" to a particular application is not supported. Instead, information can be interpreted in different ways according to its nature. For example, if a user incorporates a hand-written column of numbers, with a double horizontal line at the bottom, into a drawing, and chooses a calculator interpretation for that object, a calculator will add them up and write the sum (presumably in a typeface, not manual script - a convenient way of determining whether the system or the user has generated parts of the display) below the bar. Later alteration of the numbers will cause immediate, automatic recalculation of the sum (a way of including spreadsheet functions in any window, without making the window a spreadsheet window).

The patches can be grown and reduced in size, merged and deleted by pen gestures, and can be abstracted as pearls (small circles that can be expanded into a patch by a gesture). It seems likely that the gesture set would prove difficult to disambiguate. The gesture for creating a new patch (drawing a closed shape larger than a certain minimum size) would be in conflict with a gesture for creating a closed polygonal object.

In order to recognise the drawn objects and apply the user-selected interpretations to them, it would be necessary to parse the drawing spatially, as defined in Lakin, for example.


Lakin, F.

Spatial Parsing for Visual Languages 

Chapter 2 of Visual Languages: Chang, S., Ichikawa, T., and Ligomenides, P., eds (Plenum Publishing, New York)

(same typographic style as Jakob's paper) 


None: the article is a book chapter. It starts with an overview, which begins thus:

Theoretical context. The long-term goal of this research is computer understanding of how humans use graphics to communicate. The full richness of graphic communication is exemplified in blackboard activity,* which is one kind of conversational graphics. Conversational graphics can be defined as the spontaneous generation and manipulation of text and graphics for the purpose of communication. But there is a problem with taking informal conversational graphics as a phenomenon for investigation using computer tools. The unrestricted use of text and graphics found on blackboards is too rich and ambiguous to be tackled head-on as a project in machine understanding.

*In this chapter, we will refer to both informal conversational graphics and to the everyday example of such graphics, blackboard activity. The actual phenomenon under investigation is informal conversational graphics on computer displays, which is properly speaking the heir to blackboard activity: the result of augmenting that activity by the power and flexibility of computer graphics, which will in the long run transform it.




Lamping, J and Rao, R.

The Hyperbolic Browser: A Focus + Context Technique for Visualizing Large Hierarchies

JVLC, 7, 1996, 33 - 55 


We present a new focus+context technique based on hyperbolic geometry for visualising and manipulating large hierarchies. Our technique assigns more display space to a portion of the hierarchy while still embedding it in the context of the entire hierarchy. We lay out the hierarchy in a uniform way on a hyperbolic plane and map this plane onto a display region. The chosen mapping provides a fisheye distortion that supports a smooth blending of focus and context. We have developed effective procedures for manipulating the focus using pointer clicks as well as interactive dragging and for smoothly animating transitions across such manipulation. Enhancements to the core mechanisms provide support for multiple foci, control of the tradeoff between node density and node display space, and for visualizing graphs by transforming them into trees.


Paul Lyons, Craig Simmons, and Mark Apperley

HyperPascal: A Visual Language to Model Idea Space 

Proceedings of the 13th New Zealand Computer Society Conference, August 1993, 492-508 


Programmers develop problem solutions in an abstract idea space, where they can easily visualise different views of the solution. However, conventional programs exist in sequential text space, where these different views often become inextricably tangled. We reject the text-based single-sequence structure - which many visual programming languages have heedlessly adopted - in favour of a hyperspatial environment where optimally-structured, disparate, models of a problem solution can be fashioned, where model integration is facilitated, and where hyperspatial navigation is intuitive.

To explore the hyperspace, we invented HyperPascal, a general-purpose visual programming language with the capabilities of Pascal. The language exploits the power of interactive graphic interfaces, automatically generating syntactically correct programs, but constraining the programmer's actions minimally. Later developments will go beyond the procedural paradigm to explore the relationship between higher level system design paradigms (like OOD) and hyperspace.


McWhirter, J.D., and Nutt, G.J.

Escalante: An Environment for the Rapid Construction of Visual Applications

Tech Report CU-CS-692-93

(University of Colorado at Boulder

Department of Computer Science

Campus Box 430

University of Colorado

Boulder, Colorado 80309)


Escalante is an environment that supports the iterative design, rapid prototyping and automatic generation of complex visual language applications with a modest amount of effort. It enables the application developer to specify the application and its interface by defining its data model and the corresponding visualization model (using a visual specification environment). The data models are general graph models, while the visualization models are relatively unconstrained graphics; this enables the user interfaces to represent a broad set of presentations and views while adhering to a general framework in which well-defined behaviour can be easily specified. Once the data and visualization model have been defined, Escalante will generate a program that implements the data model and the viewing mechanism using a fixed control mechanism; the resulting program can be enhanced to incorporate arbitrary application software. The approach enables a surprising range of interfaces to fit within the meta model; the paper characterizes the spectrum of the domain by describing different example applications (including some quantification of the effort required to construct each example).




Moher, T.G.

PROVIDE: A Process visualization and Debugging Environment

IEEE Trans. Softw. Eng., SE-14, 6, 1988, 849-857 


This paper introduces PROVIDE, a source-level Process Visualisation and Debugging Environment currently under development at the University of Illinois at Chicago. PROVIDE is a modern coding and debugging environment that is designed to allow the user to configure interaction at a desired level of abstraction. It places a heavy emphasis on the use of interactive computer graphics for the illustration of program execution, with special attention to the requirements of program debugging. This paper presents the major features of PROVIDE, especially the concepts of deferred-binding program animation, which allows users to interactively change the depiction of program execution during the debugging task, and process history consistency maintenance, which guarantees a consistent (automatically updated) record of program execution in the face of changes to program instructions and run-time data values. The current PROVIDE prototype is implemented on Macintosh workstations networked to a VAX 11/780 running 4.2 BSD UNIX.


PROVIDE extends the capabilities of "an old, useful, and apparently under-utilized tool: the dynamic debugger." This statement could be made in 1988; however, modern IDEs universally incorporate dynamic debuggers, with breakpoints, single-step operation, data-state queries, tracing, patching and the like. It may be that few have adopted as graphically-oriented an approach as PROVIDE, which "extends the capabilities of the dynamic debugger in a number of ways, including:

  1. the use of computer graphics rather than text to depict process states
  2. continuous, rather than query-driven, display of user-defined process state representation
  3. direct manipulation of graphic process state representations, rather than a command language, for modifying data objects
  4. random, rather than sequential and unidirectional, access to all process states arising during execution
  5. interactive control over program granularity
  6. state selection based on data states as well as control states
  7. automatic consistency maintenance of process states and displays in the face of modifications to programs and data"

In the PROVIDE environment, a mainframe computer interprets the (C-like) program and gives access to a database of information about it (execution state, program variables, etc), and a number of PCs are used to access the information provided by the mainframe.

Windows onto the program and its environment include:

Selecting a module causes three windows to be opened

The author then deals with data visualisation (program animation). He eschews predefined dynamic visualisation, based on the assumption of a correct algorithm, reasoning that debugging is largely experimental. PROVIDE incorporates deferred binding of data to picture types; the pictures are run-time displays of program objects, and changing their representation does not involve retranslation or re-execution. The predefined picture types (e.g., pie chart, x-y plot, horizontal array, scalar value) are associated with particular program objects and literals (labels, for example); thereafter updates to the object are reflected in the picture.
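The deferred-binding idea can be sketched as follows. This is an assumed simplification in Python, not PROVIDE's actual mechanism; the picture types, names and rendering format are invented:

```python
# Illustrative sketch of deferred binding: a program object can be
# re-bound to a different picture type at run time, with no
# retranslation or re-execution of the program itself.

picture_types = {
    "scalar":  lambda name, v: f"{name} = {v}",
    "h-array": lambda name, v: name + ": " + " ".join("#" * x for x in v),
}

bindings = {}

def bind(obj_name, picture_type):
    """Associate (or re-associate) a picture type with a program object."""
    bindings[obj_name] = picture_type

def depict(obj_name, value):
    """Render the object's current value through its bound picture type."""
    return picture_types[bindings[obj_name]](obj_name, value)

bind("x", "scalar")
assert depict("x", 42) == "x = 42"

# Re-binding changes only the depiction, not the program.
bind("scores", "h-array")
assert depict("scores", [3, 1, 2]) == "scores: ### # ##"
```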

PROVIDE allows backwards execution of the program by keeping a state history. As a program is interpreted, state transitions are recorded as frames in a process history database. All user requests for run-time information are treated as queries to the database. Frames may be selected for display by sequential execution, breakpoint, or Boolean expressions involving program objects (for example, active(fn-name), which selects the currently active function, or x==y, which finds the next state frame where x and y become equal).
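The query mechanism amounts to predicate search over recorded frames. This is an illustrative Python sketch with an invented frame format; the paper does not describe PROVIDE's database at this level:

```python
# A process history recorded as a list of frames, queried with predicates.

frames = [
    {"step": 0, "active": "main", "x": 1, "y": 5},
    {"step": 1, "active": "swap", "x": 3, "y": 5},
    {"step": 2, "active": "swap", "x": 5, "y": 5},
    {"step": 3, "active": "main", "x": 5, "y": 2},
]

def next_frame(history, start, pred):
    """Return the first frame at or after `start` satisfying `pred`."""
    for frame in history[start:]:
        if pred(frame):
            return frame
    return None

# Analogue of the query x==y: the next state where x and y become equal.
assert next_frame(frames, 0, lambda f: f["x"] == f["y"])["step"] == 2

# Analogue of active(fn-name): the next frame in which `swap` is active.
assert next_frame(frames, 0, lambda f: f["active"] == "swap")["step"] == 1
```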

Graphic data values may be directly manipulated. For example, the individual columns in a histogram display may be dragged up or down to increment or decrement the value of an array component, or a wedge in a pie-chart may be increased in size, proportionally reducing the values of the other variables displayed in the pie chart.
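The pie-chart interaction amounts to simple proportional rescaling. The function below is a hypothetical sketch invented from the description above, not PROVIDE's code:

```python
# Growing one wedge rescales the other values proportionally,
# so that the total represented by the pie is preserved.

def grow_wedge(values, i, new_value):
    """Set values[i] to new_value; shrink the other wedges proportionally."""
    total = sum(values)
    others = total - values[i]
    scale = (total - new_value) / others
    return [new_value if j == i else v * scale
            for j, v in enumerate(values)]

vals = grow_wedge([10.0, 20.0, 30.0, 40.0], 0, 25.0)
assert abs(sum(vals) - 100.0) < 1e-9   # total preserved
assert vals[0] == 25.0                 # the dragged wedge gets its new size
```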



No syntax-directed editing

Myers, B.A.

Visual Programming, Programming by Example, and Program Visualization: a Taxonomy

 Conference Proceedings, CHI '86: Human Factors in Computing Systems, 1986, 59-66


There has been great interest recently in systems that use graphics to aid in the programming, debugging, and understanding of computer programs. The terms "Visual Programming" and "Program Visualisation" have been applied to those systems. Also, there has been a renewed interest in using examples to help alleviate the complexity of programming. This technique is called "Programming by Example." This paper attempts to provide more meaning to these terms by giving precise definitions, and then uses these definitions to classify existing systems into a taxonomy. A number of common unsolved problems with most of these systems are also listed.



Olson, A.M.

Icon systems for Object-oriented System Design

JVLC, 2, 1991, 52-74 





Papantonakis, A., and King, P.J.H.

Syntax and Semantics of Gql, a Graphical Query Language 

JVLC, 6, 1995, 3-25 


The problem of formalization for visual languages has been identified as an important one. We present in this paper a formal definition of both the syntax and semantics of Gql, a declarative graphical query language based on the functional data model. In Gql a query is fully and unambiguously represented by a single diagram and the user interaction is kept distinct from the language itself. In our approach for formalization we abstract from the world of graphics and concentrate on a world of sets and functions, called the base structure, which represent the various elements of the language. The syntactical definition of the language is completed by defining a set of rules that a base structure instance must satisfy, in order for it to correspond to a legal Gql query. The semantics of the language is given via a functionally defined, syntax-directed translation from Gql queries (represented as base structure instances) to list comprehensions. Finally, a form of attribute grammar is used in conjunction with the previous definitions for specifying in a single formalism both the syntax and semantics of Gql.



Paterno, F.

A Theory of User-Interaction Objects

JVLC, 5, 1994, 227-249 


A general theory is presented which formally describes user interface systems that manage communications between users and applications. Its purpose is to provide the formal semantics of a large spectrum of user interface systems by means of interaction objects which have been mathematically defined. The main features of this approach are that the relationships between input and output functionalities are completely addressed, and these systems are uniformly described by a multilayered composition of interaction objects derived from a specific architectural model for basic interactions with users. Within the framework thus obtained we make comparisons between different instances of interactive visual environments in order to establish whether their behaviour is equivalent at least for some aspects, and we give examples of properties that can be investigated.



Murray Pearson, Paul Lyons, and Mark Apperley

Synthesis of Digital ICs from Data Flow Diagrams

Proceedings of the First Asian Pacific Conference on Hardware Description Languages, Standards and Applications; December 1993, 84-88 


This paper describes the background, development, and testing of the PICSIL synthesis system, which is part of the PICSIL IC design system. Designers communicate their design intent to the PICSIL system using the PICSIL hardware description language comprising graphical notations for defining high level abstract organisational ideas and textual notations for defining their functionality. To allow the designer to maximise productivity the PICSIL system provides a synthesis manager to completely automate the synthesis process. The synthesis manager drives the lower level Olympus and octtools packages to provide a complete path from PICSIL input to a chip layout. The synthesis system has been successfully tested by synthesising three designs, one of which has been fabricated.


Pearson, M.W., Lyons, P.J., and Apperley, M.D.

"High-level Graphical Abstraction in Digital Design" 

VLSI Design , 5, 1, 101 - 110, 1996




Petre, M. and Green, T.R.G.

Learning to Read Graphics: Some evidence that "seeing" an Information Display is an Acquired Skill

JVLC, 4, 55-70, 1993 


This paper suggests that experience influences what "readers" of graphical representations look at and hence what they see, so that readership skills - both perceptual and interpretive - for graphical notations must be learned. It draws on results from two sets of empirical studies; observational studies of expert hardware designers using electronics schematics, and experiments comparing readability of textual and graphical programming notations. Less experienced users appear unable to exploit (or even notice) the graphical clues that might help them. The paper discusses "secondary notation", the "match-mismatch hypothesis" and a model of the programmer as an "active reader", in order to shed some light on what distinguishes expert from novice behaviour. It observes that clarity in a representation may well rely on good use of features that are not formally part of the notation, and it concludes that the importance of training and experience with respect to the use of graphical notations has been underestimated.


The authors of this paper are agnostic about the benefits of graphical notations. They point out that simply for an interface to be graphical is no guarantee of its quality.

Like paintings, GUIs are pictorial; like technical manuals, GUIs contain complex information. Unlike paintings, GUIs are not appreciated as a gestalt; like technical manuals, they are read in a goal-oriented way. Typographers lay documents out to facilitate this type of reading, and readers learn the skills to take advantage of the typographical clues. Similarly, GUI designers can provide a "secondary notation," based on grouping and adjacency, that reinforces information not emphasised or expressed by the formal notation. They quote Raymond (in [L.M. Hurvich (1981) Colour Vision, Sinauer, Sunderland, Massachusetts]), who argues that the possibility of analog mapping (adjacency implies relatedness) is the only specific contribution of visual programming languages, and that other characteristics of contemporary graphical programming languages can be realized just as well in textual programming languages.

I find this an odd notion, as it seems to me that graphical languages score in several respects over textual languages: they can use different notations for functionality (e.g. HyperPascal's textual assignments) and "relatedness" (e.g. HyperPascal's boxes for grouping sequences of assignments), and they can abstract out of the textual sequence information which is not part of the functionality. For example, declarations, which clutter up a textual language like Pascal, can be abstracted out into another dimension by a graphical language, but still be readily accessible by hyperlinking, possibly even more accessible than in a textual language.

Nevertheless, secondary notation is clearly important, and a language should at the least facilitate the production of good secondary notation, if not produce it automatically (cf prettyprinting in textual languages).

The experiments detailed in the paper are interesting. They involved getting people to answer questions about similar programs expressed in graphical and textual notations. Questions about the graphical notation were answered more slowly than questions about textual notation in all conditions. The inference that graphical notation is inferior to textual notation seems inescapable; however, on analysis, the experiment seems artificial and the conclusion ("Far from guaranteeing clarity and superior performance, graphical representations may be more difficult to access") seems unjustified.

First, the textual notations used were both highly simplified, refined versions of textual programming languages, whereas the graphical notation (Labview) was a commercial programming language, with all the added complexity that that implies.

Secondly, the experimental "program" is unlike any program I've ever seen, comprising only a set of nested conditionals.

Thirdly, to dismiss graphical notations in general on the basis of a single example is - at best - intellectually slapdash.

The pity of it all is that, as the author of a visual programming language, I may seem to have an axe to grind. However, (I think) I'd just like some unambiguous information; HyperPascal has been designed as a mixed textual/graphical notation on the basis that text is better suited to some things (defining data manipulations), and graphics is better suited to others (showing relationships), and it would be nice to know whether that decision was sensible. Petre and Green's paper doesn't have the answer.


However, their paper does contain useful material:

the idea that "secondary notation" contributes to our understanding

the idea that novices confuse visibility with relevance, and imbue layout with logical significance

the idea that what a reader sees is largely a matter of what he or she has learned to look for

and the caveat that, whereas the textual notations used were refined versions of real languages, Labview may not be



"secondary notation" is an idea worth remembering when designing a language.

Poswig, J., Vrankar, G., and Morara, C.

VisaVis: a Higher-order Functional Visual Programming Language

JVLC 5, 83-111, 1994


The paper presents the functional visual language VisaVis. We focus on the new, flexible interaction strategy, substitution, which brings ease of construction to visual programs and integrates higher-order functions smoothly. In order to illustrate the capabilities of the implemented prototype, comparisons with visual languages are given throughout the text. The programming environment is outlined, as well as the compilation into the meta-language FPP, which preserves the (data-) parallelism inherent in the visual programs. Improvements to the programming environment are also discussed.


VisaVis is a functional language using zeroth-order functions (constants and variables that return a value even when given no arguments), first-order functions (functions that take zeroth-order functions as arguments) and second-order functions (functions that take first-order functions as parameters). The authors acknowledge a debt to Rasure and Williams' (1991) language cantata (part of the Khoros system). The visual symbol set seems very complex. It is based around a key/keyhole metaphor.



 Rasure, J.R., and Williams, C.S.,

 An Integrated Data Flow Visual Language and Software Development Environment

JVLC, 2, 1991, 217-246


The current generation of data flow based visual programming systems is all too often limited in application. It is our contention that data flow visual languages, to be more widely accepted for solving a broad range of problems, need to be more general in their syntax, semantics, translation schemes, computational model, execution methods and scheduling. These capabilities should be accompanied by a development environment that facilitates information processing extensions needed by the user to solve a wide range of application-specific problems. This paper addresses these issues by describing and critiquing the Khoros system implemented by the University of New Mexico, Khoros Group.

The Khoros infrastructure consists of several layers of interacting subsystems. A user interface development system (UIDS) combines a high-level user interface specification with methods of software development that are embedded in a code generation tool set. The UIDS is used to create, install and maintain the fundamental operators for cantata, the visual programming language component of Khoros.


The visual language in the Khoros system, cantata (no upper-case first letter), uses a dataflow paradigm - which in a visual language is virtually, if not completely, indistinguishable from a functional paradigm - to represent programs. Data flows between functional units with arbitrary numbers of inputs and outputs. The paper describes small and large applications in terms of the number of operators (functional units) they contain (25 and 150 respectively), and talks about the need to have a hierarchical menu structure (up to three levels are possible) for selecting operators in large programs. The authors have chosen not to use icons to identify the >250 operators available through cantata, though each operator is represented as an icon with a name-field.


cantata conditional

The illustration above, of a cantata conditional shows the names at the bottom of the icons, and a number of replicated controls at the top. Controls for the if-else are entered via a separate dialog, which seems a very indirect way of generating the information.

The buttons at the top of the glyph (icons are called glyphs in cantata) are used to destroy the glyph (though a bomb seems to imply an execution error), to return to the subform (textual representation of the current operator, rather than the window for the parent operator?), to reset the control information for the operator (useful in an interpreted system), and to execute the subroutine represented by the glyph (should this be called GO, rather than OFF?). The buttons seem to take up an inordinately large amount of space on the screen. Would pop-up menus (cf. HyperPascal's active components) be a better idea?

cantata allows a hierarchy of operators, like conventional languages' subroutines, but generally looks as though it has taken the graphical model to too low a level. The paper does not show arithmetic expressions, but the implication is that each operator would require one of the operator glyphs. Consequently, expressions of any complexity would be very large, and probably quite time-consuming to create.
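To see why expression-level dataflow gets large, here is a minimal dataflow-node evaluator - an illustrative Python sketch under my own assumptions, not cantata's actual model:

```python
# Each operator is a node whose input ports are fed by other nodes.
import operator

class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def value(self):
        # Demand-driven evaluation: pull values through the input arcs.
        return self.fn(*(n.value() for n in self.inputs))

class Const(Node):
    def __init__(self, v):
        self.v = v

    def value(self):
        return self.v

# Even the small expression (a + b) * (a - b) needs one glyph-like node
# per operator, which is why expression-level graphs grow so quickly.
a, b = Const(7), Const(3)
result = Node(operator.mul,
              Node(operator.add, a, b),
              Node(operator.sub, a, b))
assert result.value() == 40
```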

Serot, J., Quenot, G., and Zavidovique, B.

A Visual Dataflow Programming Environment for a Real Time parallel Vision Machine

JVLC, 6, 1995, 327 - 347


Programming parallel architectures dedicated to real-time image processing (IP) is often a difficult and error-prone task. This mainly results from the fact that IP algorithms typically involve several distinct processing levels and data representations, and that various execution models as well as complex hardware are needed for handling these processing layers under real-time constraints.

Our goal is to permit an intuitive but still efficient handling of such an architecture by providing a continuous and readable path from the functional specification of an algorithm to its corresponding hardware implementation. For this, we developed a data-flow programming model which can act simultaneously as a functional representation of algorithms and as a structural description of their corresponding implementations on a target computer built up of 3-D interconnected data-driven processing elements (DDPs).

Algorithms are decomposed into functional primitives viewed as top-level nodes of a data-flow graph (DFG). Each node is given a known physical implementation on the target architecture, either as a single DDP or as an encapsulated sub-graph of DDPs, making the well-known mapping problem a topological one.

The target computer was built at ETCA and embeds 1024 custom data-driven processors and 12 transputers in a 3-D interconnected network, Concurrently with the machine, a complete programming environment has been developed. Relying on a functional compiler, a large library of IP primitives and automatic place-and-route facilities, it also includes various X-Window based tools aiming at visual and efficient access to all intermediary program representations.

In terms of visual languages, we try to share the burden between all the layers of this programming environment. Rather than including some display facilities in existing software environment (sic), we have taken advantage of the intuitiveness of functional representation, even textual, and of the hardware efficiency that provides immediate results, ultimately supporting hierarchical constructs.


Another (besides PICSIL) illustration of the application of Visual Programming to hardware design. Probably not of direct relevance to the work on HyperPascal, though.


Shneiderman, B.

Direct Manipulation: A step beyond Programming languages

IEEE Computer, 16(8), 56-69, 1983


No abstract


This 1983 paper is regarded as a seminal work in the development of Graphical User Interfaces. There are lots of insights into how people interact with computers, such as the identification of and distinction between semantic knowledge (knowledge about what the software can do in the problem domain) and syntactic knowledge (knowledge about how to get the software to do it). Semantic knowledge is further divided into low-level functions, which are close to syntactic knowledge, and high-level functions, which can be decomposed into a series of low-level operations. Shneiderman asserts that novices confuse syntactic and semantic knowledge, thinking mainly at the syntactic level, and moving slowly to thinking at the semantic level as they gain experience. He further asserts that in direct manipulation systems, complex syntax does not have to be composed, as the necessary sequences of operations to complete a particular task are self-evident.

The paper describes such desiderata as "display of the document in its final form," "cursor action that is visible to the user," "cursor motion through physically obvious and intuitively natural means," "labelled buttons for action," "immediate display of the results of an action," "rapid action and display," and "easily reversible commands." It cites Visicalc, an early spreadsheet, and discusses the idea of spatial data management. It shows how these attributes - intuitive manipulation, instant feedback, spatial representation of data - are present in video games. Interestingly, Shneiderman talks about easily reversed commands in the context of video games, whereas, of course, the time-sequence modelled by many video games does not allow full undoing of actions - once the alien has killed you, you're not able to reverse the user action that allowed this to happen. He quotes Rutkowski ("An Introduction to the Human Applications Standard Computer Interface, Part I: Theory and Principles," Byte 7, 11, Oct 1982, pp. 291-310): "the user is able to apply intellect directly to the task; the tool itself seems to disappear." Sadly, this has not happened in many "direct manipulation" interfaces. Programmers have used standard interface components such as buttons and, especially, dialog boxes without thought as to their relevance to the task at hand.

The paper discusses various metaphors, and the greater ease of manipulating information with a natural graphical content, graphically, but also points out that not all graphical representations are helpful.


This paper, and Ben Shneiderman's work in general, has had a great deal of influence on the development of the field of Human Computer Interaction.

Shu, Nan C.

Visual Programming Languages: A Perspective and Dimensional Analysis

Chapter 1 in Chang? An expanded version of the paper "Visual Programming Languages: A Perspective and Dimensional Analysis" International Symposium on New Directions in Computing, August 12-14, 1985, Trondheim, Norway, pp 326-334


No abstract: the first paragraph is as follows:

In the last few years, the rapid decline of computing costs, coupled with the sharp increase of personal computers and "canned" software, has expanded dramatically the population of the computer user community. More and more people today are using computers. However, to many people, the usefulness of a computer is bounded by the usefulness of the canned application software available for the computer. Application programs written for a mass audience seldom give every user all the capabilities that he/she needs. Those who wish to use the computer to do something beyond the capabilities of the canned programs discover that they have to "program".



Nan Shu's seminal 1985 book is now somewhat outdated. She has attempted to categorise VPLs according to the following taxonomy:

Visual Programming
    Visual Environment
        Visualization (sic) of Program and Execution
        Visualization of Data or Information
        Visualization of System Design
    Visual Languages
        For processing visual information
        For supporting visual interaction
        For actually programming with visual expression

She specifies that Visual Programming Languages are properly the last of these categories (languages for actually programming with visual expression) - a specification that has more recently fallen foul of the common commercial classification of languages such as Visual Basic, C++, and Delphi (languages for supporting visual interaction) as Visual Programming Languages. In the VPL research community the phrase is still used in the sense in which Nan Shu has used it.

The author then specifies a three-dimensional space in which any visual programming language can be represented as a triangle. The dimensions are characterised as language level, scope, and visual extent. Language level is inversely related to the amount of detailed instruction that has to be supplied in order to achieve a given result. Assembly language requires a lot of detail and is therefore a low-level language. FORTRAN, C, Java, and so on require much less detail and are therefore higher-level languages.

Language scope is directly related to the range of problems that the language can be used to solve. Special-purpose languages (database languages, for example) have a smaller scope than general-purpose languages (Pascal, for example).

Visual expression is directly related to the extent to which graphics are used to represent program components. It does not relate to the language's ability to handle graphical data, or its ability to generate programs with a graphical interface.

A language has a profile represented as a triangle in the three-dimensional space. For example, the Xerox Star operating system interface (the forerunner of the Macintosh Finder and Windows operating system interfaces) can be considered as a language with a limited scope (moving and copying files, running applications), a fairly low-level language (complex sets of operations, such as are possible with an OS with a pipe metaphor like UNIX, cannot be performed directly), but a high visual extent (files and directory structures are represented pictorially, and operations are represented by movement of these interface components around the screen). By superimposing the triangles for several languages on the same plot, it is possible to obtain some sort of feeling for their comparative power and "visualness."
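Shu's profile idea can be sketched as one score per dimension per language. The dimension names come from the paper, but the 0-10 scales and the scores below are invented for illustration:

```python
# Each language gets a (level, scope, visual extent) triple - the three
# vertices of its profile triangle.

profiles = {
    # (language level, scope, visual extent)
    "Xerox Star interface": (3, 2, 9),   # low level, narrow scope, very visual
    "Pascal":               (6, 8, 0),   # higher level, general purpose, textual
}

def compare(name_a, name_b):
    """Pair up two languages' scores, dimension by dimension
    (the analogue of superimposing their triangles on one plot)."""
    dims = ("level", "scope", "visual extent")
    return {d: (x, y)
            for d, x, y in zip(dims, profiles[name_a], profiles[name_b])}

diff = compare("Xerox Star interface", "Pascal")
assert diff["visual extent"] == (9, 0)
assert diff["scope"] == (2, 8)
```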

Shu does not explicitly state that languages with a high visual extent are ipso facto a Good Thing, but in the context of the paper, it is tempting to draw this inference. However, this is not necessarily the case. Many Visual languages have used pictorial representations of very low-level operations, such as the operations in an arithmetic expression, and, in doing so, have made the program more difficult both to read and to write. Visual representations should be deployed where they are most useful - in representing relationships between things.



Storey, M.A., Fracchia, F.D. and Muller, H.A.

Customizing a Fisheye View Algorithm to Preserve the Mental Map

Journal of Visual Languages and Computing, 10, 1999, 245-267, Article No. jvlc.1999.0124


Frequently, large knowledge bases are represented by graphs. Many visualization tools allow users of other applications to interact with and adjust the layouts of these graphs. One layout adjustment problem is that of showing more detail without eliding parts of the graph. Approaches based on a fisheye lens paradigm seem well suited to this task. However, many of these techniques are non-trivial to implement and their distortion techniques often cannot be altered to suit different graph layouts. When distorting a graph layout, it is often desirable to preserve various properties of the original graph in an adjusted view. Pertinent properties may include straightness of lines, graph topology, orthogonalities and proximities. However, it is normally not possible to preserve all of the original properties of the graph layout. The type of layout and its application should be considered when deciding which properties to preserve or distort. This paper describes a fisheye view algorithm which can be customized to suit various different graph layouts. In contrast to other methods, the user can select which properties of the original graph layout to preserve in an adjusted view. The technique is demonstrated through its application to visualizing structures in large software systems.


The underlying idea in the algorithm described in this paper is that when an object in a graph is to be expanded, the expansion takes place in three phases. First, the expansion is applied to the target object, and its new size is determined. Second, the objects around it are pushed aside by the amount needed to give its new, enlarged representation space. Moving all the surrounding objects outwards means that the complete image may now occupy more than the total available space; that is, it may have grown outside the boundaries of the current window. The third phase is therefore a reduction in size of the whole image to make it fit the available space.

This algorithm maintains the relative sizes of the image components post-enlargement, but the enlarged object has been reduced again in order to fit the image into the window. As it stands, therefore, the algorithm would be good for situations in which the size of a component with respect to other image components is determined by software, and the resultant image is drawn on the screen. However, if the user is dragging a component to enlarge it, he or she wants the component to end up with the size they specify, not a reduced size. The algorithm therefore needs modifying for use in a direct manipulation system.
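The three phases, and the side effect they produce for direct manipulation, can be sketched in one dimension. This is an assumed simplification in Python; the paper's algorithm works on full 2-D graph layouts:

```python
# One-dimensional caricature of the three-phase fisheye expansion.

def fisheye_1d(widths, target, new_width, window):
    # Phase 1: enlarge the target object.
    widths = list(widths)
    widths[target] = new_width
    # Phase 2: neighbours keep their sizes but are pushed outwards, so
    # the layout may now exceed the window.
    total = sum(widths)
    # Phase 3: uniformly rescale the whole image to fit the window.
    scale = min(1.0, window / total)
    return [w * scale for w in widths]

out = fisheye_1d([10, 10, 10], target=1, new_width=30, window=30)
assert abs(sum(out) - 30) < 1e-9   # the layout fits the window again
# ...but the object the user enlarged to 30 has shrunk back (to 18),
# which is exactly the problem for direct manipulation noted above.
assert out[1] < 30
```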

The authors point out that it is not possible to maintain all the relationships between a set of objects when some of them are increased in size, but claim that different versions of their algorithm allow distortion oriented displays that maintain properties of the user's mental map that are important for particular types of graph.

The relevance to HyperPascal is that it may provide a way of allowing the user to collapse subtrees, while maintaining the overall appearance of a diagram, so that the user's mental model of the structure is not disturbed when subtrees collapse or are reinstated.


Tanimoto, S.L., and Glinert, E.P.

Designing Iconic Programming Systems: Representation and Learnability

IEEE Proceedings Workshop on Visual Languages, 1986, 54-60


Because they present computing objects graphically, iconic programming systems are potentially more intuitive and comprehensible to programmers than conventional, text-based systems. However, the problem of creating good pictorial representations for objects is more difficult than the problem of making up textual identifiers for objects. The creation of visual representations becomes easier and more effective when the system has been properly structured and guidelines are provided for designing icons as well as graphical displays of programs and data.

The ease with which a visual programming system may be used depends upon its incorporation of appropriate metaphors, graphical design, built-in curricular progressions, and thorough support for graphical composition. In order to achieve the goal of user-friendly visual programming systems, a new kind of software and hardware integration is required that ties together graphical design, image processing, pictorial databases, programming techniques, and animation hardware.


The authors are interested in designing icons to support visual metaphors for programming. Their goal is to make programming languages which are "immediately graspable" - especially by children. They quote authors such as Piaget and Papert, who are active in early childhood education and psychology. The result is a fondness for completely graphical programming systems; in particular they seem to be oriented towards the graphical representation of operations. This is generally considered to be one of the areas in which iconic representations are inherently weak; it's comparatively easy to draw pictures of things, but not so easy to draw pictures of actions - especially actions in the abstract, without an example of a thing on which the action is being performed.

They talk about program animation in passing, but give no examples. The metaphor illustrated at the end of the paper is excessively concrete; robots operating on a production line are useful for providing initial understanding, but likely to prove intrusive for long-term use. This probably sums up the paper; it addresses the problems of learning-to-program, not the problems of programming-as-a-regular-activity.

Ward, P.T.

The Transformation Schema: An Extension of the Data Flow Paradigm to Represent Control and Timing

IEEE Transactions on Software Engineering, SE-12, 2, February 1986, 198-210


The data flow diagram has been extensively used to model the data transformation aspects of proposed systems. However, previous definitions of the data flow diagram have not provided a comprehensive way to represent the interaction between the timing and control aspects of a system and its data transformation behaviour. This paper describes an extension of the data flow diagram called the transformation schema. The transformation schema provides a notation and formation rules for building a comprehensive system model, and a set of execution rules to allow prediction of the behaviour over time of a system modeled in this way. The notation and formation rules allow depiction of a system as a network of potentially concurrent "centers of activity" (transformations), and of data repositories (stores), linked by communication paths (flows). The execution rules provide a qualitative prediction rather than a quantitative one, describing the acceptance of inputs and the production of outputs by the transformations but not input and output values.

The transformation schema permits the creation and evaluation of two different types of system models. In the essential (requirements) model, the schema is used to represent a virtual machine with infinite resources. The elements of the schema depict idealized processing and memory components. In the implementation model, the schema is used to represent a real machine with limited resources, and the results of the execution predict the behaviour of an implementation of the requirements. The transformations of the schema can depict software running on digital processors, hard-wired digital or analog circuits, and so on, and the stores of the schema can depict files, tables in memory, and so on.
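The qualitative execution rules described in the abstract - transformations accept inputs and produce outputs, but values are not modelled - can be sketched as a token-passing simulation. The following is my own minimal illustration, not Ward's notation; the network and flow names are invented, and each transformation is assumed to fire whenever tokens are present on all of its input flows.

```python
# Qualitative execution sketch: a marking records which flows currently
# carry a token; a transformation enabled in the current marking consumes
# its input tokens and emits output tokens. No data values are modelled.

def step(transforms, marking):
    """Fire every transformation enabled in `marking` once; return the new marking."""
    new_marking = set(marking)
    for name, (inputs, outputs) in transforms.items():
        if all(flow in marking for flow in inputs):  # enabled in the old marking?
            new_marking -= set(inputs)               # accept inputs
            new_marking |= set(outputs)              # produce outputs
    return new_marking

# Hypothetical two-stage network: raw data is filtered, then displayed.
transforms = {
    "filter":  ({"raw"},      {"filtered"}),
    "display": ({"filtered"}, {"shown"}),
}

marking = {"raw"}
marking = step(transforms, marking)  # "filter" fires
marking = step(transforms, marking)  # "display" fires
print(marking)                       # -> {'shown'}
```

This captures only the behaviour-over-time prediction the paper describes: which transformations can act, in what order, without saying anything about the values flowing between them.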



Ware, C.

The Foundations of Experimental Semiotics: A Theory of Sensory and Conventional Representation

JVLC, 4, 91-100, 1993


Experimental semiotics is defined as the elucidation of symbols that gain their meaning by being structured to take advantage of the human sensory apparatus. In making this definition a distinction is made between languages which are fundamentally sensory and those which are fundamentally conventional. Experimental semiotics is concerned with the former. Sensory representations are good (or bad) because they are well matched to the early stages of neural processing of sensory information. They tend to be stable across individuals and cultures. Conversely, conventional languages gain their power from culture and are dependent on the particular cultural milieu of an individual. This theoretical distinction provides a basis for testable predictions about the ease of learning for languages in the two classes. The examples given are mostly based on the visual modality, but the distinction also applies to other sensory modalities. Methods for testing claims about sensory versus conventional languages are discussed.



Wasserman, A.I.

Extending State Transition Diagrams for the Specification of Human-Computer Interaction


IEEE Trans. on Softw. Eng., SE-11, 8, 1985, 699-713


User Software Engineering is a methodology for the specification and implementation of interactive information systems. An early step in the methodology is the creation of a formal executable description of the user interaction with the system, based on augmented state transition diagrams. This paper shows the derivation of the USE transition diagrams, based on perceived shortcomings of the "pure" state transition diagram approach. In this way, the features of the USE specification notation are gradually presented and illustrated. The paper shows both the graphical notation and the textual equivalent of the notation, and briefly describes the automated tools that support direct execution of the specification.

This specification is easily encoded in a machine-processable form to create an executable form of the computer-human interaction.
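The idea of a machine-processable dialog specification based on state transition diagrams can be illustrated with a small sketch. This is not the USE notation itself; the states, input tokens, and actions below are invented, and each augmented transition is assumed to pair an input with an action and a next state.

```python
# A dialog specification as an augmented state transition table:
# state -> { input token: (action to perform, next state) }.
spec = {
    "start":    {"login": ("prompt for password", "password")},
    "password": {"ok":    ("show main menu",      "menu"),
                 "bad":   ("report failure",      "start")},
    "menu":     {"quit":  ("say goodbye",         "done")},
}

def run(spec, inputs, state="start"):
    """Execute the specification directly: for each input token, perform the
    transition's action and move to its next state."""
    trace = []
    for token in inputs:
        action, state = spec[state][token]
        trace.append(action)
    return state, trace

final, actions = run(spec, ["login", "bad", "login", "ok", "quit"])
print(final)    # -> done
print(actions)
```

The point of the sketch is the one the paper makes: once the interaction is written down in this form, the specification itself is directly executable, so the dialog can be tried out before the underlying system is built.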