Super-Resolved Surface Reconstruction From Multiple Images. With P. Cheeseman, B. Kanefsky, R. Kraft, J. Stutz, Maximum Entropy and Bayesian Methods, ed. G.R. Heidbreder, 293-308. Kluwer Academic Publishers, 1996. Also appeared as Technical Report FIA-94-12, NASA Ames Research Center, A.I. Branch. (See recent application to Mars Pathfinder images.)
This paper describes a Bayesian method for constructing a super-resolved surface model by combining information from a set of images of the given surface. We develop the theory and algorithms in detail for the 2-D surface reconstruction problem, appropriate for the case where all images are taken from roughly the same direction and under similar lighting conditions. We show the results of this 2-D reconstruction on Viking Martian data. These results show dramatic improvements in both spatial and gray-scale resolution. The Bayesian approach uses a neighbor correlation model as well as pixel data from the image set. Some extensions of this method are discussed, including 3-D surface reconstruction and the resolution of diffraction-blurred images.
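As a rough illustration of the core idea behind multi-image super-resolution (combining sub-pixel-shifted low-resolution samples onto a finer grid), here is a minimal shift-and-add sketch in Python. It is a toy, not the paper's Bayesian method, and the function name and data are invented for the example:

```python
def shift_and_add(images, shifts, factor):
    """Combine low-resolution 1-D images with known sub-pixel shifts
    (in fine-grid units) onto a grid `factor` times finer, averaging
    any samples that land on the same fine-grid cell."""
    n = len(images[0]) * factor
    acc = [0.0] * n   # accumulated sample values per fine-grid cell
    cnt = [0] * n     # number of samples landing in each cell
    for img, shift in zip(images, shifts):
        for i, value in enumerate(img):
            j = i * factor + shift
            if 0 <= j < n:
                acc[j] += value
                cnt[j] += 1
    return [a / c if c else 0.0 for a, c in zip(acc, cnt)]

# Two half-resolution views of the ramp 0..7, offset by one fine-grid
# pixel, recover the full-resolution signal in this noise-free toy case.
recovered = shift_and_add([[0, 2, 4, 6], [1, 3, 5, 7]], [0, 1], 2)
```

A Bayesian treatment like the paper's additionally models sensor blur and noise and imposes a neighbor-correlation prior, rather than simply averaging.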
Reversible Agents: Need Robots Waste Bits to See, Talk, and Achieve? Proceedings of Workshop on Physics and Computation: PhysComp '92, 284-288. IEEE Computer Society Press, 1992.
"Reversible agents" should run indefinitely, observing their world, acting to achieve goals, and talking with other autonomous agents. These requirements introduce extra intrinsic entropy costs, above what a computer requires to reversibly run a program. Goal states can unavoidably have lower entropy than initial states. Sensing incurs costs when features of the world relevant to an agent's analysis change unpredictably before the agent has time to reverse that analysis. On the other hand, talk between nearby agents can be cheap.
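The benchmark against which such entropy costs are measured is Landauer's bound of k_B T ln 2 of dissipated heat per irreversibly erased bit. A minimal sketch of that arithmetic (background physics, not code from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, joules per kelvin

def landauer_cost_joules(bits_erased, temperature_k):
    """Minimum heat that must be dissipated to irreversibly erase
    `bits_erased` bits at temperature `temperature_k`, per Landauer's
    bound of k_B * T * ln(2) joules per bit."""
    return bits_erased * K_B * temperature_k * math.log(2)

# Erasing one bit at room temperature (300 K) costs at least ~2.9e-21 J.
room_temp_cost = landauer_cost_joules(1, 300.0)
```

Reversible computation avoids this cost by never erasing; the abstract's point is that sensing a changing world can force erasures an agent cannot undo.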
Bayesian Classification with Correlation and Inheritance. With J. Stutz, P. Cheeseman, in Proceedings of the 12th International Joint Conference on Artificial Intelligence 2:692-698. Morgan Kaufmann, 1991. An extended version appeared as Bayesian Classification Theory, Technical Report FIA-90-12-7-01, NASA Ames Research Center, A.I. Branch.
The task of inferring the set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and the complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new, independently verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. In this paper we summarize the mathematical foundations of AutoClass.
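AutoClass performs a full Bayesian search over class structures, including the number of classes. As a much simpler illustration of the underlying mixture-model idea only, here is a bare-bones EM fit of a one-dimensional Gaussian mixture with a fixed number of classes; the function name and data are invented for the example:

```python
import math

def em_gaussian_mixture(data, k=2, iters=50):
    """Fit a k-component 1-D Gaussian mixture by EM. Illustrative only:
    AutoClass's Bayesian search over models is far more elaborate."""
    lo, hi = min(data), max(data)
    mu = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]  # spread initial means
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities r[i][j] = P(class j | data point i)
        r = []
        for x in data:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 for j in range(k)]
            s = sum(p)
            r.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and variances
        for j in range(k):
            nj = sum(r[i][j] for i in range(len(data)))
            w[j] = nj / len(data)
            mu[j] = sum(r[i][j] * data[i] for i in range(len(data))) / nj
            var[j] = sum(r[i][j] * (data[i] - mu[j]) ** 2
                         for i in range(len(data))) / nj + 1e-6  # variance floor
    return w, mu, var
```

On data clustered near 0 and near 10, the fitted means converge to roughly those two values. Choosing k itself, handling correlated attributes, and sharing parameters through a class hierarchy are the parts this sketch omits.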
Toward Hypertext Publishing: Issues and Choices in Database Design. ACM SIGIR Forum 22(1), Winter, 1988. (Praised in PACS Review 1(3), 1990.) See also Make Finding Web Criticism Easy. Extropy 16:8, 1995.
Hypertext publishing, the integration of a large body (perhaps billions) of public writings into a unified hypertext environment, will require the simultaneous solution of problems involving very wide database distribution, royalties, freedom of speech, and privacy. This paper describes these problems and presents, for criticism and discussion, an abstract design which seems to solve many of them. This design, called LinkText, is presented both as a specification and as a set of design approaches grouped around various levels of electronic publishing. (My HyperText '87 position paper is a longer summary.)