
Canvas

 

The code for Subcanvas is stored here: http://sourceforge.net/svn/?group_id=233785. The source is namespaced code and requires a SecureSqueak image to load.

 

TODO

  • Add modifiers (CTRL, ALT, etc.) to mouse click and keyboard events.
  • Implement and test canvas movement, raise, lower events.
  • Implement drag and drop.
  • Implement canvas management events (resize, destroy etc)
  • Implement fonts... somehow.
  • Implement colors in the canvas package.
  • Test scrolling something.
  • Make a crude widget set with at least buttons and text.
  • Implement SiteBrowser.
  • Implement hyperlink subcanvases (part of SiteBrowser).
  • Write test: make sure that events are sent to the closest subcanvas to the user, especially if they overlap.

  • Implement basic drawing commands in the API:

    drawRectangleFrom: topLeft to: bottomRight color: color ...... (fill in all details)

  • Implement a >>draw: command:

    Canvas>>draw: aShape
        aShape drawOn: self.

    Rectangle>>drawOn: aCanvas
        aCanvas drawRectangleFrom: topLeft to: bottomRight color: ... etc

     

Introduction

 

The Subcanvas package provides a secure API that allows programmers to draw 2-D graphics on the computer screen and react to mouse and keyboard events from the user.

 

Subcanvas is intended to be an API which could have multiple implementations. The current "reference implementation" uses Squeak's Forms and BitBLT. Other versions might render directly to OpenGL, the X11 protocol, VNC, Postscript and so forth.

 

Subcanvas is the primary graphical and event handling component of SecureSqueak. It is the replacement for Forms/BitBlt, EventSensor and the Morphic Canvas class.

 

Subcanvas is intended to be a secure graphics and event handling API. It is designed such that a particular application can only draw in its own region of the screen (called a "subcanvas"). This prevents attacks such as fake password prompts. Each subcanvas has its own event handling thread, so that if an event handler for that subcanvas fails or stalls, no other subcanvas is affected.

 

Screenshot

It's not pretty, but:

http://securesqueak.blogspot.com/2008/11/subcanvas-first-graphics.html

 

Design

 

Subcanvases 

A canvas is a rectangular area on the user's screen. There is a root canvas which covers the entirety of the background of the Squeak window.

 

Each canvas can have a number of sub-canvases, each being a rectangular area with a position and a size (or extent).  Each sub-canvas's position is maintained by the parent canvas and the child canvases are unaware of their position. Sub-canvases appear "on-top" of their parent canvas and must exist within the rectangle that their parent canvas occupies.

 

Each canvas handles keyboard and mouse events independently. Each canvas is associated with an object which handles events from that canvas. When the user clicks the mouse or types on the keyboard, events are sent to the "current" canvas (the rules for which canvas is "current" are complex), which then forwards them to its event handler. That event handler is typically written by the programmer of a particular application and is responsible for processing all events from that canvas and redrawing it when it needs redrawing.

 

Easels

Your code does not actually hold references to Canvas objects. References to Canvas objects are available only to event handlers while an event is being processed; after processing the event, there is no guarantee that the canvas object is still valid (e.g. when the user closes the window). Instead, your code holds a reference to an Easel object.

 

Every Easel corresponds to one Canvas object and specifies that canvas's position, event handler and subcanvases.

 

An Easel object has an API which allows your code to add, remove, move, resize, raise and lower the subcanvases of a particular canvas. Note that you cannot move or resize the canvas associated with that Easel; only the parent of a canvas can move or resize its children.

 

Easels are created from other Easels. Calling the >>addEaselAt:extent: method will add and return a new Easel object for a subcanvas.

 

An Easel object allows your code to specify the event handler of its associated canvas. You pass the event handler to an easel using Easel>>target:.

 

An Easel object also gives your code a mechanism for causing a redraw event to be sent to the event handler object. Calling Easel>>needsRedraw will schedule a redraw event so that >>handleRedraw: is eventually (or more likely, immediately) invoked on that canvas's event handler.
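
 

A rough usage sketch (addEaselAt:extent:, target: and needsRedraw are the selectors described above; parentEasel and the MyHandler class are placeholders):

    "Create a 20mm x 10mm subcanvas 5mm in from the parent's origin
     (coordinates are micrometers; see the coordinate system notes below)."
    childEasel := parentEasel addEaselAt: 5000@5000 extent: 20000@10000.
    childEasel target: MyHandler new.       "install the event handler"
    childEasel needsRedraw.                 "schedule a >>handleRedraw: event"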

 

Drawing operations 

Drawing operations are performed on a particular canvas in an event handler. If that canvas has a parent canvas, the parent canvas is completely unaffected by the drawing operations. If that canvas has children, those children will not be affected by the drawing operations.

 

All drawing operations on a canvas are clipped to the boundaries of that canvas. Canvases are transparent by default, except for the root canvas. Drawing operations include: rectangles, lines, text, circles, bezier curves and so forth. Colours have four components: red, green, blue and alpha (transparency).

 

Drawing operations can only be performed by event handlers (see below). A reference to a canvas is passed to that event handler. The reference to that canvas cannot be guaranteed to be valid after the event handler returns. If an application needs to redraw a canvas, it can send something a message (somehow...) to mark that canvas as dirty. The canvas would then generate a >>needsRedraw: event with a canvas reference wrapped inside it.

 

Visible output to the screen is not guaranteed until the event handler returns. If, for example, double-buffering is supported, the buffers will be switched after the event handler returns.

 

The coordinate system uses micrometers rather than pixels, and has the origin (0, 0) in the top left corner of each canvas. If the canvas is a child canvas of another canvas, the drawing operations use the origin of the child canvas and are otherwise unaware of the parent canvas. The computer screen should be calibrated to make sure drawing operations are accurate in absolute measurements. The canvas API also provides a pixel pitch, and pixel positions can be derived as multiples of the pixel pitch if the client wants control over individual pixels.
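
 

For example, a sketch using the drawRectangleFrom:to:color: selector proposed in the TODO list above (aCanvas and aColor are placeholders):

    "Draw a 10mm x 10mm square whose top-left corner is 5mm in from the
     canvas origin; 1mm = 1000 micrometers."
    aCanvas drawRectangleFrom: 5000@5000 to: 15000@15000 color: aColor.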

 

A micrometer coordinate system was chosen because it allows GUI developers exact control over how a GUI will appear on the screen, regardless of the pixel pitch of that screen. Some laptop LCD screens have a very high pixel pitch making user interfaces too small to read, and printers have a pixel pitch that is about one sixth that of a computer screen.

 

If 32-bit signed integers are used, the maximum size of a computer screen measured in micrometers is about 2.1 kilometres. With any luck, user interfaces for a single user will never be that big.

 

There is no guarantee that pixels form the basis of the screen drawing operations. The graphics device used might, for example, be a vector graphics device such as a PDF file or OpenGL context. The concept of a pixel might also be confused by an implementation which does sub-pixel rendering on an LCD display.

 

A canvas supports at least the following drawing operations:

  • lines
  • rectangles (axis-aligned with the screen): filled and unfilled.
  • other shapes such as polygons, ellipses, curves (?). These have not been thought about yet. You'd be surprised how much you can do with lines and rectangles.
  • Copy an area from one canvas to another. (XXX requires a static reference to a canvas here? Also, if canvases can be clipped, then why not just add a canvas as a child?)
  • Shading: PDF and XRender support gradients of various sorts.

 

The design has not yet been completed. In particular, I am unsure as to how fonts should ideally be rendered.

 

Currently the canvas has a deprecated-from-birth drawString: method which allows text to be produced for debugging purposes.

 

Moving sub-canvases

A parent canvas has access to the positions of its child canvases. Each position is represented by a CanvasPosition object, and a child canvas can be moved by calling methods on it.

 

The underlying graphics architecture will usually use whatever efficient method it has to move subcanvases. For example, VNC defines a cheap "copy" operation to copy one area of the screen to another.

 

Clipping 

(not yet implemented)

By default, a canvas has an area on the screen which is from its origin to its extent.

 

Sometimes it is useful to have the visible portion of a canvas be only a small part of that canvas. For example, a web page is often taller than the window containing it so that a scroll bar is used to scroll up and down the web page. In this case, the canvas would be the entire web page, but only the part defined by the position of the scroll bar would be visible on screen.

 

To implement this, a special type of canvas is used which defines the visible bounds. Drawing operations are clipped to the visible area of the screen, and the dimensions of the visible area are available to the drawing routines so that they can avoid unnecessary drawing operations.

 

Event Handling 

Canvases handle events from the keyboard and the mouse. Events are delivered to a programmer-defined event handler. Every canvas must have an event handler.

 

Your code defines and creates an event handler object, and passes it to Easel>>target:. Once assigned to a canvas, that event handler will start receiving events. An event handler must define a number of methods to react to events; these are all described in the Canvas.CanvasHandler class. This class is provided for convenience; you may create a subclass of it and only override the event handler methods you require.

 

Event handler methods are the only place your application has access to a canvas. It is expected that these event handler methods will update that canvas with drawing operations to provide quick feedback to the user. One event of particular importance is the "redraw" event; the >>handleRedraw: event handler should redraw the given area of the canvas. Typically, redraw events occur when the cached visual state of the canvas has been lost, such as when a canvas has been exposed on the screen.
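
 

A minimal handler sketch. MyButtonHandler is a hypothetical subclass of Canvas.CanvasHandler; the event accessor, the backgroundColor accessor and the drawing selectors follow the descriptions on this page but are assumptions, not the final API:

    MyButtonHandler>>handleRedraw: aRedrawEvent
        "Redraw the whole canvas, ignoring the damaged area for simplicity."
        | canvas |
        canvas := aRedrawEvent canvas.
        canvas drawRectangleFrom: 0@0 to: canvas extent color: self backgroundColor.
        canvas drawString: 'OK'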

 

Mouse events include mouse move events, left/middle/right button presses, scroll buttons up/down, mouse dragging events and canvas enter/exit events. If no mouse button is pressed, then the event is delivered to the canvas underneath the mouse pointer. If a mouse button is being held when the event occurs (for example a mouse up event or a mouse drag event), then the canvas which received the mouse down event will receive this event. For example, if the user presses the mouse button, drags the mouse off the canvas and releases the button, then the canvas will receive a mouse down event, followed by mouse drag events (some inside, some outside the canvas bounds), followed by a mouse up event outside the bounds of that canvas.

 

When the mouse moves out of the bounds of a canvas, that canvas will receive a "mouse leave" event. When the mouse moves into the bounds of a canvas from outside, that canvas will receive a "mouse enter" event.

 

Events from the keyboard are passed to a canvas which has the current "keyboard focus". The canvas handler can request the keyboard focus from any mouse event by invoking a method on that mouse event object.

 

Keyboard events are: key press, key character and key release. Key press and key release will contain the USB scan code of the hardware key involved; this scan code is common across all widely used keyboards in the world and is independent of the current keyboard mapping. The key character event contains a Unicode character that the key in question (or sequence of keys) would generate. Modifier keys such as SHIFT, CTRL, ALT and the Macintosh key will generate key press and key release events.
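
 

A hypothetical fragment of keyboard handling; the selector and the event accessors are assumptions, not the final Subcanvas API:

    MyHandler>>handleKeyCharacter: aKeyboardEvent
        "Append characters to a text buffer; accept the input on Return."
        aKeyboardEvent character = Character cr
            ifTrue: [self accept]
            ifFalse: [buffer add: aKeyboardEvent character]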

 

References to a canvas object are only considered valid when used within an event handler. References to Canvas objects should not be stored by an application or the event handler. The reason that canvas references are only considered valid in event handlers is to allow Subcanvas implementations to be guaranteed that no unexpected drawing operations will occur on a shared bitmap or form, which allows for optimisations. It also allows for asking the handler of a canvas to draw itself on multiple different canvases, such as when a high-quality vector based screenshot is wanted.

 

Each canvas has its own event queue and event handling thread. Events are passed to that canvas by whatever means (this is platform specific) and each canvas holds a queue of events that need to be processed by an event loop running on that canvas. This ensures that canvases are temporally (time-wise) independent of each other and that a faulty application will have no effect on other subcanvases. If an event handler method takes a long time to complete, the canvas will be rendered somehow as "unresponsive" on the screen.

 

XXX A "stepping" timer will be provided which sends events at set periods. This will allow animations to occur. If the user can request redraws, then the user can implement this themselves.

 

TODO: also include events for the user logging in, logging out (the user might be logged in several times perhaps?), when the user goes idle (e.g. so that a site can run CPU-intensive stuff or to advertise the user as "away"), or when the user is in "do not disturb" mode, or "silent mode" (e.g. on a mobile).

 

Later expansion 

Because Smalltalk is a dynamically typed language, it is possible that a Subcanvas implementation can add extra methods to canvas and event implementations to provide extra functionality.

 

You could pass classes as parameters:

 

    (canvas supports: OpenGLCanvas) ifTrue: [
        glCanvas := canvas create: OpenGLCanvas.
        "... call OpenGL-specific methods on glCanvas ..."
        canvas add: glCanvas from: x@y extent: x2@y2].

 

Some method should be provided to query a canvas as to whether it supports functionality not described here.

 

Extra functionality might include:

  • 3-D drawing operations.
  • Use of plugins, such as a movie canvas, JPEG image loaders, and so forth.
  • Performing rotation, scaling or shearing operations on canvases.

 

Printing

Output to a printer is probably best handled using what GTK+ and Cairo provide. That is the only cross-platform API I can find.

 

Random thoughts (from here down)

 

Every Canvas has a model, which is asked to draw itself on a canvas, and react to events.

 

The Canvas API needs enhancing to support:

  • Sub-canvases.
  • Canvas "needsRedraw" status.
  • Event handling.
  • Double-buffering.
  • Canvases presenting transformations (clipping, rotation, zooming, warping) of other Canvases (maybe...?).

 

I would change Canvas by:

 

  • Allowing a canvas to have movable sub-canvases. These would map 1:1 to windows in the X window system, or cached bitmaps in VNC, or display lists / textures in OpenGL. These could be moved around the screen.
  • Canvases could be implemented as bitmaps or vectored graphics; the application doesn't need to know what implementation is actually used.
  • Introduce a "needsRedraw" system of some sort. A Canvas implementation may or may not cache its contents (as a bitmap or vectored graphics/display list). Various implementations may discard the cached contents at times, or perhaps not even cache content.
  • Use micrometers rather than pixels as the unit of measurement and provide a "pixelPitch" method to return the size of a pixel. For example, my screen has a pixel pitch of 282 micrometers. A 600dpi printer would have a pixel pitch of around 42 micrometers. You could use a SmallInteger to store micrometer values.
  • Introduce, somehow, an event system closely coupled to a Canvas (because some events have coordinates relative to a canvas).
  • Allow canvases to use other canvases for: double-buffering, clipping (i.e. show part of another canvas in a bounding box on this canvas) and scrolling, transformations such as zooming and rotating (?), showing images (i.e. have a primitive ImageCanvas which loads from bitmapped data), one-off plotting (have a bunch of flyweight canvases to render fonts). The Canvas could replace the Form.
  • Canvases are transparent by default.

 

Here, the "drawer" is the object which draws (as opposed to the furniture item).

 

Child canvases

 

  • Thought: Child canvases might actually need "Views" as glue. ParentCanvas->View->ChildCanvas. Views could then decide what of the following child behaviours are used...
  • Another thought: Views might be Canvases.

 

One of the enhancements would be to add child canvases. Child canvases could be used in many ways, for example:

 

  • They could be "windows" (frameless) that would appear "on top of" a parent canvas but be clipped by the parent's extent. The contents of the canvas could be cached by the implementation so that the child can be moved across the screen without needing to redraw itself.
  • They could be used for scrolling inside a parent that manages some scroll bars. The child canvas would be clipped and only have a portion of itself visible inside the clipped extent on the parent. By changing the part of the child canvas that is shown and having it re-drawn, the child canvas would appear to scroll inside the clipped boundary the parent provides.

 

Sub-canvases would be useful for caching bitmaps and moving them across the screen smoothly. For example, a 2-D scrolling game could be made using sub-canvases to draw the actors and layers of the background.

Sub-canvases have a z-index.

 

 

Sub-canvases could support transparency to be very cool.

 

A sub-canvas is clipped to the bounds of its parent canvas, for security. This prevents a canvas drawing over controls that do not belong to it. A sub-canvas acting as a window would have a position relative to its parent which it cannot change (but the parent can).

 

Sub-canvases are hidable or could be rendered in multiple places (e.g. for flyweights).

 

One sub-canvas would have the keyboard focus and would receive keyboard events.

 

A canvas might be special, e.g. an OpenGL canvas for 3-D graphics, or an MPEG canvas for hardware accelerated movies, or perhaps an image canvas that only takes a bitmap and can be used as a child canvas.

 

Every sub-canvas has a small mouse-pointer canvas which is what is used to draw the mouse pointer when it is hovering over that canvas.

 

A FormCanvas has other FormCanvases as sub-canvases.

 

Every canvas can have a location attached to it that will be visited if the user clicks that canvas. A clickable link would then be a transparent sub-canvas with a location attached to it.

 

Types of Sub-canvases:

  • The usual, drawable with a position. These would need to be of the same type as the parent. A FormCanvas has child FormCanvases. A PostscriptCanvas has child PostscriptCanvases.
  • FormCanvases could be rendered either as a stored bitmap, a shared parent bitmap with a set of occluded rectangles or a display list.
  • Special canvases, such as GLCanvases, may not be supported on all canvases (even though ideally they would be. OpenGL on Postscript anybody?).

 

How would you create a GLCanvas on a FormCanvas?

 

FormCanvas addChildCanvas: (GLCanvas new: 10000@10000) at: 10000@10000??

or

newCanvas := FormCanvas addGLCanvasAt: 10000@10000 size: 20000@20000.

 

(units could be made simpler: 10mm@10mm)

 

If this particular type of sub-canvas is not supported by the parent canvas, then it must raise an exception of some sort.

 

Perhaps a Canvas can have a fall-back rendering mechanism? If a particular parent canvas does not support the child canvas's rendering methods, it can fall back on either raster or bitmapped graphics mapped to the parent?

 

Perhaps use a factory somehow - make the parent canvas be a factory, or something?

 

 

Drawing

 

Possible types of Canvas include:

  • XCanvas rendering to an X Window System server (at last!).
  • Win32 canvas rendering using whatever there is on Windows. Ditto for Mac.
  • FormCanvas, of course.
  • Rome, using the Cairo library.
  • PostscriptCanvas, rendering to a .ps file. Ditto for printing APIs?
  • SVG canvas (cf: PostscriptCanvas).
  • VNCCanvas, rendering to a remote VNC client.
  • GLCanvas, using OpenGL.

 

Several of these canvases support native sub-canvases (XCanvas with X windows, GLCanvas with display lists, VNCCanvas, ...). Those that don't can emulate this.

 

Several support antialiasing (Rome, GLCanvas, ...?).

 

Some support transparency (GLCanvas). XCanvas notably doesn't, although there may be modern extensions that allow this, or the library could hack around this.

 

Some have native font support (XCanvas, Win32, Rome).

 

GLCanvas can also be 3-D, meaning that a sub-canvas stack can really be in 3-D. This API is limited to 2-D except for the stacking nature of canvases; a 3-D API would be more general (and real 3-D buttons that stick out would be very cool).

 

A Canvas could also do fancy effects: sub-canvases may drop shadows on the canvases below them, or cast a subtle light. It would be very cool if the shadow respected the transparent parts of a canvas. The canvas with active keyboard focus may be more illuminated than others. Canvases might be slightly reflective or have reflective parts, making them reflect the canvases above them. Fancy effects like this are much more suited to a proper 3-D environment than a 2-D drawing API.

 

For now: the Canvas implementation does its best to render commands given to it. If it does not support anti-aliasing, then anti-aliasing is simply not done. If it does not support transparency, that is not done either. Programmers using canvases just need to be aware of this and not rely on transparency or antialiasing.

 

Because various renderable objects have a lot of properties, it might be better to render a particular object on a canvas rather than add special methods such as #lineFrom:to:colour:. These objects can be made immutable so that the canvas can cache them (if the canvas is of the caching sort).
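
 

A sketch of that style, reusing the double-dispatch pattern from the TODO list; the Line class, its accessors and the drawLineFrom:to:width:color: selector are illustrative only:

    Canvas>>draw: aShape
        aShape drawOn: self

    Line>>drawOn: aCanvas
        "A Line is immutable, so a caching canvas may safely keep a reference to it."
        aCanvas drawLineFrom: start to: end width: strokeWidth color: color

    "Usage: canvas draw: (Line from: 0@0 to: 20000@0 width: 500 color: aColor)"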

 

Lines have:

  • Start, end points, or for a path, a series of points.
  • A stroke width.
  • A colour (including alpha).
  • Edges either being square or rounded. If a line is part of a path, then the corners may be rounded or flat.
  • Lines could be part of a longer path. The path might be enclosed or not.
  • dots / dashes?

 

Maybe lines are really rectangles - they have a width, a length and a fill colour!

 

Dots must have a circumference. They are really a circle.

 

Bezier or other curves?

Circles, ellipses, arcs?

 

Paths (rectangles, polygons) have:

  • A series of points; between these points are lines with the properties above.
  • Enclosed or not.
  • A means of rounding the corners (round / flat / none).
  • A means of rounding line ends (flat / round).
  • If enclosed, a fill colour or gradient.
    • Is a gradient a special type of colour? A gradient would need coordinates, so it would be specific to a canvas.
    • Is a colour specific to a canvas (e.g. colour mapping, colour correction, number of colours).
  • Perhaps the fill could be another canvas? Perhaps we could have a gradient canvas and colour canvases?

 

If fills and solid colours are actually special canvases, then perhaps they could be replaced with another canvas, such as an image?

 

A lazily drawn Canvas could then be used for any fill. This would also be useful for viewports. A canvas can be lazily drawn using a dirty-marking mechanism that asks the renderer of the canvas to redraw it.

 

 

Coordinate system

 

What coordinate system should a canvas use? Currently they use pixel-based coordinate systems, with the top-left corner being 0@0 (or 1@1?).

 

The options are:

 

  • Pixel-based. This gives code complete control over how pixels are placed at the expense of resolution independence - e.g. this has no meaning in a plotted environment and isn't convenient on a 600dpi printer canvas. The common problem here is when users upgrade from their 640x480 VGA screens to high-res 1920x1280 17" LCDs - they can no longer read their text because the pixels are so small.
  • Point-based, using "pt", Postscript or typography points. These are about 0.3mm, which is a silly unit and just a bit bigger than a pixel.
  • Metric units with floating-point coordinates, e.g. using millimeters.
  • Metric units with Integers, e.g. using micrometers. This has the advantage of using plain Integers for operations but still with high fidelity.

 

If a pixel-based system is used, then coordinates could represent either the centre of each pixel, or the edges between pixels (integral coordinates).

 

My preference is either of the metric units. These are device and resolution independent. Operations on coordinates are usually basic - adding, dividing, comparing - so the same code would work with either floating point or integer operations, except for equality comparisons.

 

An integer-based system could have a method that returns the size of a pixel (e.g. 257 micrometers), so pixel-by-pixel control can still be achieved with multiplication. For example, on my current monitor, each pixel is 282 micrometers (exactly!). If the GUI API provided a method for returning the pixel size (both horizontally and vertically) then an application can choose to align graphics with pixels, or alternatively just ignore them and use absolute coordinates instead (e.g. 100mm x 200mm).
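
 

A sketch of that alignment, assuming a pixelPitch method that answers the horizontal and vertical pitch as a Point of micrometers:

    pitch := canvas pixelPitch.          "e.g. 282@282 micrometers"
    alignedX := x roundTo: pitch x.      "snap x to the nearest pixel column"
    alignedY := y roundTo: pitch y.      "snap y to the nearest pixel row"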

 

Juan Vuletich's Morphic 3 has a very flexible coordinate system.

 

--> Leave this for later - for now, just use whatever Canvas already provides. Implement an improved UI device in version 2.0.

--> Or maybe this should go in; version 1.0 should provide a uniform graphics API.

 

Coordinate origin location

 

Should 0@0 be at the top left or bottom left?

Arguing for bottom-left:

  • Cartesian coordinate systems use the bottom left.
  • Measuring an object sitting on the "ground" (e.g. a font glyph) means measuring it from the bottom up. In meatspace, gravity is a downwards force.

 

Arguing for top-left:

  • User interfaces and documents flow from top to bottom, so the top-left is the starting point. However, this only applies to European scripts. Other languages have right-to-left text.
  • The "gravity" of a page is an upwards force; objects will tend to congregate at the top of the page with empty space at the bottom of a page.
  • Resizing a window usually means that the Canvas in that window would be clipped from the bottom and the right sides, but the canvas would be moved if the top or left sides were moved. This puts the origin in the top left.
  • Java 2D, Cairo and the existing Squeak Canvas use the top-left corner as the origin.

 

This is really an arbitrary decision. Whichever approach is taken will have only a minor effect on code.

 

Models

 

Canvases typically have a model. The model is the target for events, and is asked to redraw the canvas.

 

Subcanvases each have their own model. The redrawing of a sub-canvas does not influence the redrawing of its parents, and the drawing of a parent canvas does not influence the drawing of child canvases. Each is updated separately, although perhaps by the same thread for efficiency.

 

When an application starts up, the main canvas is passed to the model, which then adds sub-canvases to that canvas. For example:

 

SiteBrowser>>navigateTo: aSite
    currentCanvas reset.        "remove all sub-canvases, remove all steppers, clear contents"
    aSite setUpCanvas: currentCanvas.
    currentCanvas markDirty: currentCanvas bounds.

MySite>>setUpCanvas: canvas
    canvas setModel: self.      "Start receiving events."
    canvas addSubCanvas: (set up a sub-canvas here).

MySite>>drawOn: aCanvas bounds: bounds
    "The bounds is the area that needs redrawing; I can choose to redraw just that, or everything."
    ... canvas drawing commands ...

 

 

Redrawing the Canvas

 

Canvases can only be drawn on by event handlers. The event handler method on the canvas's target/model accepts an event, which contains amongst other things a reference to a Canvas. That canvas is guaranteed to be valid until the event handler exits. When the event handler exits, the canvas reference may or may not remain valid depending on the implementation of that canvas; generic code should not rely on the reference being valid.

 

A periodic redraw mechanism should be provided - either include some stepping/timer mechanism to ensure the canvas is redrawn at periodic intervals, or a screen refresh mechanism which is synchronised (somehow) with the screen refresh rate.

 

Some ways of caching canvases would be:

  • For BitBlt on the display, each FormCanvas already contains the bits it consists of.
  • For OpenGL, a display list or a texture on a rectangle would be redrawn by the graphics card and wouldn't need callbacks to the drawer.
  • For VNC, the client (on the user's side) could cache bitmaps.
  • For the X Window System, I believe it's possible to store drawing info on the X terminal?

 

If the Canvas is cached in a display list or bitmap, then redrawing is not necessary for most move/expose operations. If the Canvas does not store state, then a redraw may occur whenever the canvas is exposed on the screen (e.g. brought to the front) or moved.

 

The Canvas needs to know when drawing is complete. One way is to mark the area on the canvas as "clean". Another is to return successfully from the drawOn:bounds: method.

 

Event handling

 

 

Mouse events have a location, which is the point on the canvas where they occurred.

Redrawing a sub-canvas could occur as an event. Redrawing would only occur when a canvas is damaged by some other canvas moving over it.

Keyboard events don't have a particular position. Keyboard events would be sent to the canvas which has "keyboard focus".

 

Events such as the morphic "stepping" architecture could be done externally to this.

 

Types of events:

  • onKeyPress (key, shift/ctrl/alt/modifier status)
  • onMouseMove (mouse buttons up/down?)
  • onMousePress (split up into several methods for each button? The application can do that.)
  • onDoubleClick? I don't really like double-clicks.
  • onNeedsRedraw (list of damaged areas)
  • onMouseEnter, onMouseLeave (mouse button status)
  • onKeyboardFocusEnter, onKeyboardFocusLeave.
  • onDrag, onDrop ? (pass something in which a small dragging canvas can be attached to which sticks to the mouse pointer?)
  • onCommand (cut/copy/paste/print/poweroff/pause/play etc - keyboard commands defined by the UI).
  • onCanvasMoved (?), onResized.
  • onCanvasDestroyed, onCanvasCreate ??
  • onShow, onHide ?

 

Events regarding redrawing or resizing need to be separate. Resizing events will cause the GUI framework to resize each widget.

 

Event hierarchy:

* Event

  * CanvasEvent (canvas, CTRL, ALT, SHIFT status)

    * MouseEvent (position, mouse button status)

    * KeyboardEvent (key, repeated(bool))

    * CanvasResizeEvent (oldSize, newSize), also for onShow, onHide

    * CanvasRedrawEvent (damage areas)

    * KeyboardFocusEvent (?)

    * TimerEvent (?)

    * BeginDragEvent (position, add cargo)

    * DropEvent (position, cargo)

    * LifecycleEvent - create, destroy canvases (?)
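
 

As ordinary (non-namespaced) Squeak class definitions, the top of that hierarchy might start out like the sketch below; the instance variable names are illustrative, and names such as MouseEvent clash with Morphic classes in a stock image, which is one more reason the real package would be namespaced:

    Object subclass: #Event
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Canvas-Events'.

    Event subclass: #CanvasEvent
        instanceVariableNames: 'canvas modifiers'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Canvas-Events'.

    CanvasEvent subclass: #MouseEvent
        instanceVariableNames: 'position buttons'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Canvas-Events'.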

 

Commands

 

Some modern keyboards have special keys for many functions, e.g. "zoom in/out", "print", "back" etc. These could be mapped to "commands" which are sent to the canvas's model, as an event, to be interpreted as that particular command.

 

Language, user settings

(possibly a bad idea)

Perhaps some of the user's settings could be expressed through the canvas:

  • Current locale, language.
  • Current physical location (to the nearest city...?)
  • Access to attached devices: printers, scanners, cameras, storage, screen configuration, sound?

 

Then again these should all be handled by a higher level somewhere, such as the user's current "session".

 

Canvas classes

 

The root class of the Canvas hierarchy has:

  • A size (in micrometers)
  • A pixel-pitch (in micrometers).
  • A model, for reacting to events and redrawing, or nil if events should be ignored.
  • keyboard focus (possibly managed outside the canvas).

 

Some Canvases have:

  • Cached display state - a bitmap, display list etc, which could be local or remote.
  • Child canvases, each with a position (rectangle, z-index) relative to this parent.
  • A RGB order for LCD screens.
  • Antialiasing info.
  • Font management of some sort???

 

Some rough ideas:

 

  • BasicCanvas - does not support child canvases.
  • ParentCanvas - supports child canvases... or views...?
  • ChildCanvas - is a child canvas. Maybe not needed if Views are used?
  • ClippedView - a view of a canvas which can be placed on a parent canvas. This shows a portion of another canvas.
  • WindowView - a view of a canvas which can be moved around like a window. Has a z-index. Could be implemented using ClippedView (superclass maybe?).
  • StaticView - a view which gives a snapshot of another canvas.
  • UpdatingView - a view which constantly updates? This would use the event system.
  • BitmapCanvas - a canvas that takes a given bitmap in a constructor like FormCanvas. Doesn't necessarily have vectored commands.
  • VectorCanvas - a canvas that can be drawn on using vectored graphics. Bitmaps can be added using Views (maybe??).
  • InteractiveCanvas - does events...?
  • TransformingCanvas - warps the drawing commands of the child canvas, e.g. FishEyeCanvas, RotatingCanvas etc, ala Morphic3.

 

Perhaps:

 

Canvas - bounds, position

  • subclass ParentCanvas. This would contain child canvases.
    • subclass Canvas (FormCanvas or RasterCanvas). This can load images too.
    • subclass VectorCanvas / GLCanvas. This would add images by converting FormCanvases to textures?
    • ...more canvas implementations
  • subclass ClippingCanvas
  • subclass SubCanvas ?? BitmapCanvas ??

 

 

e.g. the ClippedView:

 

ParentCanvas -> ClippedCanvas -> ImageCanvas.

 

The ClippedViewCanvas has coordinates for rendering on the parent, and coordinates for which part of its child canvas is rendered. An application would draw on the ImageCanvas, and the results would be visible on the ParentCanvas. How this is implemented depends on the platform. E.g. for a BitBlt implementation, maybe there are renderOn: methods which are called from the parent canvas down to redraw the UI?

 

The user of the Canvas does not need to worry about the actual rendering to the screen. The Canvas architecture does that - the user of the canvas is concerned with drawing on the canvas when it receives a "please redraw on me" event.

 

If you have a TransformingCanvas that sees through to a canvas with children, those children should also be transformed.

 

 

Speciality Canvases

 

  • 3-D Canvases, such as OpenGL.
  • Canvases on 3-D elements inside the 3-D canvas.
  • MPEG Canvas for playing movies.
  • High-speed image manipulation canvas: rotating, zooming, etc.
  • Low-speed image manipulation canvas aka Gimp canvas supporting e.g. gaussian blurs, high-quality zooming, other techy image manipulation.
  • Hardware device-based canvas, e.g. showing a TV tuner.
  • External canvas; i.e. another OS window.
  • Externally implemented Canvas, e.g. FlashCanvas, HTMLCanvas, GTKWindowCanvas which can have GTK widgets added to it (although it wouldn't be drawn on?).

 

Some of these won't accept drawing commands - does that mean they aren't Canvases? Perhaps Canvas needs a superclass called maybe Window which has sub-windows (and Canvas does not), or perhaps Window is a separate class of which the above are subclasses?

 

Double-Buffering

 

In older hardware, double-buffering is done by having two Forms of which one is visible on screen, and they can be selected between with the twiddle of a pointer, preferably between screen refreshes. I'm not sure if BitBltting between forms is fast enough to have the same effect?

 

The API would be similar to a transactional API. The canvas is requested, drawn on, and changes are committed. On commit, the changes are presented on screen. This could be implemented with a simple >>commit method.

 

Every sub-canvas would have its own >>commit method. There might be ways of drawing sub-canvases without needing to know the status of its parent canvas.
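
 

A sketch of the transactional sequence, using drawRectangleFrom:to:color: from the TODO list and the >>commit method suggested above (backgroundColor and frameNumber are placeholders):

    "Draw a frame into the back buffer, then present it in one step."
    canvas drawRectangleFrom: 0@0 to: canvas extent color: backgroundColor.
    canvas drawString: 'frame ', frameNumber printString.
    canvas commit.       "swap buffers; the changes become visible on screen"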

 

Text

The Subcanvas API will probably also require a text API, because text can be handled differently by different APIs.

 

Alternatively, I see that TTF fonts appear to be rendered from within Squeak.

 

See Pango tutorial: http://www.ibm.com/developerworks/library/l-u-pango1/ for some text rendering concerns.

 

Text is rendered using:

  • Squeak's own text rendering using FormCanvas.
  • Pango, Cairo on a Cairo-based canvas.
  • Vectors in OpenGL.
  • X fonts in X.
  • Postscript fonts (Adobe Type 2? OpenType?) on Postscript, or maybe convert them to vector graphics?

 

OpenType seems to be a well understood font format.

 

Kerning is best done on a per-paragraph basis (see TeX), so it might be necessary to have a rectangle/paragraph of text. The width would be set; the height would depend on the text. Kerning would be done by the canvas rather than by the application. Paragraphs need to support centering, full-alignment, right/left alignment, text direction etc. Paragraphs would also need to support mark-up: colours, bold/italic/small-caps/underlining, and so on.

 

The application should also be able to specify non-wrapped text for a single line; perhaps by setting the width to zero?
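
 

One possible shape for such a paragraph call, entirely hypothetical (none of these selectors exist yet):

    "Lay out and draw a wrapped, justified paragraph 80mm wide; the canvas
     answers the height the text occupied, in micrometers."
    usedHeight := aCanvas
        drawParagraph: 'The quick brown fox jumps over the lazy dog.'
        at: 10000@10000
        width: 80000
        alignment: #justified.

    "A width of zero could mean: do not wrap, draw the text as a single line."
    aCanvas drawParagraph: 'Title' at: 10000@5000 width: 0 alignment: #leftFlush.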

 

Font metrics need to be exposed to the client for special effects such as highlighting up to a particular character, or for putting fancy underlining on characters.

 

All sizes should be converted to micrometers.

 

Text could be either rendered using pre-built featherweight glyphs, or by rendering each character individually to the subpixels.

 

However, on some platforms with excessively large amounts of CPU and fancy graphics such as OpenGL, text drawing might be done using 2-D or 3-D primitives to make it really pretty and smoothly zoomable. So perhaps the actual rendering of text should be abstracted?

 

The API needs to have enough primitives to allow the advanced user to implement his own rendering. This could be as simple as rendering individual glyphs. One possibility is to make text layout a custom package.

 

FreeType only renders individual glyphs and provides metrics about them. FreeType generates bitmaps, so the Canvas only really needs to support some BitBlting mechanism with font rendering and text layout being done externally.

 

...perhaps I could only include support for rendering a particular glyph at a time? But then ligatures would not be supported...?

 

If Postscript stores text as a string and does its own kerning, then it is more efficient to make an API which can handle whole paragraphs.

 

http://keithp.com/~keithp/talks/usenix2001/xrender/: The X Render extension is used for antialiased fonts, and does not do font formatting itself but rather supports pre-rendered glyphs sent from the client to the X server.

 

PDF reference; the "included" fonts that are assumed to exist on each platform have been deprecated.

http://www.adobe.com/devnet/acrobat/pdfs/pdf_reference_1-7.pdf

 

Animation

 

Modern UIs seem to demand a level of animation: windows that fade out, move, wobble, and so forth. While I think this is lame, can this be built into Subcanvas?

 

Not all canvases would support animations; on slow platforms, they would be disabled and the canvas would transition to the end state immediately.

 

Some animation effects could include:

  • Fade in / fade out or transitioning one or multiple colours.
  • Moving a canvas across the screen.
  • Move a canvas along the Z-index, perhaps with shading as the canvas moves back. This might also move the canvas in 3-D somehow.
  • Transforming a canvas: shrinking or expanding, shearing.
  • non-linear effects, such as wobbly canvases or other stupid effects that modern OSes enforce on you.

 

Manipulating an entire canvas's colour (i.e. fading in/out with the alpha channel) would mean that canvas has some sort of "mask" on it. As of yet, this hasn't been thought through or implemented.

 

Most transformations could be done by an external library that manages the changes of various attributes over time.

 

Each Canvas implementation should be able to handle constantly changing attributes. If the z-index, canvas positions, size or canvas masks (if this ever gets implemented) constantly change, the implementation should try to present what it can on the screen in a timely manner. The frame rate should be maintained at an absolute minimal level, say, 2 frames per second. If this can't be maintained, then the implementation should drop details (such as excessive subcanvases, fancy effects) from the screen. Ideally, the frame rate would match the monitor refresh rate.

 

Stopping Morphic

 

If you want to do this full-screen, use Display and Sensor. Loop on Sensor like HandMorph>>processEvents does. See HandMorph>>generateMouseEvent: and HandMorph>>generateKeyboardEvent: for examples of how to handle the raw events.

 

To stop Morphic, do "Project uiProcess terminate". Mwa ha ha. Then Display is all yours!

To restart, do "Project spawnNewProcess".
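
 

A rough, untested sketch of taking over the screen this way (Display, Sensor, Color and Delay are standard Squeak globals and classes; the drawing is only a placeholder):

    [Project uiProcess terminate.      "stop Morphic; the Display is now ours"
     Display fill: Display boundingBox fillColor: Color black.
     [Sensor anyButtonPressed] whileFalse: [
         "Paint a small square under the mouse pointer."
         Display fill: (Sensor cursorPoint extent: 10@10) fillColor: Color white.
         (Delay forMilliseconds: 20) wait].
     Project spawnNewProcess] fork.    "fork so that terminating the UI process does not kill this code"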

 

Testing

 

It would be nice to have a test screen of some sort, like the Philips TV test pattern: http://en.wikipedia.org/wiki/Test_card

 

This could be shown when the system is busy booting or loading up.

 

This needs testing:

  • Edges of monitor (or paper) correctly aligned (think badly configured CRTs). Also, that multi-monitor arrangements have the monitors in the right places.
  • That an LCD monitor is running at native resolution.
  • Fonts are sub-pixel antialiased properly. Note that LCDs can have different sub-pixel arrangements, and could be rotated 90 degrees for portrait mode.
  • Pixel pitch correctly configured (show a ruler showing centimeters).
  • Pixel pitch correct in both dimensions (show a series of circles of various sizes. Think badly configured widescreen TV). 
  • Lines (vertical / horizontal) are straight (think CRTs with a fishbowl effect going on).
  • Show the current time, animated.
  • Show sub-canvases of various sorts, animated. Or perhaps this is unnecessary - this should be a separate test.
    • Show a 3-D rotating image. If the monitor supports a true 3-D mode, show it off.
    • Show a movie 
  • Show colour capabilities - show the full palette if possible, e.g. if the display only has 256 colours.
  • Show actual pixel sizes along screen so it can be seen how big the screen actually is.
  • Perhaps show off screen performance? Or maybe we don't want to consume too much CPU.
  • Show that colours are accurate: black, white, primaries (RGB, CMYK).
  • Show monitor sensitivity: show very very light gray lines getting darker, ditto for black lines. Maybe combine with a colour palette?
  • Show various other capabilities of Canvas?
  • Show an actual picture (think: demos of printers in shops).
    • Show a series of pictures which have a lot of the primary colours. E.g. red flower, green grass, blue sky, etc.
    • Show a single picture under the rainbow bar which matches the same colours, e.g. a red flower, next to green grass, next to blue sea, etc. This would be very cool.
    • It would be good if the picture had absolute black and absolute white somehow in it. 
  • Perhaps show system information: version of VM, hostname, memory, etc?
    • Perhaps show network traffic, CPU load, memory usage? 
  • If used for booting, show a progress bar / log?
  • Show events - mouse movement, mouse clicks, modifier keys, keypresses, (commands?).
  • Perhaps it could include / be replaced by a console?
  • Show a logo?
  • If good unicode support is supported in the release, show that off?
  • Show motion blur / response time.
  • Look on the Internet for good Monitor calibration images. Perhaps there is a standard colour chart we could accommodate?
  • Don't include tests that might induce an epileptic fit.
  • Include multiple tests for different monitor sizes, from 160x160 4-color screen to 29275x21025 600dpi A0 paper. When screen size is reduced, reduce the number of tests included.
  • Test for flicker when double-buffering is not happening (e.g. alternating lines of pixels, somehow?).
  • Test sound:
    • Test all frequencies, such as the tests at the start of old cassette tapes. Show the sounds on the screen as a rotating bar pointing to frequencies maybe?
    • Test all channels - 2.1 / 5.1 / 7.1
    • Test all sinks: iterate through all devices. Perhaps it is just better to test the primary device.
    • Test all sources: show a waveform on the screen of the primary microphone input. 
    • Perhaps sound tests should be separate. 
  • Consider a printer test page version as well.
  • Put a logo in the middle 
  • Perhaps have a magnifying glass showing what is under the mouse pointer? 
  • Show the FPS if relevant.
  • Show standard screen resolution sizes as boxes: 640x480, 800x600, etc.
    • Animate and bounce them around the screen in the background. Make them transparent.
    • Stick rulers on them.
  • Tests for LCD monitor stats:
    • Response time - a series of moving lines, where the ones that move too fast to be seen show the response time? Take EInk displays and slow LCDs into account.
    • Refresh rate. Can this be shown? 
    • Input device lag. Move blocks on the screen in response to the mouse and keyboard.
    • Brightness?
    • Colour depth?
    • Pixel pitch (fanning out lines)
    • Viewing angle (hidden images?)
    • Contrast ratio?
    • Resolution (display on screen) 
  • Occasionally move things to prevent screen burn-in? But then some items need to remain in position, such as edge guides.
  • Include optical illusions for fun and games. 
  • Check for stuck pixels (whole screen one colour. Pixels might be stuck on or off, and might be R, G or B). 
  • Whether motion blur is supported.

 

http://www.lagom.nl/lcd-test/

http://www.youtube.com/watch?v=P6O7U6H0H38

See monitor calibration

 

Colors

Should be defined in an external package.

 

See http://people.csail.mit.edu/jaffer/Color/Dictionaries - NBS-ISCC Color Centroids seem elegant.

 

Links

 

Lessphic: http://piumarta.com/software/cola/canvas.pdf

http://en.wikipedia.org/wiki/Windows_Presentation_Foundation

See the Java APIs for Swing and Java2D for inspiration.

http://www.antigrain.com/research/font_rasterization/

http://cairographics.org/

Font rendering and text handling: http://www.pango.org/

http://www.w3.org/TR/css3-webfonts/

 

Events in Squeak:  http://isqueak.org/ioGetNextEvent

 

Input methods: http://www.microsoft.com/globaldev/handson/user/IME_Paper.mspx

Keyboard layouts: http://msdn.microsoft.com/en-gb/goglobal/bb964651.aspx

Keyboard USB scancodes (a standard! at last!): http://www.win.tue.nl/~aeb/linux/kbd/scancodes-14.html

But then Operating Systems define them differently again:

http://lxr.linux.no/linux/include/linux/input.h

http://classicteck.com/rbarticles/mackeyboard.php

 

FreeBSD graphics: See man pages for vga, vgl. It seems that drawing lines, ellipses, rectangles and copying blocks are supported by the driver (yay!).

http://www.freebsd.org/cgi/man.cgi?query=vgl&sektion=3&apropos=0&manpath=FreeBSD+7.2-RELEASE

 

Inputting any Unicode character: see ISO-14755 http://www.cl.cam.ac.uk/~mgk25/volatile/ISO-14755.pdf
