A pen-based interface captures digital ink through a transducer device that records the movements of a digital pen. This information can be relayed to domain-specific application software that interprets the pen input as appropriate computer actions, or archived as ink documents, notes, or messages for later retrieval and exchange over telecommunications networks. Pen-based interfaces have advanced rapidly since personal digital assistants (PDAs) gained commercial popularity, not only because such devices are conveniently portable, but more so because their easy-to-use freehand input modality appeals to a wide range of users. Research efforts aimed at the latter quality led to modern products such as personal tablet PCs (personal computers; Microsoft Corporation, 2003), corporate wall-sized interactive boards (SMART Technologies, 2003), and communal tabletop displays (Shen, Everitt, & Ryall, 2003). Classical interaction methodologies adopted for the desktop, which essentially rely on conventional pull-down menu systems operated with a keyboard and a mouse, may no longer be appropriate: screens are getting bigger, the dimensions of interactivity are increasing, and users tend to insist on a one-to-one relationship with the hardware whenever a pen is used (Anderson, Anderson, Simon, Wolfman, VanDeGrift, & Yasuhara, 2004; Chong & Sakauchi, 2000). So, instead of combining keyboard, mouse, and pen inputs to conform to the classical interaction methodologies on these modern products, our ultimate goal is to do away with conventional GUIs (graphical user interfaces) and concentrate on perceptual starting points in the design space for pen-based user interfaces (Turk & Robertson, 2000).