Having reached the end of the semester, my research and development on my iPhone and Surface projects has come to a temporary halt. Over break I look forward to enjoying time with my family and helping design materials for the Tangible, Embodied and Embedded Interfaces conference at MIT in January. Developing on both platforms has been a challenging and fulfilling experience, and it has yielded results that I am truly pleased with.
Some challenges did present themselves as I debugged the iPhone application, particularly when trying to find a method to read from a file hosted remotely and to retrieve an image’s path on the iPhone. While that was resolved after two days of scouring the internet (as detailed in the previous post), there currently remains a challenge when working with the UIPageControl. To make the UIPageControl work I had used some sample code from Apple’s developer resources, but unfortunately this didn’t play nicely with the method I devised to populate the Response screens with information from the JSON file. Although I developed a working system for writing to the JSON file and retrieving its contents into an NSArray, for some reason “flipping” through the dynamically generated pages causes the application to crash. Because the array populates successfully and the UIPageControl works with any data not drawn from the array, pinning down this issue will take additional time. Attempting to repair it also took priority over debugging the PHP script that is meant to upload an image object; although I have the Objective-C in place, I know I need to go back and refine the PHP uploader method for it to be scalable. These are two weaknesses of the application that I intend to repair given more time.
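For reference, here is a rough sketch of the kind of bounds-checked page loading I plan to try when I return to this bug, modeled on the loadScrollViewWithPage: method in Apple’s PageControl sample; the names responses, viewControllers, ResponsePageViewController, and initWithResponse: are hypothetical stand-ins for my actual classes.

// Hypothetical sketch of bounds-checked page loading, modeled on Apple's
// PageControl sample. "responses" is a retained NSArray parsed from the JSON
// file, "viewControllers" is an NSMutableArray pre-filled with NSNull
// placeholders, and ResponsePageViewController / initWithResponse: are stand-ins.
- (void)loadScrollViewWithPage:(int)page {
    if (page < 0 || page >= [responses count]) {
        return; // flipping past the first or last page must never touch the array
    }

    // Replace the placeholder with a real controller the first time this page is shown.
    ResponsePageViewController *controller = [viewControllers objectAtIndex:page];
    if ((NSNull *)controller == [NSNull null]) {
        NSDictionary *response = [responses objectAtIndex:page];
        controller = [[ResponsePageViewController alloc] initWithResponse:response];
        [viewControllers replaceObjectAtIndex:page withObject:controller];
        [controller release];
    }

    // Position the page's view at the correct horizontal offset in the scroll view.
    if (controller.view.superview == nil) {
        CGRect frame = scrollView.frame;
        frame.origin.x = frame.size.width * page;
        frame.origin.y = 0;
        controller.view.frame = frame;
        [scrollView addSubview:controller.view];
    }
}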
There were also challenges arising from how we populated the Surface’s ScatterView with Artwork ScatterViewItems, as it was difficult to communicate the locations of tokens to the ScatterViewItems on the screen, which is necessary for our saving and deleting functions to work fully. We also lacked time to experiment with the retrieval of responses, and in the future we would like to randomize them while also importing more images.
Happily, the current weaknesses of both applications are not terminal; they simply could not be resolved in the four weeks we had to develop the aesthetic and functional prototypes on the Surface and the iPhone. As I intend to work with Dr. Orit Shaer and Jim Olson of the Davis Museum to test and refine these applications in the museum setting next semester, this is only the beginning of a project I am very excited about. An additional feature I plan to bring to the application is the ability to read 2D codes using the iPhone’s camera, so that users can scan a code rather than type one in when unlocking responses. I plan to use the zxing image processing library with its iPhone module to do so, and I believe this will add a fun and enticing element to using the application. Additionally, I intend to write a PHP script to convert an XML file hosted on the server to a JSON file, as Jim and the museum curators already store information about the artworks in XML files. In so doing, I can avoid the hassle of working with the NSXML methods and continue to take advantage of the JSON framework, without inconveniencing the museum staff.
Although Clover Studio’s name may not be an intentional blend of the phrase “creativity lover,” the visionary stylings of Okami certainly lend credence to the rumor. Developed by Clover Studio for the PS2 and ported to the Wii by Ready at Dawn, Okami is an action-adventure game framed by Japanese mythology and cel-shaded artwork that simulates a living sumi-e brush painting. In order to save Nippon from evil demons, the sun goddess Amaterasu inhabits the playable form of a white wolf at the game’s start. In addition to traveling from quest to quest, it is the player’s task during the adventure to use her “Celestial Brush” to perform acts of divine intervention. While these unusual game mechanics might seem to require a conscious suspension of disbelief, some of the game’s most implausible features are actually essential to a consistent and immersive gameplay experience.
What makes these features immersive rather than interruptive is their role in Okami’s greater gameplay metaphor of omnipotence. This metaphor mediates the “gap between gameworld and player or…gap between player and avatar” in order to “help at least approximate or ‘fake’…experiences,” such as the experience of being divine. In their paper “Games about LOVE and TRUST? Harnessing the Power of Metaphors for Experience Design,” Doris Rusch and Matthew Weise point out that dealing with this gap “creatively” can bring the player incredibly close to “stepping into the shoes of the hero [and] into the body of a completely different species.” The player-avatar relationship, aesthetic schema, and Celestial Brush interface metaphor in Okami act in concert to navigate this gap and create an immersive experience of being omnipotent.
In order to save and retrieve responses from the iPhone, I’ve been looking for technologies that let me create and retrieve objects through a web server. The Surface’s C# methods for retrieving XML documents and writing to them are incredibly straightforward, but Apple apparently rendered their suite of NSXML methods private, and therefore unusable on the iPhone for anything but parsing XML. Interesting decision. The KissXML API is supposed to wrap these methods and objects in a way that can be used on the iPhone, but unfortunately I keep getting an error that is driving me mad. No thanks to you, XML; moving on.
On the local side, I could have response objects created within Core Data, yet the ominous statements from iPhone developers across the globe imply that I’d face a living hell come time to sync the application’s data with a web server. I’ve also looked at SQLite3, but to be frank I’m confused by how little documentation there is on using it with web services.
In fact, web services on the iPhone seem poorly covered; there is no true compendium of possibilities and/or sample code on the internet. What I was led to is JSON, a lightweight data interchange format. This introduction, Andy Jacobs’ tutorials, and these tutorials proved essential to learning what JSON is, how to use json-framework, and how to use PHP as a go-between for my Objective-C code and the JSON data.
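To make this concrete, here is a rough sketch of the reading side, assuming the PHP script simply serves the JSON file at a placeholder URL and that json-framework’s NSString category is in the project:

// Hypothetical sketch: fetch the JSON file served by the PHP script and parse
// it with json-framework. The URL is a placeholder for the real server path.
#import "JSON.h" // json-framework umbrella header (SBJSON)

NSURL *url = [NSURL URLWithString:@"http://example.com/responses.json"];
NSError *error = nil;
NSString *jsonString = [NSString stringWithContentsOfURL:url
                                                encoding:NSUTF8StringEncoding
                                                   error:&error];
NSArray *responses = nil;
if (jsonString != nil) {
    // -JSONValue returns an NSArray or NSDictionary, depending on the file's root element.
    responses = [jsonString JSONValue];
} else {
    NSLog(@"Could not load JSON: %@", error);
}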
Let’s just say it’s been a marathon two days of educating myself about the different forms of data persistence available, and I’m very pleased with the web services solution I’m using. Thanks to the authors of all of the posts above; I am finally able to read and write to a JSON file, which can now be used to store responses! Furthermore, because the JSON is posted through PHP, I can later do any number of things on the server side, such as converting it to XML or storing it in MySQL, and retrieve the data from there as well.
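For completeness, here is a rough sketch of the writing side as I understand it: the iPhone serializes a response with json-framework and POSTs it to a placeholder PHP endpoint, which then appends it to the JSON file (in the real application this request should not block the main thread).

// Hypothetical sketch: serialize a new response and POST it to the PHP script.
// The URL and dictionary keys are placeholders for the real ones.
NSDictionary *newResponse = [NSDictionary dictionaryWithObjectsAndKeys:
                                @"42", @"artworkID",
                                @"What a painting!", @"comment", nil];
NSString *body = [newResponse JSONRepresentation]; // json-framework category on NSObject

NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
    [NSURL URLWithString:@"http://example.com/save_response.php"]];
[request setHTTPMethod:@"POST"];
[request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
[request setHTTPBody:[body dataUsingEncoding:NSUTF8StringEncoding]];

NSURLResponse *urlResponse = nil;
NSError *error = nil;
NSData *result = [NSURLConnection sendSynchronousRequest:request
                                       returningResponse:&urlResponse
                                                   error:&error];
if (result == nil) {
    NSLog(@"POST failed: %@", error);
}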
Excellent. It’s a relief to have this burden off my shoulders…now, on to the next things (prettifying the interface, and uploading images and saving their URLs to the JSON file; my tasks for tomorrow).
In addition to my CS349 group project’s alpha prototype, I spent the two weeks before and including Thanksgiving Break working on the alpha prototype of my iPhone application for the Davis Museum. This involved developing the model-view-controller relationships for a navigation-style application, programming the interactions, designing the GUI, and researching tag-based reader SDKs for the iPhone.
The development process itself was somewhat iterative, as it was an ongoing learning experience; for example, I completely rewrote the application when I realized that my MVC model wasn’t very sustainable or usable. Fortunately, Apple’s supplied view controllers (such as the navigation view controller) and their sample code were there to help, and while I was having difficulty adjusting even after writing small programs in Objective-C all semester, Apple’s resources got me to the point that I feel incredibly comfortable programming for the iPhone. (Thanks for nothing, “Beginner’s Guide to iPhone Programming.”)
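To illustrate that navigation pattern, here is a rough sketch of the kind of push that drives the application; the class and property names are hypothetical stand-ins for my own:

// Hypothetical sketch of the navigation pattern: selecting a table row pushes
// a detail view controller onto the navigation stack. ArtworkDetailViewController,
// its artwork property, and the artworks array are stand-in names.
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    ArtworkDetailViewController *detail =
        [[ArtworkDetailViewController alloc] initWithNibName:@"ArtworkDetailView" bundle:nil];
    detail.artwork = [artworks objectAtIndex:indexPath.row];
    [self.navigationController pushViewController:detail animated:YES];
    [detail release];
}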
Unfortunately, working with file I/O was much more difficult than expected. Unlike C# on the Microsoft Surface, Objective-C on the iPhone does not seem to easily support both reading from and writing to an XML file over the network. Because this is essential to the functioning of my application, I anticipate spending the next week learning more about SQLite3 and Core Data, and determining which one I should use to store my data (and how).
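To be fair, the reading half is straightforward; this rough sketch (with a placeholder URL) fetches and parses an XML file over the network. It is the writing half that has no convenient counterpart on the iPhone:

// Hypothetical sketch: reading and parsing a remote XML file is the easy half.
// "self" implements the NSXMLParser delegate callbacks; the URL is a placeholder.
NSURL *url = [NSURL URLWithString:@"http://example.com/artworks.xml"];
NSData *xmlData = [NSData dataWithContentsOfURL:url];

NSXMLParser *parser = [[NSXMLParser alloc] initWithData:xmlData];
[parser setDelegate:self];
if (![parser parse]) {
    NSLog(@"Parse error: %@", [parser parserError]);
}
[parser release];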
It has also been difficult to find a tag-reading SDK that I can use to tag works in the museum. While RedLaser offers a very inexpensive SDK considering the technology behind it, having to upload items to Google Base in order to have them read is problematic, as the paintings obviously aren’t products to be sold. It’s encouraging to see that Microsoft has developed Microsoft Tag, software that would address our needs, yet they have not released an SDK and apparently have no plans to (they may eventually release the API, but they haven’t done that yet either).
The latter two issues (implementing database access, reading, and writing; finding and implementing a tag-reading SDK) are the ones I see myself focusing on over the next two weeks. Something I look forward to, in addition to resolving those, is designing the aesthetic for the application, such as the icon and the rest of the graphic polish.
Over the two weeks including Thanksgiving Break, my CS349 group developed a prototype of our Surface application for the Davis Museum. I was involved in creating and programming the artwork objects that appear on the GUI, and programming the file I/O for the user responses. Because this was our deep, “risky” area, I implemented it fully and created a system for populating the screen with previously saved responses. Finally, I also worked with my teammate Helen on having the Surface recognize the placement of our tags, and over the next week I will develop an algorithm for sorting the words on the native screen based on the tags that are placed.
The development process was an interesting one. I experienced several trials and tribulations when installing the Surface development environment on my computer, and after purchasing and installing Windows 7 (twice; always install the 32-bit version, people) I was heartbroken to discover that my computer’s resolution (1440×900) is just shy of the vertical requirement to run the Surface Simulator. No worries, though: an external monitor came to the rescue, but this was and is a definite setback.
When it came to actually developing for the Surface, getting the grid structure to behave the way I wanted caused some confusion, but the happy result was that I developed some very reusable sample code that may help inform other CS349 projects in the future.
After plenty of research, one happy surprise was that doing file I/O, and particularly XML I/O, is extremely intuitive on the Surface. After learning the correct syntax and structure, implementing the input and output was fairly simple (unlike, say, on the iPhone, which is melting my brain right now). After demonstrating our application and attempting to tweak the output of the responses, it has become apparent that responses are being saved to a local file, which so far I am unable to locate or edit. Furthermore, a next step is to retrieve only the most recent five or so responses.
At this point, the most significant tasks left are to improve the display of information and the methods of “cleaning up” the screen, to include guards against “dummy” data, and to develop the sorting algorithm mentioned above. The great thing is that because our architecture is very modular, it shouldn’t be difficult to work with our structures, and the most challenging final step will probably be to improve our aesthetic (and render it as usable as possible).
In “The Computer for the 21st Century,” Mark Weiser describes the ubiquity of writing as a “background presence” of “literacy technology,” one which does not require active attention but is prepared to deliver information at a glance. At this point in our history, computers are not similarly embedded in the world around us but are instead present only in “a single box.” Weiser calls for location-aware devices that are intelligently adaptive and that are built to address specific tasks. He envisions “pads” that are to computers what scrap paper is to paper: useful anywhere and the “antidote to windows.” The idea of mobile pads and live boards (all of the above involving displays) creating ubiquity is at the heart of Weiser’s vision for future computing.
Unlike Weiser, in “Tangible Bits: Towards Seamless Interfaces Between People, Bits and Atoms,” Ishii and Ullmer state their interest in moving away from GUIs nearly altogether, taking the idea of invisible computers quite literally and embedding them in everyday physical objects. As the authors themselves put it, they are more interested in “awakening richly-afforded physical objects, instruments, surfaces, and spaces to computational mediation, borrowing perhaps more from the physical forms of the pre-computer age than the present.” A key example is the ClearBoard, intended to change a wall “from a passive architectural partition to a dynamic collaboration medium.” The objects in the ambientROOM subtly display and communicate information by their very natures, not through a concrete display of information on a GUI. As the authors put it, “GUIs fall short of embracing the richness of human senses and skills people have developed through a lifetime of interaction with the physical world.”
Our project falls more within the scope of Weiser’s idea of ubiquitous computing, as it is an application written for a GUI, albeit a GUI embedded in a table surface. Particularly considering the project’s extension that I am developing on the iPhone, this system is comparable to pads (the iPhone) and live boards (the Surface), somewhat ubiquitous and spatially relevant to the different parts of the Davis Museum in which they will be used. One way in which our project could be more attuned to the goals of Ishii and Ullmer is by including iconography and objects in the application that are modeled on physical objects and interactions with them, moving away from common GUI idioms such as buttons for moving forward and closing objects.
In addition to exploring how mobile technologies may be assistive within the Davis Museum, I’m part of a team this semester that is investigating how a Microsoft Surface in the museum’s lobby could improve visitors’ experiences. The problems we seek to address with our design are that students (a) find the lobby generally unwelcoming, (b) don’t realize that the museum is their space, and (c) appear to be most attracted to social and personal experiences in the museum. By developing an application for the Microsoft Surface and installing it within the Davis Museum’s lobby, we seek to engage students who can see the Surface through the lobby’s glass walls and who pass by it on their way to the galleries.
Our application, in its native state, will display a word cloud containing the names of the galleries’ themes. By placing one of three tokens (time period, materials, or geographical origin) on the Surface, the user can modify the word cloud to display the names of the corresponding subgroups. Upon dragging a subgroup out of the word cloud, a cluster of related images populates around the word, and the user can drag an image out of the cluster to look at it individually. When dragged out, the image automatically enlarges. When the user selects it, a prompt slides out next to the image asking them to think critically about the piece in one way or another. To respond, the user may “finger paint” directly on the image or in a space below the question, and submit the response to the database. Afterwards, they can view others’ responses as well.
In these ways, we believe users will feel personally connected to the works and invited to think about them analytically before and after they actually enter the galleries. Furthermore, they’ll feel connected to the museum’s other visitors, as if they are part of an active and analytical community. For these reasons, we believe our application is ideal for humanizing the lobby space, making the Davis Museum a more social space, and extending visitors’ learning through every stage of their visit and even beyond the museum’s doors.
For my independent study this semester, I’m researching how mobile technologies can enhance the museum experience. Particularly within a museum setting, where the focus is on the artworks themselves, how can an informative mobile application enhance the experience without replacing it? Furthermore, what benefits could a mobile application offer a visitor that they cannot get elsewhere? Over the course of my independent study I hope to answer these questions and work with Wellesley’s Davis Museum to develop and begin testing a prototype iPhone application.
It is my intention that this application will provide visitors to the Davis Museum with navigation assistance, retention assistance, the ability to “tag” pieces, and the ability to create preference-based tours. This will instill in visitors a greater spatial awareness based on their current locations, and build usable relationships between pieces based on facts and visitor contributions. It would transform their iPhone or iPod touch into a unique tool that informs the choices they make during their museum visit.
So that learning can continue outside of the museum, I would also like to include the ability to retain and share a visitor’s experience. What if the application created a database of information about a visitor’s favorite pieces, tags, and recommended pieces they ought to see next? Or what if there were a mechanic for recording comments or artifacts that are private to the viewer, to share with friends later?
Recently, other TUI researchers and developers have also been inspired to answer the demands of museum education. Research topics include museum education via mobile gaming, the ability to add tags to a database, and collaborative multi-device learning activities. The research of Jolien Schroyen et al. in the ARCHIE project revealed that creating a mobile gaming experience that runs in tandem with a museum tour helps students absorb the information and become engaged with the subject matter. Additionally, the work of Dan Cosley et al. on the MobiTags project found that “tagging” works was beneficial to viewers who wanted a less formal, more personal connection to the pieces. Tagging also informs visitors about the types of pieces in the museum, as well as the types of visitors who have preceded them.
Clearly, a mobile platform is well suited to tailoring the museum experience to its unique visitors, using GPS information to keep them aware of their location and its offerings at all times. In conjunction with a tagging mechanic, this will allow viewers to feel as if the museum space is more like their personal space, tailored to their interests and their unique goals. Finally, by allowing visitors to bring the experience out the doors with them, a mobile application can allow visitors to share or recall their experience on the fly and in other public settings. As I develop and test prototypes, I will be interested to gather feedback about whether a game-like mechanic is as enticing for all demographics as it is for young students, and what other techniques can be used to make the museum experience a more social one as well.