Microsoft Surface tests
I had one of these awesome boxes to play with this week. The Surface is Microsoft’s multi-touch display table. It runs Windows Vista Business, and we can develop for it in a normal Visual Studio environment using C#. In our case, we wanted to explore ideas for use of the table by television presenters, so we looked at a few interactive ideas and at the connectivity of the box.
One of the box’s advantages is that it comes as a solid (if heavy) complete unit, with not only the normal multi-touch surface for control but also Bluetooth connectivity and a pattern recognition system that can read Microsoft’s own format of image tags. The demo software just draws people into the system, and the people we showed it to were easily persuaded of its capabilities. The Vista operating system isn’t seen by most users: the Surface has its own wrapper around all the loaded applications, with a touch menu to bring them up. The software examines each touch on the screen, and small round ones it assumes are fingers. It can even tell, from the slight shadow to one side, which direction the finger is coming from; in the US Presidents demo this is cleverly used to turn a card over so that it faces the correct side of the table for reading by the person who touched it. It is neat ideas like this that will bring a different experience to the user. The programmer has to think more about how the system may be used by more than one person, and by users all around the table. No more square Windows thinking.
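From memory, the card-flipping trick looks something like this in code. This is only a sketch: the event handler shape, `IsFingerRecognized` and `GetOrientation` are my recollection of the Surface SDK’s contact API, and the `card` element is a hypothetical piece of UI, so check the names against the real SDK documentation.

```csharp
// Sketch only: the SDK names (ContactEventArgs, IsFingerRecognized,
// GetOrientation) are from memory; 'card' is a hypothetical UI element.
private void OnContactDown(object sender, ContactEventArgs e)
{
    if (e.Contact.IsFingerRecognized)
    {
        // The orientation is derived from the finger's slight shadow,
        // so it tells us which side of the table the touch came from.
        double angle = e.Contact.GetOrientation(this);
        card.RenderTransform = new System.Windows.Media.RotateTransform(angle);
    }
}
```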
In my case, I decided to look at newspapers rather than the usual photo application. We receive a feed of high-resolution PDF files of the newspaper pages, which can be converted to rather large JPEG files for display by the system. The controls that come with the library make it very simple to load the pictures into an interactive screen that can be viewed and manipulated by several people around the table. Using C# within Visual Studio, we were able to both learn the toolbox and put together an application in a couple of days. The large images show off the power of the system really well: users were able to zoom into even the smallest part of a page to read the details of a story, with no worry about the text being blurred. It is not a high-resolution screen, but it was very appealing and easy to use. There isn’t much around on the web in terms of examples yet, but one I found was this two-part drag/drop example at CodeProject.
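The page-loading side really is only a few lines. The sketch below assumes the Surface SDK’s ScatterView control; the folder path, class name and method name are all invented for illustration, not our real feed setup.

```csharp
// Sketch only: assumes the Surface SDK's ScatterView control; the
// folder path and class names here are hypothetical.
using System;
using System.IO;
using System.Windows.Controls;
using System.Windows.Media.Imaging;
using Microsoft.Surface.Presentation.Controls;

public partial class NewspaperWindow
{
    private void LoadPages(ScatterView scatter, string folder)
    {
        foreach (string file in Directory.GetFiles(folder, "*.jpg"))
        {
            var page = new Image { Source = new BitmapImage(new Uri(file)) };
            // Each ScatterViewItem can be dragged, rotated and resized
            // independently by any user around the table.
            scatter.Items.Add(new ScatterViewItem { Content = page });
        }
    }
}
```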
(typical programmer error – leave the keyboard on the touch screen for the picture!)
All the development is done within Visual Studio as normal. We have libraries to connect to other systems for transfer of images, and socket connections to control the 3D systems that generate graphics for other displays. It would fit into a networked PC system with no trouble at all. The Surface interface makes it very easy to add applications to the main interactive menu: a simple XML file is all you need, together with a couple of PNG or similar files for the menu entry (some apps have animated versions of these), and the app appears as a choice in the menu list.
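For what it’s worth, that registration file is tiny. The sketch below is from memory, so the schema details (element names, namespace URI) may well differ from the real Surface shell format, and the paths and title are invented.

```xml
<!-- Sketch of the menu registration file; the exact schema is from
     memory and may differ from the real Surface shell format. -->
<ss:ApplicationInfo
    xmlns:ss="http://schemas.microsoft.com/Surface/2007/ApplicationMetadata">
  <Application>
    <Title>Newspaper Browser</Title>
    <ExecutableFile>C:\Apps\NewspaperBrowser\NewspaperBrowser.exe</ExecutableFile>
    <IconImageFile>C:\Apps\NewspaperBrowser\icon.png</IconImageFile>
  </Application>
</ss:ApplicationInfo>
```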
The main problem, as usual, is that when people see how easily the first application tests come together, they assume it will only be a few more days until they get a bells-and-whistles version with all the ideas that have been generated. It is as easy as WPF development on a normal desktop app, but it will generate lots of ideas from the users, so keep the sketchbook handy.
We tried a studio test as well, which gave us a few problems. The system has a secondary output display, which is great for a developer: he or she can be programming on the second screen while the Surface uses the main display. We did try to get the output onto a second monitor, to give a view of what was happening on the screen for people not at the table, but we failed here. Not enough time or planning, probably. As shown in the picture, there are a few USB ports to allow mouse, keyboard, Bluetooth and a couple more, which could be brought to the outside of the cabinet if needed. The screen display has component and VGA out with audio, to give a second display to the system.
We plan to do more tests later, and will probably not want just a simple reflection of the screen. The background display will be more of an aggregation of what has been done on the main ‘live’ screen. This might be normal Windows graphics, or it might involve the vizRT systems producing graphics that could be used in a more augmented-reality fashion and keyed over the set. One problem we did have in the studio was that some of the lighting rig affects the interactions on the Surface sensors. Some of the bright lights must produce infrared at a power that causes the Surface to interpret them as touches; the effect was false touches to the screen and pictures jumping around somewhat.
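The socket control link to a graphics engine is plain TCP, so it can be sketched without any Surface code at all. In the sketch below the command strings are invented (they are not the real vizRT protocol), and a loopback listener stands in for the graphics engine so the round trip can be seen end to end.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class GraphicsCommandDemo
{
    // Send one newline-terminated command and read back one reply line.
    public static string SendCommand(string host, int port, string command)
    {
        using (var client = new TcpClient(host, port))
        using (var stream = client.GetStream())
        {
            byte[] payload = Encoding.ASCII.GetBytes(command + "\n");
            stream.Write(payload, 0, payload.Length);

            byte[] buffer = new byte[1024];
            int read = stream.Read(buffer, 0, buffer.Length);
            return Encoding.ASCII.GetString(buffer, 0, read).TrimEnd('\n');
        }
    }

    static void Main()
    {
        // Stand-in for the graphics engine: a loopback listener that
        // simply acknowledges whatever command arrives.
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        Task server = Task.Run(() =>
        {
            using (var peer = listener.AcceptTcpClient())
            using (var stream = peer.GetStream())
            {
                byte[] buffer = new byte[1024];
                int read = stream.Read(buffer, 0, buffer.Length);
                string cmd = Encoding.ASCII.GetString(buffer, 0, read).TrimEnd('\n');
                byte[] reply = Encoding.ASCII.GetBytes("ACK " + cmd + "\n");
                stream.Write(reply, 0, reply.Length);
            }
        });

        // Prints "ACK PLAY page_zoom"
        Console.WriteLine(SendCommand("127.0.0.1", port, "PLAY page_zoom"));
        server.Wait();
        listener.Stop();
    }
}
```

In a real setup the listener half lives in the graphics engine, and the Surface application only needs the `SendCommand` half.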
All in all, a good first outing. It generated a number of positive ideas and showed us how easily we could put the main interface together. We now need to sit back and talk through a more planned, practical system for day-to-day use.