On November 27th I was one of the attendees at the first-ever TEDxRyersonU, an independently organized TED event at Ryerson University. There were two talks that day that I felt directly related to what we as interactive content developers do. Both speakers work at the Digital Media Zone (DMZ) at Ryerson, a place Ryerson president Sheldon Levy calls “the next MIT”.
Hecham Ghazal was up first with Parallel Human Processing. I wondered what Ghazal meant by this title, but it soon became clear that he sees the internet as a socio-cognitive network – one that can solve complex tasks/problems better and faster than any artificial intelligence (AI). As Ghazal pointed out, the best AI still can’t tell you if a joke is funny but people on the internet can.
As social networks took off, Ghazal realized that they resemble a neural network: each neuron is a human being, and the axons are internet connections. The question, then, is how to harness the power of social networks to solve complex problems that require cognitive intelligence. One answer is Ghazal’s company LeanIn, which lets users create social networks to filter the massive amount of online video, so that you can easily find videos you want to watch – and even the section of a video everyone thinks you should watch.
Ghazal’s solution deals with organizing and filtering video content, but what else can we do with social networks? I think this is an area we can definitely focus on – and perhaps a way to show clients that social networks can do more than generate buzz around a product.
Next up was Dr. Hossein Rahnama with Business Models for Mobile Applications. Rahnama’s research focuses on context-aware computing, and he’s best known for his involvement in the Paris Metro Application.
What I found extremely interesting about the application was how well it automatically adapts to its context – whether that’s ambient noise levels or recognizing broad swipe gestures for users with impaired vision. This kind of adaptation reminded me that when we talk about applications degrading gracefully, we’re typically thinking about system capabilities. From Rahnama’s talk I learned that it’s not just about the system your application is running on, but also about the external environment. For example, his application automatically switches to a minimal interface if the available bandwidth can’t support the regular graphically rich one. As someone who has waited countless times for Google Maps to render, I can definitely appreciate this feature.
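To make that environment-driven adaptation concrete, here’s a minimal sketch of the idea. The thresholds, tier names, and the `choose_interface` function are all my own invention for illustration – this is not how Rahnama’s application actually works, just the general shape of the technique.

```python
# Hypothetical sketch: pick a UI profile from measured conditions.
# Thresholds and field names are illustrative assumptions.

def choose_interface(bandwidth_kbps, ambient_noise_db):
    """Return a UI profile adapted to the current environment."""
    ui = {"layout": "rich", "captions": False}
    if bandwidth_kbps < 256:      # too slow for the graphics-heavy view
        ui["layout"] = "minimal"
    if ambient_noise_db > 70:     # loud surroundings: surface text captions
        ui["captions"] = True
    return ui

print(choose_interface(128, 45))   # low bandwidth: falls back to minimal layout
print(choose_interface(2000, 80))  # noisy room: rich layout, captions on
```

The point is that the inputs come from the environment, not just from device capability checks – the same phone gets a different interface on the subway than it does at a desk.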
Much of Rahnama’s talk focused on context-aware applications developed by both graduate and undergraduate computer science students. One is an on-campus application that populates lecture notes based on which building the user is currently in. As part of the application’s services, some lectures are recorded and can be accessed from a user’s phone, letting students sit back and pay attention instead of the typical mad scribbling of notes.
Another application, RAIMA (RAIMA publication abstract), dealt with multi-context visualization to help users network easily at conferences. Rahnama joked that he was known as the guy who researched dating applications, and I can see why after hearing about the eight-dimensional matching involved. Essentially, users build a profile of their work interests, and the application computes distances to other users based on similarity. It then displays a graph with icons of other users along with their similarity distances so that you know whom to talk to. You can message users directly without ever having to hunt them down in person.
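The matching idea is easy to sketch: treat each attendee as a point in an eight-dimensional interest space and rank people by distance. Everything here – the profile values, the names, and the use of plain Euclidean distance – is an assumption on my part; the talk didn’t specify RAIMA’s actual metric.

```python
import math

# Hypothetical sketch: each attendee is a point in an eight-dimensional
# interest space; a smaller distance means a better match.

def similarity_distance(a, b):
    """Euclidean distance between two 8-dimensional profile vectors."""
    assert len(a) == len(b) == 8
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

me    = [0.9, 0.1, 0.8, 0.3, 0.5, 0.0, 0.7, 0.2]
alice = [0.8, 0.2, 0.7, 0.4, 0.5, 0.1, 0.6, 0.3]
bob   = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9, 0.0, 0.7]

# Rank the other attendees by closeness to my profile.
ranked = sorted([("alice", alice), ("bob", bob)],
                key=lambda p: similarity_distance(me, p[1]))
print([name for name, _ in ranked])  # alice is the closer match
```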
Applications aside*, Rahnama focused his talk on how to approach application development. While context awareness is highly important in current mobile development, he also stressed the importance of being platform agnostic now that the browsers on most mobile devices can handle highly interactive applications. Other points he touched on as part of a design model were harnessing collective intelligence, using data as the driving force, and adopting lightweight programming models. For developers, these concepts boil down to building a core and then exposing tools and APIs around it so that users can create novel content. A great example of users creating content is the Audiotool application, where users generate their own music and share it with others – all within the browser, and with tools developed by people in our industry.
What I took from Rahnama’s talk was that as mobile computing grows, we as developers need to step back from trying to keep the visual appearance of an application or a site exactly the same everywhere. Instead, we should focus on creating experiences that adapt to both the system and the real-time environment they’re in.
I believe that as interactive developers we are in the right place to create some really amazing things with the concepts from these talks. We already use social networks in many of the sites and applications we build – how about we go one step further? For example, why not let users help a game evolve into something better? Maybe the game could aggregate the various ways users win levels and create a smarter AI. What about creating applications that are so context-aware they almost seem intelligent? Granted, many applications already use geolocation, but we can go even further: think about ambient sound, current bandwidth, or how the user is interacting with the screen.
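The game idea above could start as simply as tallying how players beat each level and letting the AI prepare for the most popular approach. This is purely a speculative sketch of my own suggestion – the strategy labels and the game are invented.

```python
from collections import Counter

# Speculative sketch: aggregate the strategies players used to beat a
# level, so the AI can favour counters to the most common one.

level_wins = ["rush", "rush", "turtle", "rush", "flank"]

def most_common_strategy(wins):
    """Return the winning strategy reported most often for a level."""
    return Counter(wins).most_common(1)[0][0]

print(most_common_strategy(level_wins))  # -> "rush"
```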
Yes, building highly adaptable sites and applications is a lot more work, but interactive developers have a history of pulling off the impossible. We’re already seeing products that let us be somewhat platform agnostic in our development (e.g. Adobe AIR for almost everything), making multi-platform development just a little easier. Perhaps one day we can look forward to mobile browsers with direct hooks into system hardware. To quote Rahnama, “Innovation should be fun and social” – and I think we tend to follow that motto. It’s just a matter of seeing how much further we can go with it.
*Rahnama went through so many applications that I couldn’t cover them all above. Here are highlights of a few more:
- A TTC version of the Paris Metro application that can automatically sync with your calendar and notify the people you have a meeting with about how late you’re going to be. If you’re wondering about reception, this does require installing hardware throughout the subway.
- Automated interactive blogging that generates a comic strip at the end of the day based on phone usage that day. You can also invite friends to modify the comic strip if you’d like.
- A social table using the Microsoft Surface to suggest conversation topics based on your barcoded profile stuck to the bottom of your glass.
- Augmented reality at the ROM which allows you to use your phone or special eyewear to look at artifacts around you and receive information about them. The application also opens up a stereoscopic map of the museum to help you find your way around.