We created something wonderful and kind of nerdy. It’s called BeerRecognition: an immersive experience with a central role for beer bottles. We mixed pattern recognition, augmented reality and beer brands together in one concept. A recipe for success.
As Strategic Innovator I have the honour and responsibility to create excuses to tinker with new technology for the coolest humans of all: normal, generic people like you and me. And because we’re a digital design agency we tend to use new technologies as inspiration to create something fun, useful, and valuable.
We like to call this flavour of applying innovative technology and human-centred ideas ‘applied innovation’.
BeerRecognition came to be when we decided: “When drinking awesome beers, people should be enveloped by the full beer experience. A label and a fancy website can only do so much.”
And so we applied our process to figure out how we could make the beer tell its own story. We mostly followed the Human Centered Design steps to get to the answers, although to make room for our discovery we also included ‘tactile tinkering’ in our method.
Human Centered Design consists of activities that are run through iteratively: Empathise, Define, Ideate, Prototype, Test. In our innovation program we use Prototyping as leverage to further the other activities. That’s the process of ‘tactile tinkering’.
Tactile tinkering basically means touching and manipulating the tech and design, and figuring out how they can be applied even better. Holding your experiment, re-ideating, re-empathising and restructuring the story drastically improved our outcomes AND learnings.
We try to start our discovery projects with a ‘what if’ scenario. This way it’s open to interpretation and makes us think about the possible outcomes. For BeerRecognition it was:
What if we immerse people in the story and inspiration around a beer, or another drink?
Beers seemed a logical choice because their labels are very distinguishable and pretty fun. There are also real fans, materials and backstories around beer, so plenty of inspiration to go around.
Did I mention fun must be part of our discovery projects? Well it is! 😀 We found that intrinsic motivation when designing an idea is very helpful if you want to push the envelope of what’s possible, but also to drive energy in innovation teams and stakeholders.
So ‘just recognising beers’ wasn’t cool enough, or rather, it was not enough for the concept to work. We wanted something that blows people away, not something that just informs them with cute factoids.
Pushing technology to discover usage, Mythbusters-style
Most of the time we have a set goal in mind before we start, like discovering a technology concept: recognising things based on unique physical properties.
We know how things SHOULD work, but most of the time we have not seen a good example based on existing technology.
In true Mythbusters style we work the challenge from two sides:
- Can we create our idea with existing technology?
- What do we need to replicate the ideal scenario?
Our goal: combine common design trickery and bleeding-edge technology to build the perfect human-centred experience.
Under the hood
Augmented Beerreality in Unity
We wanted to immerse people in the world of each unique beer. Adding Augmented Reality (AR) seemed like a logical step to take.
Unity is a (mobile) visual engine used to build games, AR apps and all kinds of visually powerful experiences. Advised by our AR expert, we built an environment where somebody can step in, show the beer and get immersed in inspiration and information.
Recognising beers with TensorFlow
On the backend we used the Artificial Intelligence tool TensorFlow to ‘learn’ the labels of our beers. Imagine over 730 images of each beer bottle to learn every angle of a bottle!
There are AI tools available that recognise bottles in general, but we needed to recognise specific brands of beer. Like the general cloud services of Amazon, Google, Apple and Microsoft, we had to train a specific model for, for example, a ‘Lost in Spice’ beer.
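To make the idea concrete, here’s a toy sketch (plain Python, no TensorFlow) of the same principle: average many example shots per beer into a ‘prototype’ and classify a new shot by whichever prototype it’s closest to. The real model learns far richer features than this; every name and number below is invented for illustration.

```python
# Toy stand-in for a label classifier: average many example "images"
# (here: tiny colour histograms) per beer, then classify a new shot by
# nearest prototype. All beers and data below are made up.

def average(histograms):
    """Element-wise mean of a list of equal-length histograms."""
    n = len(histograms)
    return [sum(col) / n for col in zip(*histograms)]

def distance(a, b):
    """Squared Euclidean distance between two histograms."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(examples_per_beer):
    """Build one averaged 'prototype' histogram per beer brand."""
    return {beer: average(shots) for beer, shots in examples_per_beer.items()}

def recognise(model, shot):
    """Return the beer whose prototype is closest to this shot."""
    return min(model, key=lambda beer: distance(model[beer], shot))

# Hypothetical training data: several shots (angles) per beer.
examples = {
    "Lost in Spice": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.0]],
    "Hop Harvest":   [[0.1, 0.2, 0.7], [0.0, 0.3, 0.7]],
}
model = train(examples)
print(recognise(model, [0.85, 0.15, 0.0]))  # prints: Lost in Spice
```

The real pipeline replaces the histograms with learned convolutional features, but the “many examples per brand, nearest match wins” intuition carries over.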
Innovation is improvisation
We hoped to use a new Intel RealSense camera with depth perception to distinguish people from the background of the environment they step into.
Alas, the camera release was delayed and we had to use a technology that weathermen have used since the ’90s: green screens. When you step into the experience you also step in front of a bright green screen. This specific colour is replaced in Unity with beautiful, fun, animated backgrounds.
Of course this seems a bit ‘dodgy’ but it really pulled the experience together. Green screens might not be the end solution, but are perfectly fine to experiment with.
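For the curious, the trick itself is simple enough to sketch in a few lines of Python (our actual compositing happened in Unity): every pixel close enough to the key colour is swapped for the corresponding background pixel. The threshold and pixel values here are illustrative.

```python
# Minimal chroma-key sketch: pixels close to pure green are replaced by
# the corresponding background pixel. Threshold and data are illustrative.

GREEN = (0, 255, 0)

def is_greenish(pixel, threshold=100):
    """True if this RGB pixel is close enough to the key colour."""
    return sum(abs(c - k) for c, k in zip(pixel, GREEN)) < threshold

def chroma_key(foreground, background):
    """Composite: keep the subject, swap green pixels for the background."""
    return [
        bg if is_greenish(fg) else fg
        for fg, bg in zip(foreground, background)
    ]

subject = [(200, 40, 30), (10, 250, 20), (0, 255, 0)]  # person + green wall
scenery = [(5, 5, 80), (5, 5, 80), (5, 5, 80)]         # animated backdrop
print(chroma_key(subject, scenery))
```

Real keyers work in a colour space less sensitive to lighting and soften the edges, but the core idea is exactly this per-pixel swap.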
Learnings and next steps, with you?
This was a really cool project to work on, but it’s not finished by a long shot, because we’ve got a wishlist as long as an arm to incorporate. To mention some items: ditch the green screen and use cameras with depth perception, use gestures to interact with the experience, use more interactive animations, and track the body in the experience so we can even put some elements ON you.
Not to mention the technical learnings we got and want to elaborate on: take the learning of bottles and labels to greater heights, experiment with faster hardware, but also: create this on a mobile platform. Perhaps this should be a Snapchat plug-in? Who knows! Lots to experiment with!
All in all we think we’re ready to take the next step: implementing AR and training our models with TensorFlow for production-ready applications. And we want to invite you to build and re-imagine the beer experience from the ground up!
Imagine the potential to not only create inspiring environments for your experience-driven marketing, but also to empower employees to see more with the help of AI, AR and the high-definition optics we find in every mobile tool in the field.
Do you want to know more about our Innovation Process, AR, AI, or Applied Innovation? Drop us a line! We’re sure you won’t be disappointed!
Life to me is tinkering. Tinkering is all about starting, the power of serendipity and sharing. Tinkering is figuring out how things work, applying that knowledge and learning in the process about stuff, people and life.
This is part 1 of the wrongfully named tetralogy on innovation and tinkering. Written on May 16th, 2016
Let me tell you how I learned how tinkering formed my life, friendships and view on the world.
Who of you has ever started a project because it sounded cool, ignoring the fact that you knew nothing at all about the subject?
I did! I do! Actually, I am right now!
This ‘gung-ho’ attitude about things is the basis of tinkering, or ‘klooien’ in Dutch. It might sound informal, unfocused or even childish, but it is a rather powerful way to create things, energy and thoughts. Tinkering gave me direction, professionally and personally.
My first start in tinkering
When I was an awkward, buck-toothed boy in elementary school I found I liked learning in a specific way. I need to immerse myself in a subject. I basically created a world and story around my obsession.
You know those obsessed kids that can’t stop spewing facts about spacecraft, dinosaurs, or physics? I was one of those kids. I still am!
After the first protected, but also emotionally confusing 12 years of my life, I found this immersion into subjects is a great way to connect with people. That’s pretty important for kids that age. Looking back, that was the first time I started to exploit tinkering socially.
Tinkering is kind of a social process, because the process makes you think about all ingredients you need to progress towards a goal.
Mostly I start with something that fascinates me personally, like recreating a movie that inspired me. Then I just start and continuously hone my skills to get a level of understanding of the subject. Honing the skills means learning about the material I use, but also learning about the people involved.
I try to learn about how they can excel in our little project and make them happy. Come to think of it, I think I approach materials and tools like I approach people. Trying to find harmony between the three of them by exploring the boundaries of all of them.
In this process I keep tinkering and find ways to approach little problems and solve the puzzle.
I say ‘puzzle’, but that implies a predefined outcome. This is not necessarily the case. I like to work towards a level of completion, but not a specific end result.
With each step we take, we should feel free to backtrack, follow a tangent, or even start over. This is where serendipity comes in.
In a nutshell, serendipity is finding when you’re not searching. These are magical moments in life and projects: when you see the glistening of a rough diamond while you’re down in the dirt of something seemingly unrelated.
Learning to recognise those little gems and grab them is the toughest thing. In school and life we learned to focus and keep ourselves from being distracted.
I think we need to unlearn this, because dealing with these distractions is exactly what we’re built for. Freely following any interesting pursuit is key in tinkering.
The skill to identify, use and trust serendipity without prejudice is tough, but also critical in tinkering. Serendipity is a true catalyst of the tinkering process.
Unlearning the fear of serendipity is, however, a deceptively simple-sounding but tough thing to do. It is like learning a new skill.
Curiosity > Fear
That skill of embracing serendipity is to me the skill of letting curiosity win over the fear of the unknown. It is the skill of turning the unknown into a world of possibilities.
The main tool for me to turn fear into a forward motion is curiosity. Pure wonderment about things, people and the possible directions a story can take.
I think following one’s curiosity is one of the most underestimated skills in life. Curiosity is like a little compass. It not only drives innovation, but also creativity and, I think, even happiness.
With that grand statement I perhaps also explain how tinkering progressed me through various jobs without losing direction. It’s being curious, just doing it and sharing with whomever wants to join.
Can we transform visual and auditive emotional cues into emojis? And if so, can we improve your digital experience by reading between the lines? I like to think so. This is the pitch I sent my colleagues to get them to research the subject.
Mirabeau constantly researches how digital services can improve the human experience. One of the biggest hurdles in interpreting what a person needs is the ‘you know what I mean’ factor. Although people may say one thing, they often need or mean something else entirely.
My pitch basically asks: could recording and ‘reading into’ emotions make digital services more empathetic, efficient and powerful? This pitch is one of the directions I found interesting enough to explore.
The next step to intent detection: combine inputs and experience
One part of the research is to see how we can better detect intent. Although there are some interesting services that try to analyse your intent based on text, I think we need to combine a couple of technologies to make sure ‘we know what you mean’.
So perhaps we can record and combine facial expressions, gestures and sounds to determine your tone of voice.
Basically we want to see if we can ‘put emojis between the lines’ based on your face, voice and gestures, so digital services can better ‘read between the lines’.
While we’re at it, we should probably also see if we can use machine learning to hone our digital skills at ‘reading’ your intent. So there’s an AI aspect to this as well.
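As a rough sketch of how such a combination could work (not an actual implementation), here’s a toy fusion step in Python: each modality contributes weighted emotion scores, and the strongest combined emotion maps to an emoji. The weights, labels and scores are all invented for illustration.

```python
# Toy multimodal fusion: weighted sum of per-modality emotion scores,
# then pick the emoji of the strongest combined emotion.
# Weights, emotion labels and readings below are hypothetical.

WEIGHTS = {"face": 0.5, "voice": 0.3, "gesture": 0.2}
EMOJI = {"joy": "😄", "anger": "😡", "stress": "😅"}

def fuse(modalities):
    """Weighted sum of emotion scores across modalities."""
    combined = {}
    for modality, scores in modalities.items():
        for emotion, score in scores.items():
            combined[emotion] = combined.get(emotion, 0) + WEIGHTS[modality] * score
    return combined

def to_emoji(modalities):
    """Pick the emoji for the strongest combined emotion."""
    combined = fuse(modalities)
    return EMOJI[max(combined, key=combined.get)]

reading = {
    "face":    {"joy": 0.2, "anger": 0.7},
    "voice":   {"anger": 0.8},
    "gesture": {"joy": 0.3},
}
print(to_emoji(reading))  # prints: 😡
```

The hard part, of course, is producing the per-modality scores in the first place; that’s where the machine learning comes in.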
Using emojis to annotate intent
To interpret emotions we also need a way to record them together with the words we express ourselves with.
So could emojis be the music score to our lyrics?
Although emojis are a cultural expression and might be interpreted in many ways, we think we can use some of them as clear cues to express stress, happiness, jokes, sarcasm, anger, excitement or even despair.
Imagine you’re a bit peckish. You’d probably say: “I’m hungry”. A robot built to fulfil your every need would start cooking a full brunch right away, but is that the right response? Well, that depends, right?
Read between the words
You might have meant: “I’m kind of hungry, so I might want to get a cookie in a while”. In an alternative scenario you might have skipped breakfast and are borderline ‘hangry’ (a fierce form of hunger expressed with a lot of curse words).
There’s a big difference between “I’m hungry 😅” and “I’m hungry 😡”.
Emojis could be a great way to record your intent ‘between the words’, rather than ‘between the lines’. With this added ‘intent’, machines can help you as if ‘they know what you mean’.
That’s the second part of our research: can we use emojis to predict, personalise and help you in a better way?
TL;DR: Let’s see if we can detect emotions, transcribe them into emojis and use them to read between the lines to improve digital and physical services alike.
Simon sounds a bit like a classical evangelist. It’s like he sees millennials as helpless chicks in a bucket, in need of rescue. I don’t agree and think Simon might need some saving himself. My buddy Maarten sent this video to me. I loved it! It made me think. Please watch it first, then read my little rant. 🙂
So, I think he’s got this millennial thing the wrong way around. It sounds like he thinks millennials have no defence mechanisms to cope with the ‘real world’ and ‘normal’ social mechanisms, and above all have no idea what’s really important in life. That view seems weird to me, or at least exaggerated, and his attempt to help them seems overly protective.
All the anxieties he talks about, for example. Millennials are not being hit by their phones, abused by beautiful selfies of their friends or Stockholmized by Facebag! The smothering love and care of their parents is not making them jump off bridges, nor are positive reinforcement and effort medals making them entitled little princesses. Everybody is an entitled little princess at some point in their life! That’s fine!
People are resilient, flexible animals and live their lives differently than their parents did. And millennials are different from people like Simon!
BTW: I intentionally don’t use the word ‘generation’, because I think it has to do with whether or not you can view change with an open mind. Generally, when you grow older, keeping an open mind is tough. Millennials definitely have the advantage, because of their age and their lack of a sense of history.
Generational gaps are quite normal
There is a disconnect between people like Simon and millennials, and that’s quite obvious and quite normal. It’s called a generational gap, or whatever. Every pair of generations separated by big social or technological jumps has this radical divide. AND WE COPE. BOTH ENDS OF THE GAP END UP FINE!
Of course we all get some cuts and bruises from the friction, but we all will be fine.
Mostly, by the way, it’s the older generation that triggers the divide, with a forceful demand for conformism. The ‘younger’ people don’t understand that and fight back, reinforcing their identity as a group and as individuals.
We already experienced this before and after the big wars, droughts, mechanisation, financial depressions, social and sexual liberation, computerisation, the introduction and decline of beliefs and cultural views, and everything in between.
Simon skipped the gap
To me it sounds like Simon is the one in distress, btw 😉 And rightfully so! He never went through a (real) generational gap. He was probably spoon-fed prosperity and just wants similar things for his children, just like his parents did. But he can’t have that.
He just found out the generation after him doesn’t dream the same dreams that he and his parents dreamt. They want to disregard the past and embrace the future, but they also face a different future. The propped-up prosperity bubble built by the three generations around Simon has been popped and exposed as a pipe dream.
Up is left, black is love, truth is evil and Simon is part of the problem.
The millennials are not ingrained with the sense of gratitude that people like Simon inherited from parents who likely saw some part of a war. Perhaps they do have a better, less clouded view of the world; at the least it’s different, and fine!
Pretentious and Xenophobic BS
The way Simon wallows in his well-put, but quite belittling narrative on the lives of millennials seems to me detached, pretentious and xenophobic. He might be partly right, but I think the millennials probably have the best tools to explore and thrive in this world. They’re the boss now.
I can imagine this might be frightening for Simon and the like, and to his credit, it seems like he wants to help millennials by inviting them into his comfortable world.
Folks like Simon seem to perceive life in a different way than millennials; they cope with life, problems and lovely things in a different way. That’s fine! Trust them!
Simon needs a hug, not millennials
The lecture Simon gave us in the video sounds more like the victimisation of normal people who have a different view on life, have different tools to socialise and have different needs for an upcoming societal change.
And Simon and other people with similar views, the ‘old guys’, perceive their coping mechanisms as alien, impossible and unobtainable. They’re not, it’s not, it’s fine, they’re fine. Simon should be too.
There needs to be an automagic way to update the address bar while scrolling. This way one can deep-link to a specific part of a webpage without hassle. This is crucial for sharing content on mobile and other ‘fat-finger’ devices.
The address you shared starts all browsers at the top of the webpage instead of straight at the quote! That’s awkward!
This problem was solved quite a long time ago in the language of the web: HTML. They invented so-called ‘anchors’. Anchors work like little pieces of bookmark velcro. Just add a hash sign (#) combined with the anchor name to the address of the page and your browser will jump right to the relevant quote. That’s what we call ‘deep linking’. In the address bar of the browser you get URLs like:
https://example.com/interview.html#favourite-quote
Unfortunately these anchors are invisible to the human eye. As a web developer you can expose an anchor list by adding a table of contents to the document so people can jump right to the part of interest. This is however a lost art, rarely used in favour of a cleanly styled page (which I can relate to).
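As a small illustration of how a site could re-expose its anchors, here’s a sketch (in Python, the way many static site generators do it) that slugifies headings into anchor names and builds a linked table of contents. The slug rules and the page URL are simplified assumptions.

```python
# Sketch: turn headings into anchor slugs and build deep links for a
# table of contents. Slug rules are simplified; URL is hypothetical.
import re

def slugify(heading):
    """Lowercase, replace runs of punctuation/spaces with hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", heading.lower())
    return slug.strip("-")

def table_of_contents(headings, page_url):
    """One deep link per heading."""
    return [f"{page_url}#{slugify(h)}" for h in headings]

links = table_of_contents(
    ["Introduction", "The relevant quote"],
    "https://example.com/interview",  # hypothetical page
)
print(links)
```

Sharing any of those links then drops the reader exactly at the right section, no scrolling required.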
Tagging has been both the content producer’s friend and foe for some years now. It’s the ‘cheap way out’ for relevance in a world dominated by user-generated content. This article shows you one way to attack the cluttered and seemingly random tag clouds, helping your readers make sense of their intuitions and your content.
First let’s answer a question: why would one use tags or taxonomy (defining, in this case, content in a few basic terms) on a blog, photo site or YouTube?
I start with this question because there is no one answer. It rather depends on the person you speak to, be it a content producer, (interaction) designer, librarian or the end user of your content. The following might cover 80 percent of the reasons to use taxonomy or tags:
- make implicit links between content explicit, by actually generating a little group of content items that share that ‘tag’;
- make a machine-readable (more precisely: SEO-compatible) summary of the particular content item that’s also readable for humans;
- provide an intuitive (but mostly serendipitous) pathway for content consumers to meander through your (and others’) content items;
- (this one is for the librarians among us) add metadata to structure your content base.
But what are tags best used for?
Tags are metadata
For all intents and purposes, tags are metadata. Tagging objects, or ‘taxonomy’, stems from the olden days when archaeologists and librarians needed to sort and describe their objects in a neutral, objective and retraceable way.
By making an index card for each object, people could find objects and also determine their relations to other objects. Using reusable words like “rock”, “Denver”, “novel” and “youth”, things can be found, inventoried and ultimately used for research or entertainment.
User generated content clouds
Nowadays tags are the ‘easy way out’ for content producers (read: consumers) to relate content (photos, video, audio, text) to each other by tossing a couple of descriptive comma-separated words into a text box, instead of describing the ‘object’ in a short little paragraph.
This ‘ease’ of tagging has expanded to the user-generated-content generation, resulting in a seemingly trivial, but (when applied to a large mountain of content) surprisingly accurate way to relate to and meander through content.
The most famous and well-applied taxonomy is the one on flickr.com and other non-text-based content hubs (photos, videos, audio): tag clouds, with large words marking heavily used tags and small words the less explored terms.
With a large and productive user base, these sites circumvent the inaccuracy of an unorganised group of content producers each using their own vocabulary to ‘tag’ their content. By sheer power of numbers, some of these ‘tags’ overlap the tags used by other ‘producers’, forming a relevant cloud of photos.
Perhaps more importantly, these tags also, as time progressed, created groups of people joined by a limited, but statistically powerful vocabulary.
Tags are the measure of relevance between pieces of content
The more threads a spider spins between two branches, the stronger the connection between the two branches. The same goes for shared tags between pieces of content.
Google makes a statistical ‘web’ of relevant properties of a piece of content in its search engine.
Hard links (i.e. linkable tags), however, are the editor’s way of ‘forcing’ relevance between pieces of content. The relevance is an implicit web, growing with each piece of content that’s endowed with the same tag.
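That implicit web can be made concrete with a simple overlap measure. A sketch in Python, using Jaccard similarity over tag sets (one possible metric, not the only one):

```python
# "Shared tags as threads between branches": relevance between two pieces
# of content measured as Jaccard overlap of their tag sets.

def relevance(tags_a, tags_b):
    """Shared tags / all tags: 0.0 (unrelated) .. 1.0 (identically tagged)."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b)

# Illustrative tag sets, borrowing the Kissinger example from below.
interview = {"interview", "book", "love life", "Henry Kissinger"}
review    = {"review", "book", "Henry Kissinger"}
recipe    = {"recipe", "soup"}

print(relevance(interview, review))  # 2 shared of 5 total -> 0.4
print(relevance(interview, recipe))  # nothing shared -> 0.0
```

The more disciplined the tagging, the more these scores reflect real relevance instead of vocabulary noise.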
Are tags the SEO path to heaven?
By the way: a quick note for people who think tags are the golden path to SEO heaven. They’re not, if it’s not done right.
If done right, however, tagging makes your content more relevant, just like adding links to pages that cover the same topic, or are otherwise an inspiration or source for your content, would.
In any case: if you focus on the human aspect of taxonomy, you will be rewarded automatically in the long run (if not instantly). SEO is made to enable people to find content, not to feed machines.
The FAT method
Great, now we get into the nitty-gritty of how to make great tags for your content, accurately and without creating ‘link noise’.
The FAT method stands for “Four-Axis Taxonomy”. It’s a basic framework, or reminder, for how to consistently taxonomise your content.
In this method we step away from the ‘spur of the moment’ and ‘you know what I mean’ ways of tagging content and let the statistics take over.
To tag content properly you have to take some time to plan the taxonomy landscape for your current AND future content. This might seem like a librarian’s laborious and tedious approach for an informal tool to define content, but it will pay off.
Our goal with taxonomy is to make only relevant links from one piece of content to the other. Although it’s possible and sometimes even desirable to accidentally link two seemingly unrelated pieces of content -some call it serendipity-, it’s not the goal. Relevance is the reason people and machines ‘click’ on tags, so creating relevance is the name of the game.
To define a piece of content there are four axes along which a person or machine can regard it. These axes might seem arbitrary, but I’ll illustrate them with some basic questions.
One question should always be on your mind: “Which terms are the desirable pathways to reach this particular piece of content?”
One would predefine the tags for each of these axes, and perhaps add some as you go to provide linkage in the future. Tagging should have a foundation of consistency and the flexibility for content growth.
- Subject(s)
- Reason
- Media kind
- Entities
Sorry for choosing an ambiguous word here, but it’s used on purpose. With ‘subject’ I mean both the ‘matter’ your content regards, as well as the persons and objects playing a role in your content.
An interview might be a good example: choose the names of the interviewer and of the interviewed person as tags. Combine the first and last name in one tag; you don’t want all interviews with people called ‘Bob’ to be connected. Or if you do, make it a well-considered choice.
Be concise when tagging the ‘matter’ at hand. The topic of your content might be an interview about the introduction of a book a person wrote about the love life of Henry Kissinger. Reasonable subject tags would be “introduction”, “book”, “love life” and “Henry Kissinger”. This way one relates all content concerning introductions, books, love lives and Henry Kissinger.
“Why is this piece of content here?” is the central question for this axis. It might be arbitrary and even a candidate for a category, rather than a tag.
In any case it’s an important connection between content, and in most cases a relevant one for periodical and recurring content. You might describe that the content is part of a series with a certain name (for example “series”, “love life of famous people I never heard of”), or that its location or situation is relevant (so: “introduction product”, “random ideas” or “walk in the park”).
Define what kind of content you’re tagging, so the form of the content is also searchable. Think in terms of “commercial, film, landscape, soundscape, interview, press release, review, discussion, event, product, demonstration, editorial, etc.”. This might be an arbitrary bunch of tags, but people will remember that the video took the form of an interview with David Hasselhoff.
Adding the form name “interview” makes for a more accurate description of your content. Remember: when tagging audio and video, you might want to mention the execution of that particular medium, like an animation, narration or interpretive dance.
This might be the ‘buckshot’ tagging part of the method, so apply it sparsely and precisely. Entities are things, persons and other ‘sticky objects’ mentioned, referred to or ‘visible’ in a piece of content.
It’s basically the stuff people might remember besides the subjects in the piece of content. You can choose from weird quotes, remarks, colours, sounds and circumstances.
I once looked for a music video where two guys were hopping and overtaking each other in weird suits. When you search YouTube for those terms you won’t find the Fine Young Cannibals’ video for ‘She Drives Me Crazy’. So consider tagging the sticky parts of your content.
Rules to tag by
- Constrain yourself to a minimum number of tags; it’s not a sport to get as many synonyms as possible into one article. The more tags you use, the higher the chance you relate to an unrelated piece of content.
- Be consistent and unambiguous when choosing your tag terms. If possible, make use of the autocomplete functionality of tagging tools, and consider the value of every new tag.
- Make a list of recurring tags for each axis. Say you’re writing bike reviews: make a predefined, researched list of bike types to choose from whenever you start tagging a new piece of content.
- Use tags or terms that are commonly used in the readers’ ‘world’, but don’t fall into the ‘contemporary’ trap and add ‘awesome’ or other terms that are obviously a linguistic fad.
- Taxonomy is designed from the end user’s perspective and might be used to make semantic links between pieces of content, but remember that with a wide audience it’s more valuable to be consistent. The audience will get used to your view and might even adopt it.
- Decide whether you’re going to use the singular or plural form for tags. This will make your tagging more consistent and will make more relevant connections.
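These rules could be sketched as code: a predefined vocabulary per axis, with tags normalised to a consistent singular lowercase form and anything outside the list rejected. The axis names and vocabularies below are purely illustrative.

```python
# Sketch of a controlled vocabulary per FAT axis. Tags are normalised to
# singular lowercase; tags outside the predefined list are dropped.
# Vocabularies here are invented examples.

VOCABULARY = {
    "subject":    {"book", "love life", "henry kissinger"},
    "media kind": {"interview", "review", "press release"},
}

def normalise(tag):
    """Lowercase and trim; crude singularisation to keep tags consistent."""
    tag = tag.strip().lower()
    return tag[:-1] if tag.endswith("s") and not tag.endswith("ss") else tag

def tag_content(axis, proposed_tags):
    """Keep only tags that appear in the axis' predefined list."""
    allowed = VOCABULARY[axis]
    return [t for t in map(normalise, proposed_tags) if t in allowed]

print(tag_content("media kind", ["Interviews", "awesome", "Review"]))
```

Note how “awesome” (the linguistic fad) simply never makes it in, and “Interviews” collapses to the same singular tag every time.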
A while ago I started to record my ‘tweets’ into an archive I own: mecorder.com, based on ThinkUp. The thought behind this is that I found myself posting most of my ideas and thoughts to Twitter, but could not effectively trace back those awesome links and tweets.
ThinkUp is actually a tool to give you insight into your online conversations. So not only does it record your tweets and Facebook posts, it also adds the conversations with others to the mix. That way you can search and see what the conversation was, long after the tool you used has gone. So it’s at the least a handy little tool, but also a very insightful one.
Obviously ‘mecorder’ is a combination of ‘me’ and ‘recorder’. That would be the ultimate goal: to record my online presence, as well as the cloud around my ‘posts’. In the first place just to have a place where my ‘public mental notes’ can live indefinitely, but also for them to have meaning, even if in the most insignificant way.
Value over time
I believe all content has its value, not only in the present, but also as part of a timeline from yesterday into the future. In that respect we are building a history of small, seemingly insignificant pings that hopefully come together into a symphony of some beauty.
Call me a sentimental historian, born in the body of a creator. I just like my content to last, in whatever form it needs.
However, the most important reason to start recording my ‘tweets’ is to own the content I make, including the scribbles and time-dependent nonsense I frequently post. Because as you use ‘services’ like Flickr, Twitter and Facebook, you hand over ‘you’, or parts of you, in tiny little increments.
When these services are gone (and all of them will pass), a significant part of you will be gone, forever, even for you, the creator. Effectively you are not the owner of your own thoughts, however insignificant you think they might be at this time.
This thought made me record my tweets, to analyze, search upon (did you know twitter only makes the 3200 most recent tweets available to you?) and to once in a while re-read, just for giggles and fun.
Apps like the ones you see on your iPhone, iPad and computer are not here to stay. They were, however, a necessary step towards truly building a way to show, edit, make and share information in an optimal way.
Websites were made to show, make and share information. The web is seen as the transparent non-platform for doing stuff with information online. With the coming of mobile, the development of technologies and computing power went too slowly to get an optimised way to share and make via websites on a mobile platform. That’s why apps were the saviour of mobile information, just like applications were on PCs. Apps and applications are an optimised way to get right to the core of the business or information, be it editing, viewing, sharing or consuming text, video, audio, social links or images.
Apps philosophy is inside out
The app philosophy is however inside out, the wrong way around. An app works like a doorway to a really small set of information, or just one view on information, with a tiny Swiss army knife to do stuff with that information (share, edit, etc.). Some developers made apps with a combined set of actions, like Twitter’s. It can take photos and tweet them together with text, and it does so in a very convenient way for certain contexts. But if you strip away the apps from the actions and information, you are left with just that: actions and information.
“I’d like to call Robbert” is one of the use cases of a phone. As it’s designed to make calls, you just have to select Robbert and press Call. With many other applications on the phone we now have to choose the tool first and then select the piece of information we’d like to apply the tool to.
Information as the core
If we take the ‘call Robbert’ use case in a wider view, it makes more sense to select the subject first (Robbert) and then select what you’d like to do with it: “I’d like to contact Robbert.” There might be a bunch of ways to contact Robbert, but it’s Robbert I’d like to contact. In this case it makes more sense to logically present information like ‘contacts’ and offer context-driven tools to edit, share and, in this case, interact with that information.
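The subject-first idea above can be sketched in a few lines of code. This is purely illustrative; the class and method names are invented for this example and don’t belong to any real phone API. The point is that the available actions flow from the information itself, not from whichever app you happened to open first.

```python
# A hypothetical, information-centered model: pick the subject (Robbert)
# first, then choose from the actions his information makes possible.
class Contact:
    def __init__(self, name, phone=None, email=None):
        self.name = name
        self.phone = phone
        self.email = email

    def available_actions(self):
        """The tools on offer depend on what information the contact carries."""
        actions = []
        if self.phone:
            actions += ["call", "text"]
        if self.email:
            actions.append("email")
        return actions

robbert = Contact("Robbert", phone="+31 6 1234 5678")
print(robbert.available_actions())  # -> ['call', 'text']
```

Note the inversion: in an app-centered world you first pick ‘the phone app’ or ‘the mail app’; here the contact record itself tells you what you can do with it.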
The webOS by Palm and, more recently, the Microsoft Metro interface are interesting attempts at an information-centered interface. They give you a combined, aggregated and sometimes curated information view of your social context, news and other information.
Context driven information actions
That thought of “I’m in this context, I need this information to do this for me” is the core of information-centered operation. It basically means making information available in a fluent and transparent way, dependent on the context. By changing the context, the user decides whether the information is presented in a certain way, edited, added to, or called up, whatever is needed.
How many times have you read, or glossed over, the following sentence or one like it:
The Terms and Conditions have changed. Please read and accept to continue accessing the site.
Dear companies, please explain what has changed instead of ‘expecting’ that we’ll just give up on the 19 pages of legal gibberish.
- Someone please make an online service that archives and tracks all ‘Terms and Conditions’ and ‘Terms of Service’ of most online services, as an independent consumer rights service.
- Explain your Terms in a clear format.
- Explain and visualize changes in the Terms in a clear format.
- Agreement reversal: people should be able to have second thoughts about agreeing to the terms, and temporarily suspend the company’s ability to use the data of that particular account.
The Terms Archive
Archive the Terms. Such an archive would look like a ‘social’ press release website: an archive per company of all past and current Terms. In such an archive people can find their own service/website/shop, archive new Terms, and re-read and compare the various versions. Comparing terms would be one of the most important features. This would give consumers insight into the changes and make them part of the change.
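As a rough sketch of how little such an archive actually needs, here is one possible shape for the core of it, using `difflib` from Python’s standard library for the comparison feature. The class and method names are made up for illustration; a real service would obviously add storage, scraping and a web front-end.

```python
# Minimal sketch of a Terms archive: snapshots per company,
# plus a unified diff between the two most recent versions.
import difflib

class TermsArchive:
    def __init__(self):
        self.versions = {}  # company -> list of (date, text) snapshots

    def add(self, company, date, text):
        self.versions.setdefault(company, []).append((date, text))

    def compare_latest(self, company):
        """Unified diff between the two most recent snapshots."""
        (d1, old), (d2, new) = self.versions[company][-2:]
        return "\n".join(difflib.unified_diff(
            old.splitlines(), new.splitlines(),
            fromfile=d1, tofile=d2, lineterm=""))

archive = TermsArchive()
archive.add("ExampleCo", "2011-01-01", "We may use your data.\nNo refunds.")
archive.add("ExampleCo", "2011-05-01", "We will sell your data.\nNo refunds.")
print(archive.compare_latest("ExampleCo"))
```

Even this toy version surfaces exactly the thing consumers currently never see: which sentences changed between two versions of the terms.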
Explain your Terms in a clear format
Message to all online services: if you value your users (be it as online value equity or as actual customers), you should help them through the Terms and Conditions and Terms of Service! There are several formats for that. One would be an indexed video with someone explaining the terms in a modular way, without any redirects to other pieces of information, so no caveats; one should be able to ‘get’ each part of the terms. Another way is a ‘layman’s’ version of the document, explaining the consequences of each of the terms.
Explain and visualize changes in your Terms
This is the actual crux of the matter. It’s hard to see, compare and detect the changes made in any document. Companies should put some effort into explaining, or making insightful, what the actual changes are, and also talk about why the terms changed. A visual format could be a ‘track changes’ (as in Microsoft Word) version of the document. This reflects on how your brand is perceived and on the trust in your service.
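To show that a plain-text ‘track changes’ view is cheap to produce, here is a sketch built on `difflib.ndiff` from Python’s standard library. The `[-…-]`/`[+…+]` markers are an invented stand-in for Word’s coloured deletions and insertions; nothing here is a real track-changes format.

```python
# 'Track changes'-style view of two versions of a document:
# deleted lines are wrapped in [-...-], inserted lines in [+...+].
import difflib

def track_changes(old, new):
    marked = []
    for line in difflib.ndiff(old.splitlines(), new.splitlines()):
        if line.startswith("- "):
            marked.append("[-" + line[2:] + "-]")  # deleted text
        elif line.startswith("+ "):
            marked.append("[+" + line[2:] + "+]")  # inserted text
        elif line.startswith("  "):
            marked.append(line[2:])                # unchanged text
        # '? ' hint lines from ndiff are skipped
    return "\n".join(marked)

print(track_changes("You own your content.", "We own your content."))
```

A company could publish exactly this kind of view next to every new version of its terms; explaining *why* the line changed still takes a human, but showing *what* changed does not.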
Agreement reversal
If someone has second thoughts about accepting the terms and conditions of a particular service, they should be able to reverse the agreement and thus suspend the application of the reversed terms. In broad terms, that would mean the particular account is suspended, with the option to reactivate it, but also that the company is blocked from using the user’s data.
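The reversal idea is essentially a small state machine on the account. The following is a hypothetical sketch, with invented names, of the two states and the one rule that matters: while the agreement is reversed, the data may not be used.

```python
# Hypothetical 'agreement reversal' state: reversing acceptance suspends
# the account (reactivation stays possible) and blocks data use.
class Account:
    def __init__(self, user):
        self.user = user
        self.terms_accepted = True
        self.suspended = False

    def reverse_agreement(self):
        """User withdraws consent: suspend the account, block data use."""
        self.terms_accepted = False
        self.suspended = True

    def reactivate(self):
        """User accepts the current terms again."""
        self.terms_accepted = True
        self.suspended = False

    def data_may_be_used(self):
        return self.terms_accepted and not self.suspended

acct = Account("robbert")
acct.reverse_agreement()
print(acct.data_may_be_used())  # -> False
```

The design choice worth noting is that reversal is reversible itself: the account is suspended, not deleted, so a user who later accepts the new terms loses nothing.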
[2011-05-16] We have an update: after some extra thorough Googling we found the TOSBack website, an initiative of the EFF (Electronic Frontier Foundation). TOSBack is available as an open-source project and implements some of the features described above! It’s still in beta, but it’s a start: it’s an archive and it does give a visual difference view. Funny to see that most changes in the Terms are spelling errors.