While working in my half-assed YouTuber studio with bright, colour-accurate lights, I noticed something: the lights seem to mildly mitigate my winter blues. Let's embiggen that effect! The experiment starts now, with more, larger and better-quality video light panels.
Every year I have a moderate case of the winter blues. It seems mostly connected to spending less time outside and seeing less of the sun, due to the season and the weather that goes with it in The Netherlands. Its effects are regular dips in energy levels, sleeping longer and eating a bit more.
a wonderful thing happened: my energy level increased when I worked with that light on
To counter that change in my day-to-day life I forced myself to go outside and at the very least bike to work every day, BUT this has not happened because of the soft lockdown we're having. I also use a fake daylight lamp with bright blue light. That helps a bit. What also changed was the amount of video calls.
To brighten up my video quality I rigged up my old photography lights (Lowel Ego) behind the camera, and a wonderful thing happened: my energy level increased when I worked with those lights on.
This is the challenge: do new and brighter video lights help even more? That's why I ordered some, and I'm going to use them as work lights. To counter the placebo effect, or some form of bias, we need to try this with more people.
So if you suffer from the winter blues and have access to video lights, please use them as work lights and measure your mood. I'd love to hear from you!
So, the one thing that has been bugging me as an audiophile is, of course: sound quality. Although the mic of a phone or webcam is pretty near to your face, in most cases it's not pointed at your mouth. I wanted to improve the sound, yet not compromise on mobility, and upgraded to this new setup:
That made me get this very, very nice and probably a little too expensive microphone. It's the Røde VideoMic NTG, a microphone with a double lineage: one of professional and semi-professional film and music audio, the other an affordable and noticeable boost to your mobile phone setup.
Røde unified these two families into one: a very good-sounding shotgun microphone (so very directional) that can be connected to your camera, iPhone or laptop. That combination makes it a very versatile microphone, and the cost reasonable. Judge for yourself and your situation, of course.
I also use a little Manfrotto 'magic arm' to mount the VideoMic NTG to the previously used microphone stand. It perfectly complements the shock-mounted VideoMic NTG.
The main concern is having a good mic near your face and pointed at it. For me: I have learned that if you can invest in a quality mic, you buy the thing once. If you don't: prepare to resell and buy upgrades over time.
Streaming and an example
I got one and mounted it to my previously cobbled together setup to create better sounding videos with my iPhone and perhaps even stream it to Twitch, Reddit Public Access and Youtube Live.
Here’s an explainer video saying pretty much the same, but now you can also judge the sound quality.
Waiting on an improvement
The one caveat: I would like to use it as an iPhone microphone for my Zoom, Whereby, Skype and Webex sessions, and this is not possible yet, because unlike the laptop setup, the iPhone does not allow the mic to be connected through USB.
If I could connect an iPhone or iPad through USB, I could use my headphones as a monitor and listen live to myself and the other people in the call. I could also simultaneously charge my iPhone with a little adapter and keep the microphone powered. This way I could listen and stream indefinitely, in high quality, with just the mic and my iPhone.
Perhaps this will be fixed in a later update. We’ll keep you updated.
To get your message across, it's important for people to hear you and see you, but there's something more. Your voice and face are also your tools of enchantment; they help people understand you better. So, how can we make you shine, without breaking the bank (mostly)?
I have seen enough online faces in the last couple of months to last me a lifetime. How come people still sound and look so weird in their online conversations? The quick answer is: people can't, or don't, check.
There you go. Tip #1: always use your vanity mirror and vanity headphones to check that you are audible AND visible.
Tip #2: Computer/laptop. You'll probably be fine with any laptop or computer, but if you have an extra big screen, you'll find working a bit more pleasant.
Tip #3: Face your webcam straight on to prevent up-the-nose webcam angles 🤣. Use a parcel box to boost the height of your laptop and use a separate keyboard.
Tip #4: Network cable! If you connect your computer with a cable to your router, the quality of your video sessions will drastically improve! Here's an example cable. You can vary the length depending on the distance between your computer and your router.
There’s nothing less reliable than a Wifi connection.
Tip #5: Maximise your bandwidth. For example: ask your roommates/family to minimise usage of streaming services like Twitch, YouTube and Netflix, and online gaming.
Tip #6: A working headset (headphone AND microphone) connected to your laptop. In our experience wired headsets work better than Bluetooth headsets. Most of the time your phone will be equipped with a headset that works okay. Don't have a microphone? No worries! As long as you use headphones, you'll be fine. My favourites are still the Apple iPhone headphones, but please check out Aartjan's best conference call headsets list. It might take a while, but his picks are cheap and sound good.
Tip #7: Please use wired headsets! Please do not use Bluetooth headsets. The microphone sound quality can be very poor when combined with Webex, and the battery life of Bluetooth headsets is commonly shorter than you'd like. If you have a wired headset, please use that.
Tip #8: External mouse and/or trackpad. You'll probably be scrolling, zooming and even drawing some during the day, so mouse users will have a slight benefit. Also, a separate mouse or trackpad will likely be a little quieter than the laptop's own.
Tip #9: Good webcam, good light. We like to see your faces, so make sure your webcam works. Make sure your face is well lit, so everybody can read your expressions and help out before you even ask.
Tip #10: Smartphone with WhatsApp. WhatsApp is handy to keep tabs on your coach and team. Make sure you have other backup means to contact everybody in case something goes wrong. There's nothing more awkward than finding out the hard way.
I like to up my game, so what microphone should I get?
That's the question I get asked most. Here are some very common and great-sounding microphones. These are my favourites (that are available at the moment of writing this article).
Okay, what is never stated enough on the sales websites is the following: get the microphone close to your mouth! The most common rule of thumb is:
a maximum of two fists should fit between your mouth and the microphone
If there is more room between you and your microphone, you will hear more and more of your room.
Follow the wise words of these people
Why repeat what others have said better? The short of these videos is: get a mic that's near to your mouth and rejects the sound of your room as much as possible. Make sure your room sounds 'cosy' by using bookshelves, furniture, curtains and carpets. And finally: test your audio, so you can hear yourself, and optimise where possible.
The short of any sound improvement: if you want to drastically optimise your audio for presenting, it might be good to take a sneak peek at streamers and YouTubers. I often advise watching these videos. They are often quite specific to their craft, but have very valid points that might help you out:
Drawing is one of my favourite ways to share and co-create thoughts. 🥰 In an online-meeting-heavy time I kind of felt at a loss 😅, but in this blog post you'll find my slightly nerdy way to still share my drawings with you. 🤩 Time to SHOW what you think: stream your thoughts by using your hands and drawing them!
I found that just explaining something through voice over a conference call isn’t optimal, because an important aspect was missing. I fixed this already in the real world by drawing a lot during meetings.
Now I needed to fix it for online. There are some tablet-based drawing tools that can also stream, but I like a more hands-on approach that works with limited means. Check out my video, the photos with their lists, and the shopping list below.
My setup: Webcam and Computer
Above you can see my setup. It’s really easy:
Microphone stand with boom arm on the table
Ball joint on the boom arm tip
Ball joint screwed into the webcam base (a real cheap webcam I had laying around)
Webcam connected to my computer via USB.
Computer running a browser and navigated to whereby.com
So, what’s up with the blue tape markers? Well, that’s the area that’s recorded. So if I need to keep things out of frame, like an extra screen with the chat, or extra pens and rulers, I know exactly where to put them.
My mobile phone setup
This setup I roughly copied from Nick Shabazz, who is a knife reviewer. His channel is a so-called 'talking hands' channel: he only shows his hands, and the camera is pointed at his table and, in Nick's case, his knives.
This setup is perhaps even easier:
Microphone stand with boom arm on the table
Ball joint on the boom arm tip
Ball joint screwed into the phone clamp I took from a selfie stick.
Phone is clamped in the phone clamp, screen up, camera down and running the whereby.com app.
I could add a long USB power cable so I can stream indefinitely, but I forgot to add that in the photo.
Stuff you need
Okay, so is this expensive? I don’t think so. Excluding your choice of webcam, I calculated it to be 50 to 60 euros, depending where you shop.
Here’s my shopping list:
A nice and sturdy microphone stand with boom. I chose this K&M because it has a heavy, padded base and just the right height, without being too big or 'in the way', even on small tables. Alternatively, you can use pretty much any stand, as long as it can reach over your work area and keep the area where your hands are free.
Because I chose a microphone stand with a boom, I also needed a flexible joint to articulate my webcam on, or affix my selfie-stick iPhone clamp to. This ball joint does the trick pretty well: with it you can screw right into the base of most webcams.
Almost finally, you'll need a mobile phone clamp if you're going to use a mobile phone. This one from Manfrotto might seem like overkill (it's a big name brand in photography), but I only have one phone, so I like it to be as secure as possible, so it won't drop out.
And finally a thread adaptor to make the clamp fit the stand …
When did 'people' become 'users'? In every tech revenue report, website launch and app video we're referred to as 'users'. It seems companies and their employees regard us as dependent, complicit cannon fodder. Which is kind of funny, because their products reflect that! I guess I am 'food' for them. 😅
This is a write up about our common word usage and the influence it has on the way we do our job and the way our products and services are shaped.
Somehow we commonly use words like 'users', 'consumers', 'members' and 'citizens' when we are actually talking about 'people'. Using these words is caustic to our designs, products and services, because #wordsmatter.
In the back of our heads we process data differently when we use the word 'users', rather than something relatable like 'our neighbours', 'John from accounting', or even 'swimmers'. Even using 'personas' has put a distance between us and the people we try to reach, or make things for.
As a result we tend to hesitate, or simply not know who we should talk to in our design processes, and whether the people in our 'target audience' really match that magical persona unicorn. Personas were meant to capture behaviour and preference, but they failed miserably.
Why don’t users care?
As a prime example, our pension fund friends are without exception blind to the elephant in the room: nobody cares about their pension until it's 'too late'. All because they do not know who they are making things for: people with more on their mind than that automated financial transaction.
They are just focussed on their financial products and the baffling feedback that people are discontent, but don't want a dialogue. It's an industry that has found it lost its relevance. No 'user test' or 'persona workshop' will ever fix that.
Context is bigger, better and untapped
We need to consider people as people, with way bigger contexts than 'users'! These contexts are more than just 'holding the phone', 'paying the bill', 'exceeding the subscription', 'willingly passing on their most intimate details via devices', or 'content grazer' and subsequently 'banner watcher', 'rule follower', 'tax payer'.
Instead we should be talking about people with lives: half distracted by your smelly dog, being 'hangry' in a long supermarket line, people that fart, communities with problems, mothers with emotional moments in the park, students with crushing debts, children overcoming ailments, people dying without family, or strangers accidentally creating new life in the back of a 1998 Opel Kadett.
When we say user, we say: dumb f*ck
The words 'user', 'consumer' and 'citizen' represent something totally different: a narrow, dumbed-down grazer, mouth-breather, complicit drone, or sometimes a mysterious group of unicorns willing to spend copious amounts of money on exclusivity or inclusiveness.
Our clients ask us to trade with them via 'seamless' interfaces and to 'make them go' through websites and talking tubes. Rather than researching the behaviour and needs of people, we A/B-test the button to make more money. But who are we kidding, really?
Who do we design, make and create for?
Referring to people in antiquated words is not sustainable! We should do more. We shouldn't design, build and create for an alien race! We're creating for our mom, for the part-time cashier at the swimming pool where your son got his A-certificate, for the soldier defending our right to complain, for the millions of people we like and the millions of people we don't understand.
We create for people of the future, that don’t even exist yet.
Make with empathy and curiosity
We make for people that share many, many traits with us: we want to be happy, we tend to stick together, we find fun stuff fun and procrastinate on everything we think is dumb, we put in effort if it pays off, we like to share in real life rather than via vapid button-pressing, we are easily addicted and distracted, we are persistent and annoying if we need to be, and we are invested in and blissfully happy with the smallest and most important things.
Let’s make, create and design with empathy, curiosity and without prejudice. Let’s discover through a dialogue with people as individuals, but also communities, neighbours, friends, families and couples. Let’s create new, valuable things that are based on intrinsic needs.
Let's use the right words and change our behaviour towards the people we do business with on a daily basis. Let's infuse our work with honest, empowering language and point to the boundless untapped opportunity we left by the wayside because we talked about the behaviour of empty vessels called 'users', 'consumers' and 'citizens', rather than the lives of actual people like, and most likely unlike, you and me.
We created something wonderful and kind of nerdy. It’s called BeerRecognition. It’s an immersive experience with a central role for beer bottles. To do so we mixed pattern recognition, augmented reality and beer brands together in one concept. A recipe for success.
As Strategic Innovator I have the honour and responsibility to create excuses to tinker with new technology for the coolest humans of all: normal, generic people like you and me. And because we’re a digital design agency we tend to use new technologies as inspiration to create something fun, useful, and valuable.
We like to call this flavour of applying innovative technology and human centred ideas ‘applied innovation’
BeerRecognition came to be when we decided: "When drinking awesome beers, people should be enveloped by the full beer experience. A label and a fancy website can only do so much."
And so we applied our process to figure out how we could make the beer tell its own story. We mostly followed the Human Centered Design steps to get to the answers, although to make room for our discovery we also included 'tactile tinkering' in our method.
Human Centered Design consists of these activities that are iteratively run through: Empathise, Define, Ideate, Prototype, Test. In our innovation program we use Prototyping as a leverage to further the other activities. That’s the process of ‘tactile tinkering’.
Tactile tinkering is basically touching and manipulating the tech and design, and figuring out how it can be applied even better. Holding your experiment, then re-ideating, re-empathising and re-structuring the story, is something that drastically improved our outcomes AND learnings.
We try to start our discovery projects with a ‘what if’-scenario. This way it’s open to interpretation and makes us think about the possible outcomes. For Beer recognition it was:
What if we immerse people in the story and inspiration around a beer, or another drink?
Beer seemed a logical choice because the labels are very distinguishable and pretty fun. There are also real fans, materials and backstories around beer, so plenty of inspiration to go around.
Did I mention fun must be part of our discovery projects? Well, it is! 😀 We found that intrinsic motivation when designing an idea is very helpful if you want to push the envelope of what's possible, but also to drive energy in innovation teams and stakeholders.
So ‘just recognising beers’ isn’t cool enough, or rather it was not enough for the concept to work. We wanted something that blows people away, not just inform them with cute factoids.
Pushing technology to discover usage, mythbusters style
Most of the time we have a set goal in mind before we start, like discovering a technology concept: recognising things based on unique physical properties.
We know how things SHOULD work, but most of the time we have not seen a good example based on existing technology.
In true Mythbusters style we work the challenge from two sides:
Can we create our idea with existing technology?
What do we need to replicate the ideal scenario?
Our goal: how can we combine common design trickery and bleeding-edge technology to build the perfect human-centred experience?
Under the hood
Augmented Beerreality in Unity
We wanted to immerse people in the world of each unique beer. Adding Augmented Reality (AR) seemed like a logical step to take.
Unity is a (mobile) visual engine used to build games, AR apps and all kinds of visually powerful experiences. Advised by our AR expert, we built an environment where somebody can step in, show the beer and get immersed in inspiration and information.
Recognizing beers with TensorFlow
At the backend we used the machine learning framework TensorFlow to 'learn' the labels of our beers. Imagine over 730 images of each beer bottle, to learn each angle of a bottle!
There are AI tools available that recognise bottles in general, but we needed to recognise specific brands of beer. Unlike the general cloud services of Amazon, Google, Apple and Microsoft, we had to train a specific model for, for example, a 'Lost in Spice' beer.
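The real pipeline trained TensorFlow models on those hundreds of photos. As a toy illustration of the underlying idea only (matching a new photo against known labels), here's a tiny nearest-neighbour matcher on colour histograms in plain NumPy. The beer names, images and bin counts below are all made up for the sketch:

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Flattened per-channel colour histogram, normalised to sum to 1."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def recognise(image, references):
    """Return the reference label whose histogram is closest to the image's."""
    target = colour_histogram(image)
    return min(references,
               key=lambda name: np.abs(colour_histogram(references[name]) - target).sum())

# Synthetic 'labels': one dark, one light, plus a new photo of a dark label.
rng = np.random.default_rng(0)
references = {
    "Lost in Spice": rng.integers(0, 128, (32, 32, 3)),    # darker label
    "Other Beer":    rng.integers(128, 256, (32, 32, 3)),  # lighter label
}
photo = rng.integers(0, 128, (32, 32, 3))
print(recognise(photo, references))  # → Lost in Spice
```

A trained neural network replaces the histogram distance in practice, but the shape of the problem is the same: reduce an image to features and find the closest known label.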
Innovation is improvisation
We hoped to use a new Intel RealSense camera with depth perception to distinguish people from the background of the environment they step into.
Alas, the camera's release was delayed, and we had to use a technology that weathermen have used since the '90s: green screens. When you step into the experience, you also step in front of a bright green screen. This specific colour is replaced in Unity with beautiful, fun, animated backgrounds.
Of course this seems a bit ‘dodgy’ but it really pulled the experience together. Green screens might not be the end solution, but are perfectly fine to experiment with.
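The trick behind the green screen (chroma keying) is simple: any pixel close enough to the key colour gets swapped for the background. Unity does this on the GPU with shaders, but a minimal sketch of the idea in NumPy looks like this (the tolerance value is just an illustrative guess):

```python
import numpy as np

def chroma_key(frame, background, key=(0, 255, 0), tolerance=100):
    """Replace pixels close to the key colour with the background image."""
    diff = frame.astype(int) - np.array(key)          # per-channel distance to key
    mask = (diff ** 2).sum(axis=-1) < tolerance ** 2  # True where pixel is 'green enough'
    out = frame.copy()
    out[mask] = background[mask]
    return out

# Tiny synthetic example: a 2x2 'frame' with two green-ish pixels.
frame = np.array([[[0, 255, 0], [200, 50, 50]],
                  [[10, 240, 5], [30, 30, 30]]], dtype=np.uint8)
background = np.full((2, 2, 3), 128, dtype=np.uint8)
keyed = chroma_key(frame, background)
```

After keying, the two green pixels become background grey while the red and dark pixels stay untouched, which is exactly why the screen colour has to be so uniform and bright.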
Learnings and next steps, with you?
This was a really cool project to work on, but it's not finished by a long shot, because we've got an arm-long wishlist to incorporate. To mention some items: ditch the green screen and use cameras with depth perception, use gestures to interact with the experience, use more interactive animations, and track the body in the experience, so we can even put some elements ON you.
Not to mention the technical learnings we got and want to elaborate on: take the learning of bottles and labels to greater heights, experiment with faster hardware, but also: create this on a mobile platform. Perhaps this should be a Snapchat plug-in? Who knows! Lots to experiment with!
All in all, we think we're ready to take the next step: implement AR and train our models with TensorFlow for production-ready applications. And we want to invite you to build and re-imagine the beer experience from the ground up!
Imagine the potential to not only create inspiring environments for your experience driven marketing, but also empowering employees to see more with the help of AI, AR and high definition optics we find in every mobile tool in the field.
Do you want to know more about our Innovation Process, AR, AI, or Applied Innovation? Drop us a line! We’re sure you won’t be disappointed!
Life to me is tinkering. Tinkering is all about starting, the power of serendipity and sharing. Tinkering is figuring out how things work, applying that knowledge and, in the process, learning about stuff, people and life.
This is part 1 of the wrongfully named tetralogy on innovation and tinkering. Written on May 16th, 2016
Let me tell you how I learned how tinkering formed my life, friendships and view on the world.
Who of you has ever started a project because it sounded cool, ignoring the fact you knew nothing at all about the subject?
I did! I do, actually I am right now!
This 'gung-ho' attitude about things is the basis of tinkering, or 'klooien' in Dutch. It might sound informal, unfocused or even childish, but it is a rather powerful way to create things, energy and thoughts. Tinkering gave me direction, professionally and personally.
My first start in tinkering
When I was an awkward, buck-toothed boy in elementary school, I found I liked learning in a specific way: I need to immerse myself in a subject. I basically created a world and story around my obsession.
You know those obsessed kids that can’t stop spewing facts about spacecraft, dinosaurs, or physics? I was one of those kids. I still am!
After the first protected, but also emotionally confusing, 12 years of my life, I found this immersion in subjects is a great way to connect with people. That is something pretty important for kids that age. Looking back, that was the first time I started to exploit tinkering socially.
Tinkering is kind of a social process, because the process makes you think about all ingredients you need to progress towards a goal.
Mostly I start with something that fascinates me personally, like recreating a movie that inspired me. Then I just start and continuously hone my skills to get a level of understanding of the subject. Honing the skills means learning about the material I use, but also learning about the people involved.
I try to learn about how they can excel in our little project and make them happy. Come to think of it, I think I approach materials and tools like I approach people. Trying to find harmony between the three of them by exploring the boundaries of all of them.
In this process I keep tinkering and find ways to approach little problems and solve the puzzle.
I say 'puzzle', but that implies a predefined outcome. That is not necessarily the case. I like to work towards a level of completion, but not a specific end result.
With each step we take, we should feel free to backtrack, follow a tangent, or even start over. This is where serendipity comes in.
In a nutshell, serendipity is finding without searching. These are magical moments in life and projects: seeing the glistening of a rough diamond while you're down in the dirt of something seemingly unrelated.
Learning to recognise those little gems and grabbing them, is the toughest thing. In school and life we learned to focus and keep ourselves from being distracted.
I think we need to unlearn this, because dealing with these distractions is exactly what we're built for. Freely following any interesting pursuit is key in tinkering.
The skill to identify, use and trust serendipity without prejudice is tough, but also critical in tinkering. Serendipity is a true catalyst of the tinkering process.
Unlearning the fear of serendipity sounds deceptively simple, but is a tough thing to do. It is like learning a new skill.
Curiosity > Fear
That skill of embracing serendipity is, to me, the skill of letting curiosity win over the fear of the unknown. It is the skill of turning the unknown into a world of possibilities.
The main tool for me to turn fear into a forward motion is curiosity. Pure wonderment about things, people and the possible directions a story can take.
I think following one's curiosity is one of the most underestimated skills in life. Curiosity is like a little compass. It not only drives innovation, but also creativity, and I think even happiness.
With that grand statement I perhaps also explain how tinkering progressed me through various jobs without losing direction. It’s being curious, just doing it and sharing with whomever wants to join.
Can we transform visual and auditive emotional cues into emojis? And if so, can we improve your digital experience by reading between the lines? I like to think so. This is the pitch I sent my colleagues to get them to research the subject.
Mirabeau constantly researches how digital services can improve the human experience. One of the biggest hurdles in interpreting what a person needs is the 'you know what I mean' factor. People can say one thing, but often they need, or mean, something totally different.
My pitch basically asks: could the recording and ‘reading into’ emotions make digital services more empathetic, efficient and powerful? This pitch is one of the directions I found interesting enough to explore.
The next step to intent detection: combine inputs and experience
One part of the research is to see how we can better detect intent. Although there are some interesting services that try to analyse your intent based on text, I think we need to combine a couple of technologies to make sure ‘we know what you mean’.
So perhaps we can record and combine facial expressions, gestures and sounds to determine your tone of voice.
Basically we want to see if we can 'put emojis between the lines' based on your face, voice and gestures, so digital services can better 'read between the lines'.
While we're at it, we should probably also see if we can use machine learning to hone our digital skills in 'reading' your intent. So there's an A.I. aspect to this as well.
Using emojis to annotate intent
To interpret emotions we also need a way to record them together with the words we use to express ourselves.
So could emojis be the music score to our lyrics?
Although emojis are a cultural expression and might be interpreted in many ways, we think we can use some of them as clear cues to express stress, happiness, jokes, sarcasm, anger, excitement or even despair.
Imagine you're a bit peckish. You'd probably say: "I'm hungry". A robot built to fulfil your every need would start cooking a full brunch right away, but is that the right response? Well, that depends, right?
Read between the words
You might have meant: “I’m kind of hungry, so I might want to get a cookie in a while”. In an alternative scenario you might have skipped breakfast and are borderline ‘hangry’ (a fierce form of hunger expressed with a lot of curse words).
There’s a big difference between “I’m hungry 😅” and “I’m hungry 😡”.
Emojis could be a great way to record your intent 'between the words', rather than 'between the lines'. With this added intent, machines can help you as if 'they know what you mean'.
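As a sketch of what that annotation could look like: suppose upstream face and voice models hand us a set of emotion scores (the labels, scores and threshold below are all made up for illustration), then appending an emoji 'between the words' is just picking the strongest emotion:

```python
# Toy mapping from detected emotions to emoji annotations.
# A real system would get these scores from face/voice/gesture models.
EMOJI = {
    "joy": "😄",
    "anger": "😡",
    "stress": "😅",
    "sadness": "😢",
}

def annotate(text, emotion_scores, threshold=0.5):
    """Append the emoji for the strongest emotion, if it is clear enough."""
    emotion, score = max(emotion_scores.items(), key=lambda kv: kv[1])
    if score >= threshold:
        return f"{text} {EMOJI[emotion]}"
    return text

print(annotate("I'm hungry", {"joy": 0.1, "anger": 0.8, "stress": 0.2}))
# → I'm hungry 😡
```

The threshold matters: when no emotion is clear, it's better to leave the text unannotated than to guess, exactly the 'that depends' problem from the brunch robot above.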
That's the second part of our research: can we use emojis to predict, personalise and help you in a better way?
TL;DR: Let's see if we can detect emotions, transcribe them into emojis, and use them to read between the lines, to better digital and physical services alike.