Human Machine Interface: Highlight of SXSW 2018

23 March 2018 | Categories: Blog

Last year at SXSW 2017, the highlight was Mary Lou Jepsen, who, with her portable MRI in a helmet, was able to get instant 3D images of the brain. She announced brain-to-brain communication by 2022.

This year, the "wow" highlight goes to Ramses Alcaide from Neurable. Neurable uses the waves emitted by our brain to understand what we really want. With a self-learning artificial intelligence layer on top, the BCI helmet analyzes and interprets our brain waves and immediately deduces the executable orders we want to convey to a machine, without the need for us to make any gesture!

SXSW 2018 - BCI

During his demonstration, a player wearing a virtual reality helmet with six BCI sensors was able to interact very rapidly in a video game where he had to fight for his life against many enemies. An impressive demo that surpasses all the small BCI experiments seen so far. According to its designers, the tool requires only a one-minute calibration to a new brain, and then self-learning begins. Over hours and days, it interprets the nature of our brain waves better and better … and you become invincible thanks to the speed of your reactions.

SXSW 2018 - machine

Within two years, the designers promise an even less invasive BCI that could fit behind our ear at the tip of a temple arm on augmented reality glasses, capable of recording and analyzing our brain waves.

As envisaged in the image here, this man will see timely suggestions appear in his AR glasses. The AI will identify the person he meets and know the price of his clothes; by reading his reactions through the BCI, it will be able to order the groceries for the dinner he really wants. Passing by the Burger King image, it observes the association of ideas this triggers in his brain … and sends a command to Amazon with the ingredients of his favorite recipe, to be delivered immediately … without him having to blink an eye.

Dream or nightmare?! To get a feeling of what our life could be like in a full AR world, watch Keiichi Matsuda's video Hyper-Reality, which illustrated the concept in 2016.

So this should mark the end of the early age of the imperfect human/machine interface, where, as today, we have to adjust to each machine's interface; now it is up to the machine to do so, by tuning itself to our most powerful organ: our brain!


A new architecture for AI is required.

To control all this, we must rethink the architecture of our AI systems. This is the challenge that Liesl Yearsley of Akin wants to address. She is the former owner of Cognea, an AI startup that IBM bought six years ago. With it, IBM could develop the universal interfaces with the right APIs to transform an AI tool capable of winning Jeopardy! into what is now Watson. For her, our typical current AI architecture is historically based on verticals, each dedicated to one issue. She believes we need a transversal system that could control all the AI verticals for us.

Indeed, we all know by now that – given a clearly defined objective, perfect data, and specific rules that do not change over time, as in the game of Go – an artificial intelligence built specifically for that mission always wins, as demonstrated by AlphaGo. (We also know, thanks to chess experiments after Deep Blue's win over Kasparov, that the pairing of one AI and one human brain is the best combination.)

In our everyday life, the conditions stated above rarely hold: we constantly change aims in response to the data we receive, which are themselves often imperfect … in an environment where the rules of the game change. This is where our human intelligence is extremely strong, with very fast “perception-action” loops involving all areas of our intelligence.

Liesl Yearsley has set herself the task of designing an AI architecture closer to our everyday life … and she promises results within two years.


Usages and interactions with robots.

Numerous findings from experiments on the human-machine interface show that humans need empathetic feedback to communicate better with a machine. This is what drives the development of small, friendly robots that watch you kindly, with eyes full of empathy.

For many researchers there is an interesting parallel to exploit in the psychology of our relationship with animals: if you put a camera on the head of your horse, you can see precisely, through the movement of its ears, whether or not you have captured its attention.

So, in summary – for this new man-machine interface with our brain – we must now consider a new AI architecture, on an open-source platform, that would control all of our applications driven by our brain. This platform would be our true trusted third party, because it precisely understands and reads our brain, and has serving us as its only objective. It will make sure that each vertical it controls has a clearly defined mission and is able to carry it out. It will orchestrate across all the verticals, transmitting the orders issued by our brain – which it can read perfectly well – to the right vertical.
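To make the idea concrete, here is a minimal sketch of that transversal orchestrator pattern: verticals with declared missions, and one layer that routes decoded "orders" to the right vertical and refuses everything else. All names (Vertical, Orchestrator, the sample intents) are hypothetical illustrations, not an API from any product mentioned in this article.

```python
# Hypothetical sketch of a transversal orchestrator controlling AI verticals.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Vertical:
    """A single-purpose AI with a clearly defined mission."""
    mission: str                  # what this vertical is allowed to do
    handle: Callable[[str], str]  # executes one order within that mission

class Orchestrator:
    """Transversal layer: routes decoded brain 'orders' to the right vertical."""
    def __init__(self) -> None:
        self.verticals: Dict[str, Vertical] = {}

    def register(self, intent: str, vertical: Vertical) -> None:
        self.verticals[intent] = vertical

    def dispatch(self, intent: str, payload: str) -> str:
        # Only forward orders to a vertical whose declared mission covers
        # the intent; anything else is refused, keeping each vertical on task.
        if intent not in self.verticals:
            return f"refused: no vertical with a mission for '{intent}'"
        return self.verticals[intent].handle(payload)

# Usage: two toy verticals controlled by one orchestrator.
orchestrator = Orchestrator()
orchestrator.register("groceries", Vertical(
    mission="order groceries", handle=lambda item: f"ordered {item}"))
orchestrator.register("lights", Vertical(
    mission="control lighting", handle=lambda cmd: f"lights {cmd}"))

print(orchestrator.dispatch("groceries", "ingredients for dinner"))
print(orchestrator.dispatch("music", "play jazz"))
```

The point of the design is the refusal path: the orchestrator, not the verticals, decides what each one may do, which is exactly the "trusted third party" role described above.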

To illustrate the amount of work and education still needed, let's take Alexa as a counter-example not to follow. Supposedly this “smart speaker” (which is in fact a “smart microphone”) will simplify our lives. In fact, the mission of this vertical AI system, which is not even free, is completely hidden and deceptive: its primary objective is to increase Amazon's sales. In addition, this application platform needs access to all our private conversations to reach this poor result, without stating precisely what processing it intends to perform or what data it keeps afterwards.

InnoCherche – March 2018.

