Can Artificial Intelligence be Organic or Biologique?

After having discussed how artificial intelligence at times contributes to pollution, both through running algorithms and through the server capacity that big data demands, it may be interesting to consider whether there are more sustainable alternatives. A wave of greater care about sustainability seems to be rolling into the artificial intelligence community. In that light I found the new company AnotherBrain interesting. I came across them after news spread online about their €19 million Series A fundraising. They have a good pitch on their website:

“We create a new kind of new Artificial Intelligence: bio-inspired, frugal (low data and energy) and human-friendly, an alternative to deep learning, for people”

The company announced that this investment round was led by a consortium of public and private investors, including SEB Alliance and Robinson Technologies, as well as existing shareholders Alpha Intelligence Capital, Daphni and Cathay Capital. They have developed an AI they have dubbed OrganicAI™ (they even trademarked it).

As a side note, however, being French they may have slightly misunderstood, or deliberately ignored, the English meaning of the word: "organic" in French is biologique.

Organic matter is matter that has come from a once-living organism; is capable of decay, or is the product of decay; or is composed of organic compounds.

Biologique is French; it means "organic" in the colloquial, Whole Foods sense of the word. It seems to be a buzzword designed to inspire confidence in the alleged natural goodness of the product associated with it.

OrganicAI™ can thus be understood in this latter sense: in a way, the Whole Foods of artificial intelligence. It is said to mimic how the human brain learns and functions, and therefore does not rely on a corpus of big data for training. It claims to operate at lower cost and with lower energy requirements. It is currently delivered as software, with an AI chip said to follow soon. They make audacious claims such as:

“…helping cars achieve full autonomous driving (Level 5) by the middle of the next decade.”

Whether they can live up to this remains to be seen. Bruno Maisonnier, the co-founder, is known as a serial innovator: he was founder and chairman of Aldebaran Robotics, the developer of the humanoid robots Pepper and Nao, a company later acquired by SoftBank Group.

  • 2017: seed round of €11 million
  • 2019: Series A of €19 million

It promises three things:

  1. Explainability
  2. Independence from the cloud (making it more private and secure)
  3. Reduced energy consumption

AnotherBrain has 38 employees and seems to be a fast-growing company.

Personally I think OrganicAI™ sounds like a much-needed solution, especially with regard to the problems arising around the three issues AnotherBrain outlines. However, there is no easy way to understand how their technology actually delivers on any of these three points. I can be hopeful and critical at the same time.

In the video, Maisonnier talks about his "highly educated team". He says: "We don't have to deal with this tricky problem of privacy." He talks of kindness, benevolence and personal responsibility, yet in a tone that strikes me as irresponsible. This sounds a lot like the federated learning that Google talks about, yet Google's model and applications are far better explained than OrganicAI™, so AnotherBrain's competitiveness here can be challenged.
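OrganicAI™'s internals are not public, so purely as an illustration of the comparison above, here is a minimal sketch of the federated-averaging idea (FedAvg) that Google describes: each client trains locally on data it never shares, and a server only aggregates the resulting model weights. The linear model, learning rate and client sizes below are my own toy assumptions, not anything from AnotherBrain or Google.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent steps on a linear model.

    Raw data (X, y) stays on the client; only the updated weights leave."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two clients, each with a private local dataset drawn from the same task.
clients = []
for n in (50, 100):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds: broadcast, train locally, average
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(global_w)  # close to [2, -1], learned without pooling any raw data
```

The design point is that privacy comes from the protocol, not from trust: the server never sees a single training example, only weight vectors.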

My general outlook is hopeful, yet my current assessment is that they have to start with their first promise, explainability, if they want me to understand their solution. The current sell seems to be Maisonnier himself, his experience with robotics, and the chip. What they are striving for seems a worthy cause, so AnotherBrain really has to step up their game in making clear how their solution will be more explainable. Right now it looks like explainability to "the experts" as opposed to a "black box" nobody can explain.

Anand Rao and Ilana Golbin, May 18th, 2018, retrieved from the PwC blog:

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts.

Explaining how algorithms "think" is already wrong at the outset: algorithms cannot think, they can only compute. And even if a query is understood, that does not mean it is answered; believing you have answered it without continually ensuring responsible action is questionable in itself.

Explainable in no way equals responsible, understandable or transparent. How will they endeavour to fulfil their first promise, and how does their solution compare to others in terms of energy efficiency? On the one hand, I find the information available online severely lacking.

There are, additionally, no social scientists on the team, which consists mostly of AI engineers and neuroscientists, and as such I strongly question how diverse, responsible AI for society could be built by a team so heavy on the natural sciences and so devoid of other competencies.

On the other hand, I must congratulate AnotherBrain on their funding round; we do need companies that are far more responsible in their applications of artificial intelligence. This can of course improve as the company develops further.

So I kindly ask Bruno Maisonnier to take this to heart in the next stage of development for AnotherBrain and OrganicAI™. It is in the best interest of humanity, and of our planet, that we make AI work.

This is day 128 of #500daysofAI. If you enjoy this article, please give me a response, as I do want to improve my writing and discover new research, companies and projects.


