
Shaping Tomorrow: Our Vision on the Future of AI

  • Author: Luc-Yves Pagal Vinette
  • 24 June
  • 10 min read

Last updated: 27 June

Introduction


We were absolutely delighted to join GSD’s podcast about Top Global Startups with Gary Fowler recently. It was indeed a short discussion, but we managed to get a message across about the industry’s need for a new approach to AI, in which we introduced our views based on the Bio-Inspired AI model: https://www.linkedin.com/posts/fowlerinternational_gsdpresents-topglobalstartups-garyfowler-activity-7339306871037566977-NbBh?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAAFHpsBqx_4cQZwOg1EsJozLDJz8UfOdzo From the GSD podcast, this particular sequence was also highlighted: https://www.linkedin.com/posts/fowlerinternational_gsdpresents-aiforall-globalstartups-activity-7341495150759571456-c6nf?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAAFHpsBqx_4cQZwOg1EsJozLDJz8UfOdzo


Nevertheless, I wanted to provide a comprehensive explanation of our vision for the Bio-Inspired AI model and how it differs from current LLM-based AI models.


The purpose of this new approach to AI is not to brag about our intelligence but to humbly open up new possibilities in our industry. Indeed, there is a growing consensus that GenAI and Agentic AI are being challenged on their efficiency and their ability to address specific operational use cases. Therefore, I believe the AI market deserves a fresh look, as it is too often used to hide a lack of innovation and creativity and to attract as much investor money as possible.


The Need for a New AI Approach


As so many of you are witnessing, LLM-based GenAI and Agentic AI are now starting to show failures across the market. Despite never-slowing hype, real-world implementations have demonstrated a critical inability to deliver operational value or to reduce total cost of ownership. Can we even call it a market, and if so, why?


The current state of what I call pseudo-AI does not address the requirements and expectations of B2B customers (enterprises) or even of technology companies, which we now identify as Techcos. Indeed, there are multiple reasons why things are not going in the right direction:


1. LLMs (Large Language Models)


Used for GenAI and serving as the base approach for Agentic AI, LLMs are cumbersome, dependent on training/inference loops, and full of biases, with no capability to be specifically trained for given purposes or use cases. Therefore, as designed, they tend to be extremely painful to maintain or improve, and certainly very costly to implement.

 

2. Token Management and GPU Dependence


This argument is, in my view, the most important one. Token management is a core challenge in LLM and GenAI systems. Every prompt is split into tokens, which consume GPU memory and compute resources during processing. As request volumes grow, current token management systems struggle to allocate and release GPU resources efficiently, often resulting in oversubscription and high operational costs for enterprises. This largely benefits GPU vendors such as Nvidia and hyperscalers, but makes small-scale and, more importantly, large-scale AI deployments very expensive and less practical for many corporate environments.
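To make this memory pressure concrete, here is a back-of-envelope sketch of how token counts translate into GPU memory through the KV cache of a transformer-based LLM. The model dimensions below are illustrative assumptions for a 70B-class model, not figures for any specific product or deployment:

```python
# Back-of-envelope KV-cache sizing, illustrating why token volume drives
# GPU memory pressure in LLM serving. All model figures are assumptions
# chosen for illustration only.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, bytes_per_val=2):
    """Memory for keys + values: 2 tensors per layer, fp16 (2 bytes) by default."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_val

# Hypothetical 70B-class model: 80 layers, 8 KV heads, head dimension 128.
per_request = kv_cache_bytes(80, 8, 128, seq_len=4096, batch=1)
print(f"KV cache per 4k-token request: {per_request / 2**30:.2f} GiB")

# A few dozen concurrent requests dwarf the memory of a single high-end GPU.
print(f"64 concurrent requests: {kv_cache_bytes(80, 8, 128, 4096, 64) / 2**30:.0f} GiB")
```

Even at these rough numbers, 64 concurrent long-context requests need on the order of 80 GiB for the cache alone, which is why serving stacks oversubscribe hardware or queue requests.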

 

3. Training & Inference loops vs Data Sovereignty


Data sovereignty, the need to keep data within its country of origin and under local laws, has become a central challenge as organizations deploy LLMs, GenAI, and Agentic AI across regions through expanding cloud and edge AI services. These AI models rely on customer data for training and inference, raising tough questions: can we guarantee that sensitive data stays local during training?

LLM-based AI is trapped in training/inference loops, making the system both ineffective and costly.
AI Training & Inference loop

What about when inference happens and results are transferred back to applications—how do we ensure security against breaches? Strict regulations like GDPR and CCPA mean that mishandling data can have serious legal consequences. As AI adoption grows, companies must prioritize local data processing, stricter governance, and transparency to protect sensitive information and stay compliant.

One last thing: how could an AI platform trapped in training/inference loops, without any hint of self-learning capability, be remotely capable of turning sentient?

 

4. GPU and Energy Efficiency


When we consider AI for either small or large-scale deployments, we need to weigh how much impact the dependence on GPUs, together with the mismanagement of tokens and resources, has on the energy required to address an efficient and pertinent use case with LLMs, GenAI, or Agentic AI.

GPUs are fast but power-hungry, making LLM-based AI unsustainable for both small and large-scale deployments.
Power Hungry AI

Indeed, take the Nvidia DGX H100 as an example: typically configured as a dual-socket server (2 CPUs) hosting up to 8 H100 GPUs, it can consume between 7,000 and 10,000 watts. At such a level of power draw, it would be difficult to implement such approaches at scale, or even to extend them to the edge, where both space and power are severely limited.
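To put those wattage figures in perspective, here is a rough sketch of the annual electricity bill for a single such chassis. The $/kWh price and full utilization are assumptions for illustration; real tariffs and duty cycles vary widely:

```python
# Rough energy-cost arithmetic for the 7-10 kW chassis figures cited above.
# The electricity price is an illustrative assumption, not a quoted tariff.

def annual_energy_cost(power_kw, price_per_kwh=0.15, utilization=1.0):
    """Annual electricity cost for a box drawing `power_kw` continuously."""
    hours_per_year = 24 * 365
    return power_kw * hours_per_year * utilization * price_per_kwh

for kw in (7, 10):
    print(f"{kw} kW chassis: ~${annual_energy_cost(kw):,.0f}/year at $0.15/kWh")
```

At these assumed rates, a single chassis costs roughly $9,000-13,000 per year in electricity before cooling overhead, which compounds quickly across a fleet.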


5. Lack of adaptability


A major limitation of GenAI and Agentic AI systems is their lack of true adaptability. These models excel at generating content and responding to prompts based on patterns in their training data, but they struggle to generalize beyond what they have already seen or been trained on. When faced with new, unusual, or rapidly changing situations, GenAI models often fail to adjust their behavior or reasoning in meaningful ways, particularly in specific contexts, such as given use cases, where conditions change or evolve abnormally.

LLM-based AI relies on a pre-learned, pre-defined approach limited by the biases of its training data; it is unable to go outside its set boundaries.
Rigid & Constrained GenAI

This is because their outputs/responses are tightly bound to the data they were trained on, making them less effective in dynamic environments or when handling rare or evolving information. Unlike adaptive AI, which can learn and optimize in real time, GenAI typically requires extensive retraining to incorporate new knowledge/capabilities or respond to shifting or evolving contexts. 

 

As a result, these systems can be extremely slow to react or unresponsive to change, unable to personalize deeply, and may miss important nuances in unfamiliar scenarios, highlighting a critical gap for organizations seeking flexible, future-proof AI solutions. Additionally, LLMs are very cumbersome and would be virtually impossible to apply to Telco, Techco, or enterprise low-layer services; they are only applicable to higher-layer services such as OSS, BSS, or API exposure functions. For more details, please read my other paper on AI-RAN.


Considering these challenges, is it reasonable to refer to this as AI?


Ambear Group’s Bio-Inspired AI Vision


Our AI vision is shaped by our board members’ deep experience and understanding of the telecom industry. Over the years, we have witnessed the steady commoditization of connectivity-based services. Despite several attempts to move the industry beyond hardware dependence—such as the interesting but ultimately limited approaches of SDN and NFV—true transformation has remained inconclusive.


More recently, we’ve seen a shift toward the application layer, with innovations like Multiverse, 5G, network slicing, API network exposure, and now LLMs with both GenAI and Agentic AI. Yet, in our view, these technologies have missed the mark. Also, rather than delivering true intelligence, recent LLMs and related AI models function more like advanced search engines with advanced analytics. Our experience in both emerging and developed markets has convinced us that the current AI approach was not designed to deliver operational benefits or to address truly valuable enterprise and Techco use cases such as Edge deployments.


Our vision is to deliver a next-generation AI technology: Bio-Inspired AI. This approach is specifically conceived and designed for enterprises of all sizes and for Techcos. It offers truly intuitive capabilities to address key use cases in both emerging and developed markets, supported by distinctive characteristics.


Key Characteristics of the Bio-Inspired AI model


1. Self-Learning Capabilities


AI should focus on intelligence, benefiting from pre-training while avoiding constant re-training and manual intervention. Self-learning means lifelong adaptability, with the inherent ability to learn from events and critical data. The AI platform must therefore be able to retain critical knowledge while absorbing and integrating new data, avoiding the catastrophic loss of previously learned knowledge.

In Bio-Inspired AI models, self-learning adds complexity, but the ROI is significant: lower costs, cost-effective access to AI, and unlimited possibilities.
Self-Learning should escape from Training/Inference cost loops

Self-learning also addresses the complex aspects of data sovereignty by minimizing the need to move critical data outside local or regional boundaries for retraining. It also reduces dependence on inference, significantly lowering risks when applying trained data.
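As a toy illustration of the "learn from events, keep data local, never retrain from scratch" idea, the following sketch learns a metric's normal range incrementally (using Welford's online algorithm) and flags outliers. The class name and thresholds are hypothetical, not part of any Ambear Group design:

```python
# A minimal self-learning sketch: the detector updates its model one event
# at a time, keeps no raw history, and needs no retraining pass.

class OnlineAnomalyDetector:
    """Learns a metric's normal range incrementally, one event at a time."""
    def __init__(self, k=3.0):
        self.n, self.mean, self.m2, self.k = 0, 0.0, 0.0, k

    def update(self, x):
        # Welford's incremental mean/variance update.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomaly(self, x):
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) > self.k * std

det = OnlineAnomalyDetector()
for latency_ms in [10, 11, 9, 10, 12, 10, 11]:
    det.update(latency_ms)
print(det.is_anomaly(50))  # a 50 ms spike stands out against the learned norm
```

The raw measurements never leave the node; only the tiny learned state (count, mean, variance) exists, which is what makes this pattern friendly to data-sovereignty constraints.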


2. End of GPU Dependence


A GPU-centric AI solution brings a wealth of drawbacks: high costs, supply-chain constraints, and energy consumption. The combination of these elements makes the AI solution hard to scale beyond a certain limit and inapplicable in service contexts where both space and power are scarce, such as Edge or rural locations.

Commoditized CPUs are the next chapter for AI: sustainable, energy-efficient, able to tackle space- and energy-constrained contexts, and a natural pairing for Bio-Inspired AI models.
Power Hungry and Costly GPUs

An AI solution based upon commoditized CPUs would definitely be a game changer, aligning nicely with sustainability goals. Bio-Inspired AI is in fact a very low-energy model: it is event-driven, cross-AI-domain communications only happen in given circumstances, and it is not talkative by nature.


3. Sustainability & Energy Efficiency


In a world and at a time when climate change has never been more present and visible, it is ironic to base a technology on hardware that is extremely power-hungry. GPUs are a luxury, certainly not an obligation, yet LLM and GenAI-type technologies are cumbersome and leverage very large data sets.

Used at large scale over the long run, GPUs could strain the power grid and the ability to sustain such approaches.
Sustainable Intelligence for Enterprises of all sizes

Nevertheless, a precise, purposefully designed bio-inspired AI platform would be lightweight, requiring a fraction of the resources that a typical LLM-based GenAI solution would need.


4. Lightweight and Scalable Architecture


The telecom industry is attempting to evolve in a direction so challenging that both technology and leadership find it difficult to respond. Indeed, for years we’ve seen very advanced technologies, or demands from industry actors (Telcos, Techcos, enterprises), that couldn’t be fulfilled for lack of the additional intelligence needed to sustain the required level of complexity. Such technologies include Network Slicing, 5G, Multiverse, Edge services, wireline-wireless architecture convergence, Multi-Domain Service Orchestration and Service Assurance, as well as Open RAN.

Lightweight AI models can tackle even the tiniest applications or service designs, enabling innovation at all levels.
Lightweight AI like a Feather

Subsequently, this resulted in market failures, half-baked market responses, inapplicable solutions, or very slow transitions caused by formidable engineering challenges. I believe some of these technologies did require an additional layer of intelligence such as AI. However, to make AI fully capable of addressing various domains or sets of expectations, especially Edge or lower-layer service intelligence such as RAN or NWDAF, an extremely lightweight AI platform with scalable capabilities would be required.


De facto, Bio-Inspired AI models are crafted with a use-case purpose in mind; they are therefore made as lightweight as possible for self-learning, cross-domain communications, and real-time capability. A nice side effect is that being extremely lightweight and extremely scalable lets them serve adequately everyone from 2-4-site enterprises to multinationals, especially since scaling can be rather complex to manage in either the vertical or horizontal direction, particularly when it is time to free up resources.

 

5. Leveraging Commoditized CPU Hardware


Commoditized CPU hardware has the main advantage of rendering GPUs optional. Most importantly, as hinted in the previous chapter, scaling can be complex and can become extremely costly when spikes of communication rise unexpectedly, pushing costs beyond any prior anticipation. One might argue that hyperscalers offer cost-limiting tools that reduce that risk, but at a possibly very high price: customer trust.

Reducing the cost of AI access is not only noble but a guarantee of solutions applicable to all needs and any use case of value; today’s approach, however, is highly selective toward those who can afford it.
Commoditized CPUs for an Affordable AI Access & Scalability

Scaling is a natural occurrence in Telco/Techco cloud, hyperscaler cloud, or edge service contexts; like anything else, it must be managed, and commoditized CPU hardware enables it without breaking the bank for the customer, whether a Techco or an enterprise. One key element: leveraging typical CPU hardware solutions would certainly help democratize access to AI, improve market competitiveness for emerging markets as well as SMEs, and could naturally accelerate the much-anticipated B2B2X approaches.

 

6. Cross-Domain Communication and Real-Time


Currently, AI is conceived as a set of silos, where each implementation is typically isolated from the others, mostly because different AI vendors are involved. On the contrary, we believe that Bio-Inspired AI models, even when implemented in different domains such as network, service assurance, Cloud & Edge, or Security, should be seen as different parts of the same global AI organism, where cross-domain communication is a critical part of the required alignment between the various AI parts.


Siloed AI has no future in a world where telecom infrastructure is largely distributed and AI should be used for service and configuration alignment across domains.
Cross-Domain & Real-Time AI communications

Like the various instruments of an orchestra working together to produce a symphony, we anticipate that clear alignment and cross-domain communication will be critical to align everything. For instance, a security breach would naturally trigger a set of re-configuration requirements impacting the networking, cloud/edge, and monitoring layers. Therefore, real-time communications, possibly implying different methodologies than the typical TCP/IP stack, would be required to limit the possible damage throughout the service infrastructure.
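The orchestra analogy can be sketched as a simple publish/subscribe pattern, where a single security event fans out to the other domains. The bus, topic names, and handlers below are illustrative assumptions only, not a description of any real Bio-Inspired AI implementation:

```python
# A minimal event-driven cross-domain coordination sketch: a security
# breach event fans out to networking, edge, and monitoring handlers.

from collections import defaultdict

class EventBus:
    """In-process publish/subscribe: domains only talk when events occur."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan the event out to every subscribed domain; collect their actions.
        return [handler(event) for handler in self.subscribers[topic]]

bus = EventBus()
bus.subscribe("security.breach", lambda e: f"network: isolate {e['segment']}")
bus.subscribe("security.breach", lambda e: "edge: revoke workload tokens")
bus.subscribe("security.breach", lambda e: "monitoring: raise alert level")

actions = bus.publish("security.breach", {"segment": "edge-site-42"})
for a in actions:
    print(a)
```

Because communication is event-triggered rather than a continuous chatter, this style of coordination also matches the "not talkative by nature" property described earlier.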


7. Targeted Use Cases: B2B and Techcos


AI must remain lightweight, efficient, and less power-hungry than typical LLM-based AI implementations while guaranteeing cost-efficient access to AI. At Ambear Group, we believe that AI must be tailored to given use-case environments, where an adaptive AI layer could clearly bring significant value, notably operational value.

To be effective and successful, Bio-Inspired AI targeted use cases require cost-effectiveness, adaptability, scalability, and sustainability.
Bio-Inspired AI based Use Cases

For an initial approach, we believe that Overlay Networking, Proactive Security, Cloud/Edge, and Service Assurance are domains of great importance and interest for both Techcos and enterprise entities. For instance, we believe that:

  • Overlay Networking should evolve toward self-optimization, able to determine the potential energy, carbon, and cost impacts of each configuration option, and should be elevated to sustainable networking.

  • Security should be proactive rather than reactive, reinforcing its ability to respond to complex attacks and avoiding panicked reactions that result in inefficient measures.

  • Cloud & Edge consumers are still unable to leverage multi-cloud/edge solutions for cost optimization or for an adaptive, intelligent service structure that maximizes operational value by using the best of each hyperscaler partner’s cloud/edge offerings.

  • Service Assurance requires a significant move toward data convergence, as Techcos and enterprises will gradually become dependent on converged data for monitoring and for access to readable KPIs. Advanced AI capabilities would facilitate the convergence of automation and overall service status to inform and assure customers or users of end-to-end service quality.
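As a toy illustration of the KPI-convergence point above, the following sketch merges per-domain health scores into one end-to-end service verdict, gated by the weakest domain. The domain names, weights, and thresholds are hypothetical assumptions:

```python
# A minimal KPI-convergence sketch for service assurance: per-domain
# availability scores are merged into a single end-to-end status.

def end_to_end_status(domain_kpis, degraded_below=0.9, down_below=0.5):
    """Merge per-domain scores (0.0-1.0) into one verdict; the weakest link gates quality."""
    worst = min(domain_kpis.values())
    if worst < down_below:
        return "DOWN"
    if worst < degraded_below:
        return "DEGRADED"
    return "HEALTHY"

kpis = {"network": 0.99, "cloud_edge": 0.97, "security": 0.85, "assurance": 0.99}
print(end_to_end_status(kpis))  # security at 0.85 degrades the whole service
```

The design choice here is deliberate: end-to-end quality is reported from the customer's perspective, so one degraded domain degrades the converged status rather than being averaged away.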

 

Key Characteristics of the Bio-Inspired AI model

  • True Adaptability: Unlike LLM-based GenAI, Bio-Inspired AI can learn and evolve in real time, minimizing the need for costly retraining and manual intervention.


  • Energy Efficiency: Designed to run on standard CPU hardware, it dramatically reduces energy consumption and operational costs, making it suitable for both edge and large-scale deployments.


  • Data Sovereignty and Security: Self-learning capabilities enable local data processing, helping organizations comply with regulations like GDPR and CCPA while minimizing data movement and exposure risks.


  • Lightweight and Scalable: The architecture is purpose-built to be lightweight and scalable, supporting everything from small enterprises to multinational Techcos, and adaptable to both emerging and developed markets.


  • Cross-Domain Intelligence: Enables seamless communication and coordination across network, cloud, security, and service assurance domains—breaking down silos and enabling holistic, real-time decision-making.


  • Cost-Effective and Democratized: By leveraging commoditized CPUs, the platform lowers barriers to AI adoption for SMEs and emerging markets, promoting broader market competitiveness.


Conclusion


The rapid evolution of AI demands solutions that are not only powerful but also practical, adaptable, and sustainable. Existing GenAI and Agentic AI models have fallen short of delivering operational value, scalability, and cost-effectiveness for enterprises and Techcos.


Bio-Inspired AI represents a new paradigm—one that combines self-learning, energy efficiency, and real-time cross-domain intelligence to address the real needs of businesses in both emerging and developed markets. By moving beyond the limitations of today’s LLM-centric approaches, we are building an AI platform that is ready to deliver measurable value, drive operational excellence, and support the next generation of enterprise innovation.

 

Written by Luc-Yves Pagal Vinette

 
 
 
