
AI agents and PortaAIM won’t take over humanity (at least for now). Although, at times, we think they should. Nevertheless, the recent growth of generative pre-trained transformers, large language models, and multimodal interaction has vastly extended the day-to-day application of AI in our lives and businesses. More thrilling: in 2025, the growth of AI applications is expected to accelerate even further (and faster).

Wait, why does PortaOne care? Did we get caught up in the hype and change the focus of our business from billing and communications to the promised land of AI? Not really. But it is true that, at some point, all those AI “gold prospectors” will need a return on investment. That’s when PortaOne will come ready with its battle-tested pickaxes and jeans. Pardon: with PortaAIM, to monetize that agentic AI.

What Do AI Agents, PortaAIM, and the Beatles Have in Common?

Why is everyone so excited about AI agents? We didn’t have time to watch hundreds of hours of YouTube videos. (Remember when people used to refer to those as “weeks” before WfH and streaming platforms altered our perception of time?) So instead, we consulted one of those AI agents to find out. (We’ll say more on its parent company later in this story.)

The answer was coherent and concise: agentic AI finally brings the dream of replacing an actual workforce with robots closer to reality by understanding commands (prompts) better and making autonomous decisions. (Thus substantially decreasing the time humans need to intervene.) Whoopsie, are we all now fired?

If you haven’t been following the news, here’s the upshot. Around 1977, John Lennon wrote and recorded a solo demo, which remained unfinished when he died in 1980. The remaining band members tried to restore the recording for the 1994-1995 Beatles reunion. However, at that time, Paul, George, and Ringo considered the recording quality too poor for a studio release. In 2001, George Harrison died, making things even more complicated. In 2022, a team of sound engineers restored the song using machine-learning-assisted audio restoration technology, and it won the Grammy Award for Best Rock Performance in February 2025. Please read on to see the parallels we draw between “Now and Then,” AI agents, and PortaAIM’s role in understanding the opportunities (and challenges) AI creates for modern business.

Still Human, Still Here

No, we aren’t fired (yet), and this year’s Grammy Awards provide an illustrative answer to “why.” The Beatles did not use generative AI to create “Now and Then.” Instead, their production team used highly sophisticated AI audio restoration techniques and computerized video editing to make the song’s video, which features John Lennon and George Harrison. (Both musicians had died by the time the video was made.) The outcome: AI amused the fans, restored the masterpiece, and did not take over the world (or human creativity).

The Game of Tunes (and of the Agentic AI)

However, you still need John Lennon to win the Grammy for an “AI song” — and to translate a great song into a commercial success (i.e., money in your bank account). You also need George Martin, the producer of The Beatles. (Not to be confused with George R. R. Martin, the author of the seemingly never-to-be-finished “A Song of Ice and Fire” book series, AKA the book version of “Game of Thrones.”)

Entrepreneurs who create AI agents to be used by others will use PortaAIM in a similar way. They’ll use it to charge their customers, analyze customer behavior and associated costs, and fine-tune their pricing, avoiding revenue leaks and running out of money in the bank because of too many loss-generating users or products. Use the right tool to monetize the AI agent you’ve built, and you can thrive and build your competitive advantages. That’s why we invite you to start by reading this story and booking a call with our sales representative or writing to our CEO on LinkedIn.

A Backgrounder: Who’s Already Here? (In the World of AI Agents and PortaAIM Monetizing Them)

OpenAI and its flagship product, ChatGPT, are becoming synonymous with the AI industry, sort of like “Xerox” is for copy machines. However, OpenAI’s first attempt to create a marketplace for AI-based applications (the GPT Store) seems unsuccessful. Perhaps one of the contributing factors to that lack of traction is that there is no straightforward monetization path for creators of the new “GPT apps.” Nevertheless, OpenAI’s recent feud with the Chinese DeepSeek demonstrates that the pack’s leader is “defending” its market niche from the audacious rookie. But ChatGPT is not the only app out there. So, let’s see who else is building a name.

Anthropic

Founded by siblings Daniela and Dario Amodei after they departed from OpenAI during the COVID lockdown, Anthropic held its early brainstorming sessions as socially distanced backyard meetings. And boy, did those meetings prove fruitful. As of November 2024, Amazon is Anthropic’s key investor, with a total investment pledge of $8 billion. Google comes second, pledging $2 billion. One of the main reasons behind Jeff Bezos’s generosity is AWS Trainium chips.

Interested in learning more about Anthropic’s founding story? In this two-hour interview, Daniela and Dario explain how they came up with the idea to create the new company, why Anthropic is a “public benefit corporation,” and why safety matters for AI research. Watching this before creating and monetizing your AI agents via PortaAIM is a good idea.

The Nvidia v. Amazon Trainium Spiel

Nvidia chips have become synonymous with the “machine” in “machine learning” and beyond. The logic is straightforward: you need computational power to train AI models for cancer or HIV cures. (And to generate Balenciaga popes.) You need a heck of a lot of it. Computer chips do the computations, and Nvidia manufactures chips. Welcome to one of the wealthiest corporations of the 2020s.

This video presents a fascinating discussion about Amazon’s chip rivalry with Nvidia. It’s reminiscent of Intel and AMD in the good old days. It highlights Anthropic and data management startup Databricks as the key players behind the Amazon Trainium chip. Who knows — perhaps one day, the Amazon or Nvidia chip will power your AI agent, supported by PortaAIM as its monetization framework.

What would you do if you were Amazon, another corporation of the rich dudes club? You would buy a startup that had proven successful in making those chips (hello, Annapurna) and then try making your own. Then, you would pledge $8B to a startup (hello again, Anthropic) that primarily uses your computing systems. From Anthropic’s point of view, this strategic partnership is (currently) a huge win as well. Besides a considerable investment, Anthropic has a say in Amazon’s AI chip development strategy. It’s similar to the Microsoft + OpenAI “10-year partnership,” which started in 2019 and was recently adjusted to give Sam Altman more freedom.

Model Context Protocol 

“Okay, this is all fascinating,” you say. “But how is this billion-dollar AI chip race between tech giants relevant to my business of creating AI agents or PortaAIM helping me monetize them?” Hold your horses, please; the next station is MCP.

Matt Pocock briefly explains MCP and shows how it might be relevant for creating your first AI agents. PortaAIM can then turn that venture into a profit generator.

Anthropic introduced the Model Context Protocol (MCP) in November 2024 — which, at the current pace of AI agent technology development, already counts as “a long time ago.” MCP does for agentic AI what TCP/IP did for the Internet in the 1990s. AI agents need datasets (at least for now) for autonomous decisions and retrieval-augmented generation (RAG). In Anthropic’s own words, MCP “connects AI assistants to the systems where data lives,” thereby solving this challenge.
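
To make that less abstract, here is a minimal, purely illustrative Python sketch of the kind of JSON-RPC exchange MCP standardizes between an AI assistant and a data source. The tool name and arguments are hypothetical, and the exact message schema lives in Anthropic’s MCP specification.

```python
# Illustrative only: a rough sketch of the JSON-RPC-style exchange MCP uses to let
# an AI assistant call a tool exposed by a data source. Field names are simplified;
# see Anthropic's MCP specification for the exact schema.
import json

# The assistant (MCP client) asks an MCP server which tools it exposes...
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...and then invokes one of them, e.g. a hypothetical CRM lookup,
# so the model can ground its answer in live business data.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_customer",            # hypothetical tool name
        "arguments": {"customer_id": "42"},   # hypothetical arguments
    },
}

print(json.dumps(call_tool_request, indent=2))
```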

Perplexity

Founded by OpenAI alumnus Aravind Srinivas (among others), Perplexity is one of the fastest-growing AI companies (with a current market valuation of $9B) and specializes in AI-assisted search “with better fact-checking.” While the company does not explicitly specify “better than whom,” you don’t need AI (pun intended) to know they are referring to ChatGPT. And while the fact-checking claim is dubious at the current stage of AI-assisted search development, Perplexity’s model-agnostic approach positions it as a meta-AI tool.

Mr. Srinivas jokes about being “a misfit” and not dropping out of college “like many other AI startup founders.” (That’s a jab at Sam Altman.) He also explains how the Socratic method helps build robust AI systems. It might help you develop your next AI agent, and land on PortaAIM as the way to monetize it.

This meta-AI (an engineering approach) should not be confused with Llama, the AI by Meta (an open-source LLM by Meta Platforms, Inc., formerly known as Facebook, Inc.), although Mark Zuckerberg appears to have almost trademarked the engineering approach as well.

As of February 2025, Perplexity lets customers perform searches within its system using five “advanced models” (GPT-4o and o3-mini by OpenAI, R1 by DeepSeek, Claude 3.5 Sonnet by Anthropic, and Perplexity’s own Sonar, launched on January 21, 2025). It also offers two “reasoning models”: o3-mini by OpenAI and R1 by DeepSeek. Yes, AI/ML engineers still use Star-Wars-inspired naming whenever possible. And the “reasoning models” deserve a separate explainer.

The “Reasoning Models”

Quoting Perplexity: “Reasoning models are specialized AI models designed to enhance complex problem-solving tasks. These models are particularly adept at handling queries that require multi-step logical processes, deep analysis, and advanced cognitive functions.” Put simply, “reasoning models” perform the “intelligence” part of AI, while “search models” act somewhat similarly to Google’s PageRank technology (the patent expired in 2019).

“Auto” Selection Mode

In early February 2025, Perplexity introduced “Auto” mode for the reasoning models, which is revolutionary for the niche of meta-AI solutions. According to Perplexity’s UI, “Auto” mode works “best for daily searches.” There’s no mention of “Auto” mode in Perplexity’s documentation. (As of February 2025, only the Sonar API documentation exists.) However, when asked about it, the system itself replied, naming the following features:

  1. Task optimization: The system offers to select between various models on behalf of the user based on the task it faces (e.g., conducting a web search, providing an in-depth analysis on the subject, or retelling the text).
  2. Default model fallback: When internal sanity checks signal trouble (hello, Lucie), the model selection system may default to Perplexity’s standard model.
  3. Continuous adaptation: This being a typical meta-AI task, Perplexity promises its model selection system will learn from your usage patterns and adjust its model selection accordingly.
  4. Limits management: This is the most interesting feature from a billing standpoint. Perplexity’s example is: “For users with limited access to certain models (e.g., Claude 3.5 Sonnet with its five answers per day limit), the auto mode likely manages these restrictions to ensure optimal usage.” (A rough sketch of what such selection logic could look like follows this list.)
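
For illustration only, here is a rough Python sketch of what such an automatic selection layer could look like: routing a query to a model, respecting per-model daily limits, and falling back to a default model on errors. This is our own guess at the mechanics, not Perplexity’s implementation; every model name and limit below is invented.

```python
# A purely hypothetical sketch of an "Auto" model-selection layer. This is NOT
# Perplexity's implementation; all model names and limits are invented.
from collections import defaultdict

DAILY_LIMITS = {"claude-3.5-sonnet": 5}      # e.g. five answers per day
usage = defaultdict(int)                      # answers consumed today per model

def pick_model(query: str) -> str:
    """Naive task optimization: long analytical queries go to a reasoning model."""
    if len(query.split()) > 40 or "analyze" in query.lower():
        preferred = "reasoning-model-r1"
    else:
        preferred = "claude-3.5-sonnet"
    # Limits management: skip models whose daily quota is exhausted.
    if usage[preferred] >= DAILY_LIMITS.get(preferred, float("inf")):
        preferred = "default-sonar"
    return preferred

def call_llm(model: str, query: str) -> str:  # stand-in for a real provider call
    return f"[{model}] answer to: {query}"

def answer(query: str) -> str:
    model = pick_model(query)
    try:
        usage[model] += 1
        return call_llm(model, query)
    except RuntimeError:
        # Default model fallback when internal sanity checks signal trouble.
        return call_llm("default-sonar", query)

print(answer("Please analyze churn drivers for prepaid mobile subscribers"))
```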

The issue of the “reasoning” models and deferring their automatic selection to a “selection model” (or an “internal AI agent” for PortaAIM purposes) is very relevant to billing and revenue assurance. That’s why we plan to cover it in our next blog posts.

Mistral

Founded in 2023 by Arthur Mensch, Timothee Lacroix, and Guillaume Lample, Mistral AI is rapidly making waves in the AI agents industry. Unlike Anthropic, founded by siblings after they left OpenAI, Mistral’s origins stem from a university friendship among its founders, who studied at École Polytechnique and later worked at Meta and Google DeepMind. Mistral is also Europe’s largest AI unicorn by market valuation.

In this episode of Unsupervised Learning, Arthur Mensch, CEO of Mistral AI, discusses the future of LLMs and the role of open-source software. As AI continues to evolve, integrating AI agents with PortaAIM can help you successfully monetize them under both open-source and commercial licenses. Mensch also explains Mistral’s market strategy and the competitive landscape.

Mistral is particularly friendly towards AI agents. It was several months ahead of the entire AI crowd, releasing the alpha version of its AI agents in late summer 2024. It offered two methods for working with AI agents:

  1. La Plateforme Agent Builder: This is an interface designed for non-technical users, allowing them to configure AI agents without extensive coding knowledge.
  2. Agent API: This is a more advanced tool that provides developers with a programmatic approach. It enables them to integrate AI agents into existing workflows or applications, offering deeper customization and integration capabilities.

This video examines Mistral’s tools for building and fine-tuning custom AI agents. It demonstrates how developers can create and deploy AI agents for PortaAIM to assist in monetizing them afterward.

The Mistral Agent API

The Mistral Agent API is currently a part of the Mistral API, although the documentation on it is “coming soon.” (And, as of February 2025, it has been “coming soon” since summer 2024.) The Mistral API lets businesses work with AI agents programmatically. Developers can use HTTP requests to create, configure, and manage agents, making it easy to integrate them into various applications. For instance, a customer can create an agent using a POST request to specify the model, temperature, and instructions, followed by a PATCH request to adjust settings dynamically. Once configured, customers can deploy and access the agents via the API or Mistral’s chat interface, Le Chat.
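
Since the official documentation is still pending, here is only a rough sketch of what such calls could look like; the endpoint URL and field names are our assumptions, not Mistral’s confirmed schema.

```python
# A minimal sketch, assuming a hypothetical REST endpoint: the documentation referenced
# above was still "coming soon" at the time of writing, so the URL and field names below
# are illustrative, not Mistral's confirmed schema.
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
BASE = "https://api.mistral.ai/v1/agents"   # hypothetical endpoint

# Create an agent with a model, temperature, and instructions (as described above).
create = requests.post(BASE, headers=HEADERS, json={
    "model": "mistral-large-latest",
    "temperature": 0.3,
    "instructions": "You are a billing assistant for a telecom operator.",
})
agent_id = create.json().get("id")

# Adjust the agent's settings dynamically with a PATCH request.
requests.patch(f"{BASE}/{agent_id}", headers=HEADERS, json={"temperature": 0.7})
```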

Open-source v. Open-weight

Although both have “open” in their names, “open-source” and “open-weight” AI models differ significantly. Open-source models allow users to view, modify, and distribute the source code and training data. In contrast, open-weight models share only the model’s numerical parameters (weights) without revealing the underlying code or training data.

Mistral AI supports open-source AI, releasing models like Mistral NeMo and Mistral Small under open licenses. However, recent partnerships (including those with Microsoft and Stellantis) have raised concerns about Mistral’s potential shift away from these principles. Critics worry that Mistral may be moving towards more closed models, similar to OpenAI’s transition a year ago.

Here, Developers Digest introduces Mistral Small 3, a 24B-parameter model, highlighting its efficiency and multilingual support. Relevant to monetizing AI agents through PortaAIM, Mistral Small 3 improves agent performance in tasks such as information retrieval and automation, providing low latency and strong customization options. Although a “24B-parameter” model sounds impressive, it is relatively compact by modern AI standards, and it runs on a MacBook with 32 GB of RAM or a regular PC with a standard Nvidia RTX 4090 GPU. For comparison, GPT-3 by OpenAI had 175B parameters in 2020, while DeepSeek R1 boasts 671B parameters.

UiPath

A pioneer of Robotic Process Automation (RPA) since the early 2010s, UiPath slightly missed the LLM bandwagon. However, our Romanian neighbors are worth mentioning in an AI agents via PortaAIM story for more reasons than the fact that their founder, Daniel Dines, is the wealthiest Romanian and a person who donated one million euros to restore Notre-Dame de Paris.

Daniel Dines shares the UiPath story (with his trademark East European accent) with Roelof Botha on the Sequoia Capital corporate podcast. After creating your AI agents and monetizing them through PortaAIM, you may also get an invitation to be a guest.

In October 2024, UiPath announced its Agent Builder, claiming, “Agentic automation is the natural evolution of RPA.” However, as of February 2025, the Agent Builder is still in “vision for the future” mode, with minimal product documentation or demos available. UiPath has two other recent AI products, though: DocPath and CommPath. Each deserves particular attention.

DocPath, CommPath, Context Grounding, and Why They Are Relevant for AI Agents Monetized with PortaAIM

Like people, AI models sometimes hallucinate. The recent story of “Lucie the AI chatbot” is a good illustration. “Context grounding” (akin to electrical grounding) enables the LLM to be more “aware” of its context, be it a generative task or a research one. UiPath discovered it could capitalize on its previous RPA research and thus created DocPath and CommPath.

DocPath is particularly relevant when you are “educating” an AI chatbot to be your customer service agent. DocPath will provide encoder-decoder models to fine-tune the knowledge base and the context your chatbot should understand about your products. CommPath claims to do a similar task “for communications,” meaning emails and chats. (We could not find evidence of CommPath working with audio/video.) Anyway, context grounding looks like an interesting technology worth watching in the context of AI for small and medium businesses.

Cohere

In 2017, researchers from Google Brain published a seminal research paper titled “Attention Is All You Need.” Its title a tribute to the Beatles, the paper introduced the concept of a “transformer” for the first time. For those of you who are not PhDs or the AI Illuminati — that is the “T” in GPT. Fast-forward to 2025: we asked one of the now-numerous GPTs out there to explain the paper in plain terms… and it actually did. (Spoiler: we tested this explainer on an actual sixth-grader, and he still dismissed it as “too complicated.”)

Aidan Gomez is among the authors of “Attention Is All You Need.” He left Google in 2019 (at 22) to create Cohere with Ivan Zhang and Nick Frosst. Cohere is based in Toronto, Ontario, and San Francisco, California (as well as Palo Alto, London, and NYC) and specializes in enterprise-focused AI. Most recently, Cohere raised $500M in July 2024, bringing its market valuation to $5.5B.

Mr. Gomez shares Cohere’s story and vision for the future of enterprise-scale AI. Watch this episode to understand how your AI agents, monetized via PortaAIM, could be part of that future.

North Platform

Cohere introduced North (they prefer to say they “went North”) in January 2025. The North Platform is a technophobic nightmare come true: “Any employee, regardless of their technical background, can effortlessly create, customize, and share an AI agent with just a few clicks.” According to Cohere, “This includes agents for core business functions like HR, finance, customer support, and IT.”

We haven’t tried North ourselves yet, although we applied for their early access program. Under the hood, North relies heavily on Cohere’s proprietary multimodal AI search system, Compass. Making Canadians proud (again), Cohere claims that “North outperforms competitors like Microsoft Copilot and Google Vertex AI in RAG and human evaluation tests.” (Whatever that means, Perplexity.)

Compass Multimodal Framework

Compass by Cohere is similar to CommPath and DocPath by UiPath (in technology as well as naming convention). It is built on a foundation embedding model, allowing indexing and searching of multi-aspect data (emails, invoices, CVs, etc.) for improved domain knowledge acquisition (an AI-themed version of the term “learning” as applicable to non-humans).

Alexander, a Cohere software engineer, explains Compass to Jason Liu. Tune in to uncover the intricate differences between “vector retrieval,” “sparse retrieval,” and other fascinating nuances to create your own AI agents and monetize them through PortaSwitch.

Learning to use Cohere API and Compass combined with North Platform might be handy for creating enterprise-level AI agents, which you can monetize using PortaAIM. You can use connectors to feed your data into the Cohere system. Besides, because Cohere is mindful of its academic origins, you can apply for its AI Research Grant Program.
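
To give a flavor of what feeding data into Cohere can look like, here is a small sketch using Cohere’s Python SDK to embed a few documents and run a toy semantic search. The model name and the naive cosine-similarity ranking are illustrative assumptions; this is not Compass or North itself.

```python
# A minimal sketch of embedding-based retrieval with Cohere's Python SDK (pip install cohere).
# This is a toy example for illustration, not the Compass product itself; the model name
# is an assumption and may change.
import os
import numpy as np
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

docs = ["Invoice #1042: AI agent usage, 1.2M tokens",
        "CV: senior VoIP engineer, 10 years of SIP experience",
        "Email: customer asks why last month's bill doubled"]

doc_emb = np.array(co.embed(texts=docs, model="embed-english-v3.0",
                            input_type="search_document").embeddings)
query_emb = np.array(co.embed(texts=["billing complaint"], model="embed-english-v3.0",
                              input_type="search_query").embeddings)[0]

# Rank documents by cosine similarity to the query and print the best match.
scores = doc_emb @ query_emb / (np.linalg.norm(doc_emb, axis=1) * np.linalg.norm(query_emb))
print(docs[int(scores.argmax())])
```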

Postman

Since its founding in 2014, India-based Postman has become one of the most popular platforms for building and using APIs. In January 2025, it joined the gold rush by launching Postman AI Builder. Before opening offices in the Bay Area in 2017, founder Abhinav Asthana doubted whether he could afford the next month’s rent in Bangalore. These humble origins are imprinted in Postman’s corporate DNA — valuing cost-efficient and simple solutions.

Postman AI Builder claims to simplify LLM integration into your development workflow — a solution you would need when creating AI agents and monetizing them with PortaAIM.

So far, Postman AI Builder seems like an attempt to reverse the company’s declining valuation by leveraging Postman’s expertise in APIs and developers’ trust in the exploding field of agentic AI. However, Postman has proven itself a strong performer and might repeat its API success with developer-centered AI agents.

LangChain

Launched in October 2022 by Harrison Chase, LangChain has quickly become “the popular kid of the class” for its ability to simplify the deployment of LLM-powered applications. The framework is particularly noted for its modular design, supporting various applications from chatbots and document summarization to synthetic data generation and RAG.
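
As a taste of that modularity, here is a minimal sketch of a LangChain chain in the “prompt | model | output parser” style. The OpenAI model is just one interchangeable choice, and package names may differ between LangChain versions.

```python
# A minimal sketch of LangChain's modular "chain" idea (prompt | model | output parser).
# Package names follow recent LangChain releases and may differ across versions;
# pip install langchain-openai langchain-core, and set OPENAI_API_KEY.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template(
    "Summarize this support ticket in one sentence:\n\n{ticket}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = prompt | llm | StrOutputParser()   # swap any component without touching the rest

print(chain.invoke({"ticket": "Customer reports their AI agent burned through prepaid credit overnight."}))
```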

Watch Harrison Chase discuss LangChain’s role in building agentic systems on The TWIML AI Podcast with Sam Charrington. LangChain simplifies LLM integration, enabling developers to create sophisticated AI applications. PortaAIM can help monetize these AI agents by providing flexible billing solutions tailored to dynamic AI performance.

In 2023, LangChain attracted significant investment, raising over $20M in funding (at a $200M valuation) from prominent venture firms like Sequoia Capital and Benchmark. Thousands of contributors on platforms like GitHub and Discord support the company’s growth. However, not all community members are optimistic, leading to Reddit debates between them and Mr. Chase. In May 2025, LangChain will host its first Interrupt AI Agent conference. During that event, we might see how its community relations develop further.

Developing AI Agents and Why You Should Try

All this time, we were (or pretended to be) assuming that you’ve already built your AI agent and are just looking for ways to monetize it (him? or her? These times, one should be careful) with a billing system such as PortaAIM. But what if you have no agent? What if you are new to the agentic AI business and still want to get started? Great! We asked AI (who else do people ask these days?) for you, and the answer was quite clear. Below, we explain the two no-code platforms that were “the AI’s first choice.” However, there are ten more if you want to dig deeper. 

AI Agents Solution #1: CrewAI

Unlike the “Big AI Seven” we reviewed in the backgrounder section, there is no Wikipedia article about CrewAI (as of the time of this story). However, if you are experimenting with a no-code, entry-level solution for creating your first AI agent, CrewAI has been one of the first recommendations since October 2023. Founded by Brazilian software developer João Moura, CrewAI got an $18M Series A investment in October 2024. These are not the billion-dollar investments of OpenAI, Anthropic, or Mistral; still, that’s a lot for a very young company.

Although CrewAI claims to be a “no-coding solution,” even minor tweaks would still require some basic understanding of scripts, as this two-hour video demonstrates. You can still create your first AI agent and monetize it with PortaAIM. 

We registered and tried CrewAI’s interface ourselves. The core of CrewAI consists of Templates and Crew Studio. With “Templates,” you can pick an existing AI agent from 14 templates, ranging from an “Enterprise Content Marketing Crew” (oh, hello, dear colleagues) and a “Prospect Analysis Crew” to a “Sales Offer Generator” and a “Customer Fraud Flagging Template.” With “Studio,” you can polish existing templates or create your own.
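
For readers who do want to peek under the hood, here is a minimal sketch using CrewAI’s open-source Python framework, which the hosted Templates and Studio sit on top of. The roles, task, and expected output below are our own illustrative inventions, not one of the 14 templates.

```python
# A minimal sketch of CrewAI's open-source Python framework (pip install crewai).
# The agent role, task, and underlying LLM are illustrative assumptions, not one of
# the hosted templates described above; check CrewAI's docs for current parameters.
from crewai import Agent, Task, Crew

analyst = Agent(
    role="Prospect analyst",
    goal="Summarize what a prospect likely needs from an AI billing platform",
    backstory="You research telecom prospects for a sales team.",
)

research_task = Task(
    description="Analyze the prospect 'Example Telecom' and list three likely pain points.",
    expected_output="A short bullet list of pain points.",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[research_task])
result = crew.kickoff()   # requires an LLM API key (e.g. OPENAI_API_KEY) in the environment
print(result)
```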

AI Agents Solution #2: Langflow

Interestingly, another entry-level AI agent solution also has Brazilian founders: Rodrigo Nader and his friend Gabriel. Something in the Brazilian climate (or culture) invites people to think about creating intelligent robots for all jobs. According to Mr. Nader’s LinkedIn, Langflow appeared in August 2021. In April 2024, DataStax acquired Langflow for an undisclosed amount.

Unlike the CrewAI video (two hours), the Langflow video that YouTube recommended (by showing it first in the search results) is only three minutes long. And it does the job. We created our own AI agent, with the PortaBilling monetization setup left as the only important thing to do. (Read on for more on that.)

Like CrewAI, Langflow has an open trial, so we were able to get in just by signing in with Google. Langflow has its (currently) 17 AI agents sorted by use case (“assistants,” “coding,” “content creation,” etc.). Clicking any of those opens a flowchart-style canvas with API key input fields, hints, and tricks.

Monetizing Your AI Agents with PortaAIM

All those colorful agentic AI presentations and videos usually don’t tell you about their APIs draining customers’ prepaid funds, creating taxation ambiguities, or causing other revenue assurance issues. None of this means you shouldn’t try making your first AI agents and experimenting further with AI in your field. But it does mean you should be careful when offering them to your end users. Nevertheless, all the Googles, Amazons, and OpenAIs of the modern world need you, dear small-to-medium entrepreneur, to use their services and slowly but steadfastly recoup the multi-billion-dollar investments they once received.

Ultimately, this video by the PortaOne content marketing team explains how to monetize AI agents with PortaAIM. Watch it to understand two case studies: 1) onboarding and activating a client with PortaAIM and 2) customer tracking.

With agentic AI, we have 1) substantial electricity consumption, 2) third-party provider dependency (that is, the LLM providers and agent frameworks we described above), 3) a potentially huge volume of billable events for computations (if your AI agent becomes popular), 4) initially low predictability of the load dynamics, and 5) various avenues for fraudulent use. (Because the underlying technology is new and still buggy.)
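
To make point 3 tangible, here is a purely hypothetical sketch of turning raw AI agent usage into billable events, much like call detail records in telephony. It is not PortaAIM’s API; it only illustrates the kind of usage records a rating engine would consume.

```python
# A purely hypothetical sketch of metering AI agent usage into billable events.
# This is NOT PortaAIM's API; it only illustrates the kind of usage records a
# rating/billing engine would ingest (compare with call detail records in telephony).
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageEvent:
    customer_id: str
    agent_id: str
    model: str
    input_tokens: int
    output_tokens: int
    timestamp: str

def meter(customer_id: str, agent_id: str, model: str, inp: int, out: int) -> dict:
    """Build one billable event; a real system would queue it for rating and fraud checks."""
    event = AIUsageEvent(customer_id, agent_id, model, inp, out,
                         datetime.now(timezone.utc).isoformat())
    return asdict(event)

# Example: one agent run that consumed 1,200 prompt tokens and 350 completion tokens.
print(meter("cust-001", "support-bot", "hypothetical-llm", 1200, 350))
```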

Does this all remind you of something? Of course it does. It reminds you of the good old days of early IP telephony! And it makes us think of PortaAIM. We have long specialized in high-load systems, revenue assurance, and flexible billing models. We will gladly assist you if you want us to adapt those capabilities for your next agentic AI startup.

So, Goodbye to SaaS in the World of AI Agents?

No, agentic AI won’t kill SaaS. At least not in 2025, according to Bain & Co. Frankly, AI agents and PortaAIM monetization solutions for AI still have a long way to go. While all the transformers, RAGs, and the best AI-assisted sound editing techniques are already out there, you still need John, Paul, George, and Ringo to dig the gold out of all that AI with a simple yet beautiful song. The more sophisticated the tools become, the more skill and understanding they require from an artist or entrepreneur.

And Yet…

At the same time, businesses must adopt agentic AI tools into their daily routines and business models (or die). The “reasoning models” already far exceed human capabilities at tasks like research and analysis. As a content writer, you no longer need to spend an hour on Google to find a good joke for a blog post segment — there’s an AI agent for that. However (as of February 2025), you still need a human to put those jokes and words together. We asked AI to handle the task. It’s impressive: 199 sources, the text ready in less than 5 minutes (compared to three weeks), and the jokes were… fine. (You’re right, Perplexity, my coffee is sacred.) However, you can always scroll up and judge for yourself which approach produces better results.

Here’s just a reminder of where we started today. We hope you now know more about AI agents and PortaAIM solutions for monetizing them.

So, if you haven’t yet created an “experimental lab” to put AI within the context of your business, you should. It won’t hurt, and you might develop business ideas that attract the next multibillion-dollar investment spree to your business. And we are always here to help kick the dollars out of those ideas and into your revenue stream via PortaAIM. Please do not hesitate to book a call with our sales representative. And remember, you can always write to our CEO on LinkedIn.
