A Real-Talk Guide on Who Makes the Brains, Who Builds the Apps, and Where We All Fit In
If your Facebook or LinkedIn feed feels flooded with news about “new AI models” that are supposedly ten times smarter than the last release, you are not alone. Here in Malaysia, everyone from the uncle who sells insurance to students at Taylor’s is using these tools, yet ask the average person who is really “behind the curtain” and they will probably just say “ChatGPT” or “Google”. The generative AI companies landscape has changed dramatically in both size and complexity over the past two years. It is no longer a two-horse race; there are now three distinct groups in the generative AI ecosystem: the developers who build the engines, the developers who build the vehicles, and the resellers of fuel. If you want to know where things are headed, and sound intelligent the next time you yam cha, here is what the landscape looks like in 2026:
- Why does the generative AI companies landscape look like a multi-layer cake?
- Open-Source vs Closed-Source: Is the secret sauce better than the public recipe?
- Who are the real “Generative AI market leaders” in the application layer?
- Why does the Asia-Pacific region have its own generative AI companies landscape?
- How do we choose which “Brain” to use without getting a headache?
Is it all just ChatGPT? Understanding the Foundation Layers
A look at the heavy lifters who build the massive models that power everything else.
Why are some models “Free” while others are “Secret”?
The battle between open-source and closed-source AI and what it means for your privacy.
The “Apps” we use: Where the real magic happens for users
How specialized AI startups are taking the ‘brains’ and turning them into useful tools.
The Generative AI Companies Landscape in Southeast Asia
How local players and global giants are fighting for the Asian market.
Why does the generative AI companies landscape look like a multi-layer cake?

Imagine the entire artificial intelligence (AI) world as an enormous restaurant with many different chefs, not just one. The current generative AI landscape is divided into several layers. At the bottom are the “kitchen equipment” manufacturers, whose hardware gives AI “the capacity to think.” The next layer up is the companies that build foundation models, such as OpenAI, Anthropic, and Google. These companies collectively spend billions training large language models to understand and assist with human language.
Why does this matter? Most of the AI applications on our phones do not think for themselves. Instead, they call an application programming interface (API), a digital connector between the app and the master chefs’ large language models (LLMs). A local writing app, for example, may use GPT-4o as its underlying intelligence. Mapped out, the picture shows enormous power concentrated in the hands of a few companies, while usage of the technology is nearly ubiquitous. That is the hierarchy of the global generative AI ecosystem: first, the companies that develop the raw intelligence and supply the generative AI market; second, the companies that package that intelligence into something we can all use, such as a virtual assistant for legal help or a TikTok video generator.
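In practice, “calling the brain” just means sending a structured request over the internet. Below is a minimal Python sketch of what a chat-completions-style request body looks like. The field names follow the common industry pattern, but exact details vary by vendor, so treat this as illustrative, not any specific provider’s API.

```python
import json

def build_chat_request(model: str, user_message: str) -> dict:
    """Build a chat-completions-style request body.
    The shape is illustrative; real providers differ in details."""
    return {
        "model": model,
        "messages": [
            # System message sets the assistant's behaviour
            {"role": "system", "content": "You are a helpful writing assistant."},
            # User message is what the app forwards on your behalf
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,  # higher = more creative, lower = more predictable
    }

payload = build_chat_request("gpt-4o", "Draft a polite follow-up email.")
print(json.dumps(payload, indent=2))
```

The writing app on your phone does little more than this: it wraps your text into a payload like the one above, posts it to the model provider, and displays the reply. The “intelligence” never lives on the device.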
Open-Source vs Closed-Source: Is the secret sauce better than the public recipe?
Insiders who develop and use generative AI are still debating two camps in the companies landscape: “closed-source” or proprietary models (think OpenAI or Claude; you have no idea how they work, you just pay to use them, and because they are often the best-developed examples, you have to live in the vendor’s garden) versus “open-source” models (look at what Meta, the Facebook company, has done with Llama, giving the model away for free). The choice has real consequences. If you were a major banking institution in Kuala Lumpur, you would likely want an open-source model so you could host the AI on your own servers and avoid sending customer data to the U.S. There are very good reasons for choosing a proprietary system, and equally strong arguments for an open-source one.
Looking at the current crop of AI model providers, the gap between “Free” and “Paid” is closing fast. Llama 3, for instance, is already close to some earlier versions of GPT-4 in capability. That shift has made the generative AI competitive landscape very intense: the industry giants can no longer charge whatever they want, because there is a free alternative that is “good enough.”
Who are the real “Generative AI market leaders” in the application layer?

People discuss AI startups and technology giants as though the small companies will always be outdone by their bigger counterparts. In fact, many startups thrive in the GenAI industry simply because they know specific industries better than Google or Microsoft do. The giants dominate the generative artificial intelligence landscape through products like Copilot and Gemini, backed by enormous distribution. Yet a graphic designer will still reach for tools such as MidJourney or Canva, no matter how robust the giants’ general-purpose offerings are, because focus makes the specialists superior for that purpose. These application-layer companies do not build the brains; they build the “hands” that perform the work.
At one extreme of today’s generative AI landscape sit the large cloud data centres run by companies such as Microsoft and Amazon; at the other are nimble startups providing tools for specific tasks: writing emails, generating code, composing music, and more. BidaTech AI helps companies navigate this often-confusing generative AI marketplace by taking on the complicated backend work so that all the necessary applications can communicate seamlessly, even for end-users who have no idea how computers “talk.” This kind of integration support is critical to keeping the application layer functioning properly.
Why does the Asia-Pacific region have its own generative AI companies landscape?
If you only read US news sources, you would think the entire world runs on ChatGPT. The generative AI landscape in the Asia-Pacific region, however, looks quite different. China has its own titans, Baidu (with its Ernie Bot) and Alibaba, and in Southeast Asia we are witnessing the emergence of “sovereign AI.” As anyone who has built generative AI products here will attest, models from Silicon Valley do not always comprehend “Manglish” or the cultural contexts around events like Hari Raya.
Beyond struggling with how people actually use language, the companies that lead the generative AI industry now have to contend with local enterprises building products specifically for languages such as Bahasa Melayu, Thai, and Vietnamese. The divide between AI infrastructure and the application layer will become very pronounced: US companies will continue supplying the hardware and foundation models, while the Asia-Pacific region establishes itself as the home of localized applications. Many foundation model comparison reports published over the last few months already include Asian models. The overall GenAI ecosystem no longer flows simply from West to East; Asian companies are developing their own products, making this emerging market more resilient, and more interesting, for users in both regions.
How do we choose which “Brain” to use without getting a headache?

With so many generative AI companies out there, most users do not need to know every one of them. The only question that really matters is whether speed, cost, or quality is most important. Drafting a simple email needs little more than a “small” model (and costs almost nothing). Advanced financial modelling, on the other hand, needs the power of “heavyweight” models such as GPT-4 or Claude 3.5. Understanding the generative AI provider landscape teaches one simple lesson: you do not need a Ferrari to go grocery shopping. Sometimes a Perodua Axia (a smaller model) is the most efficient, affordable, and energy-saving option.
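The Ferrari-versus-Axia decision can even be automated. Here is a toy Python “router” that sends heavy tasks to a large model and everything else to a cheap small one. The model labels and keyword list are invented for illustration; real routing logic in production systems is far more sophisticated.

```python
def pick_model(task: str, needs_reasoning: bool) -> str:
    """Toy router: route heavy tasks to a large model, everything
    else to a cheap small one. Labels are illustrative, not real SKUs."""
    heavy_keywords = {"finance", "legal", "analysis", "code review"}
    if needs_reasoning or any(k in task.lower() for k in heavy_keywords):
        return "large-model"   # GPT-4-class: slower, costlier, smarter
    return "small-model"       # Llama-3-8B-class: fast and near-free

print(pick_model("draft a simple email", needs_reasoning=False))      # small-model
print(pick_model("advanced finance modelling", needs_reasoning=True)) # large-model
```

The design point is the same as the grocery-shopping analogy: match the model to the job, and the cheap option wins most of the time.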
Overall, the generative AI model landscape indicates that the technology will keep getting cheaper and better by the day. With the cost of industry-grade intelligence falling towards near zero, the obstacle is no longer finding capable AI; it is making it a natural part of everyday life. That is where smart integration support from partners such as BidaTech AI comes in, letting you carry on with your job without having to think about the technology underneath it.