THE EXPLAINER

Intermittent posts on buying and selling enterprise software, construction software, AI-enabled applications and more.

How AI-Enabled ERP Is Setting Up Customers and Software Vendors for Disruption

Updated: Apr 13

Artificial intelligence (AI) in ERP software is billed as the wave of the future, but are AI-enabled ERP's underpinnings here for the long haul?

Technology decision makers must consider the future of AI-enabled ERP and what it means for their access to newer technologies.

Enterprise applications, including enterprise resource planning (ERP) software, provide a natural platform for AI. ERP offers a well-indexed, complete and authoritative data set, while transactional histories underpin predictive functions and provide a predictable landscape for process automation and agentic AI.


Not every enterprise software product, and not every AI technology, is created equal, however.


Rapid AI Investment In Times of Rapid Evolution

Today, the Large Language Model (LLM) dominates commercial AI use cases, and major tech companies and ERP software vendors are sinking capital into the infrastructure to run them alongside or inside their products.


According to Futurum Group:

  • Amazon projects $200 billion (up from $131 billion in 2025).

  • Alphabet (Google) estimates $175–$185 billion (up from $91 billion in 2025).

  • Microsoft is on its way toward $120 billion or more.

  • Meta has invested an estimated $115–$135 billion.


Early iterations of software relying on natural language processing (NLP) were designed to support customer-service chatbots and relied on rules-based algorithms to decide what to say next for a given prompt. Those limitations drove software vendors to progress from word-by-word processing to training AI on massive models for increased context awareness.


LLMs to a large extent got their start in the contact center, capturing human workers’ actions to fine-tune models for specific use cases through Reinforcement Learning from Human Feedback (RLHF). The resulting applications are a boon for human-computer interaction because they let us use our own language, rather than a computer language, to tell machines what to do. LLMs model statistical probabilities across massive numbers of parameters. Because language is not inherently logical, models must use brute-force pattern recognition to ferret out meaning, driving up computational costs.


But even Microsoft AI CEO Mustafa Suleyman acknowledges an obvious constraint, even as he tries to rationalize it away, in a recent piece in MIT Technology Review:


“Consider that leading labs are growing capacity at nearly 4x annually. Since 2020, the compute used to train frontier models has grown 5x every year. Global AI-relevant compute is forecast to hit 100 million H100-equivalents by 2027, a tenfold increase in three years. Put all this together and we’re looking at something like another 1,000x in effective compute by the end of 2028. It’s plausible that by 2030 we’ll bring an additional 200 gigawatts of compute online every year—akin to the peak energy use of the UK, France, Germany, and Italy put together ...


“The obvious constraint here is energy. A single refrigerator-size AI rack consumes 120 kilowatts, equivalent to 100 homes. But this hunger collides with another exponential: Solar costs have fallen by a factor of nearly 100 over 50 years; battery prices have dropped 97% over three decades. There is a pathway to clean scaling coming into view.”


Not every LLM will be an energy and compute hog. A language model working on a smaller data set consumes less energy than an AI built for general intelligence. An LLM designed to parse the legal language of a contract and identify risks to mitigate may be an efficient use case, but an LLM designed to perform more mathematically oriented tasks, like projections or predictions based on broad data sets, may be less efficient.

Other key industrial AI functions require a more deterministic approach, generating consistent and accurate outputs for predictive maintenance, automation and operations. These may rely on other technologies and will sometimes be located at the edge, in an internet of things (IoT) device or near the facts on the ground.


LLM And Chip Agnosticism May Become Attractive

Enterprise software, once implemented, often runs inside a company for a decade or more. That makes it important to ask the vendor about its technology roadmap and its ability to swap new tech for old. For a company running a modern ERP solution, technological barriers to swapping out one AI vendor or technology for another should be few. But it may in fact stumble across contractual, license and cost barriers.


This is an essential issue for enterprise technologists to consider because there are AI approaches already in commercial use that have a lot to recommend them. Companies may want the freedom to leverage different AI computational approaches on hardware more efficient than that in the data centers the tech majors have invested so heavily in.


Alternative neural architectures are overcoming the "quadratic scaling" problem, in which required compute power quadruples every time text length doubles. Promising approaches include:

  • State Space Models (SSMs) & Mamba, which offer "linear scaling," so memory and compute needs grow proportionally with document length instead of quadratically

  • Hierarchical Reasoning Models (HRM), which can match LLM reasoning with just a fraction of the training data

  • Liquid Learning Networks (LLNs), which adjust their parameters in real-time based on incoming data, making them highly efficient for time-series and continuous data processing. 
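The scaling difference described above can be sketched with back-of-the-envelope arithmetic. The cost functions below are illustrative proportional costs only, not real FLOP counts for any particular model:

```python
# Illustrative only: compare how attention-style quadratic scaling and
# SSM-style linear scaling grow as the context length doubles.
def quadratic_cost(tokens: int) -> int:
    # Self-attention compares every token with every other token: O(n^2).
    return tokens * tokens

def linear_cost(tokens: int) -> int:
    # A state space model processes each token against a fixed-size
    # state: O(n).
    return tokens

for n in (1_000, 2_000, 4_000):
    print(n, quadratic_cost(n), linear_cost(n))

# Doubling the context from 2,000 to 4,000 tokens quadruples the
# quadratic cost but only doubles the linear one.
assert quadratic_cost(4_000) == 4 * quadratic_cost(2_000)
assert linear_cost(4_000) == 2 * linear_cost(2_000)
```

At long document lengths, that gap is the difference between a model that fits a hardware budget and one that does not.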


Toronto-based startup Taalas slashes compute and power demand by hardwiring entire AI models directly into a solid-state device. Each device consists of an entire silicon wafer the size of a hubcap and eliminates execution layers for a 100× boost in efficiency, according to Vaishal Chauduan on Bisinfotech.


Wafer-scale accelerator startup Cerebras, meanwhile, is similarly removing barriers between AI model components, putting GPU capabilities, central processing and additional memory onto a single silicon wafer the size of a hubcap.


Bio-inspired computing, such as neuromorphic chips running neural processing units, emulates the electrical efficiency of biological brains: the chips fire only when data inputs change. Neuromorphic chips are currently used in edge devices like predictive maintenance sensors, autonomous vehicles, robots and medical devices.



“Many researchers now view neuromorphic computing as an appealing alternative to packing more processors into ever-tinier spaces. This approach could conserve energy in computation or, when used with existing systems, offset the energy use of existing frameworks. Thus far, the energy savings and other benefits aren’t substantial enough to attract large companies, especially those that have invested heavily in other AI architectures.”


Other startups, like Cortical Labs and FinalSpark, are progressing beyond neuromorphic to true biocomputing, relying not on hardware but wetware: human neurons grown from stem cells that underpin ultra-low-energy AI systems.



ERP Modernity Matters for AI

Enterprise software, by definition, should provide a consistent platform for AI, but the degree to which its information and business logic can be leveraged differs largely based on its technological underpinnings and age.

    

Representational State Transfer (REST) is an architectural style for designing networked applications, primarily over HTTP, using a stateless, client-server model. It enables efficient, standardized communication by treating data as resources, manipulated through standard methods like GET, POST, PUT, and DELETE, making it popular for web services.
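As a rough sketch of the resource model described above, the following maps the standard methods onto an in-memory store. The handler and resource names are hypothetical, not any particular framework's API:

```python
# A minimal sketch of REST's resource model: data treated as resources,
# manipulated through the standard verbs. The in-memory store stands in
# for whatever backend a real service would use.
store = {}

def handle(method: str, resource_id: str, body=None):
    if method == "GET":
        return store.get(resource_id)        # read a resource
    if method in ("POST", "PUT"):
        store[resource_id] = body            # create or replace it
        return body
    if method == "DELETE":
        return store.pop(resource_id, None)  # remove it
    raise ValueError(f"unsupported method: {method}")

handle("POST", "/orders/42", {"status": "open"})
print(handle("GET", "/orders/42"))   # {'status': 'open'}
handle("DELETE", "/orders/42")
print(handle("GET", "/orders/42"))   # None
```

Each request carries everything the handler needs (method, resource, body), which is the stateless, client-server property that makes RESTful services easy to cache and scale.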


REST is a more modern method than the Simple Object Access Protocol (SOAP), which communicates with other applications using XML messages.


RESTful APIs, on the other hand, can exchange data in multiple formats, including JSON, XML, HTML, YAML and CSV. REST is also flexible, thanks to well-defined communication conventions that function much like the way a web browser accesses websites and applications on the world wide web.
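To make the contrast concrete, here is the same payload rendered as the compact JSON a RESTful API typically returns and as the XML that SOAP-style services exchange. The invoice fields are invented for illustration:

```python
import json
import xml.etree.ElementTree as ET

# One payload, two wire formats. Field names are hypothetical.
invoice = {"id": "INV-1001", "total": "250.00"}

# JSON: the terse key-value form most modern REST APIs return.
json_body = json.dumps(invoice)

# XML: the element-tree form SOAP messages are built from.
root = ET.Element("invoice")
for key, value in invoice.items():
    ET.SubElement(root, key).text = value
xml_body = ET.tostring(root, encoding="unicode")

print(json_body)  # {"id": "INV-1001", "total": "250.00"}
print(xml_body)   # <invoice><id>INV-1001</id><total>250.00</total></invoice>
```

The JSON form maps directly onto the data structures most programming languages use natively, which is part of why it displaced XML for new integrations.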


On the web, HTML and its standards, such as the W3C and WHATWG specifications, provide a predictable experience, combined with broadly adopted tools like CSS and JavaScript. Semantic HTML, meanwhile, helps browsers understand the structure of content, which underpins assistive technologies while improving the overall experience.


JSON is the de facto format for AI agents, microservices and real-time data processing. RESTful APIs can also deliver the fine-grained access control (FGAC) that allows organizations to follow the Principle of Least Privilege (PoLP), granting or restricting access and permissions down to narrow swaths of a data table while specifying what can be done with the information.
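A minimal sketch of what FGAC under least privilege can look like in practice, with hypothetical roles and column names:

```python
# Each role is granted only the columns it needs, and the grant records
# whether writes are allowed. Roles, columns and data are illustrative.
GRANTS = {
    "ap_clerk": {"columns": {"vendor", "amount", "due_date"},
                 "write": False},
    "controller": {"columns": {"vendor", "amount", "due_date",
                               "bank_account"},
                   "write": True},
}

def read_row(role: str, row: dict) -> dict:
    # Return only the fields this role is permitted to see.
    allowed = GRANTS[role]["columns"]
    return {k: v for k, v in row.items() if k in allowed}

row = {"vendor": "Acme", "amount": 1200, "due_date": "2025-06-01",
       "bank_account": "XX-9981"}

print(read_row("ap_clerk", row))
# {'vendor': 'Acme', 'amount': 1200, 'due_date': '2025-06-01'}
```

An AP clerk never sees the bank account column at all, rather than seeing it and being told not to use it, which is the point of least privilege.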


BOTTOM LINE: ERP is a natural platform for enterprise AI. Technology decision makers will want to explore with current and prospective vendors what their future options will be. How can they extend the solution with various current and emerging AI technologies? They will also want to review enterprise software portfolios to determine which applications best support AI, and develop strategies to add agentic and other AI tools to various parts of the business based on strategic priorities.
