LLMs : A base layer for building applications

Published by Sohhom on

I came across this tweet recently:

ISA, short for Instruction Set Architecture, is a standard way to expose a computer's capabilities to programmers. It is a standardized interface that lets humans write sequences of simple operations that build up to more complex processing capabilities, such as displaying images or playing music.

In the post-transformer world, we have an emerging new paradigm in which LLMs (or rather, LMMs) make it possible to express cognitive operations that were simply not feasible earlier, or that required a prohibitive amount of developer time and effort.

In my opinion, “LLMs are the new ISAs” is a very apt summary of the situation.

Instruction Set Architectures (ISAs) provided a standard set of commands that all compatible hardware could interpret, which allowed programmers to write code without needing to adapt it for each specific machine’s unique hardware. This standardization meant that developers could focus on software innovation rather than hardware-specific details, greatly reducing development time and complexity. With ISAs like x86 and ARM, a single codebase could run across various devices, making programming more efficient and enabling broader software distribution.

In the early days of computers, ISAs unlocked new applications. Developers could then build complex programs, like operating systems, games, and productivity tools, without starting from scratch for each platform. ISAs essentially created a universal language for computers, facilitating the explosion of software development and accelerating the computing industry’s growth.

In a similar way, the current proliferation of LLMs (both open source and API-delivered) means these models are available as a versatile tool for language comprehension and generation. In other words, they are becoming fundamental building blocks for any software that interacts with, interprets, or produces natural human language.
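One way to picture "LLM as a base layer" in code is a thin interface that exposes a single operation, with applications built as prompt construction on top of it. This is only a minimal sketch: the `LanguageModel` protocol, the `summarize` helper, and the `EchoModel` stub are all hypothetical names for illustration, not a real library's API; a real application would plug in an API-backed or locally hosted model behind the same interface.

```python
from typing import Protocol


class LanguageModel(Protocol):
    """A minimal 'instruction set' for language: one operation, complete()."""

    def complete(self, prompt: str) -> str: ...


def summarize(model: LanguageModel, document: str) -> str:
    # Application logic reduces to prompt construction; the model is the base layer.
    prompt = f"Summarize the following document in one sentence:\n\n{document}"
    return model.complete(prompt)


class EchoModel:
    """A stub backend for testing; echoes the last line of the prompt."""

    def complete(self, prompt: str) -> str:
        return "stub: " + prompt.splitlines()[-1]


print(summarize(EchoModel(), "LLMs are a base layer."))
```

The point of the sketch is the shape of the dependency: everything above the interface is ordinary software, and the model behind it can be swapped much like code targeting one ISA runs on different compatible chips.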

Because language is such a versatile medium—used for communication, analysis, information retrieval, and more—having a standard, highly capable model for understanding language enables a broad range of applications. Here are some apps that could potentially be built with the LLM acting as a base layer:

  • Summarization and Information Extraction: In legal and medical contexts, an LLM can generate summaries, extract key points, or answer specific queries based on complex documents. A legal assistant application powered by an LLM might summarize recent case law for quick review. Instead of manually reviewing every document, professionals get summaries or relevant excerpts, significantly speeding up their workflow.
  • Sentiment Analysis for Brand Monitoring: An LLM-powered app can analyze large volumes of user-generated comments and reviews about a business, identifying positive, neutral, or negative sentiment and even categorizing topics or themes. Sentiment analysis tools can become more nuanced, detecting sarcasm, mixed emotions, or specific tones—something traditional rule-based sentiment analysis struggles with.
  • Code Assistance and Documentation Generation: Developers might use LLMs to help with code generation, debugging, or creating documentation. An LLM can interpret code comments or plain language instructions and generate code snippets, explain code, or write documentation automatically. This helps streamline development processes, reduce repetitive work, and make coding more accessible to non-experts or beginner programmers.
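To make the sentiment-analysis case above concrete, the application-side code can be little more than a prompt template plus defensive parsing of whatever text the model returns. This is a sketch under assumptions: `sentiment_prompt` and `parse_sentiment` are hypothetical helper names, and the actual call to a model (omitted here) would go through whatever client the deployment uses.

```python
def sentiment_prompt(review: str) -> str:
    # The "program" for the sentiment task is just text handed to the base layer.
    return (
        "Classify the sentiment of this review as positive, neutral, or negative.\n"
        f"Review: {review}\n"
        "Answer with a single word."
    )


def parse_sentiment(raw: str) -> str:
    # Normalize the model's free-form reply; models may add casing or punctuation.
    word = raw.strip().lower().rstrip(".")
    return word if word in {"positive", "neutral", "negative"} else "unknown"


# In a real app: label = parse_sentiment(model.complete(sentiment_prompt(review)))
print(parse_sentiment("Positive."))
```

Note how the task-specific logic lives entirely in the prompt and the parser; swapping summarization for sentiment analysis changes the template, not the base layer.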

To learn more about similar ideas, follow Erik Meijer on X (formerly Twitter) and see this page: https://myconf.io/news/myconf-2024/speaker-announcement-erik-meijer/