Chapter 2. How to deliver GenAI-powered apps

The most common method of building GenAI-powered apps requires programming languages, frameworks, data science, and more. There’s a reason for that: few people realise that you can do it all with low-code.

Low-code is a visual software development approach that enables developers to create applications by minimising traditional, intricate hand-coding processes. It introduces a user-friendly, drag-and-drop development environment that makes app development approachable for coding novices and seasoned pros. This section explores the tools and skills needed to build GenAI-powered apps using traditional methods and shows how you can do it with a low-code platform.

Building GenAI apps with traditional code is complex and requires skills that extend into the realms of AI/ML engineering, data science, database administration, and human behaviour. Skip to “Using low-code to accelerate GenAI app development” to learn how low-code dramatically simplifies GenAI development.

Using traditional methods to develop apps embedded with GenAI

GenAI applications combine cloud technology, software development languages, data science, data analytics, DevOps, data processing, and AI models, including a special kind of ML model called the large language model (LLM). LLMs can solve multiple use cases out of the box, while traditional machine learning requires significant model training. They make it possible to take advantage of artificial intelligence without specialised AI talent to build models and engineer features. LLMs aside, developers must become wizards of serverless computing, infrastructure as code, machine learning concepts, algorithms, and other advanced techniques, including supervised and unsupervised learning, GenAI frameworks, deep learning architectures, and optimisation.

Part of the GenAI application development process also requires knowing which model or models to use and deciding whether to build your own or use those that are publicly available. You also have to be familiar with the software development frameworks available for GenAI.

Get to know GenAI terminology

Using GenAI means learning a few new terms. Here are some important concepts that will help you get ahead:

  • Foundation models – The models that generative AI uses are called foundation models. Large and versatile, they are trained on extensive data and can be adapted to do things like write text or analyse images. Their number keeps growing as AI experts experiment with them.
  • Large language models (LLMs) are a type of foundation model. They generate and complete written content on a massive scale. ChatGPT and Claude use LLMs to generate complete paragraphs of text, for example.
  • Vector search is a type of search capability where documents and queries are represented as vectors to enable high search performance over textual data. It’s the new standard for retrieving content from large knowledge bases of text documentation, which in turn is passed on to LLMs for GenAI use cases.
  • Retrieval-augmented generation (RAG) is an AI technique that combines LLMs with content retrieval to enable use cases based on proprietary data sources. RAG consists of retrieving text from a knowledge base (for example, using vector search), which is then passed to an LLM with a set of instructions to generate an answer. One example is a chatbot that retrieves data from a customer support knowledge base and passes that content on to an LLM to generate an informed answer to the user’s question.
  • Diffusion models are used for image generation and video or image synthesis. DALL-E 2 and Midjourney are popular examples.
  • Few-shot prompting improves outputs from LLMs by including examples in the prompt that show the model exactly what you are looking for in terms of output structure, tone, and style (a minimal sketch follows this list).
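
To make few-shot prompting concrete, here is a minimal Python sketch. It assumes the OpenAI Python SDK (the `openai` package, v1 interface) and an API key in the environment; the model name and the example reviews are illustrative only, and any chat-capable LLM client would work the same way.

```python
# Minimal few-shot prompting sketch (assumes the `openai` Python SDK and
# an OPENAI_API_KEY environment variable; any chat-style LLM client works).
from openai import OpenAI

client = OpenAI()

# The examples ("shots") show the model the exact output structure and tone we want.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The checkout flow was fast and painless."
Sentiment: Positive

Review: "The app crashed twice before I could pay."
Sentiment: Negative

Review: "Support answered in minutes and solved my issue."
Sentiment:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-completion model can be used here
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)  # expected: "Positive"
```

In practice, the examples are usually stored alongside the prompt template and versioned with the application so the output format stays stable.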

The languages of traditional GenAI development

The opinions on the programming languages that are best for building GenAI apps are many, varied, and vocal. However, there are a handful that the experts from the development teams at OpenAI, Anthropic, Meta, Cohere, and Google agree are most commonly used to build GenAI apps. Some of these languages handle AI services and backends, while traditional coding languages are used to create user interfaces.

Python

Python is probably the most widely used language in artificial intelligence and GenAI development because of its versatility, simplicity, readability, data visualisation and analytics capabilities, and extensive community support. Developers can use it for natural language processing (NLP), predictive modelling, image recognition, face detection, chatbots, and more. However, for computationally intensive tasks, Python’s performance can be a limitation, so project requirements need careful consideration before choosing it.

JavaScript

Like Python, JavaScript offers accessibility and versatility, so it is a popular language for generative AI. JavaScript runs directly in web browsers, which means developers can use it to build interactive generative AI applications that run on any device with an internet connection. Additionally, JavaScript’s wide adoption and large community provide access to numerous libraries and frameworks specifically designed for AI tasks.

R

Commonly used in statistical computing and data analysis, R provides packages for generative modelling. These packages allow the manipulation and visualisation of data for GenAI development, especially generative art. Because it is primarily a data science language, R can have a steep learning curve for application developers.

Prompt English

Prompt English isn’t a programming language per se, but it does use human language to design effective prompts that generative AI applications can use to “answer” appropriately. Prompt engineers craft inputs that a GenAI tool can interpret optimally so a model can write marketing emails, generate code, engage with customers, create digital art, and so on. Prompt engineering requires linguistic proficiency, problem-solving skills, AI framework and NLP knowledge, creative writing, ethical awareness, a commitment to iterative testing, and a basic understanding of human cognition, psychology, and sociology.

Prompting examples: Chain-of-Thought and Tree-of-Thought

Chain-of-Thought (CoT) prompting is a technique used to improve the ability of LLMs to solve complex problems. It involves structuring prompts so that the problem is broken into smaller, more manageable steps and the reasoning for each step is articulated explicitly before reaching a conclusion. Here’s a high-level overview of the CoT process (an example prompt follows the list):

  • Break down the problem: The prompt is structured to guide the model to break the problem down into a series of logical steps.
  • Reason step-by-step: The model then processes each step individually, articulating the reasoning process that leads from one step to the next.
  • Conclude: The model combines the insights from each step to arrive at a final solution.
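
As an illustration, here is what a simple chain-of-thought prompt might look like. The word problem and wording are invented for this example; the essential pattern is asking the model to articulate each step before giving its final answer.

```python
# Illustrative chain-of-thought prompt (the problem and wording are invented
# for this example); the key is asking the model to show its steps first.
# Expected reasoning: 360 labels per day -> 1,800 for the week -> 1,800 / 500 = 3.6,
# rounded up to 4 rolls.
cot_prompt = """A warehouse ships 120 orders per day. Each order needs 3 labels,
and labels come in rolls of 500. How many rolls are used in a 5-day week?

Think step by step:
1. Work out the labels needed per day.
2. Work out the labels needed for 5 days.
3. Divide by the labels per roll and round up.
Then state the final answer on its own line."""
```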

Tree-of-Thought (ToT) builds on the CoT concept. It creates a tree-like structure of ideas. Each idea in the tree represents a problem-solving step, and the LLM can evaluate each step to decide if it’s viable. This approach is inspired by how humans solve problems through trial and error. It allows the LLM to do the following (a simplified search sketch appears after this list):

  • Explore different ideas.
  • Reevaluate when needed.
  • Self-evaluate intermediate thoughts.
  • Decide whether to continue with a path or choose another.
  • Correct errors automatically.
  • Accumulate knowledge.
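
The sketch below shows the general shape of a ToT-style search in plain Python. It is deliberately simplified: `generate_thoughts` and `score` are hypothetical placeholders for LLM calls that propose and evaluate intermediate reasoning steps, and real implementations are considerably more sophisticated.

```python
# Simplified Tree-of-Thought search sketch (plain Python, no specific library).
# `generate_thoughts` and `score` are hypothetical placeholders for LLM calls
# that propose and evaluate intermediate reasoning steps.
from typing import Callable

def tree_of_thought(
    problem: str,
    generate_thoughts: Callable[[str], list[str]],  # propose next reasoning steps
    score: Callable[[str], float],                  # rate how promising a partial path is
    depth: int = 3,
    beam_width: int = 2,
) -> str:
    # Each path is the problem plus the chain of thoughts explored so far.
    paths = [problem]
    for _ in range(depth):
        candidates = []
        for path in paths:
            for thought in generate_thoughts(path):
                candidates.append(path + "\n" + thought)
        if not candidates:
            break
        # Self-evaluate intermediate thoughts and keep only the best branches;
        # weaker branches are abandoned, mimicking trial and error.
        candidates.sort(key=score, reverse=True)
        paths = candidates[:beam_width]
    return paths[0]  # the highest-scoring reasoning path found
```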

Other languages

Developer communities and forums reveal that other languages not commonly associated with generative AI are also being used. For example, the use of Java for generative AI applications is growing in the IBM engineering community.11 Several popular GenAI environments offer Java SDKs. The language is well-suited to creating AI agents and analytics embedded in business software, and to powering recommendation engines.

Some developers choose C++ for computationally intensive tasks in generative AI.12 It provides low-level control over hardware resources, making it efficient for training large-scale generative models. And for those interested in applying their C# skills to generative AI, there are tutorials that demonstrate how it can be done.13

One language by itself is not sufficient

It’s important to note that one language alone is rarely sufficient for building a GenAI-powered application or embedding GenAI into an existing one. Developers should expect to use two at the very least.

The skills needed to develop GenAI applications with traditional methods

In addition to the traditional GenAI languages, developers must be skilled at using and understanding AI frameworks, libraries, techniques, and more.

AI frameworks

AI frameworks provide high-level APIs and tools for building and implementing AI models faster and more easily. LangChain, Haystack, and LlamaIndex are the most common frameworks used for GenAI development.

LangChain

Created specifically for GenAI, LangChain is an open-source framework, available as both Python and JavaScript libraries, that helps software developers create applications using large language models (LLMs). It provides tools and APIs that simplify the integration of LLMs into projects.

A fundamental concept in LangChain is the chain, a series of automated actions that connect a user’s query to the model’s output. Chains allow developers to combine multiple components to create a coherent application, connect multiple data sources, and generate unique content. They can also be used to provide context-aware responses by holding various AI components together.
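
As a rough illustration of a chain, here is a minimal sketch using LangChain’s Python packages. It assumes the `langchain-core` and `langchain-openai` packages and an OpenAI API key; exact package and import names vary between LangChain releases, so treat this as indicative rather than definitive.

```python
# Minimal LangChain "chain" sketch: prompt -> model -> output parser.
# Assumes the langchain-core and langchain-openai packages and an OpenAI key;
# imports and class names vary between LangChain releases.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Write a two-sentence product description for {product}."
)
llm = ChatOpenAI(model="gpt-4o-mini")  # assumption: any supported chat model works

# The | operator composes the components into a single runnable chain.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"product": "a low-code GenAI chatbot"}))
```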

Haystack

Haystack is an open-source Python framework for building production-ready LLM applications, retrieval-augmented generation pipelines, and state-of-the-art search systems that work intelligently over large document collections. Haystack users report that it is more useful than LangChain for developing large-scale search systems and conversational AI.14

LlamaIndex

LlamaIndex is an orchestration framework designed to amplify the capabilities of LLMs like GPT-4. It indexes private or domain-specific data into formats optimised for LLMs to support RAG. So, when someone asks GenAI a question, it can find relevant information in a large collection of data and use it to provide an accurate and detailed answer. This leads to better, more relevant responses, especially for topics that require up-to-date or specific knowledge.
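
A minimal RAG sketch with LlamaIndex might look like the following. It assumes a recent `llama-index` release (which exposes the `llama_index.core` namespace), the default OpenAI-backed embeddings and LLM, and a local `data` folder of documents; the query itself is purely illustrative.

```python
# Minimal LlamaIndex RAG sketch: index local documents, then query them.
# Assumes the llama-index package (newer releases expose llama_index.core),
# an OpenAI API key for the default LLM/embeddings, and a local ./data folder.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # load proprietary docs
index = VectorStoreIndex.from_documents(documents)      # embed and index them

query_engine = index.as_query_engine()
response = query_engine.query("What does our refund policy say about digital goods?")
print(response)
```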

Hugging Face

Although it is not a typical AI orchestration framework, developers also rely on Hugging Face when building GenAI applications. Hugging Face provides a vast repository of pre-trained models and a community-driven approach to model sharing and development. Hugging Face models can be fine-tuned on specific tasks, saving developers valuable time and computational resources. The Model Hub acts as a central place for sharing, discovering, and collaborating on NLP models, and the Hugging Face Transformers library provides easy-to-use APIs for working with pre-trained models, enabling developers to integrate powerful language capabilities into their applications with minimal effort.
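
For example, a few lines are enough to pull a pre-trained model from the Model Hub and generate text. The sketch below assumes the `transformers` package with a backend such as PyTorch installed; the small GPT-2 model is used purely for illustration.

```python
# Minimal Hugging Face Transformers sketch: load a pre-trained model from the
# Model Hub and generate text with the high-level pipeline API.
# Assumes the transformers package (plus a backend such as PyTorch) is installed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small, freely available model
result = generator("Low-code platforms make GenAI development", max_new_tokens=30)
print(result[0]["generated_text"])
```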

Deep learning frameworks

In addition, some GenAI applications were developed with deep learning frameworks such as PyTorch, TensorFlow, and Keras. For example, PyTorch powers ChatGPT, and TensorFlow and Keras have been used to develop generative adversarial networks, which are foundation models used in anomaly detection, data augmentation, picture synthesis, and text-to-image and image-to-image translation. However, these frameworks are not focused on GenAI and are used less in current GenAI projects.

Natural language processing techniques

Natural language processing (NLP) trains computers to understand, interpret, and generate human language, bridging the gap between human language and machine language. The list of NLP techniques is long (14 and counting) and includes the following; a short sketch of two of these techniques appears after the list:

  • Tokenisation: Breaking down a text into smaller units, such as words or subwords (tokens).
  • Part-of-speech tagging: Labelling each word in a sentence with its part of speech, such as noun, verb, or adjective, to understand the role and context of words.
  • Named entity recognition: Identifying and classifying names of people, organisations, locations, dates, and more in a text.
  • Sentiment analysis: Determining the emotion expressed in a piece of text, such as positive, negative, or neutral.
  • Language modelling: Building statistical models that predict the next word in a sequence for a given language; these models underpin NLP tasks, including text generation.
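
To give a flavour of these techniques, the short sketch below demonstrates tokenisation and sentiment analysis. It assumes the Hugging Face `transformers` package, which is one option among many (spaCy and NLTK are common alternatives).

```python
# Small illustrations of two NLP techniques mentioned above, using the
# Hugging Face transformers package (an assumption; spaCy or NLTK work too).
from transformers import AutoTokenizer, pipeline

# Tokenisation: break text into smaller units (subword tokens).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("Low-code accelerates GenAI development"))

# Sentiment analysis: classify the emotion expressed in a piece of text.
classifier = pipeline("sentiment-analysis")
print(classifier("The new release is fast and the support team is great."))
```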

Large-scale data processing, handling, and storage

A modern data stack for GenAI is a combination of tools and technologies that enable the collection, processing, storage, analysis, and utilisation of data for training and deploying generative models. Here’s a look at some of the key components.

Real-time data streaming platforms

Facilitate the collection and ingestion of data from various sources, including user interactions, sensors, or any relevant input.

  • Apache Kafka
  • Amazon Kinesis
  • Azure Event Hubs
  • Google Cloud Pub/Sub

Data lakes or warehouses

Provide a centralised repository for structured and unstructured data.

  • Amazon Simple Storage Service
  • Google Cloud Storage
  • Azure Blob Storage
  • Modern data warehouse (BigQuery)
  • Data lake (Snowflake)

Distributed data processing frameworks

Transform and clean data and handle large-scale batch processing or enable real-time stream processing.

  • Apache Spark
  • Apache Flink
  • Apache Beam
  • Apache Tika

Data preparation and feature engineering

Clean, prepare, and engineer features from raw data. Feature engineering is crucial for training effective generative models.

  • Pandas
  • Dask
  • Apache Spark MLlib
  • Vector DB
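
As a small example of the data preparation and feature engineering stage, here is a sketch using pandas, one of the tools listed above. The CSV file and column names are hypothetical.

```python
# Minimal data-preparation sketch with pandas (one of the tools listed above).
# The CSV file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("support_tickets.csv")

# Clean: drop duplicates and rows missing the free-text description.
df = df.drop_duplicates().dropna(subset=["description"])

# Simple feature engineering: text length and a weekend flag from the timestamp.
df["created_at"] = pd.to_datetime(df["created_at"])
df["description_length"] = df["description"].str.len()
df["is_weekend"] = df["created_at"].dt.dayofweek >= 5

df.to_csv("tickets_prepared.csv", index=False)  # hand off to training or a feature store
```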

Platforms for deploying GenAI models and apps

Kubernetes and Docker are essential tools for moving GenAI models into production and ensuring they can serve predictions at scale. Docker helps by packaging an AI model and its environment into a container so it can run reliably across different computing environments. Kubernetes manages these containers—helping to deploy, scale, and operate them efficiently. Although these platforms streamline the process of deploying and managing GenAI applications, learning them is complicated.

 

Building GenAI apps using traditional methods is complex and daunting

The formidable list of languages and skills required to build GenAI-powered apps can seem overwhelming. For the dogged and determined developer, it’s possible to learn them all and go on to have a successful career. But there’s a better way than going all-in on months or years of training: use low-code.

How developers are learning GenAI skills

Here are some of the ways developers are skilling up for GenAI:

  • Formal education such as university courses and online programs
  • Hands-on experience with projects and open-source contribution
  • Staying up to date on trends by consulting research papers, reading blogs, reaching out to other developers with expertise, and attending conferences
  • Collaboration with predictive analytics experts, social scientists, engineers, and ethicists to address data, ethical, and societal challenges
  • Asking for help and mentorship in communities like Stack Overflow and GitHub or in subreddits

Using low-code to accelerate GenAI app development (and excitement)

Low-code platforms have proven to be excellent at building enterprise applications quickly and at a lower cost than traditional coding. Generative AI apps are no different from other applications, and low-code gives you a choice. You can invest heavily in a GenAI team, training, and tools to do it all from scratch, or you can fire up a low-code platform and get started almost right away. Visual interfaces, easy-to-use connectors, pre-built components, and drag-and-drop functionality – these features of low-code platforms can open many doors for developers who would like to (or have been asked to) build GenAI apps.

Focusing on strategy instead of execution

With low-code, developers can focus on what their GenAI applications should do, not how to make them do it. They can connect to foundation models with a minimum of effort and a maximum of speed, simplicity, and satisfaction. And for data scientists who are interested in building applications with their models, low-code is one of the best ways to do it.

Low-code platforms make it easy to bring generative AI into apps and workflows. With a few clicks, developers can bring together the best of LLMs and RAG. Low-code platforms also offer connectors to GPTs, LLMs, and GenAI functionality from Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. With these connections, no one has to go through the hassle of building models or choosing frameworks.

Low-code also enables developers to create their own AI agents without writing a line of Java or Python or tapping into the LangChain library. At least one low-code platform provides development teams with what they need to build generative AI agents and embed them in new or existing applications. Some platforms even accelerate the agent-building process by providing a library of quick-start generative AI apps, including a support ticket application that answers customer questions accurately and a sales intelligence application that summarises support calls.

GenAI apps built with low-code

  • A manufacturing company used a low-code OpenAI connector to integrate ChatGPT with its employee application.
  • A software company is using GenAI to provide customer support and close tickets more quickly, improving the customer experience.
  • In an ecommerce application, customers use a GenAI chatbot developed with low-code to get help placing orders, personalised recommendations, and answers to frequently asked questions.
  • A GenAI-powered virtual assistant delivered by a low-code platform enables employees to access information and complete tasks more efficiently.
  • A GenAI application for sales provides call summaries in Slack, suggests the next best actions in an email, and generates actionable insights in Salesforce.
  • A developer community created a user experience that connects to multiple LLMs and generates translations of its forums.

It’s important to note that although developers without basic AI and ML knowledge can use a low-code platform to build a GenAI application, generally it’s not the best idea. Yes, low-code eliminates a lot of the complexity, especially compared to traditional development, but there are things that developers still need to know.

What developers need to know to build GenAI apps with low-code

When developers use low-code to build GenAI apps, the knowledge required is a much shorter list than that of traditional development, and it’s more about understanding concepts than intense coding and framework study. Here are examples:

  • A basic understanding of AI and ML concepts, including supervised and unsupervised learning, neural networks, and natural language processing.
  • Familiarity with how data is collected, cleaned, and preprocessed for training and evaluating GenAI models effectively, as well as how data integration, data quality assessment, and feature engineering work.
  • Principles of user experience (UX) design. Creating user-friendly interfaces and intuitive workflows is essential for the adoption and success of GenAI applications, so developers should be able to create visually appealing and easy-to-use applications.

These are just some of the reasons for considering low-code for GenAI app delivery. However, here’s one that’s even more compelling: while traditional coding is being upended by GenAI and might one day be eliminated by it, low-code and GenAI can work in tandem.

Useful resources

  • Gartner® Emerging Tech Impact Radar – Explore Gartner’s in-depth analysis on generative AI.
  • AI Adoption in Software Development: Report Insights – How enterprises are navigating AI adoption and implementation.


11 Andre Tost, “Making the case for Java in Generative AI” (plus comments), LinkedIn, 24 November 2023.

12 “C++ Generative AI Inference: Production Ready Speed and Control,” ChemicalQDevice, 13 June 2024.

13 Chris Pietschmann, “Build a Generative AI App in C# with Phi-3 SLM and ONNX,” Build Five Nines, 18 May 2024.

14 Sarfraz Nawaz, “Langchain vs Haystack: Which is Best for AI development?,” LinkedIn, 1 April 2024.

Originally published on OutSystems.com