Demystifying LangChain Expression Language

Rajesh K · Published in GoPenAI · Dec 22, 2023


Introduction

Navigating complex operations with large language models (LLMs) can be challenging. The LangChain Expression Language (LCEL) addresses this by providing a declarative, effective way to build and run advanced language processing pipelines. LCEL streamlines chain composition, easing the move from prototype to production, and its declarative style simplifies streaming, batch processing, and asynchronous operations. This post walks through LCEL's features, from initial setup to its more advanced functionality.

Advantages of LangChain Expression Language:

  1. Simplified Chain Composition: core components are connected through intuitive pipe operations, streamlining how chains are built.
  2. Efficient Language Model Calls: built-in support for batch, async, and streaming APIs removes the work of hand-optimizing language model calls.
  3. Structured Conversational Flows: LCEL provides a clear structure for Conversation Retrieval Chains, VectorStore retrieval, and memory-based prompts.
  4. Function Calling: similar to OpenAI's function calling, LCEL offers a clean way to attach function definitions to a chain, promoting code clarity and usability.

To grasp the significance of LCEL, let's explore practical code examples that showcase what it can do.

Installing prerequisite libraries

pip install -qU \
langchain \
openai \
docarray \
tiktoken \
faiss-cpu

To grasp LCEL syntax, let's start by constructing a basic chain using the traditional LangChain syntax:

import os
os.environ["OPENAI_API_KEY"] = "<openai-api-key>"

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

prompt = ChatPromptTemplate.from_template(
    "Give me a small report about {topic}"
)
model = ChatOpenAI(
    temperature=0,
    model="gpt-3.5-turbo",
    # openai_api_key=openai_api_key
)
output_parser = StrOutputParser()

In a conventional LangChain setup, these elements would be linked together using an LLMChain:

from langchain.chains import LLMChain

chain = LLMChain(
    prompt=prompt,
    llm=model,
    output_parser=output_parser
)

# and run
out = chain.run(topic="Large Multimodal Model")
print(out)

The same chain can be constructed the LCEL way, using the pipe operator (|):

# Using LCEL 

lcel_chain = prompt | model | output_parser

# and run
out = lcel_chain.invoke({"topic": "Large Multimodal Model"})
print(out)

LCEL simplifies the creation of intricate chains from fundamental components by offering:

  1. Unified Interface: every LCEL object adheres to the Runnable interface, which exposes a standardized set of invocation methods (invoke, batch, stream, ainvoke, …), as sketched below.
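
As a minimal sketch, the lcel_chain built above can be called through any of these standardized methods (outputs will of course vary with the model):

# Single input -> single output
print(lcel_chain.invoke({"topic": "Large Multimodal Model"}))

# Several inputs processed together
print(lcel_chain.batch([
    {"topic": "Large Multimodal Model"},
    {"topic": "Retrieval-Augmented Generation"},
]))

# Chunks printed as they arrive
for chunk in lcel_chain.stream({"topic": "Large Multimodal Model"}):
    print(chunk, end="", flush=True)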

Understanding Runnables and pipes

The syntax mirrors standard Unix piping, brought into Python: the | operator takes the output of the expression on its left and feeds it into the component on its right.

When the Python interpreter encounters the | operator between two objects, as in obj1 | obj2, it calls the __or__ method of obj1 with obj2 as the argument (falling back to obj2.__ror__(obj1) if needed). In other words, obj1 | obj2 and obj1.__or__(obj2) are interchangeable. With this in mind, we can write a Runnable class that wraps a function and makes it chainable with the pipe operator |. This is the essence of LCEL.

class Runnable:
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        def chained_func(*args, **kwargs):
            return other(self.func(*args, **kwargs))
        return Runnable(chained_func)

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

def add_one(x):
    return x + 1

def add_two(x):
    return x + 2

# run them using the object approach
chain_or = Runnable(add_one).__or__(Runnable(add_two))
print(chain_or(3))

# run using a | b
chain_pipe = Runnable(add_one) | Runnable(add_two)
print(chain_pipe(3))

Exploring common prompt tasks and scenarios

  • To regulate text generation in your chain, you can bind stop sequences to the model. In this configuration, generation stops as soon as a newline character is produced:
chain = prompt | model.bind(stop=["\n"])
result = chain.invoke({"topic": "Large Multimodal Model"})
print(result)
  • LCEL lets you attach function-call information to your chain, adding functionality and valuable context during text generation. This example attaches a function definition to produce a structured summary:
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser

functions = [
    {
        "name": "summary",
        "description": "A summary",
        "parameters": {
            "type": "object",
            "properties": {
                "setup": {"type": "string", "description": "LMM summary"},
                "punchline": {
                    "type": "string",
                    "description": "Summary",
                },
            },
            "required": ["setup", "punchline"],
        },
    }
]

chain = (
    prompt
    | model.bind(function_call={"name": "summary"}, functions=functions)
    | JsonOutputFunctionsParser()
)
result = chain.invoke({"topic": "Large Multimodal Model"}, config={})
print(result)
  • LCEL enables the creation of retrieval-augmented generation (RAG) chains, merging retrieval and language-generation steps into a single composition:
from operator import itemgetter

from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain.vectorstores import FAISS

# Create a vector store and retriever
vectorstore = FAISS.from_texts(
    [
        "ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings",
        "Artificially induced intelligence",
    ],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

# Define templates for prompts
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

model = ChatOpenAI()

# Create a retrieval-augmented generation chain
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

result = chain.invoke("what is Artificial intelligence?")
print(result)
  • Runnables can also be concatenated into multiple chains, connecting distinct steps into one cohesive flow. In this example, the first chain answers whether the given city is the capital of its country, and that answer is passed as the {city} input of a second chain, which identifies the country and responds in the requested language:
prompt1 = ChatPromptTemplate.from_template("is this {city} the capital of this country?")
prompt2 = ChatPromptTemplate.from_template(
    "what country is the city {city} in? respond in {language}"
)

model = ChatOpenAI()

chain1 = prompt1 | model | StrOutputParser()

chain2 = (
    {"city": chain1, "language": itemgetter("language")}
    | prompt2
    | model
    | StrOutputParser()
)

result = chain2.invoke({"city": "Rome", "language": "spanish"})
print(result)
  • LCEL facilitates the splitting and merging of chains through RunnableMaps. In the following branching-and-merging example, the chain constructs an argument, lists its pros and cons in parallel branches, and then generates a conclusive response:
from operator import itemgetter

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

planner = (
    ChatPromptTemplate.from_template("Generate an argument about: {input}")
    | ChatOpenAI()
    | StrOutputParser()
    | {"base_response": RunnablePassthrough()}
)

arguments_for = (
    ChatPromptTemplate.from_template(
        "List the pros or positive aspects of {base_response}"
    )
    | ChatOpenAI()
    | StrOutputParser()
)
arguments_against = (
    ChatPromptTemplate.from_template(
        "List the cons or negative aspects of {base_response}"
    )
    | ChatOpenAI()
    | StrOutputParser()
)

final_responder = (
    ChatPromptTemplate.from_messages(
        [
            ("ai", "{original_response}"),
            ("human", "Pros:\n{results_1}\n\nCons:\n{results_2}"),
            ("system", "Generate a final response given the critique"),
        ]
    )
    | ChatOpenAI()
    | StrOutputParser()
)

chain = (
    planner
    | {
        "results_1": arguments_for,
        "results_2": arguments_against,
        "original_response": itemgetter("base_response"),
    }
    | final_responder
)

result = chain.invoke({"input": "Agile"})
print(result)

Runnable Batch, Stream & Async processing

LCEL also streamlines batch processing of LLM queries by running multiple inputs through the same chain. The batch method issues parallel LLM calls under the hood, improving throughput when several inputs need to be processed.

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_template("Given the items: {items}, What games can I play")
chain = prompt | model | StrOutputParser()

response = chain.batch([
    {"items": "bat, ball, gloves"},
    {"items": "stick, ball, gloves, pads"},
])
print(response)
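
If you need to cap how many requests run at once (to respect rate limits, for example), batch also accepts a config. A small sketch using the standard max_concurrency option, assuming the same chain as above:

# Limit parallel calls; inputs beyond the limit are queued
response = chain.batch(
    [{"items": "bat, ball, gloves"}, {"items": "stick, ball, gloves, pads"}],
    config={"max_concurrency": 2},
)
print(response)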

Stream functionality in LangChain surfaces output as it is generated, which suits dynamic chatbots and other live applications. In the example below, the model's tokens are printed as soon as they arrive, with no waiting for the full response:

chain = prompt | model

for s in chain.stream({"items": "ball, jersey, shoes"}):
    print(s.content, end="")

By leveraging ainvoke with await, the chain executes asynchronously. This lets the call run without blocking the rest of the program, improving responsiveness and throughput in async applications. (Top-level await works in notebooks; a script-friendly variant follows below.)

response = await chain.ainvoke({"items": "shuttle cock, bat"})
print(response)
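
In a plain Python script, where top-level await is not available, the same call can be wrapped in an async function and driven with asyncio.run. A minimal sketch assuming the chain defined above:

import asyncio

async def main():
    # ainvoke awaits the LLM call without blocking the event loop
    response = await chain.ainvoke({"items": "shuttle cock, bat"})
    print(response)

asyncio.run(main())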

Parallelize steps

RunnableParallel, also known as RunnableMap, runs multiple Runnables concurrently on the same input and returns their outputs as a map (a dict keyed by the name you give each branch), providing an efficient and straightforward approach to parallel processing.

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel

model = ChatOpenAI()
story_chain = ChatPromptTemplate.from_template("tell me a story about {topic}") | model
poem_chain = (
    ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model
)

map_chain = RunnableParallel(story=story_chain, poem=poem_chain)

map_chain.invoke({"topic": "goofy"})
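
A plain dict appearing inside a composition is coerced to a RunnableParallel automatically, so the map can also be written inline and fed into a follow-up step. The sketch below reuses story_chain, poem_chain, and model from above, and adds a hypothetical summary_prompt purely for illustration:

from langchain.schema.output_parser import StrOutputParser

# Hypothetical follow-up prompt that consumes both parallel branches
summary_prompt = ChatPromptTemplate.from_template(
    "Combine this story and poem into a single short paragraph:\n\n{story}\n\n{poem}"
)

combined_chain = (
    {"story": story_chain | StrOutputParser(), "poem": poem_chain | StrOutputParser()}
    | summary_prompt
    | model
    | StrOutputParser()
)

print(combined_chain.invoke({"topic": "goofy"}))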

Execute custom functions (Lambda)

RunnableLambda, within LangChain, is an abstraction that turns custom Python functions into pipeable components. This example is similar to the Runnable class introduced earlier in the article.

from langchain_core.runnables import RunnableLambda

def add_one(x):
    return x + 1

def add_two(x):
    return x + 2

# wrap the functions with RunnableLambda
add_one = RunnableLambda(add_one)
add_two = RunnableLambda(add_two)

chain = add_one | add_two
chain.invoke(0)
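
As a side note, when one side of the | is already a Runnable, LangChain coerces a plain callable on the other side into a RunnableLambda automatically, so only one explicit wrap is strictly needed. A small sketch of that behaviour, reusing the wrapped add_one from above:

# add_one is already a RunnableLambda; the raw lambda is coerced for us
chain2 = add_one | (lambda x: x * 10)
print(chain2.invoke(2))  # 2 + 1 = 3, then 3 * 10 = 30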

Conclusion

As we conclude this examination, it's worth noting that the LangChain Expression Language (LCEL) spans well beyond the topics covered here. It extends into diverse areas such as Conversational Retrieval Chains, multi-LLM chain fusion, tools integration, memory, SQL querying, and Python REPL coding, showcasing its versatility and broad functionality.
