13 LLM Frameworks

LangChain

  • Fully featured, powerful, and heavyweight framework.
    from langchain_openai import ChatOpenAI
    from IPython.display import Markdown, display
    
    # tell_a_joke is a prompt string assumed defined earlier in the notebook
    llm = ChatOpenAI(model='gpt-5-mini')
    response = llm.invoke(tell_a_joke)
    
    display(Markdown(response.content))
    

LiteLLM

  • Quick and lightweight framework.
  • Includes a built-in Prompt Caching feature: identical or similar follow-up queries to the LLM cost less because cached responses are reused instead of triggering a new paid call.
    from litellm import completion
    from IPython.display import Markdown, display
    
    # tell_a_joke is a list of chat messages assumed defined earlier in the notebook
    response = completion(model='gpt-5-mini', messages=tell_a_joke)
    response_content = response.choices[0].message.content
    display(Markdown(response_content))
    
    # Usage statistics
    print(f'Input tokens: {response.usage.prompt_tokens}')
    print(f'Output tokens: {response.usage.completion_tokens}')
    print(f'Total tokens: {response.usage.total_tokens}')
    print(f'Total cost: {response._hidden_params["response_cost"] * 100:.4f} cents.')