LangChain Async explained. Make multiple OpenAI ChatGPT API calls at the same time video files (Patreon)
Downloads
Content
Code files are for video: https://youtu.be/eAikW9o1Ros
MAKING PARALLEL CALLS TO THE LLM WILL COST EXTRA TOKENS (every call is billed separately)!!
Async is important because it lets you run multiple calls to the OpenAI API concurrently instead of one by one. This yields significant speed gains whenever you need to make several calls at once.
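To illustrate the speed gain, here is a minimal sketch using Python's built-in asyncio. Note that `fake_llm_call` is a hypothetical stand-in that simulates network latency with `asyncio.sleep`, not the actual LangChain/OpenAI API (in LangChain you would use async methods such as `agenerate` on an LLM object):

```python
import asyncio
import time

async def fake_llm_call(prompt: str) -> str:
    # Hypothetical stand-in for an OpenAI API call.
    # A real call would await the LangChain/OpenAI async API instead.
    await asyncio.sleep(0.2)  # simulate network latency
    return f"response to: {prompt}"

async def run_serial(prompts):
    # One call at a time: total time is roughly the sum of all latencies.
    return [await fake_llm_call(p) for p in prompts]

async def run_parallel(prompts):
    # asyncio.gather schedules all calls concurrently:
    # total time is roughly the latency of the slowest single call.
    return await asyncio.gather(*(fake_llm_call(p) for p in prompts))

def main():
    prompts = [f"question {i}" for i in range(5)]

    start = time.perf_counter()
    asyncio.run(run_serial(prompts))
    serial = time.perf_counter() - start

    start = time.perf_counter()
    results = asyncio.run(run_parallel(prompts))
    parallel = time.perf_counter() - start

    print(f"serial: {serial:.2f}s, parallel: {parallel:.2f}s")
    return serial, parallel, results

if __name__ == "__main__":
    main()
```

With 5 simulated calls of 0.2s each, the serial version takes about 1 second while the parallel version takes about 0.2 seconds, which is the kind of speedup the video demonstrates with real API calls.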
Relevant links:
langchain async blog post: https://blog.langchain.dev/async-api/
miniconda: https://docs.conda.io/en/latest/miniconda.html
Python Async walkthrough: https://realpython.com/async-io-python/#setting-up-your-environment
langchain async for llm calls: https://langchain.readthedocs.io/en/latest/modules/llms/async_llm.html
langchain async for chain calls: https://langchain.readthedocs.io/en/latest/modules/chains/async_chain.html
langchain async for agent calls: https://langchain.readthedocs.io/en/latest/modules/agents/examples/async_agent.html
wikipedia article about Kaprekar's constant: https://en.wikipedia.org/wiki/6174_(number)
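For reference, Kaprekar's constant (6174, linked above) is the fixed point of a simple digit-sorting routine: take any four-digit number with at least two distinct digits, subtract its digits sorted ascending from its digits sorted descending, and repeat; you always reach 6174 within at most seven steps. A short sketch of that routine (function names are my own, for illustration):

```python
def kaprekar_step(n: int) -> int:
    # Pad to 4 digits, then subtract ascending-sorted from descending-sorted.
    digits = f"{n:04d}"
    hi = int("".join(sorted(digits, reverse=True)))
    lo = int("".join(sorted(digits)))
    return hi - lo

def steps_to_6174(n: int) -> int:
    # Count iterations of kaprekar_step until reaching the constant.
    # Assumes n is a 4-digit number whose digits are not all identical.
    count = 0
    while n != 6174:
        n = kaprekar_step(n)
        count += 1
    return count
```

For example, starting from 3524: 5432 − 2345 = 3087, then 8730 − 0378 = 8352, then 8532 − 2358 = 6174, so three steps.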