Using Prompts & LangChain for non-agent use cases?

Hey everyone! The prompts Quickstart guide with LangChain is great:

However, when logging a simpler model than the one in the agent example, I cannot see a Trace that shows me the model architecture etc. How can I accomplish this? My use case is comparing different LLMs from OpenAI and Hugging Face on a single set of prompts. I'm new to W&B, so your help is much appreciated!

from langchain import PromptTemplate, OpenAI, LLMChain
from langchain.callbacks.tracers import WandbTracer

wandb_config = {"project": "wandb_langchain_simple_documentation"}
tracer = WandbTracer(wandb_config)

prompt_template = "What is a good name for a company that makes {product}?"

llm = OpenAI(temperature=0)
llm_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(prompt_template),
)
llm_chain("colorful socks")

Hey @brutusai , thanks for giving Prompts a spin!

Unfortunately, the Model Architecture view for the LangChain integration isn't working at the moment due to a change in how LangChain serialises its models. We don't have a timeline on a fix just yet, as we depend on LangChain, but it is on our radar and we will be pushing to get it resolved.
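In the meantime, one way to handle the model-comparison use case without relying on the Trace view is to run the same prompts through each model, collect the completions, and log them as a `wandb.Table`. A minimal sketch; the model names and `fake_llms` callables below are placeholders standing in for real OpenAI / Hugging Face clients, and the project name is made up:

```python
def complete_with(llm, prompt):
    """Call one model on one prompt; swap in a real client call here."""
    return llm(prompt)

# Placeholder "models": in practice these would be real LLM clients.
fake_llms = {
    "openai-davinci": lambda p: f"[openai-davinci] reply to: {p}",
    "hf-flan-t5": lambda p: f"[hf-flan-t5] reply to: {p}",
}

prompts = [
    "What is a good name for a company that makes colorful socks?",
    "What is a good name for a company that makes robot vacuums?",
]

# One row per (model, prompt) pair.
rows = [
    [name, prompt, complete_with(llm, prompt)]
    for name, llm in fake_llms.items()
    for prompt in prompts
]

def log_to_wandb(rows):
    """Log the comparison as a W&B Table (requires `pip install wandb`)."""
    import wandb
    run = wandb.init(project="llm-prompt-comparison")
    table = wandb.Table(columns=["model", "prompt", "completion"], data=rows)
    run.log({"prompt_comparison": table})
    run.finish()
```

Calling `log_to_wandb(rows)` in a logged-in environment gives you a table in the run page where each model's completions line up against the same set of prompts, which makes side-by-side comparison straightforward even without the architecture Trace.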

Is there any further update on this?

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.