Package llmscope
llmscope is a Python library designed to simplify interactions with Large Language Models (LLMs) by providing a stateful, fluent interface for managing conversation history, tool usage, and response parsing. It leverages libraries like mirascope for LLM calls and pydantic for data validation and parsing.
A Taste of llmscope
from typing import Literal

from pydantic import BaseModel, Field

import llmscope


class CodeItemDoc(BaseModel):
    summary: str = Field(description="A concise description of this function/struct/enum...")
    example: str = Field(description="Provide an example of how this item is used.")


@llmscope.fn("openai", model="gpt-4o")
def generate_doc(llm, code: str, item_type: Literal["function", "struct"]) -> CodeItemDoc:
    # Organize your prompts like `print`.
    llm.system("You are a professional software engineer writing documentation.")
    if item_type == "function":
        llm.system("* For functions, start with a verb and describe the functionality.")
    else:
        llm.system("* For structs, start with a noun phrase summarizing this type.")
    llm.user(code)
    # Elegant structured-output selectors.
    # Uses `os.fork` to collect the different JSON schemas requested in different places.
    # Equivalent to mirascope.llm.call(tools=[ViewCodeSpace], response_model=CodeItemDoc).
    while tool := llm.try_tool(ViewCodeSpace):  # ViewCodeSpace: a user-defined BaseTool
        llm.system(tool.call())
    return llm.parse(CodeItemDoc)
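The example above assumes a ViewCodeSpace tool defined elsewhere. For illustration only, a minimal sketch of such a tool might look like this (the name, field, and behavior are assumptions, not part of llmscope):

from llmscope import BaseTool, Field

class ViewCodeSpace(BaseTool):
    """Hypothetical tool letting the model request the source of a file."""
    path: str = Field(..., description="Path of the file to inspect.")

    def call(self) -> str:
        # Illustrative only: return the file contents as the tool output.
        with open(self.path) as f:
            return f.read()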
Installation
pip install llmscope[openai,anthropic,...]
(Note: like Mirascope, llmscope pulls in different dependencies for different LLM providers; see the full list of providers at https://mirascope.com/api/llm/call/.)
Usage
1. Defining an Agent with @fn
Use the @fn decorator to wrap a function that defines the agent's behavior. The decorator automatically injects an LLMState instance as the function's first argument.
from llmscope import fn, BaseTool, Field

# Define a tool (using mirascope's BaseTool).
class EmotionTool(BaseTool):
    """Tool to represent a chosen emotion."""
    emotion: str = Field(..., description="The name of the emotion.")
    reason: str = Field(..., description="A brief reason for choosing this emotion.")

    def call(self):
        print(f"Tool Call: Emotion={self.emotion}, Reason={self.reason}")
        return f"Emotion {self.emotion} acknowledged."

# Define the agent function.
@fn(provider="openai", model="gpt-4o")  # Configure provider and model.
def emotion_agent(llm, initial_prompt: str):
    llm.system("You are an LLM that chooses emotions when asked.")
    llm.user(initial_prompt)

    # Loop while the LLM decides to use the EmotionTool.
    while tool_call := llm.try_tool(EmotionTool):
        result = tool_call.call()  # Execute the tool.
        print(f"Tool Result: {result}")
        # Add the tool execution result back to the conversation.
        llm.assistant(f"Okay, I chose {tool_call.emotion}.")  # Or use llm.tool(tool_call=..., content=result) with mirascope.
        llm.user("Okay, choose another different emotion and explain why.")

    # If no tool is called, get the final text response.
    try:
        final_response = llm.generate()
        print(f"Agent's final text response: {final_response.content}")
    except Exception as e:
        # Handle cases where generate might fail or is used incorrectly (e.g., after try_parse).
        print(f"Could not generate final response: {e}")
    return "Agent finished."

# Run the agent.
result = emotion_agent("Choose an emotion and explain why.")
print(result)
2. Key LLMState Methods
- config(provider=..., model=..., call_params=...): Sets the LLM provider, model, and optional call parameters.
- msg(role, *message): Adds a message to the history.
- system(*message), user(*message), assistant(*message): Convenience methods for msg.
- try_tool(ToolClass): Attempts to get the LLM to use the specified tool. Returns the tool instance if successful, None otherwise. Can be used in a loop.
- try_tools(*ToolClasses): Similar to try_tool but for multiple possible tools. Returns a list of successful tool calls.
- try_parse(PydanticModel): Attempts to parse the LLM response into the given Pydantic model without finalizing the request. Useful for checking intermediate structured outputs.
- parse(PydanticModel): Finalizes the request and parses the LLM response into the given Pydantic model. Raises ValidationError on failure.
- generate(): Finalizes the request and returns the raw LLM response (BaseCallResponse from mirascope); typically used when no specific parsing or tool use is expected at the end.
Important: Methods like try_tool, try_parse, parse, and generate trigger internal state management and potentially LLM calls. Avoid calling config or msg between a try_ call and its corresponding parse or generate call within the same logical block.
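To make the intended ordering concrete, here is a minimal sketch (the WeatherReport model and prompts are made up):

from pydantic import BaseModel
from llmscope import fn

class WeatherReport(BaseModel):
    city: str
    summary: str

@fn(provider="openai", model="gpt-4o")
def weather_agent(llm, city: str) -> WeatherReport:
    llm.config(call_params={"temperature": 0.2})  # Fine here: no try_/parse call yet.
    llm.system("You are a weather reporter.")
    llm.user(f"Describe today's weather in {city}.")
    # From here on, no config/msg until parse() finalizes the request.
    return llm.parse(WeatherReport)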
License
Functions
def fn(provider: Literal['anthropic', 'azure', 'bedrock', 'cohere', 'gemini', 'google', 'groq', 'litellm', 'mistral', 'openai', 'vertex', 'xai'] | None = None,
model: str | None = None) -> callable
def fn(provider: Provider | None = None, model: str | None = None) -> callable:
    """Create a decorator to initialize and inject LLMState into a function.

    This function acts as a factory that generates a decorator. When this
    decorator is applied to a function, it modifies the function's behavior.
    The decorated function will receive an `LLMState` object as its first
    argument. This `LLMState` object can be used for LLM generation, managing
    messages, and handling tools and response schemas.

    The decorated function is expected to accept `LLMState` as its first
    positional argument, followed by its original arguments (`*args`,
    `**kwargs`), and should return a `runner` object.

    Args:
        provider (Provider | None, optional): The provider instance to assign
            to `state.provider`. Defaults to None.
        model (str | None, optional): The model identifier string to assign
            to `state.model`. Defaults to None.

    Returns:
        callable: A decorator function that wraps the target function,
            injects the configured `LLMState`, and returns the `runner`
            produced by the target function.
    """
    def decorator(func):
        def wrapper(*args, **kwargs):
            state = LLMState()
            state.provider = provider
            state.model = model
            runner = func(state, *args, **kwargs)
            return runner
        return wrapper
    return decorator

Create a decorator to initialize and inject LLMState into a function.
This function acts as a factory that generates a decorator. When this decorator is applied to a function, it modifies the function's behavior. The decorated function will receive an LLMState object as its first argument. This LLMState object can be used for LLM generation, managing messages, and handling tools and response schemas.

The decorated function is expected to accept LLMState as its first positional argument, followed by its original arguments (*args, **kwargs), and should return a runner object.

Args
provider: Provider | None, optional - The provider instance to assign to state.provider. Defaults to None.
model: str | None, optional - The model identifier string to assign to state.model. Defaults to None.

Returns
callable - A decorator function that wraps the target function, injects the configured LLMState, and returns the runner produced by the target function.
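For instance, a decorated function and its call site might look like this (a minimal sketch; the prompts are made up):

from llmscope import fn

@fn(provider="openai", model="gpt-4o")
def haiku_writer(llm, topic: str) -> str:
    llm.system("You write haiku.")
    llm.user(f"Write a haiku about {topic}.")
    return llm.generate().content

# Callers never pass the LLMState; the decorator injects it as `llm`.
print(haiku_writer("autumn rain"))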
Classes
class LLMRequest (provider: Literal['anthropic', 'azure', 'bedrock', 'cohere', 'gemini', 'google', 'groq', 'litellm', 'mistral', 'openai', 'vertex', 'xai'],
model: str,
messages: list[mirascope.core.base.message_param.BaseMessageParam],
tools: list[mirascope.core.base.tool.BaseTool],
call_params: dict | None = None,
response_model: pydantic.main.BaseModel | None = None)
@dataclass
class LLMRequest(BaseException):
    """Represents a request to the language model."""

    provider: Provider
    model: str
    messages: list[BaseMessageParam]
    tools: list[BaseTool]
    call_params: dict | None = None
    response_model: BaseModel | None = None

    def generate(self):
        """Generates a response from the language model based on the request."""
        kwargs = {}
        if self.call_params is not None:
            kwargs["call_params"] = self.call_params
        if self.response_model is not None:
            kwargs["response_model"] = self.response_model
        return llm.call(
            provider=self.provider,
            model=self.model,
            tools=self.tools,
            **kwargs,
        )(lambda: self.messages)()

Represents a request to the language model.
Ancestors
- builtins.BaseException
Instance variables
var call_params : dict | None
    Optional call parameters forwarded to the underlying mirascope call.
var messages : list[mirascope.core.base.message_param.BaseMessageParam]
    The messages sent with the request.
var model : str
    The model identifier (e.g., "gpt-4o").
var provider : Literal['anthropic', 'azure', 'bedrock', 'cohere', 'gemini', 'google', 'groq', 'litellm', 'mistral', 'openai', 'vertex', 'xai']
    The LLM provider to route the request to.
var response_model : pydantic.main.BaseModel | None
    Optional Pydantic model used to parse the response.
var tools : list[mirascope.core.base.tool.BaseTool]
    The tool classes made available to the model.
Methods
def generate(self)
def generate(self): """Generates a response from the language model based on the request.""" kwargs = {} if self.call_params is not None: kwargs["call_params"] = self.call_params if self.response_model is not None: kwargs["response_model"] = self.response_model return llm.call(provider=self.provider, model=self.model, tools=self.tools, **kwargs)(lambda: self.messages)()Generates a response from the language model based on the request.
class LLMState
class LLMState: """ Manages the state and configuration for an llm's interaction with a language model. This class provides a fluent interface for building LLM requests, including setting the provider and model, adding messages, defining tools, and specifying response parsing models. It uses an internal mechanism involving subprocesses (`_command`) to collect tool and response schema definitions across chained calls (`try_tools`, `try_parse`) before finally executing the LLM request (`generate`, `parse`). Attributes: provider (Provider | None): The LLM provider instance (e.g., OpenAI, Anthropic). model (str | None): The specific model name to use (e.g., "gpt-4o"). messages (list[BaseMessageParam]): A list of messages constituting the conversation history. """ provider: Provider | None = None model: str | None = None messages: list[BaseMessageParam] = [] call_params: dict | None = None _tools: list[BaseTool] = [] _response_model: list[type] = [] _response_value: Any = None _child_writeback: Any = None def __init__(self): pass def config(self, provider: Provider | None = None, model: str | None = None, call_params: dict | None = None): """Configure the llm's provider, model, and call parameters. This method updates the llm's configuration. It includes an assertion to prevent configuration changes between specific method calls like `try_tools`, `try_tool`, `try_parse`, `parse`, and `generate`. Args: provider (Provider | None, optional): The API provider to set for the llm. If None, the current provider remains unchanged. Defaults to None. model (str | None, optional): The model name to set for the llm. If None, the current model remains unchanged. Defaults to None. call_params (dict | None, optional): The call parameters to set for the llm's API calls. If None, the current call parameters remain unchanged. Defaults to None. Returns: self: The llm instance itself, allowing for method chaining. Raises: AssertionError: If called between `try_tools`, `try_tool`, `try_parse`, `parse` and `generate` methods. """ assert self._child_writeback is None and self._response_value is None, "Called `llm.config` between `try_tools`, `try_tool`, `try_parse`, `parse` and `generate`." if provider is not None: self.provider = provider if model is not None: self.model = model if call_params is not None: self.call_params = call_params return self def msg(self, role: str, *message: list[Any]): """Append a message to the llm's memory. This method takes a role and one or more message strings, processes them, and adds them to the llm's message history (`self.messages`). If the last message in the history has the same role, the new content is appended to it. Otherwise, a new message object is created. It also use `textwrap.dedent` to remove common leading whitespace from the message content. Args: role (str): The role of the message sender (e.g., "user", "assistant", "system"). *message (list[Any]): One or more message parts (typically strings) to be combined into a single message content. Each part is dedented before joining. Returns: LLMState: The llm instance itself, allowing for method chaining. Raises: AssertionError: If called between specific asynchronous operations like `try_tools`, `try_tool`, `try_parse`, `parse`, or `generate`, indicating improper usage. """ message = " ".join(textwrap.dedent(m) for m in message) + "\n" assert self._child_writeback is None and self._response_value is None, "Called `llm.msg` between `try_tools`, `try_tool`, `try_parse`, `parse` and `generate`." 
if len(self.messages) > 0 and self.messages[-1].role == role: self.messages[-1].content += " ".join(message) + "\n" else: self.messages.append(BaseMessageParam(role=role, content=" ".join(message) + "\n")) return self def system(self, *message: list[Any]): """Send a system message. See `msg` for more details. Args: *message: A list of messages to be sent as system messages. Returns: LLMState: The llm instance itself, allowing for method chaining. """ return self.msg("system", *message) def user(self, *message: list[Any]): """Send a user message. See `msg` for more details. Args: *message: A list of messages to be sent as system messages. Returns: LLMState: The llm instance itself, allowing for method chaining. """ return self.msg("user", *message) def assistant(self, *message: list[Any]): """Send a assistant message. See `msg` for more details. Args: *message: A list of messages to be sent as system messages. Returns: LLMState: The llm instance itself, allowing for method chaining. """ return self.msg("assistant", *message) def _command(self, update_state: callable, validate: callable, final: bool = False) -> Any | None: # If it is subprcess, update current state and return to the main process later if self._child_writeback is not None: update_state() if final: with os.fdopen(self._child_writeback, "wb") as write_pipe: pickled_types = pickle.dumps((self._tools, self._response_model)) write_pipe.write(pickled_types) os._exit(0) else: return None # If it is main process, we need to check if the response value is already exists and validate it. if self._response_value is not None: resp = validate(self._response_value) if resp is not None: self._response_value = None self._tools = [] self._response_model = [] return resp else: if final: raise ValidationError(f"Invalid response value: {type(self._response_value)}. Expected: {self._response_model + self._tools} ") return None # Otherwise 1. Create a subprocess to collect the _tools and response schema, 2. Generate a value and check. else: if final: assert self._tools == [], "Unknown Error" assert self._response_model == [], "Unknown Error" update_state() else: r, w = os.pipe() pid = os.fork() if pid == 0: os.close(r) self._child_writeback = w return self._command(update_state, validate, final) os.close(w) os.waitpid(pid, 0) with os.fdopen(r, "rb") as read_pipe: pickled_types = read_pipe.read() (self._tools, self._response_model) = pickle.loads(pickled_types) assert all(issubclass(m, BaseTool) for m in self._tools) assert all(issubclass(m, BaseModel) for m in self._response_model) self._work_value() return self._command(update_state, validate, final) def try_tools(self, *tools: list[type]) -> list[BaseTool] | None: """Attempts to let the llm use a specific set of tools. This method uses `os.fork` to create a subprocess that collects the tools and response schema and assemble all these type information into a request to the language model. Args: *tools (list[type]): A variable number of tool types (classes) to try. Each type must be a subclass of `BaseTool`. Returns: list[BaseTool] | None: A list containing the validated tool call objects from the response if the validation is successful for at least one of the provided tools. Returns `None` if the response does not contain a valid call to any of the specified tools. """ assert all(issubclass(t, BaseTool) for t in tools), "All tools must be subclass of `BaseTool`." 
def validate(value: BaseCallResponse): if not hasattr(value, 'tools') or type(value.tools) is not list: return None v = [v for v in value.tools if any(_validate(t, v.model_dump()) and v.tool_call.function.name == t.__name__ for t in tools)] if len(v) == 0: return None return v return self._command( update_state=lambda: self._tools.extend(tools), validate=validate, ) def try_tool(self, tool: type) -> BaseTool | None: """Attempts to let the llm use a specific tool. This method is a convenience wrapper around `try_tools` for a single tool type. It calls `try_tools` with the provided tool type and returns the first successfully initialized tool instance if any, otherwise None. See `try_tools` for more details on the underlying mechanism and potential error handling. Args: tool (type): The class type of the tool to attempt to use. Returns: BaseTool | None: An instance of the tool if successfully initialized and added, otherwise `None`. """ result = self.try_tools(tool) if type(result) == list and len(result) > 0: return result[0] else: return None def try_parse(self, ty: type, final: bool = False) -> Any: """Attempts to parse and validate a value against the specified type. This method uses `os.fork` to create a subprocess that collects the tools and response schema and assemble all these type information into a request to the language model. Args: ty (type): The expected type for the value. Must be a type supported by Pydantic's TypeAdapter for validation. Returns: Any: The parsed and validated value conforming to the type `ty`. The exact behavior on validation failure depends on the implementation of the `_command` method. Raises: AssertionError: If the provided type `ty` is not validatable by Pydantic. """ assert TypeAdapter(ty), "Type must be validable through pydantic. " return self._command( update_state=lambda: self._response_model.append(ty), validate=lambda value: value if _validate(ty, value) else None, final=final ) def parse(self, ty: type) -> Any: """Parse the content into the given type. Note that when this method is often used in conjunction with `try_parse` and `try_tools`. `try_parse` or `try_tools` uses `os.fork` to create a subprocess that collects the tools and response schema and assemble all these type information into a request to the language model. The forked process will be ended here, returning all type information (tools and response schemas) into the main process. This method can also be used alone, in which case it will not use `os.fork` to create a subprocess. It will directly generate the response and parse it into the given type. Args: ty (type): The target type to parse the content into. Returns: Any: An instance of the specified type `ty` representing the parsed content. Raises: ValueError: If the content cannot be successfully parsed into the specified type `ty`. """ return self.try_parse(ty, final=True) def generate(self) -> BaseCallResponse: """Generate a response from the language model. Note that when this method is often used in conjunction with `try_tools` but not `try_parse`. `try_tools` uses `os.fork` to create a subprocess that collects the tools and response schema and assemble all these type information into a request to the language model. The forked process will be ended here, returning all type information (tools and response schemas) into the main process. This method can also be used alone, in which case it will not use `os.fork` to create a subprocess. It will directly generate the response and parse it into the given type. 
The method will return AssertionError when used with `try_parse`. Returns: BaseCallResponse: The generated response from the language model, from `mirascope`. Raises: AssertionError: when used with `try_parse`. """ assert len(self._response_model) == 0, "Must not use `generate` together with `try_parse`." return self._command( update_state=lambda: None, validate=lambda value: BaseCallResponse.model_validate(value), final=True ) def _work_value(self): assert self.provider is not None, "Provider must be set. See https://mirascope.com/api/llm/call/." assert self.model is not None, "Model must be set. See https://mirascope.com/api/llm/call/. " if len(self._response_model) == 0: response_model = None elif len(self._response_model) == 1: response_model = self._response_model[0] else: response_model = Union[*self._response_model] response = LLMRequest( provider=self.provider, model=self.model, messages=self.messages, tools=self._tools, call_params=self.call_params, response_model=response_model, ).generate() if response is None: raise RuntimeError("No response from LLM.") self._response_value = response self._tools = [] self._response_model = []Manages the state and configuration for an llm's interaction with a language model. This class provides a fluent interface for building LLM requests, including setting the provider and model, adding messages, defining tools, and specifying response parsing models. It uses an internal mechanism involving subprocesses (
_command) to collect tool and response schema definitions across chained calls (try_tools,try_parse) before finally executing the LLM request (generate,parse).Attributes
provider: Provider | None - The LLM provider instance (e.g., OpenAI, Anthropic).
model: str | None - The specific model name to use (e.g., "gpt-4o").
messages: list[BaseMessageParam] - A list of messages constituting the conversation history.
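The subprocess mechanism is easiest to see in isolation. The following self-contained sketch (not llmscope code; POSIX-only) demonstrates the same os.fork/os.pipe/pickle pattern that _command uses to hand collected schemas back to the main process:

import os
import pickle

def collect_in_child(produce):
    """Run `produce` in a forked child and receive its result through a pipe."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:  # Child: write the pickled result, then exit immediately.
        os.close(r)
        with os.fdopen(w, "wb") as write_pipe:
            write_pipe.write(pickle.dumps(produce()))
        os._exit(0)
    os.close(w)  # Parent: wait for the child, then read and unpickle.
    os.waitpid(pid, 0)
    with os.fdopen(r, "rb") as read_pipe:
        return pickle.loads(read_pipe.read())

print(collect_in_child(lambda: ["ToolA", "SchemaB"]))  # ['ToolA', 'SchemaB']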
Class variables
var call_params : dict | None
    Optional call parameters passed to the underlying LLM call.
var messages : list[mirascope.core.base.message_param.BaseMessageParam]
    The conversation history.
var model : str | None
    The model name to use (e.g., "gpt-4o").
var provider : Literal['anthropic', 'azure', 'bedrock', 'cohere', 'gemini', 'google', 'groq', 'litellm', 'mistral', 'openai', 'vertex', 'xai'] | None
    The LLM provider.
Methods
def assistant(self, *message: list[typing.Any])
def assistant(self, *message: list[Any]):
    """Send an assistant message. See `msg` for more details.

    Args:
        *message: A list of messages to be sent as assistant messages.

    Returns:
        LLMState: The llm instance itself, allowing for method chaining.
    """
    return self.msg("assistant", *message)

Send an assistant message. See msg for more details.

Args
*message - A list of messages to be sent as assistant messages.

Returns
LLMState - The llm instance itself, allowing for method chaining.
def config(self,
provider: Literal['anthropic', 'azure', 'bedrock', 'cohere', 'gemini', 'google', 'groq', 'litellm', 'mistral', 'openai', 'vertex', 'xai'] | None = None,
model: str | None = None,
call_params: dict | None = None)
def config(self, provider: Provider | None = None, model: str | None = None,
           call_params: dict | None = None):
    """Configure the llm's provider, model, and call parameters.

    This method updates the llm's configuration. It includes an assertion
    to prevent configuration changes between specific method calls like
    `try_tools`, `try_tool`, `try_parse`, `parse`, and `generate`.

    Args:
        provider (Provider | None, optional): The API provider to set for
            the llm. If None, the current provider remains unchanged.
            Defaults to None.
        model (str | None, optional): The model name to set for the llm.
            If None, the current model remains unchanged. Defaults to None.
        call_params (dict | None, optional): The call parameters to set for
            the llm's API calls. If None, the current call parameters
            remain unchanged. Defaults to None.

    Returns:
        self: The llm instance itself, allowing for method chaining.

    Raises:
        AssertionError: If called between the `try_tools`, `try_tool`,
            `try_parse`, `parse` and `generate` methods.
    """
    assert self._child_writeback is None and self._response_value is None, \
        "Called `llm.config` between `try_tools`, `try_tool`, `try_parse`, `parse` and `generate`."
    if provider is not None:
        self.provider = provider
    if model is not None:
        self.model = model
    if call_params is not None:
        self.call_params = call_params
    return self

Configure the llm's provider, model, and call parameters.

This method updates the llm's configuration. It includes an assertion to prevent configuration changes between specific method calls like try_tools, try_tool, try_parse, parse, and generate.

Args
provider: Provider | None, optional - The API provider to set for the llm. If None, the current provider remains unchanged. Defaults to None.
model: str | None, optional - The model name to set for the llm. If None, the current model remains unchanged. Defaults to None.
call_params: dict | None, optional - The call parameters to set for the llm's API calls. If None, the current call parameters remain unchanged. Defaults to None.

Returns
self - The llm instance itself, allowing for method chaining.

Raises
AssertionError - If called between the try_tools, try_tool, try_parse, parse and generate methods.
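A short usage sketch (inside a @fn-decorated function; the parameter values are illustrative):

# Safe: no try_*/parse/generate call has happened yet on this state.
llm.config(provider="openai", model="gpt-4o-mini", call_params={"temperature": 0.0})
llm.system("Answer tersely.")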
def generate(self) -> mirascope.core.base.call_response.BaseCallResponse
def generate(self) -> BaseCallResponse:
    """Generate a response from the language model.

    Note that this method is often used in conjunction with `try_tools`
    but not `try_parse`. `try_tools` uses `os.fork` to create a subprocess
    that collects the tools and response schemas and assembles all this
    type information into a request to the language model. The forked
    process ends here, returning all type information (tools and response
    schemas) to the main process.

    This method can also be used alone, in which case it will not use
    `os.fork` to create a subprocess. It will directly generate the
    response.

    Returns:
        BaseCallResponse: The generated response from the language model,
            from `mirascope`.

    Raises:
        AssertionError: When used together with `try_parse`.
    """
    assert len(self._response_model) == 0, "Must not use `generate` together with `try_parse`."
    return self._command(
        update_state=lambda: None,
        validate=lambda value: BaseCallResponse.model_validate(value),
        final=True,
    )

Generate a response from the language model.

Note that this method is often used in conjunction with try_tools but not try_parse. try_tools uses os.fork to create a subprocess that collects the tools and response schemas and assembles all this type information into a request to the language model. The forked process ends here, returning all type information (tools and response schemas) to the main process.

This method can also be used alone, in which case it will not use os.fork to create a subprocess. It will directly generate the response.

Returns
BaseCallResponse - The generated response from the language model, from mirascope.

Raises
AssertionError - When used together with try_parse.
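For example (a sketch; the prompt is made up):

from llmscope import fn

@fn(provider="openai", model="gpt-4o")
def echo(llm, text: str) -> str:
    llm.user(text)
    response = llm.generate()  # BaseCallResponse from mirascope.
    return response.content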
def msg(self, role: str, *message: list[typing.Any])
def msg(self, role: str, *message: list[Any]):
    """Append a message to the llm's memory.

    This method takes a role and one or more message strings, processes
    them, and adds them to the llm's message history (`self.messages`).
    If the last message in the history has the same role, the new content
    is appended to it. Otherwise, a new message object is created. It also
    uses `textwrap.dedent` to remove common leading whitespace from the
    message content.

    Args:
        role (str): The role of the message sender (e.g., "user",
            "assistant", "system").
        *message (list[Any]): One or more message parts (typically strings)
            to be combined into a single message content. Each part is
            dedented before joining.

    Returns:
        LLMState: The llm instance itself, allowing for method chaining.

    Raises:
        AssertionError: If called between operations like `try_tools`,
            `try_tool`, `try_parse`, `parse`, or `generate`, indicating
            improper usage.
    """
    message = " ".join(textwrap.dedent(m) for m in message) + "\n"
    assert self._child_writeback is None and self._response_value is None, \
        "Called `llm.msg` between `try_tools`, `try_tool`, `try_parse`, `parse` and `generate`."
    if len(self.messages) > 0 and self.messages[-1].role == role:
        self.messages[-1].content += message
    else:
        self.messages.append(BaseMessageParam(role=role, content=message))
    return self

Append a message to the llm's memory.

This method takes a role and one or more message strings, processes them, and adds them to the llm's message history (self.messages). If the last message in the history has the same role, the new content is appended to it. Otherwise, a new message object is created. It also uses textwrap.dedent to remove common leading whitespace from the message content.

Args
role: str - The role of the message sender (e.g., "user", "assistant", "system").
*message: list[Any] - One or more message parts (typically strings) to be combined into a single message content. Each part is dedented before joining.

Returns
LLMState - The llm instance itself, allowing for method chaining.

Raises
AssertionError - If called between operations like try_tools, try_tool, try_parse, parse, or generate, indicating improper usage.
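The same-role merging means consecutive calls with one role accumulate into a single message, as in this sketch:

llm.msg("system", "You are concise.")
llm.system("Use bullet points.")       # Same role: merged into the previous system message.
llm.user("Summarize this paragraph:")  # New role: starts a new user message.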
def parse(self, ty: type) -> Any
def parse(self, ty: type) -> Any:
    """Parse the content into the given type.

    Note that this method is often used in conjunction with `try_parse`
    and `try_tools`. `try_parse` or `try_tools` uses `os.fork` to create a
    subprocess that collects the tools and response schemas and assembles
    all this type information into a request to the language model. The
    forked process ends here, returning all type information (tools and
    response schemas) to the main process.

    This method can also be used alone, in which case it will not use
    `os.fork` to create a subprocess. It will directly generate the
    response and parse it into the given type.

    Args:
        ty (type): The target type to parse the content into.

    Returns:
        Any: An instance of the specified type `ty` representing the
            parsed content.

    Raises:
        ValueError: If the content cannot be successfully parsed into the
            specified type `ty`.
    """
    return self.try_parse(ty, final=True)

Parse the content into the given type.

Note that this method is often used in conjunction with try_parse and try_tools. try_parse or try_tools uses os.fork to create a subprocess that collects the tools and response schemas and assembles all this type information into a request to the language model. The forked process ends here, returning all type information (tools and response schemas) to the main process.

This method can also be used alone, in which case it will not use os.fork to create a subprocess. It will directly generate the response and parse it into the given type.

Args
ty: type - The target type to parse the content into.

Returns
Any - An instance of the specified type ty representing the parsed content.

Raises
ValueError - If the content cannot be successfully parsed into the specified type ty.
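For instance (a sketch with a made-up Pydantic model):

from pydantic import BaseModel
from llmscope import fn

class Answer(BaseModel):
    value: int
    explanation: str

@fn(provider="openai", model="gpt-4o")
def solve(llm, question: str) -> Answer:
    llm.user(question)
    return llm.parse(Answer)  # Finalizes the request and validates the output.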
def system(self, *message: list[typing.Any])
def system(self, *message: list[Any]):
    """Send a system message. See `msg` for more details.

    Args:
        *message: A list of messages to be sent as system messages.

    Returns:
        LLMState: The llm instance itself, allowing for method chaining.
    """
    return self.msg("system", *message)

Send a system message. See msg for more details.

Args
*message - A list of messages to be sent as system messages.

Returns
LLMState - The llm instance itself, allowing for method chaining.
def try_parse(self, ty: type, final: bool = False) -> Any
def try_parse(self, ty: type, final: bool = False) -> Any:
    """Attempts to parse and validate a value against the specified type.

    This method uses `os.fork` to create a subprocess that collects the
    tools and response schemas and assembles all this type information
    into a request to the language model.

    Args:
        ty (type): The expected type for the value. Must be a type
            supported by Pydantic's TypeAdapter for validation.

    Returns:
        Any: The parsed and validated value conforming to the type `ty`.
            The exact behavior on validation failure depends on the
            implementation of the `_command` method.

    Raises:
        AssertionError: If the provided type `ty` is not validatable by
            Pydantic.
    """
    assert TypeAdapter(ty), "Type must be validatable through pydantic."
    return self._command(
        update_state=lambda: self._response_model.append(ty),
        validate=lambda value: value if _validate(ty, value) else None,
        final=final,
    )

Attempts to parse and validate a value against the specified type.

This method uses os.fork to create a subprocess that collects the tools and response schemas and assembles all this type information into a request to the language model.

Args
ty: type - The expected type for the value. Must be a type supported by Pydantic's TypeAdapter for validation.

Returns
Any - The parsed and validated value conforming to the type ty. The exact behavior on validation failure depends on the implementation of the _command method.

Raises
AssertionError - If the provided type ty is not validatable by Pydantic.
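try_parse is useful when several output schemas are acceptable: each branch registers its schema, and the branch matching the model's output returns a value. A sketch with two made-up models:

from pydantic import BaseModel
from llmscope import fn

class Doc(BaseModel):
    text: str

class NeedsContext(BaseModel):
    missing_symbols: list[str]

@fn(provider="openai", model="gpt-4o")
def document(llm, code: str):
    llm.user(code)
    if doc := llm.try_parse(Doc):
        return doc.text
    return llm.parse(NeedsContext)  # Finalize on the alternative schema.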
def try_tool(self, tool: type) -> mirascope.core.base.tool.BaseTool | None
def try_tool(self, tool: type) -> BaseTool | None:
    """Attempts to let the llm use a specific tool.

    This method is a convenience wrapper around `try_tools` for a single
    tool type. It calls `try_tools` with the provided tool type and
    returns the first successfully initialized tool instance if any,
    otherwise None. See `try_tools` for more details on the underlying
    mechanism and potential error handling.

    Args:
        tool (type): The class type of the tool to attempt to use.

    Returns:
        BaseTool | None: An instance of the tool if successfully
            initialized and added, otherwise `None`.
    """
    result = self.try_tools(tool)
    if type(result) == list and len(result) > 0:
        return result[0]
    else:
        return None

Attempts to let the llm use a specific tool.

This method is a convenience wrapper around try_tools for a single tool type. It calls try_tools with the provided tool type and returns the first successfully initialized tool instance if any, otherwise None. See try_tools for more details on the underlying mechanism and potential error handling.

Args
tool: type - The class type of the tool to attempt to use.

Returns
BaseTool | None - An instance of the tool if successfully initialized and added, otherwise None.
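Typical usage is a loop that keeps executing the tool until the model stops calling it (a sketch; EmotionTool is the tool defined in the Usage section above):

while tool_call := llm.try_tool(EmotionTool):
    llm.user(tool_call.call())  # Feed the tool's result back into the conversation.
final = llm.generate()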
def try_tools(self, *tools: list[type]) -> list[mirascope.core.base.tool.BaseTool] | None
def try_tools(self, *tools: list[type]) -> list[BaseTool] | None:
    """Attempts to let the llm use a specific set of tools.

    This method uses `os.fork` to create a subprocess that collects the
    tools and response schemas and assembles all this type information
    into a request to the language model.

    Args:
        *tools (list[type]): A variable number of tool types (classes) to
            try. Each type must be a subclass of `BaseTool`.

    Returns:
        list[BaseTool] | None: A list containing the validated tool call
            objects from the response if the validation is successful for
            at least one of the provided tools. Returns `None` if the
            response does not contain a valid call to any of the specified
            tools.
    """
    assert all(issubclass(t, BaseTool) for t in tools), "All tools must be subclasses of `BaseTool`."

    def validate(value: BaseCallResponse):
        if not hasattr(value, "tools") or type(value.tools) is not list:
            return None
        v = [v for v in value.tools
             if any(_validate(t, v.model_dump()) and v.tool_call.function.name == t.__name__
                    for t in tools)]
        if len(v) == 0:
            return None
        return v

    return self._command(
        update_state=lambda: self._tools.extend(tools),
        validate=validate,
    )

Attempts to let the llm use a specific set of tools.

This method uses os.fork to create a subprocess that collects the tools and response schemas and assembles all this type information into a request to the language model.

Args
*tools: list[type] - A variable number of tool types (classes) to try. Each type must be a subclass of BaseTool.

Returns
list[BaseTool] | None - A list containing the validated tool call objects from the response if the validation is successful for at least one of the provided tools. Returns None if the response does not contain a valid call to any of the specified tools.
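With several candidate tools, the returned list holds every validated call from the response (a sketch; SearchTool and CalcTool are hypothetical BaseTool subclasses):

while calls := llm.try_tools(SearchTool, CalcTool):
    for call in calls:
        llm.user(call.call())  # Append each tool result to the conversation.
result = llm.generate()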
def user(self, *message: list[typing.Any])
def user(self, *message: list[Any]):
    """Send a user message. See `msg` for more details.

    Args:
        *message: A list of messages to be sent as user messages.

    Returns:
        LLMState: The llm instance itself, allowing for method chaining.
    """
    return self.msg("user", *message)

Send a user message. See msg for more details.

Args
*message - A list of messages to be sent as user messages.

Returns
LLMState - The llm instance itself, allowing for method chaining.