monoai.tools
Tools are used to extend the capabilities of an agent.
In MonoAI, tools are simple functions that return a string. You can easily create your own tools by defining a function and registering it with the agent. To let the agent know how to use the tool, add a docstring to the function in Google format.
For example:
```python
def get_weather(location: str):
    """
    Get the weather for a given location.

    Args:
        location: The location to get the weather for

    Returns:
        The weather for the given location
    """
    return f"The weather for {location} is sunny"
```
1""" 2Tools are used to extend the capabilities of an agent. 3 4In MonoAI tools are simple functions that return a string, you can easily create your own tools by defining a function and registering it with the agent. 5To let the agent know how to use the tool, you need to add a docstring to the function in google format. 6 7For example: 8```python 9def get_weather(location:str): 10 \"\"\" 11 Get the weather for a given location: 12 Args: 13 location: The location to get the weather for 14 Returns: 15 The weather for the given location 16 \"\"\" 17 18 return f"The weather for {location} is sunny" 19``` 20""" 21 22from .domain_whois import domain_whois 23from .webscraping import scrape_web_with_requests, scrape_web_with_selenium, scrape_web_with_tavily, scrape_web 24from .websearch import search_web_with_duckduckgo, search_web_with_tavily 25 26__all__ = ["domain_whois", "search_web_with_duckduckgo", "search_web_with_tavily", "scrape_web_with_requests", "scrape_web_with_selenium", "scrape_web_with_tavily", "scrape_web"]
domain_whois

```python
def domain_whois(domain: str):
    """
    Get the whois record for a given domain.

    Args:
        domain (str): the domain
    """
    try:
        from whois import whois
    except ImportError:
        raise ImportError("whois is not installed. Please install it with 'pip install whois'")

    result = whois(domain)
    return str(result)
```
Get the whois record for a given domain.

Args:

- `domain` (str): the domain
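A usage sketch, assuming the optional `whois` dependency is installed (`pip install whois`, per the ImportError message above):

```python
from monoai.tools import domain_whois

# The whois record comes back rendered as a single string.
print(domain_whois("example.com"))
```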
search_web_with_duckduckgo

```python
def search_web_with_duckduckgo(query: str, max_results: int = 10, exclude_domains: list[str] = None):
    """Search the web using the DuckDuckGo search engine.

    Args:
        query: The query to search for
        max_results: The maximum number of results to return. Default is 10
        exclude_domains: The domains to exclude from the search. Default is None

    Returns:
        A dictionary containing:
            data: The search results as a list of dictionaries
            text: The results merged into a single string
    """
    search_engine = _DuckDuckGoSearch(max_results, exclude_domains)
    response, text_response = search_engine.search(query)
    return {"data": response, "text": text_response}
```
Search the web using the DuckDuckGo search engine.

Args:

- `query`: The query to search for
- `max_results`: The maximum number of results to return. Default is 10
- `exclude_domains`: The domains to exclude from the search. Default is None

Returns:

A dictionary containing:

- `data`: The search results as a list of dictionaries
- `text`: The results merged into a single string
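A usage sketch based on the return shape documented above:

```python
from monoai.tools import search_web_with_duckduckgo

results = search_web_with_duckduckgo(
    "python web scraping tutorial",
    max_results=3,
    exclude_domains=["pinterest.com"],
)
print(results["text"])        # all results merged into one string
for item in results["data"]:  # one dictionary per result
    print(item)
```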
search_web_with_tavily

```python
def search_web_with_tavily(query: str, max_results: int = 10, exclude_domains: list[str] = None):
    """Search the web using the Tavily search engine.

    Args:
        query: The query to search for
        max_results: The maximum number of results to return. Default is 10
        exclude_domains: The domains to exclude from the search. Default is None

    Returns:
        A dictionary containing:
            data: The search results as a list of dictionaries
            text: The results merged into a single string
    """
    search_engine = _TavilySearch(max_results, exclude_domains)
    response, text_response = search_engine.search(query)
    return {"data": response, "text": text_response}
```
Search the web using the Tavily search engine.

Args:

- `query`: The query to search for
- `max_results`: The maximum number of results to return. Default is 10
- `exclude_domains`: The domains to exclude from the search. Default is None

Returns:

A dictionary containing:

- `data`: The search results as a list of dictionaries
- `text`: The results merged into a single string
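Usage mirrors the DuckDuckGo variant. Tavily is a hosted API, so it presumably needs a Tavily API key configured for the process; how that key is supplied is not shown in this module:

```python
from monoai.tools import search_web_with_tavily

# Assumes Tavily credentials are already configured.
results = search_web_with_tavily("latest LLM benchmarks", max_results=5)
print(results["text"])
```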
scrape_web_with_requests

```python
def scrape_web_with_requests(url: str):
    """Scrape a webpage using the requests library.

    This function uses the requests library for basic web scraping.
    It's fast and lightweight but doesn't handle JavaScript-rendered content.

    Args:
        url: The URL to scrape

    Returns:
        A dictionary containing:
            html: The HTML content of the page
            text: The extracted text content from the page

    Raises:
        requests.RequestException: If the request fails
    """
    scraper = _RequestsScraper()
    response, text_response = scraper.scrape(url)
    return {"html": response, "text": text_response}
```
Scrape a webpage using the requests library.

This function uses the requests library for basic web scraping. It's fast and lightweight but doesn't handle JavaScript-rendered content.

Args:

- `url`: The URL to scrape

Returns:

A dictionary containing:

- `html`: The HTML content of the page
- `text`: The extracted text content from the page

Raises:

- `requests.RequestException`: If the request fails
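A usage sketch:

```python
from monoai.tools import scrape_web_with_requests

page = scrape_web_with_requests("https://example.com")
print(page["text"])  # extracted text content
html = page["html"]  # raw HTML, if you want to parse it yourself
```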
scrape_web_with_selenium

```python
def scrape_web_with_selenium(url: str, headless: bool = True, wait_time: int = 10):
    """Scrape a webpage using Selenium WebDriver.

    This function uses Selenium to handle dynamic content and JavaScript.
    It's useful for websites that require JavaScript execution to load content.

    Args:
        url: The URL to scrape
        headless: Whether to run Chrome in headless mode. Default is True
        wait_time: Maximum time to wait for elements to load in seconds. Default is 10

    Returns:
        A dictionary containing:
            html: The HTML content of the page after JavaScript execution
            text: The extracted text content from the page

    Raises:
        Exception: If scraping fails
    """
    scraper = _SeleniumScraper(headless=headless, wait_time=wait_time)
    response, text_response = scraper.scrape(url)
    return {"html": response, "text": text_response}
```
Scrape a webpage using Selenium WebDriver.

This function uses Selenium to handle dynamic content and JavaScript. It's useful for websites that require JavaScript execution to load content.

Args:

- `url`: The URL to scrape
- `headless`: Whether to run Chrome in headless mode. Default is True
- `wait_time`: Maximum time to wait for elements to load in seconds. Default is 10

Returns:

A dictionary containing:

- `html`: The HTML content of the page after JavaScript execution
- `text`: The extracted text content from the page

Raises:

- `Exception`: If scraping fails
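A usage sketch. Selenium drives a real browser, so Chrome and a matching ChromeDriver must be available on the machine; that environment setup is an assumption this function does not handle:

```python
from monoai.tools import scrape_web_with_selenium

# headless=False opens a visible browser window, which helps debugging;
# wait_time bounds how long to wait for JavaScript-rendered elements.
page = scrape_web_with_selenium("https://example.com", headless=True, wait_time=15)
print(page["text"])
```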
scrape_web_with_tavily

```python
def scrape_web_with_tavily(url: str, deep: bool = False):
    """Scrape a webpage using the Tavily API.

    This function uses the Tavily API for advanced content extraction.
    Tavily provides clean, structured content extraction but doesn't return raw HTML.

    Args:
        url: The URL to scrape
        deep: Whether to use advanced extraction mode. Default is False

    Returns:
        A dictionary containing:
            html: None (not available with Tavily)
            text: The extracted and cleaned content from the page

    Raises:
        Exception: If the Tavily API call fails
    """
    scraper = _TavilyScraper(deep=deep)
    response, text_response = scraper.scrape(url)
    return {"html": response, "text": text_response}
```
Scrape a webpage using the Tavily API.

This function uses the Tavily API for advanced content extraction. Tavily provides clean, structured content extraction but doesn't return raw HTML.

Args:

- `url`: The URL to scrape
- `deep`: Whether to use advanced extraction mode. Default is False

Returns:

A dictionary containing:

- `html`: None (not available with Tavily)
- `text`: The extracted and cleaned content from the page

Raises:

- `Exception`: If the Tavily API call fails
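A usage sketch, again assuming Tavily credentials are configured:

```python
from monoai.tools import scrape_web_with_tavily

page = scrape_web_with_tavily("https://example.com", deep=True)
print(page["text"])  # cleaned, extracted content
print(page["html"])  # None: Tavily does not return raw HTML
```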
scrape_web

```python
def scrape_web(url: str, engine: str = "requests", deep: bool = False):
    """Scrape a webpage using the specified engine.

    This is a convenience function that dispatches to the appropriate
    scraping function based on the engine parameter.

    Args:
        url: The URL to scrape
        engine: The engine to use (requests, tavily, selenium). Default is requests
        deep: If using tavily, whether to use the advanced extraction mode. Default is False

    Returns:
        A dictionary containing:
            html: The HTML content of the page (not available if using tavily)
            text: The content of the page merged into a single string

    Raises:
        ValueError: If an invalid engine is specified
    """
    if engine == "requests":
        return scrape_web_with_requests(url)
    elif engine == "tavily":
        return scrape_web_with_tavily(url, deep=deep)
    elif engine == "selenium":
        return scrape_web_with_selenium(url)
    else:
        raise ValueError(f"Invalid engine: {engine} (must be 'requests', 'tavily', or 'selenium')")
```
Scrape a webpage using the specified engine.

This is a convenience function that dispatches to the appropriate scraping function based on the engine parameter.

Args:

- `url`: The URL to scrape
- `engine`: The engine to use (requests, tavily, selenium). Default is requests
- `deep`: If using tavily, whether to use the advanced extraction mode. Default is False

Returns:

A dictionary containing:

- `html`: The HTML content of the page (not available if using tavily)
- `text`: The content of the page merged into a single string

Raises:

- `ValueError`: If an invalid engine is specified
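A usage sketch of the dispatcher. The `deep` flag only has an effect with the tavily engine, and an unrecognized engine raises ValueError:

```python
from monoai.tools import scrape_web

static = scrape_web("https://example.com")                      # requests (default)
dynamic = scrape_web("https://example.com", engine="selenium")  # JavaScript-heavy pages
clean = scrape_web("https://example.com", engine="tavily", deep=True)

try:
    scrape_web("https://example.com", engine="playwright")
except ValueError as err:
    print(err)  # Invalid engine: playwright (must be 'requests', 'tavily', or 'selenium')
```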