
"Wow!" or why is this the first local AI agent that does everything by itself

Imagine: you type in a single description of a task, and the artificial intelligence writes the necessary code itself, saves it to a file, automatically installs the required dependencies (if any) and immediately runs the resulting script! And all of this happens in a local environment, without sending your code to someone else's cloud. If an error occurs, the AI agent can figure out what went wrong on its own and adjust the query to the model in an attempt to fix the problem. It's really impressive!

Like any experimental technology, this AI agent is far from ideal and can make mistakes:

  • There may be false positives, or situations where the agent cannot fix the error and endlessly reformulates the request.
  • The code the agent generates may not always work as you expect: sometimes there are logical errors or mismatches with your environment.

It is important to understand that this is still a beta version and 100% stable operation is not guaranteed. In some cases you will have to intervene manually and adjust the result. Even so, it already demonstrates how far development automation has progressed.

To get started, you will need:

  1. Python 3.8+. Make sure you have a modern version of Python installed. You can check it with the command: python --version

  2. Ollama LLM server (or another local/remote LLM server option)

  • In the example, the code is configured to use the server at http://localhost:11434/v1.
  • If you are using Ollama, run the Ollama server locally and make sure it accepts requests on the correct port (11434 by default). A quick connectivity check is sketched after this list.
  • If you want to use another LLM service, replace the address http://localhost:11434/v1 with the appropriate one.

  3. qwen_agent (or a similar package for interacting with your LLM)

  • Install via PyPI: pip install qwen-agent
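
Before running the agent, it helps to confirm that the LLM server actually responds at the configured address. Here is a minimal sketch of such a check; it assumes Ollama's OpenAI-compatible endpoint at http://localhost:11434/v1 and the requests package (both are assumptions, adjust them to your setup):

import requests

# Hypothetical connectivity check: ask the OpenAI-compatible endpoint for the list of models.
# Assumes Ollama is listening on localhost:11434; change BASE_URL if your server differs.
BASE_URL = "http://localhost:11434/v1"

try:
    response = requests.get(f"{BASE_URL}/models", timeout=5)
    response.raise_for_status()
    models = [m.get("id") for m in response.json().get("data", [])]
    print("LLM server is reachable. Available models:", models)
except requests.RequestException as exc:
    print("Could not reach the LLM server:", exc)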

Save this code to a file, for example, agent.py:

import json
import os
import subprocess
from qwen_agent.llm import get_chat_model

def reformulate_request_agent(user_request, error_description=None):
"""
Function for reformulating the request to the model.
    Accepts:
    - user_request: the source text of the request from the user.
    - error_description: Error message if the code did not run.
    
    Returns:
    - Reformulated text of the request for re-generation of the code.
    """
llm = get_chat_model({
'model': 'qwen2.5-coder:14b', # Model name
        'model_server': 'http://localhost:11434/v1 ', # The address of your Ollama or other LLM server
        'api_key': 'EMPTY', # Specify the key if necessary
    })

    # Forming a message for the system
    base_message = user_request
    if error_description:
        base_message +=f"An error message was received: {error_description}."

    messages = [
        {
            'role': 'system', 
            'content': (
                "You are an assistant that reformulates user requests to ensure they are suitable "
                "for generating Python code. Your reformulated request should be clear and precise."
            )
        },
        {'role': 'user', 'content': base_message}
    ]

    # llm.chat with stream=True yields the accumulated response list on each step;
    # after the loop, `responses` holds the final (complete) reformulation.
    responses = []
    for responses in llm.chat(messages=messages, stream=True):
        pass

    # Returning the final reformulation
    return responses[0]['content']


def save_and_run_code(code, name_file, dependencies=None):
"""
A function that:
    1. Saves the transferred Python code to a file.
    2. Installs dependencies (if specified).
    3. Runs the created file and returns the result (or error).

    :param code: A string with Python code.
    :param name_file: The name of the file to save the code.
    :param dependencies: A string with a list of dependencies (separated by a space).
    :return: Dictionary format {"success": bool, "output" or "error":str}
    """
    # Normalize the code: escaped '\n' or '\t' sequences sometimes arrive from the model
    formatted_code = code.replace("\\n", "\n").replace("\\t", "\t")

    # 1. Save the code to a file
    with open(name_file, "w", encoding="utf-8") as file:
        file.write(formatted_code)
    print(f"Code saved to file: {name_file}")

    # 2. Install dependencies (if any)
    if dependencies:
        print(f"Installing dependencies: {dependencies}")
        try:
            # pip install <dependencies>
            subprocess.run(["pip", "install"] + dependencies.split(), check=True, capture_output=True, text=True)
            print("Dependencies have been successfully installed.")
        except subprocess.CalledProcessError as e:
            error_message = f"Error installing dependencies: {e.stderr}"
            print(error_message)
            return {"success": False, "error": error_message}

    # 3. Run the saved file
    try:
        result = subprocess.run(["python", name_file], capture_output=True, text=True)
        if result.returncode != 0:
            # The script exited with a non-zero return code
            return {"success": False, "error": result.stderr}
        # Everything is OK, return the output
        print(f"Result of {name_file} file execution:\n{result.stdout}")
        return {"success": True, "output": result.stdout}
    except Exception as e:
        print(f"Error when running the file {name_file}: {e}")
        return {"success": False, "error": str(e)}


def reformulate_request(error_description):
"""
The simplest function that generates a new request based on the error.
    """
return f"Fix the error: {error_description}"


def test():
"""
The main function that shows an example of the agent's work.

    Steps:
    1. We receive a request from the user (task description).
    2. We form messages for LLM, where we indicate that we want to receive ready-made Python code.
    3. The model tries to generate the code by calling the save_and_run_code function.
    4. If an error occurs, the user is prompted for an updated query (or automatically reformulated).
    5. Repeat until the correct result is obtained.

    Important: since the agent is still in beta, it may loop, repeatedly trying to fix the error.
    If you see that it cannot figure it out for too long, abort the execution and adjust the code manually.
    """
    # Connect to the model (specify the server and the model name)
    llm = get_chat_model({
        'model': 'qwen2.5-coder:14b',                 # Model name
        'model_server': 'http://localhost:11434/v1',  # Address of Ollama or another LLM server
        'api_key': 'EMPTY',
    })

    # Request example (instead of input(), you can simply hard-code a string)
    req_user = str(input("Enter your code description: "))
    req_user2 = req_user  # keep the original request for the fallback branch below

    while True:
        # We print the request to see what we are sending
        print("Current request to the model:", req_user)

        # Forming messages for LLM
        messages = [
            {
                'role': 'system', 
                'content': (
                    "You are an assistant that generates Python code based on user descriptions. "
                    "In addition to generating the code, you are responsible for providing the file name "
                    "where the code will be saved and a list of dependencies required to execute the code. "
                    "Ensure the output is executable and properly formatted."
                )
            },
            {'role': 'user', 'content': req_user}
        ]

        # We describe the function that the model can call (function call)
        functions = [{
            'name': 'save_and_run_code',
            'description': 'Generates Python code, saves it to a specified file, installs dependencies, and executes it.',
            'parameters': {
                'type': 'object',
                'properties': {
                    'code': {
                        'type': 'string',
                        'description': 'Python code',
                    },
                    'name_file': {
                        'type': 'string',
                        'description': 'file name for saving the code',
                    },
                    'dependencies': {
                        'type': 'string',
                        'description': 'A space-separated list of dependencies to install before running the code',
                    },
                },
                'required': ['code', 'name_file'],
            },
        }]

        # Sending a model request with a description of the functions
        responses = []
        for responses in llm.chat(messages=messages, functions=functions, stream=True):
            pass

        # Adding the model's response to the message history
        messages.extend(responses)  
        last_response = messages[-1]

        # If the model wants to call a function (function_call), then we call it
        if last_response.get('function_call', None):
            try:
                available_functions = {
                    'save_and_run_code': save_and_run_code,
                }
                function_name = last_response['function_call']['name']
                function_to_call = available_functions[function_name]

                # Arguments for the function
                function_args = json.loads(last_response['function_call']['arguments'])

                # Calling the function (saving, installing dependencies, running code)
                function_response = function_to_call(
                    code=function_args.get('code'),
                    name_file=function_args.get('name_file'),
                    dependencies=function_args.get('dependencies'),
                )

                # Check the result (success or error)
                if function_response.get("success"):
                    print("Code executed successfully!")
                    print(f"Result: {function_response['output']}")
                    break
                else:
                    error_description = function_response.get("error")
                    print(f"Error detected: {error_description}")
                    # You can ask the user to clarify, or reformulate the request automatically
                    req_user = reformulate_request_agent(req_user, error_description)

            except Exception:
                # If an error occurred during the function call, try to reformulate
                print('Error during function call.')
                req_user = reformulate_request_agent(req_user)

        else:
            # The model could not generate a valid function_call
            print("The model could not generate the function. Let's try to reformulate the query.")
            req_user = reformulate_request_agent(req_user2, "The model did not generate a valid query.")

if __name__ == '__main__':
    test()

How to launch: python agent.py

After that, a prompt will appear in the console: "Enter your code description:". For example, you can enter: Write a program that outputs numbers from 1 to 5.

The agent will try to generate the code, save it, install dependencies (if specified), and execute.
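
For the example prompt above, the result might look something like this (the file name and contents below are hypothetical; the actual output depends on the model's response):

# Hypothetical code the agent might save, e.g. to a file like count_to_five.py
# (illustration only; the real file name and code depend on the model).
for number in range(1, 6):
    print(number)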

As an illustrative example, we can ask our AI agent to generate code that draws a Sierpinski triangle (often called a pyramid) in Python using the turtle module. The agent does not always cope on the first try: errors, typos, or incorrect function calls may occur. However, after several clarifications, it produced the following working code:

import turtle

def sierpinski(degree, points):
"""
A recursive function that draws a Sierpinski triangle
    depending on the depth of the degree recursion and the coordinates of the points.
    """
    colormap = ['blue', 'red', 'green', 'white', 'yellow', 'violet']
    draw_triangle(points, colormap[degree % 6])

    if degree > 0:
        sierpinski(degree - 1,
                   {
                       'left': get_mid(points['left'], points['top']),
                       'right': get_mid(points['right'], points['top']),
                       'top': points['top']
                   })
        sierpinski(degree - 1,
                   {
                       'left': points['left'],
                       'right': get_mid(points['left'], points['right']),
                       'top': get_mid(points['left'], points['top'])
                   })
        sierpinski(degree - 1,
                   {
                       'left': get_mid(points['right'], points['left']),
                       'right': points['right'],
                       'top': get_mid(points['right'], points['top'])
                   })

def draw_triangle(points, color):
"""
Draws a filled triangle based on coordinates in the points dictionary.
    """
    turtle.fillcolor(color)
    turtle.up()
    turtle.goto(points['left'])
    turtle.down()
    turtle.begin_fill()
    turtle.goto(points['right'])
    turtle.goto(points['top'])
    turtle.goto(points['left'])
    turtle.end_fill()

def get_mid(p1, p2):
"""
Returns the coordinate of the midpoint between two points (p1 and p2).
"""
return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)

# Setting up the turtle window
wn = turtle.Screen()
wn.bgcolor('black')

# We define three main points (vertices) of a large triangle
points = {
    'left': (-300, -150),
    'right': (300, -150),
    'top': (0, 300)
}

# Draw a Sierpinski triangle with a depth of 5
sierpinski(5, points)

# Hiding the "turtle" and waiting for the window to close
turtle.hideturtle()
wn.exitonclick()

How the code works:

  1. sierpinski(degree, points), the main function: draws a triangle at each recursion level and, if degree > 0, calls itself three times with new coordinates ("sub-triangles") of a smaller size.
  2. draw_triangle(points, color): takes a dictionary with three coordinates ('left', 'right', 'top') and fills the triangle with the given color.
  3. get_mid(p1, p2): returns the midpoint of the segment between p1 and p2.
  4. Window setup: turtle.Screen() creates the window, the background color is selected, and the base coordinates for the large triangle are set.
  5. The call sierpinski(5, points) sets the recursion depth (the number of levels of "cutting"): the greater the depth, the more detailed the drawing, but the longer the rendering takes.
A few notes on how the agent got there:

  • At first, the agent could generate incomplete or incorrect code, where variables or functions were not declared, or important parts of the logic were skipped.
  • After the agent was told about the error (or it discovered the error itself at startup), it reformulated the request and adjusted its code.
  • As a result, after several iterations, a working example was obtained.

This perfectly illustrates the pros and cons of using the beta version of the AI agent:

  • Plus: the agent handles most of the routine work and "remembers" the syntax, rendering, recursion, and so on.
  • Minus: when an error occurs, it is not always immediately clear how to fix it; sometimes a person has to step in and point out the problem precisely.

Thus, our AI agent, although not perfect, is already helping to save time and simplify the process of creating code — even for building such curious shapes as the Sierpinski triangle (or pyramid) using the turtle library.

Key takeaways:

  • Full automation: the AI agent works locally and performs every step from code generation to launch.
  • Saving time and effort: you don't need to switch between the code editor, the terminal, and a browser to look up dependencies; the agent takes care of it.
  • Beta version: it is important to remember that the system is still rough around the edges and does not always handle errors correctly. It can get "stuck" trying to fix the code.
  • A ready-made framework for extension: want to add checks, tests, or support for other languages? All of this is possible within the same architecture, where the model interacts with Python functions (a minimal sketch follows below).
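
For example, adding a second tool boils down to describing it in the functions list and registering it in available_functions. The sketch below illustrates that extension point; the run_tests function and its schema are hypothetical and not part of the original agent:

import subprocess

def run_tests(name_file):
    """Hypothetical tool: run pytest for the given file and return the result in the same dict format."""
    result = subprocess.run(["pytest", name_file], capture_output=True, text=True)
    if result.returncode != 0:
        return {"success": False, "error": result.stdout + result.stderr}
    return {"success": True, "output": result.stdout}

# Describe the tool in the same `functions` list that is passed to llm.chat(...):
run_tests_schema = {
    'name': 'run_tests',
    'description': 'Runs pytest for the specified file and reports the result.',
    'parameters': {
        'type': 'object',
        'properties': {
            'name_file': {'type': 'string', 'description': 'file with tests to run'},
        },
        'required': ['name_file'],
    },
}

# ...and register the implementation next to save_and_run_code:
# available_functions = {'save_and_run_code': save_and_run_code, 'run_tests': run_tests}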

Enjoy using it!
